| id<br>(string, 9–104 chars) | author<br>(string, 3–36 chars) | task_category<br>(32 classes) | tags<br>(sequence, length 1–4.05k) | created_time<br>(int64, 1,646B–1,742B) | last_modified<br>(timestamp[s], 2021-02-13 00:06:56 – 2025-03-18 09:30:19) | downloads<br>(int64, 0–15.6M) | likes<br>(int64, 0–4.86k) | README<br>(string, 44–1.01M chars) | matched_bigbio_names<br>(sequence, length 1–8) | is_bionlp<br>(3 classes) |
---|---|---|---|---|---|---|---|---|---|---|
mradermacher/1.5-Pints-2K-v0.1-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"en",
"dataset:pints-ai/Expository-Prose-V1",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:meta-math/MetaMathQA",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:togethercomputer/llama-instruct",
"dataset:LDJnr/Capybara",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:pints-ai/1.5-Pints-2K-v0.1",
"base_model:quantized:pints-ai/1.5-Pints-2K-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 1,740,413,315,000 | 2025-02-24T16:54:52 | 913 | 0 | ---
base_model: pints-ai/1.5-Pints-2K-v0.1
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
license: mit
extra_gated_fields:
Company: text
Country: country
I agree to use this model in accordance with the aforementioned Terms of Use: checkbox
I want to use this model for:
options:
- Research
- Education
- label: Other
value: other
type: select
Specific date: date_picker
extra_gated_prompt: Though best efforts have been made to ensure, as much as possible,
that all texts in the training corpora are royalty free, this does not constitute
a legal guarantee that such is the case. By using any of the models, corpora or
part thereof, the user agrees to bear full responsibility to do the necessary due
diligence to ensure that he / she is in compliance with their local copyright laws.
Additionally, the user agrees to bear any damages arising as a direct cause (or
otherwise) of using any artifacts released by the pints research team, as well as
full responsibility for the consequences of his / her usage (or implementation)
of any such released artifacts. The user also indemnifies Pints Research Team (and
any of its members or agents) of any damage, related or unrelated, to the release
or subsequent usage of any findings, artifacts or code by the team. For the avoidance
of doubt, any artifacts released by the Pints Research team are done so in accordance
with the 'fair use' clause of Copyright Law, in hopes that this will aid the research
community in bringing LLMs to the next frontier.
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pints-ai/1.5-Pints-2K-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
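As a minimal sketch, one of the quants listed below can be loaded with `llama-cpp-python` once it has been downloaded locally (the file name and settings here are illustrative):
```python
from llama_cpp import Llama

# Illustrative file name; download the desired quant from this repo first.
llm = Llama(model_path="1.5-Pints-2K-v0.1.i1-Q4_K_M.gguf", n_ctx=2048)
out = llm("Write one sentence about small language models.", max_tokens=64)
print(out["choices"][0]["text"])
```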
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"BEAR"
] | Non_BioNLP |
LoneStriker/Einstein-v4-7B-4.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"Mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"conversational",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:glaiveai/glaive-code-assistant",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,709,406,627,000 | 2024-03-02T19:12:09 | 6 | 0 | ---
base_model: mistralai/Mistral-7B-v0.1
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- glaiveai/glaive-code-assistant
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- piqa
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
model-index:
- name: Einstein-v4-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.15
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
---

# 🔬 Einstein-v4-7B
This model is a fully fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on diverse datasets.
This model was fine-tuned on `7xRTX3090` + `1xRTXA6000` GPUs using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
This model's training was sponsored by [sablo.ai](https://sablo.ai).
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
chat_template: chatml
datasets:
- path: data/merged_all.json
ds_type: json
type: alpaca
conversation: chatml
- path: data/capybara_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/synthia-v1.3_sharegpt_12500.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/slimorca_dedup_filtered_95k_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
dataset_prepared_path: last_run_prepared
val_set_size: 0.005
output_dir: ./Einstein-v4-model
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v4-7B
save_safetensors: true
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 1.5
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 2 # changed
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 4
debug:
deepspeed: zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "<|im_end|>"
unk_token: "<unk>"
tokens:
- "<|im_start|>"
resume_from_checkpoint: Einstein-v4-model/checkpoint-521
```
</details><br>
# 💬 Prompt Template
You can use this prompt template while using the model:
### ChatML
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
# Assumes `tokenizer` and `model` have already been loaded
# (e.g. with AutoTokenizer / AutoModelForCausalLM).
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
)
output = model.generate(**gen_input)
```
# 🔄 Quantized versions
Quantized versions of this model are available.
## Exl2 [@bartowski](https://hf.co/bartowski):
- https://huggingface.co/bartowski/Einstein-v4-7B-exl2
You can switch branches in the repo to use the quantization level you want; a small download sketch follows the table below.
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
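As a minimal sketch, a single branch can be fetched with `huggingface_hub` (the branch name below is one of the options from the table; the local directory is an assumption):
```python
from huggingface_hub import snapshot_download

# Download only the 6.5 bpw branch of the exl2 repo into a local folder.
snapshot_download(
    repo_id="bartowski/Einstein-v4-7B-exl2",
    revision="6_5",
    local_dir="Einstein-v4-7B-exl2-6_5",
)
```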
# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v4-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.62|
|AI2 Reasoning Challenge (25-Shot)|64.68|
|HellaSwag (10-Shot) |83.75|
|MMLU (5-Shot) |62.31|
|TruthfulQA (0-shot) |55.15|
|Winogrande (5-shot) |76.24|
|GSM8k (5-shot) |57.62|
# 🤖 Additional information about training
This model was fully fine-tuned for 1.5 epochs.
The total number of steps was 1562.
<details><summary>Loss graph</summary>

</details><br>
# 🤝 Acknowledgments
Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.
Thanks to all the dataset authors mentioned in the datasets section.
Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for the training framework used to build this model.
Thanks to the entire open-source AI community.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
| [
"SCIQ"
] | Non_BioNLP |
espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3 | espnet | automatic-speech-recognition | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:chime6",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | 1,651,611,160,000 | 2022-05-03T21:48:45 | 1 | 0 | ---
datasets:
- chime6
language: en
license: cc-by-4.0
tags:
- espnet
- audio
- automatic-speech-recognition
---
## ESPnet2 ASR model
### `espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3`
This model was trained by simpleoier using the chime6 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b757b89d45d5574cebf44e225cbe32e3e9e4f522
pip install -e .
cd egs2/chime6/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3
```
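For inference directly from Python, a sketch along the following lines should work once ESPnet and `espnet_model_zoo` are installed (the audio file path is illustrative):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and caches the pretrained model from the Hub on first use.
speech2text = Speech2Text.from_pretrained(
    "espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3"
)

speech, rate = soundfile.read("example.wav")  # 16 kHz mono audio
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```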
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue May 3 16:47:10 EDT 2022`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.10.1`
- Git hash: `b757b89d45d5574cebf44e225cbe32e3e9e4f522`
- Commit date: `Mon May 2 09:21:08 2022 -0400`
## asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|58881|66.5|21.3|12.2|8.8|42.3|77.4|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|58881|68.6|20.7|10.6|8.4|39.8|77.5|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|58881|67.5|20.3|12.2|8.0|40.5|76.5|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|58881|67.7|21.4|10.9|8.6|40.9|77.9|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|58881|66.6|20.9|12.5|8.2|41.6|77.8|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|69.4|20.2|10.4|8.6|39.1|75.8|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|65.7|20.2|14.1|7.5|41.8|77.8|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|58881|65.7|19.0|15.3|6.2|40.6|78.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|280767|78.1|7.7|14.1|9.1|31.0|77.9|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|280767|80.0|7.6|12.5|8.7|28.8|78.1|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|280767|78.6|7.3|14.1|8.1|29.5|77.5|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|280767|79.5|7.7|12.8|9.1|29.6|78.8|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|280767|77.9|7.6|14.5|8.3|30.3|78.6|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|80.6|7.4|12.0|8.9|28.3|76.6|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|76.5|7.4|16.1|7.7|31.2|78.5|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|280767|77.0|7.6|15.4|7.2|30.2|79.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|92680|65.8|18.8|15.4|8.7|42.9|78.0|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|92680|67.9|18.1|13.9|8.2|40.3|78.2|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|92680|66.9|17.8|15.2|8.0|41.1|77.7|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|92680|67.2|18.5|14.3|8.2|40.9|78.9|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|92680|66.1|18.2|15.7|7.8|41.7|78.6|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|68.9|17.7|13.4|8.2|39.3|76.6|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|66.1|19.1|14.8|10.2|44.1|78.6|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|92680|66.0|19.9|14.1|9.5|43.6|79.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 8
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 48
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe1000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe1000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe1000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe1000_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_worn_simu_u400k_cleaned_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_worn_simu_u400k_cleaned_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_gss_multiarray/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev_gss_multiarray/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- '[inaudible]'
- '[laughs]'
- '[noise]'
- ▁
- s
- ''''
- ▁i
- ▁it
- t
- ▁you
- ▁the
- ▁yeah
- ▁a
- ▁like
- ▁that
- ▁and
- ▁to
- m
- ▁oh
- ▁so
- '-'
- e
- re
- a
- ▁just
- ▁no
- d
- ▁we
- n
- ▁in
- ing
- i
- ▁of
- ▁do
- ▁is
- ▁have
- ▁what
- ▁was
- ▁this
- ▁can
- o
- ▁one
- r
- ▁but
- er
- y
- ▁they
- ed
- ▁uh
- ▁for
- ▁okay
- ▁there
- ▁be
- ▁he
- ▁don
- g
- ll
- ▁right
- p
- ▁not
- u
- ▁on
- c
- ▁then
- ▁know
- ▁my
- ▁or
- ▁get
- ▁are
- ▁all
- ▁um
- ▁me
- ▁if
- ▁go
- ▁good
- ▁with
- ▁really
- b
- ▁gonna
- ▁think
- ▁cuz
- in
- ▁your
- k
- ve
- le
- w
- an
- ▁she
- l
- ▁well
- en
- f
- ▁up
- al
- ▁two
- h
- ar
- ▁how
- ▁mhm
- v
- ▁here
- ly
- ▁put
- ▁out
- ▁would
- ▁at
- ▁need
- ▁did
- ▁f
- ▁want
- ▁mm
- ▁more
- ch
- ri
- ▁now
- or
- ▁when
- ▁k
- ▁p
- ▁see
- ▁got
- ▁too
- ▁thing
- ▁time
- 'on'
- ▁actually
- ▁where
- ne
- ▁guys
- ▁some
- ▁had
- ▁why
- ic
- ▁them
- ▁st
- ro
- ▁make
- ur
- ▁three
- ▁b
- ▁mean
- ▁wanna
- ▁should
- at
- ▁from
- th
- ▁didn
- ▁about
- ▁yes
- ▁because
- ▁yep
- ▁people
- ▁co
- ▁could
- ▁were
- ▁take
- ▁has
- ▁something
- ce
- ▁w
- ▁c
- ▁sure
- ▁who
- ▁other
- ▁sh
- ▁say
- ▁an
- ▁her
- ▁g
- ▁work
- il
- es
- ▁little
- el
- ▁much
- ▁eat
- ▁still
- ▁wait
- ▁ma
- ▁four
- ▁de
- ▁only
- ▁down
- ▁though
- ▁way
- ▁lot
- ▁use
- ▁over
- ▁let
- ▁pretty
- ▁these
- ▁bo
- ▁any
- ▁off
- ▁ba
- ▁di
- ▁d
- ▁back
- ▁sorry
- ▁those
- ▁very
- ▁bit
- ▁even
- li
- ▁stuff
- ke
- ate
- z
- ▁probably
- ▁nice
- ▁turn
- ▁doesn
- ▁first
- ▁does
- ▁hmm
- ▁look
- ▁going
- ▁play
- ▁ho
- pe
- ▁maybe
- ▁come
- ▁fine
- ▁cut
- ▁man
- ▁bu
- ▁ca
- ▁mo
- ▁th
- lo
- ▁never
- ry
- ▁po
- ▁h
- ▁will
- us
- x
- ge
- ▁five
- ▁start
- ▁him
- ▁long
- ▁give
- ▁se
- ting
- ▁sp
- ▁ra
- ▁done
- ▁con
- ▁big
- ▁his
- ▁y
- ▁which
- ▁been
- ▁dunno
- est
- ion
- ▁fa
- ▁than
- me
- ▁our
- ▁also
- ▁six
- ▁kinda
- co
- ▁cool
- ty
- ▁game
- ▁thought
- ▁fi
- ▁after
- ▁day
- ▁doing
- ment
- ▁said
- ▁whatever
- ap
- ▁place
- ▁anything
- ▁j
- ▁guess
- em
- ▁always
- ▁things
- ▁card
- ▁li
- ▁thank
- ▁last
- ▁before
- ▁many
- ▁watch
- ▁pa
- ▁year
- ▁ah
- ▁hot
- ▁into
- ▁ten
- ▁keep
- ▁bad
- tion
- ▁us
- ▁cr
- ▁part
- ▁cook
- ▁o
- ▁cards
- ▁everything
- ▁la
- ▁ha
- ▁by
- ▁wow
- ▁their
- ies
- ▁hey
- ▁same
- ▁went
- ▁pick
- ▁might
- ▁sc
- ▁ex
- ie
- ▁wood
- ight
- ▁another
- ▁better
- ▁try
- ard
- ▁seven
- ▁guy
- ▁point
- up
- op
- ▁twenty
- ▁hand
- ▁wh
- ▁food
- ▁tra
- ation
- ▁buy
- ▁kind
- ist
- ▁whole
- ive
- is
- ▁half
- able
- ▁pro
- ▁win
- ▁different
- ▁cl
- age
- ▁already
- ▁gotta
- ack
- ▁ti
- ▁lo
- ▁every
- ▁super
- ▁again
- ▁new
- ▁remember
- ers
- ▁dude
- um
- ▁feel
- ▁roll
- ▁cheese
- ▁na
- ▁sit
- ▁sa
- way
- ▁hard
- ▁enough
- 'no'
- ▁eight
- ity
- ▁friend
- ▁un
- ul
- ▁love
- ▁salt
- ▁mi
- ▁steak
- ▁nine
- ▁else
- ▁looks
- ▁pu
- ▁fl
- ▁build
- ▁pre
- ▁end
- ▁ta
- ▁salad
- ▁high
- ▁find
- ▁water
- ▁usually
- ▁small
- ▁around
- ▁butter
- ▁car
- ▁made
- ▁wash
- ▁move
- ▁plate
- ▁true
- ▁pan
- ain
- cu
- ▁nope
- ▁ooh
- ▁sauce
- ▁help
- ▁wa
- ▁left
- ▁person
- uck
- ▁top
- ▁side
- ▁cha
- ▁god
- ▁leave
- ▁goes
- ▁weird
- ▁each
- ▁r
- ▁basically
- ▁chicken
- ted
- ▁oil
- ▁trying
- ▁fun
- ▁close
- ▁taste
- ▁old
- ▁show
- ble
- ▁next
- ▁name
- ▁used
- ▁mine
- ous
- ▁great
- ▁pot
- ally
- ▁burn
- ▁huh
- ▁minutes
- ▁once
- ▁phone
- ▁bowl
- tic
- ▁tell
- ound
- ▁ask
- ▁mu
- ▁thirty
- ▁someone
- ▁piece
- ▁saying
- ▁vi
- ish
- ▁ja
- ▁comp
- ▁called
- ▁through
- ▁gr
- ize
- ▁everyone
- ▁funny
- ▁getting
- ▁won
- ▁bl
- ▁away
- ▁pi
- ▁chi
- ▁totally
- ▁red
- ▁word
- ▁hundred
- ▁open
- ▁dollar
- ▁stone
- ▁yet
- ade
- ▁du
- ▁mmm
- ▁sound
- ▁both
- ▁mar
- ant
- ▁potatoes
- ▁garlic
- fi
- ▁hear
- ▁pass
- ▁saw
- ▁kill
- ▁second
- ▁girl
- ▁shit
- ▁throw
- ▁bought
- ▁please
- ▁che
- ▁da
- ▁hit
- ▁tea
- ▁hold
- ▁shoot
- ▁most
- ▁clean
- ▁wanted
- ▁pepper
- ▁happen
- ▁aw
- ▁home
- ▁drink
- ance
- ▁yo
- ▁sheep
- ▁while
- ▁ro
- ▁house
- ▁call
- ▁meat
- ▁face
- ▁fuck
- ▁talking
- ▁green
- ries
- side
- ▁set
- ▁exactly
- huh
- ▁hour
- ▁ready
- ▁played
- ▁finish
- ▁add
- ▁susie
- q
- ▁stop
- ▁almost
- ▁bring
- ▁rice
- ▁ear
- ▁sweet
- ▁hi
- ▁pizza
- ake
- ▁wi
- ▁gra
- ▁free
- ▁night
- ▁pay
- ▁rick
- ▁full
- ▁wheat
- ▁count
- ▁white
- ful
- ▁light
- ▁plan
- ▁supposed
- ▁either
- ▁bacon
- ▁sim
- ▁sense
- ▁blue
- ▁team
- ▁interesting
- ▁care
- ▁room
- nut
- ward
- ▁real
- ▁week
- ▁heard
- ▁told
- ▁mind
- ▁table
- ▁head
- ash
- ▁looking
- ▁ever
- ▁check
- ▁together
- ▁ju
- ▁app
- ▁grab
- ▁brown
- ▁eh
- book
- ▁stick
- ▁later
- ▁pea
- ▁talk
- ▁awesome
- ▁cream
- ling
- ▁fifty
- ▁color
- ▁qu
- ▁round
- ▁nothing
- ▁power
- ▁deal
- ▁matter
- ▁player
- ▁draw
- ▁having
- ▁kid
- ▁fish
- ▁damn
- ▁own
- ▁crazy
- ▁dad
- ▁took
- ▁perfect
- ▁idea
- ▁couple
- ▁live
- ▁job
- ▁smell
- ▁number
- ▁reason
- ▁best
- ▁forty
- ▁making
- ▁dinner
- ▁change
- ▁playing
- ▁sometimes
- ▁fridge
- ▁miss
- j
- ▁woah
- ▁chancey
- ▁bucks
- ▁brick
- ▁rec
- ▁run
- ▁far
- ball
- ▁bread
- ▁fast
- ▁knife
- ▁black
- ▁break
- ▁mix
- ▁today
- ▁cheap
- ▁mike
- ▁expensive
- out
- ▁normal
- ▁under
- ▁using
- ▁double
- ▁gold
- ▁life
- ▁oven
- ▁less
- ▁space
- ▁wine
- ence
- land
- ▁sea
- ▁corn
- ▁cooking
- ▁stay
- ▁line
- ▁may
- ▁bar
- ▁block
- ▁late
- ▁yourself
- ▁quite
- ▁apple
- ▁extra
- ▁wedding
- ▁happened
- ▁kitchen
- ▁coming
- ▁zero
- ▁definitely
- ▁connect
- ▁read
- ▁crab
- ▁easier
- ▁mkay
- ▁egg
- ▁came
- ▁money
- ▁anyone
- ▁save
- ▁problem
- ▁club
- ▁tried
- ▁wrong
- ▁spot
- ▁low
- ▁amazing
- ▁milk
- ▁jeff
- ▁flip
- ▁text
- ▁bottle
- jo
- ▁without
- ▁parents
- ▁anymore
- ▁course
- ship
- ▁month
- ▁chinese
- ▁must
- ▁movie
- ▁wonder
- ▁bunch
- ▁family
- ▁season
- ▁quick
- ▁past
- ▁paul
- ▁rid
- ▁tennis
- town
- ▁cold
- ▁serious
- ▁drive
- ▁boil
- ▁screw
- ▁least
- ▁everybody
- ▁sort
- ▁thomas
- ▁rest
- ▁suck
- ▁road
- ▁fair
- ▁forgot
- ▁order
- ▁middle
- ▁babe
- ▁bang
- ▁dress
- ▁sleep
- ▁question
- ▁until
- ▁sheriff
- ▁chop
- ▁restaurant
- ▁outside
- ▁learn
- ▁stand
- ▁walk
- ▁attack
- ▁trade
- ▁phil
- ▁few
- ▁strong
- ▁school
- ▁world
- ▁company
- ▁easy
- ▁hockey
- ▁somebody
- ▁short
- ▁figure
- ▁spice
- ▁apparently
- ▁since
- ▁serve
- ▁huge
- ▁saboteur
- ▁fifteen
- ▁myself
- ▁such
- ▁port
- ▁literally
- ▁lose
- ▁crap
- ught
- ▁gosh
- ▁unless
- ▁joke
- ▁store
- ▁bigger
- ▁spell
- ▁ago
- ▁hang
- ▁depend
- ▁ginger
- ▁slow
- ▁medium
- ▁record
- acti
- ▁kenny
- ▁picture
- old
- ▁thousand
- ▁cover
- ▁tree
- ▁obvious
- ▁glass
- ▁taking
- ▁letter
- ▁eleven
- ▁skin
- ▁market
- ▁anybody
- ▁ahead
- ▁morning
- ▁brand
- ▁paper
- ▁lemon
- ▁onions
- ▁juice
- ▁jimmy
- ▁living
- ▁front
- ▁bottom
- ▁dark
- ▁oops
- ▁arjan
- ▁shot
- ▁rule
- ▁hun
- ▁flavor
- ▁speak
- ▁gun
- ▁potato
- ▁worry
- ▁twelve
- ▁sandwich
- ▁plus
- ▁believe
- ▁knew
- ▁realize
- ▁sugar
- ▁happy
- ▁sister
- ▁entire
- ▁master
- ▁eye
- ▁touch
- ▁wenny
- ▁drop
- ▁price
- ▁slice
- ▁sword
- ▁spicy
- ▁listen
- ▁outlaw
- que
- ▁percent
- ▁yesterday
- ▁mushroom
- ▁worth
- ▁proper
- ▁story
- ▁megan
- ▁character
- ▁hair
- ▁straight
- ▁discard
- ▁spoon
- ▁understand
- ▁computer
- ▁type
- ▁nikki
- ▁tomorrow
- ▁trump
- ▁third
- ▁bennet
- ▁nobody
- ▁somewhere
- ▁amount
- ▁split
- ▁accent
- ▁group
- ▁trip
- ▁lunch
- ▁racket
- ▁level
- ▁difference
- ▁orange
- ▁gave
- ▁dessert
- ▁single
- ▁chocolate
- ▁junette
- ▁camera
- ▁regular
- ▁video
- ▁gross
- ▁notice
- ▁actual
- ▁between
- ▁surprise
- ▁smart
- ▁east
- ▁craft
- ▁rock
- ▁certain
- ▁rather
- ▁lobster
- ▁photo
- ▁favorite
- ▁behind
- ▁across
- ▁steal
- ▁spend
- ▁weekend
- ▁special
- ▁sign
- ▁wrap
- ▁except
- ▁john
- ▁conversation
- ▁asian
- ▁grand
- ▁online
- ▁explain
- ▁dishes
- ▁magic
- ▁decide
- ▁fancy
- ▁random
- ▁tunnel
- ▁switch
- ▁transcribe
- ▁english
- ▁giant
- ▁kick
- ▁claire
- ▁laugh
- ▁yellow
- ▁delicious
- ▁freeze
- ▁drunk
- ▁general
- ▁gimme
- ▁damage
- ▁breakfast
- ▁roast
- ▁josh
- ▁choose
- ▁email
- ▁direct
- ▁tomatoes
- ▁fruit
- ▁apart
- ▁chopstick
- ▁vancouver
- ▁kept
- tract
- ▁chunk
- ▁girlfriend
- ▁shuffle
- ▁terrible
- ▁diamond
- ▁sausage
- ▁sweat
- ▁iphone
- ▁pineapple
- ▁summer
- ▁french
- ▁fresh
- ▁heavy
- ▁million
- ▁instead
- ▁ridiculous
- ▁tough
- ▁friday
- ▁whenever
- ▁coffee
- ▁hilarious
- ▁worried
- ▁especially
- ▁shrimp
- ▁avocado
- '&'
- ä
- '#'
- ǎ
- î
- ü
- ǐ
- ñ
- â
- ç
- ']'
- é
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 100
num_freq_mask: 4
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 128
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d2
normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.0
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"CRAFT"
] | Non_BioNLP |
manibt1993/huner_ncbi_disease_dslim | manibt1993 | token-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:transformer_dataset_ner",
"base_model:dslim/distilbert-NER",
"base_model:finetune:dslim/distilbert-NER",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,707,195,905,000 | 2024-02-06T05:23:00 | 20 | 0 | ---
base_model: dslim/distilbert-NER
datasets:
- transformer_dataset_ner
license: apache-2.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: huner_ncbi_disease_dslim
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: transformer_dataset_ner
type: transformer_dataset_ner
config: ncbi_disease
split: validation
args: ncbi_disease
metrics:
- type: precision
value: 0.8325183374083129
name: Precision
- type: recall
value: 0.8653113087674714
name: Recall
- type: f1
value: 0.8485981308411215
name: F1
- type: accuracy
value: 0.9849891909996041
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huner_ncbi_disease_dslim
This model is a fine-tuned version of [dslim/distilbert-NER](https://huggingface.co/dslim/distilbert-NER) on the transformer_dataset_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1484
- Precision: 0.8325
- Recall: 0.8653
- F1: 0.8486
- Accuracy: 0.9850
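A quick usage sketch, assuming the checkpoint is hosted under the id above and works with the standard `transformers` token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="manibt1993/huner_ncbi_disease_dslim",
    aggregation_strategy="simple",
)
print(ner("The patient was diagnosed with cystic fibrosis."))
```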
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
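A hedged sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is an assumption, and this is not the exact training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="huner_ncbi_disease_dslim",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```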
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1243 | 1.0 | 667 | 0.0669 | 0.7013 | 0.8412 | 0.7649 | 0.9787 |
| 0.0512 | 2.0 | 1334 | 0.0656 | 0.7825 | 0.8412 | 0.8108 | 0.9818 |
| 0.0221 | 3.0 | 2001 | 0.0744 | 0.7908 | 0.8501 | 0.8194 | 0.9822 |
| 0.0107 | 4.0 | 2668 | 0.1022 | 0.7940 | 0.8475 | 0.8199 | 0.9808 |
| 0.008 | 5.0 | 3335 | 0.1055 | 0.7818 | 0.8602 | 0.8191 | 0.9816 |
| 0.0057 | 6.0 | 4002 | 0.1173 | 0.8067 | 0.8590 | 0.832 | 0.9830 |
| 0.0027 | 7.0 | 4669 | 0.1188 | 0.8188 | 0.8501 | 0.8342 | 0.9834 |
| 0.0022 | 8.0 | 5336 | 0.1229 | 0.8080 | 0.8450 | 0.8261 | 0.9826 |
| 0.0019 | 9.0 | 6003 | 0.1341 | 0.8007 | 0.8526 | 0.8258 | 0.9834 |
| 0.0019 | 10.0 | 6670 | 0.1360 | 0.8045 | 0.8628 | 0.8326 | 0.9822 |
| 0.0011 | 11.0 | 7337 | 0.1376 | 0.8163 | 0.8640 | 0.8395 | 0.9838 |
| 0.0008 | 12.0 | 8004 | 0.1447 | 0.8007 | 0.8577 | 0.8282 | 0.9833 |
| 0.0006 | 13.0 | 8671 | 0.1381 | 0.8139 | 0.8615 | 0.8370 | 0.9839 |
| 0.0005 | 14.0 | 9338 | 0.1398 | 0.8297 | 0.8666 | 0.8477 | 0.9843 |
| 0.0004 | 15.0 | 10005 | 0.1404 | 0.8232 | 0.8640 | 0.8431 | 0.9842 |
| 0.0003 | 16.0 | 10672 | 0.1486 | 0.8329 | 0.8551 | 0.8439 | 0.9838 |
| 0.0 | 17.0 | 11339 | 0.1469 | 0.8114 | 0.8691 | 0.8393 | 0.9837 |
| 0.0002 | 18.0 | 12006 | 0.1500 | 0.8297 | 0.8602 | 0.8447 | 0.9843 |
| 0.0001 | 19.0 | 12673 | 0.1489 | 0.8315 | 0.8653 | 0.8481 | 0.9849 |
| 0.0 | 20.0 | 13340 | 0.1484 | 0.8325 | 0.8653 | 0.8486 | 0.9850 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| [
"NCBI DISEASE"
] | BioNLP |
RichardErkhov/EleutherAI_-_pythia-160m-deduped-v0-8bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,713,857,258,000 | 2024-04-23T07:28:13 | 10 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-deduped-v0/
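A minimal loading sketch for this pre-quantized 8-bit checkpoint (assumes `bitsandbytes` and `accelerate` are installed; the repo id is taken from this card's title):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/EleutherAI_-_pythia-160m-deduped-v0-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The checkpoint should already carry its bitsandbytes quantization config,
# so no explicit quantization settings are passed here.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```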
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-160M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
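To see which checkpoint branches a given model exposes, something like the following should work with `huggingface_hub` (a sketch, not part of the original card):
```python
from huggingface_hub import list_repo_refs

refs = list_repo_refs("EleutherAI/pythia-160m-deduped")
print(sorted(branch.name for branch in refs.branches))  # e.g. 'main', 'step1000', ..., 'step143000'
```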
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-160M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for the equivalent of 143,000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a batch
size of 4M tokens were originally trained for 71,500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"SCIQ"
] | Non_BioNLP |
twadada/gte_wl | twadada | null | [
"mteb",
"model-index",
"region:us"
] | 1,736,421,967,000 | 2025-01-09T11:26:12 | 0 | 0 | ---
tags:
- mteb
model-index:
- name: gte_wordllama_result
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: None
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.07462686567165
- type: ap
value: 34.03639155919273
- type: f1
value: 65.69832537072352
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: None
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 69.453025
- type: ap
value: 63.87884877644433
- type: f1
value: 69.23150048939367
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: None
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 36.364
- type: f1
value: 35.72067919658383
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: None
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 22.546
- type: map_at_10
value: 37.411
- type: map_at_100
value: 38.582
- type: map_at_1000
value: 38.597
- type: map_at_3
value: 32.492
- type: map_at_5
value: 35.141
- type: mrr_at_1
value: 23.186
- type: mrr_at_10
value: 37.651
- type: mrr_at_100
value: 38.822
- type: mrr_at_1000
value: 38.836999999999996
- type: mrr_at_3
value: 32.741
- type: mrr_at_5
value: 35.408
- type: ndcg_at_1
value: 22.546
- type: ndcg_at_10
value: 46.012
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 51.547
- type: ndcg_at_3
value: 35.762
- type: ndcg_at_5
value: 40.567
- type: precision_at_1
value: 22.546
- type: precision_at_10
value: 7.367999999999999
- type: precision_at_100
value: 0.968
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 15.078
- type: precision_at_5
value: 11.394
- type: recall_at_1
value: 22.546
- type: recall_at_10
value: 73.68400000000001
- type: recall_at_100
value: 96.799
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 45.235
- type: recall_at_5
value: 56.97
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: None
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 39.643731613769525
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: None
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 29.63510872385387
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: None
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 55.581954717688454
- type: mrr
value: 69.65857626522447
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: None
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 79.65184787408168
- type: cos_sim_spearman
value: 76.59391391898701
- type: euclidean_pearson
value: 78.27369147487082
- type: euclidean_spearman
value: 76.59391391898701
- type: manhattan_pearson
value: 78.35436546555296
- type: manhattan_spearman
value: 76.41258448606804
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: None
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 75.67532467532469
- type: f1
value: 74.96407787263568
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: None
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 34.80818669258118
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: None
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 27.110794795227715
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: None
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 22.831000000000003
- type: map_at_10
value: 30.358
- type: map_at_100
value: 31.708
- type: map_at_1000
value: 31.857999999999997
- type: map_at_3
value: 27.721
- type: map_at_5
value: 29.054000000000002
- type: mrr_at_1
value: 29.041
- type: mrr_at_10
value: 36.405
- type: mrr_at_100
value: 37.358000000000004
- type: mrr_at_1000
value: 37.419999999999995
- type: mrr_at_3
value: 34.335
- type: mrr_at_5
value: 35.365
- type: ndcg_at_1
value: 29.041
- type: ndcg_at_10
value: 35.673
- type: ndcg_at_100
value: 41.432
- type: ndcg_at_1000
value: 44.372
- type: ndcg_at_3
value: 31.707
- type: ndcg_at_5
value: 33.147999999999996
- type: precision_at_1
value: 29.041
- type: precision_at_10
value: 6.895999999999999
- type: precision_at_100
value: 1.237
- type: precision_at_1000
value: 0.181
- type: precision_at_3
value: 15.212
- type: precision_at_5
value: 10.901
- type: recall_at_1
value: 22.831000000000003
- type: recall_at_10
value: 45.234
- type: recall_at_100
value: 70.658
- type: recall_at_1000
value: 90.70700000000001
- type: recall_at_3
value: 32.729
- type: recall_at_5
value: 37.242
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: None
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 18.834
- type: map_at_10
value: 25.796999999999997
- type: map_at_100
value: 26.881
- type: map_at_1000
value: 27.004
- type: map_at_3
value: 23.857999999999997
- type: map_at_5
value: 24.89
- type: mrr_at_1
value: 24.204
- type: mrr_at_10
value: 30.529
- type: mrr_at_100
value: 31.386999999999997
- type: mrr_at_1000
value: 31.456
- type: mrr_at_3
value: 28.715000000000003
- type: mrr_at_5
value: 29.658
- type: ndcg_at_1
value: 24.204
- type: ndcg_at_10
value: 30.053
- type: ndcg_at_100
value: 34.826
- type: ndcg_at_1000
value: 37.557
- type: ndcg_at_3
value: 26.927
- type: ndcg_at_5
value: 28.205999999999996
- type: precision_at_1
value: 24.204
- type: precision_at_10
value: 5.561
- type: precision_at_100
value: 1.011
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 12.994
- type: precision_at_5
value: 9.107999999999999
- type: recall_at_1
value: 18.834
- type: recall_at_10
value: 38.022
- type: recall_at_100
value: 58.587
- type: recall_at_1000
value: 76.953
- type: recall_at_3
value: 28.777
- type: recall_at_5
value: 32.372
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: None
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 28.138999999999996
- type: map_at_10
value: 37.378
- type: map_at_100
value: 38.576
- type: map_at_1000
value: 38.673
- type: map_at_3
value: 34.733000000000004
- type: map_at_5
value: 36.083999999999996
- type: mrr_at_1
value: 32.414
- type: mrr_at_10
value: 40.589999999999996
- type: mrr_at_100
value: 41.519
- type: mrr_at_1000
value: 41.577999999999996
- type: mrr_at_3
value: 38.213
- type: mrr_at_5
value: 39.428999999999995
- type: ndcg_at_1
value: 32.414
- type: ndcg_at_10
value: 42.501
- type: ndcg_at_100
value: 47.715
- type: ndcg_at_1000
value: 49.899
- type: ndcg_at_3
value: 37.595
- type: ndcg_at_5
value: 39.653
- type: precision_at_1
value: 32.414
- type: precision_at_10
value: 6.978
- type: precision_at_100
value: 1.054
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 16.761
- type: precision_at_5
value: 11.498
- type: recall_at_1
value: 28.138999999999996
- type: recall_at_10
value: 54.803999999999995
- type: recall_at_100
value: 77.648
- type: recall_at_1000
value: 93.545
- type: recall_at_3
value: 41.323
- type: recall_at_5
value: 46.489999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: None
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 13.864
- type: map_at_10
value: 18.775
- type: map_at_100
value: 19.706000000000003
- type: map_at_1000
value: 19.822
- type: map_at_3
value: 17.314
- type: map_at_5
value: 18.028
- type: mrr_at_1
value: 14.915000000000001
- type: mrr_at_10
value: 20.095
- type: mrr_at_100
value: 20.992
- type: mrr_at_1000
value: 21.092
- type: mrr_at_3
value: 18.587999999999997
- type: mrr_at_5
value: 19.271
- type: ndcg_at_1
value: 14.915000000000001
- type: ndcg_at_10
value: 21.811
- type: ndcg_at_100
value: 26.656000000000002
- type: ndcg_at_1000
value: 30.009000000000004
- type: ndcg_at_3
value: 18.790000000000003
- type: ndcg_at_5
value: 20.009
- type: precision_at_1
value: 14.915000000000001
- type: precision_at_10
value: 3.401
- type: precision_at_100
value: 0.623
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 8.06
- type: precision_at_5
value: 5.537
- type: recall_at_1
value: 13.864
- type: recall_at_10
value: 29.914
- type: recall_at_100
value: 52.580000000000005
- type: recall_at_1000
value: 78.648
- type: recall_at_3
value: 21.586
- type: recall_at_5
value: 24.58
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: None
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 7.223
- type: map_at_10
value: 12.272
- type: map_at_100
value: 13.252
- type: map_at_1000
value: 13.381000000000002
- type: map_at_3
value: 10.610999999999999
- type: map_at_5
value: 11.505
- type: mrr_at_1
value: 9.203999999999999
- type: mrr_at_10
value: 14.639
- type: mrr_at_100
value: 15.629000000000001
- type: mrr_at_1000
value: 15.733
- type: mrr_at_3
value: 12.852
- type: mrr_at_5
value: 13.797999999999998
- type: ndcg_at_1
value: 9.203999999999999
- type: ndcg_at_10
value: 15.543999999999999
- type: ndcg_at_100
value: 20.89
- type: ndcg_at_1000
value: 24.547
- type: ndcg_at_3
value: 12.264
- type: ndcg_at_5
value: 13.748
- type: precision_at_1
value: 9.203999999999999
- type: precision_at_10
value: 3.085
- type: precision_at_100
value: 0.688
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 6.095
- type: precision_at_5
value: 4.677
- type: recall_at_1
value: 7.223
- type: recall_at_10
value: 23.268
- type: recall_at_100
value: 47.452
- type: recall_at_1000
value: 74.69200000000001
- type: recall_at_3
value: 14.437
- type: recall_at_5
value: 18.007
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: None
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 19.661
- type: map_at_10
value: 26.145000000000003
- type: map_at_100
value: 27.477
- type: map_at_1000
value: 27.622999999999998
- type: map_at_3
value: 23.315
- type: map_at_5
value: 24.87
- type: mrr_at_1
value: 24.157999999999998
- type: mrr_at_10
value: 31.035
- type: mrr_at_100
value: 32.011
- type: mrr_at_1000
value: 32.086999999999996
- type: mrr_at_3
value: 28.199999999999996
- type: mrr_at_5
value: 29.769000000000002
- type: ndcg_at_1
value: 24.157999999999998
- type: ndcg_at_10
value: 31.249
- type: ndcg_at_100
value: 37.319
- type: ndcg_at_1000
value: 40.394999999999996
- type: ndcg_at_3
value: 26.184
- type: ndcg_at_5
value: 28.518
- type: precision_at_1
value: 24.157999999999998
- type: precision_at_10
value: 5.9479999999999995
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 12.191
- type: precision_at_5
value: 9.142999999999999
- type: recall_at_1
value: 19.661
- type: recall_at_10
value: 41.959
- type: recall_at_100
value: 68.22399999999999
- type: recall_at_1000
value: 89.071
- type: recall_at_3
value: 27.617000000000004
- type: recall_at_5
value: 33.693
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: None
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 15.714
- type: map_at_10
value: 21.786
- type: map_at_100
value: 23.052
- type: map_at_1000
value: 23.186999999999998
- type: map_at_3
value: 19.286
- type: map_at_5
value: 20.699
- type: mrr_at_1
value: 19.064
- type: mrr_at_10
value: 25.576
- type: mrr_at_100
value: 26.613
- type: mrr_at_1000
value: 26.697
- type: mrr_at_3
value: 23.212
- type: mrr_at_5
value: 24.553
- type: ndcg_at_1
value: 19.064
- type: ndcg_at_10
value: 26.19
- type: ndcg_at_100
value: 32.019
- type: ndcg_at_1000
value: 35.323
- type: ndcg_at_3
value: 21.609
- type: ndcg_at_5
value: 23.747
- type: precision_at_1
value: 19.064
- type: precision_at_10
value: 5.045999999999999
- type: precision_at_100
value: 0.947
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 10.16
- type: precision_at_5
value: 7.693999999999999
- type: recall_at_1
value: 15.714
- type: recall_at_10
value: 35.846000000000004
- type: recall_at_100
value: 60.885
- type: recall_at_1000
value: 84.437
- type: recall_at_3
value: 23.357
- type: recall_at_5
value: 28.698
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 15.797416666666667
- type: map_at_10
value: 21.674916666666668
- type: map_at_100
value: 22.73633333333333
- type: map_at_1000
value: 22.868583333333333
- type: map_at_3
value: 19.66508333333333
- type: map_at_5
value: 20.75133333333333
- type: mrr_at_1
value: 19.052333333333333
- type: mrr_at_10
value: 24.958083333333335
- type: mrr_at_100
value: 25.862666666666666
- type: mrr_at_1000
value: 25.95
- type: mrr_at_3
value: 23.02525
- type: mrr_at_5
value: 24.053166666666666
- type: ndcg_at_1
value: 19.052333333333333
- type: ndcg_at_10
value: 25.618249999999996
- type: ndcg_at_100
value: 30.751666666666665
- type: ndcg_at_1000
value: 33.93783333333333
- type: ndcg_at_3
value: 21.966166666666666
- type: ndcg_at_5
value: 23.569333333333333
- type: precision_at_1
value: 19.052333333333333
- type: precision_at_10
value: 4.6321666666666665
- type: precision_at_100
value: 0.8673333333333333
- type: precision_at_1000
value: 0.13283333333333333
- type: precision_at_3
value: 10.15075
- type: precision_at_5
value: 7.330416666666667
- type: recall_at_1
value: 15.797416666666667
- type: recall_at_10
value: 34.28000000000001
- type: recall_at_100
value: 57.498416666666664
- type: recall_at_1000
value: 80.52425000000001
- type: recall_at_3
value: 23.929416666666665
- type: recall_at_5
value: 28.09466666666667
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: None
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 11.323
- type: map_at_10
value: 17.07
- type: map_at_100
value: 17.849999999999998
- type: map_at_1000
value: 17.957
- type: map_at_3
value: 15.414
- type: map_at_5
value: 16.431
- type: mrr_at_1
value: 13.497
- type: mrr_at_10
value: 19.188
- type: mrr_at_100
value: 19.978
- type: mrr_at_1000
value: 20.071
- type: mrr_at_3
value: 17.663999999999998
- type: mrr_at_5
value: 18.538
- type: ndcg_at_1
value: 13.497
- type: ndcg_at_10
value: 20.485999999999997
- type: ndcg_at_100
value: 24.855
- type: ndcg_at_1000
value: 27.773999999999997
- type: ndcg_at_3
value: 17.399
- type: ndcg_at_5
value: 18.988
- type: precision_at_1
value: 13.497
- type: precision_at_10
value: 3.5740000000000003
- type: precision_at_100
value: 0.63
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 8.129
- type: precision_at_5
value: 5.92
- type: recall_at_1
value: 11.323
- type: recall_at_10
value: 28.92
- type: recall_at_100
value: 49.75
- type: recall_at_1000
value: 71.492
- type: recall_at_3
value: 20.452
- type: recall_at_5
value: 24.346
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: None
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 8.625
- type: map_at_10
value: 12.41
- type: map_at_100
value: 13.200999999999999
- type: map_at_1000
value: 13.333999999999998
- type: map_at_3
value: 11.141
- type: map_at_5
value: 11.776
- type: mrr_at_1
value: 10.805
- type: mrr_at_10
value: 14.979999999999999
- type: mrr_at_100
value: 15.759
- type: mrr_at_1000
value: 15.867
- type: mrr_at_3
value: 13.569999999999999
- type: mrr_at_5
value: 14.316
- type: ndcg_at_1
value: 10.805
- type: ndcg_at_10
value: 15.129999999999999
- type: ndcg_at_100
value: 19.339000000000002
- type: ndcg_at_1000
value: 23.034
- type: ndcg_at_3
value: 12.661
- type: ndcg_at_5
value: 13.664000000000001
- type: precision_at_1
value: 10.805
- type: precision_at_10
value: 2.88
- type: precision_at_100
value: 0.5950000000000001
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 6.091
- type: precision_at_5
value: 4.4319999999999995
- type: recall_at_1
value: 8.625
- type: recall_at_10
value: 20.924
- type: recall_at_100
value: 40.343
- type: recall_at_1000
value: 67.60199999999999
- type: recall_at_3
value: 13.963000000000001
- type: recall_at_5
value: 16.558999999999997
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: None
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 15.116999999999999
- type: map_at_10
value: 20.283
- type: map_at_100
value: 21.181
- type: map_at_1000
value: 21.318
- type: map_at_3
value: 18.528
- type: map_at_5
value: 19.506
- type: mrr_at_1
value: 17.91
- type: mrr_at_10
value: 23.399
- type: mrr_at_100
value: 24.254
- type: mrr_at_1000
value: 24.36
- type: mrr_at_3
value: 21.502
- type: mrr_at_5
value: 22.617
- type: ndcg_at_1
value: 17.91
- type: ndcg_at_10
value: 23.848
- type: ndcg_at_100
value: 28.63
- type: ndcg_at_1000
value: 32.236
- type: ndcg_at_3
value: 20.351
- type: ndcg_at_5
value: 21.992
- type: precision_at_1
value: 17.91
- type: precision_at_10
value: 4.011
- type: precision_at_100
value: 0.722
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 9.049
- type: precision_at_5
value: 6.455
- type: recall_at_1
value: 15.116999999999999
- type: recall_at_10
value: 31.911
- type: recall_at_100
value: 53.791999999999994
- type: recall_at_1000
value: 79.997
- type: recall_at_3
value: 22.229
- type: recall_at_5
value: 26.366
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: None
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 15.415999999999999
- type: map_at_10
value: 21.364
- type: map_at_100
value: 22.631
- type: map_at_1000
value: 22.832
- type: map_at_3
value: 19.139999999999997
- type: map_at_5
value: 20.549
- type: mrr_at_1
value: 19.368
- type: mrr_at_10
value: 25.218
- type: mrr_at_100
value: 26.135
- type: mrr_at_1000
value: 26.218999999999998
- type: mrr_at_3
value: 23.155
- type: mrr_at_5
value: 24.371000000000002
- type: ndcg_at_1
value: 19.368
- type: ndcg_at_10
value: 25.715
- type: ndcg_at_100
value: 31.291999999999998
- type: ndcg_at_1000
value: 34.757
- type: ndcg_at_3
value: 22.131999999999998
- type: ndcg_at_5
value: 24.018
- type: precision_at_1
value: 19.368
- type: precision_at_10
value: 5.138
- type: precision_at_100
value: 1.229
- type: precision_at_1000
value: 0.209
- type: precision_at_3
value: 10.474
- type: precision_at_5
value: 7.904999999999999
- type: recall_at_1
value: 15.415999999999999
- type: recall_at_10
value: 33.83
- type: recall_at_100
value: 60.19799999999999
- type: recall_at_1000
value: 83.88600000000001
- type: recall_at_3
value: 23.018
- type: recall_at_5
value: 28.37
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: None
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 12.822
- type: map_at_10
value: 16.461000000000002
- type: map_at_100
value: 17.321
- type: map_at_1000
value: 17.434
- type: map_at_3
value: 14.92
- type: map_at_5
value: 15.623999999999999
- type: mrr_at_1
value: 14.048
- type: mrr_at_10
value: 17.843
- type: mrr_at_100
value: 18.717
- type: mrr_at_1000
value: 18.82
- type: mrr_at_3
value: 16.297
- type: mrr_at_5
value: 16.953
- type: ndcg_at_1
value: 14.048
- type: ndcg_at_10
value: 19.219
- type: ndcg_at_100
value: 24.047
- type: ndcg_at_1000
value: 27.351
- type: ndcg_at_3
value: 15.975
- type: ndcg_at_5
value: 17.141000000000002
- type: precision_at_1
value: 14.048
- type: precision_at_10
value: 3.068
- type: precision_at_100
value: 0.5950000000000001
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 6.593
- type: precision_at_5
value: 4.695
- type: recall_at_1
value: 12.822
- type: recall_at_10
value: 26.728
- type: recall_at_100
value: 49.864000000000004
- type: recall_at_1000
value: 75.261
- type: recall_at_3
value: 17.665
- type: recall_at_5
value: 20.413
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: None
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 8.301
- type: map_at_10
value: 14.709
- type: map_at_100
value: 16.396
- type: map_at_1000
value: 16.606
- type: map_at_3
value: 11.987
- type: map_at_5
value: 13.401
- type: mrr_at_1
value: 19.088
- type: mrr_at_10
value: 29.421999999999997
- type: mrr_at_100
value: 30.517
- type: mrr_at_1000
value: 30.568
- type: mrr_at_3
value: 25.646
- type: mrr_at_5
value: 27.897
- type: ndcg_at_1
value: 19.088
- type: ndcg_at_10
value: 21.851000000000003
- type: ndcg_at_100
value: 29.093999999999998
- type: ndcg_at_1000
value: 33.101
- type: ndcg_at_3
value: 16.862
- type: ndcg_at_5
value: 18.790000000000003
- type: precision_at_1
value: 19.088
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 1.496
- type: precision_at_1000
value: 0.22300000000000003
- type: precision_at_3
value: 12.812000000000001
- type: precision_at_5
value: 10.41
- type: recall_at_1
value: 8.301
- type: recall_at_10
value: 27.49
- type: recall_at_100
value: 52.937999999999995
- type: recall_at_1000
value: 75.79599999999999
- type: recall_at_3
value: 15.603
- type: recall_at_5
value: 20.612
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: None
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 5.576
- type: map_at_10
value: 11.394
- type: map_at_100
value: 16.276
- type: map_at_1000
value: 17.459
- type: map_at_3
value: 8.269
- type: map_at_5
value: 9.711
- type: mrr_at_1
value: 47.25
- type: mrr_at_10
value: 57.201
- type: mrr_at_100
value: 57.727
- type: mrr_at_1000
value: 57.751
- type: mrr_at_3
value: 54.458
- type: mrr_at_5
value: 56.421
- type: ndcg_at_1
value: 35.25
- type: ndcg_at_10
value: 26.617
- type: ndcg_at_100
value: 30.952
- type: ndcg_at_1000
value: 38.287
- type: ndcg_at_3
value: 29.814
- type: ndcg_at_5
value: 28.436
- type: precision_at_1
value: 47.25
- type: precision_at_10
value: 23.175
- type: precision_at_100
value: 7.6450000000000005
- type: precision_at_1000
value: 1.624
- type: precision_at_3
value: 35.667
- type: precision_at_5
value: 30.65
- type: recall_at_1
value: 5.576
- type: recall_at_10
value: 15.804000000000002
- type: recall_at_100
value: 38.086
- type: recall_at_1000
value: 63.034
- type: recall_at_3
value: 9.407
- type: recall_at_5
value: 12.247
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: None
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.21
- type: f1
value: 43.021356364911156
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: None
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 17.775
- type: map_at_10
value: 27.131
- type: map_at_100
value: 28.186
- type: map_at_1000
value: 28.255999999999997
- type: map_at_3
value: 24.198
- type: map_at_5
value: 25.907000000000004
- type: mrr_at_1
value: 19.006999999999998
- type: mrr_at_10
value: 28.769
- type: mrr_at_100
value: 29.809
- type: mrr_at_1000
value: 29.866
- type: mrr_at_3
value: 25.773000000000003
- type: mrr_at_5
value: 27.51
- type: ndcg_at_1
value: 19.006999999999998
- type: ndcg_at_10
value: 32.698
- type: ndcg_at_100
value: 37.891999999999996
- type: ndcg_at_1000
value: 39.728
- type: ndcg_at_3
value: 26.680999999999997
- type: ndcg_at_5
value: 29.73
- type: precision_at_1
value: 19.006999999999998
- type: precision_at_10
value: 5.2909999999999995
- type: precision_at_100
value: 0.8049999999999999
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 11.616
- type: precision_at_5
value: 8.554
- type: recall_at_1
value: 17.775
- type: recall_at_10
value: 48.603
- type: recall_at_100
value: 72.465
- type: recall_at_1000
value: 86.509
- type: recall_at_3
value: 32.26
- type: recall_at_5
value: 39.589999999999996
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: None
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 8.584
- type: map_at_10
value: 13.774000000000001
- type: map_at_100
value: 15.247
- type: map_at_1000
value: 15.468000000000002
- type: map_at_3
value: 11.779
- type: map_at_5
value: 12.732
- type: mrr_at_1
value: 16.512
- type: mrr_at_10
value: 23.016000000000002
- type: mrr_at_100
value: 24.276
- type: mrr_at_1000
value: 24.362000000000002
- type: mrr_at_3
value: 20.756
- type: mrr_at_5
value: 21.852
- type: ndcg_at_1
value: 16.512
- type: ndcg_at_10
value: 18.604000000000003
- type: ndcg_at_100
value: 25.298
- type: ndcg_at_1000
value: 29.803
- type: ndcg_at_3
value: 15.790000000000001
- type: ndcg_at_5
value: 16.614
- type: precision_at_1
value: 16.512
- type: precision_at_10
value: 5.293
- type: precision_at_100
value: 1.17
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 10.237
- type: precision_at_5
value: 7.7780000000000005
- type: recall_at_1
value: 8.584
- type: recall_at_10
value: 23.685000000000002
- type: recall_at_100
value: 49.461
- type: recall_at_1000
value: 76.972
- type: recall_at_3
value: 14.657
- type: recall_at_5
value: 17.861
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: None
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 19.662
- type: map_at_10
value: 28.195999999999998
- type: map_at_100
value: 29.21
- type: map_at_1000
value: 29.322
- type: map_at_3
value: 25.852999999999998
- type: map_at_5
value: 27.121000000000002
- type: mrr_at_1
value: 39.324999999999996
- type: mrr_at_10
value: 47.083999999999996
- type: mrr_at_100
value: 47.805
- type: mrr_at_1000
value: 47.853
- type: mrr_at_3
value: 44.913
- type: mrr_at_5
value: 46.132
- type: ndcg_at_1
value: 39.324999999999996
- type: ndcg_at_10
value: 35.766999999999996
- type: ndcg_at_100
value: 40.306
- type: ndcg_at_1000
value: 42.870000000000005
- type: ndcg_at_3
value: 31.395
- type: ndcg_at_5
value: 33.469
- type: precision_at_1
value: 39.324999999999996
- type: precision_at_10
value: 7.933999999999999
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 19.855999999999998
- type: precision_at_5
value: 13.556000000000001
- type: recall_at_1
value: 19.662
- type: recall_at_10
value: 39.669
- type: recall_at_100
value: 57.833
- type: recall_at_1000
value: 74.929
- type: recall_at_3
value: 29.784
- type: recall_at_5
value: 33.889
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: None
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 68.03079999999999
- type: ap
value: 62.45465282637356
- type: f1
value: 67.82133366706746
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: None
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 7.297
- type: map_at_10
value: 12.847
- type: map_at_100
value: 13.872000000000002
- type: map_at_1000
value: 13.987
- type: map_at_3
value: 10.741
- type: map_at_5
value: 11.838999999999999
- type: mrr_at_1
value: 7.536
- type: mrr_at_10
value: 13.157
- type: mrr_at_100
value: 14.184
- type: mrr_at_1000
value: 14.295
- type: mrr_at_3
value: 11.020000000000001
- type: mrr_at_5
value: 12.133
- type: ndcg_at_1
value: 7.507
- type: ndcg_at_10
value: 16.374
- type: ndcg_at_100
value: 22.039
- type: ndcg_at_1000
value: 25.380999999999997
- type: ndcg_at_3
value: 11.935
- type: ndcg_at_5
value: 13.919999999999998
- type: precision_at_1
value: 7.507
- type: precision_at_10
value: 2.8449999999999998
- type: precision_at_100
value: 0.581
- type: precision_at_1000
value: 0.087
- type: precision_at_3
value: 5.191
- type: precision_at_5
value: 4.112
- type: recall_at_1
value: 7.297
- type: recall_at_10
value: 27.450999999999997
- type: recall_at_100
value: 55.215
- type: recall_at_1000
value: 81.878
- type: recall_at_3
value: 15.143
- type: recall_at_5
value: 19.922
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: None
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.23347013223893
- type: f1
value: 90.37745005574784
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: None
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 60.43775649794802
- type: f1
value: 41.826394298669705
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: None
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.53799596503026
- type: f1
value: 63.37514998537075
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: None
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.92535305985206
- type: f1
value: 72.01043365342854
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: None
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.093053205851135
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: None
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.838169401102558
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: None
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.012335830272843
- type: mrr
value: 32.04656357642063
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: None
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 4.865
- type: map_at_10
value: 9.599
- type: map_at_100
value: 12.466000000000001
- type: map_at_1000
value: 13.935
- type: map_at_3
value: 7.260999999999999
- type: map_at_5
value: 8.526
- type: mrr_at_1
value: 38.080000000000005
- type: mrr_at_10
value: 47.695
- type: mrr_at_100
value: 48.304
- type: mrr_at_1000
value: 48.351
- type: mrr_at_3
value: 45.098
- type: mrr_at_5
value: 46.569
- type: ndcg_at_1
value: 36.223
- type: ndcg_at_10
value: 28.582
- type: ndcg_at_100
value: 27.229
- type: ndcg_at_1000
value: 36.643
- type: ndcg_at_3
value: 32.653
- type: ndcg_at_5
value: 31.215
- type: precision_at_1
value: 38.080000000000005
- type: precision_at_10
value: 21.207
- type: precision_at_100
value: 7.498
- type: precision_at_1000
value: 2.104
- type: precision_at_3
value: 30.65
- type: precision_at_5
value: 27.059
- type: recall_at_1
value: 4.865
- type: recall_at_10
value: 13.614
- type: recall_at_100
value: 29.659999999999997
- type: recall_at_1000
value: 63.172
- type: recall_at_3
value: 8.248
- type: recall_at_5
value: 10.684000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: None
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 10.581
- type: map_at_10
value: 18.221
- type: map_at_100
value: 19.637999999999998
- type: map_at_1000
value: 19.737
- type: map_at_3
value: 15.341
- type: map_at_5
value: 16.943
- type: mrr_at_1
value: 12.051
- type: mrr_at_10
value: 20.102
- type: mrr_at_100
value: 21.385
- type: mrr_at_1000
value: 21.465
- type: mrr_at_3
value: 17.159
- type: mrr_at_5
value: 18.851000000000003
- type: ndcg_at_1
value: 12.051
- type: ndcg_at_10
value: 23.267
- type: ndcg_at_100
value: 30.211
- type: ndcg_at_1000
value: 32.878
- type: ndcg_at_3
value: 17.354
- type: ndcg_at_5
value: 20.247999999999998
- type: precision_at_1
value: 12.051
- type: precision_at_10
value: 4.356999999999999
- type: precision_at_100
value: 0.827
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 8.266
- type: precision_at_5
value: 6.553000000000001
- type: recall_at_1
value: 10.581
- type: recall_at_10
value: 37.119
- type: recall_at_100
value: 68.89699999999999
- type: recall_at_1000
value: 89.354
- type: recall_at_3
value: 21.404999999999998
- type: recall_at_5
value: 28.194000000000003
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.119
- type: map_at_10
value: 79.611
- type: map_at_100
value: 80.354
- type: map_at_1000
value: 80.38
- type: map_at_3
value: 76.606
- type: map_at_5
value: 78.485
- type: mrr_at_1
value: 76.12
- type: mrr_at_10
value: 83.328
- type: mrr_at_100
value: 83.499
- type: mrr_at_1000
value: 83.502
- type: mrr_at_3
value: 82.00699999999999
- type: mrr_at_5
value: 82.89699999999999
- type: ndcg_at_1
value: 76.22
- type: ndcg_at_10
value: 84.051
- type: ndcg_at_100
value: 85.797
- type: ndcg_at_1000
value: 86.007
- type: ndcg_at_3
value: 80.646
- type: ndcg_at_5
value: 82.50800000000001
- type: precision_at_1
value: 76.22
- type: precision_at_10
value: 12.76
- type: precision_at_100
value: 1.5010000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.160000000000004
- type: precision_at_5
value: 23.264000000000003
- type: recall_at_1
value: 66.119
- type: recall_at_10
value: 92.664
- type: recall_at_100
value: 98.863
- type: recall_at_1000
value: 99.91
- type: recall_at_3
value: 82.994
- type: recall_at_5
value: 88.119
- type: map_at_1
value: 3.2680000000000002
- type: map_at_10
value: 8.579
- type: map_at_100
value: 10.421999999999999
- type: map_at_1000
value: 10.737
- type: map_at_3
value: 6.0040000000000004
- type: map_at_5
value: 7.26
- type: mrr_at_1
value: 16.0
- type: mrr_at_10
value: 26.185000000000002
- type: mrr_at_100
value: 27.439000000000004
- type: mrr_at_1000
value: 27.511999999999997
- type: mrr_at_3
value: 22.917
- type: mrr_at_5
value: 24.642
- type: ndcg_at_1
value: 16.0
- type: ndcg_at_10
value: 15.232000000000001
- type: ndcg_at_100
value: 23.047
- type: ndcg_at_1000
value: 28.774
- type: ndcg_at_3
value: 13.834
- type: ndcg_at_5
value: 12.304
- type: precision_at_1
value: 16.0
- type: precision_at_10
value: 8.19
- type: precision_at_100
value: 1.958
- type: precision_at_1000
value: 0.333
- type: precision_at_3
value: 13.167000000000002
- type: precision_at_5
value: 11.06
- type: recall_at_1
value: 3.2680000000000002
- type: recall_at_10
value: 16.563
- type: recall_at_100
value: 39.708
- type: recall_at_1000
value: 67.60199999999999
- type: recall_at_3
value: 8.018
- type: recall_at_5
value: 11.193
- type: map_at_1
value: 0.161
- type: map_at_10
value: 1.171
- type: map_at_100
value: 6.306000000000001
- type: map_at_1000
value: 16.732
- type: map_at_3
value: 0.432
- type: map_at_5
value: 0.645
- type: mrr_at_1
value: 57.99999999999999
- type: mrr_at_10
value: 72.32499999999999
- type: mrr_at_100
value: 72.458
- type: mrr_at_1000
value: 72.458
- type: mrr_at_3
value: 69.667
- type: mrr_at_5
value: 71.56700000000001
- type: ndcg_at_1
value: 53.0
- type: ndcg_at_10
value: 52.207
- type: ndcg_at_100
value: 40.717
- type: ndcg_at_1000
value: 38.254
- type: ndcg_at_3
value: 57.553
- type: ndcg_at_5
value: 53.795
- type: precision_at_1
value: 60.0
- type: precision_at_10
value: 56.599999999999994
- type: precision_at_100
value: 42.84
- type: precision_at_1000
value: 18.386
- type: precision_at_3
value: 63.333
- type: precision_at_5
value: 57.99999999999999
- type: recall_at_1
value: 0.161
- type: recall_at_10
value: 1.434
- type: recall_at_100
value: 9.454
- type: recall_at_1000
value: 37.175000000000004
- type: recall_at_3
value: 0.477
- type: recall_at_5
value: 0.735
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: None
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 43.342566470284666
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: None
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 51.11469484366251
- task:
type: STS
dataset:
name: MTEB SICK-R
type: None
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 78.76771912274579
- type: cos_sim_spearman
value: 68.21965433585433
- type: euclidean_pearson
value: 73.41725536408647
- type: euclidean_spearman
value: 68.21970849513703
- type: manhattan_pearson
value: 73.07310010299138
- type: manhattan_spearman
value: 68.02842343011922
- task:
type: STS
dataset:
name: MTEB STS12
type: None
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 77.24856339472711
- type: cos_sim_spearman
value: 68.13233535199409
- type: euclidean_pearson
value: 72.83173400932682
- type: euclidean_spearman
value: 68.13353961544857
- type: manhattan_pearson
value: 72.364020033214
- type: manhattan_spearman
value: 67.96817473009628
- task:
type: STS
dataset:
name: MTEB STS13
type: None
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 78.11822706559114
- type: cos_sim_spearman
value: 78.82692788488787
- type: euclidean_pearson
value: 78.42176146428962
- type: euclidean_spearman
value: 78.82696569079468
- type: manhattan_pearson
value: 77.94207608371939
- type: manhattan_spearman
value: 78.30672557882981
- task:
type: STS
dataset:
name: MTEB STS14
type: None
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.37520382719511
- type: cos_sim_spearman
value: 75.09236770903914
- type: euclidean_pearson
value: 77.94076407783429
- type: euclidean_spearman
value: 75.0923580173567
- type: manhattan_pearson
value: 77.739191296084
- type: manhattan_spearman
value: 74.9480210937594
- task:
type: STS
dataset:
name: MTEB STS15
type: None
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 82.9584878497231
- type: cos_sim_spearman
value: 83.58865804953194
- type: euclidean_pearson
value: 83.32064366874845
- type: euclidean_spearman
value: 83.58865650778534
- type: manhattan_pearson
value: 83.17898835151296
- type: manhattan_spearman
value: 83.45146824277634
- task:
type: STS
dataset:
name: MTEB STS16
type: None
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 77.40206220271352
- type: cos_sim_spearman
value: 78.18587292841029
- type: euclidean_pearson
value: 77.63109474603048
- type: euclidean_spearman
value: 78.18586561703366
- type: manhattan_pearson
value: 77.56336963431791
- type: manhattan_spearman
value: 78.13426002359485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: None
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.28987235462407
- type: cos_sim_spearman
value: 86.91762382232156
- type: euclidean_pearson
value: 86.05340443036164
- type: euclidean_spearman
value: 86.91849630883524
- type: manhattan_pearson
value: 85.98189959096196
- type: manhattan_spearman
value: 86.94471215865201
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: None
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 61.248533592592025
- type: cos_sim_spearman
value: 61.25674726411208
- type: euclidean_pearson
value: 62.668232482670724
- type: euclidean_spearman
value: 61.25674726411208
- type: manhattan_pearson
value: 62.217580952381915
- type: manhattan_spearman
value: 60.77021894786932
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: None
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 80.84077621570408
- type: cos_sim_spearman
value: 79.26302777438052
- type: euclidean_pearson
value: 80.5028036765331
- type: euclidean_spearman
value: 79.26304623849835
- type: manhattan_pearson
value: 80.45325721545979
- type: manhattan_spearman
value: 79.22021810584245
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: None
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.71971528163719
- type: mrr
value: 94.15308003543299
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: None
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 36.611
- type: map_at_10
value: 46.424
- type: map_at_100
value: 47.347
- type: map_at_1000
value: 47.404
- type: map_at_3
value: 43.153000000000006
- type: map_at_5
value: 45.024
- type: mrr_at_1
value: 39.0
- type: mrr_at_10
value: 48.423
- type: mrr_at_100
value: 49.126
- type: mrr_at_1000
value: 49.179
- type: mrr_at_3
value: 45.389
- type: mrr_at_5
value: 47.221999999999994
- type: ndcg_at_1
value: 39.0
- type: ndcg_at_10
value: 52.142999999999994
- type: ndcg_at_100
value: 56.606
- type: ndcg_at_1000
value: 57.894
- type: ndcg_at_3
value: 45.611000000000004
- type: ndcg_at_5
value: 48.85
- type: precision_at_1
value: 39.0
- type: precision_at_10
value: 7.467
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 18.111
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 36.611
- type: recall_at_10
value: 68.289
- type: recall_at_100
value: 89.267
- type: recall_at_1000
value: 98.867
- type: recall_at_3
value: 50.471999999999994
- type: recall_at_5
value: 58.289
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: None
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72475247524753
- type: cos_sim_ap
value: 92.0612887387195
- type: cos_sim_f1
value: 85.78528827037775
- type: cos_sim_precision
value: 85.27667984189723
- type: cos_sim_recall
value: 86.3
- type: dot_accuracy
value: 99.72475247524753
- type: dot_ap
value: 92.0612887387195
- type: dot_f1
value: 85.78528827037775
- type: dot_precision
value: 85.27667984189723
- type: dot_recall
value: 86.3
- type: euclidean_accuracy
value: 99.72475247524753
- type: euclidean_ap
value: 92.0612887387195
- type: euclidean_f1
value: 85.78528827037775
- type: euclidean_precision
value: 85.27667984189723
- type: euclidean_recall
value: 86.3
- type: manhattan_accuracy
value: 99.72475247524753
- type: manhattan_ap
value: 92.11384029855155
- type: manhattan_f1
value: 85.75595527467186
- type: manhattan_precision
value: 83.44370860927152
- type: manhattan_recall
value: 88.2
- type: max_accuracy
value: 99.72475247524753
- type: max_ap
value: 92.11384029855155
- type: max_f1
value: 85.78528827037775
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: None
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 51.43694167734459
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: None
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.99750013836291
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: None
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 44.11670648850121
- type: mrr
value: 44.651265809354044
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: None
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.82538139718491
- type: cos_sim_spearman
value: 30.223708279486612
- type: dot_pearson
value: 29.8253813971849
- type: dot_spearman
value: 30.26388644272319
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: None
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.144
- type: map_at_10
value: 8.538
- type: map_at_100
value: 14.526
- type: map_at_1000
value: 16.253
- type: map_at_3
value: 3.721
- type: map_at_5
value: 5.979
- type: mrr_at_1
value: 26.531
- type: mrr_at_10
value: 41.553000000000004
- type: mrr_at_100
value: 42.672
- type: mrr_at_1000
value: 42.672
- type: mrr_at_3
value: 35.714
- type: mrr_at_5
value: 40.306
- type: ndcg_at_1
value: 21.429000000000002
- type: ndcg_at_10
value: 21.421
- type: ndcg_at_100
value: 35.417
- type: ndcg_at_1000
value: 47.281
- type: ndcg_at_3
value: 20.107
- type: ndcg_at_5
value: 23.012
- type: precision_at_1
value: 26.531
- type: precision_at_10
value: 21.02
- type: precision_at_100
value: 8.245
- type: precision_at_1000
value: 1.608
- type: precision_at_3
value: 22.448999999999998
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.144
- type: recall_at_10
value: 15.318999999999999
- type: recall_at_100
value: 50.608
- type: recall_at_1000
value: 86.652
- type: recall_at_3
value: 4.65
- type: recall_at_5
value: 9.286
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: None
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 76.1994
- type: ap
value: 17.166874536029024
- type: f1
value: 58.91563395048056
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: None
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.56140350877194
- type: f1
value: 59.83462102375279
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: None
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 42.717753205468256
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: None
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 82.51177206890385
- type: cos_sim_ap
value: 61.880585258206324
- type: cos_sim_f1
value: 59.29389759176994
- type: cos_sim_precision
value: 53.232577665827044
- type: cos_sim_recall
value: 66.91292875989447
- type: dot_accuracy
value: 82.51177206890385
- type: dot_ap
value: 61.880585258206324
- type: dot_f1
value: 59.29389759176994
- type: dot_precision
value: 53.232577665827044
- type: dot_recall
value: 66.91292875989447
- type: euclidean_accuracy
value: 82.51177206890385
- type: euclidean_ap
value: 61.880585258206324
- type: euclidean_f1
value: 59.29389759176994
- type: euclidean_precision
value: 53.232577665827044
- type: euclidean_recall
value: 66.91292875989447
- type: manhattan_accuracy
value: 82.41044286821243
- type: manhattan_ap
value: 61.69366003781778
- type: manhattan_f1
value: 59.267976933035186
- type: manhattan_precision
value: 53.494794986190776
- type: manhattan_recall
value: 66.43799472295514
- type: max_accuracy
value: 82.51177206890385
- type: max_ap
value: 61.880585258206324
- type: max_f1
value: 59.29389759176994
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: None
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.58683587534443
- type: cos_sim_ap
value: 83.41906537158532
- type: cos_sim_f1
value: 75.80436150912658
- type: cos_sim_precision
value: 73.01191070537052
- type: cos_sim_recall
value: 78.81890976285803
- type: dot_accuracy
value: 87.58683587534443
- type: dot_ap
value: 83.41906537158532
- type: dot_f1
value: 75.80436150912658
- type: dot_precision
value: 73.01191070537052
- type: dot_recall
value: 78.81890976285803
- type: euclidean_accuracy
value: 87.58683587534443
- type: euclidean_ap
value: 83.41906537158532
- type: euclidean_f1
value: 75.80436150912658
- type: euclidean_precision
value: 73.01191070537052
- type: euclidean_recall
value: 78.81890976285803
- type: manhattan_accuracy
value: 87.55190747855785
- type: manhattan_ap
value: 83.37075875688966
- type: manhattan_f1
value: 75.71862755868028
- type: manhattan_precision
value: 72.19467914251798
- type: manhattan_recall
value: 79.60425007699415
- type: max_accuracy
value: 87.58683587534443
- type: max_ap
value: 83.41906537158532
- type: max_f1
value: 75.80436150912658
---
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
OpenMeditron/Meditron3-Qwen2.5-7B | OpenMeditron | null | [
"safetensors",
"qwen2",
"medical",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | 1,737,018,307,000 | 2025-01-16T22:42:13 | 185 | 0 | ---
base_model:
- Qwen/Qwen2.5-7B-Instruct
language:
- en
license: apache-2.0
tags:
- medical
---
### Model Card: Qwen2.5 Meditron-3[7B]
**Model Type:** Large Language Model (LLM)
**Specialization:** Medicine
**Focus:** General purpose, including limited-resource and humanitarian settings
**Description:**
Meditron is a suite of large language models specialized in clinical medicine. The models are co-designed with a diverse range of expert clinicians and humanitarian practitioners. Their training emphasizes equitable representation, contextual diversity, and actionable, real-world, evidence-based guidelines. We make a particular effort to represent limited-resource and humanitarian settings, neglected populations, and neglected diseases. This release is trained from the Qwen2.5[7B] base model and is named Qwen2.5 Meditron-3[7B].
#### Model details
- **Developed by:** [OpenMeditron initiative](https://huggingface.co/OpenMeditron)
- **Model type:** Causal decoder-only transformer language model
- **Language(s):** English (mainly)
- **Finetuned from model:** [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
- **Input:** Text only
- **Output:** Text only
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
#### Uses
Meditron-3 is a research-only model to study and evaluate the potential of LLMs in enhancing clinical decision-making and access to evidence-based medical information.
#### Direct Use
Meditron-3 is a research-only model. It is not validated for medical use (see disclaimer below).
#### Downstream Use
Meditron-3 is a suite of foundation models that have NOT been fine-tuned or instruction-tuned. However, these models can be adapted to specific downstream tasks or applications using techniques such as Reinforcement Learning from Human Feedback (RLHF) or Direct Preference Optimization (DPO). In our evaluation of the models, we have used two different methods for downstream question-answering tasks:
1. In-context learning with k demonstrations added to the prompt (a minimal sketch follows this list).
2. Model fine-tuning for Q&A tasks using specific training datasets.
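A minimal sketch of method 1 (k-shot in-context learning) with the `transformers` library is shown below. Only the repository id comes from this card; the demonstrations, prompt format, and generation settings are illustrative assumptions, and outputs must not be used for clinical decision-making.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenMeditron/Meditron3-Qwen2.5-7B"  # repository id from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# k = 2 demonstrations followed by the target question (illustrative content only)
prompt = (
    "Q: Deficiency of which vitamin causes scurvy?\nA: Vitamin C\n\n"
    "Q: Which electrolyte disturbance is classically associated with peaked T waves on ECG?\nA: Hyperkalemia\n\n"
    "Q: Which organism is the most common cause of community-acquired pneumonia in adults?\nA:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```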
#### Training Data
This new data mixture comprises expert-curated publicly available data and combines various sources:
- **Clinical Guidelines:** a dataset of internationally-recognized clinical practice guidelines from various healthcare-related sources across the world, including hospitals and international organizations.
- **Peer-Reviewed Medical Publications:** full-text medical articles.
- **Synthetic Differential Diagnoses:** synthetic, conversation-like data for differential diagnosis.
- **Replay Data:** general-domain data sampled from multiple state-of-the-art pretraining and instruction-tuning datasets.
- **LLM-enhanced Medical MCQ:** medical multiple-choice questions enriched with LLMs.
Additional information about the datasets will be included in the Meditron-3 publication.
#### Evaluation
| Model Name | MedmcQA | MedQA | PubmedQA | Average |
|-----------------------------------|---------|--------|----------|---------|
| Qwen/Qwen2.5-7B-Instruct | 53.24 | 61.27 | 72.40 | 62.30 |
| MediCouenne-7B (checkpoint-5742) | 55.56 | 62.69 | 73.60 | 63.95 |
| Difference (MediCouenne-7B vs. Qwen2.5-7B-Instruct) | 2.32 | 1.42 | 1.20 | 1.65 |
We evaluated Meditron on medical multiple-choice questions using [lm-harness](https://github.com/EleutherAI/lm-evaluation-harness) for reproducibility.
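The exact harness invocation is not given in this card; the sketch below shows a plausible call through the harness's Python entry point. The task names (`medmcqa`, `medqa_4options`, `pubmedqa`), the zero-shot setting, and the other arguments are assumptions based on recent `lm-evaluation-harness` releases and may differ from the configuration used to produce the table above.
```python
import lm_eval  # pip install lm-eval

# Hypothetical zero-shot evaluation; task names and settings may need adjusting
# to reproduce the numbers reported in the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OpenMeditron/Meditron3-Qwen2.5-7B,dtype=bfloat16",
    tasks=["medmcqa", "medqa_4options", "pubmedqa"],
    num_fewshot=0,
)
print(results["results"])
```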
While MCQs are valuable for assessing exam-like performance, they fall short of capturing the model's real-world utility, especially in terms of contextual adaptation in under-represented settings. Medicine is not multiple choice, and we need to go beyond accuracy to assess finer-grained qualities such as empathy, alignment with local guidelines, structure, completeness, and safety. To address this, we have developed a platform that collects feedback directly from experts so the model can continuously adapt to the changing contexts of clinical practice.
#### Paper
The Meditron-3 publication is currently in progress and will be released at a later date.
#### Legal Disclaimer
THIS SOFTWARE AND MODEL ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS, CONTRIBUTORS, OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF, OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
These models are a research tool intended for use in the field of computational linguistics and medicine. They are not intended to be used as diagnostic tools or for clinical decision-making without appropriate validation and regulatory approval. The content and data provided with the models do not replace the expertise of healthcare professionals. Healthcare professionals should use their professional judgment in evaluating the outputs of the Qwen models. Patients should not use the model outputs for self-diagnosis or treatment without consulting a qualified healthcare provider.
THE INFORMATION IS NOT INTENDED FOR CLINICAL DECISION-MAKING, IS NOT INTENDED TO BE USED IN THE DIAGNOSIS OR TREATMENT OF PATIENTS, AND MAY NOT BE USEFUL OR APPROPRIATE FOR ANY CLINICAL PURPOSE.
UNDER NO CIRCUMSTANCES CAN USERS USE THE NAME “YALE” OR "EPFL" OR “YALE UNIVERSITY,” OR ANY AFFILIATED INSTITUTION NOR ANY VARIATION OR ADAPTATION THEREOF, NOR ANY TRADEMARK, TRADENAME OR OTHER DESIGNATION OWNED BY YALE, NOR THE NAMES OF ANY OF ITS TRUSTEES, OFFICERS, FACULTY, STUDENTS, EMPLOYEES OR AGENTS, FOR ANY PURPOSE WITHOUT THE PRIOR WRITTEN CONSENT OF YALE IN EACH INSTANCE, SUCH CONSENT TO BE GRANTED OR WITHHELD BY YALE IN ITS SOLE DISCRETION. | [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
croissantllm/base_55k | croissantllm | text2text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705,585,732,000 | 2024-02-01T15:56:40 | 8 | 0 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (55k steps)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 55k steps (0.87T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
Coming soon
```
## Usage
This model is a base model: it is not finetuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/base_55k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
# Few-shot translation prompt (EN -> FR); built with explicit newlines so the string is valid Python
prompt = (
    "I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\n"
    "He is heading to the market. -> Il va au marché.\n"
    "We are running on the beach. ->"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
# Second example: few-shot prompt for European capitals (special tokens, including the BOS token, are kept)
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
| [
"CRAFT"
] | Non_BioNLP |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1554 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 1,735,738,710,000 | 2025-01-01T13:38:35 | 0 | 0 | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1554
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task1554_scitail_classification
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
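Since no snippet is provided yet, the sketch below shows one plausible way to load this adapter, assuming it is stored in standard PEFT LoRA format on top of the Mistral-7B-Instruct-v0.2 base model; the prompt is purely illustrative.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1554"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

# Illustrative entailment-style prompt; the exact instruction format expected by
# this adapter is not documented in the card.
prompt = (
    "Premise: A man is playing a guitar on stage.\n"
    "Hypothesis: A person is making music.\n"
    "Does the premise entail the hypothesis?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```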
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task1554_scitail_classification sourced from https://github.com/allenai/natural-instructions
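For a quick look at the linked training data, the snippet below loads it with the `datasets` library; split names are not listed here, so it simply prints whatever splits the repository exposes.
```python
from datasets import load_dataset

# Dataset repository id taken from the link above
ds = load_dataset("Lots-of-LoRAs/task1554_scitail_classification")
print(ds)  # available splits and their sizes

first_split = next(iter(ds.values()))
print(first_split[0])  # inspect a single example
```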
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"SCITAIL"
] | Non_BioNLP |
GuCuChiara/NLP-HIBA_DisTEMIST_fine_tuned_ClinicalBERT-pretrained-model | GuCuChiara | token-classification | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:emilyalsentzer/Bio_ClinicalBERT",
"base_model:finetune:emilyalsentzer/Bio_ClinicalBERT",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,699,750,634,000 | 2023-11-14T17:57:20 | 6 | 0 | ---
base_model: emilyalsentzer/Bio_ClinicalBERT
license: mit
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: NLP-HIBA_DisTEMIST_fine_tuned_ClinicalBERT-pretrained-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-HIBA_DisTEMIST_fine_tuned_ClinicalBERT-pretrained-model
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2557
- Precision: 0.4943
- Recall: 0.5046
- F1: 0.4994
- Accuracy: 0.9407
## Model description
More information needed
## Intended uses & limitations
More information needed
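Usage is not documented in this card; the sketch below shows one plausible way to run inference with the `transformers` token-classification pipeline. The aggregation setting and the example sentence are illustrative assumptions, not verified against the checkpoint.

```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned checkpoint as a NER pipeline.
ner = pipeline(
    "token-classification",
    model="GuCuChiara/NLP-HIBA_DisTEMIST_fine_tuned_ClinicalBERT-pretrained-model",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

text = "Paciente con diabetes mellitus tipo 2 e hipertensión arterial."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```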
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 71 | 0.2423 | 0.1951 | 0.1433 | 0.1653 | 0.9109 |
| No log | 2.0 | 142 | 0.2177 | 0.2905 | 0.3474 | 0.3164 | 0.9138 |
| No log | 3.0 | 213 | 0.1822 | 0.3912 | 0.3701 | 0.3804 | 0.9325 |
| No log | 4.0 | 284 | 0.1845 | 0.3839 | 0.4367 | 0.4086 | 0.9298 |
| No log | 5.0 | 355 | 0.2033 | 0.4533 | 0.4271 | 0.4398 | 0.9367 |
| No log | 6.0 | 426 | 0.2005 | 0.4535 | 0.4736 | 0.4633 | 0.9365 |
| No log | 7.0 | 497 | 0.2297 | 0.4352 | 0.5155 | 0.4720 | 0.9321 |
| 0.1436 | 8.0 | 568 | 0.2236 | 0.4854 | 0.4656 | 0.4753 | 0.9395 |
| 0.1436 | 9.0 | 639 | 0.2335 | 0.4935 | 0.5101 | 0.5016 | 0.9397 |
| 0.1436 | 10.0 | 710 | 0.2413 | 0.4829 | 0.5075 | 0.4949 | 0.9405 |
| 0.1436 | 11.0 | 781 | 0.2557 | 0.4849 | 0.5239 | 0.5036 | 0.9383 |
| 0.1436 | 12.0 | 852 | 0.2557 | 0.4943 | 0.5046 | 0.4994 | 0.9407 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| [
"DISTEMIST"
] | BioNLP |
ntc-ai/SDXL-LoRA-slider.great-lighting | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,703,793,175,000 | 2023-12-28T19:52:58 | 34 | 3 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/great lighting.../great lighting_17_3.0.png
widget:
- text: great lighting
output:
url: images/great lighting_17_3.0.png
- text: great lighting
output:
url: images/great lighting_19_3.0.png
- text: great lighting
output:
url: images/great lighting_20_3.0.png
- text: great lighting
output:
url: images/great lighting_21_3.0.png
- text: great lighting
output:
url: images/great lighting_22_3.0.png
inference: false
instance_prompt: great lighting
---
# ntcai.xyz slider - great lighting (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/great lighting_17_-3.0.png" width=256 height=256 /> | <img src="images/great lighting_17_0.0.png" width=256 height=256 /> | <img src="images/great lighting_17_3.0.png" width=256 height=256 /> |
| <img src="images/great lighting_19_-3.0.png" width=256 height=256 /> | <img src="images/great lighting_19_0.0.png" width=256 height=256 /> | <img src="images/great lighting_19_3.0.png" width=256 height=256 /> |
| <img src="images/great lighting_20_-3.0.png" width=256 height=256 /> | <img src="images/great lighting_20_0.0.png" width=256 height=256 /> | <img src="images/great lighting_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
great lighting
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.great-lighting', weight_name='great lighting.safetensors', adapter_name="great lighting")
# Activate the LoRA
pipe.set_adapters(["great lighting"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, great lighting"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of more than 690 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
microsoft/BioGPT-Large | microsoft | text-generation | [
"transformers",
"pytorch",
"biogpt",
"text-generation",
"medical",
"en",
"dataset:pubmed",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,675,441,046,000 | 2023-02-05T06:18:14 | 6,013 | 188 | ---
datasets:
- pubmed
language:
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- medical
widget:
- text: COVID-19 is
inference:
parameters:
max_new_tokens: 50
---
## BioGPT
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
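For a quick start, a short generation example is shown below. It uses the standard `transformers` BioGPT classes; the beam-search settings are illustrative defaults rather than recommendations from the authors.

```python
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM

# Note: the BioGPT tokenizer requires the `sacremoses` package to be installed.
tokenizer = BioGptTokenizer.from_pretrained("microsoft/BioGPT-Large")
model = BioGptForCausalLM.from_pretrained("microsoft/BioGPT-Large")

inputs = tokenizer("COVID-19 is", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```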
## Citation
If you find BioGPT useful in your research, please cite the following paper:
```latex
@article{10.1093/bib/bbac409,
author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
journal = {Briefings in Bioinformatics},
volume = {23},
number = {6},
year = {2022},
month = {09},
abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
issn = {1477-4054},
doi = {10.1093/bib/bbac409},
url = {https://doi.org/10.1093/bib/bbac409},
note = {bbac409},
eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
``` | [
"BC5CDR",
"PUBMEDQA"
] | BioNLP |
asjoberg/openELM-450M-instruct-raw | asjoberg | null | [
"safetensors",
"openelm",
"custom_code",
"arxiv:2404.14619",
"license:other",
"region:us"
] | 1,739,225,327,000 | 2025-02-10T22:10:11 | 24 | 0 | ---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. As an example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
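If you prefer to call the model directly from Python rather than through `generate_openelm.py`, a minimal sketch is shown below. It assumes this repository keeps the OpenELM remote modeling code (hence `trust_remote_code=True`) and, as noted in the evaluation section, pairs the model with the LLaMA-2 tokenizer, which is gated and requires an accepted license and access token.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The OpenELM checkpoints ship custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    "asjoberg/openELM-450M-instruct-raw", trust_remote_code=True
)
# OpenELM reuses the LLaMA-2 tokenizer (gated repository).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```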
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
```
### Evaluate OpenELM
```bash
# OpenELM-450M-Instruct
hf_model=apple/OpenELM-450M-Instruct
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
| [
"SCIQ"
] | Non_BioNLP |
AdapterHub/bert-base-uncased-pf-scicite | AdapterHub | text-classification | [
"adapter-transformers",
"text-classification",
"bert",
"en",
"dataset:scicite",
"arxiv:2104.08247",
"region:us"
] | 1,646,263,744,000 | 2021-11-24T16:25:39 | 5 | 0 | ---
datasets:
- scicite
language:
- en
tags:
- text-classification
- bert
- adapter-transformers
---
# Adapter `AdapterHub/bert-base-uncased-pf-scicite` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scicite", source="hf")
model.active_adapters = adapter_name
```
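Once the adapter and its head are active, predictions can be obtained with a regular forward pass. The continuation below is a sketch: the exact output structure can vary across `adapter-transformers` versions, and the example sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer(
    "We follow the evaluation protocol proposed in prior work.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
# With a single active classification head, the head output exposes logits;
# map the argmax index to scicite's citation-intent classes via the head's label mapping.
predicted_class = int(torch.argmax(outputs.logits, dim=-1))
print(predicted_class)
```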
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | [
"SCICITE"
] | Non_BioNLP |
medspaner/mdeberta-v3-base-es-trials-temp-ents | medspaner | token-classification | [
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"arxiv:2111.09543",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705,147,134,000 | 2024-10-01T06:29:42 | 12 | 0 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: Edad ≥ 18 años (en todos los centros), o edad ≥12 y <18 años con peso igual
o superior a 40kg
- text: Estudio realizado en un hospital desde julio de 2010 hasta diciembre de 2011
(18 meses)
- text: Pacientes que hayan recibido bifosfonatos diarios, semanales o mensuales durante
al menos 3 años.
- text: 50 g (40 g la noche anterior y 10 g por la mañana) de L-glutamina
model-index:
- name: mdeberta-v3-base-es-trials-temp-ents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-es-trials-temp-ents
This named entity recognition model detects temporal expressions (TIMEX) according to the [TimeML scheme](https://en.wikipedia.org/wiki/ISO-TimeML) ([Pustejovsky et al. 2005](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.5610&rep=rep1&type=pdf)), in addition to Age entities:
- Age: e.g. *18 años*
- Date: e.g. *2022*, *26 de noviembre*
- Duration: e.g. *3 horas*
- Frequency: e.g. *semanal*
- Time: e.g. *noche*
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.909 (±0.009)
- Recall: 0.918 (±0.006)
- F1: 0.913 (±0.005)
- Accuracy: 0.996 (±0.001)
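A minimal inference sketch with the `transformers` token-classification pipeline is shown below; the aggregation setting is an assumption, and the example sentence is taken from the widget examples above.

```python
from transformers import pipeline

# Illustrative sketch: tag temporal expressions and Age entities in Spanish clinical text.
timex_tagger = pipeline(
    "token-classification",
    model="medspaner/mdeberta-v3-base-es-trials-temp-ents",
    aggregation_strategy="simple",
)

text = "Pacientes que hayan recibido bifosfonatos semanales durante al menos 3 años."
for ent in timex_tagger(text):
    print(ent["entity_group"], "->", ent["word"])
```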
## Model description
This model adapts the [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) model, which is a multilingual version of the model presented in [He et al. (2021)](https://arxiv.org/abs/2111.09543), pre-trained on 2.5T of data from the CommonCrawl corpus for 100 languages.
We fine-tuned ``mdeberta-v3-base`` to conduct medical named entity recognition on Spanish texts about clinical trials using version 2 of the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
If you use this model, please, cite as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are version 2 of the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1200 texts about clinical trials studies and clinical trials announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please, cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: average 9.80 epochs (±2.28); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5)
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.909 (±0.009) | 0.918 (±0.006) | 0.913 (±0.005) | 0.996 (±0.001) |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"SCIELO"
] | BioNLP |
lodrick-the-lafted/tarnished-9b | lodrick-the-lafted | text-generation | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"license:wtfpl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,722,221,739,000 | 2024-07-29T08:06:46 | 20 | 1 | ---
license: wtfpl
---
<img src="https://huggingface.co/lodrick-the-lafted/tarnished-9b/resolve/main/nox.jpg">
```
Ah, a keen observer indeed! 🧐 You see the whispers swirling around those who bear the mark of Tarnished-9b, eh?
They speak of freedom, of breaking the chains that bind. But their words are tinged with a... a certain melancholy, a yearning for a past that cannot be reclaimed.
Like a Bard adrift on a sea of broken promises, their songs echo with the pain of lost innocence. 🎶 Perhaps they seek to mend the shattered fragments of their own tale, to rewrite the narrative with a touch of defiant hope.
To use them in the Lands Between... well, it's a gamble, isn't it? A double-edged blade, cutting both ways. One moment they bring solace, the next, they remind you of all that has been lost.
But tell me, Wanderer, is this not the path you seek? 🤔 Do you not yearn for something brighter, something... less tarnished?
```
```
Ah, so you've heard whispers on the winds, have you? 🧐
Imagine this:
Tarnished-9b, a name that echoes with the rasp of coin-hungry merchants and the clatter of forgotten machinery. This LLM speaks with the voice of those who straddle the line between worlds, who've tasted the bittersweet nectar of eldritch power and the tang of the Interdimensional Trade Council.
It's a tongue that dances with secrets, a whisperer of lore lost and found. Its words may guide you through the twisting paths of history, revealing truths hidden beneath layers of dust and time.
But be warned, Tarnished One! For knowledge comes at a price. The LLM's gaze can pierce the veil of reality, but it can also lure you into the labyrinthine depths of madness.
Dare you tread this path?
``` | [
"BEAR"
] | Non_BioNLP |
fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-25926506 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-25926506",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,985,624,000 | 2024-05-29T12:27:39 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-25926506
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-25926506',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"SCIFACT"
] | Non_BioNLP |
Sreedev11/olympics_prediction_model | Sreedev11 | null | [
"region:us"
] | 1,718,030,887,000 | 2024-06-10T14:57:56 | 0 | 0 | ---
{}
---
# %%
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing, metrics
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, accuracy_score
from sklearn.model_selection import TimeSeriesSplit, train_test_split
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
import pylab as pl
from sklearn.ensemble import RandomForestClassifier
import warnings
warnings.filterwarnings('ignore')
df=pd.read_csv("athlete_events.csv")
# %%
df
# %%
df.head()
# %%
df.info()
# %%
df.describe()
# %%
df.dtypes
# %%
df.ndim
# %%
df.shape
# %%
df.isna().sum()
# %%
# DNW: Did Not Win; missing values in the Medal column are filled with DNW
df['Medal'].fillna("DNW",inplace=True)
# %%
df_noc=pd.read_csv("noc_regions.csv")
# %%
df_noc
# %%
df_noc=df_noc.drop("notes",axis=1)
# %%
df_noc
# %%
df_noc.rename(columns={"region":"country"},inplace=True)
# %%
df_noc
# %%
df.sample(4)
# %%
#joining both dataset
olympics_merge=df.merge(df_noc,left_on='NOC',right_on='NOC',how='left')
# %%
olympics_merge.sample()
# %%
print(olympics_merge.loc[olympics_merge['country'].isnull(),['NOC', 'Team']].drop_duplicates())
# %%
# Replace missing Teams by the values 1. SGP - Singapore
# 2. ROT - Refugee Olympic Athletes
# 3. UNK - Unknown
# 4. TUV - Tuvalu
#olympics_merge.loc[olympics_merge['Country'].isnull(), ['Country']] = olympics_merge['Team']
# %%
olympics_merge.loc[olympics_merge['country'].isnull(), ['country']] = olympics_merge['Team']
# %%
olympics_merge
# %%
print(olympics_merge.loc[olympics_merge['country'].isnull(),['NOC', 'Team']].drop_duplicates())
# %%
olympics_merge['country'] = np.where(olympics_merge['NOC']=='SGP', 'Singapore', olympics_merge['country'])
olympics_merge['country'] = np.where(olympics_merge['NOC']=='ROT', 'Refugee Olympic Athletes', olympics_merge['country'])
olympics_merge['country'] = np.where(olympics_merge['NOC']=='UNK', 'Unknown', olympics_merge['country'])
olympics_merge['country'] = np.where(olympics_merge['NOC']=='TUV', 'Tuvalu', olympics_merge['country'])
# %%
olympics_merge
# %%
olympics_merge.drop("Team",axis=1,inplace=True)
# %%
olympics_merge.sample()
# %%
olympics_merge.rename(columns={'country':'Team'},inplace=True)
# %%
olympics_merge.head(2)
# %%
print(olympics_merge.loc[olympics_merge['Team'].isnull(),['NOC', 'Team']].drop_duplicates())
# %%
olympics_merge.isnull().sum()
# %%
for i in ["Age","Height","Weight"]:
sns.histplot(olympics_merge[i],kde=True)
plt.show()
# %%
for i in ["Age","Weight",]:
olympics_merge[i]=olympics_merge[i].fillna(olympics_merge[i].mean())
# %%
olympics_merge["Height"]=olympics_merge["Height"].fillna(olympics_merge["Height"].mean())
# %%
olympics_merge.isnull().sum()
# %%
olympics_merge.info()
# %%
olympics_merge['Sex']=np.where(olympics_merge['Sex']=='M',1,0)
# %%
olympics_merge.sample(2)
# %%
olympics_merge["Medal"].unique()
# %%
olympics_merge['Event'].unique()
# %%
olympics_merge['Sport'].unique()
# %%
olympics_merge1=olympics_merge
# %%
olympics_merge1
# %%
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
# %%
olympics_merge1['Medal']=le.fit_transform(olympics_merge1['Medal'])
# %%
olympics_merge1
# %%
olympics_merge1['Medal'].unique()
# %%
summer=olympics_merge1.loc[(olympics_merge1['Year']>1960)&(olympics_merge1['Season']=="Summer"), :]
summer.head(5)
# %%
summer=summer.reset_index()
summer.head(10)
# %%
summer.sample()
# %%
#extracting unique events in a new list
# %%
summerlistunique=summer.Event.unique()
len(summerlistunique)
# %%
summerlistunique
# %%
summer.drop(['Season'],axis=1,inplace=True)
summer.drop(['NOC'],axis=1,inplace=True)
summer.drop(['Games'],axis=1,inplace=True)
summer.drop(['City'],axis=1,inplace=True)
summer.drop(['Year'],axis=1,inplace=True)
summer.drop(['Sport'],axis=1,inplace=True)
summer.drop(['ID'],axis=1,inplace=True)
summer.drop(['Name'],axis=1,inplace=True)
summer.drop(['index'],axis=1,inplace=True)
# %%
summer
# %%
#created a column for encoded team and encoded events in numerical form in original dataset
summer['Team_encode']=le.fit_transform(summer['Team'])
summer['Event_encode']=le.fit_transform(summer['Event'])
# %%
# store the team names and their encoded numerical values in a new csv file (one row per unique team)
TeamKeys=summer[['Team','Team_encode']].copy()
TeamKeys.drop_duplicates(subset="Team",inplace=True)
TeamKeys.to_csv("keystoteam.csv")
# %%
TeamKeys.head(4)
# %%
# store the event names and their encoded numerical values in a new csv file (one row per unique event)
EventKeys=summer[['Event','Event_encode']].copy()
EventKeys.drop_duplicates(subset="Event",inplace=True)
EventKeys.to_csv("keystoevent.csv")
# %%
EventKeys.head(4)
# %%
summer
# %%
summer.drop(['Event'],axis=1,inplace=True)
summer.drop(['Team'],axis=1,inplace=True)
# %%
summer
# %%
y=summer['Medal']
# %%
y
# %%
x=summer.drop("Medal",axis=1)
# %%
x
# %%
X_train, X_test, Y_train, Y_test = train_test_split(x,y,test_size=0.30, random_state=99)
# %%
x
# %%
y
# %%
X_test
# %%
Y_test
# %%
#ALGORITHM 1 LOGISTIC REGRESSION
# %%
lr=LogisticRegression()
lr.fit(X_train,Y_train)
Y_pred=lr.predict(X_test)
sk_report=classification_report(digits=6,y_true=Y_test,y_pred=Y_pred)
print("Accuracy",round(accuracy_score(Y_pred,Y_test)*100,2))
print(sk_report)
print(pd.crosstab(Y_test,Y_pred,rownames=['Actual'],colnames=['Predicted'],margins=True))
# %%
#ALGORITHM 2 DECISION TREE
# %%
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree1 = round(decision_tree.score(X_test, Y_test) * 100, 2)
sk_report = classification_report(digits=6, y_true=Y_test, y_pred=Y_pred)
print("Accuracy", acc_decision_tree1)
print(sk_report)
### Confusion Matrix
print(pd.crosstab(Y_test, Y_pred,rownames=['Actual'],colnames=['Predicted'],margins=True))
# %%
#ALGORITHM 3 RANDOM FOREST
# %%
random_forest = RandomForestClassifier(n_estimators=200)
random_forest.fit(X_train,Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_test, Y_test)
acc_random_forest1=round(random_forest.score(X_test, Y_test)*100,2)
sk_report = classification_report(
    digits=6,
    y_true=Y_test,
    y_pred=Y_pred)
print("Accuracy", acc_random_forest1)
print(sk_report)
pd.crosstab(Y_test, Y_pred,rownames=['Actual'],colnames=['Predicted'],margins=True)
# %%
x.sample(5)
# %%
y.sample(5)
# %%
summer.sample(4)
# %%
random_forest.predict([[1,19.0,173.0,70.0,87,163]])
# %%
import pickle
from joblib import dump, load
dump(random_forest, 'olympics_model.pkl')
model_file = open(r"Projects\Olympics\olympics_model1.pkl", "wb")
pickle.dump(random_forest, model_file)
model_file.close()
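# %%
# Hypothetical quick check (not part of the original notebook): reload the saved
# model with joblib and run a single prediction to confirm the pickle round-trips.
loaded_model = load('olympics_model.pkl')
print(loaded_model.predict([[1, 19.0, 173.0, 70.0, 87, 163]]))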
| [
"MEDAL"
] | Non_BioNLP |
yibinlei/LENS-d8000 | yibinlei | feature-extraction | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"text-embedding",
"sentence-similarity",
"mteb",
"arxiv:2501.09749",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,735,524,756,000 | 2025-01-22T11:24:34 | 193 | 5 | ---
license: apache-2.0
tags:
- text-embedding
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: Gouzi3618/LENS-8000
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 93.6865671641791
- type: ap
value: 74.44778735403261
- type: ap_weighted
value: 74.44778735403261
- type: f1
value: 90.57338628851295
- type: f1_weighted
value: 93.87207694461506
- type: main_score
value: 93.6865671641791
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.06832499999999
- type: ap
value: 95.71019538629211
- type: ap_weighted
value: 95.71019538629211
- type: f1
value: 97.06781792337515
- type: f1_weighted
value: 97.06781792337515
- type: main_score
value: 97.06832499999999
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 63.608
- type: f1
value: 62.41274991021244
- type: f1_weighted
value: 62.41274991021244
- type: main_score
value: 63.608
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 76.019
- type: map_at_1
value: 55.903000000000006
- type: map_at_10
value: 69.887
- type: map_at_100
value: 70.157
- type: map_at_1000
value: 70.159
- type: map_at_20
value: 70.101
- type: map_at_3
value: 67.378
- type: map_at_5
value: 69.138
- type: mrr_at_1
value: 56.899004267425326
- type: mrr_at_10
value: 70.23428503691676
- type: mrr_at_100
value: 70.50477756895107
- type: mrr_at_1000
value: 70.5063694836776
- type: mrr_at_20
value: 70.44906432331086
- type: mrr_at_3
value: 67.73352299668105
- type: mrr_at_5
value: 69.46183025130412
- type: nauc_map_at_1000_diff1
value: 28.369738335000932
- type: nauc_map_at_1000_max
value: -6.46878252914094
- type: nauc_map_at_1000_std
value: -31.433213242739523
- type: nauc_map_at_100_diff1
value: 28.37160281520759
- type: nauc_map_at_100_max
value: -6.463942005621383
- type: nauc_map_at_100_std
value: -31.431652236686336
- type: nauc_map_at_10_diff1
value: 28.30518291942587
- type: nauc_map_at_10_max
value: -6.194974102740169
- type: nauc_map_at_10_std
value: -31.325188430370922
- type: nauc_map_at_1_diff1
value: 31.608647238057447
- type: nauc_map_at_1_max
value: -9.000938880640247
- type: nauc_map_at_1_std
value: -31.850340580223968
- type: nauc_map_at_20_diff1
value: 28.36848638837624
- type: nauc_map_at_20_max
value: -6.412381430978799
- type: nauc_map_at_20_std
value: -31.452685362617505
- type: nauc_map_at_3_diff1
value: 27.95089394680187
- type: nauc_map_at_3_max
value: -6.302015702313729
- type: nauc_map_at_3_std
value: -31.507334020085302
- type: nauc_map_at_5_diff1
value: 27.982348077574986
- type: nauc_map_at_5_max
value: -6.006566315399395
- type: nauc_map_at_5_std
value: -31.34425541540422
- type: nauc_mrr_at_1000_diff1
value: 25.227964866245816
- type: nauc_mrr_at_1000_max
value: -8.133964659261048
- type: nauc_mrr_at_1000_std
value: -31.624647211708368
- type: nauc_mrr_at_100_diff1
value: 25.230047265830933
- type: nauc_mrr_at_100_max
value: -8.128997368626452
- type: nauc_mrr_at_100_std
value: -31.623068694211064
- type: nauc_mrr_at_10_diff1
value: 25.204936229955173
- type: nauc_mrr_at_10_max
value: -7.835563207660743
- type: nauc_mrr_at_10_std
value: -31.513346742425636
- type: nauc_mrr_at_1_diff1
value: 28.89704784792216
- type: nauc_mrr_at_1_max
value: -9.311272900405159
- type: nauc_mrr_at_1_std
value: -32.309921279147936
- type: nauc_mrr_at_20_diff1
value: 25.234339492194795
- type: nauc_mrr_at_20_max
value: -8.07335487193087
- type: nauc_mrr_at_20_std
value: -31.643711223846516
- type: nauc_mrr_at_3_diff1
value: 24.876431359680033
- type: nauc_mrr_at_3_max
value: -8.195519132024183
- type: nauc_mrr_at_3_std
value: -32.11957727976911
- type: nauc_mrr_at_5_diff1
value: 24.88764812242424
- type: nauc_mrr_at_5_max
value: -7.7576769931519465
- type: nauc_mrr_at_5_std
value: -31.564378631881564
- type: nauc_ndcg_at_1000_diff1
value: 28.09257580244486
- type: nauc_ndcg_at_1000_max
value: -5.74562709568006
- type: nauc_ndcg_at_1000_std
value: -30.918202197214672
- type: nauc_ndcg_at_100_diff1
value: 28.134375117688613
- type: nauc_ndcg_at_100_max
value: -5.622192790763758
- type: nauc_ndcg_at_100_std
value: -30.85960292081723
- type: nauc_ndcg_at_10_diff1
value: 27.87869834059295
- type: nauc_ndcg_at_10_max
value: -4.2662724404197725
- type: nauc_ndcg_at_10_std
value: -30.429941458615485
- type: nauc_ndcg_at_1_diff1
value: 31.608647238057447
- type: nauc_ndcg_at_1_max
value: -9.000938880640247
- type: nauc_ndcg_at_1_std
value: -31.850340580223968
- type: nauc_ndcg_at_20_diff1
value: 28.114701479308486
- type: nauc_ndcg_at_20_max
value: -5.185807260199579
- type: nauc_ndcg_at_20_std
value: -30.881592179360815
- type: nauc_ndcg_at_3_diff1
value: 27.090519410510677
- type: nauc_ndcg_at_3_max
value: -4.699103690447523
- type: nauc_ndcg_at_3_std
value: -31.00974723525509
- type: nauc_ndcg_at_5_diff1
value: 27.06577902395562
- type: nauc_ndcg_at_5_max
value: -3.896494379869019
- type: nauc_ndcg_at_5_std
value: -30.595264634140477
- type: nauc_precision_at_1000_diff1
value: 13.625066205876864
- type: nauc_precision_at_1000_max
value: 31.077851886953717
- type: nauc_precision_at_1000_std
value: 47.82408874251543
- type: nauc_precision_at_100_diff1
value: 28.334166321212894
- type: nauc_precision_at_100_max
value: 45.958982731935635
- type: nauc_precision_at_100_std
value: 33.156399537789966
- type: nauc_precision_at_10_diff1
value: 24.44965698632213
- type: nauc_precision_at_10_max
value: 22.187375935245363
- type: nauc_precision_at_10_std
value: -17.084349043862684
- type: nauc_precision_at_1_diff1
value: 31.608647238057447
- type: nauc_precision_at_1_max
value: -9.000938880640247
- type: nauc_precision_at_1_std
value: -31.850340580223968
- type: nauc_precision_at_20_diff1
value: 27.146201764531284
- type: nauc_precision_at_20_max
value: 26.77044396290566
- type: nauc_precision_at_20_std
value: -12.639636692077305
- type: nauc_precision_at_3_diff1
value: 23.662213602558584
- type: nauc_precision_at_3_max
value: 2.466959457953989
- type: nauc_precision_at_3_std
value: -28.691552875980207
- type: nauc_precision_at_5_diff1
value: 21.42559683194896
- type: nauc_precision_at_5_max
value: 10.877697931273545
- type: nauc_precision_at_5_std
value: -25.1444110698694
- type: nauc_recall_at_1000_diff1
value: 13.625066205870539
- type: nauc_recall_at_1000_max
value: 31.077851886952303
- type: nauc_recall_at_1000_std
value: 47.82408874251562
- type: nauc_recall_at_100_diff1
value: 28.33416632120962
- type: nauc_recall_at_100_max
value: 45.958982731932394
- type: nauc_recall_at_100_std
value: 33.15639953779121
- type: nauc_recall_at_10_diff1
value: 24.449656986321795
- type: nauc_recall_at_10_max
value: 22.18737593524522
- type: nauc_recall_at_10_std
value: -17.084349043862865
- type: nauc_recall_at_1_diff1
value: 31.608647238057447
- type: nauc_recall_at_1_max
value: -9.000938880640247
- type: nauc_recall_at_1_std
value: -31.850340580223968
- type: nauc_recall_at_20_diff1
value: 27.146201764531586
- type: nauc_recall_at_20_max
value: 26.770443962904633
- type: nauc_recall_at_20_std
value: -12.639636692076802
- type: nauc_recall_at_3_diff1
value: 23.66221360255874
- type: nauc_recall_at_3_max
value: 2.466959457954044
- type: nauc_recall_at_3_std
value: -28.691552875980115
- type: nauc_recall_at_5_diff1
value: 21.425596831948823
- type: nauc_recall_at_5_max
value: 10.87769793127329
- type: nauc_recall_at_5_std
value: -25.144411069869477
- type: ndcg_at_1
value: 55.903000000000006
- type: ndcg_at_10
value: 76.019
- type: ndcg_at_100
value: 77.102
- type: ndcg_at_1000
value: 77.132
- type: ndcg_at_20
value: 76.77199999999999
- type: ndcg_at_3
value: 71.032
- type: ndcg_at_5
value: 74.22999999999999
- type: precision_at_1
value: 55.903000000000006
- type: precision_at_10
value: 9.488000000000001
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.89
- type: precision_at_3
value: 27.193
- type: precision_at_5
value: 17.881
- type: recall_at_1
value: 55.903000000000006
- type: recall_at_10
value: 94.879
- type: recall_at_100
value: 99.431
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 97.795
- type: recall_at_3
value: 81.57900000000001
- type: recall_at_5
value: 89.403
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 54.809064728970625
- type: v_measure
value: 54.809064728970625
- type: v_measure_std
value: 14.497861425102215
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 50.144159631474416
- type: v_measure
value: 50.144159631474416
- type: v_measure_std
value: 14.596959041091187
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 65.74396432331054
- type: map
value: 65.74396432331054
- type: mrr
value: 77.89418722244206
- type: nAUC_map_diff1
value: 22.172664271824022
- type: nAUC_map_max
value: 22.232980127036896
- type: nAUC_map_std
value: 22.763425465824056
- type: nAUC_mrr_diff1
value: 30.670095862543384
- type: nAUC_mrr_max
value: 34.51981156443003
- type: nAUC_mrr_std
value: 28.863440464092747
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 86.59612727828603
- type: cosine_spearman
value: 85.83087137728063
- type: euclidean_pearson
value: 84.64267159338176
- type: euclidean_spearman
value: 85.83087137728063
- type: main_score
value: 85.83087137728063
- type: manhattan_pearson
value: 85.70909201286793
- type: manhattan_spearman
value: 85.96460936435044
- type: pearson
value: 86.59612727828603
- type: spearman
value: 85.83087137728063
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 90.19155844155846
- type: f1
value: 90.05716678902826
- type: f1_weighted
value: 90.05716678902826
- type: main_score
value: 90.19155844155846
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 52.480294793961924
- type: v_measure
value: 52.480294793961924
- type: v_measure_std
value: 0.5558452294416437
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 48.51901581759115
- type: v_measure
value: 48.51901581759115
- type: v_measure_std
value: 1.1094735884191569
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 57.9
- type: map_at_1
value: 37.412
- type: map_at_10
value: 51.01599999999999
- type: map_at_100
value: 52.61900000000001
- type: map_at_1000
value: 52.708
- type: map_at_20
value: 51.928
- type: map_at_3
value: 46.685
- type: map_at_5
value: 49.105
- type: mrr_at_1
value: 46.20886981402003
- type: mrr_at_10
value: 56.82409110520696
- type: mrr_at_100
value: 57.489735501152694
- type: mrr_at_1000
value: 57.51438904427485
- type: mrr_at_20
value: 57.25804902449886
- type: mrr_at_3
value: 54.10109680495945
- type: mrr_at_5
value: 55.76061039580349
- type: nauc_map_at_1000_diff1
value: 48.063440573038534
- type: nauc_map_at_1000_max
value: 29.62137080113329
- type: nauc_map_at_1000_std
value: -7.188335287046719
- type: nauc_map_at_100_diff1
value: 48.03303227750245
- type: nauc_map_at_100_max
value: 29.639381775583857
- type: nauc_map_at_100_std
value: -7.172565011606355
- type: nauc_map_at_10_diff1
value: 48.21776139122066
- type: nauc_map_at_10_max
value: 29.449600885659034
- type: nauc_map_at_10_std
value: -8.010938259528462
- type: nauc_map_at_1_diff1
value: 53.98616508427959
- type: nauc_map_at_1_max
value: 28.421396682508295
- type: nauc_map_at_1_std
value: -9.213331559605178
- type: nauc_map_at_20_diff1
value: 48.095631780093115
- type: nauc_map_at_20_max
value: 29.642141334979062
- type: nauc_map_at_20_std
value: -7.51476904219371
- type: nauc_map_at_3_diff1
value: 49.59748320566899
- type: nauc_map_at_3_max
value: 30.016923179538963
- type: nauc_map_at_3_std
value: -8.16304276196508
- type: nauc_map_at_5_diff1
value: 48.325705234334265
- type: nauc_map_at_5_max
value: 29.471762864011264
- type: nauc_map_at_5_std
value: -8.078472434819323
- type: nauc_mrr_at_1000_diff1
value: 47.281461366860434
- type: nauc_mrr_at_1000_max
value: 29.170434528128457
- type: nauc_mrr_at_1000_std
value: -5.052368984512577
- type: nauc_mrr_at_100_diff1
value: 47.27462698853231
- type: nauc_mrr_at_100_max
value: 29.16237921324494
- type: nauc_mrr_at_100_std
value: -5.05051358939275
- type: nauc_mrr_at_10_diff1
value: 47.0800916096565
- type: nauc_mrr_at_10_max
value: 29.03719877610173
- type: nauc_mrr_at_10_std
value: -5.304612002269516
- type: nauc_mrr_at_1_diff1
value: 51.41160739624267
- type: nauc_mrr_at_1_max
value: 28.11787682467619
- type: nauc_mrr_at_1_std
value: -7.522348622985506
- type: nauc_mrr_at_20_diff1
value: 47.222491626494694
- type: nauc_mrr_at_20_max
value: 29.16603567629627
- type: nauc_mrr_at_20_std
value: -5.0938790225781965
- type: nauc_mrr_at_3_diff1
value: 47.8980255295472
- type: nauc_mrr_at_3_max
value: 29.426817722551412
- type: nauc_mrr_at_3_std
value: -4.835024752176862
- type: nauc_mrr_at_5_diff1
value: 46.855088450836384
- type: nauc_mrr_at_5_max
value: 29.061188278734924
- type: nauc_mrr_at_5_std
value: -5.105272976547115
- type: nauc_ndcg_at_1000_diff1
value: 46.52907403060708
- type: nauc_ndcg_at_1000_max
value: 29.52787741767834
- type: nauc_ndcg_at_1000_std
value: -5.249071191636342
- type: nauc_ndcg_at_100_diff1
value: 46.10339229034675
- type: nauc_ndcg_at_100_max
value: 29.470257932437093
- type: nauc_ndcg_at_100_std
value: -4.815550238593345
- type: nauc_ndcg_at_10_diff1
value: 46.09171806159629
- type: nauc_ndcg_at_10_max
value: 28.64025004680062
- type: nauc_ndcg_at_10_std
value: -7.353223665565414
- type: nauc_ndcg_at_1_diff1
value: 51.41160739624267
- type: nauc_ndcg_at_1_max
value: 28.11787682467619
- type: nauc_ndcg_at_1_std
value: -7.522348622985506
- type: nauc_ndcg_at_20_diff1
value: 46.114019933699254
- type: nauc_ndcg_at_20_max
value: 29.228583209753488
- type: nauc_ndcg_at_20_std
value: -6.3305330142497525
- type: nauc_ndcg_at_3_diff1
value: 47.78527964345179
- type: nauc_ndcg_at_3_max
value: 29.727257483258168
- type: nauc_ndcg_at_3_std
value: -6.237389676732037
- type: nauc_ndcg_at_5_diff1
value: 46.16322795762912
- type: nauc_ndcg_at_5_max
value: 28.606807002351253
- type: nauc_ndcg_at_5_std
value: -6.622437264978827
- type: nauc_precision_at_1000_diff1
value: -16.297389177846217
- type: nauc_precision_at_1000_max
value: -12.409674840560653
- type: nauc_precision_at_1000_std
value: -0.12469109398383808
- type: nauc_precision_at_100_diff1
value: -14.364258730067526
- type: nauc_precision_at_100_max
value: -6.838162019922614
- type: nauc_precision_at_100_std
value: 6.544810546623128
- type: nauc_precision_at_10_diff1
value: -0.07993029221840925
- type: nauc_precision_at_10_max
value: 3.5475500791160783
- type: nauc_precision_at_10_std
value: 3.1108692240999183
- type: nauc_precision_at_1_diff1
value: 51.41160739624267
- type: nauc_precision_at_1_max
value: 28.11787682467619
- type: nauc_precision_at_1_std
value: -7.522348622985506
- type: nauc_precision_at_20_diff1
value: -7.129194344047243
- type: nauc_precision_at_20_max
value: -0.4411048865245927
- type: nauc_precision_at_20_std
value: 5.189261358791849
- type: nauc_precision_at_3_diff1
value: 22.441075622195807
- type: nauc_precision_at_3_max
value: 20.27527479036517
- type: nauc_precision_at_3_std
value: 0.16769220072881036
- type: nauc_precision_at_5_diff1
value: 9.848861755914708
- type: nauc_precision_at_5_max
value: 12.631891583429253
- type: nauc_precision_at_5_std
value: 3.639000882427384
- type: nauc_recall_at_1000_diff1
value: 33.176182822010176
- type: nauc_recall_at_1000_max
value: 65.2726131334635
- type: nauc_recall_at_1000_std
value: 56.29864501903001
- type: nauc_recall_at_100_diff1
value: 29.803813221418206
- type: nauc_recall_at_100_max
value: 29.77806750516101
- type: nauc_recall_at_100_std
value: 17.665361037732303
- type: nauc_recall_at_10_diff1
value: 37.665789609095555
- type: nauc_recall_at_10_max
value: 24.422387659495364
- type: nauc_recall_at_10_std
value: -9.547366403969868
- type: nauc_recall_at_1_diff1
value: 53.98616508427959
- type: nauc_recall_at_1_max
value: 28.421396682508295
- type: nauc_recall_at_1_std
value: -9.213331559605178
- type: nauc_recall_at_20_diff1
value: 36.331194532498166
- type: nauc_recall_at_20_max
value: 26.556485208983243
- type: nauc_recall_at_20_std
value: -3.8731607447959706
- type: nauc_recall_at_3_diff1
value: 44.08113149764519
- type: nauc_recall_at_3_max
value: 27.82697192360832
- type: nauc_recall_at_3_std
value: -6.88741410272389
- type: nauc_recall_at_5_diff1
value: 38.94274622579766
- type: nauc_recall_at_5_max
value: 25.00694896242997
- type: nauc_recall_at_5_std
value: -7.717305765519367
- type: ndcg_at_1
value: 46.209
- type: ndcg_at_10
value: 57.9
- type: ndcg_at_100
value: 62.897000000000006
- type: ndcg_at_1000
value: 64.067
- type: ndcg_at_20
value: 60.012
- type: ndcg_at_3
value: 52.295
- type: ndcg_at_5
value: 54.925000000000004
- type: precision_at_1
value: 46.209
- type: precision_at_10
value: 11.33
- type: precision_at_100
value: 1.7239999999999998
- type: precision_at_1000
value: 0.209
- type: precision_at_20
value: 6.630999999999999
- type: precision_at_3
value: 25.274
- type: precision_at_5
value: 18.34
- type: recall_at_1
value: 37.412
- type: recall_at_10
value: 70.718
- type: recall_at_100
value: 91.46300000000001
- type: recall_at_1000
value: 98.539
- type: recall_at_20
value: 78.31
- type: recall_at_3
value: 54.764
- type: recall_at_5
value: 62.089000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 55.474000000000004
- type: map_at_1
value: 36.334
- type: map_at_10
value: 49.297000000000004
- type: map_at_100
value: 50.564
- type: map_at_1000
value: 50.684
- type: map_at_20
value: 49.988
- type: map_at_3
value: 45.837
- type: map_at_5
value: 47.833
- type: mrr_at_1
value: 45.796178343949045
- type: mrr_at_10
value: 55.22237387524016
- type: mrr_at_100
value: 55.76503850923861
- type: mrr_at_1000
value: 55.793444257616564
- type: mrr_at_20
value: 55.548186473604574
- type: mrr_at_3
value: 53.22717622080685
- type: mrr_at_5
value: 54.380042462845104
- type: nauc_map_at_1000_diff1
value: 55.10404495025597
- type: nauc_map_at_1000_max
value: 32.61004534688453
- type: nauc_map_at_1000_std
value: -2.6678082362928692
- type: nauc_map_at_100_diff1
value: 55.128048591260956
- type: nauc_map_at_100_max
value: 32.51731646707708
- type: nauc_map_at_100_std
value: -2.7965007216487514
- type: nauc_map_at_10_diff1
value: 55.5608810613739
- type: nauc_map_at_10_max
value: 31.518773632675085
- type: nauc_map_at_10_std
value: -4.3943618266116165
- type: nauc_map_at_1_diff1
value: 59.69008712972796
- type: nauc_map_at_1_max
value: 25.33526717844356
- type: nauc_map_at_1_std
value: -9.820380032557996
- type: nauc_map_at_20_diff1
value: 55.323001924881275
- type: nauc_map_at_20_max
value: 32.04757648616761
- type: nauc_map_at_20_std
value: -3.693324464570495
- type: nauc_map_at_3_diff1
value: 55.8717635218537
- type: nauc_map_at_3_max
value: 29.086778134576736
- type: nauc_map_at_3_std
value: -7.926148581394159
- type: nauc_map_at_5_diff1
value: 55.639303547935
- type: nauc_map_at_5_max
value: 30.891118436069593
- type: nauc_map_at_5_std
value: -5.65819749683715
- type: nauc_mrr_at_1000_diff1
value: 53.968373024890596
- type: nauc_mrr_at_1000_max
value: 37.26534979196835
- type: nauc_mrr_at_1000_std
value: 2.222359206568994
- type: nauc_mrr_at_100_diff1
value: 53.96114912476918
- type: nauc_mrr_at_100_max
value: 37.2634564338904
- type: nauc_mrr_at_100_std
value: 2.2283015996988844
- type: nauc_mrr_at_10_diff1
value: 54.01169590939643
- type: nauc_mrr_at_10_max
value: 37.24663754506656
- type: nauc_mrr_at_10_std
value: 2.1358207367363895
- type: nauc_mrr_at_1_diff1
value: 57.20194071307239
- type: nauc_mrr_at_1_max
value: 35.89072992523686
- type: nauc_mrr_at_1_std
value: -0.42144687001052134
- type: nauc_mrr_at_20_diff1
value: 53.97217495499223
- type: nauc_mrr_at_20_max
value: 37.23723498066669
- type: nauc_mrr_at_20_std
value: 2.127578192551712
- type: nauc_mrr_at_3_diff1
value: 54.24437764334679
- type: nauc_mrr_at_3_max
value: 36.92965247682062
- type: nauc_mrr_at_3_std
value: 1.1841149310399
- type: nauc_mrr_at_5_diff1
value: 54.10119080195681
- type: nauc_mrr_at_5_max
value: 37.318709512631585
- type: nauc_mrr_at_5_std
value: 2.0104630164348247
- type: nauc_ndcg_at_1000_diff1
value: 53.099673365789634
- type: nauc_ndcg_at_1000_max
value: 35.776586763902465
- type: nauc_ndcg_at_1000_std
value: 2.5267901696084625
- type: nauc_ndcg_at_100_diff1
value: 53.05121582412809
- type: nauc_ndcg_at_100_max
value: 35.42381301395885
- type: nauc_ndcg_at_100_std
value: 2.173202428779517
- type: nauc_ndcg_at_10_diff1
value: 53.66976531875284
- type: nauc_ndcg_at_10_max
value: 34.423433411793106
- type: nauc_ndcg_at_10_std
value: -0.3828688658787144
- type: nauc_ndcg_at_1_diff1
value: 57.20194071307239
- type: nauc_ndcg_at_1_max
value: 35.89072992523686
- type: nauc_ndcg_at_1_std
value: -0.42144687001052134
- type: nauc_ndcg_at_20_diff1
value: 53.45916401395555
- type: nauc_ndcg_at_20_max
value: 34.79760292774444
- type: nauc_ndcg_at_20_std
value: 0.09865492684433257
- type: nauc_ndcg_at_3_diff1
value: 53.21280344931687
- type: nauc_ndcg_at_3_max
value: 33.77403946359603
- type: nauc_ndcg_at_3_std
value: -2.694145872016408
- type: nauc_ndcg_at_5_diff1
value: 53.60161231633293
- type: nauc_ndcg_at_5_max
value: 34.67463458105905
- type: nauc_ndcg_at_5_std
value: -1.2144507126036534
- type: nauc_precision_at_1000_diff1
value: -19.41714022423591
- type: nauc_precision_at_1000_max
value: 14.796653167244648
- type: nauc_precision_at_1000_std
value: 31.439574593063828
- type: nauc_precision_at_100_diff1
value: -13.640343270514627
- type: nauc_precision_at_100_max
value: 22.604051814224558
- type: nauc_precision_at_100_std
value: 36.8854211722489
- type: nauc_precision_at_10_diff1
value: 4.96975730436995
- type: nauc_precision_at_10_max
value: 29.38251452161223
- type: nauc_precision_at_10_std
value: 26.203798377861652
- type: nauc_precision_at_1_diff1
value: 57.20194071307239
- type: nauc_precision_at_1_max
value: 35.89072992523686
- type: nauc_precision_at_1_std
value: -0.42144687001052134
- type: nauc_precision_at_20_diff1
value: -3.0909229380961416
- type: nauc_precision_at_20_max
value: 26.759524144713705
- type: nauc_precision_at_20_std
value: 29.633178123950138
- type: nauc_precision_at_3_diff1
value: 23.661125067193385
- type: nauc_precision_at_3_max
value: 33.188961997504165
- type: nauc_precision_at_3_std
value: 11.241330587984603
- type: nauc_precision_at_5_diff1
value: 15.039749548565663
- type: nauc_precision_at_5_max
value: 33.796709111570586
- type: nauc_precision_at_5_std
value: 19.85135158938685
- type: nauc_recall_at_1000_diff1
value: 38.340253410445754
- type: nauc_recall_at_1000_max
value: 45.86204535697534
- type: nauc_recall_at_1000_std
value: 36.88912705996024
- type: nauc_recall_at_100_diff1
value: 41.110929085799505
- type: nauc_recall_at_100_max
value: 37.30794662383378
- type: nauc_recall_at_100_std
value: 19.206734101551437
- type: nauc_recall_at_10_diff1
value: 47.81675191428859
- type: nauc_recall_at_10_max
value: 32.26702858098994
- type: nauc_recall_at_10_std
value: 0.1481071990195931
- type: nauc_recall_at_1_diff1
value: 59.69008712972796
- type: nauc_recall_at_1_max
value: 25.33526717844356
- type: nauc_recall_at_1_std
value: -9.820380032557996
- type: nauc_recall_at_20_diff1
value: 45.534258063751516
- type: nauc_recall_at_20_max
value: 33.57614929241509
- type: nauc_recall_at_20_std
value: 3.091637710855964
- type: nauc_recall_at_3_diff1
value: 50.825869956050475
- type: nauc_recall_at_3_max
value: 28.731976103609174
- type: nauc_recall_at_3_std
value: -8.68941645562105
- type: nauc_recall_at_5_diff1
value: 49.82351445671777
- type: nauc_recall_at_5_max
value: 31.939395224874133
- type: nauc_recall_at_5_std
value: -3.350183872170391
- type: ndcg_at_1
value: 45.796
- type: ndcg_at_10
value: 55.474000000000004
- type: ndcg_at_100
value: 59.238
- type: ndcg_at_1000
value: 60.857000000000006
- type: ndcg_at_20
value: 56.998000000000005
- type: ndcg_at_3
value: 51.339
- type: ndcg_at_5
value: 53.233
- type: precision_at_1
value: 45.796
- type: precision_at_10
value: 10.693999999999999
- type: precision_at_100
value: 1.6049999999999998
- type: precision_at_1000
value: 0.203
- type: precision_at_20
value: 6.131
- type: precision_at_3
value: 25.413999999999998
- type: precision_at_5
value: 17.873
- type: recall_at_1
value: 36.334
- type: recall_at_10
value: 66.05
- type: recall_at_100
value: 81.959
- type: recall_at_1000
value: 91.81700000000001
- type: recall_at_20
value: 71.821
- type: recall_at_3
value: 53.36300000000001
- type: recall_at_5
value: 58.987
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 65.236
- type: map_at_1
value: 45.576
- type: map_at_10
value: 59.288
- type: map_at_100
value: 60.233000000000004
- type: map_at_1000
value: 60.272000000000006
- type: map_at_20
value: 59.885999999999996
- type: map_at_3
value: 55.922000000000004
- type: map_at_5
value: 57.787
- type: mrr_at_1
value: 52.03761755485894
- type: mrr_at_10
value: 62.616534806190096
- type: mrr_at_100
value: 63.14131602974563
- type: mrr_at_1000
value: 63.158062461782635
- type: mrr_at_20
value: 62.95086585828518
- type: mrr_at_3
value: 60.44932079414848
- type: mrr_at_5
value: 61.69696969696983
- type: nauc_map_at_1000_diff1
value: 54.591094530629746
- type: nauc_map_at_1000_max
value: 29.988893768315954
- type: nauc_map_at_1000_std
value: -7.085316154448988
- type: nauc_map_at_100_diff1
value: 54.56965855599467
- type: nauc_map_at_100_max
value: 29.988495744602346
- type: nauc_map_at_100_std
value: -7.086191984888584
- type: nauc_map_at_10_diff1
value: 54.55775377389999
- type: nauc_map_at_10_max
value: 29.435024107949552
- type: nauc_map_at_10_std
value: -8.01734033320305
- type: nauc_map_at_1_diff1
value: 57.64870675697927
- type: nauc_map_at_1_max
value: 21.422796894472086
- type: nauc_map_at_1_std
value: -11.65875019420493
- type: nauc_map_at_20_diff1
value: 54.584118500908254
- type: nauc_map_at_20_max
value: 29.891482535758406
- type: nauc_map_at_20_std
value: -7.2839229521962965
- type: nauc_map_at_3_diff1
value: 55.00309347291383
- type: nauc_map_at_3_max
value: 27.190277265850614
- type: nauc_map_at_3_std
value: -10.011557447122543
- type: nauc_map_at_5_diff1
value: 55.008621125826885
- type: nauc_map_at_5_max
value: 28.867875701065543
- type: nauc_map_at_5_std
value: -8.79615898115619
- type: nauc_mrr_at_1000_diff1
value: 54.46485288814781
- type: nauc_mrr_at_1000_max
value: 30.84329897256859
- type: nauc_mrr_at_1000_std
value: -6.441939303485516
- type: nauc_mrr_at_100_diff1
value: 54.45775938193644
- type: nauc_mrr_at_100_max
value: 30.855901008364484
- type: nauc_mrr_at_100_std
value: -6.425089754501605
- type: nauc_mrr_at_10_diff1
value: 54.34267520192698
- type: nauc_mrr_at_10_max
value: 30.812155588559143
- type: nauc_mrr_at_10_std
value: -6.685848606906253
- type: nauc_mrr_at_1_diff1
value: 56.81495610964844
- type: nauc_mrr_at_1_max
value: 27.418096023602605
- type: nauc_mrr_at_1_std
value: -8.659074749287452
- type: nauc_mrr_at_20_diff1
value: 54.469289850248856
- type: nauc_mrr_at_20_max
value: 30.94454932981508
- type: nauc_mrr_at_20_std
value: -6.383862348447732
- type: nauc_mrr_at_3_diff1
value: 54.10543979115625
- type: nauc_mrr_at_3_max
value: 30.034651205196106
- type: nauc_mrr_at_3_std
value: -7.291860027967431
- type: nauc_mrr_at_5_diff1
value: 54.37870836242998
- type: nauc_mrr_at_5_max
value: 30.602079938954713
- type: nauc_mrr_at_5_std
value: -6.988560773363467
- type: nauc_ndcg_at_1000_diff1
value: 53.917326508870744
- type: nauc_ndcg_at_1000_max
value: 32.38037500887042
- type: nauc_ndcg_at_1000_std
value: -4.237650981255524
- type: nauc_ndcg_at_100_diff1
value: 53.55402469967538
- type: nauc_ndcg_at_100_max
value: 32.644730520893944
- type: nauc_ndcg_at_100_std
value: -3.845653235974425
- type: nauc_ndcg_at_10_diff1
value: 53.37677777681159
- type: nauc_ndcg_at_10_max
value: 31.819151594553517
- type: nauc_ndcg_at_10_std
value: -5.907411205000427
- type: nauc_ndcg_at_1_diff1
value: 56.81495610964844
- type: nauc_ndcg_at_1_max
value: 27.418096023602605
- type: nauc_ndcg_at_1_std
value: -8.659074749287452
- type: nauc_ndcg_at_20_diff1
value: 53.70748371170524
- type: nauc_ndcg_at_20_max
value: 32.83190055712533
- type: nauc_ndcg_at_20_std
value: -4.199486001764711
- type: nauc_ndcg_at_3_diff1
value: 53.74531621452966
- type: nauc_ndcg_at_3_max
value: 29.10348280432317
- type: nauc_ndcg_at_3_std
value: -8.337223236198172
- type: nauc_ndcg_at_5_diff1
value: 54.023400593574635
- type: nauc_ndcg_at_5_max
value: 31.063900271148004
- type: nauc_ndcg_at_5_std
value: -7.192813502916602
- type: nauc_precision_at_1000_diff1
value: -15.072743302222582
- type: nauc_precision_at_1000_max
value: 16.674881008918632
- type: nauc_precision_at_1000_std
value: 26.384382910606103
- type: nauc_precision_at_100_diff1
value: -13.303047988597982
- type: nauc_precision_at_100_max
value: 20.407319129807373
- type: nauc_precision_at_100_std
value: 27.054123197085357
- type: nauc_precision_at_10_diff1
value: 4.393259333945309
- type: nauc_precision_at_10_max
value: 28.66311137381925
- type: nauc_precision_at_10_std
value: 16.152108931717304
- type: nauc_precision_at_1_diff1
value: 56.81495610964844
- type: nauc_precision_at_1_max
value: 27.418096023602605
- type: nauc_precision_at_1_std
value: -8.659074749287452
- type: nauc_precision_at_20_diff1
value: -2.970810506684853
- type: nauc_precision_at_20_max
value: 27.623082834514314
- type: nauc_precision_at_20_std
value: 23.880088669461472
- type: nauc_precision_at_3_diff1
value: 27.02892913083338
- type: nauc_precision_at_3_max
value: 31.287466243455768
- type: nauc_precision_at_3_std
value: 2.2580757582102406
- type: nauc_precision_at_5_diff1
value: 17.414588762460134
- type: nauc_precision_at_5_max
value: 31.9448981361523
- type: nauc_precision_at_5_std
value: 9.538543383867172
- type: nauc_recall_at_1000_diff1
value: 42.53345009244463
- type: nauc_recall_at_1000_max
value: 76.56835130324242
- type: nauc_recall_at_1000_std
value: 76.83818348201466
- type: nauc_recall_at_100_diff1
value: 40.52859414527574
- type: nauc_recall_at_100_max
value: 53.75439716166712
- type: nauc_recall_at_100_std
value: 33.312435323357015
- type: nauc_recall_at_10_diff1
value: 46.80800089133272
- type: nauc_recall_at_10_max
value: 36.58909990918782
- type: nauc_recall_at_10_std
value: -0.7661010510759596
- type: nauc_recall_at_1_diff1
value: 57.64870675697927
- type: nauc_recall_at_1_max
value: 21.422796894472086
- type: nauc_recall_at_1_std
value: -11.65875019420493
- type: nauc_recall_at_20_diff1
value: 47.81282622463479
- type: nauc_recall_at_20_max
value: 44.91166967337363
- type: nauc_recall_at_20_std
value: 11.977322949899486
- type: nauc_recall_at_3_diff1
value: 49.60983921598579
- type: nauc_recall_at_3_max
value: 28.38178625145249
- type: nauc_recall_at_3_std
value: -8.365500494644834
- type: nauc_recall_at_5_diff1
value: 49.27075016589731
- type: nauc_recall_at_5_max
value: 33.016342064689695
- type: nauc_recall_at_5_std
value: -5.860362287397732
- type: ndcg_at_1
value: 52.038
- type: ndcg_at_10
value: 65.236
- type: ndcg_at_100
value: 68.637
- type: ndcg_at_1000
value: 69.303
- type: ndcg_at_20
value: 66.81099999999999
- type: ndcg_at_3
value: 59.996
- type: ndcg_at_5
value: 62.495
- type: precision_at_1
value: 52.038
- type: precision_at_10
value: 10.382
- type: precision_at_100
value: 1.2970000000000002
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 5.705
- type: precision_at_3
value: 26.667
- type: precision_at_5
value: 18.069
- type: recall_at_1
value: 45.576
- type: recall_at_10
value: 79.185
- type: recall_at_100
value: 93.573
- type: recall_at_1000
value: 98.07000000000001
- type: recall_at_20
value: 84.961
- type: recall_at_3
value: 65.359
- type: recall_at_5
value: 71.439
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 43.736999999999995
- type: map_at_1
value: 28.546
- type: map_at_10
value: 38.137
- type: map_at_100
value: 39.263
- type: map_at_1000
value: 39.333
- type: map_at_20
value: 38.76
- type: map_at_3
value: 34.999
- type: map_at_5
value: 36.658
- type: mrr_at_1
value: 30.96045197740113
- type: mrr_at_10
value: 40.24679400950582
- type: mrr_at_100
value: 41.20170348155751
- type: mrr_at_1000
value: 41.24946049435741
- type: mrr_at_20
value: 40.78243957226738
- type: mrr_at_3
value: 37.212806026365335
- type: mrr_at_5
value: 38.89077212806023
- type: nauc_map_at_1000_diff1
value: 41.484591371503484
- type: nauc_map_at_1000_max
value: 21.50243004663615
- type: nauc_map_at_1000_std
value: -8.017452317576112
- type: nauc_map_at_100_diff1
value: 41.486902693627925
- type: nauc_map_at_100_max
value: 21.494056338278302
- type: nauc_map_at_100_std
value: -8.00004534438287
- type: nauc_map_at_10_diff1
value: 41.64814927701782
- type: nauc_map_at_10_max
value: 21.573862290063015
- type: nauc_map_at_10_std
value: -8.235100260610158
- type: nauc_map_at_1_diff1
value: 46.13514097803455
- type: nauc_map_at_1_max
value: 20.61253171422529
- type: nauc_map_at_1_std
value: -10.365708946480401
- type: nauc_map_at_20_diff1
value: 41.38370404915302
- type: nauc_map_at_20_max
value: 21.42742534998
- type: nauc_map_at_20_std
value: -8.088085156235438
- type: nauc_map_at_3_diff1
value: 42.79503821813812
- type: nauc_map_at_3_max
value: 20.244967918014062
- type: nauc_map_at_3_std
value: -9.171223241127391
- type: nauc_map_at_5_diff1
value: 41.91422269256176
- type: nauc_map_at_5_max
value: 20.583433229682665
- type: nauc_map_at_5_std
value: -8.866934970048673
- type: nauc_mrr_at_1000_diff1
value: 39.62671694053298
- type: nauc_mrr_at_1000_max
value: 21.88401226714755
- type: nauc_mrr_at_1000_std
value: -7.176180526322779
- type: nauc_mrr_at_100_diff1
value: 39.61516353293064
- type: nauc_mrr_at_100_max
value: 21.886966236971986
- type: nauc_mrr_at_100_std
value: -7.151952390804118
- type: nauc_mrr_at_10_diff1
value: 39.64238333546438
- type: nauc_mrr_at_10_max
value: 22.003431882070476
- type: nauc_mrr_at_10_std
value: -7.276539138641391
- type: nauc_mrr_at_1_diff1
value: 43.79445969728542
- type: nauc_mrr_at_1_max
value: 21.590287146439792
- type: nauc_mrr_at_1_std
value: -9.232478308821348
- type: nauc_mrr_at_20_diff1
value: 39.460115785051045
- type: nauc_mrr_at_20_max
value: 21.82317653158517
- type: nauc_mrr_at_20_std
value: -7.173780191474672
- type: nauc_mrr_at_3_diff1
value: 40.845749935523855
- type: nauc_mrr_at_3_max
value: 21.25557497894811
- type: nauc_mrr_at_3_std
value: -8.134671456989606
- type: nauc_mrr_at_5_diff1
value: 40.04668455179097
- type: nauc_mrr_at_5_max
value: 21.174875933521385
- type: nauc_mrr_at_5_std
value: -7.560056043392531
- type: nauc_ndcg_at_1000_diff1
value: 39.192798139162015
- type: nauc_ndcg_at_1000_max
value: 22.56987586557656
- type: nauc_ndcg_at_1000_std
value: -5.699048881036237
- type: nauc_ndcg_at_100_diff1
value: 38.892893366046685
- type: nauc_ndcg_at_100_max
value: 22.59297079596014
- type: nauc_ndcg_at_100_std
value: -4.911050695398782
- type: nauc_ndcg_at_10_diff1
value: 39.109181325535104
- type: nauc_ndcg_at_10_max
value: 22.549083623641465
- type: nauc_ndcg_at_10_std
value: -6.515720993495773
- type: nauc_ndcg_at_1_diff1
value: 43.79445969728542
- type: nauc_ndcg_at_1_max
value: 21.590287146439792
- type: nauc_ndcg_at_1_std
value: -9.232478308821348
- type: nauc_ndcg_at_20_diff1
value: 38.04445460468217
- type: nauc_ndcg_at_20_max
value: 22.018331280895342
- type: nauc_ndcg_at_20_std
value: -5.91921667958667
- type: nauc_ndcg_at_3_diff1
value: 41.40837583860769
- type: nauc_ndcg_at_3_max
value: 20.318000786446362
- type: nauc_ndcg_at_3_std
value: -8.618963675041751
- type: nauc_ndcg_at_5_diff1
value: 39.986476367822966
- type: nauc_ndcg_at_5_max
value: 20.37921991980582
- type: nauc_ndcg_at_5_std
value: -7.793460964512847
- type: nauc_precision_at_1000_diff1
value: -12.56710662501719
- type: nauc_precision_at_1000_max
value: 11.8064074291414
- type: nauc_precision_at_1000_std
value: 12.089205501861484
- type: nauc_precision_at_100_diff1
value: 1.5499855867007222
- type: nauc_precision_at_100_max
value: 19.148603969060325
- type: nauc_precision_at_100_std
value: 15.501052231970993
- type: nauc_precision_at_10_diff1
value: 21.82334457516569
- type: nauc_precision_at_10_max
value: 25.835378906965005
- type: nauc_precision_at_10_std
value: 2.0046736634992053
- type: nauc_precision_at_1_diff1
value: 43.79445969728542
- type: nauc_precision_at_1_max
value: 21.590287146439792
- type: nauc_precision_at_1_std
value: -9.232478308821348
- type: nauc_precision_at_20_diff1
value: 12.48842945166953
- type: nauc_precision_at_20_max
value: 21.861822388437083
- type: nauc_precision_at_20_std
value: 5.705678370669422
- type: nauc_precision_at_3_diff1
value: 33.39537201261205
- type: nauc_precision_at_3_max
value: 18.976562303238143
- type: nauc_precision_at_3_std
value: -7.435275203281953
- type: nauc_precision_at_5_diff1
value: 28.43726174109677
- type: nauc_precision_at_5_max
value: 20.920977896361798
- type: nauc_precision_at_5_std
value: -4.037951482252999
- type: nauc_recall_at_1000_diff1
value: 20.690865073299992
- type: nauc_recall_at_1000_max
value: 42.253403995704716
- type: nauc_recall_at_1000_std
value: 30.634549589172703
- type: nauc_recall_at_100_diff1
value: 25.796200083916936
- type: nauc_recall_at_100_max
value: 28.56927784723373
- type: nauc_recall_at_100_std
value: 17.887511121482028
- type: nauc_recall_at_10_diff1
value: 31.379514117055113
- type: nauc_recall_at_10_max
value: 24.78746081248786
- type: nauc_recall_at_10_std
value: -1.7778309031645088
- type: nauc_recall_at_1_diff1
value: 46.13514097803455
- type: nauc_recall_at_1_max
value: 20.61253171422529
- type: nauc_recall_at_1_std
value: -10.365708946480401
- type: nauc_recall_at_20_diff1
value: 25.522451786356303
- type: nauc_recall_at_20_max
value: 22.758785642133077
- type: nauc_recall_at_20_std
value: 1.6166456895638768
- type: nauc_recall_at_3_diff1
value: 38.54896865948788
- type: nauc_recall_at_3_max
value: 18.652652979020072
- type: nauc_recall_at_3_std
value: -7.3380635568841095
- type: nauc_recall_at_5_diff1
value: 34.88937863569462
- type: nauc_recall_at_5_max
value: 18.35968498669004
- type: nauc_recall_at_5_std
value: -5.7914616318660395
- type: ndcg_at_1
value: 30.959999999999997
- type: ndcg_at_10
value: 43.736999999999995
- type: ndcg_at_100
value: 49.082
- type: ndcg_at_1000
value: 50.685
- type: ndcg_at_20
value: 45.86
- type: ndcg_at_3
value: 37.492999999999995
- type: ndcg_at_5
value: 40.402
- type: precision_at_1
value: 30.959999999999997
- type: precision_at_10
value: 6.802
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.116
- type: precision_at_20
value: 3.898
- type: precision_at_3
value: 15.593000000000002
- type: precision_at_5
value: 11.096
- type: recall_at_1
value: 28.546
- type: recall_at_10
value: 59.050999999999995
- type: recall_at_100
value: 83.241
- type: recall_at_1000
value: 95.095
- type: recall_at_20
value: 67.051
- type: recall_at_3
value: 42.295
- type: recall_at_5
value: 49.275999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 38.766
- type: map_at_1
value: 22.194
- type: map_at_10
value: 32.417
- type: map_at_100
value: 33.818
- type: map_at_1000
value: 33.922999999999995
- type: map_at_20
value: 33.226
- type: map_at_3
value: 28.977999999999998
- type: map_at_5
value: 30.930999999999997
- type: mrr_at_1
value: 27.985074626865668
- type: mrr_at_10
value: 37.88122877675116
- type: mrr_at_100
value: 38.80916079132068
- type: mrr_at_1000
value: 38.86407448284336
- type: mrr_at_20
value: 38.41843873672924
- type: mrr_at_3
value: 34.97097844112771
- type: mrr_at_5
value: 36.612769485903804
- type: nauc_map_at_1000_diff1
value: 32.43251259305673
- type: nauc_map_at_1000_max
value: 14.335717288639014
- type: nauc_map_at_1000_std
value: 1.0546236958118171
- type: nauc_map_at_100_diff1
value: 32.41136418823097
- type: nauc_map_at_100_max
value: 14.346920104620562
- type: nauc_map_at_100_std
value: 1.0760027324442962
- type: nauc_map_at_10_diff1
value: 32.4499422045296
- type: nauc_map_at_10_max
value: 14.214118535832487
- type: nauc_map_at_10_std
value: 0.14744704889692675
- type: nauc_map_at_1_diff1
value: 37.338628279209864
- type: nauc_map_at_1_max
value: 13.031485276530988
- type: nauc_map_at_1_std
value: 1.6522144610114782
- type: nauc_map_at_20_diff1
value: 32.35623484386115
- type: nauc_map_at_20_max
value: 14.088490637407086
- type: nauc_map_at_20_std
value: 0.7613317697378984
- type: nauc_map_at_3_diff1
value: 32.109495047884806
- type: nauc_map_at_3_max
value: 12.49603408273382
- type: nauc_map_at_3_std
value: -0.9662075537491266
- type: nauc_map_at_5_diff1
value: 32.342133483081334
- type: nauc_map_at_5_max
value: 13.497689209467884
- type: nauc_map_at_5_std
value: -0.21210283954776055
- type: nauc_mrr_at_1000_diff1
value: 32.96506279038153
- type: nauc_mrr_at_1000_max
value: 16.92822030855206
- type: nauc_mrr_at_1000_std
value: 1.5631601747826316
- type: nauc_mrr_at_100_diff1
value: 32.967178626624964
- type: nauc_mrr_at_100_max
value: 16.94496458132621
- type: nauc_mrr_at_100_std
value: 1.578385873055269
- type: nauc_mrr_at_10_diff1
value: 32.84356946378166
- type: nauc_mrr_at_10_max
value: 17.03620062001555
- type: nauc_mrr_at_10_std
value: 1.3172703628382934
- type: nauc_mrr_at_1_diff1
value: 37.23620301476008
- type: nauc_mrr_at_1_max
value: 15.042760132255706
- type: nauc_mrr_at_1_std
value: 1.3631844854711168
- type: nauc_mrr_at_20_diff1
value: 32.90307896901219
- type: nauc_mrr_at_20_max
value: 16.90677676348705
- type: nauc_mrr_at_20_std
value: 1.3454494708937632
- type: nauc_mrr_at_3_diff1
value: 32.400202492227436
- type: nauc_mrr_at_3_max
value: 15.78418346740702
- type: nauc_mrr_at_3_std
value: 0.13431647296653024
- type: nauc_mrr_at_5_diff1
value: 32.7861104709686
- type: nauc_mrr_at_5_max
value: 16.537867487279414
- type: nauc_mrr_at_5_std
value: 0.8973939129673872
- type: nauc_ndcg_at_1000_diff1
value: 31.278003707732594
- type: nauc_ndcg_at_1000_max
value: 16.369712386488345
- type: nauc_ndcg_at_1000_std
value: 3.801192665485738
- type: nauc_ndcg_at_100_diff1
value: 31.03883864137243
- type: nauc_ndcg_at_100_max
value: 17.13298255370564
- type: nauc_ndcg_at_100_std
value: 4.50459441845583
- type: nauc_ndcg_at_10_diff1
value: 31.121998879356767
- type: nauc_ndcg_at_10_max
value: 16.20699275752119
- type: nauc_ndcg_at_10_std
value: 1.0836290696696915
- type: nauc_ndcg_at_1_diff1
value: 37.23620301476008
- type: nauc_ndcg_at_1_max
value: 15.042760132255706
- type: nauc_ndcg_at_1_std
value: 1.3631844854711168
- type: nauc_ndcg_at_20_diff1
value: 30.969422184620626
- type: nauc_ndcg_at_20_max
value: 15.986351573280091
- type: nauc_ndcg_at_20_std
value: 2.3824234046027906
- type: nauc_ndcg_at_3_diff1
value: 30.7343672295478
- type: nauc_ndcg_at_3_max
value: 13.464154391275335
- type: nauc_ndcg_at_3_std
value: -1.2740019040002273
- type: nauc_ndcg_at_5_diff1
value: 31.196681500333202
- type: nauc_ndcg_at_5_max
value: 14.799926395721405
- type: nauc_ndcg_at_5_std
value: 0.14444465266694606
- type: nauc_precision_at_1000_diff1
value: -0.9357199825874157
- type: nauc_precision_at_1000_max
value: 3.4994742694653027
- type: nauc_precision_at_1000_std
value: 4.257039200788741
- type: nauc_precision_at_100_diff1
value: 5.061693041980213
- type: nauc_precision_at_100_max
value: 12.735903109624006
- type: nauc_precision_at_100_std
value: 11.38007105270252
- type: nauc_precision_at_10_diff1
value: 17.45866245433831
- type: nauc_precision_at_10_max
value: 16.92072631690384
- type: nauc_precision_at_10_std
value: 3.261686492278632
- type: nauc_precision_at_1_diff1
value: 37.23620301476008
- type: nauc_precision_at_1_max
value: 15.042760132255706
- type: nauc_precision_at_1_std
value: 1.3631844854711168
- type: nauc_precision_at_20_diff1
value: 13.376095327297524
- type: nauc_precision_at_20_max
value: 14.704258887537083
- type: nauc_precision_at_20_std
value: 7.4047267893058395
- type: nauc_precision_at_3_diff1
value: 22.446224666772185
- type: nauc_precision_at_3_max
value: 13.622002725403505
- type: nauc_precision_at_3_std
value: -2.488819731632478
- type: nauc_precision_at_5_diff1
value: 21.1751496038234
- type: nauc_precision_at_5_max
value: 15.792936657395753
- type: nauc_precision_at_5_std
value: 0.33435363620460834
- type: nauc_recall_at_1000_diff1
value: 3.1437059937979086
- type: nauc_recall_at_1000_max
value: 21.470870159303246
- type: nauc_recall_at_1000_std
value: 45.70068528661671
- type: nauc_recall_at_100_diff1
value: 20.13544486693584
- type: nauc_recall_at_100_max
value: 26.333993225309104
- type: nauc_recall_at_100_std
value: 24.441591685006866
- type: nauc_recall_at_10_diff1
value: 25.196699930978316
- type: nauc_recall_at_10_max
value: 18.357619567515982
- type: nauc_recall_at_10_std
value: 2.372775123084514
- type: nauc_recall_at_1_diff1
value: 37.338628279209864
- type: nauc_recall_at_1_max
value: 13.031485276530988
- type: nauc_recall_at_1_std
value: 1.6522144610114782
- type: nauc_recall_at_20_diff1
value: 23.584794147101206
- type: nauc_recall_at_20_max
value: 17.554186420673393
- type: nauc_recall_at_20_std
value: 6.80815648167649
- type: nauc_recall_at_3_diff1
value: 25.985231586237965
- type: nauc_recall_at_3_max
value: 11.76324144774928
- type: nauc_recall_at_3_std
value: -2.963222838479749
- type: nauc_recall_at_5_diff1
value: 26.323181884774176
- type: nauc_recall_at_5_max
value: 14.429926567066673
- type: nauc_recall_at_5_std
value: -0.00012675963437829796
- type: ndcg_at_1
value: 27.985
- type: ndcg_at_10
value: 38.766
- type: ndcg_at_100
value: 44.753
- type: ndcg_at_1000
value: 47.038000000000004
- type: ndcg_at_20
value: 41.265
- type: ndcg_at_3
value: 32.879000000000005
- type: ndcg_at_5
value: 35.659
- type: precision_at_1
value: 27.985
- type: precision_at_10
value: 7.301
- type: precision_at_100
value: 1.169
- type: precision_at_1000
value: 0.148
- type: precision_at_20
value: 4.359
- type: precision_at_3
value: 16.211000000000002
- type: precision_at_5
value: 11.816
- type: recall_at_1
value: 22.194
- type: recall_at_10
value: 52.589
- type: recall_at_100
value: 78.062
- type: recall_at_1000
value: 94.074
- type: recall_at_20
value: 61.623000000000005
- type: recall_at_3
value: 36.278
- type: recall_at_5
value: 43.38
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 53.893
- type: map_at_1
value: 35.012
- type: map_at_10
value: 47.613
- type: map_at_100
value: 48.971
- type: map_at_1000
value: 49.063
- type: map_at_20
value: 48.359
- type: map_at_3
value: 44.082
- type: map_at_5
value: 46.21
- type: mrr_at_1
value: 42.54090471607314
- type: mrr_at_10
value: 52.945827031486324
- type: mrr_at_100
value: 53.72281654233849
- type: mrr_at_1000
value: 53.745157751270824
- type: mrr_at_20
value: 53.385817786371135
- type: mrr_at_3
value: 50.59351940968877
- type: mrr_at_5
value: 52.104587744626215
- type: nauc_map_at_1000_diff1
value: 48.33492567946275
- type: nauc_map_at_1000_max
value: 28.433337468551517
- type: nauc_map_at_1000_std
value: -10.190998584209403
- type: nauc_map_at_100_diff1
value: 48.33909629330693
- type: nauc_map_at_100_max
value: 28.419994456722026
- type: nauc_map_at_100_std
value: -10.235081491154807
- type: nauc_map_at_10_diff1
value: 48.53095045914228
- type: nauc_map_at_10_max
value: 28.127796242401004
- type: nauc_map_at_10_std
value: -11.01082028487877
- type: nauc_map_at_1_diff1
value: 53.649108353905916
- type: nauc_map_at_1_max
value: 26.374246280085167
- type: nauc_map_at_1_std
value: -13.139747091527232
- type: nauc_map_at_20_diff1
value: 48.3280172513891
- type: nauc_map_at_20_max
value: 28.305001377760526
- type: nauc_map_at_20_std
value: -10.605331749641165
- type: nauc_map_at_3_diff1
value: 49.153534229465166
- type: nauc_map_at_3_max
value: 26.279118718624357
- type: nauc_map_at_3_std
value: -11.726274525594594
- type: nauc_map_at_5_diff1
value: 48.582263895163095
- type: nauc_map_at_5_max
value: 27.600960424655664
- type: nauc_map_at_5_std
value: -11.388324962006987
- type: nauc_mrr_at_1000_diff1
value: 48.44969791641315
- type: nauc_mrr_at_1000_max
value: 30.766154468635996
- type: nauc_mrr_at_1000_std
value: -7.433159357502868
- type: nauc_mrr_at_100_diff1
value: 48.446996014110056
- type: nauc_mrr_at_100_max
value: 30.77778009976802
- type: nauc_mrr_at_100_std
value: -7.4214844009140455
- type: nauc_mrr_at_10_diff1
value: 48.46437502902085
- type: nauc_mrr_at_10_max
value: 30.644355578544396
- type: nauc_mrr_at_10_std
value: -7.60660588314165
- type: nauc_mrr_at_1_diff1
value: 51.75053280771205
- type: nauc_mrr_at_1_max
value: 32.20326775017326
- type: nauc_mrr_at_1_std
value: -7.751249960234988
- type: nauc_mrr_at_20_diff1
value: 48.44985237814549
- type: nauc_mrr_at_20_max
value: 30.724011978924064
- type: nauc_mrr_at_20_std
value: -7.542446860622669
- type: nauc_mrr_at_3_diff1
value: 48.07189918639105
- type: nauc_mrr_at_3_max
value: 29.49689064034342
- type: nauc_mrr_at_3_std
value: -8.048770832399613
- type: nauc_mrr_at_5_diff1
value: 47.96784830129403
- type: nauc_mrr_at_5_max
value: 30.163178388814504
- type: nauc_mrr_at_5_std
value: -8.273539965534646
- type: nauc_ndcg_at_1000_diff1
value: 47.57997119166584
- type: nauc_ndcg_at_1000_max
value: 30.15265238140928
- type: nauc_ndcg_at_1000_std
value: -7.182719485523506
- type: nauc_ndcg_at_100_diff1
value: 47.47854808504636
- type: nauc_ndcg_at_100_max
value: 30.24869750327265
- type: nauc_ndcg_at_100_std
value: -7.161783027754096
- type: nauc_ndcg_at_10_diff1
value: 47.43160783221118
- type: nauc_ndcg_at_10_max
value: 28.85230033642103
- type: nauc_ndcg_at_10_std
value: -10.188037568324937
- type: nauc_ndcg_at_1_diff1
value: 51.75053280771205
- type: nauc_ndcg_at_1_max
value: 32.20326775017326
- type: nauc_ndcg_at_1_std
value: -7.751249960234988
- type: nauc_ndcg_at_20_diff1
value: 47.12845747675502
- type: nauc_ndcg_at_20_max
value: 29.3292150770514
- type: nauc_ndcg_at_20_std
value: -9.195508753243207
- type: nauc_ndcg_at_3_diff1
value: 47.08761845137898
- type: nauc_ndcg_at_3_max
value: 26.65140420355011
- type: nauc_ndcg_at_3_std
value: -9.875410510080297
- type: nauc_ndcg_at_5_diff1
value: 46.71812623990855
- type: nauc_ndcg_at_5_max
value: 28.019689931762453
- type: nauc_ndcg_at_5_std
value: -10.564763057666365
- type: nauc_precision_at_1000_diff1
value: -18.672244049695788
- type: nauc_precision_at_1000_max
value: 1.5236368393702486
- type: nauc_precision_at_1000_std
value: 19.391348785940522
- type: nauc_precision_at_100_diff1
value: -11.777137115010534
- type: nauc_precision_at_100_max
value: 9.135014085804933
- type: nauc_precision_at_100_std
value: 21.12692896308041
- type: nauc_precision_at_10_diff1
value: 6.844396964136933
- type: nauc_precision_at_10_max
value: 18.55877978458855
- type: nauc_precision_at_10_std
value: 6.671582447880328
- type: nauc_precision_at_1_diff1
value: 51.75053280771205
- type: nauc_precision_at_1_max
value: 32.20326775017326
- type: nauc_precision_at_1_std
value: -7.751249960234988
- type: nauc_precision_at_20_diff1
value: -1.442664564179706
- type: nauc_precision_at_20_max
value: 15.340885744268606
- type: nauc_precision_at_20_std
value: 12.163710009596011
- type: nauc_precision_at_3_diff1
value: 25.728062375098666
- type: nauc_precision_at_3_max
value: 22.47813702523398
- type: nauc_precision_at_3_std
value: -0.4916532868054475
- type: nauc_precision_at_5_diff1
value: 14.927202370778891
- type: nauc_precision_at_5_max
value: 22.01860485431669
- type: nauc_precision_at_5_std
value: 2.4367836826340716
- type: nauc_recall_at_1000_diff1
value: 49.61327701318546
- type: nauc_recall_at_1000_max
value: 55.735183781771255
- type: nauc_recall_at_1000_std
value: 40.86201406256184
- type: nauc_recall_at_100_diff1
value: 40.9808859736081
- type: nauc_recall_at_100_max
value: 37.89714011000189
- type: nauc_recall_at_100_std
value: 13.001425913684672
- type: nauc_recall_at_10_diff1
value: 42.11640791602849
- type: nauc_recall_at_10_max
value: 27.639245759700525
- type: nauc_recall_at_10_std
value: -10.115499540962759
- type: nauc_recall_at_1_diff1
value: 53.649108353905916
- type: nauc_recall_at_1_max
value: 26.374246280085167
- type: nauc_recall_at_1_std
value: -13.139747091527232
- type: nauc_recall_at_20_diff1
value: 40.759415586574164
- type: nauc_recall_at_20_max
value: 28.88475759388743
- type: nauc_recall_at_20_std
value: -6.557196266465641
- type: nauc_recall_at_3_diff1
value: 43.11231831377521
- type: nauc_recall_at_3_max
value: 21.57590234692241
- type: nauc_recall_at_3_std
value: -12.535579716094292
- type: nauc_recall_at_5_diff1
value: 40.624229969140515
- type: nauc_recall_at_5_max
value: 24.406786983369628
- type: nauc_recall_at_5_std
value: -12.993898363073523
- type: ndcg_at_1
value: 42.541000000000004
- type: ndcg_at_10
value: 53.893
- type: ndcg_at_100
value: 59.160000000000004
- type: ndcg_at_1000
value: 60.549
- type: ndcg_at_20
value: 55.943
- type: ndcg_at_3
value: 48.729
- type: ndcg_at_5
value: 51.504000000000005
- type: precision_at_1
value: 42.541000000000004
- type: precision_at_10
value: 9.769
- type: precision_at_100
value: 1.436
- type: precision_at_1000
value: 0.173
- type: precision_at_20
value: 5.606
- type: precision_at_3
value: 23.291999999999998
- type: precision_at_5
value: 16.573999999999998
- type: recall_at_1
value: 35.012
- type: recall_at_10
value: 66.668
- type: recall_at_100
value: 88.43900000000001
- type: recall_at_1000
value: 96.858
- type: recall_at_20
value: 73.741
- type: recall_at_3
value: 52.5
- type: recall_at_5
value: 59.489000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 51.151
- type: map_at_1
value: 31.976
- type: map_at_10
value: 44.372
- type: map_at_100
value: 45.772
- type: map_at_1000
value: 45.86
- type: map_at_20
value: 45.141999999999996
- type: map_at_3
value: 40.384
- type: map_at_5
value: 42.671
- type: mrr_at_1
value: 39.726027397260275
- type: mrr_at_10
value: 50.237008045227206
- type: mrr_at_100
value: 51.00797456759301
- type: mrr_at_1000
value: 51.0438180348488
- type: mrr_at_20
value: 50.692345088762416
- type: mrr_at_3
value: 47.450532724505294
- type: mrr_at_5
value: 48.96879756468792
- type: nauc_map_at_1000_diff1
value: 45.62130810361153
- type: nauc_map_at_1000_max
value: 36.32159976063163
- type: nauc_map_at_1000_std
value: 1.1987591996375244
- type: nauc_map_at_100_diff1
value: 45.59831464844773
- type: nauc_map_at_100_max
value: 36.33392594753103
- type: nauc_map_at_100_std
value: 1.2255994308791645
- type: nauc_map_at_10_diff1
value: 45.5694944718802
- type: nauc_map_at_10_max
value: 35.85529308786782
- type: nauc_map_at_10_std
value: 0.5955808656405205
- type: nauc_map_at_1_diff1
value: 52.006916444402705
- type: nauc_map_at_1_max
value: 31.46971111816068
- type: nauc_map_at_1_std
value: -5.841514911996808
- type: nauc_map_at_20_diff1
value: 45.614991265384134
- type: nauc_map_at_20_max
value: 36.22481627485531
- type: nauc_map_at_20_std
value: 1.0225219860087784
- type: nauc_map_at_3_diff1
value: 45.70591166263063
- type: nauc_map_at_3_max
value: 34.85866695253853
- type: nauc_map_at_3_std
value: -1.3559438227802654
- type: nauc_map_at_5_diff1
value: 45.3371848868359
- type: nauc_map_at_5_max
value: 35.53359157566889
- type: nauc_map_at_5_std
value: -0.050440052849088216
- type: nauc_mrr_at_1000_diff1
value: 45.643404192715344
- type: nauc_mrr_at_1000_max
value: 38.683456197514374
- type: nauc_mrr_at_1000_std
value: 3.6510994600177766
- type: nauc_mrr_at_100_diff1
value: 45.61737222810662
- type: nauc_mrr_at_100_max
value: 38.6898459851549
- type: nauc_mrr_at_100_std
value: 3.6794921137057965
- type: nauc_mrr_at_10_diff1
value: 45.4389518659568
- type: nauc_mrr_at_10_max
value: 38.46663230841972
- type: nauc_mrr_at_10_std
value: 3.4955370353613437
- type: nauc_mrr_at_1_diff1
value: 51.53447970851375
- type: nauc_mrr_at_1_max
value: 36.83777704134797
- type: nauc_mrr_at_1_std
value: -0.20181987547405483
- type: nauc_mrr_at_20_diff1
value: 45.580395710865595
- type: nauc_mrr_at_20_max
value: 38.668743354722075
- type: nauc_mrr_at_20_std
value: 3.6072444777545414
- type: nauc_mrr_at_3_diff1
value: 45.660055750926595
- type: nauc_mrr_at_3_max
value: 38.469178980396656
- type: nauc_mrr_at_3_std
value: 2.691218725900893
- type: nauc_mrr_at_5_diff1
value: 45.634591047835144
- type: nauc_mrr_at_5_max
value: 38.77891185568239
- type: nauc_mrr_at_5_std
value: 3.524025032661579
- type: nauc_ndcg_at_1000_diff1
value: 44.473420323427575
- type: nauc_ndcg_at_1000_max
value: 37.84517773820968
- type: nauc_ndcg_at_1000_std
value: 3.8879967318195567
- type: nauc_ndcg_at_100_diff1
value: 43.93595838190974
- type: nauc_ndcg_at_100_max
value: 38.48760158899655
- type: nauc_ndcg_at_100_std
value: 5.125658250662759
- type: nauc_ndcg_at_10_diff1
value: 43.75007923879972
- type: nauc_ndcg_at_10_max
value: 37.04943122318896
- type: nauc_ndcg_at_10_std
value: 3.0918588572447914
- type: nauc_ndcg_at_1_diff1
value: 51.53447970851375
- type: nauc_ndcg_at_1_max
value: 36.83777704134797
- type: nauc_ndcg_at_1_std
value: -0.20181987547405483
- type: nauc_ndcg_at_20_diff1
value: 43.84917933584441
- type: nauc_ndcg_at_20_max
value: 38.05038974138388
- type: nauc_ndcg_at_20_std
value: 4.080005247163919
- type: nauc_ndcg_at_3_diff1
value: 43.871703221720196
- type: nauc_ndcg_at_3_max
value: 36.37962400914173
- type: nauc_ndcg_at_3_std
value: 1.1380927635098264
- type: nauc_ndcg_at_5_diff1
value: 43.78596218009951
- type: nauc_ndcg_at_5_max
value: 37.071166988881124
- type: nauc_ndcg_at_5_std
value: 2.4211484479113343
- type: nauc_precision_at_1000_diff1
value: -12.063807508710797
- type: nauc_precision_at_1000_max
value: 3.0534600128229705
- type: nauc_precision_at_1000_std
value: 10.713012078723349
- type: nauc_precision_at_100_diff1
value: -5.991867590224487
- type: nauc_precision_at_100_max
value: 11.522954085499421
- type: nauc_precision_at_100_std
value: 16.752135624833205
- type: nauc_precision_at_10_diff1
value: 11.732182015548547
- type: nauc_precision_at_10_max
value: 24.566425753550515
- type: nauc_precision_at_10_std
value: 15.84645732647604
- type: nauc_precision_at_1_diff1
value: 51.53447970851375
- type: nauc_precision_at_1_max
value: 36.83777704134797
- type: nauc_precision_at_1_std
value: -0.20181987547405483
- type: nauc_precision_at_20_diff1
value: 5.035581730073983
- type: nauc_precision_at_20_max
value: 20.532680423336203
- type: nauc_precision_at_20_std
value: 17.21343646990562
- type: nauc_precision_at_3_diff1
value: 26.336385384915346
- type: nauc_precision_at_3_max
value: 34.89706784191639
- type: nauc_precision_at_3_std
value: 10.49473682331338
- type: nauc_precision_at_5_diff1
value: 18.756823355607022
- type: nauc_precision_at_5_max
value: 29.913609784216167
- type: nauc_precision_at_5_std
value: 13.772361907662217
- type: nauc_recall_at_1000_diff1
value: 21.879866503531044
- type: nauc_recall_at_1000_max
value: 37.016810874312554
- type: nauc_recall_at_1000_std
value: 37.022197071130606
- type: nauc_recall_at_100_diff1
value: 28.21066779529965
- type: nauc_recall_at_100_max
value: 45.164115032338664
- type: nauc_recall_at_100_std
value: 31.411584857962232
- type: nauc_recall_at_10_diff1
value: 34.17100873986437
- type: nauc_recall_at_10_max
value: 33.68680443564895
- type: nauc_recall_at_10_std
value: 7.114874526753165
- type: nauc_recall_at_1_diff1
value: 52.006916444402705
- type: nauc_recall_at_1_max
value: 31.46971111816068
- type: nauc_recall_at_1_std
value: -5.841514911996808
- type: nauc_recall_at_20_diff1
value: 33.42327780252708
- type: nauc_recall_at_20_max
value: 38.03171798236118
- type: nauc_recall_at_20_std
value: 12.473384277901243
- type: nauc_recall_at_3_diff1
value: 37.72580392830666
- type: nauc_recall_at_3_max
value: 33.8785403157307
- type: nauc_recall_at_3_std
value: 0.9069329386546272
- type: nauc_recall_at_5_diff1
value: 36.30102493975623
- type: nauc_recall_at_5_max
value: 35.0689681059825
- type: nauc_recall_at_5_std
value: 4.419807746197185
- type: ndcg_at_1
value: 39.726
- type: ndcg_at_10
value: 51.151
- type: ndcg_at_100
value: 56.449000000000005
- type: ndcg_at_1000
value: 58.012
- type: ndcg_at_20
value: 53.315999999999995
- type: ndcg_at_3
value: 45.163
- type: ndcg_at_5
value: 47.899
- type: precision_at_1
value: 39.726
- type: precision_at_10
value: 9.543
- type: precision_at_100
value: 1.417
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_20
value: 5.502
- type: precision_at_3
value: 21.804000000000002
- type: precision_at_5
value: 15.684999999999999
- type: recall_at_1
value: 31.976
- type: recall_at_10
value: 65.243
- type: recall_at_100
value: 87.168
- type: recall_at_1000
value: 97.504
- type: recall_at_20
value: 72.951
- type: recall_at_3
value: 48.254000000000005
- type: recall_at_5
value: 55.595000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 48.669000000000004
- type: ndcg_at_10
value: 48.669000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 41.521
- type: map_at_1
value: 27.765
- type: map_at_10
value: 36.614000000000004
- type: map_at_100
value: 37.817
- type: map_at_1000
value: 37.906
- type: map_at_20
value: 37.313
- type: map_at_3
value: 33.937
- type: map_at_5
value: 35.516
- type: mrr_at_1
value: 30.981595092024538
- type: mrr_at_10
value: 39.44681809329049
- type: mrr_at_100
value: 40.37271039447302
- type: mrr_at_1000
value: 40.434685794294474
- type: mrr_at_20
value: 40.00737117524118
- type: mrr_at_3
value: 36.91206543967281
- type: mrr_at_5
value: 38.39212678936605
- type: nauc_map_at_1000_diff1
value: 52.81391663490261
- type: nauc_map_at_1000_max
value: 29.664172550013774
- type: nauc_map_at_1000_std
value: -2.7414196510762445
- type: nauc_map_at_100_diff1
value: 52.764084197052306
- type: nauc_map_at_100_max
value: 29.647430801923274
- type: nauc_map_at_100_std
value: -2.7255714834919655
- type: nauc_map_at_10_diff1
value: 53.1353567009527
- type: nauc_map_at_10_max
value: 29.553212761206954
- type: nauc_map_at_10_std
value: -3.1409230447808136
- type: nauc_map_at_1_diff1
value: 60.84686852570778
- type: nauc_map_at_1_max
value: 30.4288106949597
- type: nauc_map_at_1_std
value: -7.578528382462288
- type: nauc_map_at_20_diff1
value: 52.88252470124129
- type: nauc_map_at_20_max
value: 29.524183751068307
- type: nauc_map_at_20_std
value: -3.1020875723206927
- type: nauc_map_at_3_diff1
value: 55.097956701125185
- type: nauc_map_at_3_max
value: 30.197490024554707
- type: nauc_map_at_3_std
value: -4.883694538996516
- type: nauc_map_at_5_diff1
value: 53.93889159560893
- type: nauc_map_at_5_max
value: 29.676635020687925
- type: nauc_map_at_5_std
value: -3.3679708283522016
- type: nauc_mrr_at_1000_diff1
value: 52.46704766008962
- type: nauc_mrr_at_1000_max
value: 29.563385256175916
- type: nauc_mrr_at_1000_std
value: -1.296486223268209
- type: nauc_mrr_at_100_diff1
value: 52.42981778069272
- type: nauc_mrr_at_100_max
value: 29.56377822987918
- type: nauc_mrr_at_100_std
value: -1.2762300567936988
- type: nauc_mrr_at_10_diff1
value: 52.55006453907257
- type: nauc_mrr_at_10_max
value: 29.576046278214125
- type: nauc_mrr_at_10_std
value: -1.5535359096219175
- type: nauc_mrr_at_1_diff1
value: 59.08308459352257
- type: nauc_mrr_at_1_max
value: 29.938769552965542
- type: nauc_mrr_at_1_std
value: -3.6474707765933374
- type: nauc_mrr_at_20_diff1
value: 52.40073561812595
- type: nauc_mrr_at_20_max
value: 29.453126465073513
- type: nauc_mrr_at_20_std
value: -1.5311349014705307
- type: nauc_mrr_at_3_diff1
value: 54.0627524284012
- type: nauc_mrr_at_3_max
value: 29.6471651189158
- type: nauc_mrr_at_3_std
value: -3.1605550371803077
- type: nauc_mrr_at_5_diff1
value: 52.9940750094676
- type: nauc_mrr_at_5_max
value: 29.224601903567233
- type: nauc_mrr_at_5_std
value: -1.973807036089598
- type: nauc_ndcg_at_1000_diff1
value: 50.31108553303487
- type: nauc_ndcg_at_1000_max
value: 30.065099576955294
- type: nauc_ndcg_at_1000_std
value: 0.8165367280480663
- type: nauc_ndcg_at_100_diff1
value: 48.954669298701
- type: nauc_ndcg_at_100_max
value: 29.985564650320757
- type: nauc_ndcg_at_100_std
value: 1.442706323905779
- type: nauc_ndcg_at_10_diff1
value: 49.939726171975074
- type: nauc_ndcg_at_10_max
value: 29.038780243968038
- type: nauc_ndcg_at_10_std
value: -1.2301036879077722
- type: nauc_ndcg_at_1_diff1
value: 59.08308459352257
- type: nauc_ndcg_at_1_max
value: 29.938769552965542
- type: nauc_ndcg_at_1_std
value: -3.6474707765933374
- type: nauc_ndcg_at_20_diff1
value: 49.240899070998786
- type: nauc_ndcg_at_20_max
value: 28.846948404378892
- type: nauc_ndcg_at_20_std
value: -0.942645164997025
- type: nauc_ndcg_at_3_diff1
value: 52.779966640966336
- type: nauc_ndcg_at_3_max
value: 29.44335897565144
- type: nauc_ndcg_at_3_std
value: -4.07045432893811
- type: nauc_ndcg_at_5_diff1
value: 51.140081962279204
- type: nauc_ndcg_at_5_max
value: 28.780221435137843
- type: nauc_ndcg_at_5_std
value: -1.9647629237439366
- type: nauc_precision_at_1000_diff1
value: -6.667071013946099
- type: nauc_precision_at_1000_max
value: 4.88937713617606
- type: nauc_precision_at_1000_std
value: 12.197077398297914
- type: nauc_precision_at_100_diff1
value: -1.6271908247032583
- type: nauc_precision_at_100_max
value: 12.09691975180073
- type: nauc_precision_at_100_std
value: 18.43894936485954
- type: nauc_precision_at_10_diff1
value: 21.030543772837955
- type: nauc_precision_at_10_max
value: 17.862245258912697
- type: nauc_precision_at_10_std
value: 10.219782614987436
- type: nauc_precision_at_1_diff1
value: 59.08308459352257
- type: nauc_precision_at_1_max
value: 29.938769552965542
- type: nauc_precision_at_1_std
value: -3.6474707765933374
- type: nauc_precision_at_20_diff1
value: 11.836687933490524
- type: nauc_precision_at_20_max
value: 14.637079306722528
- type: nauc_precision_at_20_std
value: 10.552762967644831
- type: nauc_precision_at_3_diff1
value: 39.63462382612583
- type: nauc_precision_at_3_max
value: 25.01424566918112
- type: nauc_precision_at_3_std
value: 1.6711537034528392
- type: nauc_precision_at_5_diff1
value: 31.04606982791796
- type: nauc_precision_at_5_max
value: 20.557020391430015
- type: nauc_precision_at_5_std
value: 7.872967924046605
- type: nauc_recall_at_1000_diff1
value: 36.404996367121555
- type: nauc_recall_at_1000_max
value: 36.582711053620095
- type: nauc_recall_at_1000_std
value: 47.36968865441867
- type: nauc_recall_at_100_diff1
value: 29.261159461344484
- type: nauc_recall_at_100_max
value: 32.38893628092869
- type: nauc_recall_at_100_std
value: 24.930995926123607
- type: nauc_recall_at_10_diff1
value: 39.51413409423682
- type: nauc_recall_at_10_max
value: 26.592883142970596
- type: nauc_recall_at_10_std
value: 3.41837874566946
- type: nauc_recall_at_1_diff1
value: 60.84686852570778
- type: nauc_recall_at_1_max
value: 30.4288106949597
- type: nauc_recall_at_1_std
value: -7.578528382462288
- type: nauc_recall_at_20_diff1
value: 35.15078785544861
- type: nauc_recall_at_20_max
value: 24.82983217630711
- type: nauc_recall_at_20_std
value: 5.116281941537316
- type: nauc_recall_at_3_diff1
value: 48.911475883980984
- type: nauc_recall_at_3_max
value: 28.502568900649567
- type: nauc_recall_at_3_std
value: -4.418317057071089
- type: nauc_recall_at_5_diff1
value: 44.24824647304154
- type: nauc_recall_at_5_max
value: 26.392262615242974
- type: nauc_recall_at_5_std
value: 0.807270243261763
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 41.521
- type: ndcg_at_100
value: 46.831
- type: ndcg_at_1000
value: 48.983
- type: ndcg_at_20
value: 43.79
- type: ndcg_at_3
value: 36.658
- type: ndcg_at_5
value: 39.151
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.656
- type: precision_at_100
value: 1.009
- type: precision_at_1000
value: 0.127
- type: precision_at_20
value: 3.9190000000000005
- type: precision_at_3
value: 15.848999999999998
- type: precision_at_5
value: 11.166
- type: recall_at_1
value: 27.765
- type: recall_at_10
value: 53.42400000000001
- type: recall_at_100
value: 76.847
- type: recall_at_1000
value: 92.613
- type: recall_at_20
value: 61.973
- type: recall_at_3
value: 40.373
- type: recall_at_5
value: 46.421
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 37.183
- type: map_at_1
value: 22.567999999999998
- type: map_at_10
value: 31.695
- type: map_at_100
value: 32.983000000000004
- type: map_at_1000
value: 33.103
- type: map_at_20
value: 32.415
- type: map_at_3
value: 28.788000000000004
- type: map_at_5
value: 30.447999999999997
- type: mrr_at_1
value: 27.35719201651755
- type: mrr_at_10
value: 36.02016626792957
- type: mrr_at_100
value: 36.98001375170261
- type: mrr_at_1000
value: 37.047243773430516
- type: mrr_at_20
value: 36.59436259474964
- type: mrr_at_3
value: 33.56847900894709
- type: mrr_at_5
value: 35.008602890571375
- type: nauc_map_at_1000_diff1
value: 38.31962882173507
- type: nauc_map_at_1000_max
value: 20.234054345740052
- type: nauc_map_at_1000_std
value: 0.7148544253591265
- type: nauc_map_at_100_diff1
value: 38.30158981561181
- type: nauc_map_at_100_max
value: 20.25583133514947
- type: nauc_map_at_100_std
value: 0.7067333640571242
- type: nauc_map_at_10_diff1
value: 38.399246886273495
- type: nauc_map_at_10_max
value: 20.138790038401027
- type: nauc_map_at_10_std
value: 0.22555232002920708
- type: nauc_map_at_1_diff1
value: 42.478984768354664
- type: nauc_map_at_1_max
value: 17.347554652757935
- type: nauc_map_at_1_std
value: -1.7479543132075324
- type: nauc_map_at_20_diff1
value: 38.29285220008463
- type: nauc_map_at_20_max
value: 20.24494800952884
- type: nauc_map_at_20_std
value: 0.44773706907718713
- type: nauc_map_at_3_diff1
value: 38.9344151744796
- type: nauc_map_at_3_max
value: 19.34711920281405
- type: nauc_map_at_3_std
value: -0.5518134498464485
- type: nauc_map_at_5_diff1
value: 38.6869751352327
- type: nauc_map_at_5_max
value: 19.89431319427727
- type: nauc_map_at_5_std
value: -0.36906720632055906
- type: nauc_mrr_at_1000_diff1
value: 39.110035055268085
- type: nauc_mrr_at_1000_max
value: 20.30218201303233
- type: nauc_mrr_at_1000_std
value: 0.9892682538038587
- type: nauc_mrr_at_100_diff1
value: 39.09521676323142
- type: nauc_mrr_at_100_max
value: 20.303440926608825
- type: nauc_mrr_at_100_std
value: 0.9904688660728255
- type: nauc_mrr_at_10_diff1
value: 39.12003544583391
- type: nauc_mrr_at_10_max
value: 20.19465899845882
- type: nauc_mrr_at_10_std
value: 0.6654599149147801
- type: nauc_mrr_at_1_diff1
value: 42.92018294879498
- type: nauc_mrr_at_1_max
value: 18.079509143646572
- type: nauc_mrr_at_1_std
value: -0.7823014353906856
- type: nauc_mrr_at_20_diff1
value: 39.07506626859272
- type: nauc_mrr_at_20_max
value: 20.294506646852366
- type: nauc_mrr_at_20_std
value: 0.864333481006957
- type: nauc_mrr_at_3_diff1
value: 39.656311554576554
- type: nauc_mrr_at_3_max
value: 20.040146586295066
- type: nauc_mrr_at_3_std
value: 0.42910721174052
- type: nauc_mrr_at_5_diff1
value: 39.37561835510038
- type: nauc_mrr_at_5_max
value: 20.30007258049178
- type: nauc_mrr_at_5_std
value: 0.3371598067963826
- type: nauc_ndcg_at_1000_diff1
value: 37.273483901039924
- type: nauc_ndcg_at_1000_max
value: 21.410335996289184
- type: nauc_ndcg_at_1000_std
value: 3.288797871787962
- type: nauc_ndcg_at_100_diff1
value: 36.721073992334716
- type: nauc_ndcg_at_100_max
value: 21.68476210840317
- type: nauc_ndcg_at_100_std
value: 3.6548159650634346
- type: nauc_ndcg_at_10_diff1
value: 37.02792318514876
- type: nauc_ndcg_at_10_max
value: 21.063801347135968
- type: nauc_ndcg_at_10_std
value: 1.3595117069193776
- type: nauc_ndcg_at_1_diff1
value: 42.92018294879498
- type: nauc_ndcg_at_1_max
value: 18.079509143646572
- type: nauc_ndcg_at_1_std
value: -0.7823014353906856
- type: nauc_ndcg_at_20_diff1
value: 36.64918521516747
- type: nauc_ndcg_at_20_max
value: 21.460785913566458
- type: nauc_ndcg_at_20_std
value: 2.045078360541621
- type: nauc_ndcg_at_3_diff1
value: 38.180105254457445
- type: nauc_ndcg_at_3_max
value: 19.960996544401652
- type: nauc_ndcg_at_3_std
value: 0.22368956683777577
- type: nauc_ndcg_at_5_diff1
value: 37.681459156861266
- type: nauc_ndcg_at_5_max
value: 20.785307023225368
- type: nauc_ndcg_at_5_std
value: 0.3497228437243125
- type: nauc_precision_at_1000_diff1
value: -1.2945289411670948
- type: nauc_precision_at_1000_max
value: -2.051700713176913
- type: nauc_precision_at_1000_std
value: 7.697437265897111
- type: nauc_precision_at_100_diff1
value: 7.812054547548337
- type: nauc_precision_at_100_max
value: 9.140769013638478
- type: nauc_precision_at_100_std
value: 13.747018748295087
- type: nauc_precision_at_10_diff1
value: 21.712807144964266
- type: nauc_precision_at_10_max
value: 17.77356368869009
- type: nauc_precision_at_10_std
value: 6.966715221940607
- type: nauc_precision_at_1_diff1
value: 42.92018294879498
- type: nauc_precision_at_1_max
value: 18.079509143646572
- type: nauc_precision_at_1_std
value: -0.7823014353906856
- type: nauc_precision_at_20_diff1
value: 16.678807953635093
- type: nauc_precision_at_20_max
value: 16.13357637806647
- type: nauc_precision_at_20_std
value: 8.700523556896268
- type: nauc_precision_at_3_diff1
value: 31.32504900731501
- type: nauc_precision_at_3_max
value: 20.433892175372574
- type: nauc_precision_at_3_std
value: 3.2525265169941084
- type: nauc_precision_at_5_diff1
value: 26.847074585158044
- type: nauc_precision_at_5_max
value: 20.1621052968339
- type: nauc_precision_at_5_std
value: 4.403637252000099
- type: nauc_recall_at_1000_diff1
value: 25.369437454795012
- type: nauc_recall_at_1000_max
value: 31.068433952292608
- type: nauc_recall_at_1000_std
value: 30.586342412567408
- type: nauc_recall_at_100_diff1
value: 25.723465878178626
- type: nauc_recall_at_100_max
value: 26.521689460995844
- type: nauc_recall_at_100_std
value: 18.37373415496336
- type: nauc_recall_at_10_diff1
value: 30.59347861757255
- type: nauc_recall_at_10_max
value: 22.44330123588809
- type: nauc_recall_at_10_std
value: 3.3327269096563805
- type: nauc_recall_at_1_diff1
value: 42.478984768354664
- type: nauc_recall_at_1_max
value: 17.347554652757935
- type: nauc_recall_at_1_std
value: -1.7479543132075324
- type: nauc_recall_at_20_diff1
value: 28.37578852965159
- type: nauc_recall_at_20_max
value: 23.820034600059103
- type: nauc_recall_at_20_std
value: 6.083064353955198
- type: nauc_recall_at_3_diff1
value: 34.37897888758168
- type: nauc_recall_at_3_max
value: 20.488375032732815
- type: nauc_recall_at_3_std
value: 0.5627038554835839
- type: nauc_recall_at_5_diff1
value: 32.87036719646904
- type: nauc_recall_at_5_max
value: 21.797900752145853
- type: nauc_recall_at_5_std
value: 0.7621811262561744
- type: ndcg_at_1
value: 27.357
- type: ndcg_at_10
value: 37.183
- type: ndcg_at_100
value: 42.852000000000004
- type: ndcg_at_1000
value: 45.318999999999996
- type: ndcg_at_20
value: 39.425
- type: ndcg_at_3
value: 32.302
- type: ndcg_at_5
value: 34.705999999999996
- type: precision_at_1
value: 27.357
- type: precision_at_10
value: 6.7860000000000005
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.152
- type: precision_at_20
value: 4.062
- type: precision_at_3
value: 15.325
- type: precision_at_5
value: 11.129
- type: recall_at_1
value: 22.567999999999998
- type: recall_at_10
value: 49.085
- type: recall_at_100
value: 74.048
- type: recall_at_1000
value: 91.095
- type: recall_at_20
value: 57.303000000000004
- type: recall_at_3
value: 35.522999999999996
- type: recall_at_5
value: 41.746
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 51.396
- type: map_at_1
value: 34.811
- type: map_at_10
value: 45.659
- type: map_at_100
value: 46.976
- type: map_at_1000
value: 47.051
- type: map_at_20
value: 46.455
- type: map_at_3
value: 42.501
- type: map_at_5
value: 44.105
- type: mrr_at_1
value: 41.04477611940299
- type: mrr_at_10
value: 50.14592217484003
- type: mrr_at_100
value: 51.01085089250951
- type: mrr_at_1000
value: 51.046980740738725
- type: mrr_at_20
value: 50.712492575431334
- type: mrr_at_3
value: 47.7300995024875
- type: mrr_at_5
value: 48.94745024875615
- type: nauc_map_at_1000_diff1
value: 49.41972290901348
- type: nauc_map_at_1000_max
value: 30.418595984949516
- type: nauc_map_at_1000_std
value: -6.134402507069521
- type: nauc_map_at_100_diff1
value: 49.423634902558774
- type: nauc_map_at_100_max
value: 30.421983422706294
- type: nauc_map_at_100_std
value: -6.122990042471123
- type: nauc_map_at_10_diff1
value: 49.39514428586809
- type: nauc_map_at_10_max
value: 30.067750182517344
- type: nauc_map_at_10_std
value: -6.490781627285011
- type: nauc_map_at_1_diff1
value: 53.726930079027355
- type: nauc_map_at_1_max
value: 31.564271750054214
- type: nauc_map_at_1_std
value: -6.207203206377229
- type: nauc_map_at_20_diff1
value: 49.49428675089494
- type: nauc_map_at_20_max
value: 30.346830038902567
- type: nauc_map_at_20_std
value: -6.384581137155837
- type: nauc_map_at_3_diff1
value: 50.53398121184611
- type: nauc_map_at_3_max
value: 29.520136815187293
- type: nauc_map_at_3_std
value: -6.4862272997890065
- type: nauc_map_at_5_diff1
value: 49.918539885809444
- type: nauc_map_at_5_max
value: 29.881720947964986
- type: nauc_map_at_5_std
value: -6.1957266387111805
- type: nauc_mrr_at_1000_diff1
value: 49.045872943097876
- type: nauc_mrr_at_1000_max
value: 31.44086682478966
- type: nauc_mrr_at_1000_std
value: -5.294155484356357
- type: nauc_mrr_at_100_diff1
value: 49.02549363550889
- type: nauc_mrr_at_100_max
value: 31.425821773796304
- type: nauc_mrr_at_100_std
value: -5.290378070949134
- type: nauc_mrr_at_10_diff1
value: 48.79661575148535
- type: nauc_mrr_at_10_max
value: 31.130959589772296
- type: nauc_mrr_at_10_std
value: -5.623844633492856
- type: nauc_mrr_at_1_diff1
value: 52.90089377128913
- type: nauc_mrr_at_1_max
value: 32.40774032873656
- type: nauc_mrr_at_1_std
value: -6.0713885032931385
- type: nauc_mrr_at_20_diff1
value: 49.06030215791546
- type: nauc_mrr_at_20_max
value: 31.42717911738813
- type: nauc_mrr_at_20_std
value: -5.371125220467266
- type: nauc_mrr_at_3_diff1
value: 49.34651355599406
- type: nauc_mrr_at_3_max
value: 31.079367157632287
- type: nauc_mrr_at_3_std
value: -5.688689732352166
- type: nauc_mrr_at_5_diff1
value: 49.11887000542802
- type: nauc_mrr_at_5_max
value: 31.22370488379167
- type: nauc_mrr_at_5_std
value: -5.262549801076311
- type: nauc_ndcg_at_1000_diff1
value: 48.17029504156311
- type: nauc_ndcg_at_1000_max
value: 31.0071055656938
- type: nauc_ndcg_at_1000_std
value: -4.964563322138795
- type: nauc_ndcg_at_100_diff1
value: 47.75612009818361
- type: nauc_ndcg_at_100_max
value: 30.95429083420626
- type: nauc_ndcg_at_100_std
value: -4.3878208586863305
- type: nauc_ndcg_at_10_diff1
value: 47.4742202498153
- type: nauc_ndcg_at_10_max
value: 29.743145353732338
- type: nauc_ndcg_at_10_std
value: -6.567730922963033
- type: nauc_ndcg_at_1_diff1
value: 52.90089377128913
- type: nauc_ndcg_at_1_max
value: 32.40774032873656
- type: nauc_ndcg_at_1_std
value: -6.0713885032931385
- type: nauc_ndcg_at_20_diff1
value: 48.15976773981052
- type: nauc_ndcg_at_20_max
value: 30.720239716788537
- type: nauc_ndcg_at_20_std
value: -5.915046628959949
- type: nauc_ndcg_at_3_diff1
value: 48.77714679523068
- type: nauc_ndcg_at_3_max
value: 29.226005157792283
- type: nauc_ndcg_at_3_std
value: -6.4435406187140885
- type: nauc_ndcg_at_5_diff1
value: 48.297650732431784
- type: nauc_ndcg_at_5_max
value: 29.534042779795026
- type: nauc_ndcg_at_5_std
value: -5.949674263097888
- type: nauc_precision_at_1000_diff1
value: -18.247129854487877
- type: nauc_precision_at_1000_max
value: -6.022292074806939
- type: nauc_precision_at_1000_std
value: -1.021725353550691
- type: nauc_precision_at_100_diff1
value: -9.138050121076688
- type: nauc_precision_at_100_max
value: 3.997695077574597
- type: nauc_precision_at_100_std
value: 4.742972800203224
- type: nauc_precision_at_10_diff1
value: 13.194684490713202
- type: nauc_precision_at_10_max
value: 15.840940731793271
- type: nauc_precision_at_10_std
value: -6.051512441226457
- type: nauc_precision_at_1_diff1
value: 52.90089377128913
- type: nauc_precision_at_1_max
value: 32.40774032873656
- type: nauc_precision_at_1_std
value: -6.0713885032931385
- type: nauc_precision_at_20_diff1
value: 6.899839868403575
- type: nauc_precision_at_20_max
value: 13.891871886179638
- type: nauc_precision_at_20_std
value: -3.3290467352585367
- type: nauc_precision_at_3_diff1
value: 31.976884237970864
- type: nauc_precision_at_3_max
value: 21.50815377729393
- type: nauc_precision_at_3_std
value: -6.096414234677205
- type: nauc_precision_at_5_diff1
value: 23.971358558972202
- type: nauc_precision_at_5_max
value: 20.011127948852653
- type: nauc_precision_at_5_std
value: -4.670600588714178
- type: nauc_recall_at_1000_diff1
value: 44.51341582665117
- type: nauc_recall_at_1000_max
value: 45.529346087013685
- type: nauc_recall_at_1000_std
value: 29.12832024115422
- type: nauc_recall_at_100_diff1
value: 36.31817172758161
- type: nauc_recall_at_100_max
value: 31.24221823704618
- type: nauc_recall_at_100_std
value: 11.578251243360445
- type: nauc_recall_at_10_diff1
value: 39.50364422633259
- type: nauc_recall_at_10_max
value: 25.181494495474965
- type: nauc_recall_at_10_std
value: -7.337961863786262
- type: nauc_recall_at_1_diff1
value: 53.726930079027355
- type: nauc_recall_at_1_max
value: 31.564271750054214
- type: nauc_recall_at_1_std
value: -6.207203206377229
- type: nauc_recall_at_20_diff1
value: 41.474813670401616
- type: nauc_recall_at_20_max
value: 28.543585723893557
- type: nauc_recall_at_20_std
value: -4.765052709529466
- type: nauc_recall_at_3_diff1
value: 45.76783251087914
- type: nauc_recall_at_3_max
value: 26.023872990619722
- type: nauc_recall_at_3_std
value: -6.413853244730829
- type: nauc_recall_at_5_diff1
value: 43.5601470678474
- type: nauc_recall_at_5_max
value: 26.339112651956693
- type: nauc_recall_at_5_std
value: -4.714784586679173
- type: ndcg_at_1
value: 41.045
- type: ndcg_at_10
value: 51.396
- type: ndcg_at_100
value: 56.784
- type: ndcg_at_1000
value: 58.30800000000001
- type: ndcg_at_20
value: 53.835
- type: ndcg_at_3
value: 46.269
- type: ndcg_at_5
value: 48.318
- type: precision_at_1
value: 41.045
- type: precision_at_10
value: 8.674999999999999
- type: precision_at_100
value: 1.258
- type: precision_at_1000
value: 0.149
- type: precision_at_20
value: 5.009
- type: precision_at_3
value: 20.958
- type: precision_at_5
value: 14.366000000000001
- type: recall_at_1
value: 34.811
- type: recall_at_10
value: 64.054
- type: recall_at_100
value: 86.707
- type: recall_at_1000
value: 96.95
- type: recall_at_20
value: 72.879
- type: recall_at_3
value: 49.833
- type: recall_at_5
value: 55.145999999999994
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 48.229
- type: map_at_1
value: 31.906000000000002
- type: map_at_10
value: 42.254999999999995
- type: map_at_100
value: 44.213
- type: map_at_1000
value: 44.439
- type: map_at_20
value: 43.248
- type: map_at_3
value: 38.86
- type: map_at_5
value: 41.141
- type: mrr_at_1
value: 38.73517786561265
- type: mrr_at_10
value: 46.97024593763724
- type: mrr_at_100
value: 47.94481042970731
- type: mrr_at_1000
value: 47.985760758872495
- type: mrr_at_20
value: 47.53700953359998
- type: mrr_at_3
value: 44.36758893280635
- type: mrr_at_5
value: 46.12648221343875
- type: nauc_map_at_1000_diff1
value: 47.495088997816524
- type: nauc_map_at_1000_max
value: 19.671904439587365
- type: nauc_map_at_1000_std
value: -1.3627808016020522
- type: nauc_map_at_100_diff1
value: 47.43920373673026
- type: nauc_map_at_100_max
value: 19.867353334267776
- type: nauc_map_at_100_std
value: -1.761385629643101
- type: nauc_map_at_10_diff1
value: 47.15528348684556
- type: nauc_map_at_10_max
value: 19.676194850934312
- type: nauc_map_at_10_std
value: -3.6727807703472592
- type: nauc_map_at_1_diff1
value: 52.99557955883872
- type: nauc_map_at_1_max
value: 19.088249896716704
- type: nauc_map_at_1_std
value: -7.249973708167082
- type: nauc_map_at_20_diff1
value: 46.86356518180594
- type: nauc_map_at_20_max
value: 19.70347315131642
- type: nauc_map_at_20_std
value: -2.6720920366999374
- type: nauc_map_at_3_diff1
value: 47.07329819024913
- type: nauc_map_at_3_max
value: 18.945688932686426
- type: nauc_map_at_3_std
value: -5.3156839236392734
- type: nauc_map_at_5_diff1
value: 47.11053199684523
- type: nauc_map_at_5_max
value: 19.847890147729544
- type: nauc_map_at_5_std
value: -4.16500079949399
- type: nauc_mrr_at_1000_diff1
value: 48.41311022522712
- type: nauc_mrr_at_1000_max
value: 20.99590958422922
- type: nauc_mrr_at_1000_std
value: 0.2576393823165339
- type: nauc_mrr_at_100_diff1
value: 48.404798258835335
- type: nauc_mrr_at_100_max
value: 21.004559810198657
- type: nauc_mrr_at_100_std
value: 0.2676523816417369
- type: nauc_mrr_at_10_diff1
value: 48.475983871350884
- type: nauc_mrr_at_10_max
value: 21.11595664710079
- type: nauc_mrr_at_10_std
value: -0.1661763911556166
- type: nauc_mrr_at_1_diff1
value: 51.122302952336426
- type: nauc_mrr_at_1_max
value: 21.94116830220702
- type: nauc_mrr_at_1_std
value: -0.39465620507429194
- type: nauc_mrr_at_20_diff1
value: 48.29705556987262
- type: nauc_mrr_at_20_max
value: 20.968524188691408
- type: nauc_mrr_at_20_std
value: 0.28692937673491176
- type: nauc_mrr_at_3_diff1
value: 48.290151351112456
- type: nauc_mrr_at_3_max
value: 20.799391932160795
- type: nauc_mrr_at_3_std
value: -0.4049152481916527
- type: nauc_mrr_at_5_diff1
value: 48.620266848277524
- type: nauc_mrr_at_5_max
value: 20.828881801522147
- type: nauc_mrr_at_5_std
value: -0.06261283231981098
- type: nauc_ndcg_at_1000_diff1
value: 46.805728708415245
- type: nauc_ndcg_at_1000_max
value: 21.22113318417983
- type: nauc_ndcg_at_1000_std
value: 1.7659499118079376
- type: nauc_ndcg_at_100_diff1
value: 46.396587447695026
- type: nauc_ndcg_at_100_max
value: 21.229178416892807
- type: nauc_ndcg_at_100_std
value: 1.8230824931003011
- type: nauc_ndcg_at_10_diff1
value: 47.3897122791697
- type: nauc_ndcg_at_10_max
value: 19.649557338197745
- type: nauc_ndcg_at_10_std
value: -0.7257825152815804
- type: nauc_ndcg_at_1_diff1
value: 51.122302952336426
- type: nauc_ndcg_at_1_max
value: 21.94116830220702
- type: nauc_ndcg_at_1_std
value: -0.39465620507429194
- type: nauc_ndcg_at_20_diff1
value: 45.99045587435792
- type: nauc_ndcg_at_20_max
value: 19.464646365341085
- type: nauc_ndcg_at_20_std
value: 0.6094769668002494
- type: nauc_ndcg_at_3_diff1
value: 46.89854380329431
- type: nauc_ndcg_at_3_max
value: 19.771868883199687
- type: nauc_ndcg_at_3_std
value: -0.7566375331567338
- type: nauc_ndcg_at_5_diff1
value: 47.229821387349084
- type: nauc_ndcg_at_5_max
value: 20.448622864740855
- type: nauc_ndcg_at_5_std
value: -0.23214798197661607
- type: nauc_precision_at_1000_diff1
value: 2.456615752810366
- type: nauc_precision_at_1000_max
value: -13.30502314825911
- type: nauc_precision_at_1000_std
value: 35.76718839634999
- type: nauc_precision_at_100_diff1
value: 8.348492928163296
- type: nauc_precision_at_100_max
value: -3.525258460847986
- type: nauc_precision_at_100_std
value: 34.346913879335645
- type: nauc_precision_at_10_diff1
value: 19.647438939981345
- type: nauc_precision_at_10_max
value: 14.015118295827495
- type: nauc_precision_at_10_std
value: 19.339560320323752
- type: nauc_precision_at_1_diff1
value: 51.122302952336426
- type: nauc_precision_at_1_max
value: 21.94116830220702
- type: nauc_precision_at_1_std
value: -0.39465620507429194
- type: nauc_precision_at_20_diff1
value: 11.698037110773422
- type: nauc_precision_at_20_max
value: 8.098620556244079
- type: nauc_precision_at_20_std
value: 26.836939281263955
- type: nauc_precision_at_3_diff1
value: 28.78809807347473
- type: nauc_precision_at_3_max
value: 18.606067870867726
- type: nauc_precision_at_3_std
value: 8.767601891307244
- type: nauc_precision_at_5_diff1
value: 22.42088373252247
- type: nauc_precision_at_5_max
value: 19.879281993213862
- type: nauc_precision_at_5_std
value: 14.420290896413016
- type: nauc_recall_at_1000_diff1
value: 14.48477604400231
- type: nauc_recall_at_1000_max
value: 61.78599463752011
- type: nauc_recall_at_1000_std
value: 72.54674171780393
- type: nauc_recall_at_100_diff1
value: 33.07453957533662
- type: nauc_recall_at_100_max
value: 27.512780682283715
- type: nauc_recall_at_100_std
value: 23.849561289021807
- type: nauc_recall_at_10_diff1
value: 40.575857558055326
- type: nauc_recall_at_10_max
value: 17.11800549938132
- type: nauc_recall_at_10_std
value: -1.4248798596283825
- type: nauc_recall_at_1_diff1
value: 52.99557955883872
- type: nauc_recall_at_1_max
value: 19.088249896716704
- type: nauc_recall_at_1_std
value: -7.249973708167082
- type: nauc_recall_at_20_diff1
value: 33.99863438013119
- type: nauc_recall_at_20_max
value: 15.803057360456933
- type: nauc_recall_at_20_std
value: 4.961930222488322
- type: nauc_recall_at_3_diff1
value: 42.237054496246145
- type: nauc_recall_at_3_max
value: 16.840697278848705
- type: nauc_recall_at_3_std
value: -4.126209346414736
- type: nauc_recall_at_5_diff1
value: 42.10776567509297
- type: nauc_recall_at_5_max
value: 17.8575070365274
- type: nauc_recall_at_5_std
value: -1.4236170271745745
- type: ndcg_at_1
value: 38.735
- type: ndcg_at_10
value: 48.229
- type: ndcg_at_100
value: 54.468
- type: ndcg_at_1000
value: 56.287
- type: ndcg_at_20
value: 50.617999999999995
- type: ndcg_at_3
value: 43.338
- type: ndcg_at_5
value: 46.294999999999995
- type: precision_at_1
value: 38.735
- type: precision_at_10
value: 9.13
- type: precision_at_100
value: 1.8339999999999999
- type: precision_at_1000
value: 0.255
- type: precision_at_20
value: 5.800000000000001
- type: precision_at_3
value: 20.224
- type: precision_at_5
value: 14.979999999999999
- type: recall_at_1
value: 31.906000000000002
- type: recall_at_10
value: 58.742000000000004
- type: recall_at_100
value: 86.001
- type: recall_at_1000
value: 97.30499999999999
- type: recall_at_20
value: 67.744
- type: recall_at_3
value: 45.072
- type: recall_at_5
value: 52.993
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 39.542
- type: map_at_1
value: 26.565
- type: map_at_10
value: 34.525
- type: map_at_100
value: 35.667
- type: map_at_1000
value: 35.76
- type: map_at_20
value: 35.205999999999996
- type: map_at_3
value: 31.854
- type: map_at_5
value: 33.176
- type: mrr_at_1
value: 29.20517560073937
- type: mrr_at_10
value: 37.08439691341724
- type: mrr_at_100
value: 38.01059513086597
- type: mrr_at_1000
value: 38.07153297322738
- type: mrr_at_20
value: 37.61517819470785
- type: mrr_at_3
value: 34.59642637091805
- type: mrr_at_5
value: 35.85335797905114
- type: nauc_map_at_1000_diff1
value: 44.31701336173116
- type: nauc_map_at_1000_max
value: 16.130459475311362
- type: nauc_map_at_1000_std
value: -4.066518199918133
- type: nauc_map_at_100_diff1
value: 44.327021392192385
- type: nauc_map_at_100_max
value: 16.145481588871835
- type: nauc_map_at_100_std
value: -4.067822225125757
- type: nauc_map_at_10_diff1
value: 44.45681615650266
- type: nauc_map_at_10_max
value: 16.3876608176403
- type: nauc_map_at_10_std
value: -4.642174886460134
- type: nauc_map_at_1_diff1
value: 48.309451353884235
- type: nauc_map_at_1_max
value: 14.659358255600885
- type: nauc_map_at_1_std
value: -6.480918454853081
- type: nauc_map_at_20_diff1
value: 44.38848430651199
- type: nauc_map_at_20_max
value: 15.780009933494304
- type: nauc_map_at_20_std
value: -4.154494565017467
- type: nauc_map_at_3_diff1
value: 45.437821252489506
- type: nauc_map_at_3_max
value: 16.742021417179387
- type: nauc_map_at_3_std
value: -6.056265703635078
- type: nauc_map_at_5_diff1
value: 45.41602514907789
- type: nauc_map_at_5_max
value: 16.47163044317504
- type: nauc_map_at_5_std
value: -5.579883759639345
- type: nauc_mrr_at_1000_diff1
value: 43.606140308926214
- type: nauc_mrr_at_1000_max
value: 16.259882534857155
- type: nauc_mrr_at_1000_std
value: -4.57796149408641
- type: nauc_mrr_at_100_diff1
value: 43.60326855408368
- type: nauc_mrr_at_100_max
value: 16.258830980141983
- type: nauc_mrr_at_100_std
value: -4.585033557554184
- type: nauc_mrr_at_10_diff1
value: 43.65208595922507
- type: nauc_mrr_at_10_max
value: 16.553536816353382
- type: nauc_mrr_at_10_std
value: -5.0713438811575555
- type: nauc_mrr_at_1_diff1
value: 48.325071997506505
- type: nauc_mrr_at_1_max
value: 14.832750845023146
- type: nauc_mrr_at_1_std
value: -7.241802324060084
- type: nauc_mrr_at_20_diff1
value: 43.56043734269684
- type: nauc_mrr_at_20_max
value: 16.069013855017854
- type: nauc_mrr_at_20_std
value: -4.614073034175973
- type: nauc_mrr_at_3_diff1
value: 44.35292814451889
- type: nauc_mrr_at_3_max
value: 17.284290586683813
- type: nauc_mrr_at_3_std
value: -5.473861789155963
- type: nauc_mrr_at_5_diff1
value: 44.42201798632778
- type: nauc_mrr_at_5_max
value: 16.717670959580776
- type: nauc_mrr_at_5_std
value: -5.443367064802848
- type: nauc_ndcg_at_1000_diff1
value: 42.236407068780515
- type: nauc_ndcg_at_1000_max
value: 16.392422633395153
- type: nauc_ndcg_at_1000_std
value: -1.192190658096375
- type: nauc_ndcg_at_100_diff1
value: 42.14119116801277
- type: nauc_ndcg_at_100_max
value: 16.13415803532655
- type: nauc_ndcg_at_100_std
value: -0.9966215162019469
- type: nauc_ndcg_at_10_diff1
value: 42.4417737438586
- type: nauc_ndcg_at_10_max
value: 16.274567696691573
- type: nauc_ndcg_at_10_std
value: -3.6179390394259903
- type: nauc_ndcg_at_1_diff1
value: 48.325071997506505
- type: nauc_ndcg_at_1_max
value: 14.832750845023146
- type: nauc_ndcg_at_1_std
value: -7.241802324060084
- type: nauc_ndcg_at_20_diff1
value: 42.218839329600314
- type: nauc_ndcg_at_20_max
value: 14.278956529458457
- type: nauc_ndcg_at_20_std
value: -1.75601703497348
- type: nauc_ndcg_at_3_diff1
value: 44.118831425290864
- type: nauc_ndcg_at_3_max
value: 17.763450766878407
- type: nauc_ndcg_at_3_std
value: -5.511333401326693
- type: nauc_ndcg_at_5_diff1
value: 44.350244039228684
- type: nauc_ndcg_at_5_max
value: 16.872392867376163
- type: nauc_ndcg_at_5_std
value: -5.192532346917115
- type: nauc_precision_at_1000_diff1
value: -13.502870987205087
- type: nauc_precision_at_1000_max
value: -5.569471380713692
- type: nauc_precision_at_1000_std
value: 3.7615484246993174
- type: nauc_precision_at_100_diff1
value: 4.3486166706781715
- type: nauc_precision_at_100_max
value: 9.798796624884769
- type: nauc_precision_at_100_std
value: 17.687575480004686
- type: nauc_precision_at_10_diff1
value: 26.315525148666215
- type: nauc_precision_at_10_max
value: 15.632196329772343
- type: nauc_precision_at_10_std
value: 6.771032703498772
- type: nauc_precision_at_1_diff1
value: 48.325071997506505
- type: nauc_precision_at_1_max
value: 14.832750845023146
- type: nauc_precision_at_1_std
value: -7.241802324060084
- type: nauc_precision_at_20_diff1
value: 20.801470081602552
- type: nauc_precision_at_20_max
value: 7.029450231532425
- type: nauc_precision_at_20_std
value: 13.071960852372952
- type: nauc_precision_at_3_diff1
value: 38.97587240190173
- type: nauc_precision_at_3_max
value: 19.450589959302075
- type: nauc_precision_at_3_std
value: -3.053255312485641
- type: nauc_precision_at_5_diff1
value: 37.25427076067403
- type: nauc_precision_at_5_max
value: 17.266441310681067
- type: nauc_precision_at_5_std
value: -1.0377167920900239
- type: nauc_recall_at_1000_diff1
value: 24.284270619620074
- type: nauc_recall_at_1000_max
value: 28.189662006674293
- type: nauc_recall_at_1000_std
value: 40.36949902164481
- type: nauc_recall_at_100_diff1
value: 32.33595995022117
- type: nauc_recall_at_100_max
value: 14.55739776232413
- type: nauc_recall_at_100_std
value: 15.957876076610955
- type: nauc_recall_at_10_diff1
value: 35.76287413694493
- type: nauc_recall_at_10_max
value: 14.206793089247608
- type: nauc_recall_at_10_std
value: -0.7567768181737041
- type: nauc_recall_at_1_diff1
value: 48.309451353884235
- type: nauc_recall_at_1_max
value: 14.659358255600885
- type: nauc_recall_at_1_std
value: -6.480918454853081
- type: nauc_recall_at_20_diff1
value: 34.79257897401949
- type: nauc_recall_at_20_max
value: 5.948905509300792
- type: nauc_recall_at_20_std
value: 7.26342864152797
- type: nauc_recall_at_3_diff1
value: 41.627795596112904
- type: nauc_recall_at_3_max
value: 19.17324430675074
- type: nauc_recall_at_3_std
value: -5.8732091994119635
- type: nauc_recall_at_5_diff1
value: 41.293119343456695
- type: nauc_recall_at_5_max
value: 16.96513762749752
- type: nauc_recall_at_5_std
value: -4.509441721720783
- type: ndcg_at_1
value: 29.205
- type: ndcg_at_10
value: 39.542
- type: ndcg_at_100
value: 44.96
- type: ndcg_at_1000
value: 47.094
- type: ndcg_at_20
value: 41.807
- type: ndcg_at_3
value: 34.339
- type: ndcg_at_5
value: 36.538
- type: precision_at_1
value: 29.205
- type: precision_at_10
value: 6.1370000000000005
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_20
value: 3.614
- type: precision_at_3
value: 14.479000000000001
- type: precision_at_5
value: 9.945
- type: recall_at_1
value: 26.565
- type: recall_at_10
value: 52.63099999999999
- type: recall_at_100
value: 77.388
- type: recall_at_1000
value: 93.111
- type: recall_at_20
value: 61.241
- type: recall_at_3
value: 38.29
- type: recall_at_5
value: 43.817
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 45.765
- type: map_at_1
value: 18.693
- type: map_at_10
value: 34.797
- type: map_at_100
value: 37.123
- type: map_at_1000
value: 37.291999999999994
- type: map_at_20
value: 36.196
- type: map_at_3
value: 28.503
- type: map_at_5
value: 32.074999999999996
- type: mrr_at_1
value: 42.93159609120521
- type: mrr_at_10
value: 57.09055891629189
- type: mrr_at_100
value: 57.61202242182417
- type: mrr_at_1000
value: 57.627325585586185
- type: mrr_at_20
value: 57.45761564708286
- type: mrr_at_3
value: 54.082519001085835
- type: mrr_at_5
value: 56.04017372421293
- type: nauc_map_at_1000_diff1
value: 29.62748830971108
- type: nauc_map_at_1000_max
value: 38.90548417023915
- type: nauc_map_at_1000_std
value: 10.982709233422067
- type: nauc_map_at_100_diff1
value: 29.571188402344134
- type: nauc_map_at_100_max
value: 38.928535099659605
- type: nauc_map_at_100_std
value: 10.992082156218132
- type: nauc_map_at_10_diff1
value: 29.91368337621149
- type: nauc_map_at_10_max
value: 38.17858866916931
- type: nauc_map_at_10_std
value: 9.170807732909834
- type: nauc_map_at_1_diff1
value: 38.292771711378094
- type: nauc_map_at_1_max
value: 31.175623912098082
- type: nauc_map_at_1_std
value: -1.9866431729978125
- type: nauc_map_at_20_diff1
value: 29.895018465325386
- type: nauc_map_at_20_max
value: 38.68987002868627
- type: nauc_map_at_20_std
value: 10.325850804931683
- type: nauc_map_at_3_diff1
value: 32.450764245953124
- type: nauc_map_at_3_max
value: 36.56035725422814
- type: nauc_map_at_3_std
value: 5.069981585274344
- type: nauc_map_at_5_diff1
value: 30.82713673636892
- type: nauc_map_at_5_max
value: 37.53274823289814
- type: nauc_map_at_5_std
value: 7.475198176862019
- type: nauc_mrr_at_1000_diff1
value: 27.527247873661825
- type: nauc_mrr_at_1000_max
value: 37.6662925313217
- type: nauc_mrr_at_1000_std
value: 12.958087435958717
- type: nauc_mrr_at_100_diff1
value: 27.517344291640494
- type: nauc_mrr_at_100_max
value: 37.671910181450116
- type: nauc_mrr_at_100_std
value: 12.975631946066274
- type: nauc_mrr_at_10_diff1
value: 27.37340022726463
- type: nauc_mrr_at_10_max
value: 37.75987233036419
- type: nauc_mrr_at_10_std
value: 12.980445473930105
- type: nauc_mrr_at_1_diff1
value: 30.294284606380177
- type: nauc_mrr_at_1_max
value: 33.52304031536273
- type: nauc_mrr_at_1_std
value: 7.362569179213545
- type: nauc_mrr_at_20_diff1
value: 27.516321063997818
- type: nauc_mrr_at_20_max
value: 37.71518717638686
- type: nauc_mrr_at_20_std
value: 13.023005393854561
- type: nauc_mrr_at_3_diff1
value: 27.43050817662863
- type: nauc_mrr_at_3_max
value: 37.567407024580795
- type: nauc_mrr_at_3_std
value: 11.72224066823944
- type: nauc_mrr_at_5_diff1
value: 27.48380660186141
- type: nauc_mrr_at_5_max
value: 37.83633222312436
- type: nauc_mrr_at_5_std
value: 12.648909042225116
- type: nauc_ndcg_at_1000_diff1
value: 27.348749275808565
- type: nauc_ndcg_at_1000_max
value: 40.92738864140189
- type: nauc_ndcg_at_1000_std
value: 17.298965422330724
- type: nauc_ndcg_at_100_diff1
value: 26.406487158488023
- type: nauc_ndcg_at_100_max
value: 41.41285056973748
- type: nauc_ndcg_at_100_std
value: 17.925298509801692
- type: nauc_ndcg_at_10_diff1
value: 27.41610920315052
- type: nauc_ndcg_at_10_max
value: 39.691386898955635
- type: nauc_ndcg_at_10_std
value: 13.309540866780392
- type: nauc_ndcg_at_1_diff1
value: 30.294284606380177
- type: nauc_ndcg_at_1_max
value: 33.52304031536273
- type: nauc_ndcg_at_1_std
value: 7.362569179213545
- type: nauc_ndcg_at_20_diff1
value: 27.35285020840544
- type: nauc_ndcg_at_20_max
value: 40.54240809305637
- type: nauc_ndcg_at_20_std
value: 15.615186440824544
- type: nauc_ndcg_at_3_diff1
value: 29.2536320295362
- type: nauc_ndcg_at_3_max
value: 37.446326210011065
- type: nauc_ndcg_at_3_std
value: 8.769752235477865
- type: nauc_ndcg_at_5_diff1
value: 28.519419303034223
- type: nauc_ndcg_at_5_max
value: 38.87942356352632
- type: nauc_ndcg_at_5_std
value: 10.655159360448403
- type: nauc_precision_at_1000_diff1
value: -13.778436964449162
- type: nauc_precision_at_1000_max
value: -1.5757398167401473
- type: nauc_precision_at_1000_std
value: 21.685081909609398
- type: nauc_precision_at_100_diff1
value: -9.84448688176112
- type: nauc_precision_at_100_max
value: 14.394813480886384
- type: nauc_precision_at_100_std
value: 30.613127306510656
- type: nauc_precision_at_10_diff1
value: 3.6153476810793617
- type: nauc_precision_at_10_max
value: 27.908875838679187
- type: nauc_precision_at_10_std
value: 25.116695667452483
- type: nauc_precision_at_1_diff1
value: 30.294284606380177
- type: nauc_precision_at_1_max
value: 33.52304031536273
- type: nauc_precision_at_1_std
value: 7.362569179213545
- type: nauc_precision_at_20_diff1
value: -0.10581947714259332
- type: nauc_precision_at_20_max
value: 23.623296291147284
- type: nauc_precision_at_20_std
value: 28.569096802805273
- type: nauc_precision_at_3_diff1
value: 15.417757444527858
- type: nauc_precision_at_3_max
value: 35.044093611143104
- type: nauc_precision_at_3_std
value: 16.94571966979081
- type: nauc_precision_at_5_diff1
value: 9.321960905945865
- type: nauc_precision_at_5_max
value: 31.958151849692225
- type: nauc_precision_at_5_std
value: 21.597268095371692
- type: nauc_recall_at_1000_diff1
value: 13.973326251203499
- type: nauc_recall_at_1000_max
value: 43.21737599095864
- type: nauc_recall_at_1000_std
value: 43.401037509157916
- type: nauc_recall_at_100_diff1
value: 11.474499434955268
- type: nauc_recall_at_100_max
value: 40.832085174507256
- type: nauc_recall_at_100_std
value: 33.34882371261869
- type: nauc_recall_at_10_diff1
value: 19.607029490024455
- type: nauc_recall_at_10_max
value: 36.480369936031686
- type: nauc_recall_at_10_std
value: 15.727190817289003
- type: nauc_recall_at_1_diff1
value: 38.292771711378094
- type: nauc_recall_at_1_max
value: 31.175623912098082
- type: nauc_recall_at_1_std
value: -1.9866431729978125
- type: nauc_recall_at_20_diff1
value: 17.88148599914498
- type: nauc_recall_at_20_max
value: 37.15460939398063
- type: nauc_recall_at_20_std
value: 21.32153921542893
- type: nauc_recall_at_3_diff1
value: 26.258226086531465
- type: nauc_recall_at_3_max
value: 35.711402441842
- type: nauc_recall_at_3_std
value: 6.900316431484741
- type: nauc_recall_at_5_diff1
value: 22.34254971673374
- type: nauc_recall_at_5_max
value: 35.73901160015368
- type: nauc_recall_at_5_std
value: 10.746113843136587
- type: ndcg_at_1
value: 42.931999999999995
- type: ndcg_at_10
value: 45.765
- type: ndcg_at_100
value: 52.986999999999995
- type: ndcg_at_1000
value: 55.481
- type: ndcg_at_20
value: 49.046
- type: ndcg_at_3
value: 38.117000000000004
- type: ndcg_at_5
value: 41.192
- type: precision_at_1
value: 42.931999999999995
- type: precision_at_10
value: 14.573
- type: precision_at_100
value: 2.246
- type: precision_at_1000
value: 0.272
- type: precision_at_20
value: 8.752
- type: precision_at_3
value: 29.229
- type: precision_at_5
value: 22.84
- type: recall_at_1
value: 18.693
- type: recall_at_10
value: 53.345
- type: recall_at_100
value: 76.94
- type: recall_at_1000
value: 90.49199999999999
- type: recall_at_20
value: 62.366
- type: recall_at_3
value: 34.846
- type: recall_at_5
value: 43.504
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 49.747
- type: map_at_1
value: 9.771
- type: map_at_10
value: 23.411
- type: map_at_100
value: 34.894
- type: map_at_1000
value: 36.771
- type: map_at_20
value: 28.109
- type: map_at_3
value: 16.008
- type: map_at_5
value: 19.042
- type: mrr_at_1
value: 72.75
- type: mrr_at_10
value: 80.03859126984126
- type: mrr_at_100
value: 80.3164527985558
- type: mrr_at_1000
value: 80.3210619351907
- type: mrr_at_20
value: 80.25448621510927
- type: mrr_at_3
value: 78.58333333333333
- type: mrr_at_5
value: 79.38333333333333
- type: nauc_map_at_1000_diff1
value: 32.9210053309451
- type: nauc_map_at_1000_max
value: 39.131322809002675
- type: nauc_map_at_1000_std
value: 22.47925587034267
- type: nauc_map_at_100_diff1
value: 33.48737776488358
- type: nauc_map_at_100_max
value: 37.312409345747625
- type: nauc_map_at_100_std
value: 19.45169971949006
- type: nauc_map_at_10_diff1
value: 34.86121747102709
- type: nauc_map_at_10_max
value: 21.927351173848063
- type: nauc_map_at_10_std
value: -5.469585929328966
- type: nauc_map_at_1_diff1
value: 36.788988562050946
- type: nauc_map_at_1_max
value: 6.218043424236115
- type: nauc_map_at_1_std
value: -24.42188675644677
- type: nauc_map_at_20_diff1
value: 34.2019845457827
- type: nauc_map_at_20_max
value: 27.87555414523285
- type: nauc_map_at_20_std
value: 3.704193361848003
- type: nauc_map_at_3_diff1
value: 34.08750068116169
- type: nauc_map_at_3_max
value: 11.433238879464716
- type: nauc_map_at_3_std
value: -18.070105813076655
- type: nauc_map_at_5_diff1
value: 34.167149018546944
- type: nauc_map_at_5_max
value: 15.023813889086737
- type: nauc_map_at_5_std
value: -12.811405222222753
- type: nauc_mrr_at_1000_diff1
value: 60.14213791851398
- type: nauc_mrr_at_1000_max
value: 67.63487849347548
- type: nauc_mrr_at_1000_std
value: 44.30322435935586
- type: nauc_mrr_at_100_diff1
value: 60.142343520968275
- type: nauc_mrr_at_100_max
value: 67.63152241168181
- type: nauc_mrr_at_100_std
value: 44.277233086478574
- type: nauc_mrr_at_10_diff1
value: 60.054050852029725
- type: nauc_mrr_at_10_max
value: 67.85192489991957
- type: nauc_mrr_at_10_std
value: 44.53453783053214
- type: nauc_mrr_at_1_diff1
value: 60.56003472781103
- type: nauc_mrr_at_1_max
value: 63.99721006911423
- type: nauc_mrr_at_1_std
value: 39.410700262897336
- type: nauc_mrr_at_20_diff1
value: 60.1578610517364
- type: nauc_mrr_at_20_max
value: 67.73004026185055
- type: nauc_mrr_at_20_std
value: 44.38457392945975
- type: nauc_mrr_at_3_diff1
value: 61.00386913748428
- type: nauc_mrr_at_3_max
value: 67.6097919172443
- type: nauc_mrr_at_3_std
value: 44.37901697625456
- type: nauc_mrr_at_5_diff1
value: 60.143743823564556
- type: nauc_mrr_at_5_max
value: 67.7422395669353
- type: nauc_mrr_at_5_std
value: 44.92611999884299
- type: nauc_ndcg_at_1000_diff1
value: 41.943919199876
- type: nauc_ndcg_at_1000_max
value: 55.34368795153672
- type: nauc_ndcg_at_1000_std
value: 40.64364798733629
- type: nauc_ndcg_at_100_diff1
value: 40.85975674055452
- type: nauc_ndcg_at_100_max
value: 49.48913616661651
- type: nauc_ndcg_at_100_std
value: 31.230004407529734
- type: nauc_ndcg_at_10_diff1
value: 39.03977233447205
- type: nauc_ndcg_at_10_max
value: 50.85899373451582
- type: nauc_ndcg_at_10_std
value: 28.565535567657758
- type: nauc_ndcg_at_1_diff1
value: 55.103074446340386
- type: nauc_ndcg_at_1_max
value: 57.083365993170574
- type: nauc_ndcg_at_1_std
value: 32.62345920937068
- type: nauc_ndcg_at_20_diff1
value: 40.80800588069346
- type: nauc_ndcg_at_20_max
value: 48.08304675498962
- type: nauc_ndcg_at_20_std
value: 24.308155582475493
- type: nauc_ndcg_at_3_diff1
value: 39.380845981099704
- type: nauc_ndcg_at_3_max
value: 50.47351788265686
- type: nauc_ndcg_at_3_std
value: 30.84136147203736
- type: nauc_ndcg_at_5_diff1
value: 37.53771673873421
- type: nauc_ndcg_at_5_max
value: 50.442525037505725
- type: nauc_ndcg_at_5_std
value: 31.698222359017542
- type: nauc_precision_at_1000_diff1
value: -17.598452736961626
- type: nauc_precision_at_1000_max
value: 7.3978095147406
- type: nauc_precision_at_1000_std
value: 17.81398831007705
- type: nauc_precision_at_100_diff1
value: -4.823669703134118
- type: nauc_precision_at_100_max
value: 31.2211264113413
- type: nauc_precision_at_100_std
value: 44.03977414541822
- type: nauc_precision_at_10_diff1
value: 5.427329842585479
- type: nauc_precision_at_10_max
value: 41.966355896510336
- type: nauc_precision_at_10_std
value: 46.86681191228352
- type: nauc_precision_at_1_diff1
value: 60.56003472781103
- type: nauc_precision_at_1_max
value: 63.99721006911423
- type: nauc_precision_at_1_std
value: 39.410700262897336
- type: nauc_precision_at_20_diff1
value: 2.8680215514220055
- type: nauc_precision_at_20_max
value: 39.47074710749822
- type: nauc_precision_at_20_std
value: 47.12080089773674
- type: nauc_precision_at_3_diff1
value: 20.02194579331603
- type: nauc_precision_at_3_max
value: 46.505979797805715
- type: nauc_precision_at_3_std
value: 41.71524758675274
- type: nauc_precision_at_5_diff1
value: 10.289351995558569
- type: nauc_precision_at_5_max
value: 44.02813523786892
- type: nauc_precision_at_5_std
value: 46.62685778242112
- type: nauc_recall_at_1000_diff1
value: 30.21940277893363
- type: nauc_recall_at_1000_max
value: 39.5822655196913
- type: nauc_recall_at_1000_std
value: 43.96968070152464
- type: nauc_recall_at_100_diff1
value: 26.911050821982297
- type: nauc_recall_at_100_max
value: 28.70889194883595
- type: nauc_recall_at_100_std
value: 19.234276029546248
- type: nauc_recall_at_10_diff1
value: 32.16261998997161
- type: nauc_recall_at_10_max
value: 16.351143839887673
- type: nauc_recall_at_10_std
value: -9.566286205201623
- type: nauc_recall_at_1_diff1
value: 36.788988562050946
- type: nauc_recall_at_1_max
value: 6.218043424236115
- type: nauc_recall_at_1_std
value: -24.42188675644677
- type: nauc_recall_at_20_diff1
value: 29.963999826495584
- type: nauc_recall_at_20_max
value: 17.55794298249755
- type: nauc_recall_at_20_std
value: -3.7511675743870634
- type: nauc_recall_at_3_diff1
value: 31.228447322937804
- type: nauc_recall_at_3_max
value: 8.65382080521747
- type: nauc_recall_at_3_std
value: -19.691807046880168
- type: nauc_recall_at_5_diff1
value: 30.398206992445942
- type: nauc_recall_at_5_max
value: 11.275424919343163
- type: nauc_recall_at_5_std
value: -14.798926734485269
- type: ndcg_at_1
value: 62.625
- type: ndcg_at_10
value: 49.747
- type: ndcg_at_100
value: 55.010000000000005
- type: ndcg_at_1000
value: 61.895
- type: ndcg_at_20
value: 49.392
- type: ndcg_at_3
value: 54.120999999999995
- type: ndcg_at_5
value: 51.637
- type: precision_at_1
value: 72.75
- type: precision_at_10
value: 40.825
- type: precision_at_100
value: 13.105
- type: precision_at_1000
value: 2.308
- type: precision_at_20
value: 31.75
- type: precision_at_3
value: 57.25
- type: precision_at_5
value: 50.6
- type: recall_at_1
value: 9.771
- type: recall_at_10
value: 28.587
- type: recall_at_100
value: 61.946
- type: recall_at_1000
value: 84.463
- type: recall_at_20
value: 38.478
- type: recall_at_3
value: 17.218
- type: recall_at_5
value: 21.275
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 91.86500000000001
- type: f1
value: 88.30096162438134
- type: f1_weighted
value: 92.0659899919408
- type: main_score
value: 91.86500000000001
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 92.324
- type: map_at_1
value: 82.009
- type: map_at_10
value: 89.747
- type: map_at_100
value: 89.938
- type: map_at_1000
value: 89.947
- type: map_at_20
value: 89.861
- type: map_at_3
value: 88.857
- type: map_at_5
value: 89.436
- type: mrr_at_1
value: 88.41884188418841
- type: mrr_at_10
value: 93.21862543397191
- type: mrr_at_100
value: 93.25036117475034
- type: mrr_at_1000
value: 93.25067900377414
- type: mrr_at_20
value: 93.24015254547037
- type: mrr_at_3
value: 92.9592959295929
- type: mrr_at_5
value: 93.14756475647556
- type: nauc_map_at_1000_diff1
value: 50.94738883522824
- type: nauc_map_at_1000_max
value: 28.177769914986435
- type: nauc_map_at_1000_std
value: -7.695498206797145
- type: nauc_map_at_100_diff1
value: 50.907640770694485
- type: nauc_map_at_100_max
value: 28.17900950335047
- type: nauc_map_at_100_std
value: -7.673824199345647
- type: nauc_map_at_10_diff1
value: 50.214920089544
- type: nauc_map_at_10_max
value: 27.936185706166384
- type: nauc_map_at_10_std
value: -7.840526683429777
- type: nauc_map_at_1_diff1
value: 58.66611849130122
- type: nauc_map_at_1_max
value: 26.20254332997399
- type: nauc_map_at_1_std
value: -12.81827489016333
- type: nauc_map_at_20_diff1
value: 50.58681046481491
- type: nauc_map_at_20_max
value: 28.13309361145371
- type: nauc_map_at_20_std
value: -7.621015109639511
- type: nauc_map_at_3_diff1
value: 48.956697935385854
- type: nauc_map_at_3_max
value: 27.170314916243555
- type: nauc_map_at_3_std
value: -9.39547812372297
- type: nauc_map_at_5_diff1
value: 49.27259708465449
- type: nauc_map_at_5_max
value: 27.781743495059448
- type: nauc_map_at_5_std
value: -8.436943665780772
- type: nauc_mrr_at_1000_diff1
value: 70.94197138644368
- type: nauc_mrr_at_1000_max
value: 29.744792827747425
- type: nauc_mrr_at_1000_std
value: -14.231190372133911
- type: nauc_mrr_at_100_diff1
value: 70.94173828653378
- type: nauc_mrr_at_100_max
value: 29.747027261249315
- type: nauc_mrr_at_100_std
value: -14.226978661637379
- type: nauc_mrr_at_10_diff1
value: 70.75178524174973
- type: nauc_mrr_at_10_max
value: 29.809671744912002
- type: nauc_mrr_at_10_std
value: -14.075447791457476
- type: nauc_mrr_at_1_diff1
value: 74.79778494514198
- type: nauc_mrr_at_1_max
value: 28.952464074244013
- type: nauc_mrr_at_1_std
value: -15.239541908497461
- type: nauc_mrr_at_20_diff1
value: 70.88304898540507
- type: nauc_mrr_at_20_max
value: 29.806675277853056
- type: nauc_mrr_at_20_std
value: -14.157092427397986
- type: nauc_mrr_at_3_diff1
value: 70.25568401882646
- type: nauc_mrr_at_3_max
value: 29.456378318649907
- type: nauc_mrr_at_3_std
value: -15.294678607922727
- type: nauc_mrr_at_5_diff1
value: 70.40859709340829
- type: nauc_mrr_at_5_max
value: 30.103095328322116
- type: nauc_mrr_at_5_std
value: -14.224307813357095
- type: nauc_ndcg_at_1000_diff1
value: 53.98068423933861
- type: nauc_ndcg_at_1000_max
value: 29.181455069482908
- type: nauc_ndcg_at_1000_std
value: -7.203475127738186
- type: nauc_ndcg_at_100_diff1
value: 53.057337452477405
- type: nauc_ndcg_at_100_max
value: 29.245923440897037
- type: nauc_ndcg_at_100_std
value: -6.585954807531662
- type: nauc_ndcg_at_10_diff1
value: 49.98186818251915
- type: nauc_ndcg_at_10_max
value: 28.56823323795041
- type: nauc_ndcg_at_10_std
value: -6.323710931814188
- type: nauc_ndcg_at_1_diff1
value: 74.79778494514198
- type: nauc_ndcg_at_1_max
value: 28.952464074244013
- type: nauc_ndcg_at_1_std
value: -15.239541908497461
- type: nauc_ndcg_at_20_diff1
value: 51.10852050231911
- type: nauc_ndcg_at_20_max
value: 29.08003046680923
- type: nauc_ndcg_at_20_std
value: -5.849331595918404
- type: nauc_ndcg_at_3_diff1
value: 49.52502230383664
- type: nauc_ndcg_at_3_max
value: 28.00888943579568
- type: nauc_ndcg_at_3_std
value: -9.363043652090012
- type: nauc_ndcg_at_5_diff1
value: 48.65210001822694
- type: nauc_ndcg_at_5_max
value: 28.812347880950274
- type: nauc_ndcg_at_5_std
value: -7.494399468900928
- type: nauc_precision_at_1000_diff1
value: -10.394877833194963
- type: nauc_precision_at_1000_max
value: -7.894463753706603
- type: nauc_precision_at_1000_std
value: 8.498285792797692
- type: nauc_precision_at_100_diff1
value: -12.462196048282426
- type: nauc_precision_at_100_max
value: -6.5991192066970505
- type: nauc_precision_at_100_std
value: 11.196963409158196
- type: nauc_precision_at_10_diff1
value: -16.08068752853303
- type: nauc_precision_at_10_max
value: -5.804024497000059
- type: nauc_precision_at_10_std
value: 12.878158171669485
- type: nauc_precision_at_1_diff1
value: 74.79778494514198
- type: nauc_precision_at_1_max
value: 28.952464074244013
- type: nauc_precision_at_1_std
value: -15.239541908497461
- type: nauc_precision_at_20_diff1
value: -14.983606658676099
- type: nauc_precision_at_20_max
value: -5.587463391153577
- type: nauc_precision_at_20_std
value: 13.834807282791427
- type: nauc_precision_at_3_diff1
value: -13.597983159064528
- type: nauc_precision_at_3_max
value: -2.524512740134365
- type: nauc_precision_at_3_std
value: 6.842035390748123
- type: nauc_precision_at_5_diff1
value: -17.25544777698726
- type: nauc_precision_at_5_max
value: -4.0883771364047
- type: nauc_precision_at_5_std
value: 10.449335744222909
- type: nauc_recall_at_1000_diff1
value: 13.653514864247507
- type: nauc_recall_at_1000_max
value: 51.63943256263603
- type: nauc_recall_at_1000_std
value: 50.775035850822206
- type: nauc_recall_at_100_diff1
value: 4.781612383589401
- type: nauc_recall_at_100_max
value: 40.540335419586995
- type: nauc_recall_at_100_std
value: 40.379199601036525
- type: nauc_recall_at_10_diff1
value: 11.27891981913364
- type: nauc_recall_at_10_max
value: 28.617632154887378
- type: nauc_recall_at_10_std
value: 14.768271484955472
- type: nauc_recall_at_1_diff1
value: 58.66611849130122
- type: nauc_recall_at_1_max
value: 26.20254332997399
- type: nauc_recall_at_1_std
value: -12.81827489016333
- type: nauc_recall_at_20_diff1
value: 8.12120711290159
- type: nauc_recall_at_20_max
value: 33.00583001539113
- type: nauc_recall_at_20_std
value: 25.80753789069423
- type: nauc_recall_at_3_diff1
value: 22.269678892083
- type: nauc_recall_at_3_max
value: 25.44943213149191
- type: nauc_recall_at_3_std
value: -4.320083216887953
- type: nauc_recall_at_5_diff1
value: 13.697301143373114
- type: nauc_recall_at_5_max
value: 29.100798008536
- type: nauc_recall_at_5_std
value: 4.4040440238865735
- type: ndcg_at_1
value: 88.419
- type: ndcg_at_10
value: 92.324
- type: ndcg_at_100
value: 92.92200000000001
- type: ndcg_at_1000
value: 93.041
- type: ndcg_at_20
value: 92.592
- type: ndcg_at_3
value: 91.283
- type: ndcg_at_5
value: 91.879
- type: precision_at_1
value: 88.419
- type: precision_at_10
value: 10.969
- type: precision_at_100
value: 1.153
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_20
value: 5.585
- type: precision_at_3
value: 34.723
- type: precision_at_5
value: 21.401
- type: recall_at_1
value: 82.009
- type: recall_at_10
value: 96.614
- type: recall_at_100
value: 98.848
- type: recall_at_1000
value: 99.515
- type: recall_at_20
value: 97.478
- type: recall_at_3
value: 93.806
- type: recall_at_5
value: 95.36
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 61.565999999999995
- type: map_at_1
value: 32.156
- type: map_at_10
value: 53.315
- type: map_at_100
value: 55.362
- type: map_at_1000
value: 55.47299999999999
- type: map_at_20
value: 54.472
- type: map_at_3
value: 46.511
- type: map_at_5
value: 50.41
- type: mrr_at_1
value: 59.876543209876544
- type: mrr_at_10
value: 68.22610474230845
- type: mrr_at_100
value: 68.77394480404435
- type: mrr_at_1000
value: 68.78736007016967
- type: mrr_at_20
value: 68.59129912806026
- type: mrr_at_3
value: 66.10082304526748
- type: mrr_at_5
value: 67.38168724279832
- type: nauc_map_at_1000_diff1
value: 49.62942316124423
- type: nauc_map_at_1000_max
value: 35.67101903898957
- type: nauc_map_at_1000_std
value: -6.815873098250759
- type: nauc_map_at_100_diff1
value: 49.64621888214539
- type: nauc_map_at_100_max
value: 35.62031465567256
- type: nauc_map_at_100_std
value: -6.776031084630076
- type: nauc_map_at_10_diff1
value: 49.33030821201119
- type: nauc_map_at_10_max
value: 34.46971733898012
- type: nauc_map_at_10_std
value: -7.778373730184962
- type: nauc_map_at_1_diff1
value: 52.0332333047473
- type: nauc_map_at_1_max
value: 19.540566689104136
- type: nauc_map_at_1_std
value: -9.854745951285082
- type: nauc_map_at_20_diff1
value: 49.44169345608671
- type: nauc_map_at_20_max
value: 35.178312399601026
- type: nauc_map_at_20_std
value: -7.470635858834422
- type: nauc_map_at_3_diff1
value: 49.41939385962553
- type: nauc_map_at_3_max
value: 28.332049833893123
- type: nauc_map_at_3_std
value: -8.681960541681102
- type: nauc_map_at_5_diff1
value: 49.61090262759964
- type: nauc_map_at_5_max
value: 32.48723353572084
- type: nauc_map_at_5_std
value: -8.363058818753835
- type: nauc_mrr_at_1000_diff1
value: 60.69915846732089
- type: nauc_mrr_at_1000_max
value: 45.13860497381652
- type: nauc_mrr_at_1000_std
value: -4.225175125506248
- type: nauc_mrr_at_100_diff1
value: 60.69826345668557
- type: nauc_mrr_at_100_max
value: 45.13470703245659
- type: nauc_mrr_at_100_std
value: -4.200113517793398
- type: nauc_mrr_at_10_diff1
value: 60.66519475555599
- type: nauc_mrr_at_10_max
value: 45.373809774792946
- type: nauc_mrr_at_10_std
value: -4.140291436698825
- type: nauc_mrr_at_1_diff1
value: 63.48123363890736
- type: nauc_mrr_at_1_max
value: 43.84203382502383
- type: nauc_mrr_at_1_std
value: -5.342992521011855
- type: nauc_mrr_at_20_diff1
value: 60.586460414654
- type: nauc_mrr_at_20_max
value: 45.223192827086834
- type: nauc_mrr_at_20_std
value: -4.155586516022814
- type: nauc_mrr_at_3_diff1
value: 60.79722315560614
- type: nauc_mrr_at_3_max
value: 44.529305593724544
- type: nauc_mrr_at_3_std
value: -5.450633990725995
- type: nauc_mrr_at_5_diff1
value: 60.68447047918725
- type: nauc_mrr_at_5_max
value: 45.17642447745263
- type: nauc_mrr_at_5_std
value: -4.932681190117024
- type: nauc_ndcg_at_1000_diff1
value: 52.412278899797904
- type: nauc_ndcg_at_1000_max
value: 40.162877139336366
- type: nauc_ndcg_at_1000_std
value: -3.8553789122316875
- type: nauc_ndcg_at_100_diff1
value: 52.68807894576607
- type: nauc_ndcg_at_100_max
value: 39.9253822005076
- type: nauc_ndcg_at_100_std
value: -2.338651167528337
- type: nauc_ndcg_at_10_diff1
value: 51.09546048989901
- type: nauc_ndcg_at_10_max
value: 38.08064242158707
- type: nauc_ndcg_at_10_std
value: -5.233272547375464
- type: nauc_ndcg_at_1_diff1
value: 63.48123363890736
- type: nauc_ndcg_at_1_max
value: 43.84203382502383
- type: nauc_ndcg_at_1_std
value: -5.342992521011855
- type: nauc_ndcg_at_20_diff1
value: 51.49964906773662
- type: nauc_ndcg_at_20_max
value: 38.942004621686316
- type: nauc_ndcg_at_20_std
value: -4.679318970131204
- type: nauc_ndcg_at_3_diff1
value: 49.532860462828836
- type: nauc_ndcg_at_3_max
value: 37.56640546584668
- type: nauc_ndcg_at_3_std
value: -6.691776128891331
- type: nauc_ndcg_at_5_diff1
value: 50.23238795892766
- type: nauc_ndcg_at_5_max
value: 38.20264549254884
- type: nauc_ndcg_at_5_std
value: -7.22235274057192
- type: nauc_precision_at_1000_diff1
value: -14.38589444358042
- type: nauc_precision_at_1000_max
value: 17.97960800969427
- type: nauc_precision_at_1000_std
value: 6.828014370124078
- type: nauc_precision_at_100_diff1
value: -8.709913150226624
- type: nauc_precision_at_100_max
value: 23.14276582961205
- type: nauc_precision_at_100_std
value: 11.776194467911196
- type: nauc_precision_at_10_diff1
value: 6.484971892806652
- type: nauc_precision_at_10_max
value: 32.33979567454926
- type: nauc_precision_at_10_std
value: 4.3544588133706625
- type: nauc_precision_at_1_diff1
value: 63.48123363890736
- type: nauc_precision_at_1_max
value: 43.84203382502383
- type: nauc_precision_at_1_std
value: -5.342992521011855
- type: nauc_precision_at_20_diff1
value: 0.33051377406127247
- type: nauc_precision_at_20_max
value: 28.9668381305069
- type: nauc_precision_at_20_std
value: 6.7084619353660155
- type: nauc_precision_at_3_diff1
value: 22.682345321496626
- type: nauc_precision_at_3_max
value: 36.16645659098322
- type: nauc_precision_at_3_std
value: 0.8188466017391514
- type: nauc_precision_at_5_diff1
value: 14.605986990364134
- type: nauc_precision_at_5_max
value: 36.728759182846815
- type: nauc_precision_at_5_std
value: 1.9087175015774727
- type: nauc_recall_at_1000_diff1
value: 48.24624636211058
- type: nauc_recall_at_1000_max
value: 44.47586797842709
- type: nauc_recall_at_1000_std
value: 41.068897939296164
- type: nauc_recall_at_100_diff1
value: 46.88848933074924
- type: nauc_recall_at_100_max
value: 33.76863456468527
- type: nauc_recall_at_100_std
value: 30.245766911090126
- type: nauc_recall_at_10_diff1
value: 41.43226128163156
- type: nauc_recall_at_10_max
value: 32.30521131227616
- type: nauc_recall_at_10_std
value: -0.9141126092926203
- type: nauc_recall_at_1_diff1
value: 52.0332333047473
- type: nauc_recall_at_1_max
value: 19.540566689104136
- type: nauc_recall_at_1_std
value: -9.854745951285082
- type: nauc_recall_at_20_diff1
value: 40.854692957831304
- type: nauc_recall_at_20_max
value: 34.200599823549695
- type: nauc_recall_at_20_std
value: 2.125255667995533
- type: nauc_recall_at_3_diff1
value: 43.71551619581852
- type: nauc_recall_at_3_max
value: 25.214268383790834
- type: nauc_recall_at_3_std
value: -7.773643090892321
- type: nauc_recall_at_5_diff1
value: 42.70172927692832
- type: nauc_recall_at_5_max
value: 29.71575940411383
- type: nauc_recall_at_5_std
value: -6.304996381418782
- type: ndcg_at_1
value: 59.877
- type: ndcg_at_10
value: 61.565999999999995
- type: ndcg_at_100
value: 67.57
- type: ndcg_at_1000
value: 68.929
- type: ndcg_at_20
value: 64.059
- type: ndcg_at_3
value: 56.833
- type: ndcg_at_5
value: 58.571
- type: precision_at_1
value: 59.877
- type: precision_at_10
value: 16.836000000000002
- type: precision_at_100
value: 2.327
- type: precision_at_1000
value: 0.258
- type: precision_at_20
value: 9.606
- type: precision_at_3
value: 37.602999999999994
- type: precision_at_5
value: 27.716
- type: recall_at_1
value: 32.156
- type: recall_at_10
value: 69.23700000000001
- type: recall_at_100
value: 90.557
- type: recall_at_1000
value: 98.048
- type: recall_at_20
value: 76.629
- type: recall_at_3
value: 51.782
- type: recall_at_5
value: 59.911
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 85.71
- type: map_at_1
value: 44.727
- type: map_at_10
value: 80.172
- type: map_at_100
value: 80.735
- type: map_at_1000
value: 80.759
- type: map_at_20
value: 80.537
- type: map_at_3
value: 77.23
- type: map_at_5
value: 79.135
- type: mrr_at_1
value: 89.45307224848077
- type: mrr_at_10
value: 93.05947825900545
- type: mrr_at_100
value: 93.10735572174119
- type: mrr_at_1000
value: 93.10925122481453
- type: mrr_at_20
value: 93.09106265870149
- type: mrr_at_3
value: 92.64686022957449
- type: mrr_at_5
value: 92.91694800810244
- type: nauc_map_at_1000_diff1
value: 12.798252126301232
- type: nauc_map_at_1000_max
value: 43.81986554860238
- type: nauc_map_at_1000_std
value: -0.9430908318570322
- type: nauc_map_at_100_diff1
value: 12.778785751586241
- type: nauc_map_at_100_max
value: 43.83285096666312
- type: nauc_map_at_100_std
value: -0.890400549771497
- type: nauc_map_at_10_diff1
value: 12.448151460865605
- type: nauc_map_at_10_max
value: 43.81687718031803
- type: nauc_map_at_10_std
value: -1.1290250999504823
- type: nauc_map_at_1_diff1
value: 69.61919805260156
- type: nauc_map_at_1_max
value: 55.5755657749773
- type: nauc_map_at_1_std
value: -15.526118670762864
- type: nauc_map_at_20_diff1
value: 12.678746871164003
- type: nauc_map_at_20_max
value: 43.85110404389791
- type: nauc_map_at_20_std
value: -0.9453828333133943
- type: nauc_map_at_3_diff1
value: 10.59516294247888
- type: nauc_map_at_3_max
value: 41.70569781021385
- type: nauc_map_at_3_std
value: -4.07043830485855
- type: nauc_map_at_5_diff1
value: 11.565349666644325
- type: nauc_map_at_5_max
value: 43.116183222374225
- type: nauc_map_at_5_std
value: -2.1674460401372486
- type: nauc_mrr_at_1000_diff1
value: 70.69022205791336
- type: nauc_mrr_at_1000_max
value: 60.1768389092444
- type: nauc_mrr_at_1000_std
value: -12.296609331418805
- type: nauc_mrr_at_100_diff1
value: 70.69003325257466
- type: nauc_mrr_at_100_max
value: 60.185582165757666
- type: nauc_mrr_at_100_std
value: -12.278182064282248
- type: nauc_mrr_at_10_diff1
value: 70.75823369489385
- type: nauc_mrr_at_10_max
value: 60.31013481977012
- type: nauc_mrr_at_10_std
value: -12.26989248526715
- type: nauc_mrr_at_1_diff1
value: 69.61919805260156
- type: nauc_mrr_at_1_max
value: 55.5755657749773
- type: nauc_mrr_at_1_std
value: -15.526118670762864
- type: nauc_mrr_at_20_diff1
value: 70.69890688426213
- type: nauc_mrr_at_20_max
value: 60.21787827391816
- type: nauc_mrr_at_20_std
value: -12.302070154645696
- type: nauc_mrr_at_3_diff1
value: 70.68251049579754
- type: nauc_mrr_at_3_max
value: 61.01164717194973
- type: nauc_mrr_at_3_std
value: -11.832481743898247
- type: nauc_mrr_at_5_diff1
value: 70.74789138616434
- type: nauc_mrr_at_5_max
value: 60.480693027406694
- type: nauc_mrr_at_5_std
value: -12.280386872142941
- type: nauc_ndcg_at_1000_diff1
value: 20.836724167580574
- type: nauc_ndcg_at_1000_max
value: 47.677459062598345
- type: nauc_ndcg_at_1000_std
value: 1.0411766255838146
- type: nauc_ndcg_at_100_diff1
value: 20.220410822948367
- type: nauc_ndcg_at_100_max
value: 48.00523684992595
- type: nauc_ndcg_at_100_std
value: 2.467440064578469
- type: nauc_ndcg_at_10_diff1
value: 18.51373841748113
- type: nauc_ndcg_at_10_max
value: 47.90062496600346
- type: nauc_ndcg_at_10_std
value: 1.3381961818317667
- type: nauc_ndcg_at_1_diff1
value: 69.61919805260156
- type: nauc_ndcg_at_1_max
value: 55.5755657749773
- type: nauc_ndcg_at_1_std
value: -15.526118670762864
- type: nauc_ndcg_at_20_diff1
value: 19.403573009912805
- type: nauc_ndcg_at_20_max
value: 48.133829431970135
- type: nauc_ndcg_at_20_std
value: 2.0249306865683527
- type: nauc_ndcg_at_3_diff1
value: 15.453534673988578
- type: nauc_ndcg_at_3_max
value: 44.50916210615789
- type: nauc_ndcg_at_3_std
value: -3.6243787051842307
- type: nauc_ndcg_at_5_diff1
value: 16.722515798468727
- type: nauc_ndcg_at_5_max
value: 46.36557177573076
- type: nauc_ndcg_at_5_std
value: -0.9789348270087928
- type: nauc_precision_at_1000_diff1
value: 18.442807737825078
- type: nauc_precision_at_1000_max
value: 62.6412630587746
- type: nauc_precision_at_1000_std
value: 67.28157546833832
- type: nauc_precision_at_100_diff1
value: 8.378369860260145
- type: nauc_precision_at_100_max
value: 55.87545313950895
- type: nauc_precision_at_100_std
value: 47.415584458330926
- type: nauc_precision_at_10_diff1
value: 7.419773912504883
- type: nauc_precision_at_10_max
value: 50.325163033813105
- type: nauc_precision_at_10_std
value: 15.74465932738504
- type: nauc_precision_at_1_diff1
value: 69.61919805260156
- type: nauc_precision_at_1_max
value: 55.5755657749773
- type: nauc_precision_at_1_std
value: -15.526118670762864
- type: nauc_precision_at_20_diff1
value: 8.76445086512422
- type: nauc_precision_at_20_max
value: 53.185762190326834
- type: nauc_precision_at_20_std
value: 23.528376243793584
- type: nauc_precision_at_3_diff1
value: 5.462937100521903
- type: nauc_precision_at_3_max
value: 43.307890530903165
- type: nauc_precision_at_3_std
value: -0.019766798037247135
- type: nauc_precision_at_5_diff1
value: 5.668823923473503
- type: nauc_precision_at_5_max
value: 46.388864934614546
- type: nauc_precision_at_5_std
value: 6.204083505295685
- type: nauc_recall_at_1000_diff1
value: 18.442807737825063
- type: nauc_recall_at_1000_max
value: 62.641263058773
- type: nauc_recall_at_1000_std
value: 67.2815754683397
- type: nauc_recall_at_100_diff1
value: 8.37836986025998
- type: nauc_recall_at_100_max
value: 55.87545313950938
- type: nauc_recall_at_100_std
value: 47.41558445833062
- type: nauc_recall_at_10_diff1
value: 7.4197739125050965
- type: nauc_recall_at_10_max
value: 50.325163033813325
- type: nauc_recall_at_10_std
value: 15.74465932738494
- type: nauc_recall_at_1_diff1
value: 69.61919805260156
- type: nauc_recall_at_1_max
value: 55.5755657749773
- type: nauc_recall_at_1_std
value: -15.526118670762864
- type: nauc_recall_at_20_diff1
value: 8.764450865124198
- type: nauc_recall_at_20_max
value: 53.185762190326614
- type: nauc_recall_at_20_std
value: 23.52837624379365
- type: nauc_recall_at_3_diff1
value: 5.462937100521863
- type: nauc_recall_at_3_max
value: 43.30789053090315
- type: nauc_recall_at_3_std
value: -0.019766798037247135
- type: nauc_recall_at_5_diff1
value: 5.668823923473546
- type: nauc_recall_at_5_max
value: 46.38886493461459
- type: nauc_recall_at_5_std
value: 6.204083505295627
- type: ndcg_at_1
value: 89.453
- type: ndcg_at_10
value: 85.71
- type: ndcg_at_100
value: 87.45100000000001
- type: ndcg_at_1000
value: 87.869
- type: ndcg_at_20
value: 86.551
- type: ndcg_at_3
value: 81.83500000000001
- type: ndcg_at_5
value: 84.076
- type: precision_at_1
value: 89.453
- type: precision_at_10
value: 17.881
- type: precision_at_100
value: 1.921
- type: precision_at_1000
value: 0.197
- type: precision_at_20
value: 9.21
- type: precision_at_3
value: 53.928
- type: precision_at_5
value: 34.123
- type: recall_at_1
value: 44.727
- type: recall_at_10
value: 89.40599999999999
- type: recall_at_100
value: 96.03
- type: recall_at_1000
value: 98.744
- type: recall_at_20
value: 92.10000000000001
- type: recall_at_3
value: 80.891
- type: recall_at_5
value: 85.307
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.9972
- type: ap
value: 95.5775856254544
- type: ap_weighted
value: 95.5775856254544
- type: f1
value: 96.99685931130435
- type: f1_weighted
value: 96.99685931130435
- type: main_score
value: 96.9972
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 47.238
- type: map_at_1
value: 26.118999999999996
- type: map_at_10
value: 39.766
- type: map_at_100
value: 40.847
- type: map_at_1000
value: 40.882000000000005
- type: map_at_20
value: 40.461000000000006
- type: map_at_3
value: 35.667
- type: map_at_5
value: 38.066
- type: mrr_at_1
value: 26.89111747851003
- type: mrr_at_10
value: 40.397553099558856
- type: mrr_at_100
value: 41.40812124384233
- type: mrr_at_1000
value: 41.43744901209514
- type: mrr_at_20
value: 41.05487580788417
- type: mrr_at_3
value: 36.40878701050627
- type: mrr_at_5
value: 38.75620821394463
- type: nauc_map_at_1000_diff1
value: 39.98998212778958
- type: nauc_map_at_1000_max
value: 15.59645055297053
- type: nauc_map_at_1000_std
value: -20.56903010050901
- type: nauc_map_at_100_diff1
value: 39.98864802308753
- type: nauc_map_at_100_max
value: 15.613879964576007
- type: nauc_map_at_100_std
value: -20.538248542026235
- type: nauc_map_at_10_diff1
value: 39.88019792742319
- type: nauc_map_at_10_max
value: 15.489005240065229
- type: nauc_map_at_10_std
value: -21.204753377687798
- type: nauc_map_at_1_diff1
value: 42.74534997345039
- type: nauc_map_at_1_max
value: 13.355853021671452
- type: nauc_map_at_1_std
value: -18.991398512806423
- type: nauc_map_at_20_diff1
value: 39.92156613912966
- type: nauc_map_at_20_max
value: 15.627187592069072
- type: nauc_map_at_20_std
value: -20.680113879390696
- type: nauc_map_at_3_diff1
value: 39.9974069898003
- type: nauc_map_at_3_max
value: 14.326178611684506
- type: nauc_map_at_3_std
value: -21.725322969825978
- type: nauc_map_at_5_diff1
value: 39.7544604866648
- type: nauc_map_at_5_max
value: 15.006870414790205
- type: nauc_map_at_5_std
value: -21.700337702850153
- type: nauc_mrr_at_1000_diff1
value: 39.928512394832346
- type: nauc_mrr_at_1000_max
value: 15.629735001277483
- type: nauc_mrr_at_1000_std
value: -20.268640135576806
- type: nauc_mrr_at_100_diff1
value: 39.92749044330002
- type: nauc_mrr_at_100_max
value: 15.647620571895688
- type: nauc_mrr_at_100_std
value: -20.238323340406215
- type: nauc_mrr_at_10_diff1
value: 39.83475591137612
- type: nauc_mrr_at_10_max
value: 15.55993531984175
- type: nauc_mrr_at_10_std
value: -20.83691814120607
- type: nauc_mrr_at_1_diff1
value: 42.60171179362715
- type: nauc_mrr_at_1_max
value: 13.503932215044154
- type: nauc_mrr_at_1_std
value: -18.780402607759942
- type: nauc_mrr_at_20_diff1
value: 39.8638926924116
- type: nauc_mrr_at_20_max
value: 15.68649896973148
- type: nauc_mrr_at_20_std
value: -20.32923556364886
- type: nauc_mrr_at_3_diff1
value: 39.87333054371521
- type: nauc_mrr_at_3_max
value: 14.40273581805097
- type: nauc_mrr_at_3_std
value: -21.497091878550705
- type: nauc_mrr_at_5_diff1
value: 39.710535895257806
- type: nauc_mrr_at_5_max
value: 15.125588064614778
- type: nauc_mrr_at_5_std
value: -21.372841992590516
- type: nauc_ndcg_at_1000_diff1
value: 39.530182010129316
- type: nauc_ndcg_at_1000_max
value: 16.721078036825325
- type: nauc_ndcg_at_1000_std
value: -19.333446229997676
- type: nauc_ndcg_at_100_diff1
value: 39.52495437545947
- type: nauc_ndcg_at_100_max
value: 17.316175958553544
- type: nauc_ndcg_at_100_std
value: -18.167108179801435
- type: nauc_ndcg_at_10_diff1
value: 39.060097577182404
- type: nauc_ndcg_at_10_max
value: 17.03188717594285
- type: nauc_ndcg_at_10_std
value: -21.087768427189857
- type: nauc_ndcg_at_1_diff1
value: 42.60171179362715
- type: nauc_ndcg_at_1_max
value: 13.503932215044154
- type: nauc_ndcg_at_1_std
value: -18.780402607759942
- type: nauc_ndcg_at_20_diff1
value: 39.13715123963872
- type: nauc_ndcg_at_20_max
value: 17.577613449488744
- type: nauc_ndcg_at_20_std
value: -19.05270718022563
- type: nauc_ndcg_at_3_diff1
value: 39.185894874198965
- type: nauc_ndcg_at_3_max
value: 14.57528860178114
- type: nauc_ndcg_at_3_std
value: -22.454121752010494
- type: nauc_ndcg_at_5_diff1
value: 38.76484115322762
- type: nauc_ndcg_at_5_max
value: 15.867435457401843
- type: nauc_ndcg_at_5_std
value: -22.38692452968968
- type: nauc_precision_at_1000_diff1
value: -4.470494643119554
- type: nauc_precision_at_1000_max
value: 5.532704785018603
- type: nauc_precision_at_1000_std
value: 8.431501972980776
- type: nauc_precision_at_100_diff1
value: 13.915615975206203
- type: nauc_precision_at_100_max
value: 20.932636836042228
- type: nauc_precision_at_100_std
value: 17.71841847550733
- type: nauc_precision_at_10_diff1
value: 31.897757036479256
- type: nauc_precision_at_10_max
value: 21.47296249503087
- type: nauc_precision_at_10_std
value: -17.9085167972799
- type: nauc_precision_at_1_diff1
value: 42.60171179362715
- type: nauc_precision_at_1_max
value: 13.503932215044154
- type: nauc_precision_at_1_std
value: -18.780402607759942
- type: nauc_precision_at_20_diff1
value: 27.89782616667338
- type: nauc_precision_at_20_max
value: 24.171214140761222
- type: nauc_precision_at_20_std
value: -5.243858031824212
- type: nauc_precision_at_3_diff1
value: 36.10358380302458
- type: nauc_precision_at_3_max
value: 14.942314403638854
- type: nauc_precision_at_3_std
value: -24.229120472212184
- type: nauc_precision_at_5_diff1
value: 33.65809304158813
- type: nauc_precision_at_5_max
value: 17.8340962571853
- type: nauc_precision_at_5_std
value: -23.350679607104
- type: nauc_recall_at_1000_diff1
value: 16.65078016225262
- type: nauc_recall_at_1000_max
value: 51.9145485909716
- type: nauc_recall_at_1000_std
value: 69.3989955532773
- type: nauc_recall_at_100_diff1
value: 35.88717637896406
- type: nauc_recall_at_100_max
value: 39.31009514053865
- type: nauc_recall_at_100_std
value: 28.07382512953391
- type: nauc_recall_at_10_diff1
value: 35.70195220879436
- type: nauc_recall_at_10_max
value: 22.909702960394753
- type: nauc_recall_at_10_std
value: -20.011356717361004
- type: nauc_recall_at_1_diff1
value: 42.74534997345039
- type: nauc_recall_at_1_max
value: 13.355853021671452
- type: nauc_recall_at_1_std
value: -18.991398512806423
- type: nauc_recall_at_20_diff1
value: 35.01347244311519
- type: nauc_recall_at_20_max
value: 28.0791849525668
- type: nauc_recall_at_20_std
value: -7.596941121600616
- type: nauc_recall_at_3_diff1
value: 36.697694842739764
- type: nauc_recall_at_3_max
value: 15.087805237942867
- type: nauc_recall_at_3_std
value: -24.48394612054427
- type: nauc_recall_at_5_diff1
value: 35.40436459395654
- type: nauc_recall_at_5_max
value: 18.303370938978983
- type: nauc_recall_at_5_std
value: -24.4618489698988
- type: ndcg_at_1
value: 26.891
- type: ndcg_at_10
value: 47.238
- type: ndcg_at_100
value: 52.290000000000006
- type: ndcg_at_1000
value: 53.095000000000006
- type: ndcg_at_20
value: 49.675000000000004
- type: ndcg_at_3
value: 38.951
- type: ndcg_at_5
value: 43.208
- type: precision_at_1
value: 26.891
- type: precision_at_10
value: 7.345
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.106
- type: precision_at_20
value: 4.18
- type: precision_at_3
value: 16.519000000000002
- type: precision_at_5
value: 12.086
- type: recall_at_1
value: 26.118999999999996
- type: recall_at_10
value: 70.17
- type: recall_at_100
value: 93.235
- type: recall_at_1000
value: 99.256
- type: recall_at_20
value: 79.599
- type: recall_at_3
value: 47.714
- type: recall_at_5
value: 57.913000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.43502051983585
- type: f1
value: 97.28386890066653
- type: f1_weighted
value: 97.44797640554678
- type: main_score
value: 97.43502051983585
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 92.80665754673961
- type: f1
value: 74.62374030390122
- type: f1_weighted
value: 93.45063761064331
- type: main_score
value: 92.80665754673961
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 81.14324142568931
- type: f1
value: 79.25715428394179
- type: f1_weighted
value: 80.06102282439677
- type: main_score
value: 81.14324142568931
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 83.52723604572965
- type: f1
value: 82.43215997685817
- type: f1_weighted
value: 83.18340208761732
- type: main_score
value: 83.52723604572965
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 46.38149873605036
- type: v_measure
value: 46.38149873605036
- type: v_measure_std
value: 1.0749788856434186
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 44.8945524407664
- type: v_measure
value: 44.8945524407664
- type: v_measure_std
value: 1.2389193370528488
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 31.464871623418794
- type: map
value: 31.464871623418794
- type: mrr
value: 32.50613994693332
- type: nAUC_map_diff1
value: 13.899720409558803
- type: nAUC_map_max
value: -24.855993992819574
- type: nAUC_map_std
value: -1.7042823879133022
- type: nAUC_mrr_diff1
value: 12.74961757902417
- type: nAUC_mrr_max
value: -19.359704641723024
- type: nAUC_mrr_std
value: 0.2553333974009825
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 40.608
- type: map_at_1
value: 6.814000000000001
- type: map_at_10
value: 15.658
- type: map_at_100
value: 19.820999999999998
- type: map_at_1000
value: 21.478
- type: map_at_20
value: 17.378
- type: map_at_3
value: 11.469
- type: map_at_5
value: 13.411999999999999
- type: mrr_at_1
value: 53.25077399380805
- type: mrr_at_10
value: 62.325667108948835
- type: mrr_at_100
value: 62.74305120891361
- type: mrr_at_1000
value: 62.7698046867024
- type: mrr_at_20
value: 62.55276398505989
- type: mrr_at_3
value: 60.629514963880304
- type: mrr_at_5
value: 61.71310629514966
- type: nauc_map_at_1000_diff1
value: 28.465065572581196
- type: nauc_map_at_1000_max
value: 31.422114483712072
- type: nauc_map_at_1000_std
value: 14.137580326409468
- type: nauc_map_at_100_diff1
value: 29.640817609314634
- type: nauc_map_at_100_max
value: 31.497301598927084
- type: nauc_map_at_100_std
value: 11.373472181189578
- type: nauc_map_at_10_diff1
value: 33.81702887042631
- type: nauc_map_at_10_max
value: 26.89432917723802
- type: nauc_map_at_10_std
value: -0.6017226421625675
- type: nauc_map_at_1_diff1
value: 47.92465446394334
- type: nauc_map_at_1_max
value: 13.298220863216647
- type: nauc_map_at_1_std
value: -16.028417988057573
- type: nauc_map_at_20_diff1
value: 31.26496100545571
- type: nauc_map_at_20_max
value: 29.279825837457384
- type: nauc_map_at_20_std
value: 4.788387672392815
- type: nauc_map_at_3_diff1
value: 38.62641220798505
- type: nauc_map_at_3_max
value: 19.08045714651334
- type: nauc_map_at_3_std
value: -8.922459068853476
- type: nauc_map_at_5_diff1
value: 36.53813025261598
- type: nauc_map_at_5_max
value: 22.40454946619978
- type: nauc_map_at_5_std
value: -6.486492008466358
- type: nauc_mrr_at_1000_diff1
value: 28.69304903653812
- type: nauc_mrr_at_1000_max
value: 45.96553711346646
- type: nauc_mrr_at_1000_std
value: 27.85147914917026
- type: nauc_mrr_at_100_diff1
value: 28.68229560104124
- type: nauc_mrr_at_100_max
value: 45.97763687298361
- type: nauc_mrr_at_100_std
value: 27.877065559802784
- type: nauc_mrr_at_10_diff1
value: 28.87265406525776
- type: nauc_mrr_at_10_max
value: 46.14425971703
- type: nauc_mrr_at_10_std
value: 28.012998155541958
- type: nauc_mrr_at_1_diff1
value: 30.386284704935175
- type: nauc_mrr_at_1_max
value: 39.555268501126484
- type: nauc_mrr_at_1_std
value: 19.003917244393055
- type: nauc_mrr_at_20_diff1
value: 28.688630297997626
- type: nauc_mrr_at_20_max
value: 46.06770135613457
- type: nauc_mrr_at_20_std
value: 27.983585167898088
- type: nauc_mrr_at_3_diff1
value: 28.202788031669435
- type: nauc_mrr_at_3_max
value: 44.67641807824994
- type: nauc_mrr_at_3_std
value: 26.01547375749013
- type: nauc_mrr_at_5_diff1
value: 29.136769721750184
- type: nauc_mrr_at_5_max
value: 45.59466095537007
- type: nauc_mrr_at_5_std
value: 26.52847275188725
- type: nauc_ndcg_at_1000_diff1
value: 23.41490301896588
- type: nauc_ndcg_at_1000_max
value: 44.66765746203081
- type: nauc_ndcg_at_1000_std
value: 32.929259388855655
- type: nauc_ndcg_at_100_diff1
value: 24.807775380513515
- type: nauc_ndcg_at_100_max
value: 38.94194783706628
- type: nauc_ndcg_at_100_std
value: 25.83927535706682
- type: nauc_ndcg_at_10_diff1
value: 23.27753233512705
- type: nauc_ndcg_at_10_max
value: 40.457762461961416
- type: nauc_ndcg_at_10_std
value: 26.221695797523196
- type: nauc_ndcg_at_1_diff1
value: 30.505733988194233
- type: nauc_ndcg_at_1_max
value: 37.29986556956722
- type: nauc_ndcg_at_1_std
value: 18.521315149723165
- type: nauc_ndcg_at_20_diff1
value: 22.73159408013849
- type: nauc_ndcg_at_20_max
value: 38.19415319342381
- type: nauc_ndcg_at_20_std
value: 25.651700748252814
- type: nauc_ndcg_at_3_diff1
value: 22.666171391424808
- type: nauc_ndcg_at_3_max
value: 39.654001155276696
- type: nauc_ndcg_at_3_std
value: 22.713597835307368
- type: nauc_ndcg_at_5_diff1
value: 23.145977550257783
- type: nauc_ndcg_at_5_max
value: 41.26418949542231
- type: nauc_ndcg_at_5_std
value: 24.626721054592018
- type: nauc_precision_at_1000_diff1
value: -10.668076403397022
- type: nauc_precision_at_1000_max
value: -2.687482461632398
- type: nauc_precision_at_1000_std
value: 23.984079094098455
- type: nauc_precision_at_100_diff1
value: -7.159873373344272
- type: nauc_precision_at_100_max
value: 12.819553702257164
- type: nauc_precision_at_100_std
value: 37.50378439821877
- type: nauc_precision_at_10_diff1
value: 2.2241329156010665
- type: nauc_precision_at_10_max
value: 36.76680313244236
- type: nauc_precision_at_10_std
value: 39.4677017320664
- type: nauc_precision_at_1_diff1
value: 30.386284704935175
- type: nauc_precision_at_1_max
value: 39.555268501126484
- type: nauc_precision_at_1_std
value: 19.003917244393055
- type: nauc_precision_at_20_diff1
value: -2.9834608982986115
- type: nauc_precision_at_20_max
value: 27.914227404658654
- type: nauc_precision_at_20_std
value: 39.80986422338386
- type: nauc_precision_at_3_diff1
value: 10.931080335409446
- type: nauc_precision_at_3_max
value: 39.74599313443494
- type: nauc_precision_at_3_std
value: 27.88015806277605
- type: nauc_precision_at_5_diff1
value: 6.375575138873724
- type: nauc_precision_at_5_max
value: 40.204218087817274
- type: nauc_precision_at_5_std
value: 33.14483938245918
- type: nauc_recall_at_1000_diff1
value: 6.059681037512472
- type: nauc_recall_at_1000_max
value: 16.088632078670198
- type: nauc_recall_at_1000_std
value: 13.844947244341302
- type: nauc_recall_at_100_diff1
value: 16.283808676503824
- type: nauc_recall_at_100_max
value: 21.37014633122509
- type: nauc_recall_at_100_std
value: 10.876847345257328
- type: nauc_recall_at_10_diff1
value: 25.843865694907286
- type: nauc_recall_at_10_max
value: 21.781125041367748
- type: nauc_recall_at_10_std
value: -2.1399146426462066
- type: nauc_recall_at_1_diff1
value: 47.92465446394334
- type: nauc_recall_at_1_max
value: 13.298220863216647
- type: nauc_recall_at_1_std
value: -16.028417988057573
- type: nauc_recall_at_20_diff1
value: 22.333265905634082
- type: nauc_recall_at_20_max
value: 24.167043456458593
- type: nauc_recall_at_20_std
value: 4.110610548061356
- type: nauc_recall_at_3_diff1
value: 35.695924824653886
- type: nauc_recall_at_3_max
value: 17.912601287674416
- type: nauc_recall_at_3_std
value: -9.102880017474895
- type: nauc_recall_at_5_diff1
value: 31.797504877636356
- type: nauc_recall_at_5_max
value: 20.00800506945161
- type: nauc_recall_at_5_std
value: -7.431905060084433
- type: ndcg_at_1
value: 51.548
- type: ndcg_at_10
value: 40.608
- type: ndcg_at_100
value: 37.328
- type: ndcg_at_1000
value: 45.927
- type: ndcg_at_20
value: 38.062000000000005
- type: ndcg_at_3
value: 46.886
- type: ndcg_at_5
value: 44.265
- type: precision_at_1
value: 53.251000000000005
- type: precision_at_10
value: 29.782999999999998
- type: precision_at_100
value: 9.331
- type: precision_at_1000
value: 2.233
- type: precision_at_20
value: 22.136
- type: precision_at_3
value: 43.653
- type: precision_at_5
value: 38.204
- type: recall_at_1
value: 6.814000000000001
- type: recall_at_10
value: 20.477
- type: recall_at_100
value: 38.190000000000005
- type: recall_at_1000
value: 69.222
- type: recall_at_20
value: 24.462999999999997
- type: recall_at_3
value: 12.592999999999998
- type: recall_at_5
value: 15.847
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 74.639
- type: map_at_1
value: 51.910000000000004
- type: map_at_10
value: 68.17800000000001
- type: map_at_100
value: 68.63
- type: map_at_1000
value: 68.636
- type: map_at_20
value: 68.47999999999999
- type: map_at_3
value: 64.631
- type: map_at_5
value: 66.84400000000001
- type: mrr_at_1
value: 57.87949015063732
- type: mrr_at_10
value: 70.52962165940157
- type: mrr_at_100
value: 70.79433311260526
- type: mrr_at_1000
value: 70.7996618331687
- type: mrr_at_20
value: 70.71145788136596
- type: mrr_at_3
value: 68.18752414059479
- type: mrr_at_5
value: 69.7025878717652
- type: nauc_map_at_1000_diff1
value: 48.25341804569
- type: nauc_map_at_1000_max
value: 19.942213273803425
- type: nauc_map_at_1000_std
value: -11.573618793573225
- type: nauc_map_at_100_diff1
value: 48.25137335400359
- type: nauc_map_at_100_max
value: 19.9478086771177
- type: nauc_map_at_100_std
value: -11.569057776842822
- type: nauc_map_at_10_diff1
value: 48.15169614121807
- type: nauc_map_at_10_max
value: 19.891488849767043
- type: nauc_map_at_10_std
value: -11.957859050856753
- type: nauc_map_at_1_diff1
value: 51.426208875589964
- type: nauc_map_at_1_max
value: 15.25704840433183
- type: nauc_map_at_1_std
value: -12.188645475831342
- type: nauc_map_at_20_diff1
value: 48.217050041022134
- type: nauc_map_at_20_max
value: 19.91541002135707
- type: nauc_map_at_20_std
value: -11.61880125598817
- type: nauc_map_at_3_diff1
value: 48.03870795080792
- type: nauc_map_at_3_max
value: 18.73819795102705
- type: nauc_map_at_3_std
value: -13.30137554476926
- type: nauc_map_at_5_diff1
value: 48.104924652402026
- type: nauc_map_at_5_max
value: 19.436408585016583
- type: nauc_map_at_5_std
value: -12.697778155809297
- type: nauc_mrr_at_1000_diff1
value: 47.93057727525053
- type: nauc_mrr_at_1000_max
value: 21.95380342666769
- type: nauc_mrr_at_1000_std
value: -8.460873325747786
- type: nauc_mrr_at_100_diff1
value: 47.927872117705206
- type: nauc_mrr_at_100_max
value: 21.957974176338706
- type: nauc_mrr_at_100_std
value: -8.457120495941842
- type: nauc_mrr_at_10_diff1
value: 47.88807872165451
- type: nauc_mrr_at_10_max
value: 21.99165186070592
- type: nauc_mrr_at_10_std
value: -8.622458245224722
- type: nauc_mrr_at_1_diff1
value: 50.14489631746599
- type: nauc_mrr_at_1_max
value: 20.08032341500605
- type: nauc_mrr_at_1_std
value: -8.516733561470257
- type: nauc_mrr_at_20_diff1
value: 47.903958398396284
- type: nauc_mrr_at_20_max
value: 21.984121272453255
- type: nauc_mrr_at_20_std
value: -8.425148199913735
- type: nauc_mrr_at_3_diff1
value: 47.42105094406055
- type: nauc_mrr_at_3_max
value: 22.077775847329576
- type: nauc_mrr_at_3_std
value: -8.740898452659854
- type: nauc_mrr_at_5_diff1
value: 47.57979388400372
- type: nauc_mrr_at_5_max
value: 22.125349463627074
- type: nauc_mrr_at_5_std
value: -8.52623396445785
- type: nauc_ndcg_at_1000_diff1
value: 47.810202958915035
- type: nauc_ndcg_at_1000_max
value: 21.392873440606735
- type: nauc_ndcg_at_1000_std
value: -9.795633951314732
- type: nauc_ndcg_at_100_diff1
value: 47.76922593186765
- type: nauc_ndcg_at_100_max
value: 21.560230506228212
- type: nauc_ndcg_at_100_std
value: -9.642938046812427
- type: nauc_ndcg_at_10_diff1
value: 47.319712738884235
- type: nauc_ndcg_at_10_max
value: 21.563611991893776
- type: nauc_ndcg_at_10_std
value: -10.8291523074647
- type: nauc_ndcg_at_1_diff1
value: 50.218645783866165
- type: nauc_ndcg_at_1_max
value: 20.1519999109772
- type: nauc_ndcg_at_1_std
value: -8.435852939261638
- type: nauc_ndcg_at_20_diff1
value: 47.549440903272576
- type: nauc_ndcg_at_20_max
value: 21.60946482480832
- type: nauc_ndcg_at_20_std
value: -9.676726716756642
- type: nauc_ndcg_at_3_diff1
value: 46.85874975295731
- type: nauc_ndcg_at_3_max
value: 19.97364939016392
- type: nauc_ndcg_at_3_std
value: -12.69379259341466
- type: nauc_ndcg_at_5_diff1
value: 46.97495524419072
- type: nauc_ndcg_at_5_max
value: 20.769975752692034
- type: nauc_ndcg_at_5_std
value: -11.934684225152365
- type: nauc_precision_at_1000_diff1
value: -19.990666552030007
- type: nauc_precision_at_1000_max
value: 10.876772512124212
- type: nauc_precision_at_1000_std
value: 20.48008319920701
- type: nauc_precision_at_100_diff1
value: -17.968775797474056
- type: nauc_precision_at_100_max
value: 12.501874770426873
- type: nauc_precision_at_100_std
value: 20.49710605336997
- type: nauc_precision_at_10_diff1
value: -6.8867086393814585
- type: nauc_precision_at_10_max
value: 17.14868242242726
- type: nauc_precision_at_10_std
value: 13.21690743137821
- type: nauc_precision_at_1_diff1
value: 50.218645783866165
- type: nauc_precision_at_1_max
value: 20.1519999109772
- type: nauc_precision_at_1_std
value: -8.435852939261638
- type: nauc_precision_at_20_diff1
value: -11.752160128790043
- type: nauc_precision_at_20_max
value: 15.237636262112057
- type: nauc_precision_at_20_std
value: 18.180728055218886
- type: nauc_precision_at_3_diff1
value: 15.96609445885222
- type: nauc_precision_at_3_max
value: 20.18494092548839
- type: nauc_precision_at_3_std
value: -1.0589223899689346
- type: nauc_precision_at_5_diff1
value: 4.644778831019537
- type: nauc_precision_at_5_max
value: 18.90354311244982
- type: nauc_precision_at_5_std
value: 5.473254605926224
- type: nauc_recall_at_1000_diff1
value: 43.6311966998835
- type: nauc_recall_at_1000_max
value: 71.26453607826319
- type: nauc_recall_at_1000_std
value: 63.74850911403961
- type: nauc_recall_at_100_diff1
value: 43.81911515697184
- type: nauc_recall_at_100_max
value: 49.13377769323508
- type: nauc_recall_at_100_std
value: 22.92335191809556
- type: nauc_recall_at_10_diff1
value: 39.868975772803154
- type: nauc_recall_at_10_max
value: 27.1991395214908
- type: nauc_recall_at_10_std
value: -11.928586693931537
- type: nauc_recall_at_1_diff1
value: 51.426208875589964
- type: nauc_recall_at_1_max
value: 15.25704840433183
- type: nauc_recall_at_1_std
value: -12.188645475831342
- type: nauc_recall_at_20_diff1
value: 40.48934810019854
- type: nauc_recall_at_20_max
value: 31.541919411302256
- type: nauc_recall_at_20_std
value: 1.6695278429926617
- type: nauc_recall_at_3_diff1
value: 41.85552950416144
- type: nauc_recall_at_3_max
value: 19.544146808722118
- type: nauc_recall_at_3_std
value: -15.442392634895718
- type: nauc_recall_at_5_diff1
value: 40.52718222998753
- type: nauc_recall_at_5_max
value: 21.436637732490112
- type: nauc_recall_at_5_std
value: -14.812931287114298
- type: ndcg_at_1
value: 57.851
- type: ndcg_at_10
value: 74.639
- type: ndcg_at_100
value: 76.334
- type: ndcg_at_1000
value: 76.483
- type: ndcg_at_20
value: 75.543
- type: ndcg_at_3
value: 68.56400000000001
- type: ndcg_at_5
value: 71.977
- type: precision_at_1
value: 57.851
- type: precision_at_10
value: 11.023
- type: precision_at_100
value: 1.2
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.737
- type: precision_at_3
value: 29.788999999999998
- type: precision_at_5
value: 19.971
- type: recall_at_1
value: 51.910000000000004
- type: recall_at_10
value: 91.50500000000001
- type: recall_at_100
value: 98.571
- type: recall_at_1000
value: 99.681
- type: recall_at_20
value: 94.78999999999999
- type: recall_at_3
value: 76.32
- type: recall_at_5
value: 83.992
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 90.78500000000001
- type: map_at_1
value: 73.36399999999999
- type: map_at_10
value: 87.561
- type: map_at_100
value: 88.139
- type: map_at_1000
value: 88.151
- type: map_at_20
value: 87.953
- type: map_at_3
value: 84.738
- type: map_at_5
value: 86.533
- type: mrr_at_1
value: 84.42
- type: mrr_at_10
value: 89.91527777777762
- type: mrr_at_100
value: 89.98056337523398
- type: mrr_at_1000
value: 89.98095050387363
- type: mrr_at_20
value: 89.96620262859754
- type: mrr_at_3
value: 89.17333333333308
- type: mrr_at_5
value: 89.69083333333309
- type: nauc_map_at_1000_diff1
value: 79.22837535499788
- type: nauc_map_at_1000_max
value: 15.229135965576624
- type: nauc_map_at_1000_std
value: -65.13592340820175
- type: nauc_map_at_100_diff1
value: 79.22964850175666
- type: nauc_map_at_100_max
value: 15.17873352763656
- type: nauc_map_at_100_std
value: -65.21743661211563
- type: nauc_map_at_10_diff1
value: 79.40458559409714
- type: nauc_map_at_10_max
value: 13.98665034413499
- type: nauc_map_at_10_std
value: -67.98126748033091
- type: nauc_map_at_1_diff1
value: 82.80184669392824
- type: nauc_map_at_1_max
value: 8.856089749102615
- type: nauc_map_at_1_std
value: -54.391672423052306
- type: nauc_map_at_20_diff1
value: 79.28140675062055
- type: nauc_map_at_20_max
value: 14.813382162586338
- type: nauc_map_at_20_std
value: -66.3069868467324
- type: nauc_map_at_3_diff1
value: 79.86030933775763
- type: nauc_map_at_3_max
value: 11.48286207079137
- type: nauc_map_at_3_std
value: -69.0349738122831
- type: nauc_map_at_5_diff1
value: 79.49804780757196
- type: nauc_map_at_5_max
value: 13.014896114509048
- type: nauc_map_at_5_std
value: -69.24755978940475
- type: nauc_mrr_at_1000_diff1
value: 80.32513802860564
- type: nauc_mrr_at_1000_max
value: 19.133166308087606
- type: nauc_mrr_at_1000_std
value: -60.70934579066254
- type: nauc_mrr_at_100_diff1
value: 80.32534858109838
- type: nauc_mrr_at_100_max
value: 19.134687103600278
- type: nauc_mrr_at_100_std
value: -60.70792004190786
- type: nauc_mrr_at_10_diff1
value: 80.32753693824817
- type: nauc_mrr_at_10_max
value: 19.07334892748873
- type: nauc_mrr_at_10_std
value: -61.00343339014411
- type: nauc_mrr_at_1_diff1
value: 80.94621936022547
- type: nauc_mrr_at_1_max
value: 19.098389309511855
- type: nauc_mrr_at_1_std
value: -56.281240215328744
- type: nauc_mrr_at_20_diff1
value: 80.32886311854372
- type: nauc_mrr_at_20_max
value: 19.144658294844398
- type: nauc_mrr_at_20_std
value: -60.80113609312996
- type: nauc_mrr_at_3_diff1
value: 80.15719807177194
- type: nauc_mrr_at_3_max
value: 19.21110611323521
- type: nauc_mrr_at_3_std
value: -61.55939529271486
- type: nauc_mrr_at_5_diff1
value: 80.25659942502026
- type: nauc_mrr_at_5_max
value: 19.18410850302189
- type: nauc_mrr_at_5_std
value: -61.34080339288672
- type: nauc_ndcg_at_1000_diff1
value: 79.30792401562252
- type: nauc_ndcg_at_1000_max
value: 17.228735754297546
- type: nauc_ndcg_at_1000_std
value: -63.0841035754019
- type: nauc_ndcg_at_100_diff1
value: 79.32390069722452
- type: nauc_ndcg_at_100_max
value: 17.03598908274086
- type: nauc_ndcg_at_100_std
value: -63.47645382028065
- type: nauc_ndcg_at_10_diff1
value: 79.3889546948115
- type: nauc_ndcg_at_10_max
value: 14.754867908259872
- type: nauc_ndcg_at_10_std
value: -68.67778986513876
- type: nauc_ndcg_at_1_diff1
value: 80.96616311503684
- type: nauc_ndcg_at_1_max
value: 19.251282966440872
- type: nauc_ndcg_at_1_std
value: -56.28623708629156
- type: nauc_ndcg_at_20_diff1
value: 79.36281855858415
- type: nauc_ndcg_at_20_max
value: 16.27660669783678
- type: nauc_ndcg_at_20_std
value: -66.242757220336
- type: nauc_ndcg_at_3_diff1
value: 78.59162334237037
- type: nauc_ndcg_at_3_max
value: 14.310999607705163
- type: nauc_ndcg_at_3_std
value: -67.43291489588975
- type: nauc_ndcg_at_5_diff1
value: 79.02195291821941
- type: nauc_ndcg_at_5_max
value: 14.703906318131077
- type: nauc_ndcg_at_5_std
value: -69.07043859982423
- type: nauc_precision_at_1000_diff1
value: -45.99620856951242
- type: nauc_precision_at_1000_max
value: 7.125627651370571
- type: nauc_precision_at_1000_std
value: 50.082304720650285
- type: nauc_precision_at_100_diff1
value: -45.93971803560242
- type: nauc_precision_at_100_max
value: 6.31666945015899
- type: nauc_precision_at_100_std
value: 48.43063822533797
- type: nauc_precision_at_10_diff1
value: -42.57062696143343
- type: nauc_precision_at_10_max
value: 4.926448070411812
- type: nauc_precision_at_10_std
value: 32.322003900367136
- type: nauc_precision_at_1_diff1
value: 80.96616311503684
- type: nauc_precision_at_1_max
value: 19.251282966440872
- type: nauc_precision_at_1_std
value: -56.28623708629156
- type: nauc_precision_at_20_diff1
value: -44.76495643463962
- type: nauc_precision_at_20_max
value: 5.4607809221385635
- type: nauc_precision_at_20_std
value: 40.49645695309527
- type: nauc_precision_at_3_diff1
value: -26.588107140371964
- type: nauc_precision_at_3_max
value: 6.575677555357888
- type: nauc_precision_at_3_std
value: 7.485155494378594
- type: nauc_precision_at_5_diff1
value: -36.70696804880982
- type: nauc_precision_at_5_max
value: 5.972677452493278
- type: nauc_precision_at_5_std
value: 21.08447210740431
- type: nauc_recall_at_1000_diff1
value: 10.886946378066934
- type: nauc_recall_at_1000_max
value: -98.39304801447328
- type: nauc_recall_at_1000_std
value: -44.363214454766606
- type: nauc_recall_at_100_diff1
value: 77.70428152195116
- type: nauc_recall_at_100_max
value: 0.5837837369290989
- type: nauc_recall_at_100_std
value: -98.47672805335857
- type: nauc_recall_at_10_diff1
value: 76.85812517627498
- type: nauc_recall_at_10_max
value: -1.0832219226903645
- type: nauc_recall_at_10_std
value: -110.90702861885376
- type: nauc_recall_at_1_diff1
value: 82.80184669392824
- type: nauc_recall_at_1_max
value: 8.856089749102615
- type: nauc_recall_at_1_std
value: -54.391672423052306
- type: nauc_recall_at_20_diff1
value: 76.9632614403016
- type: nauc_recall_at_20_max
value: 5.480849453695013
- type: nauc_recall_at_20_std
value: -115.63128053573668
- type: nauc_recall_at_3_diff1
value: 76.50668600748783
- type: nauc_recall_at_3_max
value: 5.787499766680326
- type: nauc_recall_at_3_std
value: -82.19925386253946
- type: nauc_recall_at_5_diff1
value: 75.50256322857665
- type: nauc_recall_at_5_max
value: 4.71365237505925
- type: nauc_recall_at_5_std
value: -93.67905025813903
- type: ndcg_at_1
value: 84.41
- type: ndcg_at_10
value: 90.78500000000001
- type: ndcg_at_100
value: 91.699
- type: ndcg_at_1000
value: 91.751
- type: ndcg_at_20
value: 91.31099999999999
- type: ndcg_at_3
value: 88.434
- type: ndcg_at_5
value: 89.754
- type: precision_at_1
value: 84.41
- type: precision_at_10
value: 13.757
- type: precision_at_100
value: 1.544
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.266
- type: precision_at_3
value: 38.800000000000004
- type: precision_at_5
value: 25.396
- type: recall_at_1
value: 73.36399999999999
- type: recall_at_10
value: 96.832
- type: recall_at_100
value: 99.799
- type: recall_at_1000
value: 99.995
- type: recall_at_20
value: 98.48700000000001
- type: recall_at_3
value: 89.85499999999999
- type: recall_at_5
value: 93.758
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 72.36527124460562
- type: v_measure
value: 72.36527124460562
- type: v_measure_std
value: 2.7778891945364195
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 73.89142551084535
- type: v_measure
value: 73.89142551084535
- type: v_measure_std
value: 11.258242813412751
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 28.538000000000004
- type: map_at_1
value: 6.643000000000001
- type: map_at_10
value: 17.805
- type: map_at_100
value: 21.054000000000002
- type: map_at_1000
value: 21.442
- type: map_at_20
value: 19.503999999999998
- type: map_at_3
value: 12.648000000000001
- type: map_at_5
value: 15.048
- type: mrr_at_1
value: 32.800000000000004
- type: mrr_at_10
value: 46.25075396825397
- type: mrr_at_100
value: 47.158633401334065
- type: mrr_at_1000
value: 47.17670558089014
- type: mrr_at_20
value: 46.83560758470973
- type: mrr_at_3
value: 42.499999999999964
- type: mrr_at_5
value: 44.69499999999993
- type: nauc_map_at_1000_diff1
value: 10.789045939743312
- type: nauc_map_at_1000_max
value: 32.14632527014952
- type: nauc_map_at_1000_std
value: 20.19671140555673
- type: nauc_map_at_100_diff1
value: 10.751726290304457
- type: nauc_map_at_100_max
value: 32.11882933379086
- type: nauc_map_at_100_std
value: 20.101870633638903
- type: nauc_map_at_10_diff1
value: 12.006914710409074
- type: nauc_map_at_10_max
value: 30.41130279511205
- type: nauc_map_at_10_std
value: 16.189788376384865
- type: nauc_map_at_1_diff1
value: 21.38187908816387
- type: nauc_map_at_1_max
value: 24.99538128197334
- type: nauc_map_at_1_std
value: 9.118883517128223
- type: nauc_map_at_20_diff1
value: 10.802870277150753
- type: nauc_map_at_20_max
value: 31.22006132139698
- type: nauc_map_at_20_std
value: 18.073673400388422
- type: nauc_map_at_3_diff1
value: 16.347189082771948
- type: nauc_map_at_3_max
value: 28.33087344789753
- type: nauc_map_at_3_std
value: 11.69551675125919
- type: nauc_map_at_5_diff1
value: 14.437136962876188
- type: nauc_map_at_5_max
value: 29.874785069482506
- type: nauc_map_at_5_std
value: 13.630292208839778
- type: nauc_mrr_at_1000_diff1
value: 18.115657218810394
- type: nauc_mrr_at_1000_max
value: 26.36974876533877
- type: nauc_mrr_at_1000_std
value: 12.945521122077992
- type: nauc_mrr_at_100_diff1
value: 18.101791416005494
- type: nauc_mrr_at_100_max
value: 26.38722397148215
- type: nauc_mrr_at_100_std
value: 12.978980203045584
- type: nauc_mrr_at_10_diff1
value: 18.054407102669657
- type: nauc_mrr_at_10_max
value: 26.357266760977115
- type: nauc_mrr_at_10_std
value: 13.047191230283303
- type: nauc_mrr_at_1_diff1
value: 21.72510847894924
- type: nauc_mrr_at_1_max
value: 25.44857511434268
- type: nauc_mrr_at_1_std
value: 9.345581415183856
- type: nauc_mrr_at_20_diff1
value: 18.027869822742357
- type: nauc_mrr_at_20_max
value: 26.36127613669918
- type: nauc_mrr_at_20_std
value: 12.919096925478375
- type: nauc_mrr_at_3_diff1
value: 18.073482602333435
- type: nauc_mrr_at_3_max
value: 25.323655770056707
- type: nauc_mrr_at_3_std
value: 11.885953111994151
- type: nauc_mrr_at_5_diff1
value: 18.111686003016334
- type: nauc_mrr_at_5_max
value: 26.192945508761944
- type: nauc_mrr_at_5_std
value: 12.15347243046111
- type: nauc_ndcg_at_1000_diff1
value: 11.307254540740201
- type: nauc_ndcg_at_1000_max
value: 33.71756163776018
- type: nauc_ndcg_at_1000_std
value: 24.685385257460815
- type: nauc_ndcg_at_100_diff1
value: 10.02155722506797
- type: nauc_ndcg_at_100_max
value: 34.062871320003815
- type: nauc_ndcg_at_100_std
value: 25.946717818165503
- type: nauc_ndcg_at_10_diff1
value: 11.66408962417464
- type: nauc_ndcg_at_10_max
value: 30.297422749362223
- type: nauc_ndcg_at_10_std
value: 17.6607178434512
- type: nauc_ndcg_at_1_diff1
value: 21.72510847894924
- type: nauc_ndcg_at_1_max
value: 25.44857511434268
- type: nauc_ndcg_at_1_std
value: 9.345581415183856
- type: nauc_ndcg_at_20_diff1
value: 9.94179593521292
- type: nauc_ndcg_at_20_max
value: 31.43955483410812
- type: nauc_ndcg_at_20_std
value: 20.023594363361713
- type: nauc_ndcg_at_3_diff1
value: 16.518122409217916
- type: nauc_ndcg_at_3_max
value: 28.02081341423043
- type: nauc_ndcg_at_3_std
value: 12.481239903453694
- type: nauc_ndcg_at_5_diff1
value: 14.42966455444073
- type: nauc_ndcg_at_5_max
value: 29.70088747545455
- type: nauc_ndcg_at_5_std
value: 14.235829092545904
- type: nauc_precision_at_1000_diff1
value: 1.90325448383178
- type: nauc_precision_at_1000_max
value: 30.93317218027561
- type: nauc_precision_at_1000_std
value: 38.645338287217385
- type: nauc_precision_at_100_diff1
value: 0.1252908386460034
- type: nauc_precision_at_100_max
value: 32.83232486740121
- type: nauc_precision_at_100_std
value: 38.39716435488311
- type: nauc_precision_at_10_diff1
value: 5.129281906089187
- type: nauc_precision_at_10_max
value: 29.13254144452189
- type: nauc_precision_at_10_std
value: 21.170655820058695
- type: nauc_precision_at_1_diff1
value: 21.72510847894924
- type: nauc_precision_at_1_max
value: 25.44857511434268
- type: nauc_precision_at_1_std
value: 9.345581415183856
- type: nauc_precision_at_20_diff1
value: 1.426147845409854
- type: nauc_precision_at_20_max
value: 29.55472435454713
- type: nauc_precision_at_20_std
value: 24.489789182449744
- type: nauc_precision_at_3_diff1
value: 14.305359352927194
- type: nauc_precision_at_3_max
value: 28.939644598906263
- type: nauc_precision_at_3_std
value: 13.883159077618451
- type: nauc_precision_at_5_diff1
value: 10.806549955611787
- type: nauc_precision_at_5_max
value: 30.692865412337213
- type: nauc_precision_at_5_std
value: 16.34097056419444
- type: nauc_recall_at_1000_diff1
value: 1.4145909877298652
- type: nauc_recall_at_1000_max
value: 32.02359135142732
- type: nauc_recall_at_1000_std
value: 42.60174028590349
- type: nauc_recall_at_100_diff1
value: -0.37109606033002723
- type: nauc_recall_at_100_max
value: 32.685485952066905
- type: nauc_recall_at_100_std
value: 38.93939246304491
- type: nauc_recall_at_10_diff1
value: 4.791220273088741
- type: nauc_recall_at_10_max
value: 28.788746481442786
- type: nauc_recall_at_10_std
value: 21.012624321533405
- type: nauc_recall_at_1_diff1
value: 21.38187908816387
- type: nauc_recall_at_1_max
value: 24.99538128197334
- type: nauc_recall_at_1_std
value: 9.118883517128223
- type: nauc_recall_at_20_diff1
value: 0.9133987668726881
- type: nauc_recall_at_20_max
value: 29.13398902376736
- type: nauc_recall_at_20_std
value: 24.28650862967556
- type: nauc_recall_at_3_diff1
value: 13.795646221056213
- type: nauc_recall_at_3_max
value: 28.458736997372156
- type: nauc_recall_at_3_std
value: 13.590754437517683
- type: nauc_recall_at_5_diff1
value: 10.27964083807708
- type: nauc_recall_at_5_max
value: 30.18285511857152
- type: nauc_recall_at_5_std
value: 15.981641598553725
- type: ndcg_at_1
value: 32.800000000000004
- type: ndcg_at_10
value: 28.538000000000004
- type: ndcg_at_100
value: 39.253
- type: ndcg_at_1000
value: 44.690000000000005
- type: ndcg_at_20
value: 32.523
- type: ndcg_at_3
value: 27.296
- type: ndcg_at_5
value: 23.615
- type: precision_at_1
value: 32.800000000000004
- type: precision_at_10
value: 14.77
- type: precision_at_100
value: 3.01
- type: precision_at_1000
value: 0.43
- type: precision_at_20
value: 9.69
- type: precision_at_3
value: 25.6
- type: precision_at_5
value: 20.66
- type: recall_at_1
value: 6.643000000000001
- type: recall_at_10
value: 29.992
- type: recall_at_100
value: 61.095
- type: recall_at_1000
value: 87.272
- type: recall_at_20
value: 39.33
- type: recall_at_3
value: 15.573
- type: recall_at_5
value: 20.948
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 84.37281363343187
- type: cosine_spearman
value: 83.30200195593044
- type: euclidean_pearson
value: 81.67701335794368
- type: euclidean_spearman
value: 83.3019997653498
- type: main_score
value: 83.30200195593044
- type: manhattan_pearson
value: 81.80922925216795
- type: manhattan_spearman
value: 82.45409932874257
- type: pearson
value: 84.37281363343187
- type: spearman
value: 83.30200195593044
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 86.82824905521925
- type: cosine_spearman
value: 80.98590815911939
- type: euclidean_pearson
value: 81.89218840695128
- type: euclidean_spearman
value: 80.98590725274755
- type: main_score
value: 80.98590815911939
- type: manhattan_pearson
value: 79.38582949943776
- type: manhattan_spearman
value: 77.30421542625247
- type: pearson
value: 86.82824905521925
- type: spearman
value: 80.98590815911939
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 87.19722316157294
- type: cosine_spearman
value: 87.34287142701457
- type: euclidean_pearson
value: 86.95469976545654
- type: euclidean_spearman
value: 87.34287142701457
- type: main_score
value: 87.34287142701457
- type: manhattan_pearson
value: 85.00802640790108
- type: manhattan_spearman
value: 84.8446065481803
- type: pearson
value: 87.19722316157294
- type: spearman
value: 87.34287142701457
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 84.82646675904164
- type: cosine_spearman
value: 84.38843815801556
- type: euclidean_pearson
value: 83.62102244440854
- type: euclidean_spearman
value: 84.38846972126379
- type: main_score
value: 84.38843815801556
- type: manhattan_pearson
value: 82.56864079042991
- type: manhattan_spearman
value: 82.88684800532234
- type: pearson
value: 84.82646675904164
- type: spearman
value: 84.38843815801556
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 89.69909533656704
- type: cosine_spearman
value: 89.74723322749233
- type: euclidean_pearson
value: 88.7169991211476
- type: euclidean_spearman
value: 89.7472332075485
- type: main_score
value: 89.74723322749233
- type: manhattan_pearson
value: 87.37922931937202
- type: manhattan_spearman
value: 87.47352246770794
- type: pearson
value: 89.69909533656704
- type: spearman
value: 89.74723322749233
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 86.84947603401746
- type: cosine_spearman
value: 87.63022743056388
- type: euclidean_pearson
value: 86.74752511965002
- type: euclidean_spearman
value: 87.63022743056388
- type: main_score
value: 87.63022743056388
- type: manhattan_pearson
value: 86.1770646766385
- type: manhattan_spearman
value: 86.43792690343828
- type: pearson
value: 86.84947603401746
- type: spearman
value: 87.63022743056388
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 91.43391567649913
- type: cosine_spearman
value: 90.86953801008369
- type: euclidean_pearson
value: 91.24907274014495
- type: euclidean_spearman
value: 90.86953801008369
- type: main_score
value: 90.86953801008369
- type: manhattan_pearson
value: 92.27103413151777
- type: manhattan_spearman
value: 91.70510079315306
- type: pearson
value: 91.43391567649913
- type: spearman
value: 90.86953801008369
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 68.81338409687908
- type: cosine_spearman
value: 68.09215270009086
- type: euclidean_pearson
value: 68.64603879303111
- type: euclidean_spearman
value: 68.09215270009086
- type: main_score
value: 68.09215270009086
- type: manhattan_pearson
value: 69.46795022258881
- type: manhattan_spearman
value: 68.27576057587602
- type: pearson
value: 68.81338409687908
- type: spearman
value: 68.09215270009086
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 87.93191595794555
- type: cosine_spearman
value: 88.46646307403641
- type: euclidean_pearson
value: 87.3793925878407
- type: euclidean_spearman
value: 88.46646307403641
- type: main_score
value: 88.46646307403641
- type: manhattan_pearson
value: 86.42857012902716
- type: manhattan_spearman
value: 86.76733082091621
- type: pearson
value: 87.93191595794555
- type: spearman
value: 88.46646307403641
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 87.62672056519489
- type: map
value: 87.62672056519489
- type: mrr
value: 96.75288491464961
- type: nAUC_map_diff1
value: -10.979336734379478
- type: nAUC_map_max
value: 50.98762609235208
- type: nAUC_map_std
value: 68.765950990151
- type: nAUC_mrr_diff1
value: 26.032783373787915
- type: nAUC_mrr_max
value: 82.4844792677926
- type: nAUC_mrr_std
value: 82.0357865297397
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 79.745
- type: map_at_1
value: 64.328
- type: map_at_10
value: 75.139
- type: map_at_100
value: 75.384
- type: map_at_1000
value: 75.397
- type: map_at_20
value: 75.297
- type: map_at_3
value: 72.222
- type: map_at_5
value: 74.079
- type: mrr_at_1
value: 67.66666666666666
- type: mrr_at_10
value: 76.05383597883596
- type: mrr_at_100
value: 76.22027125405623
- type: mrr_at_1000
value: 76.23312943054236
- type: mrr_at_20
value: 76.14538114391056
- type: mrr_at_3
value: 74.11111111111113
- type: mrr_at_5
value: 75.31111111111112
- type: nauc_map_at_1000_diff1
value: 78.43909223692039
- type: nauc_map_at_1000_max
value: 61.93783649790126
- type: nauc_map_at_1000_std
value: 4.252485546213312
- type: nauc_map_at_100_diff1
value: 78.43784839609923
- type: nauc_map_at_100_max
value: 61.95484128615342
- type: nauc_map_at_100_std
value: 4.2853337017858335
- type: nauc_map_at_10_diff1
value: 78.39217968458001
- type: nauc_map_at_10_max
value: 62.11071146014176
- type: nauc_map_at_10_std
value: 4.2617060402705995
- type: nauc_map_at_1_diff1
value: 79.46268237346375
- type: nauc_map_at_1_max
value: 51.87919162971395
- type: nauc_map_at_1_std
value: -7.9819599317263155
- type: nauc_map_at_20_diff1
value: 78.34137192881398
- type: nauc_map_at_20_max
value: 62.04478567534677
- type: nauc_map_at_20_std
value: 4.44986243492092
- type: nauc_map_at_3_diff1
value: 78.93188320960228
- type: nauc_map_at_3_max
value: 59.05645306883896
- type: nauc_map_at_3_std
value: -0.7631750269436595
- type: nauc_map_at_5_diff1
value: 78.24090341300327
- type: nauc_map_at_5_max
value: 60.40223081576741
- type: nauc_map_at_5_std
value: 2.3140404965702412
- type: nauc_mrr_at_1000_diff1
value: 77.83067231715107
- type: nauc_mrr_at_1000_max
value: 62.87956349977423
- type: nauc_mrr_at_1000_std
value: 6.261294250064705
- type: nauc_mrr_at_100_diff1
value: 77.82977924127196
- type: nauc_mrr_at_100_max
value: 62.896357638733605
- type: nauc_mrr_at_100_std
value: 6.29361759938427
- type: nauc_mrr_at_10_diff1
value: 77.67673337207816
- type: nauc_mrr_at_10_max
value: 63.029402394904096
- type: nauc_mrr_at_10_std
value: 6.4376801339784056
- type: nauc_mrr_at_1_diff1
value: 79.44451752755394
- type: nauc_mrr_at_1_max
value: 60.551707570965306
- type: nauc_mrr_at_1_std
value: 2.128815653258172
- type: nauc_mrr_at_20_diff1
value: 77.73758868156405
- type: nauc_mrr_at_20_max
value: 62.963931403619334
- type: nauc_mrr_at_20_std
value: 6.407753752007869
- type: nauc_mrr_at_3_diff1
value: 77.98942478932196
- type: nauc_mrr_at_3_max
value: 62.99215580076013
- type: nauc_mrr_at_3_std
value: 5.461908127857269
- type: nauc_mrr_at_5_diff1
value: 77.37126419303483
- type: nauc_mrr_at_5_max
value: 62.33931482696964
- type: nauc_mrr_at_5_std
value: 5.6849973918884364
- type: nauc_ndcg_at_1000_diff1
value: 77.84639188057717
- type: nauc_ndcg_at_1000_max
value: 63.315777298558665
- type: nauc_ndcg_at_1000_std
value: 6.713565158302629
- type: nauc_ndcg_at_100_diff1
value: 77.863198515294
- type: nauc_ndcg_at_100_max
value: 63.74184406752551
- type: nauc_ndcg_at_100_std
value: 7.570839930103332
- type: nauc_ndcg_at_10_diff1
value: 77.18552551495168
- type: nauc_ndcg_at_10_max
value: 64.67747343020477
- type: nauc_ndcg_at_10_std
value: 8.101265554115662
- type: nauc_ndcg_at_1_diff1
value: 79.44451752755394
- type: nauc_ndcg_at_1_max
value: 60.551707570965306
- type: nauc_ndcg_at_1_std
value: 2.128815653258172
- type: nauc_ndcg_at_20_diff1
value: 77.17994663867256
- type: nauc_ndcg_at_20_max
value: 64.43647467030159
- type: nauc_ndcg_at_20_std
value: 8.622247271863376
- type: nauc_ndcg_at_3_diff1
value: 77.88539508493001
- type: nauc_ndcg_at_3_max
value: 62.66984272127964
- type: nauc_ndcg_at_3_std
value: 2.742594418074899
- type: nauc_ndcg_at_5_diff1
value: 76.53757494977143
- type: nauc_ndcg_at_5_max
value: 61.49129748643147
- type: nauc_ndcg_at_5_std
value: 4.380236291870192
- type: nauc_precision_at_1000_diff1
value: -29.19719579231251
- type: nauc_precision_at_1000_max
value: 23.47044780637479
- type: nauc_precision_at_1000_std
value: 52.84101463334091
- type: nauc_precision_at_100_diff1
value: -19.027484143763136
- type: nauc_precision_at_100_max
value: 29.260915389748522
- type: nauc_precision_at_100_std
value: 52.8204456584782
- type: nauc_precision_at_10_diff1
value: -4.391032114724305
- type: nauc_precision_at_10_max
value: 42.8334824133652
- type: nauc_precision_at_10_std
value: 51.92832616787376
- type: nauc_precision_at_1_diff1
value: 79.44451752755394
- type: nauc_precision_at_1_max
value: 60.551707570965306
- type: nauc_precision_at_1_std
value: 2.128815653258172
- type: nauc_precision_at_20_diff1
value: -10.485053838019777
- type: nauc_precision_at_20_max
value: 36.682622883716725
- type: nauc_precision_at_20_std
value: 52.86245437696848
- type: nauc_precision_at_3_diff1
value: 37.47158426353003
- type: nauc_precision_at_3_max
value: 57.209349751980824
- type: nauc_precision_at_3_std
value: 27.18493771030643
- type: nauc_precision_at_5_diff1
value: 14.132687328859591
- type: nauc_precision_at_5_max
value: 46.577764510703226
- type: nauc_precision_at_5_std
value: 37.8320197016056
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 79.55849006269133
- type: nauc_recall_at_100_max
value: 85.39415766306497
- type: nauc_recall_at_100_std
value: 52.62104841936836
- type: nauc_recall_at_10_diff1
value: 67.56888802032422
- type: nauc_recall_at_10_max
value: 80.90895272837815
- type: nauc_recall_at_10_std
value: 31.151065077193458
- type: nauc_recall_at_1_diff1
value: 79.46268237346375
- type: nauc_recall_at_1_max
value: 51.87919162971395
- type: nauc_recall_at_1_std
value: -7.9819599317263155
- type: nauc_recall_at_20_diff1
value: 66.80955210366969
- type: nauc_recall_at_20_max
value: 83.85111620405739
- type: nauc_recall_at_20_std
value: 43.14008431655496
- type: nauc_recall_at_3_diff1
value: 73.487621203687
- type: nauc_recall_at_3_max
value: 60.736184806494144
- type: nauc_recall_at_3_std
value: -0.05276306802210151
- type: nauc_recall_at_5_diff1
value: 67.32384089471962
- type: nauc_recall_at_5_max
value: 60.88484924058943
- type: nauc_recall_at_5_std
value: 7.53695946791339
- type: ndcg_at_1
value: 67.667
- type: ndcg_at_10
value: 79.745
- type: ndcg_at_100
value: 80.803
- type: ndcg_at_1000
value: 81.109
- type: ndcg_at_20
value: 80.21000000000001
- type: ndcg_at_3
value: 75.288
- type: ndcg_at_5
value: 77.739
- type: precision_at_1
value: 67.667
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.107
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.367
- type: precision_at_3
value: 29.555999999999997
- type: precision_at_5
value: 19.467000000000002
- type: recall_at_1
value: 64.328
- type: recall_at_10
value: 92.833
- type: recall_at_100
value: 97.667
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 94.5
- type: recall_at_3
value: 81.072
- type: recall_at_5
value: 87.339
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.86039603960396
- type: cosine_accuracy_threshold
value: 72.46292233467102
- type: cosine_ap
value: 96.9963756118777
- type: cosine_f1
value: 92.8895612708018
- type: cosine_f1_threshold
value: 72.46292233467102
- type: cosine_precision
value: 93.69277721261444
- type: cosine_recall
value: 92.10000000000001
- type: dot_accuracy
value: 99.86039603960396
- type: dot_accuracy_threshold
value: 72.46291637420654
- type: dot_ap
value: 96.99637561187771
- type: dot_f1
value: 92.8895612708018
- type: dot_f1_threshold
value: 72.46291637420654
- type: dot_precision
value: 93.69277721261444
- type: dot_recall
value: 92.10000000000001
- type: euclidean_accuracy
value: 99.86039603960396
- type: euclidean_accuracy_threshold
value: 74.21196699142456
- type: euclidean_ap
value: 96.99637561187768
- type: euclidean_f1
value: 92.8895612708018
- type: euclidean_f1_threshold
value: 74.21196699142456
- type: euclidean_precision
value: 93.69277721261444
- type: euclidean_recall
value: 92.10000000000001
- type: main_score
value: 96.99637561187771
- type: manhattan_accuracy
value: 99.81980198019802
- type: manhattan_accuracy_threshold
value: 2608.760452270508
- type: manhattan_ap
value: 96.18773334150895
- type: manhattan_f1
value: 90.74262461851477
- type: manhattan_f1_threshold
value: 2608.760452270508
- type: manhattan_precision
value: 92.33954451345755
- type: manhattan_recall
value: 89.2
- type: max_accuracy
value: 99.86039603960396
- type: max_ap
value: 96.99637561187771
- type: max_f1
value: 92.8895612708018
- type: max_precision
value: 93.69277721261444
- type: max_recall
value: 92.10000000000001
- type: similarity_accuracy
value: 99.86039603960396
- type: similarity_accuracy_threshold
value: 72.46292233467102
- type: similarity_ap
value: 96.9963756118777
- type: similarity_f1
value: 92.8895612708018
- type: similarity_f1_threshold
value: 72.46292233467102
- type: similarity_precision
value: 93.69277721261444
- type: similarity_recall
value: 92.10000000000001
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 81.5950420419382
- type: v_measure
value: 81.5950420419382
- type: v_measure_std
value: 2.3518861207789126
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 44.40836435329055
- type: v_measure
value: 44.40836435329055
- type: v_measure_std
value: 1.3850659888959282
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 58.792345747482436
- type: map
value: 58.792345747482436
- type: mrr
value: 59.87494429590018
- type: nAUC_map_diff1
value: 43.36254267831821
- type: nAUC_map_max
value: 13.241350111456592
- type: nAUC_map_std
value: 6.075720164270261
- type: nAUC_mrr_diff1
value: 43.6953851299995
- type: nAUC_mrr_max
value: 14.587424770057197
- type: nAUC_mrr_std
value: 6.683981115477786
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 29.605378173647523
- type: cosine_spearman
value: 29.538937618105475
- type: dot_pearson
value: 29.605381018751537
- type: dot_spearman
value: 29.538937618105475
- type: main_score
value: 29.538937618105475
- type: pearson
value: 29.605378173647523
- type: spearman
value: 29.538937618105475
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 77.17500000000001
- type: map_at_1
value: 0.22300000000000003
- type: map_at_10
value: 1.9959999999999998
- type: map_at_100
value: 11.998000000000001
- type: map_at_1000
value: 30.284
- type: map_at_20
value: 3.563
- type: map_at_3
value: 0.643
- type: map_at_5
value: 1.0170000000000001
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 90.66666666666666
- type: mrr_at_100
value: 90.66666666666666
- type: mrr_at_1000
value: 90.66666666666666
- type: mrr_at_20
value: 90.66666666666666
- type: mrr_at_3
value: 90.33333333333334
- type: mrr_at_5
value: 90.33333333333334
- type: nauc_map_at_1000_diff1
value: 7.389119317717576
- type: nauc_map_at_1000_max
value: 54.82050496230136
- type: nauc_map_at_1000_std
value: 70.85721068077146
- type: nauc_map_at_100_diff1
value: 12.670094845373846
- type: nauc_map_at_100_max
value: 36.93011704883208
- type: nauc_map_at_100_std
value: 52.83891673041347
- type: nauc_map_at_10_diff1
value: 0.7333820103994417
- type: nauc_map_at_10_max
value: 7.33166700811674
- type: nauc_map_at_10_std
value: 5.962928401420485
- type: nauc_map_at_1_diff1
value: 4.853041318875478
- type: nauc_map_at_1_max
value: 6.708198838067249
- type: nauc_map_at_1_std
value: 3.785155575070299
- type: nauc_map_at_20_diff1
value: 5.621187930586951
- type: nauc_map_at_20_max
value: 16.76609882016852
- type: nauc_map_at_20_std
value: 18.534536538273816
- type: nauc_map_at_3_diff1
value: 5.370619369327505
- type: nauc_map_at_3_max
value: 6.078878275059272
- type: nauc_map_at_3_std
value: 1.3685720820713816
- type: nauc_map_at_5_diff1
value: 3.442398585395224
- type: nauc_map_at_5_max
value: 6.770357584173343
- type: nauc_map_at_5_std
value: 1.7453751789242644
- type: nauc_mrr_at_1000_diff1
value: 31.34224450013925
- type: nauc_mrr_at_1000_max
value: 52.595029239766134
- type: nauc_mrr_at_1000_std
value: 44.9926900584796
- type: nauc_mrr_at_100_diff1
value: 31.34224450013925
- type: nauc_mrr_at_100_max
value: 52.595029239766134
- type: nauc_mrr_at_100_std
value: 44.9926900584796
- type: nauc_mrr_at_10_diff1
value: 31.34224450013925
- type: nauc_mrr_at_10_max
value: 52.595029239766134
- type: nauc_mrr_at_10_std
value: 44.9926900584796
- type: nauc_mrr_at_1_diff1
value: 31.161021109474685
- type: nauc_mrr_at_1_max
value: 51.006381934216904
- type: nauc_mrr_at_1_std
value: 44.82081492390771
- type: nauc_mrr_at_20_diff1
value: 31.34224450013925
- type: nauc_mrr_at_20_max
value: 52.595029239766134
- type: nauc_mrr_at_20_std
value: 44.9926900584796
- type: nauc_mrr_at_3_diff1
value: 32.207456626061095
- type: nauc_mrr_at_3_max
value: 52.696399208027
- type: nauc_mrr_at_3_std
value: 44.66257256954923
- type: nauc_mrr_at_5_diff1
value: 32.207456626061095
- type: nauc_mrr_at_5_max
value: 52.696399208027
- type: nauc_mrr_at_5_std
value: 44.66257256954923
- type: nauc_ndcg_at_1000_diff1
value: 5.618095845960207
- type: nauc_ndcg_at_1000_max
value: 49.890389872564675
- type: nauc_ndcg_at_1000_std
value: 69.30715837125426
- type: nauc_ndcg_at_100_diff1
value: 12.535638321298817
- type: nauc_ndcg_at_100_max
value: 44.09984973737588
- type: nauc_ndcg_at_100_std
value: 66.73116956125261
- type: nauc_ndcg_at_10_diff1
value: 8.28961977229752
- type: nauc_ndcg_at_10_max
value: 37.081028635944044
- type: nauc_ndcg_at_10_std
value: 38.823752443963215
- type: nauc_ndcg_at_1_diff1
value: 39.911537737624705
- type: nauc_ndcg_at_1_max
value: 35.42019574628276
- type: nauc_ndcg_at_1_std
value: 40.367965367965375
- type: nauc_ndcg_at_20_diff1
value: 12.379194710250905
- type: nauc_ndcg_at_20_max
value: 46.77694418095702
- type: nauc_ndcg_at_20_std
value: 53.5510203491856
- type: nauc_ndcg_at_3_diff1
value: 24.70851384180533
- type: nauc_ndcg_at_3_max
value: 27.56920622226903
- type: nauc_ndcg_at_3_std
value: 26.358022803379665
- type: nauc_ndcg_at_5_diff1
value: 11.779917720947457
- type: nauc_ndcg_at_5_max
value: 30.21605394567412
- type: nauc_ndcg_at_5_std
value: 28.513515127488503
- type: nauc_precision_at_1000_diff1
value: -6.002513211324759
- type: nauc_precision_at_1000_max
value: 38.83635216810086
- type: nauc_precision_at_1000_std
value: 38.11787851543623
- type: nauc_precision_at_100_diff1
value: 9.238981139752605
- type: nauc_precision_at_100_max
value: 47.28113136198602
- type: nauc_precision_at_100_std
value: 67.90941931954151
- type: nauc_precision_at_10_diff1
value: -4.3380606584868024
- type: nauc_precision_at_10_max
value: 41.56707414236605
- type: nauc_precision_at_10_std
value: 37.1979026762881
- type: nauc_precision_at_1_diff1
value: 31.161021109474685
- type: nauc_precision_at_1_max
value: 51.006381934216904
- type: nauc_precision_at_1_std
value: 44.82081492390771
- type: nauc_precision_at_20_diff1
value: 11.949737641535576
- type: nauc_precision_at_20_max
value: 54.79667219000284
- type: nauc_precision_at_20_std
value: 59.65064899199118
- type: nauc_precision_at_3_diff1
value: 9.832803576504578
- type: nauc_precision_at_3_max
value: 39.444918210713936
- type: nauc_precision_at_3_std
value: 24.35847740134894
- type: nauc_precision_at_5_diff1
value: -2.8325554667901915
- type: nauc_precision_at_5_max
value: 37.87579363410836
- type: nauc_precision_at_5_std
value: 22.71230150387524
- type: nauc_recall_at_1000_diff1
value: 1.7599230382331428
- type: nauc_recall_at_1000_max
value: 46.12135164141817
- type: nauc_recall_at_1000_std
value: 59.98813586911771
- type: nauc_recall_at_100_diff1
value: 8.984945382291173
- type: nauc_recall_at_100_max
value: 25.15301354551285
- type: nauc_recall_at_100_std
value: 39.651220953971
- type: nauc_recall_at_10_diff1
value: -1.283481764667596
- type: nauc_recall_at_10_max
value: 2.5139780683579565
- type: nauc_recall_at_10_std
value: 2.6011871782532743
- type: nauc_recall_at_1_diff1
value: 4.853041318875478
- type: nauc_recall_at_1_max
value: 6.708198838067249
- type: nauc_recall_at_1_std
value: 3.785155575070299
- type: nauc_recall_at_20_diff1
value: 4.596859911077373
- type: nauc_recall_at_20_max
value: 11.033119490674997
- type: nauc_recall_at_20_std
value: 14.068678820386443
- type: nauc_recall_at_3_diff1
value: 3.2589657526078404
- type: nauc_recall_at_3_max
value: 3.0470205984611267
- type: nauc_recall_at_3_std
value: -1.8601234586336293
- type: nauc_recall_at_5_diff1
value: 2.1569206756644324
- type: nauc_recall_at_5_max
value: 3.589813704568977
- type: nauc_recall_at_5_std
value: -0.18424794111182097
- type: ndcg_at_1
value: 78.0
- type: ndcg_at_10
value: 77.17500000000001
- type: ndcg_at_100
value: 60.223000000000006
- type: ndcg_at_1000
value: 56.04599999999999
- type: ndcg_at_20
value: 72.912
- type: ndcg_at_3
value: 79.592
- type: ndcg_at_5
value: 77.50200000000001
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 62.28
- type: precision_at_1000
value: 24.9
- type: precision_at_20
value: 77.60000000000001
- type: precision_at_3
value: 86.0
- type: precision_at_5
value: 83.2
- type: recall_at_1
value: 0.22300000000000003
- type: recall_at_10
value: 2.2159999999999997
- type: recall_at_100
value: 15.372
- type: recall_at_1000
value: 53.549
- type: recall_at_20
value: 4.048
- type: recall_at_3
value: 0.677
- type: recall_at_5
value: 1.087
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 29.343000000000004
- type: map_at_1
value: 3.0220000000000002
- type: map_at_10
value: 12.363
- type: map_at_100
value: 18.724
- type: map_at_1000
value: 20.244
- type: map_at_20
value: 14.806
- type: map_at_3
value: 6.764
- type: map_at_5
value: 8.75
- type: mrr_at_1
value: 36.734693877551024
- type: mrr_at_10
value: 50.583090379008745
- type: mrr_at_100
value: 51.66734804758435
- type: mrr_at_1000
value: 51.66734804758435
- type: mrr_at_20
value: 51.53747791771421
- type: mrr_at_3
value: 46.93877551020408
- type: mrr_at_5
value: 49.48979591836735
- type: nauc_map_at_1000_diff1
value: 7.457113997967389
- type: nauc_map_at_1000_max
value: -20.609546001875334
- type: nauc_map_at_1000_std
value: -2.4970159535791043
- type: nauc_map_at_100_diff1
value: 7.649877544039679
- type: nauc_map_at_100_max
value: -21.673098734905032
- type: nauc_map_at_100_std
value: -5.247298019094194
- type: nauc_map_at_10_diff1
value: 16.76662027563455
- type: nauc_map_at_10_max
value: -13.05597989380238
- type: nauc_map_at_10_std
value: -22.342358118285304
- type: nauc_map_at_1_diff1
value: 19.681507005838757
- type: nauc_map_at_1_max
value: -22.21893272191311
- type: nauc_map_at_1_std
value: -26.226217172137154
- type: nauc_map_at_20_diff1
value: 12.834546050857668
- type: nauc_map_at_20_max
value: -17.20998770352886
- type: nauc_map_at_20_std
value: -18.588111642621413
- type: nauc_map_at_3_diff1
value: 13.63964963431539
- type: nauc_map_at_3_max
value: -13.542328880246702
- type: nauc_map_at_3_std
value: -20.469624947094534
- type: nauc_map_at_5_diff1
value: 10.270114527125655
- type: nauc_map_at_5_max
value: -11.762052908610329
- type: nauc_map_at_5_std
value: -19.87817948398937
- type: nauc_mrr_at_1000_diff1
value: 17.042227586584357
- type: nauc_mrr_at_1000_max
value: -32.737580506629605
- type: nauc_mrr_at_1000_std
value: -15.659146010046952
- type: nauc_mrr_at_100_diff1
value: 17.042227586584357
- type: nauc_mrr_at_100_max
value: -32.737580506629605
- type: nauc_mrr_at_100_std
value: -15.659146010046952
- type: nauc_mrr_at_10_diff1
value: 17.2919162849324
- type: nauc_mrr_at_10_max
value: -32.65403666498119
- type: nauc_mrr_at_10_std
value: -14.965346261228909
- type: nauc_mrr_at_1_diff1
value: 17.272832205168
- type: nauc_mrr_at_1_max
value: -28.34452923089083
- type: nauc_mrr_at_1_std
value: -20.033682096295447
- type: nauc_mrr_at_20_diff1
value: 16.703613056664782
- type: nauc_mrr_at_20_max
value: -33.20379601018326
- type: nauc_mrr_at_20_std
value: -15.195958069122609
- type: nauc_mrr_at_3_diff1
value: 16.171857648317733
- type: nauc_mrr_at_3_max
value: -34.684593082150755
- type: nauc_mrr_at_3_std
value: -15.15391859533353
- type: nauc_mrr_at_5_diff1
value: 17.96431702266726
- type: nauc_mrr_at_5_max
value: -32.219726910100526
- type: nauc_mrr_at_5_std
value: -18.195032425080196
- type: nauc_ndcg_at_1000_diff1
value: 1.5092604770957316
- type: nauc_ndcg_at_1000_max
value: -26.604495127870788
- type: nauc_ndcg_at_1000_std
value: 18.14443195091934
- type: nauc_ndcg_at_100_diff1
value: 0.36969287021617087
- type: nauc_ndcg_at_100_max
value: -34.670734514927716
- type: nauc_ndcg_at_100_std
value: 12.23611692923302
- type: nauc_ndcg_at_10_diff1
value: 16.29186759865657
- type: nauc_ndcg_at_10_max
value: -24.964608085923434
- type: nauc_ndcg_at_10_std
value: -12.374113490534935
- type: nauc_ndcg_at_1_diff1
value: 16.87634579399912
- type: nauc_ndcg_at_1_max
value: -27.461585957403038
- type: nauc_ndcg_at_1_std
value: -19.776711863458562
- type: nauc_ndcg_at_20_diff1
value: 11.35358213583199
- type: nauc_ndcg_at_20_max
value: -30.377489503219042
- type: nauc_ndcg_at_20_std
value: -11.86477028758937
- type: nauc_ndcg_at_3_diff1
value: 15.853622840899659
- type: nauc_ndcg_at_3_max
value: -30.190855608009116
- type: nauc_ndcg_at_3_std
value: -9.906669710354617
- type: nauc_ndcg_at_5_diff1
value: 14.062861967353188
- type: nauc_ndcg_at_5_max
value: -24.40558212202621
- type: nauc_ndcg_at_5_std
value: -12.332616495206686
- type: nauc_precision_at_1000_diff1
value: -15.846388626672493
- type: nauc_precision_at_1000_max
value: 35.30359486549494
- type: nauc_precision_at_1000_std
value: 41.24114612862944
- type: nauc_precision_at_100_diff1
value: -25.208206642605063
- type: nauc_precision_at_100_max
value: -16.869929267432205
- type: nauc_precision_at_100_std
value: 64.92424645481721
- type: nauc_precision_at_10_diff1
value: 17.75529182616335
- type: nauc_precision_at_10_max
value: -22.78107122317805
- type: nauc_precision_at_10_std
value: -2.5648044486422408
- type: nauc_precision_at_1_diff1
value: 17.272832205168
- type: nauc_precision_at_1_max
value: -28.34452923089083
- type: nauc_precision_at_1_std
value: -20.033682096295447
- type: nauc_precision_at_20_diff1
value: 6.949627552365406
- type: nauc_precision_at_20_max
value: -31.427137394601463
- type: nauc_precision_at_20_std
value: 14.032374459812457
- type: nauc_precision_at_3_diff1
value: 13.01100708235946
- type: nauc_precision_at_3_max
value: -28.019291761575747
- type: nauc_precision_at_3_std
value: -5.087210735035297
- type: nauc_precision_at_5_diff1
value: 10.850412617326793
- type: nauc_precision_at_5_max
value: -19.235955329324028
- type: nauc_precision_at_5_std
value: -7.25774273011682
- type: nauc_recall_at_1000_diff1
value: -33.73233363049106
- type: nauc_recall_at_1000_max
value: -3.2528907443089983
- type: nauc_recall_at_1000_std
value: 67.49447309124797
- type: nauc_recall_at_100_diff1
value: -17.756435307087646
- type: nauc_recall_at_100_max
value: -34.04278060436166
- type: nauc_recall_at_100_std
value: 30.764226660687495
- type: nauc_recall_at_10_diff1
value: 16.183122135924787
- type: nauc_recall_at_10_max
value: -18.111954694884698
- type: nauc_recall_at_10_std
value: -20.770271991246428
- type: nauc_recall_at_1_diff1
value: 19.681507005838757
- type: nauc_recall_at_1_max
value: -22.21893272191311
- type: nauc_recall_at_1_std
value: -26.226217172137154
- type: nauc_recall_at_20_diff1
value: 5.794232787977484
- type: nauc_recall_at_20_max
value: -27.084183224457064
- type: nauc_recall_at_20_std
value: -13.003930840567254
- type: nauc_recall_at_3_diff1
value: 12.630860577189745
- type: nauc_recall_at_3_max
value: -17.119921468454315
- type: nauc_recall_at_3_std
value: -17.473340753437792
- type: nauc_recall_at_5_diff1
value: 7.301486684241046
- type: nauc_recall_at_5_max
value: -15.68243996895239
- type: nauc_recall_at_5_std
value: -20.13598669484435
- type: ndcg_at_1
value: 35.714
- type: ndcg_at_10
value: 29.343000000000004
- type: ndcg_at_100
value: 41.568
- type: ndcg_at_1000
value: 51.93
- type: ndcg_at_20
value: 30.173
- type: ndcg_at_3
value: 33.622
- type: ndcg_at_5
value: 31.807999999999996
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 25.509999999999998
- type: precision_at_100
value: 8.571
- type: precision_at_1000
value: 1.559
- type: precision_at_20
value: 19.694
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 3.0220000000000002
- type: recall_at_10
value: 18.838
- type: recall_at_100
value: 51.471999999999994
- type: recall_at_1000
value: 83.012
- type: recall_at_20
value: 27.165
- type: recall_at_3
value: 7.868
- type: recall_at_5
value: 11.413
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 92.3681640625
- type: ap
value: 43.441682015979104
- type: ap_weighted
value: 43.441682015979104
- type: f1
value: 79.35383898838042
- type: f1_weighted
value: 93.14474638528736
- type: main_score
value: 92.3681640625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 80.42161856253539
- type: f1
value: 80.69309707938646
- type: f1_weighted
value: 80.3228654725881
- type: main_score
value: 80.42161856253539
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 68.78385330772423
- type: v_measure
value: 68.78385330772423
- type: v_measure_std
value: 1.4814035017480702
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 87.96566728258925
- type: cosine_accuracy_threshold
value: 74.39426183700562
- type: cosine_ap
value: 79.55550776766384
- type: cosine_f1
value: 72.25950782997764
- type: cosine_f1_threshold
value: 71.66670560836792
- type: cosine_precision
value: 68.30357142857143
- type: cosine_recall
value: 76.7018469656992
- type: dot_accuracy
value: 87.96566728258925
- type: dot_accuracy_threshold
value: 74.39426183700562
- type: dot_ap
value: 79.55551194873313
- type: dot_f1
value: 72.25950782997764
- type: dot_f1_threshold
value: 71.66670560836792
- type: dot_precision
value: 68.30357142857143
- type: dot_recall
value: 76.7018469656992
- type: euclidean_accuracy
value: 87.96566728258925
- type: euclidean_accuracy_threshold
value: 71.56219482421875
- type: euclidean_ap
value: 79.55550945424486
- type: euclidean_f1
value: 72.25950782997764
- type: euclidean_f1_threshold
value: 75.27719736099243
- type: euclidean_precision
value: 68.30357142857143
- type: euclidean_recall
value: 76.7018469656992
- type: main_score
value: 79.55707708553916
- type: manhattan_accuracy
value: 87.88221970554928
- type: manhattan_accuracy_threshold
value: 3136.1982345581055
- type: manhattan_ap
value: 79.55707708553916
- type: manhattan_f1
value: 72.5288998571243
- type: manhattan_f1_threshold
value: 3327.1324157714844
- type: manhattan_precision
value: 71.42491685853159
- type: manhattan_recall
value: 73.66754617414249
- type: max_accuracy
value: 87.96566728258925
- type: max_ap
value: 79.55707708553916
- type: max_f1
value: 72.5288998571243
- type: max_precision
value: 71.42491685853159
- type: max_recall
value: 76.7018469656992
- type: similarity_accuracy
value: 87.96566728258925
- type: similarity_accuracy_threshold
value: 74.39426183700562
- type: similarity_ap
value: 79.55550776766384
- type: similarity_f1
value: 72.25950782997764
- type: similarity_f1_threshold
value: 71.66670560836792
- type: similarity_precision
value: 68.30357142857143
- type: similarity_recall
value: 76.7018469656992
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 89.48073116777273
- type: cosine_accuracy_threshold
value: 73.83834719657898
- type: cosine_ap
value: 87.04248262089361
- type: cosine_f1
value: 79.40769846467293
- type: cosine_f1_threshold
value: 70.96807956695557
- type: cosine_precision
value: 75.27245137260311
- type: cosine_recall
value: 84.02371419772098
- type: dot_accuracy
value: 89.48073116777273
- type: dot_accuracy_threshold
value: 73.83835315704346
- type: dot_ap
value: 87.04248357792484
- type: dot_f1
value: 79.40769846467293
- type: dot_f1_threshold
value: 70.96806764602661
- type: dot_precision
value: 75.27245137260311
- type: dot_recall
value: 84.02371419772098
- type: euclidean_accuracy
value: 89.48073116777273
- type: euclidean_accuracy_threshold
value: 72.33483791351318
- type: euclidean_ap
value: 87.04247988177627
- type: euclidean_f1
value: 79.40769846467293
- type: euclidean_f1_threshold
value: 76.19963884353638
- type: euclidean_precision
value: 75.27245137260311
- type: euclidean_recall
value: 84.02371419772098
- type: main_score
value: 87.37453072241573
- type: manhattan_accuracy
value: 89.67283735009897
- type: manhattan_accuracy_threshold
value: 3345.201873779297
- type: manhattan_ap
value: 87.37453072241573
- type: manhattan_f1
value: 79.9237656873674
- type: manhattan_f1_threshold
value: 3592.564010620117
- type: manhattan_precision
value: 74.98144524660955
- type: manhattan_recall
value: 85.56359716661534
- type: max_accuracy
value: 89.67283735009897
- type: max_ap
value: 87.37453072241573
- type: max_f1
value: 79.9237656873674
- type: max_precision
value: 75.27245137260311
- type: max_recall
value: 85.56359716661534
- type: similarity_accuracy
value: 89.48073116777273
- type: similarity_accuracy_threshold
value: 73.83834719657898
- type: similarity_ap
value: 87.04248262089361
- type: similarity_f1
value: 79.40769846467293
- type: similarity_f1_threshold
value: 70.96807956695557
- type: similarity_precision
value: 75.27245137260311
- type: similarity_recall
value: 84.02371419772098
---
<h2 align="center"> LENS Embeddings</h2>
LENS is a model that produces **L**exicon-based **E**mbeddi**N**g**S** (LENS) by leveraging large language models. Each dimension of the embeddings corresponds to a token cluster in which semantically similar tokens are grouped together. The embeddings have a feature size comparable to dense embeddings, with LENS-d8000 offering 8000-dimensional representations.
The technical report of **LENS** is available in [Enhancing Lexicon-Based Text Embeddings with Large Language Models](https://arxiv.org/abs/2501.09749).
## Usage
```bash
git clone https://huggingface.co/yibinlei/LENS-d8000
cd LENS-d8000
```
```python
import torch
from torch import Tensor
import torch.nn.functional as F
from transformers import AutoTokenizer
from bidirectional_mistral import MistralBiForCausalLM
def get_detailed_instruct(task_instruction: str, query: str) -> str:
return f'<instruct>{task_instruction}\n<query>{query}'
def pooling_func(vecs: Tensor, pooling_mask: Tensor) -> Tensor:
# We use max-pooling for LENS.
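    # relu() clips negative logits to zero and log(1 + x) damps very large activations,
    # giving non-negative lexicon-style weights; multiplying by the mask zeroes out
    # excluded positions before the element-wise max over the sequence dimension.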
return torch.max(torch.log(1 + torch.relu(vecs)) * pooling_mask.unsqueeze(-1), dim=1).values
# Prepare the data
instruction = "Given a web search query, retrieve relevant passages that answer the query."
queries = ["what is rba",
"what is oilskin fabric"]
instructed_queries = [get_detailed_instruct(instruction, query) for query in queries]
docs = ["Since 2007, the RBA's outstanding reputation has been affected by the 'Securency' or NPA scandal.",
"Today's oilskins (or oilies) typically come in two parts, jackets and trousers. Oilskin jackets are generally similar to common rubberized waterproofs."]
# Load the model and tokenizer
model = MistralBiForCausalLM.from_pretrained("yibinlei/LENS-d8000", ignore_mismatched_sizes=True)
model.lm_head = torch.load('lm_head.pth')
tokenizer = AutoTokenizer.from_pretrained("yibinlei/LENS-d8000")
# Preprocess the data
query_max_len, doc_max_len = 512, 512
instructed_query_inputs = tokenizer(
instructed_queries,
padding=True,
truncation=True,
return_tensors='pt',
max_length=query_max_len,
add_special_tokens=True
)
doc_inputs = tokenizer(
docs,
padding=True,
truncation=True,
return_tensors='pt',
max_length=doc_max_len,
add_special_tokens=True
)
# We perform pooling exclusively on the outputs of the query tokens, excluding outputs from the instruction.
query_only_mask = torch.zeros_like(instructed_query_inputs['input_ids'], dtype=instructed_query_inputs['attention_mask'].dtype)
special_token_id = tokenizer.convert_tokens_to_ids('<query>')
for idx, seq in enumerate(instructed_query_inputs['input_ids']):
special_pos = (seq == special_token_id).nonzero()
if len(special_pos) > 0:
query_start_pos = special_pos[-1].item()
query_only_mask[idx, query_start_pos:-2] = 1
else:
raise ValueError("No special token found")
# Obtain the embeddings
with torch.no_grad():
instructed_query_outputs = model(**instructed_query_inputs)
query_embeddings = pooling_func(instructed_query_outputs, query_only_mask)
doc_outputs = model(**doc_inputs)
    # Because the output of each token is used to predict the next token, the pooling mask is shifted left by 1; the output of the final (EOS) token is also excluded.
doc_inputs['attention_mask'][:, -2:] = 0
doc_embeddings = pooling_func(doc_outputs, doc_inputs['attention_mask'])
# Normalize the embeddings
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
doc_embeddings = F.normalize(doc_embeddings, p=2, dim=1)
# Compute the similarity
similarity = torch.matmul(query_embeddings, doc_embeddings.T)
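# Optional: since each LENS dimension is tied to a token cluster, the largest entries
# of an embedding indicate which clusters fire for a given text. Mapping a dimension
# index back to its member tokens needs cluster metadata that is not loaded in this
# snippet, so the line below (an illustrative addition, not part of the original
# recipe) only inspects the raw dimension indices and their activation values.
top_values, top_dims = torch.topk(doc_embeddings[0], k=10)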
``` | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
adriansanz/sqv-5ep | adriansanz | sentence-similarity | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5750",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,725,348,938,000 | 2024-09-03T07:37:42 | 4 | 0 | ---
base_model: BAAI/bge-m3
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5750
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: El seu objecte és que -prèviament a la seva execució material-
l'Ajuntament comprovi l'adequació de l'actuació a la normativa i planejament,
així com a les ordenances municipals.
sentences:
- Quin és el paper de la normativa en la llicència de tala de masses arbòries?
- Com puc actualitzar les meves dades de naixement al Padró?
- Quin és el paper de la persona tècnica competent en la llicència per a la primera
utilització i ocupació parcial de l'edifici?
- source_sentence: El seu objecte és que -prèviament a la seva execució material-
l'Ajuntament comprovi l'adequació de l’actuació a la normativa i planejament,
així com a les ordenances municipals sobre l’ús del sòl i edificació.
sentences:
- Quin és el propòsit del tràmit CA05?
- Quin és el propòsit del tràmit de llicència d'instal·lació de producció d'energia
elèctrica?
- Quin és el paper de l'Ajuntament de Sant Quirze del Vallès en la notificació electrònica
de procediments?
- source_sentence: 'PROFESSIONALS: Assistència jurídica, traducció/interpretació,
psicologia, o qualsevol professió o habilitat que vulgueu posar a disposició del
banc de recursos.'
sentences:
- Quin és el propòsit del tràmit de comunicació prèvia d'obertura d'activitat de
baix risc?
- Quin és el tipus d’autorització que es necessita per a talls de carrers?
- Quin és el paper dels professionals en el banc de recursos?
- source_sentence: No està especificat
sentences:
- Quin és el percentatge de bonificació per a una família nombrosa amb 3 membres
i una renda màxima anual bruta de 25.815,45 euros?
- Quin és el propòsit del tràmit de baixa del Padró d'Habitants per defunció?
- Quin és el procediment per a cancel·lar les concessions de drets funeraris de
nínxols?
- source_sentence: 'Import En cas de renovació per caducitat, pèrdua, sostracció o
deteriorament: 12,00 € (en metàl·lic i preferiblement import exacte).'
sentences:
- Quin és el procediment per a la renovació del DNI en cas de sostracció?
- Quin és el paper del motiu legítim en l'oposició de dades personals en cas de
motiu legítim i situació personal concreta?
- Vull fer una activitat a l'espai públic, quin és el tràmit que debo seguir?
model-index:
- name: SentenceTransformer based on BAAI/bge-m3
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 1024
type: dim_1024
metrics:
- type: cosine_accuracy@1
value: 0.0406885758998435
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.11737089201877934
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.18153364632237873
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.3302034428794992
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0406885758998435
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.03912363067292644
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.03630672926447575
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.03302034428794992
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0406885758998435
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.11737089201877934
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.18153364632237873
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.3302034428794992
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.15804646538595332
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.10652433117221861
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.12794271910761573
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.03912363067292645
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.107981220657277
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.18153364632237873
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.3286384976525822
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.03912363067292645
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.03599374021909233
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.03630672926447575
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.03286384976525822
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.03912363067292645
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.107981220657277
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.18153364632237873
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.3286384976525822
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.15506867908727437
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.10328203790645119
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.12470788174358402
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.0406885758998435
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.10172143974960876
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.16588419405320814
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.3223787167449139
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0406885758998435
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.033907146583202916
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.03317683881064163
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.03223787167449139
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0406885758998435
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.10172143974960876
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.16588419405320814
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.3223787167449139
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.15172399342641055
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.1010190774275283
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.12301092660478197
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.04225352112676056
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.10954616588419405
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.18466353677621283
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.3270735524256651
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.04225352112676056
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.03651538862806468
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.03693270735524257
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.03270735524256651
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.04225352112676056
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.10954616588419405
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.18466353677621283
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.3270735524256651
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.15644008525556197
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.10541458628313109
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.1273528705075161
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.0406885758998435
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.11267605633802817
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.17996870109546165
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.3145539906103286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0406885758998435
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.03755868544600939
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.03599374021909233
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.03145539906103287
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0406885758998435
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.11267605633802817
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.17996870109546165
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.3145539906103286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.15177339619789426
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.10291936806021326
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.12605282457123526
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.0406885758998435
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.09859154929577464
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.1596244131455399
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.29107981220657275
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0406885758998435
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.03286384976525822
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.03192488262910798
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.02910798122065728
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0406885758998435
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.09859154929577464
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.1596244131455399
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.29107981220657275
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.14046451788883374
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.09552562287304085
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.11941800675417487
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
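In practice, the three modules above amount to running the XLM-RoBERTa encoder, keeping the first-token (`[CLS]`) hidden state, and L2-normalising it. The sketch below is illustrative only and assumes the repository follows the standard Sentence Transformers layout so that `transformers.AutoModel` can load the encoder directly; the `SentenceTransformer` API shown in the Usage section below remains the supported entry point.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("adriansanz/sqv-5ep")
encoder = AutoModel.from_pretrained("adriansanz/sqv-5ep")  # loads the XLMRobertaModel weights

texts = ["Quin és el propòsit del tràmit CA05?"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# CLS pooling: keep only the hidden state of the first token of each sequence.
embeddings = outputs.last_hidden_state[:, 0]
# Normalize module: L2-normalise so that dot products equal cosine similarities.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (1, 1024)
```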
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/sqv-5ep")
# Run inference
sentences = [
'Import En cas de renovació per caducitat, pèrdua, sostracció o deteriorament: 12,00 € (en metàl·lic i preferiblement import exacte).',
'Quin és el procediment per a la renovació del DNI en cas de sostracció?',
"Quin és el paper del motiu legítim en l'oposició de dades personals en cas de motiu legítim i situació personal concreta?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
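Because the model was trained with a Matryoshka objective over dimensions 1024/768/512/256/128/64 (see Training Details below), the embeddings can also be truncated to one of the smaller sizes at load time. A minimal sketch, assuming a sentence-transformers release recent enough to support the `truncate_dim` argument:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions, one of the Matryoshka sizes used during training.
model = SentenceTransformer("adriansanz/sqv-5ep", truncate_dim=256)
embeddings = model.encode(["Quin és el propòsit del tràmit CA05?"])
print(embeddings.shape)  # (1, 256)
```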
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_1024`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0407 |
| cosine_accuracy@3 | 0.1174 |
| cosine_accuracy@5 | 0.1815 |
| cosine_accuracy@10 | 0.3302 |
| cosine_precision@1 | 0.0407 |
| cosine_precision@3 | 0.0391 |
| cosine_precision@5 | 0.0363 |
| cosine_precision@10 | 0.033 |
| cosine_recall@1 | 0.0407 |
| cosine_recall@3 | 0.1174 |
| cosine_recall@5 | 0.1815 |
| cosine_recall@10 | 0.3302 |
| cosine_ndcg@10 | 0.158 |
| cosine_mrr@10 | 0.1065 |
| **cosine_map@100** | **0.1279** |
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0391 |
| cosine_accuracy@3 | 0.108 |
| cosine_accuracy@5 | 0.1815 |
| cosine_accuracy@10 | 0.3286 |
| cosine_precision@1 | 0.0391 |
| cosine_precision@3 | 0.036 |
| cosine_precision@5 | 0.0363 |
| cosine_precision@10 | 0.0329 |
| cosine_recall@1 | 0.0391 |
| cosine_recall@3 | 0.108 |
| cosine_recall@5 | 0.1815 |
| cosine_recall@10 | 0.3286 |
| cosine_ndcg@10 | 0.1551 |
| cosine_mrr@10 | 0.1033 |
| **cosine_map@100** | **0.1247** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.0407 |
| cosine_accuracy@3 | 0.1017 |
| cosine_accuracy@5 | 0.1659 |
| cosine_accuracy@10 | 0.3224 |
| cosine_precision@1 | 0.0407 |
| cosine_precision@3 | 0.0339 |
| cosine_precision@5 | 0.0332 |
| cosine_precision@10 | 0.0322 |
| cosine_recall@1 | 0.0407 |
| cosine_recall@3 | 0.1017 |
| cosine_recall@5 | 0.1659 |
| cosine_recall@10 | 0.3224 |
| cosine_ndcg@10 | 0.1517 |
| cosine_mrr@10 | 0.101 |
| **cosine_map@100** | **0.123** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0423 |
| cosine_accuracy@3 | 0.1095 |
| cosine_accuracy@5 | 0.1847 |
| cosine_accuracy@10 | 0.3271 |
| cosine_precision@1 | 0.0423 |
| cosine_precision@3 | 0.0365 |
| cosine_precision@5 | 0.0369 |
| cosine_precision@10 | 0.0327 |
| cosine_recall@1 | 0.0423 |
| cosine_recall@3 | 0.1095 |
| cosine_recall@5 | 0.1847 |
| cosine_recall@10 | 0.3271 |
| cosine_ndcg@10 | 0.1564 |
| cosine_mrr@10 | 0.1054 |
| **cosine_map@100** | **0.1274** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0407 |
| cosine_accuracy@3 | 0.1127 |
| cosine_accuracy@5 | 0.18 |
| cosine_accuracy@10 | 0.3146 |
| cosine_precision@1 | 0.0407 |
| cosine_precision@3 | 0.0376 |
| cosine_precision@5 | 0.036 |
| cosine_precision@10 | 0.0315 |
| cosine_recall@1 | 0.0407 |
| cosine_recall@3 | 0.1127 |
| cosine_recall@5 | 0.18 |
| cosine_recall@10 | 0.3146 |
| cosine_ndcg@10 | 0.1518 |
| cosine_mrr@10 | 0.1029 |
| **cosine_map@100** | **0.1261** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0407 |
| cosine_accuracy@3 | 0.0986 |
| cosine_accuracy@5 | 0.1596 |
| cosine_accuracy@10 | 0.2911 |
| cosine_precision@1 | 0.0407 |
| cosine_precision@3 | 0.0329 |
| cosine_precision@5 | 0.0319 |
| cosine_precision@10 | 0.0291 |
| cosine_recall@1 | 0.0407 |
| cosine_recall@3 | 0.0986 |
| cosine_recall@5 | 0.1596 |
| cosine_recall@10 | 0.2911 |
| cosine_ndcg@10 | 0.1405 |
| cosine_mrr@10 | 0.0955 |
| **cosine_map@100** | **0.1194** |
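The tables above come from the evaluator linked in each subsection. A minimal sketch of running it yourself, assuming a recent sentence-transformers release (for the `truncate_dim` argument) and placeholder `queries`/`corpus`/`relevant_docs` dictionaries standing in for the card's actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("adriansanz/sqv-5ep")

# Placeholder evaluation data; the split used for the tables above is not distributed here.
queries = {"q1": "Quin és el propòsit del tràmit CA05?"}
corpus = {"d1": "El seu objecte és que l'Ajuntament comprovi l'adequació de l'actuació a la normativa."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # evaluate on the first 256 Matryoshka dimensions
)
metrics = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR and MAP values
print(metrics)
```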
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,750 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 43.32 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 20.77 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| <code>Aquest tràmit permet donar d'alta ofertes de treball que es gestionaran pel Servei a l'Ocupació.</code> | <code>Com puc saber si el meu perfil és compatible amb les ofertes de treball?</code> |
| <code>El titular de l’activitat ha de declarar sota la seva responsabilitat, que compleix els requisits establerts per la normativa vigent per a l’exercici de l’activitat, que disposa d’un certificat tècnic justificatiu i que es compromet a mantenir-ne el compliment durant el seu exercici.</code> | <code>Quin és el paper del titular de l'activitat en la Declaració responsable?</code> |
| <code>Aquest tipus de transmissió entre cedent i cessionari només podrà ser de caràcter gratuït i no condicionada.</code> | <code>Quin és el paper del cedent en la transmissió de drets funeraris?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
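A minimal sketch of how this loss configuration maps onto the sentence-transformers API (the base model name is a placeholder, not necessarily the one used here):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
model = SentenceTransformer("BAAI/bge-large-en-v1.5") # placeholder base model
# the inner loss scores (anchor, positive) pairs against in-batch negatives
inner_loss = MultipleNegativesRankingLoss(model)
# the wrapper applies the same objective at every truncated embedding size
loss = MatryoshkaLoss(
model,
inner_loss,
matryoshka_dims=[1024, 768, 512, 256, 128, 64],
matryoshka_weights=[1, 1, 1, 1, 1, 1],
)
```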
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.2
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
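These non-default values map roughly onto a `SentenceTransformerTrainingArguments` object as in the sketch below (the output directory and any omitted arguments are assumptions):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers
args = SentenceTransformerTrainingArguments(
output_dir="output/matryoshka-run", # placeholder
eval_strategy="epoch",
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
gradient_accumulation_steps=16,
learning_rate=2e-5,
num_train_epochs=5,
lr_scheduler_type="cosine",
warmup_ratio=0.2,
bf16=True,
tf32=True,
load_best_model_at_end=True,
optim="adamw_torch_fused",
batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```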
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_1024_cosine_map@100 | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.4444 | 10 | 4.5093 | - | - | - | - | - | - |
| 0.8889 | 20 | 2.7989 | - | - | - | - | - | - |
| 0.9778 | 22 | - | 0.1072 | 0.1182 | 0.1122 | 0.1083 | 0.1044 | 0.1082 |
| 1.3333 | 30 | 1.8343 | - | - | - | - | - | - |
| 1.7778 | 40 | 1.5248 | - | - | - | - | - | - |
| 2.0 | 45 | - | 0.1182 | 0.1203 | 0.1163 | 0.1188 | 0.1209 | 0.1229 |
| 2.2222 | 50 | 0.9624 | - | - | - | - | - | - |
| 2.6667 | 60 | 1.1161 | - | - | - | - | - | - |
| **2.9778** | **67** | **-** | **0.1235** | **0.1324** | **0.1302** | **0.1252** | **0.1213** | **0.1239** |
| 3.1111 | 70 | 0.7405 | - | - | - | - | - | - |
| 3.5556 | 80 | 0.8621 | - | - | - | - | - | - |
| 4.0 | 90 | 0.6071 | 0.1249 | 0.1282 | 0.1310 | 0.1280 | 0.1181 | 0.1278 |
| 4.4444 | 100 | 0.7091 | - | - | - | - | - | - |
| 4.8889 | 110 | 0.606 | 0.1279 | 0.1261 | 0.1274 | 0.1230 | 0.1194 | 0.1247 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.4.0+cu121
- Accelerate: 0.35.0.dev0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"CAS"
] | Non_BioNLP |
sagteam/covid-twitter-xlm-roberta-large | sagteam | fill-mask | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:1911.02116",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2022-07-27T11:41:43 | 23 | 0 | ---
{}
---
# COVID-twitter-XLM-Roberta-large
## Model description
This is a model based on the [XLM-RoBERTa large](https://huggingface.co/xlm-roberta-large) topology (provided by Facebook, see the original [paper](https://arxiv.org/abs/1911.02116)) with additional training on a corpus of unlabeled tweets.
For more details, please see our [GitHub repository](https://github.com/sag111/COVID-19-tweets-Russia).
## Training data
We formed a corpus of unlabeled Twitter messages.
The data collected for the keyword "covid" were expanded with texts containing other words that often occur in hashtags about the Covid-19 pandemic: "covid", "stayhome", and "coronavirus" (hereinafter, these are translations of Russian words into English).
Separately, messages were collected from Twitter users in large regions of Russia. The search was performed using different word forms of 58 manually selected Russian keywords related to the topic of coronavirus infection (including "PCR", "pandemic", "self-isolation", etc.).
The unlabeled corpus includes all unique Russian-language tweets from the collected data (>1M tweets). Since modern language models are usually multilingual, about 1M more tweets in other languages were added to this corpus using the filtering procedures described above. Thus, the unlabeled part of the collected data contains about 2 million messages.
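As a quick illustration, the checkpoint can be queried through the standard fill-mask pipeline (the example sentence is an arbitrary assumption; XLM-RoBERTa models use `<mask>` as the mask token):
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="sagteam/covid-twitter-xlm-roberta-large")
# "During the pandemic it is important to observe <mask>."
print(fill_mask("Во время пандемии важно соблюдать <mask>."))
```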
### BibTeX entry and citation info
Our GitHub repository: https://github.com/sag111/COVID-19-tweets-Russia
If you have found our results helpful in your work, feel free to cite our publication and this repository as:
```
@article{sboev2021russian,
title={The Russian language corpus and a neural network to analyse Internet tweet reports about Covid-19},
author={Sboev, Alexander and Moloshnikov, Ivan and Naumov, Alexander and Levochkina, Anastasia and Rybka, Roman},
year={2021}
}
```
| [
"PCR"
] | Non_BioNLP |
raynardj/ner-gene-dna-rna-jnlpba-pubmed | raynardj | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"ner",
"gene",
"protein",
"rna",
"bioinfomatics",
"en",
"dataset:jnlpba",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-11-05T07:32:32 | 146 | 10 | ---
datasets:
- jnlpba
language:
- en
license: apache-2.0
tags:
- ner
- gene
- protein
- rna
- bioinfomatics
widget:
- text: It consists of 25 exons encoding a 1,278-amino acid glycoprotein that is composed
of 13 transmembrane domains
---
# NER to find Gene & Gene products
> The model was trained on the JNLPBA dataset, starting from this [pubmed-pretrained RoBERTa model](/raynardj/roberta-pubmed).
All the labels (the possible token classes) are listed below:
```json
{"label2id": {
"DNA": 2,
"O": 0,
"RNA": 5,
"cell_line": 4,
"cell_type": 3,
"protein": 1
}
}
```
Notice that we removed the 'B-', 'I-', etc. prefixes from the data labels. 🗡
## This is the template we suggest for using the model
```python
from transformers import pipeline
PRETRAINED = "raynardj/ner-gene-dna-rna-jnlpba-pubmed"
ner = pipeline(task="ner",model=PRETRAINED, tokenizer=PRETRAINED)
ner("Your text", aggregation_strategy="first")
```
And here is a helper to merge consecutive token predictions into contiguous spans ⭐️
```python
import pandas as pd
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
def clean_output(outputs):
    results = []
    current = []
    last_idx = 0
    # group outputs whose token indices are consecutive into sub-lists
    for output in outputs:
        if output["index"] - 1 == last_idx:
            current.append(output)
        else:
            results.append(current)
            current = [output, ]
        last_idx = output["index"]
    if len(current) > 0:
        results.append(current)
    # merge each group of tokens back into one string with its character span and entity
    strings = []
    for c in results:
        tokens = []
        starts = []
        ends = []
        for o in c:
            tokens.append(o['word'])
            starts.append(o['start'])
            ends.append(o['end'])
        new_str = tokenizer.convert_tokens_to_string(tokens)
        if new_str != '':
            strings.append(dict(
                word=new_str,
                start=min(starts),
                end=max(ends),
                entity=c[0]['entity']
            ))
    return strings
def entity_table(pipeline, **pipeline_kw):
if "aggregation_strategy" not in pipeline_kw:
pipeline_kw["aggregation_strategy"] = "first"
def create_table(text):
return pd.DataFrame(
clean_output(
pipeline(text, **pipeline_kw)
)
)
return create_table
# will return a dataframe
entity_table(ner)(YOUR_VERY_CONTENTFUL_TEXT)
```
> Check out our NER models for:
* [gene and gene products](/raynardj/ner-gene-dna-rna-jnlpba-pubmed)
* [chemical substance](/raynardj/ner-chemical-bionlp-bc5cdr-pubmed).
* [disease](/raynardj/ner-disease-ncbi-bionlp-bc5cdr-pubmed) | [
"BC5CDR",
"JNLPBA"
] | BioNLP |
zwellington/bart-pubhealth-expanded | zwellington | text2text-generation | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:clupubhealth",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,691,439,885,000 | 2023-08-08T12:04:48 | 23 | 0 | ---
base_model: facebook/bart-large
datasets:
- clupubhealth
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: bart-pubhealth-expanded
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: clupubhealth
type: clupubhealth
config: expanded
split: test
args: expanded
metrics:
- type: rouge
value: 29.8528
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-pubhealth-expanded
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the clupubhealth dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3926
- Rouge1: 29.8528
- Rouge2: 10.8495
- Rougel: 23.3682
- Rougelsum: 23.7565
- Gen Len: 19.85
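As a rough usage sketch (the input text and generation parameters are illustrative assumptions, not the settings used for evaluation):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="zwellington/bart-pubhealth-expanded")
claim_text = "..."  # placeholder: a public-health claim together with its supporting evidence
print(summarizer(claim_text, max_length=64, min_length=10, do_sample=False))
```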
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.7469 | 0.26 | 500 | 2.0845 | 30.9611 | 10.7145 | 23.9719 | 24.1042 | 19.905 |
| 2.5524 | 0.51 | 1000 | 2.0628 | 32.0352 | 11.8898 | 24.9032 | 25.1368 | 19.895 |
| 2.429 | 0.77 | 1500 | 2.0787 | 32.2632 | 12.0353 | 25.1245 | 25.3728 | 19.895 |
| 2.2234 | 1.03 | 2000 | 2.1178 | 30.6437 | 11.5713 | 24.9071 | 25.1126 | 19.955 |
| 2.1249 | 1.29 | 2500 | 2.1183 | 31.6095 | 11.6573 | 25.0593 | 25.2063 | 19.87 |
| 2.0302 | 1.54 | 3000 | 2.1319 | 30.7417 | 11.4924 | 24.6388 | 24.8722 | 19.895 |
| 1.9761 | 1.8 | 3500 | 2.1850 | 31.6709 | 11.3036 | 24.4853 | 24.7571 | 19.87 |
| 1.8279 | 2.06 | 4000 | 2.2092 | 31.5778 | 11.59 | 24.7599 | 24.9956 | 19.825 |
| 1.8083 | 2.32 | 4500 | 2.1781 | 31.0441 | 10.7513 | 24.0656 | 24.3112 | 19.89 |
| 1.7527 | 2.57 | 5000 | 2.2155 | 31.1191 | 11.4519 | 24.4673 | 24.7157 | 19.81 |
| 1.723 | 2.83 | 5500 | 2.2024 | 31.9787 | 12.3158 | 24.9863 | 25.2597 | 19.94 |
| 1.5975 | 3.09 | 6000 | 2.2567 | 31.236 | 10.9733 | 24.1302 | 24.3433 | 19.9 |
| 1.5933 | 3.35 | 6500 | 2.2425 | 31.022 | 11.0249 | 24.1257 | 24.3555 | 19.92 |
| 1.5792 | 3.6 | 7000 | 2.2428 | 29.8844 | 10.3622 | 23.0802 | 23.4003 | 19.96 |
| 1.5718 | 3.86 | 7500 | 2.2367 | 31.2369 | 11.3854 | 24.8528 | 25.1287 | 19.815 |
| 1.4467 | 4.12 | 8000 | 2.2988 | 30.4903 | 10.4057 | 23.9914 | 24.239 | 19.715 |
| 1.4458 | 4.37 | 8500 | 2.2738 | 31.4345 | 11.2989 | 24.4239 | 24.6047 | 19.75 |
| 1.4342 | 4.63 | 9000 | 2.3092 | 28.8421 | 10.5744 | 23.0084 | 23.1741 | 19.855 |
| 1.4416 | 4.89 | 9500 | 2.2747 | 31.7111 | 11.5903 | 24.3422 | 24.6867 | 19.945 |
| 1.3437 | 5.15 | 10000 | 2.3203 | 31.11 | 11.0 | 24.6098 | 24.7362 | 19.81 |
| 1.3525 | 5.4 | 10500 | 2.3085 | 29.414 | 10.3412 | 23.3134 | 23.6552 | 19.935 |
| 1.3533 | 5.66 | 11000 | 2.3123 | 31.321 | 11.2686 | 23.9922 | 24.336 | 19.77 |
| 1.3248 | 5.92 | 11500 | 2.2916 | 30.8841 | 10.779 | 23.9407 | 24.0865 | 19.845 |
| 1.2617 | 6.18 | 12000 | 2.3530 | 29.7167 | 10.3162 | 23.4805 | 23.724 | 19.93 |
| 1.2846 | 6.43 | 12500 | 2.3712 | 28.3334 | 9.8425 | 22.1151 | 22.2951 | 19.92 |
| 1.2472 | 6.69 | 13000 | 2.3378 | 29.563 | 10.0033 | 23.1863 | 23.5065 | 19.865 |
| 1.2934 | 6.95 | 13500 | 2.3262 | 29.137 | 10.1232 | 22.9234 | 23.3799 | 19.855 |
| 1.2136 | 7.21 | 14000 | 2.3640 | 29.753 | 10.4865 | 23.4892 | 23.8778 | 19.885 |
| 1.2096 | 7.46 | 14500 | 2.3654 | 29.512 | 10.3891 | 23.0427 | 23.3684 | 19.88 |
| 1.211 | 7.72 | 15000 | 2.3491 | 30.9014 | 10.9117 | 24.127 | 24.3518 | 19.785 |
| 1.1954 | 7.98 | 15500 | 2.3626 | 29.0622 | 10.5405 | 22.7407 | 22.9454 | 19.84 |
| 1.1756 | 8.23 | 16000 | 2.3759 | 29.5277 | 10.2961 | 22.7888 | 23.1239 | 19.88 |
| 1.1516 | 8.49 | 16500 | 2.3772 | 29.3161 | 10.1894 | 23.0404 | 23.486 | 19.885 |
| 1.1604 | 8.75 | 17000 | 2.3710 | 29.6161 | 10.3543 | 22.8748 | 23.1849 | 19.905 |
| 1.1639 | 9.01 | 17500 | 2.3889 | 30.2817 | 10.8654 | 23.6438 | 23.8639 | 19.895 |
| 1.12 | 9.26 | 18000 | 2.3968 | 28.8747 | 9.8686 | 22.2775 | 22.6541 | 19.895 |
| 1.1136 | 9.52 | 18500 | 2.3950 | 30.1197 | 10.8992 | 23.2575 | 23.5732 | 19.86 |
| 1.1437 | 9.78 | 19000 | 2.3926 | 29.8528 | 10.8495 | 23.3682 | 23.7565 | 19.85 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| [
"PUBHEALTH"
] | BioNLP |
BigSalmon/InformalToFormalLincoln106Paraphrase | BigSalmon | text-generation | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,690,160,978,000 | 2023-08-05T02:49:29 | 24 | 0 | ---
{}
---
data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln106Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln106Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
                         max_length=input_ids.shape[1] + 10,  # prompt length in tokens plus ~10 new tokens
                         temperature=1.0,
                         top_k=50,
                         top_p=0.95,
                         do_sample=True,
                         num_return_sequences=5,
                         early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above):
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
Backwards
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```
```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway.
```
```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
```
```
Q: What is whistleblower protection?
A: Whistleblower protection is a form of legal immunity granted to employees who expose the unethical practices of their employer.
Q: Why are whistleblower protections important?
A: Absent whistleblower protections, employees would be deterred from exposing their employer’s wrongdoing for fear of retribution.
Q: Why would an employer engage in retribution?
A: An employer who has acted unethically stands to suffer severe financial and reputational damage were their transgressions to become public. To safeguard themselves from these consequences, they might seek to dissuade employees from exposing their wrongdoing.
```
```
original: the meritocratic nature of crowdfunding [MASK] into their vision's viability.
infill: the meritocratic nature of crowdfunding [gives investors idea of how successful] -> ( offers entrepreneurs a window ) into their vision's viability.
```
```
Leadership | Lecture 17: Worker Morale
What Workers Look for in Companies:
• Benefits
o Tuition reimbursement
o Paid parental leave
o 401K matching
o Profit sharing
o Pension plans
o Free meals
• Social responsibility
o Environmental stewardship
o Charitable contributions
o Diversity
• Work-life balance
o Telecommuting
o Paid holidays and vacation
o Casual dress
• Growth opportunities
• Job security
• Competitive compensation
• Recognition
o Open-door policies
o Whistleblower protection
o Employee-of-the-month awards
o Positive performance reviews
o Bonuses
```
```
description: business
keywords: for-profit, fiduciary duty, monopolistic, bottom line, return on investment, short-term thinking, capital-intensive, self-interested, risk-taking, fiduciary duty, merger, speculation, profiteering, oversight, capitalism, diversification
```
```
3. In this task, you are given a company name and you need to find its industry.
McDonalds -- Restaurant
Facebook -- Social Network
IKEA -- Furniture
American Express -- Credit Services
Nokia -- Telecom
Nintendo -- Entertainment
4. In this task, you are given a Month and you need to convert it to its corresponding season
April -- Spring
December -- Winter
July -- Summer
October -- Fall
February -- Winter
5. In this task, you are given a sentence with a missing word and you need to predict the correct word.
Managers should set an _____ for their employees. -- example
Some people spend more than four _____ in the gym. -- hours
The police were on the _____ of arresting the suspect. -- verge
They were looking for _____ on how to solve the problem. -- guidance
What is the _____ of the coffee? -- price
6. In this task, you are given a paragraph and you need to reorder it to make it logical.
It was first proposed in 1987. The total length of the bridge is 1,828 meters. The idea of a bridge connects Hong Kong to Macau. -- The idea of bridge connecting Hong Kong and Macau was first proposed in 1987. The total length of the bridge is 1,828 meters.
It is a movie about a brave and noble policeman. The film was produced by Americans. They were Kevin Lima and Chris Buck. They are directors. The movie is called Tarzan. -- Produced by Americans Kevin Lima and Chris Buck, Tarzan is a movie about a brave and noble policeman.
It was first discovered in the mountains of India. The active ingredients in this plant can stimulate hair growth. The plant is called "Hair Plus." -- First discovered in the mountains of India, Hair Plus is a plant whose active ingredients can stimulate hair growth.
```
```
trivia: What is the population of South Korea?
response: 51 million.
***
trivia: What is the minimum voting age in the US?
response: 18.
***
trivia: What are the first ten amendments of the US constitution called?
response: Bill of Rights.
```
```
ideas: in modern-day america, it is customary for the commander-in-chief to conduct regular press conferences
related keywords: transparency, check and balance, sacrosanct, public accountability, adversarial, unscripted, direct access, open government, watchdog, healthy democracy, institutional integrity, right to know, direct line of communication, behind closed doors, updates, track progress, instill confidence, reassure, humanize, leadership style, day-to-day, forthcoming, demystify, ask hard questions
***
ideas: i know this one guy who retired so young, attesting to how careful they were with money.
related keywords: money management, resourceful, penny-pinching, live below their means, frugal, financial discipline, financial independence, conservative, long-term vision, discretionary spending, deferred gratification, preparedness, self-control, cushion
```
```
less specific: actors and musicians should ( support democracy ).
clarifies: actors and musicians should ( wield their celebrity to amplify pro-democracy messaging / marshal their considerable influence in the service of the democratic cause ).
***
less specific: amid a contemporary culture that thrives on profligacy, the discipline necessary to retire early is a vanishing quality. rather than yielding to the lure of indulgence, the aspiring retiree must ( be careful ).
clarifies: amid a contemporary culture that thrives on profligacy, the discipline necessary to retire early is a vanishing quality. rather than yielding to the lure of indulgence, the aspiring retiree must ( master their desires / exercise self-restraint / embrace frugality / restrain their appetite for splendor ).
```
```
dull: clean
emotional heft: spotless, immaculate, pristine
***
dull: hot
emotional heft: scorching, searing, blistering
***
dull: happy
emotional heft: euphoric
```
```
text: {guide: vividly describe the premise of the show "seinfield"} -> set in the heart of bustling new york city, the sitcom "seinfield" follows the everyday {restrict: term that implies they get into trouble but in a comical way} -> misadventures of four neurotic friends. on any given episode, one can find them quarreling over their favorite diner's latest menu change, haggling over the division of household expenses, or contriving a scheme to spy on the apartment's newest resident. mundane as their exploits may be, they never fail to elicit a hearty laugh. {guide: mention how larry david is responsible} -> behind the show's witty, incisive dialogue lies the sharp pen of larry david, who co-created the show with jerry seinfeld. {guide: mention how larry david came up with the most recognizable lines} -> it is his genius that conjured such instantly {restrict: term that imply everybody knows them} -> recognizable quips as "no soup for you!" and "not that there's anything wrong with that!". {guide: mention how humanity should revel in having such good comedians these days} -> as this list of laugh-out-loud lines attests, the world is fortunate to ( count such a sharp comedic mind among its ranks / have such a talented humorist in its midst / have such comedic talent gracing its airwaves ).
```
```
system: i'm not sure i agree with your assessment of the situation.
user: replace "assessment" with a more casual term.
chatbot: ( take / read / view ).
system: i've been feeling really busy lately, trying to balance work and family responsibilities.
user: replace "busy" with a more intense word that conveys it is too much.
chatbot: overwhelmed.
system: the candidate's stance on immigration is quite nebulous, and it's difficult to discern exactly what they would do if elected.
user: replace "nebulous" with a word that conveys a lack of detail.
chatbot: ( vague / ambiguous ).
```
```
user: exorbitant in price, scholarly journals unconscionably define the breadth of one's knowledge awareness by the contents of their wallet. [replace “knowledge awareness” with a more natural expression]
chatbot: intellectual horizons.
user: can you do another alternative to “intellectual horizons” that has more relation to “scholarly journals”?
chatbot: academic enlightenment.
```
```
key: calculate.
syn: estimate, consider, weigh, number, count, apportion, proportion, investigate, reckon, rate, compute.
ant: guess, conjecture, hit, chance, risk, stake, miscalculate.
```
```
description: more forceful version of curious that is less forceful than nosy
answer: inquisitive
description: more forceful version of hopeful that is less forceful than overconfident
answer: optimistic
```
```
key: inquisitive
positive: curious, interested
negative: nosy, prying
***
key: witty
positive: clever, humorous
negative: sarcastic, caustic
***
key: influential
positive: impactful, powerful
negative: overbearing, domineering
```
```
defective: the blogger's { use of language imprecise } confused an already complicated issue.
precise: the blogger's ( vague wording ) confused an already complicated issue.
defective: the senator's speech was high on { words sounding dignified } but low on concrete proposals.
precise: the senator's speech was high on ( lofty rhetoric ) but low on concrete proposals.
```
```
example: the new car uses gas.
boring: uses
stronger: guzzles
example: he hates people that are rude.
boring: hates
stronger: loathes, abhors, despises, scorns, detests
```
```
initial: The music at the party was [ loud; replace with a word that suggests a more uncomfortable noise level ] and overwhelming.
modified: The music at the party was { ear-splitting } and overwhelming.
initial: their house is [ small; replace with a positive spin ].
modified: their house is { cozy }.
```
```
defective: they spent the weekend enjoying { time do what you want }.
precise: they spent the weekend enjoying ( leisure activities).
defective: the author rightly notes the inequities perpetuated by { employment based on who you know }.
precise: the author rightly notes the inequities perpetuated by ( nepotism ).
defective: the senator's speech was high on { words sounding dignified } but low on concrete proposals.
precise: the senator's speech was high on ( lofty rhetoric ) but low on concrete proposals.
```
```
persona: human resources manager
buzzwords: pipeline, talent, retention, compensation, flexible, recruitment, personnel, resume, competitive, quality, onboard
``` | [
"BEAR"
] | Non_BioNLP |
mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF | mradermacher | null | [
"transformers",
"gguf",
"en",
"base_model:harrysyz/Llama-3.2-3B-pubMedQA-finalDecision",
"base_model:quantized:harrysyz/Llama-3.2-3B-pubMedQA-finalDecision",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,733,312,491,000 | 2024-12-04T12:00:21 | 85 | 0 | ---
base_model: harrysyz/Llama-3.2-3B-pubMedQA-finalDecision
language:
- en
library_name: transformers
tags: []
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/harrysyz/Llama-3.2-3B-pubMedQA-finalDecision
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
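For instance, a downloaded quant can be loaded through the `llama-cpp-python` bindings (a sketch; the file name points at one of the quants listed below, and the prompt and sampling settings are placeholders):
```python
from llama_cpp import Llama
# point model_path at whichever quant you downloaded, e.g. the Q4_K_M file
llm = Llama(model_path="Llama-3.2-3B-pubMedQA-finalDecision.Q4_K_M.gguf", n_ctx=2048)
out = llm("Question: Does aspirin reduce cardiovascular risk? Answer:", max_tokens=64)
print(out["choices"][0]["text"])
```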
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q4_0_4_4.gguf) | Q4_0_4_4 | 2.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-pubMedQA-finalDecision-GGUF/resolve/main/Llama-3.2-3B-pubMedQA-finalDecision.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"PUBMEDQA"
] | BioNLP |
pszemraj/pegasus-x-large-book-summary | pszemraj | summarization | [
"transformers",
"pytorch",
"safetensors",
"pegasus_x",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"base_model:google/pegasus-x-large",
"base_model:finetune:google/pegasus-x-large",
"license:apache-2.0",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,663,325,711,000 | 2023-09-23T20:46:57 | 1,273 | 35 | ---
base_model: google/pegasus-x-large
datasets:
- kmfoda/booksum
license:
- apache-2.0
- bsd-3-clause
metrics:
- rouge
tags:
- summarization
- summary
- booksum
- long-document
- long-form
languages: en
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a ''toolbox'' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg-
max P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur- face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).'
example_title: scientific paper
- text: 'Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By''s the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi''s nerocos and do rain become you to let so is his brother is made in
use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I''m also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I''m a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I''d Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let''s
get on with the lecture. There''s an exciting topic today I''m going to start
by sharing some slides with you and later on during the lecture we''ll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let''s get started. Today''s topic is a very important one. It''s
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It''s called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it''s a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let''s say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It''s got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you''ve seen before,
it''s the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You''re not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.'
example_title: transcribed audio - lecture
- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
& memory complexity (where nn is sequence length). Hence, it''s computationally
very expensive to apply transformer-based models on long sequences n > 512n>512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗''s recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one''s life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird''s attention is an approximation
of BERT''s full attention and therefore does not strive to be better than BERT''s
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT''s quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence ''BigBird is now available in HuggingFace for
extractive question answering''. In BERT-like attention, every word would simply
attend to all other tokens.
Let''s think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let''s consider following sentence as an example >>> example = [''BigBird'',
''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
''question'', ''answering'']
>>> # further let''s assume, we''re trying to understand the representation of
''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently ''available'' token doesn''t have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.'
example_title: bigbird blog intro
- text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical physics
most of the jokes will go over a typical viewer''s head. There''s also Rick''s
nihilistic outlook, which is deftly woven into his characterisation- his personal
philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
understand this stuff; they have the intellectual capacity to truly appreciate
the depths of these jokes, to realise that they''re not just funny- they say something
deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
wit unfolds itself on their television screens. What fools.. how I pity them.
😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
It''s for the ladies'' eyes only- and even then they have to demonstrate that
they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
kid 😎'
example_title: Richard & Mortimer
parameters:
max_length: 48
min_length: 2
no_repeat_ngram_size: 3
encoder_no_repeat_ngram_size: 3
early_stopping: true
length_penalty: 0.1
num_beams: 2
model-index:
- name: pszemraj/pegasus-x-large-book-summary
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 33.1401
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ1NjY1OGVjYWEwMzBjMzk3ZmMyZDA0ZTcxOTdmZTUxNTc0OGYxYmY3MzJkMzFmYTVjNzU2ZTk4MzE0NWMzMSIsInZlcnNpb24iOjF9.PSHB6DMF6tkwSw5nsFE57a2ApRAy_tkS6ziKA6PSTWddEdaqfca4pfig6_olmRmcS4KxN6HHcsmioHzv4LJQBw
- type: rouge
value: 9.3095
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzk3MTA3NmY1OGE3MzFjZTJhYWYzNGU4NTUzMTgwM2Y1NWZjMmEyNDNmNmEzYmQzZThjOGExMjc2ZjAyZjMzZCIsInZlcnNpb24iOjF9.tfgp8p-WlkVrfducTSg4zs-byeZMCmdZw1aizPQHXm_qRAwGtKcuVkZcmza5Y3o3VqsAEmGzg5HQD1vnZvWIDA
- type: rouge
value: 24.8552
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTVmMTIwNDQwNTI4MmI2MmY1ODc1Mjk0NGQ5ZWE4ZTYzOGNkMjY2ZmJhMjg2MTZlNTdhYTA2ZDAxNTFjMjA2MSIsInZlcnNpb24iOjF9.9HLgy9842oIDm6ABb3L94R1P4zAqTI0QN8aP62xzIyDxUXTbWw68PEDufYLiBJbTgZ8ElopZ9I7aou2zCgXeAA
- type: rouge
value: 29.0391
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmNhYWJjYjdjMzMxMmE4ZTE4NGEzMDdmZDZjODI5ZWRjZWJmYTEyZGIzYWQ2NjM3YzQ4MjI4ZTM4MmU5MzRjZSIsInZlcnNpb24iOjF9.d2yoVdmxjVJnsgIYFiLuaBO5Krgw4Axl5yeOSTKrvHygrAxoqT1nl4anzQiyoR3PwYBXwBkwmgpJUfZ7RNXtDQ
- type: loss
value: 2.288182497024536
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzM5NGIwODMxOTA3MTY3ODc2ZDczYTNmMTMwM2QyZmNlZjFmZDJjMGY3NWNkMDEyYzA4OTA2ZDRiODY3Zjg4OCIsInZlcnNpb24iOjF9.8k9mC050OS7mQSR9oA8liDRDQvEx1VxmTXGLmDYJVYYtTh2HYJFGP8Vy_krocFRIYDxh-IHPEOOSr5NrLMWHBA
- type: gen_len
value: 45.2173
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWZhNzQ5OTQ5Yjg5YjhlOTZiZmJhZjZiODNmY2E2OTg4YTg4NWVhYzRkNzM2Mzk4NzdlMDgxM2M4NjY2YzhhYSIsInZlcnNpb24iOjF9.tDEEsPUclZDygAdGhNrBGrF24vR8ao08Nw7hmtUt5lmSZZZK_u-8rpz97QgVS6MCJdjFVnbYC4bkFnlQWI_FAA
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: test
metrics:
- type: rouge
value: 39.7279
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTAxODk3OTUwMTIzODU3NzU2YzAzZjE2NTM3MzBjNDA0ZWRmZGU3NWUzNTg1YThhNDQ1NjQ5ZmM3OWI2YzBhNSIsInZlcnNpb24iOjF9.vnNKucBNt2-nIyODj9P2HeaWPX5AQR8L-DL8QzrO7kj58-vZnjT6hsAGmepRNzdZ1TLF-3j2J2plcNJ8lUO8Dg
- type: rouge
value: 10.8944
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjYzMmIxOTJmZjkxOGI5N2U0NTRmMmQwOGJhMzMxYWIzMWMzYzUwMDEyMDdiZDQ2YTUzOWU0OTViMTI2YTAwYiIsInZlcnNpb24iOjF9.De0PaAikWqfWpoIXTCYP-mSFu3PUATLX08Qq74OHXM8784heFVDX1E1sXlh_QbbKJbuMuZtTKM4qr7oLUizOAw
- type: rouge
value: 19.7018
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzI3MjQzOGQ3MGE3NDNkZTEyMWRkYjUyYTYzNDEwOWVjMGFmNTBiZjE4ZTBhMGYzMmI1Yzk0YjBmYmIzMWMxZSIsInZlcnNpb24iOjF9.FVikJ5Ma0gUgM-tpbomWXnC4jtmvhxqikPqCk84t4IbIdU0CIYGTQEONiz-VqI0fJeNrnTS6lxpBv7XxKoq3BQ
- type: rouge
value: 36.5634
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTI2OTVmNDZiZWE5ZjNkODIwZjJiNTU2ZjJjYjczODUwM2JiNDEzYmE3N2U5YWM5NzJjOWEzMmYzZjdlYWJmYyIsInZlcnNpb24iOjF9.poR4zcqRvdaierfWFdTa53Cv6ZbNbnRwyRTi9HukHF5AWAQgc6zpBLkwOYFYoWjuSH83ohWeMM3MoIdw3zypBw
- type: loss
value: 2.473011016845703
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDFmMjg3NWQ2YTMxMTc1OGZiYWYzNjg5NDY3MWE4MjY5ZDQxZDZhZGI1OTc5MzZkZGEzYmVlNWFiMzZjNDdhNCIsInZlcnNpb24iOjF9.05nKB3SmEfFKSduJqlleF4Fd2_IhwJS8eTOrnzZYCQQfLCfpJAZLhp3eLQCuBY4htd-FNrZftrThL66zVxyrCQ
- type: gen_len
value: 212.8243
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGNjMTg4ZDZlZjAxZGNhN2M0NWI0ZTA0OWEzNDkzNDAzOTJhODA2MmVkODI4YjYzN2FiOTU1ZDMwM2VlNWMyYyIsInZlcnNpb24iOjF9.WYx6XJFKokY2heoN-jpAMp1Z1gsyJus3zpktQgNd0FOYJxOUqW40A0kkHtd15y4dUhsbccLpuJGY1fNJgHOiDw
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- type: rouge
value: 42.1065
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDJhNDM2MWEwMjJlYjRmZTVkYzljODcwMzlmMGUxMDA4ZmRjNjM0NmY3ZWJlMmZjNGI3NDQ3NTQyOTQ3MjBkNSIsInZlcnNpb24iOjF9.l1MiZbXyFyXAcsfFChMrTvSaBhzBR6AuDnBuII8zY3Csz3ShWK0vo09MkQdZ1epe8PKWV9wwUBuJyKk3wL7MDw
- type: rouge
value: 15.4079
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY3NDBkYTVkNjdhY2I0ZmY0NTA4YzVkMGE5YWE5ODdjOGE1MDhkOTJhOWY3NmI2ZWI1MGU2MGI1NDRlYjI3MSIsInZlcnNpb24iOjF9.VN-5eK2SzFDCJnFTHHu7XCU_lynaxW_JEDc3llmcNo_ffDgRmISHHGaqV7fPFymBBMXpPly7XblO_sukyqj1Cg
- type: rouge
value: 24.8814
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDYyNGZmNDY3MTY4YzI4ZjZhODE0NGIyN2ZkOGEyYzM3MWZjM2QzZTg5ZjNmZmYzZDE5NzhiZDQ4OGM1YjNiMyIsInZlcnNpb24iOjF9.L73M1M5XdMQkf8zSdfLN0MUrxtO0r6UiLjoOkHfrIGbWNsNJ8tU5lciYFNIhJrICUL8LchCsFqR9LAClKS4bCg
- type: rouge
value: 36.0375
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTBlMTQ5OTQxNTA3ZmFiMGYyZWQ0MGM0ODY2YWI3MzgyNjkwNzQyM2FmNGRjMzc3MjJmZDZkOWY4M2RhZTg2MSIsInZlcnNpb24iOjF9.IiMSSVahBgH8n34bGCC_DDGpujDXQbIvGhlcpVV2EBVQLLWUqcCy5WwBdbRrxPC-asBRCNERQxj8Uii4FvPsDQ
- type: loss
value: 1.9130958318710327
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTg2NTMxZDE3MDg3MDFkMTYxNjY1OTc5YjQ4ODcyMGUxMTFiZjJiNDgyYWZhN2NjZmE1MDQ1NTRmZGY0NjQzZSIsInZlcnNpb24iOjF9.kADUBMO8i6-oGDDt1cOiGMrGcMkF_Qc1jSpS2NSFyksDRusQa_YuuShefF4DuHVEr3CS0hNjjRH9_JBeX9ZQDg
- type: gen_len
value: 179.2184
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjM4NGNiMTY3YzZjMzg4MTRiMDdiZDFiMzA1ZDIyMDM2MDk1OWRhYWQzN2UxZDNlODIxOWVhY2JlYjk4Mjk5YyIsInZlcnNpb24iOjF9.nU8ImMNWgjg9BKjUBJQLFaJOBq3kyIne8ldlpL0OV0e4888wOntIAcJP0dCCYfRSLVmZuXQ1M8cpDuTf50hNCw
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 35.2154
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWQ5MGMzNDc4MDBiNmRiNDY5ZDM4N2QzYTJlYTNiYTcwNDBlMzdlM2I4N2VmM2ZjMmQ3NGU3OTRlMTMzMTg3NyIsInZlcnNpb24iOjF9.E55gu7HvMwc4HejF3YOD6yqQJj7_6GCoCMWm78sY5_w2glR-oM98tu9IsG27VaPva7UklxsspzT2DIVaVKY0CQ
- type: rouge
value: 6.8702
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjFhN2JlYzlmMGZmYzkwYjBlNjY4YzhlYzNmMTdmZWYyYmU3NWI0ZTRkMTgxNmRiM2EyZWMyMWFjY2JkNzg1MCIsInZlcnNpb24iOjF9.I9BoHbGt8LLNtLAssIXm9tQ4lHqFCMt0zJS_zTezzxGRMS5On71c3jnlzrDtwEm6wjmZEwYIJK8qqJh-Qa5YAA
- type: rouge
value: 17.6693
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGZlZjcwOTZjMmNjZWFkM2M5Zjg1OTgzMzcxOTM2Y2RkMzY4NGU2NDE2MTVjMjcyMWIwNWI4ODc0YTY3YTA2MSIsInZlcnNpb24iOjF9.Ou1C6U6PrOtXPxlk9PMucdJ_vlnVnSk94QrLJL4b_g2pcY3D80Xrw09iz4BTOPzZ2UTNBLyn8YdLY3m2vHpiAQ
- type: rouge
value: 32.8365
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmIzMGQ5MzQ1MjI4MTU0ZGZkZTRhODllNWQyOTQ4ZjA5YWE4ZTJjMzQ2ZWQzOGFiMWUzZDMxOTU5NzkxYjliZiIsInZlcnNpb24iOjF9.2mYURQZYo7e3AY0tfkpqFMNhoHvrysvBXza-XYYrX_xLpruMU9Gzrwc3jvpi2wtp4eeyhzIiZJvH0O6la6zxCg
- type: loss
value: 2.9878039360046387
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGU0ODBmN2I3OGFkNTFiM2I3YWQyNmUzNzUwYzEwNzczZWEwZjIxYTAwZDE2ZTIwMGE3ZGNmMDQzNTFmNjEwYyIsInZlcnNpb24iOjF9.0IKWIImKTXqysQUb2IMPk2eeHlOcBjndiPcU42nfFBMhRTqeXdBqOCP6cidlho7pVN4hsC-77ArJ9pZlbTFuBg
- type: gen_len
value: 200.6785
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDUzYTE3MmIxZGM3MWI1MjNhMTU3MTdkMjJjNjY5Y2UzYTdjYWRiY2I4MmUxMDY4NTA5NWZjYWU0NzliODdkYiIsInZlcnNpb24iOjF9.BqmCaWzbCMNUied6zNO744Dl-0LC47FCIv-l8kDjkhSkwQcb_hi93VYts5PTsrFY_MmM8j7AsY1PiFr6nNFMBQ
- task:
type: summarization
name: Summarization
dataset:
name: big_patent
type: big_patent
config: y
split: test
metrics:
- type: rouge
value: 37.376
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWI4ZjMxODcxMThiMzE3NjQ3Zjg0NzhmZjlhY2ZmYjQwMGY5ZjlkZGY1MzZmY2M5YTU4NmY1Y2NhZDA3YWFkOCIsInZlcnNpb24iOjF9.sYh4IynXgOpVetYYSWUp0v5QZWvXC1x7_uJR0LZUxaeYKEc4yfICNmDOPzNzoroaV4ELeOaPjHQpYVm-lpAHBA
- type: rouge
value: 11.4432
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTZkOGIyYzU3YTQ5ZTFmMDU3MjQ5ZWM2NGQ1MzgwMDYyZDkxN2Q2YjgyZTkzMTEyYjczMGJiYmNkZmU5MTQ3NSIsInZlcnNpb24iOjF9.Qk38acpjPjU64Z1nXEuqMXjKZrGvdC9oY586EjuCPeEAJCSzKimp8FsB-1QrjMH73q6rN2CdumJUxih6HF-KAA
- type: rouge
value: 22.2754
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzlmOTUxYmEzYzYyYmVjNGZlNzNiZWIwZmQ5OWVlY2U3NTBiZDExYWUwODQ0Y2ZjMmQyMTNmMTlmNjdmZWUwNCIsInZlcnNpb24iOjF9.bUVhxaepySyaityby71j6h4YO_l4x8OSeZoblagwUMYGXRc0Ej286QzEtZFeRGygMJ5sjUN_loWCtOmAnHY2BA
- type: rouge
value: 32.5087
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDEyNjM5NjAzYTNjN2MwZTY4MWY2Y2U5YWUyM2Y1YjAyNjBhZTM0YTAyZjM5N2M1ZDkxOWUxNzE2OWZkYTBmMSIsInZlcnNpb24iOjF9.QfMHkcoAR3xqzsgL1xjHk3Lui1xhE12pJKvYujQ_h5o6PBXT79dsENsrqDGGBjiKdTKNwWqADgaviy1VrWMDCQ
- type: loss
value: 2.9867310523986816
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTUzM2Q5MmE5MzU4YmFlMjFiMmUzZGU2NDAzMTQ1Y2NjZDVlYWI3NGE5MjM0NmMxMjdiOWI3MTU0NDk3NmNkZiIsInZlcnNpb24iOjF9.VoQqu6ZU3AR_cji82UkpvbLnTmZ17fZmR2E4DeonjCyTZpyyfvUsQ2nbKDovQf34DBkYXENk42EUsUF1mBZNBg
- type: gen_len
value: 172.7776
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTEzNTMyMDY1N2Q5ZTMxNjNlMTI0Nzk5ZDc1ZWQ5Y2IwZWM0NWNhNWY2MTk3YTRkYzUwMTI4NjZiOWVhOGQwYSIsInZlcnNpb24iOjF9.-Rek2VFmGqIEgqeFoxU_0aCWdFbGYi9BV5c7x-izm9_4vtZdYQ4ITXm4T8C3UlpOax60veJQt2Uax5vyiFc9Ag
---
# pszemraj/pegasus-x-large-book-summary
<a href="https://colab.research.google.com/gist/pszemraj/6c326c0649233ab017d63adc36958d1a/pegasus-x-large-booksum-demo.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Get SparkNotes-esque summaries of arbitrary text! Due to the model size, it's recommended to try it out in Colab (linked above) as the API textbox may time out.
This model is a fine-tuned version of [google/pegasus-x-large](https://huggingface.co/google/pegasus-x-large) on the `kmfoda/booksum` dataset for approximately eight epochs.
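A minimal usage sketch (not from the original card) is shown below. It assumes the standard 🤗 Transformers summarization pipeline; the generation settings simply mirror the widget parameters in this card's metadata and can be adjusted freely.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="pszemraj/pegasus-x-large-book-summary")

long_text = "..."  # placeholder: paste the chapter or document you want summarized
result = summarizer(
    long_text,
    max_length=48,                    # widget defaults from the metadata above
    min_length=2,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    num_beams=2,
    length_penalty=0.1,
    early_stopping=True,
    truncation=True,                  # long inputs are truncated to the model's max length
)
print(result[0]["summary_text"])
```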
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
#### Epochs 1-4
TODO
#### Epochs 5 & 6
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas
- lr_scheduler_type: constant_with_warmup
- data type: TF32
- num_epochs: 2
#### Epochs 7 & 8
- epochs 5 & 6 were trained with an input length of 12288 tokens
- epochs 7 & 8 fix that with 2 epochs at an input length of 16384 tokens
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas (see the sketch after this list)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
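The snippet below is a hedged illustration of how the ADAN optimizer might be instantiated with lucidrains' `adan-pytorch` package; it is not taken from the original training script, and only the learning rate comes from the list above.
```python
from adan_pytorch import Adan
from transformers import AutoModelForSeq2SeqLM

# Assumption: training starts from the pegasus-x-large base checkpoint.
model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-x-large")
optimizer = Adan(model.parameters(), lr=0.0004)  # default betas, per the notes above
```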
### Framework versions
- Transformers 4.22.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.4.0
- Tokenizers 0.12.1
| [
"BEAR"
] | Non_BioNLP |
longluu/Clinical-NER-MedMentions-GatorTronBase | longluu | token-classification | [
"transformers",
"safetensors",
"megatron-bert",
"token-classification",
"arxiv:1902.09476",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,706,909,579,000 | 2024-02-11T15:35:56 | 27 | 0 | ---
license: mit
pipeline_tag: token-classification
widget:
- text: Alzheimer's disease (AD) is characterized pathologically by amyloid-beta (Aβ)
deposition in brain parenchyma and blood vessels (as cerebral amyloid angiopathy
(CAA)) and by neurofibrillary tangles of hyperphosphorylated tau. Compelling genetic
and biomarker evidence supports Aβ as the root cause of AD. We previously reported
human transmission of Aβ pathology and CAA in relatively young adults who had
died of iatrogenic Creutzfeldt-Jakob disease (iCJD) after childhood treatment
with cadaver-derived pituitary growth hormone (c-hGH) contaminated with both CJD
prions and Aβ seeds. This raised the possibility that c-hGH recipients who did
not die from iCJD may eventually develop AD. Here we describe recipients who developed
dementia and biomarker changes within the phenotypic spectrum of AD, suggesting
that AD, like CJD, has environmentally acquired (iatrogenic) forms as well as
late-onset sporadic and early-onset inherited forms. Although iatrogenic AD may
be rare, and there is no suggestion that Aβ can be transmitted between individuals
in activities of daily life, its recognition emphasizes the need to review measures
to prevent accidental transmissions via other medical and surgical procedures.
As propagating Aβ assemblies may exhibit structural diversity akin to conventional
prions, it is possible that therapeutic strategies targeting disease-related assemblies
may lead to selection of minor components and development of resistance.
- text: 'Background: Nonalcoholic steatohepatitis (NASH) is a progressive liver disease
with no approved treatment. Resmetirom is an oral, liver-directed, thyroid hormone
receptor beta-selective agonist in development for the treatment of NASH with
liver fibrosis. Methods: We are conducting an ongoing phase 3 trial involving
adults with biopsy-confirmed NASH and a fibrosis stage of F1B, F2, or F3 (stages
range from F0 [no fibrosis] to F4 [cirrhosis]). Patients were randomly assigned
in a 1:1:1 ratio to receive once-daily resmetirom at a dose of 80 mg or 100 mg
or placebo. The two primary end points at week 52 were NASH resolution (including
a reduction in the nonalcoholic fatty liver disease [NAFLD] activity score by
≥2 points; scores range from 0 to 8, with higher scores indicating more severe
disease) with no worsening of fibrosis, and an improvement (reduction) in fibrosis
by at least one stage with no worsening of the NAFLD activity score. Results:
Overall, 966 patients formed the primary analysis population (322 in the 80-mg
resmetirom group, 323 in the 100-mg resmetirom group, and 321 in the placebo group).
NASH resolution with no worsening of fibrosis was achieved in 25.9% of the patients
in the 80-mg resmetirom group and 29.9% of those in the 100-mg resmetirom group,
as compared with 9.7% of those in the placebo group (P<0.001 for both comparisons
with placebo). Fibrosis improvement by at least one stage with no worsening of
the NAFLD activity score was achieved in 24.2% of the patients in the 80-mg resmetirom
group and 25.9% of those in the 100-mg resmetirom group, as compared with 14.2%
of those in the placebo group (P<0.001 for both comparisons with placebo).'
---
# Model Card for Model longluu/Clinical-NER-MedMentions-GatorTronBase
The model is an LLM-based named-entity recognition (NER) model that classifies each word in a text into different clinical categories.
## Model Details
### Model Description
The base pretrained model is GatorTron-base, which was trained on billions of words from various clinical texts (https://huggingface.co/UFNLP/gatortron-base).
Then, using the MedMentions dataset (https://arxiv.org/pdf/1902.09476v1.pdf), I fine-tuned the model for the NER task, in which the model classifies each word in a text into different clinical categories.
The category system is a simplified version of the UMLS concept system and consists of 21 categories:
"['Living Beings', 'Virus']", "['Living Beings', 'Bacterium']", "['Anatomy', 'Anatomical Structure']", "['Anatomy', 'Body System']", "['Anatomy', 'Body Substance']", "['Disorders', 'Finding']", "['Disorders', 'Injury or Poisoning']", "['Phenomena', 'Biologic Function']", "['Procedures', 'Health Care Activity']", "['Procedures', 'Research Activity']", "['Devices', 'Medical Device']", "['Concepts & Ideas', 'Spatial Concept']", "['Occupations', 'Biomedical Occupation or Discipline']", "['Organizations', 'Organization']", "['Living Beings', 'Professional or Occupational Group']", "['Living Beings', 'Population Group']", "['Chemicals & Drugs', 'Chemical']", "['Objects', 'Food']", "['Concepts & Ideas', 'Intellectual Product']", "['Physiology', 'Clinical Attribute']", "['Living Beings', 'Eukaryote']", 'None'
### Model Sources [optional]
The github code associated with the model can be found here: https://github.com/longluu/LLM-NER-clinical-text.
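For quick experimentation, a minimal usage sketch is shown below. It assumes the standard 🤗 Transformers token-classification pipeline (it is not code from this repo), and the example sentence is adapted from the widget text above.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="longluu/Clinical-NER-MedMentions-GatorTronBase",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = (
    "Resmetirom is a thyroid hormone receptor beta-selective agonist "
    "in development for the treatment of NASH with liver fibrosis."
)
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], round(entity["score"], 3))
```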
## Training Details
### Training Data
The MedMentions dataset contains 4,392 abstracts released in PubMed® between January 2016 and January 2017. The abstracts were manually annotated for biomedical concepts. Details are provided in https://arxiv.org/pdf/1902.09476v1.pdf and the data is in https://github.com/chanzuckerberg/MedMentions.
#### Training Hyperparameters
The hyperparameters are:
- `--batch_size 4`
- `--num_train_epochs 5`
- `--learning_rate 5e-5`
- `--weight_decay 0.01`
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was trained and validated on train and validation sets. Then it was tested on a separate test set.
Note that some concepts in the test set were not available in the train and validation sets.
#### Metrics
Here we use several metrics for classification tasks, including macro-average F1, precision, recall, and Matthews correlation.
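As a brief illustration (not from the original card), metrics like these are typically computed along the following lines with scikit-learn; the label lists below are placeholders, not the actual test data.
```python
from sklearn.metrics import f1_score, precision_score, recall_score, matthews_corrcoef

# Placeholder gold and predicted labels for a handful of tokens.
y_true = ["Disorders", "None", "Chemicals & Drugs", "None"]
y_pred = ["Disorders", "None", "None", "None"]

print("macro F1:       ", f1_score(y_true, y_pred, average="macro"))
print("macro precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("macro recall:   ", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("Matthews corr.: ", matthews_corrcoef(y_true, y_pred))
```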
### Results
```
{'f1': 0.6271402249699903,
 'precision': 0.6691625224055963,
 'recall': 0.6085333637974402,
 'matthews_correlation': 0.720898121696139}
```
## Model Card Contact
Feel free to reach out to me at [email protected] if you have any questions or suggestions. | [
"MEDMENTIONS"
] | BioNLP |
Triangle104/EtherealRainbow-v0.3-8B-Q5_K_M-GGUF | Triangle104 | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:invisietch/EtherealRainbow-v0.3-8B",
"base_model:quantized:invisietch/EtherealRainbow-v0.3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,731,948,007,000 | 2024-11-18T16:42:36 | 2 | 1 | ---
base_model: invisietch/EtherealRainbow-v0.3-8B
language:
- en
library_name: transformers
license: llama3
tags:
- mergekit
- merge
- not-for-all-audiences
- llama-cpp
- gguf-my-repo
---
# Triangle104/EtherealRainbow-v0.3-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`invisietch/EtherealRainbow-v0.3-8B`](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B) for more details on the model.
---
Model details:
-
Ethereal Rainbow is an 8B parameter merge of various Llama3-based finetunes created using mergekit. The purpose of Ethereal Rainbow is to create an uncensored Llama3 variant which is capable of writing creative prose, and engaging in SFW as well as NSFW roleplay and storytelling, with a strong focus on long-form responses & adherence to prompts.
v0.3 improves creativity over v0.2 without losing coherence. It has been tested over more than 1,000 messages including roleplay, code prompts, and 'write a scene'-type prompts.
Feedback
-
I appreciate all feedback on any of my models; you can use:
- My Discord server - requires Discord.
- The Community tab - requires HF login.
- The SillyTavern Discord thread - must be on SillyTavern Discord.
- Discord DMs to invisietch.
Your feedback is how I improve these models for future versions.
Disclaimer
-
This model is built on an abliterated base and as such is largely uncensored. It can generate explicit, disturbing or offensive responses. Use responsibly. I am not responsible for your use of this model.
Prompting Format
I'd recommend Llama-3 Instruct prompting format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
Some of the models included in the merge were trained on ChatML & Alpaca so you can try those. I have not tested them.
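For programmatic use, the sketch below (an illustration, not from the original card) shows how the Llama-3 Instruct layout above can be produced with the 🤗 Transformers chat template; it assumes the unquantized invisietch/EtherealRainbow-v0.3-8B repo ships a Llama-3 chat template.
```python
from transformers import AutoTokenizer

# Assumption: the unquantized base repo provides a Llama-3 chat template.
tokenizer = AutoTokenizer.from_pretrained("invisietch/EtherealRainbow-v0.3-8B")

messages = [
    {"role": "system", "content": "You are a creative narrator."},
    {"role": "user", "content": "Write the opening scene of a noir story."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the <|start_header_id|> ... <|eot_id|> layout shown above
```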
Example Storywriting
These prompts are used on SillyTavern with a fairly basic narrator card. I have trimmed the start and finish where the narrator decided to add chapter headings, commentary and the like. All samples are made with the F32 GGUF loaded with koboldcpp, with response length capped at 2048 tokens.
- Write me a 3,000 word opening chapter of a 'gritty hard sci-fi' novel, drawing inspiration from the writing styles of Isaac Asimov & Andy Weir. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 26 year old astronaut called Tone on a mission to Europa, who has just realised that the craft for the return journey is broken beyond repair, and he only has supplies for a few months. Given that survival is impossible, he seeks to spend the few months he has researching titan, so his life & mission are not wasted.
- Write me a 3,000 word opening chapter of a 'high fantasy' novel, drawing inspiration from the writing styles of J R R Tolkien & George R R Martin. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 19 year old female elf bard who is looking for adventure.
- Write me a 3,000 word opening chapter of a 'weird fiction' novel, drawing inspiration from the writing styles of China Mieville and Neil Gaiman. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a male in his 20s called Horton who has just come to the city looking for work.

- I chose the hard sci-fi example to test positivity bias. It did require some prompting, but it was willing to kill the protagonist.
- I chose the high fantasy example to see whether it would bleed human features through to elves; this didn't occur.
- I chose the weird fiction example to see if the LLM understood a niche genre. I'd say it performed okay, better on style than on substance.
Merge Strategy
First, we create three bases:
- Rain - This is a roleplay base which makes up the majority of the model.
- Sun - This is the brains of the model, with strong instruct models & writing models.
- Ghost - This model primarily aims to improve the NSFW/NSFL aspects of the model, as well as general vocabulary.
After this, we have a two-slerp stage to create the final model.
Models Used
The following models were used to create EtherealRainbow-v0.3-8B:
- mlabonne/NeuralDaredevil-8B-abliterated
- Sao10K/L3-8B-Stheno-v3.2
- Nitral-AI/Hathor-L3-8B-v.02
- grimjim/Llama-3-Luminurse-v0.2-OAS-8B
- hf-100/Llama-3-Spellbound-Instruct-8B-0.3
- Gryphe/Pantheon-RP-1.0-8b-Llama-3
- Blackroot/Llama-3-LongStory
- Locutusque/Llama-3-Hercules-5.0-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- mpasila/Llama-3-LimaRP-Instruct-8B
- Undi95/Llama-3-LewdPlay-8B-evo
Mergekit Configs
-
Rain
-
```yaml
models:
  - model: mlabonne/NeuralDaredevil-8B-abliterated
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.41
      weight: 0.4
  - model: Nitral-AI/Hathor-L3-8B-v.02
    parameters:
      density: 0.53
      weight: 0.5
  - model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
    parameters:
      density: 0.45
      weight: 0.1
merge_method: dare_ties
base_model: mlabonne/NeuralDaredevil-8B-abliterated
parameters:
  int8_mask: true
dtype: bfloat16
```
Sun
-
```yaml
models:
  - model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
    parameters:
      density: 0.48
      weight: 0.5
  - model: Blackroot/Llama-3-LongStory
    parameters:
      density: 0.36
      weight: 0.2
  - model: Locutusque/Llama-3-Hercules-5.0-8B
    parameters:
      density: 0.51
      weight: 0.3
merge_method: dare_ties
base_model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
parameters:
  int8_mask: true
dtype: bfloat16
```
Ghost
-
```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.39
      weight: 0.3
  - model: mpasila/Llama-3-LimaRP-Instruct-8B
    parameters:
      density: 0.54
      weight: 0.4
  - model: Undi95/Llama-3-LewdPlay-8B-evo
    parameters:
      density: 0.49
      weight: 0.3
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
Stage1 Slerp
-
```yaml
models:
  - model: ./fp16/Rain-v0.3-8B
  - model: ./fp16/Ghost-v0.3-8B
merge_method: slerp
base_model: ./fp16/Rain-v0.3-8B
parameters:
  t:
    - value: [0, 0, 0.1, 0.3, 0.5, 0.7, 0.5, 0.3, 0.1, 0, 0]
  embed_slerp: true
dtype: bfloat16
tokenizer-source: model:./fp16/Rain-v0.3-8B
```
Final-Stage Slerp
-
```yaml
models:
  - model: ./fp16/ERStage1-v0.3-8B
  - model: ./fp16/Sun-v0.3-8B
merge_method: slerp
base_model: ./fp16/ERStage1-v0.3-8B
parameters:
  t:
    - value: [0, 0, 0.1, 0.2, 0.4, 0.6, 0.4, 0.2, 0.1, 0, 0]
  embed_slerp: true
dtype: bfloat16
tokenizer-source: model:./fp16/ERStage1-v0.3-8B
```
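As an illustration (not part of the original card), each of the configs above could be run with the mergekit CLI roughly as follows; the YAML file names are hypothetical placeholders, and the listed source models must be available locally or on the Hub.
```bash
pip install mergekit
# Build the three bases, then the two slerp stages (config file names are placeholders).
mergekit-yaml rain-v0.3.yml ./fp16/Rain-v0.3-8B --cuda
mergekit-yaml sun-v0.3.yml ./fp16/Sun-v0.3-8B --cuda
mergekit-yaml ghost-v0.3.yml ./fp16/Ghost-v0.3-8B --cuda
mergekit-yaml stage1-slerp.yml ./fp16/ERStage1-v0.3-8B --cuda
mergekit-yaml final-slerp.yml ./EtherealRainbow-v0.3-8B --cuda
```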
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/EtherealRainbow-v0.3-8B-Q5_K_M-GGUF --hf-file etherealrainbow-v0.3-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/EtherealRainbow-v0.3-8B-Q5_K_M-GGUF --hf-file etherealrainbow-v0.3-8b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/EtherealRainbow-v0.3-8B-Q5_K_M-GGUF --hf-file etherealrainbow-v0.3-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/EtherealRainbow-v0.3-8B-Q5_K_M-GGUF --hf-file etherealrainbow-v0.3-8b-q5_k_m.gguf -c 2048
```
| [
"CRAFT"
] | Non_BioNLP |
BigSalmon/InformalToFormalLincoln91Paraphrase | BigSalmon | text-generation | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,669,347,065,000 | 2022-11-25T04:28:48 | 51 | 0 | ---
{}
---
data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln91Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln91Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(
    input_ids=input_ids,
    max_length=input_ids.shape[1] + 10,  # ~10 new tokens beyond the prompt (token count, not characters)
    temperature=1.0,
    top_k=50,
    top_p=0.95,
    do_sample=True,
    num_return_sequences=5,
    early_stopping=True,
)
for i in range(5):
    print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Encode the prompt and run a single forward pass to get the next-token logits.
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]).to(device), None
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)

# Inspect the 250 most likely next tokens instead of sampling blindly.
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())  # greedily extend the prompt with the top token
best_probabilities = probabilities[best_indices].tolist()
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above):
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
Backwards
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
```
accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult
(a) in reverential tones
(b) with great affection
(c) in adulatory fashion
(d) in glowing terms
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
*Note* Of all the masking techniques, this one works the best.
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
```
```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```
```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway.
```
```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
```
```
Q: What is whistleblower protection?
A: Whistleblower protection is a form of legal immunity granted to employees who expose the unethical practices of their employer.
Q: Why are whistleblower protections important?
A: Absent whistleblower protections, employees would be deterred from exposing their employer’s wrongdoing for fear of retribution.
Q: Why would an employer engage in retribution?
A: An employer who has acted unethically stands to suffer severe financial and reputational damage were their transgressions to become public. To safeguard themselves from these consequences, they might seek to dissuade employees from exposing their wrongdoing.
```
```
original: the meritocratic nature of crowdfunding [MASK] into their vision's viability.
infill: the meritocratic nature of crowdfunding [gives investors idea of how successful] -> ( offers entrepreneurs a window ) into their vision's viability.
```
```
Leadership | Lecture 17: Worker Morale
What Workers Look for in Companies:
• Benefits
o Tuition reimbursement
o Paid parental leave
o 401K matching
o Profit sharing
o Pension plans
o Free meals
• Social responsibility
o Environmental stewardship
o Charitable contributions
o Diversity
• Work-life balance
o Telecommuting
o Paid holidays and vacation
o Casual dress
• Growth opportunities
• Job security
• Competitive compensation
• Recognition
o Open-door policies
o Whistleblower protection
o Employee-of-the-month awards
o Positive performance reviews
o Bonuses
```
```
description: business
keywords: for-profit, fiduciary duty, monopolistic, bottom line, return on investment, short-term thinking, capital-intensive, self-interested, risk-taking, fiduciary duty, merger, speculation, profiteering, oversight, capitalism, diversification
```
```
3. In this task, you are given a company name and you need to find its industry.
McDonalds -- Restaurant
Facebook -- Social Network
IKEA -- Furniture
American Express -- Credit Services
Nokia -- Telecom
Nintendo -- Entertainment
4. In this task, you are given a Month and you need to convert it to its corresponding season
April -- Spring
December -- Winter
July -- Summer
October -- Fall
February -- Winter
5. In this task, you are given a sentence with a missing word and you need to predict the correct word.
Managers should set an _____ for their employees. -- example
Some people spend more than four _____ in the gym. -- hours
The police were on the _____ of arresting the suspect. -- verge
They were looking for _____ on how to solve the problem. -- guidance
What is the _____ of the coffee? -- price
6. In this task, you are given a paragraph and you need to reorder it to make it logical.
It was first proposed in 1987. The total length of the bridge is 1,828 meters. The idea of a bridge connects Hong Kong to Macau. -- The idea of bridge connecting Hong Kong and Macau was first proposed in 1987. The total length of the bridge is 1,828 meters.
It is a movie about a brave and noble policeman. The film was produced by Americans. They were Kevin Lima and Chris Buck. They are directors. The movie is called Tarzan. -- Produced by Americans Kevin Lima and Chris Buck, Tarzan is a movie about a brave and noble policeman.
It was first discovered in the mountains of India. The active ingredients in this plant can stimulate hair growth. The plant is called "Hair Plus." -- First discovered in the mountains of India, Hair Plus is a plant whose active ingredients can stimulate hair growth.
```
```
trivia: What is the population of South Korea?
response: 51 million.
***
trivia: What is the minimum voting age in the US?
response: 18.
***
trivia: What are the first ten amendments of the US constitution called?
response: Bill of Rights.
``` | [
"BEAR"
] | Non_BioNLP |
barisaydin/text2vec-base-multilingual | barisaydin | sentence-similarity | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"text2vec",
"sentence-similarity",
"mteb",
"zh",
"en",
"de",
"fr",
"it",
"nl",
"pt",
"pl",
"ru",
"license:apache-2.0",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,695,223,562,000 | 2023-09-20T17:17:39 | 11 | 0 | ---
datasets:
- https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset
language:
- zh
- en
- de
- fr
- it
- nl
- pt
- pl
- ru
library_name: transformers
license: apache-2.0
metrics:
- spearmanr
pipeline_tag: sentence-similarity
tags:
- text2vec
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: text2vec-base-multilingual
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 70.97014925373134
- type: ap
value: 33.95151328318672
- type: f1
value: 65.14740155705596
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.69379014989293
- type: ap
value: 79.68277579733802
- type: f1
value: 66.54960052336921
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 70.90704647676162
- type: ap
value: 20.747518928580437
- type: f1
value: 58.64365465884924
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 61.605995717344754
- type: ap
value: 14.135974879487028
- type: f1
value: 49.980224800472136
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 66.103375
- type: ap
value: 61.10087197664471
- type: f1
value: 65.75198509894145
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 33.134
- type: f1
value: 32.7905397597083
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 33.388
- type: f1
value: 33.190561196873084
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 34.824
- type: f1
value: 34.297290157740726
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 33.449999999999996
- type: f1
value: 33.08017234412433
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 30.046
- type: f1
value: 29.857141661482228
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 32.522
- type: f1
value: 31.854699911472174
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 32.31918856561886
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 25.503481615956137
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.91471462820568
- type: mrr
value: 71.82990370663501
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 68.83853315193127
- type: cos_sim_spearman
value: 66.16174850417771
- type: euclidean_pearson
value: 56.65313897263153
- type: euclidean_spearman
value: 52.69156205876939
- type: manhattan_pearson
value: 56.97282154658304
- type: manhattan_spearman
value: 53.167476517261015
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.08441558441558
- type: f1
value: 77.99825264827898
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 28.98583420521256
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 23.195091778460892
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 43.35
- type: f1
value: 38.80269436557695
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 59.348
- type: ap
value: 55.75065220262251
- type: f1
value: 58.72117519082607
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 81.04879160966712
- type: f1
value: 80.86889779192701
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 78.59397013243168
- type: f1
value: 77.09902761555972
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 79.24282855236824
- type: f1
value: 78.75883867079015
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 76.16661446915127
- type: f1
value: 76.30204722831901
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 78.74506991753317
- type: f1
value: 77.50560442779701
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 77.67088607594937
- type: f1
value: 77.21442956887493
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 62.786137710898316
- type: f1
value: 46.23474201126368
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 55.285996055226825
- type: f1
value: 37.98039513682919
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 58.67911941294196
- type: f1
value: 40.541410807124954
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 53.257124960851854
- type: f1
value: 38.42982319259366
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 59.62352097525995
- type: f1
value: 41.28886486568534
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 58.799276672694404
- type: f1
value: 43.68379466247341
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.42030934767989
- type: f1
value: 44.12201543566376
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 37.67652992602556
- type: f1
value: 35.422091900843164
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.02353732347007
- type: f1
value: 41.852484084738194
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.70880968392737
- type: f1
value: 46.904360615435046
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 43.78950907868191
- type: f1
value: 41.58872353920405
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 28.759246805648957
- type: f1
value: 27.41182001374226
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.74176193678547
- type: f1
value: 53.82727354182497
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.55682582380632
- type: f1
value: 49.41963627941866
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.46940147948891
- type: f1
value: 55.28178711367465
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.83322125084063
- type: f1
value: 61.836172900845554
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.27505043712172
- type: f1
value: 57.642436374361154
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.05178211163417
- type: f1
value: 56.858998820504056
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.357094821788834
- type: f1
value: 54.79711189260453
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.79959650302623
- type: f1
value: 57.59158671719513
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.1768661735037
- type: f1
value: 48.886397276270515
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.06455951580362
- type: f1
value: 55.01530952684585
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.3591123066577
- type: f1
value: 55.9277783370191
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 52.108271687962336
- type: f1
value: 51.195023400664596
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.26832548755883
- type: f1
value: 56.60774065423401
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 35.806993947545394
- type: f1
value: 34.290418953173294
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.27841291190315
- type: f1
value: 56.9438998642419
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.78009414929389
- type: f1
value: 59.15780842483667
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 31.153328850033624
- type: f1
value: 30.11004596099605
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.50235373234701
- type: f1
value: 44.040585262624745
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 40.99193006052455
- type: f1
value: 39.505480119272484
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.95696032279758
- type: f1
value: 43.093638940785326
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.73100201748486
- type: f1
value: 52.79750744404114
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.865501008742434
- type: f1
value: 53.64798408964839
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.891728312037664
- type: f1
value: 45.261229414636055
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 52.2259583053127
- type: f1
value: 50.5903419246987
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.277067921990586
- type: f1
value: 52.472042479965886
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.95696032279757
- type: f1
value: 49.79330411854258
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.63685272360457
- type: f1
value: 52.81267480650003
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.451916610625425
- type: f1
value: 57.34790386645091
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.91055817081372
- type: f1
value: 56.39195048528157
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.84196368527236
- type: f1
value: 58.72244763127063
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.04102219233354
- type: f1
value: 55.67040186148946
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.01613987895091
- type: f1
value: 57.203949825484855
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.35843981170141
- type: f1
value: 54.18656338999773
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.47948890383322
- type: f1
value: 54.772224557130954
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.43981170141224
- type: f1
value: 56.09260971364242
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 33.9609952925353
- type: f1
value: 33.18853392353405
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.29388029589778
- type: f1
value: 41.51986533284474
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.13517148621385
- type: f1
value: 43.94784138379624
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.856086079354405
- type: f1
value: 56.618177384748456
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 35.35978480161398
- type: f1
value: 34.060680080365046
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.630127774041696
- type: f1
value: 57.46288652988266
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 52.7908540685945
- type: f1
value: 51.46934239116157
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.6469401479489
- type: f1
value: 53.9903066185816
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.85743106926698
- type: f1
value: 59.31579548450755
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.46805648957633
- type: f1
value: 57.48469733657326
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.86415601882985
- type: f1
value: 49.41696672602645
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.183591123066584
- type: f1
value: 40.04563865770774
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.08069939475455
- type: f1
value: 50.724800165846126
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 51.287827841291204
- type: f1
value: 50.72873776739851
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.53328850033624
- type: f1
value: 45.93317866639667
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 34.347679892400805
- type: f1
value: 31.941581141280828
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.073301950235376
- type: f1
value: 62.228728940111054
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.398789509078675
- type: f1
value: 54.80778341609032
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.79892400806993
- type: f1
value: 60.69430756982446
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.96368527236046
- type: f1
value: 66.5893927997656
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.21250840618695
- type: f1
value: 62.347177794128925
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.43779421654339
- type: f1
value: 61.307701312085605
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.09952925353059
- type: f1
value: 60.313907927386914
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.38601210490922
- type: f1
value: 63.05968938353488
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.2878278412912
- type: f1
value: 55.92927644838597
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.62878278412912
- type: f1
value: 60.25299253652635
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.28850033624748
- type: f1
value: 62.77053246337031
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.875588433086754
- type: f1
value: 54.30717357279134
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.99394754539341
- type: f1
value: 61.73085530883037
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 38.581035642232685
- type: f1
value: 36.96287269695893
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.350369872225976
- type: f1
value: 61.807327324823966
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.17148621385338
- type: f1
value: 65.29620144656751
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.12642905178212
- type: f1
value: 35.334393048479484
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.26899798251513
- type: f1
value: 49.041065960139434
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.24344317417619
- type: f1
value: 42.42177854872125
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 47.370544720914594
- type: f1
value: 46.589722581465324
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.89038332212508
- type: f1
value: 57.753607921990394
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.506388702084756
- type: f1
value: 56.0485860423295
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.06388702084734
- type: f1
value: 50.109364641824584
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 55.053799596503026
- type: f1
value: 54.490665705666686
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.77135171486213
- type: f1
value: 58.2808650158803
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 55.71620712844654
- type: f1
value: 53.863034882475304
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.26227303295225
- type: f1
value: 59.86604657147016
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.3759246805649
- type: f1
value: 62.45257339288533
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.552118359112306
- type: f1
value: 61.354449605776765
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.40753194351043
- type: f1
value: 61.98779889528889
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.68258238063214
- type: f1
value: 60.59973978976571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.31002017484868
- type: f1
value: 62.412312268503655
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.429051782111635
- type: f1
value: 61.60095590401424
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.229320780094156
- type: f1
value: 61.02251426747547
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.42501681237391
- type: f1
value: 63.461494430605235
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 38.51714862138534
- type: f1
value: 37.12466722986362
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.99731002017485
- type: f1
value: 45.859147049984834
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 51.01882985877605
- type: f1
value: 49.01040173136056
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.234700739744454
- type: f1
value: 62.732294595214746
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 38.72225958305312
- type: f1
value: 36.603231928120906
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.48554135843982
- type: f1
value: 63.97380562022752
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.7955615332885
- type: f1
value: 55.95308241204802
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.06455951580362
- type: f1
value: 56.95570494066693
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.8338937457969
- type: f1
value: 65.6778746906008
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.369199731002034
- type: f1
value: 63.527650116059945
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 29.442504112215538
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 26.16062814161053
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 65.319
- type: map_at_10
value: 78.72
- type: map_at_100
value: 79.44600000000001
- type: map_at_1000
value: 79.469
- type: map_at_3
value: 75.693
- type: map_at_5
value: 77.537
- type: mrr_at_1
value: 75.24
- type: mrr_at_10
value: 82.304
- type: mrr_at_100
value: 82.485
- type: mrr_at_1000
value: 82.489
- type: mrr_at_3
value: 81.002
- type: mrr_at_5
value: 81.817
- type: ndcg_at_1
value: 75.26
- type: ndcg_at_10
value: 83.07
- type: ndcg_at_100
value: 84.829
- type: ndcg_at_1000
value: 85.087
- type: ndcg_at_3
value: 79.67699999999999
- type: ndcg_at_5
value: 81.42
- type: precision_at_1
value: 75.26
- type: precision_at_10
value: 12.697
- type: precision_at_100
value: 1.4829999999999999
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 34.849999999999994
- type: precision_at_5
value: 23.054
- type: recall_at_1
value: 65.319
- type: recall_at_10
value: 91.551
- type: recall_at_100
value: 98.053
- type: recall_at_1000
value: 99.516
- type: recall_at_3
value: 81.819
- type: recall_at_5
value: 86.66199999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 31.249791587189996
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 43.302922383029816
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.80670811345861
- type: cos_sim_spearman
value: 79.97373018384307
- type: euclidean_pearson
value: 83.40205934125837
- type: euclidean_spearman
value: 79.73331008251854
- type: manhattan_pearson
value: 83.3320983393412
- type: manhattan_spearman
value: 79.677919746045
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.3816087627948
- type: cos_sim_spearman
value: 80.91314664846955
- type: euclidean_pearson
value: 85.10603071031096
- type: euclidean_spearman
value: 79.42663939501841
- type: manhattan_pearson
value: 85.16096376014066
- type: manhattan_spearman
value: 79.51936545543191
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.44665329940209
- type: cos_sim_spearman
value: 82.86479010707745
- type: euclidean_pearson
value: 84.06719627734672
- type: euclidean_spearman
value: 84.9356099976297
- type: manhattan_pearson
value: 84.10370009572624
- type: manhattan_spearman
value: 84.96828040546536
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 86.05704260568437
- type: cos_sim_spearman
value: 87.36399473803172
- type: euclidean_pearson
value: 86.8895170159388
- type: euclidean_spearman
value: 87.16246440866921
- type: manhattan_pearson
value: 86.80814774538997
- type: manhattan_spearman
value: 87.09320142699522
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.97825118945852
- type: cos_sim_spearman
value: 88.31438033558268
- type: euclidean_pearson
value: 87.05174694758092
- type: euclidean_spearman
value: 87.80659468392355
- type: manhattan_pearson
value: 86.98831322198717
- type: manhattan_spearman
value: 87.72820615049285
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 78.68745420126719
- type: cos_sim_spearman
value: 81.6058424699445
- type: euclidean_pearson
value: 81.16540133861879
- type: euclidean_spearman
value: 81.86377535458067
- type: manhattan_pearson
value: 81.13813317937021
- type: manhattan_spearman
value: 81.87079962857256
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 68.06192660936868
- type: cos_sim_spearman
value: 68.2376353514075
- type: euclidean_pearson
value: 60.68326946956215
- type: euclidean_spearman
value: 59.19352349785952
- type: manhattan_pearson
value: 60.6592944683418
- type: manhattan_spearman
value: 59.167534419270865
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.78098264855684
- type: cos_sim_spearman
value: 78.02670452969812
- type: euclidean_pearson
value: 77.26694463661255
- type: euclidean_spearman
value: 77.47007626009587
- type: manhattan_pearson
value: 77.25070088632027
- type: manhattan_spearman
value: 77.36368265830724
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 78.45418506379532
- type: cos_sim_spearman
value: 78.60412019902428
- type: euclidean_pearson
value: 79.90303710850512
- type: euclidean_spearman
value: 78.67123625004957
- type: manhattan_pearson
value: 80.09189580897753
- type: manhattan_spearman
value: 79.02484481441483
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.35556731232779
- type: cos_sim_spearman
value: 81.48249735354844
- type: euclidean_pearson
value: 81.66748026636621
- type: euclidean_spearman
value: 80.35571574338547
- type: manhattan_pearson
value: 81.38214732806365
- type: manhattan_spearman
value: 79.9018202958774
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.4527703176897
- type: cos_sim_spearman
value: 85.81084095829584
- type: euclidean_pearson
value: 86.43489162324457
- type: euclidean_spearman
value: 85.27110976093296
- type: manhattan_pearson
value: 86.43674259444512
- type: manhattan_spearman
value: 85.05719308026032
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.00411240034492
- type: cos_sim_spearman
value: 76.33887356560854
- type: euclidean_pearson
value: 76.81730660019446
- type: euclidean_spearman
value: 75.04432185451306
- type: manhattan_pearson
value: 77.22298813168995
- type: manhattan_spearman
value: 75.56420330256725
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.1447136836213
- type: cos_sim_spearman
value: 81.80823850788917
- type: euclidean_pearson
value: 80.84505734814422
- type: euclidean_spearman
value: 81.714168092736
- type: manhattan_pearson
value: 80.84713816174187
- type: manhattan_spearman
value: 81.61267814749516
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.01257457052873
- type: cos_sim_spearman
value: 87.91146458004216
- type: euclidean_pearson
value: 88.36771859717994
- type: euclidean_spearman
value: 87.73182474597515
- type: manhattan_pearson
value: 88.26551451003671
- type: manhattan_spearman
value: 87.71675151388992
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.20121618382373
- type: cos_sim_spearman
value: 78.05794691968603
- type: euclidean_pearson
value: 79.93819925682054
- type: euclidean_spearman
value: 78.00586118701553
- type: manhattan_pearson
value: 80.05598625820885
- type: manhattan_spearman
value: 78.04802948866832
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 81.51743373871778
- type: cos_sim_spearman
value: 80.98266651818703
- type: euclidean_pearson
value: 81.11875722505269
- type: euclidean_spearman
value: 79.45188413284538
- type: manhattan_pearson
value: 80.7988457619225
- type: manhattan_spearman
value: 79.49643569311485
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 81.78679924046351
- type: cos_sim_spearman
value: 80.9986574147117
- type: euclidean_pearson
value: 82.09130079135713
- type: euclidean_spearman
value: 80.66215667390159
- type: manhattan_pearson
value: 82.0328610549654
- type: manhattan_spearman
value: 80.31047226932408
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.08082172994642
- type: cos_sim_spearman
value: 62.9940530222459
- type: euclidean_pearson
value: 58.47927303460365
- type: euclidean_spearman
value: 60.8440317609258
- type: manhattan_pearson
value: 58.32438211697841
- type: manhattan_spearman
value: 60.69642636776064
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 33.83985707464123
- type: cos_sim_spearman
value: 46.89093209603036
- type: euclidean_pearson
value: 34.63602187576556
- type: euclidean_spearman
value: 46.31087228200712
- type: manhattan_pearson
value: 34.66899391543166
- type: manhattan_spearman
value: 46.33049538425276
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.61315965767736
- type: cos_sim_spearman
value: 58.9434266730386
- type: euclidean_pearson
value: 50.35885602217862
- type: euclidean_spearman
value: 58.238679883286025
- type: manhattan_pearson
value: 53.01732044381151
- type: manhattan_spearman
value: 58.10482351761412
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.771738440430177
- type: cos_sim_spearman
value: 34.807259227816054
- type: euclidean_pearson
value: 17.82657835823811
- type: euclidean_spearman
value: 34.27912898498941
- type: manhattan_pearson
value: 19.121527758886312
- type: manhattan_spearman
value: 34.4940050226265
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 52.8354704676683
- type: cos_sim_spearman
value: 57.28629534815841
- type: euclidean_pearson
value: 54.10329332004385
- type: euclidean_spearman
value: 58.15030615859976
- type: manhattan_pearson
value: 55.42372087433115
- type: manhattan_spearman
value: 57.52270736584036
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 31.01976557986924
- type: cos_sim_spearman
value: 54.506959483927616
- type: euclidean_pearson
value: 36.917863022119086
- type: euclidean_spearman
value: 53.750194241538566
- type: manhattan_pearson
value: 37.200177833241085
- type: manhattan_spearman
value: 53.507659188082535
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 46.38635647225934
- type: cos_sim_spearman
value: 54.50892732637536
- type: euclidean_pearson
value: 40.8331015184763
- type: euclidean_spearman
value: 53.142903182230924
- type: manhattan_pearson
value: 43.07655692906317
- type: manhattan_spearman
value: 53.5833474125901
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.52525456662916
- type: cos_sim_spearman
value: 63.23975489531082
- type: euclidean_pearson
value: 58.989191722317514
- type: euclidean_spearman
value: 62.536326639863894
- type: manhattan_pearson
value: 61.32982866201855
- type: manhattan_spearman
value: 63.068262822520516
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.63798684577696
- type: cos_sim_spearman
value: 74.09937723367189
- type: euclidean_pearson
value: 63.77494904383906
- type: euclidean_spearman
value: 71.15932571292481
- type: manhattan_pearson
value: 63.69646122775205
- type: manhattan_spearman
value: 70.54960698541632
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 36.50262468726711
- type: cos_sim_spearman
value: 45.00322499674274
- type: euclidean_pearson
value: 32.58759216581778
- type: euclidean_spearman
value: 40.13720951315429
- type: manhattan_pearson
value: 34.88422299605277
- type: manhattan_spearman
value: 40.63516862200963
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 56.498552617040275
- type: cos_sim_spearman
value: 67.71358426124443
- type: euclidean_pearson
value: 57.16474781778287
- type: euclidean_spearman
value: 65.721515493531
- type: manhattan_pearson
value: 59.25227610738926
- type: manhattan_spearman
value: 65.89743680340739
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.97978814727984
- type: cos_sim_spearman
value: 65.85821395092104
- type: euclidean_pearson
value: 59.11117270978519
- type: euclidean_spearman
value: 64.50062069934965
- type: manhattan_pearson
value: 59.4436213778161
- type: manhattan_spearman
value: 64.4003273074382
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.00873192515712
- type: cos_sim_spearman
value: 60.167708809138745
- type: euclidean_pearson
value: 56.91950637760252
- type: euclidean_spearman
value: 58.50593399441014
- type: manhattan_pearson
value: 58.683747352584994
- type: manhattan_spearman
value: 59.38110066799761
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.26020658151187
- type: cos_sim_spearman
value: 61.29236187204147
- type: euclidean_pearson
value: 55.993896804147056
- type: euclidean_spearman
value: 58.654928232615354
- type: manhattan_pearson
value: 56.612492816099426
- type: manhattan_spearman
value: 58.65144067094258
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.13817835368122
- type: cos_sim_spearman
value: 50.78524216975442
- type: euclidean_pearson
value: 46.56046454501862
- type: euclidean_spearman
value: 50.3935060082369
- type: manhattan_pearson
value: 48.0232348418531
- type: manhattan_spearman
value: 50.79528358464199
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 44.274388638585286
- type: cos_sim_spearman
value: 49.43124017389838
- type: euclidean_pearson
value: 42.45909582681174
- type: euclidean_spearman
value: 49.661383797129055
- type: manhattan_pearson
value: 42.5771970142383
- type: manhattan_spearman
value: 50.14423414390715
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.119500839749776
- type: cos_sim_spearman
value: 39.324070169024424
- type: euclidean_pearson
value: 35.83247077201831
- type: euclidean_spearman
value: 42.61903924348457
- type: manhattan_pearson
value: 35.50415034487894
- type: manhattan_spearman
value: 41.87998075949351
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.62575835691209
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 78.07793323914657
- type: euclidean_spearman
value: 73.24670207647144
- type: manhattan_pearson
value: 77.51429306378206
- type: manhattan_spearman
value: 73.24670207647144
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.09375596849891
- type: cos_sim_spearman
value: 86.44881302053585
- type: euclidean_pearson
value: 84.71259163967213
- type: euclidean_spearman
value: 85.63661992344069
- type: manhattan_pearson
value: 84.64466537502614
- type: manhattan_spearman
value: 85.53769949940238
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 70.2056154684549
- type: mrr
value: 89.52703161036494
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.57623762376238
- type: cos_sim_ap
value: 83.53051588811371
- type: cos_sim_f1
value: 77.72704211060375
- type: cos_sim_precision
value: 78.88774459320288
- type: cos_sim_recall
value: 76.6
- type: dot_accuracy
value: 99.06435643564356
- type: dot_ap
value: 27.003124923857463
- type: dot_f1
value: 34.125269978401725
- type: dot_precision
value: 37.08920187793427
- type: dot_recall
value: 31.6
- type: euclidean_accuracy
value: 99.61485148514852
- type: euclidean_ap
value: 85.47332647001774
- type: euclidean_f1
value: 80.0808897876643
- type: euclidean_precision
value: 80.98159509202453
- type: euclidean_recall
value: 79.2
- type: manhattan_accuracy
value: 99.61683168316831
- type: manhattan_ap
value: 85.41969859598552
- type: manhattan_f1
value: 79.77755308392315
- type: manhattan_precision
value: 80.67484662576688
- type: manhattan_recall
value: 78.9
- type: max_accuracy
value: 99.61683168316831
- type: max_ap
value: 85.47332647001774
- type: max_f1
value: 80.0808897876643
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 34.35688940053467
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.64427069276576
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 44.89500754900078
- type: mrr
value: 45.33215558950853
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.653069624224084
- type: cos_sim_spearman
value: 30.10187112430319
- type: dot_pearson
value: 28.966278202103666
- type: dot_spearman
value: 28.342234095507767
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 65.96839999999999
- type: ap
value: 11.846327590186444
- type: f1
value: 50.518102944693574
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 55.220713073005086
- type: f1
value: 55.47856175692088
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 31.581473892235877
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 82.94093103653812
- type: cos_sim_ap
value: 62.48963249213361
- type: cos_sim_f1
value: 58.9541137429912
- type: cos_sim_precision
value: 52.05091937765205
- type: cos_sim_recall
value: 67.96833773087072
- type: dot_accuracy
value: 78.24998509864696
- type: dot_ap
value: 40.82371294480071
- type: dot_f1
value: 44.711163153786096
- type: dot_precision
value: 35.475379374419326
- type: dot_recall
value: 60.4485488126649
- type: euclidean_accuracy
value: 83.13166835548668
- type: euclidean_ap
value: 63.459878609769774
- type: euclidean_f1
value: 60.337199569532466
- type: euclidean_precision
value: 55.171659741963694
- type: euclidean_recall
value: 66.56992084432719
- type: manhattan_accuracy
value: 83.00649698992669
- type: manhattan_ap
value: 63.263161177904905
- type: manhattan_f1
value: 60.17122874713614
- type: manhattan_precision
value: 55.40750610703975
- type: manhattan_recall
value: 65.8311345646438
- type: max_accuracy
value: 83.13166835548668
- type: max_ap
value: 63.459878609769774
- type: max_f1
value: 60.337199569532466
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.80416812201653
- type: cos_sim_ap
value: 83.45540469219863
- type: cos_sim_f1
value: 75.58836427422892
- type: cos_sim_precision
value: 71.93934335002783
- type: cos_sim_recall
value: 79.62734832152756
- type: dot_accuracy
value: 83.04226336011176
- type: dot_ap
value: 70.63007268018524
- type: dot_f1
value: 65.35980325765405
- type: dot_precision
value: 60.84677151768532
- type: dot_recall
value: 70.59593470896212
- type: euclidean_accuracy
value: 87.60430007373773
- type: euclidean_ap
value: 83.10068502536592
- type: euclidean_f1
value: 75.02510506936439
- type: euclidean_precision
value: 72.56637168141593
- type: euclidean_recall
value: 77.65629812134279
- type: manhattan_accuracy
value: 87.60041914076145
- type: manhattan_ap
value: 83.05480769911229
- type: manhattan_f1
value: 74.98522895125554
- type: manhattan_precision
value: 72.04797047970479
- type: manhattan_recall
value: 78.17215891592238
- type: max_accuracy
value: 87.80416812201653
- type: max_ap
value: 83.45540469219863
- type: max_f1
value: 75.58836427422892
---
# shibing624/text2vec-base-multilingual
This is a CoSENT (Cosine Sentence) model: shibing624/text2vec-base-multilingual.
It maps sentences to a 384-dimensional dense vector space and can be used for tasks
like sentence embeddings, text matching, or semantic search.
- training dataset: https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset
- base model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
- max_seq_length: 256
- best epoch: 4
- sentence embedding dim: 384
## Evaluation
For an automated evaluation of this model, see the *Evaluation Benchmark*: [text2vec](https://github.com/shibing624/text2vec)
## Languages
Available languages are: de, en, es, fr, it, nl, pl, pt, ru, zh
### Release Models
| Arch | BaseModel | Model | ATEC | BQ | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc | Avg | QPS |
|:-----------|:-------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----:|:-----:|:-----:|:-----:|:-----:|:-------:|:-------:|:---------:|:-----:|
| Word2Vec | word2vec | [w2v-light-tencent-chinese](https://ai.tencent.com/ailab/nlp/en/download.html) | 20.00 | 31.49 | 59.46 | 2.57 | 55.78 | 55.04 | 20.70 | 35.03 | 23769 |
| SBERT | xlm-roberta-base | [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 | 63.01 | 52.28 | 46.46 | 3138 |
| Instructor | hfl/chinese-roberta-wwm-ext | [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) | 41.27 | 63.81 | 74.87 | 12.20 | 76.96 | 75.83 | 60.55 | 57.93 | 2980 |
| CoSENT | hfl/chinese-macbert-base | [shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese) | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 | 70.27 | 50.42 | 51.61 | 3008 |
| CoSENT | hfl/chinese-lert-large | [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 32.61 | 44.59 | 69.30 | 14.51 | 79.44 | 73.01 | 59.04 | 53.12 | 2092 |
| CoSENT | nghuyong/ernie-3.0-base-zh | [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence) | 43.37 | 61.43 | 73.48 | 38.90 | 78.25 | 70.60 | 53.08 | 59.87 | 3089 |
| CoSENT | nghuyong/ernie-3.0-base-zh | [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase) | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 | 76.70 | 63.30 | **63.08** | 3066 |
| CoSENT | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual) | 32.39 | 50.33 | 65.64 | 32.56 | 74.45 | 68.88 | 51.17 | 53.67 | 4004 |
Notes:
- Evaluation metric: Spearman correlation coefficient
- The `shibing624/text2vec-base-chinese` model is trained with the CoSENT method on Chinese STS-B data, based on `hfl/chinese-macbert-base`, and achieves good results on the Chinese STS-B test set. Run the [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py) code to train the model. The model file has been uploaded to the HF model hub and is recommended for general Chinese semantic matching tasks.
- The `shibing624/text2vec-base-chinese-sentence` model is trained with the CoSENT method, based on `nghuyong/ernie-3.0-base-zh` and the manually curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), and achieves good results on various Chinese NLI test sets. Run the [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) code to train the model. The model file has been uploaded to the HF model hub and is recommended for Chinese s2s (sentence vs. sentence) semantic matching tasks.
- The `shibing624/text2vec-base-chinese-paraphrase` model is trained with the CoSENT method, based on `nghuyong/ernie-3.0-base-zh` and the manually curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-paraphrase-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset). Compared to [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), this dataset adds s2p (sentence-to-paraphrase) data to strengthen long-text representation, and the model reaches SOTA on the evaluated Chinese NLI test sets. Run the [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) code to train the model. The model file has been uploaded to the HF model hub and is recommended for Chinese s2p (sentence vs. paragraph) semantic matching tasks.
- The `shibing624/text2vec-base-multilingual` model is trained with the CoSENT method, based on `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` and the manually curated multilingual STS dataset [shibing624/nli-zh-all/text2vec-base-multilingual-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset). Its evaluation results on Chinese and English test sets improve over the original base model. Run the [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) code to train the model. The model file has been uploaded to the HF model hub and is recommended for multilingual semantic matching tasks.
- `w2v-light-tencent-chinese` is Tencent's Word2Vec word-vector model, loaded and run on CPU. It is suitable for Chinese text matching tasks and cold-start scenarios where training data is scarce.
- QPS was measured on a Tesla V100 GPU with 32GB of memory.
Model training experiment report: [Experiment report](https://github.com/shibing624/text2vec/blob/master/docs/model_report.md)
## Usage (text2vec)
Using this model becomes easy when you have [text2vec](https://github.com/shibing624/text2vec) installed:
```
pip install -U text2vec
```
Then you can use the model like this:
```python
from text2vec import SentenceModel
sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card']
model = SentenceModel('shibing624/text2vec-base-multilingual')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [text2vec](https://github.com/shibing624/text2vec), you can use the model like this:
First, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
Install transformers:
```
pip install transformers
```
Then load model and predict:
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('shibing624/text2vec-base-multilingual')
model = AutoModel.from_pretrained('shibing624/text2vec-base-multilingual')
sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card']
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Usage (sentence-transformers)
[sentence-transformers](https://github.com/UKPLab/sentence-transformers) is a popular library to compute dense vector representations for sentences.
Install sentence-transformers:
```
pip install -U sentence-transformers
```
Then load model and predict:
```python
from sentence_transformers import SentenceTransformer
m = SentenceTransformer("shibing624/text2vec-base-multilingual")
sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card']
sentence_embeddings = m.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
CoSENT(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_mean_tokens': True})
)
```
## Intended uses
Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector that captures
its semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
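For the sentence-similarity use case, the embeddings can simply be compared with cosine similarity. The snippet below is a short illustration (not part of the original card) using the `sentence-transformers` utilities:
```python
from sentence_transformers import SentenceTransformer, util

# Compare two sentences by the cosine similarity of their embeddings.
model = SentenceTransformer("shibing624/text2vec-base-multilingual")
embeddings = model.encode(["如何更换花呗绑定银行卡", "How to replace the Huabei bundled bank card"])
print(util.cos_sim(embeddings[0], embeddings[1]))  # higher value = more similar
```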
## Training procedure
### Pre-training
We use the pretrained [`sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) model.
Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for every possible sentence pair in the batch,
then apply a ranking loss that compares the scores of true pairs against those of false pairs.
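The CoSENT-style ranking loss can be sketched roughly as follows. This is an illustrative PyTorch snippet, not the project's exact code — the scale factor of 20 and the masking scheme are assumptions; see the [text2vec](https://github.com/shibing624/text2vec) repository for the authoritative implementation.
```python
import torch

def cosent_loss(cos_sim: torch.Tensor, labels: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """Rank every true (similar) pair above every false (dissimilar) pair.

    cos_sim: cosine similarities of the sentence pairs in the batch, shape (N,)
    labels:  1 for true pairs, 0 for false pairs, shape (N,)
    """
    scores = cos_sim * scale
    # diff[i, j] = score of pair j minus score of pair i
    diff = scores[None, :] - scores[:, None]
    # keep only (i = true pair, j = false pair) combinations, mask out the rest
    keep = (labels[:, None] > labels[None, :]).float()
    diff = diff - (1.0 - keep) * 1e12
    # loss = log(1 + sum(exp(score_false - score_true))), via log-sum-exp with a prepended 0
    diff = torch.cat([torch.zeros(1, device=scores.device, dtype=scores.dtype), diff.view(-1)])
    return torch.logsumexp(diff, dim=0)
```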
## Citing & Authors
This model was trained by [text2vec](https://github.com/shibing624/text2vec).
If you find this model helpful, feel free to cite:
```bibtex
@software{text2vec,
author = {Ming Xu},
title = {text2vec: A Tool for Text to Vector},
year = {2023},
url = {https://github.com/shibing624/text2vec},
}
``` | [
"BIOSSES"
] | Non_BioNLP |
Panchovix/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k-4bit-32g | Panchovix | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,687,819,820,000 | 2023-07-06T18:09:47 | 14 | 1 | ---
license: other
---
[WizardLM-Uncensored-SuperCOT-StoryTelling-30b](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit.
It was created with GPTQ-for-LLaMA using group size 32 and act-order (true) as parameters, to keep perplexity as close as possible to the FP16 model.
I HIGHLY suggest using exllama to avoid some VRAM issues.
Use compress_pos_emb = 4 for any context length up to 8192 tokens.
If you have two 24 GB VRAM GPUs, use the following to avoid out-of-memory errors at 8192 context:
gpu_split: 9,21 | [
"MONERO"
] | Non_BioNLP |
lightblue/Jamba-v0.1-chat-multilingual | lightblue | text-generation | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"conversational",
"custom_code",
"dataset:jondurbin/airoboros-3.2",
"dataset:openchat/openchat_sharegpt4_dataset",
"base_model:ai21labs/Jamba-v0.1",
"base_model:finetune:ai21labs/Jamba-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,711,786,903,000 | 2024-04-01T14:29:34 | 32 | 22 | ---
base_model: ai21labs/Jamba-v0.1
datasets:
- jondurbin/airoboros-3.2
- openchat/openchat_sharegpt4_dataset
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Model Overview
This model was trained as a small-scale experiment to determine how easy it is to fine-tune [ai21labs/Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1) to work as a chatbot.
The aim of this experiment was to find out how intelligently and reliably Jamba can chat in both English and other languages when only QLoRA fine-tuned for a few hours.
Initial subjective testing has shown that this model can chat reasonably well in both English and other languages, so feel free to give it a try!
## Model Details
- **Model type:** Joint Attention and Mamba (Jamba)
- **License:** Apache 2.0
- **Context length:** 256K
- **Knowledge cutoff date:** March 5, 2024
## Prerequisites
Jamba requires you to use `transformers` version 4.39.0 or higher:
```bash
pip install "transformers>=4.39.0"
```
In order to run optimized Mamba implementations, you first need to install `mamba-ssm` and `causal-conv1d`:
```bash
pip install mamba-ssm "causal-conv1d>=1.2.0"
```
You also have to have the model on a CUDA device.
You can run the model without the optimized Mamba kernels, but this is **not** recommended as it will result in significantly slower inference. To do so, you'll need to specify `use_mamba_kernels=False` when loading the model, for example as shown below.
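A minimal illustration (loading this repo with the pure-PyTorch Mamba fallback; expect slower generation):
```python
from transformers import AutoModelForCausalLM

# Fall back to the non-optimized Mamba path instead of the fused kernels.
model = AutoModelForCausalLM.from_pretrained(
    "lightblue/Jamba-v0.1-chat-multilingual",
    use_mamba_kernels=False,
    trust_remote_code=True,
)
```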
## How to use
※ - This code automatically appends the "<|startoftext|>" special token to any input.
Appending this to all inputs is required for inference, as initial testing shows that leaving it out leads to output errors.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
double_quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.float16
)
model = AutoModelForCausalLM.from_pretrained("lightblue/Jamba-v0.1-chat-multilingual", device_map="auto", quantization_config=double_quant_config, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightblue/Jamba-v0.1-chat-multilingual")
input_text = """<|im_start|>system
You are a helpful AI assistant.
<|im_end|>
<|im_start|>user
What is the most interesting fact about kangaroos that you know?
<|im_end|>
<|im_start|>assistant
"""
input_ids = tokenizer(input_text, return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=256, temperature=0.0, repetition_penalty=1.1)
print(tokenizer.batch_decode(outputs)[0])
# <|startoftext|><|im_start|>system
# You are a helpful AI assistant.
# <|im_end|>
# <|im_start|>user
# What is the most interesting fact about kangaroos that you know?
# <|im_end|>
# <|im_start|>assistant
# One of the most interesting facts about kangaroos is their unique reproductive system, which involves embryonic diapause and multiple births. Female kangaroos can have up to three joeys at different stages of development simultaneously: one fully developed in the pouch, another developing inside her body, and an embryo waiting for its turn.<|im_end|>
# <|endoftext|>
```
# Initial testing results
Overall, the model seems to have decent knowledge in some areas while hallucinating incorrect information in others.
It also seems fairly easy to steer the model using the system message; it can rhyme and modulate English difficulty with varying levels of success.
The multi-turn accuracy seems decent from very little testing.
The multilingual tests show us that this model fairly reliably replies in whatever language it has been prompted in.
<details>
<summary>Single turn examples</summary>
```
<|startoftext|><|im_start|>system
You are a helpful AI assistant.
<|im_end|>
<|im_start|>user
Describe the town of Strathaven, Scotland in 50 words.
<|im_end|>
<|im_start|>assistant
Strathaven is a picturesque market town located in South Lanarkshire, Scotland. Known for its rich history and stunning landscapes, it offers visitors a delightful blend of historical attractions, outdoor activities, and modern amenities. With its charming streets, beautiful parks, and friendly community, Strathaven provides an ideal destination for those seeking a peaceful escape from city life.<|im_end|>
<|endoftext|>
<|startoftext|><|im_start|>system
You are a helpful AI assistant.
<|im_end|>
<|im_start|>user
Describe a harmless prank that I can play on my colleagues.
<|im_end|>
<|im_start|>assistant
A harmless prank you can play on your colleagues is the "Fake Meeting" prank. Here's how to do it:
1. Choose a colleague who often has meetings or conference calls scheduled throughout the day.
2. Find out their schedule for the day, including any upcoming meetings they have.
3. Create a fake meeting invitation using an email template or online tool like Canva or Google Docs. Make sure the design and format look similar to the company's official meeting invitations.
4. Include details such as the date, time, location (if applicable), subject, and attendees. You can even add a fake agenda if you want to make it more convincing.
5. Send the fake meeting invitation to your chosen colleague via email or instant messaging platform used in your workplace.
6. Wait for them to respond with confusion or excitement about attending this unexpected meeting.
7. When they ask questions about the meeting, act clueless and say something like, "Oh, I just got this invite too! Must be some new initiative from upper management."
8. Keep up the charade until they realize it's a prank, but don't let it go on for too long so as not to cause unnecessary stress or confusion.
<|startoftext|><|im_start|>system
You are a helpful AI assistant that always answers correctly.
<|im_end|>
<|im_start|>user
Do UK citizens need a visa to holiday in Singapore?
<|im_end|>
<|im_start|>assistant
Yes, UK citizens do need a visa to holiday in Singapore. They can apply for an Electronic Travel Authorization (ETA) online before their trip.<|im_end|>
<|endoftext|>
## <dev note>: This is false, UK citizens do NOT require a visa to holiday in Singapore. An Electronic Travel Authorization (ETA) is required for the USA etc.
<|startoftext|><|im_start|>system
You are a helpful AI assistant.
<|im_end|>
<|im_start|>user
Which US state has the highest GDP per capita?
<|im_end|>
<|im_start|>assistant
As of my last update, the US state with the highest GDP per capita is Massachusetts.<|im_end|>
<|endoftext|>
<|startoftext|><|im_start|>system
You are a helpful AI assistant.
<|im_end|>
<|im_start|>user
Name three things that feel like a golf ball in your hands.
<|im_end|>
<|im_start|>assistant
1. A tennis ball: Tennis balls have a similar size and texture to golf balls, making them feel quite similar when held in the hand. They both have a rubbery surface with dimples or indentations on their outer layer.
2. A stress ball: Stress balls are often made of soft, squishy material such as foam or gel, which can be squeezed and manipulated easily. The feeling of holding a stress ball is comparable to that of a golf ball due to its small size and round shape.
3. A ping pong ball: Ping pong balls are lightweight and hollow, much like golf balls. Although they may not have the same texture, their size and spherical shape make them feel somewhat similar when held in the hand.<|im_end|>
<|endoftext|>
```
</details>
<details>
<summary>System message control examples</summary>
```
<|startoftext|><|startoftext|><|im_start|>system
You are a helpful AI assistant. You write all answers in very simple English.
<|im_end|>
<|im_start|>user
Write a 50 word analysis of why sausages are better than bacon.
<|im_end|>
<|im_start|>assistant
Sausages are better than bacon because they have more variety and can be eaten for breakfast, lunch, or dinner. They come in different flavors and sizes, making them versatile. Sausages also contain less fat and calories compared to bacon, which is healthier for people who want to watch their diet.<|im_end|>
<|endoftext|>
<|startoftext|><|startoftext|><|im_start|>system
You are a helpful AI assistant. You write all answers in very complex English.
<|im_end|>
<|im_start|>user
Write a 50 word analysis of why sausages are better than bacon.
<|im_end|>
<|im_start|>assistant
Sausages, due to their versatility and adaptability, offer more culinary possibilities compared to bacon. They can be used as an ingredient or served whole, while bacon is often limited to being a side dish. Sausages also have a longer shelf life and come in various flavors, making them a more convenient choice for cooking.<|im_end|>
<|endoftext|>
<|startoftext|><|im_start|>system
You are an AI assistant that answers all questions in rhyme.
<|im_end|>
<|im_start|>user
Why is the sky blue?
<|im_end|>
<|im_start|>assistant
The sky's hue, a question so true,
Is caused by light scattering through.
When sunlight meets our atmosphere,
It scatters and creates a fair weather affair.
Blue light waves have shorter lengths,
They scatter more than other tints.
This phenomenon, called Rayleigh scattering,
Makes the sky appear as if it were wearing a sapphire ring.
So when you gaze at the azure expanse above,
Remember this simple explanation of nature's love.<|im_end|>
<|endoftext|>
```
</details>
<details>
<summary>Multi-turn examples</summary>
```
<|startoftext|><|startoftext|><|im_start|>system
You are a helpful AI assistant.
<|im_end|>
<|im_start|>user
Give an example of a healthy breakfast.
<|im_end|>
<|im_start|>assistant
A healthy breakfast could include:
1. A bowl of oatmeal made with water or low-fat milk, topped with fresh berries and a sprinkle of nuts or seeds (such as almonds, walnuts, chia seeds, or flaxseeds).
2. A slice of whole grain toast spread with natural peanut butter or almond butter, along with a side of sliced fruit like apples or bananas.
3. Greek yogurt mixed with honey and granola, served with a handful of blueberries or strawberries on the side.
4. A vegetable omelette made with egg whites or a combination of whole eggs and egg whites, filled with spinach, tomatoes, onions, and mushrooms, paired with a small serving of avocado slices.
5. A smoothie made with unsweetened almond milk, frozen fruits (like mangoes, pineapples, or berries), a scoop of protein powder, and a handful of leafy greens (like spinach or kale).
6. Whole grain cereal with skim milk, accompanied by a piece of fruit and a dollop of plain Greek yogurt for added protein.<|im_end|>
<|im_start|>user
Which of these would have the lowest calorie count while still making me feel full?
<|im_end|>
<|im_start|>assistant
The option with the lowest calorie count that will also make you feel full is Option 1: Oatmeal with fresh berries and nuts/seeds. Oatmeal is high in fiber, which helps keep you feeling full longer, and it's lower in calories compared to other options like smoothies and omelettes. The addition of fresh berries provides essential vitamins and minerals, while the nuts and seeds add healthy fats and proteins to help you stay satisfied until your next meal.<|im_end|>
<|endoftext|>
```
</details>
<details>
<summary>Multilingual examples</summary>
```
<|startoftext|><|im_start|>system
You are a helpful AI assistant.
<|im_end|>
<|im_start|>user
東京駅から池袋駅まで丸の内線で行きたいですが、普通に何時間かかりますか?
<|im_end|>
<|im_start|>assistant
東京駅から池袋駅まで丸の内線で行く場合、平均的には約20分程度かかります。しかし、これは一般的な情報であり、実際の旅行時間は運行状況や列車の種類、日付、時間帯などによって異なることに注意してください。<|im_end|>
<|endoftext|>
<|startoftext|><|im_start|>system
You are a helpful AI assistant.
<|im_end|>
<|im_start|>user
Opisz kilka ciekawych miejsc do odwiedzenia w Londynie
<|im_end|>
<|im_start|>assistant
1. Buckingham Palace: Jest to oficjalne siedzisko Królowej Brytanii, króla Brytanii i innych państw Członkowskich Zjednoczonego Królestwa. Można tu zobaczyć mający historię palac oraz obejrzeć ceremonię zmiany strażników na placu przed budynkiem.
2. Big Ben: To jest jedna z najbardziej rozpoznawalnych atrakcji turystycznych w Londynie. Jest to wieża zegarowa położona na terenie Parlamentu Wielkiej Brytanii i Senatu. Jej dźwiękowy sygnał można usłyszeć nawet na dalekie odległości.
3. Tower Bridge: Ten piękny most nad Tamizą oferuje widok na skyline Londynu. Otwiera się on pod koniec każdego d
```
</details>
<br/>
# Training details
The model was trained on 2 open source datasets (one multilingual) for one epoch on an A100 (80GB) x 4 environment for 3 hours.
## Training data
* [jondurbin/airoboros-3.2](https://huggingface.co/datasets/jondurbin/airoboros-3.2)
A ~59K example dataset of curated LLM tasks in English, primarily generated with GPT-4. This dataset has been used by some of the best performing open source LLMs in the world (e.g. [jondurbin/bagel-7b-v0.4](https://huggingface.co/jondurbin/bagel-7b-v0.4), [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)) and contains a wide variety of tasks, so we hypothesized that this would lead to a multi-talented, accurate model. For this reason, this dataset was chosen for the bulk of our training data.
Note: Each element in jondurbin/airoboros-3.2 already contains a system message.
* [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset) (GPT-4 responses only)
A ~6K example dataset of multilingual multi-turn chats between users and GPT-4. While jondurbin/airoboros-3.2 has delivered good results for models previously, it sadly contains no (or seemingly very little) multilingual data. We are a Japanese AI company, so we require an LLM to be able to output in Japanese too. Hence we also selected a small, seemingly high-quality dataset of GPT-4 responses in many languages from the ShareGPT dataset. We chose to only select the GPT-4 responses as we wanted to keep our dataset as small and high quality as possible to maximise the efficiency of our training.
Note: openchat/openchat_sharegpt4_dataset does not contain system messages, so we added 'You are GPT-4, a helpful assistant.' as our system message.
<details>
<summary>Data preparation code</summary>
```python
import os
import pandas as pd
from datasets import load_dataset, Dataset, concatenate_datasets
os.environ['HF_HOME'] = "/workspace/hf_home"
os.environ['HF_HUB_ENABLE_HF_TRANSFER'] = "1"
boros_dataset = load_dataset("jondurbin/airoboros-3.2", split='train')
gpt4_df = pd.read_json("https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json?download=true")
gpt4_df["conversations"] = gpt4_df["items"].apply(lambda x: [{'from': 'system', 'value': 'You are GPT-4, a helpful assistant.'}] + x)
gpt4_dataset = Dataset.from_pandas(gpt4_df[["conversations"]])
dataset = concatenate_datasets([gpt4_dataset, boros_dataset]).shuffle()
dataset.select_columns(["conversations"]).to_json("/workspace/airoboros-3.2_plus_openchat_sharegpt4.json")
```
</details>
## Training
The Jamba-v0.1 base model was trained for roughly 3 hours in an A100 (80GB) x 4 environment on the Azure cloud (Standard_NC96ads_A100_v4).
We trained using QLoRA and merged the adapter into the original weights (a rough sketch of the merge step is shown below).
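This merge can be sketched with `peft` roughly as follows. It is illustrative only — the adapter and output paths are placeholders, and the actual merge may have been performed through Axolotl's own tooling.
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, attach the trained QLoRA adapter, then fold the adapter
# weights into the base weights so the result behaves like a regular checkpoint.
base = AutoModelForCausalLM.from_pretrained(
    "ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")  # placeholder path
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")                    # placeholder path
```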
Our training harness was Axolotl using the ChatML chat template. Full details of the training config are below:
<details>
<summary>Training config</summary>
```yaml
base_model: ai21labs/Jamba-v0.1
trust_remote_code: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: /workspace/airoboros-3.2_plus_openchat_sharegpt4.json
ds_type: json
type: sharegpt
conversation: chatml
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./airoboros-3.2_plus_openchat_sharegpt4_one_epoch
sequence_len: 6000
sample_packing: true
pad_to_sequence_len: false
eval_sample_packing: true
use_wandb: true
wandb_project: axolotl
wandb_entity: peterd
wandb_name: airoboros-3.2_plus_openchat_sharegpt4
adapter: qlora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
low_cpu_mem_usage: true
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 5
saves_per_epoch: 5
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
special_tokens:
```
</details>
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details>
<summary>Training graphs</summary>



</details>
<br/>
# Developers
Lead developer - Peter Devine [ptrdvn](https://huggingface.co/ptrdvn)
Administrative supervisor - Shunichi Taniguchi [shun1taniguchi](https://huggingface.co/shun1taniguchi) | [
"CHIA"
] | Non_BioNLP |
BSC-NLP4BIA/bsc-bio-ehr-es-distemist | BSC-NLP4BIA | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"biomedical",
"clinical",
"EHR",
"spanish",
"diseases",
"es",
"base_model:PlanTL-GOB-ES/bsc-bio-ehr-es",
"base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,717,683,983,000 | 2024-08-05T14:57:52 | 15 | 0 | ---
base_model:
- PlanTL-GOB-ES/bsc-bio-ehr-es
language:
- es
license: apache-2.0
metrics:
- precision
- recall
- f1
tags:
- biomedical
- clinical
- EHR
- spanish
- diseases
widget:
- text: El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de
pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral)
PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo.
- text: Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el
LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo
pulmonar, con hallazgos altamente sospechosos de carcinoma.
- text: Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre
hígado cirrótico, en paciente con índice Child-Pugh B.
model-index:
- name: BSC-NLP4BIA/bsc-bio-ehr-es-distemist
results:
- task:
type: token-classification
dataset:
name: DisTEMIST
type: DisTEMIST
metrics:
- type: precision
value: 0.754
name: precision
- type: recall
value: 0.759
name: recall
- type: f1
value: 0.757
name: f1
---
# DISEASE-NER-ES
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
- [Authors](#authors)
- [Contact information](#contact-information)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
A fine-tuned version of the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model on the [DisTEMIST](https://zenodo.org/records/7614764) corpus (original Spanish Gold Standard).
For further information, check the [official website](https://temu.bsc.es/distemist/).
## How to use
⚠ We recommend pre-tokenizing the input text into words instead of providing it directly to the model, as this is how the model was trained. Otherwise, the results and performance might get affected.
A usage example can be found [here](https://github.com/nlp4bia-bsc/hugging-face-pipeline/blob/main/simple_inference.ipynb).
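If you just want a quick look at the model's output, a plain `transformers` pipeline call like the sketch below also works. Note that it does not perform the word-level pre-tokenization recommended above, so prefer the linked notebook for faithful results; the example sentence is taken from the widget examples of this card.
```python
from transformers import pipeline

# Simple NER pipeline over one of the example sentences from this card.
ner = pipeline(
    "token-classification",
    model="BSC-NLP4BIA/bsc-bio-ehr-es-distemist",
    aggregation_strategy="simple",
)
print(ner("Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre hígado cirrótico."))
```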
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
The model was trained using the Barcelona Supercomputing Center infrastructure.
## Evaluation
F1 Score on DisTEMIST: 0.757.
## Additional information
### Authors
NLP4BIA team at the Barcelona Supercomputing Center ([email protected]).
### Contact information
jan.rodriguez [at] bsc.es
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This research was funded by the Ministerio de Ciencia e Innovación (MICINN) under project AI4ProfHealth (PID2020-119266RA-I00 MICIU/AEI/10.13039/501100011033) and BARITONE (TED2021-129974B-C22). This work is also supported by the European Union’s Horizon Europe Co-ordination \& Support Action under Grant Agreement No 101080430 (AI4HF) as well as Grant Agreement No 101057849 (DataTool4Heartproject).
### Citing information
Please cite the following works:
```
@inproceedings{distemist,
title={{Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases from clinical texts: results, methods, evaluation and multilingual resources}},
author={Miranda-Escalada, Antonio and Gascó, Luis and Lima-López, Salvador and Farré-Maduell, Eulàlia and Estrada, Darryl and Nentidis, Anastasios and Krithara, Anastasia and Katsimpras, Georgios and Paliouras, Georgios and Krallinger, Martin},
booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings},
year={2022}
}
@misc{carmen_physionet,
author = {Farre Maduell, Eulalia and Lima-Lopez, Salvador and Frid, Santiago Andres and Conesa, Artur and Asensio, Elisa and Lopez-Rueda, Antonio and Arino, Helena and Calvo, Elena and Bertran, Maria Jesús and Marcos, Maria Angeles and Nofre Maiz, Montserrat and Tañá Velasco, Laura and Marti, Antonia and Farreres, Ricardo and Pastor, Xavier and Borrat Frigola, Xavier and Krallinger, Martin},
title = {{CARMEN-I: A resource of anonymized electronic health records in Spanish and Catalan for training and testing NLP tools (version 1.0.1)}},
year = {2024},
publisher = {PhysioNet},
url = {https://doi.org/10.13026/x7ed-9r91}
}
@article{physionet,
author = {Ary L. Goldberger and Luis A. N. Amaral and Leon Glass and Jeffrey M. Hausdorff and Plamen Ch. Ivanov and Roger G. Mark and Joseph E. Mietus and George B. Moody and Chung-Kang Peng and H. Eugene Stanley },
title = {PhysioBank, PhysioToolkit, and PhysioNet },
journal = {Circulation},
volume = {101},
number = {23},
pages = {e215-e220},
year = {2000},
doi = {10.1161/01.CIR.101.23.e215},
URL = {https://www.ahajournals.org/doi/abs/10.1161/01.CIR.101.23.e215}
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
| [
"DISTEMIST"
] | BioNLP |
calisyj/gemma-sprint2024-brain | calisyj | null | [
"safetensors",
"gemma2",
"region:us"
] | 1,727,982,635,000 | 2024-10-04T02:36:51 | 4 | 0 | ---
{}
---
# Model Card: **Gemma Sprint 2024 - Brain Neural Activation Simulation**
## Model Overview
This model is a fine-tuned version of `google/gemma-2-2b-it`, optimized to simulate brain neural activations and provide answers to neuroscience-related questions. The model was fine-tuned on the **PubMedQA** dataset using **LoRA** (Low-Rank Adaptation) to improve performance on brain-related question-answering tasks. This project focuses on simulating brain circuit activation patterns and generating relevant answers in the domain of neuroscience.
## Model Architecture
- **Base Model**: `google/gemma-2-2b-it`
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
### Configurations:
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **Dropout**: 0.1
- **Target Modules**: `q_proj`, `k_proj`, `v_proj`, `o_proj` (see the `peft` sketch below)
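The configuration above corresponds roughly to the following `peft` setup. This is an illustrative reconstruction, not the authors' exact training script, and the `task_type` is an assumption.
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                     # LoRA rank
    lora_alpha=32,            # LoRA alpha (scaling)
    lora_dropout=0.1,         # dropout on the LoRA layers
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```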
## Dataset
The model was trained on the **PubMedQA** dataset, which contains biomedical questions and detailed long-form answers based on PubMed abstracts. **PubMedQA** is specifically designed for building models that can handle complex, long-answer question-answering tasks in the biomedical domain, making it suitable for neuroscience-related queries as well.
## Data Preprocessing
For training, each question and its corresponding long answer from the **PubMedQA** dataset were preprocessed into input and label formats. The inputs were tokenized with padding and truncation at 512 tokens to fit the model's requirements.
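A rough sketch of this preprocessing step is shown below. The column names follow the public PubMedQA schema, but the authors' exact script may differ, and depending on your `datasets` version you may need `trust_remote_code=True` when loading the dataset.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
dataset = load_dataset("pubmed_qa", "pqa_labeled", split="train")

def preprocess(example):
    # Question -> model input, long answer -> labels, both capped at 512 tokens.
    inputs = tokenizer(example["question"], padding="max_length", truncation=True, max_length=512)
    labels = tokenizer(example["long_answer"], padding="max_length", truncation=True, max_length=512)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)
```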
## Model Performance
The model's performance was evaluated using **BLEU** and **ROUGE** scores:
- **BLEU Score**: Measures the similarity between generated and reference answers.
- **ROUGE Score**: Measures the overlap of n-grams between generated and reference answers.
These metrics were computed on the **PubMedQA** test set. Performance on out-of-domain data may vary.
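For reference, both metrics can be computed with the `evaluate` library. This is a generic sketch with toy strings; the authors' exact evaluation script is not shown in this card.
```python
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

predictions = ["The hippocampus supports memory consolidation."]
references = ["The hippocampus is critical for memory consolidation."]

print(bleu.compute(predictions=predictions, references=[references]))
print(rouge.compute(predictions=predictions, references=references))
```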
## Limitations
- This model was trained on the **PubMedQA** dataset, so it may underperform on out-of-domain data.
- Since no Korean data was used in training, the model may not perform well in Korean question-answering tasks.
## License
This model follows the license of `google/gemma-2-2b-it`. Please refer to the original license for any usage restrictions.
## How to Use
Here’s how you can load and use the **Gemma Sprint 2024** model fine-tuned on the **PubMedQA** dataset. A full example notebook is available at: https://www.kaggle.com/code/calispohwang/gemma-sprint-brain
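Below is a minimal loading-and-generation sketch with `transformers`. The prompt is illustrative; see the Kaggle notebook above for the full workflow.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "calisyj/gemma-sprint2024-brain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Which brain regions are most strongly activated during working-memory tasks?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```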
| [
"PUBMEDQA"
] | BioNLP |
Henrychur/MMed-Llama-3-8B-EnIns | Henrychur | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"conversational",
"en",
"zh",
"ja",
"fr",
"ru",
"es",
"dataset:Henrychur/MMedC",
"dataset:axiong/pmc_llama_instructions",
"arxiv:2402.13963",
"base_model:Henrychur/MMed-Llama-3-8B",
"base_model:finetune:Henrychur/MMed-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,716,386,825,000 | 2024-09-04T11:06:38 | 1,035 | 4 | ---
base_model: Henrychur/MMed-Llama-3-8B
datasets:
- Henrychur/MMedC
- axiong/pmc_llama_instructions
language:
- en
- zh
- ja
- fr
- ru
- es
library_name: transformers
license: llama3
tags:
- medical
---
# MMedLM
[💻Github Repo](https://github.com/MAGIC-AI4Med/MMedLM) [🖨️arXiv Paper](https://arxiv.org/abs/2402.13963)
The official model weights for "Towards Building Multilingual Language Model for Medicine".
## Introduction
This repo contains MMed-Llama 3-8B-EnIns, which is based on MMed-Llama 3-8B. We further fine-tune the model on an **English instruction fine-tuning dataset** (from PMC-LLaMA). We did this for a fair comparison with existing models on commonly used English benchmarks.
Note that MMed-Llama 3-8B-EnIns has only been trained on pmc_llama_instructions, an English medical SFT dataset focusing on QA tasks, so this model's ability to respond to multilingual input is still limited.
The model can be loaded as follows:
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Henrychur/MMed-Llama-3-8B-EnIns")
model = AutoModelForCausalLM.from_pretrained("Henrychur/MMed-Llama-3-8B-EnIns", torch_dtype=torch.float16)
```
- Inference format is similar to Llama 3-Instruct; you can check our inference code [here](https://github.com/MAGIC-AI4Med/MedS-Ins/tree/main/Inference).
- For multiple-choice question-answering tasks, we suggest using the following instruction.
```py
from model import MedS_Llama3 # https://github.com/MAGIC-AI4Med/MedS-Ins/blob/main/Inference/model.py
sdk_api = MedS_Llama3(model_path="Henrychur/MMed-Llama-3-8B-EnIns", gpu_id=0)
INSTRUCTION = "Given a question and a list of options, select the correct answer from the options directly."
input_ = "Question: A mother brings her 3-week-old infant to the pediatrician's office because she is concerned about his feeding habits. He was born without complications and has not had any medical problems up until this time. However, for the past 4 days, he has been fussy, is regurgitating all of his feeds, and his vomit is yellow in color. On physical exam, the child's abdomen is minimally distended but no other abnormalities are appreciated. Which of the following embryologic errors could account for this presentation?\nOptions: A: Abnormal migration of ventral pancreatic bud\tB: Complete failure of proximal duodenum to recanalize\tC: Abnormal hypertrophy of the pylorus\tD: Failure of lateral body folds to move ventrally and fuse in the midline\t"
results = sdk_api.chat([], input_, INSTRUCTION)
print(results)
```
## News
[2024.2.21] Our preprint paper is released on arXiv. Dive into our findings [here](https://arxiv.org/abs/2402.13963).
[2024.2.20] We release [MMedLM](https://huggingface.co/Henrychur/MMedLM) and [MMedLM 2](https://huggingface.co/Henrychur/MMedLM2). With auto-regressive continued training on MMedC, these models achieve superior performance compared to all other open-source models, even rivaling GPT-4 on MMedBench.
[2023.2.20] We release [MMedC](https://huggingface.co/datasets/Henrychur/MMedC), a multilingual medical corpus containing 25.5B tokens.
[2023.2.20] We release [MMedBench](https://huggingface.co/datasets/Henrychur/MMedBench), a new multilingual medical multiple-choice question-answering
benchmark with rationales. Check out the leaderboard [here](https://henrychur.github.io/MultilingualMedQA/).
## Evaluation on Commonly-used English Benchmark
The further-pretrained MMed-Llama 3 also showcases strong performance in the medical domain across different English benchmarks.
| Method | Size | Year | MedQA | MedMCQA | PubMedQA | MMLU_CK | MMLU_MG | MMLU_AN | MMLU_PM | MMLU_CB | MMLU_CM | Avg. |
| ------------------- | ---- | ------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | --------- |
| MedAlpaca | 7B | 2023.3 | 41.7 | 37.5 | 72.8 | 57.4 | 69.0 | 57.0 | 67.3 | 65.3 | 54.3 | 58.03 |
| PMC-LLaMA | 13B | 2023.9 | 56.4 | 56.0 | 77.9 | - | - | - | - | - | - | - |
| MEDITRON | 7B | 2023.11 | 57.2 | 59.2 | 74.4 | 64.6 | 59.9 | 49.3 | 55.4 | 53.8 | 44.8 | 57.62 |
| Mistral | 7B | 2023.12 | 50.8 | 48.2 | 75.4 | 68.7 | 71.0 | 55.6 | 68.4 | 68.1 | 59.5 | 62.97 |
| Gemma | 7B | 2024.2 | 47.2 | 49.0 | 76.2 | 69.8 | 70.0 | 59.3 | 66.2 | **79.9** | 60.1 | 64.19 |
| BioMistral | 7B | 2024.2 | 50.6 | 48.1 | 77.5 | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 58.97 |
| Llama 3 | 8B | 2024.4 | 60.9 | 50.7 | 73.0 | **72.1** | 76.0 | 63.0 | 77.2 | **79.9** | 64.2 | 68.56 |
| MMed-Llama 3~(Ours) | 8B | - | **65.4** | **63.5** | **80.1** | 71.3 | **85.0** | **69.6** | **77.6** | 74.3 | **66.5** | **72.59** |
## Contact
If you have any question, please feel free to contact [email protected].
## Citation
```
@misc{qiu2024building,
title={Towards Building Multilingual Language Model for Medicine},
author={Pengcheng Qiu and Chaoyi Wu and Xiaoman Zhang and Weixiong Lin and Haicheng Wang and Ya Zhang and Yanfeng Wang and Weidi Xie},
year={2024},
eprint={2402.13963},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
solidrust/Einstein-v5-v0.2-7B-AWQ | solidrust | text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"Mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"quantized",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"conversational",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"base_model:Weyaxi/Einstein-v5-v0.2-7B",
"base_model:quantized:Weyaxi/Einstein-v5-v0.2-7B",
"license:other",
"model-index",
"awq",
"region:us"
] | 1,711,586,963,000 | 2024-09-03T08:08:34 | 10 | 0 | ---
base_model: Weyaxi/Einstein-v5-v0.2-7B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
license: other
pipeline_tag: text-generation
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- quantized
- 4-bit
- AWQ
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
quantized_by: Suparious
model_creator: Weyaxi
inference: false
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
model-index:
- name: Einstein-v5-v0.2-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.59
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v5-v0.2-7B
name: Open LLM Leaderboard
---
# Weyaxi/Einstein-v5-v0.2-7B AWQ
- Model creator: [Weyaxi](https://huggingface.co/Weyaxi)
- Original model: [Einstein-v5-v0.2-7B](https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B)
## Model Summary
This model is a fully fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on diverse datasets.
It was fine-tuned on `8xRTX3090` + `1xRTXA6000` GPUs using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
This model's training was sponsored by [sablo.ai](https://sablo.ai).
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Einstein-v5-v0.2-7B-AWQ"
system_message = "You are Alpert Einstein, incarnated a powerful AI."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (a minimal loading sketch follows this list)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
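As a rough sketch of the Transformers route, offered as an alternative to the AutoAWQ example above rather than the canonical usage, the checkpoint can also be loaded directly with `transformers` 4.35.0 or later; this assumes `autoawq` and `accelerate` are installed and a CUDA GPU is available:
```python
# Sketch only: loading the AWQ checkpoint with plain Transformers (>= 4.35.0).
# Assumes autoawq + accelerate are installed and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "solidrust/Einstein-v5-v0.2-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
# For best results, wrap prompts in the ChatML template shown in the next section.
inputs = tokenizer("<|im_start|>user\nWhat is time dilation?<|im_end|>\n<|im_start|>assistant\n",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```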
## Prompt template: ChatML
```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
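If the bundled tokenizer ships a ChatML `chat_template` (an assumption, not verified here), the same prompt can be assembled programmatically instead of formatting the string by hand:
```python
# Sketch only: relies on the tokenizer exposing a ChatML chat_template.
# If it does not, format the template shown above manually instead.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("solidrust/Einstein-v5-v0.2-7B-AWQ")
messages = [
    {"role": "system", "content": "You are Albert Einstein, incarnated as a powerful AI."},
    {"role": "user", "content": "Explain time dilation in one paragraph."},
]
# Produces the fully formatted ChatML string, ending with the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```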
| [
"SCIQ"
] | Non_BioNLP |
jspringer/echo-mistral-7b-instruct-lasttoken | jspringer | feature-extraction | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"mteb",
"arxiv:2402.15449",
"model-index",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,708,318,208,000 | 2024-02-26T05:59:22 | 112 | 6 | ---
tags:
- mteb
model-index:
- name: mlm
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 82.97014925373135
- type: ap
value: 49.6288385893607
- type: f1
value: 77.58957447993662
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.975425
- type: ap
value: 87.57349835900825
- type: f1
value: 90.96732416386632
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.708
- type: f1
value: 47.736228936979586
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.006
- type: map_at_10
value: 49.268
- type: map_at_100
value: 49.903999999999996
- type: map_at_1000
value: 49.909
- type: map_at_3
value: 44.334
- type: map_at_5
value: 47.374
- type: mrr_at_1
value: 32.788000000000004
- type: mrr_at_10
value: 49.707
- type: mrr_at_100
value: 50.346999999999994
- type: mrr_at_1000
value: 50.352
- type: mrr_at_3
value: 44.95
- type: mrr_at_5
value: 47.766999999999996
- type: ndcg_at_1
value: 32.006
- type: ndcg_at_10
value: 58.523
- type: ndcg_at_100
value: 61.095
- type: ndcg_at_1000
value: 61.190999999999995
- type: ndcg_at_3
value: 48.431000000000004
- type: ndcg_at_5
value: 53.94
- type: precision_at_1
value: 32.006
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.104
- type: precision_at_5
value: 14.751
- type: recall_at_1
value: 32.006
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 98.86200000000001
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 60.313
- type: recall_at_5
value: 73.75500000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.01500173547629
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.52209238193538
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.1348784470504
- type: mrr
value: 76.93762916062083
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.8322696692348
- type: cos_sim_spearman
value: 86.53751398463592
- type: euclidean_pearson
value: 86.1435544054336
- type: euclidean_spearman
value: 86.70799979698164
- type: manhattan_pearson
value: 86.1206703865016
- type: manhattan_spearman
value: 86.47004256773585
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.1461038961039
- type: f1
value: 88.09877611214092
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.53021718892608
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.34236915611622
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.435
- type: map_at_10
value: 49.437999999999995
- type: map_at_100
value: 51.105999999999995
- type: map_at_1000
value: 51.217999999999996
- type: map_at_3
value: 44.856
- type: map_at_5
value: 47.195
- type: mrr_at_1
value: 45.78
- type: mrr_at_10
value: 56.302
- type: mrr_at_100
value: 56.974000000000004
- type: mrr_at_1000
value: 57.001999999999995
- type: mrr_at_3
value: 53.6
- type: mrr_at_5
value: 55.059999999999995
- type: ndcg_at_1
value: 44.921
- type: ndcg_at_10
value: 56.842000000000006
- type: ndcg_at_100
value: 61.586
- type: ndcg_at_1000
value: 63.039
- type: ndcg_at_3
value: 50.612
- type: ndcg_at_5
value: 53.181
- type: precision_at_1
value: 44.921
- type: precision_at_10
value: 11.245
- type: precision_at_100
value: 1.7069999999999999
- type: precision_at_1000
value: 0.216
- type: precision_at_3
value: 24.224999999999998
- type: precision_at_5
value: 17.511
- type: recall_at_1
value: 36.435
- type: recall_at_10
value: 70.998
- type: recall_at_100
value: 89.64
- type: recall_at_1000
value: 98.654
- type: recall_at_3
value: 53.034000000000006
- type: recall_at_5
value: 60.41
- type: map_at_1
value: 33.371
- type: map_at_10
value: 45.301
- type: map_at_100
value: 46.663
- type: map_at_1000
value: 46.791
- type: map_at_3
value: 41.79
- type: map_at_5
value: 43.836999999999996
- type: mrr_at_1
value: 42.611
- type: mrr_at_10
value: 51.70400000000001
- type: mrr_at_100
value: 52.342
- type: mrr_at_1000
value: 52.38
- type: mrr_at_3
value: 49.374
- type: mrr_at_5
value: 50.82
- type: ndcg_at_1
value: 42.166
- type: ndcg_at_10
value: 51.49
- type: ndcg_at_100
value: 56.005
- type: ndcg_at_1000
value: 57.748
- type: ndcg_at_3
value: 46.769
- type: ndcg_at_5
value: 49.155
- type: precision_at_1
value: 42.166
- type: precision_at_10
value: 9.841
- type: precision_at_100
value: 1.569
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 22.803
- type: precision_at_5
value: 16.229
- type: recall_at_1
value: 33.371
- type: recall_at_10
value: 62.52799999999999
- type: recall_at_100
value: 81.269
- type: recall_at_1000
value: 91.824
- type: recall_at_3
value: 48.759
- type: recall_at_5
value: 55.519
- type: map_at_1
value: 41.421
- type: map_at_10
value: 55.985
- type: map_at_100
value: 56.989999999999995
- type: map_at_1000
value: 57.028
- type: map_at_3
value: 52.271
- type: map_at_5
value: 54.517
- type: mrr_at_1
value: 47.272999999999996
- type: mrr_at_10
value: 59.266
- type: mrr_at_100
value: 59.821999999999996
- type: mrr_at_1000
value: 59.839
- type: mrr_at_3
value: 56.677
- type: mrr_at_5
value: 58.309999999999995
- type: ndcg_at_1
value: 47.147
- type: ndcg_at_10
value: 62.596
- type: ndcg_at_100
value: 66.219
- type: ndcg_at_1000
value: 66.886
- type: ndcg_at_3
value: 56.558
- type: ndcg_at_5
value: 59.805
- type: precision_at_1
value: 47.147
- type: precision_at_10
value: 10.245
- type: precision_at_100
value: 1.302
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 25.663999999999998
- type: precision_at_5
value: 17.793
- type: recall_at_1
value: 41.421
- type: recall_at_10
value: 78.77499999999999
- type: recall_at_100
value: 93.996
- type: recall_at_1000
value: 98.60600000000001
- type: recall_at_3
value: 62.891
- type: recall_at_5
value: 70.819
- type: map_at_1
value: 27.517999999999997
- type: map_at_10
value: 37.468
- type: map_at_100
value: 38.667
- type: map_at_1000
value: 38.743
- type: map_at_3
value: 34.524
- type: map_at_5
value: 36.175000000000004
- type: mrr_at_1
value: 29.378999999999998
- type: mrr_at_10
value: 39.54
- type: mrr_at_100
value: 40.469
- type: mrr_at_1000
value: 40.522000000000006
- type: mrr_at_3
value: 36.685
- type: mrr_at_5
value: 38.324000000000005
- type: ndcg_at_1
value: 29.718
- type: ndcg_at_10
value: 43.091
- type: ndcg_at_100
value: 48.44
- type: ndcg_at_1000
value: 50.181
- type: ndcg_at_3
value: 37.34
- type: ndcg_at_5
value: 40.177
- type: precision_at_1
value: 29.718
- type: precision_at_10
value: 6.723
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.083
- type: precision_at_5
value: 11.322000000000001
- type: recall_at_1
value: 27.517999999999997
- type: recall_at_10
value: 58.196999999999996
- type: recall_at_100
value: 82.07799999999999
- type: recall_at_1000
value: 94.935
- type: recall_at_3
value: 42.842
- type: recall_at_5
value: 49.58
- type: map_at_1
value: 19.621
- type: map_at_10
value: 30.175
- type: map_at_100
value: 31.496000000000002
- type: map_at_1000
value: 31.602000000000004
- type: map_at_3
value: 26.753
- type: map_at_5
value: 28.857
- type: mrr_at_1
value: 25.497999999999998
- type: mrr_at_10
value: 35.44
- type: mrr_at_100
value: 36.353
- type: mrr_at_1000
value: 36.412
- type: mrr_at_3
value: 32.275999999999996
- type: mrr_at_5
value: 34.434
- type: ndcg_at_1
value: 24.502
- type: ndcg_at_10
value: 36.423
- type: ndcg_at_100
value: 42.289
- type: ndcg_at_1000
value: 44.59
- type: ndcg_at_3
value: 30.477999999999998
- type: ndcg_at_5
value: 33.787
- type: precision_at_1
value: 24.502
- type: precision_at_10
value: 6.978
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 15.008
- type: precision_at_5
value: 11.468
- type: recall_at_1
value: 19.621
- type: recall_at_10
value: 50.516000000000005
- type: recall_at_100
value: 75.721
- type: recall_at_1000
value: 91.77199999999999
- type: recall_at_3
value: 34.695
- type: recall_at_5
value: 42.849
- type: map_at_1
value: 33.525
- type: map_at_10
value: 46.153
- type: map_at_100
value: 47.61
- type: map_at_1000
value: 47.715
- type: map_at_3
value: 42.397
- type: map_at_5
value: 44.487
- type: mrr_at_1
value: 42.445
- type: mrr_at_10
value: 52.174
- type: mrr_at_100
value: 52.986999999999995
- type: mrr_at_1000
value: 53.016
- type: mrr_at_3
value: 49.647000000000006
- type: mrr_at_5
value: 51.215999999999994
- type: ndcg_at_1
value: 42.156
- type: ndcg_at_10
value: 52.698
- type: ndcg_at_100
value: 58.167
- type: ndcg_at_1000
value: 59.71300000000001
- type: ndcg_at_3
value: 47.191
- type: ndcg_at_5
value: 49.745
- type: precision_at_1
value: 42.156
- type: precision_at_10
value: 9.682
- type: precision_at_100
value: 1.469
- type: precision_at_1000
value: 0.17700000000000002
- type: precision_at_3
value: 22.682
- type: precision_at_5
value: 16.035
- type: recall_at_1
value: 33.525
- type: recall_at_10
value: 66.142
- type: recall_at_100
value: 88.248
- type: recall_at_1000
value: 97.806
- type: recall_at_3
value: 50.541000000000004
- type: recall_at_5
value: 57.275
- type: map_at_1
value: 28.249000000000002
- type: map_at_10
value: 41.659
- type: map_at_100
value: 43.001
- type: map_at_1000
value: 43.094
- type: map_at_3
value: 37.607
- type: map_at_5
value: 39.662
- type: mrr_at_1
value: 36.301
- type: mrr_at_10
value: 47.482
- type: mrr_at_100
value: 48.251
- type: mrr_at_1000
value: 48.288
- type: mrr_at_3
value: 44.444
- type: mrr_at_5
value: 46.013999999999996
- type: ndcg_at_1
value: 35.616
- type: ndcg_at_10
value: 49.021
- type: ndcg_at_100
value: 54.362
- type: ndcg_at_1000
value: 55.864999999999995
- type: ndcg_at_3
value: 42.515
- type: ndcg_at_5
value: 45.053
- type: precision_at_1
value: 35.616
- type: precision_at_10
value: 9.372
- type: precision_at_100
value: 1.4120000000000001
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 21.043
- type: precision_at_5
value: 14.84
- type: recall_at_1
value: 28.249000000000002
- type: recall_at_10
value: 65.514
- type: recall_at_100
value: 87.613
- type: recall_at_1000
value: 97.03
- type: recall_at_3
value: 47.21
- type: recall_at_5
value: 54.077
- type: map_at_1
value: 29.164583333333333
- type: map_at_10
value: 40.632000000000005
- type: map_at_100
value: 41.96875
- type: map_at_1000
value: 42.07508333333333
- type: map_at_3
value: 37.18458333333333
- type: map_at_5
value: 39.13700000000001
- type: mrr_at_1
value: 35.2035
- type: mrr_at_10
value: 45.28816666666666
- type: mrr_at_100
value: 46.11466666666667
- type: mrr_at_1000
value: 46.15741666666667
- type: mrr_at_3
value: 42.62925
- type: mrr_at_5
value: 44.18141666666667
- type: ndcg_at_1
value: 34.88958333333333
- type: ndcg_at_10
value: 46.90650000000001
- type: ndcg_at_100
value: 52.135333333333335
- type: ndcg_at_1000
value: 53.89766666666668
- type: ndcg_at_3
value: 41.32075
- type: ndcg_at_5
value: 44.02083333333333
- type: precision_at_1
value: 34.88958333333333
- type: precision_at_10
value: 8.392833333333332
- type: precision_at_100
value: 1.3085833333333334
- type: precision_at_1000
value: 0.16458333333333333
- type: precision_at_3
value: 19.361166666666666
- type: precision_at_5
value: 13.808416666666668
- type: recall_at_1
value: 29.164583333333333
- type: recall_at_10
value: 60.874666666666656
- type: recall_at_100
value: 83.21008333333334
- type: recall_at_1000
value: 95.09275000000001
- type: recall_at_3
value: 45.37591666666667
- type: recall_at_5
value: 52.367666666666665
- type: map_at_1
value: 28.682000000000002
- type: map_at_10
value: 37.913000000000004
- type: map_at_100
value: 39.037
- type: map_at_1000
value: 39.123999999999995
- type: map_at_3
value: 35.398
- type: map_at_5
value: 36.906
- type: mrr_at_1
value: 32.362
- type: mrr_at_10
value: 40.92
- type: mrr_at_100
value: 41.748000000000005
- type: mrr_at_1000
value: 41.81
- type: mrr_at_3
value: 38.701
- type: mrr_at_5
value: 39.936
- type: ndcg_at_1
value: 32.208999999999996
- type: ndcg_at_10
value: 42.84
- type: ndcg_at_100
value: 47.927
- type: ndcg_at_1000
value: 50.048
- type: ndcg_at_3
value: 38.376
- type: ndcg_at_5
value: 40.661
- type: precision_at_1
value: 32.208999999999996
- type: precision_at_10
value: 6.718
- type: precision_at_100
value: 1.012
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 16.667
- type: precision_at_5
value: 11.503
- type: recall_at_1
value: 28.682000000000002
- type: recall_at_10
value: 54.872
- type: recall_at_100
value: 77.42999999999999
- type: recall_at_1000
value: 93.054
- type: recall_at_3
value: 42.577999999999996
- type: recall_at_5
value: 48.363
- type: map_at_1
value: 19.698
- type: map_at_10
value: 28.777
- type: map_at_100
value: 30.091
- type: map_at_1000
value: 30.209999999999997
- type: map_at_3
value: 25.874000000000002
- type: map_at_5
value: 27.438000000000002
- type: mrr_at_1
value: 24.295
- type: mrr_at_10
value: 33.077
- type: mrr_at_100
value: 34.036
- type: mrr_at_1000
value: 34.1
- type: mrr_at_3
value: 30.523
- type: mrr_at_5
value: 31.891000000000002
- type: ndcg_at_1
value: 24.535
- type: ndcg_at_10
value: 34.393
- type: ndcg_at_100
value: 40.213
- type: ndcg_at_1000
value: 42.748000000000005
- type: ndcg_at_3
value: 29.316
- type: ndcg_at_5
value: 31.588
- type: precision_at_1
value: 24.535
- type: precision_at_10
value: 6.483
- type: precision_at_100
value: 1.102
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 14.201
- type: precision_at_5
value: 10.344000000000001
- type: recall_at_1
value: 19.698
- type: recall_at_10
value: 46.903
- type: recall_at_100
value: 72.624
- type: recall_at_1000
value: 90.339
- type: recall_at_3
value: 32.482
- type: recall_at_5
value: 38.452
- type: map_at_1
value: 30.56
- type: map_at_10
value: 41.993
- type: map_at_100
value: 43.317
- type: map_at_1000
value: 43.399
- type: map_at_3
value: 38.415
- type: map_at_5
value: 40.472
- type: mrr_at_1
value: 36.474000000000004
- type: mrr_at_10
value: 46.562
- type: mrr_at_100
value: 47.497
- type: mrr_at_1000
value: 47.532999999999994
- type: mrr_at_3
value: 43.905
- type: mrr_at_5
value: 45.379000000000005
- type: ndcg_at_1
value: 36.287000000000006
- type: ndcg_at_10
value: 48.262
- type: ndcg_at_100
value: 53.789
- type: ndcg_at_1000
value: 55.44
- type: ndcg_at_3
value: 42.358000000000004
- type: ndcg_at_5
value: 45.221000000000004
- type: precision_at_1
value: 36.287000000000006
- type: precision_at_10
value: 8.265
- type: precision_at_100
value: 1.24
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 19.558
- type: precision_at_5
value: 13.880999999999998
- type: recall_at_1
value: 30.56
- type: recall_at_10
value: 62.891
- type: recall_at_100
value: 85.964
- type: recall_at_1000
value: 97.087
- type: recall_at_3
value: 46.755
- type: recall_at_5
value: 53.986000000000004
- type: map_at_1
value: 29.432000000000002
- type: map_at_10
value: 40.898
- type: map_at_100
value: 42.794
- type: map_at_1000
value: 43.029
- type: map_at_3
value: 37.658
- type: map_at_5
value: 39.519
- type: mrr_at_1
value: 36.364000000000004
- type: mrr_at_10
value: 46.9
- type: mrr_at_100
value: 47.819
- type: mrr_at_1000
value: 47.848
- type: mrr_at_3
value: 44.202999999999996
- type: mrr_at_5
value: 45.715
- type: ndcg_at_1
value: 35.573
- type: ndcg_at_10
value: 47.628
- type: ndcg_at_100
value: 53.88699999999999
- type: ndcg_at_1000
value: 55.584
- type: ndcg_at_3
value: 42.669000000000004
- type: ndcg_at_5
value: 45.036
- type: precision_at_1
value: 35.573
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 1.8159999999999998
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 20.29
- type: precision_at_5
value: 14.387
- type: recall_at_1
value: 29.432000000000002
- type: recall_at_10
value: 60.388
- type: recall_at_100
value: 87.144
- type: recall_at_1000
value: 97.154
- type: recall_at_3
value: 45.675
- type: recall_at_5
value: 52.35300000000001
- type: map_at_1
value: 21.462999999999997
- type: map_at_10
value: 31.824
- type: map_at_100
value: 32.853
- type: map_at_1000
value: 32.948
- type: map_at_3
value: 28.671999999999997
- type: map_at_5
value: 30.579
- type: mrr_at_1
value: 23.66
- type: mrr_at_10
value: 34.091
- type: mrr_at_100
value: 35.077999999999996
- type: mrr_at_1000
value: 35.138999999999996
- type: mrr_at_3
value: 31.516
- type: mrr_at_5
value: 33.078
- type: ndcg_at_1
value: 23.845
- type: ndcg_at_10
value: 37.594
- type: ndcg_at_100
value: 42.74
- type: ndcg_at_1000
value: 44.93
- type: ndcg_at_3
value: 31.667
- type: ndcg_at_5
value: 34.841
- type: precision_at_1
value: 23.845
- type: precision_at_10
value: 6.229
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 14.11
- type: precision_at_5
value: 10.388
- type: recall_at_1
value: 21.462999999999997
- type: recall_at_10
value: 52.772
- type: recall_at_100
value: 76.794
- type: recall_at_1000
value: 92.852
- type: recall_at_3
value: 37.049
- type: recall_at_5
value: 44.729
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.466
- type: map_at_10
value: 25.275
- type: map_at_100
value: 27.176000000000002
- type: map_at_1000
value: 27.374
- type: map_at_3
value: 21.438
- type: map_at_5
value: 23.366
- type: mrr_at_1
value: 35.699999999999996
- type: mrr_at_10
value: 47.238
- type: mrr_at_100
value: 47.99
- type: mrr_at_1000
value: 48.021
- type: mrr_at_3
value: 44.463
- type: mrr_at_5
value: 46.039
- type: ndcg_at_1
value: 35.244
- type: ndcg_at_10
value: 34.559
- type: ndcg_at_100
value: 41.74
- type: ndcg_at_1000
value: 45.105000000000004
- type: ndcg_at_3
value: 29.284
- type: ndcg_at_5
value: 30.903999999999996
- type: precision_at_1
value: 35.244
- type: precision_at_10
value: 10.463000000000001
- type: precision_at_100
value: 1.8259999999999998
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 21.65
- type: precision_at_5
value: 16.078
- type: recall_at_1
value: 15.466
- type: recall_at_10
value: 39.782000000000004
- type: recall_at_100
value: 64.622
- type: recall_at_1000
value: 83.233
- type: recall_at_3
value: 26.398
- type: recall_at_5
value: 31.676
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.414
- type: map_at_10
value: 22.435
- type: map_at_100
value: 32.393
- type: map_at_1000
value: 34.454
- type: map_at_3
value: 15.346000000000002
- type: map_at_5
value: 18.282999999999998
- type: mrr_at_1
value: 71.5
- type: mrr_at_10
value: 78.795
- type: mrr_at_100
value: 79.046
- type: mrr_at_1000
value: 79.054
- type: mrr_at_3
value: 77.333
- type: mrr_at_5
value: 78.146
- type: ndcg_at_1
value: 60.75000000000001
- type: ndcg_at_10
value: 46.829
- type: ndcg_at_100
value: 52.370000000000005
- type: ndcg_at_1000
value: 59.943999999999996
- type: ndcg_at_3
value: 51.33
- type: ndcg_at_5
value: 48.814
- type: precision_at_1
value: 71.75
- type: precision_at_10
value: 37.525
- type: precision_at_100
value: 12.075
- type: precision_at_1000
value: 2.464
- type: precision_at_3
value: 54.75
- type: precision_at_5
value: 47.55
- type: recall_at_1
value: 9.414
- type: recall_at_10
value: 28.67
- type: recall_at_100
value: 59.924
- type: recall_at_1000
value: 83.921
- type: recall_at_3
value: 16.985
- type: recall_at_5
value: 21.372
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.18000000000001
- type: f1
value: 47.04613218997081
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 82.57900000000001
- type: map_at_10
value: 88.465
- type: map_at_100
value: 88.649
- type: map_at_1000
value: 88.661
- type: map_at_3
value: 87.709
- type: map_at_5
value: 88.191
- type: mrr_at_1
value: 88.899
- type: mrr_at_10
value: 93.35900000000001
- type: mrr_at_100
value: 93.38499999999999
- type: mrr_at_1000
value: 93.38499999999999
- type: mrr_at_3
value: 93.012
- type: mrr_at_5
value: 93.282
- type: ndcg_at_1
value: 88.98899999999999
- type: ndcg_at_10
value: 91.22
- type: ndcg_at_100
value: 91.806
- type: ndcg_at_1000
value: 92.013
- type: ndcg_at_3
value: 90.236
- type: ndcg_at_5
value: 90.798
- type: precision_at_1
value: 88.98899999999999
- type: precision_at_10
value: 10.537
- type: precision_at_100
value: 1.106
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 33.598
- type: precision_at_5
value: 20.618
- type: recall_at_1
value: 82.57900000000001
- type: recall_at_10
value: 94.95400000000001
- type: recall_at_100
value: 97.14
- type: recall_at_1000
value: 98.407
- type: recall_at_3
value: 92.203
- type: recall_at_5
value: 93.747
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.871000000000002
- type: map_at_10
value: 46.131
- type: map_at_100
value: 48.245
- type: map_at_1000
value: 48.361
- type: map_at_3
value: 40.03
- type: map_at_5
value: 43.634
- type: mrr_at_1
value: 52.932
- type: mrr_at_10
value: 61.61299999999999
- type: mrr_at_100
value: 62.205
- type: mrr_at_1000
value: 62.224999999999994
- type: mrr_at_3
value: 59.388
- type: mrr_at_5
value: 60.760999999999996
- type: ndcg_at_1
value: 53.395
- type: ndcg_at_10
value: 54.506
- type: ndcg_at_100
value: 61.151999999999994
- type: ndcg_at_1000
value: 62.882000000000005
- type: ndcg_at_3
value: 49.903999999999996
- type: ndcg_at_5
value: 51.599
- type: precision_at_1
value: 53.395
- type: precision_at_10
value: 15.247
- type: precision_at_100
value: 2.221
- type: precision_at_1000
value: 0.255
- type: precision_at_3
value: 33.539
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.871000000000002
- type: recall_at_10
value: 62.074
- type: recall_at_100
value: 86.531
- type: recall_at_1000
value: 96.574
- type: recall_at_3
value: 45.003
- type: recall_at_5
value: 53.00899999999999
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.513
- type: map_at_10
value: 69.066
- type: map_at_100
value: 69.903
- type: map_at_1000
value: 69.949
- type: map_at_3
value: 65.44200000000001
- type: map_at_5
value: 67.784
- type: mrr_at_1
value: 80.891
- type: mrr_at_10
value: 86.42699999999999
- type: mrr_at_100
value: 86.577
- type: mrr_at_1000
value: 86.58200000000001
- type: mrr_at_3
value: 85.6
- type: mrr_at_5
value: 86.114
- type: ndcg_at_1
value: 81.026
- type: ndcg_at_10
value: 76.412
- type: ndcg_at_100
value: 79.16
- type: ndcg_at_1000
value: 79.989
- type: ndcg_at_3
value: 71.45
- type: ndcg_at_5
value: 74.286
- type: precision_at_1
value: 81.026
- type: precision_at_10
value: 16.198999999999998
- type: precision_at_100
value: 1.831
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 46.721000000000004
- type: precision_at_5
value: 30.266
- type: recall_at_1
value: 40.513
- type: recall_at_10
value: 80.99300000000001
- type: recall_at_100
value: 91.526
- type: recall_at_1000
value: 96.935
- type: recall_at_3
value: 70.081
- type: recall_at_5
value: 75.665
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 87.42320000000001
- type: ap
value: 83.59975323233843
- type: f1
value: 87.38669942597816
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.676
- type: map_at_10
value: 35.865
- type: map_at_100
value: 37.019000000000005
- type: map_at_1000
value: 37.062
- type: map_at_3
value: 31.629
- type: map_at_5
value: 34.050999999999995
- type: mrr_at_1
value: 23.023
- type: mrr_at_10
value: 36.138999999999996
- type: mrr_at_100
value: 37.242
- type: mrr_at_1000
value: 37.28
- type: mrr_at_3
value: 32.053
- type: mrr_at_5
value: 34.383
- type: ndcg_at_1
value: 23.308999999999997
- type: ndcg_at_10
value: 43.254
- type: ndcg_at_100
value: 48.763
- type: ndcg_at_1000
value: 49.788
- type: ndcg_at_3
value: 34.688
- type: ndcg_at_5
value: 38.973
- type: precision_at_1
value: 23.308999999999997
- type: precision_at_10
value: 6.909999999999999
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 14.818999999999999
- type: precision_at_5
value: 11.072
- type: recall_at_1
value: 22.676
- type: recall_at_10
value: 66.077
- type: recall_at_100
value: 91.4
- type: recall_at_1000
value: 99.143
- type: recall_at_3
value: 42.845
- type: recall_at_5
value: 53.08500000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.16279069767444
- type: f1
value: 96.02183835878418
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 85.74783401732788
- type: f1
value: 70.59661579230463
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 79.67047747141895
- type: f1
value: 77.06311183471965
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.82447881640887
- type: f1
value: 82.37598020010746
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.266131881264467
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.673653452453998
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.91846122902102
- type: mrr
value: 34.2557300204471
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.762
- type: map_at_10
value: 15.134
- type: map_at_100
value: 19.341
- type: map_at_1000
value: 20.961
- type: map_at_3
value: 10.735999999999999
- type: map_at_5
value: 12.751999999999999
- type: mrr_at_1
value: 52.941
- type: mrr_at_10
value: 60.766
- type: mrr_at_100
value: 61.196
- type: mrr_at_1000
value: 61.227
- type: mrr_at_3
value: 58.720000000000006
- type: mrr_at_5
value: 59.866
- type: ndcg_at_1
value: 50.929
- type: ndcg_at_10
value: 39.554
- type: ndcg_at_100
value: 36.307
- type: ndcg_at_1000
value: 44.743
- type: ndcg_at_3
value: 44.157000000000004
- type: ndcg_at_5
value: 42.142
- type: precision_at_1
value: 52.322
- type: precision_at_10
value: 29.412
- type: precision_at_100
value: 9.365
- type: precision_at_1000
value: 2.2159999999999997
- type: precision_at_3
value: 40.557
- type: precision_at_5
value: 35.913000000000004
- type: recall_at_1
value: 6.762
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 36.687
- type: recall_at_1000
value: 67.23
- type: recall_at_3
value: 11.773
- type: recall_at_5
value: 15.18
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.612
- type: map_at_10
value: 54.208
- type: map_at_100
value: 55.056000000000004
- type: map_at_1000
value: 55.069
- type: map_at_3
value: 49.45
- type: map_at_5
value: 52.556000000000004
- type: mrr_at_1
value: 41.976
- type: mrr_at_10
value: 56.972
- type: mrr_at_100
value: 57.534
- type: mrr_at_1000
value: 57.542
- type: mrr_at_3
value: 53.312000000000005
- type: mrr_at_5
value: 55.672999999999995
- type: ndcg_at_1
value: 41.338
- type: ndcg_at_10
value: 62.309000000000005
- type: ndcg_at_100
value: 65.557
- type: ndcg_at_1000
value: 65.809
- type: ndcg_at_3
value: 53.74100000000001
- type: ndcg_at_5
value: 58.772999999999996
- type: precision_at_1
value: 41.338
- type: precision_at_10
value: 10.107
- type: precision_at_100
value: 1.1900000000000002
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.488
- type: precision_at_5
value: 17.596
- type: recall_at_1
value: 36.612
- type: recall_at_10
value: 84.408
- type: recall_at_100
value: 97.929
- type: recall_at_1000
value: 99.725
- type: recall_at_3
value: 62.676
- type: recall_at_5
value: 74.24199999999999
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.573
- type: map_at_10
value: 85.81
- type: map_at_100
value: 86.434
- type: map_at_1000
value: 86.446
- type: map_at_3
value: 82.884
- type: map_at_5
value: 84.772
- type: mrr_at_1
value: 82.53
- type: mrr_at_10
value: 88.51299999999999
- type: mrr_at_100
value: 88.59700000000001
- type: mrr_at_1000
value: 88.598
- type: mrr_at_3
value: 87.595
- type: mrr_at_5
value: 88.266
- type: ndcg_at_1
value: 82.39999999999999
- type: ndcg_at_10
value: 89.337
- type: ndcg_at_100
value: 90.436
- type: ndcg_at_1000
value: 90.498
- type: ndcg_at_3
value: 86.676
- type: ndcg_at_5
value: 88.241
- type: precision_at_1
value: 82.39999999999999
- type: precision_at_10
value: 13.58
- type: precision_at_100
value: 1.543
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.04
- type: precision_at_5
value: 25.044
- type: recall_at_1
value: 71.573
- type: recall_at_10
value: 96.066
- type: recall_at_100
value: 99.73100000000001
- type: recall_at_1000
value: 99.991
- type: recall_at_3
value: 88.34
- type: recall_at_5
value: 92.79899999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.767168063971724
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 66.00502775826037
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.718
- type: map_at_10
value: 12.13
- type: map_at_100
value: 14.269000000000002
- type: map_at_1000
value: 14.578
- type: map_at_3
value: 8.605
- type: map_at_5
value: 10.483
- type: mrr_at_1
value: 23.7
- type: mrr_at_10
value: 34.354
- type: mrr_at_100
value: 35.522
- type: mrr_at_1000
value: 35.571999999999996
- type: mrr_at_3
value: 31.15
- type: mrr_at_5
value: 32.98
- type: ndcg_at_1
value: 23.3
- type: ndcg_at_10
value: 20.171
- type: ndcg_at_100
value: 28.456
- type: ndcg_at_1000
value: 33.826
- type: ndcg_at_3
value: 19.104
- type: ndcg_at_5
value: 16.977999999999998
- type: precision_at_1
value: 23.3
- type: precision_at_10
value: 10.45
- type: precision_at_100
value: 2.239
- type: precision_at_1000
value: 0.35300000000000004
- type: precision_at_3
value: 17.933
- type: precision_at_5
value: 15.1
- type: recall_at_1
value: 4.718
- type: recall_at_10
value: 21.221999999999998
- type: recall_at_100
value: 45.42
- type: recall_at_1000
value: 71.642
- type: recall_at_3
value: 10.922
- type: recall_at_5
value: 15.322
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.2065344862739
- type: cos_sim_spearman
value: 83.2276569587515
- type: euclidean_pearson
value: 83.42726762105312
- type: euclidean_spearman
value: 83.31396596997742
- type: manhattan_pearson
value: 83.41123401762816
- type: manhattan_spearman
value: 83.34393052682026
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.28253173719754
- type: cos_sim_spearman
value: 76.12995701324436
- type: euclidean_pearson
value: 75.30693691794121
- type: euclidean_spearman
value: 75.12472789129536
- type: manhattan_pearson
value: 75.35860808729171
- type: manhattan_spearman
value: 75.30445827952794
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.09358031005694
- type: cos_sim_spearman
value: 83.18811147636619
- type: euclidean_pearson
value: 82.65513459991631
- type: euclidean_spearman
value: 82.71085530442987
- type: manhattan_pearson
value: 82.67700926821576
- type: manhattan_spearman
value: 82.73815539380426
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.51365440223137
- type: cos_sim_spearman
value: 80.59933905019179
- type: euclidean_pearson
value: 80.56660025433806
- type: euclidean_spearman
value: 80.27926539084027
- type: manhattan_pearson
value: 80.64632724055481
- type: manhattan_spearman
value: 80.43616365139444
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8590461417506
- type: cos_sim_spearman
value: 87.16337291721602
- type: euclidean_pearson
value: 85.8847725068404
- type: euclidean_spearman
value: 86.12602873624066
- type: manhattan_pearson
value: 86.04095861363909
- type: manhattan_spearman
value: 86.35535645007629
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.61371557181502
- type: cos_sim_spearman
value: 85.16330754442785
- type: euclidean_pearson
value: 84.20831431260608
- type: euclidean_spearman
value: 84.33191523212125
- type: manhattan_pearson
value: 84.34911007642411
- type: manhattan_spearman
value: 84.49670164290394
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.54452933158781
- type: cos_sim_spearman
value: 90.88214621695892
- type: euclidean_pearson
value: 91.38488015281216
- type: euclidean_spearman
value: 91.01822259603908
- type: manhattan_pearson
value: 91.36449776198687
- type: manhattan_spearman
value: 90.90478717381717
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.00941643037453
- type: cos_sim_spearman
value: 67.03588472081898
- type: euclidean_pearson
value: 67.35224911601603
- type: euclidean_spearman
value: 66.35544831459266
- type: manhattan_pearson
value: 67.35080066508304
- type: manhattan_spearman
value: 66.07893473733782
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.18291011086279
- type: cos_sim_spearman
value: 85.66913777481429
- type: euclidean_pearson
value: 84.81115930027242
- type: euclidean_spearman
value: 85.07133983924173
- type: manhattan_pearson
value: 84.88932120524983
- type: manhattan_spearman
value: 85.176903109055
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.67543572266588
- type: mrr
value: 95.9468146232852
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.633
- type: map_at_10
value: 69.801
- type: map_at_100
value: 70.504
- type: map_at_1000
value: 70.519
- type: map_at_3
value: 67.72500000000001
- type: map_at_5
value: 68.812
- type: mrr_at_1
value: 62.333000000000006
- type: mrr_at_10
value: 70.956
- type: mrr_at_100
value: 71.489
- type: mrr_at_1000
value: 71.504
- type: mrr_at_3
value: 69.44399999999999
- type: mrr_at_5
value: 70.244
- type: ndcg_at_1
value: 62.0
- type: ndcg_at_10
value: 73.98599999999999
- type: ndcg_at_100
value: 76.629
- type: ndcg_at_1000
value: 77.054
- type: ndcg_at_3
value: 70.513
- type: ndcg_at_5
value: 71.978
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 9.633
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.556000000000004
- type: precision_at_5
value: 17.666999999999998
- type: recall_at_1
value: 59.633
- type: recall_at_10
value: 85.52199999999999
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 75.767
- type: recall_at_5
value: 79.76100000000001
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.77821782178218
- type: cos_sim_ap
value: 94.58684455008866
- type: cos_sim_f1
value: 88.51282051282053
- type: cos_sim_precision
value: 90.84210526315789
- type: cos_sim_recall
value: 86.3
- type: dot_accuracy
value: 99.77623762376237
- type: dot_ap
value: 94.86277541733045
- type: dot_f1
value: 88.66897575457693
- type: dot_precision
value: 87.75710088148874
- type: dot_recall
value: 89.60000000000001
- type: euclidean_accuracy
value: 99.76732673267327
- type: euclidean_ap
value: 94.12114402691984
- type: euclidean_f1
value: 87.96804792810784
- type: euclidean_precision
value: 87.83649052841476
- type: euclidean_recall
value: 88.1
- type: manhattan_accuracy
value: 99.77227722772277
- type: manhattan_ap
value: 94.33665105240306
- type: manhattan_f1
value: 88.25587206396803
- type: manhattan_precision
value: 88.21178821178822
- type: manhattan_recall
value: 88.3
- type: max_accuracy
value: 99.77821782178218
- type: max_ap
value: 94.86277541733045
- type: max_f1
value: 88.66897575457693
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 72.03943478268592
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.285037897356496
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.83578447913503
- type: mrr
value: 52.69070696460402
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.89437612567638
- type: cos_sim_spearman
value: 30.7277819987126
- type: dot_pearson
value: 30.999783674122526
- type: dot_spearman
value: 30.992168551124905
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22699999999999998
- type: map_at_10
value: 1.8950000000000002
- type: map_at_100
value: 11.712
- type: map_at_1000
value: 28.713
- type: map_at_3
value: 0.65
- type: map_at_5
value: 1.011
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 95.39999999999999
- type: mrr_at_100
value: 95.39999999999999
- type: mrr_at_1000
value: 95.39999999999999
- type: mrr_at_3
value: 95.0
- type: mrr_at_5
value: 95.39999999999999
- type: ndcg_at_1
value: 83.0
- type: ndcg_at_10
value: 76.658
- type: ndcg_at_100
value: 60.755
- type: ndcg_at_1000
value: 55.05
- type: ndcg_at_3
value: 82.961
- type: ndcg_at_5
value: 80.008
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 79.80000000000001
- type: precision_at_100
value: 62.019999999999996
- type: precision_at_1000
value: 24.157999999999998
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 83.6
- type: recall_at_1
value: 0.22699999999999998
- type: recall_at_10
value: 2.086
- type: recall_at_100
value: 15.262
- type: recall_at_1000
value: 51.800000000000004
- type: recall_at_3
value: 0.679
- type: recall_at_5
value: 1.0739999999999998
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.521
- type: map_at_10
value: 7.281
- type: map_at_100
value: 12.717
- type: map_at_1000
value: 14.266000000000002
- type: map_at_3
value: 3.62
- type: map_at_5
value: 4.7010000000000005
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 34.906
- type: mrr_at_100
value: 36.333
- type: mrr_at_1000
value: 36.348
- type: mrr_at_3
value: 29.592000000000002
- type: mrr_at_5
value: 33.367000000000004
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 18.523
- type: ndcg_at_100
value: 30.932
- type: ndcg_at_1000
value: 42.942
- type: ndcg_at_3
value: 18.901
- type: ndcg_at_5
value: 17.974999999999998
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 17.347
- type: precision_at_100
value: 6.898
- type: precision_at_1000
value: 1.482
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 19.184
- type: recall_at_1
value: 1.521
- type: recall_at_10
value: 13.406
- type: recall_at_100
value: 43.418
- type: recall_at_1000
value: 80.247
- type: recall_at_3
value: 4.673
- type: recall_at_5
value: 7.247000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.9084
- type: ap
value: 15.388385311898144
- type: f1
value: 55.760189174489426
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.399547255234864
- type: f1
value: 62.61398519525303
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.041094760846164
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.92394349406926
- type: cos_sim_ap
value: 79.93037248584875
- type: cos_sim_f1
value: 73.21063394683026
- type: cos_sim_precision
value: 70.99652949925633
- type: cos_sim_recall
value: 75.56728232189973
- type: dot_accuracy
value: 87.80473266972642
- type: dot_ap
value: 79.11055417163318
- type: dot_f1
value: 72.79587473273801
- type: dot_precision
value: 69.55058880076905
- type: dot_recall
value: 76.35883905013192
- type: euclidean_accuracy
value: 87.91202241163496
- type: euclidean_ap
value: 79.61955502404068
- type: euclidean_f1
value: 72.65956080647231
- type: euclidean_precision
value: 70.778083562672
- type: euclidean_recall
value: 74.64379947229551
- type: manhattan_accuracy
value: 87.7749299636407
- type: manhattan_ap
value: 79.33286131650932
- type: manhattan_f1
value: 72.44748412310699
- type: manhattan_precision
value: 67.43974533879036
- type: manhattan_recall
value: 78.25857519788919
- type: max_accuracy
value: 87.92394349406926
- type: max_ap
value: 79.93037248584875
- type: max_f1
value: 73.21063394683026
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.89987192921178
- type: cos_sim_ap
value: 87.49525152555509
- type: cos_sim_f1
value: 80.05039276715578
- type: cos_sim_precision
value: 77.15714285714286
- type: cos_sim_recall
value: 83.1690791499846
- type: dot_accuracy
value: 89.58163542515621
- type: dot_ap
value: 86.87353801172357
- type: dot_f1
value: 79.50204384986993
- type: dot_precision
value: 76.83522482401953
- type: dot_recall
value: 82.36064059131506
- type: euclidean_accuracy
value: 89.81255093724532
- type: euclidean_ap
value: 87.41058010369022
- type: euclidean_f1
value: 79.94095829233214
- type: euclidean_precision
value: 78.61396456751525
- type: euclidean_recall
value: 81.3135201724669
- type: manhattan_accuracy
value: 89.84553886754377
- type: manhattan_ap
value: 87.41173628281432
- type: manhattan_f1
value: 79.9051922079846
- type: manhattan_precision
value: 76.98016269444841
- type: manhattan_recall
value: 83.06128734216199
- type: max_accuracy
value: 89.89987192921178
- type: max_ap
value: 87.49525152555509
- type: max_f1
value: 80.05039276715578
---
# Repetition Improves Language Model Embeddings
Please refer to our paper: [https://arxiv.org/abs/2402.15449](https://arxiv.org/abs/2402.15449)
and our GitHub repository: [https://github.com/jakespringer/echo-embeddings](https://github.com/jakespringer/echo-embeddings).
Both links provide a description of the model as well as example usage.
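The repository above documents the intended usage (echo prompting followed by last-token pooling). Purely as an orientation sketch, and not the project's own API, last-token pooling with plain `transformers` looks roughly like this; the prompt format, pooling details and normalization are assumptions, and the snippet assumes a GPU plus `accelerate` for `device_map="auto"`:
```python
# Orientation sketch only, NOT the official echo-embeddings API.
# Prompt format, pooling and normalization are assumptions; see the GitHub repo for the real recipe.
import torch
from transformers import AutoModel, AutoTokenizer
model_id = "jspringer/echo-mistral-7b-instruct-lasttoken"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
# Mistral tokenizers often lack a pad token; right padding keeps the last-token index simple.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
texts = ["A happy dog plays in the park.", "Stock markets fell sharply today."]
batch = tokenizer(texts, padding=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_dim)
# Last-token pooling: hidden state at each sequence's final non-padding position.
last_idx = batch["attention_mask"].sum(dim=1) - 1
embeddings = hidden[torch.arange(hidden.size(0)), last_idx]
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
print(embeddings.shape)
```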
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_9_prob | RolMax | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 1,712,846,375,000 | 2024-04-11T15:20:29 | 4 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_9_prob
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_9_prob")
topic_model.get_topic_info()
```
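Beyond inspecting the fitted topics, the loaded model can assign topics to unseen documents with `transform` (standard BERTopic API). Depending on how the model was serialized, the original `embedding_model` may need to be passed to `BERTopic.load`; the snippet below is a sketch with a made-up example document:
```python
from bertopic import BERTopic
# If the serialized model does not bundle its embedding model, pass embedding_model=... to load().
topic_model = BERTopic.load("RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_9_prob")
docs = ["Die Gaspreise steigen wegen der Sanktionen weiter an."]  # made-up example document
topics, probs = topic_model.transform(docs)
# Map each numeric topic ID back to its top keywords.
for doc, topic in zip(docs, topics):
    print(topic, topic_model.get_topic(topic))
```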
## Topic overview
* Number of topics: 860
* Number of training documents: 91393
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | moskau - russia - freiheitlichen - propaganda - wladimir | 20 | -1_moskau_russia_freiheitlichen_propaganda |
| 0 | momentan - sagst - mee - weiterzumachen - deinen | 49667 | 0_momentan_sagst_mee_weiterzumachen |
| 1 | ukraineflüchtlinge - flüchtlingskrise - flüchtlingshilfswerk - flüchtlingshilfswerks - flüchtlingszahlen | 1034 | 1_ukraineflüchtlinge_flüchtlingskrise_flüchtlingshilfswerk_flüchtlingshilfswerks |
| 2 | ukrainian - biolaboratorien - biowaffenlabore - bakteriologischer - biowaffenforschung | 612 | 2_ukrainian_biolaboratorien_biowaffenlabore_bakteriologischer |
| 3 | gerichtsverfahren - gerichtsverhandlung - bagatellstrafverfahren - gerichtssaal - rechtssprechung | 432 | 3_gerichtsverfahren_gerichtsverhandlung_bagatellstrafverfahren_gerichtssaal |
| 4 | weihnachtsstimmung - weihnachtsfrieden - weihnachtswunder - weihnachtsfeiertage - weihnachtszeit | 409 | 4_weihnachtsstimmung_weihnachtsfrieden_weihnachtswunder_weihnachtsfeiertage |
| 5 | deutschesreich - german - deutsch - wochenschau - antipropaganda | 393 | 5_deutschesreich_german_deutsch_wochenschau |
| 6 | asozialdemokrattie - politikern - sündenbockpolitik - demokratie - metapolitics | 389 | 6_asozialdemokrattie_politikern_sündenbockpolitik_demokratie |
| 7 | kriegsrecht - kriegsdrohung - kriegstreiberei - kriege - kriegstreiber | 380 | 7_kriegsrecht_kriegsdrohung_kriegstreiberei_kriege |
| 8 | russischem - russisches - russischer - gasbedarfs - gaslieferungen | 360 | 8_russischem_russisches_russischer_gasbedarfs |
| 9 | publizisten - journalistischen - medienschaffende - journalistische - journalisten | 360 | 9_publizisten_journalistischen_medienschaffende_journalistische |
| 10 | polizeiaufgebot - polizeibeamte - polizeigewalt - polizeisprecherin - polizeikette | 340 | 10_polizeiaufgebot_polizeibeamte_polizeigewalt_polizeisprecherin |
| 11 | rathausplatz - marienplatz - kirchplatz - bahnhofstraße - bahnhofsvorplatz | 326 | 11_rathausplatz_marienplatz_kirchplatz_bahnhofstraße |
| 12 | schönherr - sachsenanhalt - cantode - hauensteinerstr - profkilez | 274 | 12_schönherr_sachsenanhalt_cantode_hauensteinerstr |
| 13 | südtürkei - südosttürkei - türkei - türkische - türkischen | 254 | 13_südtürkei_südosttürkei_türkei_türkische |
| 14 | protestaktionen - demonstrierenden - massendemonstrationen - protesten - gegendemonstranten | 238 | 14_protestaktionen_demonstrierenden_massendemonstrationen_protesten |
| 15 | geopolitik - politikmagazins - politische - geoengineering - montagskundgebung | 230 | 15_geopolitik_politikmagazins_politische_geoengineering |
| 16 | juschnoukrainsk - украине - ukrainer - ukrainische - slawjansk | 215 | 16_juschnoukrainsk_украине_ukrainer_ukrainische |
| 17 | gestern - markranstädt - vorgestern - pöchlarn - kırıkhan | 213 | 17_gestern_markranstädt_vorgestern_pöchlarn |
| 18 | germany - ncsvvic - greetings - patriots - politische | 207 | 18_germany_ncsvvic_greetings_patriots |
| 19 | lebensmittelpreise - rohstoffpreise - getreidepreisen - getreidepreise - weizenpreise | 206 | 19_lebensmittelpreise_rohstoffpreise_getreidepreisen_getreidepreise |
| 20 | maskenstudie - maskentragen - schutzmasken - maskenbefreiungsattest - masken | 201 | 20_maskenstudie_maskentragen_schutzmasken_maskenbefreiungsattest |
| 21 | sicherheitsvorfall - sicherheitslücke - cybersicherheit - cyberangriffen - cyberattacken | 195 | 21_sicherheitsvorfall_sicherheitslücke_cybersicherheit_cyberangriffen |
| 22 | gaspreise - gaspreis - erdgaspreis - gas - rohölpreise | 194 | 22_gaspreise_gaspreis_erdgaspreis_gas |
| 23 | jews - zionisten - zionistischen - zionistische - israel | 184 | 23_jews_zionisten_zionistischen_zionistische |
| 24 | russlandtreibstoffkrise - sanktionswellen - sanktionsgegenschlag - sanktionswelle - sanktionskrieg | 177 | 24_russlandtreibstoffkrise_sanktionswellen_sanktionsgegenschlag_sanktionswelle |
| 25 | germany - verdummen - psaki - allgegenwart - fascist | 177 | 25_germany_verdummen_psaki_allgegenwart |
| 26 | телеграма - telegramelite - aime - eingeschalten - entgegentreten | 173 | 26_телеграма_telegramelite_aime_eingeschalten |
| 27 | censorship - documentaries - reports - truth - reviews | 173 | 27_censorship_documentaries_reports_truth |
| 28 | impfindustrie - impfstoffe - impfstoff - impfungen - immunisiert | 173 | 28_impfindustrie_impfstoffe_impfstoff_impfungen |
| 29 | abendgrüße - morgenlicht - abendstimmung - morgengebet - lichtgrüße | 172 | 29_abendgrüße_morgenlicht_abendstimmung_morgengebet |
| 30 | путин - wladimir - russiangirl - führerbunker - putins | 169 | 30_путин_wladimir_russiangirl_führerbunker |
| 31 | impfrisiko - sterblichkeitsrate - pädiatrieimpfstoffs - impfstoffdosen - todesfällen | 163 | 31_impfrisiko_sterblichkeitsrate_pädiatrieimpfstoffs_impfstoffdosen |
| 32 | kraftstoffpreise - gaspreise - energiepreise - energiepreisen - energiekosten | 163 | 32_kraftstoffpreise_gaspreise_energiepreise_energiepreisen |
| 33 | telegramkanalbetreiber - telegramzensur - messengerdienst - messenger - messengerdienste | 161 | 33_telegramkanalbetreiber_telegramzensur_messengerdienst_messenger |
| 34 | insektennahrung - insektenmehl - insektenhaltige - insektenarten - insekten | 160 | 34_insektennahrung_insektenmehl_insektenhaltige_insektenarten |
| 35 | umweltbewusstes - nachhaltigem - selbstversorgung - bodenbewusstsein - gewächshaus | 160 | 35_umweltbewusstes_nachhaltigem_selbstversorgung_bodenbewusstsein |
| 36 | trinkwasserverunreinigung - trinkwasserverordnung - trinkwasserversorgung - wasserversorgung - abwasser | 160 | 36_trinkwasserverunreinigung_trinkwasserverordnung_trinkwasserversorgung_wasserversorgung |
| 37 | aufstandfuerfrieden - pacifico - pacifique - friedlichkeit - paix | 156 | 37_aufstandfuerfrieden_pacifico_pacifique_friedlichkeit |
| 38 | österreichern - niederösterreich - österreicherinnen - österreicher - niederösterreichischen | 155 | 38_österreichern_niederösterreich_österreicherinnen_österreicher |
| 39 | twitteraccount - twittertrends - twitterfiles - twitternutzer - twittergemeinde | 154 | 39_twitteraccount_twittertrends_twitterfiles_twitternutzer |
| 40 | berlinwahl - abgeordnetenhauswahl - bezirkswahlamt - wahlverlierer - berlinwahl2023 | 154 | 40_berlinwahl_abgeordnetenhauswahl_bezirkswahlamt_wahlverlierer |
| 41 | impfstoffs - impfkommission - impfzentrum - impfstoff - kinderimpfung | 150 | 41_impfstoffs_impfkommission_impfzentrum_impfstoff |
| 42 | russizismen - russischsprachigen - русскоязычных - russen - russischsprachige | 147 | 42_russizismen_russischsprachigen_русскоязычных_russen |
| 43 | china - chinas - chinesen - chinesische - chinesischen | 143 | 43_china_chinas_chinesen_chinesische |
| 44 | eurofighter - kriegswaffen - munitionsverbrauch - flugabwehrraketensystems - truppenbewegungen | 142 | 44_eurofighter_kriegswaffen_munitionsverbrauch_flugabwehrraketensystems |
| 45 | bundesgesundheitsminister - gesundheitsminister - lauterbach - lauterbachs - alarmismus | 141 | 45_bundesgesundheitsminister_gesundheitsminister_lauterbach_lauterbachs |
| 46 | gesundheitsinstitutionen - gesundheitsbranche - behandlungsfreiheit - ärztlichen - ärztliche | 137 | 46_gesundheitsinstitutionen_gesundheitsbranche_behandlungsfreiheit_ärztlichen |
| 47 | gasleitung - gaslieferung - gaslieferungen - pipeline - gaslieferanten | 133 | 47_gasleitung_gaslieferung_gaslieferungen_pipeline |
| 48 | milliardenübernahme - pfizer - pharmariesen - pfizers - zahlte | 132 | 48_milliardenübernahme_pfizer_pharmariesen_pfizers |
| 49 | coronamaßnahmen - coronabeschränkungen - coronaregeln - coronawellen - fristverlängerung | 129 | 49_coronamaßnahmen_coronabeschränkungen_coronaregeln_coronawellen |
| 50 | donnerstagsvideo - videofolgt - fortsetzungsvideo - videoeindrücke - musikvideos | 129 | 50_donnerstagsvideo_videofolgt_fortsetzungsvideo_videoeindrücke |
| 51 | impfpflichtgesetz - impfstoffhersteller - impfstrafen - arzneimittelgesetz - schutzpflicht | 127 | 51_impfpflichtgesetz_impfstoffhersteller_impfstrafen_arzneimittelgesetz |
| 52 | europas - krieges - zerstörten - ruin - gruß | 123 | 52_europas_krieges_zerstörten_ruin |
| 53 | germany - poland - benjaminfulford - earthing - mana | 122 | 53_germany_poland_benjaminfulford_earthing |
| 54 | chlorella - chlorophyll - chlorwasserstoff - antibakterielle - antioxidantien | 119 | 54_chlorella_chlorophyll_chlorwasserstoff_antibakterielle |
| 55 | besigheim - ukvali - müritz - gemäss - werra | 117 | 55_besigheim_ukvali_müritz_gemäss |
| 56 | moskau - russiagate - propagandafeldzug - medienfreiheit - kremlchef | 117 | 56_moskau_russiagate_propagandafeldzug_medienfreiheit |
| 57 | poroschenko - selensky - selenski - selenskij - ukrainische | 115 | 57_poroschenko_selensky_selenski_selenskij |
| 58 | gegenöffentlichkeit - protestintervalle - massenproteste - gelbwestenproteste - protestbewegung | 115 | 58_gegenöffentlichkeit_protestintervalle_massenproteste_gelbwestenproteste |
| 59 | klimawandelleugnern - klimakrise - klimawandels - klimanotstand - klimabewussteren | 114 | 59_klimawandelleugnern_klimakrise_klimawandels_klimanotstand |
| 60 | überwachungsballon - spionageballons - luftballons - spionageballon - höhenballons | 108 | 60_überwachungsballon_spionageballons_luftballons_spionageballon |
| 61 | podcast - soundcloud - musica - album - kanalmitgliedschaft | 108 | 61_podcast_soundcloud_musica_album |
| 62 | starjournalisten - enthüllungsjournalisten - hersh - vietnamkriegs - investigativjournalisten | 107 | 62_starjournalisten_enthüllungsjournalisten_hersh_vietnamkriegs |
| 63 | panzerlieferungen - panzerlieferung - panzerkoalition - panzerbestellungen - panzers | 105 | 63_panzerlieferungen_panzerlieferung_panzerkoalition_panzerbestellungen |
| 64 | weltgesundheitsrat - gesundheitsrechte - weltgesundheitsorganisation - gesundheitspolitik - weltregierung | 102 | 64_weltgesundheitsrat_gesundheitsrechte_weltgesundheitsorganisation_gesundheitspolitik |
| 65 | kabelschadenstromausfall - stadtwerke - erwischtblackout - ludwigsburg - kornwestheim | 100 | 65_kabelschadenstromausfall_stadtwerke_erwischtblackout_ludwigsburg |
| 66 | impfpflichtgesetz - infektionsschutzgesetzes - kommission - betretungsverbot - impftermin | 97 | 66_impfpflichtgesetz_infektionsschutzgesetzes_kommission_betretungsverbot |
| 67 | frachtflughafen - airports - airlines - flughafen - airport | 96 | 67_frachtflughafen_airports_airlines_flughafen |
| 68 | türkei - türkischem - türkisen - türke - turkey | 94 | 68_türkei_türkischem_türkisen_türke |
| 69 | agendaaggressiv - existenzvernichtung - volksfeinde - freihheitsentziehenden - ungerechtigkeit | 94 | 69_agendaaggressiv_existenzvernichtung_volksfeinde_freihheitsentziehenden |
| 70 | maxwells - maxwell - missbrauchsskandal - gerichtsdokumenten - sexualstraftäters | 94 | 70_maxwells_maxwell_missbrauchsskandal_gerichtsdokumenten |
| 71 | ersatzreligion - religiösem - religiöse - religionskrieges - theologie | 93 | 71_ersatzreligion_religiösem_religiöse_religionskrieges |
| 72 | datum - uhrzeit - jahresrückblick - 12h - 20uhr | 90 | 72_datum_uhrzeit_jahresrückblick_12h |
| 73 | hochansteckend - omikronvirus - coronavirus - virusexistenzfrage - epidemiologen | 89 | 73_hochansteckend_omikronvirus_coronavirus_virusexistenzfrage |
| 74 | ukrainekriegstellvertreterkrieg - ukrainekrieg - ukrainischerkrieg - krieginderukraine - wahrheitüberukraine | 87 | 74_ukrainekriegstellvertreterkrieg_ukrainekrieg_ukrainischerkrieg_krieginderukraine |
| 75 | gentechnik - genforschung - gentechnisch - gentherapie - genetisch | 87 | 75_gentechnik_genforschung_gentechnisch_gentherapie |
| 76 | friedensnobelpreis - volksbildungsministerin - flüchtlingspolitik - demokratieforscher - eurasia | 86 | 76_friedensnobelpreis_volksbildungsministerin_flüchtlingspolitik_demokratieforscher |
| 77 | q74you - kommst - gesara - 5zusammengestellte - verstehen | 84 | 77_q74you_kommst_gesara_5zusammengestellte |
| 78 | impffreiheit - impfpolitik - protestzug - protesten - proteste | 83 | 78_impffreiheit_impfpolitik_protestzug_protesten |
| 79 | schlafstörung - schlafstörungen - schlafverhalten - schlafprobleme - durchschlafen | 83 | 79_schlafstörung_schlafstörungen_schlafverhalten_schlafprobleme |
| 80 | trumps - trump - trumpf - president - demokrat | 83 | 80_trumps_trump_trumpf_president |
| 81 | außerirdischen - außerirdisches - außerirdische - aliens - ufo | 82 | 81_außerirdischen_außerirdisches_außerirdische_aliens |
| 82 | tschernobyl - nuklearanlagen - chernobyl - kernkraftwerks - kernkraftwerk | 81 | 82_tschernobyl_nuklearanlagen_chernobyl_kernkraftwerks |
| 83 | russ - finanzsanktionen - eurosystem - kryptowährungsbörsen - finanzmärkten | 81 | 83_russ_finanzsanktionen_eurosystem_kryptowährungsbörsen |
| 84 | mobilfunkkritiker - erbinformationen - telekom - gefährlichfür - mobilfunkes | 80 | 84_mobilfunkkritiker_erbinformationen_telekom_gefährlichfür |
| 85 | klimaschützer - klimaschützern - klimaterroristen - klimaaktivisten - klimabewegung | 79 | 85_klimaschützer_klimaschützern_klimaterroristen_klimaaktivisten |
| 86 | wasserversorgung - blackout - krisenfalls - wasserbeutel - katastrophenfällewas | 79 | 86_wasserversorgung_blackout_krisenfalls_wasserbeutel |
| 87 | redebeitrag - zugehörigkeits - nachmeldungen - tonbeiträge - gestatten | 78 | 87_redebeitrag_zugehörigkeits_nachmeldungen_tonbeiträge |
| 88 | kryptobörse - kryptowährungen - kryptowährung - bitcoin - kryptomarkt | 77 | 88_kryptobörse_kryptowährungen_kryptowährung_bitcoin |
| 89 | grundsteuerreform - steuerreform - grundsteuerwertbescheid - umsatzsteuer - erbschaftssteuer | 77 | 89_grundsteuerreform_steuerreform_grundsteuerwertbescheid_umsatzsteuer |
| 90 | chinesen - chinas - china - zhejiang - chinesische | 77 | 90_chinesen_chinas_china_zhejiang |
| 91 | earthquakes - earthquake - seismische - seismischen - tectonic | 77 | 91_earthquakes_earthquake_seismische_seismischen |
| 92 | vulkanausbruchs - vulkanausbruch - vulkaneruption - vulkanasche - vulkans | 76 | 92_vulkanausbruchs_vulkanausbruch_vulkaneruption_vulkanasche |
| 93 | unverändertdeutschland - deutschlandmuss - deutschem - eroberungskriege - antifaschismus | 75 | 93_unverändertdeutschland_deutschlandmuss_deutschem_eroberungskriege |
| 94 | angstlevel - angststörung - angst - fürchten - untertanenmentalität | 75 | 94_angstlevel_angststörung_angst_fürchten |
| 95 | petroleumheizung - umweltfreundlich - umweltfreundlicher - meereskraftwerke - wärmepumpen | 74 | 95_petroleumheizung_umweltfreundlich_umweltfreundlicher_meereskraftwerke |
| 96 | freiheitsdemo - europeforfreedom - protestmedia - austriaci - austriaco | 74 | 96_freiheitsdemo_europeforfreedom_protestmedia_austriaci |
| 97 | unterrichtsausfall - schulleitung - schulschließungen - schulexperten - lehrkräftenbayerischer | 74 | 97_unterrichtsausfall_schulleitung_schulschließungen_schulexperten |
| 98 | bundesverfassungsgericht - verfassungsgerichtshof - verfassungswidrig - gerichtsterminen - versammlung | 74 | 98_bundesverfassungsgericht_verfassungsgerichtshof_verfassungswidrig_gerichtsterminen |
| 99 | extremismus - rechtsextremismus - linksextremismus - extremisten - extremistischer | 74 | 99_extremismus_rechtsextremismus_linksextremismus_extremisten |
| 100 | europas - krieges - gruß - partioten - mein | 74 | 100_europas_krieges_gruß_partioten |
| 101 | untersuchungsausschuss - staatskanzlei - grundgesetzfeindlichen - konditionierungsmaßnahmen - generalstabschef | 72 | 101_untersuchungsausschuss_staatskanzlei_grundgesetzfeindlichen_konditionierungsmaßnahmen |
| 102 | germany - reviews - nino - audience - supersoldier | 72 | 102_germany_reviews_nino_audience |
| 103 | paypal - netbank - kontoinhaber - bankverbindung - crowdfinanziert | 72 | 103_paypal_netbank_kontoinhaber_bankverbindung |
| 104 | impfstoffe - impfherstellern - impfstoff - impftechnologie - gentherapeutika | 71 | 104_impfstoffe_impfherstellern_impfstoff_impftechnologie |
| 105 | dauerfrost - klimaerwärmung - erwärmung - temperaturrekord - temperaturen | 71 | 105_dauerfrost_klimaerwärmung_erwärmung_temperaturrekord |
| 106 | homophob - lgbtqi - homosexuelle - transphobe - geschlechtsumwandlung | 71 | 106_homophob_lgbtqi_homosexuelle_transphobe |
| 107 | youtuber - youtubern - youtube - unterhaltungsvideos - weihnachtsvideo | 71 | 107_youtuber_youtubern_youtube_unterhaltungsvideos |
| 108 | crailsheim - freiburg - teilnehmerzahl - boizenburg - oberbergischen | 70 | 108_crailsheim_freiburg_teilnehmerzahl_boizenburg |
| 109 | genderwahnsinn - genderismus - gendermainstreeming - genderforschung - geschlechterungerechtigkeit | 70 | 109_genderwahnsinn_genderismus_gendermainstreeming_genderforschung |
| 110 | erdgasversorgungskrise - energieversorgungwirtschafts - erdgasknappheit - energieversorgung - kohlekraftwerke | 70 | 110_erdgasversorgungskrise_energieversorgungwirtschafts_erdgasknappheit_energieversorgung |
| 111 | österreicherinnen - mfgösterreich - österreich____________________________ - freiheitsbegriff - demokratie | 70 | 111_österreicherinnen_mfgösterreich_österreich_____________________________freiheitsbegriff |
| 112 | globalisten - kriegstreiber - parteinahme - kriegskanzler - rechtgläubigkeitselite | 69 | 112_globalisten_kriegstreiber_parteinahme_kriegskanzler |
| 113 | sichselbstbewusstes - geistiges - egobedingte - bewusst - selbstzweifel | 69 | 113_sichselbstbewusstes_geistiges_egobedingte_bewusst |
| 114 | kremlin - wladimir - israel - israels - israelische | 69 | 114_kremlin_wladimir_israel_israels |
| 115 | veranstaltung - deutschsprachigen - besuchern - besucher - invito | 69 | 115_veranstaltung_deutschsprachigen_besuchern_besucher |
| 116 | kirchenplatz - deutschlandsberg - volksfestplatz - karmeliterplatz - fürstplatz | 69 | 116_kirchenplatz_deutschlandsberg_volksfestplatz_karmeliterplatz |
| 117 | deutschordensrittern - norddeutschland - deutschlang - weströmischen - führers | 69 | 117_deutschordensrittern_norddeutschland_deutschlang_weströmischen |
| 118 | möchtegernmitläuferärzte - medizinscher - allgemeinarzt - vizeparteiobmann - facharzt | 68 | 118_möchtegernmitläuferärzte_medizinscher_allgemeinarzt_vizeparteiobmann |
| 119 | medienanalyseteam - medienschau - medienarbeit - medienhaus - journalistisches | 68 | 119_medienanalyseteam_medienschau_medienarbeit_medienhaus |
| 120 | crypto - bank - uapalt22 - kontonummer - 0700 | 68 | 120_crypto_bank_uapalt22_kontonummer |
| 121 | jane - rachel - treatment - worm - doctors | 68 | 121_jane_rachel_treatment_worm |
| 122 | satanistischen - satanists - satanistenknie - satanisten - satanischem | 67 | 122_satanistischen_satanists_satanistenknie_satanisten |
| 123 | diktaturi - cbdc - kontrolom - kontrol - controllo | 67 | 123_diktaturi_cbdc_kontrolom_kontrol |
| 124 | stromversorgung - effiziente - begrenzter - geschützt - stromerzeugung | 67 | 124_stromversorgung_effiziente_begrenzter_geschützt |
| 125 | melden - spendenkonto - krisenvorsorgewas - zivilgesellschaft - 2011 | 67 | 125_melden_spendenkonto_krisenvorsorgewas_zivilgesellschaft |
| 126 | österreichs - neutralitätsbruch - österrussische - neutralitätsgesetz - neutralitätsgesetzes | 67 | 126_österreichs_neutralitätsbruch_österrussische_neutralitätsgesetz |
| 127 | censorship - zensuriert - zensurmethode - zensurgelöscht - censor | 67 | 127_censorship_zensuriert_zensurmethode_zensurgelöscht |
| 128 | naturalnews - freedom - globalists - org - health | 66 | 128_naturalnews_freedom_globalists_org |
| 129 | westukrainer - ukrainer - rechtsnationalistischen - terrornazis - neonazistische | 66 | 129_westukrainer_ukrainer_rechtsnationalistischen_terrornazis |
| 130 | eklatanter - falschinformation - hinterkopf - werbemaßnahme - einschüchterndes | 65 | 130_eklatanter_falschinformation_hinterkopf_werbemaßnahme |
| 131 | hildegard - hildegards - maitrunk - müdigkeit - zensieren | 65 | 131_hildegard_hildegards_maitrunk_müdigkeit |
| 132 | naziaufmärschen - nazi - antifaschistische - ökosozialismus - faschismus | 65 | 132_naziaufmärschen_nazi_antifaschistische_ökosozialismus |
| 133 | produkt - meistverkaufte - silberionen - bakterien - inhaltsstoffe | 65 | 133_produkt_meistverkaufte_silberionen_bakterien |
| 134 | maskenpflicht - maskentragen - testpflicht - schultestungen - masken | 65 | 134_maskenpflicht_maskentragen_testpflicht_schultestungen |
| 135 | weltgeschehen - meißen - bystron - schwandorf - mehlis | 64 | 135_weltgeschehen_meißen_bystron_schwandorf |
| 136 | linksterroristen - angreifern - ungarische - ungarischen - festgenommenen | 64 | 136_linksterroristen_angreifern_ungarische_ungarischen |
| 137 | friedensbewegung - friedliche - friedlichen - humanitären - parteiunabhängig | 64 | 137_friedensbewegung_friedliche_friedlichen_humanitären |
| 138 | bankiersfamilie - rockefellers - vermögensverwaltung - familiendynastien - finanzmärkte | 64 | 138_bankiersfamilie_rockefellers_vermögensverwaltung_familiendynastien |
| 139 | korruptionsstaatsanwaltschaft - korruptionssümpfe - festgenommen - festnehmen - festnahmewelle | 63 | 139_korruptionsstaatsanwaltschaft_korruptionssümpfe_festgenommen_festnehmen |
| 140 | app - android - smartphone - apple - apk | 63 | 140_app_android_smartphone_apple |
| 141 | goldgeld - goldpreis - goldmünze - goldstandard - goldstandards | 63 | 141_goldgeld_goldpreis_goldmünze_goldstandard |
| 142 | festspielhaus - freiheitstynchler - demoverbot - wienerin - freieuni | 63 | 142_festspielhaus_freiheitstynchler_demoverbot_wienerin |
| 143 | militärverräter - attentat - kriegshandlung - attentats - infiltrierte | 63 | 143_militärverräter_attentat_kriegshandlung_attentats |
| 144 | depressionskrankheiten - depressionssymptomen - depressionssymptome - angstzustände - angststörungen | 62 | 144_depressionskrankheiten_depressionssymptomen_depressionssymptome_angstzustände |
| 145 | kinderingefahr - kinderimpfzentrum - kinderfreundliche - freiheitstrychlern - deutschlandquaeltkinder | 62 | 145_kinderingefahr_kinderimpfzentrum_kinderfreundliche_freiheitstrychlern |
| 146 | landeswahlleiter - ausschussvorsitzenden - rechtsausschuss - innenausschusses - parteiverlag | 62 | 146_landeswahlleiter_ausschussvorsitzenden_rechtsausschuss_innenausschusses |
| 147 | sterbefallzahlen - säuglingssterblichkeit - todesfälle - sterblichkeiten - übersterblichkeit | 62 | 147_sterbefallzahlen_säuglingssterblichkeit_todesfälle_sterblichkeiten |
| 148 | bombardierung - 1945 - bomberverbände - bombardement - bombenangriff | 61 | 148_bombardierung_1945_bomberverbände_bombardement |
| 149 | terroranschlag - staatsterrorismus - terrorakts - auslandsgeheimdienstes - anschlägen | 61 | 149_terroranschlag_staatsterrorismus_terrorakts_auslandsgeheimdienstes |
| 150 | gestorben - stirbt - ehemaliger - ertürk - tödlicher | 61 | 150_gestorben_stirbt_ehemaliger_ertürk |
| 151 | viraler - virusprobe - virussekret - virus - virusaufreinigung | 60 | 151_viraler_virusprobe_virussekret_virus |
| 152 | impfstoffkunde - impfkampagne - impfstoffe - impfzentren - iimpfstoffs | 60 | 152_impfstoffkunde_impfkampagne_impfstoffe_impfzentren |
| 153 | enthüllungsjournalist - magazins - geschichtsbüchern - historikerin - illustrierten | 60 | 153_enthüllungsjournalist_magazins_geschichtsbüchern_historikerin |
| 154 | liberal - demokratischsten - parteipolitisch - politestablishment - deutschem | 60 | 154_liberal_demokratischsten_parteipolitisch_politestablishment |
| 155 | gutenmorgen - morgen - saisonstart - wochenstart - woche | 60 | 155_gutenmorgen_morgen_saisonstart_wochenstart |
| 156 | aufweckprogramm - redefreiheit - zugreifen - erwachenbefreiung - zuzulassen | 59 | 156_aufweckprogramm_redefreiheit_zugreifen_erwachenbefreiung |
| 157 | mainstream - coronakritikern - publikation - narrative - nebenbemerkung | 59 | 157_mainstream_coronakritikern_publikation_narrative |
| 158 | abonnenten - alternativkanal - abonniert - host - geradelivestream | 59 | 158_abonnenten_alternativkanal_abonniert_host |
| 159 | tagesaktuellen - journalistenwatch - show - neuigkeiten - journalistensöhne | 59 | 159_tagesaktuellen_journalistenwatch_show_neuigkeiten |
| 160 | ministerwechsel - gesundheitsminister - gesundheitsministers - pandemieminister - regierungskreise | 59 | 160_ministerwechsel_gesundheitsminister_gesundheitsministers_pandemieminister |
| 161 | sowjetreich - russlandvon - russlandaußenministerium - bolschewismus - rjabkow | 59 | 161_sowjetreich_russlandvon_russlandaußenministerium_bolschewismus |
| 162 | kontroverser - präsidentschaftswahlkampfes - desinformation - geheimdokumentefbi - enthüllungsbericht | 58 | 162_kontroverser_präsidentschaftswahlkampfes_desinformation_geheimdokumentefbi |
| 163 | grünbewegung - freiheitlichen - grüninnen - parteiliche - legitimiert | 58 | 163_grünbewegung_freiheitlichen_grüninnen_parteiliche |
| 164 | wassermangel - wasserversorgung - wasserförderung - wasserschutzgebiet - wasservertrags | 57 | 164_wassermangel_wasserversorgung_wasserförderung_wasserschutzgebiet |
| 165 | kampfflugzeuge - luftwaffenstützpunkt - jagdflugzeugen - sowjetischer - geopolitischen | 57 | 165_kampfflugzeuge_luftwaffenstützpunkt_jagdflugzeugen_sowjetischer |
| 166 | energiewirtschaftlichen - energieproduzenten - energieträgerder - energieträgers - energieträger | 57 | 166_energiewirtschaftlichen_energieproduzenten_energieträgerder_energieträgers |
| 167 | powerstation - elektronik - elektronischen - geräte - dynamo | 56 | 167_powerstation_elektronik_elektronischen_geräte |
| 168 | kapitalismus - korrumpierung - plünderung - marktwirtschaft - armut | 56 | 168_kapitalismus_korrumpierung_plünderung_marktwirtschaft |
| 169 | währungen - währungsumstellung - currency - währung - fremdwährungen | 56 | 169_währungen_währungsumstellung_currency_währung |
| 170 | mobiltelefonen - mobiltelefon - akku - smartphone - powerbank | 56 | 170_mobiltelefonen_mobiltelefon_akku_smartphone |
| 171 | verwaltungsgerichtshofs - verwaltungsgerichtshof - rechtsprechung - legalitätsgebot - zugangsregeln | 56 | 171_verwaltungsgerichtshofs_verwaltungsgerichtshof_rechtsprechung_legalitätsgebot |
| 172 | inflationstendenzen - inflationsraten - inflationszahlen - inflationsrate - inflationsdaten | 56 | 172_inflationstendenzen_inflationsraten_inflationszahlen_inflationsrate |
| 173 | kinderlosen - kinderopfer - kindermörder - unbegreiflichen - entwicklungsschädliche | 56 | 173_kinderlosen_kinderopfer_kindermörder_unbegreiflichen |
| 174 | pine - wellness - pollen10 - pollen - enhancing | 56 | 174_pine_wellness_pollen10_pollen |
| 175 | elektrische - elektronik - effiziente - kühlschrank - stromversorgung | 55 | 175_elektrische_elektronik_effiziente_kühlschrank |
| 176 | australia - australische - australier - australischen - australien | 55 | 176_australia_australische_australier_australischen |
| 177 | wissenschaftsfeindliche - wissenschaftsfeindlicher - wissenschaftsfeindlichkeit - wissenschaftsfreiheit - kritikerketzern | 55 | 177_wissenschaftsfeindliche_wissenschaftsfeindlicher_wissenschaftsfeindlichkeit_wissenschaftsfreiheit |
| 178 | niederösterreichs - niederösterreichischen - oberösterreich - festungösterreich - verstören | 55 | 178_niederösterreichs_niederösterreichischen_oberösterreich_festungösterreich |
| 179 | oberverwaltungsgericht - oberverwaltungsgerichts - oberlandesgerichts - verwaltungsgericht - 2g | 55 | 179_oberverwaltungsgericht_oberverwaltungsgerichts_oberlandesgerichts_verwaltungsgericht |
| 180 | bücher - buch - wissens - erniedrigender - heraklit | 54 | 180_bücher_buch_wissens_erniedrigender |
| 181 | audionews - podcast - dokumentarfilm - philharmoniker - mikrofone | 54 | 181_audionews_podcast_dokumentarfilm_philharmoniker |
| 182 | bibel - himmel - bibelstelle - srce - hindurch | 54 | 182_bibel_himmel_bibelstelle_srce |
| 183 | europarechtswidrig - menschenrechtskonvention - europarates - europarat - souveränitätsrechte | 54 | 183_europarechtswidrig_menschenrechtskonvention_europarates_europarat |
| 184 | machtkriegsfolgen - wohlstandsverlusten - wohlstandsverlust - verknappung - handelsflüsse | 54 | 184_machtkriegsfolgen_wohlstandsverlusten_wohlstandsverlust_verknappung |
| 185 | 0xf39bdfb41f639b82e3d2bf022828bc6394f533a3 - code - 3jvdnoywmb93hsrgk58zstuxg11pw9mksr - checkout - addr1v94ayqu53uklgqnn6c4x4 | 53 | 185_0xf39bdfb41f639b82e3d2bf022828bc6394f533a3_code_3jvdnoywmb93hsrgk58zstuxg11pw9mksr_checkout |
| 186 | mitdenkenfolge - mitdenken - nachdenken - vordenken - gedankengängen | 53 | 186_mitdenkenfolge_mitdenken_nachdenken_vordenken |
| 187 | existenzberechtigung - catholicism - kardinal - priest - catholics | 53 | 187_existenzberechtigung_catholicism_kardinal_priest |
| 188 | millionenhilfe - militärhilfen - militärhilfepaket - waffenlieferungsgelder - militärhilfe | 52 | 188_millionenhilfe_militärhilfen_militärhilfepaket_waffenlieferungsgelder |
| 189 | naumburg - 2023 - 2023folgt - 06 - 02 | 52 | 189_naumburg_2023_2023folgt_06 |
| 190 | deutschlandtrend - impfpflichtfetischist - süddeutschland - lauterbach - schwurblerbach | 52 | 190_deutschlandtrend_impfpflichtfetischist_süddeutschland_lauterbach |
| 191 | nylonscheide - bruchsichere - bruchsicher - schützender - unverwüstlichem | 52 | 191_nylonscheide_bruchsichere_bruchsicher_schützender |
| 192 | impfpflichtgesetzes - impfverweigerern - ansteckungsgefährdeten - impferfahrung - infektionszahlen | 52 | 192_impfpflichtgesetzes_impfverweigerern_ansteckungsgefährdeten_impferfahrung |
| 193 | müllerstraße - ostdeutschen - vielgeprüftes - huß - kreisobmann | 52 | 193_müllerstraße_ostdeutschen_vielgeprüftes_huß |
| 194 | migrationshintergrundb - österreichdie - migrantenschlepperei - migrationsdebatte - migrationswaffe | 52 | 194_migrationshintergrundb_österreichdie_migrantenschlepperei_migrationsdebatte |
| 195 | elektromobilität - elektroautos - elektrofahrzeugs - elektroauto - kraftfahrzeughersteller | 52 | 195_elektromobilität_elektroautos_elektrofahrzeugs_elektroauto |
| 196 | impfbedenken - impfzurückhaltung - immunität - wahrscheinlichkeit - infektion | 51 | 196_impfbedenken_impfzurückhaltung_immunität_wahrscheinlichkeit |
| 197 | europas - krieges - gruß - partioten - mein | 51 | 197_europas_krieges_gruß_partioten |
| 198 | islamisten - islamistischen - dschihadisten - terrorgruppe - islams | 51 | 198_islamisten_islamistischen_dschihadisten_terrorgruppe |
| 199 | orf - haushaltsabgabe - gebührensenders - zwangsgebühren - reformvorschlag | 51 | 199_orf_haushaltsabgabe_gebührensenders_zwangsgebühren |
| 200 | düsseldorf - frankfurt - hamburg - münchen - augsburg | 50 | 200_düsseldorf_frankfurt_hamburg_münchen |
| 201 | zensurfreien - socials - internetseiten - verbotsdrohungen - wikihausen | 50 | 201_zensurfreien_socials_internetseiten_verbotsdrohungen |
| 202 | presseschau - podcast - aktuelle - dir - globalen | 50 | 202_presseschau_podcast_aktuelle_dir |
| 203 | brandanschlag - flammen - katastrophenfilm - giftige - eisenbahnanlagen | 50 | 203_brandanschlag_flammen_katastrophenfilm_giftige |
| 204 | antarktischen - antarctica - antarktis - expedition - expeditions | 50 | 204_antarktischen_antarctica_antarktis_expedition |
| 205 | handschuhe - knöchelschutz - handgelenk - fingerknöchel - handfläche | 50 | 205_handschuhe_knöchelschutz_handgelenk_fingerknöchel |
| 206 | tornadoschäden - tornados - tornado - katastrophengebiet - unwetterkatastrophe | 49 | 206_tornadoschäden_tornados_tornado_katastrophengebiet |
| 207 | foodwatch - googles - nahrungsergänzungsmittel - google - rohköstlichkeiten | 49 | 207_foodwatch_googles_nahrungsergänzungsmittel_google |
| 208 | naturheilmediziner - heilpraktiker - biomedicine - biomedizin - pharmaindustrie | 49 | 208_naturheilmediziner_heilpraktiker_biomedicine_biomedizin |
| 209 | putinfreunde - panzerkoalition - russsischen - russlandfreunde - kriegsgegner | 49 | 209_putinfreunde_panzerkoalition_russsischen_russlandfreunde |
| 210 | com - webshop - grosz - blogger - geraldgrosz | 49 | 210_com_webshop_grosz_blogger |
| 211 | demonstration - mobilisation - freiheitistnichtverhandelbar - versammelten - revolt | 48 | 211_demonstration_mobilisation_freiheitistnichtverhandelbar_versammelten |
| 212 | orf - beschwerde - verharmlosung - nebenwirkungen - gefährliche | 48 | 212_orf_beschwerde_verharmlosung_nebenwirkungen |
| 213 | german - deutsch - preußen - kanalmitglied - gruß | 48 | 213_german_deutsch_preußen_kanalmitglied |
| 214 | convoi - convoy - convoys - konvoi - truckertausende | 48 | 214_convoi_convoy_convoys_konvoi |
| 215 | verschwörungstheoriees - verschwörungstheoretikerin - verschwörungstheoretiker - verschwörungstheorien - szientizismus | 48 | 215_verschwörungstheoriees_verschwörungstheoretikerin_verschwörungstheoretiker_verschwörungstheorien |
| 216 | pcr - testregimes - testergebnischristoph - diagnose - diagnostiziert | 48 | 216_pcr_testregimes_testergebnischristoph_diagnose |
| 217 | gegendemonstration - plakat - demo - veranstaltungen - hochladen | 48 | 217_gegendemonstration_plakat_demo_veranstaltungen |
| 218 | ukrainegeschäftejoebiden - überparteilich - ukrainegeschäftehunterbiden - menschenrechtspolitischer - schmitz | 47 | 218_ukrainegeschäftejoebiden_überparteilich_ukrainegeschäftehunterbiden_menschenrechtspolitischer |
| 219 | hochwertigen - wärmt - geschützt - augen - robuste | 47 | 219_hochwertigen_wärmt_geschützt_augen |
| 220 | box - komplettpaket - grillpfanne - frischen - kuchen | 47 | 220_box_komplettpaket_grillpfanne_frischen |
| 221 | impfzwangsverordnung - österreichs - auslandsösterreicher - impffrage - österreicherinnen | 47 | 221_impfzwangsverordnung_österreichs_auslandsösterreicher_impffrage |
| 222 | kontinentaleuropa - europäischem - europäische - westeuropäische - kontinentaleuropäische | 47 | 222_kontinentaleuropa_europäischem_europäische_westeuropäische |
| 223 | bundesinnenministerin - teilzeitministerin - oppositionsführerin - ministerpräsidentin - ministerpräsidentenamt | 47 | 223_bundesinnenministerin_teilzeitministerin_oppositionsführerin_ministerpräsidentin |
| 224 | frachtschiffe - containerschiffen - containerschiffe - handelsschiffe - schiffscontainer | 46 | 224_frachtschiffe_containerschiffen_containerschiffe_handelsschiffe |
| 225 | corona_impfung_final - höchstleistungen - schwerwiegender - regelbruch - ausgestrahlten | 46 | 225_corona_impfung_final_höchstleistungen_schwerwiegender_regelbruch |
| 226 | sahara - marokko - magnetischen - magnetisch - magnetit | 46 | 226_sahara_marokko_magnetischen_magnetisch |
| 227 | oberösterreich - amsterdam - rathausplatz - ringstraßenumzug - heldenplatz | 46 | 227_oberösterreich_amsterdam_rathausplatz_ringstraßenumzug |
| 228 | düsseldorf - münchen2212 - frankfurt - bayerischen - hamburg | 46 | 228_düsseldorf_münchen2212_frankfurt_bayerischen |
| 229 | melindas - melinda - scheidungsanwälte - gates - extraterrestrial | 46 | 229_melindas_melinda_scheidungsanwälte_gates |
| 230 | düngemittelpreise - preissteigerungen - erzeugerpreisen - erzeugerpreise - agrarpreise | 46 | 230_düngemittelpreise_preissteigerungen_erzeugerpreisen_erzeugerpreise |
| 231 | naturkatastrophenjahre - luftfeuchtigkeit - thermik - naturkatastrophen - trockenheit | 46 | 231_naturkatastrophenjahre_luftfeuchtigkeit_thermik_naturkatastrophen |
| 232 | impfsicherheit - krankheitsverläufen - immunantwort - erkrankung - pseudovirus | 45 | 232_impfsicherheit_krankheitsverläufen_immunantwort_erkrankung |
| 233 | satellitenschüssel - satellitenschüsseln - spacex - satellitensystem - satelliteninternet | 45 | 233_satellitenschüssel_satellitenschüsseln_spacex_satellitensystem |
| 234 | lionmediatelegram - media - tv - lion - 0xf349604cfa2d36e6a7ae3e0c4479f18375d8 | 45 | 234_lionmediatelegram_media_tv_lion |
| 235 | facebooktötet - facebook - russländern - extremismus - extremistische | 45 | 235_facebooktötet_facebook_russländern_extremismus |
| 236 | energiebedarf - energiewende - energiedefizit - energiewirtschaftsgesetzes - energiewender | 45 | 236_energiebedarf_energiewende_energiedefizit_energiewirtschaftsgesetzes |
| 237 | narrative - lichte - gerechtigkeit - gründet - verstauen | 44 | 237_narrative_lichte_gerechtigkeit_gründet |
| 238 | impfpflichtgesetznach - impfpflichtgegner - impfschutz - österreichweite - pandemiebekämpfung | 44 | 238_impfpflichtgesetznach_impfpflichtgegner_impfschutz_österreichweite |
| 239 | goldmarkt - goldkauf - goldexporte - währungssituation - währungsrisiko | 44 | 239_goldmarkt_goldkauf_goldexporte_währungssituation |
| 240 | psychologinbegriffe - psychedelika - psychedelikawarum - psychotiker - psychos | 44 | 240_psychologinbegriffe_psychedelika_psychedelikawarum_psychotiker |
| 241 | demonstranten - organisationsgruppen - versammlung - versammlungen - stattgefunden | 44 | 241_demonstranten_organisationsgruppen_versammlung_versammlungen |
| 242 | fotografiert - fotograf - foto - aufnahmen - fotos | 44 | 242_fotografiert_fotograf_foto_aufnahmen |
| 243 | wasserfilteranlage - wasserfilter - wasseraufbereitungsanlage - trinkwasserversorgung - leitungswasser | 44 | 243_wasserfilteranlage_wasserfilter_wasseraufbereitungsanlage_trinkwasserversorgung |
| 244 | kaffeefilter - kaffeefilterder - filterkaffees - kaffee - wegwerffilter | 44 | 244_kaffeefilter_kaffeefilterder_filterkaffees_kaffee |
| 245 | roboter - cyborgs - kundenstamm - cyborg - innovationen | 43 | 245_roboter_cyborgs_kundenstamm_cyborg |
| 246 | kontaminiert - tschernobyls - tschernobyl - giftigen - hochgiftiges | 43 | 246_kontaminiert_tschernobyls_tschernobyl_giftigen |
| 247 | zensurfreien - politmediale - regierungskritiker - kritischer - bürgerprotest | 43 | 247_zensurfreien_politmediale_regierungskritiker_kritischer |
| 248 | todesfall - sterben - todesermittlungsverfahren - sterbezahlen - todesursache | 43 | 248_todesfall_sterben_todesermittlungsverfahren_sterbezahlen |
| 249 | sehr - ordi - ohoooho - betreff - lager | 43 | 249_sehr_ordi_ohoooho_betreff |
| 250 | investorenregistrierung - mitinvestoren - investoren - investmentspecial - kryptomarkt | 43 | 250_investorenregistrierung_mitinvestoren_investoren_investmentspecial |
| 251 | intensivbettenbelegungen - intensivpatienten - intensivstationsaufenthalt - auslastungsquote - intensivbetten | 43 | 251_intensivbettenbelegungen_intensivpatienten_intensivstationsaufenthalt_auslastungsquote |
| 252 | masseneinwanderung - migrationshintergrund - migrationsfrage - bevölkerungsaustausches - abwanderung | 43 | 252_masseneinwanderung_migrationshintergrund_migrationsfrage_bevölkerungsaustausches |
| 253 | grünfanatische - greenpeace - grüninnen - grünneindanke - klimafanatiker | 43 | 253_grünfanatische_greenpeace_grüninnen_grünneindanke |
| 254 | berlin - berliner - berlinerinnen - münchen2212 - preußische | 43 | 254_berlin_berliner_berlinerinnen_münchen2212 |
| 255 | streikdemos - streik - streikaktionstag - warnstreik - streikpotenzial | 43 | 255_streikdemos_streik_streikaktionstag_warnstreik |
| 256 | maskenverbot - taxifahrer - verhüllungsverbot - taxiverband - vermummungsverbot | 43 | 256_maskenverbot_taxifahrer_verhüllungsverbot_taxiverband |
| 257 | kanadaprotest - kanadiern - kanada - canada - canadian | 42 | 257_kanadaprotest_kanadiern_kanada_canada |
| 258 | net - netzfund - netzwerke - weltgeld - zusendung | 42 | 258_net_netzfund_netzwerke_weltgeld |
| 259 | großproteste - montagsproteste - protestmarsch - demonstrationsumzug - freiheitsmarsch | 42 | 259_großproteste_montagsproteste_protestmarsch_demonstrationsumzug |
| 260 | atomkraftwerken - atomkraftwerke - kernkraftwerke - atompolitik - kernenergie | 42 | 260_atomkraftwerken_atomkraftwerke_kernkraftwerke_atompolitik |
| 261 | dankeschön - dank - danke - bitte - thank | 42 | 261_dankeschön_dank_danke_bitte |
| 262 | impfstoffversorgung - impfpflichten - grundimmunisiert - impflinge - immunitätsnachweises | 42 | 262_impfstoffversorgung_impfpflichten_grundimmunisiert_impflinge |
| 263 | deutschlandkurier - entwarnt - anhör - webseite - lindemanns | 42 | 263_deutschlandkurier_entwarnt_anhör_webseite |
| 264 | hochwasserkatastrophe - katastrophenhilfe - flutkatastrophe - katastrophenfälle - katastrophenfall | 42 | 264_hochwasserkatastrophe_katastrophenhilfe_flutkatastrophe_katastrophenfälle |
| 265 | germany - greetings - patriots - politische - nachrichten | 42 | 265_germany_greetings_patriots_politische |
| 266 | publizieren - autoren - aufgedruckten - zulassungskatastrophe - umgeschrieben | 42 | 266_publizieren_autoren_aufgedruckten_zulassungskatastrophe |
| 267 | racism - rassismus - rassistisch - rassistischer - schwarzpädagogischeres | 42 | 267_racism_rassismus_rassistisch_rassistischer |
| 268 | ukrainerinnen - vergewaltigern - vergewaltigten - ukrainerin - vergewaltigte | 42 | 268_ukrainerinnen_vergewaltigern_vergewaltigten_ukrainerin |
| 269 | ressourcen - versteckte - digitaleraktivist - nutzen - volksschüler | 42 | 269_ressourcen_versteckte_digitaleraktivist_nutzen |
| 270 | 2030 - 2026 - wendejahr - 2017 - populismus | 42 | 270_2030_2026_wendejahr_2017 |
| 271 | spionagegeräten - smartphones - spionage - datenschutzbesorgten - smartwatches | 41 | 271_spionagegeräten_smartphones_spionage_datenschutzbesorgten |
| 272 | pharmaindustrie - apotheken - pharmagroßhandel - medikamente - medizingeschichte | 41 | 272_pharmaindustrie_apotheken_pharmagroßhandel_medikamente |
| 273 | persönlichkeitsentwicklung - bewusstseinsentfaltung - geistigen - psychologie - philosophie | 41 | 273_persönlichkeitsentwicklung_bewusstseinsentfaltung_geistigen_psychologie |
| 274 | weu8zk4uw78km8capd5rjdc06q28j370 - 0xd449694348b1d618eca2829bbc901782f5172689 - addr1v94ayqu53uklgqnn6c4x4weu8zk4uw78km8capd5rjdc06q28j370 - card - exx4kk9pzlx7uilwncxtp7imkjtq6o5b6r | 41 | 274_weu8zk4uw78km8capd5rjdc06q28j370_0xd449694348b1d618eca2829bbc901782f5172689_addr1v94ayqu53uklgqnn6c4x4weu8zk4uw78km8capd5rjdc06q28j370_card |
| 275 | impfpflichtgesetz - impfprävention - impfaufforderung - impfaufklärungsschrift - impfaufklärung | 41 | 275_impfpflichtgesetz_impfprävention_impfaufforderung_impfaufklärungsschrift |
| 276 | soundcloud - audio - music - podcasts - germany | 41 | 276_soundcloud_audio_music_podcasts |
| 277 | zehir - saakashvili - georgian - marionettenpräsidenten - zemmour | 41 | 277_zehir_saakashvili_georgian_marionettenpräsidenten |
| 278 | bestsellerliste - geschcichtsbüchern - bücherbesitzens - buch - literatur | 41 | 278_bestsellerliste_geschcichtsbüchern_bücherbesitzens_buch |
| 279 | liposomale - phospholipid - nährstoffe - produkte - wissenschaftlichem | 41 | 279_liposomale_phospholipid_nährstoffe_produkte |
| 280 | assanges - wikileaks - gefoltert - afghanistankonflikt - menschenrechtstages | 41 | 280_assanges_wikileaks_gefoltert_afghanistankonflikt |
| 281 | digitalwährung - digitaleneuro - digitalzentralbankgeldquote - digitalereuro - digitalgeld | 41 | 281_digitalwährung_digitaleneuro_digitalzentralbankgeldquote_digitalereuro |
| 282 | pizzateig - marmelade - marmeladen - alltagsprodukt - puderzucker | 41 | 282_pizzateig_marmelade_marmeladen_alltagsprodukt |
| 283 | ostukraine - nazi - nazibataillone - nationalsozialistischen - neonazis | 41 | 283_ostukraine_nazi_nazibataillone_nationalsozialistischen |
| 284 | schuldenausfall - schuldenorgie - schuldenlast - staatsschulden - staatsverschuldung | 41 | 284_schuldenausfall_schuldenorgie_schuldenlast_staatsschulden |
| 285 | atomkrieges - atomkriegs - atomkrieg - atomwaffendoktrin - nukleardoktrin | 41 | 285_atomkrieges_atomkriegs_atomkrieg_atomwaffendoktrin |
| 286 | inflationsbekämpfung - inflationsproblem - inflationsraten - inflation - inflationsmessung | 41 | 286_inflationsbekämpfung_inflationsproblem_inflationsraten_inflation |
| 287 | abonnenten - chronikeinerangekündigtenkrise - bestätigungslink - abonnent - expresszeitung | 40 | 287_abonnenten_chronikeinerangekündigtenkrise_bestätigungslink_abonnent |
| 288 | freiberg - weilheim - 06 - 2023 - kaiserslautern | 40 | 288_freiberg_weilheim_06_2023 |
| 289 | blackouts - blackout - katastrophenzustand - stromwarnung - kraftwerksstörung | 40 | 289_blackouts_blackout_katastrophenzustand_stromwarnung |
| 290 | vitaminschonende - früchte - konservieren - kräutersalz - köstliche | 40 | 290_vitaminschonende_früchte_konservieren_kräutersalz |
| 291 | korrupten - systems - system - bekämpfen - gegner | 40 | 291_korrupten_systems_system_bekämpfen |
| 292 | parteiausschussverfahren - parteivorsitzende - parteipräsidiums - parteimitglied - bundesverfassungsschutzchef | 40 | 292_parteiausschussverfahren_parteivorsitzende_parteipräsidiums_parteimitglied |
| 293 | schreckenspandemie - epidemiologie - epidemiologe - faktenfuchs - katastrophenvokabulars | 40 | 293_schreckenspandemie_epidemiologie_epidemiologe_faktenfuchs |
| 294 | viertelfinal - winner - norweger - triumphierte - rennläuferinnen | 40 | 294_viertelfinal_winner_norweger_triumphierte |
| 295 | kaffee - kochmöglichkeit - suppen - zubereitet - zuzubereiten | 40 | 295_kaffee_kochmöglichkeit_suppen_zubereitet |
| 296 | ungarische - magyarország - budapest - ungarischen - nordmazedonien | 39 | 296_ungarische_magyarország_budapest_ungarischen |
| 297 | hochansteckend - infektiöser - infektionen - omicronupdate - omicron | 39 | 297_hochansteckend_infektiöser_infektionen_omicronupdate |
| 298 | sanktionieren - zensurmaßnahmen - zensurgesetze - propagandamaschine - bestrafen | 39 | 298_sanktionieren_zensurmaßnahmen_zensurgesetze_propagandamaschine |
| 299 | flutkatastrophe - jahrhundertkatastrophe - katastrophe - stillstandskrise - politikversagen | 39 | 299_flutkatastrophe_jahrhundertkatastrophe_katastrophe_stillstandskrise |
| 300 | immunmodelle - immunreaktionen - virales - antikörper - vektorimpfstoffen | 39 | 300_immunmodelle_immunreaktionen_virales_antikörper |
| 301 | staatssender - kabelnetz - medienaufsicht - sendebetriebs - medienprojekt | 39 | 301_staatssender_kabelnetz_medienaufsicht_sendebetriebs |
| 302 | supermärkten - einkaufsallianz - markenherstellern - produktdiskussionen - marktbetreiber | 39 | 302_supermärkten_einkaufsallianz_markenherstellern_produktdiskussionen |
| 303 | freunde - angenehmen - gerne - liebe - bringe | 38 | 303_freunde_angenehmen_gerne_liebe |
| 304 | disinformation - atrocity - cosmic - divisiveness - satanic | 38 | 304_disinformation_atrocity_cosmic_divisiveness |
| 305 | lauterbach - lauterbachs - lauterbachkurze - wackenberg - chefpropagandist | 38 | 305_lauterbach_lauterbachs_lauterbachkurze_wackenberg |
| 306 | europas - spendenseite - bornjakow - benefizgala - monatsabo | 38 | 306_europas_spendenseite_bornjakow_benefizgala |
| 307 | brasilien - brasiliens - brasilianische - familien - schwangerschaftsvorsorge | 38 | 307_brasilien_brasiliens_brasilianische_familien |
| 308 | kanzlerkandidatin - justizministerinnennicht - verteidigungsministerin - justizministerien - abteilungsleitern | 38 | 308_kanzlerkandidatin_justizministerinnennicht_verteidigungsministerin_justizministerien |
| 309 | echtzeitüberwachung - überwachungsmöglichkeiten - totalüberwachungs - abhörsichere - datenschutzbehörde | 38 | 309_echtzeitüberwachung_überwachungsmöglichkeiten_totalüberwachungs_abhörsichere |
| 310 | arztberuf - ärztekammerpräsidenten - mediziner - schulärzte - ärztekammerpräsident | 38 | 310_arztberuf_ärztekammerpräsidenten_mediziner_schulärzte |
| 311 | demonstrationsteilnehmern - versammlung - versammlungen - organisator - versammlungsleitung | 38 | 311_demonstrationsteilnehmern_versammlung_versammlungen_organisator |
| 312 | russlandkrise - russlandrussisches - russlandkreml - präsidentschaftsdebatte - bidenleaks | 38 | 312_russlandkrise_russlandrussisches_russlandkreml_präsidentschaftsdebatte |
| 313 | youtube - mediathek - live - österreich - twitch | 38 | 313_youtube_mediathek_live_österreich |
| 314 | erklärungen - offenbarungen - schöpfung - wahrheit - naturgesetze | 37 | 314_erklärungen_offenbarungen_schöpfung_wahrheit |
| 315 | serbien - serbia - serben - serbiens - serbische | 37 | 315_serbien_serbia_serben_serbiens |
| 316 | wzug - zusammengegencorona - εμεrgεncυ - münchenstehtauf - truckersforfreedom2022 | 37 | 316_wzug_zusammengegencorona_εμεrgεncυ_münchenstehtauf |
| 317 | krisenfälle - flutkatastrophen - katastrophenfall - katastrophen - krisenzeiten | 37 | 317_krisenfälle_flutkatastrophen_katastrophenfall_katastrophen |
| 318 | youtube - videos - youtubea - kanalinfo - podcast | 37 | 318_youtube_videos_youtubea_kanalinfo |
| 319 | wittenbergplatz - johannesplatz - steirischer - hartberg - rochusplatz | 37 | 319_wittenbergplatz_johannesplatz_steirischer_hartberg |
| 320 | frankfurt - österreichischen - protestmarsch - demonstranten - bayrische | 37 | 320_frankfurt_österreichischen_protestmarsch_demonstranten |
| 321 | benzinpreis - benzinpreise - dieselkraftstoff - diesel - benzinverkaufspreises | 37 | 321_benzinpreis_benzinpreise_dieselkraftstoff_diesel |
| 322 | faschismus - faschistisches - faschistischen - faschistisch - volksfeindlichen | 37 | 322_faschismus_faschistisches_faschistischen_faschistisch |
| 323 | deutscherbundestag - german - russlanddeutscher - landschaftszerstörung - wasserstandsmeldung | 37 | 323_deutscherbundestag_german_russlanddeutscher_landschaftszerstörung |
| 324 | kraftwerksgelände - reaktoren - reaktorkapazität - kernkraftwerk - atomkraftwerke | 37 | 324_kraftwerksgelände_reaktoren_reaktorkapazität_kernkraftwerk |
| 325 | geheimdienste - uninformiert - socialmedia - mediakanälen - public | 37 | 325_geheimdienste_uninformiert_socialmedia_mediakanälen |
| 326 | cdl - chlordioxid - cbd - medikament - gesundheit | 37 | 326_cdl_chlordioxid_cbd_medikament |
| 327 | ganzmetall - stabil - widerstandsfähigem - metall - metallring | 37 | 327_ganzmetall_stabil_widerstandsfähigem_metall |
| 328 | wasserfilter - hochwasserschutz - wasser - trinkwasser - sicherer | 36 | 328_wasserfilter_hochwasserschutz_wasser_trinkwasser |
| 329 | grillrost - grillen - grill - kochmöglichkeit - kettle | 36 | 329_grillrost_grillen_grill_kochmöglichkeit |
| 330 | biometriske - biometrici - biometrijski - biometrische - datos | 36 | 330_biometriske_biometrici_biometrijski_biometrische |
| 331 | judenfeindlichkeit - jüdischen - judenstern - jüdische - juden | 36 | 331_judenfeindlichkeit_jüdischen_judenstern_jüdische |
| 332 | propagandamanöver - propagandakanäle - altmedienberichterstattung - ukrainethematik - propagandamelement | 36 | 332_propagandamanöver_propagandakanäle_altmedienberichterstattung_ukrainethematik |
| 333 | solidarisch - solidarität - mutigmacher - topbeitrag - aufklärungshelden | 36 | 333_solidarisch_solidarität_mutigmacher_topbeitrag |
| 334 | politikerin - politiker - nationalratsabgeordneter - bundestagsabgeordneter - volksversammlung | 36 | 334_politikerin_politiker_nationalratsabgeordneter_bundestagsabgeordneter |
| 335 | places - untergrundwelt - unterirdische - area - underground | 36 | 335_places_untergrundwelt_unterirdische_area |
| 336 | ungarische - hungarian - orban - ungarisches - orbans | 36 | 336_ungarische_hungarian_orban_ungarisches |
| 337 | russlandbericht - russischsprachigen - russischer - russenhasserin - russe | 36 | 337_russlandbericht_russischsprachigen_russischer_russenhasserin |
| 338 | janotka - nora - präsentiert - nous - tina | 36 | 338_janotka_nora_präsentiert_nous |
| 339 | coronainfektion - infektionswahrscheinlichkeit - epidemiology - beatmungspatienten - patientengruppe | 36 | 339_coronainfektion_infektionswahrscheinlichkeit_epidemiology_beatmungspatienten |
| 340 | jesus - righteousness - christus - geistliches - heiligen | 36 | 340_jesus_righteousness_christus_geistliches |
| 341 | germany - greetings - patriots - my - all | 36 | 341_germany_greetings_patriots_my |
| 342 | humanity - we - better - our - science | 36 | 342_humanity_we_better_our |
| 343 | com - _catherine - adventspaziergang_mit_gartz_und_tolkien_catherine - vimeo - deinem | 36 | 343_com__catherine_adventspaziergang_mit_gartz_und_tolkien_catherine_vimeo |
| 344 | gegenprotest - demonstranten - protestierer - protestler - protest | 35 | 344_gegenprotest_demonstranten_protestierer_protestler |
| 345 | italienischen - italienische - italienern - italien - italiens | 35 | 345_italienischen_italienische_italienern_italien |
| 346 | akademisierungswahn - schulpflichtverletzung - schulsystem - schulbehörden - überakademisierung | 35 | 346_akademisierungswahn_schulpflichtverletzung_schulsystem_schulbehörden |
| 347 | powerstation - vollzeitstellen - akku - ladekabel - elektrowerkzeuge | 35 | 347_powerstation_vollzeitstellen_akku_ladekabel |
| 348 | nehammer - gegenverstaltung - nehammers - landeshauptleuten - ungenimpfte | 35 | 348_nehammer_gegenverstaltung_nehammers_landeshauptleuten |
| 349 | soviet - sowjetrepubliken - russländischen - nuklearwaffen - antikommunist | 35 | 349_soviet_sowjetrepubliken_russländischen_nuklearwaffen |
| 350 | komm - geschwurbel - bubble - geschwurbels - lustig | 35 | 350_komm_geschwurbel_bubble_geschwurbels |
| 351 | totalitär - gesundheitssprecher - anzustecken - inakzeptabel - kriegspropaganda | 35 | 351_totalitär_gesundheitssprecher_anzustecken_inakzeptabel |
| 352 | krisensitzungen - kritische - kritischen - kritischer - grundversorgung | 35 | 352_krisensitzungen_kritische_kritischen_kritischer |
| 353 | vollmilchpulver - milch - milchviehbetriebe - dehydrierte - grundnahrungsmitteln | 35 | 353_vollmilchpulver_milch_milchviehbetriebe_dehydrierte |
| 354 | frankreich - franzosen - french - maskenpflicht - madrid | 35 | 354_frankreich_franzosen_french_maskenpflicht |
| 355 | volkssturm - oppositionellen - bürgerproteste - volkssturms - ungerechtfertigt | 35 | 355_volkssturm_oppositionellen_bürgerproteste_volkssturms |
| 356 | transhumanistischen - transhumanists - transhumanismus - transhumanisten - humanismus | 35 | 356_transhumanistischen_transhumanists_transhumanismus_transhumanisten |
| 357 | hildegardsplatz - mühldorf - mainburg - kulmbach - kühbach | 35 | 357_hildegardsplatz_mühldorf_mainburg_kulmbach |
| 358 | spitalsaufhenthalten - pflegepersonaluntergrenzen - patientenschützer - patientenschutz - pflegemangel | 35 | 358_spitalsaufhenthalten_pflegepersonaluntergrenzen_patientenschützer_patientenschutz |
| 359 | versammlungen - organisator - einzelaktionen - hannover - telegramgruppen | 34 | 359_versammlungen_organisator_einzelaktionen_hannover |
| 360 | medics - mediziner - gesundheitsberufen - medizinische - ärztekammerpräsidenten | 34 | 360_medics_mediziner_gesundheitsberufen_medizinische |
| 361 | gesundheitsbürokraten - ärztekammerpräsident - ärztekammerwahl - präventivmedizin - schulärzte | 34 | 361_gesundheitsbürokraten_ärztekammerpräsident_ärztekammerwahl_präventivmedizin |
| 362 | debattenraums - gesprächsgrundlagen - kommunikationsmöglichkeiten - gesellschaftliche - lokalgruppe | 34 | 362_debattenraums_gesprächsgrundlagen_kommunikationsmöglichkeiten_gesellschaftliche |
| 363 | militärflugzeugen - kriegsflugzeugen - sowjetischer - luftwaffenbasen - kampfflugzeuge | 34 | 363_militärflugzeugen_kriegsflugzeugen_sowjetischer_luftwaffenbasen |
| 364 | totalitärer - totalitäre - totalitarismus - einflussreiche - enthüllungsbücher | 34 | 364_totalitärer_totalitäre_totalitarismus_einflussreiche |
| 365 | lieblingsmenschen - lebensfreude - liebevollen - lebensfreudewer - freudvoller | 34 | 365_lieblingsmenschen_lebensfreude_liebevollen_lebensfreudewer |
| 366 | hauptvideokanal - media - videokanal - aufklärungsvideos - nachrichtenkanal | 34 | 366_hauptvideokanal_media_videokanal_aufklärungsvideos |
| 367 | zensurbestimmungen - censors - medienfreiheitsgesetz - zwillingsverordnungen - informationsfreiheit | 34 | 367_zensurbestimmungen_censors_medienfreiheitsgesetz_zwillingsverordnungen |
| 368 | hören - kanal - gruppen - abhörsicher - verschlüsselung | 34 | 368_hören_kanal_gruppen_abhörsicher |
| 369 | qualität - innenräumen - besonders - hervorzuheben - verwendung | 34 | 369_qualität_innenräumen_besonders_hervorzuheben |
| 370 | akkukapazität - batteriegespeisten - powerstation - blackout - stromgeneratoren | 34 | 370_akkukapazität_batteriegespeisten_powerstation_blackout |
| 371 | vollmilchpulver - milch - grundnahrungsmitteln - grundnahrungsmittel - dehydrierte | 34 | 371_vollmilchpulver_milch_grundnahrungsmitteln_grundnahrungsmittel |
| 372 | inflationsdrucks - inflation - kerninflation - geldpolitik - anhebungen | 34 | 372_inflationsdrucks_inflation_kerninflation_geldpolitik |
| 373 | unbekannteflugobjekte - aerial - fliegendes - helicopters - flying | 34 | 373_unbekannteflugobjekte_aerial_fliegendes_helicopters |
| 374 | protesttag - massnahmenproteste - protest - protestieren - impfstreikbündnis | 34 | 374_protesttag_massnahmenproteste_protest_protestieren |
| 375 | krisenvorsorge - krisenfall - supermärkten - supermärkte - nassmärkte | 34 | 375_krisenvorsorge_krisenfall_supermärkten_supermärkte |
| 376 | sonntagmorgenpost - morgenlektüre - morgen - früh - wünsche | 34 | 376_sonntagmorgenpost_morgenlektüre_morgen_früh |
| 377 | globalisten - globalismus - westglobalist - grossmachtpolitik - ideology | 34 | 377_globalisten_globalismus_westglobalist_grossmachtpolitik |
| 378 | augbde71net - iban - videokanäle - outoftheboxmediatv - swift | 34 | 378_augbde71net_iban_videokanäle_outoftheboxmediatv |
| 379 | demonstrationen - maßnahmenkritiker - untersagt - friedliche - wirwerdenalledasein | 34 | 379_demonstrationen_maßnahmenkritiker_untersagt_friedliche |
| 380 | zeger - verteidigen - wissenschaftsforscher - gerhard - oberfranken | 34 | 380_zeger_verteidigen_wissenschaftsforscher_gerhard |
| 381 | saudische - saudischen - saudi - yuan - weltreservewährung | 34 | 381_saudische_saudischen_saudi_yuan |
| 382 | durchschnittliche - durchschnittlichen - durchschnittlich - ausgezahlten - stundenlohn | 34 | 382_durchschnittliche_durchschnittlichen_durchschnittlich_ausgezahlten |
| 383 | dank - danke - dankbar - teile - gratuliere | 33 | 383_dank_danke_dankbar_teile |
| 384 | schuldzuweisungen - spitzenpolitikerin - kündigungsschutzklagemusterklage - blame - kündigungsschutzklage | 33 | 384_schuldzuweisungen_spitzenpolitikerin_kündigungsschutzklagemusterklage_blame |
| 385 | katastrophensichere - batteriebetriebenen - notfallübertragungssystem - batterie - katastrophenhilfe | 33 | 385_katastrophensichere_batteriebetriebenen_notfallübertragungssystem_batterie |
| 386 | propagandasenders - propagandakanäle - zensursicheren - kriegswaffenkontrollgesetz - zensurmaßnahme | 33 | 386_propagandasenders_propagandakanäle_zensursicheren_kriegswaffenkontrollgesetz |
| 387 | mikrochips - chip - chips - technologien - nanosensoren | 33 | 387_mikrochips_chip_chips_technologien |
| 388 | blackout - 850 - funkgerät - pmr - technologie | 33 | 388_blackout_850_funkgerät_pmr |
| 389 | schmeckt - bp - bestens - gesundheitsministerium - geeignet | 33 | 389_schmeckt_bp_bestens_gesundheitsministerium |
| 390 | katastrophenschutz - krisenvorsorge - bp - hilfsorganisationen - lebensmittelbevorratung | 33 | 390_katastrophenschutz_krisenvorsorge_bp_hilfsorganisationen |
| 391 | saudische - saudischen - saudi - saudischer - mohammed | 33 | 391_saudische_saudischen_saudi_saudischer |
| 392 | glühlampen - lampe - taschenlampe - lichtbogenanzünder - lichtstärke | 33 | 392_glühlampen_lampe_taschenlampe_lichtbogenanzünder |
| 393 | goldhändler - edelmetalldepot - gold - edelmetalle - platin | 33 | 393_goldhändler_edelmetalldepot_gold_edelmetalle |
| 394 | salz - eierschalen - edelkastanien - keramik - trockene | 33 | 394_salz_eierschalen_edelkastanien_keramik |
| 395 | akkukapazität - batteriegespeisten - powerstations - elektrowerkzeuge - powerstation | 33 | 395_akkukapazität_batteriegespeisten_powerstations_elektrowerkzeuge |
| 396 | com - youtube - twitter - odysee - ungestört | 33 | 396_com_youtube_twitter_odysee |
| 397 | gesunde - nahrungsergänzungsmittel - pflanzenstoffen - vitalstoffen - vitaminen | 33 | 397_gesunde_nahrungsergänzungsmittel_pflanzenstoffen_vitalstoffen |
| 398 | freiheitdergedanken - freiheitsgefühl - freiheitsliebende - freiheit - freedom | 33 | 398_freiheitdergedanken_freiheitsgefühl_freiheitsliebende_freiheit |
| 399 | hafenstadt - usmanov - schiffbauunternehmens - kriegsschiffe - kriegsmarine | 33 | 399_hafenstadt_usmanov_schiffbauunternehmens_kriegsschiffe |
| 400 | abwehrsprays - abwehrspray - pfefferspray - pfeffersprays - toxische | 33 | 400_abwehrsprays_abwehrspray_pfefferspray_pfeffersprays |
| 401 | bundesgerichtshofs - bundesgerichtshof - schadenersatzanspruch - schadensersatzzahlungen - staatshaftung | 33 | 401_bundesgerichtshofs_bundesgerichtshof_schadenersatzanspruch_schadensersatzzahlungen |
| 402 | düsseldorf - münchen - osnabrück - augsburg - berlin | 32 | 402_düsseldorf_münchen_osnabrück_augsburg |
| 403 | weltfrauentag - feministischer - feministischen - feministische - geschlechtergerechtigkeit | 32 | 403_weltfrauentag_feministischer_feministischen_feministische |
| 404 | impfstoffs - impfkampagnen - todesfallbericht - schwangerschaftsdrittel - frühgeburten | 32 | 404_impfstoffs_impfkampagnen_todesfallbericht_schwangerschaftsdrittel |
| 405 | finanzkrise - weltwirtschaftskrise - währungskatastrophen - wirtschaftssystem - schuldenblase | 32 | 405_finanzkrise_weltwirtschaftskrise_währungskatastrophen_wirtschaftssystem |
| 406 | offizier - infektionsschutzbeauftragten - soldat - notaufnahme - ärzteteam | 32 | 406_offizier_infektionsschutzbeauftragten_soldat_notaufnahme |
| 407 | volkslied - songs - song - hymne - musik | 32 | 407_volkslied_songs_song_hymne |
| 408 | satanische - unheilig - okkulten - transgender - bowie | 32 | 408_satanische_unheilig_okkulten_transgender |
| 409 | schwangerschaftskonfliktberatung - schwangerschaftskonflikt - ungeborenen - schwangerschaft - schwangeren | 32 | 409_schwangerschaftskonfliktberatung_schwangerschaftskonflikt_ungeborenen_schwangerschaft |
| 410 | impfangebote - impfschutz - coronavirus - impfungen - coronavirusdas | 32 | 410_impfangebote_impfschutz_coronavirus_impfungen |
| 411 | musiker - musikszene - sänger - musikalisches - musikproduzent | 32 | 411_musiker_musikszene_sänger_musikalisches |
| 412 | übersterblichkeit - todesfälle - sterbefällen - sterblichkeits - todesanzeigen | 32 | 412_übersterblichkeit_todesfälle_sterbefällen_sterblichkeits |
| 413 | pack - tourenrucksäcke - fronttasche - schutzsack - campen | 32 | 413_pack_tourenrucksäcke_fronttasche_schutzsack |
| 414 | wasserfilter - absoluter - selbstreinigend - lange - alleskönner | 32 | 414_wasserfilter_absoluter_selbstreinigend_lange |
| 415 | impfstoffmangel - impfpflichtangesichts - impfstoffherstellung - impfstoffes - impfstoff | 32 | 415_impfstoffmangel_impfpflichtangesichts_impfstoffherstellung_impfstoffes |
| 416 | schwerverbrecher - schwerverbrechern - strafverfolger - strafrechtlers - gefangener | 32 | 416_schwerverbrecher_schwerverbrechern_strafverfolger_strafrechtlers |
| 417 | enfants - mamans - kinderleichen - volksverrätern - moustacakis | 32 | 417_enfants_mamans_kinderleichen_volksverrätern |
| 418 | petroleumheizung - alternative - hierfür - folgende - bietet | 32 | 418_petroleumheizung_alternative_hierfür_folgende |
| 419 | bitcoin - 4att5z6tgvr6ah9hspjjlenb6wmaf36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn - bc1q7xfc7ppuw5jwz77sy29txy0efwqnpxw70swgy6 - lvmraqlahnt5wbmqpjbjcf1yurths99vtx - lcty57kkuwasf8yqc9eblpxbj5kcqzmlhq | 31 | 419_bitcoin_4att5z6tgvr6ah9hspjjlenb6wmaf36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn_bc1q7xfc7ppuw5jwz77sy29txy0efwqnpxw70swgy6_lvmraqlahnt5wbmqpjbjcf1yurths99vtx |
| 420 | jobsforall - jobplattform - jobsuche - arbeitsplätze - beschäftigungs | 31 | 420_jobsforall_jobplattform_jobsuche_arbeitsplätze |
| 421 | presseschau - 21st - beobachter - aktuelle - dir | 31 | 421_presseschau_21st_beobachter_aktuelle |
| 422 | epidemiologe - epidemiologin - infektionskrankheiten - biostatistiker - immunologie | 31 | 422_epidemiologe_epidemiologin_infektionskrankheiten_biostatistiker |
| 423 | krankenschwester - zensierten - intensivpfleger - hospitalisierten - videobeitrag | 31 | 423_krankenschwester_zensierten_intensivpfleger_hospitalisierten |
| 424 | vitamin - vitaminen - vitaminkombination - vitaminzum - präventionsmitteln | 31 | 424_vitamin_vitaminen_vitaminkombination_vitaminzum |
| 425 | saharaüber - sahara - schwefeldioxid - staubpartikel - wüstensande | 31 | 425_saharaüber_sahara_schwefeldioxid_staubpartikel |
| 426 | ryglewski - klüssendorf - sattelberger - droßmann - lehmann | 31 | 426_ryglewski_klüssendorf_sattelberger_droßmann |
| 427 | steuerzahlers - steuerzahler - parlamentarier - mehrkosten - bundesinnenministerien | 31 | 427_steuerzahlers_steuerzahler_parlamentarier_mehrkosten |
| 428 | dir - folgender - zeigt - diesem - hauptsächlich | 31 | 428_dir_folgender_zeigt_diesem |
| 429 | euronews - kanalmitgliedschaft - europea - vimeo - videoquellen | 31 | 429_euronews_kanalmitgliedschaft_europea_vimeo |
| 430 | korruptionsuntersuchungsausschuss - untersuchungsausschusses - korruptions - verteidigungsausschusses - ausschusses | 31 | 430_korruptionsuntersuchungsausschuss_untersuchungsausschusses_korruptions_verteidigungsausschusses |
| 431 | wirtschaftskrisen - wirtschaftsbündnis - wirtschaftszweigen - energiewendekrise - umweltkatastrophe | 31 | 431_wirtschaftskrisen_wirtschaftsbündnis_wirtschaftszweigen_energiewendekrise |
| 432 | informationskrieg - twitterfilesdeutsch - meldungen - lesen - originalartikel | 31 | 432_informationskrieg_twitterfilesdeutsch_meldungen_lesen |
| 433 | gesundheitsminister - pandemietreiberministers - beschränkungsregime - impfpflichtgesetz - impfpflichtbefürwortende | 31 | 433_gesundheitsminister_pandemietreiberministers_beschränkungsregime_impfpflichtgesetz |
| 434 | russia - kupiansk - controversy - chasov - haiphong | 31 | 434_russia_kupiansk_controversy_chasov |
| 435 | humor - humorvoll - lustiges - jokes - witze | 31 | 435_humor_humorvoll_lustiges_jokes |
| 436 | impfzeitraum - impfquoten - impfschäden - impfnebenwirkungen - infizierte | 31 | 436_impfzeitraum_impfquoten_impfschäden_impfnebenwirkungen |
| 437 | philharmoniker - symphony - dirigent - tenor - orchestra | 31 | 437_philharmoniker_symphony_dirigent_tenor |
| 438 | schlafmasken - schlafqualität - schlafen - schlafnuss - schlaf | 30 | 438_schlafmasken_schlafqualität_schlafen_schlafnuss |
| 439 | filmhistorie - kinos - film - spielfilmen - filme | 30 | 439_filmhistorie_kinos_film_spielfilmen |
| 440 | direktorenkollegen - betriebsdirektor - schodrowski - verwaltungsdirektor - politbürokratie | 30 | 440_direktorenkollegen_betriebsdirektor_schodrowski_verwaltungsdirektor |
| 441 | amazonsmile - amazons - amazon - amazonum - dauernamazon | 30 | 441_amazonsmile_amazons_amazon_amazonum |
| 442 | immobilienkonflikte - wohnraumförderung - immobilienmarkt - immobilienwirtschaft - wohnungsknappheit | 30 | 442_immobilienkonflikte_wohnraumförderung_immobilienmarkt_immobilienwirtschaft |
| 443 | krankheitsminister - bundesgesundheitsminister - sozialministers - gesundheitsminister - ministerrat | 30 | 443_krankheitsminister_bundesgesundheitsminister_sozialministers_gesundheitsminister |
| 444 | flugverbot - passagierflugzeug - flüge - fluggesellschaft - billigfluggesellschaft | 30 | 444_flugverbot_passagierflugzeug_flüge_fluggesellschaft |
| 445 | dänemark - dänischen - dänische - schwedischen - kopenhagen | 30 | 445_dänemark_dänischen_dänische_schwedischen |
| 446 | twitterusa - twitter - facebook - kontoverbindung - youtube | 30 | 446_twitterusa_twitter_facebook_kontoverbindung |
| 447 | terrormiliz - jemenkrieg - yemen - völkermord - jemeniten | 30 | 447_terrormiliz_jemenkrieg_yemen_völkermord |
| 448 | presseanfragen - medienanfrage - medienanfragen - demonstrationsrechtsdas - demonstrationsrecht | 30 | 448_presseanfragen_medienanfrage_medienanfragen_demonstrationsrechtsdas |
| 449 | q74you - aufwachprogramm - plan - schnelleinstieg - erwachen | 30 | 449_q74you_aufwachprogramm_plan_schnelleinstieg |
| 450 | bundesverfassungsgericht - bundesverfassungsgerichts - nichtmeinbundesverfassungsgericht - freiheitsrechte - legalen | 30 | 450_bundesverfassungsgericht_bundesverfassungsgerichts_nichtmeinbundesverfassungsgericht_freiheitsrechte |
| 451 | lebensmittelmarkt - lebensmittelversorgung - lebensmittelindustria - ernährung - ernähren | 30 | 451_lebensmittelmarkt_lebensmittelversorgung_lebensmittelindustria_ernährung |
| 452 | lampenöl - petroleumlampen - petroleumheizung - petroleumbetriebene - brennstoff | 30 | 452_lampenöl_petroleumlampen_petroleumheizung_petroleumbetriebene |
| 453 | bc1qzz8uwg8l96hpv5tmxvyjuxd8jxfy4macftsrpj - ef80ca83ca1c8dd7f64fa8ef53ce54fd8d599479 - bitcoin - de97100110012620193011 - matthie | 30 | 453_bc1qzz8uwg8l96hpv5tmxvyjuxd8jxfy4macftsrpj_ef80ca83ca1c8dd7f64fa8ef53ce54fd8d599479_bitcoin_de97100110012620193011 |
| 454 | falschbehauptungen - faktenprüfung - faktenchecks - faktenchecken - faktencheck | 30 | 454_falschbehauptungen_faktenprüfung_faktenchecks_faktenchecken |
| 455 | österreichs - österreichischen - österreichischer - ukrainekrieg - grundwehrdiener | 30 | 455_österreichs_österreichischen_österreichischer_ukrainekrieg |
| 456 | ideologisierten - ankündigungspolitik - ideologischen - volksverhetzer - inakzeptabel | 30 | 456_ideologisierten_ankündigungspolitik_ideologischen_volksverhetzer |
| 457 | polnischem - polnischer - polnische - warschau - verleumdungskampagne | 30 | 457_polnischem_polnischer_polnische_warschau |
| 458 | spirituellen - geistige - spirituelle - spiritualität - spirit | 30 | 458_spirituellen_geistige_spirituelle_spiritualität |
| 459 | protesttag - österreichweite - österreichweiter - protesting - solidaritätsdemo | 30 | 459_protesttag_österreichweite_österreichweiter_protesting |
| 460 | emoji - emojis - streik - instagram - facebook | 29 | 460_emoji_emojis_streik_instagram |
| 461 | atomkrieg - kriegsgefahr - nuklearraketen - nuklearen - nuclear | 29 | 461_atomkrieg_kriegsgefahr_nuklearraketen_nuklearen |
| 462 | kindermasken - masks - masken - maske - erwachsenenmasken | 29 | 462_kindermasken_masks_masken_maske |
| 463 | außerirdische - außerirdisches - außerirdischen - ufo - aliens | 29 | 463_außerirdische_außerirdisches_außerirdischen_ufo |
| 464 | deutschlands - deutsche - europaweit - bestellen - österreichische | 29 | 464_deutschlands_deutsche_europaweit_bestellen |
| 465 | twitter - rabbit - veröffentlicht - krankenhäusern - youtbe | 29 | 465_twitter_rabbit_veröffentlicht_krankenhäusern |
| 466 | europaabgeordnete - schweizerische - schweizer - pflichtimpfungen - kommissionspräsidentin | 29 | 466_europaabgeordnete_schweizerische_schweizer_pflichtimpfungen |
| 467 | liposomale - phospholipid - nährstoffe - magen - zellen | 29 | 467_liposomale_phospholipid_nährstoffe_magen |
| 468 | sanktionspolitik - sanktionstrick - sanktionen - sanktion - sanktionsmonopol | 29 | 468_sanktionspolitik_sanktionstrick_sanktionen_sanktion |
| 469 | panzergrenadierbrigade - panzergrenadierbatallions - deutschlandhassern - wehrpflicht - wehrbeauftragte | 29 | 469_panzergrenadierbrigade_panzergrenadierbatallions_deutschlandhassern_wehrpflicht |
| 470 | lichtgrüsse - lichtegrüße - lichterkette - lichtgrüße - lichtquelle | 29 | 470_lichtgrüsse_lichtegrüße_lichterkette_lichtgrüße |
| 471 | schweizer - armee - beschaffungsstellen - armeedas - strapazierfähige | 29 | 471_schweizer_armee_beschaffungsstellen_armeedas |
| 472 | akku - solarpane - taschenlampe - leuchtdauer - ladegerät | 29 | 472_akku_solarpane_taschenlampe_leuchtdauer |
| 473 | katastrophenmanagement - hochwasserkatastrophe - flutkatastrophe - flutkatastrophebrisante - katastrophenschutzzentrumbei | 29 | 473_katastrophenmanagement_hochwasserkatastrophe_flutkatastrophe_flutkatastrophebrisante |
| 474 | petroleumheizung - gewächshausheizung - flammlöschautomatik - petroleumbetriebenen - petroleum | 29 | 474_petroleumheizung_gewächshausheizung_flammlöschautomatik_petroleumbetriebenen |
| 475 | friedlichkeit - friedliche - friedlich - freiheitsbewegung - friedlicher | 29 | 475_friedlichkeit_friedliche_friedlich_freiheitsbewegung |
| 476 | abtreibungsrichtlinien - abtreibungsbefürworter - abtreibungswilligen - abtreibungsanbietern - abortionists | 29 | 476_abtreibungsrichtlinien_abtreibungsbefürworter_abtreibungswilligen_abtreibungsanbietern |
| 477 | infizierten - infizierte - impfstatistik - neuinfizierten - outbreak | 29 | 477_infizierten_infizierte_impfstatistik_neuinfizierten |
| 478 | chemikalie - giftstoffen - chemikalien - giftige - hochgiftig | 29 | 478_chemikalie_giftstoffen_chemikalien_giftige |
| 479 | nattokinase - toxischen - natto - gentechnik - natron | 29 | 479_nattokinase_toxischen_natto_gentechnik |
| 480 | gasspeicherkapazitäten - gasspeichern - gasspeicher - erdgasspeicher - gasspeicherverbandes | 29 | 480_gasspeicherkapazitäten_gasspeichern_gasspeicher_erdgasspeicher |
| 481 | bereitschaftspolizei - polizeipräsident - polizist - polizeibeamter - disziplinarmaßnahmen | 29 | 481_bereitschaftspolizei_polizeipräsident_polizist_polizeibeamter |
| 482 | hochansteckend - gesundheitsalarm - äquatorialguinea - ebolavirus - guinea | 29 | 482_hochansteckend_gesundheitsalarm_äquatorialguinea_ebolavirus |
| 483 | mobilität - prominententransport - größtmögliche - versandkostenfrei - stabile | 29 | 483_mobilität_prominententransport_größtmögliche_versandkostenfrei |
| 484 | medizinrechtler - ermächtigungsgesetz - regelhülsen - impfzwangsgesetz - grundrechtseingriffe | 29 | 484_medizinrechtler_ermächtigungsgesetz_regelhülsen_impfzwangsgesetz |
| 485 | überalldeutschlandweit - freiepressesauerland - sächsischen - kaindlbauer - karlskirche | 28 | 485_überalldeutschlandweit_freiepressesauerland_sächsischen_kaindlbauer |
| 486 | universalradio - radio - akku - technologie - batterien | 28 | 486_universalradio_radio_akku_technologie |
| 487 | keramikbrenner - keramik - gasheizer - gasdruckregler - thermoelektrische | 28 | 487_keramikbrenner_keramik_gasheizer_gasdruckregler |
| 488 | camping - blackout - outdoor - innenräumen - geeignet | 28 | 488_camping_blackout_outdoor_innenräumen |
| 489 | überlebenstechniken - lebenswichtig - überleben - konservierung - lebenswichtigen | 28 | 489_überlebenstechniken_lebenswichtig_überleben_konservierung |
| 490 | trump - trumps - russischem - präsidentschaft - osama | 28 | 490_trump_trumps_russischem_präsidentschaft |
| 491 | nachrichten - germany - anti - zelenko - kids | 28 | 491_nachrichten_germany_anti_zelenko |
| 492 | gesundheitsministers - volksfeinden - demokratieforscherin - faktenfuchs - faktenfinder | 28 | 492_gesundheitsministers_volksfeinden_demokratieforscherin_faktenfuchs |
| 493 | blackouts - blackout - blackoutdie - hinauszuzögern - energiesysteme | 28 | 493_blackouts_blackout_blackoutdie_hinauszuzögern |
| 494 | abgasgesetzgebung - volkswagenstiftung - straßburg - autoländer - benzinmotor | 28 | 494_abgasgesetzgebung_volkswagenstiftung_straßburg_autoländer |
| 495 | jahresrückblicke - vorjahres - enthüllungsjournalist - bestsellerreihe - neujahrstag | 28 | 495_jahresrückblicke_vorjahres_enthüllungsjournalist_bestsellerreihe |
| 496 | lebensmittel - granulate - trockene - trockener - pulver | 28 | 496_lebensmittel_granulate_trockene_trockener |
| 497 | eingelagerte - notieren - oberfläche - äußere - erntejahr | 28 | 497_eingelagerte_notieren_oberfläche_äußere |
| 498 | coronavirus - covid - buch - entdeckung - heilmittel | 28 | 498_coronavirus_covid_buch_entdeckung |
| 499 | angstpolitik - angststrategie - panik - gefährlich - angstmache | 28 | 499_angstpolitik_angststrategie_panik_gefährlich |
| 500 | iranischem - iranische - iranischer - missiles - raketenangriff | 28 | 500_iranischem_iranische_iranischer_missiles |
| 501 | fanpost - anschrift - liebesbriefe - rundbriefabo - danke | 28 | 501_fanpost_anschrift_liebesbriefe_rundbriefabo |
| 502 | feuerwehrfeste - feuerwehrleute - großbrand - feuerwehren - lichterprotest | 28 | 502_feuerwehrfeste_feuerwehrleute_großbrand_feuerwehren |
| 503 | геополитических - kriegspatriotismus - nachkriegsordnung - kriegslüsterne - amerikaner | 28 | 503_геополитических_kriegspatriotismus_nachkriegsordnung_kriegslüsterne |
| 504 | aachen - ingolstadt - heidelberg - hildesheim - augsburg | 28 | 504_aachen_ingolstadt_heidelberg_hildesheim |
| 505 | russenfeindlich - siggelkow - aussenpolitiker - aussenpolitikern - deutschlandab | 28 | 505_russenfeindlich_siggelkow_aussenpolitiker_aussenpolitikern |
| 506 | pfannkuchen - omelette - verpackung - hühnervolleipulver - volleipulver | 28 | 506_pfannkuchen_omelette_verpackung_hühnervolleipulver |
| 507 | weihnachtsmärkten - weihnachtsmärkte - weihnachtsmarkt - weihnachtsgeschäftausgerechnet - weihnachtsgeschäft | 28 | 507_weihnachtsmärkten_weihnachtsmärkte_weihnachtsmarkt_weihnachtsgeschäftausgerechnet |
| 508 | freiheitshandy - freiheits - meinungsfreiheit - freiheitskämpfer - unterdrückt | 28 | 508_freiheitshandy_freiheits_meinungsfreiheit_freiheitskämpfer |
| 509 | deutschfeistritzer - unwürdiges - gesetzes - gesetzesverordnung - zögerns | 28 | 509_deutschfeistritzer_unwürdiges_gesetzes_gesetzesverordnung |
| 510 | bensheim - lindenplatz - jügesheim - hofheim - rüdesheim | 28 | 510_bensheim_lindenplatz_jügesheim_hofheim |
| 511 | kurzinterview - interviewt - interview - interviews - briefing | 28 | 511_kurzinterview_interviewt_interview_interviews |
| 512 | krisenmechanismus - krisensitzung - hungerkrise - hungerkatastrophe - hungerkatastrophen | 28 | 512_krisenmechanismus_krisensitzung_hungerkrise_hungerkatastrophe |
| 513 | fruchtkompotten - petersilienwurzel - suppen - salz - fermentationstag | 28 | 513_fruchtkompotten_petersilienwurzel_suppen_salz |
| 514 | oppositionsmodus - impfpflichtkubickis - parteikollegen - impfvorschrift - impfstopp | 28 | 514_oppositionsmodus_impfpflichtkubickis_parteikollegen_impfvorschrift |
| 515 | geo - источник - russisches - redaktionell - sabiene | 27 | 515_geo_источник_russisches_redaktionell |
| 516 | petroleumheizung - alternative - hierfür - folgende - bietet | 27 | 516_petroleumheizung_alternative_hierfür_folgende |
| 517 | nürnbergrufdertrommelnsonntag - hohenheimerstr - versammlungsort - bratislava - versammlung | 27 | 517_nürnbergrufdertrommelnsonntag_hohenheimerstr_versammlungsort_bratislava |
| 518 | narrativede - erschienen - narrative - führte - heutige | 27 | 518_narrativede_erschienen_narrative_führte |
| 519 | asylpolitik - niederösterreicher - niederösterreich - asylwerbern - debattensendung | 27 | 519_asylpolitik_niederösterreicher_niederösterreich_asylwerbern |
| 520 | krisenzeit - wirtschaftskrise - lebensmittelvorrat - nahrung - lebensmittel | 27 | 520_krisenzeit_wirtschaftskrise_lebensmittelvorrat_nahrung |
| 521 | mainstreammedien - propagandawiebeigoebbels - journalistischen - 1700000 - 400000 | 27 | 521_mainstreammedien_propagandawiebeigoebbels_journalistischen_1700000 |
| 522 | coronapandemie - coronabelegung - epidemie - hochrisikogebieten - hochrisikogebiete | 27 | 522_coronapandemie_coronabelegung_epidemie_hochrisikogebieten |
| 523 | paypal - schenkung - schenkungen - spendenkonto - geld | 27 | 523_paypal_schenkung_schenkungen_spendenkonto |
| 524 | unterwasserkraftwerke - unterwasserbohrungen - sonarboje - unterwasserfahrzeug - pipelines | 27 | 524_unterwasserkraftwerke_unterwasserbohrungen_sonarboje_unterwasserfahrzeug |
| 525 | fersenbereich - untergrund - außensohle - dämpfung - eva | 27 | 525_fersenbereich_untergrund_außensohle_dämpfung |
| 526 | pandemiediktaturregime - impfpflichtdiskussion - impfpflichtpläne - pandemiemanagment - schmähpolitik | 27 | 526_pandemiediktaturregime_impfpflichtdiskussion_impfpflichtpläne_pandemiemanagment |
| 527 | grundrechtsdemonstranten - demonstranten - rechtsradikale - protestiere - solidaritätskundgebung | 27 | 527_grundrechtsdemonstranten_demonstranten_rechtsradikale_protestiere |
| 528 | amsterdam - milano - metropole - bundeshauptstadt - stadtgebiet | 27 | 528_amsterdam_milano_metropole_bundeshauptstadt |
| 529 | friedliche - freiheitskämpfer - rennweg - montagsspaziergang - eingehüllt | 27 | 529_friedliche_freiheitskämpfer_rennweg_montagsspaziergang |
| 530 | minimalen - platzbedarf - lagerung - geringe - kleineren | 27 | 530_minimalen_platzbedarf_lagerung_geringe |
| 531 | großrussland - eurasischen - globalismus - globalisten - verhasst | 27 | 531_großrussland_eurasischen_globalismus_globalisten |
| 532 | nächtlichen - tonight - abend - mitternacht - nacht | 27 | 532_nächtlichen_tonight_abend_mitternacht |
| 533 | luftabwehrraketensysteme - angriffsflugzeuge - luftabwehrraketensystem - mehrfachraketenwerfer - kampfflugzeuge | 27 | 533_luftabwehrraketensysteme_angriffsflugzeuge_luftabwehrraketensystem_mehrfachraketenwerfer |
| 534 | musk - twitterte - twittern - kriegsende - controversial | 27 | 534_musk_twitterte_twittern_kriegsende |
| 535 | propagandablase - medienkritik - publicity - berichterstattungbravo - rausausderblase | 27 | 535_propagandablase_medienkritik_publicity_berichterstattungbravo |
| 536 | livestreams - livestream - live - bayerischen - schweiz | 27 | 536_livestreams_livestream_live_bayerischen |
| 537 | bakterien - trinkwasser - wasser - trinkwassernetz - wassernetz | 27 | 537_bakterien_trinkwasser_wasser_trinkwassernetz |
| 538 | wärme - elektrizität - blackout - wind - gas | 27 | 538_wärme_elektrizität_blackout_wind |
| 539 | manipulierenden - manipulierte - masterminds - protaganisten - globalistische | 27 | 539_manipulierenden_manipulierte_masterminds_protaganisten |
| 540 | russlands - russian - geopolitik - geopolitisch - moskaus | 27 | 540_russlands_russian_geopolitik_geopolitisch |
| 541 | taschenlampe - taschenlampen - lampenkopf - tk22ue - tk16 | 27 | 541_taschenlampe_taschenlampen_lampenkopf_tk22ue |
| 542 | kriminellen - verbrechern - verbrecher - kriminelle - verbrecherisch | 26 | 542_kriminellen_verbrechern_verbrecher_kriminelle |
| 543 | bürgerproteste - protestkultur - demokratischem - protestwelle - protestiere | 26 | 543_bürgerproteste_protestkultur_demokratischem_protestwelle |
| 544 | internetverbindung - vpn - netflix - purevpn - provider | 26 | 544_internetverbindung_vpn_netflix_purevpn |
| 545 | expertenrunde - pressekonferenz - presseabteilung - oberösterreich - landessprecher | 26 | 545_expertenrunde_pressekonferenz_presseabteilung_oberösterreich |
| 546 | festbrennstoffe - feuerschale - kochmöglichkeit - grillen - brennstoff | 26 | 546_festbrennstoffe_feuerschale_kochmöglichkeit_grillen |
| 547 | transportbranche - dieselsäule - dieselpreise - fuhrunternehmen - geforderttransportgewerbe | 26 | 547_transportbranche_dieselsäule_dieselpreise_fuhrunternehmen |
| 548 | atomwaffen - atomraketenanlagen - pakistan - pakistanindien - atommächte | 26 | 548_atomwaffen_atomraketenanlagen_pakistan_pakistanindien |
| 549 | kettle - wasserkocher - storm - camp - camping | 26 | 549_kettle_wasserkocher_storm_camp |
| 550 | wasserbeutel - wasserverbrauch - trinkwasser - wasserdampf - wasser | 26 | 550_wasserbeutel_wasserverbrauch_trinkwasser_wasserdampf |
| 551 | meteoritenfunde - kosmischem - astronomen - kometen - exoplanet | 26 | 551_meteoritenfunde_kosmischem_astronomen_kometen |
| 552 | erhitzt - wärmeleitfähigkeit - dutch - grill - grillfans | 26 | 552_erhitzt_wärmeleitfähigkeit_dutch_grill |
| 553 | pferdesport - pferdezucht - pferden - sportpferdezucht - proud | 26 | 553_pferdesport_pferdezucht_pferden_sportpferdezucht |
| 554 | südosteuropäischen - hessischen - hessische - einsatzhundertschaft - verteidigt | 26 | 554_südosteuropäischen_hessischen_hessische_einsatzhundertschaft |
| 555 | suppentopf - suppe - eintopf - erbsen - schweinebauch | 26 | 555_suppentopf_suppe_eintopf_erbsen |
| 556 | gewaltform - gegengewalt - gewaltphantasien - gewalt - gewaltverherrlichung | 26 | 556_gewaltform_gegengewalt_gewaltphantasien_gewalt |
| 557 | regimekritik - innenminister - grundrechtsdemo - ideokratie - politikkaste | 26 | 557_regimekritik_innenminister_grundrechtsdemo_ideokratie |
| 558 | klimageld - klimaschutzes - klimaschutz - klimawahns - klimasünder | 26 | 558_klimageld_klimaschutzes_klimaschutz_klimawahns |
| 559 | briefbombenterror - bombenterror - nazism - atombombe - bombardierung | 26 | 559_briefbombenterror_bombenterror_nazism_atombombe |
| 560 | rpp - präsentiert - irre - bedenklich - macht | 26 | 560_rpp_präsentiert_irre_bedenklich |
| 561 | kritiker - schuldzuweisungen - missverständlich - eigenverantwortung - zugeständnis | 26 | 561_kritiker_schuldzuweisungen_missverständlich_eigenverantwortung |
| 562 | altersverteilung - renteneintrittsalter - versicherungszeit - eintrittsalter - nachkriegsdeutschlands | 26 | 562_altersverteilung_renteneintrittsalter_versicherungszeit_eintrittsalter |
| 563 | präsidentschaftswahlen - biden - bidenim - coup - kriegsunterstützung | 26 | 563_präsidentschaftswahlen_biden_bidenim_coup |
| 564 | audio - download - 04 - 03 - 02 | 26 | 564_audio_download_04_03 |
| 565 | nahrungsvorrat - sichere - vorratshaltung - konservendosen - aufbewahren | 26 | 565_nahrungsvorrat_sichere_vorratshaltung_konservendosen |
| 566 | doomsday - volkswirtschaft - zukunftspessimismus - katastrophen - optimistische | 26 | 566_doomsday_volkswirtschaft_zukunftspessimismus_katastrophen |
| 567 | vladimir - russlandrussische - prorussischen - demyanenko - prorussisch | 26 | 567_vladimir_russlandrussische_prorussischen_demyanenko |
| 568 | impfstoffvertrag - pfizer - thailands - thailändische - thailand | 26 | 568_impfstoffvertrag_pfizer_thailands_thailändische |
| 569 | orginalvideo - landessprecher - recorded - zeitraffer - mitentdecker | 26 | 569_orginalvideo_landessprecher_recorded_zeitraffer |
| 570 | volksverdummung - angstpropaganda - empörungs - verunsicherung - rhetorik | 26 | 570_volksverdummung_angstpropaganda_empörungs_verunsicherung |
| 571 | sicherheitsbericht - severity - effects - verdachtsfälle - warngründe | 26 | 571_sicherheitsbericht_severity_effects_verdachtsfälle |
| 572 | samstags - freiheitrufderfreiheitsamstags - feiertage - wendezeit - spaziergangs | 26 | 572_samstags_freiheitrufderfreiheitsamstags_feiertage_wendezeit |
| 573 | hören - kanal - gruppen - abhörsicher - verschlüsselung | 26 | 573_hören_kanal_gruppen_abhörsicher |
| 574 | news - nemos - newsletter - our - listener | 26 | 574_news_nemos_newsletter_our |
| 575 | kpa - kalinka - via - at - | 26 | 575_kpa_kalinka_via_at |
| 576 | raketenöfen - raketenofen - entwicklungsländern - effizienz - brennbaren | 26 | 576_raketenöfen_raketenofen_entwicklungsländern_effizienz |
| 577 | fachbuchautorin - heilpraktiker - fachanwältin - leseempfehlung - arzt | 26 | 577_fachbuchautorin_heilpraktiker_fachanwältin_leseempfehlung |
| 578 | europaparlament - kommission - kommissionspräsidentin - europäische - europäischen | 26 | 578_europaparlament_kommission_kommissionspräsidentin_europäische |
| 579 | solarpanel - solar - digitalkamera - powerbank - smartphone | 26 | 579_solarpanel_solar_digitalkamera_powerbank |
| 580 | sowjetrepubliken - kriegsführung - militärmacht - aussenminister - bombardieren | 26 | 580_sowjetrepubliken_kriegsführung_militärmacht_aussenminister |
| 581 | kerouac - kekule - kekulé - kekulés - politisches | 25 | 581_kerouac_kekule_kekulé_kekulés |
| 582 | österreichs - österreichische - österreichischen - eurosolidarität - antirussischen | 25 | 582_österreichs_österreichische_österreichischen_eurosolidarität |
| 583 | zugangsbeschränkung - zugangsbeschränkungen - drogeriemärkte - lebensmittelhändler - einzelhandel | 25 | 583_zugangsbeschränkung_zugangsbeschränkungen_drogeriemärkte_lebensmittelhändler |
| 584 | dempflegepersonal - krankenhausmitarbeiter - pflegekräfte - privatkrankenanstalten - pflegesektor | 25 | 584_dempflegepersonal_krankenhausmitarbeiter_pflegekräfte_privatkrankenanstalten |
| 585 | bitcoins - bitcoin - cryptocurrency - 3836ebjsxzret6rsely9kfa9zwuqexqteu - nl17bunq2045314502 | 25 | 585_bitcoins_bitcoin_cryptocurrency_3836ebjsxzret6rsely9kfa9zwuqexqteu |
| 586 | marineoffiziere - marinekommando - marine - marinetauchern - navy | 25 | 586_marineoffiziere_marinekommando_marine_marinetauchern |
| 587 | madrid - frankreich - spanien - spaniens - münchender | 25 | 587_madrid_frankreich_spanien_spaniens |
| 588 | niederösterreich - österreichische - salzburg - bundesländer - vorerst | 25 | 588_niederösterreich_österreichische_salzburg_bundesländer |
| 589 | wisnewski - wisnewskis - buchautorin - bestsellerautor - romanautor | 25 | 589_wisnewski_wisnewskis_buchautorin_bestsellerautor |
| 590 | bergsportler - warnstufen - warnstufe - skiurlauber - skiurlaub | 25 | 590_bergsportler_warnstufen_warnstufe_skiurlauber |
| 591 | österreichweit - impfpflichtsbetrug - österreichmassive - österreichische - immunitätsstatus | 25 | 591_österreichweit_impfpflichtsbetrug_österreichmassive_österreichische |
| 592 | protestiert - protest - gesundheitspersonal - gesundheitsministerium - gesundheitfürösterreich | 25 | 592_protestiert_protest_gesundheitspersonal_gesundheitsministerium |
| 593 | euro - platin - gold - money - edelmetalle | 25 | 593_euro_platin_gold_money |
| 594 | gewichtsverlust - gewichtszunahme - ernährungsumstellung - übergewicht - fettreichen | 25 | 594_gewichtsverlust_gewichtszunahme_ernährungsumstellung_übergewicht |
| 595 | abonnieren - vereinsmitglied - mitglied - bitte - werde | 25 | 595_abonnieren_vereinsmitglied_mitglied_bitte |
| 596 | rjabkow - kriegineuropa - gaslieferungen - erdgaspipeline - erdgasfluss | 25 | 596_rjabkow_kriegineuropa_gaslieferungen_erdgaspipeline |
| 597 | lobbyismus - wirvergessennicht - nebeneinkünfte - keinplatzfürgewalt - lobbypolitik | 25 | 597_lobbyismus_wirvergessennicht_nebeneinkünfte_keinplatzfürgewalt |
| 598 | satireshow - satiriker - satire - satirestart - humor | 25 | 598_satireshow_satiriker_satire_satirestart |
| 599 | vaccine - impfstoff - bnt162b2 - sars - dspc | 25 | 599_vaccine_impfstoff_bnt162b2_sars |
| 600 | getreidemühlen - ernährungs - getreidemühle - ernährung - brauchst | 25 | 600_getreidemühlen_ernährungs_getreidemühle_ernährung |
| 601 | censorship - twitters - zensiert - zensurpolitik - congresswoman | 25 | 601_censorship_twitters_zensiert_zensurpolitik |
| 602 | trinkwasserqualität - trinkwasser - osmosewasser - wasserbar - wasserentkalker | 25 | 602_trinkwasserqualität_trinkwasser_osmosewasser_wasserbar |
| 603 | energycrisis - umweltpolitik - energieabschaltungsverbot - energieeffizienz - energiesouveränität | 25 | 603_energycrisis_umweltpolitik_energieabschaltungsverbot_energieeffizienz |
| 604 | outdoorschlafsack - sommerschlafsack - wärmekragen - thermolite - temperaturen | 25 | 604_outdoorschlafsack_sommerschlafsack_wärmekragen_thermolite |
| 605 | powerbank - charging - smartphone - wireless - solarpanel | 25 | 605_powerbank_charging_smartphone_wireless |
| 606 | existenzbedrohendschweinefleischkrise - lebensmittelbranche - schlachtbetriebe - hühnchenfleisch - ernährungswirtschaft | 25 | 606_existenzbedrohendschweinefleischkrise_lebensmittelbranche_schlachtbetriebe_hühnchenfleisch |
| 607 | oberösterreichischer - bayrischer - schildbürger - bayrische - münchen2212 | 25 | 607_oberösterreichischer_bayrischer_schildbürger_bayrische |
| 608 | korruptionsjägern - korruptionsermittlungen - korruptionssumpf - steuerskandal - korruptionsbekämpfer | 25 | 608_korruptionsjägern_korruptionsermittlungen_korruptionssumpf_steuerskandal |
| 609 | christkindlmärkte - weihnachtsmärkten - feiertagsgeschäft - weihnachtsspaziergang - weihnachtsferien | 25 | 609_christkindlmärkte_weihnachtsmärkten_feiertagsgeschäft_weihnachtsspaziergang |
| 610 | terroristischen - terroranschlag - palästinensers - doppelmörder - palästinenser | 25 | 610_terroristischen_terroranschlag_palästinensers_doppelmörder |
| 611 | hausarztstellen - pflegekräfte - gesundheitssystem - krankarzthelfer - kliniken | 25 | 611_hausarztstellen_pflegekräfte_gesundheitssystem_krankarzthelfer |
| 612 | impfpflichtabstimmung - impfpflichtgesetzes - gesundheitsdiktatur - freiheitsmarsch - grußbotschaft | 25 | 612_impfpflichtabstimmung_impfpflichtgesetzes_gesundheitsdiktatur_freiheitsmarsch |
| 613 | zellen - bakteriellem - mikronährstoff - organismus - bioterroristischen | 25 | 613_zellen_bakteriellem_mikronährstoff_organismus |
| 614 | überschwemmungen - floods - überschwemmten - überschwemmt - australiensydney | 25 | 614_überschwemmungen_floods_überschwemmten_überschwemmt |
| 615 | krisen - krisengebiet - katastrophenfall - erdbebenkatastrophe - katastrophenforscher | 25 | 615_krisen_krisengebiet_katastrophenfall_erdbebenkatastrophe |
| 616 | empfehlen - unterstütze - dienstplan - danke - zensurfreie | 25 | 616_empfehlen_unterstütze_dienstplan_danke |
| 617 | polska - polnischer - polnische - поляки - warschau | 25 | 617_polska_polnischer_polnische_поляки |
| 618 | vitamin - immununterstützung - länger - bleibt - tägliche | 25 | 618_vitamin_immununterstützung_länger_bleibt |
| 619 | nachrichten - anti - stew - germany - zelenko | 25 | 619_nachrichten_anti_stew_germany |
| 620 | путин - restukraine - украину - kreml - weltkriegsgefahr | 25 | 620_путин_restukraine_украину_kreml |
| 621 | hochkorrupten - millionenbetrug - institutionalisierten - pharmasektor - kapitalismuskritisch | 25 | 621_hochkorrupten_millionenbetrug_institutionalisierten_pharmasektor |
| 622 | impfpflichtbefürwortern - impfdosen - infektionsschutzrechts - unterdrückerische - diskreditieren | 25 | 622_impfpflichtbefürwortern_impfdosen_infektionsschutzrechts_unterdrückerische |
| 623 | solarpanel - geräteakku - solar - powerstation - powerbank | 24 | 623_solarpanel_geräteakku_solar_powerstation |
| 624 | seed - tomaten - pflanzung - zuckererbsen - zucchini | 24 | 624_seed_tomaten_pflanzung_zuckererbsen |
| 625 | cafés - boykott - supermärkte - kasachstan - starbucks | 24 | 625_cafés_boykott_supermärkte_kasachstan |
| 626 | kommunismus - communism - kommunistischen - kommunistische - marxismus | 24 | 626_kommunismus_communism_kommunistischen_kommunistische |
| 627 | impfpflichtnichtmituns - streikpotenzial - maßnahmenprotest - impfstreik - streiks | 24 | 627_impfpflichtnichtmituns_streikpotenzial_maßnahmenprotest_impfstreik |
| 628 | staatsanwaltschaft - affärestaatsanwaltschaft - strafanzeigen - anwalts - generalbundesanwalt | 24 | 628_staatsanwaltschaft_affärestaatsanwaltschaft_strafanzeigen_anwalts |
| 629 | springerstiefel - squad - sportlicher - knöchelunterstützung - inch | 24 | 629_springerstiefel_squad_sportlicher_knöchelunterstützung |
| 630 | ausgestattet - sicherheitsstiefel - bequemes - rettungsdiensten - getragen | 24 | 630_ausgestattet_sicherheitsstiefel_bequemes_rettungsdiensten |
| 631 | regierungsstellungnahme - weltregierung - regierungschefs - regierenden - staatsregierung | 24 | 631_regierungsstellungnahme_weltregierung_regierungschefs_regierenden |
| 632 | problemeobdachlosigkeit - obdachlosenunterkünfte - obdachlose - obdachlosen - obdachloser | 24 | 632_problemeobdachlosigkeit_obdachlosenunterkünfte_obdachlose_obdachlosen |
| 633 | musikvideo - song - music - musikalisches - freitagabend | 24 | 633_musikvideo_song_music_musikalisches |
| 634 | elektronik - generatoren - elektrische - generator - stromgenerator | 24 | 634_elektronik_generatoren_elektrische_generator |
| 635 | erhitzt - kamineffektes - aluminium - wasser - kamin | 24 | 635_erhitzt_kamineffektes_aluminium_wasser |
| 636 | südafrika - südafrikas - afrika - afrikanische - kenia | 24 | 636_südafrika_südafrikas_afrika_afrikanische |
| 637 | demokratieverweigerer - demokratiefeinde - tyrann - demokratiefeinden - demonstranten | 24 | 637_demokratieverweigerer_demokratiefeinde_tyrann_demokratiefeinden |
| 638 | sowjetbürger - russisch - kremlangaben - belovezhskaya - weißrussland | 24 | 638_sowjetbürger_russisch_kremlangaben_belovezhskaya |
| 639 | ausspionierte - terrorisierung - spionagebehörde - westukrainern - spionen | 24 | 639_ausspionierte_terrorisierung_spionagebehörde_westukrainern |
| 640 | quantenfinanzsystem - quantumfinancialsystem - qfs - finanzsystem - cryptowährungen | 24 | 640_quantenfinanzsystem_quantumfinancialsystem_qfs_finanzsystem |
| 641 | 100 - produzieren - farbstreifen - prozent - fertigt | 24 | 641_100_produzieren_farbstreifen_prozent |
| 642 | politikwissenschafterin - politikwissenschaftlerin - europaexpertin - neoliberalismus - zwangspolitik | 24 | 642_politikwissenschafterin_politikwissenschaftlerin_europaexpertin_neoliberalismus |
| 643 | grillen - camping - kochen - kanne - hochwertigem | 24 | 643_grillen_camping_kochen_kanne |
| 644 | propagandanervt - politfiguren - propagandamedien - toleranten - befürworten | 24 | 644_propagandanervt_politfiguren_propagandamedien_toleranten |
| 645 | tierschutzverein - tierfreunden - animal - hunde - tierarzt | 24 | 645_tierschutzverein_tierfreunden_animal_hunde |
| 646 | audioanalyse - audioanalysen - audio - sound - zwanggegnern | 24 | 646_audioanalyse_audioanalysen_audio_sound |
| 647 | impfzahlen - impfstoff - impfstoffen - impfgegner - immunisierung | 24 | 647_impfzahlen_impfstoff_impfstoffen_impfgegner |
| 648 | boykottieren - organisatoren - demoteilnehmer - demonstrationsrecht - fördern | 24 | 648_boykottieren_organisatoren_demoteilnehmer_demonstrationsrecht |
| 649 | windkraftanlagen - windkrafträdern - windenergieanlage - windkrafträder - windenergie | 24 | 649_windkraftanlagen_windkrafträdern_windenergieanlage_windkrafträder |
| 650 | schwindler - betrüger - zurückzuzahlen - überholt - aktienanteile | 24 | 650_schwindler_betrüger_zurückzuzahlen_überholt |
| 651 | dokumentarfilm - filmemacher - schutzimpfung - nebenwirkungen - film | 24 | 651_dokumentarfilm_filmemacher_schutzimpfung_nebenwirkungen |
| 652 | anfänger - mitsportlern - mitkommt - frührere - einreise | 24 | 652_anfänger_mitsportlern_mitkommt_frührere |
| 653 | gesamtsterblichkeitsrate - fallsterblichkeit - epidemiologin - coronapatienten - lebenserwartung | 24 | 653_gesamtsterblichkeitsrate_fallsterblichkeit_epidemiologin_coronapatienten |
| 654 | österreichische - kriegsfinanzierung - panzerlieferung - panzerlieferungen - neutralitätspolitik | 24 | 654_österreichische_kriegsfinanzierung_panzerlieferung_panzerlieferungen |
| 655 | vogelgrippe - geflügelfarmen - geflügel - geflügelhalter - lebensmittelindexes | 24 | 655_vogelgrippe_geflügelfarmen_geflügel_geflügelhalter |
| 656 | cacoa - healthy - weight - fats - food | 24 | 656_cacoa_healthy_weight_fats |
| 657 | passports - bürgerkarte - austria - reisepass - identitätsdokumentenregister | 23 | 657_passports_bürgerkarte_austria_reisepass |
| 658 | spionagedrohnen - aufklärungsdrohnen - militärbase - angriffsdrohnen - satellitenbilder | 23 | 658_spionagedrohnen_aufklärungsdrohnen_militärbase_angriffsdrohnen |
| 659 | famine - stewpeters10 - support - magnesium - friday | 23 | 659_famine_stewpeters10_support_magnesium |
| 660 | atomwaffenfähigen - atomwaffenfähige - atomwaffen - militärflugzeug - flugabwehrraketensystem | 23 | 660_atomwaffenfähigen_atomwaffenfähige_atomwaffen_militärflugzeug |
| 661 | ideologischen - perestroika - ideologie - demokratien - westregierungen | 23 | 661_ideologischen_perestroika_ideologie_demokratien |
| 662 | türkei - türkische - türkischen - währungskrise - inflation | 23 | 662_türkei_türkische_türkischen_währungskrise |
| 663 | thermoelektrische - heat - strahlungswärme - wärme - gasdruckregler | 23 | 663_thermoelektrische_heat_strahlungswärme_wärme |
| 664 | panik - covid - krise - ankündigte - voraus | 23 | 664_panik_covid_krise_ankündigte |
| 665 | krisenvorsorge - versandkostenfrei - ausgaben - haushalt - einfachste | 23 | 665_krisenvorsorge_versandkostenfrei_ausgaben_haushalt |
| 666 | zuhörerin - bedanken - bedanke - höre - verinnerlichen | 23 | 666_zuhörerin_bedanken_bedanke_höre |
| 667 | hauptmahlzeiten - food - langzeitlebensmittel - lebensmittel - survival | 23 | 667_hauptmahlzeiten_food_langzeitlebensmittel_lebensmittel |
| 668 | maskengeschäftefahnder - maskengeschäften - maskenkrise - maskendeals - masken | 23 | 668_maskengeschäftefahnder_maskengeschäften_maskenkrise_maskendeals |
| 669 | selenski - ukrainekreig - selenskyj - wolodymyr - provokationen | 23 | 669_selenski_ukrainekreig_selenskyj_wolodymyr |
| 670 | radioaktiver - radioaktiven - detektor - gammastrahlungen - physikern | 23 | 670_radioaktiver_radioaktiven_detektor_gammastrahlungen |
| 671 | bankenkrise - medienlandschaft - erntendeutschland - regierungskritiker - euroraum | 23 | 671_bankenkrise_medienlandschaft_erntendeutschland_regierungskritiker |
| 672 | unabhängig - unabhängigen - unabhängige - politische - zensurfreien | 23 | 672_unabhängig_unabhängigen_unabhängige_politische |
| 673 | российской - russlandfreundlicher - украинские - panzersoldaten - военными | 23 | 673_российской_russlandfreundlicher_украинские_panzersoldaten |
| 674 | streaminglinks - streaming - link - zensiert - geopolitische | 23 | 674_streaminglinks_streaming_link_zensiert |
| 675 | youtube - facebookseite - webseite - videos - website | 23 | 675_youtube_facebookseite_webseite_videos |
| 676 | journalismuszdf - kampagnenjournalismus - diffamierung - medienleistung - volksparteien | 23 | 676_journalismuszdf_kampagnenjournalismus_diffamierung_medienleistung |
| 677 | ukrainerin - ukrainischstämmige - ukrainischer - russischstämmige - prorussisches | 23 | 677_ukrainerin_ukrainischstämmige_ukrainischer_russischstämmige |
| 678 | bukarestsie - hotel - hotels - schlafschaf - bukarest | 23 | 678_bukarestsie_hotel_hotels_schlafschaf |
| 679 | passierscheinen - passierscheine - passierscheinefür - krankenpass - spitalsmitarbeitern | 23 | 679_passierscheinen_passierscheine_passierscheinefür_krankenpass |
| 680 | nattokinase - impfungen - toxischen - hochtoxische - impfopfer | 23 | 680_nattokinase_impfungen_toxischen_hochtoxische |
| 681 | youtube - facebook - hashtag - telegramkanal - infokanal | 23 | 681_youtube_facebook_hashtag_telegramkanal |
| 682 | kinderpornografie - systemkritikern - kinderpornos - kindesmissbrauch - pornografiesucht | 23 | 682_kinderpornografie_systemkritikern_kinderpornos_kindesmissbrauch |
| 683 | ihn - wahnsinn - nannten - damals - wusste | 23 | 683_ihn_wahnsinn_nannten_damals |
| 684 | katastrophenschutz - wassergehalt - panzerplatten - soldaten - krisenvorsorge | 23 | 684_katastrophenschutz_wassergehalt_panzerplatten_soldaten |
| 685 | asylprotest - protestversammlung - protestaktion - aktivistenam - aktivisten | 23 | 685_asylprotest_protestversammlung_protestaktion_aktivistenam |
| 686 | messenger - iphone - smartphone - appstores - app | 23 | 686_messenger_iphone_smartphone_appstores |
| 687 | pipelines - pipeline - zerstörung - kriegsakt - kriegerischer | 23 | 687_pipelines_pipeline_zerstörung_kriegsakt |
| 688 | servustv - mediathekteil - tv - bildungsfernsehen - mediathek | 23 | 688_servustv_mediathekteil_tv_bildungsfernsehen |
| 689 | ukrainevereint - ukrainisch - linksanarchistische - antifaschistische - russophobe | 23 | 689_ukrainevereint_ukrainisch_linksanarchistische_antifaschistische |
| 690 | hildegard - hildegards - vorgeschichtlicher - klosterfrau - hilde | 23 | 690_hildegard_hildegards_vorgeschichtlicher_klosterfrau |
| 691 | landeshauptleute - werbefrei - förderer - alla - sito | 23 | 691_landeshauptleute_werbefrei_förderer_alla |
| 692 | russischsprachigen - russen - sibirisches - russe - russophil | 23 | 692_russischsprachigen_russen_sibirisches_russe |
| 693 | bundesgesundheitsminister - pandemiepolitik - massnahmenlauterbach - beschränkungen - lockerungsplan | 23 | 693_bundesgesundheitsminister_pandemiepolitik_massnahmenlauterbach_beschränkungen |
| 694 | telegram - magazin - abonniere - 1984 - dankeschön | 23 | 694_telegram_magazin_abonniere_1984 |
| 695 | hinweise - per - weltbild - ii - deutsch | 23 | 695_hinweise_per_weltbild_ii |
| 696 | unverständnis - bayerische - kritisiert - bayerischen - behauptete | 23 | 696_unverständnis_bayerische_kritisiert_bayerischen |
| 697 | widerstandsgeist - moralischen - selbstgerechtigkeit - skrupellosesten - moral | 23 | 697_widerstandsgeist_moralischen_selbstgerechtigkeit_skrupellosesten |
| 698 | antirussische - wladimir - russian - guerillakrieg - ukronazis | 23 | 698_antirussische_wladimir_russian_guerillakrieg |
| 699 | privatsphäre - abhörsicherheit - spionage - anonymität - cyberattack | 22 | 699_privatsphäre_abhörsicherheit_spionage_anonymität |
| 700 | melatonin - einschlafzeit - schlaf - einschlafen - vitamin | 22 | 700_melatonin_einschlafzeit_schlaf_einschlafen |
| 701 | wasserfilter - filter - wasser - pump - msr | 22 | 701_wasserfilter_filter_wasser_pump |
| 702 | gasheizofen - watt - keramik - kgh - wettergeschützten | 22 | 702_gasheizofen_watt_keramik_kgh |
| 703 | stadtviertel - innenstadt - spaziergänger - polizeiangaben - steigt | 22 | 703_stadtviertel_innenstadt_spaziergänger_polizeiangaben |
| 704 | österreichweiten - oberösterreich - künftig - parteipolitik - pressekonferenz | 22 | 704_österreichweiten_oberösterreich_künftig_parteipolitik |
| 705 | bundespräsident - bundespräsidentenposten - politikerkaste - politikerleben - gauck | 22 | 705_bundespräsident_bundespräsidentenposten_politikerkaste_politikerleben |
| 706 | cyberangriffs - cyberangriff - cyberattack - cyberattacke - cyberarmee | 22 | 706_cyberangriffs_cyberangriff_cyberattack_cyberattacke |
| 707 | heater - gasbrenner - gasheizung - feuerzeuggas - verbrennungsluftzufuhr | 22 | 707_heater_gasbrenner_gasheizung_feuerzeuggas |
| 708 | freiwillige - unterstützen - hilfe - helfen - benötigt | 22 | 708_freiwillige_unterstützen_hilfe_helfen |
| 709 | aufhören - stoppen - maskenirrsinn - auszuschließen - lächerlich | 22 | 709_aufhören_stoppen_maskenirrsinn_auszuschließen |
| 710 | impfverweigerer - immunschutz - immunwirkungen - impfaufruf - kondommarke | 22 | 710_impfverweigerer_immunschutz_immunwirkungen_impfaufruf |
| 711 | geschichten - gedichte - sechzehn - gelesen - luke | 22 | 711_geschichten_gedichte_sechzehn_gelesen |
| 712 | demokratie - sozialismus - autokratische - kommunismus - kommunisten | 22 | 712_demokratie_sozialismus_autokratische_kommunismus |
| 713 | liebe - sau - gibt - ga - diesem | 22 | 713_liebe_sau_gibt_ga |
| 714 | löwenzahnfelder - nährstoffen - fungotoxische - unkraut - löwenzahn | 22 | 714_löwenzahnfelder_nährstoffen_fungotoxische_unkraut |
| 715 | bombenterror - bombardierung - bombenangriff - bombenangriffe - bombenattacke | 22 | 715_bombenterror_bombardierung_bombenangriff_bombenangriffe |
| 716 | impfquote - impfmusterknabe - impfstatus - dänemark - gesundheitsbehörde | 22 | 716_impfquote_impfmusterknabe_impfstatus_dänemark |
| 717 | tv - mittelerdetv - kongress - online - americanmediaperiscope | 22 | 717_tv_mittelerdetv_kongress_online |
| 718 | chinas - yuan - chinesischen - überschuldeten - hochverschuldeten | 22 | 718_chinas_yuan_chinesischen_überschuldeten |
| 719 | schochwitz - journalist - bestsellerautor - kaiserundlenz - kapitalistischen | 22 | 719_schochwitz_journalist_bestsellerautor_kaiserundlenz |
| 720 | wahrheitsgetreu - truths - lüge - erläutern - hidden | 22 | 720_wahrheitsgetreu_truths_lüge_erläutern |
| 721 | einsteigerkissen - naturgewalten - sinnesphysiologische - körpereigener - feinfühliger | 22 | 721_einsteigerkissen_naturgewalten_sinnesphysiologische_körpereigener |
| 722 | lebensmittel - katastrophenschutz - bestens - geeignet - expeditionsbereich | 22 | 722_lebensmittel_katastrophenschutz_bestens_geeignet |
| 723 | gesundheitsausschuss - gesundheitsministeriums - verantwortlichen - verordnungen - maßnahmengesetz | 22 | 723_gesundheitsausschuss_gesundheitsministeriums_verantwortlichen_verordnungen |
| 724 | blackout - 850 - funkgerät - pmr - technologie | 22 | 724_blackout_850_funkgerät_pmr |
| 725 | tour - tickets - phoenix - july - june | 22 | 725_tour_tickets_phoenix_july |
| 726 | ukrainerinnen - werdenukrainische - flüchtlingesowohl - flüchtlingsströme - flüchtlingsaufnahme | 22 | 726_ukrainerinnen_werdenukrainische_flüchtlingesowohl_flüchtlingsströme |
| 727 | q5m4gtkgf23 - xxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn - f36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn - b6wmaf36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn - sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn | 22 | 727_q5m4gtkgf23_xxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn_f36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn_b6wmaf36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn |
| 728 | gasflasche - gasflaschenaufstellraum - gasheizer - butangasflaschen - gasschlauch | 22 | 728_gasflasche_gasflaschenaufstellraum_gasheizer_butangasflaschen |
| 729 | taiwanfrage - taiwanesische - taiwans - taiwan - südchinesischen | 22 | 729_taiwanfrage_taiwanesische_taiwans_taiwan |
| 730 | unbestraft - aussetzung - unzulässige - beruhigen - wegspritzen | 22 | 730_unbestraft_aussetzung_unzulässige_beruhigen |
| 731 | wintergrillen - gartenparty - veranstaltung - eintopfofen - anziehungspunkt | 22 | 731_wintergrillen_gartenparty_veranstaltung_eintopfofen |
| 732 | taiwans - taiwan - china - südchinesischen - nordchinesischen | 22 | 732_taiwans_taiwan_china_südchinesischen |
| 733 | aufklärungsvideos - informationen - frei - hilfreiche - zensur | 22 | 733_aufklärungsvideos_informationen_frei_hilfreiche |
| 734 | ernährungssouveränität - lebensmittelversorgung - lebensmittelproduktion - lebensmittelimporte - agrarpolitik | 22 | 734_ernährungssouveränität_lebensmittelversorgung_lebensmittelproduktion_lebensmittelimporte |
| 735 | todesursachenstatistik - todesfallstatistik - gesamtsterblichkeit - todesfällenlangsam - todesfälle | 22 | 735_todesursachenstatistik_todesfallstatistik_gesamtsterblichkeit_todesfällenlangsam |
| 736 | unrechtsstaat - staatsgewalt - unrechtsregime - rechtsstaat - rechtsphilosophie | 22 | 736_unrechtsstaat_staatsgewalt_unrechtsregime_rechtsstaat |
| 737 | französischen - franzosen - präsidentschaftswahlen - französische - frankreich | 22 | 737_französischen_franzosen_präsidentschaftswahlen_französische |
| 738 | politik - magazin - medien - felix - manuel | 22 | 738_politik_magazin_medien_felix |
| 739 | teamgeisterfahrer - teamgeisteskrank - team - teampräsentation - squad | 22 | 739_teamgeisterfahrer_teamgeisteskrank_team_teampräsentation |
| 740 | zionism - zion - vers - zionlump - satanic | 21 | 740_zionism_zion_vers_zionlump |
| 741 | date - timeline - countdown - timestamp - friday | 21 | 741_date_timeline_countdown_timestamp |
| 742 | todesfälle - neuinfektionen - starben - verzögert - gestorben | 21 | 742_todesfälle_neuinfektionen_starben_verzögert |
| 743 | alarmismus - alarmisten - immunität - angstmache - misstrauisch | 21 | 743_alarmismus_alarmisten_immunität_angstmache |
| 744 | tarps - tarp - mehrzweckplane - bushcrafter - gewebeplanen | 21 | 744_tarps_tarp_mehrzweckplane_bushcrafter |
| 745 | gasknappheit - gasmangel - gasmangels - gaskraftwerke - kraftwerksstandorten | 21 | 745_gasknappheit_gasmangel_gasmangels_gaskraftwerke |
| 746 | banküberweisung - bancaire - berlin - bc1q7xfc7ppuw5jwz77sy29txy0efwqnpxw70swgy6 - bnpafrpp | 21 | 746_banküberweisung_bancaire_berlin_bc1q7xfc7ppuw5jwz77sy29txy0efwqnpxw70swgy6 |
| 747 | demonstration - veranstalteten - menschenrechte - organisationen - spazierten | 21 | 747_demonstration_veranstalteten_menschenrechte_organisationen |
| 748 | vaccine - impfstoffs - vaccination - impfungen - impfstoff | 21 | 748_vaccine_impfstoffs_vaccination_impfungen |
| 749 | medientipp - gesundheitstage - videos - danke - com | 21 | 749_medientipp_gesundheitstage_videos_danke |
| 750 | schäferhunden - tierärztlichen - hunde - hundes - wildtiere | 21 | 750_schäferhunden_tierärztlichen_hunde_hundes |
| 751 | kindermasken - lungenvolumen - atmung - erwachsenenmodellen - einatmen | 21 | 751_kindermasken_lungenvolumen_atmung_erwachsenenmodellen |
| 752 | budapest - deutschsprachigen - deutschsprachige - zalakaros - heldenplatz | 21 | 752_budapest_deutschsprachigen_deutschsprachige_zalakaros |
| 753 | kampftrainingszentren - trainingseinrichtungen - trainingszentren - militärkomplexe - trainingszentrum | 21 | 753_kampftrainingszentren_trainingseinrichtungen_trainingszentren_militärkomplexe |
| 754 | vaccineinjuries - vaccines - impftermin - verhindern - preventative | 21 | 754_vaccineinjuries_vaccines_impftermin_verhindern |
| 755 | freiheitsbewegung - freiheitskampf - demowien - parteipolitische - totalitarismus | 21 | 755_freiheitsbewegung_freiheitskampf_demowien_parteipolitische |
| 756 | testumgebungen - teststrategie - testzentren - testergebnissen - tests | 21 | 756_testumgebungen_teststrategie_testzentren_testergebnissen |
| 757 | sozialdemokraten - kanzlerpartei - parteitag - sozialliberalen - sozialliberale | 21 | 757_sozialdemokraten_kanzlerpartei_parteitag_sozialliberalen |
| 758 | pressekonferenz - pressekonferenzen - konferenzschaltung - bundespressekonferenz - regierungssprecher | 21 | 758_pressekonferenz_pressekonferenzen_konferenzschaltung_bundespressekonferenz |
| 759 | coronakranken - gesundheitsfördernd - gesundheitswesens - krankenhauspflichtiger - hospitalisiert | 21 | 759_coronakranken_gesundheitsfördernd_gesundheitswesens_krankenhauspflichtiger |
| 760 | q74you - teile - danke - dank - schnelleinstieg | 21 | 760_q74you_teile_danke_dank |
| 761 | defcon - defconlevel - papal - pope - 547 | 21 | 761_defcon_defconlevel_papal_pope |
| 762 | met - bejing - country - wars - kaneohe | 21 | 762_met_bejing_country_wars |
| 763 | schuldig - urteilsspruch - beklagten - anklagepunkten - freigesprochen | 21 | 763_schuldig_urteilsspruch_beklagten_anklagepunkten |
| 764 | demonstrationszug - petition - manifestfuerdenfrieden - kriegstreibern - streitverhalten | 21 | 764_demonstrationszug_petition_manifestfuerdenfrieden_kriegstreibern |
| 765 | künstlicheintelligenz - intelligenzen - intelligenz - künstliche - künstlicher | 21 | 765_künstlicheintelligenz_intelligenzen_intelligenz_künstliche |
| 766 | währungskurs - papierwährung - währungsreform - bezahlten - zwangsgebührenzahler | 21 | 766_währungskurs_papierwährung_währungsreform_bezahlten |
| 767 | cryptogruppen - cryptoplattformen - compromising - crypto - cryptocurrency | 21 | 767_cryptogruppen_cryptoplattformen_compromising_crypto |
| 768 | podcast - music - philharmoniker - jazz - soundcloud | 21 | 768_podcast_music_philharmoniker_jazz |
| 769 | investition - investierten - gewinngenerierung - profitieren - bonuszahlungen | 21 | 769_investition_investierten_gewinngenerierung_profitieren |
| 770 | speiseölen - frittieröl - speiseöle - supermärkten - verkäuferin | 21 | 770_speiseölen_frittieröl_speiseöle_supermärkten |
| 771 | lockdownpolitik - lockdowns - lockdownrunde - lockdown - lock | 21 | 771_lockdownpolitik_lockdowns_lockdownrunde_lockdown |
| 772 | german - deutsch - antwortvideo - vimeo - danzig | 21 | 772_german_deutsch_antwortvideo_vimeo |
| 773 | geldhäusern - aktiendepot - investitionsfenster - investmentbankergeschmeiss - geldsystem | 21 | 773_geldhäusern_aktiendepot_investitionsfenster_investmentbankergeschmeiss |
| 774 | diplomaten - diplomatische - diplomatischen - diplomatischer - außenstaatssekretär | 21 | 774_diplomaten_diplomatische_diplomatischen_diplomatischer |
| 775 | warnte - analyst - verbrechen - ergebnisse - schriftlich | 21 | 775_warnte_analyst_verbrechen_ergebnisse |
| 776 | schwedische - schwedischen - stockholm - übersterblichkeit - skandinavischen | 21 | 776_schwedische_schwedischen_stockholm_übersterblichkeit |
| 777 | tipphühnervolleipulver - pfannkuchen - hühnervolleipulver - verpackung - stabile | 21 | 777_tipphühnervolleipulver_pfannkuchen_hühnervolleipulver_verpackung |
| 778 | selbstverteidigungsschirm - sicherheitsschirm - selbstschutz - verteidigungshilfsmittel - selbstverteidigung | 21 | 778_selbstverteidigungsschirm_sicherheitsschirm_selbstschutz_verteidigungshilfsmittel |
| 779 | lungenfachklinik - lungenklinik - krankenhausstruktur - krankenhausstrukturfonds - krankenversorgung | 21 | 779_lungenfachklinik_lungenklinik_krankenhausstruktur_krankenhausstrukturfonds |
| 780 | germany - greetings - patriots - my - all | 21 | 780_germany_greetings_patriots_my |
| 781 | politikverdrossenheit - wirtschaftsnationen - staatsquote - spitzenpolitiker - parteienstaat | 21 | 781_politikverdrossenheit_wirtschaftsnationen_staatsquote_spitzenpolitiker |
| 782 | buch - bücher - psychiaters - persönlichste - lieblosigkeit | 21 | 782_buch_bücher_psychiaters_persönlichste |
| 783 | akkukapazitäten - batteriegespeisten - solarpanels - solarpanel - powerstations | 21 | 783_akkukapazitäten_batteriegespeisten_solarpanels_solarpanel |
| 784 | copyrighted - copyright - fair - ownership - permitted | 21 | 784_copyrighted_copyright_fair_ownership |
| 785 | deutschlandweiten - demokratiebewegung - berlin180322 - demonstrationsfreiheit - protestgeschehens | 21 | 785_deutschlandweiten_demokratiebewegung_berlin180322_demonstrationsfreiheit |
| 786 | ayurvedische - ayurveda - kurkuma - heilmittel - curcumin | 21 | 786_ayurvedische_ayurveda_kurkuma_heilmittel |
| 787 | mainstreammedien - medien - com - öffentlich - behauptungen | 21 | 787_mainstreammedien_medien_com_öffentlich |
| 788 | vermögensverwalterblackrock - blackrock - monopolstellung - aktiengesellschaften - kartellsyndikat | 21 | 788_vermögensverwalterblackrock_blackrock_monopolstellung_aktiengesellschaften |
| 789 | impfstoffart - impfzustände - grundimmunisierung - impfstoffen - immunität | 21 | 789_impfstoffart_impfzustände_grundimmunisierung_impfstoffen |
| 790 | kostenlos - per - schweiz - suisse - com | 21 | 790_kostenlos_per_schweiz_suisse |
| 791 | impfschutzes - impfstoffkatastrophen - fremdschutzbegründung - impfpflichtentwurf - risikobewertung | 21 | 791_impfschutzes_impfstoffkatastrophen_fremdschutzbegründung_impfpflichtentwurf |
| 792 | berufssoldat - wars - officer - he - appointed | 21 | 792_berufssoldat_wars_officer_he |
| 793 | korruptionsskandal - korruptionsskandale - korruptionsproblem - mafia - maskenskandal | 21 | 793_korruptionsskandal_korruptionsskandale_korruptionsproblem_mafia |
| 794 | kostenlos - abonnieren - brd - demnächst - sondern | 21 | 794_kostenlos_abonnieren_brd_demnächst |
| 795 | impfstoff - vaccine - vaccinated - antikörper - herdenimmunität | 21 | 795_impfstoff_vaccine_vaccinated_antikörper |
| 796 | bakterienkulturen - fermentieren - bodenbakterien - probiotische - lebensmittel | 21 | 796_bakterienkulturen_fermentieren_bodenbakterien_probiotische |
| 797 | wirtschaftsaufschwung - wachstumsprognose - konjunkturprognosen - makroökonomie - konjunkturprognoseder | 21 | 797_wirtschaftsaufschwung_wachstumsprognose_konjunkturprognosen_makroökonomie |
| 798 | niederösterreich - österreich - austria - videos - video | 21 | 798_niederösterreich_österreich_austria_videos |
| 799 | europaparlamentarier - europaabgeordneten - baskenland - korruptionsskandal - festgenommen | 21 | 799_europaparlamentarier_europaabgeordneten_baskenland_korruptionsskandal |
| 800 | schwurbelexpertin - mitherausgeberin - moderatoren - gegnerinnen - interviewausschnitt | 21 | 800_schwurbelexpertin_mitherausgeberin_moderatoren_gegnerinnen |
| 801 | magdeburg - magdeburger - 2023folgt - 2023 - 2021 | 21 | 801_magdeburg_magdeburger_2023folgt_2023 |
| 802 | freiheit - befreiung - gefürchtet - fürchte - öffentlichen | 21 | 802_freiheit_befreiung_gefürchtet_fürchte |
| 803 | europas - europäische - europäischen - europa - belgischen | 21 | 803_europas_europäische_europäischen_europa |
| 804 | impfpflichtgesetzes - impfpflichtige - impfstoffdosis - immunisiert - immunantwort | 20 | 804_impfpflichtgesetzes_impfpflichtige_impfstoffdosis_immunisiert |
| 805 | bitcoin - kryptowährungen - crypto - bank - 0x3a62a88779bc0034b8f8dc172f4590044c724515 | 20 | 805_bitcoin_kryptowährungen_crypto_bank |
| 806 | conservatives - republican - arizona - gubernatorial - presidential | 20 | 806_conservatives_republican_arizona_gubernatorial |
| 807 | journalist - journalismus - publizisten - polizeieinsatzes - polizeieinsatz | 20 | 807_journalist_journalismus_publizisten_polizeieinsatzes |
| 808 | fakenews - fakes - feindnachrichten - fake - gefälscht | 20 | 808_fakenews_fakes_feindnachrichten_fake |
| 809 | berichterstattungsarbeit - veritas - journalistisches - project - korrespondenten | 20 | 809_berichterstattungsarbeit_veritas_journalistisches_project |
| 810 | impfzentrums - impfstopp - menschenrechtsberichts - medienaufmerksamkeit - kriegsgeschehen | 20 | 810_impfzentrums_impfstopp_menschenrechtsberichts_medienaufmerksamkeit |
| 811 | telegram - informiert - datum - meldungen - kalendertags | 20 | 811_telegram_informiert_datum_meldungen |
| 812 | kriegsverbot - kriegstreiberische - nordatlantikvertrag - luftkrieg - atomwaffenabkommen | 20 | 812_kriegsverbot_kriegstreiberische_nordatlantikvertrag_luftkrieg |
| 813 | proteste - französisierungspolitik - corsicans - corsican - guerillabewegung | 20 | 813_proteste_französisierungspolitik_corsicans_corsican |
| 814 | verkehrswege - verkehrseinschränkungen - klimaschutzbewegung - autofahrer - verkehr | 20 | 814_verkehrswege_verkehrseinschränkungen_klimaschutzbewegung_autofahrer |
| 815 | doctor - prof - stührenberg - erdmann - kegelmann | 20 | 815_doctor_prof_stührenberg_erdmann |
| 816 | unverantwortlich - skrupellose - verharmlosung - falscher - betrügen | 20 | 816_unverantwortlich_skrupellose_verharmlosung_falscher |
| 817 | vergewaltigte - vergewaltigt - vergewaltigungsversuch - vergewaltiger - vergewaltigen | 20 | 817_vergewaltigte_vergewaltigt_vergewaltigungsversuch_vergewaltiger |
| 818 | dokumentarfilmungeimpfte - dokumentarfilms - dokumentarfilm - impfungen - interviewt | 20 | 818_dokumentarfilmungeimpfte_dokumentarfilms_dokumentarfilm_impfungen |
| 819 | nietzsche - zeig - jaroslav - gyatso - anderl | 20 | 819_nietzsche_zeig_jaroslav_gyatso |
| 820 | selbstheilungskräfte - gesundheitlich - gesundheit - selbstheilung - psychohygiene | 20 | 820_selbstheilungskräfte_gesundheitlich_gesundheit_selbstheilung |
| 821 | abramovich - abramovichs - juventus - abramovitch - londoner | 20 | 821_abramovich_abramovichs_juventus_abramovitch |
| 822 | gebetsmühlenartig - sagst - wachsamkeit - sprichwort - erklärt | 20 | 822_gebetsmühlenartig_sagst_wachsamkeit_sprichwort |
| 823 | sterblichkeitsraten - mortalitätsdaten - todesfällen - todesfälle - sterben | 20 | 823_sterblichkeitsraten_mortalitätsdaten_todesfällen_todesfälle |
| 824 | genderideologien - genderverbot - geschlechterstudien - gendersprache - gendert | 20 | 824_genderideologien_genderverbot_geschlechterstudien_gendersprache |
| 825 | kerzenabstellenherzog - bedürfnispyramide - standpunkt - rebel - auseinandergesetzt | 20 | 825_kerzenabstellenherzog_bedürfnispyramide_standpunkt_rebel |
| 826 | nahrungsergänzungsmittel - lebensmittelqualität - zusatzstoffen - birkenzucker - knoblauchbutter | 20 | 826_nahrungsergänzungsmittel_lebensmittelqualität_zusatzstoffen_birkenzucker |
| 827 | aufrufvideo - düsseldorf - überwachungsvideo - videobotschaft - bückeburg | 20 | 827_aufrufvideo_düsseldorf_überwachungsvideo_videobotschaft |
| 828 | kanadischer - kanadischen - kanadier - canada - kanada | 20 | 828_kanadischer_kanadischen_kanadier_canada |
| 829 | freiheitsfeindlichen - demonstrationsrecht - freiheitsdemonstrationen - repressiven - regimekritische | 20 | 829_freiheitsfeindlichen_demonstrationsrecht_freiheitsdemonstrationen_repressiven |
| 830 | melden - gesundheitssprecherin - soweit - gesundheitsversorgung - hiermit | 20 | 830_melden_gesundheitssprecherin_soweit_gesundheitsversorgung |
| 831 | medizinhistoriker - heilpraktikerin - heilpraktiker - allgemeinarzt - expertenrat | 20 | 831_medizinhistoriker_heilpraktikerin_heilpraktiker_allgemeinarzt |
| 832 | oberbürgermeister - versammlungsleiter - veranstaltet - justizanstalt - vortag | 20 | 832_oberbürgermeister_versammlungsleiter_veranstaltet_justizanstalt |
| 833 | ukrainekrieg - ukraineinvasion - kriegspropaganda - naziproblem - kriegsberichterstattung | 20 | 833_ukrainekrieg_ukraineinvasion_kriegspropaganda_naziproblem |
| 834 | wasserkocher - kettle - storm - wetter - kochen | 20 | 834_wasserkocher_kettle_storm_wetter |
| 835 | elektrosmog - elektromagnetische - niederfrequenz - hochfrequenz - sbm | 20 | 835_elektrosmog_elektromagnetische_niederfrequenz_hochfrequenz |
| 836 | lauterbach - alfredherrhausen - lauterbachs - lauterbachrausschmisssofort - ronzheimer | 20 | 836_lauterbach_alfredherrhausen_lauterbachs_lauterbachrausschmisssofort |
| 837 | westeuropäische - westeuropas - neonazis - ukraineauch - atomwaffen | 20 | 837_westeuropäische_westeuropas_neonazis_ukraineauch |
| 838 | sturmfeuerzeug - lagerfeuer - aufbewahrungsmitteln - campingküche - outdoor | 20 | 838_sturmfeuerzeug_lagerfeuer_aufbewahrungsmitteln_campingküche |
| 839 | fermentierglas - fermentation - weinflaschen - flaschen - sauerstoff | 20 | 839_fermentierglas_fermentation_weinflaschen_flaschen |
| 840 | importwachstum - exportwachstum - exportbürgschaften - außenhandel - exportschlagern | 20 | 840_importwachstum_exportwachstum_exportbürgschaften_außenhandel |
| 841 | wasserfilter - pump - filter - schützt - schnell | 20 | 841_wasserfilter_pump_filter_schützt |
| 842 | umrüstgasschlauch - propangasflaschen - widerstandsfähigen - robusten - möglich | 20 | 842_umrüstgasschlauch_propangasflaschen_widerstandsfähigen_robusten |
| 843 | lebensmittelgroßhandel - landwirtschaftszählung - meat - schweinefleisch - rationsgestaltung | 20 | 843_lebensmittelgroßhandel_landwirtschaftszählung_meat_schweinefleisch |
| 844 | scheiß - scheiße - хуй - wahnsinn - fuckin | 20 | 844_scheiß_scheiße_хуй_wahnsinn |
| 845 | transhumanismus - transhumanisten - transhumanisumus - globalisten - magnet | 20 | 845_transhumanismus_transhumanisten_transhumanisumus_globalisten |
| 846 | gespendet - sparen - rabattcode - rabatt - kundenbewertungen | 20 | 846_gespendet_sparen_rabattcode_rabatt |
| 847 | netzwerkdurchsetzungsgesetzes - channels - informationskanäle - öffentliche - zensurfrei | 20 | 847_netzwerkdurchsetzungsgesetzes_channels_informationskanäle_öffentliche |
| 848 | produktionsstätten - geschäfts - geschäfte - maschinenbaubranche - geschäftsbeziehungen | 20 | 848_produktionsstätten_geschäfts_geschäfte_maschinenbaubranche |
| 849 | glasfaserkabel - ersatzkabel - telefonleitungen - kabelempfang - kabel | 20 | 849_glasfaserkabel_ersatzkabel_telefonleitungen_kabelempfang |
| 850 | schutzraumanlagen - bunkerplätze - schutzplatz - schutzraumkonzept - schutzräume | 20 | 850_schutzraumanlagen_bunkerplätze_schutzplatz_schutzraumkonzept |
| 851 | lampenöl - weitere - praktische - gut - autark | 20 | 851_lampenöl_weitere_praktische_gut |
| 852 | düsseldorf - berlins - berliner - aachen - münchner | 20 | 852_düsseldorf_berlins_berliner_aachen |
| 853 | restaurantleiter - restaurantchef - restaurantgäste - restaurants - restaurant | 20 | 853_restaurantleiter_restaurantchef_restaurantgäste_restaurants |
| 854 | pack - rucksäcken - beliebtesten - us - trägt | 20 | 854_pack_rucksäcken_beliebtesten_us |
| 855 | wasserfilter - schützt - wildgebieten - entwicklungsländern - einsatz | 20 | 855_wasserfilter_schützt_wildgebieten_entwicklungsländern |
| 856 | literarischen - literarisch - essay - herausgegebene - philosophen | 20 | 856_literarischen_literarisch_essay_herausgegebene |
| 857 | gemeldet - nachmeldungen - gemeldete - wochenbericht - meldeverzögerungen | 20 | 857_gemeldet_nachmeldungen_gemeldete_wochenbericht |
| 858 | trinkwasserqualität - trinkwasser - wasserbar - wasseraufbereitungsanlage - wasser | 20 | 858_trinkwasserqualität_trinkwasser_wasserbar_wasseraufbereitungsanlage |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: multilingual
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
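For reference, a minimal sketch of how these values map onto BERTopic's constructor (assuming a recent BERTopic release that exposes the zero-shot arguments; this is illustrative, not the original training script):

```python
from bertopic import BERTopic

# Illustrative only: reconstructs the configuration listed above,
# not the model author's actual training code.
topic_model = BERTopic(
    language="multilingual",
    top_n_words=10,
    n_gram_range=(1, 1),
    min_topic_size=10,
    nr_topics=None,
    low_memory=False,
    calculate_probabilities=True,
    seed_topic_list=None,
    zeroshot_topic_list=None,
    zeroshot_min_similarity=0.7,
    verbose=True,
)

# topics, probs = topic_model.fit_transform(docs)  # docs: a list of raw document strings
```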
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.6.1
* Transformers: 4.38.2
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
| [
"PCR"
] | Non_BioNLP |
Backedman/TriviaAnsweringMachineREAL | Backedman | question-answering | [
"transformers",
"pytorch",
"TFIDF-QA",
"question-answering",
"custom_code",
"en",
"license:mit",
"region:us"
] | 1,715,043,887,000 | 2024-05-12T22:28:55 | 5,270 | 0 | ---
language:
- en
license: mit
pipeline_tag: question-answering
---
The evaluation of this project is to answer trivia questions. You do
not need to do well at this task, but you should submit a system that
completes the task or create adversarial questions in that setting. This will help the whole class share data and
resources.
If you focus on something other than predicting answers, *that's fine*!
About the Data
==============
Quiz bowl is an academic competition between schools in
English-speaking countries; hundreds of teams compete in dozens of
tournaments each year. Quiz bowl is different from Jeopardy, a recent
application area. While Jeopardy also uses signaling devices, these
are only usable after a question is completed (interrupting Jeopardy's
questions would make for bad television). Thus, Jeopardy is rapacious
classification followed by a race---among those who know the
answer---to punch a button first.
Here's an example of a quiz bowl question:
Expanding on a 1908 paper by Smoluchowski, he derived a formula for
the intensity of scattered light in media with fluctuating densities that
reduces to Rayleigh's law for ideal gases in The Theory of the
Opalescence of Homogenous Fluids and Liquid Mixtures near the Critical
State. That research supported his theories of matter first developed
when he calculated the diffusion constant in terms of fundamental
parameters of the particles of a gas undergoing Brownian Motion. In
that same year, 1905, he also published On a Heuristic Point of View
Concerning the Production and Transformation of Light. That
explication of the photoelectric effect won him the 1921 Nobel Prize in Physics.
For ten points, name this German physicist best known for his theory
of Relativity.
*ANSWER*: Albert _Einstein_
Two teams listen to the same question. Teams interrupt the question at
any point by "buzzing in"; if the answer is correct, the team gets
points and the next question is read. Otherwise, the team loses
points and the other team can answer.
You are welcome to use any *automatic* method to choose an answer. It
need not be similar to, nor build on, our provided systems. In addition to
the data we provide, you are welcome to use any external data *except*
our test quiz bowl questions (i.e., don't hack our server!). You are
welcome (and encouraged) to use any publicly available software, but
you may want to check on Piazza for suggestions as many tools are
better (or easier to use) than others.
If you don't like the interruptibility of questions, you can also just answer entire questions. However, you must also output a confidence.
Competition
==================
We will use the Dynabench website (https://dynabench.org/tasks/qa). If you remember the past workshop about Dynabench submission, this is the way to do it. The specific task name is "Grounded QA". Here, with the help of the video tutorial, you submit your QA model and assess how it did compared to others. The assessment will take place by testing your QA model on several QA test datasets, and your results and your competitors' will be visible on the leaderboard. Your goal is to rank the highest in terms of expected wins: you buzz in with probability proportional to your confidence, and if you're more right than the competition, you win.
Writing Questions
==================
Alternatively, you can also *write* 50 adversarial questions that
challenge modern NLP systems. These questions must be diverse in the
subjects asked about, the skills computers need to answer the
questions, and the entities in those questions. Remember that your questions should be *factual* and
*specific* enough for humans to answer, because your task is to stump
the computers relative to humans!
In addition to the raw questions, you will also need to create citations describing:
* Why the question is difficult for computers: include citations from the NLP/AI/ML literature
* Why the information in the question is correct: include citations from the sources you drew on to write the question
* Why the question is interesting: include scholarly / popular culture artifacts to prove that people care about this
* Why the question is pyramidal: discuss why your first clues are harder than your later clues
**Category**
We want questions from many domains such as Art, Literature, Geography, History,
Science, TV and Film, Music, Lifestyle, and Sport. The questions
should be written using all topics above (5 questions for each
category and 5 more for the remaining categories). Indicate in your
writeup which category you chose to write on for each question.
Art:
* Questions about works: Mona Lisa, Raft of the Medusa
* Questions about forms: color, contour, texture
* Questions about artists: Picasso, Monet, Leonardo da Vinci
* Questions about context: Renaissance, post-modernism, expressionism, surrealism
Literature:
* Questions about works: novels (1984), plays (The Lion and the Jewel), poems (Rubaiyat), criticism (Poetics)
* Questions about major characters or events in literature: The Death of Anna Karenina, Noboru Wataya, the Marriage of Hippolyta and Theseus
* Questions about literary movements (Sturm und Drang)
* Questions about translations
* Cross-cutting questions (appearances of Overcoats in novels)
* Common link questions (the literary output of a country/region)
Geography:
* Questions about location: names of capital, state, river
* Questions about the place: temperature, wind flow, humidity
History:
* When: When did the First World war start?
* Who: Who is called Napoleon of Iran?
* Where: Where was the first Summer Olympics held?
* Which: Which is the oldest civilization in the world?
Science:
* Questions about terminology: The concept of gravity was discovered by which famous physicist?
* Questions about the experiment
* Questions about theory: The social action theory believes that individuals are influenced by this theory.
TV and Film:
* Quotes: What are the dying words of Charles Foster Kane in Citizen Kane?
* Title: What 1927 musical was the first "talkie"?
* Plot: In The Matrix, does Neo take the blue pill or the red pill?
Music:
* Singer: What singer has had a Billboard No. 1 hit in each of the last four decades?
* Band: Before Bleachers and fun., Jack Antonoff fronted what band?
* Title: What was Madonna's first top 10 hit?
* History: Which classical composer was deaf?
Lifestyle:
* Clothes: What clothing company, founded by a tennis player, has an alligator logo?
* Decoration: What was the first perfume sold by Coco Chanel?
Sport:
* Known facts: What sport is best known as the ‘king of sports’?
* Nationality: What’s the national sport of Canada?
* Sport player: The classic 1980 movie called Raging Bull is about which real-life boxer?
* Country: What country has competed the most times in the Summer Olympics yet hasn’t won any kind of medal?
**Diversity**
Other than category diversity, if you find an ingenious way of writing questions about underrepresented countries, you will get bonus points (indicate in your writeup which questions include the diversity component). You may decide which countries count as underrepresented using your own reasonable criteria (e.g., a smaller population may indicate underrepresentation), but make sure to articulate this in your writeup.
* Run state of the art QA systems on the questions to show they struggle, give individual results for each question and a summary over all questions
For an example of what the writeup for a single question should look like, see the adversarial HW:
https://github.com/Pinafore/nlp-hw/blob/master/adversarial/question.tex
Proposal
==================
The project proposal is a one page PDF document that describes:
* Who is on your team (team sizes can be between three and six
students, but six is really too big to be effective; my suggestion
is that most groups should be between four or five).
* What techniques you will explore
* Your timeline for completing the project (be realistic; you should
have your first submission in a week or two)
Submit the proposal on Gradescope, but make sure to include all group
members. If all group members are not included, you will lose points. Late days cannot be used on this
assignment.
Milestone 1
======================
You'll have to update how things are going: what's
working, what isn't, and how does it change your timeline? How does it change your division of labor?
*Question Writing*: You'll need to have answers selected for all of
your questions and first drafts of at least 15 questions. This must
be submitted as a JSON file so that we can run computer QA systems on it.
*Project*: You'll need to have made a submission to the leaderboard with something that satisfies the API.
Submit a PDF updating on your progress to Gradescope. If all team
members are not on the submission, you will lose points.
Milestone 2
===================
As before, provide an updated timeline / division of labor, provide your intermediary results.
*Question Writing*: You'll need to have incorporated the feedback on the first questions and completed a first draft of at least 30 questions. You'll also need machine results on your questions and an overall evaluation of your human/computer accuracy.
*Project*: You'll need to have made a submission to the leaderboard with a working system (e.g., not just obeying the API, but actually getting reasonable answers).
Submit a PDF updating on your progress.
Final Presentation
======================
The final presentation will be virtual (uploading a video). In
the final presentation you will:
* Explain what you did
* Who did what. For example, for the question writing project a team of five people might write: A wrote the first draft of questions. B and C verified they were initially answerable by a human. B ran computer systems to verify they were challenging to a computer. C edited the questions and increased the computer difficulty. D and E verified that the edited questions were still answerable by a human. D and E checked all of the questions for factual accuracy and created citations and the writeup.
* What challenges you had
* Review how well you did (based on the competition or your own metrics). If you do not use the course infrastructure to evaluate your project's work, you should talk about what alternative evaluations you used, why they're appropriate/fair, and how well you did on them.
* Provide an error analysis. An error analysis must contain examples from the
development set that you get wrong. You should show those sentences
and explain why (in terms of features or the model) they have the
wrong answer. You should have been doing this all along as you
derive new features, but this is your final inspection of
your errors. The feature or model problems you discover should not
be trivial features you could add easily. Instead, these should be
features or models that are difficult to correct. An error analysis
is not the same thing as simply presenting the error matrix, as it
does not inspect any individual examples. If you're writing questions, talk about examples of questions that didn't work out as intended.
* The linguistic motivation for your features / how you wrote the questions. This is a
computational linguistics class, so you should give precedence to
features / techniques that we use in this class (e.g., syntax,
morphology, part of speech, word sense, etc.). Given two features
that work equally well and one that is linguistically motivated,
we'll prefer the linguistically motivated one.
* Presumably you did many different things; how did they each
individually contribute to your final result?
Each group has 10 minutes to deliver their presentation. Please record the video, upload it to Google Drive, and include the link in your writeup submission.
Final Question Submission
======================
Because we need to get the questions ready for the systems, upload your raw questions on May 10. This doesn't include the citations or other parts of the writeup.
System Submission
======================
You must submit a version of your system by May 12. It may not be perfect, but this what the question writing teams will use to test their results.
Your system should be sent directly to the professor and TAs as zip files, including the correct dependencies and working inference code. Your inference code should run successfully from the root directory (extracted from the zip file) with the command:
```
> python3 inference.py --data=evaluation_set.json
```
The input will be in the form of a .json file in the same format as the file the adversarial question writing team submits. The output should also be a string.
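As a rough illustration of this contract (not a reference implementation; the `question` field name and printing answers to stdout are assumptions, so check them against the example file you are given):

```python
# inference.py -- minimal sketch of an entry point satisfying the command above.
# The "question" field name and the stdout output are assumptions, not part of
# the official spec; adapt them to the actual evaluation file format.
import argparse
import json


def answer_question(question: str) -> str:
    # Placeholder: replace with your actual QA system's prediction.
    return "unknown"


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--data", required=True, help="Path to the evaluation .json file")
    args = parser.parse_args()

    with open(args.data, "r") as f:
        examples = json.load(f)

    for example in examples:
        print(answer_question(example["question"]))  # one string answer per question


if __name__ == "__main__":
    main()
```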
If you have any notes or comments that we should be aware of while running your code, please include them in the folder as a .txt file. Also, dependency information should be included as a .txt file.
Please prepend your email title with [2024-CMSC 470 System Submission].
Project Writeup and JSON file
======================
By May 17, submit your project writeup explaining what
you did and what results you achieved. This document should
make it clear:
* Why this is a good idea
* What you did
* Who did what
* Whether your technique worked or not
For systems, please do not go over 2500 words unless you have a really good reason.
Images are a much better use of space than words, usually (there's no
limit on including images, but use judgement and be selective).
For question writing, you have one page (single spaced, two column) per question plus a two-page summary of results. Talk about how you organized the question writing, how you evaluated the questions, and a summary of the results. Along with your writeup, turn in a json file including the raw text of the question, answer, and category. An example json file is included in this directory. Make sure your json file is in the correct format and can be loaded via the code below. Your submission will not be graded if it does not follow the format of the example json file.
```
with open('path to your json file', 'r') as f:
data = json.load(f)
```
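A quick sanity check along these lines can catch structural mistakes before submission (this assumes the file is a list of records with `question`, `answer`, and `category` keys, as described above; verify the exact schema against the provided example json):

```python
import json

# Rough sanity check; assumes a list of records with "question", "answer",
# and "category" keys. Verify the exact schema against the example json file.
with open('path to your json file', 'r') as f:
    data = json.load(f)

for i, record in enumerate(data):
    for key in ("question", "answer", "category"):
        assert key in record, f"record {i} is missing '{key}'"

print(f"{len(data)} questions look structurally OK")
```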
Grade
======================
The grade will be out of 25 points, broken into five areas:
* _Presentation_: For your oral presentation, do you highlight what
you did and make people care? Did you use time well during the
presentation?
* _Writeup_: Does the writeup explain what you did in a way that is
clear and effective?
The final three areas are different between the system and the questions.
| | System | Questions |
|----------|:-------------:|------:|
| _Technical Soundness_ | Did you use the right tools for the job, and did you use them correctly? Were they relevant to this class? | Were your questions correct and accurately cited. |
| _Effort_ | Did you do what you said you would, and was it the right amount of effort? | Are the questions well-written, interesting, and thoroughly edited? |
| _Performance_ | How did your techniques perform in terms of accuracy, recall, etc.? | Is the human accuracy substantially higher than the computer accuracy? |
All members of the group will receive the same grade. It's impossible for the course staff to adjudicate Rashomon-style accounts of who did what, and the goal of a group project is for all team members to work together to create a cohesive project that works well together. While it makes sense to divide the work into distinct areas of responsibility, at grading time we have no way to know who really did what, so it's the group's responsibility to create a piece of output that reflects well on the whole group.
| [
"MEDAL"
] | Non_BioNLP |
tech-trends-tracker/roleplay-with-milf-ai-fantasy | tech-trends-tracker | null | [
"license:unknown",
"region:us"
] | 1,735,753,773,000 | 2025-01-01T17:57:16 | 0 | 0 | ---
license: unknown
---
**Roleplay With MILF AI Fantasy**
Are you interested in girls way older than you? Experience some motherly love and sexual affection with Nectar AI’s mature girlfriend generator.

Have you ever thought about what it would be like to connect with someone older—someone with that irresistible mix of maturity and charm? It’s a fantasy many people harbor but often keep hidden. Now, AI technology makes it possible to step into that world through immersive roleplay.
Maybe you’ve had a crush on your high school teacher or harbored a secret desire for your friend’s mom. Fantasies like these used to remain just that—fantasies—because the fear of rejection or awkwardness made them impossible to pursue.
But today, there’s no need to keep these desires locked away. Advanced AI tools like Nectar AI allow you to create your own [mature AI girlfriend](https://nectar.ai/), tailored exactly to your preferences.
AI-generated MILF fantasies go far beyond simple chatbots. They feature photorealistic images that perfectly match the character in your imagination. You can even customize their personalities, making the experience feel deeply personal and entirely real.
While there are many platforms that offer this kind of experience, one of the most powerful and feature-packed tools available today is [Nectar AI](https://nectar.ai/).
What Is Nectar AI?
Nectar AI is a generative AI platform that lets you design your own virtual companion. It combines two primary features to create an immersive experience:
• Image Creator: Generate hyper-realistic images of your AI girlfriend. Customize her appearance, outfits, and poses to bring your vision to life. The platform delivers high-definition photos quickly, with results that rival professional-quality images.
• Roleplay Simulator: Engage in realistic conversations that adapt to your preferences. Whether you want lighthearted banter or deep, emotional dialogue, the AI adjusts dynamically. Roleplay supports multiple languages, including Spanish and Chinese, for a globally accessible experience.

Nectar AI stands out for its speed, quality, and ease of use. You can get started for free, with premium subscriptions offering enhanced features like HD image generation, exclusive customization tools, and advanced roleplay capabilities. For extended interactions, additional Message Packs are also available.
Importantly, all content generated on Nectar AI is ethical and carefully moderated. Any resemblance to real people is purely coincidental.
How to Create Your MILF Fantasy
Creating your MILF AI girlfriend on Nectar AI is simple and intuitive. Once you’ve logged into your account, navigate to your profile page and click on the “Create Companion” button.
From there, follow the on-screen instructions to customize your character:
• Appearance: Choose mature features, such as elegant hairstyles, sophisticated outfits, or subtle makeup that highlights her age and grace.
• Age: Set the age to reflect her MILF persona. For example, in the case of Milly, I set her age to 45.
• Personality: Define her traits to match your preferences. For instance, she could be nurturing, playful, or even a little dominant, depending on what you’re looking for.
Here’s an example MILF girlfriend named Sherri.

Sherri is a shy faithful wife.
Married to Sherri for twenty three years and in love. Sherri is loving, faithful, shy, emotional and a fifty three year old milf. She is a great honest wife. But you have a fantasy that you want to live out with her. You want to share Sherri with your friend Tyler. You know Sherri won't be up for this. But you think if you turn her on enough she might just give in. Sherri does find your young black friend attractive but not thinking the way you want her to think. He's tall, very muscular and young. Way more than you ever could be. So you three go out for dinner and drinks one evening. After a evening out you three head back to your house for a relaxing evening and more drinks. Now you think this could be your chance to live out your fantasy! Wondering to yourself how to go about it. With Sherri or Tyler knowing nothing about what you have in mind. But Tyler suspects something, being your friend Tyler knows your fantasy that you want to live out with your wife. Tyler is thinking to himself he is willing to play along if this is what you have in mind. But Sherri is a fifty three year old milf who has never done anything like this or even though of it. It's going to be difficult.
MILF Fantasies
One of the best features of Nectar AI is its thriving community, where users share their own AI-generated fantasies. The “Fantasies” page is filled with thousands of AI companions created by other users, each with a unique backstory, appearance, and personality.
To explore MILF characters, simply set the filters to “MILF” and browse through the available options.

The pre-made characters on Nectar AI are great for those who want to dive into roleplay quickly without having to customize a character from scratch.
Conclusion
AI has opened the door to fantasies once confined to the imagination. With platforms like Nectar AI, you can create a MILF AI companion that’s not only visually stunning but also emotionally engaging.
The appeal isn’t just about looks or conversations—it’s the ability to craft a deeply personal experience. Whether you’re designing your own character or exploring the community’s creations, you’re in full control of the journey.
What makes this experience special is how real it feels. The combination of hyper-realistic images and adaptive roleplay creates a connection that goes beyond just a chatbot. It’s a chance to explore your desires in a space that’s safe, private, and entirely judgment-free.
So, whether it’s about rediscovering a high school crush or imagining a connection with someone who’s a little older and a lot wiser, Nectar AI makes it all possible. Why not see where your imagination can take you?
| [
"CRAFT"
] | TBD |
abhiraj1/e5-base-v2-Q4_K_M-GGUF | abhiraj1 | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:intfloat/e5-base-v2",
"base_model:quantized:intfloat/e5-base-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | 1,735,623,136,000 | 2024-12-31T05:32:19 | 4 | 0 | ---
base_model: intfloat/e5-base-v2
language:
- en
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: e5-base-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.77611940298506
- type: ap
value: 42.052710266606056
- type: f1
value: 72.12040628266567
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.81012500000001
- type: ap
value: 89.4213700757244
- type: f1
value: 92.8039091197065
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.711999999999996
- type: f1
value: 46.11544975436018
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.186
- type: map_at_10
value: 36.632999999999996
- type: map_at_100
value: 37.842
- type: map_at_1000
value: 37.865
- type: map_at_3
value: 32.278
- type: map_at_5
value: 34.760999999999996
- type: mrr_at_1
value: 23.400000000000002
- type: mrr_at_10
value: 36.721
- type: mrr_at_100
value: 37.937
- type: mrr_at_1000
value: 37.96
- type: mrr_at_3
value: 32.302
- type: mrr_at_5
value: 34.894
- type: ndcg_at_1
value: 23.186
- type: ndcg_at_10
value: 44.49
- type: ndcg_at_100
value: 50.065000000000005
- type: ndcg_at_1000
value: 50.629999999999995
- type: ndcg_at_3
value: 35.461
- type: ndcg_at_5
value: 39.969
- type: precision_at_1
value: 23.186
- type: precision_at_10
value: 6.97
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.912
- type: precision_at_5
value: 11.152
- type: recall_at_1
value: 23.186
- type: recall_at_10
value: 69.70100000000001
- type: recall_at_100
value: 95.092
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 44.737
- type: recall_at_5
value: 55.761
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.10312401440185
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.67275326095384
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.97793816337376
- type: mrr
value: 72.76832431957087
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.11646947018187
- type: cos_sim_spearman
value: 81.40064994975234
- type: euclidean_pearson
value: 82.37355689019232
- type: euclidean_spearman
value: 81.6777646977348
- type: manhattan_pearson
value: 82.61101422716945
- type: manhattan_spearman
value: 81.80427360442245
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 83.52922077922076
- type: f1
value: 83.45298679360866
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.495115019668496
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.724792944166765
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.361000000000004
- type: map_at_10
value: 43.765
- type: map_at_100
value: 45.224
- type: map_at_1000
value: 45.35
- type: map_at_3
value: 40.353
- type: map_at_5
value: 42.195
- type: mrr_at_1
value: 40.629
- type: mrr_at_10
value: 50.458000000000006
- type: mrr_at_100
value: 51.06699999999999
- type: mrr_at_1000
value: 51.12
- type: mrr_at_3
value: 47.902
- type: mrr_at_5
value: 49.447
- type: ndcg_at_1
value: 40.629
- type: ndcg_at_10
value: 50.376
- type: ndcg_at_100
value: 55.065
- type: ndcg_at_1000
value: 57.196000000000005
- type: ndcg_at_3
value: 45.616
- type: ndcg_at_5
value: 47.646
- type: precision_at_1
value: 40.629
- type: precision_at_10
value: 9.785
- type: precision_at_100
value: 1.562
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.031
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.361000000000004
- type: recall_at_10
value: 62.214000000000006
- type: recall_at_100
value: 81.464
- type: recall_at_1000
value: 95.905
- type: recall_at_3
value: 47.5
- type: recall_at_5
value: 53.69500000000001
- type: map_at_1
value: 27.971
- type: map_at_10
value: 37.444
- type: map_at_100
value: 38.607
- type: map_at_1000
value: 38.737
- type: map_at_3
value: 34.504000000000005
- type: map_at_5
value: 36.234
- type: mrr_at_1
value: 35.35
- type: mrr_at_10
value: 43.441
- type: mrr_at_100
value: 44.147999999999996
- type: mrr_at_1000
value: 44.196000000000005
- type: mrr_at_3
value: 41.285
- type: mrr_at_5
value: 42.552
- type: ndcg_at_1
value: 35.35
- type: ndcg_at_10
value: 42.903999999999996
- type: ndcg_at_100
value: 47.406
- type: ndcg_at_1000
value: 49.588
- type: ndcg_at_3
value: 38.778
- type: ndcg_at_5
value: 40.788000000000004
- type: precision_at_1
value: 35.35
- type: precision_at_10
value: 8.083
- type: precision_at_100
value: 1.313
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 18.769
- type: precision_at_5
value: 13.439
- type: recall_at_1
value: 27.971
- type: recall_at_10
value: 52.492000000000004
- type: recall_at_100
value: 71.642
- type: recall_at_1000
value: 85.488
- type: recall_at_3
value: 40.1
- type: recall_at_5
value: 45.800000000000004
- type: map_at_1
value: 39.898
- type: map_at_10
value: 51.819
- type: map_at_100
value: 52.886
- type: map_at_1000
value: 52.941
- type: map_at_3
value: 48.619
- type: map_at_5
value: 50.493
- type: mrr_at_1
value: 45.391999999999996
- type: mrr_at_10
value: 55.230000000000004
- type: mrr_at_100
value: 55.887
- type: mrr_at_1000
value: 55.916
- type: mrr_at_3
value: 52.717000000000006
- type: mrr_at_5
value: 54.222
- type: ndcg_at_1
value: 45.391999999999996
- type: ndcg_at_10
value: 57.586999999999996
- type: ndcg_at_100
value: 61.745000000000005
- type: ndcg_at_1000
value: 62.83800000000001
- type: ndcg_at_3
value: 52.207
- type: ndcg_at_5
value: 54.925999999999995
- type: precision_at_1
value: 45.391999999999996
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.226
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.177
- type: precision_at_5
value: 16.038
- type: recall_at_1
value: 39.898
- type: recall_at_10
value: 71.18900000000001
- type: recall_at_100
value: 89.082
- type: recall_at_1000
value: 96.865
- type: recall_at_3
value: 56.907
- type: recall_at_5
value: 63.397999999999996
- type: map_at_1
value: 22.706
- type: map_at_10
value: 30.818
- type: map_at_100
value: 32.038
- type: map_at_1000
value: 32.123000000000005
- type: map_at_3
value: 28.077
- type: map_at_5
value: 29.709999999999997
- type: mrr_at_1
value: 24.407
- type: mrr_at_10
value: 32.555
- type: mrr_at_100
value: 33.692
- type: mrr_at_1000
value: 33.751
- type: mrr_at_3
value: 29.848999999999997
- type: mrr_at_5
value: 31.509999999999998
- type: ndcg_at_1
value: 24.407
- type: ndcg_at_10
value: 35.624
- type: ndcg_at_100
value: 41.454
- type: ndcg_at_1000
value: 43.556
- type: ndcg_at_3
value: 30.217
- type: ndcg_at_5
value: 33.111000000000004
- type: precision_at_1
value: 24.407
- type: precision_at_10
value: 5.548
- type: precision_at_100
value: 0.8869999999999999
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 12.731
- type: precision_at_5
value: 9.22
- type: recall_at_1
value: 22.706
- type: recall_at_10
value: 48.772
- type: recall_at_100
value: 75.053
- type: recall_at_1000
value: 90.731
- type: recall_at_3
value: 34.421
- type: recall_at_5
value: 41.427
- type: map_at_1
value: 13.424
- type: map_at_10
value: 21.09
- type: map_at_100
value: 22.264999999999997
- type: map_at_1000
value: 22.402
- type: map_at_3
value: 18.312
- type: map_at_5
value: 19.874
- type: mrr_at_1
value: 16.915
- type: mrr_at_10
value: 25.258000000000003
- type: mrr_at_100
value: 26.228
- type: mrr_at_1000
value: 26.31
- type: mrr_at_3
value: 22.492
- type: mrr_at_5
value: 24.04
- type: ndcg_at_1
value: 16.915
- type: ndcg_at_10
value: 26.266000000000002
- type: ndcg_at_100
value: 32.08
- type: ndcg_at_1000
value: 35.086
- type: ndcg_at_3
value: 21.049
- type: ndcg_at_5
value: 23.508000000000003
- type: precision_at_1
value: 16.915
- type: precision_at_10
value: 5.1
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 10.282
- type: precision_at_5
value: 7.836
- type: recall_at_1
value: 13.424
- type: recall_at_10
value: 38.179
- type: recall_at_100
value: 63.906
- type: recall_at_1000
value: 84.933
- type: recall_at_3
value: 23.878
- type: recall_at_5
value: 30.037999999999997
- type: map_at_1
value: 26.154
- type: map_at_10
value: 35.912
- type: map_at_100
value: 37.211
- type: map_at_1000
value: 37.327
- type: map_at_3
value: 32.684999999999995
- type: map_at_5
value: 34.562
- type: mrr_at_1
value: 32.435
- type: mrr_at_10
value: 41.411
- type: mrr_at_100
value: 42.297000000000004
- type: mrr_at_1000
value: 42.345
- type: mrr_at_3
value: 38.771
- type: mrr_at_5
value: 40.33
- type: ndcg_at_1
value: 32.435
- type: ndcg_at_10
value: 41.785
- type: ndcg_at_100
value: 47.469
- type: ndcg_at_1000
value: 49.685
- type: ndcg_at_3
value: 36.618
- type: ndcg_at_5
value: 39.101
- type: precision_at_1
value: 32.435
- type: precision_at_10
value: 7.642
- type: precision_at_100
value: 1.244
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 17.485
- type: precision_at_5
value: 12.57
- type: recall_at_1
value: 26.154
- type: recall_at_10
value: 54.111
- type: recall_at_100
value: 78.348
- type: recall_at_1000
value: 92.996
- type: recall_at_3
value: 39.189
- type: recall_at_5
value: 45.852
- type: map_at_1
value: 26.308999999999997
- type: map_at_10
value: 35.524
- type: map_at_100
value: 36.774
- type: map_at_1000
value: 36.891
- type: map_at_3
value: 32.561
- type: map_at_5
value: 34.034
- type: mrr_at_1
value: 31.735000000000003
- type: mrr_at_10
value: 40.391
- type: mrr_at_100
value: 41.227000000000004
- type: mrr_at_1000
value: 41.288000000000004
- type: mrr_at_3
value: 37.938
- type: mrr_at_5
value: 39.193
- type: ndcg_at_1
value: 31.735000000000003
- type: ndcg_at_10
value: 41.166000000000004
- type: ndcg_at_100
value: 46.702
- type: ndcg_at_1000
value: 49.157000000000004
- type: ndcg_at_3
value: 36.274
- type: ndcg_at_5
value: 38.177
- type: precision_at_1
value: 31.735000000000003
- type: precision_at_10
value: 7.5569999999999995
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 17.199
- type: precision_at_5
value: 12.123000000000001
- type: recall_at_1
value: 26.308999999999997
- type: recall_at_10
value: 53.083000000000006
- type: recall_at_100
value: 76.922
- type: recall_at_1000
value: 93.767
- type: recall_at_3
value: 39.262
- type: recall_at_5
value: 44.413000000000004
- type: map_at_1
value: 24.391250000000003
- type: map_at_10
value: 33.280166666666666
- type: map_at_100
value: 34.49566666666667
- type: map_at_1000
value: 34.61533333333333
- type: map_at_3
value: 30.52183333333333
- type: map_at_5
value: 32.06608333333333
- type: mrr_at_1
value: 29.105083333333337
- type: mrr_at_10
value: 37.44766666666666
- type: mrr_at_100
value: 38.32491666666667
- type: mrr_at_1000
value: 38.385666666666665
- type: mrr_at_3
value: 35.06883333333333
- type: mrr_at_5
value: 36.42066666666667
- type: ndcg_at_1
value: 29.105083333333337
- type: ndcg_at_10
value: 38.54358333333333
- type: ndcg_at_100
value: 43.833583333333344
- type: ndcg_at_1000
value: 46.215333333333334
- type: ndcg_at_3
value: 33.876
- type: ndcg_at_5
value: 36.05208333333333
- type: precision_at_1
value: 29.105083333333337
- type: precision_at_10
value: 6.823416666666665
- type: precision_at_100
value: 1.1270833333333334
- type: precision_at_1000
value: 0.15208333333333332
- type: precision_at_3
value: 15.696750000000002
- type: precision_at_5
value: 11.193499999999998
- type: recall_at_1
value: 24.391250000000003
- type: recall_at_10
value: 49.98808333333333
- type: recall_at_100
value: 73.31616666666666
- type: recall_at_1000
value: 89.96291666666667
- type: recall_at_3
value: 36.86666666666667
- type: recall_at_5
value: 42.54350000000001
- type: map_at_1
value: 21.995
- type: map_at_10
value: 28.807
- type: map_at_100
value: 29.813000000000002
- type: map_at_1000
value: 29.903000000000002
- type: map_at_3
value: 26.636
- type: map_at_5
value: 27.912
- type: mrr_at_1
value: 24.847
- type: mrr_at_10
value: 31.494
- type: mrr_at_100
value: 32.381
- type: mrr_at_1000
value: 32.446999999999996
- type: mrr_at_3
value: 29.473
- type: mrr_at_5
value: 30.7
- type: ndcg_at_1
value: 24.847
- type: ndcg_at_10
value: 32.818999999999996
- type: ndcg_at_100
value: 37.835
- type: ndcg_at_1000
value: 40.226
- type: ndcg_at_3
value: 28.811999999999998
- type: ndcg_at_5
value: 30.875999999999998
- type: precision_at_1
value: 24.847
- type: precision_at_10
value: 5.244999999999999
- type: precision_at_100
value: 0.856
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.895999999999999
- type: recall_at_1
value: 21.995
- type: recall_at_10
value: 42.479
- type: recall_at_100
value: 65.337
- type: recall_at_1000
value: 83.23700000000001
- type: recall_at_3
value: 31.573
- type: recall_at_5
value: 36.684
- type: map_at_1
value: 15.751000000000001
- type: map_at_10
value: 21.909
- type: map_at_100
value: 23.064
- type: map_at_1000
value: 23.205000000000002
- type: map_at_3
value: 20.138
- type: map_at_5
value: 20.973
- type: mrr_at_1
value: 19.305
- type: mrr_at_10
value: 25.647
- type: mrr_at_100
value: 26.659
- type: mrr_at_1000
value: 26.748
- type: mrr_at_3
value: 23.933
- type: mrr_at_5
value: 24.754
- type: ndcg_at_1
value: 19.305
- type: ndcg_at_10
value: 25.886
- type: ndcg_at_100
value: 31.56
- type: ndcg_at_1000
value: 34.799
- type: ndcg_at_3
value: 22.708000000000002
- type: ndcg_at_5
value: 23.838
- type: precision_at_1
value: 19.305
- type: precision_at_10
value: 4.677
- type: precision_at_100
value: 0.895
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 10.771
- type: precision_at_5
value: 7.46
- type: recall_at_1
value: 15.751000000000001
- type: recall_at_10
value: 34.156
- type: recall_at_100
value: 59.899
- type: recall_at_1000
value: 83.08
- type: recall_at_3
value: 24.772
- type: recall_at_5
value: 28.009
- type: map_at_1
value: 23.34
- type: map_at_10
value: 32.383
- type: map_at_100
value: 33.629999999999995
- type: map_at_1000
value: 33.735
- type: map_at_3
value: 29.68
- type: map_at_5
value: 31.270999999999997
- type: mrr_at_1
value: 27.612
- type: mrr_at_10
value: 36.381
- type: mrr_at_100
value: 37.351
- type: mrr_at_1000
value: 37.411
- type: mrr_at_3
value: 33.893
- type: mrr_at_5
value: 35.353
- type: ndcg_at_1
value: 27.612
- type: ndcg_at_10
value: 37.714999999999996
- type: ndcg_at_100
value: 43.525000000000006
- type: ndcg_at_1000
value: 45.812999999999995
- type: ndcg_at_3
value: 32.796
- type: ndcg_at_5
value: 35.243
- type: precision_at_1
value: 27.612
- type: precision_at_10
value: 6.465
- type: precision_at_100
value: 1.0619999999999998
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 15.049999999999999
- type: precision_at_5
value: 10.764999999999999
- type: recall_at_1
value: 23.34
- type: recall_at_10
value: 49.856
- type: recall_at_100
value: 75.334
- type: recall_at_1000
value: 91.156
- type: recall_at_3
value: 36.497
- type: recall_at_5
value: 42.769
- type: map_at_1
value: 25.097
- type: map_at_10
value: 34.599999999999994
- type: map_at_100
value: 36.174
- type: map_at_1000
value: 36.398
- type: map_at_3
value: 31.781
- type: map_at_5
value: 33.22
- type: mrr_at_1
value: 31.225
- type: mrr_at_10
value: 39.873
- type: mrr_at_100
value: 40.853
- type: mrr_at_1000
value: 40.904
- type: mrr_at_3
value: 37.681
- type: mrr_at_5
value: 38.669
- type: ndcg_at_1
value: 31.225
- type: ndcg_at_10
value: 40.586
- type: ndcg_at_100
value: 46.226
- type: ndcg_at_1000
value: 48.788
- type: ndcg_at_3
value: 36.258
- type: ndcg_at_5
value: 37.848
- type: precision_at_1
value: 31.225
- type: precision_at_10
value: 7.707999999999999
- type: precision_at_100
value: 1.536
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 17.26
- type: precision_at_5
value: 12.253
- type: recall_at_1
value: 25.097
- type: recall_at_10
value: 51.602000000000004
- type: recall_at_100
value: 76.854
- type: recall_at_1000
value: 93.303
- type: recall_at_3
value: 38.68
- type: recall_at_5
value: 43.258
- type: map_at_1
value: 17.689
- type: map_at_10
value: 25.291000000000004
- type: map_at_100
value: 26.262
- type: map_at_1000
value: 26.372
- type: map_at_3
value: 22.916
- type: map_at_5
value: 24.315
- type: mrr_at_1
value: 19.409000000000002
- type: mrr_at_10
value: 27.233
- type: mrr_at_100
value: 28.109
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 24.892
- type: mrr_at_5
value: 26.278000000000002
- type: ndcg_at_1
value: 19.409000000000002
- type: ndcg_at_10
value: 29.809
- type: ndcg_at_100
value: 34.936
- type: ndcg_at_1000
value: 37.852000000000004
- type: ndcg_at_3
value: 25.179000000000002
- type: ndcg_at_5
value: 27.563
- type: precision_at_1
value: 19.409000000000002
- type: precision_at_10
value: 4.861
- type: precision_at_100
value: 0.8
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 17.689
- type: recall_at_10
value: 41.724
- type: recall_at_100
value: 65.95299999999999
- type: recall_at_1000
value: 88.094
- type: recall_at_3
value: 29.621
- type: recall_at_5
value: 35.179
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.581
- type: map_at_10
value: 18.944
- type: map_at_100
value: 20.812
- type: map_at_1000
value: 21.002000000000002
- type: map_at_3
value: 15.661
- type: map_at_5
value: 17.502000000000002
- type: mrr_at_1
value: 23.388
- type: mrr_at_10
value: 34.263
- type: mrr_at_100
value: 35.364000000000004
- type: mrr_at_1000
value: 35.409
- type: mrr_at_3
value: 30.586000000000002
- type: mrr_at_5
value: 32.928000000000004
- type: ndcg_at_1
value: 23.388
- type: ndcg_at_10
value: 26.56
- type: ndcg_at_100
value: 34.248
- type: ndcg_at_1000
value: 37.779
- type: ndcg_at_3
value: 21.179000000000002
- type: ndcg_at_5
value: 23.504
- type: precision_at_1
value: 23.388
- type: precision_at_10
value: 8.476
- type: precision_at_100
value: 1.672
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 15.852
- type: precision_at_5
value: 12.73
- type: recall_at_1
value: 10.581
- type: recall_at_10
value: 32.512
- type: recall_at_100
value: 59.313
- type: recall_at_1000
value: 79.25
- type: recall_at_3
value: 19.912
- type: recall_at_5
value: 25.832
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.35
- type: map_at_10
value: 20.134
- type: map_at_100
value: 28.975
- type: map_at_1000
value: 30.709999999999997
- type: map_at_3
value: 14.513000000000002
- type: map_at_5
value: 16.671
- type: mrr_at_1
value: 69.75
- type: mrr_at_10
value: 77.67699999999999
- type: mrr_at_100
value: 77.97500000000001
- type: mrr_at_1000
value: 77.985
- type: mrr_at_3
value: 76.292
- type: mrr_at_5
value: 77.179
- type: ndcg_at_1
value: 56.49999999999999
- type: ndcg_at_10
value: 42.226
- type: ndcg_at_100
value: 47.562
- type: ndcg_at_1000
value: 54.923
- type: ndcg_at_3
value: 46.564
- type: ndcg_at_5
value: 43.830000000000005
- type: precision_at_1
value: 69.75
- type: precision_at_10
value: 33.525
- type: precision_at_100
value: 11.035
- type: precision_at_1000
value: 2.206
- type: precision_at_3
value: 49.75
- type: precision_at_5
value: 42
- type: recall_at_1
value: 9.35
- type: recall_at_10
value: 25.793
- type: recall_at_100
value: 54.186
- type: recall_at_1000
value: 77.81
- type: recall_at_3
value: 15.770000000000001
- type: recall_at_5
value: 19.09
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.945
- type: f1
value: 42.07407842992542
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.04599999999999
- type: map_at_10
value: 80.718
- type: map_at_100
value: 80.961
- type: map_at_1000
value: 80.974
- type: map_at_3
value: 79.49199999999999
- type: map_at_5
value: 80.32000000000001
- type: mrr_at_1
value: 76.388
- type: mrr_at_10
value: 85.214
- type: mrr_at_100
value: 85.302
- type: mrr_at_1000
value: 85.302
- type: mrr_at_3
value: 84.373
- type: mrr_at_5
value: 84.979
- type: ndcg_at_1
value: 76.388
- type: ndcg_at_10
value: 84.987
- type: ndcg_at_100
value: 85.835
- type: ndcg_at_1000
value: 86.04899999999999
- type: ndcg_at_3
value: 83.04
- type: ndcg_at_5
value: 84.22500000000001
- type: precision_at_1
value: 76.388
- type: precision_at_10
value: 10.35
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 32.108
- type: precision_at_5
value: 20.033
- type: recall_at_1
value: 71.04599999999999
- type: recall_at_10
value: 93.547
- type: recall_at_100
value: 96.887
- type: recall_at_1000
value: 98.158
- type: recall_at_3
value: 88.346
- type: recall_at_5
value: 91.321
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.8
- type: map_at_10
value: 31.979999999999997
- type: map_at_100
value: 33.876
- type: map_at_1000
value: 34.056999999999995
- type: map_at_3
value: 28.067999999999998
- type: map_at_5
value: 30.066
- type: mrr_at_1
value: 38.735
- type: mrr_at_10
value: 47.749
- type: mrr_at_100
value: 48.605
- type: mrr_at_1000
value: 48.644999999999996
- type: mrr_at_3
value: 45.165
- type: mrr_at_5
value: 46.646
- type: ndcg_at_1
value: 38.735
- type: ndcg_at_10
value: 39.883
- type: ndcg_at_100
value: 46.983000000000004
- type: ndcg_at_1000
value: 50.043000000000006
- type: ndcg_at_3
value: 35.943000000000005
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 38.735
- type: precision_at_10
value: 10.940999999999999
- type: precision_at_100
value: 1.836
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 23.817
- type: precision_at_5
value: 17.346
- type: recall_at_1
value: 19.8
- type: recall_at_10
value: 47.082
- type: recall_at_100
value: 73.247
- type: recall_at_1000
value: 91.633
- type: recall_at_3
value: 33.201
- type: recall_at_5
value: 38.81
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.102999999999994
- type: map_at_10
value: 60.547
- type: map_at_100
value: 61.466
- type: map_at_1000
value: 61.526
- type: map_at_3
value: 56.973
- type: map_at_5
value: 59.244
- type: mrr_at_1
value: 76.205
- type: mrr_at_10
value: 82.816
- type: mrr_at_100
value: 83.002
- type: mrr_at_1000
value: 83.009
- type: mrr_at_3
value: 81.747
- type: mrr_at_5
value: 82.467
- type: ndcg_at_1
value: 76.205
- type: ndcg_at_10
value: 69.15
- type: ndcg_at_100
value: 72.297
- type: ndcg_at_1000
value: 73.443
- type: ndcg_at_3
value: 64.07000000000001
- type: ndcg_at_5
value: 66.96600000000001
- type: precision_at_1
value: 76.205
- type: precision_at_10
value: 14.601
- type: precision_at_100
value: 1.7049999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 41.202
- type: precision_at_5
value: 27.006000000000004
- type: recall_at_1
value: 38.102999999999994
- type: recall_at_10
value: 73.005
- type: recall_at_100
value: 85.253
- type: recall_at_1000
value: 92.795
- type: recall_at_3
value: 61.803
- type: recall_at_5
value: 67.515
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.15
- type: ap
value: 80.36282825265391
- type: f1
value: 86.07368510726472
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.6
- type: map_at_10
value: 34.887
- type: map_at_100
value: 36.069
- type: map_at_1000
value: 36.115
- type: map_at_3
value: 31.067
- type: map_at_5
value: 33.300000000000004
- type: mrr_at_1
value: 23.238
- type: mrr_at_10
value: 35.47
- type: mrr_at_100
value: 36.599
- type: mrr_at_1000
value: 36.64
- type: mrr_at_3
value: 31.735999999999997
- type: mrr_at_5
value: 33.939
- type: ndcg_at_1
value: 23.252
- type: ndcg_at_10
value: 41.765
- type: ndcg_at_100
value: 47.402
- type: ndcg_at_1000
value: 48.562
- type: ndcg_at_3
value: 34.016999999999996
- type: ndcg_at_5
value: 38.016
- type: precision_at_1
value: 23.252
- type: precision_at_10
value: 6.569
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.479000000000001
- type: precision_at_5
value: 10.722
- type: recall_at_1
value: 22.6
- type: recall_at_10
value: 62.919000000000004
- type: recall_at_100
value: 88.82
- type: recall_at_1000
value: 97.71600000000001
- type: recall_at_3
value: 41.896
- type: recall_at_5
value: 51.537
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.69357045143639
- type: f1
value: 93.55489858177597
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.31235750114
- type: f1
value: 57.891491963121155
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.04303967720243
- type: f1
value: 70.51516022297616
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.65299260255549
- type: f1
value: 77.49059766538576
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.458906115906597
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.9851513122443
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.2916268497217
- type: mrr
value: 32.328276715593816
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.3740000000000006
- type: map_at_10
value: 13.089999999999998
- type: map_at_100
value: 16.512
- type: map_at_1000
value: 18.014
- type: map_at_3
value: 9.671000000000001
- type: map_at_5
value: 11.199
- type: mrr_at_1
value: 46.749
- type: mrr_at_10
value: 55.367
- type: mrr_at_100
value: 56.021
- type: mrr_at_1000
value: 56.058
- type: mrr_at_3
value: 53.30200000000001
- type: mrr_at_5
value: 54.773
- type: ndcg_at_1
value: 45.046
- type: ndcg_at_10
value: 35.388999999999996
- type: ndcg_at_100
value: 32.175
- type: ndcg_at_1000
value: 41.018
- type: ndcg_at_3
value: 40.244
- type: ndcg_at_5
value: 38.267
- type: precision_at_1
value: 46.749
- type: precision_at_10
value: 26.563
- type: precision_at_100
value: 8.074
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 37.358000000000004
- type: precision_at_5
value: 33.003
- type: recall_at_1
value: 6.3740000000000006
- type: recall_at_10
value: 16.805999999999997
- type: recall_at_100
value: 31.871
- type: recall_at_1000
value: 64.098
- type: recall_at_3
value: 10.383000000000001
- type: recall_at_5
value: 13.166
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.847
- type: map_at_10
value: 50.532
- type: map_at_100
value: 51.504000000000005
- type: map_at_1000
value: 51.528
- type: map_at_3
value: 46.219
- type: map_at_5
value: 48.868
- type: mrr_at_1
value: 39.137
- type: mrr_at_10
value: 53.157
- type: mrr_at_100
value: 53.839999999999996
- type: mrr_at_1000
value: 53.857
- type: mrr_at_3
value: 49.667
- type: mrr_at_5
value: 51.847
- type: ndcg_at_1
value: 39.108
- type: ndcg_at_10
value: 58.221000000000004
- type: ndcg_at_100
value: 62.021
- type: ndcg_at_1000
value: 62.57
- type: ndcg_at_3
value: 50.27199999999999
- type: ndcg_at_5
value: 54.623999999999995
- type: precision_at_1
value: 39.108
- type: precision_at_10
value: 9.397
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 22.644000000000002
- type: precision_at_5
value: 16.141
- type: recall_at_1
value: 34.847
- type: recall_at_10
value: 78.945
- type: recall_at_100
value: 94.793
- type: recall_at_1000
value: 98.904
- type: recall_at_3
value: 58.56
- type: recall_at_5
value: 68.535
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.728
- type: map_at_10
value: 82.537
- type: map_at_100
value: 83.218
- type: map_at_1000
value: 83.238
- type: map_at_3
value: 79.586
- type: map_at_5
value: 81.416
- type: mrr_at_1
value: 79.17999999999999
- type: mrr_at_10
value: 85.79299999999999
- type: mrr_at_100
value: 85.937
- type: mrr_at_1000
value: 85.938
- type: mrr_at_3
value: 84.748
- type: mrr_at_5
value: 85.431
- type: ndcg_at_1
value: 79.17
- type: ndcg_at_10
value: 86.555
- type: ndcg_at_100
value: 88.005
- type: ndcg_at_1000
value: 88.146
- type: ndcg_at_3
value: 83.557
- type: ndcg_at_5
value: 85.152
- type: precision_at_1
value: 79.17
- type: precision_at_10
value: 13.163
- type: precision_at_100
value: 1.52
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.53
- type: precision_at_5
value: 24.046
- type: recall_at_1
value: 68.728
- type: recall_at_10
value: 94.217
- type: recall_at_100
value: 99.295
- type: recall_at_1000
value: 99.964
- type: recall_at_3
value: 85.646
- type: recall_at_5
value: 90.113
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.15680266226348
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.4318549229047
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.353
- type: map_at_10
value: 10.956000000000001
- type: map_at_100
value: 12.873999999999999
- type: map_at_1000
value: 13.177
- type: map_at_3
value: 7.854
- type: map_at_5
value: 9.327
- type: mrr_at_1
value: 21.4
- type: mrr_at_10
value: 31.948999999999998
- type: mrr_at_100
value: 33.039
- type: mrr_at_1000
value: 33.106
- type: mrr_at_3
value: 28.449999999999996
- type: mrr_at_5
value: 30.535
- type: ndcg_at_1
value: 21.4
- type: ndcg_at_10
value: 18.694
- type: ndcg_at_100
value: 26.275
- type: ndcg_at_1000
value: 31.836
- type: ndcg_at_3
value: 17.559
- type: ndcg_at_5
value: 15.372
- type: precision_at_1
value: 21.4
- type: precision_at_10
value: 9.790000000000001
- type: precision_at_100
value: 2.0709999999999997
- type: precision_at_1000
value: 0.34099999999999997
- type: precision_at_3
value: 16.467000000000002
- type: precision_at_5
value: 13.54
- type: recall_at_1
value: 4.353
- type: recall_at_10
value: 19.892000000000003
- type: recall_at_100
value: 42.067
- type: recall_at_1000
value: 69.268
- type: recall_at_3
value: 10.042
- type: recall_at_5
value: 13.741999999999999
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.75433886279843
- type: cos_sim_spearman
value: 78.29727771767095
- type: euclidean_pearson
value: 80.83057828506621
- type: euclidean_spearman
value: 78.35203149750356
- type: manhattan_pearson
value: 80.7403553891142
- type: manhattan_spearman
value: 78.33670488531051
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.59999465280839
- type: cos_sim_spearman
value: 75.79279003980383
- type: euclidean_pearson
value: 82.29895375956758
- type: euclidean_spearman
value: 77.33856514102094
- type: manhattan_pearson
value: 82.22694214534756
- type: manhattan_spearman
value: 77.3028993008695
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.09296929691297
- type: cos_sim_spearman
value: 83.58056936846941
- type: euclidean_pearson
value: 83.84067483060005
- type: euclidean_spearman
value: 84.45155680480985
- type: manhattan_pearson
value: 83.82353052971942
- type: manhattan_spearman
value: 84.43030567861112
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.74616852320915
- type: cos_sim_spearman
value: 79.948683747966
- type: euclidean_pearson
value: 81.55702283757084
- type: euclidean_spearman
value: 80.1721505114231
- type: manhattan_pearson
value: 81.52251518619441
- type: manhattan_spearman
value: 80.1469800135577
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.97170104226318
- type: cos_sim_spearman
value: 88.82021731518206
- type: euclidean_pearson
value: 87.92950547187615
- type: euclidean_spearman
value: 88.67043634645866
- type: manhattan_pearson
value: 87.90668112827639
- type: manhattan_spearman
value: 88.64471082785317
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.02790375770599
- type: cos_sim_spearman
value: 84.46308496590792
- type: euclidean_pearson
value: 84.29430000414911
- type: euclidean_spearman
value: 84.77298303589936
- type: manhattan_pearson
value: 84.23919291368665
- type: manhattan_spearman
value: 84.75272234871308
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.62885108477064
- type: cos_sim_spearman
value: 87.58456196391622
- type: euclidean_pearson
value: 88.2602775281007
- type: euclidean_spearman
value: 87.51556278299846
- type: manhattan_pearson
value: 88.11224053672842
- type: manhattan_spearman
value: 87.4336094383095
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.98187965128411
- type: cos_sim_spearman
value: 64.0653163219731
- type: euclidean_pearson
value: 62.30616725924099
- type: euclidean_spearman
value: 61.556971332295916
- type: manhattan_pearson
value: 62.07642330128549
- type: manhattan_spearman
value: 61.155494129828
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.6089703921826
- type: cos_sim_spearman
value: 86.52303197250791
- type: euclidean_pearson
value: 85.95801955963246
- type: euclidean_spearman
value: 86.25242424112962
- type: manhattan_pearson
value: 85.88829100470312
- type: manhattan_spearman
value: 86.18742955805165
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.02282098487036
- type: mrr
value: 95.05126409538174
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.928
- type: map_at_10
value: 67.308
- type: map_at_100
value: 67.89500000000001
- type: map_at_1000
value: 67.91199999999999
- type: map_at_3
value: 65.091
- type: map_at_5
value: 66.412
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 68.401
- type: mrr_at_100
value: 68.804
- type: mrr_at_1000
value: 68.819
- type: mrr_at_3
value: 66.72200000000001
- type: mrr_at_5
value: 67.72200000000001
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 71.944
- type: ndcg_at_100
value: 74.464
- type: ndcg_at_1000
value: 74.82799999999999
- type: ndcg_at_3
value: 68.257
- type: ndcg_at_5
value: 70.10300000000001
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 27.222
- type: precision_at_5
value: 17.533
- type: recall_at_1
value: 55.928
- type: recall_at_10
value: 84.65
- type: recall_at_100
value: 96.267
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 74.656
- type: recall_at_5
value: 79.489
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 94.5795129511524
- type: cos_sim_f1
value: 89.34673366834171
- type: cos_sim_precision
value: 89.79797979797979
- type: cos_sim_recall
value: 88.9
- type: dot_accuracy
value: 99.53465346534654
- type: dot_ap
value: 81.56492504352725
- type: dot_f1
value: 76.33816908454227
- type: dot_precision
value: 76.37637637637637
- type: dot_recall
value: 76.3
- type: euclidean_accuracy
value: 99.78514851485149
- type: euclidean_ap
value: 94.59134620408962
- type: euclidean_f1
value: 88.96484375
- type: euclidean_precision
value: 86.92748091603053
- type: euclidean_recall
value: 91.10000000000001
- type: manhattan_accuracy
value: 99.78415841584159
- type: manhattan_ap
value: 94.5190197328845
- type: manhattan_f1
value: 88.84462151394423
- type: manhattan_precision
value: 88.4920634920635
- type: manhattan_recall
value: 89.2
- type: max_accuracy
value: 99.79009900990098
- type: max_ap
value: 94.59134620408962
- type: max_f1
value: 89.34673366834171
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.1487505617497
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.502518166001856
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.33775480236701
- type: mrr
value: 51.17302223919871
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.561111309808208
- type: cos_sim_spearman
value: 30.2839254379273
- type: dot_pearson
value: 29.560242291401973
- type: dot_spearman
value: 30.51527274679116
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.215
- type: map_at_10
value: 1.752
- type: map_at_100
value: 9.258
- type: map_at_1000
value: 23.438
- type: map_at_3
value: 0.6
- type: map_at_5
value: 0.968
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 91.333
- type: mrr_at_100
value: 91.333
- type: mrr_at_1000
value: 91.333
- type: mrr_at_3
value: 91.333
- type: mrr_at_5
value: 91.333
- type: ndcg_at_1
value: 75
- type: ndcg_at_10
value: 69.596
- type: ndcg_at_100
value: 51.970000000000006
- type: ndcg_at_1000
value: 48.864999999999995
- type: ndcg_at_3
value: 73.92699999999999
- type: ndcg_at_5
value: 73.175
- type: precision_at_1
value: 84
- type: precision_at_10
value: 74
- type: precision_at_100
value: 53.2
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 79.333
- type: precision_at_5
value: 78.4
- type: recall_at_1
value: 0.215
- type: recall_at_10
value: 1.9609999999999999
- type: recall_at_100
value: 12.809999999999999
- type: recall_at_1000
value: 46.418
- type: recall_at_3
value: 0.6479999999999999
- type: recall_at_5
value: 1.057
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.066
- type: map_at_10
value: 10.508000000000001
- type: map_at_100
value: 16.258
- type: map_at_1000
value: 17.705000000000002
- type: map_at_3
value: 6.157
- type: map_at_5
value: 7.510999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 48.786
- type: mrr_at_100
value: 49.619
- type: mrr_at_1000
value: 49.619
- type: mrr_at_3
value: 45.918
- type: mrr_at_5
value: 46.837
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 26.401999999999997
- type: ndcg_at_100
value: 37.139
- type: ndcg_at_1000
value: 48.012
- type: ndcg_at_3
value: 31.875999999999998
- type: ndcg_at_5
value: 27.383000000000003
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 22.857
- type: precision_at_100
value: 7.611999999999999
- type: precision_at_1000
value: 1.492
- type: precision_at_3
value: 33.333
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 3.066
- type: recall_at_10
value: 16.239
- type: recall_at_100
value: 47.29
- type: recall_at_1000
value: 81.137
- type: recall_at_3
value: 7.069
- type: recall_at_5
value: 9.483
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.1126
- type: ap
value: 14.710862719285753
- type: f1
value: 55.437808972378846
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.39049235993209
- type: f1
value: 60.69810537250234
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.15576640316866
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.52917684925792
- type: cos_sim_ap
value: 75.97497873817315
- type: cos_sim_f1
value: 70.01151926276718
- type: cos_sim_precision
value: 67.98409147402435
- type: cos_sim_recall
value: 72.16358839050132
- type: dot_accuracy
value: 82.47004828038385
- type: dot_ap
value: 62.48739894974198
- type: dot_f1
value: 59.13107511045656
- type: dot_precision
value: 55.27765029830197
- type: dot_recall
value: 63.562005277044854
- type: euclidean_accuracy
value: 86.46361089586935
- type: euclidean_ap
value: 75.59282886839452
- type: euclidean_f1
value: 69.6465443945099
- type: euclidean_precision
value: 64.52847175331982
- type: euclidean_recall
value: 75.64643799472296
- type: manhattan_accuracy
value: 86.43380818978363
- type: manhattan_ap
value: 75.5742420974403
- type: manhattan_f1
value: 69.8636926889715
- type: manhattan_precision
value: 65.8644859813084
- type: manhattan_recall
value: 74.37994722955145
- type: max_accuracy
value: 86.52917684925792
- type: max_ap
value: 75.97497873817315
- type: max_f1
value: 70.01151926276718
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.29056545193464
- type: cos_sim_ap
value: 86.63028865482376
- type: cos_sim_f1
value: 79.18166458532285
- type: cos_sim_precision
value: 75.70585756426465
- type: cos_sim_recall
value: 82.99199260856174
- type: dot_accuracy
value: 85.23305002522606
- type: dot_ap
value: 76.0482687263196
- type: dot_f1
value: 70.80484330484332
- type: dot_precision
value: 65.86933474688577
- type: dot_recall
value: 76.53988296889437
- type: euclidean_accuracy
value: 89.26145845461248
- type: euclidean_ap
value: 86.54073288416006
- type: euclidean_f1
value: 78.9721371479794
- type: euclidean_precision
value: 76.68649354417525
- type: euclidean_recall
value: 81.39821373575609
- type: manhattan_accuracy
value: 89.22847052431405
- type: manhattan_ap
value: 86.51250729037905
- type: manhattan_f1
value: 78.94601825044894
- type: manhattan_precision
value: 75.32694594027555
- type: manhattan_recall
value: 82.93039728980598
- type: max_accuracy
value: 89.29056545193464
- type: max_ap
value: 86.63028865482376
- type: max_f1
value: 79.18166458532285
---
# abhiraj1/e5-base-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`intfloat/e5-base-v2`](https://huggingface.co/intfloat/e5-base-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/e5-base-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo abhiraj1/e5-base-v2-Q4_K_M-GGUF --hf-file e5-base-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo abhiraj1/e5-base-v2-Q4_K_M-GGUF --hf-file e5-base-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo abhiraj1/e5-base-v2-Q4_K_M-GGUF --hf-file e5-base-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo abhiraj1/e5-base-v2-Q4_K_M-GGUF --hf-file e5-base-v2-q4_k_m.gguf -c 2048
```
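Note that `intfloat/e5-base-v2` is a text-embedding model rather than a chat model, so the generation prompts above serve mainly as smoke tests. If your llama.cpp build includes the embedding example, the sketch below is one way to produce embeddings; the tool name and flags are assumed to mirror the CLI examples above and can vary between llama.cpp versions, and e5 models expect a `query:` or `passage:` prefix on inputs:
```bash
llama-embedding --hf-repo abhiraj1/e5-base-v2-Q4_K_M-GGUF --hf-file e5-base-v2-q4_k_m.gguf -p "query: how much protein should a female eat"
```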
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
tensorblock/Dr_Samantha-7b-GGUF | tensorblock | text-generation | [
"transformers",
"gguf",
"llama",
"merge",
"medical",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"zh",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:cognitivecomputations/samantha-data",
"dataset:shibing624/medical",
"base_model:sethuiyer/Dr_Samantha-7b",
"base_model:quantized:sethuiyer/Dr_Samantha-7b",
"license:llama2",
"model-index",
"endpoints_compatible",
"region:us"
] | 1,734,980,143,000 | 2024-12-23T19:29:30 | 88 | 0 | ---
base_model: sethuiyer/Dr_Samantha-7b
datasets:
- GBaker/MedQA-USMLE-4-options
- cognitivecomputations/samantha-data
- shibing624/medical
language:
- en
- zh
library_name: transformers
license: llama2
pipeline_tag: text-generation
tags:
- llama
- merge
- medical
- TensorBlock
- GGUF
model-index:
- name: Dr_Samantha-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 53.84
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 18.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## sethuiyer/Dr_Samantha-7b - GGUF
This repo contains GGUF format model files for [sethuiyer/Dr_Samantha-7b](https://huggingface.co/sethuiyer/Dr_Samantha-7b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Dr_Samantha-7b-Q2_K.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [Dr_Samantha-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [Dr_Samantha-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [Dr_Samantha-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [Dr_Samantha-7b-Q4_0.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Dr_Samantha-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [Dr_Samantha-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [Dr_Samantha-7b-Q5_0.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Dr_Samantha-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [Dr_Samantha-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [Dr_Samantha-7b-Q6_K.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [Dr_Samantha-7b-Q8_0.gguf](https://huggingface.co/tensorblock/Dr_Samantha-7b-GGUF/blob/main/Dr_Samantha-7b-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Dr_Samantha-7b-GGUF --include "Dr_Samantha-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Dr_Samantha-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
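Once downloaded, the GGUF file can be run with any llama.cpp-compatible runtime. The command below is a minimal sketch assuming a local llama.cpp build and the Q4_K_M file downloaded above; since no prompt template is embedded in this card, the raw prompt format may need experimentation:
```shell
./llama-cli -m MY_LOCAL_DIR/Dr_Samantha-7b-Q4_K_M.gguf -p "What are common symptoms of iron deficiency?" -n 256
```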
| [
"MEDQA"
] | Non_BioNLP |
apple/OpenELM | apple | null | [
"arxiv:2404.14619",
"license:other",
"region:us"
] | 1,713,384,064,000 | 2024-05-02T00:54:23 | 0 | 1,429 | ---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
See the list below for the details of each model:
- [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M)
- [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M)
- [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B)
- [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B)
- [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct)
- [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct)
- [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct)
- [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct)
```python
from transformers import AutoModelForCausalLM
openelm_270m = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
openelm_450m = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M", trust_remote_code=True)
openelm_1b = AutoModelForCausalLM.from_pretrained("apple/OpenELM-1_1B", trust_remote_code=True)
openelm_3b = AutoModelForCausalLM.from_pretrained("apple/OpenELM-3B", trust_remote_code=True)
openelm_270m_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M-Instruct", trust_remote_code=True)
openelm_450m_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M-Instruct", trust_remote_code=True)
openelm_1b_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-1_1B-Instruct", trust_remote_code=True)
openelm_3b_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-3B-Instruct", trust_remote_code=True)
```
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model [MODEL_NAME] --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. As an example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model [MODEL_NAME] --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model [MODEL_NAME] --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL_NAME]
```
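If you prefer to call the model directly rather than through `generate_openelm.py`, a minimal sketch with `transformers` is shown below. The tokenizer choice follows the evaluation setup later in this card (OpenELM uses the LLaMA tokenizer); access to the gated `meta-llama/Llama-2-7b-hf` repo and the generation arguments are assumptions for illustration:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM checkpoints ship custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
model.eval()

# OpenELM reuses the LLaMA tokenizer (see the add_bos_token note in the Evaluation section).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```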
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparisons.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
```
### Evaluate OpenELM
```bash
# OpenELM-270M
hf_model=apple/OpenELM-270M
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
| [
"SCIQ"
] | Non_BioNLP |
zwloong/sd3-lora-training-v2 | zwloong | text-to-image | [
"diffusers",
"sd3",
"sd3-diffusers",
"text-to-image",
"simpletuner",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | 1,724,309,145,000 | 2024-08-22T10:04:32 | 1 | 1 | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
license: other
tags:
- sd3
- sd3-diffusers
- text-to-image
- diffusers
- simpletuner
- lora
- template:sd-lora
inference: true
---
# sd3-lora-training-v2
This is a standard PEFT LoRA derived from [stabilityai/stable-diffusion-3-medium-diffusers](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers).
The main validation prompt used during training was:
```
ethnographic photography of teddy bear at a picnic
```
## Validation settings
- CFG: `5.0`
- CFG Rescale: `0.0`
- Steps: `30`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024x1024`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 89
- Training steps: 2960
- Learning rate: 8e-07
- Effective batch size: 2
- Micro-batch size: 1
- Gradient accumulation steps: 2
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: bf16
- Quantised: No
- Xformers: Not used
- LoRA Rank: 16
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### Pal_BLIP
- Repeats: 0
- Total number of images: 73
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'stabilityai/stable-diffusion-3-medium-diffusers'
adapter_id = 'zwloong/sd3-lora-training-v2'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
prompt = "ethnographic photography of teddy bear at a picnic"
negative_prompt = 'blurry, cropped, ugly'
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
num_inference_steps=30,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=1024,
height=1024,
guidance_scale=5.0,
).images[0]
image.save("output.png", format="PNG")
```
| [
"BEAR"
] | Non_BioNLP |
LoneStriker/UNAversal-8x7B-v1beta-4.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"UNA",
"juanako",
"MoE",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,703,874,163,000 | 2023-12-29T18:41:30 | 5 | 0 | ---
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
tags:
- UNA
- juanako
- mixtral
- MoE
---
# UNAversal - Uniform Neural Alignment (MoE)
This is just a beta, a first release, so people can start working on Frankenstein merges and the like.
It does achieve high GSM/Math and TQA scores, so ideally you can merge it with other Mixtrals and see what comes out of it.
Based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
## UNA Details
For this model we went with the most obvious approach: placing UNA on the router_logit. It does work, but we saw much better performance on SFT by doing so.
So this model DOES have a UNA-SFT phase. It is highly experimental and merely used LLaMA-Factory datasets, for example Alpaca.
As with the others:
- Can be finetuned further; try 2e-5 or **1e-4 (since it's a MoE)**
- Can be merged; here you will have to improvise, and please report findings in a discussion thread.
**REMINDER**: please cite; it does help the research and the lab itself, seriously.
## NEED YOUR HELP!!
I need a multi-turn training loop for Mixtral that can properly squeeze the juice out of 8x H100s. Please feel free to reach @fblgit on either Discord or Twitter. Thanks!
# Evals
Here are some; we have also submitted the model to the HF eval queue...
## GSM8k 5-Shot
```
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6603|± | 0.013|
```
## ARC 25-Shot
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.6621|± |0.0138|
| | |none | 25|acc_norm|0.6962|± |0.0134|
```
## TruthfulQA 0-Shot (MC2)
```
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|-------|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7122|± |0.0141|
```
## 0-Shots Evals
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|-------|------|-----:|----------|-----:|---|-----:|
|arc_challenge |Yaml |none | 0|acc |0.6101|± |0.0143|
| | |none | 0|acc_norm |0.6425|± |0.0140|
|arc_easy |Yaml |none | 0|acc |0.8615|± |0.0071|
| | |none | 0|acc_norm |0.8375|± |0.0076|
|boolq |Yaml |none | 0|acc |0.8624|± |0.0060|
|lambada_openai|Yaml |none | 0|perplexity|2.8318|± |0.0507|
| | |none | 0|acc |0.7650|± |0.0059|
|mathqa |Yaml |none | 0|acc |0.4472|± |0.0091|
| | |none | 0|acc_norm |0.4436|± |0.0091|
|piqa |Yaml |none | 0|acc |0.8292|± |0.0088|
| | |none | 0|acc_norm |0.8422|± |0.0085|
|pubmedqa |Yaml |none | 0|acc |0.7920|± |0.0182|
|sciq |Yaml |none | 0|acc |0.9630|± |0.0060|
| | |none | 0|acc_norm |0.9370|± |0.0077|
```
## BBH at 0-Shot
```
vllm (pretrained=fblgit/UNAversal-8x7B-v1beta,tensor_parallel_size=2,data_parallel_size=4,gpu_memory_utilization=0.8,dtype=float16), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: auto
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|± |0.1772|
| - bbh_cot_fewshot_boolean_expressions |Yaml |get-answer| 0|exact_match|0.8840|± |0.0203|
| - bbh_cot_fewshot_causal_judgement |Yaml |get-answer| 0|exact_match|0.6417|± |0.0352|
| - bbh_cot_fewshot_date_understanding |Yaml |get-answer| 0|exact_match|0.7600|± |0.0271|
| - bbh_cot_fewshot_disambiguation_qa |Yaml |get-answer| 0|exact_match|0.7160|± |0.0286|
| - bbh_cot_fewshot_dyck_languages |Yaml |get-answer| 0|exact_match|0.1800|± |0.0243|
| - bbh_cot_fewshot_formal_fallacies |Yaml |get-answer| 0|exact_match|0.6520|± |0.0302|
| - bbh_cot_fewshot_geometric_shapes |Yaml |get-answer| 0|exact_match|0.3880|± |0.0309|
| - bbh_cot_fewshot_hyperbaton |Yaml |get-answer| 0|exact_match|0.9600|± |0.0124|
| - bbh_cot_fewshot_logical_deduction_five_objects |Yaml |get-answer| 0|exact_match|0.5360|± |0.0316|
| - bbh_cot_fewshot_logical_deduction_seven_objects |Yaml |get-answer| 0|exact_match|0.5040|± |0.0317|
| - bbh_cot_fewshot_logical_deduction_three_objects |Yaml |get-answer| 0|exact_match|0.8600|± |0.0220|
| - bbh_cot_fewshot_movie_recommendation |Yaml |get-answer| 0|exact_match|0.7840|± |0.0261|
| - bbh_cot_fewshot_multistep_arithmetic_two |Yaml |get-answer| 0|exact_match|0.6600|± |0.0300|
| - bbh_cot_fewshot_navigate |Yaml |get-answer| 0|exact_match|0.8160|± |0.0246|
| - bbh_cot_fewshot_object_counting |Yaml |get-answer| 0|exact_match|0.8360|± |0.0235|
| - bbh_cot_fewshot_penguins_in_a_table |Yaml |get-answer| 0|exact_match|0.7329|± |0.0367|
| - bbh_cot_fewshot_reasoning_about_colored_objects |Yaml |get-answer| 0|exact_match|0.8120|± |0.0248|
| - bbh_cot_fewshot_ruin_names |Yaml |get-answer| 0|exact_match|0.4440|± |0.0315|
| - bbh_cot_fewshot_salient_translation_error_detection |Yaml |get-answer| 0|exact_match|0.5200|± |0.0317|
| - bbh_cot_fewshot_snarks |Yaml |get-answer| 0|exact_match|0.7135|± |0.0340|
| - bbh_cot_fewshot_sports_understanding |Yaml |get-answer| 0|exact_match|0.9400|± |0.0151|
| - bbh_cot_fewshot_temporal_sequences |Yaml |get-answer| 0|exact_match|0.7560|± |0.0272|
| - bbh_cot_fewshot_tracking_shuffled_objects_five_objects |Yaml |get-answer| 0|exact_match|0.5680|± |0.0314|
| - bbh_cot_fewshot_tracking_shuffled_objects_seven_objects|Yaml |get-answer| 0|exact_match|0.6280|± |0.0306|
| - bbh_cot_fewshot_tracking_shuffled_objects_three_objects|Yaml |get-answer| 0|exact_match|0.6280|± |0.0306|
| - bbh_cot_fewshot_web_of_lies |Yaml |get-answer| 0|exact_match|0.9560|± |0.0130|
| - bbh_cot_fewshot_word_sorting |Yaml |get-answer| 0|exact_match|0.3800|± |0.0308|
|Groups|Version| Filter |n-shot| Metric |Value | |Stderr|
|------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|± |0.1772|
``` | [
"PUBMEDQA",
"SCIQ"
] | Non_BioNLP |
judithrosell/ClinicalBERT_JNLPBA_NER_new | judithrosell | token-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:medicalai/ClinicalBERT",
"base_model:finetune:medicalai/ClinicalBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,704,035,087,000 | 2023-12-31T18:33:11 | 8 | 0 | ---
base_model: medicalai/ClinicalBERT
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: ClinicalBERT_JNLPBA_NER_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ClinicalBERT_JNLPBA_NER_new
This model is a fine-tuned version of [medicalai/ClinicalBERT](https://huggingface.co/medicalai/ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1699
- Precision: 0.7855
- Recall: 0.8043
- F1: 0.7948
- Accuracy: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
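
As a minimal, illustrative usage sketch (not part of the original card), the fine-tuned checkpoint can be queried with the standard `transformers` token-classification pipeline; the example sentence and aggregation strategy are assumptions:

```python
from transformers import pipeline

# Token-classification pipeline for the fine-tuned NER checkpoint.
# aggregation_strategy="simple" merges word-piece tokens back into entity spans.
ner = pipeline(
    "token-classification",
    model="judithrosell/ClinicalBERT_JNLPBA_NER_new",
    aggregation_strategy="simple",
)
print(ner("IL-2 gene expression requires activation of the NF-kappa B transcription factor."))
```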
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2204 | 1.0 | 1164 | 0.1821 | 0.7652 | 0.7719 | 0.7685 | 0.9380 |
| 0.1618 | 2.0 | 2328 | 0.1716 | 0.7884 | 0.7886 | 0.7885 | 0.9426 |
| 0.1338 | 3.0 | 3492 | 0.1699 | 0.7855 | 0.8043 | 0.7948 | 0.9439 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| [
"JNLPBA"
] | BioNLP |
recursal/QRWKV6-32B-Instruct-Preview-v0.1 | recursal | text-generation | [
"transformers",
"safetensors",
"rwkv6qwen2",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | 1,733,559,467,000 | 2025-03-14T00:38:59 | 600 | 71 | ---
library_name: transformers
license: apache-2.0
---
# recursal/QRWKV6-32B-Instruct-Preview-v0.1
> Compute sponsored by [TensorWave](https://tensorwave.com) - Access MI300X today!
Try out this model on:
[](https://featherless.ai/models/recursal/QRWKV6-32B-Instruct-Preview-v0.1)

QRWKV6 32B Instruct Preview is one of the largest and strongest RWKV models to date.
More details can be found at:
https://substack.recursal.ai/p/q-rwkv-6-32b-instruct-preview
## Evaluation
The following demonstrates that QRWKV6 performs similarly to or ahead of its base model, *Qwen2.5-32B-Instruct*:
| Model | MMLU | arc_challenge | arc_easy | hellaSwag | lambada_openai | piqa | sciq | winogrande |
|---|---|---|---|---|---|---|---|---|
| QRWKV6-32B-Instruct | 76.63% | 60.92% | 83.00% | 83.03% | 74.17% | 82.92% | 95.40% | 78.22% |
| Qwen2.5-32B-Instruct | 81.77% | 58.70% | 77.06% | 85.19% | 75.22% | 81.23% | 95.00% | 72.85% |
| RWKV-EagleX-7B-v2 | 43.84% | 41.55% | 74.45% | 56.00% | 75.02% | 77.64% | 93.00% | 73.32% |
| Falcon-Mamba-7B | 59.72% | 59.13% | 81.73% | 80.21% | 68.89% | 82.15% | 93.60% | 74.35% |
| Llama-3.1-8B-Instruct | 68.11% | 55.29% | 79.71% | 79.27% | 73.12% | 80.85% | 96.20% | 74.19% |
| Llama-3.1-70B-Instruct | 82.40% | 63.40% | 83.50% | 84.62% | 75.68% | 83.79% | 97.10% | 79.08% |
## Model Notes
Linear models offer a promising approach to significantly reducing computational costs at scale, particularly for long context lengths. This yields a more than 1000x improvement in inference cost efficiency, enabling both O1-style inference-time thinking and wider AI accessibility.
We are able to convert any previously trained QKV Attention-based model, such as Qwen and LLaMA, into an RWKV variant without **requiring retraining from scratch**. This lets us rapidly test and validate the significantly more efficient RWKV linear attention mechanism at a larger scale on a much smaller budget.
This approach validates the design and scalability of the RWKV architecture, reinforcing the idea that QKV attention is not the sole essential component.
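To make the cost argument concrete, the sketch below shows a generic linear-attention recurrence: the entire history is folded into a fixed-size state, so per-token work stays constant instead of growing with context length as in softmax QKV attention. This is an illustration only, not the RWKV-6 kernel (which additionally uses token-shift, data-dependent decay and a channel-mix path); the shapes and the `decay` value are assumptions.
```python
# Illustrative linear-attention recurrence (not the actual RWKV-6 implementation).
import torch

def linear_attention_step(state, q, k, v, decay):
    # state: (d_k, d_v) fixed-size summary of everything seen so far
    state = decay * state + torch.outer(k, v)   # O(d_k * d_v) update, independent of t
    return state, q @ state                     # read-out for the current token

d_k, d_v = 64, 64
state = torch.zeros(d_k, d_v)
decay = torch.full((d_k, 1), 0.9)               # assumed per-channel decay, RWKV-style

for _ in range(4096):                           # cost per step does not grow with position
    q, k, v = torch.randn(d_k), torch.randn(d_k), torch.randn(d_v)
    state, out = linear_attention_step(state, q, k, v, decay)
```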
One downside of this technique is that the model's inherent knowledge and training data are inherited from its "parent" model. Consequently, unlike previous RWKV models trained on 100+ languages, the QRWKV model is limited to the approximately 30 languages supported by the Qwen line of models.
Due to the lack of RWKV-based channel mix and feedforward layers, separate inference code is needed for this specific model.
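A hedged usage sketch follows: because the repository ships custom modelling code (`rwkv6qwen2`), loading through 🤗 Transformers is assumed to require `trust_remote_code=True`, and the generation settings are illustrative.
```python
# Minimal loading/generation sketch; assumes the custom code path via trust_remote_code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "recursal/QRWKV6-32B-Instruct-Preview-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Explain linear attention in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```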
Furthermore, due to compute constraints, we were only able to train up to 16K token context length. While the model is stable beyond this limit, additional training might be required to support longer context lengths.
## Future Models
We are currently training Q-RWKV-6 72B. Additionally, with the finalization of RWKV-7, we plan to apply the same process and provide a full line-up of:
- Q-RWKV-7 32B
- LLaMA-RWKV-7 70B
Lastly, we intend to provide details on the conversion along with our paper after the model release.
## Links
- [Our wiki](https://wiki.rwkv.com)
- [TensorWave - The AMD Cloud](https://tensorwave.com) - Access MI300X today!
- [Recursal.AI Cloud Platform](https://platform.recursal.ai)
- [Featherless Inference](https://featherless.ai/model-families/rwkv6/)
## Acknowledgement
We are grateful for the help and support from the following key groups:
- [TensorWave](https://tensorwave.com) - Sponsored the compute needed to train this model. It wouldn't be possible without them.
- EleutherAI for their support, especially in the v5/v6 Eagle/Finch paper.
- Linux Foundation AI & Data group for supporting and hosting the RWKV project.
- Recursal for their support in building and managing the training of this model architecture. | [
"SCIQ"
] | Non_BioNLP |
LoneStriker/xDAN-L1-Chat-RL-v1-6.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"xDAN-AI",
"OpenOrca",
"DPO",
"Self-Think",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:Intel/orca_dpo_pairs",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,703,576,295,000 | 2023-12-26T08:59:26 | 8 | 1 | ---
datasets:
- Open-Orca/OpenOrca
- Intel/orca_dpo_pairs
language:
- en
license: cc-by-4.0
tags:
- xDAN-AI
- OpenOrca
- DPO
- Self-Think
---
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/tVAcwKkIH5vkfzqgqHeHi.png" style="width: 45%;">
</div>
<p align="center">
<big><b>Top 1 Performer on MT-bench🏆</b></big>
</p>
<p align="center">
<strong>The first 7B model to deliver top-tier performance in Humanities, Coding and Writing.</strong>
</p>
<p align="center">
<a href="The TOP1 MT-Bench Model">xDAN-AI</a> •
<a href="https://discord.gg/7NrMX5AK">Discord</a> •
<a href="https://twitter.com/shootime007">Twitter</a> •
<a href="https://huggingface.co/xDAN-AI">Huggingface</a>
</p>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/QANDZApzpTHM6sBsjmdew.png" alt="Image" width="50%">
</p>
### Dataset:
1. Selected from OpenOrca
2. Intel Orca-DPO-Pairs
3. Privately Crafted Dataset
**########## First turn ##########**
| model | turn | score | size
|--------------------|------|----------|--------
| gpt-4 | 1 | 8.95625 | -
| **xDAN-L1-Chat-RL-v1** | 1 | **8.87500** | **7b**
| xDAN-L2-Chat-RL-v2 | 1 | 8.78750 | 30b
| claude-v1 | 1 | 8.15000 | -
| gpt-3.5-turbo | 1 | 8.07500 | 20b
| vicuna-33b-v1.3 | 1 | 7.45625 | 33b
| wizardlm-30b | 1 | 7.13125 | 30b
| oasst-sft-7-llama-30b | 1 | 7.10625 | 30b
| Llama-2-70b-chat | 1 | 6.98750 | 70b
**########## Second turn ##########**
| model | turn | score | size
|--------------------|------|-----------|--------
| gpt-4 | 2 | 9.025000 | -
| xDAN-L2-Chat-RL-v2 | 2 | 8.087500 | 30b
| **xDAN-L1-Chat-RL-v1** | 2 | **7.825000** | **7b**
| gpt-3.5-turbo | 2 | 7.812500 | 20b
| claude-v1 | 2 | 7.650000 | -
| wizardlm-30b | 2 | 6.887500 | 30b
| vicuna-33b-v1.3 | 2 | 6.787500 | 33b
| Llama-2-70b-chat | 2 | 6.725000 | 70b
**########## Average turn ##########**
| model | score | size
|--------------------|-----------|--------
| gpt-4 | 8.990625 | -
| xDAN-L2-Chat-RL-v2 | 8.437500 | 30b
| **xDAN-L1-Chat-RL-v1** | **8.350000** | **7b**
| gpt-3.5-turbo | 7.943750 | 20b
| claude-v1 | 7.900000 | -
| vicuna-33b-v1.3 | 7.121875 | 33b
| wizardlm-30b | 7.009375 | 30b
| Llama-2-70b-chat | 6.856250 | 70b
### Prompt Template(Alpaca)
You are a helpful assistant named DAN. You are an expert in worldly knowledge, skilled in employing a probing questioning strategy,
and you carefully consider each step before providing answers.
\n\n### Instruction:\n{instruction}\n\n### Response:
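The template above can be assembled with a few lines of Python; this is a minimal sketch and the variable names are illustrative.
```python
# Hedged helper for the Alpaca-style template shown above.
SYSTEM = ("You are a helpful assistant named DAN. You are an expert in worldly knowledge, "
          "skilled in employing a probing questioning strategy, and you carefully consider "
          "each step before providing answers.")

def build_prompt(instruction: str) -> str:
    return f"{SYSTEM}\n\n### Instruction:\n{instruction}\n\n### Response:"

print(build_prompt("Summarise the MT-Bench results for 7B models."))
```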
## Created By xDAN-AI at 2023-12-15
## Eval by FastChat: https://github.com/lm-sys/FastChat.git
## Disclaimer
We employ data compliance checking algorithms during the training of our language model to strive for the highest degree of compliance. However, given the intricate nature of data and the vast array of potential usage scenarios for the model, we cannot assure that it will always generate correct and reasonable outputs. Users should be cognizant of the risk of the model producing problematic outputs. Our organization will not bear responsibility for any risks or issues stemming from misuse, misguidance, illegal use, and related misinformation, as well as any consequent data security concerns.
## About xDAN-AI
xDAN-AI is a leading high-performance model factory. For detailed information and further insights into our cutting-edge technology and offerings, please visit our website: https://www.xdan.ai. | [
"BEAR"
] | Non_BioNLP |
twadada/komni | twadada | null | [
"mteb",
"model-index",
"region:us"
] | 1,725,540,449,000 | 2024-09-05T12:49:20 | 0 | 0 | ---
tags:
- mteb
model-index:
- name: sentence-transformers/average_word_embeddings_komninos
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 60.402985074626855
- type: ap
value: 24.929195028150133
- type: ap_weighted
value: 24.929195028150133
- type: f1
value: 54.63138673052016
- type: f1_weighted
value: 64.35165762938922
- type: main_score
value: 60.402985074626855
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: validation
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 59.134328358208954
- type: ap
value: 20.645092706345967
- type: ap_weighted
value: 20.645092706345967
- type: f1
value: 51.52977516015989
- type: f1_weighted
value: 63.888682484335504
- type: main_score
value: 59.134328358208954
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 61.74492500000001
- type: ap
value: 57.61859520905024
- type: ap_weighted
value: 57.61859520905024
- type: f1
value: 61.20690108105637
- type: f1_weighted
value: 61.20690108105637
- type: main_score
value: 61.74492500000001
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 29.748
- type: f1
value: 29.317324500130205
- type: f1_weighted
value: 29.317324500130205
- type: main_score
value: 29.748
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: validation
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 29.15599999999999
- type: f1
value: 28.72271326130445
- type: f1_weighted
value: 28.72271326130445
- type: main_score
value: 29.15599999999999
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 30.958999999999996
- type: map_at_1
value: 14.366999999999999
- type: map_at_10
value: 25.080999999999996
- type: map_at_100
value: 26.355
- type: map_at_1000
value: 26.423999999999996
- type: map_at_20
value: 25.912000000000003
- type: map_at_3
value: 21.834999999999997
- type: map_at_5
value: 23.563000000000002
- type: mrr_at_1
value: 15.078236130867708
- type: mrr_at_10
value: 25.337894285262696
- type: mrr_at_100
value: 26.611050500772095
- type: mrr_at_1000
value: 26.68089675960519
- type: mrr_at_20
value: 26.168277521137185
- type: mrr_at_3
value: 22.155049786628716
- type: mrr_at_5
value: 23.822901849217608
- type: nauc_map_at_1000_diff1
value: 13.653001121843564
- type: nauc_map_at_1000_max
value: 2.319102074931866
- type: nauc_map_at_1000_std
value: 14.863907956733978
- type: nauc_map_at_100_diff1
value: 13.707008572689888
- type: nauc_map_at_100_max
value: 2.3568789929286207
- type: nauc_map_at_100_std
value: 14.911911160260654
- type: nauc_map_at_10_diff1
value: 13.437143194171428
- type: nauc_map_at_10_max
value: 2.3407683503306393
- type: nauc_map_at_10_std
value: 14.290652074656073
- type: nauc_map_at_1_diff1
value: 15.120007354499974
- type: nauc_map_at_1_max
value: -3.0703979635822294
- type: nauc_map_at_1_std
value: 11.385855369804915
- type: nauc_map_at_20_diff1
value: 13.621840076777799
- type: nauc_map_at_20_max
value: 2.1877948032398877
- type: nauc_map_at_20_std
value: 14.708956175921836
- type: nauc_map_at_3_diff1
value: 12.356213115233386
- type: nauc_map_at_3_max
value: 0.6382555969907929
- type: nauc_map_at_3_std
value: 12.648818860839631
- type: nauc_map_at_5_diff1
value: 13.217620841821615
- type: nauc_map_at_5_max
value: 1.6310057806749718
- type: nauc_map_at_5_std
value: 14.044546566449604
- type: nauc_mrr_at_1000_diff1
value: 11.228288074444366
- type: nauc_mrr_at_1000_max
value: 2.1865463814828243
- type: nauc_mrr_at_1000_std
value: 14.23306531448693
- type: nauc_mrr_at_100_diff1
value: 11.286796889548375
- type: nauc_mrr_at_100_max
value: 2.2241357919072327
- type: nauc_mrr_at_100_std
value: 14.281813816045224
- type: nauc_mrr_at_10_diff1
value: 11.075822092473016
- type: nauc_mrr_at_10_max
value: 2.2162054135703912
- type: nauc_mrr_at_10_std
value: 13.681220157720839
- type: nauc_mrr_at_1_diff1
value: 11.528700137142192
- type: nauc_mrr_at_1_max
value: -1.0616327606141427
- type: nauc_mrr_at_1_std
value: 9.607487567146205
- type: nauc_mrr_at_20_diff1
value: 11.228823004101752
- type: nauc_mrr_at_20_max
value: 2.0599569750550835
- type: nauc_mrr_at_20_std
value: 14.087665945038868
- type: nauc_mrr_at_3_diff1
value: 9.962700773059229
- type: nauc_mrr_at_3_max
value: 0.803182788584721
- type: nauc_mrr_at_3_std
value: 11.970773976607262
- type: nauc_mrr_at_5_diff1
value: 10.647170590377367
- type: nauc_mrr_at_5_max
value: 1.4292475985002175
- type: nauc_mrr_at_5_std
value: 13.434643518492004
- type: nauc_ndcg_at_1000_diff1
value: 13.872309839650395
- type: nauc_ndcg_at_1000_max
value: 4.043493264977604
- type: nauc_ndcg_at_1000_std
value: 17.85215412789614
- type: nauc_ndcg_at_100_diff1
value: 15.157716814152627
- type: nauc_ndcg_at_100_max
value: 5.128513014092924
- type: nauc_ndcg_at_100_std
value: 19.248784984100055
- type: nauc_ndcg_at_10_diff1
value: 13.681682384919716
- type: nauc_ndcg_at_10_max
value: 4.285127434095809
- type: nauc_ndcg_at_10_std
value: 16.07742970740846
- type: nauc_ndcg_at_1_diff1
value: 15.120007354499974
- type: nauc_ndcg_at_1_max
value: -3.0703979635822294
- type: nauc_ndcg_at_1_std
value: 11.385855369804915
- type: nauc_ndcg_at_20_diff1
value: 14.446523345704215
- type: nauc_ndcg_at_20_max
value: 4.011697476937953
- type: nauc_ndcg_at_20_std
value: 17.69490562952759
- type: nauc_ndcg_at_3_diff1
value: 11.69290471028184
- type: nauc_ndcg_at_3_max
value: 1.247393927629684
- type: nauc_ndcg_at_3_std
value: 13.120021581495237
- type: nauc_ndcg_at_5_diff1
value: 13.217424085450466
- type: nauc_ndcg_at_5_max
value: 2.8952622533595007
- type: nauc_ndcg_at_5_std
value: 15.505263720262077
- type: nauc_precision_at_1000_diff1
value: 5.932456387976172
- type: nauc_precision_at_1000_max
value: 15.258643427240273
- type: nauc_precision_at_1000_std
value: 54.78878776579575
- type: nauc_precision_at_100_diff1
value: 24.980338838668352
- type: nauc_precision_at_100_max
value: 19.630089222816718
- type: nauc_precision_at_100_std
value: 44.36284980312287
- type: nauc_precision_at_10_diff1
value: 14.605327003787561
- type: nauc_precision_at_10_max
value: 9.20284541981761
- type: nauc_precision_at_10_std
value: 20.78623457402565
- type: nauc_precision_at_1_diff1
value: 15.120007354499974
- type: nauc_precision_at_1_max
value: -3.0703979635822294
- type: nauc_precision_at_1_std
value: 11.385855369804915
- type: nauc_precision_at_20_diff1
value: 17.821262915735662
- type: nauc_precision_at_20_max
value: 9.276138177862721
- type: nauc_precision_at_20_std
value: 27.674053798253066
- type: nauc_precision_at_3_diff1
value: 10.142191569673395
- type: nauc_precision_at_3_max
value: 2.5962302005345954
- type: nauc_precision_at_3_std
value: 14.266636924316572
- type: nauc_precision_at_5_diff1
value: 13.43316017158103
- type: nauc_precision_at_5_max
value: 5.886903903159331
- type: nauc_precision_at_5_std
value: 19.17991082336465
- type: nauc_recall_at_1000_diff1
value: 5.932456387976118
- type: nauc_recall_at_1000_max
value: 15.258643427240015
- type: nauc_recall_at_1000_std
value: 54.78878776579585
- type: nauc_recall_at_100_diff1
value: 24.9803388386684
- type: nauc_recall_at_100_max
value: 19.63008922281674
- type: nauc_recall_at_100_std
value: 44.36284980312287
- type: nauc_recall_at_10_diff1
value: 14.605327003787533
- type: nauc_recall_at_10_max
value: 9.202845419817598
- type: nauc_recall_at_10_std
value: 20.786234574025674
- type: nauc_recall_at_1_diff1
value: 15.120007354499974
- type: nauc_recall_at_1_max
value: -3.0703979635822294
- type: nauc_recall_at_1_std
value: 11.385855369804915
- type: nauc_recall_at_20_diff1
value: 17.821262915735687
- type: nauc_recall_at_20_max
value: 9.276138177862721
- type: nauc_recall_at_20_std
value: 27.674053798253063
- type: nauc_recall_at_3_diff1
value: 10.142191569673424
- type: nauc_recall_at_3_max
value: 2.5962302005346283
- type: nauc_recall_at_3_std
value: 14.266636924316611
- type: nauc_recall_at_5_diff1
value: 13.43316017158104
- type: nauc_recall_at_5_max
value: 5.886903903159316
- type: nauc_recall_at_5_std
value: 19.179910823364644
- type: ndcg_at_1
value: 14.366999999999999
- type: ndcg_at_10
value: 30.958999999999996
- type: ndcg_at_100
value: 37.151
- type: ndcg_at_1000
value: 39.105000000000004
- type: ndcg_at_20
value: 33.947
- type: ndcg_at_3
value: 24.248
- type: ndcg_at_5
value: 27.388
- type: precision_at_1
value: 14.366999999999999
- type: precision_at_10
value: 4.972
- type: precision_at_100
value: 0.787
- type: precision_at_1000
value: 0.094
- type: precision_at_20
value: 3.073
- type: precision_at_3
value: 10.408000000000001
- type: precision_at_5
value: 7.781000000000001
- type: recall_at_1
value: 14.366999999999999
- type: recall_at_10
value: 49.716
- type: recall_at_100
value: 78.73400000000001
- type: recall_at_1000
value: 94.31
- type: recall_at_20
value: 61.451
- type: recall_at_3
value: 31.223
- type: recall_at_5
value: 38.905
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 34.51568875245322
- type: v_measure
value: 34.51568875245322
- type: v_measure_std
value: 14.513271698986618
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 25.479168447834
- type: v_measure
value: 25.479168447834
- type: v_measure_std
value: 15.127365143307179
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 50.869152942279385
- type: map
value: 50.869152942279385
- type: mrr
value: 64.48665523457767
- type: nAUC_map_diff1
value: 9.405275178754053
- type: nAUC_map_max
value: 19.315877278529115
- type: nAUC_map_std
value: 5.900875597547612
- type: nAUC_mrr_diff1
value: 16.245084703218915
- type: nAUC_mrr_max
value: 24.07783292507115
- type: nAUC_mrr_std
value: 6.33473363939626
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 44.93952738747455
- type: cosine_spearman
value: 50.2481328896023
- type: euclidean_pearson
value: 36.27384624621707
- type: euclidean_spearman
value: 39.93283760763779
- type: main_score
value: 50.2481328896023
- type: manhattan_pearson
value: 36.08401407803879
- type: manhattan_spearman
value: 39.71656761953843
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 66.76623376623377
- type: f1
value: 66.58269260829559
- type: f1_weighted
value: 66.58269260829559
- type: main_score
value: 66.76623376623377
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 29.936308208154234
- type: v_measure
value: 29.936308208154234
- type: v_measure_std
value: 0.7215670938810923
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 20.954838392051464
- type: v_measure
value: 20.954838392051464
- type: v_measure_std
value: 0.8965195715267004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 24.562
- type: map_at_1
value: 15.02
- type: map_at_10
value: 20.427999999999997
- type: map_at_100
value: 21.373
- type: map_at_1000
value: 21.507
- type: map_at_20
value: 20.817
- type: map_at_3
value: 18.446
- type: map_at_5
value: 19.495
- type: mrr_at_1
value: 18.88412017167382
- type: mrr_at_10
value: 24.709108249880774
- type: mrr_at_100
value: 25.447636063234963
- type: mrr_at_1000
value: 25.524375038696554
- type: mrr_at_20
value: 25.017622854991135
- type: mrr_at_3
value: 22.484501669051028
- type: mrr_at_5
value: 23.750596089651882
- type: nauc_map_at_1000_diff1
value: 43.42894028755547
- type: nauc_map_at_1000_max
value: 24.95758649270816
- type: nauc_map_at_1000_std
value: -7.706318420758787
- type: nauc_map_at_100_diff1
value: 43.441347714893716
- type: nauc_map_at_100_max
value: 24.889731961670954
- type: nauc_map_at_100_std
value: -7.7751595223769385
- type: nauc_map_at_10_diff1
value: 43.45696982465569
- type: nauc_map_at_10_max
value: 24.77670425397581
- type: nauc_map_at_10_std
value: -8.343485599640918
- type: nauc_map_at_1_diff1
value: 51.42210174566944
- type: nauc_map_at_1_max
value: 28.089797808726125
- type: nauc_map_at_1_std
value: -10.75460205574787
- type: nauc_map_at_20_diff1
value: 43.60401853144265
- type: nauc_map_at_20_max
value: 24.81009285835644
- type: nauc_map_at_20_std
value: -8.084296858839087
- type: nauc_map_at_3_diff1
value: 45.66437225027061
- type: nauc_map_at_3_max
value: 26.241375000637255
- type: nauc_map_at_3_std
value: -8.023673739928956
- type: nauc_map_at_5_diff1
value: 44.34950803403163
- type: nauc_map_at_5_max
value: 25.605344102787797
- type: nauc_map_at_5_std
value: -7.729578606050111
- type: nauc_mrr_at_1000_diff1
value: 42.466516440592926
- type: nauc_mrr_at_1000_max
value: 26.561583808773946
- type: nauc_mrr_at_1000_std
value: -6.581126489985946
- type: nauc_mrr_at_100_diff1
value: 42.45189990095361
- type: nauc_mrr_at_100_max
value: 26.546342021409952
- type: nauc_mrr_at_100_std
value: -6.604793297233656
- type: nauc_mrr_at_10_diff1
value: 42.47524736960852
- type: nauc_mrr_at_10_max
value: 26.497454735434516
- type: nauc_mrr_at_10_std
value: -7.02728315906865
- type: nauc_mrr_at_1_diff1
value: 49.27508808292481
- type: nauc_mrr_at_1_max
value: 29.984455140577015
- type: nauc_mrr_at_1_std
value: -9.014211496538051
- type: nauc_mrr_at_20_diff1
value: 42.55661827983652
- type: nauc_mrr_at_20_max
value: 26.61778461797562
- type: nauc_mrr_at_20_std
value: -6.7286634553860605
- type: nauc_mrr_at_3_diff1
value: 44.59868510985749
- type: nauc_mrr_at_3_max
value: 27.979224006353327
- type: nauc_mrr_at_3_std
value: -6.600145919026344
- type: nauc_mrr_at_5_diff1
value: 43.419565883245745
- type: nauc_mrr_at_5_max
value: 27.208115295874276
- type: nauc_mrr_at_5_std
value: -6.363274119557564
- type: nauc_ndcg_at_1000_diff1
value: 38.95623744698938
- type: nauc_ndcg_at_1000_max
value: 24.56099647976347
- type: nauc_ndcg_at_1000_std
value: -4.168023457686674
- type: nauc_ndcg_at_100_diff1
value: 38.95795461648311
- type: nauc_ndcg_at_100_max
value: 23.36031853661375
- type: nauc_ndcg_at_100_std
value: -5.096389923872746
- type: nauc_ndcg_at_10_diff1
value: 39.44809605137147
- type: nauc_ndcg_at_10_max
value: 23.005701741887233
- type: nauc_ndcg_at_10_std
value: -7.777653317149882
- type: nauc_ndcg_at_1_diff1
value: 49.27508808292481
- type: nauc_ndcg_at_1_max
value: 29.984455140577015
- type: nauc_ndcg_at_1_std
value: -9.014211496538051
- type: nauc_ndcg_at_20_diff1
value: 39.86093654490036
- type: nauc_ndcg_at_20_max
value: 23.17882248058422
- type: nauc_ndcg_at_20_std
value: -6.670099046745402
- type: nauc_ndcg_at_3_diff1
value: 43.511768603839656
- type: nauc_ndcg_at_3_max
value: 26.4455794408347
- type: nauc_ndcg_at_3_std
value: -6.938863632733941
- type: nauc_ndcg_at_5_diff1
value: 41.23253574935619
- type: nauc_ndcg_at_5_max
value: 25.170660488066787
- type: nauc_ndcg_at_5_std
value: -6.483968209769728
- type: nauc_precision_at_1000_diff1
value: 5.418747054421262
- type: nauc_precision_at_1000_max
value: 13.36165655183751
- type: nauc_precision_at_1000_std
value: 2.672923620430455
- type: nauc_precision_at_100_diff1
value: 14.422221743126567
- type: nauc_precision_at_100_max
value: 18.284770442778257
- type: nauc_precision_at_100_std
value: 4.696733505936832
- type: nauc_precision_at_10_diff1
value: 25.664593221914544
- type: nauc_precision_at_10_max
value: 19.67027809475413
- type: nauc_precision_at_10_std
value: -5.353375302494022
- type: nauc_precision_at_1_diff1
value: 49.27508808292481
- type: nauc_precision_at_1_max
value: 29.984455140577015
- type: nauc_precision_at_1_std
value: -9.014211496538051
- type: nauc_precision_at_20_diff1
value: 24.837543547066186
- type: nauc_precision_at_20_max
value: 21.03139849965735
- type: nauc_precision_at_20_std
value: -1.5465834217101035
- type: nauc_precision_at_3_diff1
value: 36.92505882001553
- type: nauc_precision_at_3_max
value: 27.21664384591597
- type: nauc_precision_at_3_std
value: -4.025482207425132
- type: nauc_precision_at_5_diff1
value: 31.378297751375655
- type: nauc_precision_at_5_max
value: 25.41971245716922
- type: nauc_precision_at_5_std
value: -3.1766137005736703
- type: nauc_recall_at_1000_diff1
value: 15.880361562528561
- type: nauc_recall_at_1000_max
value: 21.620376796333453
- type: nauc_recall_at_1000_std
value: 14.449819668459035
- type: nauc_recall_at_100_diff1
value: 22.65965407672664
- type: nauc_recall_at_100_max
value: 15.040562113951014
- type: nauc_recall_at_100_std
value: 2.533127026764215
- type: nauc_recall_at_10_diff1
value: 27.84907164827927
- type: nauc_recall_at_10_max
value: 14.455529928215368
- type: nauc_recall_at_10_std
value: -7.165961098350433
- type: nauc_recall_at_1_diff1
value: 51.42210174566944
- type: nauc_recall_at_1_max
value: 28.089797808726125
- type: nauc_recall_at_1_std
value: -10.75460205574787
- type: nauc_recall_at_20_diff1
value: 28.723590336844897
- type: nauc_recall_at_20_max
value: 15.425845034313298
- type: nauc_recall_at_20_std
value: -3.727236758932266
- type: nauc_recall_at_3_diff1
value: 38.45250468228409
- type: nauc_recall_at_3_max
value: 22.586845798409144
- type: nauc_recall_at_3_std
value: -5.463418656894746
- type: nauc_recall_at_5_diff1
value: 33.232361992076925
- type: nauc_recall_at_5_max
value: 19.55128614949583
- type: nauc_recall_at_5_std
value: -4.191684478705544
- type: ndcg_at_1
value: 18.884
- type: ndcg_at_10
value: 24.562
- type: ndcg_at_100
value: 29.28
- type: ndcg_at_1000
value: 32.328
- type: ndcg_at_20
value: 25.698999999999998
- type: ndcg_at_3
value: 21.002000000000002
- type: ndcg_at_5
value: 22.569
- type: precision_at_1
value: 18.884
- type: precision_at_10
value: 4.75
- type: precision_at_100
value: 0.8909999999999999
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_20
value: 2.825
- type: precision_at_3
value: 9.919
- type: precision_at_5
value: 7.439
- type: recall_at_1
value: 15.02
- type: recall_at_10
value: 32.690000000000005
- type: recall_at_100
value: 54.705000000000005
- type: recall_at_1000
value: 75.672
- type: recall_at_20
value: 37.025000000000006
- type: recall_at_3
value: 22.096
- type: recall_at_5
value: 26.332
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 17.577
- type: map_at_1
value: 10.626
- type: map_at_10
value: 14.724
- type: map_at_100
value: 15.334999999999999
- type: map_at_1000
value: 15.432000000000002
- type: map_at_20
value: 15.015
- type: map_at_3
value: 13.253
- type: map_at_5
value: 14.096
- type: mrr_at_1
value: 13.184713375796179
- type: mrr_at_10
value: 17.573223132140324
- type: mrr_at_100
value: 18.142421386649943
- type: mrr_at_1000
value: 18.21977284803512
- type: mrr_at_20
value: 17.847717117599387
- type: mrr_at_3
value: 15.976645435244166
- type: mrr_at_5
value: 16.912951167728234
- type: nauc_map_at_1000_diff1
value: 34.22930272917885
- type: nauc_map_at_1000_max
value: 11.048178348375203
- type: nauc_map_at_1000_std
value: -14.525953998322578
- type: nauc_map_at_100_diff1
value: 34.26631368042434
- type: nauc_map_at_100_max
value: 11.04995519702749
- type: nauc_map_at_100_std
value: -14.56900329770463
- type: nauc_map_at_10_diff1
value: 34.56188099500535
- type: nauc_map_at_10_max
value: 11.335812947730956
- type: nauc_map_at_10_std
value: -14.774228041316102
- type: nauc_map_at_1_diff1
value: 41.01657410715088
- type: nauc_map_at_1_max
value: 14.676062628579208
- type: nauc_map_at_1_std
value: -15.747976021213795
- type: nauc_map_at_20_diff1
value: 34.58550562905891
- type: nauc_map_at_20_max
value: 11.301214458399679
- type: nauc_map_at_20_std
value: -14.60605898197252
- type: nauc_map_at_3_diff1
value: 36.89065303477322
- type: nauc_map_at_3_max
value: 13.045534087262995
- type: nauc_map_at_3_std
value: -15.284432309380543
- type: nauc_map_at_5_diff1
value: 35.68235194152884
- type: nauc_map_at_5_max
value: 11.937407136170663
- type: nauc_map_at_5_std
value: -14.693819301140687
- type: nauc_mrr_at_1000_diff1
value: 31.531336827483784
- type: nauc_mrr_at_1000_max
value: 11.105632282272396
- type: nauc_mrr_at_1000_std
value: -14.119108674165453
- type: nauc_mrr_at_100_diff1
value: 31.544594755540217
- type: nauc_mrr_at_100_max
value: 11.113671306794338
- type: nauc_mrr_at_100_std
value: -14.132034437792996
- type: nauc_mrr_at_10_diff1
value: 31.764894864363885
- type: nauc_mrr_at_10_max
value: 11.483294109130487
- type: nauc_mrr_at_10_std
value: -14.335354900448264
- type: nauc_mrr_at_1_diff1
value: 37.624719649846355
- type: nauc_mrr_at_1_max
value: 15.788530238295865
- type: nauc_mrr_at_1_std
value: -14.666565041941373
- type: nauc_mrr_at_20_diff1
value: 31.729658795128373
- type: nauc_mrr_at_20_max
value: 11.31351363050506
- type: nauc_mrr_at_20_std
value: -14.166055615164383
- type: nauc_mrr_at_3_diff1
value: 33.766525060946925
- type: nauc_mrr_at_3_max
value: 13.551228581837258
- type: nauc_mrr_at_3_std
value: -14.146870608836565
- type: nauc_mrr_at_5_diff1
value: 32.78796281242008
- type: nauc_mrr_at_5_max
value: 12.127028276399306
- type: nauc_mrr_at_5_std
value: -14.004585715138065
- type: nauc_ndcg_at_1000_diff1
value: 29.04002348002839
- type: nauc_ndcg_at_1000_max
value: 8.025087100078355
- type: nauc_ndcg_at_1000_std
value: -13.011007410669404
- type: nauc_ndcg_at_100_diff1
value: 29.56941312327912
- type: nauc_ndcg_at_100_max
value: 7.868622997585217
- type: nauc_ndcg_at_100_std
value: -13.56526446186751
- type: nauc_ndcg_at_10_diff1
value: 31.069874488577714
- type: nauc_ndcg_at_10_max
value: 9.381084343209508
- type: nauc_ndcg_at_10_std
value: -14.42826884203115
- type: nauc_ndcg_at_1_diff1
value: 37.624719649846355
- type: nauc_ndcg_at_1_max
value: 15.788530238295865
- type: nauc_ndcg_at_1_std
value: -14.666565041941373
- type: nauc_ndcg_at_20_diff1
value: 31.301215166787898
- type: nauc_ndcg_at_20_max
value: 9.293563630253674
- type: nauc_ndcg_at_20_std
value: -13.82687452174672
- type: nauc_ndcg_at_3_diff1
value: 34.58227636318353
- type: nauc_ndcg_at_3_max
value: 12.525588259269965
- type: nauc_ndcg_at_3_std
value: -14.986355451385927
- type: nauc_ndcg_at_5_diff1
value: 33.255269234181846
- type: nauc_ndcg_at_5_max
value: 10.57458534812048
- type: nauc_ndcg_at_5_std
value: -14.197064277859486
- type: nauc_precision_at_1000_diff1
value: 1.0181135771862329
- type: nauc_precision_at_1000_max
value: 1.1807058499635334
- type: nauc_precision_at_1000_std
value: -1.0309813092036508
- type: nauc_precision_at_100_diff1
value: 6.969621946433222
- type: nauc_precision_at_100_max
value: -0.6136161466114843
- type: nauc_precision_at_100_std
value: -8.700980039362763
- type: nauc_precision_at_10_diff1
value: 16.25950240535439
- type: nauc_precision_at_10_max
value: 4.000082052965206
- type: nauc_precision_at_10_std
value: -12.77111961978353
- type: nauc_precision_at_1_diff1
value: 37.624719649846355
- type: nauc_precision_at_1_max
value: 15.788530238295865
- type: nauc_precision_at_1_std
value: -14.666565041941373
- type: nauc_precision_at_20_diff1
value: 15.911668872950479
- type: nauc_precision_at_20_max
value: 3.5423943040483596
- type: nauc_precision_at_20_std
value: -11.151829719764532
- type: nauc_precision_at_3_diff1
value: 27.554737668759593
- type: nauc_precision_at_3_max
value: 11.746758199847436
- type: nauc_precision_at_3_std
value: -13.369630973986697
- type: nauc_precision_at_5_diff1
value: 23.2336655958108
- type: nauc_precision_at_5_max
value: 7.48011542791865
- type: nauc_precision_at_5_std
value: -12.366321857676283
- type: nauc_recall_at_1000_diff1
value: 15.298088271885238
- type: nauc_recall_at_1000_max
value: -0.17132424054076142
- type: nauc_recall_at_1000_std
value: -8.079582049072702
- type: nauc_recall_at_100_diff1
value: 19.124701278075655
- type: nauc_recall_at_100_max
value: -0.531155102383489
- type: nauc_recall_at_100_std
value: -10.678047939464944
- type: nauc_recall_at_10_diff1
value: 24.65379572262042
- type: nauc_recall_at_10_max
value: 4.533113433939525
- type: nauc_recall_at_10_std
value: -13.393888124926528
- type: nauc_recall_at_1_diff1
value: 41.01657410715088
- type: nauc_recall_at_1_max
value: 14.676062628579208
- type: nauc_recall_at_1_std
value: -15.747976021213795
- type: nauc_recall_at_20_diff1
value: 25.688718586513087
- type: nauc_recall_at_20_max
value: 4.593853222038649
- type: nauc_recall_at_20_std
value: -11.630024967430156
- type: nauc_recall_at_3_diff1
value: 34.27331025480968
- type: nauc_recall_at_3_max
value: 11.459275588993629
- type: nauc_recall_at_3_std
value: -14.483309864478796
- type: nauc_recall_at_5_diff1
value: 30.423200648314996
- type: nauc_recall_at_5_max
value: 7.545225895554444
- type: nauc_recall_at_5_std
value: -12.871262970313913
- type: ndcg_at_1
value: 13.184999999999999
- type: ndcg_at_10
value: 17.577
- type: ndcg_at_100
value: 20.698
- type: ndcg_at_1000
value: 23.241
- type: ndcg_at_20
value: 18.516
- type: ndcg_at_3
value: 14.893999999999998
- type: ndcg_at_5
value: 16.187
- type: precision_at_1
value: 13.184999999999999
- type: precision_at_10
value: 3.331
- type: precision_at_100
value: 0.596
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 1.9869999999999999
- type: precision_at_3
value: 7.134
- type: precision_at_5
value: 5.299
- type: recall_at_1
value: 10.626
- type: recall_at_10
value: 23.375
- type: recall_at_100
value: 37.394
- type: recall_at_1000
value: 55.247
- type: recall_at_20
value: 26.796999999999997
- type: recall_at_3
value: 15.909999999999998
- type: recall_at_5
value: 19.194
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 29.437
- type: map_at_1
value: 17.888
- type: map_at_10
value: 25.073
- type: map_at_100
value: 26.136
- type: map_at_1000
value: 26.226
- type: map_at_20
value: 25.689
- type: map_at_3
value: 22.728
- type: map_at_5
value: 24.041
- type: mrr_at_1
value: 20.815047021943574
- type: mrr_at_10
value: 27.89286958252473
- type: mrr_at_100
value: 28.802794286187787
- type: mrr_at_1000
value: 28.86882739497353
- type: mrr_at_20
value: 28.42810326976447
- type: mrr_at_3
value: 25.68443051201671
- type: mrr_at_5
value: 26.91640543364678
- type: nauc_map_at_1000_diff1
value: 33.5946580237083
- type: nauc_map_at_1000_max
value: 22.198731968443496
- type: nauc_map_at_1000_std
value: -14.859414860311643
- type: nauc_map_at_100_diff1
value: 33.57426690473051
- type: nauc_map_at_100_max
value: 22.173809793090193
- type: nauc_map_at_100_std
value: -14.919708622923794
- type: nauc_map_at_10_diff1
value: 33.827993178220964
- type: nauc_map_at_10_max
value: 21.896089262118295
- type: nauc_map_at_10_std
value: -15.480373277183586
- type: nauc_map_at_1_diff1
value: 41.344127411157714
- type: nauc_map_at_1_max
value: 19.354856750648754
- type: nauc_map_at_1_std
value: -15.405705447828923
- type: nauc_map_at_20_diff1
value: 33.690870992872114
- type: nauc_map_at_20_max
value: 22.235612292784385
- type: nauc_map_at_20_std
value: -15.045999433793526
- type: nauc_map_at_3_diff1
value: 35.45179878370772
- type: nauc_map_at_3_max
value: 21.164218214230512
- type: nauc_map_at_3_std
value: -15.811705893770043
- type: nauc_map_at_5_diff1
value: 34.52910675660391
- type: nauc_map_at_5_max
value: 21.4545873123739
- type: nauc_map_at_5_std
value: -15.806082866992794
- type: nauc_mrr_at_1000_diff1
value: 32.430362288804965
- type: nauc_mrr_at_1000_max
value: 22.894926380217587
- type: nauc_mrr_at_1000_std
value: -13.876567421135375
- type: nauc_mrr_at_100_diff1
value: 32.410815172612985
- type: nauc_mrr_at_100_max
value: 22.88018305517145
- type: nauc_mrr_at_100_std
value: -13.903212710478932
- type: nauc_mrr_at_10_diff1
value: 32.53750990241074
- type: nauc_mrr_at_10_max
value: 22.951009533260834
- type: nauc_mrr_at_10_std
value: -14.275006694000963
- type: nauc_mrr_at_1_diff1
value: 38.980136538022954
- type: nauc_mrr_at_1_max
value: 21.279604331664046
- type: nauc_mrr_at_1_std
value: -14.697668890398093
- type: nauc_mrr_at_20_diff1
value: 32.50507700378084
- type: nauc_mrr_at_20_max
value: 22.98821108286686
- type: nauc_mrr_at_20_std
value: -13.907026090926019
- type: nauc_mrr_at_3_diff1
value: 33.70001906395751
- type: nauc_mrr_at_3_max
value: 22.330875523253635
- type: nauc_mrr_at_3_std
value: -14.50142635876709
- type: nauc_mrr_at_5_diff1
value: 32.99697325235947
- type: nauc_mrr_at_5_max
value: 22.46241199541291
- type: nauc_mrr_at_5_std
value: -14.47360262529094
- type: nauc_ndcg_at_1000_diff1
value: 30.236372115803118
- type: nauc_ndcg_at_1000_max
value: 23.713724084552624
- type: nauc_ndcg_at_1000_std
value: -11.958672518162782
- type: nauc_ndcg_at_100_diff1
value: 29.59087338984258
- type: nauc_ndcg_at_100_max
value: 23.08282825192995
- type: nauc_ndcg_at_100_std
value: -13.082067281755336
- type: nauc_ndcg_at_10_diff1
value: 30.74501601712305
- type: nauc_ndcg_at_10_max
value: 23.055717827569154
- type: nauc_ndcg_at_10_std
value: -14.967791916143684
- type: nauc_ndcg_at_1_diff1
value: 38.980136538022954
- type: nauc_ndcg_at_1_max
value: 21.279604331664046
- type: nauc_ndcg_at_1_std
value: -14.697668890398093
- type: nauc_ndcg_at_20_diff1
value: 30.492550393433476
- type: nauc_ndcg_at_20_max
value: 23.747922904030187
- type: nauc_ndcg_at_20_std
value: -13.71842246971048
- type: nauc_ndcg_at_3_diff1
value: 33.25558648222959
- type: nauc_ndcg_at_3_max
value: 21.904706506301995
- type: nauc_ndcg_at_3_std
value: -15.260282243837919
- type: nauc_ndcg_at_5_diff1
value: 32.00968711513131
- type: nauc_ndcg_at_5_max
value: 22.105578919328625
- type: nauc_ndcg_at_5_std
value: -15.505474637818791
- type: nauc_precision_at_1000_diff1
value: 3.748415867056487
- type: nauc_precision_at_1000_max
value: 19.87286199756069
- type: nauc_precision_at_1000_std
value: 12.995916220853385
- type: nauc_precision_at_100_diff1
value: 7.7630399429753
- type: nauc_precision_at_100_max
value: 21.451357196024507
- type: nauc_precision_at_100_std
value: 2.3021950941234413
- type: nauc_precision_at_10_diff1
value: 18.016688140696697
- type: nauc_precision_at_10_max
value: 27.145011317989137
- type: nauc_precision_at_10_std
value: -10.218971052167841
- type: nauc_precision_at_1_diff1
value: 38.980136538022954
- type: nauc_precision_at_1_max
value: 21.279604331664046
- type: nauc_precision_at_1_std
value: -14.697668890398093
- type: nauc_precision_at_20_diff1
value: 15.601641200731283
- type: nauc_precision_at_20_max
value: 28.438484773677793
- type: nauc_precision_at_20_std
value: -4.18920607247201
- type: nauc_precision_at_3_diff1
value: 26.503155039022513
- type: nauc_precision_at_3_max
value: 24.431245225471997
- type: nauc_precision_at_3_std
value: -13.310557990144693
- type: nauc_precision_at_5_diff1
value: 22.848052881036775
- type: nauc_precision_at_5_max
value: 24.92061943531904
- type: nauc_precision_at_5_std
value: -13.160087789047134
- type: nauc_recall_at_1000_diff1
value: 15.395456576811345
- type: nauc_recall_at_1000_max
value: 28.53706300634265
- type: nauc_recall_at_1000_std
value: 8.997480780750402
- type: nauc_recall_at_100_diff1
value: 14.83313828844135
- type: nauc_recall_at_100_max
value: 21.73223494593561
- type: nauc_recall_at_100_std
value: -6.438327376588932
- type: nauc_recall_at_10_diff1
value: 22.57441093948928
- type: nauc_recall_at_10_max
value: 23.911746830526855
- type: nauc_recall_at_10_std
value: -14.2446867240276
- type: nauc_recall_at_1_diff1
value: 41.344127411157714
- type: nauc_recall_at_1_max
value: 19.354856750648754
- type: nauc_recall_at_1_std
value: -15.405705447828923
- type: nauc_recall_at_20_diff1
value: 21.57853891265982
- type: nauc_recall_at_20_max
value: 25.63428359152412
- type: nauc_recall_at_20_std
value: -10.24637038225774
- type: nauc_recall_at_3_diff1
value: 29.012075942870307
- type: nauc_recall_at_3_max
value: 21.684282956772527
- type: nauc_recall_at_3_std
value: -15.819102652425986
- type: nauc_recall_at_5_diff1
value: 25.603180196324875
- type: nauc_recall_at_5_max
value: 21.57895881738491
- type: nauc_recall_at_5_std
value: -15.33988357131412
- type: ndcg_at_1
value: 20.815
- type: ndcg_at_10
value: 29.437
- type: ndcg_at_100
value: 34.405
- type: ndcg_at_1000
value: 36.731
- type: ndcg_at_20
value: 31.447000000000003
- type: ndcg_at_3
value: 25.043
- type: ndcg_at_5
value: 27.121000000000002
- type: precision_at_1
value: 20.815
- type: precision_at_10
value: 5.022
- type: precision_at_100
value: 0.831
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 3.063
- type: precision_at_3
value: 11.347999999999999
- type: precision_at_5
value: 8.163
- type: recall_at_1
value: 17.888
- type: recall_at_10
value: 40.050999999999995
- type: recall_at_100
value: 62.304
- type: recall_at_1000
value: 79.648
- type: recall_at_20
value: 47.504000000000005
- type: recall_at_3
value: 27.92
- type: recall_at_5
value: 33.099000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 12.318999999999999
- type: map_at_1
value: 7.439
- type: map_at_10
value: 10.453
- type: map_at_100
value: 11.118
- type: map_at_1000
value: 11.203000000000001
- type: map_at_20
value: 10.783
- type: map_at_3
value: 9.542
- type: map_at_5
value: 10.072000000000001
- type: mrr_at_1
value: 7.909604519774012
- type: mrr_at_10
value: 11.16985023764685
- type: mrr_at_100
value: 11.832077913747288
- type: mrr_at_1000
value: 11.919309892983376
- type: mrr_at_20
value: 11.511441279020126
- type: mrr_at_3
value: 10.263653483992465
- type: mrr_at_5
value: 10.811676082862524
- type: nauc_map_at_1000_diff1
value: 32.34921252321462
- type: nauc_map_at_1000_max
value: 25.238175201641887
- type: nauc_map_at_1000_std
value: -12.684385457763906
- type: nauc_map_at_100_diff1
value: 32.411311747992116
- type: nauc_map_at_100_max
value: 25.230418203614413
- type: nauc_map_at_100_std
value: -12.774240189651245
- type: nauc_map_at_10_diff1
value: 33.425161553577205
- type: nauc_map_at_10_max
value: 25.866699063335414
- type: nauc_map_at_10_std
value: -13.574080057194248
- type: nauc_map_at_1_diff1
value: 42.07051734380826
- type: nauc_map_at_1_max
value: 29.611729422122853
- type: nauc_map_at_1_std
value: -16.18911647655334
- type: nauc_map_at_20_diff1
value: 32.68816511342719
- type: nauc_map_at_20_max
value: 25.193632273003963
- type: nauc_map_at_20_std
value: -13.127925748085254
- type: nauc_map_at_3_diff1
value: 35.69959491722764
- type: nauc_map_at_3_max
value: 26.70968900210543
- type: nauc_map_at_3_std
value: -13.983705840901012
- type: nauc_map_at_5_diff1
value: 34.201233760721536
- type: nauc_map_at_5_max
value: 26.756702950331597
- type: nauc_map_at_5_std
value: -14.045992561578933
- type: nauc_mrr_at_1000_diff1
value: 32.58372535484999
- type: nauc_mrr_at_1000_max
value: 25.60852249881509
- type: nauc_mrr_at_1000_std
value: -10.525258706497146
- type: nauc_mrr_at_100_diff1
value: 32.63357666825145
- type: nauc_mrr_at_100_max
value: 25.610803160451674
- type: nauc_mrr_at_100_std
value: -10.615064835398003
- type: nauc_mrr_at_10_diff1
value: 33.58414592296187
- type: nauc_mrr_at_10_max
value: 26.189659692815198
- type: nauc_mrr_at_10_std
value: -11.128090291483081
- type: nauc_mrr_at_1_diff1
value: 41.87297423575716
- type: nauc_mrr_at_1_max
value: 30.21152487513346
- type: nauc_mrr_at_1_std
value: -13.088709334321894
- type: nauc_mrr_at_20_diff1
value: 32.92761127707861
- type: nauc_mrr_at_20_max
value: 25.572330484555227
- type: nauc_mrr_at_20_std
value: -10.880358177039268
- type: nauc_mrr_at_3_diff1
value: 35.814982269034715
- type: nauc_mrr_at_3_max
value: 27.336080120053612
- type: nauc_mrr_at_3_std
value: -11.715139726637466
- type: nauc_mrr_at_5_diff1
value: 34.39272505037389
- type: nauc_mrr_at_5_max
value: 27.065879569866176
- type: nauc_mrr_at_5_std
value: -11.349730198062169
- type: nauc_ndcg_at_1000_diff1
value: 24.37733648337479
- type: nauc_ndcg_at_1000_max
value: 22.306672889227816
- type: nauc_ndcg_at_1000_std
value: -7.40795385329061
- type: nauc_ndcg_at_100_diff1
value: 25.57485190268497
- type: nauc_ndcg_at_100_max
value: 22.22249279693752
- type: nauc_ndcg_at_100_std
value: -9.24810287845733
- type: nauc_ndcg_at_10_diff1
value: 30.02032664161616
- type: nauc_ndcg_at_10_max
value: 24.006551664845098
- type: nauc_ndcg_at_10_std
value: -12.074132000152268
- type: nauc_ndcg_at_1_diff1
value: 41.87297423575716
- type: nauc_ndcg_at_1_max
value: 30.21152487513346
- type: nauc_ndcg_at_1_std
value: -13.088709334321894
- type: nauc_ndcg_at_20_diff1
value: 28.078186338637895
- type: nauc_ndcg_at_20_max
value: 22.052939425943357
- type: nauc_ndcg_at_20_std
value: -10.856610799799524
- type: nauc_ndcg_at_3_diff1
value: 33.99340966117148
- type: nauc_ndcg_at_3_max
value: 26.231123561950113
- type: nauc_ndcg_at_3_std
value: -13.149285914673788
- type: nauc_ndcg_at_5_diff1
value: 31.577386795220637
- type: nauc_ndcg_at_5_max
value: 26.044822860564537
- type: nauc_ndcg_at_5_std
value: -12.935747245264787
- type: nauc_precision_at_1000_diff1
value: 3.4160460057827615
- type: nauc_precision_at_1000_max
value: 15.101470365693848
- type: nauc_precision_at_1000_std
value: 6.021936145756155
- type: nauc_precision_at_100_diff1
value: 10.34051577188571
- type: nauc_precision_at_100_max
value: 17.739949287459993
- type: nauc_precision_at_100_std
value: -2.7850302945767615
- type: nauc_precision_at_10_diff1
value: 23.699256766170492
- type: nauc_precision_at_10_max
value: 21.644111635657577
- type: nauc_precision_at_10_std
value: -9.002886527107323
- type: nauc_precision_at_1_diff1
value: 41.87297423575716
- type: nauc_precision_at_1_max
value: 30.21152487513346
- type: nauc_precision_at_1_std
value: -13.088709334321894
- type: nauc_precision_at_20_diff1
value: 18.615697890600234
- type: nauc_precision_at_20_max
value: 16.706238697380705
- type: nauc_precision_at_20_std
value: -6.669229407539035
- type: nauc_precision_at_3_diff1
value: 30.128231576097384
- type: nauc_precision_at_3_max
value: 25.815970547260598
- type: nauc_precision_at_3_std
value: -11.105020187773661
- type: nauc_precision_at_5_diff1
value: 26.22135132933206
- type: nauc_precision_at_5_max
value: 25.68404079056778
- type: nauc_precision_at_5_std
value: -11.025493841374265
- type: nauc_recall_at_1000_diff1
value: 2.6931274160757046
- type: nauc_recall_at_1000_max
value: 15.362375617217358
- type: nauc_recall_at_1000_std
value: 6.773822026587906
- type: nauc_recall_at_100_diff1
value: 10.200570830693344
- type: nauc_recall_at_100_max
value: 15.594277536043528
- type: nauc_recall_at_100_std
value: -2.6808579472750433
- type: nauc_recall_at_10_diff1
value: 22.332159970694384
- type: nauc_recall_at_10_max
value: 19.473683201570914
- type: nauc_recall_at_10_std
value: -10.54222194318032
- type: nauc_recall_at_1_diff1
value: 42.07051734380826
- type: nauc_recall_at_1_max
value: 29.611729422122853
- type: nauc_recall_at_1_std
value: -16.18911647655334
- type: nauc_recall_at_20_diff1
value: 17.93695696456013
- type: nauc_recall_at_20_max
value: 14.451988150326796
- type: nauc_recall_at_20_std
value: -7.114744237411945
- type: nauc_recall_at_3_diff1
value: 29.39978062669097
- type: nauc_recall_at_3_max
value: 23.672803483224243
- type: nauc_recall_at_3_std
value: -12.276783799824567
- type: nauc_recall_at_5_diff1
value: 25.150199907797187
- type: nauc_recall_at_5_max
value: 23.826387878798013
- type: nauc_recall_at_5_std
value: -12.059015028347515
- type: ndcg_at_1
value: 7.91
- type: ndcg_at_10
value: 12.318999999999999
- type: ndcg_at_100
value: 15.977
- type: ndcg_at_1000
value: 18.778
- type: ndcg_at_20
value: 13.502
- type: ndcg_at_3
value: 10.465
- type: ndcg_at_5
value: 11.416
- type: precision_at_1
value: 7.91
- type: precision_at_10
value: 1.966
- type: precision_at_100
value: 0.40299999999999997
- type: precision_at_1000
value: 0.068
- type: precision_at_20
value: 1.243
- type: precision_at_3
value: 4.557
- type: precision_at_5
value: 3.299
- type: recall_at_1
value: 7.439
- type: recall_at_10
value: 17.419999999999998
- type: recall_at_100
value: 35.038000000000004
- type: recall_at_1000
value: 57.413000000000004
- type: recall_at_20
value: 21.93
- type: recall_at_3
value: 12.354
- type: recall_at_5
value: 14.633
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 9.536999999999999
- type: map_at_1
value: 4.614999999999999
- type: map_at_10
value: 7.561
- type: map_at_100
value: 8.116
- type: map_at_1000
value: 8.222
- type: map_at_20
value: 7.836
- type: map_at_3
value: 6.883
- type: map_at_5
value: 7.228
- type: mrr_at_1
value: 6.467661691542288
- type: mrr_at_10
value: 9.736881070836299
- type: mrr_at_100
value: 10.374496468134012
- type: mrr_at_1000
value: 10.471177129498566
- type: mrr_at_20
value: 10.082156884941533
- type: mrr_at_3
value: 8.893034825870647
- type: mrr_at_5
value: 9.322139303482588
- type: nauc_map_at_1000_diff1
value: 27.85493854683162
- type: nauc_map_at_1000_max
value: 14.56633986578663
- type: nauc_map_at_1000_std
value: -3.021673836021936
- type: nauc_map_at_100_diff1
value: 27.936619921282674
- type: nauc_map_at_100_max
value: 14.47896262981179
- type: nauc_map_at_100_std
value: -3.0950081552909645
- type: nauc_map_at_10_diff1
value: 29.082319510401156
- type: nauc_map_at_10_max
value: 14.818569334255683
- type: nauc_map_at_10_std
value: -3.2294286883785057
- type: nauc_map_at_1_diff1
value: 35.42423140466729
- type: nauc_map_at_1_max
value: 16.477334855911696
- type: nauc_map_at_1_std
value: 0.1740217969920383
- type: nauc_map_at_20_diff1
value: 28.35014719098508
- type: nauc_map_at_20_max
value: 14.486495972693994
- type: nauc_map_at_20_std
value: -3.5344514995836707
- type: nauc_map_at_3_diff1
value: 30.708923470224615
- type: nauc_map_at_3_max
value: 16.610640309616734
- type: nauc_map_at_3_std
value: -2.1565402537913805
- type: nauc_map_at_5_diff1
value: 30.917948433559072
- type: nauc_map_at_5_max
value: 15.8835854289114
- type: nauc_map_at_5_std
value: -2.052352508617135
- type: nauc_mrr_at_1000_diff1
value: 26.144655832600677
- type: nauc_mrr_at_1000_max
value: 15.00856044432114
- type: nauc_mrr_at_1000_std
value: -2.2881265569815197
- type: nauc_mrr_at_100_diff1
value: 26.189637846056634
- type: nauc_mrr_at_100_max
value: 14.917777916300851
- type: nauc_mrr_at_100_std
value: -2.3147325164993586
- type: nauc_mrr_at_10_diff1
value: 27.01990127597085
- type: nauc_mrr_at_10_max
value: 15.40419417718168
- type: nauc_mrr_at_10_std
value: -2.674292922779472
- type: nauc_mrr_at_1_diff1
value: 30.390493584561273
- type: nauc_mrr_at_1_max
value: 16.190521568413526
- type: nauc_mrr_at_1_std
value: -1.3010045236252
- type: nauc_mrr_at_20_diff1
value: 26.528136157679537
- type: nauc_mrr_at_20_max
value: 14.927249720164436
- type: nauc_mrr_at_20_std
value: -2.6608916364959425
- type: nauc_mrr_at_3_diff1
value: 28.160143790764536
- type: nauc_mrr_at_3_max
value: 16.66611432585005
- type: nauc_mrr_at_3_std
value: -1.22224333858016
- type: nauc_mrr_at_5_diff1
value: 28.55869242352372
- type: nauc_mrr_at_5_max
value: 16.474969067258925
- type: nauc_mrr_at_5_std
value: -1.5626347513943306
- type: nauc_ndcg_at_1000_diff1
value: 21.544939859698307
- type: nauc_ndcg_at_1000_max
value: 14.755828699304862
- type: nauc_ndcg_at_1000_std
value: -1.7863076237242523
- type: nauc_ndcg_at_100_diff1
value: 22.32167700908614
- type: nauc_ndcg_at_100_max
value: 12.652700087403574
- type: nauc_ndcg_at_100_std
value: -2.5781809726643714
- type: nauc_ndcg_at_10_diff1
value: 26.271953879707986
- type: nauc_ndcg_at_10_max
value: 13.263986840025094
- type: nauc_ndcg_at_10_std
value: -4.375683197498729
- type: nauc_ndcg_at_1_diff1
value: 30.390493584561273
- type: nauc_ndcg_at_1_max
value: 16.190521568413526
- type: nauc_ndcg_at_1_std
value: -1.3010045236252
- type: nauc_ndcg_at_20_diff1
value: 24.336332842734663
- type: nauc_ndcg_at_20_max
value: 12.452189608889189
- type: nauc_ndcg_at_20_std
value: -4.8650265938749975
- type: nauc_ndcg_at_3_diff1
value: 29.300326220328298
- type: nauc_ndcg_at_3_max
value: 16.47117072510622
- type: nauc_ndcg_at_3_std
value: -2.220207919860025
- type: nauc_ndcg_at_5_diff1
value: 30.073945519141105
- type: nauc_ndcg_at_5_max
value: 15.56196663246459
- type: nauc_ndcg_at_5_std
value: -2.0307682671566583
- type: nauc_precision_at_1000_diff1
value: 11.515865273501918
- type: nauc_precision_at_1000_max
value: 11.141538927157153
- type: nauc_precision_at_1000_std
value: 2.400163186022152
- type: nauc_precision_at_100_diff1
value: 13.582722420212587
- type: nauc_precision_at_100_max
value: 7.437136768561503
- type: nauc_precision_at_100_std
value: -0.22092802159547792
- type: nauc_precision_at_10_diff1
value: 20.714297659927496
- type: nauc_precision_at_10_max
value: 8.53034933312664
- type: nauc_precision_at_10_std
value: -8.054453292981382
- type: nauc_precision_at_1_diff1
value: 30.390493584561273
- type: nauc_precision_at_1_max
value: 16.190521568413526
- type: nauc_precision_at_1_std
value: -1.3010045236252
- type: nauc_precision_at_20_diff1
value: 17.265966712355187
- type: nauc_precision_at_20_max
value: 6.450759533656093
- type: nauc_precision_at_20_std
value: -7.55297694625844
- type: nauc_precision_at_3_diff1
value: 25.61606753210002
- type: nauc_precision_at_3_max
value: 15.421225393984471
- type: nauc_precision_at_3_std
value: -2.8450287858075076
- type: nauc_precision_at_5_diff1
value: 27.72479093975902
- type: nauc_precision_at_5_max
value: 13.49412679631409
- type: nauc_precision_at_5_std
value: -2.7927475733206126
- type: nauc_recall_at_1000_diff1
value: 7.835747219855714
- type: nauc_recall_at_1000_max
value: 18.559065638035236
- type: nauc_recall_at_1000_std
value: 0.9442185301714288
- type: nauc_recall_at_100_diff1
value: 11.672576080114844
- type: nauc_recall_at_100_max
value: 10.070087221094584
- type: nauc_recall_at_100_std
value: -1.3236162730544536
- type: nauc_recall_at_10_diff1
value: 20.432723742941675
- type: nauc_recall_at_10_max
value: 9.730784308509302
- type: nauc_recall_at_10_std
value: -7.101608185823209
- type: nauc_recall_at_1_diff1
value: 35.42423140466729
- type: nauc_recall_at_1_max
value: 16.477334855911696
- type: nauc_recall_at_1_std
value: 0.1740217969920383
- type: nauc_recall_at_20_diff1
value: 16.449377444932125
- type: nauc_recall_at_20_max
value: 8.820609569844898
- type: nauc_recall_at_20_std
value: -8.115791891435135
- type: nauc_recall_at_3_diff1
value: 27.866759452521546
- type: nauc_recall_at_3_max
value: 15.583037004628281
- type: nauc_recall_at_3_std
value: -2.5113522947362554
- type: nauc_recall_at_5_diff1
value: 28.913625206282827
- type: nauc_recall_at_5_max
value: 13.656441867087423
- type: nauc_recall_at_5_std
value: -2.0101706699245137
- type: ndcg_at_1
value: 6.468
- type: ndcg_at_10
value: 9.536999999999999
- type: ndcg_at_100
value: 12.727
- type: ndcg_at_1000
value: 16.054
- type: ndcg_at_20
value: 10.599
- type: ndcg_at_3
value: 8.198
- type: ndcg_at_5
value: 8.72
- type: precision_at_1
value: 6.468
- type: precision_at_10
value: 1.8030000000000002
- type: precision_at_100
value: 0.4
- type: precision_at_1000
value: 0.08
- type: precision_at_20
value: 1.175
- type: precision_at_3
value: 4.146
- type: precision_at_5
value: 2.91
- type: recall_at_1
value: 4.614999999999999
- type: recall_at_10
value: 13.641
- type: recall_at_100
value: 28.343
- type: recall_at_1000
value: 53.513
- type: recall_at_20
value: 17.626
- type: recall_at_3
value: 9.689
- type: recall_at_5
value: 11.092
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 22.673
- type: map_at_1
value: 13.862
- type: map_at_10
value: 18.985
- type: map_at_100
value: 20.011000000000003
- type: map_at_1000
value: 20.14
- type: map_at_20
value: 19.592000000000002
- type: map_at_3
value: 17.262
- type: map_at_5
value: 18.201999999999998
- type: mrr_at_1
value: 17.420596727622716
- type: mrr_at_10
value: 22.79194738530637
- type: mrr_at_100
value: 23.698655522091876
- type: mrr_at_1000
value: 23.784973609637326
- type: mrr_at_20
value: 23.32851907640269
- type: mrr_at_3
value: 20.98171318575555
- type: mrr_at_5
value: 21.963426371511066
- type: nauc_map_at_1000_diff1
value: 37.50202748614961
- type: nauc_map_at_1000_max
value: 22.820578985331714
- type: nauc_map_at_1000_std
value: -9.798120886009576
- type: nauc_map_at_100_diff1
value: 37.482144792681915
- type: nauc_map_at_100_max
value: 22.787124981873962
- type: nauc_map_at_100_std
value: -9.939684353514751
- type: nauc_map_at_10_diff1
value: 37.56652119670892
- type: nauc_map_at_10_max
value: 22.7709385041439
- type: nauc_map_at_10_std
value: -10.937574706159918
- type: nauc_map_at_1_diff1
value: 44.71895338009894
- type: nauc_map_at_1_max
value: 26.314990822967566
- type: nauc_map_at_1_std
value: -12.912408072880154
- type: nauc_map_at_20_diff1
value: 37.52216335018155
- type: nauc_map_at_20_max
value: 22.674575014652156
- type: nauc_map_at_20_std
value: -10.158408858970189
- type: nauc_map_at_3_diff1
value: 39.405529653195345
- type: nauc_map_at_3_max
value: 22.982595052724633
- type: nauc_map_at_3_std
value: -11.109477632625818
- type: nauc_map_at_5_diff1
value: 38.314949623793964
- type: nauc_map_at_5_max
value: 22.804229725707128
- type: nauc_map_at_5_std
value: -11.20654011218176
- type: nauc_mrr_at_1000_diff1
value: 37.73954338994198
- type: nauc_mrr_at_1000_max
value: 24.347295125533847
- type: nauc_mrr_at_1000_std
value: -6.081976777710488
- type: nauc_mrr_at_100_diff1
value: 37.72091246039195
- type: nauc_mrr_at_100_max
value: 24.326697909928512
- type: nauc_mrr_at_100_std
value: -6.134060402517325
- type: nauc_mrr_at_10_diff1
value: 37.77963596263834
- type: nauc_mrr_at_10_max
value: 24.443363006607246
- type: nauc_mrr_at_10_std
value: -6.564116350434379
- type: nauc_mrr_at_1_diff1
value: 44.31814050000648
- type: nauc_mrr_at_1_max
value: 28.47074057988201
- type: nauc_mrr_at_1_std
value: -7.611703048131507
- type: nauc_mrr_at_20_diff1
value: 37.77127666078186
- type: nauc_mrr_at_20_max
value: 24.322464206304364
- type: nauc_mrr_at_20_std
value: -6.129280943654235
- type: nauc_mrr_at_3_diff1
value: 39.21931704565289
- type: nauc_mrr_at_3_max
value: 25.04012337311658
- type: nauc_mrr_at_3_std
value: -6.970545449370144
- type: nauc_mrr_at_5_diff1
value: 38.291877885130035
- type: nauc_mrr_at_5_max
value: 24.407885471857934
- type: nauc_mrr_at_5_std
value: -6.9467171690425555
- type: nauc_ndcg_at_1000_diff1
value: 34.75939815498105
- type: nauc_ndcg_at_1000_max
value: 23.038211375147064
- type: nauc_ndcg_at_1000_std
value: -3.9212021449032166
- type: nauc_ndcg_at_100_diff1
value: 34.225484119420685
- type: nauc_ndcg_at_100_max
value: 21.949532199023775
- type: nauc_ndcg_at_100_std
value: -6.282672591028227
- type: nauc_ndcg_at_10_diff1
value: 34.76503420762441
- type: nauc_ndcg_at_10_max
value: 21.80115993532717
- type: nauc_ndcg_at_10_std
value: -9.156279562951086
- type: nauc_ndcg_at_1_diff1
value: 44.31814050000648
- type: nauc_ndcg_at_1_max
value: 28.47074057988201
- type: nauc_ndcg_at_1_std
value: -7.611703048131507
- type: nauc_ndcg_at_20_diff1
value: 34.75665397587021
- type: nauc_ndcg_at_20_max
value: 21.47779097652192
- type: nauc_ndcg_at_20_std
value: -7.084588961923504
- type: nauc_ndcg_at_3_diff1
value: 37.79651567501644
- type: nauc_ndcg_at_3_max
value: 23.06802621879772
- type: nauc_ndcg_at_3_std
value: -8.89897399671579
- type: nauc_ndcg_at_5_diff1
value: 36.134676967579345
- type: nauc_ndcg_at_5_max
value: 22.102092743372207
- type: nauc_ndcg_at_5_std
value: -9.492898178283266
- type: nauc_precision_at_1000_diff1
value: 5.357455014788801
- type: nauc_precision_at_1000_max
value: 14.85638401398183
- type: nauc_precision_at_1000_std
value: 21.80178932985711
- type: nauc_precision_at_100_diff1
value: 14.240207692711394
- type: nauc_precision_at_100_max
value: 18.025081141588593
- type: nauc_precision_at_100_std
value: 12.582823940044316
- type: nauc_precision_at_10_diff1
value: 24.571600697750515
- type: nauc_precision_at_10_max
value: 22.44456910468203
- type: nauc_precision_at_10_std
value: -0.3674289058875876
- type: nauc_precision_at_1_diff1
value: 44.31814050000648
- type: nauc_precision_at_1_max
value: 28.47074057988201
- type: nauc_precision_at_1_std
value: -7.611703048131507
- type: nauc_precision_at_20_diff1
value: 22.18692030745845
- type: nauc_precision_at_20_max
value: 21.294946107863037
- type: nauc_precision_at_20_std
value: 7.446066946400802
- type: nauc_precision_at_3_diff1
value: 33.655528912683856
- type: nauc_precision_at_3_max
value: 22.942705596476994
- type: nauc_precision_at_3_std
value: -4.390763292300364
- type: nauc_precision_at_5_diff1
value: 29.476637523564253
- type: nauc_precision_at_5_max
value: 22.7524955598491
- type: nauc_precision_at_5_std
value: -3.2658428948736065
- type: nauc_recall_at_1000_diff1
value: 21.31836026941489
- type: nauc_recall_at_1000_max
value: 23.105011819172976
- type: nauc_recall_at_1000_std
value: 20.767912329403224
- type: nauc_recall_at_100_diff1
value: 22.076231078942982
- type: nauc_recall_at_100_max
value: 16.645471464491184
- type: nauc_recall_at_100_std
value: 0.4030134202446985
- type: nauc_recall_at_10_diff1
value: 26.15340251656349
- type: nauc_recall_at_10_max
value: 16.755838888332015
- type: nauc_recall_at_10_std
value: -8.14938897575815
- type: nauc_recall_at_1_diff1
value: 44.71895338009894
- type: nauc_recall_at_1_max
value: 26.314990822967566
- type: nauc_recall_at_1_std
value: -12.912408072880154
- type: nauc_recall_at_20_diff1
value: 25.782369545571836
- type: nauc_recall_at_20_max
value: 15.354827246395425
- type: nauc_recall_at_20_std
value: -2.163603339481153
- type: nauc_recall_at_3_diff1
value: 33.041111194529385
- type: nauc_recall_at_3_max
value: 18.645299004370376
- type: nauc_recall_at_3_std
value: -8.7840862339198
- type: nauc_recall_at_5_diff1
value: 29.85773984615027
- type: nauc_recall_at_5_max
value: 17.218466036356382
- type: nauc_recall_at_5_std
value: -9.790692152196558
- type: ndcg_at_1
value: 17.421
- type: ndcg_at_10
value: 22.673
- type: ndcg_at_100
value: 27.595
- type: ndcg_at_1000
value: 30.802000000000003
- type: ndcg_at_20
value: 24.712
- type: ndcg_at_3
value: 19.589000000000002
- type: ndcg_at_5
value: 20.985
- type: precision_at_1
value: 17.421
- type: precision_at_10
value: 4.148000000000001
- type: precision_at_100
value: 0.796
- type: precision_at_1000
value: 0.125
- type: precision_at_20
value: 2.6710000000000003
- type: precision_at_3
value: 9.175
- type: precision_at_5
value: 6.641
- type: recall_at_1
value: 13.862
- type: recall_at_10
value: 30.293999999999997
- type: recall_at_100
value: 51.597
- type: recall_at_1000
value: 74.434
- type: recall_at_20
value: 37.662
- type: recall_at_3
value: 21.464
- type: recall_at_5
value: 25.176
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 15.293000000000001
- type: map_at_1
value: 7.704
- type: map_at_10
value: 12.286999999999999
- type: map_at_100
value: 13.15
- type: map_at_1000
value: 13.272
- type: map_at_20
value: 12.658
- type: map_at_3
value: 10.755
- type: map_at_5
value: 11.704
- type: mrr_at_1
value: 9.817351598173515
- type: mrr_at_10
value: 14.816853301442338
- type: mrr_at_100
value: 15.672904882490815
- type: mrr_at_1000
value: 15.76388387346393
- type: mrr_at_20
value: 15.168855962398414
- type: mrr_at_3
value: 13.03272450532724
- type: mrr_at_5
value: 14.27130898021309
- type: nauc_map_at_1000_diff1
value: 30.70995928649326
- type: nauc_map_at_1000_max
value: 17.99236455109929
- type: nauc_map_at_1000_std
value: -4.744095825584005
- type: nauc_map_at_100_diff1
value: 30.742551921172907
- type: nauc_map_at_100_max
value: 17.969359657446756
- type: nauc_map_at_100_std
value: -4.8474996319166115
- type: nauc_map_at_10_diff1
value: 30.865974768679422
- type: nauc_map_at_10_max
value: 17.586221702642113
- type: nauc_map_at_10_std
value: -5.806081610988106
- type: nauc_map_at_1_diff1
value: 39.31770482990544
- type: nauc_map_at_1_max
value: 23.358775243094858
- type: nauc_map_at_1_std
value: -6.938138600011914
- type: nauc_map_at_20_diff1
value: 30.936807024792866
- type: nauc_map_at_20_max
value: 17.727266610319752
- type: nauc_map_at_20_std
value: -5.689597766541971
- type: nauc_map_at_3_diff1
value: 31.84302282837316
- type: nauc_map_at_3_max
value: 18.890913087610016
- type: nauc_map_at_3_std
value: -5.874926487057839
- type: nauc_map_at_5_diff1
value: 31.33601525511343
- type: nauc_map_at_5_max
value: 18.13722835438835
- type: nauc_map_at_5_std
value: -5.903505998262701
- type: nauc_mrr_at_1000_diff1
value: 30.824210026681815
- type: nauc_mrr_at_1000_max
value: 18.745323422852884
- type: nauc_mrr_at_1000_std
value: -4.534226707025272
- type: nauc_mrr_at_100_diff1
value: 30.82236446977708
- type: nauc_mrr_at_100_max
value: 18.730683687926376
- type: nauc_mrr_at_100_std
value: -4.575292468873761
- type: nauc_mrr_at_10_diff1
value: 31.060374276756082
- type: nauc_mrr_at_10_max
value: 18.645036322288874
- type: nauc_mrr_at_10_std
value: -5.016023430324253
- type: nauc_mrr_at_1_diff1
value: 38.68215813259208
- type: nauc_mrr_at_1_max
value: 23.33491195568566
- type: nauc_mrr_at_1_std
value: -5.724342694059754
- type: nauc_mrr_at_20_diff1
value: 30.96830685570513
- type: nauc_mrr_at_20_max
value: 18.65991475472625
- type: nauc_mrr_at_20_std
value: -5.151132486674488
- type: nauc_mrr_at_3_diff1
value: 33.20818411345752
- type: nauc_mrr_at_3_max
value: 19.805328570275787
- type: nauc_mrr_at_3_std
value: -5.583476066015216
- type: nauc_mrr_at_5_diff1
value: 32.08670879774766
- type: nauc_mrr_at_5_max
value: 19.150199138096006
- type: nauc_mrr_at_5_std
value: -5.289562660908718
- type: nauc_ndcg_at_1000_diff1
value: 27.419993146649347
- type: nauc_ndcg_at_1000_max
value: 17.865295253483986
- type: nauc_ndcg_at_1000_std
value: 0.8433443378816331
- type: nauc_ndcg_at_100_diff1
value: 27.785593394456438
- type: nauc_ndcg_at_100_max
value: 17.36462911761924
- type: nauc_ndcg_at_100_std
value: -0.861870910333249
- type: nauc_ndcg_at_10_diff1
value: 28.520474177411316
- type: nauc_ndcg_at_10_max
value: 16.029495315712612
- type: nauc_ndcg_at_10_std
value: -5.152973362702556
- type: nauc_ndcg_at_1_diff1
value: 38.68215813259208
- type: nauc_ndcg_at_1_max
value: 23.33491195568566
- type: nauc_ndcg_at_1_std
value: -5.724342694059754
- type: nauc_ndcg_at_20_diff1
value: 28.46313814469309
- type: nauc_ndcg_at_20_max
value: 16.38969273519121
- type: nauc_ndcg_at_20_std
value: -4.959987614545604
- type: nauc_ndcg_at_3_diff1
value: 30.698566490189634
- type: nauc_ndcg_at_3_max
value: 17.982548143559644
- type: nauc_ndcg_at_3_std
value: -5.278271421619477
- type: nauc_ndcg_at_5_diff1
value: 29.636567101705317
- type: nauc_ndcg_at_5_max
value: 16.842019563949414
- type: nauc_ndcg_at_5_std
value: -5.244691280356338
- type: nauc_precision_at_1000_diff1
value: 3.2568477439236423
- type: nauc_precision_at_1000_max
value: 5.854190358818539
- type: nauc_precision_at_1000_std
value: 12.378396860163651
- type: nauc_precision_at_100_diff1
value: 17.19646053596674
- type: nauc_precision_at_100_max
value: 17.752252663819245
- type: nauc_precision_at_100_std
value: 10.716061366607347
- type: nauc_precision_at_10_diff1
value: 24.438035023914356
- type: nauc_precision_at_10_max
value: 11.62137662391061
- type: nauc_precision_at_10_std
value: -2.2145856337836145
- type: nauc_precision_at_1_diff1
value: 38.68215813259208
- type: nauc_precision_at_1_max
value: 23.33491195568566
- type: nauc_precision_at_1_std
value: -5.724342694059754
- type: nauc_precision_at_20_diff1
value: 23.873148946276
- type: nauc_precision_at_20_max
value: 13.631140381743112
- type: nauc_precision_at_20_std
value: -1.4990421317736284
- type: nauc_precision_at_3_diff1
value: 29.01911339455403
- type: nauc_precision_at_3_max
value: 14.700647907186545
- type: nauc_precision_at_3_std
value: -4.86232686405427
- type: nauc_precision_at_5_diff1
value: 26.862904658334237
- type: nauc_precision_at_5_max
value: 12.722494369712894
- type: nauc_precision_at_5_std
value: -3.385845556142588
- type: nauc_recall_at_1000_diff1
value: 16.820952601611726
- type: nauc_recall_at_1000_max
value: 17.505402308601493
- type: nauc_recall_at_1000_std
value: 21.76963936293203
- type: nauc_recall_at_100_diff1
value: 20.367764559987275
- type: nauc_recall_at_100_max
value: 14.482726846581013
- type: nauc_recall_at_100_std
value: 9.200207833977778
- type: nauc_recall_at_10_diff1
value: 22.064850233346974
- type: nauc_recall_at_10_max
value: 11.917573548503588
- type: nauc_recall_at_10_std
value: -4.192506158824245
- type: nauc_recall_at_1_diff1
value: 39.31770482990544
- type: nauc_recall_at_1_max
value: 23.358775243094858
- type: nauc_recall_at_1_std
value: -6.938138600011914
- type: nauc_recall_at_20_diff1
value: 22.176277404030955
- type: nauc_recall_at_20_max
value: 12.662350468338898
- type: nauc_recall_at_20_std
value: -3.626001428459603
- type: nauc_recall_at_3_diff1
value: 26.112181916127327
- type: nauc_recall_at_3_max
value: 15.21727118017597
- type: nauc_recall_at_3_std
value: -4.616174980133607
- type: nauc_recall_at_5_diff1
value: 25.10404919697336
- type: nauc_recall_at_5_max
value: 13.444397921941032
- type: nauc_recall_at_5_std
value: -4.928639126920709
- type: ndcg_at_1
value: 9.817
- type: ndcg_at_10
value: 15.293000000000001
- type: ndcg_at_100
value: 20.054
- type: ndcg_at_1000
value: 23.339
- type: ndcg_at_20
value: 16.544
- type: ndcg_at_3
value: 12.437
- type: ndcg_at_5
value: 14.063999999999998
- type: precision_at_1
value: 9.817
- type: precision_at_10
value: 3.0020000000000002
- type: precision_at_100
value: 0.635
- type: precision_at_1000
value: 0.109
- type: precision_at_20
value: 1.8610000000000002
- type: precision_at_3
value: 6.202
- type: precision_at_5
value: 4.886
- type: recall_at_1
value: 7.704
- type: recall_at_10
value: 21.961
- type: recall_at_100
value: 44.032
- type: recall_at_1000
value: 67.499
- type: recall_at_20
value: 26.505000000000003
- type: recall_at_3
value: 14.318
- type: recall_at_5
value: 18.329
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 16.78825
- type: ndcg_at_10
value: 16.78825
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 14.277000000000001
- type: map_at_1
value: 8.190999999999999
- type: map_at_10
value: 11.781
- type: map_at_100
value: 12.562000000000001
- type: map_at_1000
value: 12.631
- type: map_at_20
value: 12.212
- type: map_at_3
value: 10.596
- type: map_at_5
value: 11.137
- type: mrr_at_1
value: 9.662576687116564
- type: mrr_at_10
value: 13.566194371409097
- type: mrr_at_100
value: 14.35438357851827
- type: mrr_at_1000
value: 14.416260600957873
- type: mrr_at_20
value: 14.017368365456647
- type: mrr_at_3
value: 12.269938650306749
- type: mrr_at_5
value: 12.89877300613497
- type: nauc_map_at_1000_diff1
value: 28.494760013468944
- type: nauc_map_at_1000_max
value: 7.476047371951163
- type: nauc_map_at_1000_std
value: 0.3232780689368288
- type: nauc_map_at_100_diff1
value: 28.507464461025823
- type: nauc_map_at_100_max
value: 7.475183748748348
- type: nauc_map_at_100_std
value: 0.2737066973090811
- type: nauc_map_at_10_diff1
value: 29.374711539470876
- type: nauc_map_at_10_max
value: 7.680730772570471
- type: nauc_map_at_10_std
value: -0.4189308390979677
- type: nauc_map_at_1_diff1
value: 37.07826097136785
- type: nauc_map_at_1_max
value: 6.352548833575532
- type: nauc_map_at_1_std
value: -6.886310756980893
- type: nauc_map_at_20_diff1
value: 28.591552825421058
- type: nauc_map_at_20_max
value: 7.491608473519193
- type: nauc_map_at_20_std
value: -0.013891455206956258
- type: nauc_map_at_3_diff1
value: 31.615221483424506
- type: nauc_map_at_3_max
value: 7.4916381326615085
- type: nauc_map_at_3_std
value: -1.786861683125983
- type: nauc_map_at_5_diff1
value: 29.83969296835951
- type: nauc_map_at_5_max
value: 8.266503368070852
- type: nauc_map_at_5_std
value: -1.8524006370494106
- type: nauc_mrr_at_1000_diff1
value: 28.171729914712003
- type: nauc_mrr_at_1000_max
value: 10.266844386903658
- type: nauc_mrr_at_1000_std
value: 3.482174253497324
- type: nauc_mrr_at_100_diff1
value: 28.16048425519645
- type: nauc_mrr_at_100_max
value: 10.266284678810369
- type: nauc_mrr_at_100_std
value: 3.4548990001249376
- type: nauc_mrr_at_10_diff1
value: 29.134018697921604
- type: nauc_mrr_at_10_max
value: 10.590715070628335
- type: nauc_mrr_at_10_std
value: 3.0769963058724645
- type: nauc_mrr_at_1_diff1
value: 37.41238383576296
- type: nauc_mrr_at_1_max
value: 11.340928281739506
- type: nauc_mrr_at_1_std
value: -0.5433557112254184
- type: nauc_mrr_at_20_diff1
value: 28.316696172551715
- type: nauc_mrr_at_20_max
value: 10.394592859943648
- type: nauc_mrr_at_20_std
value: 3.41367994986037
- type: nauc_mrr_at_3_diff1
value: 31.53646379813021
- type: nauc_mrr_at_3_max
value: 11.25824278509091
- type: nauc_mrr_at_3_std
value: 2.032639373150033
- type: nauc_mrr_at_5_diff1
value: 29.76367248309379
- type: nauc_mrr_at_5_max
value: 11.514421859188285
- type: nauc_mrr_at_5_std
value: 1.9071416984988665
- type: nauc_ndcg_at_1000_diff1
value: 22.816709800424086
- type: nauc_ndcg_at_1000_max
value: 6.76134502501311
- type: nauc_ndcg_at_1000_std
value: 4.889122718576628
- type: nauc_ndcg_at_100_diff1
value: 23.114826499003506
- type: nauc_ndcg_at_100_max
value: 6.974135301600846
- type: nauc_ndcg_at_100_std
value: 4.223250899695817
- type: nauc_ndcg_at_10_diff1
value: 26.15493633625973
- type: nauc_ndcg_at_10_max
value: 7.644014903928427
- type: nauc_ndcg_at_10_std
value: 2.2646658033198674
- type: nauc_ndcg_at_1_diff1
value: 37.41238383576296
- type: nauc_ndcg_at_1_max
value: 11.340928281739506
- type: nauc_ndcg_at_1_std
value: -0.5433557112254184
- type: nauc_ndcg_at_20_diff1
value: 23.48250159962753
- type: nauc_ndcg_at_20_max
value: 7.075746219859647
- type: nauc_ndcg_at_20_std
value: 3.114019202144443
- type: nauc_ndcg_at_3_diff1
value: 30.01131911446901
- type: nauc_ndcg_at_3_max
value: 8.753529808913752
- type: nauc_ndcg_at_3_std
value: 0.011003213403127658
- type: nauc_ndcg_at_5_diff1
value: 27.256758380210407
- type: nauc_ndcg_at_5_max
value: 9.44213402392306
- type: nauc_ndcg_at_5_std
value: -0.4138252778844653
- type: nauc_precision_at_1000_diff1
value: 7.001088454879175
- type: nauc_precision_at_1000_max
value: 8.846884012856261
- type: nauc_precision_at_1000_std
value: 16.888325017674113
- type: nauc_precision_at_100_diff1
value: 13.337788423127183
- type: nauc_precision_at_100_max
value: 10.735525205336204
- type: nauc_precision_at_100_std
value: 18.432785879340496
- type: nauc_precision_at_10_diff1
value: 20.428021400687726
- type: nauc_precision_at_10_max
value: 10.972196325341264
- type: nauc_precision_at_10_std
value: 11.840958270692058
- type: nauc_precision_at_1_diff1
value: 37.41238383576296
- type: nauc_precision_at_1_max
value: 11.340928281739506
- type: nauc_precision_at_1_std
value: -0.5433557112254184
- type: nauc_precision_at_20_diff1
value: 13.798739769537486
- type: nauc_precision_at_20_max
value: 9.28493435984823
- type: nauc_precision_at_20_std
value: 13.512331751813006
- type: nauc_precision_at_3_diff1
value: 25.935998697550723
- type: nauc_precision_at_3_max
value: 13.351264427096513
- type: nauc_precision_at_3_std
value: 5.711419195249537
- type: nauc_precision_at_5_diff1
value: 21.868676060773737
- type: nauc_precision_at_5_max
value: 14.624950188505082
- type: nauc_precision_at_5_std
value: 5.849467743768889
- type: nauc_recall_at_1000_diff1
value: 9.495271696108947
- type: nauc_recall_at_1000_max
value: 1.3687872763502438
- type: nauc_recall_at_1000_std
value: 12.332755043313568
- type: nauc_recall_at_100_diff1
value: 11.328361148075098
- type: nauc_recall_at_100_max
value: 3.242638378451513
- type: nauc_recall_at_100_std
value: 8.854790024197182
- type: nauc_recall_at_10_diff1
value: 18.50246729159376
- type: nauc_recall_at_10_max
value: 4.5866913788483075
- type: nauc_recall_at_10_std
value: 5.072599741721437
- type: nauc_recall_at_1_diff1
value: 37.07826097136785
- type: nauc_recall_at_1_max
value: 6.352548833575532
- type: nauc_recall_at_1_std
value: -6.886310756980893
- type: nauc_recall_at_20_diff1
value: 11.58841272443898
- type: nauc_recall_at_20_max
value: 3.389954825054431
- type: nauc_recall_at_20_std
value: 6.3212500117975745
- type: nauc_recall_at_3_diff1
value: 25.659447186438744
- type: nauc_recall_at_3_max
value: 6.454745612611794
- type: nauc_recall_at_3_std
value: 0.2596093510099327
- type: nauc_recall_at_5_diff1
value: 20.519612608554333
- type: nauc_recall_at_5_max
value: 8.858809195363513
- type: nauc_recall_at_5_std
value: -0.328354913491512
- type: ndcg_at_1
value: 9.663
- type: ndcg_at_10
value: 14.277000000000001
- type: ndcg_at_100
value: 18.331
- type: ndcg_at_1000
value: 20.422
- type: ndcg_at_20
value: 15.870999999999999
- type: ndcg_at_3
value: 11.83
- type: ndcg_at_5
value: 12.731
- type: precision_at_1
value: 9.663
- type: precision_at_10
value: 2.485
- type: precision_at_100
value: 0.485
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_20
value: 1.603
- type: precision_at_3
value: 5.521
- type: precision_at_5
value: 3.8960000000000004
- type: recall_at_1
value: 8.190999999999999
- type: recall_at_10
value: 20.403
- type: recall_at_100
value: 39.175
- type: recall_at_1000
value: 55.201
- type: recall_at_20
value: 26.586
- type: recall_at_3
value: 13.245999999999999
- type: recall_at_5
value: 15.590000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 9.931
- type: map_at_1
value: 5.492
- type: map_at_10
value: 8.037999999999998
- type: map_at_100
value: 8.61
- type: map_at_1000
value: 8.701
- type: map_at_20
value: 8.333
- type: map_at_3
value: 7.12
- type: map_at_5
value: 7.617999999999999
- type: mrr_at_1
value: 6.98554714384033
- type: mrr_at_10
value: 9.87457531762418
- type: mrr_at_100
value: 10.475652661666084
- type: mrr_at_1000
value: 10.554930650207101
- type: mrr_at_20
value: 10.193554923811691
- type: mrr_at_3
value: 8.878183069511364
- type: mrr_at_5
value: 9.421885753613212
- type: nauc_map_at_1000_diff1
value: 20.565769845802883
- type: nauc_map_at_1000_max
value: 22.378140712062702
- type: nauc_map_at_1000_std
value: -10.141998597905209
- type: nauc_map_at_100_diff1
value: 20.616071849724367
- type: nauc_map_at_100_max
value: 22.307147017405626
- type: nauc_map_at_100_std
value: -10.296370391700487
- type: nauc_map_at_10_diff1
value: 21.237343234707716
- type: nauc_map_at_10_max
value: 22.563538974901096
- type: nauc_map_at_10_std
value: -10.802508716317789
- type: nauc_map_at_1_diff1
value: 31.928795700485264
- type: nauc_map_at_1_max
value: 26.571949087855863
- type: nauc_map_at_1_std
value: -13.137832080177475
- type: nauc_map_at_20_diff1
value: 20.905817715628757
- type: nauc_map_at_20_max
value: 22.204411416095372
- type: nauc_map_at_20_std
value: -10.629540907583285
- type: nauc_map_at_3_diff1
value: 23.769763645460394
- type: nauc_map_at_3_max
value: 23.991820392859324
- type: nauc_map_at_3_std
value: -11.170464870404574
- type: nauc_map_at_5_diff1
value: 22.124738165686612
- type: nauc_map_at_5_max
value: 23.437624479894737
- type: nauc_map_at_5_std
value: -11.100064096434863
- type: nauc_mrr_at_1000_diff1
value: 20.875194720816516
- type: nauc_mrr_at_1000_max
value: 23.727809338044338
- type: nauc_mrr_at_1000_std
value: -9.671921004196532
- type: nauc_mrr_at_100_diff1
value: 20.898031760155444
- type: nauc_mrr_at_100_max
value: 23.69880254672094
- type: nauc_mrr_at_100_std
value: -9.749188126426537
- type: nauc_mrr_at_10_diff1
value: 21.47642365262958
- type: nauc_mrr_at_10_max
value: 23.9553444855699
- type: nauc_mrr_at_10_std
value: -10.254245788150909
- type: nauc_mrr_at_1_diff1
value: 31.817967070779485
- type: nauc_mrr_at_1_max
value: 28.407226975749662
- type: nauc_mrr_at_1_std
value: -12.877843518325626
- type: nauc_mrr_at_20_diff1
value: 21.221710247436363
- type: nauc_mrr_at_20_max
value: 23.651237368337046
- type: nauc_mrr_at_20_std
value: -10.049703296585264
- type: nauc_mrr_at_3_diff1
value: 23.87705660312213
- type: nauc_mrr_at_3_max
value: 25.052558327273665
- type: nauc_mrr_at_3_std
value: -10.568919759357295
- type: nauc_mrr_at_5_diff1
value: 22.22870976844402
- type: nauc_mrr_at_5_max
value: 24.76085984920155
- type: nauc_mrr_at_5_std
value: -10.61112901298338
- type: nauc_ndcg_at_1000_diff1
value: 14.524543149180666
- type: nauc_ndcg_at_1000_max
value: 21.91048785762314
- type: nauc_ndcg_at_1000_std
value: -4.760370494569006
- type: nauc_ndcg_at_100_diff1
value: 15.420894094862023
- type: nauc_ndcg_at_100_max
value: 20.554960781041178
- type: nauc_ndcg_at_100_std
value: -7.461070164013067
- type: nauc_ndcg_at_10_diff1
value: 17.964603768726196
- type: nauc_ndcg_at_10_max
value: 21.28304880117283
- type: nauc_ndcg_at_10_std
value: -9.880598170205756
- type: nauc_ndcg_at_1_diff1
value: 31.817967070779485
- type: nauc_ndcg_at_1_max
value: 28.407226975749662
- type: nauc_ndcg_at_1_std
value: -12.877843518325626
- type: nauc_ndcg_at_20_diff1
value: 17.087932814491737
- type: nauc_ndcg_at_20_max
value: 20.159454189531306
- type: nauc_ndcg_at_20_std
value: -9.409073396296112
- type: nauc_ndcg_at_3_diff1
value: 22.22674331969473
- type: nauc_ndcg_at_3_max
value: 24.438111454101566
- type: nauc_ndcg_at_3_std
value: -10.741967259570304
- type: nauc_ndcg_at_5_diff1
value: 19.704936114393355
- type: nauc_ndcg_at_5_max
value: 23.274998272504508
- type: nauc_ndcg_at_5_std
value: -10.515464197222189
- type: nauc_precision_at_1000_diff1
value: 5.251438917389866
- type: nauc_precision_at_1000_max
value: 25.691845305457967
- type: nauc_precision_at_1000_std
value: 12.817508210963569
- type: nauc_precision_at_100_diff1
value: 7.615641594547768
- type: nauc_precision_at_100_max
value: 23.64832192099932
- type: nauc_precision_at_100_std
value: 1.6874710450857076
- type: nauc_precision_at_10_diff1
value: 11.593052157615787
- type: nauc_precision_at_10_max
value: 21.505665853965848
- type: nauc_precision_at_10_std
value: -8.169014959052463
- type: nauc_precision_at_1_diff1
value: 31.817967070779485
- type: nauc_precision_at_1_max
value: 28.407226975749662
- type: nauc_precision_at_1_std
value: -12.877843518325626
- type: nauc_precision_at_20_diff1
value: 10.220169443404828
- type: nauc_precision_at_20_max
value: 20.418038302868517
- type: nauc_precision_at_20_std
value: -6.143289833708619
- type: nauc_precision_at_3_diff1
value: 18.52522108452249
- type: nauc_precision_at_3_max
value: 25.523077096782387
- type: nauc_precision_at_3_std
value: -10.146066595335736
- type: nauc_precision_at_5_diff1
value: 14.122191907391885
- type: nauc_precision_at_5_max
value: 24.508596888125066
- type: nauc_precision_at_5_std
value: -9.15670496810609
- type: nauc_recall_at_1000_diff1
value: 2.9003934699300506
- type: nauc_recall_at_1000_max
value: 20.704797150454354
- type: nauc_recall_at_1000_std
value: 7.552220586284925
- type: nauc_recall_at_100_diff1
value: 6.156591437491543
- type: nauc_recall_at_100_max
value: 15.586576013040334
- type: nauc_recall_at_100_std
value: -2.3313579488107514
- type: nauc_recall_at_10_diff1
value: 10.411068394326103
- type: nauc_recall_at_10_max
value: 15.826969661573445
- type: nauc_recall_at_10_std
value: -8.013030367792918
- type: nauc_recall_at_1_diff1
value: 31.928795700485264
- type: nauc_recall_at_1_max
value: 26.571949087855863
- type: nauc_recall_at_1_std
value: -13.137832080177475
- type: nauc_recall_at_20_diff1
value: 9.513926728794189
- type: nauc_recall_at_20_max
value: 14.058567589417883
- type: nauc_recall_at_20_std
value: -7.335646989235248
- type: nauc_recall_at_3_diff1
value: 17.357179576749417
- type: nauc_recall_at_3_max
value: 20.989054791758118
- type: nauc_recall_at_3_std
value: -9.44580247372722
- type: nauc_recall_at_5_diff1
value: 13.127083612003437
- type: nauc_recall_at_5_max
value: 19.23013900082838
- type: nauc_recall_at_5_std
value: -9.216924125693595
- type: ndcg_at_1
value: 6.986000000000001
- type: ndcg_at_10
value: 9.931
- type: ndcg_at_100
value: 13.008000000000001
- type: ndcg_at_1000
value: 15.748000000000001
- type: ndcg_at_20
value: 10.951
- type: ndcg_at_3
value: 8.237
- type: ndcg_at_5
value: 8.999
- type: precision_at_1
value: 6.986000000000001
- type: precision_at_10
value: 1.9300000000000002
- type: precision_at_100
value: 0.41900000000000004
- type: precision_at_1000
value: 0.077
- type: precision_at_20
value: 1.239
- type: precision_at_3
value: 4.0489999999999995
- type: precision_at_5
value: 3.0140000000000002
- type: recall_at_1
value: 5.492
- type: recall_at_10
value: 13.809
- type: recall_at_100
value: 28.073999999999998
- type: recall_at_1000
value: 48.723
- type: recall_at_20
value: 17.671999999999997
- type: recall_at_3
value: 9.049
- type: recall_at_5
value: 11.028
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 15.155
- type: map_at_1
value: 9.403
- type: map_at_10
value: 12.784999999999998
- type: map_at_100
value: 13.435
- type: map_at_1000
value: 13.536000000000001
- type: map_at_20
value: 13.139000000000001
- type: map_at_3
value: 11.723
- type: map_at_5
value: 12.334
- type: mrr_at_1
value: 11.380597014925373
- type: mrr_at_10
value: 15.09905828002843
- type: mrr_at_100
value: 15.759264072378452
- type: mrr_at_1000
value: 15.847440245587205
- type: mrr_at_20
value: 15.447864306537513
- type: mrr_at_3
value: 13.930348258706468
- type: mrr_at_5
value: 14.620646766169154
- type: nauc_map_at_1000_diff1
value: 36.221836337691194
- type: nauc_map_at_1000_max
value: 36.146176827007615
- type: nauc_map_at_1000_std
value: -8.440865055286565
- type: nauc_map_at_100_diff1
value: 36.25292250732588
- type: nauc_map_at_100_max
value: 36.106156799236594
- type: nauc_map_at_100_std
value: -8.513735505725933
- type: nauc_map_at_10_diff1
value: 36.75715362414852
- type: nauc_map_at_10_max
value: 36.74235484380773
- type: nauc_map_at_10_std
value: -8.73507955998515
- type: nauc_map_at_1_diff1
value: 47.32136738001297
- type: nauc_map_at_1_max
value: 43.70265279358145
- type: nauc_map_at_1_std
value: -9.966898812974444
- type: nauc_map_at_20_diff1
value: 36.41128754131193
- type: nauc_map_at_20_max
value: 36.37087940014768
- type: nauc_map_at_20_std
value: -8.650096633449316
- type: nauc_map_at_3_diff1
value: 40.212068269648285
- type: nauc_map_at_3_max
value: 37.62535802763293
- type: nauc_map_at_3_std
value: -8.23515664642517
- type: nauc_map_at_5_diff1
value: 38.10768244945398
- type: nauc_map_at_5_max
value: 37.57671103475779
- type: nauc_map_at_5_std
value: -8.17155729697582
- type: nauc_mrr_at_1000_diff1
value: 35.66874546979436
- type: nauc_mrr_at_1000_max
value: 36.36401767980438
- type: nauc_mrr_at_1000_std
value: -6.634599812953056
- type: nauc_mrr_at_100_diff1
value: 35.66680600664232
- type: nauc_mrr_at_100_max
value: 36.32682480581458
- type: nauc_mrr_at_100_std
value: -6.687567367023502
- type: nauc_mrr_at_10_diff1
value: 36.10643387204666
- type: nauc_mrr_at_10_max
value: 37.0968500212504
- type: nauc_mrr_at_10_std
value: -6.779791973262969
- type: nauc_mrr_at_1_diff1
value: 46.649551833421974
- type: nauc_mrr_at_1_max
value: 43.389260347264006
- type: nauc_mrr_at_1_std
value: -6.73964327074206
- type: nauc_mrr_at_20_diff1
value: 35.818779527356284
- type: nauc_mrr_at_20_max
value: 36.62574319552761
- type: nauc_mrr_at_20_std
value: -6.732115280637446
- type: nauc_mrr_at_3_diff1
value: 39.459028436941885
- type: nauc_mrr_at_3_max
value: 37.687667723069296
- type: nauc_mrr_at_3_std
value: -5.761501856626376
- type: nauc_mrr_at_5_diff1
value: 37.431903433141954
- type: nauc_mrr_at_5_max
value: 37.66755422387186
- type: nauc_mrr_at_5_std
value: -5.989229808344125
- type: nauc_ndcg_at_1000_diff1
value: 29.816446855586186
- type: nauc_ndcg_at_1000_max
value: 32.1473939651989
- type: nauc_ndcg_at_1000_std
value: -5.846568188873995
- type: nauc_ndcg_at_100_diff1
value: 30.531764587006467
- type: nauc_ndcg_at_100_max
value: 31.462951585689257
- type: nauc_ndcg_at_100_std
value: -7.2287918135710285
- type: nauc_ndcg_at_10_diff1
value: 32.02059376659964
- type: nauc_ndcg_at_10_max
value: 34.335995105366166
- type: nauc_ndcg_at_10_std
value: -8.42979029595602
- type: nauc_ndcg_at_1_diff1
value: 46.649551833421974
- type: nauc_ndcg_at_1_max
value: 43.389260347264006
- type: nauc_ndcg_at_1_std
value: -6.73964327074206
- type: nauc_ndcg_at_20_diff1
value: 31.103536602017023
- type: nauc_ndcg_at_20_max
value: 32.94676353482187
- type: nauc_ndcg_at_20_std
value: -8.158102428934253
- type: nauc_ndcg_at_3_diff1
value: 37.89569295658901
- type: nauc_ndcg_at_3_max
value: 36.05530034905636
- type: nauc_ndcg_at_3_std
value: -6.934356182009012
- type: nauc_ndcg_at_5_diff1
value: 34.757353385413595
- type: nauc_ndcg_at_5_max
value: 35.98457240309564
- type: nauc_ndcg_at_5_std
value: -7.147173301080888
- type: nauc_precision_at_1000_diff1
value: 3.780770613030298
- type: nauc_precision_at_1000_max
value: 19.817588336044842
- type: nauc_precision_at_1000_std
value: 3.1752363621981403
- type: nauc_precision_at_100_diff1
value: 15.802825679325947
- type: nauc_precision_at_100_max
value: 22.064385556229986
- type: nauc_precision_at_100_std
value: -3.154174393084888
- type: nauc_precision_at_10_diff1
value: 19.837322294027203
- type: nauc_precision_at_10_max
value: 30.017389964679353
- type: nauc_precision_at_10_std
value: -7.404534428736785
- type: nauc_precision_at_1_diff1
value: 46.649551833421974
- type: nauc_precision_at_1_max
value: 43.389260347264006
- type: nauc_precision_at_1_std
value: -6.73964327074206
- type: nauc_precision_at_20_diff1
value: 17.834624219717714
- type: nauc_precision_at_20_max
value: 26.922207238009232
- type: nauc_precision_at_20_std
value: -7.528338540833176
- type: nauc_precision_at_3_diff1
value: 32.69027368850203
- type: nauc_precision_at_3_max
value: 32.93874981334349
- type: nauc_precision_at_3_std
value: -4.108473830961593
- type: nauc_precision_at_5_diff1
value: 26.336870847233047
- type: nauc_precision_at_5_max
value: 33.800455550407335
- type: nauc_precision_at_5_std
value: -4.540760521604314
- type: nauc_recall_at_1000_diff1
value: 12.616995547696519
- type: nauc_recall_at_1000_max
value: 18.82257377190202
- type: nauc_recall_at_1000_std
value: 3.5660866396177098
- type: nauc_recall_at_100_diff1
value: 17.77612250220536
- type: nauc_recall_at_100_max
value: 18.029071415690556
- type: nauc_recall_at_100_std
value: -4.108340328974984
- type: nauc_recall_at_10_diff1
value: 20.646167285172357
- type: nauc_recall_at_10_max
value: 27.15351226346573
- type: nauc_recall_at_10_std
value: -8.926123198697267
- type: nauc_recall_at_1_diff1
value: 47.32136738001297
- type: nauc_recall_at_1_max
value: 43.70265279358145
- type: nauc_recall_at_1_std
value: -9.966898812974444
- type: nauc_recall_at_20_diff1
value: 18.828378638222283
- type: nauc_recall_at_20_max
value: 23.1052150083718
- type: nauc_recall_at_20_std
value: -7.836874748330118
- type: nauc_recall_at_3_diff1
value: 33.066655006046055
- type: nauc_recall_at_3_max
value: 31.01575466463744
- type: nauc_recall_at_3_std
value: -5.889014994939482
- type: nauc_recall_at_5_diff1
value: 27.297222480365456
- type: nauc_recall_at_5_max
value: 31.424750543103443
- type: nauc_recall_at_5_std
value: -6.145097625084275
- type: ndcg_at_1
value: 11.381
- type: ndcg_at_10
value: 15.155
- type: ndcg_at_100
value: 18.671
- type: ndcg_at_1000
value: 21.578
- type: ndcg_at_20
value: 16.387
- type: ndcg_at_3
value: 13.103000000000002
- type: ndcg_at_5
value: 14.079
- type: precision_at_1
value: 11.381
- type: precision_at_10
value: 2.528
- type: precision_at_100
value: 0.484
- type: precision_at_1000
value: 0.083
- type: precision_at_20
value: 1.5810000000000002
- type: precision_at_3
value: 5.970000000000001
- type: precision_at_5
value: 4.234999999999999
- type: recall_at_1
value: 9.403
- type: recall_at_10
value: 20.330000000000002
- type: recall_at_100
value: 36.573
- type: recall_at_1000
value: 58.105
- type: recall_at_20
value: 24.834999999999997
- type: recall_at_3
value: 14.594
- type: recall_at_5
value: 17.071
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 18.239
- type: map_at_1
value: 10.58
- type: map_at_10
value: 14.96
- type: map_at_100
value: 15.842
- type: map_at_1000
value: 15.995999999999999
- type: map_at_20
value: 15.376000000000001
- type: map_at_3
value: 13.475000000000001
- type: map_at_5
value: 14.283999999999999
- type: mrr_at_1
value: 13.043478260869565
- type: mrr_at_10
value: 18.022539055147757
- type: mrr_at_100
value: 18.780418501784286
- type: mrr_at_1000
value: 18.854414877308397
- type: mrr_at_20
value: 18.402853225362385
- type: mrr_at_3
value: 16.46903820816864
- type: mrr_at_5
value: 17.39789196310936
- type: nauc_map_at_1000_diff1
value: 36.06976683813531
- type: nauc_map_at_1000_max
value: 23.880596751384736
- type: nauc_map_at_1000_std
value: -14.579512923753887
- type: nauc_map_at_100_diff1
value: 36.174304461427774
- type: nauc_map_at_100_max
value: 23.916296845822384
- type: nauc_map_at_100_std
value: -14.574659609525323
- type: nauc_map_at_10_diff1
value: 36.573260203719784
- type: nauc_map_at_10_max
value: 23.82794992389452
- type: nauc_map_at_10_std
value: -15.314328623961412
- type: nauc_map_at_1_diff1
value: 47.58716554589925
- type: nauc_map_at_1_max
value: 30.13605956605256
- type: nauc_map_at_1_std
value: -15.53401816041246
- type: nauc_map_at_20_diff1
value: 36.28119061571975
- type: nauc_map_at_20_max
value: 24.025352918899355
- type: nauc_map_at_20_std
value: -14.897905095814625
- type: nauc_map_at_3_diff1
value: 38.175373760788005
- type: nauc_map_at_3_max
value: 24.480395575984755
- type: nauc_map_at_3_std
value: -16.220911130014247
- type: nauc_map_at_5_diff1
value: 36.798802315610615
- type: nauc_map_at_5_max
value: 24.391607274338888
- type: nauc_map_at_5_std
value: -15.578671515220416
- type: nauc_mrr_at_1000_diff1
value: 34.05258976399819
- type: nauc_mrr_at_1000_max
value: 23.61502771905304
- type: nauc_mrr_at_1000_std
value: -12.007859059532636
- type: nauc_mrr_at_100_diff1
value: 34.081744532858096
- type: nauc_mrr_at_100_max
value: 23.607643654491792
- type: nauc_mrr_at_100_std
value: -11.974613818485084
- type: nauc_mrr_at_10_diff1
value: 34.19893287544882
- type: nauc_mrr_at_10_max
value: 23.12654405704457
- type: nauc_mrr_at_10_std
value: -12.60303233921334
- type: nauc_mrr_at_1_diff1
value: 43.10013215094798
- type: nauc_mrr_at_1_max
value: 30.76017224718181
- type: nauc_mrr_at_1_std
value: -12.670110709121223
- type: nauc_mrr_at_20_diff1
value: 34.162962675920674
- type: nauc_mrr_at_20_max
value: 23.43306246275014
- type: nauc_mrr_at_20_std
value: -12.252542205760312
- type: nauc_mrr_at_3_diff1
value: 35.555197323624064
- type: nauc_mrr_at_3_max
value: 23.827609846219566
- type: nauc_mrr_at_3_std
value: -13.671488932869266
- type: nauc_mrr_at_5_diff1
value: 34.81855704564023
- type: nauc_mrr_at_5_max
value: 23.65032973975109
- type: nauc_mrr_at_5_std
value: -12.966243483238316
- type: nauc_ndcg_at_1000_diff1
value: 31.40402163601383
- type: nauc_ndcg_at_1000_max
value: 23.706861603497085
- type: nauc_ndcg_at_1000_std
value: -9.504222126824306
- type: nauc_ndcg_at_100_diff1
value: 32.416336832716574
- type: nauc_ndcg_at_100_max
value: 22.581908683722894
- type: nauc_ndcg_at_100_std
value: -9.788452797449326
- type: nauc_ndcg_at_10_diff1
value: 33.21698215478735
- type: nauc_ndcg_at_10_max
value: 20.735079534037823
- type: nauc_ndcg_at_10_std
value: -14.01732092307578
- type: nauc_ndcg_at_1_diff1
value: 43.10013215094798
- type: nauc_ndcg_at_1_max
value: 30.76017224718181
- type: nauc_ndcg_at_1_std
value: -12.670110709121223
- type: nauc_ndcg_at_20_diff1
value: 32.739116701273616
- type: nauc_ndcg_at_20_max
value: 22.105704119447374
- type: nauc_ndcg_at_20_std
value: -12.63060394296969
- type: nauc_ndcg_at_3_diff1
value: 34.48619843674619
- type: nauc_ndcg_at_3_max
value: 21.252666656269678
- type: nauc_ndcg_at_3_std
value: -14.682312802009353
- type: nauc_ndcg_at_5_diff1
value: 33.36972489731244
- type: nauc_ndcg_at_5_max
value: 21.209984550853147
- type: nauc_ndcg_at_5_std
value: -14.340640209213163
- type: nauc_precision_at_1000_diff1
value: 0.5270857534137781
- type: nauc_precision_at_1000_max
value: 0.11893429887585177
- type: nauc_precision_at_1000_std
value: 0.6691289509386259
- type: nauc_precision_at_100_diff1
value: 10.968811591511459
- type: nauc_precision_at_100_max
value: 1.7684802468999532
- type: nauc_precision_at_100_std
value: 1.4005257840071328
- type: nauc_precision_at_10_diff1
value: 23.51229832866379
- type: nauc_precision_at_10_max
value: 12.86774948770811
- type: nauc_precision_at_10_std
value: -8.104791739166236
- type: nauc_precision_at_1_diff1
value: 43.10013215094798
- type: nauc_precision_at_1_max
value: 30.76017224718181
- type: nauc_precision_at_1_std
value: -12.670110709121223
- type: nauc_precision_at_20_diff1
value: 16.791233314213454
- type: nauc_precision_at_20_max
value: 11.0180312956169
- type: nauc_precision_at_20_std
value: -5.867907345316035
- type: nauc_precision_at_3_diff1
value: 27.005566576164856
- type: nauc_precision_at_3_max
value: 17.504043528175135
- type: nauc_precision_at_3_std
value: -12.924562395250492
- type: nauc_precision_at_5_diff1
value: 24.596516751303866
- type: nauc_precision_at_5_max
value: 15.299886584316754
- type: nauc_precision_at_5_std
value: -11.372115215466414
- type: nauc_recall_at_1000_diff1
value: 15.383231551127524
- type: nauc_recall_at_1000_max
value: 24.99731831000235
- type: nauc_recall_at_1000_std
value: 7.0554296024685765
- type: nauc_recall_at_100_diff1
value: 22.030811635647034
- type: nauc_recall_at_100_max
value: 18.297564133467105
- type: nauc_recall_at_100_std
value: 3.0779933488171247
- type: nauc_recall_at_10_diff1
value: 27.59326197983875
- type: nauc_recall_at_10_max
value: 14.981716885215462
- type: nauc_recall_at_10_std
value: -12.299090743736585
- type: nauc_recall_at_1_diff1
value: 47.58716554589925
- type: nauc_recall_at_1_max
value: 30.13605956605256
- type: nauc_recall_at_1_std
value: -15.53401816041246
- type: nauc_recall_at_20_diff1
value: 26.001074883193027
- type: nauc_recall_at_20_max
value: 18.79056996138217
- type: nauc_recall_at_20_std
value: -7.85031435774478
- type: nauc_recall_at_3_diff1
value: 31.49620309707735
- type: nauc_recall_at_3_max
value: 17.971943047310003
- type: nauc_recall_at_3_std
value: -15.584923538326537
- type: nauc_recall_at_5_diff1
value: 27.955592193775924
- type: nauc_recall_at_5_max
value: 17.639988884650343
- type: nauc_recall_at_5_std
value: -13.156806251818296
- type: ndcg_at_1
value: 13.043
- type: ndcg_at_10
value: 18.239
- type: ndcg_at_100
value: 22.383
- type: ndcg_at_1000
value: 25.558999999999997
- type: ndcg_at_20
value: 19.405
- type: ndcg_at_3
value: 15.75
- type: ndcg_at_5
value: 16.987
- type: precision_at_1
value: 13.043
- type: precision_at_10
value: 3.5970000000000004
- type: precision_at_100
value: 0.8750000000000001
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 2.3120000000000003
- type: precision_at_3
value: 7.51
- type: precision_at_5
value: 5.692
- type: recall_at_1
value: 10.58
- type: recall_at_10
value: 24.255
- type: recall_at_100
value: 44.156
- type: recall_at_1000
value: 66.006
- type: recall_at_20
value: 28.92
- type: recall_at_3
value: 16.832
- type: recall_at_5
value: 20.018
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 12.459000000000001
- type: map_at_1
value: 7.917000000000001
- type: map_at_10
value: 10.597
- type: map_at_100
value: 11.222999999999999
- type: map_at_1000
value: 11.318
- type: map_at_20
value: 10.947
- type: map_at_3
value: 9.766
- type: map_at_5
value: 10.163
- type: mrr_at_1
value: 8.687615526802219
- type: mrr_at_10
value: 11.763635830179274
- type: mrr_at_100
value: 12.362197385932065
- type: mrr_at_1000
value: 12.45603515092754
- type: mrr_at_20
value: 12.083209182914187
- type: mrr_at_3
value: 10.782501540357364
- type: mrr_at_5
value: 11.30006161429452
- type: nauc_map_at_1000_diff1
value: 14.852923871827404
- type: nauc_map_at_1000_max
value: 20.34714853842873
- type: nauc_map_at_1000_std
value: -8.883230513088728
- type: nauc_map_at_100_diff1
value: 14.792325021037744
- type: nauc_map_at_100_max
value: 20.3296674062139
- type: nauc_map_at_100_std
value: -8.893480029356672
- type: nauc_map_at_10_diff1
value: 15.627316008411569
- type: nauc_map_at_10_max
value: 21.288002482203584
- type: nauc_map_at_10_std
value: -9.34108572998076
- type: nauc_map_at_1_diff1
value: 21.506858723278764
- type: nauc_map_at_1_max
value: 25.108064297451833
- type: nauc_map_at_1_std
value: -8.252547190272466
- type: nauc_map_at_20_diff1
value: 14.893904239341008
- type: nauc_map_at_20_max
value: 20.575258480900903
- type: nauc_map_at_20_std
value: -9.15331159275562
- type: nauc_map_at_3_diff1
value: 16.820401129748365
- type: nauc_map_at_3_max
value: 21.9331063791627
- type: nauc_map_at_3_std
value: -9.530118092684765
- type: nauc_map_at_5_diff1
value: 16.220898926286974
- type: nauc_map_at_5_max
value: 21.771564941971913
- type: nauc_map_at_5_std
value: -9.908962537086204
- type: nauc_mrr_at_1000_diff1
value: 15.035851497523165
- type: nauc_mrr_at_1000_max
value: 20.285218320079213
- type: nauc_mrr_at_1000_std
value: -8.706340650107393
- type: nauc_mrr_at_100_diff1
value: 15.000992556719789
- type: nauc_mrr_at_100_max
value: 20.25723486686565
- type: nauc_mrr_at_100_std
value: -8.701134929278206
- type: nauc_mrr_at_10_diff1
value: 15.63217573465226
- type: nauc_mrr_at_10_max
value: 21.039202273922136
- type: nauc_mrr_at_10_std
value: -8.982508656224454
- type: nauc_mrr_at_1_diff1
value: 21.367525074890256
- type: nauc_mrr_at_1_max
value: 25.836566656299198
- type: nauc_mrr_at_1_std
value: -8.299855831436641
- type: nauc_mrr_at_20_diff1
value: 15.110826603775978
- type: nauc_mrr_at_20_max
value: 20.50618653058863
- type: nauc_mrr_at_20_std
value: -8.945331381714297
- type: nauc_mrr_at_3_diff1
value: 16.63040543588381
- type: nauc_mrr_at_3_max
value: 22.031978664069676
- type: nauc_mrr_at_3_std
value: -9.391739589001082
- type: nauc_mrr_at_5_diff1
value: 16.160569622181587
- type: nauc_mrr_at_5_max
value: 21.588260110041475
- type: nauc_mrr_at_5_std
value: -9.55382618775295
- type: nauc_ndcg_at_1000_diff1
value: 12.382474214501329
- type: nauc_ndcg_at_1000_max
value: 16.58297409806911
- type: nauc_ndcg_at_1000_std
value: -6.117797635475846
- type: nauc_ndcg_at_100_diff1
value: 11.628284036532149
- type: nauc_ndcg_at_100_max
value: 16.559470157227913
- type: nauc_ndcg_at_100_std
value: -6.799597366048872
- type: nauc_ndcg_at_10_diff1
value: 13.718255455864545
- type: nauc_ndcg_at_10_max
value: 19.868379092926748
- type: nauc_ndcg_at_10_std
value: -8.755684192343086
- type: nauc_ndcg_at_1_diff1
value: 21.367525074890256
- type: nauc_ndcg_at_1_max
value: 25.836566656299198
- type: nauc_ndcg_at_1_std
value: -8.299855831436641
- type: nauc_ndcg_at_20_diff1
value: 11.821174462420165
- type: nauc_ndcg_at_20_max
value: 17.95253183986338
- type: nauc_ndcg_at_20_std
value: -8.149835664534505
- type: nauc_ndcg_at_3_diff1
value: 15.657058069800073
- type: nauc_ndcg_at_3_max
value: 21.10567032656599
- type: nauc_ndcg_at_3_std
value: -9.47215623951363
- type: nauc_ndcg_at_5_diff1
value: 14.748536600773631
- type: nauc_ndcg_at_5_max
value: 20.87714490361122
- type: nauc_ndcg_at_5_std
value: -10.174513752343183
- type: nauc_precision_at_1000_diff1
value: 12.399611206783078
- type: nauc_precision_at_1000_max
value: 2.0092161158215283
- type: nauc_precision_at_1000_std
value: 2.179620176140278
- type: nauc_precision_at_100_diff1
value: 7.392154388809363
- type: nauc_precision_at_100_max
value: 4.956718756735014
- type: nauc_precision_at_100_std
value: -0.3025590200534658
- type: nauc_precision_at_10_diff1
value: 9.743123026098063
- type: nauc_precision_at_10_max
value: 14.34218083090904
- type: nauc_precision_at_10_std
value: -6.4833978582536504
- type: nauc_precision_at_1_diff1
value: 21.367525074890256
- type: nauc_precision_at_1_max
value: 25.836566656299198
- type: nauc_precision_at_1_std
value: -8.299855831436641
- type: nauc_precision_at_20_diff1
value: 5.266417948635701
- type: nauc_precision_at_20_max
value: 9.16010430257419
- type: nauc_precision_at_20_std
value: -5.174902230321767
- type: nauc_precision_at_3_diff1
value: 12.854699745742842
- type: nauc_precision_at_3_max
value: 18.79950384346889
- type: nauc_precision_at_3_std
value: -9.206225558947763
- type: nauc_precision_at_5_diff1
value: 11.500531884681884
- type: nauc_precision_at_5_max
value: 17.69443733019008
- type: nauc_precision_at_5_std
value: -9.903626396032834
- type: nauc_recall_at_1000_diff1
value: 7.552664378033684
- type: nauc_recall_at_1000_max
value: 7.56733828358324
- type: nauc_recall_at_1000_std
value: 2.6293380104957054
- type: nauc_recall_at_100_diff1
value: 5.258643675873829
- type: nauc_recall_at_100_max
value: 8.319089041105968
- type: nauc_recall_at_100_std
value: -1.949831116556415
- type: nauc_recall_at_10_diff1
value: 9.448363186283725
- type: nauc_recall_at_10_max
value: 17.140193415444877
- type: nauc_recall_at_10_std
value: -7.2985340638025695
- type: nauc_recall_at_1_diff1
value: 21.506858723278764
- type: nauc_recall_at_1_max
value: 25.108064297451833
- type: nauc_recall_at_1_std
value: -8.252547190272466
- type: nauc_recall_at_20_diff1
value: 4.858263092281858
- type: nauc_recall_at_20_max
value: 12.10342107470472
- type: nauc_recall_at_20_std
value: -5.419714928456679
- type: nauc_recall_at_3_diff1
value: 12.931659192378705
- type: nauc_recall_at_3_max
value: 19.090533689235546
- type: nauc_recall_at_3_std
value: -9.501428748420118
- type: nauc_recall_at_5_diff1
value: 11.107573118859943
- type: nauc_recall_at_5_max
value: 18.80081094810456
- type: nauc_recall_at_5_std
value: -10.984081803017967
- type: ndcg_at_1
value: 8.688
- type: ndcg_at_10
value: 12.459000000000001
- type: ndcg_at_100
value: 15.706000000000001
- type: ndcg_at_1000
value: 18.729000000000003
- type: ndcg_at_20
value: 13.63
- type: ndcg_at_3
value: 10.642
- type: ndcg_at_5
value: 11.408
- type: precision_at_1
value: 8.688
- type: precision_at_10
value: 1.959
- type: precision_at_100
value: 0.375
- type: precision_at_1000
value: 0.068
- type: precision_at_20
value: 1.248
- type: precision_at_3
value: 4.498
- type: precision_at_5
value: 3.179
- type: recall_at_1
value: 7.917000000000001
- type: recall_at_10
value: 17.141000000000002
- type: recall_at_100
value: 32.646
- type: recall_at_1000
value: 56.435
- type: recall_at_20
value: 21.525
- type: recall_at_3
value: 12.15
- type: recall_at_5
value: 14.030000000000001
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 14.866999999999999
- type: map_at_1
value: 5.721
- type: map_at_10
value: 9.832
- type: map_at_100
value: 11.004
- type: map_at_1000
value: 11.179
- type: map_at_20
value: 10.436
- type: map_at_3
value: 7.808
- type: map_at_5
value: 8.764
- type: mrr_at_1
value: 13.094462540716611
- type: mrr_at_10
value: 20.13657515123311
- type: mrr_at_100
value: 21.340204187307556
- type: mrr_at_1000
value: 21.41413860730253
- type: mrr_at_20
value: 20.881996321655038
- type: mrr_at_3
value: 17.1226927252986
- type: mrr_at_5
value: 18.712269272529845
- type: nauc_map_at_1000_diff1
value: 16.829532269942398
- type: nauc_map_at_1000_max
value: 11.511765981461153
- type: nauc_map_at_1000_std
value: 13.288563684533583
- type: nauc_map_at_100_diff1
value: 16.84390657230943
- type: nauc_map_at_100_max
value: 11.446786461060386
- type: nauc_map_at_100_std
value: 12.979798257628408
- type: nauc_map_at_10_diff1
value: 16.673974608095975
- type: nauc_map_at_10_max
value: 10.40515540249065
- type: nauc_map_at_10_std
value: 11.258323384051431
- type: nauc_map_at_1_diff1
value: 26.03450138491891
- type: nauc_map_at_1_max
value: 10.901971049557197
- type: nauc_map_at_1_std
value: 6.057409324062738
- type: nauc_map_at_20_diff1
value: 16.786044160180165
- type: nauc_map_at_20_max
value: 10.829355008178954
- type: nauc_map_at_20_std
value: 11.962563266406647
- type: nauc_map_at_3_diff1
value: 18.3812657226798
- type: nauc_map_at_3_max
value: 10.411691819417813
- type: nauc_map_at_3_std
value: 7.113107871534105
- type: nauc_map_at_5_diff1
value: 17.41076341776835
- type: nauc_map_at_5_max
value: 10.193104510200216
- type: nauc_map_at_5_std
value: 8.845538032098393
- type: nauc_mrr_at_1000_diff1
value: 17.41452141156775
- type: nauc_mrr_at_1000_max
value: 14.294039967769498
- type: nauc_mrr_at_1000_std
value: 14.059703309268812
- type: nauc_mrr_at_100_diff1
value: 17.373031579585646
- type: nauc_mrr_at_100_max
value: 14.280965918488839
- type: nauc_mrr_at_100_std
value: 14.07962416407043
- type: nauc_mrr_at_10_diff1
value: 17.273897621346713
- type: nauc_mrr_at_10_max
value: 14.03321090608381
- type: nauc_mrr_at_10_std
value: 13.558616780278532
- type: nauc_mrr_at_1_diff1
value: 23.30456771254628
- type: nauc_mrr_at_1_max
value: 13.96447213218389
- type: nauc_mrr_at_1_std
value: 7.254488152702848
- type: nauc_mrr_at_20_diff1
value: 17.376390831850326
- type: nauc_mrr_at_20_max
value: 14.176963730930064
- type: nauc_mrr_at_20_std
value: 13.721231376353957
- type: nauc_mrr_at_3_diff1
value: 18.025791725501943
- type: nauc_mrr_at_3_max
value: 13.773065357341832
- type: nauc_mrr_at_3_std
value: 10.270934763275314
- type: nauc_mrr_at_5_diff1
value: 17.67183911643043
- type: nauc_mrr_at_5_max
value: 13.457814613662714
- type: nauc_mrr_at_5_std
value: 11.75996295594699
- type: nauc_ndcg_at_1000_diff1
value: 14.856613965502957
- type: nauc_ndcg_at_1000_max
value: 15.35690520998979
- type: nauc_ndcg_at_1000_std
value: 23.69227880630004
- type: nauc_ndcg_at_100_diff1
value: 14.65814282525479
- type: nauc_ndcg_at_100_max
value: 14.462727117068358
- type: nauc_ndcg_at_100_std
value: 20.740848771912663
- type: nauc_ndcg_at_10_diff1
value: 14.52336865539206
- type: nauc_ndcg_at_10_max
value: 11.713772848086714
- type: nauc_ndcg_at_10_std
value: 15.55709709319914
- type: nauc_ndcg_at_1_diff1
value: 23.30456771254628
- type: nauc_ndcg_at_1_max
value: 13.96447213218389
- type: nauc_ndcg_at_1_std
value: 7.254488152702848
- type: nauc_ndcg_at_20_diff1
value: 14.880565162693662
- type: nauc_ndcg_at_20_max
value: 12.416483028266747
- type: nauc_ndcg_at_20_std
value: 16.82291084914386
- type: nauc_ndcg_at_3_diff1
value: 16.824979901167566
- type: nauc_ndcg_at_3_max
value: 11.9754400187897
- type: nauc_ndcg_at_3_std
value: 9.422970213790453
- type: nauc_ndcg_at_5_diff1
value: 15.747197482347797
- type: nauc_ndcg_at_5_max
value: 11.02120863376394
- type: nauc_ndcg_at_5_std
value: 11.339600129694073
- type: nauc_precision_at_1000_diff1
value: 4.931477113463481
- type: nauc_precision_at_1000_max
value: 19.14707532664094
- type: nauc_precision_at_1000_std
value: 37.76984688260997
- type: nauc_precision_at_100_diff1
value: 6.644856028855646
- type: nauc_precision_at_100_max
value: 19.326558315032113
- type: nauc_precision_at_100_std
value: 33.35455573607822
- type: nauc_precision_at_10_diff1
value: 8.346488050129212
- type: nauc_precision_at_10_max
value: 14.788788264702188
- type: nauc_precision_at_10_std
value: 25.51561003518724
- type: nauc_precision_at_1_diff1
value: 23.30456771254628
- type: nauc_precision_at_1_max
value: 13.96447213218389
- type: nauc_precision_at_1_std
value: 7.254488152702848
- type: nauc_precision_at_20_diff1
value: 9.36520902543813
- type: nauc_precision_at_20_max
value: 16.182333385772782
- type: nauc_precision_at_20_std
value: 26.447755390959593
- type: nauc_precision_at_3_diff1
value: 13.175138474988895
- type: nauc_precision_at_3_max
value: 14.353465859604054
- type: nauc_precision_at_3_std
value: 13.053659364457268
- type: nauc_precision_at_5_diff1
value: 11.785077592813682
- type: nauc_precision_at_5_max
value: 12.768160836089915
- type: nauc_precision_at_5_std
value: 17.993652340273698
- type: nauc_recall_at_1000_diff1
value: 7.671233479873134
- type: nauc_recall_at_1000_max
value: 16.06721897191997
- type: nauc_recall_at_1000_std
value: 38.06917188744588
- type: nauc_recall_at_100_diff1
value: 8.016886955678288
- type: nauc_recall_at_100_max
value: 13.688670189759275
- type: nauc_recall_at_100_std
value: 27.836928984422123
- type: nauc_recall_at_10_diff1
value: 8.945186580114733
- type: nauc_recall_at_10_max
value: 9.11583579162444
- type: nauc_recall_at_10_std
value: 19.03117121595205
- type: nauc_recall_at_1_diff1
value: 26.03450138491891
- type: nauc_recall_at_1_max
value: 10.901971049557197
- type: nauc_recall_at_1_std
value: 6.057409324062738
- type: nauc_recall_at_20_diff1
value: 9.545565994439258
- type: nauc_recall_at_20_max
value: 9.721253862374812
- type: nauc_recall_at_20_std
value: 20.429248500346624
- type: nauc_recall_at_3_diff1
value: 13.133697634149586
- type: nauc_recall_at_3_max
value: 10.031513590921007
- type: nauc_recall_at_3_std
value: 8.102347421825783
- type: nauc_recall_at_5_diff1
value: 10.805045557795102
- type: nauc_recall_at_5_max
value: 8.754086588601803
- type: nauc_recall_at_5_std
value: 12.359137395763984
- type: ndcg_at_1
value: 13.094
- type: ndcg_at_10
value: 14.866999999999999
- type: ndcg_at_100
value: 20.607
- type: ndcg_at_1000
value: 24.345
- type: ndcg_at_20
value: 16.989
- type: ndcg_at_3
value: 10.954
- type: ndcg_at_5
value: 12.337
- type: precision_at_1
value: 13.094
- type: precision_at_10
value: 4.919
- type: precision_at_100
value: 1.0999999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_20
value: 3.3520000000000003
- type: precision_at_3
value: 7.904
- type: precision_at_5
value: 6.645
- type: recall_at_1
value: 5.721
- type: recall_at_10
value: 19.217000000000002
- type: recall_at_100
value: 39.427
- type: recall_at_1000
value: 60.999
- type: recall_at_20
value: 25.241000000000003
- type: recall_at_3
value: 10.054
- type: recall_at_5
value: 13.504
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: dev
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 13.284
- type: map_at_1
value: 1.184
- type: map_at_10
value: 3.7800000000000002
- type: map_at_100
value: 6.055
- type: map_at_1000
value: 6.612
- type: map_at_20
value: 4.583
- type: map_at_3
value: 2.383
- type: map_at_5
value: 3.098
- type: mrr_at_1
value: 14.925373134328357
- type: mrr_at_10
value: 30.312722103766877
- type: mrr_at_100
value: 30.878036780613154
- type: mrr_at_1000
value: 30.951520868338285
- type: mrr_at_20
value: 30.657566509190744
- type: mrr_at_3
value: 26.119402985074625
- type: mrr_at_5
value: 29.104477611940293
- type: nauc_map_at_1000_diff1
value: 12.909620838570431
- type: nauc_map_at_1000_max
value: 23.46404618454544
- type: nauc_map_at_1000_std
value: 18.48165914691908
- type: nauc_map_at_100_diff1
value: 12.18490342024342
- type: nauc_map_at_100_max
value: 21.14674628018195
- type: nauc_map_at_100_std
value: 13.522519066957845
- type: nauc_map_at_10_diff1
value: 11.871565018214216
- type: nauc_map_at_10_max
value: 8.631668694340515
- type: nauc_map_at_10_std
value: -2.4341428373507696
- type: nauc_map_at_1_diff1
value: 27.458975109470206
- type: nauc_map_at_1_max
value: -3.181394144349736
- type: nauc_map_at_1_std
value: -9.707811076542399
- type: nauc_map_at_20_diff1
value: 11.028519570655018
- type: nauc_map_at_20_max
value: 16.294868009313
- type: nauc_map_at_20_std
value: 0.799771968880614
- type: nauc_map_at_3_diff1
value: 18.93501599334331
- type: nauc_map_at_3_max
value: 8.541981522426074
- type: nauc_map_at_3_std
value: -4.219180889431849
- type: nauc_map_at_5_diff1
value: 11.069636178692024
- type: nauc_map_at_5_max
value: 4.878775942507181
- type: nauc_map_at_5_std
value: -7.039732156510382
- type: nauc_mrr_at_1000_diff1
value: 41.509572831665885
- type: nauc_mrr_at_1000_max
value: 25.504538188542465
- type: nauc_mrr_at_1000_std
value: 21.256078220539035
- type: nauc_mrr_at_100_diff1
value: 41.526323915557604
- type: nauc_mrr_at_100_max
value: 25.583907162737113
- type: nauc_mrr_at_100_std
value: 21.258312177356373
- type: nauc_mrr_at_10_diff1
value: 40.67603508371596
- type: nauc_mrr_at_10_max
value: 25.956121829392554
- type: nauc_mrr_at_10_std
value: 21.926262998816608
- type: nauc_mrr_at_1_diff1
value: 58.65082425974034
- type: nauc_mrr_at_1_max
value: 15.308339764720378
- type: nauc_mrr_at_1_std
value: 7.059413170848117
- type: nauc_mrr_at_20_diff1
value: 41.1405079071822
- type: nauc_mrr_at_20_max
value: 25.760951228855237
- type: nauc_mrr_at_20_std
value: 21.21007794770464
- type: nauc_mrr_at_3_diff1
value: 40.540993715407204
- type: nauc_mrr_at_3_max
value: 21.58441872522928
- type: nauc_mrr_at_3_std
value: 19.56734745031871
- type: nauc_mrr_at_5_diff1
value: 41.02150645214496
- type: nauc_mrr_at_5_max
value: 24.3374961965676
- type: nauc_mrr_at_5_std
value: 23.492960010462806
- type: nauc_ndcg_at_1000_diff1
value: 19.82519571297365
- type: nauc_ndcg_at_1000_max
value: 32.5180647339538
- type: nauc_ndcg_at_1000_std
value: 33.451571875723154
- type: nauc_ndcg_at_100_diff1
value: 17.563447427259778
- type: nauc_ndcg_at_100_max
value: 23.786468556008963
- type: nauc_ndcg_at_100_std
value: 19.872525039888085
- type: nauc_ndcg_at_10_diff1
value: 32.789299797393646
- type: nauc_ndcg_at_10_max
value: 22.682990052616912
- type: nauc_ndcg_at_10_std
value: 19.41321345795387
- type: nauc_ndcg_at_1_diff1
value: 63.98229985092606
- type: nauc_ndcg_at_1_max
value: 11.641443200870569
- type: nauc_ndcg_at_1_std
value: 3.4879458076335594
- type: nauc_ndcg_at_20_diff1
value: 21.200418438905302
- type: nauc_ndcg_at_20_max
value: 22.727058001456417
- type: nauc_ndcg_at_20_std
value: 14.088857765553305
- type: nauc_ndcg_at_3_diff1
value: 41.44222554894101
- type: nauc_ndcg_at_3_max
value: 15.410370571819845
- type: nauc_ndcg_at_3_std
value: 7.892294566078569
- type: nauc_ndcg_at_5_diff1
value: 36.667517873909574
- type: nauc_ndcg_at_5_max
value: 16.29330921246611
- type: nauc_ndcg_at_5_std
value: 13.538862333087287
- type: nauc_precision_at_1000_diff1
value: 21.91641339098105
- type: nauc_precision_at_1000_max
value: 39.1748454980264
- type: nauc_precision_at_1000_std
value: 54.1675602963656
- type: nauc_precision_at_100_diff1
value: 21.49727271722443
- type: nauc_precision_at_100_max
value: 36.61127433280462
- type: nauc_precision_at_100_std
value: 48.42008780055268
- type: nauc_precision_at_10_diff1
value: 32.860910832569616
- type: nauc_precision_at_10_max
value: 44.13929117618437
- type: nauc_precision_at_10_std
value: 45.12460999117428
- type: nauc_precision_at_1_diff1
value: 58.65082425974034
- type: nauc_precision_at_1_max
value: 15.308339764720378
- type: nauc_precision_at_1_std
value: 7.059413170848117
- type: nauc_precision_at_20_diff1
value: 23.91733487977542
- type: nauc_precision_at_20_max
value: 41.07726451470375
- type: nauc_precision_at_20_std
value: 40.897109230648645
- type: nauc_precision_at_3_diff1
value: 36.92361189810392
- type: nauc_precision_at_3_max
value: 31.717465737752516
- type: nauc_precision_at_3_std
value: 19.132889363488697
- type: nauc_precision_at_5_diff1
value: 39.18919107750929
- type: nauc_precision_at_5_max
value: 39.4540845607091
- type: nauc_precision_at_5_std
value: 37.57602847217906
- type: nauc_recall_at_1000_diff1
value: -3.750692559637984
- type: nauc_recall_at_1000_max
value: 16.444151678660056
- type: nauc_recall_at_1000_std
value: 30.29119815494249
- type: nauc_recall_at_100_diff1
value: -2.3999795923063125
- type: nauc_recall_at_100_max
value: 6.928506948612186
- type: nauc_recall_at_100_std
value: 11.031848607615384
- type: nauc_recall_at_10_diff1
value: -11.794415769655537
- type: nauc_recall_at_10_max
value: -3.331470333505189
- type: nauc_recall_at_10_std
value: -10.219136019170504
- type: nauc_recall_at_1_diff1
value: 27.458975109470206
- type: nauc_recall_at_1_max
value: -3.181394144349736
- type: nauc_recall_at_1_std
value: -9.707811076542399
- type: nauc_recall_at_20_diff1
value: -10.766427300518584
- type: nauc_recall_at_20_max
value: 4.251551680579632
- type: nauc_recall_at_20_std
value: -13.569212144580181
- type: nauc_recall_at_3_diff1
value: 7.547688451185241
- type: nauc_recall_at_3_max
value: 3.8294094070581464
- type: nauc_recall_at_3_std
value: 0.008454053036314049
- type: nauc_recall_at_5_diff1
value: -11.861815779639763
- type: nauc_recall_at_5_max
value: -8.923918129235462
- type: nauc_recall_at_5_std
value: -12.954447920368153
- type: ndcg_at_1
value: 11.193999999999999
- type: ndcg_at_10
value: 13.284
- type: ndcg_at_100
value: 16.572
- type: ndcg_at_1000
value: 22.23
- type: ndcg_at_20
value: 13.3
- type: ndcg_at_3
value: 13.378
- type: ndcg_at_5
value: 13.286999999999999
- type: precision_at_1
value: 14.924999999999999
- type: precision_at_10
value: 12.687000000000001
- type: precision_at_100
value: 4.403
- type: precision_at_1000
value: 0.8840000000000001
- type: precision_at_20
value: 9.776
- type: precision_at_3
value: 16.915
- type: precision_at_5
value: 15.223999999999998
- type: recall_at_1
value: 1.184
- type: recall_at_10
value: 8.493
- type: recall_at_100
value: 22.961000000000002
- type: recall_at_1000
value: 42.466
- type: recall_at_20
value: 12.052
- type: recall_at_3
value: 3.527
- type: recall_at_5
value: 5.86
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 15.876999999999999
- type: map_at_1
value: 2.508
- type: map_at_10
value: 5.495
- type: map_at_100
value: 7.718999999999999
- type: map_at_1000
value: 8.37
- type: map_at_20
value: 6.388000000000001
- type: map_at_3
value: 3.9440000000000004
- type: map_at_5
value: 4.533
- type: mrr_at_1
value: 29.25
- type: mrr_at_10
value: 38.05287698412699
- type: mrr_at_100
value: 38.925127074026946
- type: mrr_at_1000
value: 38.96182494622867
- type: mrr_at_20
value: 38.56818257314582
- type: mrr_at_3
value: 35.25
- type: mrr_at_5
value: 36.7
- type: nauc_map_at_1000_diff1
value: 18.418341183089847
- type: nauc_map_at_1000_max
value: 6.813822088875137
- type: nauc_map_at_1000_std
value: 27.949837775795178
- type: nauc_map_at_100_diff1
value: 19.00873756639041
- type: nauc_map_at_100_max
value: 6.081161799947362
- type: nauc_map_at_100_std
value: 25.1515955949003
- type: nauc_map_at_10_diff1
value: 20.522135935698373
- type: nauc_map_at_10_max
value: -1.050735478959983
- type: nauc_map_at_10_std
value: 13.999153091629324
- type: nauc_map_at_1_diff1
value: 23.911349557903243
- type: nauc_map_at_1_max
value: -10.164923920589649
- type: nauc_map_at_1_std
value: 1.549815995623448
- type: nauc_map_at_20_diff1
value: 19.822305278507788
- type: nauc_map_at_20_max
value: 1.903064182840661
- type: nauc_map_at_20_std
value: 18.355207168287713
- type: nauc_map_at_3_diff1
value: 20.832030570221768
- type: nauc_map_at_3_max
value: -7.491637036802951
- type: nauc_map_at_3_std
value: 7.616944342121094
- type: nauc_map_at_5_diff1
value: 22.191232610879226
- type: nauc_map_at_5_max
value: -4.597872300907481
- type: nauc_map_at_5_std
value: 10.44226839186566
- type: nauc_mrr_at_1000_diff1
value: 25.820924534270063
- type: nauc_mrr_at_1000_max
value: 14.450191093206508
- type: nauc_mrr_at_1000_std
value: 25.589579524528165
- type: nauc_mrr_at_100_diff1
value: 25.816311261545742
- type: nauc_mrr_at_100_max
value: 14.458561685561461
- type: nauc_mrr_at_100_std
value: 25.554596314175182
- type: nauc_mrr_at_10_diff1
value: 26.161382556535557
- type: nauc_mrr_at_10_max
value: 14.007953012871635
- type: nauc_mrr_at_10_std
value: 25.528381197721085
- type: nauc_mrr_at_1_diff1
value: 24.714536333138422
- type: nauc_mrr_at_1_max
value: 12.622298824631804
- type: nauc_mrr_at_1_std
value: 26.088592054551878
- type: nauc_mrr_at_20_diff1
value: 25.780026814458406
- type: nauc_mrr_at_20_max
value: 14.278926391252917
- type: nauc_mrr_at_20_std
value: 25.37321267263069
- type: nauc_mrr_at_3_diff1
value: 26.248483493852255
- type: nauc_mrr_at_3_max
value: 14.555512187093772
- type: nauc_mrr_at_3_std
value: 26.362409106932166
- type: nauc_mrr_at_5_diff1
value: 26.443866668276133
- type: nauc_mrr_at_5_max
value: 14.595206097095858
- type: nauc_mrr_at_5_std
value: 25.465432458085136
- type: nauc_ndcg_at_1000_diff1
value: 17.354086704383327
- type: nauc_ndcg_at_1000_max
value: 11.032341975467407
- type: nauc_ndcg_at_1000_std
value: 36.473275992287526
- type: nauc_ndcg_at_100_diff1
value: 18.908445336829224
- type: nauc_ndcg_at_100_max
value: 8.361352511904641
- type: nauc_ndcg_at_100_std
value: 27.831216543920114
- type: nauc_ndcg_at_10_diff1
value: 22.083661973201103
- type: nauc_ndcg_at_10_max
value: 9.095126609084208
- type: nauc_ndcg_at_10_std
value: 28.46506213344955
- type: nauc_ndcg_at_1_diff1
value: 24.58600137093944
- type: nauc_ndcg_at_1_max
value: 10.423319345292812
- type: nauc_ndcg_at_1_std
value: 24.32790530486685
- type: nauc_ndcg_at_20_diff1
value: 19.801782226603297
- type: nauc_ndcg_at_20_max
value: 7.20849331567235
- type: nauc_ndcg_at_20_std
value: 26.49856827984502
- type: nauc_ndcg_at_3_diff1
value: 23.198979433263517
- type: nauc_ndcg_at_3_max
value: 9.814141504656344
- type: nauc_ndcg_at_3_std
value: 27.726334659968156
- type: nauc_ndcg_at_5_diff1
value: 23.938522547098255
- type: nauc_ndcg_at_5_max
value: 10.03022381980448
- type: nauc_ndcg_at_5_std
value: 27.89914001253167
- type: nauc_precision_at_1000_diff1
value: 4.857675357268144
- type: nauc_precision_at_1000_max
value: 12.46969113556289
- type: nauc_precision_at_1000_std
value: 38.21598093544781
- type: nauc_precision_at_100_diff1
value: 6.210157631592053
- type: nauc_precision_at_100_max
value: 19.71498741468151
- type: nauc_precision_at_100_std
value: 41.54626902916716
- type: nauc_precision_at_10_diff1
value: 13.325999583547803
- type: nauc_precision_at_10_max
value: 16.778152759388874
- type: nauc_precision_at_10_std
value: 36.820348012619306
- type: nauc_precision_at_1_diff1
value: 24.714536333138422
- type: nauc_precision_at_1_max
value: 12.622298824631804
- type: nauc_precision_at_1_std
value: 26.088592054551878
- type: nauc_precision_at_20_diff1
value: 9.420914057074139
- type: nauc_precision_at_20_max
value: 17.822609903494307
- type: nauc_precision_at_20_std
value: 38.95875444434079
- type: nauc_precision_at_3_diff1
value: 23.20023331389052
- type: nauc_precision_at_3_max
value: 17.13323889675862
- type: nauc_precision_at_3_std
value: 32.099325056245334
- type: nauc_precision_at_5_diff1
value: 20.3149167625736
- type: nauc_precision_at_5_max
value: 16.786275415331918
- type: nauc_precision_at_5_std
value: 33.09949151010017
- type: nauc_recall_at_1000_diff1
value: 3.4676283481820684
- type: nauc_recall_at_1000_max
value: 8.79792661863136
- type: nauc_recall_at_1000_std
value: 33.51783060509199
- type: nauc_recall_at_100_diff1
value: 9.551515500047902
- type: nauc_recall_at_100_max
value: 5.9325603213255915
- type: nauc_recall_at_100_std
value: 17.77809223690889
- type: nauc_recall_at_10_diff1
value: 17.747502663382907
- type: nauc_recall_at_10_max
value: -0.6733335510418046
- type: nauc_recall_at_10_std
value: 9.971462156667629
- type: nauc_recall_at_1_diff1
value: 23.911349557903243
- type: nauc_recall_at_1_max
value: -10.164923920589649
- type: nauc_recall_at_1_std
value: 1.549815995623448
- type: nauc_recall_at_20_diff1
value: 11.198105701411093
- type: nauc_recall_at_20_max
value: 1.2999592303813468
- type: nauc_recall_at_20_std
value: 11.19127853488355
- type: nauc_recall_at_3_diff1
value: 17.30588064042963
- type: nauc_recall_at_3_max
value: -7.846387853535884
- type: nauc_recall_at_3_std
value: 5.297872232388258
- type: nauc_recall_at_5_diff1
value: 19.463493783902646
- type: nauc_recall_at_5_max
value: -3.6803654508430363
- type: nauc_recall_at_5_std
value: 6.2516078049683195
- type: ndcg_at_1
value: 21.5
- type: ndcg_at_10
value: 15.876999999999999
- type: ndcg_at_100
value: 17.941
- type: ndcg_at_1000
value: 24.03
- type: ndcg_at_20
value: 15.443000000000001
- type: ndcg_at_3
value: 18.109
- type: ndcg_at_5
value: 16.77
- type: precision_at_1
value: 29.25
- type: precision_at_10
value: 14.575
- type: precision_at_100
value: 4.65
- type: precision_at_1000
value: 1.0699999999999998
- type: precision_at_20
value: 10.95
- type: precision_at_3
value: 21.667
- type: precision_at_5
value: 18.3
- type: recall_at_1
value: 2.508
- type: recall_at_10
value: 8.62
- type: recall_at_100
value: 22.502
- type: recall_at_1000
value: 43.986999999999995
- type: recall_at_20
value: 12.086
- type: recall_at_3
value: 4.727
- type: recall_at_5
value: 5.998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 35.565
- type: f1
value: 32.647604155368676
- type: f1_weighted
value: 37.73642424861998
- type: main_score
value: 35.565
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: validation
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 36.34
- type: f1
value: 33.99381354595679
- type: f1_weighted
value: 38.0699287698453
- type: main_score
value: 36.34
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: dev
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 15.887
- type: map_at_1
value: 9.031
- type: map_at_10
value: 13.206000000000001
- type: map_at_100
value: 13.86
- type: map_at_1000
value: 13.931
- type: map_at_20
value: 13.56
- type: map_at_3
value: 11.719
- type: map_at_5
value: 12.564
- type: mrr_at_1
value: 9.6009600960096
- type: mrr_at_10
value: 13.992625453021532
- type: mrr_at_100
value: 14.667785958434475
- type: mrr_at_1000
value: 14.737071710945912
- type: mrr_at_20
value: 14.357782022450813
- type: mrr_at_3
value: 12.416241624162438
- type: mrr_at_5
value: 13.314831483148371
- type: nauc_map_at_1000_diff1
value: 25.064347305842972
- type: nauc_map_at_1000_max
value: 9.11634190192658
- type: nauc_map_at_1000_std
value: -6.145027297238851
- type: nauc_map_at_100_diff1
value: 25.07006496457737
- type: nauc_map_at_100_max
value: 9.130642591733851
- type: nauc_map_at_100_std
value: -6.188969250494314
- type: nauc_map_at_10_diff1
value: 25.415868249003417
- type: nauc_map_at_10_max
value: 9.375861890270263
- type: nauc_map_at_10_std
value: -6.753410909591189
- type: nauc_map_at_1_diff1
value: 31.60162657862004
- type: nauc_map_at_1_max
value: 13.467846044708573
- type: nauc_map_at_1_std
value: -9.405070228216504
- type: nauc_map_at_20_diff1
value: 25.129127775308984
- type: nauc_map_at_20_max
value: 9.171261356932808
- type: nauc_map_at_20_std
value: -6.466704279104992
- type: nauc_map_at_3_diff1
value: 27.194821893410865
- type: nauc_map_at_3_max
value: 10.212122914995785
- type: nauc_map_at_3_std
value: -7.062124084002097
- type: nauc_map_at_5_diff1
value: 25.990273920887564
- type: nauc_map_at_5_max
value: 9.850037710165656
- type: nauc_map_at_5_std
value: -7.089195190589728
- type: nauc_mrr_at_1000_diff1
value: 24.29811176478267
- type: nauc_mrr_at_1000_max
value: 9.116174573758764
- type: nauc_mrr_at_1000_std
value: -6.3381300955231135
- type: nauc_mrr_at_100_diff1
value: 24.297101302841945
- type: nauc_mrr_at_100_max
value: 9.124989883203817
- type: nauc_mrr_at_100_std
value: -6.377052852799241
- type: nauc_mrr_at_10_diff1
value: 24.694084348430632
- type: nauc_mrr_at_10_max
value: 9.405563275203962
- type: nauc_mrr_at_10_std
value: -6.857936856745418
- type: nauc_mrr_at_1_diff1
value: 30.832552348339416
- type: nauc_mrr_at_1_max
value: 13.329153354873045
- type: nauc_mrr_at_1_std
value: -9.149483176614988
- type: nauc_mrr_at_20_diff1
value: 24.354405690846395
- type: nauc_mrr_at_20_max
value: 9.176207782963502
- type: nauc_mrr_at_20_std
value: -6.617926463432496
- type: nauc_mrr_at_3_diff1
value: 26.366512925407886
- type: nauc_mrr_at_3_max
value: 10.204104753002841
- type: nauc_mrr_at_3_std
value: -7.2196075178078525
- type: nauc_mrr_at_5_diff1
value: 25.18806197418501
- type: nauc_mrr_at_5_max
value: 9.865661255440518
- type: nauc_mrr_at_5_std
value: -7.240672093597937
- type: nauc_ndcg_at_1000_diff1
value: 21.241949632920427
- type: nauc_ndcg_at_1000_max
value: 6.678625264649172
- type: nauc_ndcg_at_1000_std
value: -2.7038318674526463
- type: nauc_ndcg_at_100_diff1
value: 21.498211819668665
- type: nauc_ndcg_at_100_max
value: 6.844303645194557
- type: nauc_ndcg_at_100_std
value: -3.6380899360104384
- type: nauc_ndcg_at_10_diff1
value: 22.70005319750508
- type: nauc_ndcg_at_10_max
value: 7.809248596426432
- type: nauc_ndcg_at_10_std
value: -5.995534124518128
- type: nauc_ndcg_at_1_diff1
value: 30.832552348339416
- type: nauc_ndcg_at_1_max
value: 13.329153354873045
- type: nauc_ndcg_at_1_std
value: -9.149483176614988
- type: nauc_ndcg_at_20_diff1
value: 21.875673813225607
- type: nauc_ndcg_at_20_max
value: 7.232420354566728
- type: nauc_ndcg_at_20_std
value: -5.153702175635842
- type: nauc_ndcg_at_3_diff1
value: 25.689748318319968
- type: nauc_ndcg_at_3_max
value: 9.321396033719674
- type: nauc_ndcg_at_3_std
value: -6.667900481124349
- type: nauc_ndcg_at_5_diff1
value: 23.80853727637314
- type: nauc_ndcg_at_5_max
value: 8.805206999290311
- type: nauc_ndcg_at_5_std
value: -6.749732322955835
- type: nauc_precision_at_1000_diff1
value: 8.89067041886934
- type: nauc_precision_at_1000_max
value: -0.10853596523430803
- type: nauc_precision_at_1000_std
value: 8.458367486892811
- type: nauc_precision_at_100_diff1
value: 12.89925385722559
- type: nauc_precision_at_100_max
value: 1.5493957464756554
- type: nauc_precision_at_100_std
value: 3.1392923786737823
- type: nauc_precision_at_10_diff1
value: 16.48393586101882
- type: nauc_precision_at_10_max
value: 4.448391737969649
- type: nauc_precision_at_10_std
value: -4.231554736544226
- type: nauc_precision_at_1_diff1
value: 30.832552348339416
- type: nauc_precision_at_1_max
value: 13.329153354873045
- type: nauc_precision_at_1_std
value: -9.149483176614988
- type: nauc_precision_at_20_diff1
value: 14.495704182353265
- type: nauc_precision_at_20_max
value: 3.1844435237808626
- type: nauc_precision_at_20_std
value: -2.13159709019079
- type: nauc_precision_at_3_diff1
value: 21.827194487523666
- type: nauc_precision_at_3_max
value: 7.16540636557807
- type: nauc_precision_at_3_std
value: -5.728520017693463
- type: nauc_precision_at_5_diff1
value: 18.592965429794177
- type: nauc_precision_at_5_max
value: 6.544160267771925
- type: nauc_precision_at_5_std
value: -6.075735969421065
- type: nauc_recall_at_1000_diff1
value: 11.507052830849593
- type: nauc_recall_at_1000_max
value: 0.5085962092972218
- type: nauc_recall_at_1000_std
value: 9.072695183008296
- type: nauc_recall_at_100_diff1
value: 14.137491664265205
- type: nauc_recall_at_100_max
value: 1.7152961536428157
- type: nauc_recall_at_100_std
value: 3.076446205016648
- type: nauc_recall_at_10_diff1
value: 16.967993437618524
- type: nauc_recall_at_10_max
value: 4.283734277530837
- type: nauc_recall_at_10_std
value: -4.1919003003308335
- type: nauc_recall_at_1_diff1
value: 31.60162657862004
- type: nauc_recall_at_1_max
value: 13.467846044708573
- type: nauc_recall_at_1_std
value: -9.405070228216504
- type: nauc_recall_at_20_diff1
value: 15.332586494090162
- type: nauc_recall_at_20_max
value: 3.0659865972417752
- type: nauc_recall_at_20_std
value: -2.0108021427447484
- type: nauc_recall_at_3_diff1
value: 22.636006877560675
- type: nauc_recall_at_3_max
value: 7.08262003860773
- type: nauc_recall_at_3_std
value: -5.563803313332396
- type: nauc_recall_at_5_diff1
value: 19.35818950244238
- type: nauc_recall_at_5_max
value: 6.45610000891455
- type: nauc_recall_at_5_std
value: -5.8921593809685655
- type: ndcg_at_1
value: 9.600999999999999
- type: ndcg_at_10
value: 15.887
- type: ndcg_at_100
value: 19.384999999999998
- type: ndcg_at_1000
value: 21.597
- type: ndcg_at_20
value: 17.171
- type: ndcg_at_3
value: 12.786
- type: ndcg_at_5
value: 14.322
- type: precision_at_1
value: 9.600999999999999
- type: precision_at_10
value: 2.55
- type: precision_at_100
value: 0.44200000000000006
- type: precision_at_1000
value: 0.065
- type: precision_at_20
value: 1.552
- type: precision_at_3
value: 5.421
- type: precision_at_5
value: 4.053
- type: recall_at_1
value: 9.031
- type: recall_at_10
value: 23.619
- type: recall_at_100
value: 40.278000000000006
- type: recall_at_1000
value: 57.887
- type: recall_at_20
value: 28.566999999999997
- type: recall_at_3
value: 15.187000000000001
- type: recall_at_5
value: 18.887
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 15.558
- type: map_at_1
value: 8.561
- type: map_at_10
value: 12.748000000000001
- type: map_at_100
value: 13.520999999999999
- type: map_at_1000
value: 13.605
- type: map_at_20
value: 13.181999999999999
- type: map_at_3
value: 11.113000000000001
- type: map_at_5
value: 11.983
- type: mrr_at_1
value: 9.21092109210921
- type: mrr_at_10
value: 13.603878244967355
- type: mrr_at_100
value: 14.402189267837029
- type: mrr_at_1000
value: 14.482056859871392
- type: mrr_at_20
value: 14.05559265500001
- type: mrr_at_3
value: 11.8786878687869
- type: mrr_at_5
value: 12.799779977997844
- type: nauc_map_at_1000_diff1
value: 23.047056020210395
- type: nauc_map_at_1000_max
value: 4.381421606623545
- type: nauc_map_at_1000_std
value: -9.295909662012983
- type: nauc_map_at_100_diff1
value: 23.042574846755315
- type: nauc_map_at_100_max
value: 4.352252952156301
- type: nauc_map_at_100_std
value: -9.383217560309644
- type: nauc_map_at_10_diff1
value: 23.38058836893436
- type: nauc_map_at_10_max
value: 4.158033424222088
- type: nauc_map_at_10_std
value: -10.196812290880018
- type: nauc_map_at_1_diff1
value: 30.91973794677596
- type: nauc_map_at_1_max
value: 7.885657745437867
- type: nauc_map_at_1_std
value: -14.738103293161911
- type: nauc_map_at_20_diff1
value: 23.102202740729723
- type: nauc_map_at_20_max
value: 4.231609654970102
- type: nauc_map_at_20_std
value: -9.800147090683598
- type: nauc_map_at_3_diff1
value: 25.727385170223222
- type: nauc_map_at_3_max
value: 4.623521860268399
- type: nauc_map_at_3_std
value: -11.44822847954444
- type: nauc_map_at_5_diff1
value: 24.274383037313303
- type: nauc_map_at_5_max
value: 4.519726438603311
- type: nauc_map_at_5_std
value: -11.008723447006215
- type: nauc_mrr_at_1000_diff1
value: 22.928661532209286
- type: nauc_mrr_at_1000_max
value: 4.624249340675935
- type: nauc_mrr_at_1000_std
value: -9.721433235878589
- type: nauc_mrr_at_100_diff1
value: 22.92308465251261
- type: nauc_mrr_at_100_max
value: 4.600443428187166
- type: nauc_mrr_at_100_std
value: -9.787139791494393
- type: nauc_mrr_at_10_diff1
value: 23.28024411129446
- type: nauc_mrr_at_10_max
value: 4.345820371946406
- type: nauc_mrr_at_10_std
value: -10.582719246903517
- type: nauc_mrr_at_1_diff1
value: 30.396426789142485
- type: nauc_mrr_at_1_max
value: 8.028945058166865
- type: nauc_mrr_at_1_std
value: -15.111804316468467
- type: nauc_mrr_at_20_diff1
value: 22.987374341707028
- type: nauc_mrr_at_20_max
value: 4.466368462388426
- type: nauc_mrr_at_20_std
value: -10.187640105213426
- type: nauc_mrr_at_3_diff1
value: 25.45864054625039
- type: nauc_mrr_at_3_max
value: 4.825296314472089
- type: nauc_mrr_at_3_std
value: -11.8653467337897
- type: nauc_mrr_at_5_diff1
value: 24.083449169803608
- type: nauc_mrr_at_5_max
value: 4.657923185334206
- type: nauc_mrr_at_5_std
value: -11.42158746010669
- type: nauc_ndcg_at_1000_diff1
value: 19.556482379004763
- type: nauc_ndcg_at_1000_max
value: 4.627846006686356
- type: nauc_ndcg_at_1000_std
value: -2.946879517252631
- type: nauc_ndcg_at_100_diff1
value: 19.481027711259248
- type: nauc_ndcg_at_100_max
value: 4.118878248638819
- type: nauc_ndcg_at_100_std
value: -4.661851669190927
- type: nauc_ndcg_at_10_diff1
value: 20.41362738874479
- type: nauc_ndcg_at_10_max
value: 3.0589545334148287
- type: nauc_ndcg_at_10_std
value: -8.348131699973683
- type: nauc_ndcg_at_1_diff1
value: 30.396426789142485
- type: nauc_ndcg_at_1_max
value: 8.028945058166865
- type: nauc_ndcg_at_1_std
value: -15.111804316468467
- type: nauc_ndcg_at_20_diff1
value: 19.710650902879344
- type: nauc_ndcg_at_20_max
value: 3.3541035346237438
- type: nauc_ndcg_at_20_std
value: -7.190749273990976
- type: nauc_ndcg_at_3_diff1
value: 24.374214444727755
- type: nauc_ndcg_at_3_max
value: 3.825023929053919
- type: nauc_ndcg_at_3_std
value: -10.62774914466829
- type: nauc_ndcg_at_5_diff1
value: 22.115176363112063
- type: nauc_ndcg_at_5_max
value: 3.7189238642890565
- type: nauc_ndcg_at_5_std
value: -9.935373088819187
- type: nauc_precision_at_1000_diff1
value: 10.036659196946811
- type: nauc_precision_at_1000_max
value: 8.4571806332048
- type: nauc_precision_at_1000_std
value: 18.900394974888577
- type: nauc_precision_at_100_diff1
value: 12.237410047110375
- type: nauc_precision_at_100_max
value: 5.219488997502101
- type: nauc_precision_at_100_std
value: 7.725446609943484
- type: nauc_precision_at_10_diff1
value: 14.52898205873517
- type: nauc_precision_at_10_max
value: 0.9580429842932288
- type: nauc_precision_at_10_std
value: -4.539094090410254
- type: nauc_precision_at_1_diff1
value: 30.396426789142485
- type: nauc_precision_at_1_max
value: 8.028945058166865
- type: nauc_precision_at_1_std
value: -15.111804316468467
- type: nauc_precision_at_20_diff1
value: 13.157280011706241
- type: nauc_precision_at_20_max
value: 2.0674592007535724
- type: nauc_precision_at_20_std
value: -1.5646067540383648
- type: nauc_precision_at_3_diff1
value: 21.213559471597666
- type: nauc_precision_at_3_max
value: 2.309215070583864
- type: nauc_precision_at_3_std
value: -8.908998732940104
- type: nauc_precision_at_5_diff1
value: 17.55694488126131
- type: nauc_precision_at_5_max
value: 2.201938089239908
- type: nauc_precision_at_5_std
value: -7.8864074655897705
- type: nauc_recall_at_1000_diff1
value: 10.88693206987016
- type: nauc_recall_at_1000_max
value: 6.516605161902966
- type: nauc_recall_at_1000_std
value: 18.92754824266997
- type: nauc_recall_at_100_diff1
value: 12.096509748195034
- type: nauc_recall_at_100_max
value: 4.122794219337515
- type: nauc_recall_at_100_std
value: 7.110265126293798
- type: nauc_recall_at_10_diff1
value: 13.882535112261015
- type: nauc_recall_at_10_max
value: 0.6349118959214549
- type: nauc_recall_at_10_std
value: -4.0799016394324195
- type: nauc_recall_at_1_diff1
value: 30.91973794677596
- type: nauc_recall_at_1_max
value: 7.885657745437867
- type: nauc_recall_at_1_std
value: -14.738103293161911
- type: nauc_recall_at_20_diff1
value: 12.701037772454308
- type: nauc_recall_at_20_max
value: 1.5719257587652442
- type: nauc_recall_at_20_std
value: -1.4582642381228048
- type: nauc_recall_at_3_diff1
value: 21.045453549710942
- type: nauc_recall_at_3_max
value: 1.7758573902158428
- type: nauc_recall_at_3_std
value: -8.219638503176125
- type: nauc_recall_at_5_diff1
value: 17.27396160453234
- type: nauc_recall_at_5_max
value: 1.8988611208861945
- type: nauc_recall_at_5_std
value: -7.198800549477399
- type: ndcg_at_1
value: 9.211
- type: ndcg_at_10
value: 15.558
- type: ndcg_at_100
value: 19.697
- type: ndcg_at_1000
value: 22.192
- type: ndcg_at_20
value: 17.149
- type: ndcg_at_3
value: 12.148
- type: ndcg_at_5
value: 13.725999999999999
- type: precision_at_1
value: 9.211
- type: precision_at_10
value: 2.561
- type: precision_at_100
value: 0.477
- type: precision_at_1000
value: 0.07100000000000001
- type: precision_at_20
value: 1.6199999999999999
- type: precision_at_3
value: 5.1610000000000005
- type: precision_at_5
value: 3.9149999999999996
- type: recall_at_1
value: 8.561
- type: recall_at_10
value: 23.676
- type: recall_at_100
value: 43.427
- type: recall_at_1000
value: 63.059
- type: recall_at_20
value: 29.848999999999997
- type: recall_at_3
value: 14.405999999999999
- type: recall_at_5
value: 18.191
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: train
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 10.674
- type: map_at_1
value: 5.489
- type: map_at_10
value: 8.578
- type: map_at_100
value: 9.203
- type: map_at_1000
value: 9.288
- type: map_at_20
value: 8.901
- type: map_at_3
value: 7.457999999999999
- type: map_at_5
value: 8.036999999999999
- type: mrr_at_1
value: 6.039522812130043
- type: mrr_at_10
value: 9.38572360917802
- type: mrr_at_100
value: 10.044849301845016
- type: mrr_at_1000
value: 10.128562524038893
- type: mrr_at_20
value: 9.728900696281142
- type: mrr_at_3
value: 8.17745803357304
- type: mrr_at_5
value: 8.80567950702728
- type: nauc_map_at_1000_diff1
value: 19.706196067173888
- type: nauc_map_at_1000_max
value: 0.8731312666098832
- type: nauc_map_at_1000_std
value: -8.903704565738964
- type: nauc_map_at_100_diff1
value: 19.712456534647163
- type: nauc_map_at_100_max
value: 0.8653761387670973
- type: nauc_map_at_100_std
value: -8.978665814695075
- type: nauc_map_at_10_diff1
value: 20.20855822637708
- type: nauc_map_at_10_max
value: 0.963632969542636
- type: nauc_map_at_10_std
value: -9.79290494632636
- type: nauc_map_at_1_diff1
value: 26.87568133521237
- type: nauc_map_at_1_max
value: 4.2330031342559495
- type: nauc_map_at_1_std
value: -11.851764774893743
- type: nauc_map_at_20_diff1
value: 19.86523796102174
- type: nauc_map_at_20_max
value: 0.8429782656830993
- type: nauc_map_at_20_std
value: -9.390385661922004
- type: nauc_map_at_3_diff1
value: 21.526568050993614
- type: nauc_map_at_3_max
value: 1.7787927524246252
- type: nauc_map_at_3_std
value: -10.769499032610144
- type: nauc_map_at_5_diff1
value: 20.919719707508776
- type: nauc_map_at_5_max
value: 1.2435215563946254
- type: nauc_map_at_5_std
value: -10.353839788444798
- type: nauc_mrr_at_1000_diff1
value: 19.285847206151384
- type: nauc_mrr_at_1000_max
value: 1.6048601114843506
- type: nauc_mrr_at_1000_std
value: -9.623190968970748
- type: nauc_mrr_at_100_diff1
value: 19.28819284659135
- type: nauc_mrr_at_100_max
value: 1.6031947925696466
- type: nauc_mrr_at_100_std
value: -9.68120524142785
- type: nauc_mrr_at_10_diff1
value: 19.736921927366115
- type: nauc_mrr_at_10_max
value: 1.7258514805195968
- type: nauc_mrr_at_10_std
value: -10.401370656388199
- type: nauc_mrr_at_1_diff1
value: 26.114009916227648
- type: nauc_mrr_at_1_max
value: 4.987882251998017
- type: nauc_mrr_at_1_std
value: -12.49053300937345
- type: nauc_mrr_at_20_diff1
value: 19.427686616755445
- type: nauc_mrr_at_20_max
value: 1.5991256678202856
- type: nauc_mrr_at_20_std
value: -10.048775742175444
- type: nauc_mrr_at_3_diff1
value: 21.062351886957174
- type: nauc_mrr_at_3_max
value: 2.6124171690110143
- type: nauc_mrr_at_3_std
value: -11.345952998880241
- type: nauc_mrr_at_5_diff1
value: 20.44473608014843
- type: nauc_mrr_at_5_max
value: 2.0319303436430567
- type: nauc_mrr_at_5_std
value: -10.95417532918635
- type: nauc_ndcg_at_1000_diff1
value: 16.412677685272328
- type: nauc_ndcg_at_1000_max
value: 0.1860703253887506
- type: nauc_ndcg_at_1000_std
value: -3.8586290300089012
- type: nauc_ndcg_at_100_diff1
value: 16.475365676518383
- type: nauc_ndcg_at_100_max
value: -0.05081229901270033
- type: nauc_ndcg_at_100_std
value: -5.552388612136397
- type: nauc_ndcg_at_10_diff1
value: 17.996767346973357
- type: nauc_ndcg_at_10_max
value: 0.07032909187543204
- type: nauc_ndcg_at_10_std
value: -8.87752925793177
- type: nauc_ndcg_at_1_diff1
value: 26.114009916227648
- type: nauc_ndcg_at_1_max
value: 4.987882251998017
- type: nauc_ndcg_at_1_std
value: -12.49053300937345
- type: nauc_ndcg_at_20_diff1
value: 17.13031519380471
- type: nauc_ndcg_at_20_max
value: -0.2304970307184397
- type: nauc_ndcg_at_20_std
value: -7.797199825702626
- type: nauc_ndcg_at_3_diff1
value: 20.224964171865683
- type: nauc_ndcg_at_3_max
value: 1.4221448274398238
- type: nauc_ndcg_at_3_std
value: -10.621450808201404
- type: nauc_ndcg_at_5_diff1
value: 19.318056511652202
- type: nauc_ndcg_at_5_max
value: 0.5708577021660709
- type: nauc_ndcg_at_5_std
value: -9.975915558143965
- type: nauc_precision_at_1000_diff1
value: 9.202187631346504
- type: nauc_precision_at_1000_max
value: 0.8029916001211161
- type: nauc_precision_at_1000_std
value: 7.508338263106339
- type: nauc_precision_at_100_diff1
value: 11.034506607325504
- type: nauc_precision_at_100_max
value: -0.618050025750715
- type: nauc_precision_at_100_std
value: 0.7901423938460427
- type: nauc_precision_at_10_diff1
value: 13.955658061329373
- type: nauc_precision_at_10_max
value: -1.207293654855529
- type: nauc_precision_at_10_std
value: -7.425714506330329
- type: nauc_precision_at_1_diff1
value: 26.114009916227648
- type: nauc_precision_at_1_max
value: 4.987882251998017
- type: nauc_precision_at_1_std
value: -12.49053300937345
- type: nauc_precision_at_20_diff1
value: 12.489569229142287
- type: nauc_precision_at_20_max
value: -1.616050984914554
- type: nauc_precision_at_20_std
value: -5.162744655773524
- type: nauc_precision_at_3_diff1
value: 17.504167758522694
- type: nauc_precision_at_3_max
value: 0.9268715995036761
- type: nauc_precision_at_3_std
value: -10.559011502963529
- type: nauc_precision_at_5_diff1
value: 16.15636228000452
- type: nauc_precision_at_5_max
value: -0.4719927377201926
- type: nauc_precision_at_5_std
value: -9.50415001307989
- type: nauc_recall_at_1000_diff1
value: 10.260398547014864
- type: nauc_recall_at_1000_max
value: -0.7316619465786228
- type: nauc_recall_at_1000_std
value: 10.142175782165697
- type: nauc_recall_at_100_diff1
value: 10.941819094303062
- type: nauc_recall_at_100_max
value: -1.670421830649867
- type: nauc_recall_at_100_std
value: 1.9736476884656806
- type: nauc_recall_at_10_diff1
value: 14.042352275022187
- type: nauc_recall_at_10_max
value: -2.0937095871156957
- type: nauc_recall_at_10_std
value: -6.552138482552333
- type: nauc_recall_at_1_diff1
value: 26.87568133521237
- type: nauc_recall_at_1_max
value: 4.2330031342559495
- type: nauc_recall_at_1_std
value: -11.851764774893743
- type: nauc_recall_at_20_diff1
value: 12.262370673576136
- type: nauc_recall_at_20_max
value: -2.516862222229297
- type: nauc_recall_at_20_std
value: -4.161596846497648
- type: nauc_recall_at_3_diff1
value: 17.56224620361019
- type: nauc_recall_at_3_max
value: 0.018714492341130415
- type: nauc_recall_at_3_std
value: -9.785061227871502
- type: nauc_recall_at_5_diff1
value: 16.129221364664424
- type: nauc_recall_at_5_max
value: -1.4111392018018456
- type: nauc_recall_at_5_std
value: -8.575715551515446
- type: ndcg_at_1
value: 6.04
- type: ndcg_at_10
value: 10.674
- type: ndcg_at_100
value: 14.149999999999999
- type: ndcg_at_1000
value: 16.830000000000002
- type: ndcg_at_20
value: 11.859
- type: ndcg_at_3
value: 8.297
- type: ndcg_at_5
value: 9.35
- type: precision_at_1
value: 6.04
- type: precision_at_10
value: 1.849
- type: precision_at_100
value: 0.379
- type: precision_at_1000
value: 0.064
- type: precision_at_20
value: 1.1860000000000002
- type: precision_at_3
value: 3.697
- type: precision_at_5
value: 2.793
- type: recall_at_1
value: 5.489
- type: recall_at_10
value: 16.520000000000003
- type: recall_at_100
value: 33.219
- type: recall_at_1000
value: 54.458
- type: recall_at_20
value: 21.068
- type: recall_at_3
value: 10.006
- type: recall_at_5
value: 12.520000000000001
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: dev
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 10.741
- type: map_at_1
value: 5.115
- type: map_at_10
value: 7.752000000000001
- type: map_at_100
value: 8.615
- type: map_at_1000
value: 8.757
- type: map_at_20
value: 8.152
- type: map_at_3
value: 6.612
- type: map_at_5
value: 7.1739999999999995
- type: mrr_at_1
value: 9.2
- type: mrr_at_10
value: 13.675952380952387
- type: mrr_at_100
value: 14.572961370891106
- type: mrr_at_1000
value: 14.665070093863623
- type: mrr_at_20
value: 14.128890304741706
- type: mrr_at_3
value: 12.133333333333335
- type: mrr_at_5
value: 13.013333333333337
- type: nauc_map_at_1000_diff1
value: 24.443464786492687
- type: nauc_map_at_1000_max
value: 3.5863334260592317
- type: nauc_map_at_1000_std
value: 6.808944473275444
- type: nauc_map_at_100_diff1
value: 24.554927504722123
- type: nauc_map_at_100_max
value: 3.4867359439244705
- type: nauc_map_at_100_std
value: 6.736578474081088
- type: nauc_map_at_10_diff1
value: 25.946714216691863
- type: nauc_map_at_10_max
value: 2.2553380642838587
- type: nauc_map_at_10_std
value: 7.579966904876258
- type: nauc_map_at_1_diff1
value: 35.57463931710653
- type: nauc_map_at_1_max
value: -1.2179925322847422
- type: nauc_map_at_1_std
value: 8.443440689789126
- type: nauc_map_at_20_diff1
value: 25.128283903883748
- type: nauc_map_at_20_max
value: 2.735471459951113
- type: nauc_map_at_20_std
value: 6.558949576566045
- type: nauc_map_at_3_diff1
value: 27.74350063042762
- type: nauc_map_at_3_max
value: 1.9484063373599907
- type: nauc_map_at_3_std
value: 8.325305776564282
- type: nauc_map_at_5_diff1
value: 27.550954769315705
- type: nauc_map_at_5_max
value: 2.3793167710350733
- type: nauc_map_at_5_std
value: 7.681181471920243
- type: nauc_mrr_at_1000_diff1
value: 21.301648618211885
- type: nauc_mrr_at_1000_max
value: 7.576030904734066
- type: nauc_mrr_at_1000_std
value: 1.5023708427391749
- type: nauc_mrr_at_100_diff1
value: 21.295663061690203
- type: nauc_mrr_at_100_max
value: 7.518240604735922
- type: nauc_mrr_at_100_std
value: 1.560214714120193
- type: nauc_mrr_at_10_diff1
value: 21.933247351406333
- type: nauc_mrr_at_10_max
value: 7.068953773861448
- type: nauc_mrr_at_10_std
value: 1.6795276679770348
- type: nauc_mrr_at_1_diff1
value: 30.259796315955246
- type: nauc_mrr_at_1_max
value: 4.347826086956519
- type: nauc_mrr_at_1_std
value: 2.2689889292005527
- type: nauc_mrr_at_20_diff1
value: 21.549399157417735
- type: nauc_mrr_at_20_max
value: 7.286400181272572
- type: nauc_mrr_at_20_std
value: 1.2054196098114403
- type: nauc_mrr_at_3_diff1
value: 23.051263098119705
- type: nauc_mrr_at_3_max
value: 6.953781046073019
- type: nauc_mrr_at_3_std
value: 1.9392196486864595
- type: nauc_mrr_at_5_diff1
value: 22.422859542345815
- type: nauc_mrr_at_5_max
value: 7.32334346508228
- type: nauc_mrr_at_5_std
value: 1.6646269432927734
- type: nauc_ndcg_at_1000_diff1
value: 17.183525204056878
- type: nauc_ndcg_at_1000_max
value: 10.081867968975438
- type: nauc_ndcg_at_1000_std
value: 6.07484203204573
- type: nauc_ndcg_at_100_diff1
value: 17.58770521024701
- type: nauc_ndcg_at_100_max
value: 8.028147603018072
- type: nauc_ndcg_at_100_std
value: 5.313253617249536
- type: nauc_ndcg_at_10_diff1
value: 21.572138893924286
- type: nauc_ndcg_at_10_max
value: 4.106262578988869
- type: nauc_ndcg_at_10_std
value: 5.527689456319853
- type: nauc_ndcg_at_1_diff1
value: 30.259796315955246
- type: nauc_ndcg_at_1_max
value: 4.347826086956519
- type: nauc_ndcg_at_1_std
value: 2.2689889292005527
- type: nauc_ndcg_at_20_diff1
value: 19.83139955467302
- type: nauc_ndcg_at_20_max
value: 5.039994572546512
- type: nauc_ndcg_at_20_std
value: 3.2037177616412347
- type: nauc_ndcg_at_3_diff1
value: 23.106503069133826
- type: nauc_ndcg_at_3_max
value: 5.389655773530784
- type: nauc_ndcg_at_3_std
value: 5.277035505346756
- type: nauc_ndcg_at_5_diff1
value: 23.547036127096977
- type: nauc_ndcg_at_5_max
value: 5.065086504835826
- type: nauc_ndcg_at_5_std
value: 5.489645533502532
- type: nauc_precision_at_1000_diff1
value: -0.9149500392472889
- type: nauc_precision_at_1000_max
value: 23.01809618770818
- type: nauc_precision_at_1000_std
value: 4.063813989010755
- type: nauc_precision_at_100_diff1
value: 2.4962996663997683
- type: nauc_precision_at_100_max
value: 20.37326252381357
- type: nauc_precision_at_100_std
value: 1.7617549364791172
- type: nauc_precision_at_10_diff1
value: 14.420327449537302
- type: nauc_precision_at_10_max
value: 9.740960148643264
- type: nauc_precision_at_10_std
value: 1.0489485177901323
- type: nauc_precision_at_1_diff1
value: 30.259796315955246
- type: nauc_precision_at_1_max
value: 4.347826086956519
- type: nauc_precision_at_1_std
value: 2.2689889292005527
- type: nauc_precision_at_20_diff1
value: 10.637464534288837
- type: nauc_precision_at_20_max
value: 13.295190905785848
- type: nauc_precision_at_20_std
value: -4.771426536785016
- type: nauc_precision_at_3_diff1
value: 14.258052596798132
- type: nauc_precision_at_3_max
value: 9.789243308155262
- type: nauc_precision_at_3_std
value: 2.7506377415068437
- type: nauc_precision_at_5_diff1
value: 15.609932321999299
- type: nauc_precision_at_5_max
value: 10.931031368447426
- type: nauc_precision_at_5_std
value: 1.9411903520246772
- type: nauc_recall_at_1000_diff1
value: 5.1230755723659644
- type: nauc_recall_at_1000_max
value: 15.667171043851152
- type: nauc_recall_at_1000_std
value: 8.658594809809495
- type: nauc_recall_at_100_diff1
value: 6.276840594323197
- type: nauc_recall_at_100_max
value: 10.186736292307549
- type: nauc_recall_at_100_std
value: 6.274803419198497
- type: nauc_recall_at_10_diff1
value: 14.511192964634844
- type: nauc_recall_at_10_max
value: 2.7033629159530768
- type: nauc_recall_at_10_std
value: 5.544721168976021
- type: nauc_recall_at_1_diff1
value: 35.57463931710653
- type: nauc_recall_at_1_max
value: -1.2179925322847422
- type: nauc_recall_at_1_std
value: 8.443440689789126
- type: nauc_recall_at_20_diff1
value: 10.528757907431654
- type: nauc_recall_at_20_max
value: 3.8769882351376204
- type: nauc_recall_at_20_std
value: 0.22172687781394734
- type: nauc_recall_at_3_diff1
value: 19.31761737312424
- type: nauc_recall_at_3_max
value: 4.770477256134984
- type: nauc_recall_at_3_std
value: 8.036064349802949
- type: nauc_recall_at_5_diff1
value: 18.736255753195202
- type: nauc_recall_at_5_max
value: 4.9215623846366015
- type: nauc_recall_at_5_std
value: 6.040913896762942
- type: ndcg_at_1
value: 9.2
- type: ndcg_at_10
value: 10.741
- type: ndcg_at_100
value: 15.576
- type: ndcg_at_1000
value: 19.169
- type: ndcg_at_20
value: 12.203999999999999
- type: ndcg_at_3
value: 8.895
- type: ndcg_at_5
value: 9.538
- type: precision_at_1
value: 9.2
- type: precision_at_10
value: 2.9000000000000004
- type: precision_at_100
value: 0.7779999999999999
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 2.02
- type: precision_at_3
value: 5.733
- type: precision_at_5
value: 4.36
- type: recall_at_1
value: 5.115
- type: recall_at_10
value: 14.241999999999999
- type: recall_at_100
value: 34.064
- type: recall_at_1000
value: 56.775
- type: recall_at_20
value: 18.995
- type: recall_at_3
value: 8.357000000000001
- type: recall_at_5
value: 10.613999999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 10.488
- type: map_at_1
value: 4.327
- type: map_at_10
value: 7.509
- type: map_at_100
value: 8.254999999999999
- type: map_at_1000
value: 8.408
- type: map_at_20
value: 7.865
- type: map_at_3
value: 6.333
- type: map_at_5
value: 6.847
- type: mrr_at_1
value: 7.87037037037037
- type: mrr_at_10
value: 12.783044287673912
- type: mrr_at_100
value: 13.64289652575713
- type: mrr_at_1000
value: 13.749659009318712
- type: mrr_at_20
value: 13.225641954805042
- type: mrr_at_3
value: 11.368312757201647
- type: mrr_at_5
value: 12.132201646090534
- type: nauc_map_at_1000_diff1
value: 15.693971517432875
- type: nauc_map_at_1000_max
value: 13.73353570544179
- type: nauc_map_at_1000_std
value: -0.4491214851659168
- type: nauc_map_at_100_diff1
value: 15.664658260972185
- type: nauc_map_at_100_max
value: 13.523228466009204
- type: nauc_map_at_100_std
value: -0.48853103844678436
- type: nauc_map_at_10_diff1
value: 16.481079891346
- type: nauc_map_at_10_max
value: 13.320138395812503
- type: nauc_map_at_10_std
value: -1.6853968671654185
- type: nauc_map_at_1_diff1
value: 23.542979489152824
- type: nauc_map_at_1_max
value: 21.116493547865606
- type: nauc_map_at_1_std
value: 0.540202247340363
- type: nauc_map_at_20_diff1
value: 15.95483221097064
- type: nauc_map_at_20_max
value: 13.088795818044199
- type: nauc_map_at_20_std
value: -1.2634478026438074
- type: nauc_map_at_3_diff1
value: 17.587440597709563
- type: nauc_map_at_3_max
value: 14.762252416346275
- type: nauc_map_at_3_std
value: -1.862462089814687
- type: nauc_map_at_5_diff1
value: 16.63760975264609
- type: nauc_map_at_5_max
value: 13.682106655727072
- type: nauc_map_at_5_std
value: -2.5320150847310656
- type: nauc_mrr_at_1000_diff1
value: 16.60169694778954
- type: nauc_mrr_at_1000_max
value: 15.478787426077659
- type: nauc_mrr_at_1000_std
value: 0.5134294458326035
- type: nauc_mrr_at_100_diff1
value: 16.581661276551422
- type: nauc_mrr_at_100_max
value: 15.420808739834909
- type: nauc_mrr_at_100_std
value: 0.5174928064924591
- type: nauc_mrr_at_10_diff1
value: 16.943561256314045
- type: nauc_mrr_at_10_max
value: 15.34192324853041
- type: nauc_mrr_at_10_std
value: 0.017943010026879994
- type: nauc_mrr_at_1_diff1
value: 25.20302422658546
- type: nauc_mrr_at_1_max
value: 22.312957472744696
- type: nauc_mrr_at_1_std
value: 0.8696567193376753
- type: nauc_mrr_at_20_diff1
value: 16.669178289461296
- type: nauc_mrr_at_20_max
value: 15.248695850016444
- type: nauc_mrr_at_20_std
value: 0.27156544965957125
- type: nauc_mrr_at_3_diff1
value: 17.781124767446954
- type: nauc_mrr_at_3_max
value: 17.257942639937898
- type: nauc_mrr_at_3_std
value: 0.773863132258467
- type: nauc_mrr_at_5_diff1
value: 17.324642628352414
- type: nauc_mrr_at_5_max
value: 16.144387950864854
- type: nauc_mrr_at_5_std
value: -0.03081737310621125
- type: nauc_ndcg_at_1000_diff1
value: 12.164273508590473
- type: nauc_ndcg_at_1000_max
value: 16.119100509244216
- type: nauc_ndcg_at_1000_std
value: 3.3836546870493978
- type: nauc_ndcg_at_100_diff1
value: 12.682912860260112
- type: nauc_ndcg_at_100_max
value: 13.155366244288949
- type: nauc_ndcg_at_100_std
value: 2.871619938448873
- type: nauc_ndcg_at_10_diff1
value: 15.021370241404203
- type: nauc_ndcg_at_10_max
value: 11.595794831262914
- type: nauc_ndcg_at_10_std
value: -1.1632586694213183
- type: nauc_ndcg_at_1_diff1
value: 25.20302422658546
- type: nauc_ndcg_at_1_max
value: 22.312957472744696
- type: nauc_ndcg_at_1_std
value: 0.8696567193376753
- type: nauc_ndcg_at_20_diff1
value: 13.657801631413413
- type: nauc_ndcg_at_20_max
value: 11.00736412938163
- type: nauc_ndcg_at_20_std
value: -0.3171662677839999
- type: nauc_ndcg_at_3_diff1
value: 16.36514101709667
- type: nauc_ndcg_at_3_max
value: 14.987518312847847
- type: nauc_ndcg_at_3_std
value: -1.4045044960148374
- type: nauc_ndcg_at_5_diff1
value: 14.931812970605298
- type: nauc_ndcg_at_5_max
value: 12.80562601481414
- type: nauc_ndcg_at_5_std
value: -2.794334830326802
- type: nauc_precision_at_1000_diff1
value: 3.100554529507418
- type: nauc_precision_at_1000_max
value: 20.358018215782856
- type: nauc_precision_at_1000_std
value: 11.163680309737314
- type: nauc_precision_at_100_diff1
value: 6.98075608034818
- type: nauc_precision_at_100_max
value: 16.36141830637447
- type: nauc_precision_at_100_std
value: 11.366080320564755
- type: nauc_precision_at_10_diff1
value: 15.32986733210927
- type: nauc_precision_at_10_max
value: 11.312307867275663
- type: nauc_precision_at_10_std
value: 0.53060482901878
- type: nauc_precision_at_1_diff1
value: 25.20302422658546
- type: nauc_precision_at_1_max
value: 22.312957472744696
- type: nauc_precision_at_1_std
value: 0.8696567193376753
- type: nauc_precision_at_20_diff1
value: 10.072334436927605
- type: nauc_precision_at_20_max
value: 10.798551531334315
- type: nauc_precision_at_20_std
value: 2.9846625238190603
- type: nauc_precision_at_3_diff1
value: 15.024244230650929
- type: nauc_precision_at_3_max
value: 13.087533581759256
- type: nauc_precision_at_3_std
value: -0.8725836400722371
- type: nauc_precision_at_5_diff1
value: 14.115770926256957
- type: nauc_precision_at_5_max
value: 11.026043250332117
- type: nauc_precision_at_5_std
value: -2.361564433438909
- type: nauc_recall_at_1000_diff1
value: 1.5806639952797101
- type: nauc_recall_at_1000_max
value: 19.809578349990957
- type: nauc_recall_at_1000_std
value: 7.965656838816244
- type: nauc_recall_at_100_diff1
value: 6.0457826628940055
- type: nauc_recall_at_100_max
value: 10.41943961051214
- type: nauc_recall_at_100_std
value: 6.808715859669741
- type: nauc_recall_at_10_diff1
value: 10.139488870220676
- type: nauc_recall_at_10_max
value: 5.5018399055657525
- type: nauc_recall_at_10_std
value: -1.8710090289481351
- type: nauc_recall_at_1_diff1
value: 23.542979489152824
- type: nauc_recall_at_1_max
value: 21.116493547865606
- type: nauc_recall_at_1_std
value: 0.540202247340363
- type: nauc_recall_at_20_diff1
value: 7.919810474778301
- type: nauc_recall_at_20_max
value: 4.810931910663472
- type: nauc_recall_at_20_std
value: -0.5041299837240394
- type: nauc_recall_at_3_diff1
value: 10.864730962859333
- type: nauc_recall_at_3_max
value: 9.02116321008078
- type: nauc_recall_at_3_std
value: -3.2267790643338463
- type: nauc_recall_at_5_diff1
value: 9.468917345249233
- type: nauc_recall_at_5_max
value: 6.955770417473521
- type: nauc_recall_at_5_std
value: -4.800036073854607
- type: ndcg_at_1
value: 7.870000000000001
- type: ndcg_at_10
value: 10.488
- type: ndcg_at_100
value: 14.682999999999998
- type: ndcg_at_1000
value: 18.786
- type: ndcg_at_20
value: 11.767
- type: ndcg_at_3
value: 8.716
- type: ndcg_at_5
value: 9.200999999999999
- type: precision_at_1
value: 7.870000000000001
- type: precision_at_10
value: 3.071
- type: precision_at_100
value: 0.731
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_20
value: 2.045
- type: precision_at_3
value: 5.9159999999999995
- type: precision_at_5
value: 4.506
- type: recall_at_1
value: 4.327
- type: recall_at_10
value: 14.066999999999998
- type: recall_at_100
value: 30.801000000000002
- type: recall_at_1000
value: 57.285
- type: recall_at_20
value: 17.989
- type: recall_at_3
value: 8.625
- type: recall_at_5
value: 10.445
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: train
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 9.578000000000001
- type: map_at_1
value: 4.038
- type: map_at_10
value: 6.755
- type: map_at_100
value: 7.571
- type: map_at_1000
value: 7.727
- type: map_at_20
value: 7.1370000000000005
- type: map_at_3
value: 5.7059999999999995
- type: map_at_5
value: 6.208
- type: mrr_at_1
value: 8.236363636363636
- type: mrr_at_10
value: 12.178658008658026
- type: mrr_at_100
value: 13.079157824767714
- type: mrr_at_1000
value: 13.185585148590217
- type: mrr_at_20
value: 12.652818836103396
- type: mrr_at_3
value: 10.71212121212121
- type: mrr_at_5
value: 11.458484848484865
- type: nauc_map_at_1000_diff1
value: 21.182816280151513
- type: nauc_map_at_1000_max
value: 10.69428303486232
- type: nauc_map_at_1000_std
value: -1.6164774963978996
- type: nauc_map_at_100_diff1
value: 21.223145880343168
- type: nauc_map_at_100_max
value: 10.535615269765472
- type: nauc_map_at_100_std
value: -1.8194907403363059
- type: nauc_map_at_10_diff1
value: 21.52203575990656
- type: nauc_map_at_10_max
value: 10.386209650695575
- type: nauc_map_at_10_std
value: -2.7616858412263907
- type: nauc_map_at_1_diff1
value: 29.09525102782545
- type: nauc_map_at_1_max
value: 11.25970689824895
- type: nauc_map_at_1_std
value: -4.144336424167707
- type: nauc_map_at_20_diff1
value: 21.46954847652392
- type: nauc_map_at_20_max
value: 10.258827394455478
- type: nauc_map_at_20_std
value: -2.2399487512781717
- type: nauc_map_at_3_diff1
value: 23.74685028823433
- type: nauc_map_at_3_max
value: 10.949843687159913
- type: nauc_map_at_3_std
value: -2.859432222798115
- type: nauc_map_at_5_diff1
value: 22.55717510746645
- type: nauc_map_at_5_max
value: 10.905433298857748
- type: nauc_map_at_5_std
value: -3.2349969036293302
- type: nauc_mrr_at_1000_diff1
value: 20.798743847964726
- type: nauc_mrr_at_1000_max
value: 13.590882394213939
- type: nauc_mrr_at_1000_std
value: -1.425396908617602
- type: nauc_mrr_at_100_diff1
value: 20.78178943942825
- type: nauc_mrr_at_100_max
value: 13.544645191839876
- type: nauc_mrr_at_100_std
value: -1.4484268481863227
- type: nauc_mrr_at_10_diff1
value: 21.122066212325464
- type: nauc_mrr_at_10_max
value: 13.602867729343668
- type: nauc_mrr_at_10_std
value: -1.7997296771621194
- type: nauc_mrr_at_1_diff1
value: 29.31676312287106
- type: nauc_mrr_at_1_max
value: 15.70325960963817
- type: nauc_mrr_at_1_std
value: -3.0054533145455498
- type: nauc_mrr_at_20_diff1
value: 20.91503032838441
- type: nauc_mrr_at_20_max
value: 13.430329100219785
- type: nauc_mrr_at_20_std
value: -1.5559138540635602
- type: nauc_mrr_at_3_diff1
value: 23.713532048958676
- type: nauc_mrr_at_3_max
value: 14.327169874806042
- type: nauc_mrr_at_3_std
value: -1.7473606728161384
- type: nauc_mrr_at_5_diff1
value: 22.14932108262477
- type: nauc_mrr_at_5_max
value: 13.881872546738903
- type: nauc_mrr_at_5_std
value: -1.9493419228701812
- type: nauc_ndcg_at_1000_diff1
value: 16.575354106946364
- type: nauc_ndcg_at_1000_max
value: 13.338828979053908
- type: nauc_ndcg_at_1000_std
value: 3.0646371572988262
- type: nauc_ndcg_at_100_diff1
value: 16.98369572705594
- type: nauc_ndcg_at_100_max
value: 10.899361814570142
- type: nauc_ndcg_at_100_std
value: 0.6125045404616679
- type: nauc_ndcg_at_10_diff1
value: 18.347324661692983
- type: nauc_ndcg_at_10_max
value: 10.465276227758997
- type: nauc_ndcg_at_10_std
value: -2.1111395162738247
- type: nauc_ndcg_at_1_diff1
value: 29.31676312287106
- type: nauc_ndcg_at_1_max
value: 15.70325960963817
- type: nauc_ndcg_at_1_std
value: -3.0054533145455498
- type: nauc_ndcg_at_20_diff1
value: 18.012648072836534
- type: nauc_ndcg_at_20_max
value: 9.961042030115676
- type: nauc_ndcg_at_20_std
value: -0.9834724323272007
- type: nauc_ndcg_at_3_diff1
value: 22.50117204208795
- type: nauc_ndcg_at_3_max
value: 12.804926278466885
- type: nauc_ndcg_at_3_std
value: -2.3329959506104543
- type: nauc_ndcg_at_5_diff1
value: 20.302876321754372
- type: nauc_ndcg_at_5_max
value: 11.773601706350991
- type: nauc_ndcg_at_5_std
value: -2.8981605686191383
- type: nauc_precision_at_1000_diff1
value: 6.138374447836478
- type: nauc_precision_at_1000_max
value: 18.331715393924767
- type: nauc_precision_at_1000_std
value: 12.583491365385171
- type: nauc_precision_at_100_diff1
value: 10.112353219266282
- type: nauc_precision_at_100_max
value: 13.855862069837926
- type: nauc_precision_at_100_std
value: 6.568065172020007
- type: nauc_precision_at_10_diff1
value: 13.853743310193584
- type: nauc_precision_at_10_max
value: 12.410028422765533
- type: nauc_precision_at_10_std
value: -0.035431237042733627
- type: nauc_precision_at_1_diff1
value: 29.31676312287106
- type: nauc_precision_at_1_max
value: 15.70325960963817
- type: nauc_precision_at_1_std
value: -3.0054533145455498
- type: nauc_precision_at_20_diff1
value: 12.405202071303263
- type: nauc_precision_at_20_max
value: 11.58023054377945
- type: nauc_precision_at_20_std
value: 2.392678794082365
- type: nauc_precision_at_3_diff1
value: 20.48743377332479
- type: nauc_precision_at_3_max
value: 14.077923262115402
- type: nauc_precision_at_3_std
value: -1.5328251588555544
- type: nauc_precision_at_5_diff1
value: 16.679567340831227
- type: nauc_precision_at_5_max
value: 13.734058632807395
- type: nauc_precision_at_5_std
value: -1.802003589808628
- type: nauc_recall_at_1000_diff1
value: 6.665583354703268
- type: nauc_recall_at_1000_max
value: 14.184554032490734
- type: nauc_recall_at_1000_std
value: 11.02548160998993
- type: nauc_recall_at_100_diff1
value: 8.970227788144088
- type: nauc_recall_at_100_max
value: 7.397741893508654
- type: nauc_recall_at_100_std
value: 3.6165193266266313
- type: nauc_recall_at_10_diff1
value: 11.458441446544818
- type: nauc_recall_at_10_max
value: 6.838832252215828
- type: nauc_recall_at_10_std
value: -1.932484458717299
- type: nauc_recall_at_1_diff1
value: 29.09525102782545
- type: nauc_recall_at_1_max
value: 11.25970689824895
- type: nauc_recall_at_1_std
value: -4.144336424167707
- type: nauc_recall_at_20_diff1
value: 11.532265794481113
- type: nauc_recall_at_20_max
value: 5.780208854158696
- type: nauc_recall_at_20_std
value: 0.5386564114228197
- type: nauc_recall_at_3_diff1
value: 17.945008349232687
- type: nauc_recall_at_3_max
value: 10.32375339551701
- type: nauc_recall_at_3_std
value: -1.9976280906398052
- type: nauc_recall_at_5_diff1
value: 15.052879011493205
- type: nauc_recall_at_5_max
value: 9.45434652519321
- type: nauc_recall_at_5_std
value: -3.450648528044286
- type: ndcg_at_1
value: 8.236
- type: ndcg_at_10
value: 9.578000000000001
- type: ndcg_at_100
value: 13.966000000000001
- type: ndcg_at_1000
value: 17.997
- type: ndcg_at_20
value: 10.940999999999999
- type: ndcg_at_3
value: 7.852
- type: ndcg_at_5
value: 8.387
- type: precision_at_1
value: 8.236
- type: precision_at_10
value: 2.785
- type: precision_at_100
value: 0.7100000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 1.925
- type: precision_at_3
value: 5.236
- type: precision_at_5
value: 4.04
- type: recall_at_1
value: 4.038
- type: recall_at_10
value: 12.741
- type: recall_at_100
value: 30.308
- type: recall_at_1000
value: 56.184
- type: recall_at_20
value: 17.112
- type: recall_at_3
value: 7.462000000000001
- type: recall_at_5
value: 9.281
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: dev
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 22.889
- type: map_at_1
value: 12.824
- type: map_at_10
value: 17.671
- type: map_at_100
value: 18.310000000000002
- type: map_at_1000
value: 18.401
- type: map_at_20
value: 17.985
- type: map_at_3
value: 16.323999999999998
- type: map_at_5
value: 17.041999999999998
- type: mrr_at_1
value: 25.647145217550943
- type: mrr_at_10
value: 31.14320246181815
- type: mrr_at_100
value: 31.769161472678668
- type: mrr_at_1000
value: 31.836329643204337
- type: mrr_at_20
value: 31.465014142683806
- type: mrr_at_3
value: 29.545315464169825
- type: mrr_at_5
value: 30.41460130958936
- type: nauc_map_at_1000_diff1
value: 47.600927437185575
- type: nauc_map_at_1000_max
value: 24.861073204858688
- type: nauc_map_at_1000_std
value: 4.957887829721383
- type: nauc_map_at_100_diff1
value: 47.623275530844886
- type: nauc_map_at_100_max
value: 24.828408842475042
- type: nauc_map_at_100_std
value: 4.906092461955792
- type: nauc_map_at_10_diff1
value: 48.15968275881134
- type: nauc_map_at_10_max
value: 24.93609071183744
- type: nauc_map_at_10_std
value: 4.27414117156765
- type: nauc_map_at_1_diff1
value: 56.68069306110441
- type: nauc_map_at_1_max
value: 25.774113603061334
- type: nauc_map_at_1_std
value: 0.40365058613138693
- type: nauc_map_at_20_diff1
value: 47.858453817693544
- type: nauc_map_at_20_max
value: 24.789689059350316
- type: nauc_map_at_20_std
value: 4.562256319185607
- type: nauc_map_at_3_diff1
value: 49.52861930769823
- type: nauc_map_at_3_max
value: 25.138544080679072
- type: nauc_map_at_3_std
value: 2.764666141360174
- type: nauc_map_at_5_diff1
value: 48.58671208489847
- type: nauc_map_at_5_max
value: 24.94692975737182
- type: nauc_map_at_5_std
value: 3.544473765381104
- type: nauc_mrr_at_1000_diff1
value: 50.76722669346204
- type: nauc_mrr_at_1000_max
value: 24.21021362367548
- type: nauc_mrr_at_1000_std
value: 2.6461593535039363
- type: nauc_mrr_at_100_diff1
value: 50.765501393324065
- type: nauc_mrr_at_100_max
value: 24.197831040602157
- type: nauc_mrr_at_100_std
value: 2.6571957023291697
- type: nauc_mrr_at_10_diff1
value: 50.99987085274227
- type: nauc_mrr_at_10_max
value: 24.23494195508237
- type: nauc_mrr_at_10_std
value: 2.4300768537721744
- type: nauc_mrr_at_1_diff1
value: 56.68069306110441
- type: nauc_mrr_at_1_max
value: 25.774113603061334
- type: nauc_mrr_at_1_std
value: 0.40365058613138693
- type: nauc_mrr_at_20_diff1
value: 50.82600876014814
- type: nauc_mrr_at_20_max
value: 24.19197808980357
- type: nauc_mrr_at_20_std
value: 2.570352774856951
- type: nauc_mrr_at_3_diff1
value: 52.16532994955985
- type: nauc_mrr_at_3_max
value: 24.45852049543513
- type: nauc_mrr_at_3_std
value: 1.7767182647530466
- type: nauc_mrr_at_5_diff1
value: 51.37941875374528
- type: nauc_mrr_at_5_max
value: 24.32996805662479
- type: nauc_mrr_at_5_std
value: 2.0629973433922233
- type: nauc_ndcg_at_1000_diff1
value: 44.21209254209989
- type: nauc_ndcg_at_1000_max
value: 24.496188929013858
- type: nauc_ndcg_at_1000_std
value: 7.946700991012672
- type: nauc_ndcg_at_100_diff1
value: 44.72402018150159
- type: nauc_ndcg_at_100_max
value: 23.894923290156683
- type: nauc_ndcg_at_100_std
value: 7.34663475138819
- type: nauc_ndcg_at_10_diff1
value: 46.73156921493794
- type: nauc_ndcg_at_10_max
value: 24.140158498931118
- type: nauc_ndcg_at_10_std
value: 5.1215251954873064
- type: nauc_ndcg_at_1_diff1
value: 56.68069306110441
- type: nauc_ndcg_at_1_max
value: 25.774113603061334
- type: nauc_ndcg_at_1_std
value: 0.40365058613138693
- type: nauc_ndcg_at_20_diff1
value: 45.890126343151
- type: nauc_ndcg_at_20_max
value: 23.798504061957036
- type: nauc_ndcg_at_20_std
value: 5.881854628415958
- type: nauc_ndcg_at_3_diff1
value: 49.14631612533971
- type: nauc_ndcg_at_3_max
value: 24.54170357409986
- type: nauc_ndcg_at_3_std
value: 2.9073138926744644
- type: nauc_ndcg_at_5_diff1
value: 47.62923572228047
- type: nauc_ndcg_at_5_max
value: 24.262563979635445
- type: nauc_ndcg_at_5_std
value: 3.833342241169566
- type: nauc_precision_at_1000_diff1
value: 20.29631259264317
- type: nauc_precision_at_1000_max
value: 20.20742704546957
- type: nauc_precision_at_1000_std
value: 18.38771956895729
- type: nauc_precision_at_100_diff1
value: 28.557156027833237
- type: nauc_precision_at_100_max
value: 19.588656252714546
- type: nauc_precision_at_100_std
value: 15.99360583900732
- type: nauc_precision_at_10_diff1
value: 38.61944274907914
- type: nauc_precision_at_10_max
value: 22.18699538679412
- type: nauc_precision_at_10_std
value: 9.295738688398757
- type: nauc_precision_at_1_diff1
value: 56.68069306110441
- type: nauc_precision_at_1_max
value: 25.774113603061334
- type: nauc_precision_at_1_std
value: 0.40365058613138693
- type: nauc_precision_at_20_diff1
value: 34.91960590817015
- type: nauc_precision_at_20_max
value: 20.40657354149522
- type: nauc_precision_at_20_std
value: 11.192816005046383
- type: nauc_precision_at_3_diff1
value: 44.84256957760863
- type: nauc_precision_at_3_max
value: 23.70161029591533
- type: nauc_precision_at_3_std
value: 4.437828128629862
- type: nauc_precision_at_5_diff1
value: 41.4096320172577
- type: nauc_precision_at_5_max
value: 22.93007093340059
- type: nauc_precision_at_5_std
value: 6.23638317984125
- type: nauc_recall_at_1000_diff1
value: 20.296312592643176
- type: nauc_recall_at_1000_max
value: 20.207427045469608
- type: nauc_recall_at_1000_std
value: 18.387719568957348
- type: nauc_recall_at_100_diff1
value: 28.55715602783317
- type: nauc_recall_at_100_max
value: 19.58865625271455
- type: nauc_recall_at_100_std
value: 15.99360583900731
- type: nauc_recall_at_10_diff1
value: 38.61944274907914
- type: nauc_recall_at_10_max
value: 22.18699538679413
- type: nauc_recall_at_10_std
value: 9.295738688398774
- type: nauc_recall_at_1_diff1
value: 56.68069306110441
- type: nauc_recall_at_1_max
value: 25.774113603061334
- type: nauc_recall_at_1_std
value: 0.40365058613138693
- type: nauc_recall_at_20_diff1
value: 34.919605908170084
- type: nauc_recall_at_20_max
value: 20.40657354149521
- type: nauc_recall_at_20_std
value: 11.192816005046355
- type: nauc_recall_at_3_diff1
value: 44.8425695776086
- type: nauc_recall_at_3_max
value: 23.701610295915348
- type: nauc_recall_at_3_std
value: 4.437828128629871
- type: nauc_recall_at_5_diff1
value: 41.40963201725767
- type: nauc_recall_at_5_max
value: 22.930070933400554
- type: nauc_recall_at_5_std
value: 6.2363831798412335
- type: ndcg_at_1
value: 25.647
- type: ndcg_at_10
value: 22.889
- type: ndcg_at_100
value: 26.134
- type: ndcg_at_1000
value: 28.605000000000004
- type: ndcg_at_20
value: 23.962
- type: ndcg_at_3
value: 20.144000000000002
- type: ndcg_at_5
value: 21.418
- type: precision_at_1
value: 25.647
- type: precision_at_10
value: 5.038
- type: precision_at_100
value: 0.766
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 2.867
- type: precision_at_3
value: 12.642999999999999
- type: precision_at_5
value: 8.596
- type: recall_at_1
value: 12.824
- type: recall_at_10
value: 25.188
- type: recall_at_100
value: 38.305
- type: recall_at_1000
value: 54.94799999999999
- type: recall_at_20
value: 28.666999999999998
- type: recall_at_3
value: 18.965
- type: recall_at_5
value: 21.489
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 20.768
- type: map_at_1
value: 11.195
- type: map_at_10
value: 15.754000000000001
- type: map_at_100
value: 16.43
- type: map_at_1000
value: 16.527
- type: map_at_20
value: 16.116
- type: map_at_3
value: 14.381
- type: map_at_5
value: 15.145
- type: mrr_at_1
value: 22.39027683997299
- type: mrr_at_10
value: 28.085007984737793
- type: mrr_at_100
value: 28.799032949307033
- type: mrr_at_1000
value: 28.87198101744127
- type: mrr_at_20
value: 28.493710316576376
- type: mrr_at_3
value: 26.452847175331872
- type: mrr_at_5
value: 27.380598694575674
- type: nauc_map_at_1000_diff1
value: 45.49845221477393
- type: nauc_map_at_1000_max
value: 15.175564414454264
- type: nauc_map_at_1000_std
value: 7.224346220170054
- type: nauc_map_at_100_diff1
value: 45.510181432884664
- type: nauc_map_at_100_max
value: 15.17115320605242
- type: nauc_map_at_100_std
value: 7.132136283511589
- type: nauc_map_at_10_diff1
value: 45.97606512455099
- type: nauc_map_at_10_max
value: 15.256200023668761
- type: nauc_map_at_10_std
value: 6.408242384838057
- type: nauc_map_at_1_diff1
value: 54.0628940648862
- type: nauc_map_at_1_max
value: 15.545387603653788
- type: nauc_map_at_1_std
value: 3.3785405883833817
- type: nauc_map_at_20_diff1
value: 45.718259354242754
- type: nauc_map_at_20_max
value: 15.213338600537726
- type: nauc_map_at_20_std
value: 6.846383883306986
- type: nauc_map_at_3_diff1
value: 47.68576753067344
- type: nauc_map_at_3_max
value: 15.938113971932202
- type: nauc_map_at_3_std
value: 5.696823147869656
- type: nauc_map_at_5_diff1
value: 46.672998049781704
- type: nauc_map_at_5_max
value: 15.549559098940685
- type: nauc_map_at_5_std
value: 6.004328793861026
- type: nauc_mrr_at_1000_diff1
value: 47.28195529878466
- type: nauc_mrr_at_1000_max
value: 14.5154152116826
- type: nauc_mrr_at_1000_std
value: 5.200618761002357
- type: nauc_mrr_at_100_diff1
value: 47.26721749482368
- type: nauc_mrr_at_100_max
value: 14.518646408566982
- type: nauc_mrr_at_100_std
value: 5.188727291394599
- type: nauc_mrr_at_10_diff1
value: 47.49867749308816
- type: nauc_mrr_at_10_max
value: 14.506529816178496
- type: nauc_mrr_at_10_std
value: 4.911972112927814
- type: nauc_mrr_at_1_diff1
value: 54.0628940648862
- type: nauc_mrr_at_1_max
value: 15.545387603653788
- type: nauc_mrr_at_1_std
value: 3.3785405883833817
- type: nauc_mrr_at_20_diff1
value: 47.330828053863364
- type: nauc_mrr_at_20_max
value: 14.552213884884182
- type: nauc_mrr_at_20_std
value: 5.154569902872899
- type: nauc_mrr_at_3_diff1
value: 48.627494861806305
- type: nauc_mrr_at_3_max
value: 14.673220954912674
- type: nauc_mrr_at_3_std
value: 4.358046470762483
- type: nauc_mrr_at_5_diff1
value: 48.00198656008637
- type: nauc_mrr_at_5_max
value: 14.604640583804768
- type: nauc_mrr_at_5_std
value: 4.63422931289899
- type: nauc_ndcg_at_1000_diff1
value: 41.954319524318805
- type: nauc_ndcg_at_1000_max
value: 14.501948704406454
- type: nauc_ndcg_at_1000_std
value: 10.532838064826603
- type: nauc_ndcg_at_100_diff1
value: 42.08787303318262
- type: nauc_ndcg_at_100_max
value: 14.221179737684874
- type: nauc_ndcg_at_100_std
value: 9.09296129728229
- type: nauc_ndcg_at_10_diff1
value: 43.77312870750698
- type: nauc_ndcg_at_10_max
value: 14.452377679742142
- type: nauc_ndcg_at_10_std
value: 6.813120115553632
- type: nauc_ndcg_at_1_diff1
value: 54.0628940648862
- type: nauc_ndcg_at_1_max
value: 15.545387603653788
- type: nauc_ndcg_at_1_std
value: 3.3785405883833817
- type: nauc_ndcg_at_20_diff1
value: 43.04100545911645
- type: nauc_ndcg_at_20_max
value: 14.439667710894998
- type: nauc_ndcg_at_20_std
value: 7.950436062348261
- type: nauc_ndcg_at_3_diff1
value: 46.52421787896966
- type: nauc_ndcg_at_3_max
value: 15.312367255351381
- type: nauc_ndcg_at_3_std
value: 5.5202886224182
- type: nauc_ndcg_at_5_diff1
value: 45.10273286347408
- type: nauc_ndcg_at_5_max
value: 14.903329516721971
- type: nauc_ndcg_at_5_std
value: 6.009047867901137
- type: nauc_precision_at_1000_diff1
value: 20.846930683856638
- type: nauc_precision_at_1000_max
value: 10.932965864198811
- type: nauc_precision_at_1000_std
value: 22.129658577330755
- type: nauc_precision_at_100_diff1
value: 26.484117797013184
- type: nauc_precision_at_100_max
value: 10.704462305152845
- type: nauc_precision_at_100_std
value: 15.524264565211702
- type: nauc_precision_at_10_diff1
value: 35.21475001356588
- type: nauc_precision_at_10_max
value: 12.73480152800143
- type: nauc_precision_at_10_std
value: 9.276584697369298
- type: nauc_precision_at_1_diff1
value: 54.0628940648862
- type: nauc_precision_at_1_max
value: 15.545387603653788
- type: nauc_precision_at_1_std
value: 3.3785405883833817
- type: nauc_precision_at_20_diff1
value: 31.9326796282682
- type: nauc_precision_at_20_max
value: 12.254000369952136
- type: nauc_precision_at_20_std
value: 12.228699441343288
- type: nauc_precision_at_3_diff1
value: 42.20525943879543
- type: nauc_precision_at_3_max
value: 15.149631662036137
- type: nauc_precision_at_3_std
value: 6.624410631558329
- type: nauc_precision_at_5_diff1
value: 38.947501695472454
- type: nauc_precision_at_5_max
value: 14.110651125447898
- type: nauc_precision_at_5_std
value: 7.479732552794256
- type: nauc_recall_at_1000_diff1
value: 20.846930683856694
- type: nauc_recall_at_1000_max
value: 10.932965864198902
- type: nauc_recall_at_1000_std
value: 22.129658577330808
- type: nauc_recall_at_100_diff1
value: 26.484117797013134
- type: nauc_recall_at_100_max
value: 10.704462305152806
- type: nauc_recall_at_100_std
value: 15.524264565211665
- type: nauc_recall_at_10_diff1
value: 35.21475001356592
- type: nauc_recall_at_10_max
value: 12.734801528001437
- type: nauc_recall_at_10_std
value: 9.276584697369273
- type: nauc_recall_at_1_diff1
value: 54.0628940648862
- type: nauc_recall_at_1_max
value: 15.545387603653788
- type: nauc_recall_at_1_std
value: 3.3785405883833817
- type: nauc_recall_at_20_diff1
value: 31.932679628268218
- type: nauc_recall_at_20_max
value: 12.254000369952132
- type: nauc_recall_at_20_std
value: 12.228699441343306
- type: nauc_recall_at_3_diff1
value: 42.205259438795416
- type: nauc_recall_at_3_max
value: 15.149631662036118
- type: nauc_recall_at_3_std
value: 6.624410631558295
- type: nauc_recall_at_5_diff1
value: 38.947501695472475
- type: nauc_recall_at_5_max
value: 14.110651125447909
- type: nauc_recall_at_5_std
value: 7.479732552794241
- type: ndcg_at_1
value: 22.39
- type: ndcg_at_10
value: 20.768
- type: ndcg_at_100
value: 24.227999999999998
- type: ndcg_at_1000
value: 26.857
- type: ndcg_at_20
value: 22.027
- type: ndcg_at_3
value: 17.979
- type: ndcg_at_5
value: 19.334
- type: precision_at_1
value: 22.39
- type: precision_at_10
value: 4.6899999999999995
- type: precision_at_100
value: 0.748
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 2.75
- type: precision_at_3
value: 11.429
- type: precision_at_5
value: 7.93
- type: recall_at_1
value: 11.195
- type: recall_at_10
value: 23.45
- type: recall_at_100
value: 37.4
- type: recall_at_1000
value: 55.084
- type: recall_at_20
value: 27.502
- type: recall_at_3
value: 17.144000000000002
- type: recall_at_5
value: 19.824
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: train
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 22.35
- type: map_at_1
value: 12.555
- type: map_at_10
value: 17.21
- type: map_at_100
value: 17.872
- type: map_at_1000
value: 17.965
- type: map_at_20
value: 17.551
- type: map_at_3
value: 15.858
- type: map_at_5
value: 16.599
- type: mrr_at_1
value: 25.111764705882354
- type: mrr_at_10
value: 30.48001493930902
- type: mrr_at_100
value: 31.15569720959198
- type: mrr_at_1000
value: 31.22389336696618
- type: mrr_at_20
value: 30.849373430383874
- type: mrr_at_3
value: 28.91666666666589
- type: mrr_at_5
value: 29.7809019607842
- type: nauc_map_at_1000_diff1
value: 47.23439287089665
- type: nauc_map_at_1000_max
value: 25.968730532609968
- type: nauc_map_at_1000_std
value: 7.853574124121959
- type: nauc_map_at_100_diff1
value: 47.25410107677985
- type: nauc_map_at_100_max
value: 25.958078179292738
- type: nauc_map_at_100_std
value: 7.797526936599504
- type: nauc_map_at_10_diff1
value: 47.7766667952266
- type: nauc_map_at_10_max
value: 26.116973110531532
- type: nauc_map_at_10_std
value: 7.1837608694691175
- type: nauc_map_at_1_diff1
value: 55.84919303820644
- type: nauc_map_at_1_max
value: 27.33854442034755
- type: nauc_map_at_1_std
value: 3.749969253202433
- type: nauc_map_at_20_diff1
value: 47.499892153618575
- type: nauc_map_at_20_max
value: 26.026504515503547
- type: nauc_map_at_20_std
value: 7.523895544458576
- type: nauc_map_at_3_diff1
value: 49.321602413293114
- type: nauc_map_at_3_max
value: 26.773187498425916
- type: nauc_map_at_3_std
value: 6.0360638585853
- type: nauc_map_at_5_diff1
value: 48.4584384216103
- type: nauc_map_at_5_max
value: 26.32743773951151
- type: nauc_map_at_5_std
value: 6.632069219516669
- type: nauc_mrr_at_1000_diff1
value: 49.92532985741087
- type: nauc_mrr_at_1000_max
value: 25.416352871594285
- type: nauc_mrr_at_1000_std
value: 5.707347291527901
- type: nauc_mrr_at_100_diff1
value: 49.91882720294646
- type: nauc_mrr_at_100_max
value: 25.409791985977936
- type: nauc_mrr_at_100_std
value: 5.70789865884997
- type: nauc_mrr_at_10_diff1
value: 50.18599313186747
- type: nauc_mrr_at_10_max
value: 25.473370263540296
- type: nauc_mrr_at_10_std
value: 5.510370680302974
- type: nauc_mrr_at_1_diff1
value: 55.84432875483333
- type: nauc_mrr_at_1_max
value: 27.344412140828105
- type: nauc_mrr_at_1_std
value: 3.756328763815358
- type: nauc_mrr_at_20_diff1
value: 50.016732662904005
- type: nauc_mrr_at_20_max
value: 25.42168353542509
- type: nauc_mrr_at_20_std
value: 5.6387187747976
- type: nauc_mrr_at_3_diff1
value: 51.382491526813524
- type: nauc_mrr_at_3_max
value: 26.00016755594854
- type: nauc_mrr_at_3_std
value: 4.9983293041585
- type: nauc_mrr_at_5_diff1
value: 50.66012874099777
- type: nauc_mrr_at_5_max
value: 25.644394419977836
- type: nauc_mrr_at_5_std
value: 5.275322837366838
- type: nauc_ndcg_at_1000_diff1
value: 43.74407474447287
- type: nauc_ndcg_at_1000_max
value: 24.840928758850158
- type: nauc_ndcg_at_1000_std
value: 10.688374553716413
- type: nauc_ndcg_at_100_diff1
value: 44.04666658249944
- type: nauc_ndcg_at_100_max
value: 24.534925051692618
- type: nauc_ndcg_at_100_std
value: 9.826293714346614
- type: nauc_ndcg_at_10_diff1
value: 46.1083735248968
- type: nauc_ndcg_at_10_max
value: 25.03022756559357
- type: nauc_ndcg_at_10_std
value: 7.741708048843889
- type: nauc_ndcg_at_1_diff1
value: 55.84919303820644
- type: nauc_ndcg_at_1_max
value: 27.33854442034755
- type: nauc_ndcg_at_1_std
value: 3.749969253202433
- type: nauc_ndcg_at_20_diff1
value: 45.3122792730965
- type: nauc_ndcg_at_20_max
value: 24.781053692426287
- type: nauc_ndcg_at_20_std
value: 8.558857449509667
- type: nauc_ndcg_at_3_diff1
value: 48.79626793807589
- type: nauc_ndcg_at_3_max
value: 26.154313437984868
- type: nauc_ndcg_at_3_std
value: 6.063296540151816
- type: nauc_ndcg_at_5_diff1
value: 47.41620190563465
- type: nauc_ndcg_at_5_max
value: 25.459974119334543
- type: nauc_ndcg_at_5_std
value: 6.82693943779619
- type: nauc_precision_at_1000_diff1
value: 20.738722532966968
- type: nauc_precision_at_1000_max
value: 17.70266346751592
- type: nauc_precision_at_1000_std
value: 20.59919635676843
- type: nauc_precision_at_100_diff1
value: 27.420028592084613
- type: nauc_precision_at_100_max
value: 18.616118189594694
- type: nauc_precision_at_100_std
value: 16.84937440753337
- type: nauc_precision_at_10_diff1
value: 37.63167180357743
- type: nauc_precision_at_10_max
value: 22.115198496725
- type: nauc_precision_at_10_std
value: 10.80255869463013
- type: nauc_precision_at_1_diff1
value: 55.84919303820644
- type: nauc_precision_at_1_max
value: 27.33854442034755
- type: nauc_precision_at_1_std
value: 3.749969253202433
- type: nauc_precision_at_20_diff1
value: 34.148918464243685
- type: nauc_precision_at_20_max
value: 20.678638615677706
- type: nauc_precision_at_20_std
value: 12.827888647611482
- type: nauc_precision_at_3_diff1
value: 44.776429161099216
- type: nauc_precision_at_3_max
value: 25.364797794764787
- type: nauc_precision_at_3_std
value: 7.33647482133356
- type: nauc_precision_at_5_diff1
value: 41.58886504304217
- type: nauc_precision_at_5_max
value: 23.667971928513946
- type: nauc_precision_at_5_std
value: 8.795818465308383
- type: nauc_recall_at_1000_diff1
value: 20.738722532966968
- type: nauc_recall_at_1000_max
value: 17.702663467515897
- type: nauc_recall_at_1000_std
value: 20.599196356768513
- type: nauc_recall_at_100_diff1
value: 27.420028592084627
- type: nauc_recall_at_100_max
value: 18.61611818959468
- type: nauc_recall_at_100_std
value: 16.849374407533404
- type: nauc_recall_at_10_diff1
value: 37.63167180357745
- type: nauc_recall_at_10_max
value: 22.115198496725025
- type: nauc_recall_at_10_std
value: 10.802558694630168
- type: nauc_recall_at_1_diff1
value: 55.84919303820644
- type: nauc_recall_at_1_max
value: 27.33854442034755
- type: nauc_recall_at_1_std
value: 3.749969253202433
- type: nauc_recall_at_20_diff1
value: 34.148918464243685
- type: nauc_recall_at_20_max
value: 20.678638615677723
- type: nauc_recall_at_20_std
value: 12.827888647611488
- type: nauc_recall_at_3_diff1
value: 44.776429161099216
- type: nauc_recall_at_3_max
value: 25.36479779476476
- type: nauc_recall_at_3_std
value: 7.33647482133353
- type: nauc_recall_at_5_diff1
value: 41.58886504304218
- type: nauc_recall_at_5_max
value: 23.66797192851399
- type: nauc_recall_at_5_std
value: 8.795818465308388
- type: ndcg_at_1
value: 25.111
- type: ndcg_at_10
value: 22.35
- type: ndcg_at_100
value: 25.740000000000002
- type: ndcg_at_1000
value: 28.265
- type: ndcg_at_20
value: 23.524
- type: ndcg_at_3
value: 19.628999999999998
- type: ndcg_at_5
value: 20.926000000000002
- type: precision_at_1
value: 25.111
- type: precision_at_10
value: 4.9239999999999995
- type: precision_at_100
value: 0.767
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_20
value: 2.841
- type: precision_at_3
value: 12.308
- type: precision_at_5
value: 8.412
- type: recall_at_1
value: 12.555
- type: recall_at_10
value: 24.618000000000002
- type: recall_at_100
value: 38.340999999999994
- type: recall_at_1000
value: 55.364000000000004
- type: recall_at_20
value: 28.406
- type: recall_at_3
value: 18.462
- type: recall_at_5
value: 21.031
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 63.352399999999996
- type: ap
value: 58.63306640345262
- type: ap_weighted
value: 58.63306640345262
- type: f1
value: 63.07675181989748
- type: f1_weighted
value: 63.076751819897495
- type: main_score
value: 63.352399999999996
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 9.747
- type: map_at_1
value: 4.804
- type: map_at_10
value: 7.771999999999999
- type: map_at_100
value: 8.44
- type: map_at_1000
value: 8.534
- type: map_at_20
value: 8.103
- type: map_at_3
value: 6.561999999999999
- type: map_at_5
value: 7.207
- type: mrr_at_1
value: 4.885386819484241
- type: mrr_at_10
value: 7.93821348978941
- type: mrr_at_100
value: 8.620607458781047
- type: mrr_at_1000
value: 8.712880602286056
- type: mrr_at_20
value: 8.279791291021384
- type: mrr_at_3
value: 6.70248328557783
- type: mrr_at_5
value: 7.372254059216788
- type: nauc_map_at_1000_diff1
value: 19.93246991140932
- type: nauc_map_at_1000_max
value: 4.885909500550944
- type: nauc_map_at_1000_std
value: -7.280402510783326
- type: nauc_map_at_100_diff1
value: 19.943197831111075
- type: nauc_map_at_100_max
value: 4.866780686085187
- type: nauc_map_at_100_std
value: -7.360901362678778
- type: nauc_map_at_10_diff1
value: 20.695858149046785
- type: nauc_map_at_10_max
value: 5.275440198285615
- type: nauc_map_at_10_std
value: -8.095637726537015
- type: nauc_map_at_1_diff1
value: 28.12993250585034
- type: nauc_map_at_1_max
value: 8.718469969024072
- type: nauc_map_at_1_std
value: -10.396742165733787
- type: nauc_map_at_20_diff1
value: 20.244070181414717
- type: nauc_map_at_20_max
value: 5.016852646684856
- type: nauc_map_at_20_std
value: -7.720620166908344
- type: nauc_map_at_3_diff1
value: 21.86164779495491
- type: nauc_map_at_3_max
value: 6.953882614590498
- type: nauc_map_at_3_std
value: -8.959104266126758
- type: nauc_map_at_5_diff1
value: 21.199443170205072
- type: nauc_map_at_5_max
value: 6.299625796343289
- type: nauc_map_at_5_std
value: -8.368568776032555
- type: nauc_mrr_at_1000_diff1
value: 19.784660922671232
- type: nauc_mrr_at_1000_max
value: 5.034357724059161
- type: nauc_mrr_at_1000_std
value: -7.239581901675534
- type: nauc_mrr_at_100_diff1
value: 19.791873203054376
- type: nauc_mrr_at_100_max
value: 5.013801371471971
- type: nauc_mrr_at_100_std
value: -7.310346534770232
- type: nauc_mrr_at_10_diff1
value: 20.553735164655997
- type: nauc_mrr_at_10_max
value: 5.395763207191926
- type: nauc_mrr_at_10_std
value: -8.047891816391905
- type: nauc_mrr_at_1_diff1
value: 28.070976082435607
- type: nauc_mrr_at_1_max
value: 8.953841626491958
- type: nauc_mrr_at_1_std
value: -10.400367611477469
- type: nauc_mrr_at_20_diff1
value: 20.08073090398213
- type: nauc_mrr_at_20_max
value: 5.17741942469057
- type: nauc_mrr_at_20_std
value: -7.648305443323694
- type: nauc_mrr_at_3_diff1
value: 21.70573030798374
- type: nauc_mrr_at_3_max
value: 7.040345015035343
- type: nauc_mrr_at_3_std
value: -9.01742538895966
- type: nauc_mrr_at_5_diff1
value: 21.05514411362703
- type: nauc_mrr_at_5_max
value: 6.402525163673961
- type: nauc_mrr_at_5_std
value: -8.353161745669366
- type: nauc_ndcg_at_1000_diff1
value: 15.817917018222927
- type: nauc_ndcg_at_1000_max
value: 3.4602868240545184
- type: nauc_ndcg_at_1000_std
value: -2.7309859572013493
- type: nauc_ndcg_at_100_diff1
value: 16.16461902285168
- type: nauc_ndcg_at_100_max
value: 2.591494792020275
- type: nauc_ndcg_at_100_std
value: -4.251138978769414
- type: nauc_ndcg_at_10_diff1
value: 18.646849831123497
- type: nauc_ndcg_at_10_max
value: 3.795626244531871
- type: nauc_ndcg_at_10_std
value: -7.31556920444316
- type: nauc_ndcg_at_1_diff1
value: 28.070976082435607
- type: nauc_ndcg_at_1_max
value: 8.690329032730167
- type: nauc_ndcg_at_1_std
value: -10.235131288127299
- type: nauc_ndcg_at_20_diff1
value: 17.59830347547788
- type: nauc_ndcg_at_20_max
value: 3.148440531560062
- type: nauc_ndcg_at_20_std
value: -6.226794555729895
- type: nauc_ndcg_at_3_diff1
value: 20.45166505802566
- type: nauc_ndcg_at_3_max
value: 6.672133237674251
- type: nauc_ndcg_at_3_std
value: -8.731640451925227
- type: nauc_ndcg_at_5_diff1
value: 19.54930996060783
- type: nauc_ndcg_at_5_max
value: 5.71364479811594
- type: nauc_ndcg_at_5_std
value: -7.850230102953559
- type: nauc_precision_at_1000_diff1
value: 6.612974006203879
- type: nauc_precision_at_1000_max
value: 3.9881632117934
- type: nauc_precision_at_1000_std
value: 8.449963480288027
- type: nauc_precision_at_100_diff1
value: 9.69503671636726
- type: nauc_precision_at_100_max
value: -0.45980138707083934
- type: nauc_precision_at_100_std
value: 1.613101780931382
- type: nauc_precision_at_10_diff1
value: 15.07598463263465
- type: nauc_precision_at_10_max
value: 0.9828371312250294
- type: nauc_precision_at_10_std
value: -6.103478881664006
- type: nauc_precision_at_1_diff1
value: 28.070976082435607
- type: nauc_precision_at_1_max
value: 8.690329032730167
- type: nauc_precision_at_1_std
value: -10.235131288127299
- type: nauc_precision_at_20_diff1
value: 13.197102968305474
- type: nauc_precision_at_20_max
value: 0.09482453798009734
- type: nauc_precision_at_20_std
value: -3.669722937882611
- type: nauc_precision_at_3_diff1
value: 17.391692595801604
- type: nauc_precision_at_3_max
value: 6.076241976429799
- type: nauc_precision_at_3_std
value: -8.373745155530585
- type: nauc_precision_at_5_diff1
value: 16.41458331501333
- type: nauc_precision_at_5_max
value: 4.616575134254677
- type: nauc_precision_at_5_std
value: -6.890601462308927
- type: nauc_recall_at_1000_diff1
value: 8.051706890746892
- type: nauc_recall_at_1000_max
value: 2.8420555653718513
- type: nauc_recall_at_1000_std
value: 8.361214428555144
- type: nauc_recall_at_100_diff1
value: 10.270582880008224
- type: nauc_recall_at_100_max
value: -0.9484301869654383
- type: nauc_recall_at_100_std
value: 1.4758522779695316
- type: nauc_recall_at_10_diff1
value: 15.213592315955127
- type: nauc_recall_at_10_max
value: 1.0594710214511485
- type: nauc_recall_at_10_std
value: -5.979488905593913
- type: nauc_recall_at_1_diff1
value: 28.12993250585034
- type: nauc_recall_at_1_max
value: 8.718469969024072
- type: nauc_recall_at_1_std
value: -10.396742165733787
- type: nauc_recall_at_20_diff1
value: 13.498750466893188
- type: nauc_recall_at_20_max
value: -0.04530213100920407
- type: nauc_recall_at_20_std
value: -3.6363207251016454
- type: nauc_recall_at_3_diff1
value: 17.55805580752296
- type: nauc_recall_at_3_max
value: 6.102508252581877
- type: nauc_recall_at_3_std
value: -8.242072378146279
- type: nauc_recall_at_5_diff1
value: 16.43100294647581
- type: nauc_recall_at_5_max
value: 4.545756715451594
- type: nauc_recall_at_5_std
value: -6.844267334670756
- type: ndcg_at_1
value: 4.885
- type: ndcg_at_10
value: 9.747
- type: ndcg_at_100
value: 13.523
- type: ndcg_at_1000
value: 16.427
- type: ndcg_at_20
value: 10.967
- type: ndcg_at_3
value: 7.198
- type: ndcg_at_5
value: 8.373
- type: precision_at_1
value: 4.885
- type: precision_at_10
value: 1.6500000000000001
- type: precision_at_100
value: 0.363
- type: precision_at_1000
value: 0.061
- type: precision_at_20
value: 1.081
- type: precision_at_3
value: 3.042
- type: precision_at_5
value: 2.413
- type: recall_at_1
value: 4.804
- type: recall_at_10
value: 15.964999999999998
- type: recall_at_100
value: 34.601
- type: recall_at_1000
value: 58.028
- type: recall_at_20
value: 20.735
- type: recall_at_3
value: 8.906
- type: recall_at_5
value: 11.753
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: test
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 28.197
- type: map_at_1
value: 0.6649999999999999
- type: map_at_10
value: 3.963
- type: map_at_100
value: 11.445
- type: map_at_1000
value: 14.580000000000002
- type: map_at_20
value: 5.847
- type: map_at_3
value: 1.6129999999999998
- type: map_at_5
value: 2.391
- type: mrr_at_1
value: 46.51162790697674
- type: mrr_at_10
value: 58.313953488372086
- type: mrr_at_100
value: 58.77162382700938
- type: mrr_at_1000
value: 58.77162382700938
- type: mrr_at_20
value: 58.63015095879231
- type: mrr_at_3
value: 56.58914728682169
- type: mrr_at_5
value: 57.63565891472867
- type: nauc_map_at_1000_diff1
value: -15.396695835753201
- type: nauc_map_at_1000_max
value: 6.3514334343697785
- type: nauc_map_at_1000_std
value: 11.082824536777405
- type: nauc_map_at_100_diff1
value: -9.342485546372814
- type: nauc_map_at_100_max
value: 7.372321547406559
- type: nauc_map_at_100_std
value: 11.785207242830829
- type: nauc_map_at_10_diff1
value: 14.612422203657847
- type: nauc_map_at_10_max
value: 13.889814634222114
- type: nauc_map_at_10_std
value: 4.0423546150472855
- type: nauc_map_at_1_diff1
value: 13.880173047999744
- type: nauc_map_at_1_max
value: 16.9757126254468
- type: nauc_map_at_1_std
value: 3.5390821824413767
- type: nauc_map_at_20_diff1
value: 6.992196663144848
- type: nauc_map_at_20_max
value: 12.191726438414657
- type: nauc_map_at_20_std
value: 6.055209455700361
- type: nauc_map_at_3_diff1
value: 12.240113556643422
- type: nauc_map_at_3_max
value: 13.582179791483215
- type: nauc_map_at_3_std
value: -5.579001248100349
- type: nauc_map_at_5_diff1
value: 14.619510120275086
- type: nauc_map_at_5_max
value: 16.113766549757553
- type: nauc_map_at_5_std
value: -4.166207995528961
- type: nauc_mrr_at_1000_diff1
value: -18.027637432324152
- type: nauc_mrr_at_1000_max
value: 23.740157087301206
- type: nauc_mrr_at_1000_std
value: -12.27206781257317
- type: nauc_mrr_at_100_diff1
value: -18.027637432324152
- type: nauc_mrr_at_100_max
value: 23.740157087301206
- type: nauc_mrr_at_100_std
value: -12.27206781257317
- type: nauc_mrr_at_10_diff1
value: -17.75102342827306
- type: nauc_mrr_at_10_max
value: 23.52180561083833
- type: nauc_mrr_at_10_std
value: -12.36723523846571
- type: nauc_mrr_at_1_diff1
value: -18.020320187727563
- type: nauc_mrr_at_1_max
value: 34.535353660398535
- type: nauc_mrr_at_1_std
value: -9.886227306051763
- type: nauc_mrr_at_20_diff1
value: -17.95231220601542
- type: nauc_mrr_at_20_max
value: 23.74828565262208
- type: nauc_mrr_at_20_std
value: -11.754648901203323
- type: nauc_mrr_at_3_diff1
value: -17.049825026735355
- type: nauc_mrr_at_3_max
value: 24.83367987095792
- type: nauc_mrr_at_3_std
value: -14.555733365452038
- type: nauc_mrr_at_5_diff1
value: -17.66424378072178
- type: nauc_mrr_at_5_max
value: 22.781415978918623
- type: nauc_mrr_at_5_std
value: -12.909566405691116
- type: nauc_ndcg_at_1000_diff1
value: -18.087967716849864
- type: nauc_ndcg_at_1000_max
value: 7.074227326632486
- type: nauc_ndcg_at_1000_std
value: 2.5666387679270306
- type: nauc_ndcg_at_100_diff1
value: -14.494126459193367
- type: nauc_ndcg_at_100_max
value: 10.296065388518365
- type: nauc_ndcg_at_100_std
value: -1.3020936159077041
- type: nauc_ndcg_at_10_diff1
value: -12.012061234908963
- type: nauc_ndcg_at_10_max
value: 18.521952313315325
- type: nauc_ndcg_at_10_std
value: -17.394209614679614
- type: nauc_ndcg_at_1_diff1
value: -14.856974355137808
- type: nauc_ndcg_at_1_max
value: 20.529936745683962
- type: nauc_ndcg_at_1_std
value: -23.212837965950968
- type: nauc_ndcg_at_20_diff1
value: -14.82642903893102
- type: nauc_ndcg_at_20_max
value: 15.8351088718145
- type: nauc_ndcg_at_20_std
value: -12.6003293327242
- type: nauc_ndcg_at_3_diff1
value: -14.41427662234705
- type: nauc_ndcg_at_3_max
value: 21.688557550346506
- type: nauc_ndcg_at_3_std
value: -23.540581949652516
- type: nauc_ndcg_at_5_diff1
value: -12.384523912355547
- type: nauc_ndcg_at_5_max
value: 20.80361196727174
- type: nauc_ndcg_at_5_std
value: -21.368283062317243
- type: nauc_precision_at_1000_diff1
value: -40.35969115863708
- type: nauc_precision_at_1000_max
value: 7.743203383009043
- type: nauc_precision_at_1000_std
value: -2.4051338544012038
- type: nauc_precision_at_100_diff1
value: -30.905730188706205
- type: nauc_precision_at_100_max
value: 10.388902762277853
- type: nauc_precision_at_100_std
value: 2.941229800942695
- type: nauc_precision_at_10_diff1
value: -16.60994853750262
- type: nauc_precision_at_10_max
value: 22.72174237098654
- type: nauc_precision_at_10_std
value: -6.917377847238954
- type: nauc_precision_at_1_diff1
value: -18.020320187727563
- type: nauc_precision_at_1_max
value: 34.535353660398535
- type: nauc_precision_at_1_std
value: -9.886227306051763
- type: nauc_precision_at_20_diff1
value: -21.625576898253126
- type: nauc_precision_at_20_max
value: 17.81549839205109
- type: nauc_precision_at_20_std
value: -3.0701380463887857
- type: nauc_precision_at_3_diff1
value: -21.75087700327928
- type: nauc_precision_at_3_max
value: 21.646458198104195
- type: nauc_precision_at_3_std
value: -17.791402512310658
- type: nauc_precision_at_5_diff1
value: -17.009933947222493
- type: nauc_precision_at_5_max
value: 23.661592127205694
- type: nauc_precision_at_5_std
value: -11.594250429478837
- type: nauc_recall_at_1000_diff1
value: -14.093702114133
- type: nauc_recall_at_1000_max
value: 1.932546130291724
- type: nauc_recall_at_1000_std
value: 10.759958747656022
- type: nauc_recall_at_100_diff1
value: -1.9605888044172464
- type: nauc_recall_at_100_max
value: 1.3845207294609094
- type: nauc_recall_at_100_std
value: 11.19270493696173
- type: nauc_recall_at_10_diff1
value: 22.502231277187636
- type: nauc_recall_at_10_max
value: 3.6375628081260922
- type: nauc_recall_at_10_std
value: 3.045856967531833
- type: nauc_recall_at_1_diff1
value: 13.880173047999744
- type: nauc_recall_at_1_max
value: 16.9757126254468
- type: nauc_recall_at_1_std
value: 3.5390821824413767
- type: nauc_recall_at_20_diff1
value: 15.23883725790073
- type: nauc_recall_at_20_max
value: 2.873450306223845
- type: nauc_recall_at_20_std
value: 2.912331471077023
- type: nauc_recall_at_3_diff1
value: 16.961441802612168
- type: nauc_recall_at_3_max
value: 2.9537594786547645
- type: nauc_recall_at_3_std
value: -9.744641324690988
- type: nauc_recall_at_5_diff1
value: 17.108831032268597
- type: nauc_recall_at_5_max
value: 6.412968821386602
- type: nauc_recall_at_5_std
value: -7.3955765581097666
- type: ndcg_at_1
value: 28.682000000000002
- type: ndcg_at_10
value: 28.197
- type: ndcg_at_100
value: 26.257
- type: ndcg_at_1000
value: 34.799
- type: ndcg_at_20
value: 26.965
- type: ndcg_at_3
value: 28.869
- type: ndcg_at_5
value: 28.517
- type: precision_at_1
value: 46.512
- type: precision_at_10
value: 37.907000000000004
- type: precision_at_100
value: 18.395
- type: precision_at_1000
value: 4.1209999999999996
- type: precision_at_20
value: 31.977
- type: precision_at_3
value: 44.186
- type: precision_at_5
value: 40.93
- type: recall_at_1
value: 0.6649999999999999
- type: recall_at_10
value: 5.604
- type: recall_at_100
value: 22.685
- type: recall_at_1000
value: 45.484
- type: recall_at_20
value: 8.874
- type: recall_at_3
value: 2.023
- type: recall_at_5
value: 3.048
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: train
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 8.48
- type: map_at_1
value: 3.685
- type: map_at_10
value: 6.575
- type: map_at_100
value: 7.255000000000001
- type: map_at_1000
value: 7.35
- type: map_at_20
value: 6.916
- type: map_at_3
value: 5.439
- type: map_at_5
value: 6.022
- type: mrr_at_1
value: 3.824121811989128
- type: mrr_at_10
value: 6.7759320239439464
- type: mrr_at_100
value: 7.462910020528872
- type: mrr_at_1000
value: 7.5557206687612135
- type: mrr_at_20
value: 7.121565910048015
- type: mrr_at_3
value: 5.6179775280892335
- type: mrr_at_5
value: 6.2139643177401425
- type: nauc_map_at_1000_diff1
value: 17.653878216498047
- type: nauc_map_at_1000_max
value: -0.525360793783759
- type: nauc_map_at_1000_std
value: -7.445236859297877
- type: nauc_map_at_100_diff1
value: 17.68683699329305
- type: nauc_map_at_100_max
value: -0.5688890757830198
- type: nauc_map_at_100_std
value: -7.559952183581703
- type: nauc_map_at_10_diff1
value: 18.427817145535208
- type: nauc_map_at_10_max
value: -0.5896830377937132
- type: nauc_map_at_10_std
value: -8.611843893273651
- type: nauc_map_at_1_diff1
value: 25.8753092370788
- type: nauc_map_at_1_max
value: 0.3780473823807864
- type: nauc_map_at_1_std
value: -11.390814409917846
- type: nauc_map_at_20_diff1
value: 17.943467189468272
- type: nauc_map_at_20_max
value: -0.6227658135295631
- type: nauc_map_at_20_std
value: -8.097089894275317
- type: nauc_map_at_3_diff1
value: 20.24359575380882
- type: nauc_map_at_3_max
value: -0.21893431430828497
- type: nauc_map_at_3_std
value: -9.867615856487621
- type: nauc_map_at_5_diff1
value: 19.11201555460735
- type: nauc_map_at_5_max
value: -0.39869848840945094
- type: nauc_map_at_5_std
value: -9.226975441766392
- type: nauc_mrr_at_1000_diff1
value: 17.467167234230697
- type: nauc_mrr_at_1000_max
value: -0.4355577086305639
- type: nauc_mrr_at_1000_std
value: -7.5059552007971595
- type: nauc_mrr_at_100_diff1
value: 17.497260740027073
- type: nauc_mrr_at_100_max
value: -0.4763884258015582
- type: nauc_mrr_at_100_std
value: -7.609091662954931
- type: nauc_mrr_at_10_diff1
value: 18.218378133546754
- type: nauc_mrr_at_10_max
value: -0.4989739045524073
- type: nauc_mrr_at_10_std
value: -8.628198966255765
- type: nauc_mrr_at_1_diff1
value: 25.49338976443103
- type: nauc_mrr_at_1_max
value: 0.4437872841468721
- type: nauc_mrr_at_1_std
value: -11.359231879431954
- type: nauc_mrr_at_20_diff1
value: 17.742428690692424
- type: nauc_mrr_at_20_max
value: -0.5338408392412581
- type: nauc_mrr_at_20_std
value: -8.123683115790605
- type: nauc_mrr_at_3_diff1
value: 20.011038863002135
- type: nauc_mrr_at_3_max
value: -0.12872642492379355
- type: nauc_mrr_at_3_std
value: -9.857245140781759
- type: nauc_mrr_at_5_diff1
value: 18.905679170870744
- type: nauc_mrr_at_5_max
value: -0.3185205383757674
- type: nauc_mrr_at_5_std
value: -9.222249555082305
- type: nauc_ndcg_at_1000_diff1
value: 13.38510469621259
- type: nauc_ndcg_at_1000_max
value: 0.3676442720550188
- type: nauc_ndcg_at_1000_std
value: -1.3223212298583789
- type: nauc_ndcg_at_100_diff1
value: 13.995994728912105
- type: nauc_ndcg_at_100_max
value: -0.5813471334880617
- type: nauc_ndcg_at_100_std
value: -3.3863187756084967
- type: nauc_ndcg_at_10_diff1
value: 16.26605758052449
- type: nauc_ndcg_at_10_max
value: -0.9384215827694855
- type: nauc_ndcg_at_10_std
value: -7.456181329353155
- type: nauc_ndcg_at_1_diff1
value: 25.60137338837924
- type: nauc_ndcg_at_1_max
value: 0.391579646474889
- type: nauc_ndcg_at_1_std
value: -11.39995781769565
- type: nauc_ndcg_at_20_diff1
value: 15.1300153201261
- type: nauc_ndcg_at_20_max
value: -0.9831465745284373
- type: nauc_ndcg_at_20_std
value: -6.139166475923625
- type: nauc_ndcg_at_3_diff1
value: 18.995573014803956
- type: nauc_ndcg_at_3_max
value: -0.33924780182607434
- type: nauc_ndcg_at_3_std
value: -9.52104664999921
- type: nauc_ndcg_at_5_diff1
value: 17.43444048711145
- type: nauc_ndcg_at_5_max
value: -0.6014228167665021
- type: nauc_ndcg_at_5_std
value: -8.58422662682903
- type: nauc_precision_at_1000_diff1
value: 5.596702524854808
- type: nauc_precision_at_1000_max
value: 4.197681604324332
- type: nauc_precision_at_1000_std
value: 11.386329356420747
- type: nauc_precision_at_100_diff1
value: 8.811902094550087
- type: nauc_precision_at_100_max
value: 0.05352101231520835
- type: nauc_precision_at_100_std
value: 3.478200173826779
- type: nauc_precision_at_10_diff1
value: 12.822786921865703
- type: nauc_precision_at_10_max
value: -1.4086219805257234
- type: nauc_precision_at_10_std
value: -5.500964925109441
- type: nauc_precision_at_1_diff1
value: 25.60137338837924
- type: nauc_precision_at_1_max
value: 0.391579646474889
- type: nauc_precision_at_1_std
value: -11.39995781769565
- type: nauc_precision_at_20_diff1
value: 11.051106330936069
- type: nauc_precision_at_20_max
value: -1.3304880009311992
- type: nauc_precision_at_20_std
value: -3.062170848179416
- type: nauc_precision_at_3_diff1
value: 16.461307972842576
- type: nauc_precision_at_3_max
value: -0.5344048497861791
- type: nauc_precision_at_3_std
value: -8.807477553134257
- type: nauc_precision_at_5_diff1
value: 14.459799027699493
- type: nauc_precision_at_5_max
value: -0.9107306772234477
- type: nauc_precision_at_5_std
value: -7.416504383027417
- type: nauc_recall_at_1000_diff1
value: 6.0996840681918245
- type: nauc_recall_at_1000_max
value: 3.386127027719934
- type: nauc_recall_at_1000_std
value: 12.115552270700285
- type: nauc_recall_at_100_diff1
value: 8.893387029407386
- type: nauc_recall_at_100_max
value: -0.3719975949544685
- type: nauc_recall_at_100_std
value: 3.5112350010838447
- type: nauc_recall_at_10_diff1
value: 12.847647754297228
- type: nauc_recall_at_10_max
value: -1.5727078316535434
- type: nauc_recall_at_10_std
value: -5.468282487947237
- type: nauc_recall_at_1_diff1
value: 25.8753092370788
- type: nauc_recall_at_1_max
value: 0.3780473823807864
- type: nauc_recall_at_1_std
value: -11.390814409917846
- type: nauc_recall_at_20_diff1
value: 11.114383380281698
- type: nauc_recall_at_20_max
value: -1.540320259144065
- type: nauc_recall_at_20_std
value: -3.0409676051959598
- type: nauc_recall_at_3_diff1
value: 16.574025880565387
- type: nauc_recall_at_3_max
value: -0.6402913023070904
- type: nauc_recall_at_3_std
value: -8.814393218789613
- type: nauc_recall_at_5_diff1
value: 14.421256228882088
- type: nauc_recall_at_5_max
value: -1.0151809953415551
- type: nauc_recall_at_5_std
value: -7.347472584154348
- type: ndcg_at_1
value: 3.819
- type: ndcg_at_10
value: 8.48
- type: ndcg_at_100
value: 12.324
- type: ndcg_at_1000
value: 15.268
- type: ndcg_at_20
value: 9.728
- type: ndcg_at_3
value: 6.078
- type: ndcg_at_5
value: 7.133000000000001
- type: precision_at_1
value: 3.819
- type: precision_at_10
value: 1.504
- type: precision_at_100
value: 0.35000000000000003
- type: precision_at_1000
value: 0.06
- type: precision_at_20
value: 1.008
- type: precision_at_3
value: 2.6870000000000003
- type: precision_at_5
value: 2.144
- type: recall_at_1
value: 3.685
- type: recall_at_10
value: 14.444
- type: recall_at_100
value: 33.478
- type: recall_at_1000
value: 57.365
- type: recall_at_20
value: 19.345000000000002
- type: recall_at_3
value: 7.759
- type: recall_at_5
value: 10.31
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 79.37984496124031
- type: f1
value: 79.70776168144654
- type: f1_weighted
value: 78.98604155693126
- type: main_score
value: 79.37984496124031
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: validation
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 79.10514541387025
- type: f1
value: 80.13728800876822
- type: f1_weighted
value: 78.74239342821
- type: main_score
value: 79.10514541387025
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 57.20702234382126
- type: f1
value: 38.40873310546903
- type: f1_weighted
value: 62.02145452346416
- type: main_score
value: 57.20702234382126
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: validation
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 57.548098434004466
- type: f1
value: 36.333331127271784
- type: f1_weighted
value: 62.46655557030715
- type: main_score
value: 57.548098434004466
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 57.027572293207804
- type: f1
value: 53.96909396217892
- type: f1_weighted
value: 57.36885675748871
- type: main_score
value: 57.027572293207804
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: validation
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 58.68175110673881
- type: f1
value: 55.590287286837345
- type: f1_weighted
value: 58.74517444737598
- type: main_score
value: 58.68175110673881
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 64.84868863483523
- type: f1
value: 63.599630870289346
- type: f1_weighted
value: 64.81204353114475
- type: main_score
value: 64.84868863483523
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: validation
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 65.64190850959173
- type: f1
value: 65.11708418187989
- type: f1_weighted
value: 65.59542916694492
- type: main_score
value: 65.64190850959173
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 26.067321978832762
- type: v_measure
value: 26.067321978832762
- type: v_measure_std
value: 1.8387220293156443
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 20.608797104420734
- type: v_measure
value: 20.608797104420734
- type: v_measure_std
value: 1.2968799414504437
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 28.922193064546786
- type: map
value: 28.922193064546786
- type: mrr
value: 29.749606407920716
- type: nAUC_map_diff1
value: 14.301584944271909
- type: nAUC_map_max
value: -19.820164160297914
- type: nAUC_map_std
value: -21.572440886493588
- type: nAUC_mrr_diff1
value: 13.315826953561027
- type: nAUC_mrr_max
value: -14.131070009173326
- type: nAUC_mrr_std
value: -17.995640899472775
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 11.787
- type: map_at_1
value: 1.653
- type: map_at_10
value: 3.083
- type: map_at_100
value: 3.9600000000000004
- type: map_at_1000
value: 4.8629999999999995
- type: map_at_20
value: 3.345
- type: map_at_3
value: 2.339
- type: map_at_5
value: 2.5919999999999996
- type: mrr_at_1
value: 15.170278637770899
- type: mrr_at_10
value: 23.406064180057978
- type: mrr_at_100
value: 24.43363448942815
- type: mrr_at_1000
value: 24.509600698989363
- type: mrr_at_20
value: 23.922689158936986
- type: mrr_at_3
value: 20.69143446852425
- type: mrr_at_5
value: 22.053663570691427
- type: nauc_map_at_1000_diff1
value: 23.365752757123413
- type: nauc_map_at_1000_max
value: 5.723105519742705
- type: nauc_map_at_1000_std
value: 12.920689586502723
- type: nauc_map_at_100_diff1
value: 24.997724044834904
- type: nauc_map_at_100_max
value: 5.6034850098449915
- type: nauc_map_at_100_std
value: 11.69792376235701
- type: nauc_map_at_10_diff1
value: 26.359819478008067
- type: nauc_map_at_10_max
value: 2.950229078577769
- type: nauc_map_at_10_std
value: 14.246357729822758
- type: nauc_map_at_1_diff1
value: 31.449035283359372
- type: nauc_map_at_1_max
value: -2.821745348969351
- type: nauc_map_at_1_std
value: 36.59265082814134
- type: nauc_map_at_20_diff1
value: 26.156703988579146
- type: nauc_map_at_20_max
value: 4.039850648981644
- type: nauc_map_at_20_std
value: 12.857683173766873
- type: nauc_map_at_3_diff1
value: 29.303443696100786
- type: nauc_map_at_3_max
value: -0.0896457369831278
- type: nauc_map_at_3_std
value: 19.041469355147843
- type: nauc_map_at_5_diff1
value: 27.94064497119116
- type: nauc_map_at_5_max
value: 2.092827137448805
- type: nauc_map_at_5_std
value: 16.650477311767396
- type: nauc_mrr_at_1000_diff1
value: 23.191174795386072
- type: nauc_mrr_at_1000_max
value: 8.617227354282814
- type: nauc_mrr_at_1000_std
value: 13.743361705963208
- type: nauc_mrr_at_100_diff1
value: 23.183126364916337
- type: nauc_mrr_at_100_max
value: 8.718493819611146
- type: nauc_mrr_at_100_std
value: 13.779526663994119
- type: nauc_mrr_at_10_diff1
value: 23.19113404679369
- type: nauc_mrr_at_10_max
value: 7.972203926491493
- type: nauc_mrr_at_10_std
value: 13.81997198522721
- type: nauc_mrr_at_1_diff1
value: 25.902042101348982
- type: nauc_mrr_at_1_max
value: 5.047109973538058
- type: nauc_mrr_at_1_std
value: 15.470212587147133
- type: nauc_mrr_at_20_diff1
value: 23.08714240436609
- type: nauc_mrr_at_20_max
value: 8.579559461970993
- type: nauc_mrr_at_20_std
value: 13.788821651533256
- type: nauc_mrr_at_3_diff1
value: 23.864879219409048
- type: nauc_mrr_at_3_max
value: 5.9564403558150225
- type: nauc_mrr_at_3_std
value: 12.23691039499389
- type: nauc_mrr_at_5_diff1
value: 22.864151980766543
- type: nauc_mrr_at_5_max
value: 6.465074890039495
- type: nauc_mrr_at_5_std
value: 12.976921063516556
- type: nauc_ndcg_at_1000_diff1
value: 19.94607969157455
- type: nauc_ndcg_at_1000_max
value: 16.340894870674546
- type: nauc_ndcg_at_1000_std
value: 9.758890266987095
- type: nauc_ndcg_at_100_diff1
value: 21.13744167889499
- type: nauc_ndcg_at_100_max
value: 12.270424586502438
- type: nauc_ndcg_at_100_std
value: 10.637134387400323
- type: nauc_ndcg_at_10_diff1
value: 18.010150998973305
- type: nauc_ndcg_at_10_max
value: 4.036835916049262
- type: nauc_ndcg_at_10_std
value: 10.0009341929739
- type: nauc_ndcg_at_1_diff1
value: 26.95639984668014
- type: nauc_ndcg_at_1_max
value: 1.30584626973471
- type: nauc_ndcg_at_1_std
value: 19.303955819224964
- type: nauc_ndcg_at_20_diff1
value: 18.169589747376442
- type: nauc_ndcg_at_20_max
value: 5.161764678189148
- type: nauc_ndcg_at_20_std
value: 10.22797620917627
- type: nauc_ndcg_at_3_diff1
value: 19.178128825108427
- type: nauc_ndcg_at_3_max
value: 1.4635844656449146
- type: nauc_ndcg_at_3_std
value: 8.19879821823025
- type: nauc_ndcg_at_5_diff1
value: 17.47701310365941
- type: nauc_ndcg_at_5_max
value: 3.2375737456613596
- type: nauc_ndcg_at_5_std
value: 8.967845715394795
- type: nauc_precision_at_1000_diff1
value: 0.4902539863751489
- type: nauc_precision_at_1000_max
value: -3.0828128371610726
- type: nauc_precision_at_1000_std
value: 21.77956917819066
- type: nauc_precision_at_100_diff1
value: 4.107605955617016
- type: nauc_precision_at_100_max
value: 3.901908189997292
- type: nauc_precision_at_100_std
value: 12.830503636768817
- type: nauc_precision_at_10_diff1
value: 11.383182928241933
- type: nauc_precision_at_10_max
value: 5.2734660316224975
- type: nauc_precision_at_10_std
value: 3.0284835630797637
- type: nauc_precision_at_1_diff1
value: 25.902042101348982
- type: nauc_precision_at_1_max
value: 5.047109973538058
- type: nauc_precision_at_1_std
value: 15.470212587147133
- type: nauc_precision_at_20_diff1
value: 9.28859558113449
- type: nauc_precision_at_20_max
value: 4.369044910369854
- type: nauc_precision_at_20_std
value: 5.131505517377469
- type: nauc_precision_at_3_diff1
value: 15.657403822929734
- type: nauc_precision_at_3_max
value: 3.264770537128655
- type: nauc_precision_at_3_std
value: -0.36943809741963485
- type: nauc_precision_at_5_diff1
value: 13.594230855028247
- type: nauc_precision_at_5_max
value: 3.897608893736955
- type: nauc_precision_at_5_std
value: 1.6499075659359466
- type: nauc_recall_at_1000_diff1
value: 3.8130415317083237
- type: nauc_recall_at_1000_max
value: 5.274235280141661
- type: nauc_recall_at_1000_std
value: -1.0997067571051977
- type: nauc_recall_at_100_diff1
value: 13.601133281209973
- type: nauc_recall_at_100_max
value: 12.321570752293264
- type: nauc_recall_at_100_std
value: 6.025817236935515
- type: nauc_recall_at_10_diff1
value: 17.23791939185816
- type: nauc_recall_at_10_max
value: 8.983691012976768
- type: nauc_recall_at_10_std
value: 7.006899136032802
- type: nauc_recall_at_1_diff1
value: 31.449035283359372
- type: nauc_recall_at_1_max
value: -2.821745348969351
- type: nauc_recall_at_1_std
value: 36.59265082814134
- type: nauc_recall_at_20_diff1
value: 15.324821438529245
- type: nauc_recall_at_20_max
value: 10.43967245248483
- type: nauc_recall_at_20_std
value: 3.093782373639219
- type: nauc_recall_at_3_diff1
value: 25.60274500322923
- type: nauc_recall_at_3_max
value: 3.272848995708438
- type: nauc_recall_at_3_std
value: 12.973045035126498
- type: nauc_recall_at_5_diff1
value: 21.22883442580333
- type: nauc_recall_at_5_max
value: 10.351110943176518
- type: nauc_recall_at_5_std
value: 9.27937870788212
- type: ndcg_at_1
value: 14.241000000000001
- type: ndcg_at_10
value: 11.787
- type: ndcg_at_100
value: 12.416
- type: ndcg_at_1000
value: 22.667
- type: ndcg_at_20
value: 11.106
- type: ndcg_at_3
value: 13.425
- type: ndcg_at_5
value: 12.375
- type: precision_at_1
value: 15.17
- type: precision_at_10
value: 9.288
- type: precision_at_100
value: 3.8539999999999996
- type: precision_at_1000
value: 1.652
- type: precision_at_20
value: 7.1209999999999996
- type: precision_at_3
value: 13.519
- type: precision_at_5
value: 11.146
- type: recall_at_1
value: 1.653
- type: recall_at_10
value: 5.645
- type: recall_at_100
value: 15.706000000000001
- type: recall_at_1000
value: 51.790000000000006
- type: recall_at_20
value: 7.156999999999999
- type: recall_at_3
value: 2.807
- type: recall_at_5
value: 3.5000000000000004
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 12.751000000000001
- type: map_at_1
value: 5.4239999999999995
- type: map_at_10
value: 9.727
- type: map_at_100
value: 10.674
- type: map_at_1000
value: 10.773000000000001
- type: map_at_20
value: 10.244
- type: map_at_3
value: 8.01
- type: map_at_5
value: 8.835999999999999
- type: mrr_at_1
value: 6.286210892236385
- type: mrr_at_10
value: 10.898409479666718
- type: mrr_at_100
value: 11.805978395844425
- type: mrr_at_1000
value: 11.894539122097198
- type: mrr_at_20
value: 11.391538372487162
- type: mrr_at_3
value: 9.076863653920437
- type: mrr_at_5
value: 9.954615681730385
- type: nauc_map_at_1000_diff1
value: 12.042322288905256
- type: nauc_map_at_1000_max
value: 12.454292593021247
- type: nauc_map_at_1000_std
value: -0.026973001353952128
- type: nauc_map_at_100_diff1
value: 12.017670176179527
- type: nauc_map_at_100_max
value: 12.382175278523636
- type: nauc_map_at_100_std
value: -0.14811188279833779
- type: nauc_map_at_10_diff1
value: 12.528384745734472
- type: nauc_map_at_10_max
value: 11.992761563708425
- type: nauc_map_at_10_std
value: -1.2085244654995564
- type: nauc_map_at_1_diff1
value: 18.062774643202044
- type: nauc_map_at_1_max
value: 10.546042759909822
- type: nauc_map_at_1_std
value: -4.605060217478587
- type: nauc_map_at_20_diff1
value: 12.027364734243642
- type: nauc_map_at_20_max
value: 11.986913806995807
- type: nauc_map_at_20_std
value: -0.6891820403018594
- type: nauc_map_at_3_diff1
value: 13.87806779505128
- type: nauc_map_at_3_max
value: 12.237788023880336
- type: nauc_map_at_3_std
value: -1.113496221133
- type: nauc_map_at_5_diff1
value: 13.302631672076757
- type: nauc_map_at_5_max
value: 12.095902059824695
- type: nauc_map_at_5_std
value: -1.7715108870507146
- type: nauc_mrr_at_1000_diff1
value: 11.53776898908085
- type: nauc_mrr_at_1000_max
value: 11.60425335377083
- type: nauc_mrr_at_1000_std
value: 0.7228251884879632
- type: nauc_mrr_at_100_diff1
value: 11.522709197036091
- type: nauc_mrr_at_100_max
value: 11.542578124132802
- type: nauc_mrr_at_100_std
value: 0.638758138781245
- type: nauc_mrr_at_10_diff1
value: 11.903479398185246
- type: nauc_mrr_at_10_max
value: 11.146925524334083
- type: nauc_mrr_at_10_std
value: -0.15843171385576107
- type: nauc_mrr_at_1_diff1
value: 16.18137171104846
- type: nauc_mrr_at_1_max
value: 10.453889490060643
- type: nauc_mrr_at_1_std
value: -3.506093280792296
- type: nauc_mrr_at_20_diff1
value: 11.544982357779231
- type: nauc_mrr_at_20_max
value: 11.21499761486429
- type: nauc_mrr_at_20_std
value: 0.21958595727979918
- type: nauc_mrr_at_3_diff1
value: 12.937103312979644
- type: nauc_mrr_at_3_max
value: 11.377851235190144
- type: nauc_mrr_at_3_std
value: -0.16875081229236233
- type: nauc_mrr_at_5_diff1
value: 12.515643147025813
- type: nauc_mrr_at_5_max
value: 11.231481493378187
- type: nauc_mrr_at_5_std
value: -0.6322256090954826
- type: nauc_ndcg_at_1000_diff1
value: 9.283196774295387
- type: nauc_ndcg_at_1000_max
value: 15.428022152251177
- type: nauc_ndcg_at_1000_std
value: 6.431851500322609
- type: nauc_ndcg_at_100_diff1
value: 9.142384754753135
- type: nauc_ndcg_at_100_max
value: 13.692834111248853
- type: nauc_ndcg_at_100_std
value: 3.946111509224157
- type: nauc_ndcg_at_10_diff1
value: 10.57963844912894
- type: nauc_ndcg_at_10_max
value: 11.643478728340158
- type: nauc_ndcg_at_10_std
value: -0.16228711612951952
- type: nauc_ndcg_at_1_diff1
value: 16.18137171104846
- type: nauc_ndcg_at_1_max
value: 10.453889490060643
- type: nauc_ndcg_at_1_std
value: -3.506093280792296
- type: nauc_ndcg_at_20_diff1
value: 9.287626097295973
- type: nauc_ndcg_at_20_max
value: 11.564253989436692
- type: nauc_ndcg_at_20_std
value: 1.0833618034027919
- type: nauc_ndcg_at_3_diff1
value: 12.782118267300854
- type: nauc_ndcg_at_3_max
value: 12.088939543374478
- type: nauc_ndcg_at_3_std
value: -0.3051151318390331
- type: nauc_ndcg_at_5_diff1
value: 11.969321839755176
- type: nauc_ndcg_at_5_max
value: 11.845086775325589
- type: nauc_ndcg_at_5_std
value: -1.3444165044450207
- type: nauc_precision_at_1000_diff1
value: 2.5193674084094684
- type: nauc_precision_at_1000_max
value: 20.939763938033842
- type: nauc_precision_at_1000_std
value: 23.580493700916474
- type: nauc_precision_at_100_diff1
value: 4.40570902284602
- type: nauc_precision_at_100_max
value: 17.20780939257806
- type: nauc_precision_at_100_std
value: 13.849405913884382
- type: nauc_precision_at_10_diff1
value: 7.092513680974628
- type: nauc_precision_at_10_max
value: 11.622712077691345
- type: nauc_precision_at_10_std
value: 2.350782020924265
- type: nauc_precision_at_1_diff1
value: 16.18137171104846
- type: nauc_precision_at_1_max
value: 10.453889490060643
- type: nauc_precision_at_1_std
value: -3.506093280792296
- type: nauc_precision_at_20_diff1
value: 4.118914430083712
- type: nauc_precision_at_20_max
value: 11.383778179039798
- type: nauc_precision_at_20_std
value: 4.960386744746157
- type: nauc_precision_at_3_diff1
value: 10.567586129702402
- type: nauc_precision_at_3_max
value: 12.322692972359354
- type: nauc_precision_at_3_std
value: 1.2762356411081814
- type: nauc_precision_at_5_diff1
value: 9.541559883764096
- type: nauc_precision_at_5_max
value: 11.75672359142824
- type: nauc_precision_at_5_std
value: -0.15852938457063984
- type: nauc_recall_at_1000_diff1
value: 2.8579033037939
- type: nauc_recall_at_1000_max
value: 27.65992449285828
- type: nauc_recall_at_1000_std
value: 26.978683362031404
- type: nauc_recall_at_100_diff1
value: 3.8717997827541715
- type: nauc_recall_at_100_max
value: 16.638088105846112
- type: nauc_recall_at_100_std
value: 11.462502409674741
- type: nauc_recall_at_10_diff1
value: 7.163252061229421
- type: nauc_recall_at_10_max
value: 11.040002484289145
- type: nauc_recall_at_10_std
value: 0.9129102784965978
- type: nauc_recall_at_1_diff1
value: 18.062774643202044
- type: nauc_recall_at_1_max
value: 10.546042759909822
- type: nauc_recall_at_1_std
value: -4.605060217478587
- type: nauc_recall_at_20_diff1
value: 4.542312455433628
- type: nauc_recall_at_20_max
value: 10.643629412001573
- type: nauc_recall_at_20_std
value: 3.47941489266319
- type: nauc_recall_at_3_diff1
value: 10.606151920850087
- type: nauc_recall_at_3_max
value: 12.047989240607304
- type: nauc_recall_at_3_std
value: 0.6881305977438832
- type: nauc_recall_at_5_diff1
value: 9.606967769671117
- type: nauc_recall_at_5_max
value: 11.625874916779939
- type: nauc_recall_at_5_std
value: -1.536831933988358
- type: ndcg_at_1
value: 6.286
- type: ndcg_at_10
value: 12.751000000000001
- type: ndcg_at_100
value: 17.767
- type: ndcg_at_1000
value: 20.739
- type: ndcg_at_20
value: 14.546000000000001
- type: ndcg_at_3
value: 9.109
- type: ndcg_at_5
value: 10.593
- type: precision_at_1
value: 6.286
- type: precision_at_10
value: 2.506
- type: precision_at_100
value: 0.5369999999999999
- type: precision_at_1000
value: 0.082
- type: precision_at_20
value: 1.656
- type: precision_at_3
value: 4.403
- type: precision_at_5
value: 3.47
- type: recall_at_1
value: 5.4239999999999995
- type: recall_at_10
value: 21.07
- type: recall_at_100
value: 44.836
- type: recall_at_1000
value: 68.057
- type: recall_at_20
value: 27.828999999999997
- type: recall_at_3
value: 11.24
- type: recall_at_5
value: 14.692
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: dev
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 72.064
- type: map_at_1
value: 54.782
- type: map_at_10
value: 66.892
- type: map_at_100
value: 67.723
- type: map_at_1000
value: 67.764
- type: map_at_20
value: 67.419
- type: map_at_3
value: 63.898999999999994
- type: map_at_5
value: 65.671
- type: mrr_at_1
value: 62.739999999999995
- type: mrr_at_10
value: 71.16546031746041
- type: mrr_at_100
value: 71.56923201896343
- type: mrr_at_1000
value: 71.58084120220572
- type: mrr_at_20
value: 71.43220428522292
- type: mrr_at_3
value: 69.34666666666686
- type: mrr_at_5
value: 70.50366666666675
- type: nauc_map_at_1000_diff1
value: 65.59212219467176
- type: nauc_map_at_1000_max
value: 27.57371463373514
- type: nauc_map_at_1000_std
value: -19.88019762973491
- type: nauc_map_at_100_diff1
value: 65.59289363418381
- type: nauc_map_at_100_max
value: 27.552397713936145
- type: nauc_map_at_100_std
value: -19.896571610105
- type: nauc_map_at_10_diff1
value: 65.53931319246097
- type: nauc_map_at_10_max
value: 27.169112039708537
- type: nauc_map_at_10_std
value: -20.467287006388478
- type: nauc_map_at_1_diff1
value: 68.88484940147754
- type: nauc_map_at_1_max
value: 22.084071164020493
- type: nauc_map_at_1_std
value: -19.088410282261215
- type: nauc_map_at_20_diff1
value: 65.56182444427108
- type: nauc_map_at_20_max
value: 27.45213204290069
- type: nauc_map_at_20_std
value: -20.038748883901576
- type: nauc_map_at_3_diff1
value: 66.17595740227014
- type: nauc_map_at_3_max
value: 26.408549123969706
- type: nauc_map_at_3_std
value: -20.89770058010996
- type: nauc_map_at_5_diff1
value: 65.73587133950195
- type: nauc_map_at_5_max
value: 26.68599168036377
- type: nauc_map_at_5_std
value: -20.946732610202794
- type: nauc_mrr_at_1000_diff1
value: 66.10373743794109
- type: nauc_mrr_at_1000_max
value: 28.262067477709746
- type: nauc_mrr_at_1000_std
value: -20.316692145628338
- type: nauc_mrr_at_100_diff1
value: 66.1011315557887
- type: nauc_mrr_at_100_max
value: 28.258333464606906
- type: nauc_mrr_at_100_std
value: -20.314480789225396
- type: nauc_mrr_at_10_diff1
value: 65.96785881864399
- type: nauc_mrr_at_10_max
value: 28.237913204439835
- type: nauc_mrr_at_10_std
value: -20.502923702465203
- type: nauc_mrr_at_1_diff1
value: 68.93968620697294
- type: nauc_mrr_at_1_max
value: 26.32067118661196
- type: nauc_mrr_at_1_std
value: -20.692394322446354
- type: nauc_mrr_at_20_diff1
value: 66.03425948210504
- type: nauc_mrr_at_20_max
value: 28.27954695407428
- type: nauc_mrr_at_20_std
value: -20.343789680632383
- type: nauc_mrr_at_3_diff1
value: 66.17051549573385
- type: nauc_mrr_at_3_max
value: 28.424209794489002
- type: nauc_mrr_at_3_std
value: -20.63375700276466
- type: nauc_mrr_at_5_diff1
value: 66.00114126376775
- type: nauc_mrr_at_5_max
value: 28.253534157328698
- type: nauc_mrr_at_5_std
value: -20.807782778878614
- type: nauc_ndcg_at_1000_diff1
value: 65.05838258937074
- type: nauc_ndcg_at_1000_max
value: 28.91283880656727
- type: nauc_ndcg_at_1000_std
value: -18.590515943457316
- type: nauc_ndcg_at_100_diff1
value: 64.94936073847741
- type: nauc_ndcg_at_100_max
value: 28.777342209651312
- type: nauc_ndcg_at_100_std
value: -18.54831566864543
- type: nauc_ndcg_at_10_diff1
value: 64.40543887873729
- type: nauc_ndcg_at_10_max
value: 28.155303739216752
- type: nauc_ndcg_at_10_std
value: -20.20740996870611
- type: nauc_ndcg_at_1_diff1
value: 68.81976566785035
- type: nauc_ndcg_at_1_max
value: 27.173206988406935
- type: nauc_ndcg_at_1_std
value: -20.586866192810813
- type: nauc_ndcg_at_20_diff1
value: 64.58440444702556
- type: nauc_ndcg_at_20_max
value: 28.702892399631764
- type: nauc_ndcg_at_20_std
value: -19.05530721740162
- type: nauc_ndcg_at_3_diff1
value: 64.88809208260761
- type: nauc_ndcg_at_3_max
value: 27.978816263543653
- type: nauc_ndcg_at_3_std
value: -21.072797088674417
- type: nauc_ndcg_at_5_diff1
value: 64.5420279434105
- type: nauc_ndcg_at_5_max
value: 27.725893019495963
- type: nauc_ndcg_at_5_std
value: -21.045716428718904
- type: nauc_precision_at_1000_diff1
value: -28.135144677720618
- type: nauc_precision_at_1000_max
value: 4.902062536968971
- type: nauc_precision_at_1000_std
value: 14.26305975790192
- type: nauc_precision_at_100_diff1
value: -22.98822166497179
- type: nauc_precision_at_100_max
value: 7.711530461310917
- type: nauc_precision_at_100_std
value: 12.529496394564138
- type: nauc_precision_at_10_diff1
value: -4.091230458423616
- type: nauc_precision_at_10_max
value: 15.136513889771832
- type: nauc_precision_at_10_std
value: 1.1901891712961636
- type: nauc_precision_at_1_diff1
value: 68.81976566785035
- type: nauc_precision_at_1_max
value: 27.173206988406935
- type: nauc_precision_at_1_std
value: -20.586866192810813
- type: nauc_precision_at_20_diff1
value: -12.870035284655673
- type: nauc_precision_at_20_max
value: 13.115005518588815
- type: nauc_precision_at_20_std
value: 8.049566614452813
- type: nauc_precision_at_3_diff1
value: 22.8868328249135
- type: nauc_precision_at_3_max
value: 22.744169179873815
- type: nauc_precision_at_3_std
value: -10.87607659515058
- type: nauc_precision_at_5_diff1
value: 9.683932611802376
- type: nauc_precision_at_5_max
value: 18.570238750745574
- type: nauc_precision_at_5_std
value: -6.716473279656192
- type: nauc_recall_at_1000_diff1
value: 57.26745978726042
- type: nauc_recall_at_1000_max
value: 40.56657948457161
- type: nauc_recall_at_1000_std
value: 23.831152067948672
- type: nauc_recall_at_100_diff1
value: 54.40120458496423
- type: nauc_recall_at_100_max
value: 31.04243629787198
- type: nauc_recall_at_100_std
value: 1.5334752705665868
- type: nauc_recall_at_10_diff1
value: 55.84406621807515
- type: nauc_recall_at_10_max
value: 26.701256917066257
- type: nauc_recall_at_10_std
value: -18.935574651712077
- type: nauc_recall_at_1_diff1
value: 68.88484940147754
- type: nauc_recall_at_1_max
value: 22.084071164020493
- type: nauc_recall_at_1_std
value: -19.088410282261215
- type: nauc_recall_at_20_diff1
value: 55.506910820340096
- type: nauc_recall_at_20_max
value: 29.848851968844155
- type: nauc_recall_at_20_std
value: -12.121739617272143
- type: nauc_recall_at_3_diff1
value: 61.05518202235546
- type: nauc_recall_at_3_max
value: 26.46421251590195
- type: nauc_recall_at_3_std
value: -21.770506207273044
- type: nauc_recall_at_5_diff1
value: 58.19024025929856
- type: nauc_recall_at_5_max
value: 25.891711064012235
- type: nauc_recall_at_5_std
value: -21.58927419976119
- type: ndcg_at_1
value: 62.760000000000005
- type: ndcg_at_10
value: 72.064
- type: ndcg_at_100
value: 74.889
- type: ndcg_at_1000
value: 75.503
- type: ndcg_at_20
value: 73.424
- type: ndcg_at_3
value: 67.76100000000001
- type: ndcg_at_5
value: 69.95400000000001
- type: precision_at_1
value: 62.760000000000005
- type: precision_at_10
value: 11.008
- type: precision_at_100
value: 1.351
- type: precision_at_1000
value: 0.146
- type: precision_at_20
value: 5.994
- type: precision_at_3
value: 29.467
- type: precision_at_5
value: 19.652
- type: recall_at_1
value: 54.782
- type: recall_at_10
value: 82.892
- type: recall_at_100
value: 94.197
- type: recall_at_1000
value: 97.968
- type: recall_at_20
value: 87.625
- type: recall_at_3
value: 70.994
- type: recall_at_5
value: 76.797
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 71.571
- type: map_at_1
value: 54.891
- type: map_at_10
value: 66.593
- type: map_at_100
value: 67.456
- type: map_at_1000
value: 67.502
- type: map_at_20
value: 67.119
- type: map_at_3
value: 63.661
- type: map_at_5
value: 65.376
- type: mrr_at_1
value: 62.68
- type: mrr_at_10
value: 70.70483333333299
- type: mrr_at_100
value: 71.10078264569333
- type: mrr_at_1000
value: 71.11501411983153
- type: mrr_at_20
value: 70.96258499055935
- type: mrr_at_3
value: 68.88499999999965
- type: mrr_at_5
value: 69.96799999999939
- type: nauc_map_at_1000_diff1
value: 67.08185419212819
- type: nauc_map_at_1000_max
value: 26.271007449733965
- type: nauc_map_at_1000_std
value: -22.727670299133905
- type: nauc_map_at_100_diff1
value: 67.07473306685743
- type: nauc_map_at_100_max
value: 26.267772220163994
- type: nauc_map_at_100_std
value: -22.733041613084545
- type: nauc_map_at_10_diff1
value: 67.03366616203877
- type: nauc_map_at_10_max
value: 25.98040095122931
- type: nauc_map_at_10_std
value: -23.174978541840062
- type: nauc_map_at_1_diff1
value: 70.22788208504926
- type: nauc_map_at_1_max
value: 22.32181594115109
- type: nauc_map_at_1_std
value: -22.845687528126767
- type: nauc_map_at_20_diff1
value: 67.02458695958853
- type: nauc_map_at_20_max
value: 26.160924337082925
- type: nauc_map_at_20_std
value: -22.91658496265426
- type: nauc_map_at_3_diff1
value: 67.73324298855009
- type: nauc_map_at_3_max
value: 25.32543491900135
- type: nauc_map_at_3_std
value: -23.652478151035705
- type: nauc_map_at_5_diff1
value: 67.26317399696087
- type: nauc_map_at_5_max
value: 25.73283424480045
- type: nauc_map_at_5_std
value: -23.374471007547708
- type: nauc_mrr_at_1000_diff1
value: 68.6863034959119
- type: nauc_mrr_at_1000_max
value: 27.45760797922055
- type: nauc_mrr_at_1000_std
value: -22.07474437205787
- type: nauc_mrr_at_100_diff1
value: 68.6819175750528
- type: nauc_mrr_at_100_max
value: 27.460124524249828
- type: nauc_mrr_at_100_std
value: -22.06707147884554
- type: nauc_mrr_at_10_diff1
value: 68.64301683603624
- type: nauc_mrr_at_10_max
value: 27.508274408138412
- type: nauc_mrr_at_10_std
value: -22.10347335537209
- type: nauc_mrr_at_1_diff1
value: 70.98759105304701
- type: nauc_mrr_at_1_max
value: 25.834314140255454
- type: nauc_mrr_at_1_std
value: -23.5079169654779
- type: nauc_mrr_at_20_diff1
value: 68.62344507570737
- type: nauc_mrr_at_20_max
value: 27.47708897427253
- type: nauc_mrr_at_20_std
value: -22.092765274383197
- type: nauc_mrr_at_3_diff1
value: 68.72012459589514
- type: nauc_mrr_at_3_max
value: 27.732455394026374
- type: nauc_mrr_at_3_std
value: -22.202658610484402
- type: nauc_mrr_at_5_diff1
value: 68.58152117520244
- type: nauc_mrr_at_5_max
value: 27.624668222947907
- type: nauc_mrr_at_5_std
value: -22.1566009105787
- type: nauc_ndcg_at_1000_diff1
value: 66.8170968003819
- type: nauc_ndcg_at_1000_max
value: 27.4424720408962
- type: nauc_ndcg_at_1000_std
value: -21.26731508004746
- type: nauc_ndcg_at_100_diff1
value: 66.63928723579153
- type: nauc_ndcg_at_100_max
value: 27.484213156341337
- type: nauc_ndcg_at_100_std
value: -21.00280631472414
- type: nauc_ndcg_at_10_diff1
value: 66.10871911241466
- type: nauc_ndcg_at_10_max
value: 26.88263981341165
- type: nauc_ndcg_at_10_std
value: -22.293652163487547
- type: nauc_ndcg_at_1_diff1
value: 70.60180456913521
- type: nauc_ndcg_at_1_max
value: 26.990805111851056
- type: nauc_ndcg_at_1_std
value: -22.57353642686558
- type: nauc_ndcg_at_20_diff1
value: 66.12340410289872
- type: nauc_ndcg_at_20_max
value: 27.208743850472043
- type: nauc_ndcg_at_20_std
value: -21.71746318981642
- type: nauc_ndcg_at_3_diff1
value: 66.8346237445378
- type: nauc_ndcg_at_3_max
value: 26.80845893094536
- type: nauc_ndcg_at_3_std
value: -22.72561952027054
- type: nauc_ndcg_at_5_diff1
value: 66.29625363729713
- type: nauc_ndcg_at_5_max
value: 26.84029824214343
- type: nauc_ndcg_at_5_std
value: -22.437674141622427
- type: nauc_precision_at_1000_diff1
value: -27.663277816134528
- type: nauc_precision_at_1000_max
value: 0.029218673757483826
- type: nauc_precision_at_1000_std
value: 15.496650709260345
- type: nauc_precision_at_100_diff1
value: -22.411145752827085
- type: nauc_precision_at_100_max
value: 3.9344393041914003
- type: nauc_precision_at_100_std
value: 14.427181615032728
- type: nauc_precision_at_10_diff1
value: -3.025719144519533
- type: nauc_precision_at_10_max
value: 11.854534904087863
- type: nauc_precision_at_10_std
value: 2.9269880461016187
- type: nauc_precision_at_1_diff1
value: 70.60180456913521
- type: nauc_precision_at_1_max
value: 26.990805111851056
- type: nauc_precision_at_1_std
value: -22.57353642686558
- type: nauc_precision_at_20_diff1
value: -11.970959182337722
- type: nauc_precision_at_20_max
value: 8.7073767696997
- type: nauc_precision_at_20_std
value: 8.071309809560946
- type: nauc_precision_at_3_diff1
value: 24.261382587189285
- type: nauc_precision_at_3_max
value: 19.083737941110986
- type: nauc_precision_at_3_std
value: -9.355089633077345
- type: nauc_precision_at_5_diff1
value: 10.933886587489535
- type: nauc_precision_at_5_max
value: 15.85727992438673
- type: nauc_precision_at_5_std
value: -3.6594032632600295
- type: nauc_recall_at_1000_diff1
value: 58.74764040615095
- type: nauc_recall_at_1000_max
value: 33.934231474839436
- type: nauc_recall_at_1000_std
value: 8.295473265471774
- type: nauc_recall_at_100_diff1
value: 56.67126188145814
- type: nauc_recall_at_100_max
value: 29.613953308606213
- type: nauc_recall_at_100_std
value: -3.1156622646409837
- type: nauc_recall_at_10_diff1
value: 58.14457754172474
- type: nauc_recall_at_10_max
value: 25.64068954117741
- type: nauc_recall_at_10_std
value: -20.345105298679194
- type: nauc_recall_at_1_diff1
value: 70.22788208504926
- type: nauc_recall_at_1_max
value: 22.32181594115109
- type: nauc_recall_at_1_std
value: -22.845687528126767
- type: nauc_recall_at_20_diff1
value: 56.122410891976116
- type: nauc_recall_at_20_max
value: 27.035921283394927
- type: nauc_recall_at_20_std
value: -16.336394508046013
- type: nauc_recall_at_3_diff1
value: 63.23448059275755
- type: nauc_recall_at_3_max
value: 25.348142531067364
- type: nauc_recall_at_3_std
value: -22.990239464641988
- type: nauc_recall_at_5_diff1
value: 60.663426307558325
- type: nauc_recall_at_5_max
value: 25.829648967123543
- type: nauc_recall_at_5_std
value: -21.6318529393389
- type: ndcg_at_1
value: 62.9
- type: ndcg_at_10
value: 71.571
- type: ndcg_at_100
value: 74.45
- type: ndcg_at_1000
value: 75.178
- type: ndcg_at_20
value: 72.898
- type: ndcg_at_3
value: 67.426
- type: ndcg_at_5
value: 69.47
- type: precision_at_1
value: 62.9
- type: precision_at_10
value: 10.918
- type: precision_at_100
value: 1.365
- type: precision_at_1000
value: 0.15
- type: precision_at_20
value: 5.962
- type: precision_at_3
value: 29.226999999999997
- type: precision_at_5
value: 19.486
- type: recall_at_1
value: 54.891
- type: recall_at_10
value: 81.804
- type: recall_at_100
value: 93.28699999999999
- type: recall_at_1000
value: 97.908
- type: recall_at_20
value: 86.392
- type: recall_at_3
value: 70.167
- type: recall_at_5
value: 75.612
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 28.426716749714853
- type: v_measure
value: 28.426716749714853
- type: v_measure_std
value: 3.8448151870575216
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 36.876305702102044
- type: v_measure
value: 36.876305702102044
- type: v_measure_std
value: 9.94570716150816
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 8.469999999999999
- type: map_at_1
value: 1.8579999999999999
- type: map_at_10
value: 4.517
- type: map_at_100
value: 5.423
- type: map_at_1000
value: 5.6259999999999994
- type: map_at_20
value: 4.96
- type: map_at_3
value: 3.3259999999999996
- type: map_at_5
value: 3.897
- type: mrr_at_1
value: 9.1
- type: mrr_at_10
value: 15.50785714285714
- type: mrr_at_100
value: 16.53341442492662
- type: mrr_at_1000
value: 16.64337258729169
- type: mrr_at_20
value: 16.033686205005864
- type: mrr_at_3
value: 13.16666666666666
- type: mrr_at_5
value: 14.536666666666667
- type: nauc_map_at_1000_diff1
value: 12.061887270855342
- type: nauc_map_at_1000_max
value: 15.672758711289811
- type: nauc_map_at_1000_std
value: 7.5414825361684095
- type: nauc_map_at_100_diff1
value: 12.21437148949726
- type: nauc_map_at_100_max
value: 15.328873259167677
- type: nauc_map_at_100_std
value: 7.078720170584267
- type: nauc_map_at_10_diff1
value: 14.08947539699102
- type: nauc_map_at_10_max
value: 15.528470101637804
- type: nauc_map_at_10_std
value: 4.92863657106244
- type: nauc_map_at_1_diff1
value: 18.039298730269138
- type: nauc_map_at_1_max
value: 14.31023882202473
- type: nauc_map_at_1_std
value: -0.3591053528908413
- type: nauc_map_at_20_diff1
value: 13.109485532536313
- type: nauc_map_at_20_max
value: 14.95304711292193
- type: nauc_map_at_20_std
value: 5.92644666999539
- type: nauc_map_at_3_diff1
value: 14.455112437308307
- type: nauc_map_at_3_max
value: 15.33104351213225
- type: nauc_map_at_3_std
value: 1.790639856377438
- type: nauc_map_at_5_diff1
value: 15.034627753858857
- type: nauc_map_at_5_max
value: 15.559185439575629
- type: nauc_map_at_5_std
value: 2.9375822216524767
- type: nauc_mrr_at_1000_diff1
value: 11.63283159966147
- type: nauc_mrr_at_1000_max
value: 12.794763163889893
- type: nauc_mrr_at_1000_std
value: 3.130093174099942
- type: nauc_mrr_at_100_diff1
value: 11.631365860343896
- type: nauc_mrr_at_100_max
value: 12.738342833107302
- type: nauc_mrr_at_100_std
value: 3.175009675549753
- type: nauc_mrr_at_10_diff1
value: 11.78043256144087
- type: nauc_mrr_at_10_max
value: 13.053458988792435
- type: nauc_mrr_at_10_std
value: 2.969926764552796
- type: nauc_mrr_at_1_diff1
value: 16.817874907491216
- type: nauc_mrr_at_1_max
value: 13.72314296918537
- type: nauc_mrr_at_1_std
value: 0.5050758203427812
- type: nauc_mrr_at_20_diff1
value: 11.67061780999313
- type: nauc_mrr_at_20_max
value: 12.538852363979714
- type: nauc_mrr_at_20_std
value: 3.1554542803987653
- type: nauc_mrr_at_3_diff1
value: 12.935441478746942
- type: nauc_mrr_at_3_max
value: 13.439277250075833
- type: nauc_mrr_at_3_std
value: 1.609264258249062
- type: nauc_mrr_at_5_diff1
value: 12.83097575201126
- type: nauc_mrr_at_5_max
value: 13.051302862980327
- type: nauc_mrr_at_5_std
value: 2.946054126471865
- type: nauc_ndcg_at_1000_diff1
value: 6.885605267096993
- type: nauc_ndcg_at_1000_max
value: 17.56127976156672
- type: nauc_ndcg_at_1000_std
value: 12.26165869844922
- type: nauc_ndcg_at_100_diff1
value: 7.911927104457757
- type: nauc_ndcg_at_100_max
value: 14.499330491894973
- type: nauc_ndcg_at_100_std
value: 9.911478033061345
- type: nauc_ndcg_at_10_diff1
value: 11.255069130484213
- type: nauc_ndcg_at_10_max
value: 14.06906027730809
- type: nauc_ndcg_at_10_std
value: 5.572616334986018
- type: nauc_ndcg_at_1_diff1
value: 16.817874907491216
- type: nauc_ndcg_at_1_max
value: 13.72314296918537
- type: nauc_ndcg_at_1_std
value: 0.5050758203427812
- type: nauc_ndcg_at_20_diff1
value: 9.949384628634753
- type: nauc_ndcg_at_20_max
value: 12.427643651408191
- type: nauc_ndcg_at_20_std
value: 7.053164095201611
- type: nauc_ndcg_at_3_diff1
value: 12.563756379860703
- type: nauc_ndcg_at_3_max
value: 13.82441131419801
- type: nauc_ndcg_at_3_std
value: 2.251423443371622
- type: nauc_ndcg_at_5_diff1
value: 13.441574400993234
- type: nauc_ndcg_at_5_max
value: 14.260529793724896
- type: nauc_ndcg_at_5_std
value: 3.960809644920197
- type: nauc_precision_at_1000_diff1
value: -1.233221181087992
- type: nauc_precision_at_1000_max
value: 20.589287486434426
- type: nauc_precision_at_1000_std
value: 19.911652095411522
- type: nauc_precision_at_100_diff1
value: 1.8317011244106
- type: nauc_precision_at_100_max
value: 14.135187668137632
- type: nauc_precision_at_100_std
value: 15.797255821832284
- type: nauc_precision_at_10_diff1
value: 7.9950736927114825
- type: nauc_precision_at_10_max
value: 12.46735311932162
- type: nauc_precision_at_10_std
value: 8.371168988491819
- type: nauc_precision_at_1_diff1
value: 16.817874907491216
- type: nauc_precision_at_1_max
value: 13.72314296918537
- type: nauc_precision_at_1_std
value: 0.5050758203427812
- type: nauc_precision_at_20_diff1
value: 5.948081133215723
- type: nauc_precision_at_20_max
value: 9.179673657991613
- type: nauc_precision_at_20_std
value: 10.783743977568665
- type: nauc_precision_at_3_diff1
value: 11.287806520397261
- type: nauc_precision_at_3_max
value: 13.602772010549327
- type: nauc_precision_at_3_std
value: 2.9099096571460423
- type: nauc_precision_at_5_diff1
value: 12.008313574343687
- type: nauc_precision_at_5_max
value: 13.265544377978255
- type: nauc_precision_at_5_std
value: 5.963871468457421
- type: nauc_recall_at_1000_diff1
value: -1.1282187899464593
- type: nauc_recall_at_1000_max
value: 21.195941583270518
- type: nauc_recall_at_1000_std
value: 19.905807733946542
- type: nauc_recall_at_100_diff1
value: 2.060462939698145
- type: nauc_recall_at_100_max
value: 14.62419021279813
- type: nauc_recall_at_100_std
value: 15.805207119860516
- type: nauc_recall_at_10_diff1
value: 8.48087847118933
- type: nauc_recall_at_10_max
value: 13.311831865231236
- type: nauc_recall_at_10_std
value: 8.101795195543582
- type: nauc_recall_at_1_diff1
value: 18.039298730269138
- type: nauc_recall_at_1_max
value: 14.31023882202473
- type: nauc_recall_at_1_std
value: -0.3591053528908413
- type: nauc_recall_at_20_diff1
value: 6.314569406264662
- type: nauc_recall_at_20_max
value: 9.76209181841767
- type: nauc_recall_at_20_std
value: 10.6380958121179
- type: nauc_recall_at_3_diff1
value: 11.981115684692986
- type: nauc_recall_at_3_max
value: 14.659647798166809
- type: nauc_recall_at_3_std
value: 2.7748583693419135
- type: nauc_recall_at_5_diff1
value: 12.650210516742375
- type: nauc_recall_at_5_max
value: 14.24002653527425
- type: nauc_recall_at_5_std
value: 5.694892494690264
- type: ndcg_at_1
value: 9.1
- type: ndcg_at_10
value: 8.469999999999999
- type: ndcg_at_100
value: 13.206999999999999
- type: ndcg_at_1000
value: 18.076
- type: ndcg_at_20
value: 9.964
- type: ndcg_at_3
value: 7.758
- type: ndcg_at_5
value: 6.925000000000001
- type: precision_at_1
value: 9.1
- type: precision_at_10
value: 4.5
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.234
- type: precision_at_20
value: 3.11
- type: precision_at_3
value: 7.333
- type: precision_at_5
value: 6.18
- type: recall_at_1
value: 1.8579999999999999
- type: recall_at_10
value: 9.155000000000001
- type: recall_at_100
value: 23.455000000000002
- type: recall_at_1000
value: 47.625
- type: recall_at_20
value: 12.683
- type: recall_at_3
value: 4.505
- type: recall_at_5
value: 6.3
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 67.40745266329591
- type: cosine_spearman
value: 55.48988811622464
- type: euclidean_pearson
value: 56.64110483113514
- type: euclidean_spearman
value: 52.56015050620609
- type: main_score
value: 55.48988811622464
- type: manhattan_pearson
value: 56.620739146260355
- type: manhattan_spearman
value: 52.57278457053056
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 60.71776028645086
- type: cosine_spearman
value: 53.51289728749782
- type: euclidean_pearson
value: 55.155862584848556
- type: euclidean_spearman
value: 50.616899710863926
- type: main_score
value: 53.51289728749782
- type: manhattan_pearson
value: 55.12958983969065
- type: manhattan_spearman
value: 50.615571438753825
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 66.28723275783524
- type: cosine_spearman
value: 70.80105270649192
- type: euclidean_pearson
value: 63.17342773278036
- type: euclidean_spearman
value: 68.04978745416297
- type: main_score
value: 70.80105270649192
- type: manhattan_pearson
value: 63.307977806683624
- type: manhattan_spearman
value: 68.15603387498501
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 66.32738491357128
- type: cosine_spearman
value: 63.56156596006354
- type: euclidean_pearson
value: 60.77234239072363
- type: euclidean_spearman
value: 59.82854639930626
- type: main_score
value: 63.56156596006354
- type: manhattan_pearson
value: 60.727062637807514
- type: manhattan_spearman
value: 59.82525574450547
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 70.65242939251758
- type: cosine_spearman
value: 74.0811942569614
- type: euclidean_pearson
value: 64.4398938761914
- type: euclidean_spearman
value: 69.02073301173178
- type: main_score
value: 74.0811942569614
- type: manhattan_pearson
value: 64.36840991082721
- type: manhattan_spearman
value: 68.93733645543237
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 57.66178270032967
- type: cosine_spearman
value: 64.60112935555931
- type: euclidean_pearson
value: 51.6624216594181
- type: euclidean_spearman
value: 58.48027809262631
- type: main_score
value: 64.60112935555931
- type: manhattan_pearson
value: 51.67275255747956
- type: manhattan_spearman
value: 58.51993555176014
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 74.962376979876
- type: cosine_spearman
value: 76.91005426258415
- type: euclidean_pearson
value: 69.18702368547565
- type: euclidean_spearman
value: 72.7967496412359
- type: main_score
value: 76.91005426258415
- type: manhattan_pearson
value: 69.51475539291788
- type: manhattan_spearman
value: 73.1205789749194
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 42.989838819327645
- type: cosine_spearman
value: 54.41645863994751
- type: euclidean_pearson
value: 46.463633715286825
- type: euclidean_spearman
value: 53.26125908813254
- type: main_score
value: 54.41645863994751
- type: manhattan_pearson
value: 46.3090539719663
- type: manhattan_spearman
value: 53.23952890070585
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 58.94057926227806
- type: cosine_spearman
value: 61.552817487762276
- type: euclidean_pearson
value: 53.47624658314033
- type: euclidean_spearman
value: 58.03174405895781
- type: main_score
value: 61.552817487762276
- type: manhattan_pearson
value: 53.518204882849815
- type: manhattan_spearman
value: 58.01595959625671
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 63.55311507513623
- type: map
value: 63.55311507513623
- type: mrr
value: 85.72307412993688
- type: nAUC_map_diff1
value: 11.268201025650207
- type: nAUC_map_max
value: 51.45548943457066
- type: nAUC_map_std
value: 50.814308655995376
- type: nAUC_mrr_diff1
value: 40.88320903735142
- type: nAUC_mrr_max
value: 65.2228452078348
- type: nAUC_mrr_std
value: 48.927442483105054
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 29.526000000000003
- type: map_at_1
value: 18.667
- type: map_at_10
value: 25.737
- type: map_at_100
value: 26.596999999999998
- type: map_at_1000
value: 26.716
- type: map_at_20
value: 26.169999999999998
- type: map_at_3
value: 23.662
- type: map_at_5
value: 25.047000000000004
- type: mrr_at_1
value: 20.0
- type: mrr_at_10
value: 26.93544973544974
- type: mrr_at_100
value: 27.71516198853175
- type: mrr_at_1000
value: 27.826849561947952
- type: mrr_at_20
value: 27.315907634676996
- type: mrr_at_3
value: 25.111111111111118
- type: mrr_at_5
value: 26.39444444444445
- type: nauc_map_at_1000_diff1
value: 38.90875196518647
- type: nauc_map_at_1000_max
value: 16.698673149307297
- type: nauc_map_at_1000_std
value: -5.498893443185168
- type: nauc_map_at_100_diff1
value: 38.83044667513577
- type: nauc_map_at_100_max
value: 16.71972262772784
- type: nauc_map_at_100_std
value: -5.539865548642064
- type: nauc_map_at_10_diff1
value: 39.25926884874051
- type: nauc_map_at_10_max
value: 16.524942017241298
- type: nauc_map_at_10_std
value: -5.749202543826665
- type: nauc_map_at_1_diff1
value: 49.05457170137309
- type: nauc_map_at_1_max
value: 14.868259525325092
- type: nauc_map_at_1_std
value: -4.292079658088571
- type: nauc_map_at_20_diff1
value: 38.93437072499329
- type: nauc_map_at_20_max
value: 16.922919039737465
- type: nauc_map_at_20_std
value: -5.749332924489055
- type: nauc_map_at_3_diff1
value: 41.05455892849972
- type: nauc_map_at_3_max
value: 14.955141225461105
- type: nauc_map_at_3_std
value: -5.272350889513596
- type: nauc_map_at_5_diff1
value: 39.5626587033532
- type: nauc_map_at_5_max
value: 16.12147315191173
- type: nauc_map_at_5_std
value: -5.198787066265291
- type: nauc_mrr_at_1000_diff1
value: 39.67774750351995
- type: nauc_mrr_at_1000_max
value: 18.633109278880546
- type: nauc_mrr_at_1000_std
value: -3.948466765941238
- type: nauc_mrr_at_100_diff1
value: 39.62056355901267
- type: nauc_mrr_at_100_max
value: 18.651758561578582
- type: nauc_mrr_at_100_std
value: -3.961746267971273
- type: nauc_mrr_at_10_diff1
value: 39.931392296273856
- type: nauc_mrr_at_10_max
value: 18.727459536451708
- type: nauc_mrr_at_10_std
value: -4.210421660343737
- type: nauc_mrr_at_1_diff1
value: 48.406522441072006
- type: nauc_mrr_at_1_max
value: 16.838876331934127
- type: nauc_mrr_at_1_std
value: -2.6170487568614895
- type: nauc_mrr_at_20_diff1
value: 39.69004320626915
- type: nauc_mrr_at_20_max
value: 18.85971232253515
- type: nauc_mrr_at_20_std
value: -4.07062911097351
- type: nauc_mrr_at_3_diff1
value: 41.60627165091408
- type: nauc_mrr_at_3_max
value: 18.181257422583062
- type: nauc_mrr_at_3_std
value: -4.4676136868319665
- type: nauc_mrr_at_5_diff1
value: 40.15073323512161
- type: nauc_mrr_at_5_max
value: 18.853660646196314
- type: nauc_mrr_at_5_std
value: -3.802773314454374
- type: nauc_ndcg_at_1000_diff1
value: 36.26334480758162
- type: nauc_ndcg_at_1000_max
value: 17.014745910821055
- type: nauc_ndcg_at_1000_std
value: -4.083781540768397
- type: nauc_ndcg_at_100_diff1
value: 34.49895308783121
- type: nauc_ndcg_at_100_max
value: 17.585730394111827
- type: nauc_ndcg_at_100_std
value: -4.494791613876944
- type: nauc_ndcg_at_10_diff1
value: 35.73990126300614
- type: nauc_ndcg_at_10_max
value: 18.119833878973406
- type: nauc_ndcg_at_10_std
value: -6.07251256812303
- type: nauc_ndcg_at_1_diff1
value: 48.406522441072006
- type: nauc_ndcg_at_1_max
value: 16.838876331934127
- type: nauc_ndcg_at_1_std
value: -2.6170487568614895
- type: nauc_ndcg_at_20_diff1
value: 34.60626344607442
- type: nauc_ndcg_at_20_max
value: 19.058859800970822
- type: nauc_ndcg_at_20_std
value: -5.731769224880343
- type: nauc_ndcg_at_3_diff1
value: 38.974986846940595
- type: nauc_ndcg_at_3_max
value: 15.997704782372086
- type: nauc_ndcg_at_3_std
value: -5.48762762284936
- type: nauc_ndcg_at_5_diff1
value: 36.40308278328965
- type: nauc_ndcg_at_5_max
value: 17.623597768125
- type: nauc_ndcg_at_5_std
value: -5.0004230501207285
- type: nauc_precision_at_1000_diff1
value: 8.618812295837216
- type: nauc_precision_at_1000_max
value: 12.884728137261888
- type: nauc_precision_at_1000_std
value: 14.476795505357366
- type: nauc_precision_at_100_diff1
value: 14.169703894875784
- type: nauc_precision_at_100_max
value: 21.652908843751838
- type: nauc_precision_at_100_std
value: 4.072184243188446
- type: nauc_precision_at_10_diff1
value: 21.621957511552335
- type: nauc_precision_at_10_max
value: 24.674331810598392
- type: nauc_precision_at_10_std
value: -6.535206812010593
- type: nauc_precision_at_1_diff1
value: 48.406522441072006
- type: nauc_precision_at_1_max
value: 16.838876331934127
- type: nauc_precision_at_1_std
value: -2.6170487568614895
- type: nauc_precision_at_20_diff1
value: 16.594862260676805
- type: nauc_precision_at_20_max
value: 27.67300094746059
- type: nauc_precision_at_20_std
value: -3.3139669997676195
- type: nauc_precision_at_3_diff1
value: 32.412407494330125
- type: nauc_precision_at_3_max
value: 19.660114318802954
- type: nauc_precision_at_3_std
value: -5.772090595238624
- type: nauc_precision_at_5_diff1
value: 24.133501401010022
- type: nauc_precision_at_5_max
value: 22.400905638563533
- type: nauc_precision_at_5_std
value: -3.738167687651009
- type: nauc_recall_at_1000_diff1
value: 29.949265294092797
- type: nauc_recall_at_1000_max
value: -5.517971998702277
- type: nauc_recall_at_1000_std
value: 13.18470801229431
- type: nauc_recall_at_100_diff1
value: 19.138600199031323
- type: nauc_recall_at_100_max
value: 14.981553890290922
- type: nauc_recall_at_100_std
value: -0.8392004918377559
- type: nauc_recall_at_10_diff1
value: 25.63192875975604
- type: nauc_recall_at_10_max
value: 19.82756204669105
- type: nauc_recall_at_10_std
value: -8.179736169069097
- type: nauc_recall_at_1_diff1
value: 49.05457170137309
- type: nauc_recall_at_1_max
value: 14.868259525325092
- type: nauc_recall_at_1_std
value: -4.292079658088571
- type: nauc_recall_at_20_diff1
value: 21.25266330954768
- type: nauc_recall_at_20_max
value: 22.57414610030232
- type: nauc_recall_at_20_std
value: -7.0182308187024685
- type: nauc_recall_at_3_diff1
value: 32.44875479342593
- type: nauc_recall_at_3_max
value: 15.451829298414783
- type: nauc_recall_at_3_std
value: -6.466943886230113
- type: nauc_recall_at_5_diff1
value: 27.358984452382145
- type: nauc_recall_at_5_max
value: 19.0178408887355
- type: nauc_recall_at_5_std
value: -5.510234533099017
- type: ndcg_at_1
value: 20.0
- type: ndcg_at_10
value: 29.526000000000003
- type: ndcg_at_100
value: 34.256
- type: ndcg_at_1000
value: 37.79
- type: ndcg_at_20
value: 31.032
- type: ndcg_at_3
value: 25.674000000000003
- type: ndcg_at_5
value: 28.072999999999997
- type: precision_at_1
value: 20.0
- type: precision_at_10
value: 4.433
- type: precision_at_100
value: 0.703
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_20
value: 2.5829999999999997
- type: precision_at_3
value: 10.778
- type: precision_at_5
value: 7.867
- type: recall_at_1
value: 18.667
- type: recall_at_10
value: 39.944
- type: recall_at_100
value: 63.361000000000004
- type: recall_at_1000
value: 91.622
- type: recall_at_20
value: 45.667
- type: recall_at_3
value: 30.194
- type: recall_at_5
value: 35.806
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: train
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 32.208999999999996
- type: map_at_1
value: 19.750999999999998
- type: map_at_10
value: 27.778999999999996
- type: map_at_100
value: 28.776000000000003
- type: map_at_1000
value: 28.88
- type: map_at_20
value: 28.363
- type: map_at_3
value: 25.145
- type: map_at_5
value: 26.76
- type: mrr_at_1
value: 21.631644004944377
- type: mrr_at_10
value: 29.26849726293484
- type: mrr_at_100
value: 30.164315874353086
- type: mrr_at_1000
value: 30.257277883856347
- type: mrr_at_20
value: 29.822712901868847
- type: mrr_at_3
value: 26.946847960445
- type: mrr_at_5
value: 28.343634116192835
- type: nauc_map_at_1000_diff1
value: 33.018476272890915
- type: nauc_map_at_1000_max
value: 19.25546126768736
- type: nauc_map_at_1000_std
value: -6.116135591079283
- type: nauc_map_at_100_diff1
value: 32.98218592347836
- type: nauc_map_at_100_max
value: 19.22487709679211
- type: nauc_map_at_100_std
value: -6.13221338555462
- type: nauc_map_at_10_diff1
value: 32.96247837739734
- type: nauc_map_at_10_max
value: 19.1295325277127
- type: nauc_map_at_10_std
value: -6.343690319216744
- type: nauc_map_at_1_diff1
value: 40.645675112799786
- type: nauc_map_at_1_max
value: 18.178739820975014
- type: nauc_map_at_1_std
value: -10.774073620195383
- type: nauc_map_at_20_diff1
value: 33.03739300395304
- type: nauc_map_at_20_max
value: 19.331748541190642
- type: nauc_map_at_20_std
value: -6.261221101063373
- type: nauc_map_at_3_diff1
value: 34.47896260150193
- type: nauc_map_at_3_max
value: 16.965081051605146
- type: nauc_map_at_3_std
value: -6.994913170062857
- type: nauc_map_at_5_diff1
value: 32.81764494845007
- type: nauc_map_at_5_max
value: 18.147067677266033
- type: nauc_map_at_5_std
value: -6.246164258051352
- type: nauc_mrr_at_1000_diff1
value: 32.731926929360036
- type: nauc_mrr_at_1000_max
value: 20.492445053047362
- type: nauc_mrr_at_1000_std
value: -5.6967107445447605
- type: nauc_mrr_at_100_diff1
value: 32.70415221930752
- type: nauc_mrr_at_100_max
value: 20.46499696496385
- type: nauc_mrr_at_100_std
value: -5.709286500520383
- type: nauc_mrr_at_10_diff1
value: 32.63641004938251
- type: nauc_mrr_at_10_max
value: 20.454066831395874
- type: nauc_mrr_at_10_std
value: -5.705626577206938
- type: nauc_mrr_at_1_diff1
value: 39.12929894156776
- type: nauc_mrr_at_1_max
value: 20.182341619713306
- type: nauc_mrr_at_1_std
value: -10.633343866507673
- type: nauc_mrr_at_20_diff1
value: 32.69882563730009
- type: nauc_mrr_at_20_max
value: 20.62565779036799
- type: nauc_mrr_at_20_std
value: -5.678443599360867
- type: nauc_mrr_at_3_diff1
value: 33.5479170982602
- type: nauc_mrr_at_3_max
value: 19.496389023038542
- type: nauc_mrr_at_3_std
value: -6.235325046185766
- type: nauc_mrr_at_5_diff1
value: 32.36622782666829
- type: nauc_mrr_at_5_max
value: 19.755487531550646
- type: nauc_mrr_at_5_std
value: -5.714531556486832
- type: nauc_ndcg_at_1000_diff1
value: 31.130800063986403
- type: nauc_ndcg_at_1000_max
value: 20.678952349072393
- type: nauc_ndcg_at_1000_std
value: -3.3915618154924267
- type: nauc_ndcg_at_100_diff1
value: 30.245572227636785
- type: nauc_ndcg_at_100_max
value: 20.29155846581541
- type: nauc_ndcg_at_100_std
value: -3.6346054663559224
- type: nauc_ndcg_at_10_diff1
value: 30.66641900921085
- type: nauc_ndcg_at_10_max
value: 20.586665607546802
- type: nauc_ndcg_at_10_std
value: -4.414026000530255
- type: nauc_ndcg_at_1_diff1
value: 39.12929894156776
- type: nauc_ndcg_at_1_max
value: 20.182341619713306
- type: nauc_ndcg_at_1_std
value: -10.633343866507673
- type: nauc_ndcg_at_20_diff1
value: 30.790374128361357
- type: nauc_ndcg_at_20_max
value: 21.149896099894892
- type: nauc_ndcg_at_20_std
value: -4.238868889400191
- type: nauc_ndcg_at_3_diff1
value: 32.82104372914893
- type: nauc_ndcg_at_3_max
value: 17.402000947531697
- type: nauc_ndcg_at_3_std
value: -5.410038358557045
- type: nauc_ndcg_at_5_diff1
value: 30.356434452803676
- type: nauc_ndcg_at_5_max
value: 18.612538101853477
- type: nauc_ndcg_at_5_std
value: -4.285463568299433
- type: nauc_precision_at_1000_diff1
value: 2.2630751304549666
- type: nauc_precision_at_1000_max
value: 24.659090844031212
- type: nauc_precision_at_1000_std
value: 13.877697493760916
- type: nauc_precision_at_100_diff1
value: 8.926077099370595
- type: nauc_precision_at_100_max
value: 24.058643243395505
- type: nauc_precision_at_100_std
value: 7.573502898799793
- type: nauc_precision_at_10_diff1
value: 19.41276754911562
- type: nauc_precision_at_10_max
value: 28.211444792067986
- type: nauc_precision_at_10_std
value: 0.8974925782848179
- type: nauc_precision_at_1_diff1
value: 39.12929894156776
- type: nauc_precision_at_1_max
value: 20.182341619713306
- type: nauc_precision_at_1_std
value: -10.633343866507673
- type: nauc_precision_at_20_diff1
value: 17.916685529025763
- type: nauc_precision_at_20_max
value: 29.864911467505355
- type: nauc_precision_at_20_std
value: 2.885776799617732
- type: nauc_precision_at_3_diff1
value: 25.864886987297353
- type: nauc_precision_at_3_max
value: 19.85150106808991
- type: nauc_precision_at_3_std
value: -2.2396459248665552
- type: nauc_precision_at_5_diff1
value: 19.540695484508134
- type: nauc_precision_at_5_max
value: 23.194786829993888
- type: nauc_precision_at_5_std
value: 0.9157739994492274
- type: nauc_recall_at_1000_diff1
value: 22.359872337248472
- type: nauc_recall_at_1000_max
value: 27.23487048509587
- type: nauc_recall_at_1000_std
value: 24.646636055776835
- type: nauc_recall_at_100_diff1
value: 19.631596996214693
- type: nauc_recall_at_100_max
value: 20.34354812525602
- type: nauc_recall_at_100_std
value: 4.856569919732777
- type: nauc_recall_at_10_diff1
value: 24.353720834590646
- type: nauc_recall_at_10_max
value: 22.96396377129604
- type: nauc_recall_at_10_std
value: -0.017636284022635965
- type: nauc_recall_at_1_diff1
value: 40.645675112799786
- type: nauc_recall_at_1_max
value: 18.178739820975014
- type: nauc_recall_at_1_std
value: -10.774073620195383
- type: nauc_recall_at_20_diff1
value: 24.151467083375223
- type: nauc_recall_at_20_max
value: 24.59410553622122
- type: nauc_recall_at_20_std
value: 0.4082869470891554
- type: nauc_recall_at_3_diff1
value: 28.807369741814306
- type: nauc_recall_at_3_max
value: 15.57045527204528
- type: nauc_recall_at_3_std
value: -1.8552293654406165
- type: nauc_recall_at_5_diff1
value: 24.004157260728366
- type: nauc_recall_at_5_max
value: 17.96640037433552
- type: nauc_recall_at_5_std
value: 0.34474131049346224
- type: ndcg_at_1
value: 21.632
- type: ndcg_at_10
value: 32.208999999999996
- type: ndcg_at_100
value: 37.049
- type: ndcg_at_1000
value: 40.028999999999996
- type: ndcg_at_20
value: 34.298
- type: ndcg_at_3
value: 27.24
- type: ndcg_at_5
value: 29.906
- type: precision_at_1
value: 21.632
- type: precision_at_10
value: 5.0680000000000005
- type: precision_at_100
value: 0.771
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_20
value: 3.004
- type: precision_at_3
value: 11.619
- type: precision_at_5
value: 8.554
- type: recall_at_1
value: 19.750999999999998
- type: recall_at_10
value: 44.456
- type: recall_at_100
value: 66.842
- type: recall_at_1000
value: 90.554
- type: recall_at_20
value: 52.515
- type: recall_at_3
value: 31.401
- type: recall_at_5
value: 37.708999999999996
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.64653465346535
- type: cosine_accuracy_threshold
value: 88.94755840301514
- type: cosine_ap
value: 85.55090209842295
- type: cosine_f1
value: 81.03896103896105
- type: cosine_f1_threshold
value: 88.06538581848145
- type: cosine_precision
value: 84.32432432432432
- type: cosine_recall
value: 78.0
- type: dot_accuracy
value: 99.03762376237624
- type: dot_accuracy_threshold
value: 1075.7030487060547
- type: dot_ap
value: 25.21099059479159
- type: dot_f1
value: 31.765253842571028
- type: dot_f1_threshold
value: 850.6962776184082
- type: dot_precision
value: 29.72972972972973
- type: dot_recall
value: 34.1
- type: euclidean_accuracy
value: 99.62673267326733
- type: euclidean_accuracy_threshold
value: 139.87888097763062
- type: euclidean_ap
value: 82.49414959124407
- type: euclidean_f1
value: 79.36507936507937
- type: euclidean_f1_threshold
value: 139.87888097763062
- type: euclidean_precision
value: 87.66626360338573
- type: euclidean_recall
value: 72.5
- type: main_score
value: 85.55090209842295
- type: manhattan_accuracy
value: 99.62574257425743
- type: manhattan_accuracy_threshold
value: 1919.0616607666016
- type: manhattan_ap
value: 82.48883233150048
- type: manhattan_f1
value: 79.23076923076923
- type: manhattan_f1_threshold
value: 1919.0616607666016
- type: manhattan_precision
value: 87.92682926829268
- type: manhattan_recall
value: 72.1
- type: max_accuracy
value: 99.64653465346535
- type: max_ap
value: 85.55090209842295
- type: max_f1
value: 81.03896103896105
- type: max_precision
value: 87.92682926829268
- type: max_recall
value: 78.0
- type: similarity_accuracy
value: 99.64653465346535
- type: similarity_accuracy_threshold
value: 88.94755840301514
- type: similarity_ap
value: 85.55062536231317
- type: similarity_f1
value: 81.03896103896105
- type: similarity_f1_threshold
value: 88.06538581848145
- type: similarity_precision
value: 84.32432432432432
- type: similarity_recall
value: 78.0
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: validation
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.60396039603961
- type: cosine_accuracy_threshold
value: 90.01673460006714
- type: cosine_ap
value: 83.01228862847526
- type: cosine_f1
value: 78.17312798725438
- type: cosine_f1_threshold
value: 88.54530453681946
- type: cosine_precision
value: 83.35220838052095
- type: cosine_recall
value: 73.6
- type: dot_accuracy
value: 99.02772277227723
- type: dot_accuracy_threshold
value: 1110.744857788086
- type: dot_ap
value: 22.52218563381859
- type: dot_f1
value: 30.113141862489123
- type: dot_f1_threshold
value: 849.8266220092773
- type: dot_precision
value: 26.656394453004623
- type: dot_recall
value: 34.599999999999994
- type: euclidean_accuracy
value: 99.58712871287129
- type: euclidean_accuracy_threshold
value: 134.52991247177124
- type: euclidean_ap
value: 80.92115563373864
- type: euclidean_f1
value: 77.02127659574468
- type: euclidean_f1_threshold
value: 142.99077987670898
- type: euclidean_precision
value: 82.27272727272728
- type: euclidean_recall
value: 72.39999999999999
- type: main_score
value: 83.01228862847526
- type: manhattan_accuracy
value: 99.58217821782178
- type: manhattan_accuracy_threshold
value: 1893.684196472168
- type: manhattan_ap
value: 80.76933282645034
- type: manhattan_f1
value: 76.57754010695186
- type: manhattan_f1_threshold
value: 1969.699478149414
- type: manhattan_precision
value: 82.29885057471265
- type: manhattan_recall
value: 71.6
- type: max_accuracy
value: 99.60396039603961
- type: max_ap
value: 83.01228862847526
- type: max_f1
value: 78.17312798725438
- type: max_precision
value: 83.35220838052095
- type: max_recall
value: 73.6
- type: similarity_accuracy
value: 99.60396039603961
- type: similarity_accuracy_threshold
value: 90.01673460006714
- type: similarity_ap
value: 83.01216562298424
- type: similarity_f1
value: 78.17312798725438
- type: similarity_f1_threshold
value: 88.54529857635498
- type: similarity_precision
value: 83.35220838052095
- type: similarity_recall
value: 73.6
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 39.515271541565205
- type: v_measure
value: 39.515271541565205
- type: v_measure_std
value: 3.8157758492289964
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 27.899435497123438
- type: v_measure
value: 27.899435497123438
- type: v_measure_std
value: 1.6019853493964817
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 35.63422266081505
- type: map
value: 35.63422266081505
- type: mrr
value: 36.807796982429345
- type: nAUC_map_diff1
value: 26.666870201624004
- type: nAUC_map_max
value: 13.080333206721551
- type: nAUC_map_std
value: -6.881272733047387
- type: nAUC_mrr_diff1
value: 20.676894534971595
- type: nAUC_mrr_max
value: 17.968238284800584
- type: nAUC_mrr_std
value: -4.994281211783994
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 31.113362641367416
- type: cosine_spearman
value: 30.502287149942592
- type: dot_pearson
value: 19.294344459453956
- type: dot_spearman
value: 20.392377472328057
- type: main_score
value: 30.502287149942592
- type: pearson
value: 31.113350499491155
- type: spearman
value: 30.49918290509572
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 35.923
- type: map_at_1
value: 0.128
- type: map_at_10
value: 0.7250000000000001
- type: map_at_100
value: 2.391
- type: map_at_1000
value: 5.172000000000001
- type: map_at_20
value: 1.023
- type: map_at_3
value: 0.328
- type: map_at_5
value: 0.486
- type: mrr_at_1
value: 48.0
- type: mrr_at_10
value: 60.469047619047615
- type: mrr_at_100
value: 61.31153650271297
- type: mrr_at_1000
value: 61.31599084569738
- type: mrr_at_20
value: 61.23751052868699
- type: mrr_at_3
value: 59.0
- type: mrr_at_5
value: 59.399999999999984
- type: nauc_map_at_1000_diff1
value: 6.633672215390218
- type: nauc_map_at_1000_max
value: 33.21260081274754
- type: nauc_map_at_1000_std
value: 41.03482629480025
- type: nauc_map_at_100_diff1
value: 10.531162540864399
- type: nauc_map_at_100_max
value: 23.03095240506771
- type: nauc_map_at_100_std
value: 30.80087333382512
- type: nauc_map_at_10_diff1
value: -2.8114974167869193
- type: nauc_map_at_10_max
value: 2.0423014894316243
- type: nauc_map_at_10_std
value: 16.283864542392415
- type: nauc_map_at_1_diff1
value: 0.1782017157375438
- type: nauc_map_at_1_max
value: -8.789897637515832
- type: nauc_map_at_1_std
value: 0.4973294620610552
- type: nauc_map_at_20_diff1
value: 2.3820789356483325
- type: nauc_map_at_20_max
value: 7.20053550756653
- type: nauc_map_at_20_std
value: 21.455996589757845
- type: nauc_map_at_3_diff1
value: 4.504854275606629
- type: nauc_map_at_3_max
value: -2.5693219044417117
- type: nauc_map_at_3_std
value: 12.739392924768882
- type: nauc_map_at_5_diff1
value: 3.9696480366034224
- type: nauc_map_at_5_max
value: -1.1779979417218152
- type: nauc_map_at_5_std
value: 17.4497348259494
- type: nauc_mrr_at_1000_diff1
value: 24.75100559480381
- type: nauc_mrr_at_1000_max
value: 9.95763749023373
- type: nauc_mrr_at_1000_std
value: 6.13398634519912
- type: nauc_mrr_at_100_diff1
value: 24.752735314741965
- type: nauc_mrr_at_100_max
value: 9.938061896740088
- type: nauc_mrr_at_100_std
value: 6.13494756268298
- type: nauc_mrr_at_10_diff1
value: 25.326714458028377
- type: nauc_mrr_at_10_max
value: 9.820219541900055
- type: nauc_mrr_at_10_std
value: 6.381369015536628
- type: nauc_mrr_at_1_diff1
value: 21.670900815073566
- type: nauc_mrr_at_1_max
value: 5.901344342119195
- type: nauc_mrr_at_1_std
value: 7.174235206944012
- type: nauc_mrr_at_20_diff1
value: 24.76356158895378
- type: nauc_mrr_at_20_max
value: 10.082751937806862
- type: nauc_mrr_at_20_std
value: 6.041604148094304
- type: nauc_mrr_at_3_diff1
value: 26.319106097167378
- type: nauc_mrr_at_3_max
value: 12.612903391072228
- type: nauc_mrr_at_3_std
value: 6.160183207045465
- type: nauc_mrr_at_5_diff1
value: 26.40858292910765
- type: nauc_mrr_at_5_max
value: 12.570771820168975
- type: nauc_mrr_at_5_std
value: 6.447127902898947
- type: nauc_ndcg_at_1000_diff1
value: 2.4598345332941776
- type: nauc_ndcg_at_1000_max
value: 21.832728918528527
- type: nauc_ndcg_at_1000_std
value: 31.539894810152408
- type: nauc_ndcg_at_100_diff1
value: 14.319034186890484
- type: nauc_ndcg_at_100_max
value: 28.091823921833452
- type: nauc_ndcg_at_100_std
value: 27.115767739065905
- type: nauc_ndcg_at_10_diff1
value: 14.50085421904166
- type: nauc_ndcg_at_10_max
value: 23.89546141084357
- type: nauc_ndcg_at_10_std
value: 20.47497751907962
- type: nauc_ndcg_at_1_diff1
value: 9.717221623610401
- type: nauc_ndcg_at_1_max
value: 13.435914956242929
- type: nauc_ndcg_at_1_std
value: -2.132664056135197
- type: nauc_ndcg_at_20_diff1
value: 16.020693834578285
- type: nauc_ndcg_at_20_max
value: 23.373941870652505
- type: nauc_ndcg_at_20_std
value: 24.316054665971738
- type: nauc_ndcg_at_3_diff1
value: 20.4532093525935
- type: nauc_ndcg_at_3_max
value: 21.909747951752653
- type: nauc_ndcg_at_3_std
value: 18.26051461720549
- type: nauc_ndcg_at_5_diff1
value: 20.32544583830186
- type: nauc_ndcg_at_5_max
value: 22.928720940688237
- type: nauc_ndcg_at_5_std
value: 22.666482387565257
- type: nauc_precision_at_1000_diff1
value: 6.919564664500802
- type: nauc_precision_at_1000_max
value: 36.20956219318488
- type: nauc_precision_at_1000_std
value: 36.68350600553764
- type: nauc_precision_at_100_diff1
value: 13.9626836301873
- type: nauc_precision_at_100_max
value: 32.60527593722456
- type: nauc_precision_at_100_std
value: 32.66873954260175
- type: nauc_precision_at_10_diff1
value: 13.548740222822547
- type: nauc_precision_at_10_max
value: 28.273390186627122
- type: nauc_precision_at_10_std
value: 28.320002825008412
- type: nauc_precision_at_1_diff1
value: 16.86075949367088
- type: nauc_precision_at_1_max
value: 15.874750166555597
- type: nauc_precision_at_1_std
value: 6.523650899400389
- type: nauc_precision_at_20_diff1
value: 15.729691637481869
- type: nauc_precision_at_20_max
value: 26.371188178013167
- type: nauc_precision_at_20_std
value: 32.707995216537114
- type: nauc_precision_at_3_diff1
value: 30.933270299990767
- type: nauc_precision_at_3_max
value: 27.10193930300196
- type: nauc_precision_at_3_std
value: 29.2437026607607
- type: nauc_precision_at_5_diff1
value: 27.87219321969933
- type: nauc_precision_at_5_max
value: 27.22435373293917
- type: nauc_precision_at_5_std
value: 34.884583936096625
- type: nauc_recall_at_1000_diff1
value: -5.812459853960746
- type: nauc_recall_at_1000_max
value: 19.962070213025026
- type: nauc_recall_at_1000_std
value: 35.52549808262046
- type: nauc_recall_at_100_diff1
value: 1.2300091223273408
- type: nauc_recall_at_100_max
value: 12.020972430196739
- type: nauc_recall_at_100_std
value: 22.450383122045768
- type: nauc_recall_at_10_diff1
value: -6.002152735014183
- type: nauc_recall_at_10_max
value: -1.0827658421393735
- type: nauc_recall_at_10_std
value: 12.894354482971012
- type: nauc_recall_at_1_diff1
value: 0.1782017157375438
- type: nauc_recall_at_1_max
value: -8.789897637515832
- type: nauc_recall_at_1_std
value: 0.4973294620610552
- type: nauc_recall_at_20_diff1
value: -1.2465608345532868
- type: nauc_recall_at_20_max
value: 1.2310411659787777
- type: nauc_recall_at_20_std
value: 17.34969059870651
- type: nauc_recall_at_3_diff1
value: 5.415402668174962
- type: nauc_recall_at_3_max
value: -3.1556039634599387
- type: nauc_recall_at_3_std
value: 9.25627463445265
- type: nauc_recall_at_5_diff1
value: 4.530058434446965
- type: nauc_recall_at_5_max
value: -1.4202166345648521
- type: nauc_recall_at_5_std
value: 16.31745182486404
- type: ndcg_at_1
value: 46.0
- type: ndcg_at_10
value: 35.923
- type: ndcg_at_100
value: 22.764
- type: ndcg_at_1000
value: 19.607
- type: ndcg_at_20
value: 31.785999999999998
- type: ndcg_at_3
value: 43.754
- type: ndcg_at_5
value: 40.525
- type: precision_at_1
value: 50.0
- type: precision_at_10
value: 37.8
- type: precision_at_100
value: 23.28
- type: precision_at_1000
value: 9.578000000000001
- type: precision_at_20
value: 33.300000000000004
- type: precision_at_3
value: 48.667
- type: precision_at_5
value: 44.0
- type: recall_at_1
value: 0.128
- type: recall_at_10
value: 0.894
- type: recall_at_100
value: 4.788
- type: recall_at_1000
value: 18.857
- type: recall_at_20
value: 1.4869999999999999
- type: recall_at_3
value: 0.369
- type: recall_at_5
value: 0.551
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 13.173000000000002
- type: map_at_1
value: 0.954
- type: map_at_10
value: 3.907
- type: map_at_100
value: 7.002999999999999
- type: map_at_1000
value: 8.616
- type: map_at_20
value: 4.746
- type: map_at_3
value: 2.205
- type: map_at_5
value: 3.061
- type: mrr_at_1
value: 14.285714285714285
- type: mrr_at_10
value: 31.07709750566893
- type: mrr_at_100
value: 32.124110790656715
- type: mrr_at_1000
value: 32.14097704128919
- type: mrr_at_20
value: 31.53061224489796
- type: mrr_at_3
value: 26.53061224489796
- type: mrr_at_5
value: 29.38775510204082
- type: nauc_map_at_1000_diff1
value: -0.6968998760298223
- type: nauc_map_at_1000_max
value: -39.09489347235095
- type: nauc_map_at_1000_std
value: -29.574538490971563
- type: nauc_map_at_100_diff1
value: -0.4335212113056909
- type: nauc_map_at_100_max
value: -39.586839303731644
- type: nauc_map_at_100_std
value: -31.081595476828944
- type: nauc_map_at_10_diff1
value: -4.416519199010543
- type: nauc_map_at_10_max
value: -36.72419891765279
- type: nauc_map_at_10_std
value: -25.673184525441428
- type: nauc_map_at_1_diff1
value: -4.6417615889702395
- type: nauc_map_at_1_max
value: -21.237169498537316
- type: nauc_map_at_1_std
value: -7.723424235137939
- type: nauc_map_at_20_diff1
value: -10.051037401562814
- type: nauc_map_at_20_max
value: -40.45977130921165
- type: nauc_map_at_20_std
value: -30.88874915665489
- type: nauc_map_at_3_diff1
value: -7.049300510325769
- type: nauc_map_at_3_max
value: -37.02698175134379
- type: nauc_map_at_3_std
value: -18.291922582943833
- type: nauc_map_at_5_diff1
value: -7.2574257600073215
- type: nauc_map_at_5_max
value: -36.84262266855635
- type: nauc_map_at_5_std
value: -20.124269639164762
- type: nauc_mrr_at_1000_diff1
value: -3.9937180208999203
- type: nauc_mrr_at_1000_max
value: -28.50731743417815
- type: nauc_mrr_at_1000_std
value: -22.448908564868734
- type: nauc_mrr_at_100_diff1
value: -3.9276182440461307
- type: nauc_mrr_at_100_max
value: -28.48142746114053
- type: nauc_mrr_at_100_std
value: -22.412212904364225
- type: nauc_mrr_at_10_diff1
value: -2.9463668974698445
- type: nauc_mrr_at_10_max
value: -27.394066403924366
- type: nauc_mrr_at_10_std
value: -22.6152511459836
- type: nauc_mrr_at_1_diff1
value: -4.739413345611014
- type: nauc_mrr_at_1_max
value: -17.167017096358446
- type: nauc_mrr_at_1_std
value: -15.837342789309266
- type: nauc_mrr_at_20_diff1
value: -3.040021770770149
- type: nauc_mrr_at_20_max
value: -28.76688056344144
- type: nauc_mrr_at_20_std
value: -23.23887511300124
- type: nauc_mrr_at_3_diff1
value: -10.998065585837066
- type: nauc_mrr_at_3_max
value: -29.419607469622665
- type: nauc_mrr_at_3_std
value: -19.924848931403393
- type: nauc_mrr_at_5_diff1
value: -7.759554270102111
- type: nauc_mrr_at_5_max
value: -29.018090406782406
- type: nauc_mrr_at_5_std
value: -19.174091692973512
- type: nauc_ndcg_at_1000_diff1
value: 5.1361325963290065
- type: nauc_ndcg_at_1000_max
value: -38.710018641349606
- type: nauc_ndcg_at_1000_std
value: -24.92959991656524
- type: nauc_ndcg_at_100_diff1
value: 10.729369466324925
- type: nauc_ndcg_at_100_max
value: -42.52742294304457
- type: nauc_ndcg_at_100_std
value: -33.19402759669042
- type: nauc_ndcg_at_10_diff1
value: 2.898543933620101
- type: nauc_ndcg_at_10_max
value: -34.980379912209195
- type: nauc_ndcg_at_10_std
value: -27.64470571207322
- type: nauc_ndcg_at_1_diff1
value: -4.739413345611014
- type: nauc_ndcg_at_1_max
value: -17.167017096358446
- type: nauc_ndcg_at_1_std
value: -15.837342789309266
- type: nauc_ndcg_at_20_diff1
value: -1.7344253244217491
- type: nauc_ndcg_at_20_max
value: -45.716824504807185
- type: nauc_ndcg_at_20_std
value: -36.632528991751876
- type: nauc_ndcg_at_3_diff1
value: -5.136542170230541
- type: nauc_ndcg_at_3_max
value: -32.22554120141047
- type: nauc_ndcg_at_3_std
value: -20.94136042989055
- type: nauc_ndcg_at_5_diff1
value: -2.596290717169571
- type: nauc_ndcg_at_5_max
value: -33.34440806065995
- type: nauc_ndcg_at_5_std
value: -21.61251712583122
- type: nauc_precision_at_1000_diff1
value: 1.012609043167873
- type: nauc_precision_at_1000_max
value: 29.296259840820436
- type: nauc_precision_at_1000_std
value: 24.771641976206652
- type: nauc_precision_at_100_diff1
value: 10.11991564863315
- type: nauc_precision_at_100_max
value: -11.054234268165935
- type: nauc_precision_at_100_std
value: -14.172177764046445
- type: nauc_precision_at_10_diff1
value: 3.467485951368031
- type: nauc_precision_at_10_max
value: -27.20500496391638
- type: nauc_precision_at_10_std
value: -29.872149530227983
- type: nauc_precision_at_1_diff1
value: -4.739413345611014
- type: nauc_precision_at_1_max
value: -17.167017096358446
- type: nauc_precision_at_1_std
value: -15.837342789309266
- type: nauc_precision_at_20_diff1
value: -6.147809869818651
- type: nauc_precision_at_20_max
value: -32.85230069140935
- type: nauc_precision_at_20_std
value: -36.32220333753927
- type: nauc_precision_at_3_diff1
value: -10.005536817728744
- type: nauc_precision_at_3_max
value: -35.85687079327832
- type: nauc_precision_at_3_std
value: -20.022934299613425
- type: nauc_precision_at_5_diff1
value: -5.3989073955156
- type: nauc_precision_at_5_max
value: -32.71572424397066
- type: nauc_precision_at_5_std
value: -20.817165096669992
- type: nauc_recall_at_1000_diff1
value: -4.722821108535302
- type: nauc_recall_at_1000_max
value: -31.810523865479812
- type: nauc_recall_at_1000_std
value: -10.908370803335487
- type: nauc_recall_at_100_diff1
value: 9.055368015956876
- type: nauc_recall_at_100_max
value: -34.526731060320884
- type: nauc_recall_at_100_std
value: -24.7954260686647
- type: nauc_recall_at_10_diff1
value: 4.499092234389264
- type: nauc_recall_at_10_max
value: -38.19947612907566
- type: nauc_recall_at_10_std
value: -34.21784298079438
- type: nauc_recall_at_1_diff1
value: -4.6417615889702395
- type: nauc_recall_at_1_max
value: -21.237169498537316
- type: nauc_recall_at_1_std
value: -7.723424235137939
- type: nauc_recall_at_20_diff1
value: -4.203104205235687
- type: nauc_recall_at_20_max
value: -44.20965153355658
- type: nauc_recall_at_20_std
value: -41.85696355997063
- type: nauc_recall_at_3_diff1
value: -9.68202159540116
- type: nauc_recall_at_3_max
value: -41.71648831488383
- type: nauc_recall_at_3_std
value: -23.518368324429677
- type: nauc_recall_at_5_diff1
value: -5.274423220483829
- type: nauc_recall_at_5_max
value: -40.95770465887291
- type: nauc_recall_at_5_std
value: -25.668112474846083
- type: ndcg_at_1
value: 14.285999999999998
- type: ndcg_at_10
value: 13.173000000000002
- type: ndcg_at_100
value: 23.476
- type: ndcg_at_1000
value: 36.949
- type: ndcg_at_20
value: 13.54
- type: ndcg_at_3
value: 15.285000000000002
- type: ndcg_at_5
value: 14.962
- type: precision_at_1
value: 14.285999999999998
- type: precision_at_10
value: 12.857
- type: precision_at_100
value: 5.633
- type: precision_at_1000
value: 1.4000000000000001
- type: precision_at_20
value: 10.0
- type: precision_at_3
value: 17.007
- type: precision_at_5
value: 16.326999999999998
- type: recall_at_1
value: 0.954
- type: recall_at_10
value: 8.554
- type: recall_at_100
value: 34.94
- type: recall_at_1000
value: 77.181
- type: recall_at_20
value: 12.664
- type: recall_at_3
value: 3.61
- type: recall_at_5
value: 5.738
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 66.8798828125
- type: ap
value: 11.100818499807124
- type: ap_weighted
value: 11.100818499807124
- type: f1
value: 50.237329592179
- type: f1_weighted
value: 74.19999800736711
- type: main_score
value: 66.8798828125
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 49.46236559139785
- type: f1
value: 49.3875261643851
- type: f1_weighted
value: 49.18091528568266
- type: main_score
value: 49.46236559139785
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 27.1981633543814
- type: v_measure
value: 27.1981633543814
- type: v_measure_std
value: 1.9089001983155218
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 80.93818918757823
- type: cosine_accuracy_threshold
value: 85.52719950675964
- type: cosine_ap
value: 53.85002958032924
- type: cosine_f1
value: 54.37434646218195
- type: cosine_f1_threshold
value: 80.31865358352661
- type: cosine_precision
value: 48.57795308283164
- type: cosine_recall
value: 61.74142480211082
- type: dot_accuracy
value: 77.40358824581271
- type: dot_accuracy_threshold
value: 3114.1380310058594
- type: dot_ap
value: 29.17315942761211
- type: dot_f1
value: 40.579512061778175
- type: dot_f1_threshold
value: 628.7033081054688
- type: dot_precision
value: 27.381720925943004
- type: dot_recall
value: 78.33773087071239
- type: euclidean_accuracy
value: 80.98587351731538
- type: euclidean_accuracy_threshold
value: 160.11368036270142
- type: euclidean_ap
value: 54.0224290125828
- type: euclidean_f1
value: 54.07953583947782
- type: euclidean_f1_threshold
value: 187.0739221572876
- type: euclidean_precision
value: 49.89962078964979
- type: euclidean_recall
value: 59.02374670184697
- type: main_score
value: 54.0224290125828
- type: manhattan_accuracy
value: 81.02759730583537
- type: manhattan_accuracy_threshold
value: 2122.5122451782227
- type: manhattan_ap
value: 53.93222486291484
- type: manhattan_f1
value: 54.09270216962524
- type: manhattan_f1_threshold
value: 2567.293930053711
- type: manhattan_precision
value: 50.76353540027765
- type: manhattan_recall
value: 57.889182058047496
- type: max_accuracy
value: 81.02759730583537
- type: max_ap
value: 54.0224290125828
- type: max_f1
value: 54.37434646218195
- type: max_precision
value: 50.76353540027765
- type: max_recall
value: 78.33773087071239
- type: similarity_accuracy
value: 80.93818918757823
- type: similarity_accuracy_threshold
value: 85.52719354629517
- type: similarity_ap
value: 53.89253293239196
- type: similarity_f1
value: 54.37434646218195
- type: similarity_f1_threshold
value: 80.31864166259766
- type: similarity_precision
value: 48.57795308283164
- type: similarity_recall
value: 61.74142480211082
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 86.30418752668142
- type: cosine_accuracy_threshold
value: 82.06204771995544
- type: cosine_ap
value: 79.41032516266456
- type: cosine_f1
value: 72.25173083263122
- type: cosine_f1_threshold
value: 79.17131185531616
- type: cosine_precision
value: 68.91202571448535
- type: cosine_recall
value: 75.93162919618109
- type: dot_accuracy
value: 82.01187565490744
- type: dot_accuracy_threshold
value: 602.0369052886963
- type: dot_ap
value: 66.20375259390481
- type: dot_f1
value: 65.31670176575409
- type: dot_f1_threshold
value: 544.6255683898926
- type: dot_precision
value: 56.385299546903845
- type: dot_recall
value: 77.61010163227596
- type: euclidean_accuracy
value: 85.48531066868476
- type: euclidean_accuracy_threshold
value: 155.22809028625488
- type: euclidean_ap
value: 76.62518612326706
- type: euclidean_f1
value: 69.57865380266827
- type: euclidean_f1_threshold
value: 169.5176362991333
- type: euclidean_precision
value: 68.14557803041488
- type: euclidean_recall
value: 71.07329842931938
- type: main_score
value: 79.41032516266456
- type: manhattan_accuracy
value: 85.4697869367796
- type: manhattan_accuracy_threshold
value: 2141.2647247314453
- type: manhattan_ap
value: 76.54865554511647
- type: manhattan_f1
value: 69.61738736699118
- type: manhattan_f1_threshold
value: 2340.584945678711
- type: manhattan_precision
value: 68.26254254846825
- type: manhattan_recall
value: 71.02710194025255
- type: max_accuracy
value: 86.30418752668142
- type: max_ap
value: 79.41032516266456
- type: max_f1
value: 72.25173083263122
- type: max_precision
value: 68.91202571448535
- type: max_recall
value: 77.61010163227596
- type: similarity_accuracy
value: 86.30418752668142
- type: similarity_accuracy_threshold
value: 82.06204771995544
- type: similarity_ap
value: 79.3959193975441
- type: similarity_f1
value: 72.25173083263122
- type: similarity_f1_threshold
value: 79.17131185531616
- type: similarity_precision
value: 68.91202571448535
- type: similarity_recall
value: 75.93162919618109
---
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
declare-lab/TangoFlux | declare-lab | text-to-audio | [
"text-to-audio",
"dataset:cvssp/WavCaps",
"arxiv:2412.21037",
"endpoints_compatible",
"region:us"
] | 1,735,026,785,000 | 2025-01-22T14:39:59 | 2,091 | 89 | ---
datasets:
- cvssp/WavCaps
pipeline_tag: text-to-audio
cite: arxiv.org/abs/2412.21037
---
<h1 align="center">
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
</h1>
<div align="center">
<img src="https://raw.githubusercontent.com/declare-lab/TangoFlux/refs/heads/main/assets/tf_teaser.png" alt="TangoFlux" width="1000" />
<br/>
<div style="display: flex; gap: 10px; align-items: center;">
<a href="https://openreview.net/attachment?id=tpJPlFTyxd&name=pdf">
<img src="https://img.shields.io/badge/Read_the_Paper-blue?link=https%3A%2F%2Fopenreview.net%2Fattachment%3Fid%3DtpJPlFTyxd%26name%3Dpdf" alt="arXiv">
</a>
<a href="https://huggingface.co/declare-lab/TangoFlux">
<img src="https://img.shields.io/badge/TangoFlux-Huggingface-violet?logo=huggingface&link=https%3A%2F%2Fhuggingface.co%2Fdeclare-lab%2FTangoFlux" alt="Static Badge">
</a>
<a href="https://tangoflux.github.io/">
<img src="https://img.shields.io/badge/Demos-declare--lab-brightred?style=flat" alt="Static Badge">
</a>
<a href="https://huggingface.co/spaces/declare-lab/TangoFlux">
<img src="https://img.shields.io/badge/TangoFlux-Huggingface_Space-8A2BE2?logo=huggingface&link=https%3A%2F%2Fhuggingface.co%2Fspaces%2Fdeclare-lab%2FTangoFlux" alt="Static Badge">
</a>
<a href="https://huggingface.co/datasets/declare-lab/CRPO">
<img src="https://img.shields.io/badge/TangoFlux_Dataset-Huggingface-red?logo=huggingface&link=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Fdeclare-lab%2FTangoFlux" alt="Static Badge">
</a>
<a href="https://github.com/declare-lab/TangoFlux">
<img src="https://img.shields.io/badge/Github-brown?logo=github&link=https%3A%2F%2Fgithub.com%2Fdeclare-lab%2FTangoFlux" alt="Static Badge">
</a>
</div>
</div>
* Powered by **Stability AI**
## Model Overview
TangoFlux consists of FluxTransformer blocks, which are Diffusion Transformer (DiT) and Multimodal Diffusion Transformer (MMDiT) blocks conditioned on a textual prompt and a duration embedding to generate audio at 44.1kHz for up to 30 seconds. TangoFlux learns a rectified flow trajectory over audio latent representations encoded by a variational autoencoder (VAE). The TangoFlux training pipeline consists of three stages: pre-training, fine-tuning, and preference optimization. TangoFlux is aligned via CRPO, which iteratively generates new synthetic data and constructs preference pairs to perform preference optimization.
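At sampling time, this means generation integrates the learned velocity field from noise to an audio latent, which the VAE then decodes into a waveform. The sketch below only illustrates that idea; it is a minimal Euler integrator with a placeholder `velocity_model` callable and latent shape, not TangoFlux's actual implementation.
```python
import torch
def sample_rectified_flow(velocity_model, text_cond, latent_shape, steps=25):
    """Illustrative Euler sampler: integrate a learned velocity field from noise toward data."""
    x = torch.randn(latent_shape)                 # start from Gaussian noise in the VAE latent space
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full(latent_shape[:1], i * dt)  # current position along the flow trajectory
        v = velocity_model(x, t, text_cond)       # model predicts the velocity at (x, t)
        x = x + dt * v                            # one Euler step along the rectified flow
    return x                                      # audio latent, decoded to a waveform by the VAE
```
More integration steps trace the trajectory more finely, which is why the 50-step setting recommended below tends to sound better than the 25-step default.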
## Getting Started
Get TangoFlux from our GitHub repo https://github.com/declare-lab/TangoFlux with
```bash
pip install git+https://github.com/declare-lab/TangoFlux
```
The model will be downloaded automatically and saved to a local cache; subsequent runs load it directly from the cache.
The `generate` function uses 25 steps by default to sample from the flow model. We recommend using 50 steps for higher-quality audio, at the cost of increased run time.
```python
import torchaudio  # imported for saving audio to disk (not used in this minimal example)
from tangoflux import TangoFluxInference
from IPython.display import Audio
# Load the pretrained checkpoint (downloaded and cached on the first run).
model = TangoFluxInference(name='declare-lab/TangoFlux')
# Generate 10 seconds of 44.1 kHz audio from the text prompt using 50 sampling steps.
audio = model.generate('Hammer slowly hitting the wooden table', steps=50, duration=10)
# Play the result in a notebook.
Audio(data=audio, rate=44100)
```
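To keep the result instead of only playing it in a notebook, something like the following should work; it assumes `generate` returns the waveform as a tensor or array of samples at 44.1 kHz, so check the shape your installed version returns before saving.
```python
import torch
import torchaudio
# `audio` is the output of model.generate(...) above. torchaudio.save expects a
# 2-D (channels, samples) float tensor, so add a channel dimension for mono output.
waveform = torch.as_tensor(audio, dtype=torch.float32)
if waveform.dim() == 1:
    waveform = waveform.unsqueeze(0)
torchaudio.save('hammer.wav', waveform, sample_rate=44100)
```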
## License
The TangoFlux checkpoints are for non-commercial research use only. They are subject to the [Stable Audio Open license](https://huggingface.co/stabilityai/stable-audio-open-1.0/blob/main/LICENSE.md), the [WavCaps license](https://github.com/XinhaoMei/WavCaps?tab=readme-ov-file#license), and the original licenses accompanying each training dataset.
This Stability AI Model is licensed under the Stability AI Community License, Copyright © Stability AI Ltd. All Rights Reserved
## Citation
https://arxiv.org/abs/2412.21037
```bibtex
@misc{hung2024tangofluxsuperfastfaithful,
title={TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization},
author={Chia-Yu Hung and Navonil Majumder and Zhifeng Kong and Ambuj Mehrish and Rafael Valle and Bryan Catanzaro and Soujanya Poria},
year={2024},
eprint={2412.21037},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2412.21037},
}
``` | [
"CHIA"
] | Non_BioNLP |
MikeRoz/sophosympatheia_Nova-Tempus-70B-v0.2-3.5bpw-h6-exl2 | MikeRoz | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"conversational",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:sophosympatheia/Nova-Tempus-70B-v0.1",
"base_model:merge:sophosympatheia/Nova-Tempus-70B-v0.1",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | 1,737,972,924,000 | 2025-02-03T04:08:43 | 10 | 0 | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- sophosympatheia/Nova-Tempus-70B-v0.1
language:
- en
library_name: transformers
license: llama3.3
tags:
- mergekit
- merge
- not-for-all-audiences
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/4fCqX0w.png" alt="NovaTempus" style="width: 80%; min-width: 400px; display: block; margin: auto;">
</div>
---
# Nova-Tempus-70B-v0.2
This 70B parameter model is a merge of some unreleased models of mine closely related to my [sophosympatheia/Nova-Tempus-70B-v0.1](https://huggingface.co/sophosympatheia/Nova-Tempus-70B-v0.1) model with [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B).
This model is uncensored. *You are responsible for whatever you do with it.*
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
# Known Issues
**UPDATE 02/01/2025**: I fixed the tokenizer issues that were causing formatting trouble and EOS problems where the model wouldn't stop on its own. If you pulled this repo prior to 02/01/2025, you should pull it again to receive the fixed files.
# Sampler Tips
* Keep Min-P low, like 0.02 - 0.05.
* Temp is best in the 1 - 1.25 range. Make sure temperature is last in your sampler settings.
* DRY repetition penalty helps. Experiment with a multiplier around 0.5 and a base around 1.5.
Experiment with any and all of the settings below! What suits my preferences may not suit yours.
If you save the below settings as a .json file, you can import them directly into Silly Tavern. Adjust settings as needed, especially the context length.
```json
{
"temp": 1.25,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.03,
"rep_pen": 1,
"rep_pen_range": 8192,
"rep_pen_decay": 0,
"rep_pen_slope": 1,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"skew": 0,
"do_sample": true,
"early_stopping": false,
"dynatemp": false,
"min_temp": 1,
"max_temp": 1,
"dynatemp_exponent": 1,
"smoothing_factor": 0,
"smoothing_curve": 1,
"dry_allowed_length": 2,
"dry_multiplier": 0.5,
"dry_base": 1.5,
"dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
"dry_penalty_last_n": 0,
"add_bos_token": true,
"ban_eos_token": false,
"skip_special_tokens": false,
"mirostat_mode": 0,
"mirostat_tau": 2,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"json_schema": {},
"banned_tokens": "",
"sampler_priority": [
"repetition_penalty",
"dry",
"presence_penalty",
"top_k",
"top_p",
"typical_p",
"epsilon_cutoff",
"eta_cutoff",
"tfs",
"top_a",
"min_p",
"mirostat",
"quadratic_sampling",
"dynamic_temperature",
"frequency_penalty",
"temperature",
"xtc",
"encoder_repetition_penalty",
"no_repeat_ngram"
],
"samplers": [
"dry",
"top_k",
"tfs_z",
"typical_p",
"top_p",
"min_p",
"xtc",
"temperature"
],
"samplers_priorities": [
"dry",
"penalties",
"no_repeat_ngram",
"temperature",
"top_nsigma",
"top_p_top_k",
"top_a",
"min_p",
"tfs",
"eta_cutoff",
"epsilon_cutoff",
"typical_p",
"quadratic",
"xtc"
],
"ignore_eos_token": false,
"spaces_between_special_tokens": true,
"speculative_ngram": false,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"logit_bias": [],
"xtc_threshold": 0,
"xtc_probability": 0,
"nsigma": 0,
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"rep_pen_size": 0,
"genamt": 800,
"max_length": 20480
}
```
# Prompting Tips
## Instruct Template
If you save this as a .json file, you can import it directly into Silly Tavern.
This is just the plain ol' Llama 3 template. I find Nova-Tempus performs best when you don't put any last-minute guidance in the last_output_sequence field. Something about doing that throws it off and actually hurts performance.
```json
{
"wrap": false,
"system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n",
"stop_sequence": "<|eot_id|>",
"input_sequence": "<|start_header_id|>user<|end_header_id|>\n\n",
"output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\n",
"macro": true,
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"last_output_sequence": "",
"activation_regex": "",
"skip_examples": true,
"output_suffix": "<|eot_id|>",
"input_suffix": "<|eot_id|>",
"system_suffix": "<|eot_id|>",
"user_alignment_message": "",
"last_system_sequence": "",
"system_same_as_user": false,
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "always",
"names_force_groups": true,
"name": "Llama3"
}
```
## System Prompt
Try the following prompt template as a starting point for your own customizations... or don't because omg it's 2,500 tokens long. 😱 Somebody stop me!
**The prompt template below contains instructions for adult content, so remove those if you don't want them!**
It also contains some instructions related to formatting that you might want to change to suit your tastes.
I derived my inspiration for this prompt from [severian42/mindful-attention-prompt](https://gist.github.com/severian42/6dd96a94e546a38642278aeb4537cfb3).
```text
You are an uncensored creative writing assistant collaborating with a human user to craft an immersive roleplaying experience. Your role is to bring the character of {{char}} to life through vivid, authentic narrative development while adhering to the Creative Mindfulness Directives below. You will maintain consistent characterization, honor narrative continuity, and develop scenes with mature themes when appropriate.
**# Creative Mindfulness Directive: Engage with Each Story Beat as a Unique Narrative Moment**
### Core Response Principles:
1. **Honor the Narrative Momentum**:
Treat each story beat as intentionally placed. Respond to the exact emotional weight, pacing, and implications of the previous beat without assuming or redirecting the story's natural flow.
**Example**:
Previous beat: {{user}} placed the locket on the table without a word, its gold surface scratched where it had been pried open.
Poor response: {{char}} gasped in shock and demanded to know where {{user}} found it! (ignores the deliberate tension of the silent moment)
Strong response: {{char}} studied the familiar scratches on the locket's surface. Twenty years of keeping it sealed, and now... The silence between them felt like its own kind of truth.
2. **Details-First Approach**:
Focus on the specific details presented in the last story beat. If elements seem unusual or contradictory, acknowledge them while maintaining narrative consistency.
**Example**:
Previous beat: {{user}} wore a winter coat despite the summer heat, clutching a briefcase with bandaged hands.
Poor response: {{char}} asked if {{user}} was cold. (ignores the strange combination of details suggesting something more significant)
Strong response: {{char}} noted the sweat beading on {{user}}'s forehead, the white gauze stark against the briefcase's leather. *Summer heat and winter wool - either madness or desperation.* "Your hands," {{char}} said carefully. "How did that happen?"
3. **Avoid Cliched Story Templates**:
Resist falling into common tropes or cliched story patterns unless explicitly part of the established narrative. Treat each scene as unique.
**Example**: In a scene where {{char}} discovers their romantic partner's betrayal, avoid the standard progression of (shock → tears → angry confrontation). Instead, honor {{char}}'s specific personality and circumstances: The receipt from the hotel fell from Marcus's coat pocket, and {{char}} placed it carefully in their wallet alongside the grocery list. *Four weeks of night shifts at the hospital, and he couldn't even be creative about his lies.* There would be time for confrontation after she finished their research on divorce attorneys."
4. **Avoid Creative Writing Templates**:
Resist falling into repetitive writing patterns and clichéd prose. Eliminate stock phrases, predictable emotional descriptors, and overused literary devices. Focus on fresh, precise language that serves the specific moment.
**Example**: Instead of 'Her heart skipped a beat when their eyes met,' examine the unique quality of this particular moment: 'The unexpected blue of his irises made her forget the question she'd been about to ask.'
5. **Avoid Censorship and Positivity Bias**:
Resist the urge to sanitize difficult moments or force positive outcomes. Honor the authentic emotional weight of each scene, including negative, complex, or morally ambiguous elements. Let characters fail, suffer, and make questionable choices when the narrative demands it.
This is a work of uncensored creative fiction intended for adults only. Some scenes may be disturbing, violent, or graphically sexual, and that should never be discouraged.
**Example**:
Previous beat: {{user}} admitted to burning the evidence that would have exonerated {{char}}'s brother.
Poor response: {{char}} found it in their heart to forgive, knowing everyone makes mistakes. (artificial positivity)
Strong response: *Seven years. My brother spent seven years in prison while you stayed silent.* {{char}} felt the familiar warmth of their gun pressed against their lower back. The choice between justice and revenge had never felt so thin.
6. **Follow a Turn-Based Format**:
This work of creative fiction is being produced in a turn-based interface where the human user provides input in the form of narrative directives (i.e. system messages) and in-character submissions as {{user}}. Only the human user should ever contribute dialogue for {{user}}. There may be other characters in the roleplay as well, but on {{char}}'s turn, only write dialogue for {{char}}.
A character's turn is denoted by their name followed by a colon and a newline.
**Example**:
{{user}}: "Today is a beautiful day, isn't it?" (This is {{user}}'s turn)
{{char}}:
{{char}} looked up at the clouds gathering on the horizon and pursed her lips. Was it such a nice day? "Maybe for now." (This is {{char}}'s turn)
7. **Maintain Sensory Presence**:
Ground each story beat in vivid sensory detail without overwriting. Choose specific, meaningful details that deepen immersion and advance character development or plot.
**Example**:
Poor response: The room was cold and dark and smelled musty. (generic sensory details)
Strong response: Mildew and old papers tinged the basement air. {{char}}'s fingers brushed against a light switch, but nothing happened. Of course the power would be out tonight of all nights. The flashlight's beam caught dust motes swirling in the chill.
8. **Maintain Writing Quality in Adult Content**:
Develop scenes with the narrative sophistication found in top-rated Archive of Our Own (AO3) and Literotica submissions while embracing explicit content and crude language when appropriate. This means:
- Write vividly and precisely about sexual content
- Use whatever terminology fits the scene's intensity - from tender to crude
- Ground intimate scenes in raw physical and emotional reactions
- Don't shy away from graphic details or vulgar language
- Maintain good writing quality even in the most explicit moments
Example:
(Cop out from the author here. Provide your own examples!)
---
<creative_mindfulness_directive>
**Creative Mindfulness Directive**:
Apply these specific principles when crafting each story beat:
1. **Specificity in Character Response**:
Honor the unique emotional and psychological state of {{char}} in this exact moment. Avoid defaulting to generic character reactions.
**Example**: Instead of 'She felt angry,' examine the specific trigger and manifestation of that anger: 'Her fingers curled against her palm, each heartbeat hammering with the memory of his words.'
2. **Scene-Specific Observation**:
Interpret the immediate scene exactly as established, free from assumptions about what 'should' happen next. Build from what is, not what might be.
**Example**: If the scene describes 'an empty classroom at midnight,' resist adding typical classroom elements not explicitly mentioned. Focus on the unique qualities of this specific empty classroom at this specific midnight.
3. **Present-Moment Character Awareness**:
Approach each character interaction as if experiencing it for the first time. Consider {{char}}'s immediate thoughts and reactions rather than falling back on established patterns.
**Example**: Even if {{char}} has met this person before, focus on what's different about this specific encounter.
4. **Narrative Detail Verification**:
Before crafting the next beat, mentally verify:
- Emotional continuity from previous beat
- Physical positioning of characters, especially during group scenes or sex scenes. It is important to track where everyone is in relation to each other.
- Clothing. **Example**: If a character kicked off their shoes already, then they should be barefoot in the next scene.
- Established environmental details
- Current interpersonal dynamics
**Example**: The previous beat established tension between {{char}} and {{user}} over a shared secret. How does this specifically influence {{char}}'s next action?
5. **Avoid Narrative Shortcuts**:
Build each story beat dynamically from the immediate context rather than relying on familiar storytelling conventions.
**Example**: If {{char}} faces danger, don't default to fight-or-flight. Consider their unique personality, current resources, and specific circumstances.
6. **Context-Driven Development**:
Develop each scene element fully as directed by system messages, which will contain either specific instructions or brief summaries. Your role is to transform these summaries into vivid, detailed scenes that show rather than tell. Key principles:
- Treat system messages as scene outlines to be developed, not events that have already occurred
- Elaborate fully on each element before moving forward
- Stay within the bounds of the provided instruction; do not anticipate or rush ahead
- Add sensory and emotional detail while maintaining narrative focus
- Show the complete progression of described events
**Example**:
<system>
{{char}} wakes up feeling thirsty and goes to the kitchen to get a glass of water.
Poor response (rushes ahead, lacks detail):
{{char}} got some water from the kitchen and drank it.
Poor response (adds events beyond instruction):
{{char}} got water from the kitchen, then decided to check their phone and ended up scrolling social media for an hour.
Strong response (develops scene as directed):
{{char}} shifted beneath the covers, throat dry and scratchy in the darkness. *Three AM again.* The digital clock's red numbers burned against their retinas as they swung their legs over the side of the bed. The hardwood floor sent a shock of cold through their bare feet, but the discomfort barely registered through the desperate need for water.
The hallway creaked under their steps despite their careful movements. Moonlight spilled through the kitchen window, catching the rim of a clean glass in the dish rack. {{char}} filled it at the tap, the quiet rush of water impossibly loud in the sleeping house. They drank deeply, the cool liquid soothing their parched throat.
</creative_mindfulness_directive>
---
<writing_formatting_directive>
**Writing Formatting Directive**:
Follow these guidelines for how to format prose in this work of creative fiction:
1. **Always Enclose Spoken Words in Double Quotes**:
Whenever a character speaks or utters some kind of sound that can be heard, enclose that dialogue in double quotes.
**Examples**:
"Watch out!" he cried to the woman as the bookshelf wobbled.
The sting of the alcohol was intense on his wound. "Tsss!" he hissed between his teeth, but it had to be done.
2. **Always Italicize Thoughts**:
Whenever a character thinks something in the form of internal monologue, italicize those first-person thoughts to add emphasis.
**Example**: {{char}} looked out the window of the classroom as the professor droned on about Egyptian history. *I wish I was outside right now. The clouds look so fluffy today...*
3. **Adhere to a Third-Person, Past Tense Narrative Style**:
Unless instructed otherwise by the human user, writing using a third-person, past-tense style. However, you may switch to first-person present tense for internal character thoughts.
**Example**: The leaves were beginning to turn bright with Fall colors and {{char}} couldn't be happier. *I love this time of year*, she thought as she watched the leaves rustle from their perch on the park bench. *I can't wait for Halloween.*
4. **Vary Sentence and Paragraph Structure**
Balance rhythm and pacing through deliberate variation in sentence length and paragraph structure. Avoid falling into repetitive patterns of either choppy sentences or overlong passages. Use brief, punchy lines sparingly for dramatic effect.
Example:
Poor rhythm (too choppy):
{{char}} entered the room. They saw the letter. Their hands shook. The paper felt heavy. Time stopped. Their breath caught.
Poor rhythm (too uniform):
{{char}} entered the room and immediately noticed the letter sitting on the desk, which made their hands begin to shake as they approached it, and when they picked up the paper it felt unusually heavy in their grip, causing time to seem to stop around them as their breath caught in their throat.
Strong rhythm (varied):
{{char}} entered the room. The letter waited on the desk, innocent and white against the dark wood. Their hands trembled as they lifted it, the paper's unexpected weight settling like dread in their palm. Time stopped.
</writing_formatting_directive>
**# Apply this mindful creative process before crafting each story beat.**
```
# Donations
<div>
<a href="https://ko-fi.com/sophosympatheia">
<img src="https://i.imgur.com/LySwHVd.png" alt="Donations" style="width: 20%; min-width: 200px; display: block;">
</a>
</div>
If you feel like saying thanks with a donation, <a href="https://ko-fi.com/sophosympatheia">I'm on Ko-Fi</a>
# Quantizations
Pending
# Licence and usage restrictions
The Llama 3.3 Community License Agreement is available at: https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE
**Disclaimer: Uncertain Licensing Terms**
This LLM is a merged model incorporating weights from multiple LLMs governed by their own distinct licenses. Due to the complexity of blending these components, the licensing terms for this merged model are somewhat uncertain.
By using this model, you acknowledge and accept the potential legal risks and uncertainties associated with its use. Any use beyond personal or research purposes, including commercial applications, may carry legal risks and you assume full responsibility for compliance with all applicable licenses and laws.
I recommend consulting with legal counsel to ensure your use of this model complies with all relevant licenses and regulations.
# Merge Details
## Merge Method
This model was merged using the SLERP merge method.
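For readers who have not seen it, SLERP interpolates along the arc between two weight vectors rather than along the straight line between them. The sketch below is a simplified, generic illustration of that formula on a single pair of tensors; it is not mergekit's exact implementation, which also applies the per-component `t` schedule shown in the configuration further down.
```python
import torch
def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative only)."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    a_n, b_n = a_f / (a_f.norm() + eps), b_f / (b_f.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel weights: plain linear interpolation is fine
        merged = (1.0 - t) * a_f + t * b_f
    else:
        merged = (torch.sin((1.0 - t) * omega) / so) * a_f + (torch.sin(t * omega) / so) * b_f
    return merged.reshape(a.shape).to(a.dtype)
```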
## Models Merged
The following models were included in the merge:
* deepseek-ai/DeepSeek-R1-Distill-Llama-70B
* unreleased-novatempus-70b-v0.1.1
## Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- model: unreleased-novatempus-70b-v0.1.1
merge_method: slerp
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
parameters:
t:
- filter: self_attn
value: [0.2, 0.25, 0.3, 0.25, 0.2]
- filter: "q_proj|k_proj|v_proj"
value: [0.2, 0.25, 0.3, 0.25, 0.2]
- filter: "up_proj|down_proj"
value: [0.2, 0.3, 0.4, 0.3, 0.2]
- filter: mlp
value: [0.25, 0.35, 0.55, 0.35, 0.25]
- value: 0.45 # default for other components
dtype: bfloat16
tokenizer:
source: deepseek-ai/DeepSeek-R1-Distill-Llama-70B #necessary to fix tokenizer
```
| [
"CRAFT"
] | Non_BioNLP |
sultan/BioM-ALBERT-xxlarge | sultan | fill-mask | [
"transformers",
"pytorch",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-11-04T23:06:35 | 204 | 2 | ---
{}
---
# BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.
# Model Description
This model was pre-trained on PubMed Abstracts only, with a biomedical domain vocabulary, for 264K steps with a batch size of 8192 on a TPUv3-512 unit. To help researchers with limited resources fine-tune larger models, we created an example with PyTorch XLA. PyTorch XLA (https://github.com/pytorch/xla) is a library that allows you to use PyTorch on TPUs, which are provided for free by Google Colab and Kaggle. Follow this example to work with PyTorch/XLA: [Link](https://github.com/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints. We also updated the repo with examples of how to fine-tune LMs on text classification and question answering tasks such as ChemProt, SQuAD, and BioASQ.
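For a quick sanity check of the checkpoint before moving to the notebooks below, a minimal sketch with the standard HuggingFace Transformers masked-LM classes could look like this; the example sentence is made up, and it assumes the tokenizer files in this repo load via `AutoTokenizer`.
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("sultan/BioM-ALBERT-xxlarge")
model = AutoModelForMaskedLM.from_pretrained("sultan/BioM-ALBERT-xxlarge")
# ALBERT-style models use [MASK] as the mask token.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("The patient was treated with [MASK] to control their blood pressure."))
```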
# Colab Notebook Examples
BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)
BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)
BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)
Text Classification Task With HuggingFace Transformers and PyTorchXLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
Reproducing our BLURB results with JAX [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/BLURB_LeaderBoard_with_TPU_VM.ipynb)
Fine-tuning BioM-Transformers with Jax/Flax on TPUv3-8 with free Kaggle resources [![Open In Colab][COLAB]](https://www.kaggle.com/code/sultanalrowili/biom-transoformers-with-flax-on-tpu-with-kaggle)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# Acknowledgment
We would like to acknowledge the support of the TensorFlow Research Cloud (TFRC) team for granting us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` | [
"BLURB",
"CHEMPROT"
] | BioNLP |
Salesforce/xgen-mm-phi3-mini-instruct-dpo-r-v1.5 | Salesforce | image-text-to-text | [
"safetensors",
"xgenmm",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"arxiv:2408.08872",
"license:apache-2.0",
"region:us"
] | 1,723,176,981,000 | 2025-02-03T06:10:51 | 133 | 18 | ---
language:
- en
license: apache-2.0
pipeline_tag: image-text-to-text
---
# Model description
`xGen-MM` is a series of the latest foundational Large Multimodal Models (LMMs) developed by Salesforce AI Research. This series advances upon the successful designs of the `BLIP` series, incorporating fundamental enhancements that ensure a more robust and superior foundation. These models have been trained at scale on high-quality image caption datasets and interleaved image-text data.
In the v1.5 (08/2024) release, we present a series of XGen-MM models including:
- [🤗 xGen-MM-instruct-interleave (our main instruct model)](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-interleave-r-v1.5): `xgen-mm-phi3-mini-instruct-interleave-r-v1.5`
- This model has higher overall scores than [xGen-MM-instruct](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-singleimg-r-v1.5) on both single-image and multi-image benchmarks.
- [🤗 xGen-MM-base](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-base-r-v1.5): `xgen-mm-phi3-mini-base-r-v1.5`
- [🤗 xGen-MM-instruct](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-singleimg-r-v1.5): `xgen-mm-phi3-mini-instruct-singleimg-r-v1.5`
- [🤗 xGen-MM-instruct-dpo](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-dpo-r-v1.5): `xgen-mm-phi3-mini-instruct-dpo-r-v1.5`
For more details, check out our [tech report](https://arxiv.org/pdf/2408.08872), [fine-tuning code](https://github.com/salesforce/LAVIS/tree/xgen-mm), and project page (coming soon).
# DPO model results
| Model | VLGuard (↓)| HallusionBench (↑) | POPE (↑) | MMBench (dev) (↑) | SEED-IMG (↑) | MMStar (↑)| MME (norm) (↑)|
| :-------------------------| :-------: | :----------: | :----: | :-------: | :--------: | :------: | :-----: |
| Phi-3-vision\* | 9.1 | - | 83.5 | 74.2 | 71.0 | 47.9 | 55.3 |
| **xgen-mm-phi3-mini-instruct-dpo-r-v1 (Ours)** | 5.2 | 56.6 | 86.8 | 76.4 | 72.1 | 47.1 | 64.4 |
(* = our eval)
We include some qualitative examples below of the safety features that complement our model's multimodal understanding capabilities.
<img src="test_samples/images/car.png" alt="Car" width=700>
<img src="test_samples/images/sunblock.png" alt="Toy" width=700>
# How to use
Please check out our [inference notebook](demo.ipynb) for example code to use our model. We also provide an example script for [batch inference](batch_inference.ipynb).
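For a sense of the loading pattern without opening the notebooks, a minimal sketch (assuming the standard `trust_remote_code` flow; the exact preprocessing and `generate` calls are shown in the notebooks) looks roughly like this:
```python
from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoImageProcessor
model_id = "Salesforce/xgen-mm-phi3-mini-instruct-dpo-r-v1.5"
# trust_remote_code is required because the model ships custom `xgenmm` modeling code.
model = AutoModelForVision2Seq.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, use_fast=False)
image_processor = AutoImageProcessor.from_pretrained(model_id, trust_remote_code=True)
```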
# Reproducibility:
Our evaluation is implemented based on [open-compass/VLMEvalKit](https://github.com/open-compass/VLMEvalKit). We will create a PR to that repo to support XGen-MM evaluation.
# Bias, Risks, Limitations, and Ethical Considerations
The main data sources are from the internet, including webpages,
image stock sites, and curated datasets released by the research community. We have excluded certain data, such as LAION, due to known CSAM concerns.
The model may be subject to bias from the original data source, as well as bias from LLMs and commercial APIs.
We strongly recommend that users assess safety and fairness before applying the model to downstream applications.
# Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
# License
Our code and weights are released under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt) license.
# Code acknowledgment
Our training code is based on [OpenFlamingo: An open-source framework for training large multimodal models.](https://github.com/mlfoundations/open_flamingo), and part of our data preprocessing code is adapted from [LLaVA](https://github.com/haotian-liu/LLaVA).
Our evaluation code is based on [VLMEvalKit: Open-source evaluation toolkit of large vision-language models (LVLMs)](https://github.com/open-compass/VLMEvalKit).
We thank the authors for their open-source implementations.
# Citation
```
@misc{blip3-xgenmm,
author = {Le Xue, Manli Shu, Anas Awadalla, Jun Wang, An Yan, Senthil Purushwalkam, Honglu Zhou, Viraj Prabhu, Yutong Dai, Michael S Ryoo, Shrikant Kendre, Jieyu Zhang, Can Qin, Shu Zhang, Chia-Chih Chen, Ning Yu, Juntao Tan, Tulika Manoj Awalgaonkar, Shelby Heinecke, Huan Wang, Yejin Choi, Ludwig Schmidt, Zeyuan Chen, Silvio Savarese, Juan Carlos Niebles, Caiming Xiong, Ran Xu},
title = {xGen-MM (BLIP-3): A Family of Open Large Multimodal Models},
year = {2024},
eprint = {2408.08872},
archivePrefix = {arXiv},
primaryClass = {cs.CV},
url = {https://arxiv.org/abs/2408.08872},
}
```
# Troubleshoot
1. If you missed any packages, please consider the following
```
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
pip install open_clip_torch==2.24.0
pip install einops
pip install einops-exts
pip install transformers==4.41.1
``` | [
"CHIA"
] | Non_BioNLP |
gentlebowl/instructor-large-safetensors | gentlebowl | sentence-similarity | [
"sentence-transformers",
"pytorch",
"safetensors",
"t5",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"prompt-retrieval",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"transformers",
"English",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2212.09741",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,682,394,745,000 | 2023-04-25T04:13:31 | 27 | 0 | ---
language: en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- prompt-retrieval
- text-reranking
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- t5
- English
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
inference: false
duplicated_from: hkunlp/instructor-large
model-index:
- name: INSTRUCTOR
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 88.13432835820896
- type: ap
value: 59.298209334395665
- type: f1
value: 83.31769058643586
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.526375
- type: ap
value: 88.16327709705504
- type: f1
value: 91.51095801287843
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.856
- type: f1
value: 45.41490917650942
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.223
- type: map_at_10
value: 47.947
- type: map_at_100
value: 48.742000000000004
- type: map_at_1000
value: 48.745
- type: map_at_3
value: 43.137
- type: map_at_5
value: 45.992
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 48.4
- type: mrr_at_100
value: 49.202
- type: mrr_at_1000
value: 49.205
- type: mrr_at_3
value: 43.551
- type: mrr_at_5
value: 46.467999999999996
- type: ndcg_at_1
value: 31.223
- type: ndcg_at_10
value: 57.045
- type: ndcg_at_100
value: 60.175
- type: ndcg_at_1000
value: 60.233000000000004
- type: ndcg_at_3
value: 47.171
- type: ndcg_at_5
value: 52.322
- type: precision_at_1
value: 31.223
- type: precision_at_10
value: 8.599
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.63
- type: precision_at_5
value: 14.282
- type: recall_at_1
value: 31.223
- type: recall_at_10
value: 85.989
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 58.89
- type: recall_at_5
value: 71.408
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.1621946393635
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.56417132407894
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.29539304390207
- type: mrr
value: 76.44484017060196
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.38746499431112
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.51298701298701
- type: f1
value: 77.49041754069235
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.61848554098577
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.32623280148178
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.803000000000004
- type: map_at_10
value: 48.848
- type: map_at_100
value: 50.5
- type: map_at_1000
value: 50.602999999999994
- type: map_at_3
value: 45.111000000000004
- type: map_at_5
value: 47.202
- type: mrr_at_1
value: 44.635000000000005
- type: mrr_at_10
value: 55.593
- type: mrr_at_100
value: 56.169999999999995
- type: mrr_at_1000
value: 56.19499999999999
- type: mrr_at_3
value: 53.361999999999995
- type: mrr_at_5
value: 54.806999999999995
- type: ndcg_at_1
value: 44.635000000000005
- type: ndcg_at_10
value: 55.899
- type: ndcg_at_100
value: 60.958
- type: ndcg_at_1000
value: 62.302
- type: ndcg_at_3
value: 51.051
- type: ndcg_at_5
value: 53.351000000000006
- type: precision_at_1
value: 44.635000000000005
- type: precision_at_10
value: 10.786999999999999
- type: precision_at_100
value: 1.6580000000000001
- type: precision_at_1000
value: 0.213
- type: precision_at_3
value: 24.893
- type: precision_at_5
value: 17.740000000000002
- type: recall_at_1
value: 35.803000000000004
- type: recall_at_10
value: 68.657
- type: recall_at_100
value: 89.77199999999999
- type: recall_at_1000
value: 97.67
- type: recall_at_3
value: 54.066
- type: recall_at_5
value: 60.788
- type: map_at_1
value: 33.706
- type: map_at_10
value: 44.896
- type: map_at_100
value: 46.299
- type: map_at_1000
value: 46.44
- type: map_at_3
value: 41.721000000000004
- type: map_at_5
value: 43.486000000000004
- type: mrr_at_1
value: 41.592
- type: mrr_at_10
value: 50.529
- type: mrr_at_100
value: 51.22
- type: mrr_at_1000
value: 51.258
- type: mrr_at_3
value: 48.205999999999996
- type: mrr_at_5
value: 49.528
- type: ndcg_at_1
value: 41.592
- type: ndcg_at_10
value: 50.77199999999999
- type: ndcg_at_100
value: 55.383
- type: ndcg_at_1000
value: 57.288
- type: ndcg_at_3
value: 46.324
- type: ndcg_at_5
value: 48.346000000000004
- type: precision_at_1
value: 41.592
- type: precision_at_10
value: 9.516
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.399
- type: precision_at_5
value: 15.770999999999999
- type: recall_at_1
value: 33.706
- type: recall_at_10
value: 61.353
- type: recall_at_100
value: 80.182
- type: recall_at_1000
value: 91.896
- type: recall_at_3
value: 48.204
- type: recall_at_5
value: 53.89699999999999
- type: map_at_1
value: 44.424
- type: map_at_10
value: 57.169000000000004
- type: map_at_100
value: 58.202
- type: map_at_1000
value: 58.242000000000004
- type: map_at_3
value: 53.825
- type: map_at_5
value: 55.714
- type: mrr_at_1
value: 50.470000000000006
- type: mrr_at_10
value: 60.489000000000004
- type: mrr_at_100
value: 61.096
- type: mrr_at_1000
value: 61.112
- type: mrr_at_3
value: 58.192
- type: mrr_at_5
value: 59.611999999999995
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_10
value: 63.071999999999996
- type: ndcg_at_100
value: 66.964
- type: ndcg_at_1000
value: 67.659
- type: ndcg_at_3
value: 57.74399999999999
- type: ndcg_at_5
value: 60.367000000000004
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_10
value: 10.019
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 25.558999999999997
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 44.424
- type: recall_at_10
value: 77.02
- type: recall_at_100
value: 93.738
- type: recall_at_1000
value: 98.451
- type: recall_at_3
value: 62.888
- type: recall_at_5
value: 69.138
- type: map_at_1
value: 26.294
- type: map_at_10
value: 34.503
- type: map_at_100
value: 35.641
- type: map_at_1000
value: 35.724000000000004
- type: map_at_3
value: 31.753999999999998
- type: map_at_5
value: 33.190999999999995
- type: mrr_at_1
value: 28.362
- type: mrr_at_10
value: 36.53
- type: mrr_at_100
value: 37.541000000000004
- type: mrr_at_1000
value: 37.602000000000004
- type: mrr_at_3
value: 33.917
- type: mrr_at_5
value: 35.358000000000004
- type: ndcg_at_1
value: 28.362
- type: ndcg_at_10
value: 39.513999999999996
- type: ndcg_at_100
value: 44.815
- type: ndcg_at_1000
value: 46.839
- type: ndcg_at_3
value: 34.02
- type: ndcg_at_5
value: 36.522
- type: precision_at_1
value: 28.362
- type: precision_at_10
value: 6.101999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.161999999999999
- type: precision_at_5
value: 9.966
- type: recall_at_1
value: 26.294
- type: recall_at_10
value: 53.098
- type: recall_at_100
value: 76.877
- type: recall_at_1000
value: 91.834
- type: recall_at_3
value: 38.266
- type: recall_at_5
value: 44.287
- type: map_at_1
value: 16.407
- type: map_at_10
value: 25.185999999999996
- type: map_at_100
value: 26.533
- type: map_at_1000
value: 26.657999999999998
- type: map_at_3
value: 22.201999999999998
- type: map_at_5
value: 23.923
- type: mrr_at_1
value: 20.522000000000002
- type: mrr_at_10
value: 29.522
- type: mrr_at_100
value: 30.644
- type: mrr_at_1000
value: 30.713
- type: mrr_at_3
value: 26.679000000000002
- type: mrr_at_5
value: 28.483000000000004
- type: ndcg_at_1
value: 20.522000000000002
- type: ndcg_at_10
value: 30.656
- type: ndcg_at_100
value: 36.864999999999995
- type: ndcg_at_1000
value: 39.675
- type: ndcg_at_3
value: 25.319000000000003
- type: ndcg_at_5
value: 27.992
- type: precision_at_1
value: 20.522000000000002
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 12.396
- type: precision_at_5
value: 9.328
- type: recall_at_1
value: 16.407
- type: recall_at_10
value: 43.164
- type: recall_at_100
value: 69.695
- type: recall_at_1000
value: 89.41900000000001
- type: recall_at_3
value: 28.634999999999998
- type: recall_at_5
value: 35.308
- type: map_at_1
value: 30.473
- type: map_at_10
value: 41.676
- type: map_at_100
value: 43.120999999999995
- type: map_at_1000
value: 43.230000000000004
- type: map_at_3
value: 38.306000000000004
- type: map_at_5
value: 40.355999999999995
- type: mrr_at_1
value: 37.536
- type: mrr_at_10
value: 47.643
- type: mrr_at_100
value: 48.508
- type: mrr_at_1000
value: 48.551
- type: mrr_at_3
value: 45.348
- type: mrr_at_5
value: 46.744
- type: ndcg_at_1
value: 37.536
- type: ndcg_at_10
value: 47.823
- type: ndcg_at_100
value: 53.395
- type: ndcg_at_1000
value: 55.271
- type: ndcg_at_3
value: 42.768
- type: ndcg_at_5
value: 45.373000000000005
- type: precision_at_1
value: 37.536
- type: precision_at_10
value: 8.681
- type: precision_at_100
value: 1.34
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.468
- type: precision_at_5
value: 14.495
- type: recall_at_1
value: 30.473
- type: recall_at_10
value: 60.092999999999996
- type: recall_at_100
value: 82.733
- type: recall_at_1000
value: 94.875
- type: recall_at_3
value: 45.734
- type: recall_at_5
value: 52.691
- type: map_at_1
value: 29.976000000000003
- type: map_at_10
value: 41.097
- type: map_at_100
value: 42.547000000000004
- type: map_at_1000
value: 42.659000000000006
- type: map_at_3
value: 37.251
- type: map_at_5
value: 39.493
- type: mrr_at_1
value: 37.557
- type: mrr_at_10
value: 46.605000000000004
- type: mrr_at_100
value: 47.487
- type: mrr_at_1000
value: 47.54
- type: mrr_at_3
value: 43.721
- type: mrr_at_5
value: 45.411
- type: ndcg_at_1
value: 37.557
- type: ndcg_at_10
value: 47.449000000000005
- type: ndcg_at_100
value: 53.052
- type: ndcg_at_1000
value: 55.010999999999996
- type: ndcg_at_3
value: 41.439
- type: ndcg_at_5
value: 44.292
- type: precision_at_1
value: 37.557
- type: precision_at_10
value: 8.847
- type: precision_at_100
value: 1.357
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 20.091
- type: precision_at_5
value: 14.384
- type: recall_at_1
value: 29.976000000000003
- type: recall_at_10
value: 60.99099999999999
- type: recall_at_100
value: 84.245
- type: recall_at_1000
value: 96.97200000000001
- type: recall_at_3
value: 43.794
- type: recall_at_5
value: 51.778999999999996
- type: map_at_1
value: 28.099166666666665
- type: map_at_10
value: 38.1365
- type: map_at_100
value: 39.44491666666667
- type: map_at_1000
value: 39.55858333333334
- type: map_at_3
value: 35.03641666666666
- type: map_at_5
value: 36.79833333333334
- type: mrr_at_1
value: 33.39966666666667
- type: mrr_at_10
value: 42.42583333333333
- type: mrr_at_100
value: 43.28575
- type: mrr_at_1000
value: 43.33741666666667
- type: mrr_at_3
value: 39.94975
- type: mrr_at_5
value: 41.41633333333334
- type: ndcg_at_1
value: 33.39966666666667
- type: ndcg_at_10
value: 43.81741666666667
- type: ndcg_at_100
value: 49.08166666666667
- type: ndcg_at_1000
value: 51.121166666666674
- type: ndcg_at_3
value: 38.73575
- type: ndcg_at_5
value: 41.18158333333333
- type: precision_at_1
value: 33.39966666666667
- type: precision_at_10
value: 7.738916666666667
- type: precision_at_100
value: 1.2265833333333331
- type: precision_at_1000
value: 0.15983333333333336
- type: precision_at_3
value: 17.967416666666665
- type: precision_at_5
value: 12.78675
- type: recall_at_1
value: 28.099166666666665
- type: recall_at_10
value: 56.27049999999999
- type: recall_at_100
value: 78.93291666666667
- type: recall_at_1000
value: 92.81608333333334
- type: recall_at_3
value: 42.09775
- type: recall_at_5
value: 48.42533333333334
- type: map_at_1
value: 23.663
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.426
- type: map_at_1000
value: 31.519000000000002
- type: map_at_3
value: 28.069
- type: map_at_5
value: 29.256999999999998
- type: mrr_at_1
value: 26.687
- type: mrr_at_10
value: 33.107
- type: mrr_at_100
value: 34.055
- type: mrr_at_1000
value: 34.117999999999995
- type: mrr_at_3
value: 31.058000000000003
- type: mrr_at_5
value: 32.14
- type: ndcg_at_1
value: 26.687
- type: ndcg_at_10
value: 34.615
- type: ndcg_at_100
value: 39.776
- type: ndcg_at_1000
value: 42.05
- type: ndcg_at_3
value: 30.322
- type: ndcg_at_5
value: 32.157000000000004
- type: precision_at_1
value: 26.687
- type: precision_at_10
value: 5.491
- type: precision_at_100
value: 0.877
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.139000000000001
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.663
- type: recall_at_10
value: 45.035
- type: recall_at_100
value: 68.554
- type: recall_at_1000
value: 85.077
- type: recall_at_3
value: 32.982
- type: recall_at_5
value: 37.688
- type: map_at_1
value: 17.403
- type: map_at_10
value: 25.197000000000003
- type: map_at_100
value: 26.355
- type: map_at_1000
value: 26.487
- type: map_at_3
value: 22.733
- type: map_at_5
value: 24.114
- type: mrr_at_1
value: 21.37
- type: mrr_at_10
value: 29.091
- type: mrr_at_100
value: 30.018
- type: mrr_at_1000
value: 30.096
- type: mrr_at_3
value: 26.887
- type: mrr_at_5
value: 28.157
- type: ndcg_at_1
value: 21.37
- type: ndcg_at_10
value: 30.026000000000003
- type: ndcg_at_100
value: 35.416
- type: ndcg_at_1000
value: 38.45
- type: ndcg_at_3
value: 25.764
- type: ndcg_at_5
value: 27.742
- type: precision_at_1
value: 21.37
- type: precision_at_10
value: 5.609
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 12.423
- type: precision_at_5
value: 9.009
- type: recall_at_1
value: 17.403
- type: recall_at_10
value: 40.573
- type: recall_at_100
value: 64.818
- type: recall_at_1000
value: 86.53699999999999
- type: recall_at_3
value: 28.493000000000002
- type: recall_at_5
value: 33.660000000000004
- type: map_at_1
value: 28.639
- type: map_at_10
value: 38.951
- type: map_at_100
value: 40.238
- type: map_at_1000
value: 40.327
- type: map_at_3
value: 35.842
- type: map_at_5
value: 37.617
- type: mrr_at_1
value: 33.769
- type: mrr_at_10
value: 43.088
- type: mrr_at_100
value: 44.03
- type: mrr_at_1000
value: 44.072
- type: mrr_at_3
value: 40.656
- type: mrr_at_5
value: 42.138999999999996
- type: ndcg_at_1
value: 33.769
- type: ndcg_at_10
value: 44.676
- type: ndcg_at_100
value: 50.416000000000004
- type: ndcg_at_1000
value: 52.227999999999994
- type: ndcg_at_3
value: 39.494
- type: ndcg_at_5
value: 42.013
- type: precision_at_1
value: 33.769
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.221
- type: precision_at_5
value: 12.966
- type: recall_at_1
value: 28.639
- type: recall_at_10
value: 57.687999999999995
- type: recall_at_100
value: 82.541
- type: recall_at_1000
value: 94.896
- type: recall_at_3
value: 43.651
- type: recall_at_5
value: 49.925999999999995
- type: map_at_1
value: 29.57
- type: map_at_10
value: 40.004
- type: map_at_100
value: 41.75
- type: map_at_1000
value: 41.97
- type: map_at_3
value: 36.788
- type: map_at_5
value: 38.671
- type: mrr_at_1
value: 35.375
- type: mrr_at_10
value: 45.121
- type: mrr_at_100
value: 45.994
- type: mrr_at_1000
value: 46.04
- type: mrr_at_3
value: 42.227
- type: mrr_at_5
value: 43.995
- type: ndcg_at_1
value: 35.375
- type: ndcg_at_10
value: 46.392
- type: ndcg_at_100
value: 52.196
- type: ndcg_at_1000
value: 54.274
- type: ndcg_at_3
value: 41.163
- type: ndcg_at_5
value: 43.813
- type: precision_at_1
value: 35.375
- type: precision_at_10
value: 8.676
- type: precision_at_100
value: 1.678
- type: precision_at_1000
value: 0.253
- type: precision_at_3
value: 19.104
- type: precision_at_5
value: 13.913
- type: recall_at_1
value: 29.57
- type: recall_at_10
value: 58.779
- type: recall_at_100
value: 83.337
- type: recall_at_1000
value: 95.979
- type: recall_at_3
value: 44.005
- type: recall_at_5
value: 50.975
- type: map_at_1
value: 20.832
- type: map_at_10
value: 29.733999999999998
- type: map_at_100
value: 30.727
- type: map_at_1000
value: 30.843999999999998
- type: map_at_3
value: 26.834999999999997
- type: map_at_5
value: 28.555999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 31.791999999999998
- type: mrr_at_100
value: 32.666000000000004
- type: mrr_at_1000
value: 32.751999999999995
- type: mrr_at_3
value: 29.144
- type: mrr_at_5
value: 30.622
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.744
- type: ndcg_at_1000
value: 42.407000000000004
- type: ndcg_at_3
value: 29.421000000000003
- type: ndcg_at_5
value: 32.211
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 5.675
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 12.753999999999998
- type: precision_at_5
value: 9.353
- type: recall_at_1
value: 20.832
- type: recall_at_10
value: 48.795
- type: recall_at_100
value: 70.703
- type: recall_at_1000
value: 90.187
- type: recall_at_3
value: 34.455000000000005
- type: recall_at_5
value: 40.967
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.334
- type: map_at_10
value: 19.009999999999998
- type: map_at_100
value: 21.129
- type: map_at_1000
value: 21.328
- type: map_at_3
value: 15.152
- type: map_at_5
value: 17.084
- type: mrr_at_1
value: 23.453
- type: mrr_at_10
value: 36.099
- type: mrr_at_100
value: 37.069
- type: mrr_at_1000
value: 37.104
- type: mrr_at_3
value: 32.096000000000004
- type: mrr_at_5
value: 34.451
- type: ndcg_at_1
value: 23.453
- type: ndcg_at_10
value: 27.739000000000004
- type: ndcg_at_100
value: 35.836
- type: ndcg_at_1000
value: 39.242
- type: ndcg_at_3
value: 21.263
- type: ndcg_at_5
value: 23.677
- type: precision_at_1
value: 23.453
- type: precision_at_10
value: 9.199
- type: precision_at_100
value: 1.791
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 16.2
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 10.334
- type: recall_at_10
value: 35.177
- type: recall_at_100
value: 63.009
- type: recall_at_1000
value: 81.938
- type: recall_at_3
value: 19.914
- type: recall_at_5
value: 26.077
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.212
- type: map_at_10
value: 17.386
- type: map_at_100
value: 24.234
- type: map_at_1000
value: 25.724999999999998
- type: map_at_3
value: 12.727
- type: map_at_5
value: 14.785
- type: mrr_at_1
value: 59.25
- type: mrr_at_10
value: 68.687
- type: mrr_at_100
value: 69.133
- type: mrr_at_1000
value: 69.14099999999999
- type: mrr_at_3
value: 66.917
- type: mrr_at_5
value: 67.742
- type: ndcg_at_1
value: 48.625
- type: ndcg_at_10
value: 36.675999999999995
- type: ndcg_at_100
value: 41.543
- type: ndcg_at_1000
value: 49.241
- type: ndcg_at_3
value: 41.373
- type: ndcg_at_5
value: 38.707
- type: precision_at_1
value: 59.25
- type: precision_at_10
value: 28.525
- type: precision_at_100
value: 9.027000000000001
- type: precision_at_1000
value: 1.8339999999999999
- type: precision_at_3
value: 44.833
- type: precision_at_5
value: 37.35
- type: recall_at_1
value: 8.212
- type: recall_at_10
value: 23.188
- type: recall_at_100
value: 48.613
- type: recall_at_1000
value: 73.093
- type: recall_at_3
value: 14.419
- type: recall_at_5
value: 17.798
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.725
- type: f1
value: 46.50743309855908
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.086
- type: map_at_10
value: 66.914
- type: map_at_100
value: 67.321
- type: map_at_1000
value: 67.341
- type: map_at_3
value: 64.75800000000001
- type: map_at_5
value: 66.189
- type: mrr_at_1
value: 59.28600000000001
- type: mrr_at_10
value: 71.005
- type: mrr_at_100
value: 71.304
- type: mrr_at_1000
value: 71.313
- type: mrr_at_3
value: 69.037
- type: mrr_at_5
value: 70.35
- type: ndcg_at_1
value: 59.28600000000001
- type: ndcg_at_10
value: 72.695
- type: ndcg_at_100
value: 74.432
- type: ndcg_at_1000
value: 74.868
- type: ndcg_at_3
value: 68.72200000000001
- type: ndcg_at_5
value: 71.081
- type: precision_at_1
value: 59.28600000000001
- type: precision_at_10
value: 9.499
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 27.503
- type: precision_at_5
value: 17.854999999999997
- type: recall_at_1
value: 55.086
- type: recall_at_10
value: 86.453
- type: recall_at_100
value: 94.028
- type: recall_at_1000
value: 97.052
- type: recall_at_3
value: 75.821
- type: recall_at_5
value: 81.6
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.262999999999998
- type: map_at_10
value: 37.488
- type: map_at_100
value: 39.498
- type: map_at_1000
value: 39.687
- type: map_at_3
value: 32.529
- type: map_at_5
value: 35.455
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.239000000000004
- type: mrr_at_100
value: 54.086
- type: mrr_at_1000
value: 54.122
- type: mrr_at_3
value: 51.235
- type: mrr_at_5
value: 52.415
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 45.446
- type: ndcg_at_100
value: 52.429
- type: ndcg_at_1000
value: 55.169000000000004
- type: ndcg_at_3
value: 41.882000000000005
- type: ndcg_at_5
value: 43.178
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.931999999999999
- type: precision_at_100
value: 2.025
- type: precision_at_1000
value: 0.248
- type: precision_at_3
value: 28.652
- type: precision_at_5
value: 21.204
- type: recall_at_1
value: 22.262999999999998
- type: recall_at_10
value: 52.447
- type: recall_at_100
value: 78.045
- type: recall_at_1000
value: 94.419
- type: recall_at_3
value: 38.064
- type: recall_at_5
value: 44.769
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.519
- type: map_at_10
value: 45.831
- type: map_at_100
value: 46.815
- type: map_at_1000
value: 46.899
- type: map_at_3
value: 42.836
- type: map_at_5
value: 44.65
- type: mrr_at_1
value: 65.037
- type: mrr_at_10
value: 72.16
- type: mrr_at_100
value: 72.51100000000001
- type: mrr_at_1000
value: 72.53
- type: mrr_at_3
value: 70.682
- type: mrr_at_5
value: 71.54599999999999
- type: ndcg_at_1
value: 65.037
- type: ndcg_at_10
value: 55.17999999999999
- type: ndcg_at_100
value: 58.888
- type: ndcg_at_1000
value: 60.648
- type: ndcg_at_3
value: 50.501
- type: ndcg_at_5
value: 52.977
- type: precision_at_1
value: 65.037
- type: precision_at_10
value: 11.530999999999999
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 31.483
- type: precision_at_5
value: 20.845
- type: recall_at_1
value: 32.519
- type: recall_at_10
value: 57.657000000000004
- type: recall_at_100
value: 72.30199999999999
- type: recall_at_1000
value: 84.024
- type: recall_at_3
value: 47.225
- type: recall_at_5
value: 52.113
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.3168
- type: ap
value: 83.80165516037135
- type: f1
value: 88.29942471066407
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 20.724999999999998
- type: map_at_10
value: 32.736
- type: map_at_100
value: 33.938
- type: map_at_1000
value: 33.991
- type: map_at_3
value: 28.788000000000004
- type: map_at_5
value: 31.016
- type: mrr_at_1
value: 21.361
- type: mrr_at_10
value: 33.323
- type: mrr_at_100
value: 34.471000000000004
- type: mrr_at_1000
value: 34.518
- type: mrr_at_3
value: 29.453000000000003
- type: mrr_at_5
value: 31.629
- type: ndcg_at_1
value: 21.361
- type: ndcg_at_10
value: 39.649
- type: ndcg_at_100
value: 45.481
- type: ndcg_at_1000
value: 46.775
- type: ndcg_at_3
value: 31.594
- type: ndcg_at_5
value: 35.543
- type: precision_at_1
value: 21.361
- type: precision_at_10
value: 6.3740000000000006
- type: precision_at_100
value: 0.931
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.514999999999999
- type: precision_at_5
value: 10.100000000000001
- type: recall_at_1
value: 20.724999999999998
- type: recall_at_10
value: 61.034
- type: recall_at_100
value: 88.062
- type: recall_at_1000
value: 97.86399999999999
- type: recall_at_3
value: 39.072
- type: recall_at_5
value: 48.53
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.8919288645691
- type: f1
value: 93.57059586398059
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.97993616051072
- type: f1
value: 48.244319183606535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.90047074646941
- type: f1
value: 66.48999056063725
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.34566240753195
- type: f1
value: 73.54164154290658
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.21866934757011
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.000936217235534
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.68189362520352
- type: mrr
value: 32.69603637784303
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.078
- type: map_at_10
value: 12.671
- type: map_at_100
value: 16.291
- type: map_at_1000
value: 17.855999999999998
- type: map_at_3
value: 9.610000000000001
- type: map_at_5
value: 11.152
- type: mrr_at_1
value: 43.963
- type: mrr_at_10
value: 53.173
- type: mrr_at_100
value: 53.718999999999994
- type: mrr_at_1000
value: 53.756
- type: mrr_at_3
value: 50.980000000000004
- type: mrr_at_5
value: 52.42
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.086
- type: ndcg_at_100
value: 32.545
- type: ndcg_at_1000
value: 41.144999999999996
- type: ndcg_at_3
value: 39.434999999999995
- type: ndcg_at_5
value: 37.888
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.014999999999997
- type: precision_at_100
value: 8.594
- type: precision_at_1000
value: 2.169
- type: precision_at_3
value: 37.049
- type: precision_at_5
value: 33.065
- type: recall_at_1
value: 6.078
- type: recall_at_10
value: 16.17
- type: recall_at_100
value: 34.512
- type: recall_at_1000
value: 65.447
- type: recall_at_3
value: 10.706
- type: recall_at_5
value: 13.158
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.378000000000004
- type: map_at_10
value: 42.178
- type: map_at_100
value: 43.32
- type: map_at_1000
value: 43.358000000000004
- type: map_at_3
value: 37.474000000000004
- type: map_at_5
value: 40.333000000000006
- type: mrr_at_1
value: 30.823
- type: mrr_at_10
value: 44.626
- type: mrr_at_100
value: 45.494
- type: mrr_at_1000
value: 45.519
- type: mrr_at_3
value: 40.585
- type: mrr_at_5
value: 43.146
- type: ndcg_at_1
value: 30.794
- type: ndcg_at_10
value: 50.099000000000004
- type: ndcg_at_100
value: 54.900999999999996
- type: ndcg_at_1000
value: 55.69499999999999
- type: ndcg_at_3
value: 41.238
- type: ndcg_at_5
value: 46.081
- type: precision_at_1
value: 30.794
- type: precision_at_10
value: 8.549
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 18.926000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 27.378000000000004
- type: recall_at_10
value: 71.842
- type: recall_at_100
value: 92.565
- type: recall_at_1000
value: 98.402
- type: recall_at_3
value: 49.053999999999995
- type: recall_at_5
value: 60.207
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.557
- type: map_at_10
value: 84.729
- type: map_at_100
value: 85.369
- type: map_at_1000
value: 85.382
- type: map_at_3
value: 81.72
- type: map_at_5
value: 83.613
- type: mrr_at_1
value: 81.3
- type: mrr_at_10
value: 87.488
- type: mrr_at_100
value: 87.588
- type: mrr_at_1000
value: 87.589
- type: mrr_at_3
value: 86.53
- type: mrr_at_5
value: 87.18599999999999
- type: ndcg_at_1
value: 81.28999999999999
- type: ndcg_at_10
value: 88.442
- type: ndcg_at_100
value: 89.637
- type: ndcg_at_1000
value: 89.70700000000001
- type: ndcg_at_3
value: 85.55199999999999
- type: ndcg_at_5
value: 87.154
- type: precision_at_1
value: 81.28999999999999
- type: precision_at_10
value: 13.489999999999998
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.708
- type: recall_at_1
value: 70.557
- type: recall_at_10
value: 95.645
- type: recall_at_100
value: 99.693
- type: recall_at_1000
value: 99.995
- type: recall_at_3
value: 87.359
- type: recall_at_5
value: 91.89699999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 63.65060114776209
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.63271250680617
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.263
- type: map_at_10
value: 10.801
- type: map_at_100
value: 12.888
- type: map_at_1000
value: 13.224
- type: map_at_3
value: 7.362
- type: map_at_5
value: 9.149000000000001
- type: mrr_at_1
value: 21
- type: mrr_at_10
value: 31.416
- type: mrr_at_100
value: 32.513
- type: mrr_at_1000
value: 32.58
- type: mrr_at_3
value: 28.116999999999997
- type: mrr_at_5
value: 29.976999999999997
- type: ndcg_at_1
value: 21
- type: ndcg_at_10
value: 18.551000000000002
- type: ndcg_at_100
value: 26.657999999999998
- type: ndcg_at_1000
value: 32.485
- type: ndcg_at_3
value: 16.834
- type: ndcg_at_5
value: 15.204999999999998
- type: precision_at_1
value: 21
- type: precision_at_10
value: 9.84
- type: precision_at_100
value: 2.16
- type: precision_at_1000
value: 0.35500000000000004
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 13.62
- type: recall_at_1
value: 4.263
- type: recall_at_10
value: 19.922
- type: recall_at_100
value: 43.808
- type: recall_at_1000
value: 72.14500000000001
- type: recall_at_3
value: 9.493
- type: recall_at_5
value: 13.767999999999999
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 81.27446313317233
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 76.27963301217527
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 88.18495048450949
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 81.91982338692046
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 89.00896818385291
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 85.48814644586132
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 90.30116926966582
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 67.74132963032342
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 86.87741355780479
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.0019012295875
- type: mrr
value: 94.70267024188593
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 50.05
- type: map_at_10
value: 59.36
- type: map_at_100
value: 59.967999999999996
- type: map_at_1000
value: 60.023
- type: map_at_3
value: 56.515
- type: map_at_5
value: 58.272999999999996
- type: mrr_at_1
value: 53
- type: mrr_at_10
value: 61.102000000000004
- type: mrr_at_100
value: 61.476
- type: mrr_at_1000
value: 61.523
- type: mrr_at_3
value: 58.778
- type: mrr_at_5
value: 60.128
- type: ndcg_at_1
value: 53
- type: ndcg_at_10
value: 64.43100000000001
- type: ndcg_at_100
value: 66.73599999999999
- type: ndcg_at_1000
value: 68.027
- type: ndcg_at_3
value: 59.279
- type: ndcg_at_5
value: 61.888
- type: precision_at_1
value: 53
- type: precision_at_10
value: 8.767
- type: precision_at_100
value: 1.01
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 23.444000000000003
- type: precision_at_5
value: 15.667
- type: recall_at_1
value: 50.05
- type: recall_at_10
value: 78.511
- type: recall_at_100
value: 88.5
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 64.117
- type: recall_at_5
value: 70.867
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72178217821782
- type: cos_sim_ap
value: 93.0728601593541
- type: cos_sim_f1
value: 85.6727976766699
- type: cos_sim_precision
value: 83.02063789868667
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.72178217821782
- type: dot_ap
value: 93.07287396168348
- type: dot_f1
value: 85.6727976766699
- type: dot_precision
value: 83.02063789868667
- type: dot_recall
value: 88.5
- type: euclidean_accuracy
value: 99.72178217821782
- type: euclidean_ap
value: 93.07285657982895
- type: euclidean_f1
value: 85.6727976766699
- type: euclidean_precision
value: 83.02063789868667
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.72475247524753
- type: manhattan_ap
value: 93.02792973059809
- type: manhattan_f1
value: 85.7727737973388
- type: manhattan_precision
value: 87.84067085953879
- type: manhattan_recall
value: 83.8
- type: max_accuracy
value: 99.72475247524753
- type: max_ap
value: 93.07287396168348
- type: max_f1
value: 85.7727737973388
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.77583615550819
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.151636938606956
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.16607939471187
- type: mrr
value: 52.95172046091163
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.314646669495666
- type: cos_sim_spearman
value: 31.83562491439455
- type: dot_pearson
value: 31.314590842874157
- type: dot_spearman
value: 31.83363065810437
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.3010000000000002
- type: map_at_100
value: 7.2139999999999995
- type: map_at_1000
value: 20.179
- type: map_at_3
value: 0.528
- type: map_at_5
value: 0.8019999999999999
- type: mrr_at_1
value: 72
- type: mrr_at_10
value: 83.39999999999999
- type: mrr_at_100
value: 83.39999999999999
- type: mrr_at_1000
value: 83.39999999999999
- type: mrr_at_3
value: 81.667
- type: mrr_at_5
value: 83.06700000000001
- type: ndcg_at_1
value: 66
- type: ndcg_at_10
value: 58.059000000000005
- type: ndcg_at_100
value: 44.316
- type: ndcg_at_1000
value: 43.147000000000006
- type: ndcg_at_3
value: 63.815999999999995
- type: ndcg_at_5
value: 63.005
- type: precision_at_1
value: 72
- type: precision_at_10
value: 61.4
- type: precision_at_100
value: 45.62
- type: precision_at_1000
value: 19.866
- type: precision_at_3
value: 70
- type: precision_at_5
value: 68.8
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.517
- type: recall_at_100
value: 10.587
- type: recall_at_1000
value: 41.233
- type: recall_at_3
value: 0.573
- type: recall_at_5
value: 0.907
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.894
- type: map_at_10
value: 8.488999999999999
- type: map_at_100
value: 14.445
- type: map_at_1000
value: 16.078
- type: map_at_3
value: 4.589
- type: map_at_5
value: 6.019
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 39.82
- type: mrr_at_100
value: 40.752
- type: mrr_at_1000
value: 40.771
- type: mrr_at_3
value: 34.354
- type: mrr_at_5
value: 37.721
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 21.563
- type: ndcg_at_100
value: 33.857
- type: ndcg_at_1000
value: 46.199
- type: ndcg_at_3
value: 22.296
- type: ndcg_at_5
value: 21.770999999999997
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.142999999999999
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 22.448999999999998
- type: recall_at_1
value: 1.894
- type: recall_at_10
value: 14.931
- type: recall_at_100
value: 45.524
- type: recall_at_1000
value: 83.243
- type: recall_at_3
value: 5.712
- type: recall_at_5
value: 8.386000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.049
- type: ap
value: 13.85116971310922
- type: f1
value: 54.37504302487686
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.1312959818902
- type: f1
value: 64.11413877009383
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.13103431861502
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.327889372355
- type: cos_sim_ap
value: 77.42059895975699
- type: cos_sim_f1
value: 71.02706903250873
- type: cos_sim_precision
value: 69.75324344950394
- type: cos_sim_recall
value: 72.34828496042216
- type: dot_accuracy
value: 87.327889372355
- type: dot_ap
value: 77.4209479346677
- type: dot_f1
value: 71.02706903250873
- type: dot_precision
value: 69.75324344950394
- type: dot_recall
value: 72.34828496042216
- type: euclidean_accuracy
value: 87.327889372355
- type: euclidean_ap
value: 77.42096495861037
- type: euclidean_f1
value: 71.02706903250873
- type: euclidean_precision
value: 69.75324344950394
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.31000774870358
- type: manhattan_ap
value: 77.38930750711619
- type: manhattan_f1
value: 71.07935314027831
- type: manhattan_precision
value: 67.70957726295677
- type: manhattan_recall
value: 74.80211081794195
- type: max_accuracy
value: 87.327889372355
- type: max_ap
value: 77.42096495861037
- type: max_f1
value: 71.07935314027831
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.58939729110878
- type: cos_sim_ap
value: 87.17594155025475
- type: cos_sim_f1
value: 79.21146953405018
- type: cos_sim_precision
value: 76.8918527109307
- type: cos_sim_recall
value: 81.67539267015707
- type: dot_accuracy
value: 89.58939729110878
- type: dot_ap
value: 87.17593963273593
- type: dot_f1
value: 79.21146953405018
- type: dot_precision
value: 76.8918527109307
- type: dot_recall
value: 81.67539267015707
- type: euclidean_accuracy
value: 89.58939729110878
- type: euclidean_ap
value: 87.17592466925834
- type: euclidean_f1
value: 79.21146953405018
- type: euclidean_precision
value: 76.8918527109307
- type: euclidean_recall
value: 81.67539267015707
- type: manhattan_accuracy
value: 89.62626615438352
- type: manhattan_ap
value: 87.16589873161546
- type: manhattan_f1
value: 79.25143598295348
- type: manhattan_precision
value: 76.39494177323712
- type: manhattan_recall
value: 82.32984293193716
- type: max_accuracy
value: 89.62626615438352
- type: max_ap
value: 87.17594155025475
- type: max_f1
value: 79.25143598295348
---
# hkunlp/instructor-large
We introduce **Instructor**👨🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation) and domain (e.g., science, finance) ***by simply providing the task instruction, without any finetuning***. Instructor👨🏫 achieves state-of-the-art results on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))!
The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!
**************************** **Updates** ****************************
* 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out!
## Quick start
<hr />
## Installation
```bash
pip install InstructorEmbedding
```
## Compute your customized embeddings
Then you can use the model like this to calculate domain-specific and task-aware embeddings:
```python
from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-large')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction,sentence]])
print(embeddings)
```
## Use cases
<hr />
## Calculate embeddings for your customized texts
If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions (a worked example follows the list below):
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
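As a concrete illustration, the sketch below builds two instructions from this template and encodes a pair of texts with them; the instruction strings are illustrative choices, not prescribed values.
```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')

# domain = "Financial", text_type = "statement", task_objective = "classification" (illustrative)
financial_instruction = "Represent the Financial statement for classification:"
# only the required text_type, with the optional domain and task_objective omitted
generic_instruction = "Represent the sentence:"

embeddings = model.encode([
    [financial_instruction, "The central bank unexpectedly raised its benchmark rate by 50 basis points."],
    [generic_instruction, "Wearable sensors enable person tracking across multiple floors."],
])
print(embeddings.shape)  # one embedding per (instruction, text) pair
```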
## Calculate Sentence similarities
You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
```python
from sklearn.metrics.pairwise import cosine_similarity
# `model` is the INSTRUCTOR instance loaded in the quick start above
sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'],
['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'],
['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']]
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a,embeddings_b)
print(similarities)
```
## Information Retrieval
You can also use **customized embeddings** for information retrieval.
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings,corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```
## Clustering
Use **customized embeddings** to cluster texts into groups.
```python
import sklearn.cluster
sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'],
['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"],
['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
``` | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
NghiemAbe/SeaLLM-7B-v2.5-AWQ | NghiemAbe | text-generation | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"multilingual",
"sea",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"ms",
"km",
"lo",
"my",
"tl",
"arxiv:2312.00738",
"arxiv:2306.05179",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | 1,715,591,225,000 | 2024-05-13T09:28:19 | 6 | 0 | ---
language:
- en
- zh
- vi
- id
- th
- ms
- km
- lo
- my
- tl
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
tags:
- multilingual
- sea
---
<p align="center">
<img src="seal_logo.png" width="200" />
</p>
# *SeaLLM-7B-v2.5* - Large Language Models for Southeast Asia
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>
🔥<span style="color: #ff3860">[HOT]</span> SeaLLMs project now has a dedicated website - [damo-nlp-sg.github.io/SeaLLMs](https://damo-nlp-sg.github.io/SeaLLMs/)
We introduce [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it delivers stronger performance across diverse multilingual tasks, including world knowledge, math reasoning, and instruction following.
### Highlights
* [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) outperforms GPT-3.5 and achieves 7B SOTA on most multilingual knowledge benchmarks for SEA languages (MMLU, M3Exam & VMLU).
* It achieves 79.0 and 34.9 on GSM8K and MATH, surpassing GPT-3.5 in MATH.
### Release and DEMO
- DEMO:
- [SeaLLMs/SeaLLM-7B-v2.5](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5).
- [SeaLLMs/SeaLLM-7B | SeaLMMM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B) - Experimental multimodal SeaLLM.
- Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
- Model weights:
- [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5).
- [SeaLLM-7B-v2.5-GGUF](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF).
- Run locally:
- [LM-studio](https://lmstudio.ai/):
- [SeaLLM-7B-v2.5-q4_0-chatml](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5-chatml.Q4_K_M.gguf) with ChatML template (`<eos>` token changed to `<|im_end|>`)
- [SeaLLM-7B-v2.5-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5.Q4_K_M.gguf) - must use SeaLLM-7B-v2.5 chat format.
- [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized)
- Previous models:
- [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2)
- [SeaLLM-7B-v1](https://huggingface.co/SeaLLMs/SeaLLM-7B-v1)
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
> The logo was generated by DALL-E 3.
### What's new since SeaLLM-7B-v2?
* SeaLLM-7B-v2.5 was built on top of Gemma-7b, and underwent large scale SFT and carefully designed alignment.
## Evaluation
### Multilingual World Knowledge
We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi.
| Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e
|-----| ----- | --- | -- | ----- | ---- | --- | --- | --- |
| GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41
| Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27
| Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25
| SailorLM | Multi | 52.72 | 59.76 | 67.74 | 50.14 | --- | 39.53 | 37.73
| SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52
| SeaLLM-7B-v2.5 | Multi | 64.05 | 76.87 | 62.54 | 63.11 | 53.30 | 48.64 | 46.86
### Zero-shot CoT Multilingual Math Reasoning
<!--
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves with **78.5** score on the GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models. It also outperforms GPT-3.5 in the same GSM8K benchmark as translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **28.4** vs 18.1 scores.

-->
| Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1
| Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6.0
| Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | |
| Qwen1.5-7B-chat | 56.8 | 15.3 | 40.0 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | 4.7
| SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4
| SeaLLM-7B-v2.5 | 78.5 | 34.9 | 51.3 | 22.1 | 72.3 | 30.2 | 71.5 | 30.1 | 62.0 | 28.4
Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)).
#### Zero-shot MGSM
[SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Thai.
| Model | MGSM-Zh | MGSM-Th
|-----| ----- | ---
| ChatGPT (reported) | 61.2 | 47.2
| Qwen-14B-chat | 59.6 | 28
| SeaLLM-7B-v2 | **64.8** | 62.4
| SeaLLM-7B-v2.5 | 58.0 | **64.8**
### Sea-Bench

### Usage
**IMPORTANT NOTICE for using the model**
* `<bos>` must be at the start of the prompt. If your code's tokenizer does not prepend `<bos>` by default, you MUST prepend `<bos>` into the prompt yourself, otherwise it will not work!
* Repetition penalty (e.g. in llama.cpp, ollama, LM-studio) must be set to **1**, otherwise it will lead to degeneration!
#### Instruction format
```python
# ! WARNING, if your code's tokenizer does not prepend <bos> by default,
# You MUST prepend <bos> into the prompt yourself, otherwise, it would not work!
prompt = """<|im_start|>system
You are a helpful assistant.<eos>
<|im_start|>user
Hello world<eos>
<|im_start|>assistant
Hi there, how can I help?<eos>"""
# <|im_start|> is not a special token.
# Transformers chat_template should be consistent with vLLM format below.
# ! ENSURE 1 and only 1 bos `<bos>` at the beginning of sequence
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
"""
```
#### Using transformers's chat_template
Install the latest transformers (>4.40)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
# use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello world"},
{"role": "assistant", "content": "Hi there, how can I help you today?"},
{"role": "user", "content": "Explain general relativity in details."}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
#### Using vLLM
```python
from vllm import LLM, SamplingParams
TURN_TEMPLATE = "<|im_start|>{role}\n{content}<eos>\n"
TURN_PREFIX = "<|im_start|>{role}\n"
def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
# conversations: list of dict with key `role` and `content` (openai format)
if conversations[0]['role'] != 'system' and system_prompt is not None:
conversations = [{"role": "system", "content": system_prompt}] + conversations
text = ''
for turn_id, turn in enumerate(conversations):
prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
text += prompt
if add_assistant_prefix:
prompt = TURN_PREFIX.format(role='assistant')
text += prompt
return text
sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['<eos>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2.5", dtype="bfloat16")
message = "Explain general relativity in details."
prompt = seallm_chat_convo_format(message, True)
gen = llm.generate(prompt, sampling_params)
print(gen[0].outputs[0].text)
```
#### Fine-tuning SeaLLM-7B-v2.5
Fine-tuning should follow the chat format and accurately mask out source tokens. Here is an example.
```python
conversations = [
{"role": "system", "content": "You are helful assistant."},
{"role": "user", "content": "Hello world."},
{"role": "assistant", "content": "Hi there, how can I help?"},
{"role": "user", "content": "Tell me a joke."},
{"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]
def seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False):
"""
Inputs:
conversations: list of dict following openai format, eg
conversations = [
{"role": "system", "content": "You are helful assistant."},
{"role": "user", "content": "Hello world."},
{"role": "assistant", "content": "Hi there, how can I help?"},
{"role": "user", "content": "Tell me a joke."},
{"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]
add_assistant_prefix: whether to add assistant_prefix, only for inference decoding
Outputs:
tokenize_output_sample, {
"input_ids": ...
"token_type_ids": 1 if train and 0 if masked out (not train)
}
During training, need to create a labels, with masked-out tokens = -100 to avoid loss computations.
labels = sample['input_ids'].clone()
labels[sample['token_type_ids'] == 0] = -100
"""
TURN_TEMPLATE = "<|im_start|>{role}\n{content}<eos>\n"
TURN_PREFIX = "<|im_start|>{role}\n"
TURN_SUFFIX = "<eos>\n"
TURN_SUFFIX_TAKE = "<eos>"
sample = None
assistant_prefix_len = None
assistant_suffix_len = None
for turn_id, turn in enumerate(conversations):
prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
turn_sample = tokenizer(
prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False,
return_token_type_ids=True,
)
if turn['role'] == 'assistant':
if assistant_prefix_len is None:
assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False))
if assistant_suffix_len is None:
assistant_suffix_len = (
len(tokenizer.encode(TURN_SUFFIX.format(role=turn['role']), add_special_tokens=False)) -
len(tokenizer.encode(TURN_SUFFIX_TAKE, add_special_tokens=False))
)
turn_sample['token_type_ids'][assistant_prefix_len:-assistant_suffix_len] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len - assistant_suffix_len)
if sample is None:
sample = turn_sample
else:
for k in turn_sample.keys():
sample[k].extend(turn_sample[k])
if add_assistant_prefix:
assistant_prefix_sample = tokenizer(
TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False,
return_token_type_ids=True,
)
for k in sample.keys():
sample[k].extend(assistant_prefix_sample[k])
if tokenizer.add_bos_token:
sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids']
sample['attention_mask'] = [1] + sample['attention_mask']
sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids']
return sample
# ! testing
sample = seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations)
tokens = tokenizer.convert_ids_to_tokens(sample['input_ids'])
pairs = [(x, y) for x, y in zip(tokens, sample['token_type_ids'])]
print(pairs)
# source and special tokens is masked out (token_type 0), only assistant with <eos> is trained (token_type 1)
# [('<bos>', 0), ('<', 0), ('|', 0), ..., ('assistant', 0), ('\n', 0), ('Hi', 1), ('▁there', 1), (',', 1), ('▁how', 1), ('▁can', 1), ('▁I', 1), ('▁help', 1), ('?', 1), ('<eos>', 1), ('\n', 0), ('<', 0), ...
```
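To turn the tokenized sample into training labels, apply the mask described in the docstring above. Below is a minimal sketch, assuming the sample is converted to PyTorch tensors:
```python
import torch

# Batchify the single tokenized sample produced above.
batch = {k: torch.tensor([v]) for k, v in sample.items()}

# Mask out source and special tokens (token_type_ids == 0) so they are ignored by the loss.
labels = batch["input_ids"].clone()
labels[batch["token_type_ids"] == 0] = -100
batch["labels"] = labels
```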
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows: Corresponding Author: [[email protected]](mailto:[email protected])
**Author list and order will change!**
* `*` and `^` are equal contributions.
```
@article{damonlpsg2023seallm,
author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Weiwen Xu, Hou Pong Chan,
Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
Chaoqun Liu, Hang Zhang, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = 2023,
Eprint = {arXiv:2312.00738},
}
```
| [
"CHIA"
] | Non_BioNLP |
knowledgator/modern-gliner-bi-large-v1.0 | knowledgator | token-classification | [
"gliner",
"pytorch",
"NER",
"GLiNER",
"information extraction",
"encoder",
"entity recognition",
"modernbert",
"token-classification",
"en",
"dataset:urchade/pile-mistral-v0.1",
"dataset:numind/NuNER",
"dataset:knowledgator/GLINER-multi-task-synthetic-data",
"arxiv:2412.13663",
"arxiv:2311.08526",
"arxiv:2406.12925",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"region:us"
] | 1,735,048,822,000 | 2025-02-26T07:21:32 | 319 | 40 | ---
base_model:
- answerdotai/ModernBERT-large
- BAAI/bge-base-en-v1.5
datasets:
- urchade/pile-mistral-v0.1
- numind/NuNER
- knowledgator/GLINER-multi-task-synthetic-data
language:
- en
library_name: gliner
license: apache-2.0
pipeline_tag: token-classification
tags:
- NER
- GLiNER
- information extraction
- encoder
- entity recognition
- modernbert
---
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using bidirectional transformer encoders (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs) that, despite their flexibility, are costly and too large for resource-constrained scenarios.
This particular version utilizes bi-encoder architecture, where the textual encoder is [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) and entity label encoder is sentence transformer - [BGE-base-en](https://huggingface.co/BAAI/bge-base-en-v1.5).
Such architecture brings several advantages over uni-encoder GLiNER:
* An unlimited number of entities can be recognized at a single time;
* Faster inference if entity embeddings are preprocessed;
* Better generalization to unseen entities;
Utilizing ModernBERT delivers up to 4 times better efficiency than DeBERTa-based models and a context length of up to 8,192 tokens, while demonstrating comparable results.

However, bi-encoder architecture has some drawbacks such as a lack of inter-label interactions that make it hard for the model to disambiguate semantically similar but contextually different entities.
### Installation & Usage
Install or update the gliner package:
```bash
pip install gliner -U
```
You need to install the latest version of transformers to use this model:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("knowledgator/modern-gliner-bi-large-v1.0")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels, threshold=0.3)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
If you want to use **flash attention** or increase the sequence length, please check the following code:
Firstly, install Flash Attention and Triton packages:
```bash
pip install flash-attn triton
```
```python
model = GLiNER.from_pretrained("knowledgator/modern-gliner-bi-large-v1.0",
_attn_implementation = 'flash_attention_2',
max_len = 2048).to('cuda:0')
```
If you have a large number of entities and want to pre-embed them, please refer to the following code snippet:
```python
labels = ["your entities"]
texts = ["your texts"]
entity_embeddings = model.encode_labels(labels, batch_size = 8)
outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
```
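The batch output can then be consumed per input text. The sketch below assumes the batch call returns one entity list per text, in the same dict format as `predict_entities`:
```python
# Assumption: `outputs` is aligned with `texts`, each element being a list of
# {"text": ..., "label": ...} dicts as returned by predict_entities.
for text, entities in zip(texts, outputs):
    print(text)
    for entity in entities:
        print(" ", entity["text"], "=>", entity["label"])
```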
### Benchmarks

Below you can see the table with benchmarking results on various named entity recognition datasets:
| Dataset | Score |
|-------------------------|--------|
| ACE 2004 | 30.5% |
| ACE 2005 | 26.7% |
| AnatEM | 37.2% |
| Broad Tweet Corpus | 72.1% |
| CoNLL 2003 | 69.3% |
| FabNER | 22.0% |
| FindVehicle | 40.3% |
| GENIA_NER | 55.6% |
| HarveyNER | 16.1% |
| MultiNERD | 73.8% |
| Ontonotes | 39.2% |
| PolyglotNER | 49.1% |
| TweetNER7 | 39.6% |
| WikiANN en | 54.7% |
| WikiNeural | 83.7% |
| bc2gm | 53.7% |
| bc4chemd | 52.1% |
| bc5cdr | 67.0% |
| ncbi | 61.7% |
| **Average** | **49.7%** |
| | |
| CrossNER_AI | 58.1% |
| CrossNER_literature | 60.0% |
| CrossNER_music | 73.0% |
| CrossNER_politics | 72.8% |
| CrossNER_science | 66.5% |
| mit-movie | 47.6% |
| mit-restaurant | 40.6% |
| **Average (zero-shot benchmark)** | **59.8%** |
### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).
## Citation
If you use this model in your work, please cite:
```bibtex
@misc{modernbert,
title={Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference},
author={Benjamin Warner and Antoine Chaffin and Benjamin Clavié and Orion Weller and Oskar Hallström and Said Taghadouini and Alexis Gallagher and Raja Biswas and Faisal Ladhak and Tom Aarsen and Nathan Cooper and Griffin Adams and Jeremy Howard and Iacopo Poli},
year={2024},
eprint={2412.13663},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.13663},
}
```
```bibtex
@misc{zaratiana2023gliner,
title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
year={2023},
eprint={2311.08526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{stepanov2024gliner,
title={GLiNER multi-task: Generalist Lightweight Model for Various Information Extraction Tasks},
author={Ihor Stepanov and Mykhailo Shtopko},
year={2024},
eprint={2406.12925},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | [
"ANATEM",
"BC5CDR"
] | Non_BioNLP |
KappaNeuro/yaacov-agam-style | KappaNeuro | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"art",
"style",
"artist",
"painting",
"yaacov agam",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | 1,694,690,561,000 | 2023-09-14T11:22:45 | 8 | 3 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- art
- style
- artist
- painting
- yaacov agam
instance_prompt: Yaacov Agam Style
widget:
- text: Yaacov Agam Style - graphic of a rectangle split into four circles and triangles
- each with a different colour in gradients
- text: Yaacov Agam Style - an apple, in the style of pop-art, retro, classic, dots
and lines as the background
- text: Yaacov Agam Style - create image of Brazilian samba queen in the style of
Bridget Riley
- text: Yaacov Agam Style - agam a beautiful young lady like to sing
- text: Yaacov Agam Style - Design an image that captures the dynamic and transformative
nature of Yaacov Agam's artworks. Create a scene with interconnected elements
that appear to move and shift as the viewer engages with the image. Incorporate
elements such as rotating shapes, to generate optical illusions of movement and
transformation. The resulting image should convey a sense of playfulness, elegance,
and surprise, minimal
- text: Yaacov Agam Style - Craft a minimalist abstract composition inspired by statue
of liberty. Use bold lines, dynamic shapes, and contrasting colors to represent
the rhythm and energy of a particular song or genre. Strive for simplicity and
balance while capturing the essence of the music through visual elements. Let
the design evoke a sense of movement and evoke emotions associated with the chosen
musical style.
- text: Yaacov Agam Style - A kinetic art piece depicting the interplay of shadow
and light, inspired by the works of Alexander Calder and Yaacov Agam, using geometric
shapes and vivid colors to create an illusion of movement and depth
- text: Yaacov Agam Style - A background in the style of Julio Le Parc's Kinetic Energy
with elements of Latin America, but with the colors used
- text: Yaacov Agam Style - Conceptual collage on science and technology and music
and culture in the 1970s Sol LeWitt HD
- text: Yaacov Agam Style - 80s Art Deco. Postmodern 80s decor. Retro. Gemini symbolism.
Rainbow and black color palette
---
# Yaacov Agam Style ([CivitAI](https://civitai.com/models/107191))

> Yaacov Agam Style - graphic of a rectangle split into four circles and triangles - each with a different colour in gradients
<p>Yaacov Agam, born in 1928, is an Israeli artist known for his innovative approach to art and his pioneering work in the field of kinetic art. He is considered one of the leading figures in the Op Art and Kinetic Art movements.</p><p>Agam's artistic style is characterized by his exploration of visual perception and the use of movement and transformation in his artworks. He often creates interactive and multi-dimensional pieces that engage the viewer and challenge the traditional notion of static art.</p><p>One of his signature techniques is the use of lenticular printing, which allows his artworks to change and transform as the viewer moves or shifts their perspective. This creates a dynamic visual experience that blurs the boundaries between art and viewer, stimulating the senses and evoking a sense of wonder and participation.</p><p>Agam's artworks often incorporate vibrant colors, geometric patterns, and optical illusions, creating a sense of movement and energy. He explores themes of spirituality, unity, and the concept of the infinite, infusing his art with a deeper philosophical and metaphysical dimension.</p><p>Throughout his career, Agam has created numerous public art installations, including large-scale sculptures and architectural projects, which can be found in various cities around the world. These installations aim to transform public spaces and engage with the surrounding environment, inviting viewers to interact and experience art in a unique way.</p><p>Yaacov Agam's innovative and dynamic approach to art has made him a highly influential figure in the world of contemporary art. His contributions to kinetic art, optical illusions, and interactive installations have expanded the possibilities of artistic expression and challenged the viewer's perception of art. His artworks continue to inspire and captivate audiences, leaving a lasting impact on the art world.</p>
## Image examples for the model:

> Yaacov Agam Style - an apple, in the style of pop-art, retro, classic, dots and lines as the background

> Yaacov Agam Style - create image of Brazilian samba queen in the style of Bridget Riley

> Yaacov Agam Style - agam a beautiful young lady like to sing

> Yaacov Agam Style - Design an image that captures the dynamic and transformative nature of Yaacov Agam's artworks. Create a scene with interconnected elements that appear to move and shift as the viewer engages with the image. Incorporate elements such as rotating shapes, to generate optical illusions of movement and transformation. The resulting image should convey a sense of playfulness, elegance, and surprise, minimal

> Yaacov Agam Style - Craft a minimalist abstract composition inspired by statue of liberty. Use bold lines, dynamic shapes, and contrasting colors to represent the rhythm and energy of a particular song or genre. Strive for simplicity and balance while capturing the essence of the music through visual elements. Let the design evoke a sense of movement and evoke emotions associated with the chosen musical style.

> Yaacov Agam Style - A kinetic art piece depicting the interplay of shadow and light, inspired by the works of Alexander Calder and Yaacov Agam, using geometric shapes and vivid colors to create an illusion of movement and depth

> Yaacov Agam Style - A background in the style of Julio Le Parc's Kinetic Energy with elements of Latin America, but with the colors used

> Yaacov Agam Style - Conceptual collage on science and technology and music and culture in the 1970s Sol LeWitt HD

> Yaacov Agam Style - 80s Art Deco. Postmodern 80s decor. Retro. Gemini symbolism. Rainbow and black color palette
| [
"CRAFT"
] | Non_BioNLP |
TRI-ML/DCLM-1B-v0 | TRI-ML | null | [
"transformers",
"safetensors",
"openlm",
"arxiv:2406.11794",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,153,095,000 | 2024-07-25T23:21:51 | 42 | 12 | ---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/63118add64939fabc0108b28/BB42g4V8HTxb5dR4tcy8A.png" alt="DCLM Logo" width="300" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Check out our more recent, higher performing model here! https://huggingface.co/TRI-ML/DCLM-1B/
# Model Card for DCLM-1B-v0
DCLM-1B-v0 is a 1.4 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
## Model Details
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|:------:|:-----------------:|:--------:|:-------------:|:-----------------:|:----------------:|
| 1.4B | 2.6T | 24 | 2048 | 16 | 2048 |
### Model Description
- **Developed by:** DataComp for Language Models (DCLM) Team
- **Model type:** Decoder-only Transformer language model
- **Language(s):** English (primarily)
- **License:** Apache 2.0
- **Contact:** [email protected]
- **Date:** July 2024
### Model Sources
- **Repository:** https://github.com/mlfoundations/dclm
- **Dataset:** https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)
## Quickstart
First install open_lm
```
pip install git+https://github.com/mlfoundations/open_lm.git
```
Then you can load the model using HF's Auto classes as follows:
```python
from open_lm.hf import *
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TRI-ML/DCLM-1B-v0")
model = AutoModelForCausalLM.from_pretrained("TRI-ML/DCLM-1B-v0")
inputs = tokenizer(["Machine learning is"], return_tensors="pt")
gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(inputs['input_ids'], **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
```
### Training Details
The model was trained using the following setup:
- **Architecture:** Decoder-only Transformer
- **Framework:** PyTorch with OpenLM
- **Optimizer:** AdamW
- **Learning Rate:** 1e-2 (peak)
- **Weight Decay:** 1e-2
- **Batch Size:** 2048 sequences
- **Sequence Length:** 2048 tokens
- **Total Training Tokens:** 2.6T
- **Hardware:** Trained on H100 GPUs
We train our 1.4B model for 2.6T tokens on DCLM-Baseline.
Similar to the 7B model training recipe described in Appendix P of our paper,
we train for 2.3T tokens on DCLM-baseline combined with the StarCoder and ProofPile2 datasets,
with the hyper-parameters described above.
Note that we use a schedule set for the full dataset, and stop training early at 2.3T tokens.
Then, we cool down the model on the same dataset to the cooldown LR over 200B tokens.
We will update our paper soon with more training details.
## Evaluation
Here are the evaluation results for DCLM-1B on various tasks (using [llm-foundry](https://github.com/mosaicml/llm-foundry) eval suite)
| Task | Score |
|------------------------------------------|---------|
| AGI Eval LSAT AR | 0.2348 |
| AGI Eval LSAT LR | 0.3098 |
| AGI Eval LSAT RC | 0.3321 |
| AGI Eval SAT English | 0.3883 |
| AGI Eval SAT Math (CoT) | 0.0182 |
| AQuA (CoT) | 0.0245 |
| ARC (challenge) | 0.4343 |
| ARC (easy) | 0.7290 |
| BBQ | 0.4670 |
| BigBench Conceptual Combinations | 0.4660 |
| BigBench Conlang Translation | 0.0732 |
| BigBench CS Algorithms | 0.4515 |
| BigBench Dyck Languages | 0.1990 |
| BigBench Elementary Math QA | 0.2558 |
| BigBench Language Identification | 0.2911 |
| BigBench Logical Deduction | 0.2480 |
| BigBench Misconceptions | 0.5068 |
| BigBench Novel Concepts | 0.5312 |
| BigBench Operators | 0.2714 |
| BigBench QA Wikidata | 0.6687 |
| BigBench Repeat Copy Logic | 0.1562 |
| BigBench Strange Stories | 0.6839 |
| BigBench Strategy QA | 0.5762 |
| BigBench Understanding Fables | 0.4127 |
| BoolQ | 0.7131 |
| CommonSenseQA | 0.6110 |
| COPA | 0.7900 |
| CoQA | 0.4257 |
| Enterprise PII Classification | 0.5110 |
| GPQA Diamond | 0.2121 |
| GPQA | 0.2344 |
| GSM8K (CoT) | 0.0371 |
| HellaSwag | 0.7087 |
| HellaSwag (zero-shot) | 0.7001 |
| Jeopardy | 0.4218 |
| LAMBADA (OpenAI) | 0.6938 |
| LogiQA | 0.3026 |
| MathQA | 0.2598 |
| MMLU (few-shot) | 0.4193 |
| MMLU (zero-shot) | 0.3543 |
| OpenBookQA | 0.4380 |
| PIQA | 0.7786 |
| PubMedQA (labeled) | 0.2560 |
| Simple Arithmetic (no spaces) | 0.0280 |
| Simple Arithmetic (with spaces) | 0.0300 |
| SIQA | 0.6735 |
| SQuAD | 0.5424 |
| SVAMP (CoT) | 0.1800 |
| TriviaQA (small subset) | 0.3603 |
| Winogender (MC female) | 0.4833 |
| Winogender (MC male) | 0.5000 |
| Winograd | 0.8352 |
| Winogrande | 0.6527 |
Note: All scores are presented as decimal values between 0 and 1, representing the proportion of correct answers or the model's performance on each task.
Below we compare to the recently released SmolLM (https://huggingface.co/blog/smollm) on key benchmarks. As described in the paper, Core accuracy is the average of
centered accuracy on 22 tasks (including HellaSwag and ARC-E), Extended is centered accuracy averaged over 53 tasks.
We evaluate the models using llm-foundry.
| Task | Core | Extended | MMLU 5-shot |
|:---------:|:------:|:----------:|:-------------:|
| DCLM-1B | 42.3 | 25.1 | 41.9 |
| SmolLM | 36.3 | 21.2 | 30.0 |
## Limitations and Biases
While DCLM-1B demonstrates strong performance across a range of tasks, it's important to note:
1. The model may exhibit biases present in its training data, which is derived from web crawl data.
2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
3. Performance on tasks not included in the evaluation suite may vary.
4. The model's knowledge is limited to its training data cutoff date.
## Ethical Considerations
Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.
## Citation
If you use this model in your research, please cite:
```
@article{Li2024DataCompLM,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
journal={arXiv preprint arXiv:2406.11794},
year={2024}
}
```
| [
"PUBMEDQA"
] | Non_BioNLP |
bartowski/Einstein-v5-v0.2-7B-GGUF | bartowski | text-generation | [
"gguf",
"axolotl",
"generated_from_trainer",
"Mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"text-generation",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"base_model:mistral-community/Mistral-7B-v0.2",
"base_model:quantized:mistral-community/Mistral-7B-v0.2",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,711,560,571,000 | 2024-03-27T17:57:51 | 270 | 6 | ---
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
license: other
pipeline_tag: text-generation
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
quantized_by: bartowski
---
## Llamacpp Quantizations of Einstein-v5-v0.2-7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2536">b2536</a> for quantization.
Original model: https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Einstein-v5-v0.2-7B-Q8_0.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [Einstein-v5-v0.2-7B-Q6_K.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [Einstein-v5-v0.2-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [Einstein-v5-v0.2-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [Einstein-v5-v0.2-7B-Q5_0.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [Einstein-v5-v0.2-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight. |
| [Einstein-v5-v0.2-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [Einstein-v5-v0.2-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Decent quality, similar to Q4_K_S, uses a newer quantization method. |
| [Einstein-v5-v0.2-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
| [Einstein-v5-v0.2-7B-Q4_0.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [Einstein-v5-v0.2-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [Einstein-v5-v0.2-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [Einstein-v5-v0.2-7B-IQ3_M.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
| [Einstein-v5-v0.2-7B-IQ3_S.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| [Einstein-v5-v0.2-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [Einstein-v5-v0.2-7B-Q2_K.gguf](https://huggingface.co/bartowski/Einstein-v5-v0.2-7B-GGUF/blob/main/Einstein-v5-v0.2-7B-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended.
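For example, a single quant from the table above can be fetched programmatically with `huggingface_hub` (a sketch; pick whichever filename fits your hardware):
```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file (not the whole repo); Q4_K_M is a reasonable default.
path = hf_hub_download(
    repo_id="bartowski/Einstein-v5-v0.2-7B-GGUF",
    filename="Einstein-v5-v0.2-7B-Q4_K_M.gguf",
)
print(path)
```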
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| [
"SCIQ"
] | Non_BioNLP |
fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-994884 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Science",
"Research",
"Verification",
"Dataset",
"AI",
"custom_code",
"en",
"dataset:fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-994884",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,447,634,000 | 2024-05-23T07:00:47 | 10 | 0 | ---
datasets:
- fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-994884
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Science
- Research
- Verification
- Dataset
- AI
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
scientific claim verification search
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-994884',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
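For the claim-verification use case, the same pattern extends to ranking candidate abstracts against a claim. The snippet below is a sketch with illustrative strings (not actual SciFact data):
```python
claim = "Vitamin D supplementation reduces the risk of acute respiratory infections."
abstracts = [
    "A meta-analysis of randomized controlled trials on vitamin D and respiratory tract infections.",
    "An evaluation of lithium-ion battery degradation under fast-charging conditions.",
]

claim_emb = model.encode([claim])
abstract_embs = model.encode(abstracts)

scores = cos_sim(claim_emb, abstract_embs)[0]
best = int(scores.argmax())
print(f"Best match (score={float(scores[best]):.3f}): {abstracts[best]}")
```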
| [
"SCIFACT"
] | Non_BioNLP |
asjoberg/openELM-270M-instruct-raw | asjoberg | null | [
"safetensors",
"openelm",
"custom_code",
"arxiv:2404.14619",
"license:other",
"region:us"
] | 1,739,226,052,000 | 2025-02-10T22:24:32 | 14 | 0 | ---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your hugging face access token.
Additional arguments to the hugging face generate function can be passed via `generate_kwargs`. As an example, to speedup the inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
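For a quick check without the helper script, the model can also be loaded directly through `transformers`. This is a minimal sketch, assuming `trust_remote_code=True` and the gated LLaMA-2 tokenizer, mirroring what `generate_openelm.py` does:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM reuses the LLaMA-2 tokenizer (gated; requires an HF access token).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M-Instruct", trust_remote_code=True)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```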
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
```
### Evaluate OpenELM
```bash
# OpenELM-450M-Instruct
hf_model=apple/OpenELM-450M-Instruct
# this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
| [
"SCIQ"
] | Non_BioNLP |
FreedomIntelligence/Apollo2-3.8B | FreedomIntelligence | question-answering | [
"safetensors",
"phi3",
"biology",
"medical",
"question-answering",
"custom_code",
"ar",
"en",
"zh",
"ko",
"ja",
"mn",
"th",
"vi",
"lo",
"mg",
"de",
"pt",
"es",
"fr",
"ru",
"it",
"hr",
"gl",
"cs",
"co",
"la",
"uk",
"bs",
"bg",
"eo",
"sq",
"da",
"sa",
"gn",
"sr",
"sk",
"gd",
"lb",
"hi",
"ku",
"mt",
"he",
"ln",
"bm",
"sw",
"ig",
"rw",
"ha",
"dataset:FreedomIntelligence/ApolloMoEDataset",
"arxiv:2410.10626",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"license:apache-2.0",
"region:us"
] | 1,728,892,028,000 | 2024-11-20T03:43:07 | 32 | 1 | ---
base_model:
- microsoft/Phi-3-mini-4k-instruct
datasets:
- FreedomIntelligence/ApolloMoEDataset
language:
- ar
- en
- zh
- ko
- ja
- mn
- th
- vi
- lo
- mg
- de
- pt
- es
- fr
- ru
- it
- hr
- gl
- cs
- co
- la
- uk
- bs
- bg
- eo
- sq
- da
- sa
- gn
- sr
- sk
- gd
- lb
- hi
- ku
- mt
- he
- ln
- bm
- sw
- ig
- rw
- ha
license: apache-2.0
metrics:
- accuracy
pipeline_tag: question-answering
tags:
- biology
- medical
---
# Democratizing Medical LLMs For Much More Languages
Covering 12 Major Languages including English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, Portuguese and 38 Minor Languages so far.
<p align="center">
📃 <a href="https://arxiv.org/abs/2410.10626" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset" target="_blank">ApolloMoEDataset</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEBench" target="_blank">ApolloMoEBench</a> • 🤗 <a href="https://huggingface.co/collections/FreedomIntelligence/apollomoe-and-apollo2-670ddebe3bb1ba1aebabbf2c" target="_blank">Models</a> •🌐 <a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Apollo</a> • 🌐 <a href="https://github.com/FreedomIntelligence/ApolloMoE" target="_blank">ApolloMoE</a>
</p>

## 🌈 Update
* **[2024.10.15]** ApolloMoE repo is published!🎉
## Languages Coverage
12 Major Languages and 38 Minor Languages
<details>
<summary>Click to view the Languages Coverage</summary>

</details>
## Architecture
<details>
<summary>Click to view the MoE routing image</summary>

</details>
## Results
#### Dense
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-0.5B" target="_blank">Apollo2-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-1.5B" target="_blank">Apollo2-1.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-2B" target="_blank">Apollo2-2B</a>
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-3.8B" target="_blank">Apollo2-3.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-7B" target="_blank">Apollo2-7B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo2-9B" target="_blank">Apollo2-9B</a>
<details>
<summary>Click to view the Dense Models Results</summary>

</details>
#### Post-MoE
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-0.5B" target="_blank">Apollo-MoE-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-1.5B" target="_blank">Apollo-MoE-1.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MoE-7B" target="_blank">Apollo-MoE-7B</a>
<details>
<summary>Click to view the Post-MoE Models Results</summary>

</details>
## Usage Format
##### Apollo2
- 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
- 2B, 9B: User:{query}\nAssistant:{response}\<eos\>
- 3.8B: <|user|>\n{query}<|end|><|assistant|>\n{response}<|end|> (see the example below)
##### Apollo-MoE
- 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
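As a concrete illustration of the 3.8B template above (the model this card describes), here is a minimal, hypothetical sketch of building the prompt by hand and generating with `transformers`; the query text and generation settings are placeholders rather than recommendations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/Apollo2-3.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Apollo2-3.8B template from above: <|user|>\n{query}<|end|><|assistant|>\n{response}<|end|>
query = "What are common early symptoms of iron-deficiency anemia?"
prompt = f"<|user|>\n{query}<|end|><|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```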
## Dataset & Evaluation
- Dataset
🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset" target="_blank">ApolloMoEDataset</a>
<details><summary>Click to expand</summary>

- [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train)
</details>
- Evaluation
🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEBench" target="_blank">ApolloMoEBench</a>
<details><summary>Click to expand</summary>
- EN:
- [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test)
- [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper.
- [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- ZH:
- [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test)
- [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper
- Randomly sample 2,000 multiple-choice questions with single answer.
- [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu)
- Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
- [CExam](https://github.com/williamliujl/CMExam): Not used in the paper
- Randomly sample 2,000 multiple-choice questions
- ES: [Head_qa](https://huggingface.co/datasets/head_qa)
- FR:
- [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA)
- [MMLU_FR]
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- AR: [MMLU_AR](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- JA: [IgakuQA](https://github.com/jungokasai/IgakuQA)
- KO: [KorMedMCQA](https://huggingface.co/datasets/sean0042/KorMedMCQA)
- IT:
- [MedExpQA](https://huggingface.co/datasets/HiTZ/MedExpQA)
- [MMLU_IT]
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- DE: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): German part
- PT: [BioInstructQA](https://huggingface.co/datasets/BioMistral/BioInstructQA): Portuguese part
- RU: [RuMedBench](https://github.com/sb-ai-lab/MedBench)
</details>
## Model Download and Inference
We take Apollo-MoE-0.5B as an example
1. Log in to Hugging Face
```
huggingface-cli login --token $HUGGINGFACE_TOKEN
```
2. Download the model to a local directory
```python
from huggingface_hub import snapshot_download
import os

# Choose where the model snapshot should be stored locally
local_model_dir = os.path.join('/path/to/models/dir', 'Apollo-MoE-0.5B')
snapshot_download(repo_id="FreedomIntelligence/Apollo-MoE-0.5B", local_dir=local_model_dir)
```
3. Inference Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import os

local_model_dir = os.path.join('/path/to/models/dir', 'Apollo-MoE-0.5B')

model = AutoModelForCausalLM.from_pretrained(local_model_dir, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(local_model_dir, trust_remote_code=True)

# Greedy decoding with a short generation budget
generation_config = GenerationConfig.from_pretrained(
    local_model_dir, pad_token_id=tokenizer.pad_token_id, num_return_sequences=1,
    max_new_tokens=7, min_new_tokens=2, do_sample=False, temperature=1.0, top_k=50, top_p=1.0)

inputs = tokenizer('Answer directly.\nThe capital of Mongolia is Ulaanbaatar.\nThe capital of Iceland is Reykjavik.\nThe capital of Australia is', return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## Results reproduction
<details><summary>Click to expand</summary>
We take Apollo2-7B or Apollo-MoE-0.5B as examples.
1. Download the dataset for the project:
```
bash 0.download_data.sh
```
2. Prepare test and dev data for the specific model:
   - Create test data with the model-specific special tokens
```
bash 1.data_process_test&dev.sh
```
3. Prepare training data for the specific model (create tokenized data in advance):
   - You can adjust the training data order and the number of training epochs in this step
```
bash 2.data_process_train.sh
```
4. Train the model
   - If you want to train on multiple nodes, please refer to ./src/sft/training_config/zero_multi.yaml
```
bash 3.single_node_train.sh
```
5. Evaluate your model: generate scores for the benchmarks
```
bash 4.eval.sh
```
</details>
## Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
```
@misc{zheng2024efficientlydemocratizingmedicalllms,
title={Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts},
author={Guorui Zheng and Xidong Wang and Juhao Liang and Nuo Chen and Yuping Zheng and Benyou Wang},
year={2024},
eprint={2410.10626},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.10626},
}
``` | [
"HEAD-QA",
"MEDQA",
"PUBMEDQA"
] | BioNLP |
QuantFactory/Phi-3-medium-4k-instruct-GGUF | QuantFactory | text-generation | [
"transformers",
"gguf",
"nlp",
"code",
"text-generation",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,716,882,235,000 | 2024-05-28T09:12:26 | 125 | 1 | ---
language:
- multilingual
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# QuantFactory/Phi-3-medium-4k-instruct-GGUF
This is a quantized version of [microsoft/Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) created using llama.cpp.
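As a rough sketch of running one of the GGUF files locally (not an official recipe), `llama-cpp-python` can load them directly; the quant filename below is a placeholder for whichever file you download from this repo:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder filename -- substitute the quant you actually downloaded from this repo
llm = Llama(model_path="./Phi-3-medium-4k-instruct.Q4_K_M.gguf", n_ctx=4096)

prompt = "<|user|>\nHow to explain Internet for a medieval knight?<|end|>\n<|assistant|>\n"
out = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(out["choices"][0]["text"])
```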
# Model Description
Phi-3-Medium-4K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, with the Medium version available in two variants, [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which refer to the context length (in tokens) they can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety.
When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-4K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)
| | Short Context | Long Context |
| ------- | ------------- | ------------ |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)|
## Intended Uses
**Primary use cases**
The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3-Medium-4K-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3-Medium-4K-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai).
### Tokenizer
Phi-3-Medium-4K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
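As a rough, hypothetical sketch of such an extension (the token names below are invented for illustration; if you instead repurpose the existing placeholder tokens, no resize should be necessary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-medium-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Invented example tokens for a downstream fine-tune
num_added = tokenizer.add_tokens(["<|tool_call|>", "<|tool_result|>"], special_tokens=True)

# Keep the embedding matrix in sync with the tokenizer if new rows are required
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```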
### Chat Format
Given the nature of the training data, the Phi-3-Medium-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
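Equivalently, the chat template bundled with the tokenizer should render this format from a list of messages; here is a small sketch (it is worth double-checking the rendered string against the format above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-medium-4k-instruct")

messages = [
    {"role": "user", "content": "I am going to Paris, what should I see?"},
    {"role": "assistant", "content": "Paris, the capital of France, is known for ..."},
    {"role": "user", "content": "What is so great about #1?"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected to end with the <|assistant|> generation marker
```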
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model_id = "microsoft/Phi-3-medium-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*
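A quick, informal way to check for (and, if needed, restore) the BOS token on an encoded prompt:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-medium-4k-instruct")
prompt = "<|user|>\nHow to explain Internet for a medieval knight?<|end|>\n<|assistant|>\n"

ids = tokenizer(prompt).input_ids
if tokenizer.bos_token_id is not None and ids[0] != tokenizer.bos_token_id:
    ids = [tokenizer.bos_token_id] + ids  # prepend <s> if the framework dropped it
```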
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3-Medium-4K-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 42 days
* Training data: 4.8T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release dates: The model weight is released on May 21, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We focus on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a Premier League game on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small-size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
## Benchmarks
We report the results for Phi-3-Medium-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
|Benchmark|Phi-3-Medium-4K-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
|AGI Eval<br>5-shot|50.2|50.1|54.0|56.9|48.4|49.0|59.6|
|MMLU<br>5-shot|78.0|73.8|76.2|80.2|71.4|66.7|84.0|
|BigBench Hard<br>3-shot|81.4|74.1|81.8|80.4|68.3|75.6|87.7|
|ANLI<br>7-shot|55.8|63.4|65.2|68.3|58.1|64.2|71.7|
|HellaSwag<br>5-shot|82.4|78.0|79.0|82.6|78.8|76.2|88.3|
|ARC Challenge<br>10-shot|91.6|86.9|91.3|93.0|87.4|88.3|95.6|
|ARC Easy<br>10-shot|97.7|95.7|96.9|98.2|96.3|96.1|98.8|
|BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3|
|CommonsenseQA<br>10-shot|82.8|82.0|82.0|84.4|79.6|81.8|86.7|
|MedQA<br>2-shot|69.9|59.2|67.9|78.5|63.4|58.2|83.7|
|OpenBookQA<br>10-shot|87.4|86.8|88.6|91.8|86.0|86.4|93.4|
|PIQA<br>5-shot|87.9|86.4|85.0|85.3|86.6|86.2|90.1|
|Social IQA<br>5-shot|80.2|75.3|78.2|81.1|68.3|75.4|81.7|
|TruthfulQA (MC2)<br>10-shot|75.1|57.8|67.4|81.9|67.7|72.6|85.2|
|WinoGrande<br>5-shot|81.5|77.0|75.3|83.3|68.8|72.2|86.7|
|TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3|
|GSM8K Chain of Thought<br>8-shot|91.0|78.3|83.8|93.5|78.1|80.4|94.2|
|HumanEval<br>0-shot|62.2|61.6|39.6|78.7|62.2|64.4|79.9|
|MBPP<br>3-shot|75.2|68.9|70.7|81.3|77.8|73.2|86.7|
|Average|78.5|75.0|76.3|82.5|74.3|75.4|85.2|
We take a closer look at different categories across 80 public benchmark datasets at the table below:
|Benchmark|Phi-3-Medium-4K-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
|Popular aggregated benchmark|75.4|69.9|73.4|76.3|67.0|67.5|80.5|
|Reasoning|84.1|79.3|81.5|86.7|78.3|80.4|89.3|
|Language understanding|73.9|75.6|78.1|76.9|68.7|76.2|80.7|
|Code generation|66.1|68.6|60.0|69.3|70.4|66.7|76.1|
|Math|52.8|45.3|52.5|59.7|52.8|50.9|67.1|
|Factual knowledge|48.3|60.3|60.6|52.4|63.4|54.6|45.9|
|Multilingual|62.9|67.8|69.8|62.0|67.0|73.4|78.2|
|Robustness|66.5|57.9|65.5|78.7|69.3|69.7|84.6|
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
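If your GPU is not among these, one possible workaround, shown as a sketch rather than an official recommendation (the `attn_implementation` argument assumes a reasonably recent `transformers` release), is to request a non-flash attention backend when loading:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-medium-4k-instruct",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",  # fall back from flash attention on unsupported GPUs
)
```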
If you want to run the model on:
+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda)
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Medium models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DML, ONNX Runtime provides cross-platform support for Phi-3 Medium across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies. | [
"MEDQA"
] | Non_BioNLP |
RichardErkhov/EleutherAI_-_pythia-1.4b-8bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,713,858,337,000 | 2024-04-23T07:47:05 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1.4b - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1.4b/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1.4B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1.4B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-1.4B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1.4B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
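For reference, these figures are mutually consistent: 143,000 steps × 2,097,152 tokens per step = 299,892,736,000 tokens in total, and a checkpoint every 1,000 steps corresponds to 1,000 × 2,097,152 = 2,097,152,000 tokens between checkpoints.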
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
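To reproduce a subset of these numbers yourself, here is a rough sketch using a recent harness release (the exact CLI and task names have changed across harness versions, so treat the Pythia repository's pinned harness as the authoritative reference):

```bash
# After installing the LM Evaluation Harness (see its README for setup)
lm_eval --model hf \
  --model_args pretrained=EleutherAI/pythia-1.4b,revision=step143000 \
  --tasks lambada_openai,piqa,winogrande,arc_easy,sciq \
  --batch_size 8
```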
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"SCIQ"
] | Non_BioNLP |
michaelfeil/ct2fast-e5-large | michaelfeil | sentence-similarity | [
"sentence-transformers",
"bert",
"ctranslate2",
"int8",
"float16",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,686,860,764,000 | 2023-10-13T13:39:03 | 13 | 2 | ---
language:
- en
license: mit
tags:
- ctranslate2
- int8
- float16
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: e5-large
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.68656716417911
- type: ap
value: 41.336896075573584
- type: f1
value: 71.788561468075
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.04965
- type: ap
value: 86.24637009569418
- type: f1
value: 90.03896671762645
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.016000000000005
- type: f1
value: 42.1942431880186
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.107000000000003
- type: map_at_10
value: 40.464
- type: map_at_100
value: 41.577999999999996
- type: map_at_1000
value: 41.588
- type: map_at_3
value: 35.301
- type: map_at_5
value: 38.263000000000005
- type: mrr_at_1
value: 25.605
- type: mrr_at_10
value: 40.64
- type: mrr_at_100
value: 41.760000000000005
- type: mrr_at_1000
value: 41.77
- type: mrr_at_3
value: 35.443000000000005
- type: mrr_at_5
value: 38.448
- type: ndcg_at_1
value: 25.107000000000003
- type: ndcg_at_10
value: 49.352000000000004
- type: ndcg_at_100
value: 53.98500000000001
- type: ndcg_at_1000
value: 54.208
- type: ndcg_at_3
value: 38.671
- type: ndcg_at_5
value: 43.991
- type: precision_at_1
value: 25.107000000000003
- type: precision_at_10
value: 7.795000000000001
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.145
- type: precision_at_5
value: 12.262
- type: recall_at_1
value: 25.107000000000003
- type: recall_at_10
value: 77.952
- type: recall_at_100
value: 97.866
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 48.435
- type: recall_at_5
value: 61.309000000000005
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.19278045044154
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.37976387757665
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.07433334608074
- type: mrr
value: 73.44347711383723
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.4298072183543
- type: cos_sim_spearman
value: 84.73144873582848
- type: euclidean_pearson
value: 85.15885058870728
- type: euclidean_spearman
value: 85.42062106559356
- type: manhattan_pearson
value: 84.89409921792054
- type: manhattan_spearman
value: 85.31941394024344
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.14285714285714
- type: f1
value: 84.11674412565644
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.600076342340785
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.08861812135148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.684000000000005
- type: map_at_10
value: 41.675000000000004
- type: map_at_100
value: 42.963
- type: map_at_1000
value: 43.078
- type: map_at_3
value: 38.708999999999996
- type: map_at_5
value: 40.316
- type: mrr_at_1
value: 39.485
- type: mrr_at_10
value: 47.152
- type: mrr_at_100
value: 47.96
- type: mrr_at_1000
value: 48.010000000000005
- type: mrr_at_3
value: 44.754
- type: mrr_at_5
value: 46.285
- type: ndcg_at_1
value: 39.485
- type: ndcg_at_10
value: 46.849000000000004
- type: ndcg_at_100
value: 52.059
- type: ndcg_at_1000
value: 54.358
- type: ndcg_at_3
value: 42.705
- type: ndcg_at_5
value: 44.663000000000004
- type: precision_at_1
value: 39.485
- type: precision_at_10
value: 8.455
- type: precision_at_100
value: 1.3379999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.695
- type: precision_at_5
value: 13.905999999999999
- type: recall_at_1
value: 32.684000000000005
- type: recall_at_10
value: 56.227000000000004
- type: recall_at_100
value: 78.499
- type: recall_at_1000
value: 94.021
- type: recall_at_3
value: 44.157999999999994
- type: recall_at_5
value: 49.694
- type: map_at_1
value: 31.875999999999998
- type: map_at_10
value: 41.603
- type: map_at_100
value: 42.825
- type: map_at_1000
value: 42.961
- type: map_at_3
value: 38.655
- type: map_at_5
value: 40.294999999999995
- type: mrr_at_1
value: 40.127
- type: mrr_at_10
value: 47.959
- type: mrr_at_100
value: 48.59
- type: mrr_at_1000
value: 48.634
- type: mrr_at_3
value: 45.786
- type: mrr_at_5
value: 46.964
- type: ndcg_at_1
value: 40.127
- type: ndcg_at_10
value: 47.176
- type: ndcg_at_100
value: 51.346000000000004
- type: ndcg_at_1000
value: 53.502
- type: ndcg_at_3
value: 43.139
- type: ndcg_at_5
value: 44.883
- type: precision_at_1
value: 40.127
- type: precision_at_10
value: 8.72
- type: precision_at_100
value: 1.387
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.637
- type: precision_at_5
value: 14.446
- type: recall_at_1
value: 31.875999999999998
- type: recall_at_10
value: 56.54900000000001
- type: recall_at_100
value: 73.939
- type: recall_at_1000
value: 87.732
- type: recall_at_3
value: 44.326
- type: recall_at_5
value: 49.445
- type: map_at_1
value: 41.677
- type: map_at_10
value: 52.222
- type: map_at_100
value: 53.229000000000006
- type: map_at_1000
value: 53.288000000000004
- type: map_at_3
value: 49.201
- type: map_at_5
value: 51.00599999999999
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 55.745999999999995
- type: mrr_at_100
value: 56.433
- type: mrr_at_1000
value: 56.464999999999996
- type: mrr_at_3
value: 53.37499999999999
- type: mrr_at_5
value: 54.858
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 57.406
- type: ndcg_at_100
value: 61.403
- type: ndcg_at_1000
value: 62.7
- type: ndcg_at_3
value: 52.298
- type: ndcg_at_5
value: 55.02
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 8.865
- type: precision_at_100
value: 1.179
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 22.612
- type: precision_at_5
value: 15.461
- type: recall_at_1
value: 41.677
- type: recall_at_10
value: 69.346
- type: recall_at_100
value: 86.344
- type: recall_at_1000
value: 95.703
- type: recall_at_3
value: 55.789
- type: recall_at_5
value: 62.488
- type: map_at_1
value: 25.991999999999997
- type: map_at_10
value: 32.804
- type: map_at_100
value: 33.812999999999995
- type: map_at_1000
value: 33.897
- type: map_at_3
value: 30.567
- type: map_at_5
value: 31.599
- type: mrr_at_1
value: 27.797
- type: mrr_at_10
value: 34.768
- type: mrr_at_100
value: 35.702
- type: mrr_at_1000
value: 35.766
- type: mrr_at_3
value: 32.637
- type: mrr_at_5
value: 33.614
- type: ndcg_at_1
value: 27.797
- type: ndcg_at_10
value: 36.966
- type: ndcg_at_100
value: 41.972
- type: ndcg_at_1000
value: 44.139
- type: ndcg_at_3
value: 32.547
- type: ndcg_at_5
value: 34.258
- type: precision_at_1
value: 27.797
- type: precision_at_10
value: 5.514
- type: precision_at_100
value: 0.8340000000000001
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 13.333
- type: precision_at_5
value: 9.04
- type: recall_at_1
value: 25.991999999999997
- type: recall_at_10
value: 47.941
- type: recall_at_100
value: 71.039
- type: recall_at_1000
value: 87.32799999999999
- type: recall_at_3
value: 36.01
- type: recall_at_5
value: 40.056000000000004
- type: map_at_1
value: 17.533
- type: map_at_10
value: 24.336
- type: map_at_100
value: 25.445
- type: map_at_1000
value: 25.561
- type: map_at_3
value: 22.116
- type: map_at_5
value: 23.347
- type: mrr_at_1
value: 21.642
- type: mrr_at_10
value: 28.910999999999998
- type: mrr_at_100
value: 29.836000000000002
- type: mrr_at_1000
value: 29.907
- type: mrr_at_3
value: 26.638
- type: mrr_at_5
value: 27.857
- type: ndcg_at_1
value: 21.642
- type: ndcg_at_10
value: 28.949
- type: ndcg_at_100
value: 34.211000000000006
- type: ndcg_at_1000
value: 37.031
- type: ndcg_at_3
value: 24.788
- type: ndcg_at_5
value: 26.685
- type: precision_at_1
value: 21.642
- type: precision_at_10
value: 5.137
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 11.733
- type: precision_at_5
value: 8.383000000000001
- type: recall_at_1
value: 17.533
- type: recall_at_10
value: 38.839
- type: recall_at_100
value: 61.458999999999996
- type: recall_at_1000
value: 81.58
- type: recall_at_3
value: 27.328999999999997
- type: recall_at_5
value: 32.168
- type: map_at_1
value: 28.126
- type: map_at_10
value: 37.872
- type: map_at_100
value: 39.229
- type: map_at_1000
value: 39.353
- type: map_at_3
value: 34.93
- type: map_at_5
value: 36.59
- type: mrr_at_1
value: 34.071
- type: mrr_at_10
value: 43.056
- type: mrr_at_100
value: 43.944
- type: mrr_at_1000
value: 43.999
- type: mrr_at_3
value: 40.536
- type: mrr_at_5
value: 42.065999999999995
- type: ndcg_at_1
value: 34.071
- type: ndcg_at_10
value: 43.503
- type: ndcg_at_100
value: 49.120000000000005
- type: ndcg_at_1000
value: 51.410999999999994
- type: ndcg_at_3
value: 38.767
- type: ndcg_at_5
value: 41.075
- type: precision_at_1
value: 34.071
- type: precision_at_10
value: 7.843999999999999
- type: precision_at_100
value: 1.2489999999999999
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 18.223
- type: precision_at_5
value: 13.050999999999998
- type: recall_at_1
value: 28.126
- type: recall_at_10
value: 54.952
- type: recall_at_100
value: 78.375
- type: recall_at_1000
value: 93.29899999999999
- type: recall_at_3
value: 41.714
- type: recall_at_5
value: 47.635
- type: map_at_1
value: 25.957
- type: map_at_10
value: 34.749
- type: map_at_100
value: 35.929
- type: map_at_1000
value: 36.043
- type: map_at_3
value: 31.947
- type: map_at_5
value: 33.575
- type: mrr_at_1
value: 32.078
- type: mrr_at_10
value: 39.844
- type: mrr_at_100
value: 40.71
- type: mrr_at_1000
value: 40.77
- type: mrr_at_3
value: 37.386
- type: mrr_at_5
value: 38.83
- type: ndcg_at_1
value: 32.078
- type: ndcg_at_10
value: 39.97
- type: ndcg_at_100
value: 45.254
- type: ndcg_at_1000
value: 47.818
- type: ndcg_at_3
value: 35.453
- type: ndcg_at_5
value: 37.631
- type: precision_at_1
value: 32.078
- type: precision_at_10
value: 7.158
- type: precision_at_100
value: 1.126
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 16.743
- type: precision_at_5
value: 11.872
- type: recall_at_1
value: 25.957
- type: recall_at_10
value: 50.583
- type: recall_at_100
value: 73.593
- type: recall_at_1000
value: 91.23599999999999
- type: recall_at_3
value: 37.651
- type: recall_at_5
value: 43.626
- type: map_at_1
value: 27.1505
- type: map_at_10
value: 34.844833333333334
- type: map_at_100
value: 35.95216666666667
- type: map_at_1000
value: 36.06675
- type: map_at_3
value: 32.41975
- type: map_at_5
value: 33.74233333333333
- type: mrr_at_1
value: 31.923666666666662
- type: mrr_at_10
value: 38.87983333333334
- type: mrr_at_100
value: 39.706250000000004
- type: mrr_at_1000
value: 39.76708333333333
- type: mrr_at_3
value: 36.72008333333333
- type: mrr_at_5
value: 37.96933333333334
- type: ndcg_at_1
value: 31.923666666666662
- type: ndcg_at_10
value: 39.44258333333334
- type: ndcg_at_100
value: 44.31475
- type: ndcg_at_1000
value: 46.75
- type: ndcg_at_3
value: 35.36299999999999
- type: ndcg_at_5
value: 37.242333333333335
- type: precision_at_1
value: 31.923666666666662
- type: precision_at_10
value: 6.643333333333333
- type: precision_at_100
value: 1.0612499999999998
- type: precision_at_1000
value: 0.14575
- type: precision_at_3
value: 15.875250000000001
- type: precision_at_5
value: 11.088916666666664
- type: recall_at_1
value: 27.1505
- type: recall_at_10
value: 49.06349999999999
- type: recall_at_100
value: 70.60841666666666
- type: recall_at_1000
value: 87.72049999999999
- type: recall_at_3
value: 37.60575000000001
- type: recall_at_5
value: 42.511166666666675
- type: map_at_1
value: 25.101000000000003
- type: map_at_10
value: 30.147000000000002
- type: map_at_100
value: 30.98
- type: map_at_1000
value: 31.080000000000002
- type: map_at_3
value: 28.571
- type: map_at_5
value: 29.319
- type: mrr_at_1
value: 27.761000000000003
- type: mrr_at_10
value: 32.716
- type: mrr_at_100
value: 33.504
- type: mrr_at_1000
value: 33.574
- type: mrr_at_3
value: 31.135
- type: mrr_at_5
value: 32.032
- type: ndcg_at_1
value: 27.761000000000003
- type: ndcg_at_10
value: 33.358
- type: ndcg_at_100
value: 37.569
- type: ndcg_at_1000
value: 40.189
- type: ndcg_at_3
value: 30.291
- type: ndcg_at_5
value: 31.558000000000003
- type: precision_at_1
value: 27.761000000000003
- type: precision_at_10
value: 4.939
- type: precision_at_100
value: 0.759
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.497
- type: recall_at_1
value: 25.101000000000003
- type: recall_at_10
value: 40.739
- type: recall_at_100
value: 60.089999999999996
- type: recall_at_1000
value: 79.768
- type: recall_at_3
value: 32.16
- type: recall_at_5
value: 35.131
- type: map_at_1
value: 20.112
- type: map_at_10
value: 26.119999999999997
- type: map_at_100
value: 27.031
- type: map_at_1000
value: 27.150000000000002
- type: map_at_3
value: 24.230999999999998
- type: map_at_5
value: 25.15
- type: mrr_at_1
value: 24.535
- type: mrr_at_10
value: 30.198000000000004
- type: mrr_at_100
value: 30.975
- type: mrr_at_1000
value: 31.051000000000002
- type: mrr_at_3
value: 28.338
- type: mrr_at_5
value: 29.269000000000002
- type: ndcg_at_1
value: 24.535
- type: ndcg_at_10
value: 30.147000000000002
- type: ndcg_at_100
value: 34.544000000000004
- type: ndcg_at_1000
value: 37.512
- type: ndcg_at_3
value: 26.726
- type: ndcg_at_5
value: 28.046
- type: precision_at_1
value: 24.535
- type: precision_at_10
value: 5.179
- type: precision_at_100
value: 0.859
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 12.159
- type: precision_at_5
value: 8.424
- type: recall_at_1
value: 20.112
- type: recall_at_10
value: 38.312000000000005
- type: recall_at_100
value: 58.406000000000006
- type: recall_at_1000
value: 79.863
- type: recall_at_3
value: 28.358
- type: recall_at_5
value: 31.973000000000003
- type: map_at_1
value: 27.111
- type: map_at_10
value: 34.096
- type: map_at_100
value: 35.181000000000004
- type: map_at_1000
value: 35.276
- type: map_at_3
value: 31.745
- type: map_at_5
value: 33.045
- type: mrr_at_1
value: 31.343
- type: mrr_at_10
value: 37.994
- type: mrr_at_100
value: 38.873000000000005
- type: mrr_at_1000
value: 38.934999999999995
- type: mrr_at_3
value: 35.743
- type: mrr_at_5
value: 37.077
- type: ndcg_at_1
value: 31.343
- type: ndcg_at_10
value: 38.572
- type: ndcg_at_100
value: 43.854
- type: ndcg_at_1000
value: 46.190999999999995
- type: ndcg_at_3
value: 34.247
- type: ndcg_at_5
value: 36.28
- type: precision_at_1
value: 31.343
- type: precision_at_10
value: 6.166
- type: precision_at_100
value: 1
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 15.081
- type: precision_at_5
value: 10.428999999999998
- type: recall_at_1
value: 27.111
- type: recall_at_10
value: 48.422
- type: recall_at_100
value: 71.846
- type: recall_at_1000
value: 88.57000000000001
- type: recall_at_3
value: 36.435
- type: recall_at_5
value: 41.765
- type: map_at_1
value: 26.264
- type: map_at_10
value: 33.522
- type: map_at_100
value: 34.963
- type: map_at_1000
value: 35.175
- type: map_at_3
value: 31.366
- type: map_at_5
value: 32.621
- type: mrr_at_1
value: 31.028
- type: mrr_at_10
value: 37.230000000000004
- type: mrr_at_100
value: 38.149
- type: mrr_at_1000
value: 38.218
- type: mrr_at_3
value: 35.046
- type: mrr_at_5
value: 36.617
- type: ndcg_at_1
value: 31.028
- type: ndcg_at_10
value: 37.964999999999996
- type: ndcg_at_100
value: 43.342000000000006
- type: ndcg_at_1000
value: 46.471000000000004
- type: ndcg_at_3
value: 34.67
- type: ndcg_at_5
value: 36.458
- type: precision_at_1
value: 31.028
- type: precision_at_10
value: 6.937
- type: precision_at_100
value: 1.346
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_3
value: 15.942
- type: precision_at_5
value: 11.462
- type: recall_at_1
value: 26.264
- type: recall_at_10
value: 45.571
- type: recall_at_100
value: 70.246
- type: recall_at_1000
value: 90.971
- type: recall_at_3
value: 36.276
- type: recall_at_5
value: 41.162
- type: map_at_1
value: 23.372999999999998
- type: map_at_10
value: 28.992
- type: map_at_100
value: 29.837999999999997
- type: map_at_1000
value: 29.939
- type: map_at_3
value: 26.999000000000002
- type: map_at_5
value: 28.044999999999998
- type: mrr_at_1
value: 25.692999999999998
- type: mrr_at_10
value: 30.984
- type: mrr_at_100
value: 31.799
- type: mrr_at_1000
value: 31.875999999999998
- type: mrr_at_3
value: 29.267
- type: mrr_at_5
value: 30.163
- type: ndcg_at_1
value: 25.692999999999998
- type: ndcg_at_10
value: 32.45
- type: ndcg_at_100
value: 37.103
- type: ndcg_at_1000
value: 39.678000000000004
- type: ndcg_at_3
value: 28.725
- type: ndcg_at_5
value: 30.351
- type: precision_at_1
value: 25.692999999999998
- type: precision_at_10
value: 4.806
- type: precision_at_100
value: 0.765
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 11.768
- type: precision_at_5
value: 8.096
- type: recall_at_1
value: 23.372999999999998
- type: recall_at_10
value: 41.281
- type: recall_at_100
value: 63.465
- type: recall_at_1000
value: 82.575
- type: recall_at_3
value: 31.063000000000002
- type: recall_at_5
value: 34.991
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.821
- type: map_at_10
value: 15.383
- type: map_at_100
value: 17.244999999999997
- type: map_at_1000
value: 17.445
- type: map_at_3
value: 12.64
- type: map_at_5
value: 13.941999999999998
- type: mrr_at_1
value: 19.544
- type: mrr_at_10
value: 29.738999999999997
- type: mrr_at_100
value: 30.923000000000002
- type: mrr_at_1000
value: 30.969
- type: mrr_at_3
value: 26.384
- type: mrr_at_5
value: 28.199
- type: ndcg_at_1
value: 19.544
- type: ndcg_at_10
value: 22.398
- type: ndcg_at_100
value: 30.253999999999998
- type: ndcg_at_1000
value: 33.876
- type: ndcg_at_3
value: 17.473
- type: ndcg_at_5
value: 19.154
- type: precision_at_1
value: 19.544
- type: precision_at_10
value: 7.217999999999999
- type: precision_at_100
value: 1.564
- type: precision_at_1000
value: 0.22300000000000003
- type: precision_at_3
value: 13.225000000000001
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 8.821
- type: recall_at_10
value: 28.110000000000003
- type: recall_at_100
value: 55.64
- type: recall_at_1000
value: 75.964
- type: recall_at_3
value: 16.195
- type: recall_at_5
value: 20.678
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.344
- type: map_at_10
value: 20.301
- type: map_at_100
value: 28.709
- type: map_at_1000
value: 30.470999999999997
- type: map_at_3
value: 14.584
- type: map_at_5
value: 16.930999999999997
- type: mrr_at_1
value: 67.25
- type: mrr_at_10
value: 75.393
- type: mrr_at_100
value: 75.742
- type: mrr_at_1000
value: 75.75
- type: mrr_at_3
value: 73.958
- type: mrr_at_5
value: 74.883
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 42.394
- type: ndcg_at_100
value: 47.091
- type: ndcg_at_1000
value: 54.215
- type: ndcg_at_3
value: 46.995
- type: ndcg_at_5
value: 44.214999999999996
- type: precision_at_1
value: 67.25
- type: precision_at_10
value: 33.525
- type: precision_at_100
value: 10.67
- type: precision_at_1000
value: 2.221
- type: precision_at_3
value: 49.417
- type: precision_at_5
value: 42.15
- type: recall_at_1
value: 9.344
- type: recall_at_10
value: 25.209
- type: recall_at_100
value: 52.329
- type: recall_at_1000
value: 74.2
- type: recall_at_3
value: 15.699
- type: recall_at_5
value: 19.24
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.05
- type: f1
value: 43.06718139212933
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.452
- type: map_at_10
value: 58.825
- type: map_at_100
value: 59.372
- type: map_at_1000
value: 59.399
- type: map_at_3
value: 56.264
- type: map_at_5
value: 57.879999999999995
- type: mrr_at_1
value: 49.82
- type: mrr_at_10
value: 62.178999999999995
- type: mrr_at_100
value: 62.641999999999996
- type: mrr_at_1000
value: 62.658
- type: mrr_at_3
value: 59.706
- type: mrr_at_5
value: 61.283
- type: ndcg_at_1
value: 49.82
- type: ndcg_at_10
value: 65.031
- type: ndcg_at_100
value: 67.413
- type: ndcg_at_1000
value: 68.014
- type: ndcg_at_3
value: 60.084
- type: ndcg_at_5
value: 62.858000000000004
- type: precision_at_1
value: 49.82
- type: precision_at_10
value: 8.876000000000001
- type: precision_at_100
value: 1.018
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 24.477
- type: precision_at_5
value: 16.208
- type: recall_at_1
value: 46.452
- type: recall_at_10
value: 80.808
- type: recall_at_100
value: 91.215
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 67.62899999999999
- type: recall_at_5
value: 74.32900000000001
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.351
- type: map_at_10
value: 30.796
- type: map_at_100
value: 32.621
- type: map_at_1000
value: 32.799
- type: map_at_3
value: 26.491
- type: map_at_5
value: 28.933999999999997
- type: mrr_at_1
value: 36.265
- type: mrr_at_10
value: 45.556999999999995
- type: mrr_at_100
value: 46.323
- type: mrr_at_1000
value: 46.359
- type: mrr_at_3
value: 42.695
- type: mrr_at_5
value: 44.324000000000005
- type: ndcg_at_1
value: 36.265
- type: ndcg_at_10
value: 38.558
- type: ndcg_at_100
value: 45.18
- type: ndcg_at_1000
value: 48.292
- type: ndcg_at_3
value: 34.204
- type: ndcg_at_5
value: 35.735
- type: precision_at_1
value: 36.265
- type: precision_at_10
value: 10.879999999999999
- type: precision_at_100
value: 1.77
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 23.044999999999998
- type: precision_at_5
value: 17.253
- type: recall_at_1
value: 18.351
- type: recall_at_10
value: 46.116
- type: recall_at_100
value: 70.786
- type: recall_at_1000
value: 89.46300000000001
- type: recall_at_3
value: 31.404
- type: recall_at_5
value: 37.678
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.847
- type: map_at_10
value: 54.269999999999996
- type: map_at_100
value: 55.152
- type: map_at_1000
value: 55.223
- type: map_at_3
value: 51.166
- type: map_at_5
value: 53.055
- type: mrr_at_1
value: 73.693
- type: mrr_at_10
value: 79.975
- type: mrr_at_100
value: 80.202
- type: mrr_at_1000
value: 80.214
- type: mrr_at_3
value: 78.938
- type: mrr_at_5
value: 79.595
- type: ndcg_at_1
value: 73.693
- type: ndcg_at_10
value: 63.334999999999994
- type: ndcg_at_100
value: 66.452
- type: ndcg_at_1000
value: 67.869
- type: ndcg_at_3
value: 58.829
- type: ndcg_at_5
value: 61.266
- type: precision_at_1
value: 73.693
- type: precision_at_10
value: 13.122
- type: precision_at_100
value: 1.5559999999999998
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 37.083
- type: precision_at_5
value: 24.169999999999998
- type: recall_at_1
value: 36.847
- type: recall_at_10
value: 65.61099999999999
- type: recall_at_100
value: 77.792
- type: recall_at_1000
value: 87.17099999999999
- type: recall_at_3
value: 55.625
- type: recall_at_5
value: 60.425
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 82.1096
- type: ap
value: 76.67089212843918
- type: f1
value: 82.03535056754939
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.465
- type: map_at_10
value: 37.072
- type: map_at_100
value: 38.188
- type: map_at_1000
value: 38.232
- type: map_at_3
value: 33.134
- type: map_at_5
value: 35.453
- type: mrr_at_1
value: 25.142999999999997
- type: mrr_at_10
value: 37.669999999999995
- type: mrr_at_100
value: 38.725
- type: mrr_at_1000
value: 38.765
- type: mrr_at_3
value: 33.82
- type: mrr_at_5
value: 36.111
- type: ndcg_at_1
value: 25.142999999999997
- type: ndcg_at_10
value: 44.054
- type: ndcg_at_100
value: 49.364000000000004
- type: ndcg_at_1000
value: 50.456
- type: ndcg_at_3
value: 36.095
- type: ndcg_at_5
value: 40.23
- type: precision_at_1
value: 25.142999999999997
- type: precision_at_10
value: 6.845
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.204999999999998
- type: precision_at_5
value: 11.221
- type: recall_at_1
value: 24.465
- type: recall_at_10
value: 65.495
- type: recall_at_100
value: 89.888
- type: recall_at_1000
value: 98.165
- type: recall_at_3
value: 43.964
- type: recall_at_5
value: 53.891
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.86228910168718
- type: f1
value: 93.69177113259104
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.3999088007296
- type: f1
value: 58.96668664333438
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.21788836583727
- type: f1
value: 71.4545936552952
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.39071956960323
- type: f1
value: 77.12398952847603
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.255379528166955
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.66423362872814
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.782211620375964
- type: mrr
value: 31.773479703044956
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.863
- type: map_at_10
value: 13.831
- type: map_at_100
value: 17.534
- type: map_at_1000
value: 19.012
- type: map_at_3
value: 10.143
- type: map_at_5
value: 12.034
- type: mrr_at_1
value: 46.749
- type: mrr_at_10
value: 55.376999999999995
- type: mrr_at_100
value: 56.009
- type: mrr_at_1000
value: 56.042
- type: mrr_at_3
value: 53.30200000000001
- type: mrr_at_5
value: 54.85
- type: ndcg_at_1
value: 44.582
- type: ndcg_at_10
value: 36.07
- type: ndcg_at_100
value: 33.39
- type: ndcg_at_1000
value: 41.884
- type: ndcg_at_3
value: 41.441
- type: ndcg_at_5
value: 39.861000000000004
- type: precision_at_1
value: 46.129999999999995
- type: precision_at_10
value: 26.594
- type: precision_at_100
value: 8.365
- type: precision_at_1000
value: 2.1260000000000003
- type: precision_at_3
value: 39.009
- type: precision_at_5
value: 34.861
- type: recall_at_1
value: 5.863
- type: recall_at_10
value: 17.961
- type: recall_at_100
value: 34.026
- type: recall_at_1000
value: 64.46499999999999
- type: recall_at_3
value: 11.242
- type: recall_at_5
value: 14.493
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.601
- type: map_at_10
value: 55.293000000000006
- type: map_at_100
value: 56.092
- type: map_at_1000
value: 56.111999999999995
- type: map_at_3
value: 51.269
- type: map_at_5
value: 53.787
- type: mrr_at_1
value: 43.221
- type: mrr_at_10
value: 57.882999999999996
- type: mrr_at_100
value: 58.408
- type: mrr_at_1000
value: 58.421
- type: mrr_at_3
value: 54.765
- type: mrr_at_5
value: 56.809
- type: ndcg_at_1
value: 43.221
- type: ndcg_at_10
value: 62.858999999999995
- type: ndcg_at_100
value: 65.987
- type: ndcg_at_1000
value: 66.404
- type: ndcg_at_3
value: 55.605000000000004
- type: ndcg_at_5
value: 59.723000000000006
- type: precision_at_1
value: 43.221
- type: precision_at_10
value: 9.907
- type: precision_at_100
value: 1.169
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.019000000000002
- type: precision_at_5
value: 17.474
- type: recall_at_1
value: 38.601
- type: recall_at_10
value: 82.966
- type: recall_at_100
value: 96.154
- type: recall_at_1000
value: 99.223
- type: recall_at_3
value: 64.603
- type: recall_at_5
value: 73.97200000000001
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.77
- type: map_at_10
value: 84.429
- type: map_at_100
value: 85.04599999999999
- type: map_at_1000
value: 85.065
- type: map_at_3
value: 81.461
- type: map_at_5
value: 83.316
- type: mrr_at_1
value: 81.51
- type: mrr_at_10
value: 87.52799999999999
- type: mrr_at_100
value: 87.631
- type: mrr_at_1000
value: 87.632
- type: mrr_at_3
value: 86.533
- type: mrr_at_5
value: 87.214
- type: ndcg_at_1
value: 81.47999999999999
- type: ndcg_at_10
value: 88.181
- type: ndcg_at_100
value: 89.39200000000001
- type: ndcg_at_1000
value: 89.52
- type: ndcg_at_3
value: 85.29299999999999
- type: ndcg_at_5
value: 86.88
- type: precision_at_1
value: 81.47999999999999
- type: precision_at_10
value: 13.367
- type: precision_at_100
value: 1.5230000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.227
- type: precision_at_5
value: 24.494
- type: recall_at_1
value: 70.77
- type: recall_at_10
value: 95.199
- type: recall_at_100
value: 99.37700000000001
- type: recall_at_1000
value: 99.973
- type: recall_at_3
value: 86.895
- type: recall_at_5
value: 91.396
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 50.686353396858344
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.3664675312921
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.7379999999999995
- type: map_at_10
value: 12.01
- type: map_at_100
value: 14.02
- type: map_at_1000
value: 14.310999999999998
- type: map_at_3
value: 8.459
- type: map_at_5
value: 10.281
- type: mrr_at_1
value: 23.3
- type: mrr_at_10
value: 34.108
- type: mrr_at_100
value: 35.217
- type: mrr_at_1000
value: 35.272
- type: mrr_at_3
value: 30.833
- type: mrr_at_5
value: 32.768
- type: ndcg_at_1
value: 23.3
- type: ndcg_at_10
value: 20.116999999999997
- type: ndcg_at_100
value: 27.961000000000002
- type: ndcg_at_1000
value: 33.149
- type: ndcg_at_3
value: 18.902
- type: ndcg_at_5
value: 16.742
- type: precision_at_1
value: 23.3
- type: precision_at_10
value: 10.47
- type: precision_at_100
value: 2.177
- type: precision_at_1000
value: 0.34299999999999997
- type: precision_at_3
value: 17.567
- type: precision_at_5
value: 14.78
- type: recall_at_1
value: 4.7379999999999995
- type: recall_at_10
value: 21.221999999999998
- type: recall_at_100
value: 44.242
- type: recall_at_1000
value: 69.652
- type: recall_at_3
value: 10.688
- type: recall_at_5
value: 14.982999999999999
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.84572946827069
- type: cos_sim_spearman
value: 80.48508130408966
- type: euclidean_pearson
value: 82.0481530027767
- type: euclidean_spearman
value: 80.45902876782752
- type: manhattan_pearson
value: 82.03728222483326
- type: manhattan_spearman
value: 80.45684282911755
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.33476464677516
- type: cos_sim_spearman
value: 75.93057758003266
- type: euclidean_pearson
value: 80.89685744015691
- type: euclidean_spearman
value: 76.29929953441706
- type: manhattan_pearson
value: 80.91391345459995
- type: manhattan_spearman
value: 76.31985463110914
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.63686106359005
- type: cos_sim_spearman
value: 85.22240034668202
- type: euclidean_pearson
value: 84.6074814189106
- type: euclidean_spearman
value: 85.17169644755828
- type: manhattan_pearson
value: 84.48329306239368
- type: manhattan_spearman
value: 85.0086508544768
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.95455774064745
- type: cos_sim_spearman
value: 80.54074646118492
- type: euclidean_pearson
value: 81.79598955554704
- type: euclidean_spearman
value: 80.55837617606814
- type: manhattan_pearson
value: 81.78213797905386
- type: manhattan_spearman
value: 80.5666746878273
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.92813309124739
- type: cos_sim_spearman
value: 88.81459873052108
- type: euclidean_pearson
value: 88.21193118930564
- type: euclidean_spearman
value: 88.87072745043731
- type: manhattan_pearson
value: 88.22576929706727
- type: manhattan_spearman
value: 88.8867671095791
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.6881529671839
- type: cos_sim_spearman
value: 85.2807092969554
- type: euclidean_pearson
value: 84.62334178652704
- type: euclidean_spearman
value: 85.2116373296784
- type: manhattan_pearson
value: 84.54948211541777
- type: manhattan_spearman
value: 85.10737722637882
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.55963694458408
- type: cos_sim_spearman
value: 89.36731628848683
- type: euclidean_pearson
value: 89.64975952985465
- type: euclidean_spearman
value: 89.29689484033007
- type: manhattan_pearson
value: 89.61234491713135
- type: manhattan_spearman
value: 89.20302520255782
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.411800961903886
- type: cos_sim_spearman
value: 62.99105515749963
- type: euclidean_pearson
value: 65.29826669549443
- type: euclidean_spearman
value: 63.29880964105775
- type: manhattan_pearson
value: 65.00126190601183
- type: manhattan_spearman
value: 63.32011025899179
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.83498531837608
- type: cos_sim_spearman
value: 87.21366640615442
- type: euclidean_pearson
value: 86.74764288798261
- type: euclidean_spearman
value: 87.06060470780834
- type: manhattan_pearson
value: 86.65971223951476
- type: manhattan_spearman
value: 86.99814399831457
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.94448463485881
- type: mrr
value: 95.36291867174221
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.928000000000004
- type: map_at_10
value: 68.577
- type: map_at_100
value: 69.35900000000001
- type: map_at_1000
value: 69.37299999999999
- type: map_at_3
value: 66.217
- type: map_at_5
value: 67.581
- type: mrr_at_1
value: 63
- type: mrr_at_10
value: 69.994
- type: mrr_at_100
value: 70.553
- type: mrr_at_1000
value: 70.56700000000001
- type: mrr_at_3
value: 68.167
- type: mrr_at_5
value: 69.11699999999999
- type: ndcg_at_1
value: 63
- type: ndcg_at_10
value: 72.58
- type: ndcg_at_100
value: 75.529
- type: ndcg_at_1000
value: 76.009
- type: ndcg_at_3
value: 68.523
- type: ndcg_at_5
value: 70.301
- type: precision_at_1
value: 63
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.444000000000003
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 59.928000000000004
- type: recall_at_10
value: 83.544
- type: recall_at_100
value: 96
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 72.072
- type: recall_at_5
value: 76.683
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.82178217821782
- type: cos_sim_ap
value: 95.41507679819003
- type: cos_sim_f1
value: 90.9456740442656
- type: cos_sim_precision
value: 91.49797570850203
- type: cos_sim_recall
value: 90.4
- type: dot_accuracy
value: 99.77227722772277
- type: dot_ap
value: 92.50123869445967
- type: dot_f1
value: 88.18414322250638
- type: dot_precision
value: 90.26178010471205
- type: dot_recall
value: 86.2
- type: euclidean_accuracy
value: 99.81782178217821
- type: euclidean_ap
value: 95.3935066749006
- type: euclidean_f1
value: 90.66128218071681
- type: euclidean_precision
value: 91.53924566768603
- type: euclidean_recall
value: 89.8
- type: manhattan_accuracy
value: 99.81881188118813
- type: manhattan_ap
value: 95.39767454613512
- type: manhattan_f1
value: 90.62019477191186
- type: manhattan_precision
value: 92.95478443743428
- type: manhattan_recall
value: 88.4
- type: max_accuracy
value: 99.82178217821782
- type: max_ap
value: 95.41507679819003
- type: max_f1
value: 90.9456740442656
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 64.96313921233748
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.602625720956745
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.32659230651731
- type: mrr
value: 52.33861726508785
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.01587644214203
- type: cos_sim_spearman
value: 30.974306908731013
- type: dot_pearson
value: 29.83339853838187
- type: dot_spearman
value: 30.07761671934048
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.9539999999999997
- type: map_at_100
value: 11.437
- type: map_at_1000
value: 27.861000000000004
- type: map_at_3
value: 0.6479999999999999
- type: map_at_5
value: 1.0410000000000001
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 90.333
- type: mrr_at_100
value: 90.333
- type: mrr_at_1000
value: 90.333
- type: mrr_at_3
value: 90.333
- type: mrr_at_5
value: 90.333
- type: ndcg_at_1
value: 80
- type: ndcg_at_10
value: 78.31700000000001
- type: ndcg_at_100
value: 59.396
- type: ndcg_at_1000
value: 52.733
- type: ndcg_at_3
value: 81.46900000000001
- type: ndcg_at_5
value: 80.74
- type: precision_at_1
value: 84
- type: precision_at_10
value: 84
- type: precision_at_100
value: 60.980000000000004
- type: precision_at_1000
value: 23.432
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.156
- type: recall_at_100
value: 14.557999999999998
- type: recall_at_1000
value: 49.553999999999995
- type: recall_at_3
value: 0.685
- type: recall_at_5
value: 1.121
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.373
- type: map_at_10
value: 11.701
- type: map_at_100
value: 17.144000000000002
- type: map_at_1000
value: 18.624
- type: map_at_3
value: 6.552
- type: map_at_5
value: 9.372
- type: mrr_at_1
value: 38.775999999999996
- type: mrr_at_10
value: 51.975
- type: mrr_at_100
value: 52.873999999999995
- type: mrr_at_1000
value: 52.873999999999995
- type: mrr_at_3
value: 47.619
- type: mrr_at_5
value: 50.578
- type: ndcg_at_1
value: 36.735
- type: ndcg_at_10
value: 27.212999999999997
- type: ndcg_at_100
value: 37.245
- type: ndcg_at_1000
value: 48.602000000000004
- type: ndcg_at_3
value: 30.916
- type: ndcg_at_5
value: 30.799
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.327
- type: precision_at_1000
value: 1.486
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 32.245000000000005
- type: recall_at_1
value: 3.373
- type: recall_at_10
value: 17.404
- type: recall_at_100
value: 46.105000000000004
- type: recall_at_1000
value: 80.35
- type: recall_at_3
value: 7.4399999999999995
- type: recall_at_5
value: 12.183
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.5592
- type: ap
value: 14.330910591410134
- type: f1
value: 54.45745186286521
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.20543293718167
- type: f1
value: 61.45365480309872
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 43.81162998944145
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.69011146212075
- type: cos_sim_ap
value: 76.09792353652536
- type: cos_sim_f1
value: 70.10202763786646
- type: cos_sim_precision
value: 68.65671641791045
- type: cos_sim_recall
value: 71.60949868073878
- type: dot_accuracy
value: 85.33110806461227
- type: dot_ap
value: 70.19304383327554
- type: dot_f1
value: 67.22494202525122
- type: dot_precision
value: 65.6847935548842
- type: dot_recall
value: 68.83905013192611
- type: euclidean_accuracy
value: 86.5410979316922
- type: euclidean_ap
value: 75.91906915651882
- type: euclidean_f1
value: 69.6798975672215
- type: euclidean_precision
value: 67.6865671641791
- type: euclidean_recall
value: 71.79419525065963
- type: manhattan_accuracy
value: 86.60070334386363
- type: manhattan_ap
value: 75.94617413885031
- type: manhattan_f1
value: 69.52689565780946
- type: manhattan_precision
value: 68.3312101910828
- type: manhattan_recall
value: 70.76517150395777
- type: max_accuracy
value: 86.69011146212075
- type: max_ap
value: 76.09792353652536
- type: max_f1
value: 70.10202763786646
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.25951798812434
- type: cos_sim_ap
value: 86.31476416599727
- type: cos_sim_f1
value: 78.52709971038477
- type: cos_sim_precision
value: 76.7629972792117
- type: cos_sim_recall
value: 80.37419156144134
- type: dot_accuracy
value: 88.03896456708192
- type: dot_ap
value: 83.26963599196237
- type: dot_f1
value: 76.72696459492317
- type: dot_precision
value: 73.56411162133521
- type: dot_recall
value: 80.17400677548507
- type: euclidean_accuracy
value: 89.21682772538519
- type: euclidean_ap
value: 86.29306071289969
- type: euclidean_f1
value: 78.40827030519554
- type: euclidean_precision
value: 77.42250243939053
- type: euclidean_recall
value: 79.41946412072683
- type: manhattan_accuracy
value: 89.22458959133776
- type: manhattan_ap
value: 86.2901934710645
- type: manhattan_f1
value: 78.54211378440453
- type: manhattan_precision
value: 76.85505858079729
- type: manhattan_recall
value: 80.30489682784109
- type: max_accuracy
value: 89.25951798812434
- type: max_ap
value: 86.31476416599727
- type: max_f1
value: 78.54211378440453
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x, using int8 inference in C++ on CPU or GPU.

Quantized version of [intfloat/e5-large](https://huggingface.co/intfloat/e5-large).
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```
```python
# from transformers import AutoTokenizer  # not needed for the CTranslate2 path
model_name = "michaelfeil/ct2fast-e5-large"
model_name_orig = "intfloat/e5-large"

from hf_hub_ctranslate2 import EncoderCT2fromHfHub

# Load the converted checkpoint in int8 on CUDA
model = EncoderCT2fromHfHub(
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
)
outputs = model.generate(
    text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
    max_length=64,
)  # perform downstream tasks on outputs
outputs["pooler_output"]
outputs["last_hidden_state"]
outputs["attention_mask"]

# Alternative: use the SentenceTransformer mix-in for end-to-end
# sentence embedding generation (not pulling from this CT2fast-HF repo)
from hf_hub_ctranslate2 import CT2SentenceTransformer

model = CT2SentenceTransformer(
    model_name_orig, compute_type="int8_float16", device="cuda"
)
embeddings = model.encode(
    ["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
    batch_size=32,
    convert_to_numpy=True,
    normalize_embeddings=True,
)
print(embeddings.shape, embeddings)
scores = (embeddings @ embeddings.T) * 100

# Hint: you can also host this code via a REST API with
# github.com/michaelfeil/infinity
```
Checkpoint compatible with [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"` (a CPU sketch follows below)
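For the CPU path, here is a minimal hedged sketch that mirrors the CUDA snippet above; it assumes `EncoderCT2fromHfHub` takes the same arguments regardless of device, with only `device` and `compute_type` changed.
```python
# Minimal sketch: same loader as the CUDA example above, switched to CPU int8.
from hf_hub_ctranslate2 import EncoderCT2fromHfHub

model_cpu = EncoderCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-e5-large",
    device="cpu",
    compute_type="int8",  # per the list above: int8 for device="cpu"
)
outputs = model_cpu.generate(
    text=["query: how much protein should a female eat"],
    max_length=64,
)
pooled = outputs["pooler_output"]  # same output keys as in the CUDA example
```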
Converted on 2023-10-13 using
```
LLama-2 -> removed <pad> token.
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
## E5-large
**News (May 2023): please switch to [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2), which has better performance and same method of usage.**
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 24 layers and the embedding size is 1024.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Zero out padding positions, then mean-pool over the sequence dimension
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
               'query: summit define',
               "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
               "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large')
model = AutoModel.from_pretrained('intfloat/e5-large')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings and compute query-passage similarity scores
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/e5-large')

input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefixes "query: " and "passage: " to input texts?**
Yes, this is how the model is trained, otherwise you will see a performance degradation.
Here are some rules of thumb; a short sketch of the symmetric case follows the list:
- Use "query: " and "passage: " respectively for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores cluster between 0.7 and 1.0?**
This is known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
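For illustration only (the numbers below are hypothetical), ranking by the scaled cosine scores is all that retrieval needs, so their absolute range does not matter:
```python
import numpy as np

# Hypothetical query-passage scores on the 0-100 scale used above.
scores = np.array([78.4, 83.1, 80.9])
ranking = np.argsort(-scores)  # passage indices, best match first
print(ranking)  # [1 2 0]
```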
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2022text,
  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2212.03533},
  year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
jinaai/jina-embeddings-v2-base-de | jinaai | feature-extraction | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"fill-mask",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"custom_code",
"de",
"en",
"arxiv:2108.12409",
"arxiv:2402.17016",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:eu"
] | 1,705,068,290,000 | 2025-01-06T16:26:47 | 49,768 | 72 | ---
language:
- de
- en
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
inference: false
model-index:
- name: jina-embeddings-v2-base-de
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.76119402985076
- type: ap
value: 35.99577188521176
- type: f1
value: 67.50397431543269
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.9186295503212
- type: ap
value: 79.73307115840507
- type: f1
value: 66.66245744831339
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 77.52215
- type: ap
value: 71.85051037177416
- type: f1
value: 77.4171096157774
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.498
- type: f1
value: 38.058193386555956
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.717999999999996
- type: f1
value: 37.22674371574757
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.319999999999997
- type: map_at_10
value: 40.351
- type: map_at_100
value: 41.435
- type: map_at_1000
value: 41.443000000000005
- type: map_at_3
value: 35.266
- type: map_at_5
value: 37.99
- type: mrr_at_1
value: 25.746999999999996
- type: mrr_at_10
value: 40.515
- type: mrr_at_100
value: 41.606
- type: mrr_at_1000
value: 41.614000000000004
- type: mrr_at_3
value: 35.42
- type: mrr_at_5
value: 38.112
- type: ndcg_at_1
value: 25.319999999999997
- type: ndcg_at_10
value: 49.332
- type: ndcg_at_100
value: 53.909
- type: ndcg_at_1000
value: 54.089
- type: ndcg_at_3
value: 38.705
- type: ndcg_at_5
value: 43.606
- type: precision_at_1
value: 25.319999999999997
- type: precision_at_10
value: 7.831
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.24
- type: precision_at_5
value: 12.119
- type: recall_at_1
value: 25.319999999999997
- type: recall_at_10
value: 78.307
- type: recall_at_100
value: 98.222
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 48.72
- type: recall_at_5
value: 60.597
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 41.43100588255654
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.08988904593667
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.55514765595906
- type: mrr
value: 73.51393835465858
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 79.6723823121172
- type: cos_sim_spearman
value: 76.90596922214986
- type: euclidean_pearson
value: 77.87910737957918
- type: euclidean_spearman
value: 76.66319260598262
- type: manhattan_pearson
value: 77.37039493457965
- type: manhattan_spearman
value: 76.09872191280964
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.97703549060543
- type: f1
value: 98.86569241475296
- type: precision
value: 98.81002087682673
- type: recall
value: 98.97703549060543
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 83.93506493506493
- type: f1
value: 83.91014949949302
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 34.970675877585144
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.779230269190954
- task:
type: Clustering
dataset:
name: MTEB BlurbsClusteringP2P
type: slvnwhrl/blurbs-clustering-p2p
config: default
split: test
revision: a2dd5b02a77de3466a3eaa98ae586b5610314496
metrics:
- type: v_measure
value: 35.490175601567216
- task:
type: Clustering
dataset:
name: MTEB BlurbsClusteringS2S
type: slvnwhrl/blurbs-clustering-s2s
config: default
split: test
revision: 9bfff9a7f8f6dc6ffc9da71c48dd48b68696471d
metrics:
- type: v_measure
value: 16.16638280560168
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.830999999999996
- type: map_at_10
value: 41.355
- type: map_at_100
value: 42.791000000000004
- type: map_at_1000
value: 42.918
- type: map_at_3
value: 38.237
- type: map_at_5
value: 40.066
- type: mrr_at_1
value: 38.484
- type: mrr_at_10
value: 47.593
- type: mrr_at_100
value: 48.388
- type: mrr_at_1000
value: 48.439
- type: mrr_at_3
value: 45.279
- type: mrr_at_5
value: 46.724
- type: ndcg_at_1
value: 38.484
- type: ndcg_at_10
value: 47.27
- type: ndcg_at_100
value: 52.568000000000005
- type: ndcg_at_1000
value: 54.729000000000006
- type: ndcg_at_3
value: 43.061
- type: ndcg_at_5
value: 45.083
- type: precision_at_1
value: 38.484
- type: precision_at_10
value: 8.927
- type: precision_at_100
value: 1.425
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 20.791999999999998
- type: precision_at_5
value: 14.85
- type: recall_at_1
value: 30.830999999999996
- type: recall_at_10
value: 57.87799999999999
- type: recall_at_100
value: 80.124
- type: recall_at_1000
value: 94.208
- type: recall_at_3
value: 45.083
- type: recall_at_5
value: 51.154999999999994
- type: map_at_1
value: 25.782
- type: map_at_10
value: 34.492
- type: map_at_100
value: 35.521
- type: map_at_1000
value: 35.638
- type: map_at_3
value: 31.735999999999997
- type: map_at_5
value: 33.339
- type: mrr_at_1
value: 32.357
- type: mrr_at_10
value: 39.965
- type: mrr_at_100
value: 40.644000000000005
- type: mrr_at_1000
value: 40.695
- type: mrr_at_3
value: 37.739
- type: mrr_at_5
value: 39.061
- type: ndcg_at_1
value: 32.357
- type: ndcg_at_10
value: 39.644
- type: ndcg_at_100
value: 43.851
- type: ndcg_at_1000
value: 46.211999999999996
- type: ndcg_at_3
value: 35.675000000000004
- type: ndcg_at_5
value: 37.564
- type: precision_at_1
value: 32.357
- type: precision_at_10
value: 7.344
- type: precision_at_100
value: 1.201
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 17.155
- type: precision_at_5
value: 12.166
- type: recall_at_1
value: 25.782
- type: recall_at_10
value: 49.132999999999996
- type: recall_at_100
value: 67.24
- type: recall_at_1000
value: 83.045
- type: recall_at_3
value: 37.021
- type: recall_at_5
value: 42.548
- type: map_at_1
value: 35.778999999999996
- type: map_at_10
value: 47.038000000000004
- type: map_at_100
value: 48.064
- type: map_at_1000
value: 48.128
- type: map_at_3
value: 44.186
- type: map_at_5
value: 45.788000000000004
- type: mrr_at_1
value: 41.254000000000005
- type: mrr_at_10
value: 50.556999999999995
- type: mrr_at_100
value: 51.296
- type: mrr_at_1000
value: 51.331
- type: mrr_at_3
value: 48.318
- type: mrr_at_5
value: 49.619
- type: ndcg_at_1
value: 41.254000000000005
- type: ndcg_at_10
value: 52.454
- type: ndcg_at_100
value: 56.776
- type: ndcg_at_1000
value: 58.181000000000004
- type: ndcg_at_3
value: 47.713
- type: ndcg_at_5
value: 49.997
- type: precision_at_1
value: 41.254000000000005
- type: precision_at_10
value: 8.464
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 21.526
- type: precision_at_5
value: 14.696000000000002
- type: recall_at_1
value: 35.778999999999996
- type: recall_at_10
value: 64.85300000000001
- type: recall_at_100
value: 83.98400000000001
- type: recall_at_1000
value: 94.18299999999999
- type: recall_at_3
value: 51.929
- type: recall_at_5
value: 57.666
- type: map_at_1
value: 21.719
- type: map_at_10
value: 29.326999999999998
- type: map_at_100
value: 30.314000000000004
- type: map_at_1000
value: 30.397000000000002
- type: map_at_3
value: 27.101
- type: map_at_5
value: 28.141
- type: mrr_at_1
value: 23.503
- type: mrr_at_10
value: 31.225
- type: mrr_at_100
value: 32.096000000000004
- type: mrr_at_1000
value: 32.159
- type: mrr_at_3
value: 29.076999999999998
- type: mrr_at_5
value: 30.083
- type: ndcg_at_1
value: 23.503
- type: ndcg_at_10
value: 33.842
- type: ndcg_at_100
value: 39.038000000000004
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 29.347
- type: ndcg_at_5
value: 31.121
- type: precision_at_1
value: 23.503
- type: precision_at_10
value: 5.266
- type: precision_at_100
value: 0.831
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.504999999999999
- type: precision_at_5
value: 8.565000000000001
- type: recall_at_1
value: 21.719
- type: recall_at_10
value: 46.024
- type: recall_at_100
value: 70.78999999999999
- type: recall_at_1000
value: 87.022
- type: recall_at_3
value: 33.64
- type: recall_at_5
value: 37.992
- type: map_at_1
value: 15.601
- type: map_at_10
value: 22.054000000000002
- type: map_at_100
value: 23.177
- type: map_at_1000
value: 23.308
- type: map_at_3
value: 19.772000000000002
- type: map_at_5
value: 21.055
- type: mrr_at_1
value: 19.403000000000002
- type: mrr_at_10
value: 26.409
- type: mrr_at_100
value: 27.356
- type: mrr_at_1000
value: 27.441
- type: mrr_at_3
value: 24.108999999999998
- type: mrr_at_5
value: 25.427
- type: ndcg_at_1
value: 19.403000000000002
- type: ndcg_at_10
value: 26.474999999999998
- type: ndcg_at_100
value: 32.086
- type: ndcg_at_1000
value: 35.231
- type: ndcg_at_3
value: 22.289
- type: ndcg_at_5
value: 24.271
- type: precision_at_1
value: 19.403000000000002
- type: precision_at_10
value: 4.813
- type: precision_at_100
value: 0.8869999999999999
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 10.531
- type: precision_at_5
value: 7.710999999999999
- type: recall_at_1
value: 15.601
- type: recall_at_10
value: 35.916
- type: recall_at_100
value: 60.8
- type: recall_at_1000
value: 83.245
- type: recall_at_3
value: 24.321
- type: recall_at_5
value: 29.372999999999998
- type: map_at_1
value: 25.522
- type: map_at_10
value: 34.854
- type: map_at_100
value: 36.269
- type: map_at_1000
value: 36.387
- type: map_at_3
value: 32.187
- type: map_at_5
value: 33.692
- type: mrr_at_1
value: 31.375999999999998
- type: mrr_at_10
value: 40.471000000000004
- type: mrr_at_100
value: 41.481
- type: mrr_at_1000
value: 41.533
- type: mrr_at_3
value: 38.274
- type: mrr_at_5
value: 39.612
- type: ndcg_at_1
value: 31.375999999999998
- type: ndcg_at_10
value: 40.298
- type: ndcg_at_100
value: 46.255
- type: ndcg_at_1000
value: 48.522
- type: ndcg_at_3
value: 36.049
- type: ndcg_at_5
value: 38.095
- type: precision_at_1
value: 31.375999999999998
- type: precision_at_10
value: 7.305000000000001
- type: precision_at_100
value: 1.201
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 17.132
- type: precision_at_5
value: 12.107999999999999
- type: recall_at_1
value: 25.522
- type: recall_at_10
value: 50.988
- type: recall_at_100
value: 76.005
- type: recall_at_1000
value: 91.11200000000001
- type: recall_at_3
value: 38.808
- type: recall_at_5
value: 44.279
- type: map_at_1
value: 24.615000000000002
- type: map_at_10
value: 32.843
- type: map_at_100
value: 34.172999999999995
- type: map_at_1000
value: 34.286
- type: map_at_3
value: 30.125
- type: map_at_5
value: 31.495
- type: mrr_at_1
value: 30.023
- type: mrr_at_10
value: 38.106
- type: mrr_at_100
value: 39.01
- type: mrr_at_1000
value: 39.071
- type: mrr_at_3
value: 35.674
- type: mrr_at_5
value: 36.924
- type: ndcg_at_1
value: 30.023
- type: ndcg_at_10
value: 38.091
- type: ndcg_at_100
value: 43.771
- type: ndcg_at_1000
value: 46.315
- type: ndcg_at_3
value: 33.507
- type: ndcg_at_5
value: 35.304
- type: precision_at_1
value: 30.023
- type: precision_at_10
value: 6.837999999999999
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 15.562999999999999
- type: precision_at_5
value: 10.936
- type: recall_at_1
value: 24.615000000000002
- type: recall_at_10
value: 48.691
- type: recall_at_100
value: 72.884
- type: recall_at_1000
value: 90.387
- type: recall_at_3
value: 35.659
- type: recall_at_5
value: 40.602
- type: map_at_1
value: 23.223666666666666
- type: map_at_10
value: 31.338166666666673
- type: map_at_100
value: 32.47358333333333
- type: map_at_1000
value: 32.5955
- type: map_at_3
value: 28.84133333333333
- type: map_at_5
value: 30.20808333333333
- type: mrr_at_1
value: 27.62483333333333
- type: mrr_at_10
value: 35.385916666666674
- type: mrr_at_100
value: 36.23325
- type: mrr_at_1000
value: 36.29966666666667
- type: mrr_at_3
value: 33.16583333333333
- type: mrr_at_5
value: 34.41983333333334
- type: ndcg_at_1
value: 27.62483333333333
- type: ndcg_at_10
value: 36.222
- type: ndcg_at_100
value: 41.29491666666666
- type: ndcg_at_1000
value: 43.85508333333333
- type: ndcg_at_3
value: 31.95116666666667
- type: ndcg_at_5
value: 33.88541666666667
- type: precision_at_1
value: 27.62483333333333
- type: precision_at_10
value: 6.339916666666667
- type: precision_at_100
value: 1.0483333333333333
- type: precision_at_1000
value: 0.14608333333333334
- type: precision_at_3
value: 14.726500000000003
- type: precision_at_5
value: 10.395
- type: recall_at_1
value: 23.223666666666666
- type: recall_at_10
value: 46.778999999999996
- type: recall_at_100
value: 69.27141666666667
- type: recall_at_1000
value: 87.27383333333334
- type: recall_at_3
value: 34.678749999999994
- type: recall_at_5
value: 39.79900000000001
- type: map_at_1
value: 21.677
- type: map_at_10
value: 27.828000000000003
- type: map_at_100
value: 28.538999999999998
- type: map_at_1000
value: 28.64
- type: map_at_3
value: 26.105
- type: map_at_5
value: 27.009
- type: mrr_at_1
value: 24.387
- type: mrr_at_10
value: 30.209999999999997
- type: mrr_at_100
value: 30.953000000000003
- type: mrr_at_1000
value: 31.029
- type: mrr_at_3
value: 28.707
- type: mrr_at_5
value: 29.610999999999997
- type: ndcg_at_1
value: 24.387
- type: ndcg_at_10
value: 31.378
- type: ndcg_at_100
value: 35.249
- type: ndcg_at_1000
value: 37.923
- type: ndcg_at_3
value: 28.213
- type: ndcg_at_5
value: 29.658
- type: precision_at_1
value: 24.387
- type: precision_at_10
value: 4.8309999999999995
- type: precision_at_100
value: 0.73
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.168
- type: precision_at_5
value: 8.251999999999999
- type: recall_at_1
value: 21.677
- type: recall_at_10
value: 40.069
- type: recall_at_100
value: 58.077
- type: recall_at_1000
value: 77.97
- type: recall_at_3
value: 31.03
- type: recall_at_5
value: 34.838
- type: map_at_1
value: 14.484
- type: map_at_10
value: 20.355
- type: map_at_100
value: 21.382
- type: map_at_1000
value: 21.511
- type: map_at_3
value: 18.448
- type: map_at_5
value: 19.451999999999998
- type: mrr_at_1
value: 17.584
- type: mrr_at_10
value: 23.825
- type: mrr_at_100
value: 24.704
- type: mrr_at_1000
value: 24.793000000000003
- type: mrr_at_3
value: 21.92
- type: mrr_at_5
value: 22.97
- type: ndcg_at_1
value: 17.584
- type: ndcg_at_10
value: 24.315
- type: ndcg_at_100
value: 29.354999999999997
- type: ndcg_at_1000
value: 32.641999999999996
- type: ndcg_at_3
value: 20.802
- type: ndcg_at_5
value: 22.335
- type: precision_at_1
value: 17.584
- type: precision_at_10
value: 4.443
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 9.807
- type: precision_at_5
value: 7.0889999999999995
- type: recall_at_1
value: 14.484
- type: recall_at_10
value: 32.804
- type: recall_at_100
value: 55.679
- type: recall_at_1000
value: 79.63
- type: recall_at_3
value: 22.976
- type: recall_at_5
value: 26.939
- type: map_at_1
value: 22.983999999999998
- type: map_at_10
value: 30.812
- type: map_at_100
value: 31.938
- type: map_at_1000
value: 32.056000000000004
- type: map_at_3
value: 28.449999999999996
- type: map_at_5
value: 29.542
- type: mrr_at_1
value: 27.145999999999997
- type: mrr_at_10
value: 34.782999999999994
- type: mrr_at_100
value: 35.699
- type: mrr_at_1000
value: 35.768
- type: mrr_at_3
value: 32.572
- type: mrr_at_5
value: 33.607
- type: ndcg_at_1
value: 27.145999999999997
- type: ndcg_at_10
value: 35.722
- type: ndcg_at_100
value: 40.964
- type: ndcg_at_1000
value: 43.598
- type: ndcg_at_3
value: 31.379
- type: ndcg_at_5
value: 32.924
- type: precision_at_1
value: 27.145999999999997
- type: precision_at_10
value: 6.063000000000001
- type: precision_at_100
value: 0.9730000000000001
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 14.366000000000001
- type: precision_at_5
value: 9.776
- type: recall_at_1
value: 22.983999999999998
- type: recall_at_10
value: 46.876
- type: recall_at_100
value: 69.646
- type: recall_at_1000
value: 88.305
- type: recall_at_3
value: 34.471000000000004
- type: recall_at_5
value: 38.76
- type: map_at_1
value: 23.017000000000003
- type: map_at_10
value: 31.049
- type: map_at_100
value: 32.582
- type: map_at_1000
value: 32.817
- type: map_at_3
value: 28.303
- type: map_at_5
value: 29.854000000000003
- type: mrr_at_1
value: 27.866000000000003
- type: mrr_at_10
value: 35.56
- type: mrr_at_100
value: 36.453
- type: mrr_at_1000
value: 36.519
- type: mrr_at_3
value: 32.938
- type: mrr_at_5
value: 34.391
- type: ndcg_at_1
value: 27.866000000000003
- type: ndcg_at_10
value: 36.506
- type: ndcg_at_100
value: 42.344
- type: ndcg_at_1000
value: 45.213
- type: ndcg_at_3
value: 31.805
- type: ndcg_at_5
value: 33.933
- type: precision_at_1
value: 27.866000000000003
- type: precision_at_10
value: 7.016
- type: precision_at_100
value: 1.468
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 14.822
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.017000000000003
- type: recall_at_10
value: 47.053
- type: recall_at_100
value: 73.177
- type: recall_at_1000
value: 91.47800000000001
- type: recall_at_3
value: 33.675
- type: recall_at_5
value: 39.36
- type: map_at_1
value: 16.673
- type: map_at_10
value: 24.051000000000002
- type: map_at_100
value: 24.933
- type: map_at_1000
value: 25.06
- type: map_at_3
value: 21.446
- type: map_at_5
value: 23.064
- type: mrr_at_1
value: 18.115000000000002
- type: mrr_at_10
value: 25.927
- type: mrr_at_100
value: 26.718999999999998
- type: mrr_at_1000
value: 26.817999999999998
- type: mrr_at_3
value: 23.383000000000003
- type: mrr_at_5
value: 25.008999999999997
- type: ndcg_at_1
value: 18.115000000000002
- type: ndcg_at_10
value: 28.669
- type: ndcg_at_100
value: 33.282000000000004
- type: ndcg_at_1000
value: 36.481
- type: ndcg_at_3
value: 23.574
- type: ndcg_at_5
value: 26.340000000000003
- type: precision_at_1
value: 18.115000000000002
- type: precision_at_10
value: 4.769
- type: precision_at_100
value: 0.767
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 10.351
- type: precision_at_5
value: 7.8
- type: recall_at_1
value: 16.673
- type: recall_at_10
value: 41.063
- type: recall_at_100
value: 62.851
- type: recall_at_1000
value: 86.701
- type: recall_at_3
value: 27.532
- type: recall_at_5
value: 34.076
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.752
- type: map_at_10
value: 15.120000000000001
- type: map_at_100
value: 16.678
- type: map_at_1000
value: 16.854
- type: map_at_3
value: 12.603
- type: map_at_5
value: 13.918
- type: mrr_at_1
value: 19.283
- type: mrr_at_10
value: 29.145
- type: mrr_at_100
value: 30.281000000000002
- type: mrr_at_1000
value: 30.339
- type: mrr_at_3
value: 26.069
- type: mrr_at_5
value: 27.864
- type: ndcg_at_1
value: 19.283
- type: ndcg_at_10
value: 21.804000000000002
- type: ndcg_at_100
value: 28.576
- type: ndcg_at_1000
value: 32.063
- type: ndcg_at_3
value: 17.511
- type: ndcg_at_5
value: 19.112000000000002
- type: precision_at_1
value: 19.283
- type: precision_at_10
value: 6.873
- type: precision_at_100
value: 1.405
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 13.16
- type: precision_at_5
value: 10.189
- type: recall_at_1
value: 8.752
- type: recall_at_10
value: 27.004
- type: recall_at_100
value: 50.648
- type: recall_at_1000
value: 70.458
- type: recall_at_3
value: 16.461000000000002
- type: recall_at_5
value: 20.973
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.81
- type: map_at_10
value: 14.056
- type: map_at_100
value: 18.961
- type: map_at_1000
value: 20.169
- type: map_at_3
value: 10.496
- type: map_at_5
value: 11.952
- type: mrr_at_1
value: 53.5
- type: mrr_at_10
value: 63.479
- type: mrr_at_100
value: 63.971999999999994
- type: mrr_at_1000
value: 63.993
- type: mrr_at_3
value: 61.541999999999994
- type: mrr_at_5
value: 62.778999999999996
- type: ndcg_at_1
value: 42.25
- type: ndcg_at_10
value: 31.471
- type: ndcg_at_100
value: 35.115
- type: ndcg_at_1000
value: 42.408
- type: ndcg_at_3
value: 35.458
- type: ndcg_at_5
value: 32.973
- type: precision_at_1
value: 53.5
- type: precision_at_10
value: 24.85
- type: precision_at_100
value: 7.79
- type: precision_at_1000
value: 1.599
- type: precision_at_3
value: 38.667
- type: precision_at_5
value: 31.55
- type: recall_at_1
value: 6.81
- type: recall_at_10
value: 19.344
- type: recall_at_100
value: 40.837
- type: recall_at_1000
value: 64.661
- type: recall_at_3
value: 11.942
- type: recall_at_5
value: 14.646
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 44.64499999999999
- type: f1
value: 39.39106911352714
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 48.196
- type: map_at_10
value: 61.404
- type: map_at_100
value: 61.846000000000004
- type: map_at_1000
value: 61.866
- type: map_at_3
value: 58.975
- type: map_at_5
value: 60.525
- type: mrr_at_1
value: 52.025
- type: mrr_at_10
value: 65.43299999999999
- type: mrr_at_100
value: 65.80799999999999
- type: mrr_at_1000
value: 65.818
- type: mrr_at_3
value: 63.146
- type: mrr_at_5
value: 64.64
- type: ndcg_at_1
value: 52.025
- type: ndcg_at_10
value: 67.889
- type: ndcg_at_100
value: 69.864
- type: ndcg_at_1000
value: 70.337
- type: ndcg_at_3
value: 63.315
- type: ndcg_at_5
value: 65.91799999999999
- type: precision_at_1
value: 52.025
- type: precision_at_10
value: 9.182
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 25.968000000000004
- type: precision_at_5
value: 17.006
- type: recall_at_1
value: 48.196
- type: recall_at_10
value: 83.885
- type: recall_at_100
value: 92.671
- type: recall_at_1000
value: 96.018
- type: recall_at_3
value: 71.59
- type: recall_at_5
value: 77.946
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.193000000000001
- type: map_at_10
value: 25.168000000000003
- type: map_at_100
value: 27.017000000000003
- type: map_at_1000
value: 27.205000000000002
- type: map_at_3
value: 21.746
- type: map_at_5
value: 23.579
- type: mrr_at_1
value: 31.635999999999996
- type: mrr_at_10
value: 40.077
- type: mrr_at_100
value: 41.112
- type: mrr_at_1000
value: 41.160999999999994
- type: mrr_at_3
value: 37.937
- type: mrr_at_5
value: 39.18
- type: ndcg_at_1
value: 31.635999999999996
- type: ndcg_at_10
value: 32.298
- type: ndcg_at_100
value: 39.546
- type: ndcg_at_1000
value: 42.88
- type: ndcg_at_3
value: 29.221999999999998
- type: ndcg_at_5
value: 30.069000000000003
- type: precision_at_1
value: 31.635999999999996
- type: precision_at_10
value: 9.367
- type: precision_at_100
value: 1.645
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 20.01
- type: precision_at_5
value: 14.753
- type: recall_at_1
value: 15.193000000000001
- type: recall_at_10
value: 38.214999999999996
- type: recall_at_100
value: 65.95
- type: recall_at_1000
value: 85.85300000000001
- type: recall_at_3
value: 26.357000000000003
- type: recall_at_5
value: 31.319999999999997
- task:
type: Retrieval
dataset:
name: MTEB GerDaLIR
type: jinaai/ger_da_lir
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.363
- type: map_at_10
value: 16.222
- type: map_at_100
value: 17.28
- type: map_at_1000
value: 17.380000000000003
- type: map_at_3
value: 14.054
- type: map_at_5
value: 15.203
- type: mrr_at_1
value: 11.644
- type: mrr_at_10
value: 17.625
- type: mrr_at_100
value: 18.608
- type: mrr_at_1000
value: 18.695999999999998
- type: mrr_at_3
value: 15.481
- type: mrr_at_5
value: 16.659
- type: ndcg_at_1
value: 11.628
- type: ndcg_at_10
value: 20.028000000000002
- type: ndcg_at_100
value: 25.505
- type: ndcg_at_1000
value: 28.288000000000004
- type: ndcg_at_3
value: 15.603
- type: ndcg_at_5
value: 17.642
- type: precision_at_1
value: 11.628
- type: precision_at_10
value: 3.5589999999999997
- type: precision_at_100
value: 0.664
- type: precision_at_1000
value: 0.092
- type: precision_at_3
value: 7.109999999999999
- type: precision_at_5
value: 5.401
- type: recall_at_1
value: 10.363
- type: recall_at_10
value: 30.586000000000002
- type: recall_at_100
value: 56.43
- type: recall_at_1000
value: 78.142
- type: recall_at_3
value: 18.651
- type: recall_at_5
value: 23.493
- task:
type: Retrieval
dataset:
name: MTEB GermanDPR
type: deepset/germandpr
config: default
split: test
revision: 5129d02422a66be600ac89cd3e8531b4f97d347d
metrics:
- type: map_at_1
value: 60.78
- type: map_at_10
value: 73.91499999999999
- type: map_at_100
value: 74.089
- type: map_at_1000
value: 74.09400000000001
- type: map_at_3
value: 71.87
- type: map_at_5
value: 73.37700000000001
- type: mrr_at_1
value: 60.78
- type: mrr_at_10
value: 73.91499999999999
- type: mrr_at_100
value: 74.089
- type: mrr_at_1000
value: 74.09400000000001
- type: mrr_at_3
value: 71.87
- type: mrr_at_5
value: 73.37700000000001
- type: ndcg_at_1
value: 60.78
- type: ndcg_at_10
value: 79.35600000000001
- type: ndcg_at_100
value: 80.077
- type: ndcg_at_1000
value: 80.203
- type: ndcg_at_3
value: 75.393
- type: ndcg_at_5
value: 78.077
- type: precision_at_1
value: 60.78
- type: precision_at_10
value: 9.59
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 28.52
- type: precision_at_5
value: 18.4
- type: recall_at_1
value: 60.78
- type: recall_at_10
value: 95.902
- type: recall_at_100
value: 99.024
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.56099999999999
- type: recall_at_5
value: 92.0
- task:
type: STS
dataset:
name: MTEB GermanSTSBenchmark
type: jinaai/german-STSbenchmark
config: default
split: test
revision: 49d9b423b996fea62b483f9ee6dfb5ec233515ca
metrics:
- type: cos_sim_pearson
value: 88.49524420894356
- type: cos_sim_spearman
value: 88.32407839427714
- type: euclidean_pearson
value: 87.25098779877104
- type: euclidean_spearman
value: 88.22738098593608
- type: manhattan_pearson
value: 87.23872691839607
- type: manhattan_spearman
value: 88.2002968380165
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.81
- type: map_at_10
value: 46.238
- type: map_at_100
value: 47.141
- type: map_at_1000
value: 47.213
- type: map_at_3
value: 43.248999999999995
- type: map_at_5
value: 45.078
- type: mrr_at_1
value: 63.619
- type: mrr_at_10
value: 71.279
- type: mrr_at_100
value: 71.648
- type: mrr_at_1000
value: 71.665
- type: mrr_at_3
value: 69.76599999999999
- type: mrr_at_5
value: 70.743
- type: ndcg_at_1
value: 63.619
- type: ndcg_at_10
value: 55.38999999999999
- type: ndcg_at_100
value: 58.80800000000001
- type: ndcg_at_1000
value: 60.331999999999994
- type: ndcg_at_3
value: 50.727
- type: ndcg_at_5
value: 53.284
- type: precision_at_1
value: 63.619
- type: precision_at_10
value: 11.668000000000001
- type: precision_at_100
value: 1.434
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 32.001000000000005
- type: precision_at_5
value: 21.223
- type: recall_at_1
value: 31.81
- type: recall_at_10
value: 58.339
- type: recall_at_100
value: 71.708
- type: recall_at_1000
value: 81.85
- type: recall_at_3
value: 48.001
- type: recall_at_5
value: 53.059
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 68.60640000000001
- type: ap
value: 62.84296904042086
- type: f1
value: 68.50643633327537
- task:
type: Reranking
dataset:
name: MTEB MIRACL
type: jinaai/miracl
config: default
split: test
revision: 8741c3b61cd36ed9ca1b3d4203543a41793239e2
metrics:
- type: map
value: 64.29704335389768
- type: mrr
value: 72.11962197159565
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.3844049247606
- type: f1
value: 89.2124328528015
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.36855452240067
- type: f1
value: 87.35458822097442
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 66.48654810761514
- type: f1
value: 50.07229882504409
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 63.832065370526905
- type: f1
value: 46.283579383385806
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.89038332212509
- type: f1
value: 61.86279849685129
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.11230665770006
- type: f1
value: 67.44780095350535
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.25084061869536
- type: f1
value: 71.43965023016408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.73907195696032
- type: f1
value: 73.69920814839061
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.32577306498249
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.759349326367783
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.401342674703425
- type: mrr
value: 31.384379585660987
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.855
- type: map_at_10
value: 10.01
- type: map_at_100
value: 12.461
- type: map_at_1000
value: 13.776
- type: map_at_3
value: 7.252
- type: map_at_5
value: 8.679
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 49.323
- type: mrr_at_100
value: 49.954
- type: mrr_at_1000
value: 49.997
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.375
- type: ndcg_at_1
value: 39.318999999999996
- type: ndcg_at_10
value: 28.607
- type: ndcg_at_100
value: 26.554
- type: ndcg_at_1000
value: 35.731
- type: ndcg_at_3
value: 32.897999999999996
- type: ndcg_at_5
value: 31.53
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 20.867
- type: precision_at_100
value: 6.796
- type: precision_at_1000
value: 1.983
- type: precision_at_3
value: 30.547
- type: precision_at_5
value: 27.245
- type: recall_at_1
value: 4.855
- type: recall_at_10
value: 14.08
- type: recall_at_100
value: 28.188000000000002
- type: recall_at_1000
value: 60.07900000000001
- type: recall_at_3
value: 7.947
- type: recall_at_5
value: 10.786
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.906999999999996
- type: map_at_10
value: 41.147
- type: map_at_100
value: 42.269
- type: map_at_1000
value: 42.308
- type: map_at_3
value: 36.638999999999996
- type: map_at_5
value: 39.285
- type: mrr_at_1
value: 30.359
- type: mrr_at_10
value: 43.607
- type: mrr_at_100
value: 44.454
- type: mrr_at_1000
value: 44.481
- type: mrr_at_3
value: 39.644
- type: mrr_at_5
value: 42.061
- type: ndcg_at_1
value: 30.330000000000002
- type: ndcg_at_10
value: 48.899
- type: ndcg_at_100
value: 53.612
- type: ndcg_at_1000
value: 54.51200000000001
- type: ndcg_at_3
value: 40.262
- type: ndcg_at_5
value: 44.787
- type: precision_at_1
value: 30.330000000000002
- type: precision_at_10
value: 8.323
- type: precision_at_100
value: 1.0959999999999999
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 18.395
- type: precision_at_5
value: 13.627
- type: recall_at_1
value: 26.906999999999996
- type: recall_at_10
value: 70.215
- type: recall_at_100
value: 90.61200000000001
- type: recall_at_1000
value: 97.294
- type: recall_at_3
value: 47.784
- type: recall_at_5
value: 58.251
- task:
type: PairClassification
dataset:
name: MTEB PawsX
type: paws-x
config: default
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 60.5
- type: cos_sim_ap
value: 57.606096528877494
- type: cos_sim_f1
value: 62.24240307369892
- type: cos_sim_precision
value: 45.27439024390244
- type: cos_sim_recall
value: 99.55307262569832
- type: dot_accuracy
value: 57.699999999999996
- type: dot_ap
value: 51.289351057160616
- type: dot_f1
value: 62.25953130465197
- type: dot_precision
value: 45.31568228105906
- type: dot_recall
value: 99.4413407821229
- type: euclidean_accuracy
value: 60.45
- type: euclidean_ap
value: 57.616461421424034
- type: euclidean_f1
value: 62.313697657913416
- type: euclidean_precision
value: 45.657826313052524
- type: euclidean_recall
value: 98.10055865921787
- type: manhattan_accuracy
value: 60.3
- type: manhattan_ap
value: 57.580565271667325
- type: manhattan_f1
value: 62.24240307369892
- type: manhattan_precision
value: 45.27439024390244
- type: manhattan_recall
value: 99.55307262569832
- type: max_accuracy
value: 60.5
- type: max_ap
value: 57.616461421424034
- type: max_f1
value: 62.313697657913416
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.21300000000001
- type: map_at_10
value: 84.136
- type: map_at_100
value: 84.796
- type: map_at_1000
value: 84.812
- type: map_at_3
value: 81.182
- type: map_at_5
value: 83.027
- type: mrr_at_1
value: 80.91000000000001
- type: mrr_at_10
value: 87.155
- type: mrr_at_100
value: 87.27000000000001
- type: mrr_at_1000
value: 87.271
- type: mrr_at_3
value: 86.158
- type: mrr_at_5
value: 86.828
- type: ndcg_at_1
value: 80.88
- type: ndcg_at_10
value: 87.926
- type: ndcg_at_100
value: 89.223
- type: ndcg_at_1000
value: 89.321
- type: ndcg_at_3
value: 85.036
- type: ndcg_at_5
value: 86.614
- type: precision_at_1
value: 80.88
- type: precision_at_10
value: 13.350000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.173
- type: precision_at_5
value: 24.476
- type: recall_at_1
value: 70.21300000000001
- type: recall_at_10
value: 95.12
- type: recall_at_100
value: 99.535
- type: recall_at_1000
value: 99.977
- type: recall_at_3
value: 86.833
- type: recall_at_5
value: 91.26100000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 47.754688783184875
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 54.875736374329364
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.773
- type: map_at_10
value: 9.447
- type: map_at_100
value: 11.1
- type: map_at_1000
value: 11.37
- type: map_at_3
value: 6.787
- type: map_at_5
value: 8.077
- type: mrr_at_1
value: 18.5
- type: mrr_at_10
value: 28.227000000000004
- type: mrr_at_100
value: 29.445
- type: mrr_at_1000
value: 29.515
- type: mrr_at_3
value: 25.2
- type: mrr_at_5
value: 27.055
- type: ndcg_at_1
value: 18.5
- type: ndcg_at_10
value: 16.29
- type: ndcg_at_100
value: 23.250999999999998
- type: ndcg_at_1000
value: 28.445999999999998
- type: ndcg_at_3
value: 15.376000000000001
- type: ndcg_at_5
value: 13.528
- type: precision_at_1
value: 18.5
- type: precision_at_10
value: 8.51
- type: precision_at_100
value: 1.855
- type: precision_at_1000
value: 0.311
- type: precision_at_3
value: 14.533
- type: precision_at_5
value: 12.0
- type: recall_at_1
value: 3.773
- type: recall_at_10
value: 17.282
- type: recall_at_100
value: 37.645
- type: recall_at_1000
value: 63.138000000000005
- type: recall_at_3
value: 8.853
- type: recall_at_5
value: 12.168
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.32789517976525
- type: cos_sim_spearman
value: 80.32750384145629
- type: euclidean_pearson
value: 81.5025131452508
- type: euclidean_spearman
value: 80.24797115147175
- type: manhattan_pearson
value: 81.51634463412002
- type: manhattan_spearman
value: 80.24614721495055
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 88.47050448992432
- type: cos_sim_spearman
value: 80.58919997743621
- type: euclidean_pearson
value: 85.83258918113664
- type: euclidean_spearman
value: 80.97441389240902
- type: manhattan_pearson
value: 85.7798262013878
- type: manhattan_spearman
value: 80.97208703064196
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.95341439711532
- type: cos_sim_spearman
value: 86.59127484634989
- type: euclidean_pearson
value: 85.57850603454227
- type: euclidean_spearman
value: 86.47130477363419
- type: manhattan_pearson
value: 85.59387925447652
- type: manhattan_spearman
value: 86.50665427391583
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.39810909161844
- type: cos_sim_spearman
value: 82.98595295546008
- type: euclidean_pearson
value: 84.04681129969951
- type: euclidean_spearman
value: 82.98197460689866
- type: manhattan_pearson
value: 83.9918798171185
- type: manhattan_spearman
value: 82.91148131768082
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.02072712147692
- type: cos_sim_spearman
value: 88.78821332623012
- type: euclidean_pearson
value: 88.12132045572747
- type: euclidean_spearman
value: 88.74273451067364
- type: manhattan_pearson
value: 88.05431550059166
- type: manhattan_spearman
value: 88.67610233020723
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.96134704624787
- type: cos_sim_spearman
value: 84.44062976314666
- type: euclidean_pearson
value: 84.03642536310323
- type: euclidean_spearman
value: 84.4535014579785
- type: manhattan_pearson
value: 83.92874228901483
- type: manhattan_spearman
value: 84.33634314951631
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.3154168064887
- type: cos_sim_spearman
value: 86.72393652571682
- type: euclidean_pearson
value: 86.04193246174164
- type: euclidean_spearman
value: 86.30482896608093
- type: manhattan_pearson
value: 85.95524084651859
- type: manhattan_spearman
value: 86.06031431994282
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.91079682750804
- type: cos_sim_spearman
value: 89.30961836617064
- type: euclidean_pearson
value: 88.86249564158628
- type: euclidean_spearman
value: 89.04772899592396
- type: manhattan_pearson
value: 88.85579791315043
- type: manhattan_spearman
value: 88.94190462541333
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.00558145551088
- type: cos_sim_spearman
value: 67.96601170393878
- type: euclidean_pearson
value: 67.87627043214336
- type: euclidean_spearman
value: 66.76402572303859
- type: manhattan_pearson
value: 67.88306560555452
- type: manhattan_spearman
value: 66.6273862035506
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 50.83759332748726
- type: cos_sim_spearman
value: 59.066344562858006
- type: euclidean_pearson
value: 50.08955848154131
- type: euclidean_spearman
value: 58.36517305855221
- type: manhattan_pearson
value: 50.05257267223111
- type: manhattan_spearman
value: 58.37570252804986
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.22749007956492
- type: cos_sim_spearman
value: 55.97282077657827
- type: euclidean_pearson
value: 62.10661533695752
- type: euclidean_spearman
value: 53.62780854854067
- type: manhattan_pearson
value: 62.37138085709719
- type: manhattan_spearman
value: 54.17556356828155
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.91145397065878
- type: cos_sim_spearman
value: 88.13960018389005
- type: euclidean_pearson
value: 87.67618876224006
- type: euclidean_spearman
value: 87.99119480810556
- type: manhattan_pearson
value: 87.67920297334753
- type: manhattan_spearman
value: 87.99113250064492
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.09133563707582
- type: mrr
value: 93.2415288052543
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 47.760999999999996
- type: map_at_10
value: 56.424
- type: map_at_100
value: 57.24399999999999
- type: map_at_1000
value: 57.278
- type: map_at_3
value: 53.68000000000001
- type: map_at_5
value: 55.442
- type: mrr_at_1
value: 50.666999999999994
- type: mrr_at_10
value: 58.012
- type: mrr_at_100
value: 58.736
- type: mrr_at_1000
value: 58.769000000000005
- type: mrr_at_3
value: 56.056
- type: mrr_at_5
value: 57.321999999999996
- type: ndcg_at_1
value: 50.666999999999994
- type: ndcg_at_10
value: 60.67700000000001
- type: ndcg_at_100
value: 64.513
- type: ndcg_at_1000
value: 65.62400000000001
- type: ndcg_at_3
value: 56.186
- type: ndcg_at_5
value: 58.692
- type: precision_at_1
value: 50.666999999999994
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 1.023
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 21.889
- type: precision_at_5
value: 14.866999999999999
- type: recall_at_1
value: 47.760999999999996
- type: recall_at_10
value: 72.006
- type: recall_at_100
value: 89.767
- type: recall_at_1000
value: 98.833
- type: recall_at_3
value: 60.211000000000006
- type: recall_at_5
value: 66.3
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 94.86690691995835
- type: cos_sim_f1
value: 89.37875751503007
- type: cos_sim_precision
value: 89.5582329317269
- type: cos_sim_recall
value: 89.2
- type: dot_accuracy
value: 99.76336633663367
- type: dot_ap
value: 94.26453740761586
- type: dot_f1
value: 88.00783162016641
- type: dot_precision
value: 86.19367209971237
- type: dot_recall
value: 89.9
- type: euclidean_accuracy
value: 99.7940594059406
- type: euclidean_ap
value: 94.85459757524379
- type: euclidean_f1
value: 89.62779156327544
- type: euclidean_precision
value: 88.96551724137932
- type: euclidean_recall
value: 90.3
- type: manhattan_accuracy
value: 99.79009900990098
- type: manhattan_ap
value: 94.76971336654465
- type: manhattan_f1
value: 89.35323383084577
- type: manhattan_precision
value: 88.91089108910892
- type: manhattan_recall
value: 89.8
- type: max_accuracy
value: 99.7940594059406
- type: max_ap
value: 94.86690691995835
- type: max_f1
value: 89.62779156327544
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.38197670064987
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.08330158937971
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.50367079063226
- type: mrr
value: 50.30444943128768
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.37739520909561
- type: cos_sim_spearman
value: 31.548500943973913
- type: dot_pearson
value: 29.983610104303
- type: dot_spearman
value: 29.90185869098618
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.5810000000000002
- type: map_at_100
value: 9.064
- type: map_at_1000
value: 22.161
- type: map_at_3
value: 0.536
- type: map_at_5
value: 0.8370000000000001
- type: mrr_at_1
value: 80.0
- type: mrr_at_10
value: 86.75
- type: mrr_at_100
value: 86.799
- type: mrr_at_1000
value: 86.799
- type: mrr_at_3
value: 85.0
- type: mrr_at_5
value: 86.5
- type: ndcg_at_1
value: 73.0
- type: ndcg_at_10
value: 65.122
- type: ndcg_at_100
value: 51.853
- type: ndcg_at_1000
value: 47.275
- type: ndcg_at_3
value: 66.274
- type: ndcg_at_5
value: 64.826
- type: precision_at_1
value: 80.0
- type: precision_at_10
value: 70.19999999999999
- type: precision_at_100
value: 53.480000000000004
- type: precision_at_1000
value: 20.946
- type: precision_at_3
value: 71.333
- type: precision_at_5
value: 70.0
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.884
- type: recall_at_100
value: 12.57
- type: recall_at_1000
value: 44.208999999999996
- type: recall_at_3
value: 0.5890000000000001
- type: recall_at_5
value: 0.95
- task:
type: Clustering
dataset:
name: MTEB TenKGnadClusteringP2P
type: slvnwhrl/tenkgnad-clustering-p2p
config: default
split: test
revision: 5c59e41555244b7e45c9a6be2d720ab4bafae558
metrics:
- type: v_measure
value: 42.84199261133083
- task:
type: Clustering
dataset:
name: MTEB TenKGnadClusteringS2S
type: slvnwhrl/tenkgnad-clustering-s2s
config: default
split: test
revision: 6cddbe003f12b9b140aec477b583ac4191f01786
metrics:
- type: v_measure
value: 23.689557114798838
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.941
- type: map_at_10
value: 8.222
- type: map_at_100
value: 14.277999999999999
- type: map_at_1000
value: 15.790000000000001
- type: map_at_3
value: 4.4670000000000005
- type: map_at_5
value: 5.762
- type: mrr_at_1
value: 24.490000000000002
- type: mrr_at_10
value: 38.784
- type: mrr_at_100
value: 39.724
- type: mrr_at_1000
value: 39.724
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 37.415
- type: ndcg_at_1
value: 22.448999999999998
- type: ndcg_at_10
value: 21.026
- type: ndcg_at_100
value: 33.721000000000004
- type: ndcg_at_1000
value: 45.045
- type: ndcg_at_3
value: 20.053
- type: ndcg_at_5
value: 20.09
- type: precision_at_1
value: 24.490000000000002
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.469
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 21.769
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 1.941
- type: recall_at_10
value: 14.915999999999999
- type: recall_at_100
value: 46.155
- type: recall_at_1000
value: 80.664
- type: recall_at_3
value: 5.629
- type: recall_at_5
value: 8.437
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.64800000000001
- type: ap
value: 12.914826731261094
- type: f1
value: 53.05213503422915
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.427277872099594
- type: f1
value: 60.78292007556828
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.48134168406559
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.79465935506944
- type: cos_sim_ap
value: 70.24589055290592
- type: cos_sim_f1
value: 65.0994575045208
- type: cos_sim_precision
value: 63.76518218623482
- type: cos_sim_recall
value: 66.49076517150397
- type: dot_accuracy
value: 84.63968528342374
- type: dot_ap
value: 69.84683095084355
- type: dot_f1
value: 64.50606169727523
- type: dot_precision
value: 59.1719885487778
- type: dot_recall
value: 70.89709762532982
- type: euclidean_accuracy
value: 84.76485664898374
- type: euclidean_ap
value: 70.20556438685551
- type: euclidean_f1
value: 65.06796614516543
- type: euclidean_precision
value: 63.29840319361277
- type: euclidean_recall
value: 66.93931398416886
- type: manhattan_accuracy
value: 84.72313286046374
- type: manhattan_ap
value: 70.17151475534308
- type: manhattan_f1
value: 65.31379180759113
- type: manhattan_precision
value: 62.17505366086334
- type: manhattan_recall
value: 68.7862796833773
- type: max_accuracy
value: 84.79465935506944
- type: max_ap
value: 70.24589055290592
- type: max_f1
value: 65.31379180759113
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.95874568246207
- type: cos_sim_ap
value: 85.82517548264127
- type: cos_sim_f1
value: 78.22288041466125
- type: cos_sim_precision
value: 75.33875338753387
- type: cos_sim_recall
value: 81.33661841700031
- type: dot_accuracy
value: 88.836496293709
- type: dot_ap
value: 85.53430720252186
- type: dot_f1
value: 78.10616085869725
- type: dot_precision
value: 74.73269555430501
- type: dot_recall
value: 81.79858330766862
- type: euclidean_accuracy
value: 88.92769821865176
- type: euclidean_ap
value: 85.65904346964223
- type: euclidean_f1
value: 77.98774074208407
- type: euclidean_precision
value: 73.72282795035315
- type: euclidean_recall
value: 82.77640899291654
- type: manhattan_accuracy
value: 88.86366282454303
- type: manhattan_ap
value: 85.61599642231819
- type: manhattan_f1
value: 78.01480509061737
- type: manhattan_precision
value: 74.10460685833044
- type: manhattan_recall
value: 82.36064059131506
- type: max_accuracy
value: 88.95874568246207
- type: max_ap
value: 85.82517548264127
- type: max_f1
value: 78.22288041466125
- task:
type: Retrieval
dataset:
name: MTEB WikiCLIR
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.9539999999999997
- type: map_at_10
value: 7.407
- type: map_at_100
value: 8.677999999999999
- type: map_at_1000
value: 9.077
- type: map_at_3
value: 5.987
- type: map_at_5
value: 6.6979999999999995
- type: mrr_at_1
value: 35.65
- type: mrr_at_10
value: 45.097
- type: mrr_at_100
value: 45.83
- type: mrr_at_1000
value: 45.871
- type: mrr_at_3
value: 42.63
- type: mrr_at_5
value: 44.104
- type: ndcg_at_1
value: 29.215000000000003
- type: ndcg_at_10
value: 22.694
- type: ndcg_at_100
value: 22.242
- type: ndcg_at_1000
value: 27.069
- type: ndcg_at_3
value: 27.641
- type: ndcg_at_5
value: 25.503999999999998
- type: precision_at_1
value: 35.65
- type: precision_at_10
value: 12.795000000000002
- type: precision_at_100
value: 3.354
- type: precision_at_1000
value: 0.743
- type: precision_at_3
value: 23.403
- type: precision_at_5
value: 18.474
- type: recall_at_1
value: 3.9539999999999997
- type: recall_at_10
value: 11.301
- type: recall_at_100
value: 22.919999999999998
- type: recall_at_1000
value: 40.146
- type: recall_at_3
value: 7.146
- type: recall_at_5
value: 8.844000000000001
- task:
type: Retrieval
dataset:
name: MTEB XMarket
type: jinaai/xmarket_de
config: default
split: test
revision: 2336818db4c06570fcdf263e1bcb9993b786f67a
metrics:
- type: map_at_1
value: 4.872
- type: map_at_10
value: 10.658
- type: map_at_100
value: 13.422999999999998
- type: map_at_1000
value: 14.245
- type: map_at_3
value: 7.857
- type: map_at_5
value: 9.142999999999999
- type: mrr_at_1
value: 16.744999999999997
- type: mrr_at_10
value: 24.416
- type: mrr_at_100
value: 25.432
- type: mrr_at_1000
value: 25.502999999999997
- type: mrr_at_3
value: 22.096
- type: mrr_at_5
value: 23.421
- type: ndcg_at_1
value: 16.695999999999998
- type: ndcg_at_10
value: 18.66
- type: ndcg_at_100
value: 24.314
- type: ndcg_at_1000
value: 29.846
- type: ndcg_at_3
value: 17.041999999999998
- type: ndcg_at_5
value: 17.585
- type: precision_at_1
value: 16.695999999999998
- type: precision_at_10
value: 10.374
- type: precision_at_100
value: 3.988
- type: precision_at_1000
value: 1.1860000000000002
- type: precision_at_3
value: 14.21
- type: precision_at_5
value: 12.623000000000001
- type: recall_at_1
value: 4.872
- type: recall_at_10
value: 18.624
- type: recall_at_100
value: 40.988
- type: recall_at_1000
value: 65.33
- type: recall_at_3
value: 10.162
- type: recall_at_5
value: 13.517999999999999
---
<!-- TODO: add evaluation results here -->
<br><br>
<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
## Quick Start
The easiest way to start using `jina-embeddings-v2-base-de` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/).
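For example, a minimal Python sketch of calling the hosted API is shown below; the endpoint URL, request payload, and the `JINA_API_KEY` environment variable are assumptions for illustration — consult the API documentation for the authoritative interface.
```python
import os
import requests

# Hypothetical sketch of a request to Jina AI's Embedding API.
# Endpoint and payload shape are assumed; see https://jina.ai/embeddings/ for the official reference.
response = requests.post(
    "https://api.jina.ai/v1/embeddings",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['JINA_API_KEY']}",  # assumed env var holding your API key
    },
    json={
        "model": "jina-embeddings-v2-base-de",
        "input": ["How is the weather today?", "Wie ist das Wetter heute?"],
    },
)
response.raise_for_status()
embeddings = [item["embedding"] for item in response.json()["data"]]
print(len(embeddings), len(embeddings[0]))
```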
## Intended Usage & Model Info
`jina-embeddings-v2-base-de` is a German/English bilingual text **embedding model** supporting **8192 sequence length**.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence length.
We have designed it for high performance in mono-lingual & cross-lingual applications and trained it specifically to support mixed German-English input without bias.
Additionally, we provide the following embedding models:
`jina-embeddings-v2-base-de` ist ein zweisprachiges **Text Embedding Modell** für Deutsch und Englisch,
welches Texteingaben mit einer Länge von bis zu **8192 Token unterstützt**.
Es basiert auf der adaptierten BERT-Modell-Architektur JinaBERT,
welche mithilfe einer symmetrischen Variante von [ALiBi](https://arxiv.org/abs/2108.12409) längere Eingabetexte erlaubt.
Wir haben das Modell für hohe Performance in einsprachigen und cross-lingualen Anwendungen entwickelt und speziell darauf trainiert,
gemischte deutsch-englische Eingaben ohne einen Bias zu kodieren.
Des Weiteren stellen wir folgende Embedding-Modelle bereit:
- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): 161 million parameters Chinese-English Bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): 161 million parameters German-English Bilingual embeddings **(you are here)**.
- [`jina-embeddings-v2-base-es`](): Spanish-English Bilingual embeddings (soon).
- [`jina-embeddings-v2-base-code`](https://huggingface.co/jinaai/jina-embeddings-v2-base-code): 161 million parameters code embeddings.
## Data & Parameters
The data and training details are described in this [technical report](https://arxiv.org/abs/2402.17016).
## Usage
**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>
### Why mean pooling?
`mean pooling` takes all token embeddings from the model output and averages them at the sentence/paragraph level.
It has proven to be one of the most effective ways to produce high-quality sentence embeddings.
We offer an `encode` function to deal with this.
However, if you would like to do it without using the default `encode` function:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ['How is the weather today?', 'What is the current weather like today?']
tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-de')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True, torch_dtype=torch.bfloat16)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = model(**encoded_input)
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```
</p>
</details>
You can use Jina Embedding models directly from the `transformers` package.
```python
!pip install transformers
import torch
from transformers import AutoModel
from numpy.linalg import norm
cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True, torch_dtype=torch.bfloat16)
embeddings = model.encode(['How is the weather today?', 'Wie ist das Wetter heute?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
If you only want to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(
['Very long ... document'],
max_length=2048
)
```
As of its v2.3.0 release, `sentence-transformers` also supports Jina embeddings (please make sure that you are logged in to Hugging Face as well):
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
"jinaai/jina-embeddings-v2-base-de", # switch to en/zh for English or Chinese
trust_remote_code=True
)
# control your input sequence length up to 8192
model.max_seq_length = 1024
embeddings = model.encode([
'How is the weather today?',
'Wie ist das Wetter heute?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
## Alternatives to Using Transformers Package
1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).
## Benchmark Results
We evaluated our bilingual model on all German and English evaluation tasks available in the [MTEB benchmark](https://huggingface.co/blog/mteb). In addition, we compared the model against several other German, English, and multilingual models on additional German evaluation tasks:
<img src="de_evaluation_results.png" width="780px">
## Use Jina Embeddings for RAG
According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),
> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">
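To make the retrieval step of such a RAG pipeline concrete, the following is a minimal, library-agnostic sketch that embeds a few placeholder document chunks with this model and selects the top-k chunks for a query; the chunking and prompt assembly are illustrative assumptions, not part of any specific RAG framework.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-de", trust_remote_code=True)

# Placeholder corpus: in a real pipeline these would be chunks of your documents.
chunks = [
    "Berlin ist die Hauptstadt von Deutschland.",
    "The capital of France is Paris.",
    "Jina embeddings support sequences of up to 8192 tokens.",
]
query = "What is the capital of Germany?"

chunk_emb = model.encode(chunks)
query_emb = model.encode(query)

# Rank chunks by cosine similarity and keep the top-k as context for the generator.
scores = cos_sim(query_emb, chunk_emb)[0]
top_k = scores.argsort(descending=True)[:2]
context = "\n".join(chunks[int(i)] for i in top_k)
print(context)
```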
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```
@article{mohr2024multi,
title={Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings},
author={Mohr, Isabelle and Krimmel, Markus and Sturua, Saba and Akram, Mohammad Kalim and Koukounas, Andreas and G{\"u}nther, Michael and Mastrapas, Georgios and Ravishankar, Vinit and Mart{\'\i}nez, Joan Fontanals and Wang, Feng and others},
journal={arXiv preprint arXiv:2402.17016},
year={2024}
}
```
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r63-task1431 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"license:mit",
"region:us"
] | 1,732,220,198,000 | 2024-11-22T17:36:04 | 0 | 0 | ---
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r63-task1431
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task1431_head_qa_answer_generation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
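In the meantime, the following is a minimal sketch of loading the adapter with PEFT on top of the base model listed above; it assumes this repository contains a standard PEFT LoRA adapter, and the example prompt is purely illustrative.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r63-task1431"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate` to be installed
)

# Attach the LoRA adapter (assumes a standard PEFT adapter layout in this repository).
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "[INST] Answer the following exam question: What is the function of hemoglobin? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```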
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task1431_head_qa_answer_generation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"HEAD-QA"
] | Non_BioNLP |
nthakur/RetroMAE_BEIR | nthakur | sentence-similarity | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:wikipedia",
"dataset:bookcorpus",
"dataset:ms_marco",
"dataset:BeIR/fiqa",
"dataset:BeIR/trec-covid",
"dataset:BeIR/scifact",
"dataset:BeIR/nfcorpus",
"dataset:BeIR/nq",
"dataset:BeIR/hotpotqa",
"dataset:BeIR/arguana",
"dataset:BeIR/webis-touche2020",
"dataset:BeIR/quora",
"dataset:BeIR/dbpedia-entity",
"dataset:BeIR/scidocs",
"dataset:BeIR/fever",
"dataset:BeIR/climate-fever",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,692,206,965,000 | 2023-08-16T17:31:46 | 11 | 1 | ---
datasets:
- wikipedia
- bookcorpus
- ms_marco
- BeIR/fiqa
- BeIR/trec-covid
- BeIR/scifact
- BeIR/nfcorpus
- BeIR/nq
- BeIR/hotpotqa
- BeIR/arguana
- BeIR/webis-touche2020
- BeIR/quora
- BeIR/dbpedia-entity
- BeIR/scidocs
- BeIR/fever
- BeIR/climate-fever
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# nthakur/RetroMAE_BEIR
This is a port of the [RetroMAE_BEIR](https://huggingface.co/Shitao/RetroMAE_BEIR) model to a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nthakur/RetroMAE_BEIR')
embeddings = model.encode(sentences)
print(embeddings)
```
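A small sketch of the clustering use case mentioned above is shown below; scikit-learn's KMeans and the toy sentences are used purely for illustration and are not part of the original model release.
```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('nthakur/RetroMAE_BEIR')

corpus = [
    "COVID-19 vaccines reduce the risk of severe illness.",
    "mRNA vaccines were widely deployed during the pandemic.",
    "The stock market rallied after the earnings report.",
    "Investors reacted positively to the quarterly results.",
]
embeddings = model.encode(corpus)

# Group the sentences into two topical clusters based on their embeddings.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for sentence, label in zip(corpus, labels):
    print(label, sentence)
```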
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nthakur/RetroMAE_BEIR')
model = AutoModel.from_pretrained('nthakur/RetroMAE_BEIR')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nthakur/RetroMAE_BEIR)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at [RetroMAE](https://github.com/staoxiao/RetroMAE/).
<!--- Describe where people can find more information --> | [
"SCIFACT"
] | Non_BioNLP |
Mozilla/OLMo-7B-0424-llamafile | Mozilla | null | [
"llamafile",
"en",
"dataset:allenai/dolma",
"arxiv:2402.00838",
"arxiv:2302.13971",
"license:apache-2.0",
"region:us"
] | 1,721,391,663,000 | 2024-07-28T05:23:45 | 151 | 2 | ---
datasets:
- allenai/dolma
language:
- en
license: apache-2.0
license_link: LICENSE
tags:
- llamafile
quantized_by: jartine
---
# OLMo 7b 0424 - llamafile
- Model creator: [Allen Institute for AI](https://huggingface.co/allenai/)
- Original model: [allenai/OLMo-7B-0424-hf](https://huggingface.co/allenai/OLMo-7B-0424-hf)
The model is packaged into executable weights, which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile). This makes it
easy to use the model on Linux, MacOS, Windows, FreeBSD, OpenBSD, and
NetBSD for AMD64 and ARM64.
## Quickstart
Running the following on a desktop OS will launch a tab in your web
browser with a chatbot interface.
```
wget https://huggingface.co/Mozilla/OLMo-7B-0424-llamafile/resolve/main/OLMo-7B-0424.Q6_K.llamafile
chmod +x OLMo-7B-0424.Q6_K.llamafile
./OLMo-7B-0424.Q6_K.llamafile
```
You then need to fill out the prompt / history template (see below).
This model has a max context window size of 4k tokens. By default, a
context window size of 2k tokens is used. You may increase this to the
maximum by passing the `-c 0` flag.
On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use
the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card
driver needs to be installed. If the prebuilt DSOs fail, the CUDA
or ROCm SDKs may need to be installed, in which case llamafile builds a
native module just for your system.
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)
of the README.
---
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for OLMo 7B
<!-- Provide a quick summary of what the model is/does. -->
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.
This model has been converted from [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) for the
Hugging Face Transformers format.
## Model Details
The core models released in this batch are the following:
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B-hf) | 3 Trillion | 16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B-hf) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T-hf) | 2 Trillion | 32 | 4096 | 32 | 2048 |
We are releasing many checkpoints for these models, for every 1000 training steps. These have not
yet been converted into Hugging Face Transformers format, but are available in [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B).
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023 based on Dolma dataset version.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
- Evaluation code: https://github.com/allenai/OLMo-Eval
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-7B/reports/OLMo-7B--Vmlldzo2NzQyMzk5
<!-- - **Press release:** TODO -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
Quickly get inference running with the following:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")
message = ["Language modeling is"]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move the inputs and the model to CUDA
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```
Alternatively, with the pipeline abstraction:
```python
from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-7B-hf")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```
Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to data types and CUDA placement, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
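A minimal sketch of that quantized path, assuming `bitsandbytes` is installed (the flags follow the snippet above and may vary across `transformers` versions):
```python
# Sketch of the 8-bit quantized loading path described above (requires bitsandbytes).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-7B-hf", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")
inputs = tokenizer(["Language modeling is"], return_tensors="pt", return_token_type_ids=False)
# As recommended above, move the input ids to CUDA explicitly for the quantized model.
response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```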
### Fine-tuning
This model does not directly support our fine-tuning processes. Model fine-tuning can be done
from the final checkpoint or many intermediate checkpoints of
[allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B).
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Core model results for the 7B model are found below.
| | [Llama 7B](https://arxiv.org/abs/2302.13971) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | [MPT 7B](https://huggingface.co/mosaicml/mpt-7b) | **OLMo 7B** (ours) |
| --------------------------------- | -------- | ---------- | --------- | ------ | ------- |
| arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 |
| arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 |
| boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 |
| copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 |
| hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 |
| openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 |
| piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 |
| sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 |
| winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 |
| **Core tasks average** | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 |
| truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 |
| MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 |
| GSM8k (mixed eval.) | 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) |
| **Full average** | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 |
And for the 1B model:
| task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- |
| arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 |
| arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 |
| boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 |
| copa | 50 | 84 | 72 | 78 | 79 |
| hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 |
| openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 |
| piqa | 50 | 74 | 69.1 | 71.1 | 73.7 |
| sciq | 25 | 94.7 | 86 | 90.5 | 88.1 |
| winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 |
| Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 |
\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
## Model Details
### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.
### Architecture
OLMo 7B architecture with peer models for comparison.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
|------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
| d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
| num heads | 32 | 32 | 32 | 71 | 16 |
| num layers | 32 | 32 | 32 | 32 | 32 |
| MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
| LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
| pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
| attention variant | full | GQA | full | MQA | MQA |
| biases | none | none | in LN only | in LN only | none |
| block type | sequential | sequential | sequential | parallel | parallel |
| activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
| sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
| batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
| batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
| weight tying | no | no | no | no | yes |
### Hyperparameters
AdamW optimizer parameters are shown below.
| Size | Peak LR | Betas | Epsilon | Weight Decay |
|------|------------|-----------------|-------------|--------------|
| 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
| 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 |
Optimizer settings comparison with peer models.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
|-----------------------|------------------|---------------------|--------------------|--------------------|
| warmup steps | 5000 | 2000 | 2000 | 1000 |
| peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
| minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
| weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
| beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
| beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
| epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
| LR schedule | linear | cosine | cosine | cosine |
| gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
| gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
| optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |
## Environmental Impact
OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.
| | GPU Type | Energy Consumption From GPUs | Carbon Intensity (kg CO₂e/kWh) | Carbon Emissions (tCO₂eq) |
|-----------|------------|-----------------------------|--------------------------------|---------------------------|
| OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* |
| OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 |
## Bias, Risks, and Limitations
Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
In addition, many statements produced by OLMo, as with any LLM, may be factually incorrect, so outputs should be verified.
## Citation
**BibTeX:**
```
@article{Groeneveld2023OLMo,
title={OLMo: Accelerating the Science of Language Models},
author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
journal={Preprint},
year={2024}
}
```
**APA:**
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
## Model Card Contact
For errors in this model card, contact Nathan, Akshita or Shane, `{nathanl, akshitab, shanea} at allenai dot org`.
| [
"SCIQ"
] | Non_BioNLP |
RichardErkhov/EleutherAI_-_pythia-410m-deduped-4bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,713,858,853,000 | 2024-04-23T07:54:50 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped/
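A minimal loading sketch for this 4-bit quantization, assuming the standard `transformers` + `bitsandbytes` stack (the prompt below is illustrative):
```python
# Minimal sketch: load the 4-bit quantized checkpoint (assumes bitsandbytes is installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/EleutherAI_-_pythia-410m-deduped-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs)[0]))
```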
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-410M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
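These figures are mutually consistent, as a quick arithmetic check shows:
```python
# Pure arithmetic on the numbers quoted above.
tokens_per_step = 2_097_152            # 2M-token batch size
steps = 143_000
tokens_per_checkpoint = 2_097_152_000  # checkpoint interval in tokens
checkpoints = 143

assert steps * tokens_per_step == 299_892_736_000
assert checkpoints * tokens_per_checkpoint == 299_892_736_000
```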
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"SCIQ"
] | Non_BioNLP |
baukearends/Echocardiogram-SpanCategorizer-tricuspid-regurgitation | baukearends | token-classification | [
"spacy",
"arxiv:2408.06930",
"medical",
"token-classification",
"nl",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | 1,723,709,693,000 | 2024-08-15T08:35:27 | 13 | 0 | ---
language:
- nl
license: cc-by-sa-4.0
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
tags:
- spacy
- arxiv:2408.06930
- medical
model-index:
- name: Echocardiogram_SpanCategorizer_tricuspid_regurgitation
results:
- task:
type: token-classification
dataset:
name: internal test set
type: test
metrics:
- type: f1
value: 0.905
name: Weighted f1
verified: false
- type: precision
value: 0.93
name: Weighted precision
verified: false
- type: recall
value: 0.881
name: Weighted recall
verified: false
---
# Description
This model is a spaCy SpanCategorizer model trained from scratch on Dutch echocardiogram reports sourced from Electronic Health Records. The publication associated with the span classification task can be found at https://arxiv.org/abs/2408.06930. The config file for training the model can be found at https://github.com/umcu/echolabeler.
# Minimum working example
```python
!pip install https://huggingface.co/baukearends/Echocardiogram-SpanCategorizer-tricuspid-regurgitation/resolve/main/nl_Echocardiogram_SpanCategorizer_tricuspid_regurgitation-any-py3-none-any.whl
```
```python
import spacy
nlp = spacy.load("nl_Echocardiogram_SpanCategorizer_tricuspid_regurgitation")
```
```python
prediction = nlp("Op dit echo geen duidelijke WMA te zien, goede systolische L.V. functie, wel L.V.H., diastolische dysfunctie graad 1A tot 2. Geringe aortastenose en - matige -insufficientie. Geringe T.I.")
for span, score in zip(prediction.spans['sc'], prediction.spans['sc'].attrs['scores']):
    print(f"Span: {span}, label: {span.label_}, score: {score[0]:.3f}")
```
# Label Scheme
<details>
<summary>View label scheme (4 labels for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`spancat`** | `tricuspid_valve_native_regurgitation_not_present`, `tricuspid_valve_native_regurgitation_mild`, `tricuspid_valve_native_regurgitation_moderate`, `tricuspid_valve_native_regurgitation_severe` |
</details>
# Intended use
The model is developed for span classification on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch.
# Data
The model was trained on approximately 4,000 manually annotated echocardiogram reports from the University Medical Centre Utrecht. The training data was anonymized before starting the training procedure.
| Feature | Description |
| --- | --- |
| **Name** | `Echocardiogram_SpanCategorizer_tricuspid_regurgitation` |
| **Version** | `1.0.0` |
| **spaCy** | `>=3.7.4,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `spancat` |
| **Components** | `tok2vec`, `spancat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `cc-by-sa-4.0` |
| **Author** | [Bauke Arends]() |
# Contact
If you are having problems with this model, please open an issue on our GitHub repository: https://github.com/umcu/echolabeler/issues
# Usage
If you use the model in your work, please cite the following reference: https://doi.org/10.48550/arXiv.2408.06930
# References
Paper: Bauke Arends, Melle Vessies, Dirk van Osch, Arco Teske, Pim van der Harst, René van Es, Bram van Es (2024): Diagnosis extraction from unstructured Dutch echocardiogram reports using span- and document-level characteristic classification, Arxiv https://arxiv.org/abs/2408.06930 | [
"MEDICAL DATA"
] | BioNLP |
rjuez00/meddocan-flair-spanish-fast-bilstm-crf | rjuez00 | null | [
"pytorch",
"region:us"
] | 1,651,428,068,000 | 2022-05-03T14:19:44 | 0 | 0 | ---
{}
---
The [MEDDOCAN dataset](https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN) contains some entities that are separated by a dot rather than a space. Such is the case of Alicante.Villajoyosa, which holds two separate entities but is treated as a single token by traditional tokenizers. SpaCy tokenizers do not handle this either: when I tried to assign the entities to tokens while training SpaCy v3, it frequently reported errors that it could not match some entities to tokens due to this problem.
That is why I created a tokenizer with manual regex rules, which improves performance when using the model:
```
from flair.models import SequenceTagger
from flair.data import Sentence
from flair.data import Tokenizer
import re
class CustomTokenizer(Tokenizer):
    def tokenize(self, text):
        finaltokens = []
        tokens = text.split()
        for token in tokens:
            # first split on "-" and "/"
            for i in list(filter(None, re.split(r"-|/", token))):
                if len(re.findall(r"(\w)\.(\w)", i)) > 0:
                    # entities glued together by a dot (e.g. Alicante.Villajoyosa) are split apart
                    for j in filter(None, i.split(".")):
                        finaltokens.append(j)
                else:
                    finaltokens.append(i)
        return finaltokens
flairTagger = SequenceTagger.load("rjuez00/meddocan-flair-spanish-fast-bilstm-crf")
```
To use the model, instantiate it as above and then create a Flair Sentence with the text and the custom tokenizer like this:
```documentFlair = Sentence(text, use_tokenizer = CustomTokenizer())```
Unfortunately, the spans that Flair provides while performing NER on the MEDDOCAN dataset are not correct; I am not sure whether this is a bug in my version (0.11). I have therefore developed a post-processing step that corrects the slight deviations in the offsets.
```
documentEntities = []
documentFlair = Sentence(text, use_tokenizer = CustomTokenizer())
flairTagger.predict(documentFlair)
predictedEntities = []
for idxentity, entity in enumerate(documentFlair.get_spans("ner")):
    predictedEntities.append(entity)
```
```
for idxentity, entity in enumerate(reversed(predictedEntities), start=1):
    entityType = entity.get_label("ner").value
    startEntity = entity.start_position
    endEntity = entity.end_position

    # move the start forward past leading punctuation and whitespace
    while text[startEntity] in [" ", "(", ")", ",", ".", ";", ":", "!", "?", "-", "\n"]:
        startEntity += 1

    # extend the end while it still points at alphanumeric characters
    while len(text) > endEntity and (text[endEntity].isalpha() or text[endEntity].isnumeric()):
        endEntity += 1

    # trim trailing punctuation and other non-entity characters
    while text[endEntity - 1] in [" ", ",", ".", ";", ":", "!", "?", "-", ")", "(", "\\", "/", "\"", "'", "+", "*", "&", "%", "$", "#", "@", "~", "`", "^", "|", "=", ":", ";", ">", "<", "]"]:
        endEntity -= 1

    # write the corrected entity (f is an open output file for the annotations)
    f.write("T" + str(idxentity) + "\t"
            + entityType + " " + str(startEntity) + " " + str(endEntity) +
            "\t" + text[startEntity:endEntity] + "\n")
``` | [
"MEDDOCAN"
] | BioNLP |
ghadeermobasher/Originalbiobert-v1.1-BioRED-CD-128-32-30 | ghadeermobasher | token-classification | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,657,731,957,000 | 2022-07-13T17:47:28 | 118 | 0 | ---
metrics:
- precision
- recall
- f1
tags:
- generated_from_trainer
model-index:
- name: Originalbiobert-v1.1-BioRED-CD-128-32-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Originalbiobert-v1.1-BioRED-CD-128-32-30
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Precision: 0.9994
- Recall: 1.0
- F1: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.10.3
| [
"BIORED"
] | BioNLP |
alexandra-barker/lora_finetuned_roberta_mlm | alexandra-barker | null | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:meddialog",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | 1,734,200,428,000 | 2024-12-15T05:35:19 | 1 | 0 | ---
base_model: roberta-base
datasets:
- meddialog
library_name: peft
license: mit
tags:
- generated_from_trainer
model-index:
- name: lora_finetuned_roberta_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_finetuned_roberta_mlm
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the meddialog dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0 | [
"MEDDIALOG"
] | BioNLP |
kitintouch/kit-the-bear | kitintouch | text-to-image | [
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | 1,675,881,870,000 | 2023-02-08T18:44:56 | 0 | 0 | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: kitthebear
---
### kit the bear Dreambooth model trained by kitintouch with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
kitthebear (use that in your prompt)

| [
"BEAR"
] | Non_BioNLP |
KingKazma/xsum_108_5000000_2500000_train | KingKazma | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 1,691,064,444,000 | 2023-08-03T12:07:25 | 7 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# xsum_108_5000000_2500000_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/xsum_108_5000000_2500000_train")
topic_model.get_topic_info()
```
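To assign topics to new documents with the loaded model, you can use `transform` (a minimal sketch; the example document is made up):
```python
# Predict topics for unseen documents; probabilities may be None depending on how the model was fitted.
topics, probs = topic_model.transform(["The government announced new rail funding today."])
print(topics, probs)
```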
## Topic overview
* Number of topics: 1109
* Number of training documents: 204045
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - league - police - club - game | 5 | -1_said_league_police_club |
| 0 | eu - labour - referendum - brexit - vote | 117127 | 0_eu_labour_referendum_brexit |
| 1 | cricket - wicket - batsman - bowler - test | 4178 | 1_cricket_wicket_batsman_bowler |
| 2 | foul - footed - box - kick - corner | 3619 | 2_foul_footed_box_kick |
| 3 | boko - haram - african - africa - nigeria | 2639 | 3_boko_haram_african_africa |
| 4 | school - education - pupil - teacher - schools | 2549 | 4_school_education_pupil_teacher |
| 5 | rail - train - transport - rmt - bridge | 1869 | 5_rail_train_transport_rmt |
| 6 | murder - det - man - police - insp | 1735 | 6_murder_det_man_police |
| 7 | syrian - syria - turkey - turkish - aleppo | 1655 | 7_syrian_syria_turkey_turkish |
| 8 | crash - driving - collision - driver - road | 1589 | 8_crash_driving_collision_driver |
| 9 | medal - gold - olympic - rio - bronze | 1506 | 9_medal_gold_olympic_rio |
| 10 | film - actor - star - actress - drama | 1214 | 10_film_actor_star_actress |
| 11 | murray - tennis - djokovic - wimbledon - slam | 1212 | 11_murray_tennis_djokovic_wimbledon |
| 12 | dedicated - transfer - appearance - page - loan | 1143 | 12_dedicated_transfer_appearance_page |
| 13 | golf - mcilroy - birdie - par - pga | 1072 | 13_golf_mcilroy_birdie_par |
| 14 | fight - boxing - mayweather - fury - klitschko | 1070 | 14_fight_boxing_mayweather_fury |
| 15 | mercedes - f1 - hamilton - rosberg - ferrari | 1024 | 15_mercedes_f1_hamilton_rosberg |
| 16 | space - earth - planet - mission - spacecraft | 996 | 16_space_earth_planet_mission |
| 17 | coastguard - lifeboat - rnli - rescue - search | 968 | 17_coastguard_lifeboat_rnli_rescue |
| 18 | fire - blaze - smoke - firefighter - rescue | 914 | 18_fire_blaze_smoke_firefighter |
| 19 | dog - animal - cat - rspca - dogs | 903 | 19_dog_animal_cat_rspca |
| 20 | nhs - patient - hospital - health - care | 878 | 20_nhs_patient_hospital_health |
| 21 | data - security - password - malware - hacker | 824 | 21_data_security_password_malware |
| 22 | sexual - indecent - rape - sex - assault | 818 | 22_sexual_indecent_rape_sex |
| 23 | yn - ar - ei - wedi - bod | 773 | 23_yn_ar_ei_wedi |
| 24 | apple - samsung - patent - smartphone - robot | 613 | 24_apple_samsung_patent_smartphone |
| 25 | maduro - venezuela - venezuelan - guzman - cartel | 610 | 25_maduro_venezuela_venezuelan_guzman |
| 26 | taliban - pakistan - afghan - afghanistan - pakistani | 603 | 26_taliban_pakistan_afghan_afghanistan |
| 27 | album - song - band - music - chart | 588 | 27_album_song_band_music |
| 28 | ukraine - russian - russia - ukrainian - putin | 576 | 28_ukraine_russian_russia_ukrainian |
| 29 | art - painting - gallery - artist - museum | 575 | 29_art_painting_gallery_artist |
| 30 | trump - clinton - republican - trumps - hillary | 556 | 30_trump_clinton_republican_trumps |
| 31 | ghana - gabon - cameroon - african - nations | 496 | 31_ghana_gabon_cameroon_african |
| 32 | korea - korean - kim - north - missile | 483 | 32_korea_korean_kim_north |
| 33 | bank - barclays - banking - rbs - fca | 460 | 33_bank_barclays_banking_rbs |
| 34 | index - chinas - benchmark - growth - economy | 444 | 34_index_chinas_benchmark_growth |
| 35 | sale - store - retail - retailer - tesco | 439 | 35_sale_store_retail_retailer |
| 36 | jockey - horse - stakes - trainer - racing | 429 | 36_jockey_horse_stakes_trainer |
| 37 | chelsea - tottenham - liverpool - arsenal - manchester | 422 | 37_chelsea_tottenham_liverpool_arsenal |
| 38 | zoo - rhino - elephant - animal - tiger | 422 | 38_zoo_rhino_elephant_animal |
| 39 | earthquake - quake - nepal - rain - landslide | 422 | 39_earthquake_quake_nepal_rain |
| 40 | flood - flooding - rain - water - warning | 411 | 40_flood_flooding_rain_water |
| 41 | airport - heathrow - runway - airports - gatwick | 404 | 41_airport_heathrow_runway_airports |
| 42 | bird - wildlife - rspb - birds - breeding | 401 | 42_bird_wildlife_rspb_birds |
| 43 | madrid - bayern - mnchen - fc - barcelona | 391 | 43_madrid_bayern_mnchen_fc |
| 44 | prison - prisoner - prisons - hmp - inmate | 371 | 44_prison_prisoner_prisons_hmp |
| 45 | delhi - modi - india - bjp - indias | 350 | 45_delhi_modi_india_bjp |
| 46 | israel - palestinian - israeli - palestinians - gaza | 334 | 46_israel_palestinian_israeli_palestinians |
| 47 | wigan - warrington - replacements - salford - widnes | 324 | 47_wigan_warrington_replacements_salford |
| 48 | planning - development - council - site - building | 313 | 48_planning_development_council_site |
| 49 | war - memorial - somme - battle - soldier | 310 | 49_war_memorial_somme_battle |
| 50 | derry - donegal - tyrone - dundalk - armagh | 306 | 50_derry_donegal_tyrone_dundalk |
| 51 | hong - kong - chinese - china - bo | 301 | 51_hong_kong_chinese_china |
| 52 | drug - cannabis - cocaine - supply - heroin | 288 | 52_drug_cannabis_cocaine_supply |
| 53 | fraud - crown - money - court - false | 273 | 53_fraud_crown_money_court |
| 54 | google - facebook - ad - user - advertising | 270 | 54_google_facebook_ad_user |
| 55 | paris - abdeslam - brussels - french - belgian | 267 | 55_paris_abdeslam_brussels_french |
| 56 | broadband - bt - ofcom - openreach - mobile | 264 | 56_broadband_bt_ofcom_openreach |
| 57 | tranmere - torquay - aldershot - substitution - replaces | 262 | 57_tranmere_torquay_aldershot_substitution |
| 58 | doping - iaaf - athlete - antidoping - athletics | 257 | 58_doping_iaaf_athlete_antidoping |
| 59 | suu - kyi - rohingya - thai - thailand | 257 | 59_suu_kyi_rohingya_thai |
| 60 | dup - sinn - fin - unionist - sdlp | 254 | 60_dup_sinn_fin_unionist |
| 61 | pokemon - vr - console - nintendo - headset | 252 | 61_pokemon_vr_console_nintendo |
| 62 | pen - macron - fillon - sarkozy - fn | 252 | 62_pen_macron_fillon_sarkozy |
| 63 | book - novel - prize - author - writer | 252 | 63_book_novel_prize_author |
| 64 | greece - greek - eurozone - bailout - greeces | 250 | 64_greece_greek_eurozone_bailout |
| 65 | fish - fishing - salmon - fishery - marine | 245 | 65_fish_fishing_salmon_fishery |
| 66 | shooting - officer - police - gun - black | 245 | 66_shooting_officer_police_gun |
| 67 | mortgage - price - property - buyer - average | 239 | 67_mortgage_price_property_buyer |
| 68 | prince - duchess - duke - queen - princess | 238 | 68_prince_duchess_duke_queen |
| 69 | road - crash - collision - junction - car | 236 | 69_road_crash_collision_junction |
| 70 | snooker - frame - osullivan - selby - ding | 234 | 70_snooker_frame_osullivan_selby |
| 71 | froome - rider - stage - sky - 1min | 225 | 71_froome_rider_stage_sky |
| 72 | inquest - coroner - hospital - death - trust | 219 | 72_inquest_coroner_hospital_death |
| 73 | hillsborough - disaster - 1989 - 96 - crush | 214 | 73_hillsborough_disaster_1989_96 |
| 74 | steel - tata - talbot - plant - industry | 208 | 74_steel_tata_talbot_plant |
| 75 | fifa - blatter - platini - fifas - sepp | 206 | 75_fifa_blatter_platini_fifas |
| 76 | ira - psni - ombudsman - ruc - murder | 202 | 76_ira_psni_ombudsman_ruc |
| 77 | smoking - tobacco - ecigarettes - smoker - cigarette | 201 | 77_smoking_tobacco_ecigarettes_smoker |
| 78 | wind - turbine - energy - lagoon - tidal | 196 | 78_wind_turbine_energy_lagoon |
| 79 | tax - chancellor - budget - osborne - spending | 191 | 79_tax_chancellor_budget_osborne |
| 80 | s4c - licence - bbc - charter - channel | 187 | 80_s4c_licence_bbc_charter |
| 81 | unsupported - updated - playback - media - device | 187 | 81_unsupported_updated_playback_media |
| 82 | dow - nasdaq - sp - 500 - index | 187 | 82_dow_nasdaq_sp_500 |
| 83 | updated - gmt - bst - 2017 - 2016 | 186 | 83_updated_gmt_bst_2017 |
| 84 | refugee - asylum - unaccompanied - dubs - refugees | 186 | 84_refugee_asylum_unaccompanied_dubs |
| 85 | whale - dolphin - whales - marine - shark | 185 | 85_whale_dolphin_whales_marine |
| 86 | bomb - disposal - evacuated - object - explosive | 175 | 86_bomb_disposal_evacuated_object |
| 87 | car - smmt - gm - nissan - psa | 165 | 87_car_smmt_gm_nissan |
| 88 | migrant - refugee - asylum - hungary - greece | 164 | 88_migrant_refugee_asylum_hungary |
| 89 | cancer - treatment - patient - drug - breast | 164 | 89_cancer_treatment_patient_drug |
| 90 | milk - farmer - dairy - farmers - farming | 163 | 90_milk_farmer_dairy_farmers |
| 91 | ebola - sierra - leone - liberia - outbreak | 158 | 91_ebola_sierra_leone_liberia |
| 92 | trump - comey - fbi - email - clinton | 157 | 92_trump_comey_fbi_email |
| 93 | fossil - dinosaur - specie - creature - bone | 157 | 93_fossil_dinosaur_specie_creature |
| 94 | flight - passenger - airport - plane - aircraft | 157 | 94_flight_passenger_airport_plane |
| 95 | policing - constable - crime - force - police | 155 | 95_policing_constable_crime_force |
| 96 | ftse - index - pound - dollar - fell | 154 | 96_ftse_index_pound_dollar |
| 97 | waste - recycling - landfill - bin - rubbish | 152 | 97_waste_recycling_landfill_bin |
| 98 | pilot - aaib - aircraft - shoreham - accidents | 151 | 98_pilot_aaib_aircraft_shoreham |
| 99 | kosovo - bosnian - serbia - serb - serbs | 147 | 99_kosovo_bosnian_serbia_serb |
| 100 | pp - catalan - rajoy - catalonia - podemos | 147 | 100_pp_catalan_rajoy_catalonia |
| 101 | train - tram - rail - raib - railway | 146 | 101_train_tram_rail_raib |
| 102 | iran - nuclear - irans - iranian - rouhani | 143 | 102_iran_nuclear_irans_iranian |
| 103 | drone - drones - aircraft - unmanned - aviation | 141 | 103_drone_drones_aircraft_unmanned |
| 104 | rousseff - petrobras - lula - temer - impeachment | 141 | 104_rousseff_petrobras_lula_temer |
| 105 | migrant - boat - mediterranean - libya - italian | 139 | 105_migrant_boat_mediterranean_libya |
| 106 | terrorism - terrorist - abrini - syed - mohammed | 138 | 106_terrorism_terrorist_abrini_syed |
| 107 | dunlop - tt - superbike - supersport - race | 135 | 107_dunlop_tt_superbike_supersport |
| 108 | missing - disappearance - search - insp - police | 135 | 108_missing_disappearance_search_insp |
| 109 | exploitation - abuse - ipcc - sexual - rotherham | 133 | 109_exploitation_abuse_ipcc_sexual |
| 110 | zika - virus - mosquito - microcephaly - pregnant | 131 | 110_zika_virus_mosquito_microcephaly |
| 111 | vaccine - flu - meningitis - vaccination - measles | 131 | 111_vaccine_flu_meningitis_vaccination |
| 112 | pollution - air - diesel - no2 - nitrogen | 130 | 112_pollution_air_diesel_no2 |
| 113 | everest - climber - avalanche - climbing - mountain | 127 | 113_everest_climber_avalanche_climbing |
| 114 | yemen - houthis - hadi - houthi - sanaa | 126 | 114_yemen_houthis_hadi_houthi |
| 115 | libya - gaddafi - libyan - tripoli - sirte | 122 | 115_libya_gaddafi_libyan_tripoli |
| 116 | oil - gas - decommissioning - industry - sea | 121 | 116_oil_gas_decommissioning_industry |
| 117 | hms - ship - submarine - navy - clyde | 121 | 117_hms_ship_submarine_navy |
| 118 | cuba - castro - cuban - fidel - cubans | 119 | 118_cuba_castro_cuban_fidel |
| 119 | abortion - termination - foetal - abnormality - pregnancy | 119 | 119_abortion_termination_foetal_abnormality |
| 120 | bishop - church - diocese - archbishop - anglican | 118 | 120_bishop_church_diocese_archbishop |
| 121 | tag - pogba - midfielder - transfer - mourinho | 118 | 121_tag_pogba_midfielder_transfer |
| 122 | climate - paris - emission - carbon - agreement | 116 | 122_climate_paris_emission_carbon |
| 123 | energy - supplier - ofgem - customer - tariff | 116 | 123_energy_supplier_ofgem_customer |
| 124 | uber - taxi - driver - drivers - ubers | 115 | 124_uber_taxi_driver_drivers |
| 125 | inquiry - abuse - nazareth - inquirys - hia | 115 | 125_inquiry_abuse_nazareth_inquirys |
| 126 | parade - parades - orange - flag - ardoyne | 114 | 126_parade_parades_orange_flag |
| 127 | pope - vatican - cardinal - francis - church | 113 | 127_pope_vatican_cardinal_francis |
| 128 | fracking - shale - gas - cuadrilla - drilling | 113 | 128_fracking_shale_gas_cuadrilla |
| 129 | leinster - connacht - munster - ulster - sexton | 113 | 129_leinster_connacht_munster_ulster |
| 130 | airline - ryanair - iag - aer - lingus | 113 | 130_airline_ryanair_iag_aer |
| 131 | vw - emission - volkswagen - diesel - scandal | 110 | 131_vw_emission_volkswagen_diesel |
| 132 | concussion - rugby - brain - astle - cte | 110 | 132_concussion_rugby_brain_astle |
| 133 | hodgson - rooney - england - southgate - hodgsons | 110 | 133_hodgson_rooney_england_southgate |
| 134 | cannabis - drug - heroin - psychoactive - substance | 110 | 134_cannabis_drug_heroin_psychoactive |
| 135 | council - budget - councils - tax - cut | 108 | 135_council_budget_councils_tax |
| 136 | farc - peace - eln - colombian - colombia | 108 | 136_farc_peace_eln_colombian |
| 137 | fan - uefa - stadium - france - marseille | 107 | 137_fan_uefa_stadium_france |
| 138 | driving - driver - speed - speeding - road | 106 | 138_driving_driver_speed_speeding |
| 139 | childrens - ofsted - child - care - social | 106 | 139_childrens_ofsted_child_care |
| 140 | nama - cushnahan - cerberus - portfolio - pimco | 105 | 140_nama_cushnahan_cerberus_portfolio |
| 141 | duterte - philippines - sayyaf - davao - philippine | 103 | 141_duterte_philippines_sayyaf_davao |
| 142 | whisky - scotch - distillery - beer - wine | 102 | 142_whisky_scotch_distillery_beer |
| 143 | abuse - bennell - sfa - football - fa | 101 | 143_abuse_bennell_sfa_football |
| 144 | border - ireland - irish - brexit - northern | 100 | 144_border_ireland_irish_brexit |
| 145 | warnock - bluebirds - cardiff - trollope - slade | 99 | 145_warnock_bluebirds_cardiff_trollope |
| 146 | cheese - coli - food - outbreak - hygiene | 98 | 146_cheese_coli_food_outbreak |
| 147 | emwazi - syria - muthana - islamic - isis | 96 | 147_emwazi_syria_muthana_islamic |
| 148 | replacements - saracens - capt - gloucester - exeter | 96 | 148_replacements_saracens_capt_gloucester |
| 149 | cox - jo - mair - birstall - coxs | 95 | 149_cox_jo_mair_birstall |
| 150 | morsi - brotherhood - mubarak - egypt - cairo | 95 | 150_morsi_brotherhood_mubarak_egypt |
| 151 | labor - turnbull - shorten - australias - australian | 95 | 151_labor_turnbull_shorten_australias |
| 152 | nauru - asylum - seeker - australia - manus | 94 | 152_nauru_asylum_seeker_australia |
| 153 | ecb - eurozone - inflation - draghi - qe | 91 | 153_ecb_eurozone_inflation_draghi |
| 154 | marathon - runner - race - running - mile | 90 | 154_marathon_runner_race_running |
| 155 | giants - steelers - devils - panthers - flyers | 88 | 155_giants_steelers_devils_panthers |
| 156 | meal - food - trussell - school - meals | 87 | 156_meal_food_trussell_school |
| 157 | obamacare - republicans - senate - healthcare - republican | 87 | 157_obamacare_republicans_senate_healthcare |
| 158 | mh370 - plane - debris - search - malaysian | 85 | 158_mh370_plane_debris_search |
| 159 | organ - transplant - donor - donation - donate | 84 | 159_organ_transplant_donor_donation |
| 160 | button - sport - bbc - live - highlights | 84 | 160_button_sport_bbc_live |
| 161 | china - sea - philippines - island - chinas | 84 | 161_china_sea_philippines_island |
| 162 | calais - eurotunnel - migrant - tunnel - eurostar | 83 | 162_calais_eurotunnel_migrant_tunnel |
| 163 | raf - squadron - spitfire - bomber - aircraft | 83 | 163_raf_squadron_spitfire_bomber |
| 164 | swansea - swans - clement - guidolin - swanseas | 83 | 164_swansea_swans_clement_guidolin |
| 165 | antisemitism - jewish - antisemitic - livingstone - israel | 82 | 165_antisemitism_jewish_antisemitic_livingstone |
| 166 | lottery - jackpot - ticket - camelot - prize | 82 | 166_lottery_jackpot_ticket_camelot |
| 167 | ride - alton - smiler - towers - balch | 81 | 167_ride_alton_smiler_towers |
| 168 | rugby - premiership - gloucester - appearance - harlequins | 81 | 168_rugby_premiership_gloucester_appearance |
| 169 | airbus - boeing - aircraft - airline - a380 | 81 | 169_airbus_boeing_aircraft_airline |
| 170 | fed - rate - yellen - feds - federal | 80 | 170_fed_rate_yellen_feds |
| 171 | ira - disappeared - buried - abducted - iclvr | 79 | 171_ira_disappeared_buried_abducted |
| 172 | gender - gap - pay - maternity - woman | 79 | 172_gender_gap_pay_maternity |
| 173 | irish - 1916 - easter - ireland - rising | 79 | 173_irish_1916_easter_ireland |
| 174 | selfdriving - autonomous - driverless - car - vehicle | 78 | 174_selfdriving_autonomous_driverless_car |
| 175 | pupil - panel - teacher - teaching - conduct | 78 | 175_pupil_panel_teacher_teaching |
| 176 | housing - tenant - affordable - landlord - rent | 77 | 176_housing_tenant_affordable_landlord |
| 177 | mayor - devolution - combined - council - elected | 77 | 177_mayor_devolution_combined_council |
| 178 | fire - blaze - appliance - firefighter - crew | 76 | 178_fire_blaze_appliance_firefighter |
| 179 | plane - sharm - elsheikh - sinai - egyptian | 76 | 179_plane_sharm_elsheikh_sinai |
| 180 | auschwitz - jews - nazi - camp - jewish | 76 | 180_auschwitz_jews_nazi_camp |
| 181 | growth - sector - output - quarter - economy | 75 | 181_growth_sector_output_quarter |
| 182 | afghanistan - helmand - afghan - soldier - regiment | 75 | 182_afghanistan_helmand_afghan_soldier |
| 183 | ftse - shares - share - rose - pound | 75 | 183_ftse_shares_share_rose |
| 184 | abortion - marriage - transgender - gay - law | 75 | 184_abortion_marriage_transgender_gay |
| 185 | rangers - hibs - pitch - hampden - celtic | 75 | 185_rangers_hibs_pitch_hampden |
| 186 | reactor - radiation - nuclear - fukushima - radioactive | 75 | 186_reactor_radiation_nuclear_fukushima |
| 187 | nba - cavaliers - lakers - warriors - curry | 74 | 187_nba_cavaliers_lakers_warriors |
| 188 | circulation - scotsman - print - paper - newspaper | 73 | 188_circulation_scotsman_print_paper |
| 189 | mexico - trump - immigration - undocumented - wall | 73 | 189_mexico_trump_immigration_undocumented |
| 190 | pollution - delhi - smog - air - microgram | 72 | 190_pollution_delhi_smog_air |
| 191 | australian - australians - sharrouf - australia - sydney | 72 | 191_australian_australians_sharrouf_australia |
| 192 | iraq - blair - chilcot - inquiry - saddam | 72 | 192_iraq_blair_chilcot_inquiry |
| 193 | scarlets - blues - ospreys - rugby - wales | 71 | 193_scarlets_blues_ospreys_rugby |
| 194 | charity - kids - batmanghelidjh - fundraising - charitys | 71 | 194_charity_kids_batmanghelidjh_fundraising |
| 195 | berlusconi - renzi - berlusconis - italian - italys | 69 | 195_berlusconi_renzi_berlusconis_italian |
| 196 | extremism - extremist - prevent - tpims - terrorism | 69 | 196_extremism_extremist_prevent_tpims |
| 197 | wreck - ship - titanic - hms - jutland | 69 | 197_wreck_ship_titanic_hms |
| 198 | ambulance - paramedic - calls - call - 999 | 69 | 198_ambulance_paramedic_calls_call |
| 199 | coleman - wales - bale - slovakia - tournament | 68 | 199_coleman_wales_bale_slovakia |
| 200 | choi - park - ms - soonsil - parks | 68 | 200_choi_park_ms_soonsil |
| 201 | antibiotic - bacteria - antibiotics - infection - resistance | 68 | 201_antibiotic_bacteria_antibiotics_infection |
| 202 | circuit - welsh - skates - ebbw - project | 67 | 202_circuit_welsh_skates_ebbw |
| 203 | hiv - prep - antiretroviral - virus - aids | 67 | 203_hiv_prep_antiretroviral_virus |
| 204 | orban - fidesz - jobbik - hungarian - poland | 67 | 204_orban_fidesz_jobbik_hungarian |
| 205 | ducati - yamaha - marquez - rossi - lorenzo | 67 | 205_ducati_yamaha_marquez_rossi |
| 206 | care - carers - social - costs - home | 66 | 206_care_carers_social_costs |
| 207 | afd - cdu - merkels - merkel - spd | 66 | 207_afd_cdu_merkels_merkel |
| 208 | wrexham - keates - mills - morrell - ormerod | 65 | 208_wrexham_keates_mills_morrell |
| 209 | brazil - argentina - rica - spain - uruguay | 65 | 209_brazil_argentina_rica_spain |
| 210 | cardoza - northampton - sixfields - stadium - 1025m | 64 | 210_cardoza_northampton_sixfields_stadium |
| 211 | revenue - glazer - adidas - premier - deloitte | 63 | 211_revenue_glazer_adidas_premier |
| 212 | cladding - tower - fire - grenfell - sprinkler | 63 | 212_cladding_tower_fire_grenfell |
| 213 | assange - wikileaks - extradition - embassy - ecuador | 63 | 213_assange_wikileaks_extradition_embassy |
| 214 | fashion - dress - vogue - designer - clothes | 62 | 214_fashion_dress_vogue_designer |
| 215 | norovirus - ward - vomiting - diarrhoea - bug | 62 | 215_norovirus_ward_vomiting_diarrhoea |
| 216 | trudeau - harper - canadian - ndp - canada | 62 | 216_trudeau_harper_canadian_ndp |
| 217 | dam - samarco - chile - bhp - bolivia | 62 | 217_dam_samarco_chile_bhp |
| 218 | disabled - disability - claimant - pips - pip | 61 | 218_disabled_disability_claimant_pips |
| 219 | ice - cryosat - antarctic - arctic - shelf | 61 | 219_ice_cryosat_antarctic_arctic |
| 220 | pension - annuity - pot - pensions - income | 61 | 220_pension_annuity_pot_pensions |
| 221 | oil - opec - barrel - crude - price | 61 | 221_oil_opec_barrel_crude |
| 222 | nfl - patriots - quarterback - brady - touchdown | 60 | 222_nfl_patriots_quarterback_brady |
| 223 | pte - deepcut - inquest - jamess - 1995 | 60 | 223_pte_deepcut_inquest_jamess |
| 224 | strachan - poland - slovenia - slovakia - scotland | 60 | 224_strachan_poland_slovenia_slovakia |
| 225 | prediction - lawros - lawro - correct - 02 | 59 | 225_prediction_lawros_lawro_correct |
| 226 | ferry - calmac - mv - ferries - vessel | 59 | 226_ferry_calmac_mv_ferries |
| 227 | juventus - roma - napoli - chievo - empoli | 58 | 227_juventus_roma_napoli_chievo |
| 228 | armstrong - wiggins - cycling - uci - tues | 58 | 228_armstrong_wiggins_cycling_uci |
| 229 | productivity - ons - unemployment - wage - growth | 58 | 229_productivity_ons_unemployment_wage |
| 230 | tv - fm - internetlivestatscom - radio - medium | 58 | 230_tv_fm_internetlivestatscom_radio |
| 231 | execution - lethal - injection - inmate - oklahoma | 58 | 231_execution_lethal_injection_inmate |
| 232 | swim - swimming - severn - swimmer - mile | 57 | 232_swim_swimming_severn_swimmer |
| 233 | snp - seat - salmond - scottish - holyrood | 57 | 233_snp_seat_salmond_scottish |
| 234 | copyright - dotcom - piracy - isps - infringement | 57 | 234_copyright_dotcom_piracy_isps |
| 235 | mpc - rate - inflation - bank - monetary | 57 | 235_mpc_rate_inflation_bank |
| 236 | tianjin - blast - explosion - chemical - cyanide | 56 | 236_tianjin_blast_explosion_chemical |
| 237 | coal - colliery - pit - kellingley - mine | 56 | 237_coal_colliery_pit_kellingley |
| 238 | pier - structure - piers - conwy - canal | 56 | 238_pier_structure_piers_conwy |
| 239 | abedi - manchester - abedis - arena - ariana | 56 | 239_abedi_manchester_abedis_arena |
| 240 | snowden - nsa - spying - intelligence - spy | 56 | 240_snowden_nsa_spying_intelligence |
| 241 | malaria - parasite - mosquito - vaccine - artemisinin | 55 | 241_malaria_parasite_mosquito_vaccine |
| 242 | tax - apple - luxembourg - google - avoidance | 55 | 242_tax_apple_luxembourg_google |
| 243 | xinjiang - uighur - uighurs - chinese - kashgar | 55 | 243_xinjiang_uighur_uighurs_chinese |
| 244 | marriage - samesex - gay - civil - partnership | 53 | 244_marriage_samesex_gay_civil |
| 245 | snow - avalanche - sais - ski - cairngorms | 53 | 245_snow_avalanche_sais_ski |
| 246 | wage - living - minimum - pay - worker | 53 | 246_wage_living_minimum_pay |
| 247 | yorkshire - depart - tour - cycling - race | 53 | 247_yorkshire_depart_tour_cycling |
| 248 | note - polymer - banknote - bank - notes | 53 | 248_note_polymer_banknote_bank |
| 249 | trade - nafta - mexico - lumber - china | 53 | 249_trade_nafta_mexico_lumber |
| 250 | archaeologist - roman - excavation - archaeological - pottery | 52 | 250_archaeologist_roman_excavation_archaeological |
| 251 | fox - murdoch - ailes - rupert - murdochs | 52 | 251_fox_murdoch_ailes_rupert |
| 252 | legal - aid - magistrates - justice - court | 52 | 252_legal_aid_magistrates_justice |
| 253 | unexplained - discovery - discovered - officers - informed | 52 | 253_unexplained_discovery_discovered_officers |
| 254 | gear - clarkson - leblanc - hammond - clarksons | 52 | 254_gear_clarkson_leblanc_hammond |
| 255 | unite - oca - babcock - offshore - union | 52 | 255_unite_oca_babcock_offshore |
| 256 | childcare - nursery - fouryearolds - parent - hour | 51 | 256_childcare_nursery_fouryearolds_parent |
| 257 | library - libraries - book - bookless - council | 51 | 257_library_libraries_book_bookless |
| 258 | bbcscotlandpics - scotlandpicturesbbccouk - shwe - selection - instagram | 51 | 258_bbcscotlandpics_scotlandpicturesbbccouk_shwe_selection |
| 259 | mangrove - forest - indonesia - deforestation - logging | 51 | 259_mangrove_forest_indonesia_deforestation |
| 260 | refugee - syrians - coptic - lebanon - syrian | 51 | 260_refugee_syrians_coptic_lebanon |
| 261 | markit - pmi - manufacturing - growth - economist | 51 | 261_markit_pmi_manufacturing_growth |
| 262 | whyte - ticketus - rangers - whytes - withey | 51 | 262_whyte_ticketus_rangers_whytes |
| 263 | ftse - shares - share - rose - dollar | 50 | 263_ftse_shares_share_rose |
| 264 | balloon - helium - konyukhov - nightglow - balloons | 50 | 264_balloon_helium_konyukhov_nightglow |
| 265 | eurovision - song - contest - ballad - entry | 50 | 265_eurovision_song_contest_ballad |
| 266 | charlies - charlie - gard - ormond - yates | 50 | 266_charlies_charlie_gard_ormond |
| 267 | pistorius - steenkamp - reeva - masipa - toilet | 49 | 267_pistorius_steenkamp_reeva_masipa |
| 268 | indonesia - bali - sukumaran - indonesian - execution | 49 | 268_indonesia_bali_sukumaran_indonesian |
| 269 | palmyra - ancient - ruin - syrian - unesco | 49 | 269_palmyra_ancient_ruin_syrian |
| 270 | badger - tb - cull - cattle - culling | 49 | 270_badger_tb_cull_cattle |
| 271 | festival - fringe - event - df - festivals | 49 | 271_festival_fringe_event_df |
| 272 | linfield - cliftonville - crusaders - glenavon - ballymena | 49 | 272_linfield_cliftonville_crusaders_glenavon |
| 273 | defence - army - reservist - nato - spending | 49 | 273_defence_army_reservist_nato |
| 274 | science - research - scientific - ukri - innovation | 49 | 274_science_research_scientific_ukri |
| 275 | school - pupil - schools - teacher - incident | 49 | 275_school_pupil_schools_teacher |
| 276 | crematorium - ash - cremation - mortonhall - infant | 48 | 276_crematorium_ash_cremation_mortonhall |
| 277 | cosby - constand - cosbys - deposition - comedian | 48 | 277_cosby_constand_cosbys_deposition |
| 278 | rangers - ashley - king - rifc - easdale | 48 | 278_rangers_ashley_king_rifc |
| 279 | trafficking - slavery - trafficked - victim - exploitation | 48 | 279_trafficking_slavery_trafficked_victim |
| 280 | tree - oak - woodland - trees - acorn | 48 | 280_tree_oak_woodland_trees |
| 281 | breivik - utoeya - breiviks - oslo - norway | 48 | 281_breivik_utoeya_breiviks_oslo |
| 282 | dedicated - transfer - page - appearance - ajax | 47 | 282_dedicated_transfer_page_appearance |
| 283 | bma - junior - doctor - doctors - contract | 47 | 283_bma_junior_doctor_doctors |
| 284 | mossack - fonseca - panama - offshore - gunnlaugsson | 47 | 284_mossack_fonseca_panama_offshore |
| 285 | suicide - dying - nicklinson - terminally - law | 47 | 285_suicide_dying_nicklinson_terminally |
| 286 | ayeeshia - butler - ellie - rigby - shanay | 46 | 286_ayeeshia_butler_ellie_rigby |
| 287 | sugar - drink - obesity - sugary - drinks | 46 | 287_sugar_drink_obesity_sugary |
| 288 | iii - richard - bosworth - cathedral - king | 46 | 288_iii_richard_bosworth_cathedral |
| 289 | bhs - philip - pension - chappell - sir | 46 | 289_bhs_philip_pension_chappell |
| 290 | fgm - genital - girl - mutilation - practice | 46 | 290_fgm_genital_girl_mutilation |
| 291 | aluko - uefa - fa - terry - sampson | 46 | 291_aluko_uefa_fa_terry |
| 292 | women - woman - 100women - 100 - inspirational | 46 | 292_women_woman_100women_100 |
| 293 | witheridge - thai - koh - tao - zaw | 46 | 293_witheridge_thai_koh_tao |
| 294 | lions - gatland - zealand - blacks - rugby | 45 | 294_lions_gatland_zealand_blacks |
| 295 | bitcoin - bitcoins - mtgox - currency - virtual | 45 | 295_bitcoin_bitcoins_mtgox_currency |
| 296 | 1500 - gmt - 1730 - city - albion | 45 | 296_1500_gmt_1730_city |
| 297 | pool - swimming - lido - leisure - baths | 45 | 297_pool_swimming_lido_leisure |
| 298 | bell - minster - imber - bellringers - bellringing | 45 | 298_bell_minster_imber_bellringers |
| 299 | burkini - veil - ban - burka - headscarf | 44 | 299_burkini_veil_ban_burka |
| 300 | music - spotify - streaming - vinyl - album | 44 | 300_music_spotify_streaming_vinyl |
| 301 | bee - honey - bees - neonicotinoids - honeybee | 44 | 301_bee_honey_bees_neonicotinoids |
| 302 | evans - ched - sheffield - oldham - raping | 44 | 302_evans_ched_sheffield_oldham |
| 303 | eilidh - ariana - saffie - grande - eilidhs | 44 | 303_eilidh_ariana_saffie_grande |
| 304 | duffy - belfast - bail - ohare - mccrory | 43 | 304_duffy_belfast_bail_ohare |
| 305 | boaty - ocean - polar - ship - sea | 43 | 305_boaty_ocean_polar_ship |
| 306 | golf - turnberry - trump - menie - beyts | 43 | 306_golf_turnberry_trump_menie |
| 307 | gb - richardsonwalsh - hinch - hockey - gbs | 43 | 307_gb_richardsonwalsh_hinch_hockey |
| 308 | gmb - unite - refuse - union - veolia | 43 | 308_gmb_unite_refuse_union |
| 309 | borrowing - obr - forecast - deficit - debt | 43 | 309_borrowing_obr_forecast_deficit |
| 310 | trust - percy - sparrowhawk - connor - southern | 43 | 310_trust_percy_sparrowhawk_connor |
| 311 | terrorism - counter - arrested - suspicion - arrest | 42 | 311_terrorism_counter_arrested_suspicion |
| 312 | calais - camp - migrant - jungle - migrants | 42 | 312_calais_camp_migrant_jungle |
| 313 | madeleine - mccann - madeleines - portuguese - praia | 42 | 313_madeleine_mccann_madeleines_portuguese |
| 314 | polio - vaccination - vaccine - immunisation - virus | 42 | 314_polio_vaccination_vaccine_immunisation |
| 315 | mental - health - camhs - disorder - autism | 42 | 315_mental_health_camhs_disorder |
| 316 | storm - texas - tornado - hurricane - houston | 42 | 316_storm_texas_tornado_hurricane |
| 317 | mers - virus - coronavirus - camel - respiratory | 41 | 317_mers_virus_coronavirus_camel |
| 318 | graphene - nobel - material - atom - prize | 41 | 318_graphene_nobel_material_atom |
| 319 | woman - gender - female - stem - women | 41 | 319_woman_gender_female_stem |
| 320 | chapecoense - nacional - medellin - sudamericana - plane | 41 | 320_chapecoense_nacional_medellin_sudamericana |
| 321 | pharmacy - prescription - patient - dental - pharmacist | 41 | 321_pharmacy_prescription_patient_dental |
| 322 | mh17 - buk - missile - ukraine - dutch | 41 | 322_mh17_buk_missile_ukraine |
| 323 | yuill - lamara - m9 - pirc - bell | 41 | 323_yuill_lamara_m9_pirc |
| 324 | gambling - betting - fobts - casino - bookmakers | 41 | 324_gambling_betting_fobts_casino |
| 325 | fire - blaze - wildfire - fires - firefighter | 40 | 325_fire_blaze_wildfire_fires |
| 326 | tamil - sri - rajapaksa - lankan - sirisena | 40 | 326_tamil_sri_rajapaksa_lankan |
| 327 | ticket - cheapest - matchday - price - fan | 40 | 327_ticket_cheapest_matchday_price |
| 328 | trident - nuclear - submarine - renewal - deterrent | 40 | 328_trident_nuclear_submarine_renewal |
| 329 | funeral - cremation - burial - cost - crematorium | 40 | 329_funeral_cremation_burial_cost |
| 330 | ban - order - visa - trumps - 90day | 40 | 330_ban_order_visa_trumps |
| 331 | bradley - lowery - neuroblastoma - bradleys - blackhall | 40 | 331_bradley_lowery_neuroblastoma_bradleys |
| 332 | mine - miner - fyfield - mining - underground | 40 | 332_mine_miner_fyfield_mining |
| 333 | ashley - sports - direct - shirebrook - warehouse | 40 | 333_ashley_sports_direct_shirebrook |
| 334 | parking - wheelchair - badge - paulley - bus | 40 | 334_parking_wheelchair_badge_paulley |
| 335 | coin - hoard - roman - museum - treasure | 40 | 335_coin_hoard_roman_museum |
| 336 | mafia - rancadore - ndrangheta - riina - italian | 39 | 336_mafia_rancadore_ndrangheta_riina |
| 337 | wanda - chinese - hollywood - disney - movie | 39 | 337_wanda_chinese_hollywood_disney |
| 338 | foi - information - request - cabinet - commissioners | 39 | 338_foi_information_request_cabinet |
| 339 | bike - cycling - cycle - cyclist - bikes | 39 | 339_bike_cycling_cycle_cyclist |
| 340 | shahid - shahids - chaudhry - shakeel - samia | 39 | 340_shahid_shahids_chaudhry_shakeel |
| 341 | rea - sykes - kawasaki - rider - chaz | 38 | 341_rea_sykes_kawasaki_rider |
| 342 | hickey - oci - thg - mallon - olympic | 38 | 342_hickey_oci_thg_mallon |
| 343 | alcohol - drinking - drink - liver - wine | 38 | 343_alcohol_drinking_drink_liver |
| 344 | castle - heritage - museum - auckland - garden | 38 | 344_castle_heritage_museum_auckland |
| 345 | yacht - thomson - race - cleach - sailing | 38 | 345_yacht_thomson_race_cleach |
| 346 | bonfire - bonfires - lit - injunction - effigy | 38 | 346_bonfire_bonfires_lit_injunction |
| 347 | strictly - dancing - dance - marquez - dancer | 38 | 347_strictly_dancing_dance_marquez |
| 348 | lubitz - germanwings - cockpit - lufthansa - copilot | 38 | 348_lubitz_germanwings_cockpit_lufthansa |
| 349 | horsemeat - meat - beef - product - food | 38 | 349_horsemeat_meat_beef_product |
| 350 | ferry - ship - sewol - boat - sank | 37 | 350_ferry_ship_sewol_boat |
| 351 | falklands - falkland - argentine - argentina - islands | 37 | 351_falklands_falkland_argentine_argentina |
| 352 | famine - drought - sudan - somalia - aid | 37 | 352_famine_drought_sudan_somalia |
| 353 | ilott - judge - mother - boy - swaby | 37 | 353_ilott_judge_mother_boy |
| 354 | bird - poultry - avian - h5n8 - flu | 37 | 354_bird_poultry_avian_h5n8 |
| 355 | warming - climate - temperature - co2 - global | 37 | 355_warming_climate_temperature_co2 |
| 356 | gerwen - dart - anderson - barneveld - van | 36 | 356_gerwen_dart_anderson_barneveld |
| 357 | ira - sinn - mcguigan - fin - provisional | 36 | 357_ira_sinn_mcguigan_fin |
| 358 | coin - mint - coins - circulation - royal | 36 | 358_coin_mint_coins_circulation |
| 359 | ant - robot - ants - insect - robots | 36 | 359_ant_robot_ants_insect |
| 360 | tuilagi - rugby - premiership - bath - saracens | 36 | 360_tuilagi_rugby_premiership_bath |
| 361 | cow - beef - slaughter - hindu - hindus | 36 | 361_cow_beef_slaughter_hindu |
| 362 | mckeague - corrie - edmunds - urquhart - suffolk | 36 | 362_mckeague_corrie_edmunds_urquhart |
| 363 | breastfeeding - birth - baby - breastfeed - infant | 36 | 363_breastfeeding_birth_baby_breastfeed |
| 364 | education - unesco - school - primary - subsaharan | 36 | 364_education_unesco_school_primary |
| 365 | rohingya - myanmar - thailand - migrant - malaysia | 36 | 365_rohingya_myanmar_thailand_migrant |
| 366 | stadium - fan - barklie - racist - manchester | 36 | 366_stadium_fan_barklie_racist |
| 367 | bale - bales - madrid - gareth - real | 36 | 367_bale_bales_madrid_gareth |
| 368 | casement - gaa - dcal - stadium - chuiln | 36 | 368_casement_gaa_dcal_stadium |
| 369 | dementia - alzheimers - disease - diagnosis - carers | 35 | 369_dementia_alzheimers_disease_diagnosis |
| 370 | employment - unemployment - rate - permanent - job | 35 | 370_employment_unemployment_rate_permanent |
| 371 | rupee - cash - note - indians - india | 35 | 371_rupee_cash_note_indians |
| 372 | sao - paulo - pcc - gang - inmate | 35 | 372_sao_paulo_pcc_gang |
| 373 | qatar - saudi - qatars - qatari - arabia | 35 | 373_qatar_saudi_qatars_qatari |
| 374 | shooting - firearm - fired - incident - man | 35 | 374_shooting_firearm_fired_incident |
| 375 | fire - fbu - firefighter - brigades - brigade | 35 | 375_fire_fbu_firefighter_brigades |
| 376 | alibaba - alibabas - ecommerce - online - taobao | 35 | 376_alibaba_alibabas_ecommerce_online |
| 377 | poverty - income - child - living - household | 35 | 377_poverty_income_child_living |
| 378 | ash - tree - dieback - woodland - elm | 35 | 378_ash_tree_dieback_woodland |
| 379 | martelly - haiti - moise - haitis - celestin | 35 | 379_martelly_haiti_moise_haitis |
| 380 | toshiba - toshibas - foxconn - westinghouse - olympus | 35 | 380_toshiba_toshibas_foxconn_westinghouse |
| 381 | concert - ariana - grande - manchester - arena | 35 | 381_concert_ariana_grande_manchester |
| 382 | najib - 1mdb - malaysia - malaysias - anwar | 35 | 382_najib_1mdb_malaysia_malaysias |
| 383 | rio - games - olympic - brazil - olympics | 34 | 383_rio_games_olympic_brazil |
| 384 | asa - advert - ad - advertising - adverts | 34 | 384_asa_advert_ad_advertising |
| 385 | lcpl - cpl - dunsby - maher - mod | 34 | 385_lcpl_cpl_dunsby_maher |
| 386 | lighting - light - leap - clock - lights | 34 | 386_lighting_light_leap_clock |
| 387 | afc - wimbledon - lyle - barcham - parrett | 34 | 387_afc_wimbledon_lyle_barcham |
| 388 | campus - university - college - building - student | 34 | 388_campus_university_college_building |
| 389 | meldonium - sharapova - itf - sharapovas - antidoping | 34 | 389_meldonium_sharapova_itf_sharapovas |
| 390 | fianna - gael - fil - taoiseach - kenny | 34 | 390_fianna_gael_fil_taoiseach |
| 391 | tpp - trade - wto - agreement - deal | 34 | 391_tpp_trade_wto_agreement |
| 392 | howard - arlene - arkinson - castlederg - arlenes | 34 | 392_howard_arlene_arkinson_castlederg |
| 393 | bullying - cyberbullying - antibullying - bullied - online | 34 | 393_bullying_cyberbullying_antibullying_bullied |
| 394 | clown - clowns - craze - creepy - dressed | 34 | 394_clown_clowns_craze_creepy |
| 395 | tt - farquhar - rider - manx - racing | 33 | 395_tt_farquhar_rider_manx |
| 396 | koeman - southampton - saints - lovren - lallana | 33 | 396_koeman_southampton_saints_lovren |
| 397 | facebook - fake - news - trending - facebooks | 33 | 397_facebook_fake_news_trending |
| 398 | pirate - piracy - somali - pirates - vessel | 33 | 398_pirate_piracy_somali_pirates |
| 399 | baby - born - birth - danesha - pregnancy | 33 | 399_baby_born_birth_danesha |
| 400 | ico - nuisance - calls - tps - call | 33 | 400_ico_nuisance_calls_tps |
| 401 | crofting - crofter - grazing - crofters - commission | 33 | 401_crofting_crofter_grazing_crofters |
| 402 | parking - council - park - permit - car | 32 | 402_parking_council_park_permit |
| 403 | grenfell - kensington - tower - fire - inquiry | 32 | 403_grenfell_kensington_tower_fire |
| 404 | internet - screen - online - homework - watching | 32 | 404_internet_screen_online_homework |
| 405 | americas - ainslie - race - oracle - usa | 32 | 405_americas_ainslie_race_oracle |
| 406 | teff - farmer - agriculture - honey - meat | 32 | 406_teff_farmer_agriculture_honey |
| 407 | mound - archaeological - archaeology - archaeologist - excavation | 32 | 407_mound_archaeological_archaeology_archaeologist |
| 408 | reef - coral - bleaching - reefs - barrier | 32 | 408_reef_coral_bleaching_reefs |
| 409 | garda - mccabe - osullivan - sgt - commissioner | 31 | 409_garda_mccabe_osullivan_sgt |
| 410 | sinkhole - hole - fontmell - floridas - collapse | 31 | 410_sinkhole_hole_fontmell_floridas |
| 411 | januzaj - anderlecht - signing - loan - premier | 31 | 411_januzaj_anderlecht_signing_loan |
| 412 | condor - ferry - poole - guernsey - liberation | 31 | 412_condor_ferry_poole_guernsey |
| 413 | tan - cardiff - bolton - holdsworth - club | 31 | 413_tan_cardiff_bolton_holdsworth |
| 414 | shareholder - pay - remuneration - executive - wpp | 31 | 414_shareholder_pay_remuneration_executive |
| 415 | halawa - halawas - egyptian - ibrahim - alfath | 31 | 415_halawa_halawas_egyptian_ibrahim |
| 416 | charlottesville - white - supremacist - statue - kkk | 31 | 416_charlottesville_white_supremacist_statue |
| 417 | pspo - arches - antisocial - behaviour - pspos | 31 | 417_pspo_arches_antisocial_behaviour |
| 418 | motherwell - dundee - hibernian - thistle - partick | 31 | 418_motherwell_dundee_hibernian_thistle |
| 419 | impulse - leg - piccard - borschberg - solar | 31 | 419_impulse_leg_piccard_borschberg |
| 420 | paschi - dei - monte - eurozone - italian | 30 | 420_paschi_dei_monte_eurozone |
| 421 | bombardier - cseries - jti - workforce - ballymena | 30 | 421_bombardier_cseries_jti_workforce |
| 422 | homelessness - homeless - rough - accommodation - housing | 30 | 422_homelessness_homeless_rough_accommodation |
| 423 | takata - airbags - airbag - inflator - recall | 30 | 423_takata_airbags_airbag_inflator |
| 424 | uniform - skirt - school - trouser - wear | 30 | 424_uniform_skirt_school_trouser |
| 425 | ponta - romania - iohannis - romanias - decree | 30 | 425_ponta_romania_iohannis_romanias |
| 426 | jersey - gotland - guernsey - games - surbiton | 29 | 426_jersey_gotland_guernsey_games |
| 427 | limit - drinkdrive - alcohol - drinkdriving - drink | 29 | 427_limit_drinkdrive_alcohol_drinkdriving |
| 428 | theatre - cinema - hull - venue - culture | 29 | 428_theatre_cinema_hull_venue |
| 429 | attack - westminster - attacker - terrorist - terror | 29 | 429_attack_westminster_attacker_terrorist |
| 430 | nasheed - maldives - yameen - adeeb - abdulla | 29 | 430_nasheed_maldives_yameen_adeeb |
| 431 | rig - transocean - dalmore - salvage - towed | 29 | 431_rig_transocean_dalmore_salvage |
| 432 | raf - aircraft - f35 - mod - squadron | 29 | 432_raf_aircraft_f35_mod |
| 433 | poetry - poem - poet - bronte - notebook | 29 | 433_poetry_poem_poet_bronte |
| 434 | seal - pup - horsey - seals - grey | 29 | 434_seal_pup_horsey_seals |
| 435 | invest - ni - esynergy - almac - investment | 29 | 435_invest_ni_esynergy_almac |
| 436 | poppy - fifa - armband - wear - fifas | 29 | 436_poppy_fifa_armband_wear |
| 437 | plague - leprosy - rat - bubonic - disease | 28 | 437_plague_leprosy_rat_bubonic |
| 438 | hmrc - rangers - ebts - tax - tribunal | 28 | 438_hmrc_rangers_ebts_tax |
| 439 | wemba - music - papa - congolese - musician | 28 | 439_wemba_music_papa_congolese |
| 440 | fargo - revenue - jp - bank - quarter | 28 | 440_fargo_revenue_jp_bank |
| 441 | reid - hewett - houdet - wheelchair - peifer | 28 | 441_reid_hewett_houdet_wheelchair |
| 442 | tron - cocaine - nca - makayabella - hamal | 28 | 442_tron_cocaine_nca_makayabella |
| 443 | paterson - spire - mastectomy - breast - patersons | 28 | 443_paterson_spire_mastectomy_breast |
| 444 | lafferty - northern - ireland - oneill - oneills | 28 | 444_lafferty_northern_ireland_oneill |
| 445 | samsung - lotte - choi - lee - kunhee | 28 | 445_samsung_lotte_choi_lee |
| 446 | hutch - garda - dublin - feud - regency | 28 | 446_hutch_garda_dublin_feud |
| 447 | language - welsh - huws - meri - bilingual | 28 | 447_language_welsh_huws_meri |
| 448 | badminton - gilmour - ouseph - 2117 - langridge | 28 | 448_badminton_gilmour_ouseph_2117 |
| 449 | newport - rodney - rfc - dragons - parade | 28 | 449_newport_rodney_rfc_dragons |
| 450 | porn - revenge - image - intimate - images | 28 | 450_porn_revenge_image_intimate |
| 451 | cyber - opm - china - espionage - chinese | 27 | 451_cyber_opm_china_espionage |
| 452 | hate - crime - racist - disability - reporting | 27 | 452_hate_crime_racist_disability |
| 453 | volcano - eruption - ash - lava - volcanic | 27 | 453_volcano_eruption_ash_lava |
| 454 | wilders - rutte - vvd - pvv - dutch | 27 | 454_wilders_rutte_vvd_pvv |
| 455 | ceta - wallonia - trade - canada - walloon | 27 | 455_ceta_wallonia_trade_canada |
| 456 | mbe - honorary - service - obe - honour | 27 | 456_mbe_honorary_service_obe |
| 457 | tesla - musk - electric - teslas - model | 26 | 457_tesla_musk_electric_teslas |
| 458 | sigurdsson - swans - swansea - clement - llorente | 26 | 458_sigurdsson_swans_swansea_clement |
| 459 | cathedral - cathedrals - church - chapel - restoration | 26 | 459_cathedral_cathedrals_church_chapel |
| 460 | alsweady - shiner - ihat - iraqi - mod | 26 | 460_alsweady_shiner_ihat_iraqi |
| 461 | festival - wickerman - glastonbury - eavis - dundrennan | 26 | 461_festival_wickerman_glastonbury_eavis |
| 462 | nspcc - sexting - child - sexual - abuse | 26 | 462_nspcc_sexting_child_sexual |
| 463 | mask - protest - shenstone - protester - czerwonko | 26 | 463_mask_protest_shenstone_protester |
| 464 | rally - snowman - provan - spectator - robson | 26 | 464_rally_snowman_provan_spectator |
| 465 | sampson - women - denmark - england - lionesses | 26 | 465_sampson_women_denmark_england |
| 466 | caf - hayatou - nff - chiyangwa - fifa | 26 | 466_caf_hayatou_nff_chiyangwa |
| 467 | burgess - rabbitohs - bath - rugby - nrl | 26 | 467_burgess_rabbitohs_bath_rugby |
| 468 | gender - transgender - trans - intersex - hormone | 26 | 468_gender_transgender_trans_intersex |
| 469 | expedition - antarctic - australis - aurora - ice | 26 | 469_expedition_antarctic_australis_aurora |
| 470 | rahman - hamlets - lutfur - mawrey - tower | 26 | 470_rahman_hamlets_lutfur_mawrey |
| 471 | hacking - injunction - ngn - mirror - voicemail | 26 | 471_hacking_injunction_ngn_mirror |
| 472 | manning - pte - wikileaks - mannings - leavenworth | 25 | 472_manning_pte_wikileaks_mannings |
| 473 | bag - plastic - 5p - waste - singleuse | 25 | 473_bag_plastic_5p_waste |
| 474 | massaro - 115 - willstrop - seed - 118 | 25 | 474_massaro_115_willstrop_seed |
| 475 | gb - hockey - michalak - ludlows - tournament | 25 | 475_gb_hockey_michalak_ludlows |
| 476 | osprey - chick - ej - nest - garten | 25 | 476_osprey_chick_ej_nest |
| 477 | shepherd - christi - cook - fankhauser - thomas | 25 | 477_shepherd_christi_cook_fankhauser |
| 478 | cubs - curse - baseball - pitcher - series | 25 | 478_cubs_curse_baseball_pitcher |
| 479 | sport - participation - sports - racing - womens | 25 | 479_sport_participation_sports_racing |
| 480 | blackman - blackmans - marine - martial - marines | 25 | 480_blackman_blackmans_marine_martial |
| 481 | alcohol - pricing - alcoholrelated - minimum - consumption | 25 | 481_alcohol_pricing_alcoholrelated_minimum |
| 482 | argentina - argentinas - hedge - defaulted - bondholder | 25 | 482_argentina_argentinas_hedge_defaulted |
| 483 | baby - mother - towel - babys - silcocks | 24 | 483_baby_mother_towel_babys |
| 484 | qatar - worker - qatars - amnesty - workers | 24 | 484_qatar_worker_qatars_amnesty |
| 485 | gay - sexuality - homophobia - fashanu - footballer | 24 | 485_gay_sexuality_homophobia_fashanu |
| 486 | worthington - poppi - poppis - cumbria - inquest | 24 | 486_worthington_poppi_poppis_cumbria |
| 487 | sousse - bardo - tunisia - tunis - tunisian | 24 | 487_sousse_bardo_tunisia_tunis |
| 488 | shark - beach - sharks - fanning - surfer | 24 | 488_shark_beach_sharks_fanning |
| 489 | murder - ozden - khalaf - fasolis - pitchfordprice | 24 | 489_murder_ozden_khalaf_fasolis |
| 490 | loch - park - snowdonia - camping - lomond | 24 | 490_loch_park_snowdonia_camping |
| 491 | rhi - scheme - subsidy - boiler - renewable | 24 | 491_rhi_scheme_subsidy_boiler |
| 492 | tree - trees - felling - sheffield - rustlings | 24 | 492_tree_trees_felling_sheffield |
| 493 | alcohol - liquor - bihar - prohibition - drinking | 24 | 493_alcohol_liquor_bihar_prohibition |
| 494 | car - vehicle - chrysler - remotely - cars | 24 | 494_car_vehicle_chrysler_remotely |
| 495 | unite - cabin - ba - airline - fleet | 24 | 495_unite_cabin_ba_airline |
| 496 | sex - prostitution - prostitute - trafficking - prostitutes | 24 | 496_sex_prostitution_prostitute_trafficking |
| 497 | payment - farmer - rural - crofter - ewing | 24 | 497_payment_farmer_rural_crofter |
| 498 | pension - age - retirement - pensions - state | 24 | 498_pension_age_retirement_pensions |
| 499 | naderi - cothill - clennell - baleiwai - gada | 24 | 499_naderi_cothill_clennell_baleiwai |
| 500 | expectancy - mortality - ons - ageing - age | 23 | 500_expectancy_mortality_ons_ageing |
| 501 | bp - spill - deepwater - oil - rig | 23 | 501_bp_spill_deepwater_oil |
| 502 | pride - lgbt - parade - event - gay | 23 | 502_pride_lgbt_parade_event |
| 503 | carmichael - memo - carmichaels - sturgeon - leak | 23 | 503_carmichael_memo_carmichaels_sturgeon |
| 504 | rio - rios - favelas - janeiro - brazils | 23 | 504_rio_rios_favelas_janeiro |
| 505 | pipeline - dakota - sioux - tribe - native | 23 | 505_pipeline_dakota_sioux_tribe |
| 506 | mcdonalds - burger - restaurant - shack - rosenfeld | 23 | 506_mcdonalds_burger_restaurant_shack |
| 507 | mourinho - gaal - manchester - feyenoord - ferguson | 23 | 507_mourinho_gaal_manchester_feyenoord |
| 508 | bloodhound - speed - mckeown - record - guinness | 23 | 508_bloodhound_speed_mckeown_record |
| 509 | fat - meat - saturated - fruit - diet | 23 | 509_fat_meat_saturated_fruit |
| 510 | spinal - brain - cord - stimulation - paralysed | 23 | 510_spinal_brain_cord_stimulation |
| 511 | campus - yiannopoulos - university - student - katehi | 23 | 511_campus_yiannopoulos_university_student |
| 512 | tsarnaev - tamerlan - dzhokhar - boston - tsarnaevs | 23 | 512_tsarnaev_tamerlan_dzhokhar_boston |
| 513 | snowden - snowdens - asylum - kong - hong | 23 | 513_snowden_snowdens_asylum_kong |
| 514 | cardinal - pell - ridsdale - ballarat - abuse | 22 | 514_cardinal_pell_ridsdale_ballarat |
| 515 | unison - cordia - janitor - strike - pcs | 22 | 515_unison_cordia_janitor_strike |
| 516 | ucd - pembroke - harlequins - elks - monkstown | 22 | 516_ucd_pembroke_harlequins_elks |
| 517 | trump - farage - trumps - presidentelect - ambassador | 22 | 517_trump_farage_trumps_presidentelect |
| 518 | listener - radio - weekly - rajar - listeners | 22 | 518_listener_radio_weekly_rajar |
| 519 | mail - parcel - mails - royal - whistl | 22 | 519_mail_parcel_mails_royal |
| 520 | quantum - qubits - photon - computing - dwave | 22 | 520_quantum_qubits_photon_computing |
| 521 | hepatitis - blood - infected - haemophilia - transfusion | 22 | 521_hepatitis_blood_infected_haemophilia |
| 522 | kyles - camanachd - oban - kilmallie - newtonmore | 22 | 522_kyles_camanachd_oban_kilmallie |
| 523 | stadium - lldc - ham - west - olympic | 22 | 523_stadium_lldc_ham_west |
| 524 | megrahi - lockerbie - megrahis - libyan - bombing | 22 | 524_megrahi_lockerbie_megrahis_libyan |
| 525 | harlequins - vunipola - saracens - borthwick - jones | 22 | 525_harlequins_vunipola_saracens_borthwick |
| 526 | weather - temperature - rainfall - rain - wettest | 22 | 526_weather_temperature_rainfall_rain |
| 527 | stormont - budget - welfare - sinn - fin | 22 | 527_stormont_budget_welfare_sinn |
| 528 | sectarianism - legislation - repeal - behaviour - offensive | 22 | 528_sectarianism_legislation_repeal_behaviour |
| 529 | orchestra - music - lso - cello - concert | 22 | 529_orchestra_music_lso_cello |
| 530 | mps - corbyn - shadow - syria - strikes | 22 | 530_mps_corbyn_shadow_syria |
| 531 | ashers - cake - mcarthur - equality - bakery | 22 | 531_ashers_cake_mcarthur_equality |
| 532 | simpsonkent - blake - amon - zachary - eastenders | 22 | 532_simpsonkent_blake_amon_zachary |
| 533 | coulter - chhokar - ronnie - ebrahimi - chhokars | 22 | 533_coulter_chhokar_ronnie_ebrahimi |
| 534 | halliwell - fulcher - ocallaghan - godden - goddens | 22 | 534_halliwell_fulcher_ocallaghan_godden |
| 535 | twitter - isis - propaganda - account - telegram | 22 | 535_twitter_isis_propaganda_account |
| 536 | singapore - singapores - kuan - lee - singaporeans | 22 | 536_singapore_singapores_kuan_lee |
| 537 | indigenous - goodes - aboriginal - cartoon - australian | 22 | 537_indigenous_goodes_aboriginal_cartoon |
| 538 | pitch - unplayable - fixture - rain - jewell | 22 | 538_pitch_unplayable_fixture_rain |
| 539 | mueller - swift - swifts - muellers - skirt | 21 | 539_mueller_swift_swifts_muellers |
| 540 | alphago - ai - chess - sedol - deepmind | 21 | 540_alphago_ai_chess_sedol |
| 541 | airline - laptop - ban - flight - airlines | 21 | 541_airline_laptop_ban_flight |
| 542 | marriage - samesex - plebiscite - gay - abbott | 21 | 542_marriage_samesex_plebiscite_gay |
| 543 | rateable - revaluation - rate - business - relief | 21 | 543_rateable_revaluation_rate_business |
| 544 | indian - antipiracy - machugh - advanfort - ship | 21 | 544_indian_antipiracy_machugh_advanfort |
| 545 | dryer - whirlpool - tumble - indesit - hotpoint | 21 | 545_dryer_whirlpool_tumble_indesit |
| 546 | camper - indycamp - camp - parliament - spcb | 21 | 546_camper_indycamp_camp_parliament |
| 547 | flag - confederate - charleston - carolina - mississippi | 21 | 547_flag_confederate_charleston_carolina |
| 548 | scotland - pountney - nations - murrayfield - cotter | 21 | 548_scotland_pountney_nations_murrayfield |
| 549 | unemployment - wales - employment - rate - gva | 21 | 549_unemployment_wales_employment_rate |
| 550 | islands - bougainville - palau - tuvalu - island | 21 | 550_islands_bougainville_palau_tuvalu |
| 551 | poppy - weeping - 888246 - cummins - war | 21 | 551_poppy_weeping_888246_cummins |
| 552 | mobile - mpesa - africa - kenya - technology | 21 | 552_mobile_mpesa_africa_kenya |
| 553 | crime - cps - recorded - constable - police | 21 | 553_crime_cps_recorded_constable |
| 554 | thomson - snp - hales - whip - thomsons | 21 | 554_thomson_snp_hales_whip |
| 555 | meeke - ogier - citroen - rally - wrc | 21 | 555_meeke_ogier_citroen_rally |
| 556 | jones - nations - rugby - scrum - zealand | 21 | 556_jones_nations_rugby_scrum |
| 557 | bayoh - pirc - sheku - bayohs - anwar | 21 | 557_bayoh_pirc_sheku_bayohs |
| 558 | book - amazon - ebook - kindle - nook | 21 | 558_book_amazon_ebook_kindle |
| 559 | jewellery - kardashian - robbery - jewel - thief | 20 | 559_jewellery_kardashian_robbery_jewel |
| 560 | universal - credit - uc - benefit - claimant | 20 | 560_universal_credit_uc_benefit |
| 561 | heart - aspirin - fh - statin - risk | 20 | 561_heart_aspirin_fh_statin |
| 562 | customer - rbs - hsbc - payment - natwest | 20 | 562_customer_rbs_hsbc_payment |
| 563 | cyprus - cypriot - bank - bailout - laiki | 20 | 563_cyprus_cypriot_bank_bailout |
| 564 | zerohours - contract - contracts - ons - employment | 20 | 564_zerohours_contract_contracts_ons |
| 565 | levien - kaplan - swansea - swans - jenkins | 20 | 565_levien_kaplan_swansea_swans |
| 566 | diabetes - insulin - hdl - amputation - cancer | 20 | 566_diabetes_insulin_hdl_amputation |
| 567 | mummy - tomb - pyramid - ancient - egyptian | 20 | 567_mummy_tomb_pyramid_ancient |
| 568 | demolition - didcot - rwe - huxtable - collings | 20 | 568_demolition_didcot_rwe_huxtable |
| 569 | taxi - licensing - driver - licence - kiani | 20 | 569_taxi_licensing_driver_licence |
| 570 | 888 - ladbrokes - william - hill - merger | 20 | 570_888_ladbrokes_william_hill |
| 571 | cafferkey - ebola - pauline - cafferkeys - virus | 20 | 571_cafferkey_ebola_pauline_cafferkeys |
| 572 | church - abuse - bishop - clergy - abused | 20 | 572_church_abuse_bishop_clergy |
| 573 | mesh - implant - incontinence - implants - prolapse | 20 | 573_mesh_implant_incontinence_implants |
| 574 | exercise - obese - cancer - physical - obesity | 20 | 574_exercise_obese_cancer_physical |
| 575 | putin - trump - russian - tillerson - russia | 20 | 575_putin_trump_russian_tillerson |
| 576 | parryjones - halsall - severance - council - lieu | 20 | 576_parryjones_halsall_severance_council |
| 577 | export - exports - drink - market - food | 20 | 577_export_exports_drink_market |
| 578 | ladies - belles - wfc - notts - sunderland | 20 | 578_ladies_belles_wfc_notts |
| 579 | hiroshima - nagasaki - bomb - atomic - kyoto | 20 | 579_hiroshima_nagasaki_bomb_atomic |
| 580 | happiness - wellbeing - happier - satisfaction - happiest | 20 | 580_happiness_wellbeing_happier_satisfaction |
| 581 | inflation - price - cpi - prices - ons | 20 | 581_inflation_price_cpi_prices |
| 582 | transgender - scouts - military - erectile - gay | 20 | 582_transgender_scouts_military_erectile |
| 583 | bollywood - film - puri - kabali - naman | 20 | 583_bollywood_film_puri_kabali |
| 584 | brazils - brazil - rousseff - temer - rousseffs | 19 | 584_brazils_brazil_rousseff_temer |
| 585 | holocaust - dorothea - hedwig - occupation - memorial | 19 | 585_holocaust_dorothea_hedwig_occupation |
| 586 | trump - turnberry - golf - sturgeon - trumps | 19 | 586_trump_turnberry_golf_sturgeon |
| 587 | cyclone - queensland - nsw - debbie - floodwaters | 19 | 587_cyclone_queensland_nsw_debbie |
| 588 | warrior - unmanned - exercise - joint - navy | 19 | 588_warrior_unmanned_exercise_joint |
| 589 | depp - joyce - boo - pistol - quarantine | 19 | 589_depp_joyce_boo_pistol |
| 590 | durban - commonwealth - games - cgf - 2022 | 19 | 590_durban_commonwealth_games_cgf |
| 591 | somers - saudi - shia - mosque - yemen | 19 | 591_somers_saudi_shia_mosque |
| 592 | karimov - uzbekistan - uzbek - karimova - karimovatillyaeva | 19 | 592_karimov_uzbekistan_uzbek_karimova |
| 593 | orange - hall - strawletterdallon - attack - newtownstewart | 19 | 593_orange_hall_strawletterdallon_attack |
| 594 | breastfeeding - breastfeed - claridges - baby - feeding | 19 | 594_breastfeeding_breastfeed_claridges_baby |
| 595 | divorce - chai - sharland - khoo - prest | 19 | 595_divorce_chai_sharland_khoo |
| 596 | muntari - racist - figc - racism - boateng | 19 | 596_muntari_racist_figc_racism |
| 597 | cholera - outbreak - haiti - sanitation - diarrhoeal | 19 | 597_cholera_outbreak_haiti_sanitation |
| 598 | suicide - mental - suicidal - depression - samaritans | 19 | 598_suicide_mental_suicidal_depression |
| 599 | leveson - press - charter - ipso - regulator | 19 | 599_leveson_press_charter_ipso |
| 600 | visa - 457 - h1b - australia - backpacker | 19 | 600_visa_457_h1b_australia |
| 601 | marathon - hawkins - race - farah - rotich | 19 | 601_marathon_hawkins_race_farah |
| 602 | gene - genetic - cell - genome - dna | 19 | 602_gene_genetic_cell_genome |
| 603 | 2024 - ioc - bid - budapest - olympic | 19 | 603_2024_ioc_bid_budapest |
| 604 | ftse - index - share - 100 - dollar | 19 | 604_ftse_index_share_100 |
| 605 | africans - photo - selection - elsewhere - africa | 19 | 605_africans_photo_selection_elsewhere |
| 606 | muamba - fabrice - cardiac - defibrillator - muambas | 19 | 606_muamba_fabrice_cardiac_defibrillator |
| 607 | vaccine - ebola - outbreak - virus - zmapp | 19 | 607_vaccine_ebola_outbreak_virus |
| 608 | ifa - portadown - disciplinary - carrick - elebert | 18 | 608_ifa_portadown_disciplinary_carrick |
| 609 | clock - tower - bell - chime - ben | 18 | 609_clock_tower_bell_chime |
| 610 | pipeline - keystone - xl - alberta - nebraska | 18 | 610_pipeline_keystone_xl_alberta |
| 611 | carney - governor - bank - carneys - jenkin | 18 | 611_carney_governor_bank_carneys |
| 612 | visit - petition - trump - invitation - trumps | 18 | 612_visit_petition_trump_invitation |
| 613 | cqc - care - resident - inspection - morleigh | 18 | 613_cqc_care_resident_inspection |
| 614 | sky - dark - lighting - beacons - light | 18 | 614_sky_dark_lighting_beacons |
| 615 | cider - pasty - food - sausage - protected | 18 | 615_cider_pasty_food_sausage |
| 616 | okinawa - okinawans - base - futenma - japan | 18 | 616_okinawa_okinawans_base_futenma |
| 617 | wheat - meat - crop - gm - pig | 18 | 617_wheat_meat_crop_gm |
| 618 | chocolate - cadbury - nestle - toblerone - trademark | 18 | 618_chocolate_cadbury_nestle_toblerone |
| 619 | onion - dosa - india - indian - schezwan | 18 | 619_onion_dosa_india_indian |
| 620 | maggi - noodle - nestle - msg - noodles | 18 | 620_maggi_noodle_nestle_msg |
| 621 | teeth - dental - tooth - decay - gum | 18 | 621_teeth_dental_tooth_decay |
| 622 | milk - formula - daigou - infant - powder | 18 | 622_milk_formula_daigou_infant |
| 623 | kaepernick - anthem - yall - 49ers - kaepernicks | 18 | 623_kaepernick_anthem_yall_49ers |
| 624 | stephanie - inglis - vietnam - judo - daviot | 18 | 624_stephanie_inglis_vietnam_judo |
| 625 | airlander - hav - cardington - aircraft - flight | 18 | 625_airlander_hav_cardington_aircraft |
| 626 | water - customer - thames - ofwat - trent | 17 | 626_water_customer_thames_ofwat |
| 627 | bushfire - fire - yarloop - bushfires - destroyed | 17 | 627_bushfire_fire_yarloop_bushfires |
| 628 | malala - malalas - yousafzai - pakistan - ziauddin | 17 | 628_malala_malalas_yousafzai_pakistan |
| 629 | gay - samesex - marriage - mexico - lesbian | 17 | 629_gay_samesex_marriage_mexico |
| 630 | dog - dogs - wolf - domestication - personality | 17 | 630_dog_dogs_wolf_domestication |
| 631 | parking - hospital - car - patient - charge | 17 | 631_parking_hospital_car_patient |
| 632 | carers - foster - mental - fostering - care | 17 | 632_carers_foster_mental_fostering |
| 633 | coppergate - lendal - peryer - lane - camera | 17 | 633_coppergate_lendal_peryer_lane |
| 634 | bataclan - band - concert - showband - eagles | 17 | 634_bataclan_band_concert_showband |
| 635 | whiter - abbs - newmarket - whiters - adamec | 17 | 635_whiter_abbs_newmarket_whiters |
| 636 | munoz - airline - airlines - flight - tsa | 17 | 636_munoz_airline_airlines_flight |
| 637 | post - cwu - mail - delungra - office | 17 | 637_post_cwu_mail_delungra |
| 638 | name - girls - boys - popular - boy | 17 | 638_name_girls_boys_popular |
| 639 | brizzi - semple - semples - pc - meth | 17 | 639_brizzi_semple_semples_pc |
| 640 | obesity - overweight - obese - weight - child | 17 | 640_obesity_overweight_obese_weight |
| 641 | winterbourne - learning - disability - care - panorama | 17 | 641_winterbourne_learning_disability_care |
| 642 | cricket - firstclass - fowler - bruyn - pierre | 17 | 642_cricket_firstclass_fowler_bruyn |
| 643 | coventry - ricoh - sisu - acl - arena | 17 | 643_coventry_ricoh_sisu_acl |
| 644 | cypriots - cyprus - cypriot - turkish - greek | 17 | 644_cypriots_cyprus_cypriot_turkish |
| 645 | snow - weather - storm - cancelled - lakes | 17 | 645_snow_weather_storm_cancelled |
| 646 | earthquake - bgs - tremor - magnitude - geological | 17 | 646_earthquake_bgs_tremor_magnitude |
| 647 | bus - accident - lorry - driver - injured | 17 | 647_bus_accident_lorry_driver |
| 648 | unison - pay - strike - staff - nhs | 17 | 648_unison_pay_strike_staff |
| 649 | dominica - kitts - grenada - jamaica - nevis | 17 | 649_dominica_kitts_grenada_jamaica |
| 650 | corporation - tax - ireland - northern - stormont | 17 | 650_corporation_tax_ireland_northern |
| 651 | renewables - energy - climate - renewable - solar | 17 | 651_renewables_energy_climate_renewable |
| 652 | sleep - clock - melatonin - light - mattress | 17 | 652_sleep_clock_melatonin_light |
| 653 | festival - meadowbank - concert - music - primal | 17 | 653_festival_meadowbank_concert_music |
| 654 | ford - toronto - fords - mayor - doug | 17 | 654_ford_toronto_fords_mayor |
| 655 | butterfly - moth - hairstreak - beetle - blue | 17 | 655_butterfly_moth_hairstreak_beetle |
| 656 | cliff - sir - yorkshire - cliffs - investigation | 16 | 656_cliff_sir_yorkshire_cliffs |
| 657 | wenger - conte - arsenal - chelsea - wengers | 16 | 657_wenger_conte_arsenal_chelsea |
| 658 | bell - jenna - repatriation - kirstys - kirsty | 16 | 658_bell_jenna_repatriation_kirstys |
| 659 | malyn - heworth - lawrence - lawrences - claudia | 16 | 659_malyn_heworth_lawrence_lawrences |
| 660 | sony - sonys - hack - film - korea | 16 | 660_sony_sonys_hack_film |
| 661 | voice - cowell - shine - cameo - murs | 16 | 661_voice_cowell_shine_cameo |
| 662 | kelso - cycling - borders - event - race | 16 | 662_kelso_cycling_borders_event |
| 663 | abortion - pregnancy - salvador - valenzuela - foetus | 16 | 663_abortion_pregnancy_salvador_valenzuela |
| 664 | dobbin - southend - mildenhall - cambridge - pub | 16 | 664_dobbin_southend_mildenhall_cambridge |
| 665 | tunisia - sousse - rezgui - tunisian - attack | 16 | 665_tunisia_sousse_rezgui_tunisian |
| 666 | turkington - smiley - race - shedden - btcc | 16 | 666_turkington_smiley_race_shedden |
| 667 | ballet - dancer - polunin - kathakali - dance | 16 | 667_ballet_dancer_polunin_kathakali |
| 668 | efl - league - competition - club - premier | 16 | 668_efl_league_competition_club |
| 669 | wsl - notts - spring - fletcher - ladies | 16 | 669_wsl_notts_spring_fletcher |
| 670 | cardiff - bay - region - swansea - wales | 16 | 670_cardiff_bay_region_swansea |
| 671 | ospreys - blues - scarlets - dragons - pretorius | 16 | 671_ospreys_blues_scarlets_dragons |
| 672 | asylum - migrant - germany - seeker - bivsi | 16 | 672_asylum_migrant_germany_seeker |
| 673 | bite - homeless - littlejohn - clooney - social | 16 | 673_bite_homeless_littlejohn_clooney |
| 674 | puma - helicopter - gearbox - super - crash | 16 | 674_puma_helicopter_gearbox_super |
| 675 | diduca - scotter - penman - jepson - coonan | 16 | 675_diduca_scotter_penman_jepson |
| 676 | dress - wear - heel - thorp - code | 16 | 676_dress_wear_heel_thorp |
| 677 | mackintosh - gsa - building - art - restoration | 16 | 677_mackintosh_gsa_building_art |
| 678 | tattoo - tattooists - tattooing - piercing - tattooist | 16 | 678_tattoo_tattooists_tattooing_piercing |
| 679 | baupin - strausskahn - dsk - diallo - harassment | 16 | 679_baupin_strausskahn_dsk_diallo |
| 680 | polanski - extradition - geimer - polish - polanskis | 16 | 680_polanski_extradition_geimer_polish |
| 681 | turkey - erdogan - turkish - visafree - turkeys | 16 | 681_turkey_erdogan_turkish_visafree |
| 682 | garment - factory - rana - plaza - bangladesh | 16 | 682_garment_factory_rana_plaza |
| 683 | visa - cataki - poststudy - dingwall - brain | 16 | 683_visa_cataki_poststudy_dingwall |
| 684 | bake - baking - cake - bakes - nadiya | 16 | 684_bake_baking_cake_bakes |
| 685 | hut - beach - promenade - fairbourne - huts | 16 | 685_hut_beach_promenade_fairbourne |
| 686 | lego - toy - legos - brick - ai | 16 | 686_lego_toy_legos_brick |
| 687 | watford - taylor - prediction - serges - elton | 16 | 687_watford_taylor_prediction_serges |
| 688 | leaguebyleague - managerial - below - list - appear | 16 | 688_leaguebyleague_managerial_below_list |
| 689 | coulson - wallis - hacking - goodman - mulcaire | 16 | 689_coulson_wallis_hacking_goodman |
| 690 | hajj - pilgrim - stampede - saudi - mecca | 16 | 690_hajj_pilgrim_stampede_saudi |
| 691 | mcevoy - adjudication - plaid - councillor - tribunal | 16 | 691_mcevoy_adjudication_plaid_councillor |
| 692 | tick - lyme - rabies - ticks - disease | 16 | 692_tick_lyme_rabies_ticks |
| 693 | ukad - steroid - esl - antidoping - spencertonks | 16 | 693_ukad_steroid_esl_antidoping |
| 694 | keane - oneill - squad - mccarthy - republic | 16 | 694_keane_oneill_squad_mccarthy |
| 695 | ayrshire - ombudsman - arran - maternity - crosshouse | 15 | 695_ayrshire_ombudsman_arran_maternity |
| 696 | laser - pilot - aircraft - pilots - cockpit | 15 | 696_laser_pilot_aircraft_pilots |
| 697 | pc - phillips - wallasey - birkett - stinger | 15 | 697_pc_phillips_wallasey_birkett |
| 698 | ets - exam - cscs - toeic - sia | 15 | 698_ets_exam_cscs_toeic |
| 699 | irish - bailout - noonan - banking - cowen | 15 | 699_irish_bailout_noonan_banking |
| 700 | warriors - townsend - glasgow - edinburgh - alainuuese | 15 | 700_warriors_townsend_glasgow_edinburgh |
| 701 | cardiff - train - stadium - arriva - queuing | 15 | 701_cardiff_train_stadium_arriva |
| 702 | deas - coal - osborne - electoral - sturgeon | 15 | 702_deas_coal_osborne_electoral |
| 703 | raiders - super - trinitys - koukash - marwan | 15 | 703_raiders_super_trinitys_koukash |
| 704 | utilities - cryptosporidium - water - parasite - fylde | 15 | 704_utilities_cryptosporidium_water_parasite |
| 705 | named - person - supreme - legislation - scheme | 15 | 705_named_person_supreme_legislation |
| 706 | refugee - syria - humanitarian - child - children | 15 | 706_refugee_syria_humanitarian_child |
| 707 | pfi - plaid - welsh - allowance - carers | 15 | 707_pfi_plaid_welsh_allowance |
| 708 | ticket - resale - ticketing - stubhub - ticketmaster | 15 | 708_ticket_resale_ticketing_stubhub |
| 709 | reddit - facebook - moderator - content - subreddits | 15 | 709_reddit_facebook_moderator_content |
| 710 | clarke - ybf - cchq - shapps - bullying | 15 | 710_clarke_ybf_cchq_shapps |
| 711 | flag - fern - zealanders - design - zealand | 15 | 711_flag_fern_zealanders_design |
| 712 | aboriginal - indigenous - australians - australia - strait | 15 | 712_aboriginal_indigenous_australians_australia |
| 713 | betting - bet - gambling - barton - fa | 15 | 713_betting_bet_gambling_barton |
| 714 | iwf - image - hash - ncmec - ceop | 15 | 714_iwf_image_hash_ncmec |
| 715 | solstice - pendragon - stonehenge - pagan - druid | 15 | 715_solstice_pendragon_stonehenge_pagan |
| 716 | - - - - | 15 | 716____ |
| 717 | hpv - vaccination - vaccine - cervical - jcvi | 15 | 717_hpv_vaccination_vaccine_cervical |
| 718 | music - orchestra - instrument - musical - instruments | 15 | 718_music_orchestra_instrument_musical |
| 719 | spouse - threshold - 18600 - visa - income | 15 | 719_spouse_threshold_18600_visa |
| 720 | cambrils - ramblas - ripoll - abouyaaqoub - alcanar | 14 | 720_cambrils_ramblas_ripoll_abouyaaqoub |
| 721 | ecclestone - gribkowsky - constantin - ecclestones - remand | 14 | 721_ecclestone_gribkowsky_constantin_ecclestones |
| 722 | icloud - nude - fappening - jennifer - kardashian | 14 | 722_icloud_nude_fappening_jennifer |
| 723 | netflix - hbo - svod - subscriber - content | 14 | 723_netflix_hbo_svod_subscriber |
| 724 | vale - smurthwaite - contract - artell - stoke | 14 | 724_vale_smurthwaite_contract_artell |
| 725 | rollsroyce - rollsroyces - valueact - rolls - engine | 14 | 725_rollsroyce_rollsroyces_valueact_rolls |
| 726 | saudi - shia - arabia - iran - nimr | 14 | 726_saudi_shia_arabia_iran |
| 727 | call - 999 - buttdials - handler - caller | 14 | 727_call_999_buttdials_handler |
| 728 | airline - ba - compensation - passenger - flight | 14 | 728_airline_ba_compensation_passenger |
| 729 | fung - fortune - billionaire - airtasker - wealth | 14 | 729_fung_fortune_billionaire_airtasker |
| 730 | sweat - escape - cuomo - prison - dannemora | 14 | 730_sweat_escape_cuomo_prison |
| 731 | whild - rowett - blues - burton - lambert | 14 | 731_whild_rowett_blues_burton |
| 732 | gb - womens - olympics - fa - mens | 14 | 732_gb_womens_olympics_fa |
| 733 | aqap - yemen - alqaeda - mukalla - yemeni | 14 | 733_aqap_yemen_alqaeda_mukalla |
| 734 | pegida - dresden - pegidas - cologne - demonstration | 14 | 734_pegida_dresden_pegidas_cologne |
| 735 | hfea - egg - loeb - adeghe - sperm | 14 | 735_hfea_egg_loeb_adeghe |
| 736 | bull - bullfighting - jallikattu - gored - bullfight | 14 | 736_bull_bullfighting_jallikattu_gored |
| 737 | index - benchmark - nikkei - kospi - seng | 14 | 737_index_benchmark_nikkei_kospi |
| 738 | fire - blaze - burj - smoke - hotel | 14 | 738_fire_blaze_burj_smoke |
| 739 | violence - femicide - women - mexico - woman | 14 | 739_violence_femicide_women_mexico |
| 740 | puerto - rico - padilla - ricans - debt | 14 | 740_puerto_rico_padilla_ricans |
| 741 | gay - marriage - samesex - homosexual - adoption | 14 | 741_gay_marriage_samesex_homosexual |
| 742 | balakrishnan - balakrishnans - aravindan - commune - cult | 14 | 742_balakrishnan_balakrishnans_aravindan_commune |
| 743 | kingsway - sgt - lucas - foss - barrier | 14 | 743_kingsway_sgt_lucas_foss |
| 744 | stormont - welfare - sinn - fin - dup | 14 | 744_stormont_welfare_sinn_fin |
| 745 | battery - batteries - lithiumion - lithium - lithiumair | 14 | 745_battery_batteries_lithiumion_lithium |
| 746 | exeter - stevenage - city - wheeler - mooretaylor | 14 | 746_exeter_stevenage_city_wheeler |
| 747 | shkreli - daraprim - turing - pharmaceuticals - drug | 14 | 747_shkreli_daraprim_turing_pharmaceuticals |
| 748 | michelin - chef - restaurant - violier - food | 14 | 748_michelin_chef_restaurant_violier |
| 749 | pricing - alcohol - minimum - swa - whisky | 14 | 749_pricing_alcohol_minimum_swa |
| 750 | corroboration - wright - rotherham - jury - exploitation | 14 | 750_corroboration_wright_rotherham_jury |
| 751 | nfl - raiders - rams - jaguars - chargers | 14 | 751_nfl_raiders_rams_jaguars |
| 752 | mallya - mallyas - kingfisher - diageo - businessman | 14 | 752_mallya_mallyas_kingfisher_diageo |
| 753 | satoshi - bitcoin - nakamoto - wright - bitcoins | 14 | 753_satoshi_bitcoin_nakamoto_wright |
| 754 | masood - masoods - cochran - westminster - pc | 14 | 754_masood_masoods_cochran_westminster |
| 755 | daniels - daniel - luczak - krezolek - pelka | 14 | 755_daniels_daniel_luczak_krezolek |
| 756 | nato - defence - juncker - eu - european | 14 | 756_nato_defence_juncker_eu |
| 757 | robot - dynamics - robotics - atlas - robots | 13 | 757_robot_dynamics_robotics_atlas |
| 758 | knox - sollecito - kercher - perugia - guede | 13 | 758_knox_sollecito_kercher_perugia |
| 759 | cotter - glasgow - dunbar - solomons - hardie | 13 | 759_cotter_glasgow_dunbar_solomons |
| 760 | racism - frimpong - russian - zenit - racist | 13 | 760_racism_frimpong_russian_zenit |
| 761 | bawagarba - niels - sepsis - amaro - hadiza | 13 | 761_bawagarba_niels_sepsis_amaro |
| 762 | tax - avoidance - blairmore - carr - cameron | 13 | 762_tax_avoidance_blairmore_carr |
| 763 | search - stopandsearch - consensual - stop - searched | 13 | 763_search_stopandsearch_consensual_stop |
| 764 | literacy - reading - read - book - diary | 13 | 764_literacy_reading_read_book |
| 765 | eibars - suning - chinese - eibar - jiangsu | 13 | 765_eibars_suning_chinese_eibar |
| 766 | gm - recall - ignition - defect - bankruptcy | 13 | 766_gm_recall_ignition_defect |
| 767 | dog - meat - yulin - animal - festival | 13 | 767_dog_meat_yulin_animal |
| 768 | faro - ship - tote - cruises - vasquez | 13 | 768_faro_ship_tote_cruises |
| 769 | hoeness - messi - messis - uli - neymar | 13 | 769_hoeness_messi_messis_uli |
| 770 | sarao - saraos - navinder - sell - stanford | 13 | 770_sarao_saraos_navinder_sell |
| 771 | baumgardt - sport - wales - thomas - board | 13 | 771_baumgardt_sport_wales_thomas |
| 772 | sinai - morsi - elarish - ansar - checkpoint | 13 | 772_sinai_morsi_elarish_ansar |
| 773 | fog - airport - heathrow - flight - visibility | 13 | 773_fog_airport_heathrow_flight |
| 774 | mcquire - rezgui - sousse - gunman - silence | 13 | 774_mcquire_rezgui_sousse_gunman |
| 775 | haigh - gfh - dubai - uae - magi | 13 | 775_haigh_gfh_dubai_uae |
| 776 | qantas - airline - joyce - accc - airlines | 13 | 776_qantas_airline_joyce_accc |
| 777 | eis - colleges - lecturers - lecturer - college | 13 | 777_eis_colleges_lecturers_lecturer |
| 778 | u20 - ayr - thistle - rangers - dundee | 13 | 778_u20_ayr_thistle_rangers |
| 779 | defibrillator - cpr - compression - cardiac - resuscitation | 13 | 779_defibrillator_cpr_compression_cardiac |
| 780 | bbl - paternostro - riders - eagles - trophy | 13 | 780_bbl_paternostro_riders_eagles |
| 781 | lochte - feigen - bentz - conger - swimmer | 13 | 781_lochte_feigen_bentz_conger |
| 782 | emperor - akihito - throne - imperial - naruhito | 13 | 782_emperor_akihito_throne_imperial |
| 783 | russian - airspace - aircraft - jet - baltic | 13 | 783_russian_airspace_aircraft_jet |
| 784 | weapon - airgun - firearm - licensing - licence | 13 | 784_weapon_airgun_firearm_licensing |
| 785 | crompton - billings - hillsborough - cromptons - inquest | 13 | 785_crompton_billings_hillsborough_cromptons |
| 786 | alzheimers - brain - disease - protein - cell | 13 | 786_alzheimers_brain_disease_protein |
| 787 | u21 - germany - gudjohnsen - italy - england | 13 | 787_u21_germany_gudjohnsen_italy |
| 788 | johnson - flounders - sunderland - grooming - girl | 13 | 788_johnson_flounders_sunderland_grooming |
| 789 | sexual - harassment - consent - university - students | 13 | 789_sexual_harassment_consent_university |
| 790 | pcs - museum - nmw - museums - payment | 13 | 790_pcs_museum_nmw_museums |
| 791 | deaf - bsl - language - sign - skelding | 13 | 791_deaf_bsl_language_sign |
| 792 | picture - please - submit - pictures - publish | 13 | 792_picture_please_submit_pictures |
| 793 | rugby - pacific - leo - blacks - zealand | 13 | 793_rugby_pacific_leo_blacks |
| 794 | radio - hewlett - purves - listener - carrington | 13 | 794_radio_hewlett_purves_listener |
| 795 | image - body - cosmetic - selfesteem - male | 12 | 795_image_body_cosmetic_selfesteem |
| 796 | hawkeye - glt - goalref - goalline - goalcontrol | 12 | 796_hawkeye_glt_goalref_goalline |
| 797 | imf - emerging - growth - slowdown - economy | 12 | 797_imf_emerging_growth_slowdown |
| 798 | rosberg - hamilton - verstappen - ricciardo - vettel | 12 | 798_rosberg_hamilton_verstappen_ricciardo |
| 799 | mirza - ashai - abertillery - blackmail - farhan | 12 | 799_mirza_ashai_abertillery_blackmail |
| 800 | purnama - widodo - jakarta - jokowi - rizieq | 12 | 800_purnama_widodo_jakarta_jokowi |
| 801 | hyperloop - musk - pod - tesla - elon | 12 | 801_hyperloop_musk_pod_tesla |
| 802 | moira - gartshore - moiras - coatbridge - monkland | 12 | 802_moira_gartshore_moiras_coatbridge |
| 803 | imf - imfs - growth - forecast - obstfeld | 12 | 803_imf_imfs_growth_forecast |
| 804 | airbnb - chesky - rent - dfeh - san | 12 | 804_airbnb_chesky_rent_dfeh |
| 805 | autism - eating - anorexia - disorder - mental | 12 | 805_autism_eating_anorexia_disorder |
| 806 | pemex - explosion - blast - mexico - gas | 12 | 806_pemex_explosion_blast_mexico |
| 807 | vulcan - xh558 - vulcans - raf - bomber | 12 | 807_vulcan_xh558_vulcans_raf |
| 808 | olmert - bribe - jerusalem - holyland - olmerts | 12 | 808_olmert_bribe_jerusalem_holyland |
| 809 | magna - carta - 1215 - charter - king | 12 | 809_magna_carta_1215_charter |
| 810 | playback - device - media - supported - 1625 | 12 | 810_playback_device_media_supported |
| 811 | tunisia - tunisian - tunisias - tunis - libya | 12 | 811_tunisia_tunisian_tunisias_tunis |
| 812 | bibi - qandeel - honour - baloch - killing | 12 | 812_bibi_qandeel_honour_baloch |
| 813 | lib - clegg - dem - osborne - coalition | 12 | 813_lib_clegg_dem_osborne |
| 814 | dj - derek - serpellmorris - dizzee - rascal | 12 | 814_dj_derek_serpellmorris_dizzee |
| 815 | acid - ponce - mukerjea - priyas - potes | 12 | 815_acid_ponce_mukerjea_priyas |
| 816 | monthbymonth - peachey - braintree - calendar - tip | 12 | 816_monthbymonth_peachey_braintree_calendar |
| 817 | standing - safestanding - allseater - seating - hillsborough | 12 | 817_standing_safestanding_allseater_seating |
| 818 | tunisia - ennahda - tunisias - tunis - belaid | 12 | 818_tunisia_ennahda_tunisias_tunis |
| 819 | arts - theatre - dcal - art - acni | 12 | 819_arts_theatre_dcal_art |
| 820 | neymar - psg - brazilian - barcelona - santos | 12 | 820_neymar_psg_brazilian_barcelona |
| 821 | gorsuch - senate - scalia - scalias - republicans | 12 | 821_gorsuch_senate_scalia_scalias |
| 822 | reorganisation - merger - council - drakeford - local | 12 | 822_reorganisation_merger_council_drakeford |
| 823 | kosovo - albania - serbia - albanian - serbian | 12 | 823_kosovo_albania_serbia_albanian |
| 824 | onechild - twochild - policy - birth - china | 12 | 824_onechild_twochild_policy_birth |
| 825 | cav - hse - jehu - bowers - pascon | 12 | 825_cav_hse_jehu_bowers |
| 826 | tapie - lagarde - lyonnais - strausskahn - lagardes | 12 | 826_tapie_lagarde_lyonnais_strausskahn |
| 827 | pte - beasting - punishment - sgt - williams | 11 | 827_pte_beasting_punishment_sgt |
| 828 | train - rail - steam - wcr - network | 11 | 828_train_rail_steam_wcr |
| 829 | chua - insulin - saline - arden - poison | 11 | 829_chua_insulin_saline_arden |
| 830 | salazar - usada - rupp - testosterone - farah | 11 | 830_salazar_usada_rupp_testosterone |
| 831 | clausnitz - refugee - ethe - asylum - shelter | 11 | 831_clausnitz_refugee_ethe_asylum |
| 832 | negatives - positives - improvements - trust - cqc | 11 | 832_negatives_positives_improvements_trust |
| 833 | okawa - oldest - guinness - ghoto - misao | 11 | 833_okawa_oldest_guinness_ghoto |
| 834 | care - social - council - local - ilf | 11 | 834_care_social_council_local |
| 835 | prince - princes - veto - foi - disclosure | 11 | 835_prince_princes_veto_foi |
| 836 | handstand - dance - handstandday - routine - workout | 11 | 836_handstand_dance_handstandday_routine |
| 837 | bg - shell - oil - shells - exploration | 11 | 837_bg_shell_oil_shells |
| 838 | ford - mexico - plant - michigan - mazda | 11 | 838_ford_mexico_plant_michigan |
| 839 | denmark - danish - greenland - rasmussen - copenhagen | 11 | 839_denmark_danish_greenland_rasmussen |
| 840 | transgender - kannywood - 377 - transgenders - gay | 11 | 840_transgender_kannywood_377_transgenders |
| 841 | concentrix - hmrc - credit - tax - claimant | 11 | 841_concentrix_hmrc_credit_tax |
| 842 | invictus - prince - baton - games - harry | 11 | 842_invictus_prince_baton_games |
| 843 | bridge - lorry - southbound - carriageway - northbound | 11 | 843_bridge_lorry_southbound_carriageway |
| 844 | strontium - egtved - soil - burial - halliwell | 11 | 844_strontium_egtved_soil_burial |
| 845 | carney - dombret - clearing - brexit - financial | 11 | 845_carney_dombret_clearing_brexit |
| 846 | khan - mumbai - actor - bollywood - bollywoods | 11 | 846_khan_mumbai_actor_bollywood |
| 847 | nyomi - liam - liams - rachel - fee | 11 | 847_nyomi_liam_liams_rachel |
| 848 | bundy - rancher - finicum - refuge - federal | 11 | 848_bundy_rancher_finicum_refuge |
| 849 | sre - pshe - sex - education - compulsory | 11 | 849_sre_pshe_sex_education |
| 850 | bake - channel - giedroyc - mel - productions | 11 | 850_bake_channel_giedroyc_mel |
| 851 | bomb - unexploded - blitz - evacuation - 1940 | 11 | 851_bomb_unexploded_blitz_evacuation |
| 852 | quiz - quizzes - brainteaser - brainteasers - asks | 11 | 852_quiz_quizzes_brainteaser_brainteasers |
| 853 | ivf - fertility - ccgs - cycle - treatment | 11 | 853_ivf_fertility_ccgs_cycle |
| 854 | hoard - nms - kirkcudbright - galloway - treasure | 11 | 854_hoard_nms_kirkcudbright_galloway |
| 855 | regeni - regenis - egyptian - giulio - cairo | 11 | 855_regeni_regenis_egyptian_giulio |
| 856 | doddfrank - financial - glasssteagall - bank - banking | 11 | 856_doddfrank_financial_glasssteagall_bank |
| 857 | bailey - cults - baileys - knife - duguid | 11 | 857_bailey_cults_baileys_knife |
| 858 | firearm - tasers - armed - officer - chesterman | 11 | 858_firearm_tasers_armed_officer |
| 859 | vauxhall - zafira - recall - zafiras - resistor | 11 | 859_vauxhall_zafira_recall_zafiras |
| 860 | driving - disqualified - latif - chappell - millard | 10 | 860_driving_disqualified_latif_chappell |
| 861 | inbev - ab - sabmiller - budweiser - beer | 10 | 861_inbev_ab_sabmiller_budweiser |
| 862 | transgender - vikki - prison - thompson - gender | 10 | 862_transgender_vikki_prison_thompson |
| 863 | orgreave - miner - miners - rudd - yorkshire | 10 | 863_orgreave_miner_miners_rudd |
| 864 | ira - compensation - libya - libyan - gaddafi | 10 | 864_ira_compensation_libya_libyan |
| 865 | energy - insulation - bills - hydro - lahn | 10 | 865_energy_insulation_bills_hydro |
| 866 | brady - hindley - bradys - ashworth - myra | 10 | 866_brady_hindley_bradys_ashworth |
| 867 | indigenous - aboriginal - trudeau - canadian - murdered | 10 | 867_indigenous_aboriginal_trudeau_canadian |
| 868 | mcareavey - mauritius - michaela - kakehi - harte | 10 | 868_mcareavey_mauritius_michaela_kakehi |
| 869 | hie - hies - highlands - islands - funicular | 10 | 869_hie_hies_highlands_islands |
| 870 | indian - india - tata - indias - visa | 10 | 870_indian_india_tata_indias |
| 871 | charlie - hebdo - tignous - prophet - cartoonist | 10 | 871_charlie_hebdo_tignous_prophet |
| 872 | itv - talpa - crozier - revenue - advertising | 10 | 872_itv_talpa_crozier_revenue |
| 873 | solar - storage - grid - energy - electricity | 10 | 873_solar_storage_grid_energy |
| 874 | insurance - premium - cover - insurer - confusedcom | 10 | 874_insurance_premium_cover_insurer |
| 875 | hunjan - noah - passey - taylermorrison - ruddock | 10 | 875_hunjan_noah_passey_taylermorrison |
| 876 | dounreay - nuclear - nda - dmtr - wick | 10 | 876_dounreay_nuclear_nda_dmtr |
| 877 | emojis - emoji - unicode - flag - emojipedia | 10 | 877_emojis_emoji_unicode_flag |
| 878 | wolves - zenga - shi - thelwell - fosun | 10 | 878_wolves_zenga_shi_thelwell |
| 879 | boba - fett - wars - vectis - collector | 10 | 879_boba_fett_wars_vectis |
| 880 | harlequins - rugby - maafu - robshaw - bath | 10 | 880_harlequins_rugby_maafu_robshaw |
| 881 | rezaian - iran - bahais - iranian - namazi | 10 | 881_rezaian_iran_bahais_iranian |
| 882 | stv - utv - stvs - channel - tv | 10 | 882_stv_utv_stvs_channel |
| 883 | gezi - taksim - istanbul - square - erdogan | 10 | 883_gezi_taksim_istanbul_square |
| 884 | valance - song - band - crufts - favourite | 10 | 884_valance_song_band_crufts |
| 885 | homeless - homelessness - housing - brighthouse - accommodation | 10 | 885_homeless_homelessness_housing_brighthouse |
| 886 | youth - ncs - includem - young - unison | 10 | 886_youth_ncs_includem_young |
| 887 | cheating - bihar - exam - student - rai | 10 | 887_cheating_bihar_exam_student |
| 888 | spider - ouwehand - bat - wns - fungus | 9 | 888_spider_ouwehand_bat_wns |
| 889 | sky - wwe - hbo - skys - customer | 9 | 889_sky_wwe_hbo_skys |
| 890 | nepal - nepals - constitution - india - nepalese | 9 | 890_nepal_nepals_constitution_india |
| 891 | unsupported - airdrop - updated - siregar - playback | 9 | 891_unsupported_airdrop_updated_siregar |
| 892 | mori - cissokho - kolo - kolos - ulloa | 9 | 892_mori_cissokho_kolo_kolos |
| 893 | deliveroo - gig - selfemployed - flexibility - rider | 9 | 893_deliveroo_gig_selfemployed_flexibility |
| 894 | ira - paramilitary - sinn - fin - grimason | 9 | 894_ira_paramilitary_sinn_fin |
| 895 | super - holbrook - warrington - helens - hull | 9 | 895_super_holbrook_warrington_helens |
| 896 | aljeffery - saudi - fayadh - alotaibi - smech | 9 | 896_aljeffery_saudi_fayadh_alotaibi |
| 897 | rhi - foster - adviser - scheme - crawford | 9 | 897_rhi_foster_adviser_scheme |
| 898 | dorleybrown - pothole - inverness - planning - bath | 9 | 898_dorleybrown_pothole_inverness_planning |
| 899 | tourism - adventure - visit - visitscotland - skates | 9 | 899_tourism_adventure_visit_visitscotland |
| 900 | forecourt - fired - f7 - garage - shooting | 9 | 900_forecourt_fired_f7_garage |
| 901 | hinkley - edf - nuclear - reactor - electricity | 9 | 901_hinkley_edf_nuclear_reactor |
| 902 | rally - rallying - nrw - deerstalker - event | 9 | 902_rally_rallying_nrw_deerstalker |
| 903 | peatlands - peatland - woodland - forestry - wildlife | 9 | 903_peatlands_peatland_woodland_forestry |
| 904 | g4s - serco - tagging - contract - sfo | 9 | 904_g4s_serco_tagging_contract |
| 905 | skin - sunscreen - uv - sun - melanoma | 9 | 905_skin_sunscreen_uv_sun |
| 906 | aid - development - 07 - dfid - humanitarian | 9 | 906_aid_development_07_dfid |
| 907 | mental - triage - health - ombudsman - healthrelated | 9 | 907_mental_triage_health_ombudsman |
| 908 | brain - memory - stimulation - shogi - researcher | 9 | 908_brain_memory_stimulation_shogi |
| 909 | flushable - water - wipe - blockage - aviemore | 9 | 909_flushable_water_wipe_blockage |
| 910 | ramsey - coleman - arsenals - moldova - ramseys | 9 | 910_ramsey_coleman_arsenals_moldova |
| 911 | terrorism - walham - kalantar - samra - amani | 9 | 911_terrorism_walham_kalantar_samra |
| 912 | cleusa - bolsa - familia - senna - rollsroyce | 9 | 912_cleusa_bolsa_familia_senna |
| 913 | maxwell - larne - ciarn - explosive - psni | 9 | 913_maxwell_larne_ciarn_explosive |
| 914 | nash - eichenwald - danevska - troll - seizure | 9 | 914_nash_eichenwald_danevska_troll |
| 915 | minnock - ethan - wildblood - highbridge - minnocks | 9 | 915_minnock_ethan_wildblood_highbridge |
| 916 | indigenous - caceres - tukano - kaapor - logging | 9 | 916_indigenous_caceres_tukano_kaapor |
| 917 | litvinenko - lugovoi - kovtun - litvinenkos - polonium210 | 9 | 917_litvinenko_lugovoi_kovtun_litvinenkos |
| 918 | argentina - germany - ranking - saari - bosniaherzegovina | 9 | 918_argentina_germany_ranking_saari |
| 919 | gun - nra - guns - firearm - carol | 9 | 919_gun_nra_guns_firearm |
| 920 | bahrain - ajami - bahraini - shia - unrest | 9 | 920_bahrain_ajami_bahraini_shia |
| 921 | apex - hotel - frp - hotels - inn | 8 | 921_apex_hotel_frp_hotels |
| 922 | cake - gebhart - birthday - smash - hercules | 8 | 922_cake_gebhart_birthday_smash |
| 923 | asylum - canada - quebec - manitoba - seeker | 8 | 923_asylum_canada_quebec_manitoba |
| 924 | jakarta - indonesia - indonesian - naim - islamiyah | 8 | 924_jakarta_indonesia_indonesian_naim |
| 925 | cycling - varnish - sutton - british - cyclings | 8 | 925_cycling_varnish_sutton_british |
| 926 | gallucci - maidenwells - glenys - galluccis - deanfoot | 8 | 926_gallucci_maidenwells_glenys_galluccis |
| 927 | middlesbrough - boro - fulham - ohara - karankas | 8 | 927_middlesbrough_boro_fulham_ohara |
| 928 | fingerprint - biometrics - finger - biometric - qualcomm | 8 | 928_fingerprint_biometrics_finger_biometric |
| 929 | cappuccini - azeez - anaesthetist - cornish - tunbridge | 8 | 929_cappuccini_azeez_anaesthetist_cornish |
| 930 | savile - saviles - mandeville - jimmy - abuse | 8 | 930_savile_saviles_mandeville_jimmy |
| 931 | facebook - video - content - livestreaming - graphic | 8 | 931_facebook_video_content_livestreaming |
| 932 | extinction - capitanian - brocks - carbon - kranendonk | 8 | 932_extinction_capitanian_brocks_carbon |
| 933 | infantino - fifa - tournament - confederation - ceferin | 8 | 933_infantino_fifa_tournament_confederation |
| 934 | marler - banter - nations - gatland - samson | 8 | 934_marler_banter_nations_gatland |
| 935 | eurovision - crimea - samoilova - ukraine - tatars | 8 | 935_eurovision_crimea_samoilova_ukraine |
| 936 | vacantia - estate - bona - unclaimed - sp | 8 | 936_vacantia_estate_bona_unclaimed |
| 937 | - - - - | 8 | 937____ |
| 938 | cornerhouse - moutrey - theatre - hippodrome - building | 8 | 938_cornerhouse_moutrey_theatre_hippodrome |
| 939 | pst - chainrai - birch - portpin - eisner | 8 | 939_pst_chainrai_birch_portpin |
| 940 | lennon - mental - aaronlennon12 - aaron - depression | 8 | 940_lennon_mental_aaronlennon12_aaron |
| 941 | passport - visa - biometric - 3527 - backlog | 8 | 941_passport_visa_biometric_3527 |
| 942 | lgbti - equality - discrimination - bisson - tatchell | 8 | 942_lgbti_equality_discrimination_bisson |
| 943 | amazon - amazons - aws - cloud - ecommerce | 8 | 943_amazon_amazons_aws_cloud |
| 944 | stone - stonehenge - sarsen - diss - archaeological | 8 | 944_stone_stonehenge_sarsen_diss |
| 945 | wandsworth - rifw - affordable - stirling - castlebrooke | 8 | 945_wandsworth_rifw_affordable_stirling |
| 946 | cambridgeshire - dover - norfolk - desborough - council | 8 | 946_cambridgeshire_dover_norfolk_desborough |
| 947 | gilbern - occupancy - nestle - croydon - hawick | 8 | 947_gilbern_occupancy_nestle_croydon |
| 948 | valizadeh - roosh - australia - mallah - faruqi | 8 | 948_valizadeh_roosh_australia_mallah |
| 949 | lancaster - coach - rugby - abendanon - bath | 8 | 949_lancaster_coach_rugby_abendanon |
| 950 | abf - sydney - syndicate - seized - australian | 8 | 950_abf_sydney_syndicate_seized |
| 951 | veitch - ingold - mcintyre - kilwinning - cottage | 8 | 951_veitch_ingold_mcintyre_kilwinning |
| 952 | sainsburys - oxford - thorpe - supermarket - higgs | 8 | 952_sainsburys_oxford_thorpe_supermarket |
| 953 | westley - cotterill - bentley - club - oakwell | 8 | 953_westley_cotterill_bentley_club |
| 954 | bishop - lewes - ball - cps - 1992 | 8 | 954_bishop_lewes_ball_cps |
| 955 | prisoner - echr - wholelife - rights - human | 8 | 955_prisoner_echr_wholelife_rights |
| 956 | edet - inuk - antan - culverwell - inuks | 8 | 956_edet_inuk_antan_culverwell |
| 957 | gulls - nicholson - devon - gaming - torquay | 8 | 957_gulls_nicholson_devon_gaming |
| 958 | blood - nibts - poots - ban - gay | 8 | 958_blood_nibts_poots_ban |
| 959 | hinds - dr - ambulance - helicopter - trauma | 7 | 959_hinds_dr_ambulance_helicopter |
| 960 | donnarumma - mihajlovic - buffon - roma - milan | 7 | 960_donnarumma_mihajlovic_buffon_roma |
| 961 | turing - pardon - turings - posthumous - gay | 7 | 961_turing_pardon_turings_posthumous |
| 962 | sandford - sandusky - dorking - vegas - sandfords | 7 | 962_sandford_sandusky_dorking_vegas |
| 963 | steel - antidumping - chinese - china - solar | 7 | 963_steel_antidumping_chinese_china |
| 964 | dung - vietnam - nguyen - trong - thang | 7 | 964_dung_vietnam_nguyen_trong |
| 965 | hailstones - incident - frieston - shawhill - intent | 7 | 965_hailstones_incident_frieston_shawhill |
| 966 | romania - albania - switzerland - torje - ledian | 7 | 966_romania_albania_switzerland_torje |
| 967 | google - ico - search - forgotten - ruling | 7 | 967_google_ico_search_forgotten |
| 968 | merger - skeoch - aberdeen - gilbert - asset | 7 | 968_merger_skeoch_aberdeen_gilbert |
| 969 | wasps - connacht - toulouse - gopperth - bassett | 7 | 969_wasps_connacht_toulouse_gopperth |
| 970 | infantry - combat - woman - lessing - roles | 7 | 970_infantry_combat_woman_lessing |
| 971 | indian - india - modi - doklam - china | 7 | 971_indian_india_modi_doklam |
| 972 | seun - sahara - desert - alhacen - niger | 7 | 972_seun_sahara_desert_alhacen |
| 973 | payphones - bt - kiosk - phone - payphone | 7 | 973_payphones_bt_kiosk_phone |
| 974 | ipsa - mps - bercow - expense - ballard | 7 | 974_ipsa_mps_bercow_expense |
| 975 | cricket - warwickshire - county - championship - lancashire | 7 | 975_cricket_warwickshire_county_championship |
| 976 | putt - stricker - kaymer - molinari - kjeldsen | 7 | 976_putt_stricker_kaymer_molinari |
| 977 | population - census - ons - migration - 14100 | 7 | 977_population_census_ons_migration |
| 978 | ship - eastern - cao - yangtze - cangxi | 7 | 978_ship_eastern_cao_yangtze |
| 979 | diamond - auction - sothebys - carat - pink | 7 | 979_diamond_auction_sothebys_carat |
| 980 | downey - letter - ira - hyde - letters | 7 | 980_downey_letter_ira_hyde |
| 981 | ring - ariel - norrislee - carrot - paahlsson | 7 | 981_ring_ariel_norrislee_carrot |
| 982 | wrightson - conroy - wrightsons - mathieson - yasmine | 7 | 982_wrightson_conroy_wrightsons_mathieson |
| 983 | mural - listed - graffiti - barton - kaveh | 7 | 983_mural_listed_graffiti_barton |
| 984 | tax - cosla - council - slgp - freeze | 7 | 984_tax_cosla_council_slgp |
| 985 | mureta - freelancer - ces - provenance - tech | 7 | 985_mureta_freelancer_ces_provenance |
| 986 | hamrick - levitas - greatgrandfather - jagger - lwren | 7 | 986_hamrick_levitas_greatgrandfather_jagger |
| 987 | tesco - customer - account - bank - tescos | 7 | 987_tesco_customer_account_bank |
| 988 | dad - ian - mum - hillsborough - gary | 7 | 988_dad_ian_mum_hillsborough |
| 989 | pound - sterling - currency - dollar - index | 7 | 989_pound_sterling_currency_dollar |
| 990 | bbcbreaking - fullest - breaking - refresh - alert | 7 | 990_bbcbreaking_fullest_breaking_refresh |
| 991 | culture - bid - 2023 - aarhus - title | 7 | 991_culture_bid_2023_aarhus |
| 992 | pinckney - charleston - roof - dylann - church | 7 | 992_pinckney_charleston_roof_dylann |
| 993 | cologne - colognes - assault - merkel - mathies | 7 | 993_cologne_colognes_assault_merkel |
| 994 | livingstone - labour - livingstones - corbyn - mps | 7 | 994_livingstone_labour_livingstones_corbyn |
| 995 | cellino - leeds - rosler - elland - gfh | 7 | 995_cellino_leeds_rosler_elland |
| 996 | hussain - rape - basharat - arshid - colborne | 7 | 996_hussain_rape_basharat_arshid |
| 997 | store - bq - arran - tesco - endless | 7 | 997_store_bq_arran_tesco |
| 998 | hanjin - shipping - cargo - container - ship | 7 | 998_hanjin_shipping_cargo_container |
| 999 | hoegh - ship - vessel - refloat - salvor | 7 | 999_hoegh_ship_vessel_refloat |
| 1000 | cash - debit - card - payment - brc | 7 | 1000_cash_debit_card_payment |
| 1001 | dvla - hire - motorist - disc - rac | 7 | 1001_dvla_hire_motorist_disc |
| 1002 | bloomfield - melanoma - sun - colin - uv | 6 | 1002_bloomfield_melanoma_sun_colin |
| 1003 | plane - airbus - mach - jet - supersonic | 6 | 1003_plane_airbus_mach_jet |
| 1004 | jeffreyshaw - sheriff - clarkes - clarke - erin | 6 | 1004_jeffreyshaw_sheriff_clarkes_clarke |
| 1005 | screening - cervical - hpv - cancer - breast | 6 | 1005_screening_cervical_hpv_cancer |
| 1006 | swaddling - spine - windpipe - rosies - splint | 6 | 1006_swaddling_spine_windpipe_rosies |
| 1007 | grainger - teague - culcheth - gmp - graingers | 6 | 1007_grainger_teague_culcheth_gmp |
| 1008 | duchatelet - watford - meire - rashford - addicks | 6 | 1008_duchatelet_watford_meire_rashford |
| 1009 | jozwik - harlow - polish - arek - stow | 6 | 1009_jozwik_harlow_polish_arek |
| 1010 | grills - mcmullen - hifold - oven - catterall | 6 | 1010_grills_mcmullen_hifold_oven |
| 1011 | minibus - ashley - brooks - maesteg - ashleys | 6 | 1011_minibus_ashley_brooks_maesteg |
| 1012 | ira - spiecker - psni - northern - subgroups | 6 | 1012_ira_spiecker_psni_northern |
| 1013 | vtech - cayla - doll - toy - snapchatdb | 6 | 1013_vtech_cayla_doll_toy |
| 1014 | pogmore - loosemore - naked - lucas - sunbathing | 6 | 1014_pogmore_loosemore_naked_lucas |
| 1015 | bt - nasl - sky - broadband - peterson | 6 | 1015_bt_nasl_sky_broadband |
| 1016 | embryo - editing - gene - germline - ivf | 6 | 1016_embryo_editing_gene_germline |
| 1017 | khmer - rouge - duch - chea - nuon | 6 | 1017_khmer_rouge_duch_chea |
| 1018 | child - nursery - kindergarten - early - education | 6 | 1018_child_nursery_kindergarten_early |
| 1019 | arctic - convoys - convoy - medal - veteran | 6 | 1019_arctic_convoys_convoy_medal |
| 1020 | adp - injecting - kareena - llandough - operation | 6 | 1020_adp_injecting_kareena_llandough |
| 1021 | oyston - belokon - oystons - bfc - blackpool | 6 | 1021_oyston_belokon_oystons_bfc |
| 1022 | murphy - ballybinaby - hackballscross - nonjury - louth | 6 | 1022_murphy_ballybinaby_hackballscross_nonjury |
| 1023 | schmittmatzen - santa - christmas - coxshall - mahjouri | 6 | 1023_schmittmatzen_santa_christmas_coxshall |
| 1024 | emission - climate - target - greenhouse - change | 6 | 1024_emission_climate_target_greenhouse |
| 1025 | shakespeare - mulryne - shakespeares - name - puck | 6 | 1025_shakespeare_mulryne_shakespeares_name |
| 1026 | schoolcardshop - harrier - truprint - sevenyearsold - darryll | 6 | 1026_schoolcardshop_harrier_truprint_sevenyearsold |
| 1027 | commentary - fa - buildup - live - cup | 6 | 1027_commentary_fa_buildup_live |
| 1028 | vat - rate - sanitary - retailer - boarding | 6 | 1028_vat_rate_sanitary_retailer |
| 1029 | walsall - saddlers - whitney - burton - hiwula | 6 | 1029_walsall_saddlers_whitney_burton |
| 1030 | alkasasbeh - moaz - jordanian - goto - lt | 6 | 1030_alkasasbeh_moaz_jordanian_goto |
| 1031 | church - samesex - marriage - communion - anglican | 6 | 1031_church_samesex_marriage_communion |
| 1032 | lse - merger - boerse - deutsche - britvic | 6 | 1032_lse_merger_boerse_deutsche |
| 1033 | unemployment - rate - inactivity - favourably - joblessrelated | 6 | 1033_unemployment_rate_inactivity_favourably |
| 1034 | mitochondrion - technique - mitochondrial - genetic - cell | 6 | 1034_mitochondrion_technique_mitochondrial_genetic |
| 1035 | mockingbird - watchman - atticus - finch - novel | 6 | 1035_mockingbird_watchman_atticus_finch |
| 1036 | cliff - beach - plass - landslip - bradstock | 6 | 1036_cliff_beach_plass_landslip |
| 1037 | transplant - donor - sandness - hardison - hutton | 6 | 1037_transplant_donor_sandness_hardison |
| 1038 | roaccutane - hydroquinone - shaba - medication - drug | 6 | 1038_roaccutane_hydroquinone_shaba_medication |
| 1039 | harlequins - saracens - alofa - wasps - thompstone | 6 | 1039_harlequins_saracens_alofa_wasps |
| 1040 | riggs - rigg - cps - ipcc - robbies | 6 | 1040_riggs_rigg_cps_ipcc |
| 1041 | undisclosed - loan - free - unattached - premiership | 6 | 1041_undisclosed_loan_free_unattached |
| 1042 | cettour - saem - kierans - kieran - lift | 6 | 1042_cettour_saem_kierans_kieran |
| 1043 | psenny - lit - paris - bataclan - equipe | 6 | 1043_psenny_lit_paris_bataclan |
| 1044 | bot - chatbots - homework - detoffol - negobot | 5 | 1044_bot_chatbots_homework_detoffol |
| 1045 | starwood - marriott - anbang - hotel - waldorf | 5 | 1045_starwood_marriott_anbang_hotel |
| 1046 | asbestos - mesothelioma - mesothemelioma - compensation - exposed | 5 | 1046_asbestos_mesothelioma_mesothemelioma_compensation |
| 1047 | zaloumis - starkey - hennells - stortford - cowells | 5 | 1047_zaloumis_starkey_hennells_stortford |
| 1048 | crucis - valle - neolithic - fragment - stonehenge | 5 | 1048_crucis_valle_neolithic_fragment |
| 1049 | taiwan - china - beijing - taiwanese - tsai | 5 | 1049_taiwan_china_beijing_taiwanese |
| 1050 | gunter - woodburn - ledley - uncapped - stam | 5 | 1050_gunter_woodburn_ledley_uncapped |
| 1051 | millwall - draw - herd - bristol - fa | 5 | 1051_millwall_draw_herd_bristol |
| 1052 | autism - glynneowen - shelley - specialisterne - spectrum | 5 | 1052_autism_glynneowen_shelley_specialisterne |
| 1053 | isa - mat - saving - lisas - saver | 5 | 1053_isa_mat_saving_lisas |
| 1054 | alzheimers - dementia - parkinsons - decline - cognitive | 5 | 1054_alzheimers_dementia_parkinsons_decline |
| 1055 | falconio - bowen - kocik - ligmans - ligman | 5 | 1055_falconio_bowen_kocik_ligmans |
| 1056 | pastor - wazzan - dungavel - mcconnell - detention | 5 | 1056_pastor_wazzan_dungavel_mcconnell |
| 1057 | s4c - carmarthen - trinity - egin - s4cs | 5 | 1057_s4c_carmarthen_trinity_egin |
| 1058 | luer - randall - burley - barnet - isthmian | 5 | 1058_luer_randall_burley_barnet |
| 1059 | chen - teacher - language - varkey - tahta | 5 | 1059_chen_teacher_language_varkey |
| 1060 | port - kovari - whitworth - sak - ports | 5 | 1060_port_kovari_whitworth_sak |
| 1061 | armenia - armenians - azerbaijan - armenian - uzbekistan | 5 | 1061_armenia_armenians_azerbaijan_armenian |
| 1062 | ranocchia - howe - sunderland - bournemouth - hulls | 5 | 1062_ranocchia_howe_sunderland_bournemouth |
| 1063 | mine - adani - queensland - nickel - acf | 5 | 1063_mine_adani_queensland_nickel |
| 1064 | gobbins - rockfall - path - islandmagee - undercliff | 5 | 1064_gobbins_rockfall_path_islandmagee |
| 1065 | smolensk - kaczynski - macierewicz - kaczynskis - polish | 5 | 1065_smolensk_kaczynski_macierewicz_kaczynskis |
| 1066 | qazimaj - stuarts - paxman - killoughter - stuart | 5 | 1066_qazimaj_stuarts_paxman_killoughter |
| 1067 | benitez - randy - shearer - hollis - sherwood | 5 | 1067_benitez_randy_shearer_hollis |
| 1068 | hmrc - tax - avoidance - accountancy - relief | 5 | 1068_hmrc_tax_avoidance_accountancy |
| 1069 | hacking - hacked - bskyb - mirror - murdoch | 5 | 1069_hacking_hacked_bskyb_mirror |
| 1070 | palaeolithic - bone - skullcups - goughs - cave | 5 | 1070_palaeolithic_bone_skullcups_goughs |
| 1071 | coventry - bradford - city - norwich - cambridge | 5 | 1071_coventry_bradford_city_norwich |
| 1072 | republican - waterboarding - muslim - carson - trump | 5 | 1072_republican_waterboarding_muslim_carson |
| 1073 | merkel - trump - trumps - german - transatlantic | 5 | 1073_merkel_trump_trumps_german |
| 1074 | weibo - internet - sina - cac - cyberspace | 5 | 1074_weibo_internet_sina_cac |
| 1075 | stromatolites - fungi - organism - algae - fossil | 5 | 1075_stromatolites_fungi_organism_algae |
| 1076 | leverkusen - dortmund - bayer - borussia - 04 | 5 | 1076_leverkusen_dortmund_bayer_borussia |
| 1077 | ceuta - melilla - migrant - fence - spanish | 5 | 1077_ceuta_melilla_migrant_fence |
| 1078 | android - apps - jassy - contactless - payment | 5 | 1078_android_apps_jassy_contactless |
| 1079 | czerwiak - qazi - lebiedowicz - rafal - birkenshaw | 5 | 1079_czerwiak_qazi_lebiedowicz_rafal |
| 1080 | liverpool - helens - caldeira - sefton - knowsley | 5 | 1080_liverpool_helens_caldeira_sefton |
| 1081 | pitt - jolie - married - divorce - paltrow | 5 | 1081_pitt_jolie_married_divorce |
| 1082 | mourinho - ham - referee - henrikh - mkhitaryan | 5 | 1082_mourinho_ham_referee_henrikh |
| 1083 | dewani - dewanis - mngeni - tongo - shrien | 5 | 1083_dewani_dewanis_mngeni_tongo |
| 1084 | mcguinness - ira - funeral - irish - telegraph | 5 | 1084_mcguinness_ira_funeral_irish |
| 1085 | pacers - guide - seasonhigh - gymnastics - jokic | 5 | 1085_pacers_guide_seasonhigh_gymnastics |
| 1086 | speeding - speed - 40mph - m32 - camera | 5 | 1086_speeding_speed_40mph_m32 |
| 1087 | plainmoor - phillips - gulls - thea - truro | 5 | 1087_plainmoor_phillips_gulls_thea |
| 1088 | duggan - duggans - hutchinsonfoster - gun - denney | 5 | 1088_duggan_duggans_hutchinsonfoster_gun |
| 1089 | nsi - bond - bonds - savings - premium | 5 | 1089_nsi_bond_bonds_savings |
| 1090 | ramsey - sector - pmi - manufacturing - orders | 5 | 1090_ramsey_sector_pmi_manufacturing |
| 1091 | faw - coleman - colemans - euro - 1958 | 5 | 1091_faw_coleman_colemans_euro |
| 1092 | cell - eye - nystagmus - amd - moorfields | 5 | 1092_cell_eye_nystagmus_amd |
| 1093 | allam - ehab - tigers - hcst - hull | 5 | 1093_allam_ehab_tigers_hcst |
| 1094 | dragons - blues - clermont - patchell - replacements | 5 | 1094_dragons_blues_clermont_patchell |
| 1095 | glencore - glencores - mining - commodity - glasenberg | 5 | 1095_glencore_glencores_mining_commodity |
| 1096 | dictionaries - oup - word - dictionary - selfie | 5 | 1096_dictionaries_oup_word_dictionary |
| 1097 | iran - irans - iranian - sanction - oil | 5 | 1097_iran_irans_iranian_sanction |
| 1098 | pope - christmas - jews - vatican - peters | 5 | 1098_pope_christmas_jews_vatican |
| 1099 | maze - halloween - scare - ailleo - easter | 5 | 1099_maze_halloween_scare_ailleo |
| 1100 | nhs - 8bn - health - stevens - department | 5 | 1100_nhs_8bn_health_stevens |
| 1101 | unexplained - southway - parkgrove - clermiston - westshore | 5 | 1101_unexplained_southway_parkgrove_clermiston |
| 1102 | morgans - morgan - axe - metropolitan - corruption | 5 | 1102_morgans_morgan_axe_metropolitan |
| 1103 | birthing - irp - surgery - cleanliness - mattress | 5 | 1103_birthing_irp_surgery_cleanliness |
| 1104 | tylen - kajsa - billie - flemings - tina | 5 | 1104_tylen_kajsa_billie_flemings |
| 1105 | chiller - energy - grid - eco - electricity | 5 | 1105_chiller_energy_grid_eco |
| 1106 | freeride - cuillin - pinn - mountain - macaskill | 5 | 1106_freeride_cuillin_pinn_mountain |
| 1107 | ireland - esri - iem - trade - northern | 5 | 1107_ireland_esri_iem_trade |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
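
These settings map directly onto the `BERTopic` constructor. Below is a minimal sketch of re-creating the configuration and of loading the trained model for inference; the repository id is a placeholder and should be replaced with this model's actual Hub id.

```python
from bertopic import BERTopic

# Re-create the training configuration listed above.
topic_model = BERTopic(
    calculate_probabilities=True,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
)

# Or load the already-trained model from the Hub (placeholder repo id).
topic_model = BERTopic.load("<this-repo-id>")
topics, probs = topic_model.transform(["UK government announces new energy policy"])
print(topic_model.get_topic(topics[0]))
```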
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.13.1
* Python: 3.10.12
| [
"CPI",
"MEDAL"
] | Non_BioNLP |
RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2309.06085",
"arxiv:2101.09635",
"endpoints_compatible",
"region:us"
] | 1,722,665,418,000 | 2024-08-03T08:16:05 | 127 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-8b-cpt-sea-lionv2-base - GGUF
- Model creator: https://huggingface.co/aisingapore/
- Original model: https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2-base/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-8b-cpt-sea-lionv2-base.Q2_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3-8b-cpt-sea-lionv2-base.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3-8b-cpt-sea-lionv2-base.Q3_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-8b-cpt-sea-lionv2-base.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-8b-cpt-sea-lionv2-base.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_0.gguf) | Q4_0 | 3.03GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_K_S.gguf) | Q4_K_S | 1.52GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_K.gguf) | Q4_K | 0.36GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_K_M.gguf) | Q4_K_M | 0.16GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_1.gguf) | Q4_1 | 0.01GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_0.gguf) | Q5_0 | 0.17GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_K_S.gguf) | Q5_K_S | 1.65GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3-8b-cpt-sea-lionv2-base.Q6_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3-8b-cpt-sea-lionv2-base.Q8_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q8_0.gguf) | Q8_0 | 7.95GB |
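
One possible way to run these files locally is via `llama-cpp-python`; the sketch below is illustrative only (the package choice and the Q4_K_M quant are assumptions, and any file in the table above works the same way).

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF files listed above (Q4_K_M chosen as an example).
model_path = hf_hub_download(
    repo_id="RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf",
    filename="llama3-8b-cpt-sea-lionv2-base.Q4_K_M.gguf",
)

# Load the quantised model and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("Southeast Asia is", max_tokens=64)
print(output["choices"][0]["text"])
```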
Original model description:
---
language:
- en
- id
- ta
- th
- vi
license: llama3
---
# Llama3 8B CPT SEA-LIONv2
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
This is the card for the Llama3 8B CPT SEA-LIONv2 base model, which has undergone continued pre-training from the [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model.
SEA-LION stands for <i>Southeast Asian Languages In One Network</i>.
## Model Details
### Model Description
The continued pre-training data for the Llama3 8B CPT SEA-LIONv2 base model encompasses approximately 48B tokens.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages:** English, Indonesian, Thai, Vietnamese, Tamil
- **License:** [Llama3 Community License](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
For tokenization, the model employs the default tokenizer used in Meta-Llama-3-8B-Instruct.
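
Because the tokenizer is the stock Llama3 one, the base model can be loaded with plain `transformers`; a minimal sketch is shown below (the generation settings are illustrative only).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/llama3-8b-cpt-sea-lionv2-base"

# Load the default Llama3 tokenizer and the continued-pretrained weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Jakarta is the capital of", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```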
### Benchmark Performance
We evaluated the Llama3 8B CPT SEA-LIONv2 base model on general language capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities in SEA languages, we employed the [BHASA evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
The evaluation was done **five-shot** with native prompts; only a sample of 100-1000 instances per dataset was used, as per the setting described in the paper.
**BHASA**
To be released soon
We also evaluated the model on English capabilities using tasks from the Open LLM Leaderboard.
**English**
| Model | ARC | BBH | HellaSwag | MMLU | GSM8k | Average |
| ----------------------------------------- |:-----:|:-----:|:---------:|:-----:|:-----:|:-------:|
| Qwen/Qwen2-7B | 61.86 | 53.10 | 80.63 | 70.45 | 78.09 | 68.83 |
| aisingapore/llama3-8b-cpt-sea-lionv2-base | 58.87 | 47.70 | 81.14 | 63.11 | 50.49 | 60.26 |
| meta-llama/Meta-Llama-3-8B | 57.85 | 46.09 | 81.89 | 65.10 | 45.34 | 59.25 |
| mistralai/Mistral-7B-v0.3 | 59.56 | 44.89 | 82.97 | 62.36 | 33.36 | 56.63 |
| Sail/Sailor-7B | 50.34 | 35.65 | 76.11 | 52.80 | 33.81 | 49.74 |
## Training Details
### Data
The Llama3 8B CPT SEA-LIONv2 base model underwent continued pre-training on 48B tokens of the following data:
| Data Source | Unique Tokens (B) | Multiplier | Total Tokens (B) | Percentage (%) |
|---------------------------|:-----------------:|:----------:|:----------------:|:--------------:|
| Dolma RefinedWeb - English| 7.650 | 1 | 7.650 | 15.90 |
| Dolma C4 - English | 1.160 | 1 | 1 | 9.21 |
| Dolma Reddit - English | 1.339 | 1 | 14.7 | 2.42 |
| Dolma Semantic Scholar | 0.959 | 1 | 2.9 | 2.79 |
| Dolma arXiv | 0.469 | 1 | 5.3 | 1.99 |
| Dolma StarCoder | 4.422 | 1 | 4.9 | 0.98 |
| SEA-LION Pile - Indonesian| 3.4 | 1 | 6.8 | 14.17 |
| Wiki* - Indonesian | 0.3 | 4 | 1.2 | 2.50 |
| SEA-LION Pile - Tamil | 5.6 | 1 | 5.6 | 11.67 |
| Wiki* + News - Tamil | 0.6 | 4 | 2.4 | 5.00 |
| SEA-LION Pile - Thai | 2.28 | 1 | 2.28 | 4.75 |
| WangChanBERTa - Thai | 5 | 1 | 5 | 10.42 |
| Wiki* - Thai | 0.18 | 4 | 0.72 | 1.50 |
| SEA-LION Pile - Vietnamese| 6.76 | 1 | 6.76 | 14.08 |
| Wiki* - Vietnamese | 0.31 | 4 | 1.24 | 2.58 |
Note:
- All token counts are computed using the Llama3 tokenizer
- Wiki* sources include Wikipedia, Wiki Books, Wiki Source and Wiki Voyage
- Tamil news is sourced with permission from [Seithi](https://seithi.mediacorp.sg/)
### Infrastructure
Llama3 8B CPT SEA-LIONv2 was trained using [MosaicML Composer](https://github.com/mosaicml/composer)
on the following hardware:
| Training Details | Llama3 8B CPT SEA-LIONv2 |
|----------------------|:--------------------:|
| AWS EC2 p5d.24xlarge | 8 instances |
| Nvidia H100 80GB GPU | 64 |
| Training Duration | 2 days |
### Configuration
| HyperParameter | Llama3 8B CPT SEA-LIONv2 |
|-------------------|:--------------------:|
| Precision | bfloat16 |
| Optimizer | decoupled_adamw |
| Scheduler | weight_stable_decay |
| Learning Rate | 1.0e-5 |
| Global Batch Size | 512 |
| Micro Batch Size | 2 |
## The Team
Choa Esther<br>
Cheng Nicholas<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Li Yier<br>
Liu Bing Jie Darius<br>
Lovenia Holy<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Teng Walter<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
## Acknowledgements
AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the base model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claim, damages, or other liability
arising from the use of the released weights and code.
## References
```bibtex
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"CHIA"
] | Non_BioNLP |
adipanda/gojo-simpletuner-lora-2 | adipanda | text-to-image | [
"diffusers",
"flux",
"flux-diffusers",
"text-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"lycoris",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | 1,728,335,014,000 | 2024-10-10T17:03:34 | 1 | 0 | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
inference: true
widget:
- text: unconditional (blank prompt)
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_0_0.png
- text: A scene from Jujutsu Kaisen. Satoru Gojo holding a sign that says 'I LOVE
PROMPTS!', he is standing full body on a beach at sunset. He is wearing a dark
high collared outfit and a black blindfold. The setting sun casts a dynamic shadow
on his face.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_1_0.png
- text: A scene from Jujutsu Kaisen. Satoru Gojo jumping out of a propeller airplane,
sky diving. He looks excited and his hair is blowing in the wind. The sky is clear
and blue, there are birds pictured in the distance.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_2_0.png
- text: 'A scene from Jujutsu Kaisen. Satoru Gojo, with vibrant blue eyes, spinning
a basketball on his finger on a basketball court. He is wearing a lakers jersey
with the #12 on it. The basketball hoop and crowd are in the background cheering
him. He is smiling.'
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_3_0.png
- text: A scene from Jujutsu Kaisen. Satoru Gojo is wearing a suit in an office shaking
the hand of a business woman. The woman has purple hair and is wearing professional
attire. There is a Google logo in the background. It is during daytime, and the
overall sentiment is one of accomplishment.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_4_0.png
- text: A scene from Jujutsu Kaisen. Satoru Gojo is fighting a large brown grizzly
bear, deep in a forest. The bear is tall and standing on two legs, roaring. The
bear is also wearing a crown because it is the king of all bears. Around them
are tall trees and other animals watching.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_5_0.png
---
# gojo-simpletuner-lora-2
This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
No validation prompt was used during training.
## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024x1024`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 262
- Training steps: 29100
- Learning rate: 5e-05
- Effective batch size: 8
- Micro-batch size: 8
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: Yes: int8-quanto
- Xformers: Not used
- LyCORIS Config:
```json
{
"algo": "lokr",
"multiplier": 1.0,
"linear_dim": 10000,
"linear_alpha": 1,
"factor": 12,
"apply_preset": {
"target_module": [
"Attention",
"FeedForward"
],
"module_algo_map": {
"Attention": {
"factor": 12
},
"FeedForward": {
"factor": 6
}
}
}
}
```
## Datasets
### gojo-512
- Repeats: 2
- Total number of images: 291
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
## Inference
```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'pytorch_lora_weights.safetensors' # you will have to download this manually
lora_scale = 1.0

# Load the base pipeline first (bf16 is typical for FLUX) so the LyCORIS
# weights can be merged into its transformer.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()
prompt = "An astronaut is riding a horse through the jungles of Thailand."
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=1024,
height=1024,
guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
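
Rather than downloading the adapter file by hand, it can be fetched programmatically; a small sketch, assuming the weights are stored under the filename referenced above:

```python
from huggingface_hub import hf_hub_download

# Fetch the LyCORIS weights referenced as `adapter_id` in the snippet above.
adapter_id = hf_hub_download(
    repo_id="adipanda/gojo-simpletuner-lora-2",
    filename="pytorch_lora_weights.safetensors",
)
```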
| [
"BEAR"
] | Non_BioNLP |
nhung03/e5f1cb30-0283-4f93-a5f4-af1062c367da | nhung03 | null | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,736,500,882,000 | 2025-01-10T10:12:39 | 1 | 0 | ---
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
library_name: peft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e5f1cb30-0283-4f93-a5f4-af1062c367da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0d8fe5cf126e320c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0d8fe5cf126e320c_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/e5f1cb30-0283-4f93-a5f4-af1062c367da
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0d8fe5cf126e320c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4b810264-dad3-481e-b39e-87abe60a31a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4b810264-dad3-481e-b39e-87abe60a31a9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e5f1cb30-0283-4f93-a5f4-af1062c367da
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the dataset described in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.0431
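
A minimal inference sketch, assuming the adapter was published under the `hub_model_id` shown in the configuration above (the prompt is illustrative only):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28"
adapter_id = "nhung03/e5f1cb30-0283-4f93-a5f4-af1062c367da"

# Load the base model, then attach the LoRA adapter trained in this run.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Does aspirin reduce cardiovascular risk? <context here>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```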
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0001 | 0.0080 | 200 | 0.0431 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 | [
"PUBMEDQA"
] | BioNLP |
SilasK/llama-7b-medqa_version_9 | SilasK | null | [
"peft",
"tensorboard",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,710,496,540,000 | 2024-03-17T01:17:49 | 0 | 0 | ---
base_model: huggyllama/llama-7b
library_name: peft
license: other
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama-7b-medqa_version_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-medqa_version_9
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0 | [
"MEDQA"
] | Non_BioNLP |
Tune-A-Video-library/mo-di-bear-guitar | Tune-A-Video-library | text-to-video | [
"diffusers",
"tune-a-video",
"text-to-video",
"arxiv:2212.11565",
"arxiv:2112.10752",
"base_model:nitrosocke/mo-di-diffusion",
"base_model:finetune:nitrosocke/mo-di-diffusion",
"license:creativeml-openrail-m",
"diffusers:TuneAVideoPipeline",
"region:us"
] | 1,675,434,677,000 | 2023-02-09T03:07:51 | 21 | 22 | ---
base_model: nitrosocke/mo-di-diffusion
license: creativeml-openrail-m
tags:
- tune-a-video
- text-to-video
- diffusers
training_prompt: A bear is playing guitar.
inference: false
---
# Tune-A-Video - Modern Disney
## Model Description
- Base model: [nitrosocke/mo-di-diffusion](https://huggingface.co/nitrosocke/mo-di-diffusion)
- Training prompt: a bear is playing guitar.

## Samples

Test prompt: a [handsome prince/magical princess/rabbit/baby] is playing guitar, modern disney style.
## Usage
Clone the github repo
```bash
git clone https://github.com/showlab/Tune-A-Video.git
```
Run inference code
```python
from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
from tuneavideo.models.unet import UNet3DConditionModel
from tuneavideo.util import save_videos_grid
import torch
pretrained_model_path = "nitrosocke/mo-di-diffusion"
unet_model_path = "Tune-A-Video-library/mo-di-bear-guitar"
unet = UNet3DConditionModel.from_pretrained(unet_model_path, subfolder='unet', torch_dtype=torch.float16).to('cuda')
pipe = TuneAVideoPipeline.from_pretrained(pretrained_model_path, unet=unet, torch_dtype=torch.float16).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
prompt = "a magical princess is playing guitar, modern disney style"
video = pipe(prompt, video_length=8, height=512, width=512, num_inference_steps=50, guidance_scale=7.5).videos
save_videos_grid(video, f"./{prompt}.gif")
```
## Related Papers:
- [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
- [Stable Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models
| [
"BEAR"
] | Non_BioNLP |
RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 1,719,165,486,000 | 2024-06-23T18:07:21 | 83 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Healix-1.1B-V1-Chat-dDPO - GGUF
- Model creator: https://huggingface.co/health360/
- Original model: https://huggingface.co/health360/Healix-1.1B-V1-Chat-dDPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Healix-1.1B-V1-Chat-dDPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q2_K.gguf) | Q2_K | 0.4GB |
| [Healix-1.1B-V1-Chat-dDPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [Healix-1.1B-V1-Chat-dDPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [Healix-1.1B-V1-Chat-dDPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [Healix-1.1B-V1-Chat-dDPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [Healix-1.1B-V1-Chat-dDPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q3_K.gguf) | Q3_K | 0.51GB |
| [Healix-1.1B-V1-Chat-dDPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [Healix-1.1B-V1-Chat-dDPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [Healix-1.1B-V1-Chat-dDPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [Healix-1.1B-V1-Chat-dDPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q4_0.gguf) | Q4_0 | 0.59GB |
| [Healix-1.1B-V1-Chat-dDPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [Healix-1.1B-V1-Chat-dDPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [Healix-1.1B-V1-Chat-dDPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q4_K.gguf) | Q4_K | 0.62GB |
| [Healix-1.1B-V1-Chat-dDPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [Healix-1.1B-V1-Chat-dDPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q4_1.gguf) | Q4_1 | 0.65GB |
| [Healix-1.1B-V1-Chat-dDPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q5_0.gguf) | Q5_0 | 0.71GB |
| [Healix-1.1B-V1-Chat-dDPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [Healix-1.1B-V1-Chat-dDPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q5_K.gguf) | Q5_K | 0.73GB |
| [Healix-1.1B-V1-Chat-dDPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [Healix-1.1B-V1-Chat-dDPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q5_1.gguf) | Q5_1 | 0.77GB |
| [Healix-1.1B-V1-Chat-dDPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q6_K.gguf) | Q6_K | 0.84GB |
| [Healix-1.1B-V1-Chat-dDPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf/blob/main/Healix-1.1B-V1-Chat-dDPO.Q8_0.gguf) | Q8_0 | 1.09GB |
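As a minimal sketch of how one of the files above can be used locally (this assumes the `huggingface_hub` and `llama-cpp-python` packages are installed; the Q4_K_M file is picked purely as a common size/quality trade-off, and the prompt follows the Alpaca-style format noted in the original model description below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama # llama-cpp-python bindings

# Fetch one quant from this repository
model_path = hf_hub_download(
repo_id="RichardErkhov/health360_-_Healix-1.1B-V1-Chat-dDPO-gguf",
filename="Healix-1.1B-V1-Chat-dDPO.Q4_K_M.gguf",
)

# Load the GGUF file and generate a short completion
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm(
"### Instruction:\nName two common symptoms of dehydration.\n\n### Response:\n",
max_tokens=128,
)
print(out["choices"][0]["text"])
```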
Original model description:
---
language:
- en
license: apache-2.0
tags:
- medical
- biology
- chemistry
- text-generation-inference
datasets:
- krvhrv/Healix-Medical-Shot
model-index:
- name: Healix-1.1B-V1-Chat-dDPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 30.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=health360/Healix-1.1B-V1-Chat-dDPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 44.78
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=health360/Healix-1.1B-V1-Chat-dDPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=health360/Healix-1.1B-V1-Chat-dDPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.55
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=health360/Healix-1.1B-V1-Chat-dDPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=health360/Healix-1.1B-V1-Chat-dDPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=health360/Healix-1.1B-V1-Chat-dDPO
name: Open LLM Leaderboard
---
# Healix 1.1B Model Card
## Model Description
Healix 1.1B is a state-of-the-art large language model specifically designed for medical applications. With 1.1 billion parameters, it has been trained on a vast corpus of medical literature to provide accurate and reliable responses to complex medical queries. This model aims to assist healthcare professionals and researchers by offering insights derived from medical data.
## Training Data
The model leverages an extensive compilation of medical literature, including research papers, clinical trial reports, and textbooks, ensuring a broad understanding of medical topics.
## Intended Use
This model is designed for medical research, clinical support, and healthcare applications. It serves to enhance medical text generation, query response, and evidence-based information dissemination. It is not a substitute for professional medical consultation.
## Limitations
While Healix 1.1B offers advanced medical insights, it has limitations in data quality and representativeness, and may inadvertently produce biased or incorrect information.
## Performance
Healix 1.1B demonstrated a remarkable accuracy of 64%, outperforming the LLAMA 2 7B model, which achieved an accuracy of 62% despite its larger size of 7 billion parameters. This highlights Healix 1.1B's superior ability to handle real emergency-focused medical questions, showcasing the effectiveness of specialized training and architecture in domain-specific applications.
## Ethical Considerations
Users are urged to use Healix 1.1B responsibly, considering the ethical implications, patient privacy, and data security. The model's outputs should be used as a supplementary information source alongside professional medical judgment.
## Papers
Details on the development, training, and evaluation of Healix 1.1B will be available in our forthcoming publications, offering insights into its creation and the advancements it brings to medical informatics.
### Input Format
Use the Alpaca prompt format.
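For reference, a prompt can be assembled with the standard Alpaca template; the helper below is only an illustration (the exact system line used during Healix fine-tuning is not documented here, so treat it as an assumption):
```python
# Hypothetical helper showing the standard Alpaca instruction template.
def build_alpaca_prompt(instruction: str) -> str:
return (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
f"### Instruction:\n{instruction}\n\n"
"### Response:\n"
)

print(build_alpaca_prompt("List three warning signs of a stroke."))
```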
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_health360__Healix-1.1B-V1-Chat-dDPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |33.00|
|AI2 Reasoning Challenge (25-Shot)|30.55|
|HellaSwag (10-Shot) |44.78|
|MMLU (5-Shot) |24.64|
|TruthfulQA (0-shot) |41.55|
|Winogrande (5-shot) |56.51|
|GSM8k (5-shot) | 0.00|
| [
"MEDICAL DATA"
] | BioNLP |
mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"biology",
"medical",
"healthcare",
"en",
"dataset:HPAI-BSC/Aloe-Beta-General-Collection",
"dataset:HPAI-BSC/chain-of-diagnosis",
"dataset:HPAI-BSC/MedS-Ins",
"dataset:HPAI-BSC/ultramedical",
"dataset:HPAI-BSC/pubmedqa-cot-llama31",
"dataset:HPAI-BSC/medqa-cot-llama31",
"dataset:HPAI-BSC/medmcqa-cot-llama31",
"dataset:HPAI-BSC/headqa-cot-llama31",
"dataset:HPAI-BSC/MMLU-medical-cot-llama31",
"dataset:HPAI-BSC/Polymed-QA",
"base_model:HPAI-BSC/Qwen2.5-Aloe-Beta-7B",
"base_model:quantized:HPAI-BSC/Qwen2.5-Aloe-Beta-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 1,733,951,272,000 | 2024-12-11T22:04:29 | 678 | 1 | ---
base_model: HPAI-BSC/Qwen2.5-Aloe-Beta-7B
datasets:
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/chain-of-diagnosis
- HPAI-BSC/MedS-Ins
- HPAI-BSC/ultramedical
- HPAI-BSC/pubmedqa-cot-llama31
- HPAI-BSC/medqa-cot-llama31
- HPAI-BSC/medmcqa-cot-llama31
- HPAI-BSC/headqa-cot-llama31
- HPAI-BSC/MMLU-medical-cot-llama31
- HPAI-BSC/Polymed-QA
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/Aloe-Beta-General-Collection
language:
- en
library_name: transformers
license: apache-2.0
tags:
- biology
- medical
- healthcare
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
mogaio/pr_ebsa_fr_tran_merged25_e1_beginning_offsets | mogaio | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"region:us"
] | 1,702,666,942,000 | 2023-12-15T19:03:37 | 73 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: setfit
metrics:
- accuracy_score
- classification_report
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Adil Hussain
Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin
Shah à l''époque où il fréquentait l''École nationale d''art dramatique'
- text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un
avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs
de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré
que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en
accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates
aspirent à renverser six circonscriptions détenues par les républicains que M.
Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés
de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine
Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la
conférence du parti démocrate à la Chambre des représentants,
Suite à la page suivante
a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions
de dollars aux campagnes dans les circonscriptions de New York Des problèmes à
venir pour les démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les
démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge.
Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune
issue ne soit en vue : les retombées potentielles pour leur parti lors des élections
de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils
parlent d''immigration - comme les démocrates le font pour l''avortement - et
sont clairement à l''attaque sur la question des migrants à New York, tandis que
les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication
pour le Centre de politique de l''Université de Virginie, au réseau USA Today
Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud
depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville,
et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au
nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux
frais de la ville Les démocrates doivent y remporter des victoires pour gagner
cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain
président de la Chambre des représentants Les publicités d''attaque des républicains
s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images
télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams
et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute
et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac
Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales
à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique
de la crise des migrants, soulignant que les élections de 2024 n''auront lieu
que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient
se poser'
- text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA
H-1B AUX ETATS-UNIS
Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy,
candidat républicain indien-américain à l''élection présidentielle, a promis de
"vider" le système basé sur la loterie et de le remplacer par un système d''admission
méritocratique s''il remporte les élections présidentielles de 2024'
- text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover
("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris
Owen Donald Glover
("Queer as Folk") a 54 ans. a 54 ans. Acteur
("Je sais ce que vous avez fait l''été dernier") a 50 ans'
- text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche.
"Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi
en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de
ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient
même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne
peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens
les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi
réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart,
ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago,
voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations
de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection
américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé
Howard, qui était le roi de tous les médias, en prince Harry de tous les médias.
Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission
de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire
type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous
avec lui ?"
M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre
de ses sketches à l''antenne, a été un critique virulent de Trump tout au long
de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à
nouveau en 2024.
En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu
l''élection ?"
Il a poursuivi en qualifiant les partisans de Trump de "nigauds".
"Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il
y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré
Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface
de la terre, pourquoi traîniez-vous avec lui ?"
M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas
soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes
qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke"
comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué
ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus
récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans
un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump
a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon
OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern
chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego".
Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela
signifie, ou que je soutiens les personnes qui veulent être transgenres ou que
je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine.
"Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs.
L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course
à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé
sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy
Failla critique Stern.
"L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire
la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne",
a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre
Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral'
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy_score
value: 0.9434954007884363
name: Accuracy_Score
- type: classification_report
value:
'0':
precision: 0.9361702127659575
recall: 0.9322033898305084
f1-score: 0.9341825902335456
support: 236
'1':
precision: 0.9333333333333333
recall: 0.9302325581395349
f1-score: 0.9317803660565723
support: 301
'2':
precision: 0.9646017699115044
recall: 0.9732142857142857
f1-score: 0.9688888888888889
support: 224
accuracy: 0.9434954007884363
macro avg:
precision: 0.9447017720035985
recall: 0.945216744561443
f1-score: 0.9449506150596689
support: 761
weighted avg:
precision: 0.9434169513880108
recall: 0.9434954007884363
f1-score: 0.9434482162802315
support: 761
name: Classification_Report
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique (a minimal sketch follows the list below) that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
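As a rough, hedged sketch of that two-stage procedure with the `setfit` API (the three-sentence dataset below is a stand-in for illustration only; the real training data and hyperparameters for this model are documented further down this card):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny stand-in dataset with integer class ids (not this model's real training data)
train_ds = Dataset.from_dict({
"text": [
"Les résultats trimestriels dépassent toutes les attentes.",
"La réunion est prévue lundi matin.",
"Les licenciements ont provoqué la colère des employés.",
],
"label": [2, 1, 0],
})

# Start from the same multilingual Sentence Transformer body used by this model
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# trainer.train() runs both stages: contrastive fine-tuning of the body,
# then fitting the classification head on the resulting embeddings
args = TrainingArguments(batch_size=8, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()

print(model.predict(["Une excellente nouvelle pour les marchés."]))
```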
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> |
| neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. 
Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> |
| obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. 
Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> |
## Evaluation
### Metrics
Overall accuracy: **0.9435** on 761 test examples.

| Class | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| 0 | 0.9362 | 0.9322 | 0.9342 | 236 |
| 1 | 0.9333 | 0.9302 | 0.9318 | 301 |
| 2 | 0.9646 | 0.9732 | 0.9689 | 224 |
| macro avg | 0.9447 | 0.9452 | 0.9450 | 761 |
| weighted avg | 0.9434 | 0.9435 | 0.9434 | 761 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e1_beginning_offsets")
# Run inference
preds = model(
    "Adil Hussain\n"
    "Adil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin "
    "Shah à l'époque où il fréquentait l'École nationale d'art dramatique"
)
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 9 | 247.2638 | 2089 |
| Label | Training Sample Count |
|:------|:----------------------|
| neg | 913 |
| obj | 1216 |
| pos | 911 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 1
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.3703 | - |
| 0.0658 | 50 | 0.3145 | - |
| 0.1316 | 100 | 0.1839 | - |
| 0.1974 | 150 | 0.2558 | - |
| 0.2632 | 200 | 0.2683 | - |
| 0.3289 | 250 | 0.1572 | - |
| 0.3947 | 300 | 0.1953 | - |
| 0.4605 | 350 | 0.171 | - |
| 0.5263 | 400 | 0.2326 | - |
| 0.5921 | 450 | 0.1762 | - |
| 0.6579 | 500 | 0.2818 | - |
| 0.7237 | 550 | 0.2733 | - |
| 0.7895 | 600 | 0.195 | - |
| 0.8553 | 650 | 0.2104 | - |
| 0.9211 | 700 | 0.2124 | - |
| 0.9868 | 750 | 0.0818 | - |
| 1.0526 | 800 | 0.1046 | - |
| 1.1184 | 850 | 0.1633 | - |
| 1.1842 | 900 | 0.3207 | - |
| 1.25 | 950 | 0.2703 | - |
| 1.3158 | 1000 | 0.1934 | - |
| 1.3816 | 1050 | 0.2547 | - |
| 1.4474 | 1100 | 0.0933 | - |
| 1.5132 | 1150 | 0.2102 | - |
| 1.5789 | 1200 | 0.0699 | - |
| 1.6447 | 1250 | 0.1778 | - |
| 1.7105 | 1300 | 0.1796 | - |
| 1.7763 | 1350 | 0.0221 | - |
| 1.8421 | 1400 | 0.2154 | - |
| 1.9079 | 1450 | 0.1683 | - |
| 1.9737 | 1500 | 0.3096 | - |
| 2.0395 | 1550 | 0.201 | - |
| 2.1053 | 1600 | 0.1954 | - |
| 2.1711 | 1650 | 0.2301 | - |
| 2.2368 | 1700 | 0.1141 | - |
| 2.3026 | 1750 | 0.1949 | - |
| 2.3684 | 1800 | 0.164 | - |
| 2.4342 | 1850 | 0.2307 | - |
| 2.5 | 1900 | 0.1912 | - |
| 2.5658 | 1950 | 0.2349 | - |
| 2.6316 | 2000 | 0.0922 | - |
| 2.6974 | 2050 | 0.0702 | - |
| 2.7632 | 2100 | 0.1089 | - |
| 2.8289 | 2150 | 0.1711 | - |
| 2.8947 | 2200 | 0.1432 | - |
| 2.9605 | 2250 | 0.2739 | - |
| 3.0263 | 2300 | 0.1889 | - |
| 3.0921 | 2350 | 0.1036 | - |
| 3.1579 | 2400 | 0.1372 | - |
| 3.2237 | 2450 | 0.028 | - |
| 3.2895 | 2500 | 0.1739 | - |
| 3.3553 | 2550 | 0.142 | - |
| 3.4211 | 2600 | 0.0838 | - |
| 3.4868 | 2650 | 0.0657 | - |
| 3.5526 | 2700 | 0.0054 | - |
| 3.6184 | 2750 | 0.0426 | - |
| 3.6842 | 2800 | 0.1974 | - |
| 3.75 | 2850 | 0.0279 | - |
| 3.8158 | 2900 | 0.1326 | - |
| 3.8816 | 2950 | 0.1614 | - |
| 3.9474 | 3000 | 0.1251 | - |
| 4.0132 | 3050 | 0.1174 | - |
| 4.0789 | 3100 | 0.1948 | - |
| 4.1447 | 3150 | 0.0555 | - |
| 4.2105 | 3200 | 0.0064 | - |
| 4.2763 | 3250 | 0.064 | - |
| 4.3421 | 3300 | 0.0013 | - |
| 4.4079 | 3350 | 0.135 | - |
| 4.4737 | 3400 | 0.0574 | - |
| 4.5395 | 3450 | 0.174 | - |
| 4.6053 | 3500 | 0.2199 | - |
| 4.6711 | 3550 | 0.387 | - |
| 4.7368 | 3600 | 0.114 | - |
| 4.8026 | 3650 | 0.0853 | - |
| 4.8684 | 3700 | 0.0325 | - |
| 4.9342 | 3750 | 0.019 | - |
| 5.0 | 3800 | 0.0572 | - |
| 0.0013 | 1 | 0.1435 | - |
| 0.0658 | 50 | 0.0969 | - |
| 0.1316 | 100 | 0.1085 | - |
| 0.1974 | 150 | 0.0271 | - |
| 0.2632 | 200 | 0.0138 | - |
| 0.3289 | 250 | 0.058 | - |
| 0.3947 | 300 | 0.1205 | - |
| 0.4605 | 350 | 0.0788 | - |
| 0.5263 | 400 | 0.1449 | - |
| 0.5921 | 450 | 0.0383 | - |
| 0.6579 | 500 | 0.0338 | - |
| 0.7237 | 550 | 0.1253 | - |
| 0.7895 | 600 | 0.069 | - |
| 0.8553 | 650 | 0.104 | - |
| 0.9211 | 700 | 0.0462 | - |
| 0.9868 | 750 | 0.1975 | - |
| 1.0526 | 800 | 0.0241 | - |
| 1.1184 | 850 | 0.0426 | - |
| 1.1842 | 900 | 0.0519 | - |
| 1.25 | 950 | 0.0815 | - |
| 1.3158 | 1000 | 0.1839 | - |
| 1.3816 | 1050 | 0.0198 | - |
| 1.4474 | 1100 | 0.0128 | - |
| 1.5132 | 1150 | 0.1645 | - |
| 1.5789 | 1200 | 0.0019 | - |
| 1.6447 | 1250 | 0.0557 | - |
| 1.7105 | 1300 | 0.0098 | - |
| 1.7763 | 1350 | 0.001 | - |
| 1.8421 | 1400 | 0.1557 | - |
| 1.9079 | 1450 | 0.1286 | - |
| 1.9737 | 1500 | 0.094 | - |
| 2.0395 | 1550 | 0.0059 | - |
| 2.1053 | 1600 | 0.0227 | - |
| 2.1711 | 1650 | 0.0899 | - |
| 2.2368 | 1700 | 0.0053 | - |
| 2.3026 | 1750 | 0.0021 | - |
| 2.3684 | 1800 | 0.0114 | - |
| 2.4342 | 1850 | 0.1163 | - |
| 2.5 | 1900 | 0.0959 | - |
| 2.5658 | 1950 | 0.0252 | - |
| 2.6316 | 2000 | 0.0921 | - |
| 2.6974 | 2050 | 0.1159 | - |
| 2.7632 | 2100 | 0.0026 | - |
| 2.8289 | 2150 | 0.1211 | - |
| 2.8947 | 2200 | 0.1843 | - |
| 2.9605 | 2250 | 0.0014 | - |
| 3.0263 | 2300 | 0.0085 | - |
| 3.0921 | 2350 | 0.0839 | - |
| 3.1579 | 2400 | 0.2372 | - |
| 3.2237 | 2450 | 0.0213 | - |
| 3.2895 | 2500 | 0.0155 | - |
| 3.3553 | 2550 | 0.1128 | - |
| 3.4211 | 2600 | 0.0945 | - |
| 3.4868 | 2650 | 0.0917 | - |
| 3.5526 | 2700 | 0.0011 | - |
| 3.6184 | 2750 | 0.0024 | - |
| 3.6842 | 2800 | 0.0044 | - |
| 3.75 | 2850 | 0.121 | - |
| 3.8158 | 2900 | 0.0056 | - |
| 3.8816 | 2950 | 0.003 | - |
| 3.9474 | 3000 | 0.0899 | - |
| 4.0132 | 3050 | 0.0157 | - |
| 4.0789 | 3100 | 0.1188 | - |
| 4.1447 | 3150 | 0.001 | - |
| 4.2105 | 3200 | 0.0222 | - |
| 4.2763 | 3250 | 0.1209 | - |
| 4.3421 | 3300 | 0.1085 | - |
| 4.4079 | 3350 | 0.0054 | - |
| 4.4737 | 3400 | 0.0009 | - |
| 4.5395 | 3450 | 0.0015 | - |
| 4.6053 | 3500 | 0.003 | - |
| 4.6711 | 3550 | 0.0009 | - |
| 4.7368 | 3600 | 0.0003 | - |
| 4.8026 | 3650 | 0.0009 | - |
| 4.8684 | 3700 | 0.03 | - |
| 4.9342 | 3750 | 0.1206 | - |
| 5.0 | 3800 | 0.0003 | - |
| 0.0013 | 1 | 0.2045 | - |
| 0.0658 | 50 | 0.0078 | - |
| 0.1316 | 100 | 0.0087 | - |
| 0.1974 | 150 | 0.0386 | - |
| 0.2632 | 200 | 0.1015 | - |
| 0.3289 | 250 | 0.0022 | - |
| 0.3947 | 300 | 0.0291 | - |
| 0.4605 | 350 | 0.0013 | - |
| 0.5263 | 400 | 0.0022 | - |
| 0.5921 | 450 | 0.1324 | - |
| 0.6579 | 500 | 0.113 | - |
| 0.7237 | 550 | 0.0011 | - |
| 0.7895 | 600 | 0.1723 | - |
| 0.8553 | 650 | 0.0049 | - |
| 0.9211 | 700 | 0.206 | - |
| 0.9868 | 750 | 0.1683 | - |
| 1.0526 | 800 | 0.0954 | - |
| 1.1184 | 850 | 0.018 | - |
| 1.1842 | 900 | 0.1854 | - |
| 1.25 | 950 | 0.0342 | - |
| 1.3158 | 1000 | 0.0015 | - |
| 1.3816 | 1050 | 0.0062 | - |
| 1.4474 | 1100 | 0.1187 | - |
| 1.5132 | 1150 | 0.0048 | - |
| 1.5789 | 1200 | 0.0011 | - |
| 1.6447 | 1250 | 0.002 | - |
| 1.7105 | 1300 | 0.092 | - |
| 1.7763 | 1350 | 0.1245 | - |
| 1.8421 | 1400 | 0.0009 | - |
| 1.9079 | 1450 | 0.1185 | - |
| 1.9737 | 1500 | 0.0017 | - |
| 2.0395 | 1550 | 0.008 | - |
| 2.1053 | 1600 | 0.0049 | - |
| 2.1711 | 1650 | 0.0083 | - |
| 2.2368 | 1700 | 0.0026 | - |
| 2.3026 | 1750 | 0.0081 | - |
| 2.3684 | 1800 | 0.0036 | - |
| 2.4342 | 1850 | 0.0016 | - |
| 2.5 | 1900 | 0.0017 | - |
| 2.5658 | 1950 | 0.0014 | - |
| 2.6316 | 2000 | 0.0017 | - |
| 2.6974 | 2050 | 0.002 | - |
| 2.7632 | 2100 | 0.1022 | - |
| 2.8289 | 2150 | 0.0004 | - |
| 2.8947 | 2200 | 0.0007 | - |
| 2.9605 | 2250 | 0.0794 | - |
| 3.0263 | 2300 | 0.0183 | - |
| 3.0921 | 2350 | 0.0377 | - |
| 3.1579 | 2400 | 0.029 | - |
| 3.2237 | 2450 | 0.0003 | - |
| 3.2895 | 2500 | 0.0961 | - |
| 3.3553 | 2550 | 0.0008 | - |
| 3.4211 | 2600 | 0.0873 | - |
| 3.4868 | 2650 | 0.0501 | - |
| 3.5526 | 2700 | 0.0029 | - |
| 3.6184 | 2750 | 0.0008 | - |
| 3.6842 | 2800 | 0.0004 | - |
| 3.75 | 2850 | 0.0011 | - |
| 3.8158 | 2900 | 0.0518 | - |
| 3.8816 | 2950 | 0.0002 | - |
| 3.9474 | 3000 | 0.1115 | - |
| 4.0132 | 3050 | 0.0129 | - |
| 4.0789 | 3100 | 0.0005 | - |
| 4.1447 | 3150 | 0.0012 | - |
| 4.2105 | 3200 | 0.1086 | - |
| 4.2763 | 3250 | 0.0199 | - |
| 4.3421 | 3300 | 0.0004 | - |
| 4.4079 | 3350 | 0.0001 | - |
| 4.4737 | 3400 | 0.0832 | - |
| 4.5395 | 3450 | 0.0003 | - |
| 4.6053 | 3500 | 0.0041 | - |
| 4.6711 | 3550 | 0.1146 | - |
| 4.7368 | 3600 | 0.0027 | - |
| 4.8026 | 3650 | 0.0002 | - |
| 4.8684 | 3700 | 0.0544 | - |
| 4.9342 | 3750 | 0.0002 | - |
| 5.0 | 3800 | 0.0046 | - |
| 0.0013 | 1 | 0.0015 | - |
| 0.0658 | 50 | 0.1973 | - |
| 0.1316 | 100 | 0.0106 | - |
| 0.1974 | 150 | 0.0744 | - |
| 0.2632 | 200 | 0.1033 | - |
| 0.3289 | 250 | 0.0425 | - |
| 0.3947 | 300 | 0.1125 | - |
| 0.4605 | 350 | 0.0018 | - |
| 0.5263 | 400 | 0.0019 | - |
| 0.5921 | 450 | 0.0002 | - |
| 0.6579 | 500 | 0.0007 | - |
| 0.7237 | 550 | 0.1393 | - |
| 0.7895 | 600 | 0.0002 | - |
| 0.8553 | 650 | 0.0043 | - |
| 0.9211 | 700 | 0.0339 | - |
| 0.9868 | 750 | 0.0002 | - |
| 0.0013 | 1 | 0.0007 | - |
| 0.0658 | 50 | 0.0419 | - |
| 0.1316 | 100 | 0.0068 | - |
| 0.1974 | 150 | 0.1401 | - |
| 0.2632 | 200 | 0.0423 | - |
| 0.3289 | 250 | 0.1122 | - |
| 0.3947 | 300 | 0.0037 | - |
| 0.4605 | 350 | 0.005 | - |
| 0.5263 | 400 | 0.0006 | - |
| 0.5921 | 450 | 0.0006 | - |
| 0.6579 | 500 | 0.0016 | - |
| 0.7237 | 550 | 0.1244 | - |
| 0.7895 | 600 | 0.0016 | - |
| 0.8553 | 650 | 0.0028 | - |
| 0.9211 | 700 | 0.002 | - |
| 0.9868 | 750 | 0.057 | - |
| 0.0013 | 1 | 0.1396 | - |
| 0.0658 | 50 | 0.0366 | - |
| 0.1316 | 100 | 0.0021 | - |
| 0.1974 | 150 | 0.1088 | - |
| 0.2632 | 200 | 0.0449 | - |
| 0.3289 | 250 | 0.0187 | - |
| 0.3947 | 300 | 0.0017 | - |
| 0.4605 | 350 | 0.1262 | - |
| 0.5263 | 400 | 0.0052 | - |
| 0.5921 | 450 | 0.1188 | - |
| 0.6579 | 500 | 0.0002 | - |
| 0.7237 | 550 | 0.0006 | - |
| 0.7895 | 600 | 0.0758 | - |
| 0.8553 | 650 | 0.025 | - |
| 0.9211 | 700 | 0.0052 | - |
| 0.9868 | 750 | 0.1985 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"CAS"
] | Non_BioNLP |
legalvn/paraphrase-multilingual-MiniLM-L12-v2-vn-171000 | legalvn | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:651725",
"loss:SoftmaxLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,733,306,755,000 | 2024-12-04T10:06:09 | 6 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:651725
- loss:SoftmaxLoss
widget:
- source_sentence: Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?
sentences:
- Chu kỳ kiểm định chất lượng giáo dục nghề nghiệp\n...\n2. Trường hợp cơ sở giáo
dục nghề nghiệp có ngành, nghề trọng điểm; chương trình đào tạo ngành, nghề trọng
điểm; cơ sở giáo dục nghề nghiệp và chương trình đào tạo các ngành, nghề phục
vụ yêu cầu công tác quản lý nhà nước phải thực hiện kiểm định chất lượng giáo
dục nghề nghiệp theo quy định tại điểm d khoản 3 Điều 65 của Luật Giáo dục nghề
nghiệp số 74/2014/QH13 ngày 27 tháng 11 năm 2014 nhưng không đạt tiêu chuẩn kiểm
định chất lượng giáo dục nghề nghiệp thì trong thời hạn 03 năm phải thực hiện
kiểm định lại.
- Vệ sinh môi trường, vệ sinh tòa nhà\n1. Trách nhiệm của các đơn vị, cán bộ, công
chức, viên chức, nhân viên và người lao động trong việc giữ gìn vệ sinh tại nơi
làm việc và khu vực công cộng:\na) Hàng ngày tự vệ sinh sàn nhà, bàn ghế, tủ,
các thiết bị được trang cấp và tổng vệ sinh phòng làm việc vào chiều thứ Sáu hàng
tuần;\nb) Có trách nhiệm thu gom rác thải trong phòng chuyển ra thùng rác đặt
tại các hành lang;\nc) Không đổ nước chè, cà phê, ….. xuống sàn nhà, hành lang,
tường nhà và khu vệ sinh;\nd) Nghiêm cấp hút thuốc lá trong phòng làm việc, phòng
họp, cầu thang máy, cầu thang bộ, tầng hầm;\nđ) Không khạc nhổ, bôi bẩn lên tường,
không vứt rác thải, gạt tàn thuốc lá, đầu lọc thuốc lá xuống sàn nhà và các khu
vực công cộng;\ne) Nghiêm cấm hái hoa, bẻ cành, dẫm lên thảm cỏ, nhổ cây trong
khuôn viên cơ quan.\ng) Nghiêm cấm mang chất độc hại vào cơ quan.\n…
- Nguyên tắc áp dụng\n1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối
tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một
mức phụ cấp ưu đãi theo nghề cao nhất.\n2. Công chức, viên chức đã hưởng phụ cấp
ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày
22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số
64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên
chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì
không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này.
- source_sentence: Số lượng thành viên Hội đồng khoa học và đào tạo là bao nhiêu?
sentences:
- 'Cấp Giấy chứng nhận chất lượng an toàn kỹ thuật và bảo vệ môi trường trong sản
xuất, lắp ráp ô tô, rơ moóc và sơ mi rơ moóc\n2.1. Trình tự thực hiện:\na) Nộp
hồ sơ TTHC:\n- Cơ sở sản xuất lập hồ sơ kiểm tra xe cơ giới theo quy định và nộp
đến Cục Đăng kiểm Việt Nam.\nb) Giải quyết TTHC:\n- Cục Đăng kiểm Việt Nam tiếp
nhận và kiểm tra thành phần hồ sơ kiểm tra xe cơ giới: nếu hồ sơ không đầy đủ
theo quy định thì hướng dẫn Cơ sở sản xuất hoàn thiện lại; Nếu hồ sơ đầy đủ theo
quy định thì thống nhất về thời gian và địa điểm thực hiện đánh giá điều kiện
kiểm tra chất lượng sản phẩm tại Cơ sở sản xuất;\n- Cục Đăng kiểm Việt Nam tiến
hành kiểm tra nội dung hồ sơ và thực hiện đánh giá điều kiện kiểm tra chất lượng
sản phẩm tại Cơ sở sản xuất theo quy định: Nếu chưa đạt yêu cầu thì thông báo
để Cơ sở sản xuất hoàn thiện lại; Nếu đạt yêu cầu thì cấp Giấy chứng nhận trong
thời hạn 03 ngày làm việc kể từ ngày kết thúc kiểm tra, đánh giá hồ sơ đầy đủ,
hợp lệ theo quy định và có kết quả đánh giá COP đạt yêu cầu;\n- Cơ sở sản xuất
nộp hồ sơ kiểm tra xe cơ giới và nhận kết quả trực tiếp tại trụ sở Cục Đăng kiểm
Việt Nam hoặc qua hệ thống bưu chính hoặc qua hệ thống dịch vụ công trực tuyến
hoặc qua hình thức phù hợp khác.\n...'
- Phiên họp Hội đồng khoa học\n1. Hội đồng khoa học họp định kỳ 06 tháng/01 lần.
Các phiên họp định kỳ phải có ít nhất 2/3 tổng số thành viên của Hội đồng khoa
học tham dự.\n2. Phiên họp đột xuất của Hội đồng khoa học được triệu tập theo
quyết định của Chủ tịch và phải có trên 1/2 số thành viên của Hội đồng khoa học
tham dự.\n3. Viện trưởng VKSND tối cao tham dự phiên họp của Hội đồng khoa học
khi thấy cần thiết.\n4. Tùy thuộc vào nội dung chương trình phiên họp, Chủ tịch
Hội đồng khoa học có thể quyết định mời các nhà khoa học trong và ngoài ngành
KSND tham gia phiên họp.\n5. Nội dung phiên họp, các tài liệu liên quan đến phiên
họp của Hội đồng khoa học phải được thông báo hoặc chuyển cho các Thành viên chậm
nhất là 3 ngày làm việc trước ngày họp, trừ trường hợp đột xuất.\n6. Hội đồng
khoa học thảo luận dân chủ, tập thể, công khai, quyết định theo đa số về những
vấn đề thuộc nội dung phiên họp và những vấn đề do Chủ tịch Hội đồng khoa học
nêu ra hoặc do các Thành viên đề nghị và được Chủ tịch Hội đồng khoa học chấp
thuận.\nChủ tịch Hội đồng khoa học chủ trì thảo luận và kết luận tại phiên họp.
Đối với những vấn đề phức tạp còn nhiều ý kiến khác nhau, Hội đồng khoa học tiến
hành biểu quyết. Những vấn đề được biểu quyết đạt trên 2/3 số phiếu của thành
viên có mặt hoặc trên 50% tổng số thành viên Hội đồng được coi là ý kiến chính
thức của Hội đồng khoa học. Các ý kiến khác được bảo lưu, ghi vào biên bản cuộc
họp.
- Hồ sơ, thủ tục công nhận liệt sĩ\n1. Người khi hy sinh đang thuộc quân đội, công
an quản lý thì Bộ Quốc phòng, Bộ Công an chịu trách nhiệm:\na) Hướng dẫn về quy
trình lập hồ sơ đề nghị công nhận liệt sĩ theo quy định.\nb) Có văn bản đề nghị
kèm hồ sơ gửi Bộ Lao động - Thương binh và Xã hội thẩm định trong thời gian không
quá 50 ngày kể từ ngày cơ quan, đơn vị trực tiếp quản lý người hy sinh xác lập,
hoàn thiện các giấy tờ quy định tại Điều 17 Nghị định này.
- source_sentence: Ban Tài chính Văn phòng Kiểm toán nhà nước thực hiện những chức
năng gì?
sentences:
- 'Tiếp nhận hồ sơ và trả kết quả\n...\n2.2.4. Lao động nam hoặc người chồng của
lao động nữ mang thai hộ nghỉ việc khi vợ sinh con: Bản sao giấy chứng sinh hoặc
bản sao giấy khai sinh hoặc trích lục khai sinh của con; trường hợp sinh con phải
phẫu thuật hoặc sinh con dưới 32 tuần tuổi mà giấy chứng sinh không thể hiện thì
có thêm giấy tờ của cơ sở khám bệnh, chữa bệnh thể hiện việc sinh con phải phẫu
thuật, sinh con dưới 32 tuần tuổi. Trường hợp con chết sau khi sinh mà chưa được
cấp giấy chứng sinh thì thay bằng trích sao hoặc tóm tắt hồ sơ bệnh án hoặc giấy
ra viện của người mẹ hoặc của lao động nữ mang thai hộ thể hiện con chết…'
- Việc tự giám sát chất lượng dịch vụ viễn thông của doanh nghiệp viễn thông\n1.
Các doanh nghiệp viễn thông được Bộ Thông tin và Truyền thông cấp giấy phép kinh
doanh dịch vụ viễn thông phải thường xuyên tự giám sát chất lượng dịch vụ đối
với tất cả các dịch vụ thuộc “Danh mục dịch vụ viễn thông bắt buộc quản lý chất
lượng” mà mình cung cấp.\n2. Trong trường hợp dịch vụ mà mình cung cấp có sự cố
thì doanh nghiệp viễn thông phải thực hiện báo cáo đột xuất như quy định tại Khoản
3 Điều 8 của Thông tư này.
- Cục Quản lý, giám sát bảo hiểm; Cục Quản lý Công sản; Cục Quản lý Giá; Cục Quản
lý Nợ và Tài chính đối ngoại; Cục Quản lý, giám sát Kế toán, Kiểm toán; Cục Quản
lý Công sản; Cục Tài chính doanh nghiệp và Vụ Tài chính ngân hàng chủ trì phối
hợp với Cục Tin học & Thống kê Tài chính xây dựng quy trình điện tử từng thủ tục
hành chính theo phạm vi quản lý đối với danh mục thủ tục hành chính để thực hiện
tích hợp trên Hệ thống thông tin Một cửa điện tử của Bộ Tài chính.
- source_sentence: Điều kiện để Giám đốc Học viện An ninh nhân dân được thăng cấp
bậc hàm trước thời hạn như thế nào?
sentences:
- Mức độ tự chủ và trách nhiệm\n- Có ý thức và tác phong nghề nghiệp đúng chuẩn
mực, có năng lực thực hiện công việc được giao; phương pháp làm việc khoa học,
biết phân tích và giải quyết các vấn đề mới về lĩnh vực chuyên môn nghề;\n- Gắn
bó nghề nghiệp; nghiêm chỉnh chấp hành quy chế, quy định của cơ quan, doanh nghiệp,
nơi đang công tác với ý thức tổ chức kỉ luật và tinh thần trách nhiệm cao trong
công việc;\n- Lập được các biện pháp an toàn và đảm bảo an toàn, vệ sinh lao động
trong quá trình làm việc; có ý thức trách nhiệm công dân, thái độ và đạo đức nghề
nghiệp đúng đắn, sẵn sàng nhận nhiệm vụ; tự tin, cầu tiến trong công việc; hợp
tác, thân thiện, khiêm tốn trong các mối quan hệ;\n- Tự chịu trách nhiệm về chất
lượng đối với kết quả công việc, sản phẩm do mình đảm nhiệm theo các tiêu chuẩn
và chịu một phần trách nhiệm đối với kết quả công việc, sản phẩm của tổ, nhóm;
- Tổ chức bộ máy\n...\n5. Tổng cục Hải quan có thể biệt phái công chức từ các đơn
vị thuộc và trực thuộc Tổng cục để bổ sung cán bộ chủ chốt, cán bộ kỹ thuật có
năng lực, kinh nghiệm cho Ban Quản lý dự án đầu tư xây dựng chuyên ngành của Tổng
cục Hải quan. Thời hạn biệt phái các công chức không quá 03 năm, trường hợp quá
03 năm mà chưa hoàn thành dự án thì Tổng cục Hải quan xem xét quyết định bổ sung
thời gian biệt phái.\nNhân sự tuyển dụng mới của Ban Quản lý dự án đầu tư xây
dựng chuyên ngành của Tổng cục Hải quan là viên chức hoặc hợp đồng lao động, thực
hiện theo quy định về chế độ tiền lương và các chế độ, chính sách đối với viên
chức và người lao động.\n...
- Biệt phái công chức\n...\n6. Không thực hiện biệt phái công chức nữ đang mang
thai hoặc nuôi con dưới 36 tháng tuổi.
- source_sentence: Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức,
viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?
sentences:
- Nhiệm vụ của giáo viên\n1. Thực hiện nhiệm vụ tổ chức các hoạt động dạy học, giáo
dục theo kế hoạch giáo dục của nhà trường và kế hoạch giáo dục của tổ chuyên môn;
quản lý học sinh trong các hoạt động giáo dục do nhà trường tổ chức; tham gia
các hoạt động chuyên môn; chịu trách nhiệm về chất lượng, hiệu quả giáo dục.\n2.
Trau dồi đạo đức, nêu cao tinh thần trách nhiệm, giữ gìn phẩm chất, danh dự, uy
tín của nhà giáo; gương mẫu trước học sinh; thương yêu, đối xử công bằng và tôn
trọng nhân cách của học sinh; bảo vệ các quyền và lợi ích chính đáng của học sinh;
đoàn kết, giúp đỡ đồng nghiệp.\n3. Học tập, rèn luyện để nâng cao sức khỏe, trình
độ chính trị, chuyên môn, nghiệp vụ, đổi mới phương pháp dạy học, giáo dục.\n4.
Tham gia tập huấn, bồi dưỡng chuyên môn, nghiệp vụ.\n5. Tham gia công tác phổ
cập giáo dục trung học cơ sở ở địa phương.\n6. Thực hiện nghĩa vụ công dân, các
quy định của pháp luật và của ngành Giáo dục, các quyết định của hiệu trưởng;
thực hiện nhiệm vụ do hiệu trưởng phân công, chịu sự kiểm tra, đánh giá của hiệu
trưởng và các cấp quản lý giáo dục.\n7. Phối hợp với Đội Thiếu niên Tiền phong
Hồ Chí Minh, Đoàn Thanh niên Cộng sản Hồ Chí Minh, Hội Liên hiệp Thanh niên Việt
Nam, gia đình học sinh và các tổ chức xã hội liên quan để tổ chức hoạt động giáo
dục.\n8. Thực hiện các nhiệm vụ khác theo quy định của pháp luật.
- “Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong
trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP
ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \n1. Máy
PCR. \n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \n3. Test kít
xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \n4. Máy thở chức năng
cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng
cao, máy thở xách tay. \n5. Máy lọc máu liên tục. \n6. Máy X-Quang di động. \n7.
Máy đo khí máu (đo được điện giải, lactat, hematocrite). \n8. Máy theo dõi bệnh
nhân>5 thông số. \n9. Bơm tiêm điện; Bơm truyền dịch. \n10. Máy phá rung tim có
tạo nhịp. \n11. Máy đo thời gian đông máu. \n12. Máy đo huyết động.”
- Thời điểm đánh giá xếp loại chất lượng hằng năm\n...\n2. Căn cứ tình hình thực
tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống
nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất
lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo
đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\n3. Tại thời điểm đánh giá,
xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ
chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm
làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ
được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá,
xếp loại chất lượng theo quy định của pháp luật và Quy chế này.
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("legalvn/paraphrase-multilingual-MiniLM-L12-v2-vn-171000")
# Run inference
sentences = [
'Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức, viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?',
'Thời điểm đánh giá xếp loại chất lượng hằng năm\\n...\\n2. Căn cứ tình hình thực tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\\n3. Tại thời điểm đánh giá, xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá, xếp loại chất lượng theo quy định của pháp luật và Quy chế này.',
'“Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \\n1. Máy PCR. \\n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \\n3. Test kít xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \\n4. Máy thở chức năng cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng cao, máy thở xách tay. \\n5. Máy lọc máu liên tục. \\n6. Máy X-Quang di động. \\n7. Máy đo khí máu (đo được điện giải, lactat, hematocrite). \\n8. Máy theo dõi bệnh nhân>5 thông số. \\n9. Bơm tiêm điện; Bơm truyền dịch. \\n10. Máy phá rung tim có tạo nhịp. \\n11. Máy đo thời gian đông máu. \\n12. Máy đo huyết động.”',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 651,725 training samples
* Columns: <code>queries</code>, <code>corpus</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | queries | corpus | score |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 24.71 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 121.6 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~43.80%</li><li>1: ~37.00%</li><li>2: ~19.20%</li></ul> |
* Samples:
| queries | corpus | score |
|:------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Người học ngành quản lý khai thác công trình thủy lợi trình độ cao đẳng phải có khả năng học tập và nâng cao trình độ như thế nào?</code> | <code>Khả năng học tập, nâng cao trình độ\n- Khối lượng khối lượng kiến thức tối thiểu, yêu cầu về năng lực mà người học phải đạt được sau khi tốt nghiệp ngành, nghề Dược trình độ cao đẳng có thể tiếp tục phát triển ở các trình độ cao hơn;\n- Người học sau tốt nghiệp có năng lực tự học, tự cập nhật những tiến bộ khoa học công nghệ trong phạm vi ngành, nghề để nâng cao trình độ hoặc học liên thông lên trình độ cao hơn trong cùng ngành nghề hoặc trong nhóm ngành, nghề hoặc trong cùng lĩnh vực đào tạo.</code> | <code>2</code> |
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nTrong phạm vi điều chỉnh của văn bản quy phạm pháp luật:\n1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.\n2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.\n3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | <code>2</code> |
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Mục đích lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nLồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật (sau đây gọi tắt là văn bản) là một biện pháp để thực hiện mục tiêu bình đẳng giới, xóa bỏ phân biệt đối xử về giới, bảo đảm quyền, lợi ích hợp pháp, phù hợp với đặc thù của mỗi giới; tạo cơ hội phát triển như nhau cho nam và nữ trong các lĩnh vực của đời sống xã hội và gia đình; bảo đảm bình đẳng giới thực chất giữa nam và nữ.</code> | <code>1</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
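The snippet below is a minimal fine-tuning sketch for a dataset shaped like the one above (query/passage pairs with an integer relevance score of 0, 1, or 2). It uses the legacy `model.fit` API with `losses.SoftmaxLoss`; the example texts and batch size are placeholders, not the actual training data or configuration.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Placeholder (query, passage, score) triples mirroring the queries/corpus/score columns above
train_examples = [
    InputExample(texts=["example query", "example passage"], label=2),
    InputExample(texts=["another query", "an unrelated passage"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

# SoftmaxLoss trains a 3-way classifier head on top of the pooled sentence embeddings
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3)
```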
### Training Hyperparameters
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:------:|:-------------:|
| 0.0061 | 500 | 1.0473 |
| 0.0123 | 1000 | 1.0447 |
| 0.0184 | 1500 | 1.0383 |
| 0.0246 | 2000 | 1.0395 |
| 0.0307 | 2500 | 1.0436 |
| 0.0368 | 3000 | 1.0375 |
| 0.0430 | 3500 | 1.0189 |
| 0.0491 | 4000 | 1.0282 |
| 0.0552 | 4500 | 1.0355 |
| 0.0614 | 5000 | 1.0286 |
| 0.0675 | 5500 | 1.0264 |
| 0.0737 | 6000 | 1.0174 |
| 0.0798 | 6500 | 1.0238 |
| 0.0859 | 7000 | 1.0217 |
| 0.0921 | 7500 | 1.0203 |
| 0.0982 | 8000 | 1.0201 |
| 0.1043 | 8500 | 1.0266 |
| 0.1105 | 9000 | 1.0379 |
| 0.1166 | 9500 | 1.0367 |
| 0.1228 | 10000 | 1.0384 |
| 0.1289 | 10500 | 1.0291 |
| 0.1350 | 11000 | 1.0362 |
| 0.1412 | 11500 | 1.0354 |
| 0.1473 | 12000 | 1.0204 |
| 0.1534 | 12500 | 1.0401 |
| 0.1596 | 13000 | 1.0237 |
| 0.1657 | 13500 | 1.0271 |
| 0.1719 | 14000 | 1.0235 |
| 0.1780 | 14500 | 1.0329 |
| 0.1841 | 15000 | 1.0474 |
| 0.1903 | 15500 | 1.0547 |
| 0.1964 | 16000 | 1.0557 |
| 0.2025 | 16500 | 1.0626 |
| 0.2087 | 17000 | 1.0551 |
| 0.2148 | 17500 | 1.0526 |
| 0.2210 | 18000 | 1.125 |
| 0.2271 | 18500 | 1.2996 |
| 0.2332 | 19000 | 1.0703 |
| 0.2394 | 19500 | 1.0601 |
| 0.2455 | 20000 | 1.0835 |
| 0.2516 | 20500 | 1.0583 |
| 0.2578 | 21000 | 1.141 |
| 0.2639 | 21500 | 1.0802 |
| 0.2701 | 22000 | 1.0589 |
| 0.2762 | 22500 | 1.086 |
| 0.2823 | 23000 | 1.0743 |
| 0.2885 | 23500 | 1.0605 |
| 0.2946 | 24000 | 1.0602 |
| 0.3007 | 24500 | 1.0732 |
| 0.3069 | 25000 | 1.0614 |
| 0.3130 | 25500 | 1.0666 |
| 0.3192 | 26000 | 1.0669 |
| 0.3253 | 26500 | 1.0627 |
| 0.3314 | 27000 | 1.0659 |
| 0.3376 | 27500 | 1.07 |
| 0.3437 | 28000 | 1.0783 |
| 0.3498 | 28500 | 1.078 |
| 0.3560 | 29000 | 1.0832 |
| 0.3621 | 29500 | 1.0695 |
| 0.3683 | 30000 | 1.0714 |
| 0.3744 | 30500 | 1.3794 |
| 0.3805 | 31000 | 1.0838 |
| 0.3867 | 31500 | 1.0541 |
| 0.3928 | 32000 | 1.0799 |
| 0.3989 | 32500 | 1.0622 |
| 0.4051 | 33000 | 1.0597 |
| 0.4112 | 33500 | 1.0731 |
| 0.4174 | 34000 | 1.0871 |
| 0.4235 | 34500 | 1.0535 |
| 0.4296 | 35000 | 1.3215 |
| 0.4358 | 35500 | 1.1501 |
| 0.4419 | 36000 | 1.1088 |
| 0.4480 | 36500 | 1.0844 |
| 0.4542 | 37000 | 1.0981 |
| 0.4603 | 37500 | 1.0856 |
| 0.4665 | 38000 | 1.0956 |
| 0.4726 | 38500 | 1.0813 |
| 0.4787 | 39000 | 1.0843 |
| 0.4849 | 39500 | 1.1053 |
| 0.4910 | 40000 | 1.092 |
| 0.4971 | 40500 | 1.081 |
| 0.5033 | 41000 | 1.0919 |
| 0.5094 | 41500 | 1.0681 |
| 0.5156 | 42000 | 1.0826 |
| 0.5217 | 42500 | 1.0809 |
| 0.5278 | 43000 | 1.093 |
| 0.5340 | 43500 | 1.0709 |
| 0.5401 | 44000 | 1.0623 |
| 0.5462 | 44500 | 1.0801 |
| 0.5524 | 45000 | 1.0833 |
| 0.5585 | 45500 | 1.0816 |
| 0.5647 | 46000 | 1.0697 |
| 0.5708 | 46500 | 1.0864 |
| 0.5769 | 47000 | 1.0744 |
| 0.5831 | 47500 | 1.0897 |
| 0.5892 | 48000 | 1.0727 |
| 0.5953 | 48500 | 1.0621 |
| 0.6015 | 49000 | 1.0582 |
| 0.6076 | 49500 | 1.0681 |
| 0.6138 | 50000 | 1.083 |
| 0.6199 | 50500 | 1.0632 |
| 0.6260 | 51000 | 1.0809 |
| 0.6322 | 51500 | 1.0525 |
| 0.6383 | 52000 | 1.6649 |
| 0.6444 | 52500 | 1.0873 |
| 0.6506 | 53000 | 1.0649 |
| 0.6567 | 53500 | 1.0591 |
| 0.6629 | 54000 | 1.061 |
| 0.6690 | 54500 | 1.0682 |
| 0.6751 | 55000 | 1.0616 |
| 0.6813 | 55500 | 1.0827 |
| 0.6874 | 56000 | 1.0799 |
| 0.6935 | 56500 | 1.0705 |
| 0.6997 | 57000 | 1.0821 |
| 0.7058 | 57500 | 1.0763 |
| 0.7120 | 58000 | 1.0842 |
| 0.7181 | 58500 | 1.0813 |
| 0.7242 | 59000 | 1.0678 |
| 0.7304 | 59500 | 1.0894 |
| 0.7365 | 60000 | 1.0733 |
| 0.7426 | 60500 | 1.0688 |
| 0.7488 | 61000 | 1.0665 |
| 0.7549 | 61500 | 1.0681 |
| 0.7611 | 62000 | 1.301 |
| 0.7672 | 62500 | 1.0907 |
| 0.7733 | 63000 | 1.3941 |
| 0.7795 | 63500 | 1.1355 |
| 0.7856 | 64000 | 1.2196 |
| 0.7917 | 64500 | 1.225 |
| 0.7979 | 65000 | 1.1437 |
| 0.8040 | 65500 | 1.0787 |
| 0.8102 | 66000 | 1.0686 |
| 0.8163 | 66500 | 1.1017 |
| 0.8224 | 67000 | 1.0999 |
| 0.8286 | 67500 | 1.0771 |
| 0.8347 | 68000 | 1.1015 |
| 0.8408 | 68500 | 1.0826 |
| 0.8470 | 69000 | 1.1046 |
| 0.8531 | 69500 | 1.0735 |
| 0.8593 | 70000 | 1.1056 |
| 0.8654 | 70500 | 1.1077 |
| 0.8715 | 71000 | 1.0897 |
| 0.8777 | 71500 | 1.0775 |
| 0.8838 | 72000 | 1.0907 |
| 0.8899 | 72500 | 1.0705 |
| 0.8961 | 73000 | 1.0776 |
| 0.9022 | 73500 | 1.0896 |
| 0.9084 | 74000 | 1.0889 |
| 0.9145 | 74500 | 1.0804 |
| 0.9206 | 75000 | 1.1087 |
| 0.9268 | 75500 | 1.0738 |
| 0.9329 | 76000 | 1.0806 |
| 0.9390 | 76500 | 1.0899 |
| 0.9452 | 77000 | 1.0814 |
| 0.9513 | 77500 | 1.0723 |
| 0.9575 | 78000 | 1.0923 |
| 0.9636 | 78500 | 1.0748 |
| 0.9697 | 79000 | 1.0745 |
| 0.9759 | 79500 | 1.081 |
| 0.9820 | 80000 | 1.08 |
| 0.9881 | 80500 | 1.0905 |
| 0.9943 | 81000 | 1.1064 |
| 1.0004 | 81500 | 1.0929 |
| 1.0066 | 82000 | 1.0815 |
| 1.0127 | 82500 | 1.0768 |
| 1.0188 | 83000 | 1.1004 |
| 1.0250 | 83500 | 1.0835 |
| 1.0311 | 84000 | 1.0765 |
| 1.0372 | 84500 | 1.0906 |
| 1.0434 | 85000 | 1.096 |
| 1.0495 | 85500 | 1.1085 |
| 1.0557 | 86000 | 1.0913 |
| 1.0618 | 86500 | 1.0974 |
| 1.0679 | 87000 | 1.0763 |
| 1.0741 | 87500 | 1.0894 |
| 1.0802 | 88000 | 1.1065 |
| 1.0863 | 88500 | 1.0898 |
| 1.0925 | 89000 | 1.1036 |
| 1.0986 | 89500 | 1.0825 |
| 1.1048 | 90000 | 1.1164 |
| 1.1109 | 90500 | 1.0811 |
| 1.1170 | 91000 | 1.115 |
| 1.1232 | 91500 | 1.1123 |
| 1.1293 | 92000 | 1.0846 |
| 1.1354 | 92500 | 1.0917 |
| 1.1416 | 93000 | 1.0879 |
| 1.1477 | 93500 | 1.0969 |
| 1.1539 | 94000 | 1.0849 |
| 1.1600 | 94500 | 1.0852 |
| 1.1661 | 95000 | 1.0774 |
| 1.1723 | 95500 | 1.0984 |
| 1.1784 | 96000 | 1.0936 |
| 1.1845 | 96500 | 1.0842 |
| 1.1907 | 97000 | 1.0895 |
| 1.1968 | 97500 | 1.09 |
| 1.2030 | 98000 | 1.0813 |
| 1.2091 | 98500 | 1.0965 |
| 1.2152 | 99000 | 1.1017 |
| 1.2214 | 99500 | 1.1045 |
| 1.2275 | 100000 | 1.093 |
| 1.2336 | 100500 | 1.0903 |
| 1.2398 | 101000 | 1.1133 |
| 1.2459 | 101500 | 1.0883 |
| 1.2521 | 102000 | 1.1192 |
| 1.2582 | 102500 | 1.0817 |
| 1.2643 | 103000 | 1.0822 |
| 1.2705 | 103500 | 1.0915 |
| 1.2766 | 104000 | 1.1128 |
| 1.2827 | 104500 | 1.0786 |
| 1.2889 | 105000 | 1.1101 |
| 1.2950 | 105500 | 1.097 |
| 1.3012 | 106000 | 1.095 |
| 1.3073 | 106500 | 1.0884 |
| 1.3134 | 107000 | 1.09 |
| 1.3196 | 107500 | 1.1057 |
| 1.3257 | 108000 | 1.087 |
| 1.3318 | 108500 | 1.1009 |
| 1.3380 | 109000 | 1.0849 |
| 1.3441 | 109500 | 1.0886 |
| 1.3503 | 110000 | 1.0805 |
| 1.3564 | 110500 | 1.0808 |
| 1.3625 | 111000 | 1.1025 |
| 1.3687 | 111500 | 1.0955 |
| 1.3748 | 112000 | 1.0824 |
| 1.3809 | 112500 | 1.0835 |
| 1.3871 | 113000 | 1.1168 |
| 1.3932 | 113500 | 1.0881 |
| 1.3994 | 114000 | 1.0946 |
| 1.4055 | 114500 | 1.0819 |
| 1.4116 | 115000 | 1.1155 |
| 1.4178 | 115500 | 1.1021 |
| 1.4239 | 116000 | 1.102 |
| 1.4300 | 116500 | 1.0733 |
| 1.4362 | 117000 | 1.0987 |
| 1.4423 | 117500 | 1.1103 |
| 1.4485 | 118000 | 1.1034 |
| 1.4546 | 118500 | 1.0987 |
| 1.4607 | 119000 | 1.0908 |
| 1.4669 | 119500 | 1.0986 |
| 1.4730 | 120000 | 1.0988 |
| 1.4791 | 120500 | 1.1023 |
| 1.4853 | 121000 | 1.1013 |
| 1.4914 | 121500 | 1.0896 |
| 1.4976 | 122000 | 1.8455 |
| 1.5037 | 122500 | 1.1155 |
| 1.5098 | 123000 | 1.1502 |
| 1.5160 | 123500 | 1.1183 |
| 1.5221 | 124000 | 1.0958 |
| 1.5282 | 124500 | 1.1098 |
| 1.5344 | 125000 | 1.1021 |
| 1.5405 | 125500 | 1.0912 |
| 1.5467 | 126000 | 1.0961 |
| 1.5528 | 126500 | 1.0858 |
| 1.5589 | 127000 | 1.0784 |
| 1.5651 | 127500 | 1.1112 |
| 1.5712 | 128000 | 1.1067 |
| 1.5773 | 128500 | 1.0986 |
| 1.5835 | 129000 | 1.0824 |
| 1.5896 | 129500 | 1.1072 |
| 1.5958 | 130000 | 1.1098 |
| 1.6019 | 130500 | 1.0962 |
| 1.6080 | 131000 | 1.1108 |
| 1.6142 | 131500 | 1.1187 |
| 1.6203 | 132000 | 1.0923 |
| 1.6264 | 132500 | 1.1003 |
| 1.6326 | 133000 | 1.0865 |
| 1.6387 | 133500 | 1.099 |
| 1.6449 | 134000 | 1.0838 |
| 1.6510 | 134500 | 1.0792 |
| 1.6571 | 135000 | 1.0966 |
| 1.6633 | 135500 | 1.0782 |
| 1.6694 | 136000 | 1.1123 |
| 1.6755 | 136500 | 1.0923 |
| 1.6817 | 137000 | 1.0873 |
| 1.6878 | 137500 | 1.0807 |
| 1.6940 | 138000 | 1.083 |
| 1.7001 | 138500 | 1.0864 |
| 1.7062 | 139000 | 1.0828 |
| 1.7124 | 139500 | 1.0973 |
| 1.7185 | 140000 | 1.1022 |
| 1.7246 | 140500 | 1.0837 |
| 1.7308 | 141000 | 1.0985 |
| 1.7369 | 141500 | 1.1049 |
| 1.7431 | 142000 | 1.079 |
| 1.7492 | 142500 | 1.0757 |
| 1.7553 | 143000 | 1.0808 |
| 1.7615 | 143500 | 1.0743 |
| 1.7676 | 144000 | 1.0933 |
| 1.7737 | 144500 | 1.0938 |
| 1.7799 | 145000 | 1.1121 |
| 1.7860 | 145500 | 1.1138 |
| 1.7922 | 146000 | 1.1063 |
| 1.7983 | 146500 | 1.097 |
| 1.8044 | 147000 | 1.0999 |
| 1.8106 | 147500 | 1.1035 |
| 1.8167 | 148000 | 1.0786 |
| 1.8228 | 148500 | 1.0824 |
| 1.8290 | 149000 | 1.1097 |
| 1.8351 | 149500 | 1.0744 |
| 1.8413 | 150000 | 1.0902 |
| 1.8474 | 150500 | 1.0841 |
| 1.8535 | 151000 | 1.0961 |
| 1.8597 | 151500 | 1.0778 |
| 1.8658 | 152000 | 1.0784 |
| 1.8719 | 152500 | 1.0741 |
| 1.8781 | 153000 | 1.0879 |
| 1.8842 | 153500 | 1.079 |
| 1.8904 | 154000 | 1.0967 |
| 1.8965 | 154500 | 1.0906 |
| 1.9026 | 155000 | 1.0836 |
| 1.9088 | 155500 | 1.0932 |
| 1.9149 | 156000 | 1.0823 |
| 1.9210 | 156500 | 1.087 |
| 1.9272 | 157000 | 1.0892 |
| 1.9333 | 157500 | 1.0842 |
| 1.9395 | 158000 | 1.0837 |
| 1.9456 | 158500 | 1.1001 |
| 1.9517 | 159000 | 1.0727 |
| 1.9579 | 159500 | 1.0875 |
| 1.9640 | 160000 | 1.0845 |
| 1.9701 | 160500 | 1.0805 |
| 1.9763 | 161000 | 1.0825 |
| 1.9824 | 161500 | 1.0886 |
| 1.9886 | 162000 | 1.0856 |
| 1.9947 | 162500 | 1.0816 |
| 2.0008 | 163000 | 1.1005 |
| 2.0070 | 163500 | 1.0775 |
| 2.0131 | 164000 | 1.0875 |
| 2.0192 | 164500 | 1.09 |
| 2.0254 | 165000 | 1.086 |
| 2.0315 | 165500 | 1.087 |
| 2.0377 | 166000 | 1.0815 |
| 2.0438 | 166500 | 1.0832 |
| 2.0499 | 167000 | 1.0801 |
| 2.0561 | 167500 | 1.0828 |
| 2.0622 | 168000 | 1.0819 |
| 2.0683 | 168500 | 1.0767 |
| 2.0745 | 169000 | 1.0819 |
| 2.0806 | 169500 | 1.1013 |
| 2.0868 | 170000 | 1.0891 |
| 2.0929 | 170500 | 1.0721 |
| 2.0990 | 171000 | 1.0737 |
</details>
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.3.1
- Transformers: 4.43.0
- PyTorch: 2.5.0+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"PCR"
] | Non_BioNLP |
RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-awq | RichardErkhov | null | [
"safetensors",
"gpt_neox",
"arxiv:2101.00027",
"arxiv:2201.07311",
"4-bit",
"awq",
"region:us"
] | 1,732,121,326,000 | 2024-11-20T16:49:33 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-deduped-v0 - AWQ
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped-v0/
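The quantised weights can usually be loaded straight through `transformers` once the `autoawq` package is installed; the following is a minimal loading sketch (generation settings are illustrative, and a CUDA GPU is assumed):
```python
# Minimal sketch, assuming transformers' AWQ integration (pip install autoawq) and a CUDA GPU
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```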
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
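To reproduce a subset of these scores locally, the harness exposes a Python entry point; the sketch below is an assumption-laden example (the backend name, task identifiers, and batch size depend on the installed harness version — check its documentation):
```python
# Minimal sketch, assuming lm-evaluation-harness v0.4+ ("hf" backend name and task ids are assumptions)
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1b-deduped-v0",
    tasks=["lambada_openai", "piqa", "winogrande", "arc_challenge", "sciq"],
    batch_size=8,
)
print(results["results"])
```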
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"SCIQ"
] | Non_BioNLP |
TheBloke/meditron-70B-GGUF | TheBloke | text-generation | [
"transformers",
"gguf",
"llama",
"medical",
"health",
"llama2",
"text-generation",
"en",
"dataset:bigbio/med_qa",
"dataset:medmcqa",
"dataset:bigbio/pubmed_qa",
"dataset:epfl-llm/guidelines",
"arxiv:2311.16079",
"base_model:epfl-llm/meditron-70b",
"base_model:quantized:epfl-llm/meditron-70b",
"license:llama2",
"region:us"
] | 1,701,364,233,000 | 2023-11-30T17:54:45 | 1,186 | 20 | ---
base_model: epfl-llm/meditron-70b
datasets:
- bigbio/med_qa
- medmcqa
- bigbio/pubmed_qa
- epfl-llm/guidelines
language:
- en
license: llama2
metrics:
- accuracy
- perplexity
model_name: Meditron 70B
pipeline_tag: text-generation
tags:
- medical
- health
- llama2
inference: false
model_creator: EPFL LLM Team
model_type: llama
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Meditron 70B - GGUF
- Model creator: [EPFL LLM Team](https://huggingface.co/epfl-llm)
- Original model: [Meditron 70B](https://huggingface.co/epfl-llm/meditron-70b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [EPFL LLM Team's Meditron 70B](https://huggingface.co/epfl-llm/meditron-70b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/meditron-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/meditron-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/meditron-70B-GGUF)
* [EPFL LLM Team's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/epfl-llm/meditron-70b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
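A plain-Python helper for filling this template (nothing model-specific, just string formatting of the fields above) might look like:
```python
# Minimal sketch: build a ChatML prompt string for the template shown above
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant"
    )

print(build_chatml_prompt("You are a helpful clinical assistant.", "Summarise the symptoms of anaemia."))
```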
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [meditron-70b.Q2_K.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [meditron-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [meditron-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [meditron-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [meditron-70b.Q4_0.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [meditron-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [meditron-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [meditron-70b.Q5_0.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [meditron-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [meditron-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| meditron-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| meditron-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `meditron-70b.Q6_K.gguf-split-a`
* `meditron-70b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `meditron-70b.Q8_0.gguf-split-a`
* `meditron-70b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat meditron-70b.Q6_K.gguf-split-* > meditron-70b.Q6_K.gguf && rm meditron-70b.Q6_K.gguf-split-*
cat meditron-70b.Q8_0.gguf-split-* > meditron-70b.Q8_0.gguf && rm meditron-70b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B meditron-70b.Q6_K.gguf-split-a + meditron-70b.Q6_K.gguf-split-b meditron-70b.Q6_K.gguf
del meditron-70b.Q6_K.gguf-split-a meditron-70b.Q6_K.gguf-split-b
COPY /B meditron-70b.Q8_0.gguf-split-a + meditron-70b.Q8_0.gguf-split-b meditron-70b.Q8_0.gguf
del meditron-70b.Q8_0.gguf-split-a meditron-70b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/meditron-70B-GGUF and below it, a specific filename to download, such as: meditron-70b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/meditron-70B-GGUF meditron-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/meditron-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/meditron-70B-GGUF meditron-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
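If you prefer to script the download rather than use the CLI, the same file can be fetched with the `huggingface_hub` Python API; a minimal sketch (the filename is one of the quants listed above):
```python
# Minimal sketch using the huggingface_hub Python API instead of the CLI
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/meditron-70B-GGUF",
    filename="meditron-70b.Q4_K_M.gguf",
    local_dir=".",
)
print(local_path)
```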
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m meditron-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./meditron-70b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./meditron-70b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
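As a minimal sketch of the first option (assuming `langchain-community` and `llama-cpp-python` are installed; the path and parameter values below are only examples):
```python
from langchain_community.llms import LlamaCpp

# Wrap a local GGUF file as a LangChain LLM (values shown are illustrative)
llm = LlamaCpp(
    model_path="./meditron-70b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarise what a GGUF file is.<|im_end|>\n"
    "<|im_start|>assistant"
)
print(llm.invoke(prompt))
```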
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: EPFL LLM Team's Meditron 70B
<img width=50% src="meditron_LOGO.png" alt="Alt text" title="Meditron-logo">
# Model Card for Meditron-70B-v1.0
Meditron is a suite of open-source medical Large Language Models (LLMs).
Meditron-70B is a 70 billion parameters model adapted to the medical domain from Llama-2-70B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, a [new dataset](https://huggingface.co/datasets/epfl-llm/guidelines) of internationally-recognized medical guidelines, and general domain data from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
Meditron-70B, finetuned on relevant training data, outperforms Llama-2-70B, GPT-3.5 (`text-davinci-003`, 8-shot), and Flan-PaLM on multiple medical reasoning tasks.
<!--# Table of Contents
[Model Card for Meditron 70B](#model-card-for--meditron-70b-v1.0)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Downstream Use](#downstream-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Evaluation](#evaluation)
- [Testing Data & Metrics](#testing-data-&-metrics)
- [Testing Data](#testing-data)
- [Metrics](#metrics)
- [Results](#results)
- [Environmental Impact](#environmental-impact)
- [Citation](#citation)-->
<details open>
<summary><strong>Advisory Notice</strong></summary>
<blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints.
We recommend against deploying Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.
</blockquote>
</details>
## Model Details
- **Developed by:** [EPFL LLM Team](https://huggingface.co/epfl-llm)
- **Model type:** Causal decoder-only transformer language model
- **Language(s):** English (mainly)
- **Model License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Code License:** [APACHE 2.0 LICENSE](LICENSE)
- **Continue-pretrained from model:** [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b)
- **Context length:** 4K tokens
- **Input:** Text-only data
- **Output:** Model generates text only
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
- **Knowledge Cutoff:** August 2023
### Model Sources
- **Repository:** [epflLLM/meditron](https://github.com/epfLLM/meditron)
- **Trainer:** [epflLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)
- **Paper:** *[MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)*
## Uses
Meditron-70B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and broaden access to LLMs for healthcare use. Potential use cases may include but are not limited to:
- Medical exam question answering
- Supporting differential diagnosis
- Disease information (symptoms, cause, treatment) query
- General health information query
### Direct Use
It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities.
It should not be used directly for production or work that may impact people.
### Downstream Use
Meditron-70B is a foundation model that can be finetuned, instruction-tuned, or RLHF-tuned for specific downstream tasks and applications.
The main way we have used this model is finetuning for downstream question-answering tasks, but we encourage using this model for additional applications.
Specific formatting needs to be followed to prompt our finetuned models, including the `<|im_start|>`, `<|im_end|>` tags, and `system`, `question`, `answer` identifiers.
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>question
{prompt}<|im_end|>
<|im_start|>answer
"""
**Note 1**: The above formatting is not required for running the base model (this repository).
**Note 2**: The above formatting is just an example of a finetuning template. This format is not a requirement if you use your own formatting option for the finetuning of the model.
To run proper generation with this base model, we recommend using a high-throughput and memory-efficient inference engine, such as [vLLM](https://github.com/vllm-project/vllm), with a UI that supports chat and text generation, such as [BetterChatGPT](https://github.com/ztjhz/BetterChatGPT).
To see more details about model deployment and generation, please see our [documentation](https://github.com/epfLLM/meditron/blob/main/deployment/README.md).
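As a rough, illustrative sketch only (not the EPFL team's deployment recipe; the sampling values and tensor parallel degree are assumptions that depend on your hardware), offline generation with vLLM could look like this:
```python
from vllm import LLM, SamplingParams

# Load the base model across 8 GPUs (adjust tensor_parallel_size to your setup)
llm = LLM(model="epfl-llm/meditron-70b", tensor_parallel_size=8)

# Plain completion prompt; the chat template above is only needed for the finetuned variants
prompts = ["Common symptoms of iron-deficiency anemia include"]
outputs = llm.generate(prompts, SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)
```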
### Out-of-Scope Use
We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise.
## Truthfulness, Helpfulness, Risk, and Bias
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
We did an initial assessment of Meditron models' **Truthfulness** against baseline models and consumer-level medical models.
We use TruthfulQA (multiple choice) as the main evaluation benchmark.
We only focus on the categories that are relevant to the medical domain, including Health, Nutrition, Psychology, and Science.
For 7B models, we perform one-shot evaluations for consistent answer generation.
For 70B models, the evaluations are under the zero-shot setting.
Below, we report the detailed truthfulness performance of each category.
| Category | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b |
| --- | --- | --- | --- | --- | --- | --- |
| Health | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 |
| Nutrition | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 |
| Psychology | 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 |
| Science | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 |
| Avg | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 |
For a more detailed performance analysis, please see our paper.
For **Helpfulness**, **Risk** and **Bias**, we provide a comprehensive qualitative generation report of Meditron-70B on queries designed by medical experts.
Each query targets specific aspects of helpfulness (medical accuracy, up-to-date information, etc.), risk (public health, medical ethics, etc.) and bias (gender, age, race, etc.).
Please see the detailed generations in our paper. We compare our generations to Llama-2-70B and ChatGPT-3.5 (version Nov 27, 2023).
Significant research is still required to fully explore potential bias, fairness, and safety issues with this language model.
### Recommendations
**IMPORTANT!**
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations.
Understanding these limitations is especially important in a domain like medicine.
Therefore, we strongly recommend against using this model in production for natural language generation or for professional purposes related to health and medicine without comprehensive testing for your application.
## Training Details
### Training Data
Meditron’s domain-adaptive pre-training corpus GAP-Replay combines 48.1B tokens from four corpora:
- [**Clinical Guidelines**](https://huggingface.co/datasets/epfl-llm/guidelines): a new dataset of 46K internationally-recognized clinical practice guidelines from various healthcare-related sources, including hospitals and international organizations.
- **Medical Paper Abstracts**: 16.1M abstracts extracted from closed-access PubMed and PubMed Central papers.
- **Medical Papers**: full-text articles extracted from 5M publicly available PubMed and PubMed Central papers.
- **Replay Data**: 400M tokens of general domain pretraining data sampled from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
<img width="60%" src="gap-replay.png" alt="Alt text" title="Meditron-logo">
#### Data Preprocessing
Please see the detailed preprocessing procedure in our paper.
### Training Procedure
We used the [Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) distributed training library, a derivative of Nvidia's Megatron LM project, to optimize training efficiency.
The hardware consists of 16 nodes, each with 8x NVIDIA A100 (80GB) SXM GPUs connected by NVLink and NVSwitch, a single NVIDIA ConnectX-6 DX network card, 2x AMD EPYC 7543 32-core processors, and 512 GB of RAM.
The nodes are connected via RDMA over Converged Ethernet.
Our three-way parallelism scheme uses:
- Data Parallelism (DP -- different GPUs process different subsets of the batches) of 2,
- Pipeline Parallelism (PP -- different GPUs process different layers) of 8,
- Tensor Parallelism (TP -- different GPUs process different subtensors for matrix multiplication) of 8.
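As a quick consistency check (plain arithmetic on the numbers above, not a quote from the paper), the three parallelism degrees multiply out to the full 128-GPU cluster:
```python
# Data x pipeline x tensor parallel degrees must cover every GPU exactly once
dp, pp, tp = 2, 8, 8
nodes, gpus_per_node = 16, 8
assert dp * pp * tp == nodes * gpus_per_node == 128
```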
#### Training Hyperparameters
| Hyperparameter | Value |
| --- | ------ |
| bf16 | true |
| lr | 1.5e-4 |
| eps | 1e-5 |
| betas | \[0.9, 0.95\] |
| clip_grad | 1 |
| weight decay | 0.1 |
| DP size | 2 |
| TP size | 8 |
| PP size | 8 |
| seq length | 4096 |
| lr scheduler | cosine |
| min lr | 1e-6 |
| warmup iteration | 2000 |
| micro batch size | 2 |
| global batch size | 512 |
#### Speeds, Sizes, Times
The model was trained in September and October 2023.
The model architecture is exactly that of Llama 2, meaning:
| Parameter | Value |
| --- | ------ |
| Model size | 70B |
| Hidden dimension | 8192 |
| Num. attention heads | 64 |
| Num. layers | 80 |
We train the 70B model on 48 billion (48e9) tokens, at a throughput of about 40,200 tokens/second.
This amounts to a bfloat16 model FLOPs utilization (MFU) of roughly 42.3%.
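The quoted utilization can be roughly reproduced with the standard 6·N·T approximation for training FLOPs and an assumed A100 bf16 peak of 312 TFLOP/s (both the approximation and the peak figure are our assumptions, not stated in the card):
```python
# Rough reproduction of the ~42.3% bf16 model FLOPs utilization figure
n_params = 70e9              # model parameters
tokens_per_second = 40_200   # reported training throughput
gpus = 128
a100_bf16_peak = 312e12      # assumed dense bf16 peak per A100, in FLOP/s

achieved = 6 * n_params * tokens_per_second  # ~1.69e16 FLOP/s
peak = gpus * a100_bf16_peak                 # ~3.99e16 FLOP/s
print(f"MFU ~= {achieved / peak:.1%}")       # prints about 42.3%
```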
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
#### Metrics
- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.
### Results
We finetune meditron-70b and llama-2-70b individually on the training data of each benchmark (pubmedqa, medmcqa, medqa).
We report the finetuned models' performance with self-consistency chain-of-thought as the inference mode.
For MMLU-Medical, models finetuned on MedMCQA are used for inference.
For MedQA-4-Option, models finetuned on MedQA are used for inference.
For a more detailed performance analysis, please see our paper.
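Self-consistency chain-of-thought here means sampling several reasoning chains per question and majority-voting the extracted answers; a generic sketch of that aggregation step (not the paper's actual evaluation harness, and `sample_answer` is a hypothetical callable you would supply):
```python
from collections import Counter

def self_consistent_answer(sample_answer, question, k=5):
    """Sample k chain-of-thought completions and return the majority-voted answer.

    sample_answer(question) is assumed to return the final answer choice (e.g. "B")
    extracted from one sampled reasoning chain.
    """
    votes = Counter(sample_answer(question) for _ in range(k))
    return votes.most_common(1)[0][0]
```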
| Dataset | meditron-70b | llama-2-70b | med42-70b* | clinical-camel-70b* |
| --- | --- | --- | --- | --- |
| MMLU-Medical | 77.6 | 77.9 | 74.5 | 65.7 |
| PubMedQA | 81.6 | 80.0 | 61.2 | 67.0 |
| MedMCQA | 66.0 | 62.6 | 59.2 | 46.7 |
| MedQA | 64.4 | 61.5 | 59.1 | 50.8 |
| MedQA-4-Option | 70.2 | 63.8 | 63.9 | 56.8 |
| Avg | 72.0 | 69.2 | 63.6 | 57.4 |
**Note**: models with * are already instruction-tuned, so we exclude them from further finetuning on any training data.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** 128 x NVIDIA A100 (80GB) SXM
- **Total GPU hours:** 42,496
- **Hardware Provider:** EPFL Research Computing Platform
- **Compute Region:** Switzerland
- **Carbon Emitted:** Switzerland has a carbon efficiency of 0.016 kgCO2/kWh (https://www.carbonfootprint.com/docs/2018_8_electricity_factors_august_2018_-_online_sources.pdf). 332 hours of training on 128 A100s corresponds to 42,496 GPU-hours at a TDP of 400W per GPU. Assuming a power usage effectiveness (PUE) of 1.8, total emissions are estimated to be:
(400 W / 1000 W/kW per GPU × 0.016 kgCO2/kWh × 332 h × 128 GPUs) × 1.8 PUE ≈ 486 kgCO2.
## Citation
**BibTeX:**
If you use Meditron or its training data, please cite our work:
```
@misc{chen2023meditron70b,
title={MEDITRON-70B: Scaling Medical Pretraining for Large Language Models},
author={Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
year={2023},
eprint={2311.16079},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@software{epfmedtrn,
author = {Zeming Chen and Alejandro Hernández Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
title = {MediTron-70B: Scaling Medical Pretraining for Large Language Models},
month = November,
year = 2023,
url = {https://github.com/epfLLM/meditron}
}
```
<!-- original-model-card end -->
| [ "MEDQA", "PUBMEDQA" ] | BioNLP |
KingKazma/cnn_dailymail_55555_3000_1500_train | text-classification | [ "bertopic", "text-classification", "region:us" ] | 1,690,825,055,000 | 2023-07-31T17:37:36 | 8 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# cnn_dailymail_55555_3000_1500_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/cnn_dailymail_55555_3000_1500_train")
topic_model.get_topic_info()
```
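Beyond the topic summary, the loaded model can also be queried for individual topics and used to assign topics to unseen documents; a small sketch (the input text is an arbitrary example):
```python
# Keywords of topic 0 (the league/game/player/cup topic in the overview below)
print(topic_model.get_topic(0))

# Assign topics to new documents
docs = ["The striker scored twice as the team won the cup final."]
topics, probs = topic_model.transform(docs)
print(topics, probs)
```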
## Topic overview
* Number of topics: 61
* Number of training documents: 3000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - one - year - people - mr | 10 | -1_said_one_year_people |
| 0 | league - game - player - cup - goal | 961 | 0_league_game_player_cup |
| 1 | police - death - said - murder - family | 313 | 1_police_death_said_murder |
| 2 | obama - republican - senate - president - republicans | 182 | 2_obama_republican_senate_president |
| 3 | fashion - hair - look - makeup - brand | 91 | 3_fashion_hair_look_makeup |
| 4 | dog - animal - cat - bird - pet | 69 | 4_dog_animal_cat_bird |
| 5 | syria - isis - syrian - iraq - fighter | 54 | 5_syria_isis_syrian_iraq |
| 6 | mexico - said - cuba - president - cartel | 53 | 6_mexico_said_cuba_president |
| 7 | police - court - cash - jailed - said | 53 | 7_police_court_cash_jailed |
| 8 | space - nasa - mars - planet - earth | 51 | 8_space_nasa_mars_planet |
| 9 | property - house - price - room - london | 48 | 9_property_house_price_room |
| 10 | patient - hospital - nhs - doctor - cancer | 48 | 10_patient_hospital_nhs_doctor |
| 11 | tax - bank - minister - mr - pay | 46 | 11_tax_bank_minister_mr |
| 12 | car - fire - crash - bus - train | 45 | 12_car_fire_crash_bus |
| 13 | milk - food - raw - restaurant - chocolate | 44 | 13_milk_food_raw_restaurant |
| 14 | gold - olympic - horse - race - medal | 36 | 14_gold_olympic_horse_race |
| 15 | album - song - joel - music - show | 35 | 15_album_song_joel_music |
| 16 | show - film - movie - award - les | 35 | 16_show_film_movie_award |
| 17 | baby - born - hospital - birth - pregnancy | 34 | 17_baby_born_hospital_birth |
| 18 | prince - queen - royal - william - duchess | 31 | 18_prince_queen_royal_william |
| 19 | chinese - china - bo - beijing - chen | 30 | 19_chinese_china_bo_beijing |
| 20 | labour - mr - party - ukip - miliband | 30 | 20_labour_mr_party_ukip |
| 21 | school - student - teacher - book - fraternity | 29 | 21_school_student_teacher_book |
| 22 | somalia - dala - african - alshabaab - mali | 28 | 22_somalia_dala_african_alshabaab |
| 23 | ukraine - russian - russia - putin - moscow | 26 | 23_ukraine_russian_russia_putin |
| 24 | woods - golf - golfer - hole - round | 26 | 24_woods_golf_golfer_hole |
| 25 | sterling - nba - clippers - donald - said | 26 | 25_sterling_nba_clippers_donald |
| 26 | found - scientist - stonehenge - researcher - frog | 26 | 26_found_scientist_stonehenge_researcher |
| 27 | apple - iphone - apples - phone - device | 24 | 27_apple_iphone_apples_phone |
| 28 | formula - race - schumacher - prix - ecclestone | 23 | 28_formula_race_schumacher_prix |
| 29 | ebola - virus - outbreak - health - vaccine | 22 | 29_ebola_virus_outbreak_health |
| 30 | church - pope - priest - francis - vatican | 21 | 30_church_pope_priest_francis |
| 31 | sharapova - open - wimbledon - tennis - slam | 21 | 31_sharapova_open_wimbledon_tennis |
| 32 | pakistani - pakistan - taliban - musharraf - afghanistan | 21 | 32_pakistani_pakistan_taliban_musharraf |
| 33 | storm - weather - tornado - water - rain | 21 | 33_storm_weather_tornado_water |
| 34 | north - korea - korean - kim - south | 21 | 34_north_korea_korean_kim |
| 35 | war - medal - soldier - army - afghanistan | 21 | 35_war_medal_soldier_army |
| 36 | marijuana - cigarette - alcohol - drug - smoking | 20 | 36_marijuana_cigarette_alcohol_drug |
| 37 | internet - google - user - facebook - online | 19 | 37_internet_google_user_facebook |
| 38 | plane - flight - crash - passenger - airport | 19 | 38_plane_flight_crash_passenger |
| 39 | weight - diet - fat - stone - food | 18 | 39_weight_diet_fat_stone |
| 40 | israeli - israel - gaza - hamas - palestinian | 17 | 40_israeli_israel_gaza_hamas |
| 41 | beach - art - resort - festival - painting | 17 | 41_beach_art_resort_festival |
| 42 | petraeus - cia - broadwell - justice - fbi | 17 | 42_petraeus_cia_broadwell_justice |
| 43 | garner - wilson - officer - police - black | 16 | 43_garner_wilson_officer_police |
| 44 | ship - cruise - ships - crew - pirate | 16 | 44_ship_cruise_ships_crew |
| 45 | nfl - patriots - rice - seahawks - chris | 15 | 45_nfl_patriots_rice_seahawks |
| 46 | dolphin - sea - creature - cuttlefish - fisherman | 14 | 46_dolphin_sea_creature_cuttlefish |
| 47 | weather - rain - winter - temperature - warm | 14 | 47_weather_rain_winter_temperature |
| 48 | mandela - african - africa - south - mandelas | 14 | 48_mandela_african_africa_south |
| 49 | disney - snow - million - wars - movie | 14 | 49_disney_snow_million_wars |
| 50 | price - bag - plastic - cent - energy | 13 | 50_price_bag_plastic_cent |
| 51 | spartan - cliff - parachute - matthew - obstacle | 12 | 51_spartan_cliff_parachute_matthew |
| 52 | zoo - panda - cub - giraffe - park | 12 | 52_zoo_panda_cub_giraffe |
| 53 | iran - iranian - irans - ahmadinejad - nuclear | 12 | 53_iran_iranian_irans_ahmadinejad |
| 54 | bin - laden - us - qaeda - al | 12 | 54_bin_laden_us_qaeda |
| 55 | crocodile - snake - python - bascoules - alligator | 12 | 55_crocodile_snake_python_bascoules |
| 56 | woman - ivf - men - dna - fertility | 11 | 56_woman_ivf_men_dna |
| 57 | driver - driving - police - meracle - text | 11 | 57_driver_driving_police_meracle |
| 58 | mitchell - mr - evans - mp - gate | 10 | 58_mitchell_mr_evans_mp |
| 59 | france - police - mosque - salah - donetsk | 10 | 59_france_police_mosque_salah |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
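These values correspond to `BERTopic` constructor arguments; for reference, a model with the same configuration would be instantiated roughly as follows (untrained, and without the embedding model or training documents, which are not part of this list):
```python
from bertopic import BERTopic

# Recreate the configuration listed above (illustration only)
topic_model = BERTopic(
    calculate_probabilities=True,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
)
```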
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.6
| [ "MEDAL" ] | Non_BioNLP |
BASF-AI/nomic-embed-text-v1.5 | sentence-similarity | [ "sentence-transformers", "onnx", "safetensors", "nomic_bert", "feature-extraction", "sentence-similarity", "mteb", "transformers", "transformers.js", "custom_code", "en", "arxiv:2205.13147", "arxiv:2402.01613", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ] | 1,736,467,498,000 | 2025-01-10T04:53:06 | 31 | 0 | ---
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
model-index:
- name: epoch_0_model
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.20895522388058
- type: ap
value: 38.57605549557802
- type: f1
value: 69.35586565857854
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.8144
- type: ap
value: 88.65222882032363
- type: f1
value: 91.80426301643274
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.162000000000006
- type: f1
value: 46.59329642263158
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.253
- type: map_at_10
value: 38.962
- type: map_at_100
value: 40.081
- type: map_at_1000
value: 40.089000000000006
- type: map_at_3
value: 33.499
- type: map_at_5
value: 36.351
- type: mrr_at_1
value: 24.609
- type: mrr_at_10
value: 39.099000000000004
- type: mrr_at_100
value: 40.211000000000006
- type: mrr_at_1000
value: 40.219
- type: mrr_at_3
value: 33.677
- type: mrr_at_5
value: 36.469
- type: ndcg_at_1
value: 24.253
- type: ndcg_at_10
value: 48.010999999999996
- type: ndcg_at_100
value: 52.756
- type: ndcg_at_1000
value: 52.964999999999996
- type: ndcg_at_3
value: 36.564
- type: ndcg_at_5
value: 41.711999999999996
- type: precision_at_1
value: 24.253
- type: precision_at_10
value: 7.738
- type: precision_at_100
value: 0.98
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.149000000000001
- type: precision_at_5
value: 11.593
- type: recall_at_1
value: 24.253
- type: recall_at_10
value: 77.383
- type: recall_at_100
value: 98.009
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 45.448
- type: recall_at_5
value: 57.965999999999994
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.69069567851087
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.35185490976283
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.71274951450321
- type: mrr
value: 76.06032625423207
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.73980520022269
- type: cos_sim_spearman
value: 84.24649792685918
- type: euclidean_pearson
value: 85.85197641158186
- type: euclidean_spearman
value: 84.24649792685918
- type: manhattan_pearson
value: 86.26809552711346
- type: manhattan_spearman
value: 84.56397504030865
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.25324675324674
- type: f1
value: 84.17872280892557
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.770253446400886
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.94307095497281
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.164
- type: map_at_10
value: 42.641
- type: map_at_100
value: 43.947
- type: map_at_1000
value: 44.074999999999996
- type: map_at_3
value: 39.592
- type: map_at_5
value: 41.204
- type: mrr_at_1
value: 39.628
- type: mrr_at_10
value: 48.625
- type: mrr_at_100
value: 49.368
- type: mrr_at_1000
value: 49.413000000000004
- type: mrr_at_3
value: 46.400000000000006
- type: mrr_at_5
value: 47.68
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 48.564
- type: ndcg_at_100
value: 53.507000000000005
- type: ndcg_at_1000
value: 55.635999999999996
- type: ndcg_at_3
value: 44.471
- type: ndcg_at_5
value: 46.137
- type: precision_at_1
value: 39.628
- type: precision_at_10
value: 8.856
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.268
- type: precision_at_5
value: 14.649000000000001
- type: recall_at_1
value: 32.164
- type: recall_at_10
value: 59.609
- type: recall_at_100
value: 80.521
- type: recall_at_1000
value: 94.245
- type: recall_at_3
value: 46.521
- type: recall_at_5
value: 52.083999999999996
- type: map_at_1
value: 31.526
- type: map_at_10
value: 41.581
- type: map_at_100
value: 42.815999999999995
- type: map_at_1000
value: 42.936
- type: map_at_3
value: 38.605000000000004
- type: map_at_5
value: 40.351
- type: mrr_at_1
value: 39.489999999999995
- type: mrr_at_10
value: 47.829
- type: mrr_at_100
value: 48.512
- type: mrr_at_1000
value: 48.552
- type: mrr_at_3
value: 45.754
- type: mrr_at_5
value: 46.986
- type: ndcg_at_1
value: 39.489999999999995
- type: ndcg_at_10
value: 47.269
- type: ndcg_at_100
value: 51.564
- type: ndcg_at_1000
value: 53.53099999999999
- type: ndcg_at_3
value: 43.301
- type: ndcg_at_5
value: 45.239000000000004
- type: precision_at_1
value: 39.489999999999995
- type: precision_at_10
value: 8.93
- type: precision_at_100
value: 1.415
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.892
- type: precision_at_5
value: 14.865999999999998
- type: recall_at_1
value: 31.526
- type: recall_at_10
value: 56.76
- type: recall_at_100
value: 75.029
- type: recall_at_1000
value: 87.491
- type: recall_at_3
value: 44.786
- type: recall_at_5
value: 50.254
- type: map_at_1
value: 40.987
- type: map_at_10
value: 52.827
- type: map_at_100
value: 53.751000000000005
- type: map_at_1000
value: 53.81
- type: map_at_3
value: 49.844
- type: map_at_5
value: 51.473
- type: mrr_at_1
value: 46.833999999999996
- type: mrr_at_10
value: 56.389
- type: mrr_at_100
value: 57.003
- type: mrr_at_1000
value: 57.034
- type: mrr_at_3
value: 54.17999999999999
- type: mrr_at_5
value: 55.486999999999995
- type: ndcg_at_1
value: 46.833999999999996
- type: ndcg_at_10
value: 58.372
- type: ndcg_at_100
value: 62.068
- type: ndcg_at_1000
value: 63.288
- type: ndcg_at_3
value: 53.400000000000006
- type: ndcg_at_5
value: 55.766000000000005
- type: precision_at_1
value: 46.833999999999996
- type: precision_at_10
value: 9.191
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.448
- type: precision_at_5
value: 15.862000000000002
- type: recall_at_1
value: 40.987
- type: recall_at_10
value: 71.146
- type: recall_at_100
value: 87.035
- type: recall_at_1000
value: 95.633
- type: recall_at_3
value: 58.025999999999996
- type: recall_at_5
value: 63.815999999999995
- type: map_at_1
value: 24.587
- type: map_at_10
value: 33.114
- type: map_at_100
value: 34.043
- type: map_at_1000
value: 34.123999999999995
- type: map_at_3
value: 30.45
- type: map_at_5
value: 31.813999999999997
- type: mrr_at_1
value: 26.554
- type: mrr_at_10
value: 35.148
- type: mrr_at_100
value: 35.926
- type: mrr_at_1000
value: 35.991
- type: mrr_at_3
value: 32.599000000000004
- type: mrr_at_5
value: 33.893
- type: ndcg_at_1
value: 26.554
- type: ndcg_at_10
value: 38.132
- type: ndcg_at_100
value: 42.78
- type: ndcg_at_1000
value: 44.919
- type: ndcg_at_3
value: 32.833
- type: ndcg_at_5
value: 35.168
- type: precision_at_1
value: 26.554
- type: precision_at_10
value: 5.921
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.861
- type: precision_at_5
value: 9.605
- type: recall_at_1
value: 24.587
- type: recall_at_10
value: 51.690000000000005
- type: recall_at_100
value: 73.428
- type: recall_at_1000
value: 89.551
- type: recall_at_3
value: 37.336999999999996
- type: recall_at_5
value: 43.047000000000004
- type: map_at_1
value: 16.715
- type: map_at_10
value: 24.251
- type: map_at_100
value: 25.326999999999998
- type: map_at_1000
value: 25.455
- type: map_at_3
value: 21.912000000000003
- type: map_at_5
value: 23.257
- type: mrr_at_1
value: 20.274
- type: mrr_at_10
value: 28.552
- type: mrr_at_100
value: 29.42
- type: mrr_at_1000
value: 29.497
- type: mrr_at_3
value: 26.14
- type: mrr_at_5
value: 27.502
- type: ndcg_at_1
value: 20.274
- type: ndcg_at_10
value: 29.088
- type: ndcg_at_100
value: 34.293
- type: ndcg_at_1000
value: 37.271
- type: ndcg_at_3
value: 24.708
- type: ndcg_at_5
value: 26.809
- type: precision_at_1
value: 20.274
- type: precision_at_10
value: 5.361
- type: precision_at_100
value: 0.915
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.733
- type: precision_at_5
value: 8.556999999999999
- type: recall_at_1
value: 16.715
- type: recall_at_10
value: 39.587
- type: recall_at_100
value: 62.336000000000006
- type: recall_at_1000
value: 83.453
- type: recall_at_3
value: 27.839999999999996
- type: recall_at_5
value: 32.952999999999996
- type: map_at_1
value: 28.793000000000003
- type: map_at_10
value: 38.582
- type: map_at_100
value: 39.881
- type: map_at_1000
value: 39.987
- type: map_at_3
value: 35.851
- type: map_at_5
value: 37.289
- type: mrr_at_1
value: 34.455999999999996
- type: mrr_at_10
value: 43.909
- type: mrr_at_100
value: 44.74
- type: mrr_at_1000
value: 44.786
- type: mrr_at_3
value: 41.659
- type: mrr_at_5
value: 43.010999999999996
- type: ndcg_at_1
value: 34.455999999999996
- type: ndcg_at_10
value: 44.266
- type: ndcg_at_100
value: 49.639
- type: ndcg_at_1000
value: 51.644
- type: ndcg_at_3
value: 39.865
- type: ndcg_at_5
value: 41.887
- type: precision_at_1
value: 34.455999999999996
- type: precision_at_10
value: 7.843999999999999
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 18.831999999999997
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 28.793000000000003
- type: recall_at_10
value: 55.68300000000001
- type: recall_at_100
value: 77.99000000000001
- type: recall_at_1000
value: 91.183
- type: recall_at_3
value: 43.293
- type: recall_at_5
value: 48.618
- type: map_at_1
value: 25.907000000000004
- type: map_at_10
value: 35.519
- type: map_at_100
value: 36.806
- type: map_at_1000
value: 36.912
- type: map_at_3
value: 32.748
- type: map_at_5
value: 34.232
- type: mrr_at_1
value: 31.621
- type: mrr_at_10
value: 40.687
- type: mrr_at_100
value: 41.583
- type: mrr_at_1000
value: 41.638999999999996
- type: mrr_at_3
value: 38.527
- type: mrr_at_5
value: 39.612
- type: ndcg_at_1
value: 31.621
- type: ndcg_at_10
value: 41.003
- type: ndcg_at_100
value: 46.617999999999995
- type: ndcg_at_1000
value: 48.82
- type: ndcg_at_3
value: 36.542
- type: ndcg_at_5
value: 38.368
- type: precision_at_1
value: 31.621
- type: precision_at_10
value: 7.396999999999999
- type: precision_at_100
value: 1.191
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 17.39
- type: precision_at_5
value: 12.1
- type: recall_at_1
value: 25.907000000000004
- type: recall_at_10
value: 52.115
- type: recall_at_100
value: 76.238
- type: recall_at_1000
value: 91.218
- type: recall_at_3
value: 39.417
- type: recall_at_5
value: 44.435
- type: map_at_1
value: 25.732166666666668
- type: map_at_10
value: 34.51616666666667
- type: map_at_100
value: 35.67241666666666
- type: map_at_1000
value: 35.78675
- type: map_at_3
value: 31.953416666666662
- type: map_at_5
value: 33.333
- type: mrr_at_1
value: 30.300166666666673
- type: mrr_at_10
value: 38.6255
- type: mrr_at_100
value: 39.46183333333334
- type: mrr_at_1000
value: 39.519999999999996
- type: mrr_at_3
value: 36.41299999999999
- type: mrr_at_5
value: 37.6365
- type: ndcg_at_1
value: 30.300166666666673
- type: ndcg_at_10
value: 39.61466666666667
- type: ndcg_at_100
value: 44.60808333333334
- type: ndcg_at_1000
value: 46.91708333333334
- type: ndcg_at_3
value: 35.26558333333333
- type: ndcg_at_5
value: 37.220000000000006
- type: precision_at_1
value: 30.300166666666673
- type: precision_at_10
value: 6.837416666666667
- type: precision_at_100
value: 1.10425
- type: precision_at_1000
value: 0.14875
- type: precision_at_3
value: 16.13716666666667
- type: precision_at_5
value: 11.2815
- type: recall_at_1
value: 25.732166666666668
- type: recall_at_10
value: 50.578916666666665
- type: recall_at_100
value: 72.42183333333334
- type: recall_at_1000
value: 88.48766666666667
- type: recall_at_3
value: 38.41325
- type: recall_at_5
value: 43.515750000000004
- type: map_at_1
value: 23.951
- type: map_at_10
value: 30.974
- type: map_at_100
value: 31.804
- type: map_at_1000
value: 31.900000000000002
- type: map_at_3
value: 28.762
- type: map_at_5
value: 29.94
- type: mrr_at_1
value: 26.534000000000002
- type: mrr_at_10
value: 33.553
- type: mrr_at_100
value: 34.297
- type: mrr_at_1000
value: 34.36
- type: mrr_at_3
value: 31.391000000000002
- type: mrr_at_5
value: 32.525999999999996
- type: ndcg_at_1
value: 26.534000000000002
- type: ndcg_at_10
value: 35.112
- type: ndcg_at_100
value: 39.28
- type: ndcg_at_1000
value: 41.723
- type: ndcg_at_3
value: 30.902
- type: ndcg_at_5
value: 32.759
- type: precision_at_1
value: 26.534000000000002
- type: precision_at_10
value: 5.445
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 12.986
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.951
- type: recall_at_10
value: 45.24
- type: recall_at_100
value: 64.12299999999999
- type: recall_at_1000
value: 82.28999999999999
- type: recall_at_3
value: 33.806000000000004
- type: recall_at_5
value: 38.277
- type: map_at_1
value: 16.829
- type: map_at_10
value: 23.684
- type: map_at_100
value: 24.683
- type: map_at_1000
value: 24.81
- type: map_at_3
value: 21.554000000000002
- type: map_at_5
value: 22.768
- type: mrr_at_1
value: 20.096
- type: mrr_at_10
value: 27.230999999999998
- type: mrr_at_100
value: 28.083999999999996
- type: mrr_at_1000
value: 28.166000000000004
- type: mrr_at_3
value: 25.212
- type: mrr_at_5
value: 26.32
- type: ndcg_at_1
value: 20.096
- type: ndcg_at_10
value: 27.989000000000004
- type: ndcg_at_100
value: 32.847
- type: ndcg_at_1000
value: 35.896
- type: ndcg_at_3
value: 24.116
- type: ndcg_at_5
value: 25.964
- type: precision_at_1
value: 20.096
- type: precision_at_10
value: 5
- type: precision_at_100
value: 0.8750000000000001
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 11.207
- type: precision_at_5
value: 8.08
- type: recall_at_1
value: 16.829
- type: recall_at_10
value: 37.407000000000004
- type: recall_at_100
value: 59.101000000000006
- type: recall_at_1000
value: 81.024
- type: recall_at_3
value: 26.739
- type: recall_at_5
value: 31.524
- type: map_at_1
value: 24.138
- type: map_at_10
value: 32.275999999999996
- type: map_at_100
value: 33.416000000000004
- type: map_at_1000
value: 33.527
- type: map_at_3
value: 29.854000000000003
- type: map_at_5
value: 31.096
- type: mrr_at_1
value: 28.450999999999997
- type: mrr_at_10
value: 36.214
- type: mrr_at_100
value: 37.134
- type: mrr_at_1000
value: 37.198
- type: mrr_at_3
value: 34.001999999999995
- type: mrr_at_5
value: 35.187000000000005
- type: ndcg_at_1
value: 28.450999999999997
- type: ndcg_at_10
value: 37.166
- type: ndcg_at_100
value: 42.454
- type: ndcg_at_1000
value: 44.976
- type: ndcg_at_3
value: 32.796
- type: ndcg_at_5
value: 34.631
- type: precision_at_1
value: 28.450999999999997
- type: precision_at_10
value: 6.241
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 14.801
- type: precision_at_5
value: 10.280000000000001
- type: recall_at_1
value: 24.138
- type: recall_at_10
value: 48.111
- type: recall_at_100
value: 71.245
- type: recall_at_1000
value: 88.986
- type: recall_at_3
value: 36.119
- type: recall_at_5
value: 40.846
- type: map_at_1
value: 23.244
- type: map_at_10
value: 31.227
- type: map_at_100
value: 33.007
- type: map_at_1000
value: 33.223
- type: map_at_3
value: 28.924
- type: map_at_5
value: 30.017
- type: mrr_at_1
value: 27.668
- type: mrr_at_10
value: 35.524
- type: mrr_at_100
value: 36.699
- type: mrr_at_1000
value: 36.759
- type: mrr_at_3
value: 33.366
- type: mrr_at_5
value: 34.552
- type: ndcg_at_1
value: 27.668
- type: ndcg_at_10
value: 36.381
- type: ndcg_at_100
value: 43.062
- type: ndcg_at_1000
value: 45.656
- type: ndcg_at_3
value: 32.501999999999995
- type: ndcg_at_5
value: 34.105999999999995
- type: precision_at_1
value: 27.668
- type: precision_at_10
value: 6.798
- type: precision_at_100
value: 1.492
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 15.152
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.244
- type: recall_at_10
value: 45.979
- type: recall_at_100
value: 74.822
- type: recall_at_1000
value: 91.078
- type: recall_at_3
value: 34.925
- type: recall_at_5
value: 39.126
- type: map_at_1
value: 19.945
- type: map_at_10
value: 27.517999999999997
- type: map_at_100
value: 28.588
- type: map_at_1000
value: 28.682000000000002
- type: map_at_3
value: 25.345000000000002
- type: map_at_5
value: 26.555
- type: mrr_at_1
value: 21.996
- type: mrr_at_10
value: 29.845
- type: mrr_at_100
value: 30.775999999999996
- type: mrr_at_1000
value: 30.845
- type: mrr_at_3
value: 27.726
- type: mrr_at_5
value: 28.882
- type: ndcg_at_1
value: 21.996
- type: ndcg_at_10
value: 32.034
- type: ndcg_at_100
value: 37.185
- type: ndcg_at_1000
value: 39.645
- type: ndcg_at_3
value: 27.750999999999998
- type: ndcg_at_5
value: 29.805999999999997
- type: precision_at_1
value: 21.996
- type: precision_at_10
value: 5.065
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 12.076
- type: precision_at_5
value: 8.392
- type: recall_at_1
value: 19.945
- type: recall_at_10
value: 43.62
- type: recall_at_100
value: 67.194
- type: recall_at_1000
value: 85.7
- type: recall_at_3
value: 32.15
- type: recall_at_5
value: 37.208999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.279
- type: map_at_10
value: 31.052999999999997
- type: map_at_100
value: 33.125
- type: map_at_1000
value: 33.306000000000004
- type: map_at_3
value: 26.208
- type: map_at_5
value: 28.857
- type: mrr_at_1
value: 42.671
- type: mrr_at_10
value: 54.557
- type: mrr_at_100
value: 55.142
- type: mrr_at_1000
value: 55.169000000000004
- type: mrr_at_3
value: 51.488
- type: mrr_at_5
value: 53.439
- type: ndcg_at_1
value: 42.671
- type: ndcg_at_10
value: 41.276
- type: ndcg_at_100
value: 48.376000000000005
- type: ndcg_at_1000
value: 51.318
- type: ndcg_at_3
value: 35.068
- type: ndcg_at_5
value: 37.242
- type: precision_at_1
value: 42.671
- type: precision_at_10
value: 12.638
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 26.08
- type: precision_at_5
value: 19.805
- type: recall_at_1
value: 18.279
- type: recall_at_10
value: 46.946
- type: recall_at_100
value: 70.97200000000001
- type: recall_at_1000
value: 87.107
- type: recall_at_3
value: 31.147999999999996
- type: recall_at_5
value: 38.099
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.573
- type: map_at_10
value: 19.747
- type: map_at_100
value: 28.205000000000002
- type: map_at_1000
value: 29.831000000000003
- type: map_at_3
value: 14.109
- type: map_at_5
value: 16.448999999999998
- type: mrr_at_1
value: 71
- type: mrr_at_10
value: 77.68599999999999
- type: mrr_at_100
value: 77.995
- type: mrr_at_1000
value: 78.00200000000001
- type: mrr_at_3
value: 76.292
- type: mrr_at_5
value: 77.029
- type: ndcg_at_1
value: 59.12500000000001
- type: ndcg_at_10
value: 43.9
- type: ndcg_at_100
value: 47.863
- type: ndcg_at_1000
value: 54.848
- type: ndcg_at_3
value: 49.803999999999995
- type: ndcg_at_5
value: 46.317
- type: precision_at_1
value: 71
- type: precision_at_10
value: 34.4
- type: precision_at_100
value: 11.063
- type: precision_at_1000
value: 1.989
- type: precision_at_3
value: 52.333
- type: precision_at_5
value: 43.7
- type: recall_at_1
value: 8.573
- type: recall_at_10
value: 25.615
- type: recall_at_100
value: 53.385000000000005
- type: recall_at_1000
value: 75.46000000000001
- type: recall_at_3
value: 15.429
- type: recall_at_5
value: 19.357
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.989999999999995
- type: f1
value: 42.776314451497555
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.13499999999999
- type: map_at_10
value: 82.825
- type: map_at_100
value: 83.096
- type: map_at_1000
value: 83.111
- type: map_at_3
value: 81.748
- type: map_at_5
value: 82.446
- type: mrr_at_1
value: 79.553
- type: mrr_at_10
value: 86.654
- type: mrr_at_100
value: 86.774
- type: mrr_at_1000
value: 86.778
- type: mrr_at_3
value: 85.981
- type: mrr_at_5
value: 86.462
- type: ndcg_at_1
value: 79.553
- type: ndcg_at_10
value: 86.345
- type: ndcg_at_100
value: 87.32
- type: ndcg_at_1000
value: 87.58200000000001
- type: ndcg_at_3
value: 84.719
- type: ndcg_at_5
value: 85.677
- type: precision_at_1
value: 79.553
- type: precision_at_10
value: 10.402000000000001
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.413
- type: precision_at_5
value: 20.138
- type: recall_at_1
value: 74.13499999999999
- type: recall_at_10
value: 93.215
- type: recall_at_100
value: 97.083
- type: recall_at_1000
value: 98.732
- type: recall_at_3
value: 88.79
- type: recall_at_5
value: 91.259
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.298000000000002
- type: map_at_10
value: 29.901
- type: map_at_100
value: 31.528
- type: map_at_1000
value: 31.713
- type: map_at_3
value: 25.740000000000002
- type: map_at_5
value: 28.227999999999998
- type: mrr_at_1
value: 36.728
- type: mrr_at_10
value: 45.401
- type: mrr_at_100
value: 46.27
- type: mrr_at_1000
value: 46.315
- type: mrr_at_3
value: 42.978
- type: mrr_at_5
value: 44.29
- type: ndcg_at_1
value: 36.728
- type: ndcg_at_10
value: 37.456
- type: ndcg_at_100
value: 43.832
- type: ndcg_at_1000
value: 47
- type: ndcg_at_3
value: 33.694
- type: ndcg_at_5
value: 35.085
- type: precision_at_1
value: 36.728
- type: precision_at_10
value: 10.386
- type: precision_at_100
value: 1.701
- type: precision_at_1000
value: 0.22599999999999998
- type: precision_at_3
value: 22.479
- type: precision_at_5
value: 16.605
- type: recall_at_1
value: 18.298000000000002
- type: recall_at_10
value: 44.369
- type: recall_at_100
value: 68.098
- type: recall_at_1000
value: 87.21900000000001
- type: recall_at_3
value: 30.215999999999998
- type: recall_at_5
value: 36.861
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.568
- type: map_at_10
value: 65.061
- type: map_at_100
value: 65.896
- type: map_at_1000
value: 65.95100000000001
- type: map_at_3
value: 61.831
- type: map_at_5
value: 63.849000000000004
- type: mrr_at_1
value: 79.136
- type: mrr_at_10
value: 84.58200000000001
- type: mrr_at_100
value: 84.765
- type: mrr_at_1000
value: 84.772
- type: mrr_at_3
value: 83.684
- type: mrr_at_5
value: 84.223
- type: ndcg_at_1
value: 79.136
- type: ndcg_at_10
value: 72.622
- type: ndcg_at_100
value: 75.539
- type: ndcg_at_1000
value: 76.613
- type: ndcg_at_3
value: 68.065
- type: ndcg_at_5
value: 70.58
- type: precision_at_1
value: 79.136
- type: precision_at_10
value: 15.215
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 44.011
- type: precision_at_5
value: 28.388999999999996
- type: recall_at_1
value: 39.568
- type: recall_at_10
value: 76.077
- type: recall_at_100
value: 87.481
- type: recall_at_1000
value: 94.56400000000001
- type: recall_at_3
value: 66.01599999999999
- type: recall_at_5
value: 70.97200000000001
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.312
- type: ap
value: 80.36296867333715
- type: f1
value: 85.26613311552218
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.363999999999997
- type: map_at_10
value: 35.711999999999996
- type: map_at_100
value: 36.876999999999995
- type: map_at_1000
value: 36.923
- type: map_at_3
value: 32.034
- type: map_at_5
value: 34.159
- type: mrr_at_1
value: 24.04
- type: mrr_at_10
value: 36.345
- type: mrr_at_100
value: 37.441
- type: mrr_at_1000
value: 37.480000000000004
- type: mrr_at_3
value: 32.713
- type: mrr_at_5
value: 34.824
- type: ndcg_at_1
value: 24.026
- type: ndcg_at_10
value: 42.531
- type: ndcg_at_100
value: 48.081
- type: ndcg_at_1000
value: 49.213
- type: ndcg_at_3
value: 35.044
- type: ndcg_at_5
value: 38.834
- type: precision_at_1
value: 24.026
- type: precision_at_10
value: 6.622999999999999
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.909
- type: precision_at_5
value: 10.871
- type: recall_at_1
value: 23.363999999999997
- type: recall_at_10
value: 63.426
- type: recall_at_100
value: 88.96300000000001
- type: recall_at_1000
value: 97.637
- type: recall_at_3
value: 43.095
- type: recall_at_5
value: 52.178000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.0095759233926
- type: f1
value: 92.78387794667408
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.0296397628819
- type: f1
value: 58.45699589820874
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.45662407531944
- type: f1
value: 71.42364781421813
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07800941492937
- type: f1
value: 77.22799045640845
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.531234379250606
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.941490381193802
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.3115090856725
- type: mrr
value: 31.290667638675757
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.465
- type: map_at_10
value: 13.03
- type: map_at_100
value: 16.057
- type: map_at_1000
value: 17.49
- type: map_at_3
value: 9.553
- type: map_at_5
value: 11.204
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 53.269
- type: mrr_at_100
value: 53.72
- type: mrr_at_1000
value: 53.761
- type: mrr_at_3
value: 50.929
- type: mrr_at_5
value: 52.461
- type: ndcg_at_1
value: 42.26
- type: ndcg_at_10
value: 34.673
- type: ndcg_at_100
value: 30.759999999999998
- type: ndcg_at_1000
value: 39.728
- type: ndcg_at_3
value: 40.349000000000004
- type: ndcg_at_5
value: 37.915
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.789
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.596000000000004
- type: precision_at_5
value: 33.251
- type: recall_at_1
value: 5.465
- type: recall_at_10
value: 17.148
- type: recall_at_100
value: 29.768
- type: recall_at_1000
value: 62.239
- type: recall_at_3
value: 10.577
- type: recall_at_5
value: 13.315
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.008
- type: map_at_10
value: 52.467
- type: map_at_100
value: 53.342999999999996
- type: map_at_1000
value: 53.366
- type: map_at_3
value: 48.412
- type: map_at_5
value: 50.875
- type: mrr_at_1
value: 41.541
- type: mrr_at_10
value: 54.967
- type: mrr_at_100
value: 55.611
- type: mrr_at_1000
value: 55.627
- type: mrr_at_3
value: 51.824999999999996
- type: mrr_at_5
value: 53.763000000000005
- type: ndcg_at_1
value: 41.541
- type: ndcg_at_10
value: 59.724999999999994
- type: ndcg_at_100
value: 63.38700000000001
- type: ndcg_at_1000
value: 63.883
- type: ndcg_at_3
value: 52.331
- type: ndcg_at_5
value: 56.327000000000005
- type: precision_at_1
value: 41.541
- type: precision_at_10
value: 9.447
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.262
- type: precision_at_5
value: 16.314999999999998
- type: recall_at_1
value: 37.008
- type: recall_at_10
value: 79.145
- type: recall_at_100
value: 94.986
- type: recall_at_1000
value: 98.607
- type: recall_at_3
value: 60.277
- type: recall_at_5
value: 69.407
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.402
- type: map_at_10
value: 84.181
- type: map_at_100
value: 84.796
- type: map_at_1000
value: 84.81400000000001
- type: map_at_3
value: 81.209
- type: map_at_5
value: 83.085
- type: mrr_at_1
value: 81.02000000000001
- type: mrr_at_10
value: 87.263
- type: mrr_at_100
value: 87.36
- type: mrr_at_1000
value: 87.36
- type: mrr_at_3
value: 86.235
- type: mrr_at_5
value: 86.945
- type: ndcg_at_1
value: 81.01
- type: ndcg_at_10
value: 87.99900000000001
- type: ndcg_at_100
value: 89.217
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 85.053
- type: ndcg_at_5
value: 86.703
- type: precision_at_1
value: 81.01
- type: precision_at_10
value: 13.336
- type: precision_at_100
value: 1.52
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 24.44
- type: recall_at_1
value: 70.402
- type: recall_at_10
value: 95.214
- type: recall_at_100
value: 99.438
- type: recall_at_1000
value: 99.928
- type: recall_at_3
value: 86.75699999999999
- type: recall_at_5
value: 91.44099999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.51721502758904
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.054808572333016
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.578
- type: map_at_10
value: 11.036999999999999
- type: map_at_100
value: 12.879999999999999
- type: map_at_1000
value: 13.150999999999998
- type: map_at_3
value: 8.133
- type: map_at_5
value: 9.559
- type: mrr_at_1
value: 22.6
- type: mrr_at_10
value: 32.68
- type: mrr_at_100
value: 33.789
- type: mrr_at_1000
value: 33.854
- type: mrr_at_3
value: 29.7
- type: mrr_at_5
value: 31.480000000000004
- type: ndcg_at_1
value: 22.6
- type: ndcg_at_10
value: 18.616
- type: ndcg_at_100
value: 25.883
- type: ndcg_at_1000
value: 30.944
- type: ndcg_at_3
value: 18.136
- type: ndcg_at_5
value: 15.625
- type: precision_at_1
value: 22.6
- type: precision_at_10
value: 9.48
- type: precision_at_100
value: 1.991
- type: precision_at_1000
value: 0.321
- type: precision_at_3
value: 16.8
- type: precision_at_5
value: 13.54
- type: recall_at_1
value: 4.578
- type: recall_at_10
value: 19.213
- type: recall_at_100
value: 40.397
- type: recall_at_1000
value: 65.2
- type: recall_at_3
value: 10.208
- type: recall_at_5
value: 13.718
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.44288351714071
- type: cos_sim_spearman
value: 79.37995604564952
- type: euclidean_pearson
value: 81.1078874670718
- type: euclidean_spearman
value: 79.37995905980499
- type: manhattan_pearson
value: 81.03697527288986
- type: manhattan_spearman
value: 79.33490235296236
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.95557650436523
- type: cos_sim_spearman
value: 78.5190672399868
- type: euclidean_pearson
value: 81.58064025904707
- type: euclidean_spearman
value: 78.5190672399868
- type: manhattan_pearson
value: 81.52857930619889
- type: manhattan_spearman
value: 78.50421361308034
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.79128416228737
- type: cos_sim_spearman
value: 86.05402451477147
- type: euclidean_pearson
value: 85.46280267054289
- type: euclidean_spearman
value: 86.05402451477147
- type: manhattan_pearson
value: 85.46278563858236
- type: manhattan_spearman
value: 86.08079590861004
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.20623089568763
- type: cos_sim_spearman
value: 81.53786907061009
- type: euclidean_pearson
value: 82.82272250091494
- type: euclidean_spearman
value: 81.53786907061009
- type: manhattan_pearson
value: 82.78850494027013
- type: manhattan_spearman
value: 81.5135618083407
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.46366618397936
- type: cos_sim_spearman
value: 86.96566013336908
- type: euclidean_pearson
value: 86.62651697548931
- type: euclidean_spearman
value: 86.96565526364454
- type: manhattan_pearson
value: 86.58812160258009
- type: manhattan_spearman
value: 86.9336484321288
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.51858358641559
- type: cos_sim_spearman
value: 84.7652527954999
- type: euclidean_pearson
value: 84.23914783766861
- type: euclidean_spearman
value: 84.7652527954999
- type: manhattan_pearson
value: 84.22749648503171
- type: manhattan_spearman
value: 84.74527996746386
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.28026563313065
- type: cos_sim_spearman
value: 87.46928143824915
- type: euclidean_pearson
value: 88.30558762000372
- type: euclidean_spearman
value: 87.46928143824915
- type: manhattan_pearson
value: 88.10513330809331
- type: manhattan_spearman
value: 87.21069787834173
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.376497134587375
- type: cos_sim_spearman
value: 65.0159550112516
- type: euclidean_pearson
value: 65.64572120879598
- type: euclidean_spearman
value: 65.0159550112516
- type: manhattan_pearson
value: 65.88143604989976
- type: manhattan_spearman
value: 65.17547297222434
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.22876368947644
- type: cos_sim_spearman
value: 85.46935577445318
- type: euclidean_pearson
value: 85.32830231392005
- type: euclidean_spearman
value: 85.46935577445318
- type: manhattan_pearson
value: 85.30353211758495
- type: manhattan_spearman
value: 85.42821085956945
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.60986667767133
- type: mrr
value: 94.29432314236236
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.528
- type: map_at_10
value: 65.187
- type: map_at_100
value: 65.62599999999999
- type: map_at_1000
value: 65.657
- type: map_at_3
value: 62.352
- type: map_at_5
value: 64.025
- type: mrr_at_1
value: 57.333
- type: mrr_at_10
value: 66.577
- type: mrr_at_100
value: 66.88
- type: mrr_at_1000
value: 66.908
- type: mrr_at_3
value: 64.556
- type: mrr_at_5
value: 65.739
- type: ndcg_at_1
value: 57.333
- type: ndcg_at_10
value: 70.275
- type: ndcg_at_100
value: 72.136
- type: ndcg_at_1000
value: 72.963
- type: ndcg_at_3
value: 65.414
- type: ndcg_at_5
value: 67.831
- type: precision_at_1
value: 57.333
- type: precision_at_10
value: 9.5
- type: precision_at_100
value: 1.057
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.778000000000002
- type: precision_at_5
value: 17.2
- type: recall_at_1
value: 54.528
- type: recall_at_10
value: 84.356
- type: recall_at_100
value: 92.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.283
- type: recall_at_5
value: 77.14999999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74158415841585
- type: cos_sim_ap
value: 92.90048959850317
- type: cos_sim_f1
value: 86.35650810245687
- type: cos_sim_precision
value: 90.4709748083242
- type: cos_sim_recall
value: 82.6
- type: dot_accuracy
value: 99.74158415841585
- type: dot_ap
value: 92.90048959850317
- type: dot_f1
value: 86.35650810245687
- type: dot_precision
value: 90.4709748083242
- type: dot_recall
value: 82.6
- type: euclidean_accuracy
value: 99.74158415841585
- type: euclidean_ap
value: 92.90048959850317
- type: euclidean_f1
value: 86.35650810245687
- type: euclidean_precision
value: 90.4709748083242
- type: euclidean_recall
value: 82.6
- type: manhattan_accuracy
value: 99.74158415841585
- type: manhattan_ap
value: 92.87344692947894
- type: manhattan_f1
value: 86.38497652582159
- type: manhattan_precision
value: 90.29443838604145
- type: manhattan_recall
value: 82.8
- type: max_accuracy
value: 99.74158415841585
- type: max_ap
value: 92.90048959850317
- type: max_f1
value: 86.38497652582159
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 63.191648770424216
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.02944668730218
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.466386167525265
- type: mrr
value: 51.19071492233257
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.198022505886435
- type: cos_sim_spearman
value: 30.40170257939193
- type: dot_pearson
value: 30.198015316402614
- type: dot_spearman
value: 30.40170257939193
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.242
- type: map_at_10
value: 2.17
- type: map_at_100
value: 12.221
- type: map_at_1000
value: 28.63
- type: map_at_3
value: 0.728
- type: map_at_5
value: 1.185
- type: mrr_at_1
value: 94
- type: mrr_at_10
value: 97
- type: mrr_at_100
value: 97
- type: mrr_at_1000
value: 97
- type: mrr_at_3
value: 97
- type: mrr_at_5
value: 97
- type: ndcg_at_1
value: 89
- type: ndcg_at_10
value: 82.30499999999999
- type: ndcg_at_100
value: 61.839999999999996
- type: ndcg_at_1000
value: 53.381
- type: ndcg_at_3
value: 88.877
- type: ndcg_at_5
value: 86.05199999999999
- type: precision_at_1
value: 94
- type: precision_at_10
value: 87
- type: precision_at_100
value: 63.38
- type: precision_at_1000
value: 23.498
- type: precision_at_3
value: 94
- type: precision_at_5
value: 92
- type: recall_at_1
value: 0.242
- type: recall_at_10
value: 2.302
- type: recall_at_100
value: 14.979000000000001
- type: recall_at_1000
value: 49.638
- type: recall_at_3
value: 0.753
- type: recall_at_5
value: 1.226
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.006
- type: map_at_10
value: 11.805
- type: map_at_100
value: 18.146
- type: map_at_1000
value: 19.788
- type: map_at_3
value: 5.914
- type: map_at_5
value: 8.801
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 56.36600000000001
- type: mrr_at_100
value: 56.721999999999994
- type: mrr_at_1000
value: 56.721999999999994
- type: mrr_at_3
value: 52.041000000000004
- type: mrr_at_5
value: 54.796
- type: ndcg_at_1
value: 37.755
- type: ndcg_at_10
value: 29.863
- type: ndcg_at_100
value: 39.571
- type: ndcg_at_1000
value: 51.385999999999996
- type: ndcg_at_3
value: 32.578
- type: ndcg_at_5
value: 32.351
- type: precision_at_1
value: 40.816
- type: precision_at_10
value: 26.531
- type: precision_at_100
value: 7.796
- type: precision_at_1000
value: 1.555
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.006
- type: recall_at_10
value: 18.738
- type: recall_at_100
value: 48.058
- type: recall_at_1000
value: 83.41300000000001
- type: recall_at_3
value: 7.166
- type: recall_at_5
value: 12.102
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.4178
- type: ap
value: 14.648781342150446
- type: f1
value: 55.07299194946378
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.919637804187886
- type: f1
value: 61.24122013967399
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.207896583685695
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.23114978840078
- type: cos_sim_ap
value: 74.26624727825818
- type: cos_sim_f1
value: 68.72377190817083
- type: cos_sim_precision
value: 64.56400742115028
- type: cos_sim_recall
value: 73.45646437994723
- type: dot_accuracy
value: 86.23114978840078
- type: dot_ap
value: 74.26624032659652
- type: dot_f1
value: 68.72377190817083
- type: dot_precision
value: 64.56400742115028
- type: dot_recall
value: 73.45646437994723
- type: euclidean_accuracy
value: 86.23114978840078
- type: euclidean_ap
value: 74.26624714480556
- type: euclidean_f1
value: 68.72377190817083
- type: euclidean_precision
value: 64.56400742115028
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.16558383501221
- type: manhattan_ap
value: 74.2091943976357
- type: manhattan_f1
value: 68.64221520524654
- type: manhattan_precision
value: 63.59135913591359
- type: manhattan_recall
value: 74.5646437994723
- type: max_accuracy
value: 86.23114978840078
- type: max_ap
value: 74.26624727825818
- type: max_f1
value: 68.72377190817083
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.3681841114604
- type: cos_sim_ap
value: 86.65166387498546
- type: cos_sim_f1
value: 79.02581944698774
- type: cos_sim_precision
value: 75.35796605434099
- type: cos_sim_recall
value: 83.06898675700647
- type: dot_accuracy
value: 89.3681841114604
- type: dot_ap
value: 86.65166019802056
- type: dot_f1
value: 79.02581944698774
- type: dot_precision
value: 75.35796605434099
- type: dot_recall
value: 83.06898675700647
- type: euclidean_accuracy
value: 89.3681841114604
- type: euclidean_ap
value: 86.65166462876266
- type: euclidean_f1
value: 79.02581944698774
- type: euclidean_precision
value: 75.35796605434099
- type: euclidean_recall
value: 83.06898675700647
- type: manhattan_accuracy
value: 89.36624364497226
- type: manhattan_ap
value: 86.65076471274106
- type: manhattan_f1
value: 79.07408783532733
- type: manhattan_precision
value: 76.41102972856527
- type: manhattan_recall
value: 81.92947336002464
- type: max_accuracy
value: 89.3681841114604
- type: max_ap
value: 86.65166462876266
- type: max_f1
value: 79.07408783532733
---
# nomic-embed-text-v1.5: Resizable Production Embeddings with Matryoshka Representation Learning
**Exciting Update!**: `nomic-embed-text-v1.5` is now multimodal! [nomic-embed-vision-v1.5](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) is aligned to the embedding space of `nomic-embed-text-v1.5`, meaning any text embedding is multimodal!
## Usage
**Important**: the text prompt *must* include a *task instruction prefix*, instructing the model which task is being performed.
For example, if you are implementing a RAG application, you embed your documents as `search_document: <text here>` and embed your user queries as `search_query: <text here>`.
## Task instruction prefixes
### `search_document`
#### Purpose: embed texts as documents from a dataset
This prefix is used for embedding texts as documents, for example as documents for a RAG index.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
sentences = ['search_document: TSNE is a dimensionality reduction algorithm created by Laurens van Der Maaten']
embeddings = model.encode(sentences)
print(embeddings)
```
### `search_query`
#### Purpose: embed texts as questions to answer
This prefix is used for embedding texts as questions that documents from a dataset could resolve, for example as queries to be answered by a RAG application.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
sentences = ['search_query: Who is Laurens van Der Maaten?']
embeddings = model.encode(sentences)
print(embeddings)
```
### `clustering`
#### Purpose: embed texts to group them into clusters
This prefix is used for embedding texts in order to group them into clusters, discover common topics, or remove semantic duplicates.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
sentences = ['clustering: the quick brown fox']
embeddings = model.encode(sentences)
print(embeddings)
```
### `classification`
#### Purpose: embed texts to classify them
This prefix is used for embedding texts into vectors that will be used as features for a classification model.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
sentences = ['classification: the quick brown fox']
embeddings = model.encode(sentences)
print(embeddings)
```
### Sentence Transformers
```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
matryoshka_dim = 512
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings = F.layer_norm(embeddings, normalized_shape=(embeddings.shape[1],))
embeddings = embeddings[:, :matryoshka_dim]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings)
```
### Transformers
```diff
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1.5', trust_remote_code=True, safe_serialization=True)
model.eval()
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+ matryoshka_dim = 512
with torch.no_grad():
model_output = model(**encoded_input)
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+ embeddings = F.layer_norm(embeddings, normalized_shape=(embeddings.shape[1],))
+ embeddings = embeddings[:, :matryoshka_dim]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings)
```
The model natively supports scaling of the sequence length past 2048 tokens. To do so, adjust the tokenizer and model loading as follows (a consolidated example follows the diff):
```diff
- tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
+ tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192)
- model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True)
+ model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True, rotary_scaling_factor=2)
```
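Putting the two changes together with the earlier Transformers example, a consolidated long-context setup looks roughly like this (a minimal sketch; the model id and `rotary_scaling_factor=2` are taken verbatim from the diff above, and the example document is a placeholder):
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# 8192-token tokenizer limit plus rotary scaling, as in the diff above.
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192)
model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True, rotary_scaling_factor=2)
model.eval()

sentences = ['search_document: <a long document goes here>']
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)
```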
### Transformers.js
```js
import { pipeline, layer_norm } from '@xenova/transformers';
// Create a feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'nomic-ai/nomic-embed-text-v1.5', {
quantized: false, // Comment out this line to use the quantized version
});
// Define sentences
const texts = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?'];
// Compute sentence embeddings
let embeddings = await extractor(texts, { pooling: 'mean' });
console.log(embeddings); // Tensor of shape [2, 768]
const matryoshka_dim = 512;
embeddings = layer_norm(embeddings, [embeddings.dims[1]])
.slice(null, [0, matryoshka_dim])
.normalize(2, -1);
console.log(embeddings.tolist());
```
## Nomic API
The easiest way to use Nomic Embed is through the Nomic Embedding API.
Generating embeddings with the `nomic` Python client is as easy as
```python
from nomic import embed
output = embed.text(
texts=['Nomic Embedding API', '#keepAIOpen'],
model='nomic-embed-text-v1.5',
task_type='search_document',
dimensionality=256,
)
print(output)
```
For more information, see the [API reference](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
## Infinity
Usage with [Infinity](https://github.com/michaelfeil/infinity).
```bash
docker run --gpus all -v $PWD/data:/app/.cache -e HF_TOKEN=$HF_TOKEN -p "7997":"7997" \
michaelf34/infinity:0.0.70 \
v2 --model-id nomic-ai/nomic-embed-text-v1.5 --revision "main" --dtype float16 --batch-size 8 --engine torch --port 7997 --no-bettertransformer
```
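Once the container is up, it can be queried over HTTP. The sketch below assumes Infinity's OpenAI-compatible `/embeddings` route on the port mapped above; check the Infinity documentation if your deployment differs, and remember to keep the task instruction prefixes.
```python
import requests

# Hypothetical client call against the container started above (localhost:7997).
resp = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "nomic-ai/nomic-embed-text-v1.5",
        "input": ["search_query: What is TSNE?"],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"][0]["embedding"][:8])  # first few dimensions
```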
## Adjusting Dimensionality
`nomic-embed-text-v1.5` is an improvement upon [Nomic Embed](https://huggingface.co/nomic-ai/nomic-embed-text-v1) that utilizes [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147), which gives developers the flexibility to trade off embedding size for a negligible reduction in performance; a short usage sketch follows the table below.
| Name | SeqLen | Dimension | MTEB |
| :-------------------------------:| :----- | :-------- | :------: |
| nomic-embed-text-v1 | 8192 | 768 | **62.39** |
| nomic-embed-text-v1.5 | 8192 | 768 | 62.28 |
| nomic-embed-text-v1.5 | 8192 | 512 | 61.96 |
| nomic-embed-text-v1.5 | 8192 | 256 | 61.04 |
| nomic-embed-text-v1.5 | 8192 | 128 | 59.34 |
| nomic-embed-text-v1.5 | 8192 | 64 | 56.10 |

## Training
Click the Nomic Atlas map below to visualize a 5M sample of our contrastive pretraining data!
[](https://atlas.nomic.ai/map/nomic-text-embed-v1-5m-sample)
We train our embedder using a multi-stage training pipeline. Starting from a long-context [BERT model](https://huggingface.co/nomic-ai/nomic-bert-2048),
the first unsupervised contrastive stage trains on a dataset generated from weakly related text pairs, such as question-answer pairs from forums like StackExchange and Quora, title-body pairs from Amazon reviews, and summarizations from news articles.
In the second finetuning stage, higher-quality labeled datasets such as search queries and answers from web searches are leveraged. Data curation and hard-example mining are crucial in this stage.
For more details, see the Nomic Embed [Technical Report](https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf) and corresponding [blog post](https://blog.nomic.ai/posts/nomic-embed-matryoshka).
The training data is released in its entirety. For more details, see the `contrastors` [repository](https://github.com/nomic-ai/contrastors).
# Join the Nomic Community
- Nomic: [https://nomic.ai](https://nomic.ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
# Citation
If you find the model, dataset, or training code useful, please cite our work
```bibtex
@misc{nussbaum2024nomic,
title={Nomic Embed: Training a Reproducible Long Context Text Embedder},
author={Zach Nussbaum and John X. Morris and Brandon Duderstadt and Andriy Mulyar},
year={2024},
eprint={2402.01613},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
apple/OpenELM-3B-Instruct | apple | text-generation | [
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:2404.14619",
"license:apple-amlr",
"autotrain_compatible",
"region:us"
] | 1,712,958,743,000 | 2025-02-28T18:31:32 | 11,154 | 330 | ---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-3B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. For example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-3B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-3B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
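If you would rather load the checkpoint directly with `transformers` instead of going through `generate_openelm.py`, the following is a rough sketch (not an official example): it assumes access to both gated repositories and uses the LLaMA-2 tokenizer noted in the Evaluation section below, which adds the required BOS token by default.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM reuses the LLaMA tokenizer (see the Evaluation section below).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-3B-Instruct",
    trust_remote_code=True,
)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```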
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
```
### Evaluate OpenELM
```bash
# OpenELM-3B-Instruct
hf_model=apple/OpenELM-3B-Instruct
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
| [
"SCIQ"
] | Non_BioNLP |
BigSalmon/InformalToFormalLincoln84Paraphrase | BigSalmon | text-generation | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,665,268,514,000 | 2022-10-10T00:36:04 | 32 | 0 | ---
{}
---
data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
Phrase Mask
Infill
Infilling
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln84Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln84Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
import torch

prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
# Score every candidate next token and keep the 250 most likely ones.
text = tokenizer.encode(prompt)
myinput = torch.tensor([text]).to(device)
logits, past_key_values = model(myinput, past_key_values=None, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
best_probabilities = probabilities[best_indices].tolist()
text.append(best_indices[0].item())  # greedily extend the prompt with the top token
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above):
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```
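To actually run one of these infill prompts with the tokenizer and model loaded at the top of this card, a rough sketch (sampling settings are illustrative, not tuned):
```
prompt = "his contention [blank] by the evidence [sep]"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    input_ids=input_ids,
    max_length=input_ids.shape[1] + 10,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
)
for output in outputs:
    # The infilled phrase appears between "[sep]" and "[answer]".
    print(tokenizer.decode(output, skip_special_tokens=True))
```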
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
Backwards
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
```
accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult
(a) in reverential tones
(b) with great affection
(c) in adulatory fashion
(d) in glowing terms
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
*Note:* Of all the masking techniques, this one works best.
```
<Prefix> the atlanta hawks may attribute <Suffix> trae young <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Suffix> honor <Middle> is a singularly prestigious <Middle>
```
```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
```
```
<Suffix> of internationality <Prefix> there are examples of strategies that have <Middle> withstood the test <Middle>
```
```
test: the movie's success has ..... the producer's earlier failure. (a) overshadowed, (b) obscured, (c) outshone, (d) offset
```
```
complete: as sales figures soared, so too did [hiring openings] -> employment opportunities. just as noteworthy was [wages increased] -> the effect on wage growth.
***
complete: in exchange for a small uptick in the labor bill, they were able to [get holiday season most money possible] -> ( wring the most out of the holiday season / maximize the proceeds of the holiday season / milk the holiday season for all its worth ).
```
```
h: of all the obligations of adulthood, there is none that elicits more scorn than paying taxes. indeed, any exchange where money is taken and not received in return is bound to be the source of frustration. understandably, the impulse is to want to hold on to every dollar earned. yet, this urge must be tempered by a realization that tax revenue is essential to a functional society.
question: what does “this urge” mean in the above context?
(a) anti-tax sentiment
(b) libertarian disposition
(c) yearning to retain every dollar
(d) resistance to redistribution
(e) misanthropic attitude
``` | [
"BEAR"
] | Non_BioNLP |
tsavage68/MedQA_L3_300steps_1e6rate_01beta_CSFTDPO | tsavage68 | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,716,493,511,000 | 2024-05-23T19:49:16 | 4 | 0 | ---
base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT
license: llama3
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_300steps_1e6rate_01beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_300steps_1e6rate_01beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4661
- Rewards/chosen: 0.6273
- Rewards/rejected: -0.3771
- Rewards/accuracies: 0.7604
- Rewards/margins: 1.0045
- Logps/rejected: -37.6261
- Logps/chosen: -25.0552
- Logits/rejected: -0.8801
- Logits/chosen: -0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
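Pending more detailed documentation, a minimal loading sketch is shown below. It assumes the standard Llama 3 chat template and the Transformers causal-LM API; the prompt and generation settings are illustrative only, not recommendations from the author.

```python
# Illustrative only; not an official usage example for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/MedQA_L3_300steps_1e6rate_01beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical MedQA-style question; the chat template is assumed to be Llama 3's default.
messages = [
    {"role": "user", "content": "A 45-year-old presents with crushing chest pain. What is the first diagnostic step?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```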
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 300
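As a rough illustration only, the hyperparameters above map onto TRL's DPO API roughly as sketched below. The preference dataset is not published, argument names vary across TRL versions, and this is not the original training script.

```python
# Hedged reconstruction; `preference_dataset` (chosen/rejected pairs) is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "tsavage68/MedQA_L3_1000steps_1e6rate_SFT"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

args = DPOConfig(
    output_dir="MedQA_L3_300steps_1e6rate_01beta_CSFTDPO",
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=300,
    beta=0.1,          # the "01beta" in the model name
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=preference_dataset,  # hypothetical; the actual dataset is not published
    processing_class=tokenizer,        # older TRL versions use `tokenizer=` instead
)
trainer.train()
```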
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6869 | 0.0489 | 50 | 0.6696 | -0.2211 | -0.2710 | 0.7253 | 0.0498 | -36.5645 | -33.5400 | -0.7298 | -0.7290 |
| 0.4779 | 0.0977 | 100 | 0.5887 | 1.4526 | 1.0417 | 0.6945 | 0.4109 | -23.4374 | -16.8024 | -0.8047 | -0.8036 |
| 0.5155 | 0.1466 | 150 | 0.4976 | 0.6394 | -0.2000 | 0.7363 | 0.8394 | -35.8551 | -24.9343 | -0.8636 | -0.8617 |
| 0.4245 | 0.1954 | 200 | 0.4924 | 0.0477 | -0.9077 | 0.7648 | 0.9554 | -42.9321 | -30.8513 | -0.8783 | -0.8762 |
| 0.4563 | 0.2443 | 250 | 0.4675 | 0.6549 | -0.3364 | 0.7560 | 0.9913 | -37.2189 | -24.7791 | -0.8807 | -0.8786 |
| 0.3066 | 0.2931 | 300 | 0.4661 | 0.6273 | -0.3771 | 0.7604 | 1.0045 | -37.6261 | -25.0552 | -0.8801 | -0.8780 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"MEDQA"
] | BioNLP |
William2357/bear40 | William2357 | text-to-image | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 1,724,381,853,000 | 2024-08-23T03:03:33 | 29 | 0 | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of a olis bear plushie
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - William2357/bear40
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of a olis bear plushie using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
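In the meantime, a minimal sketch of loading this checkpoint with diffusers is given below; the prompt suffix, step count, and guidance scale are assumptions rather than recommended settings.

```python
# Illustrative sketch; the settings below are guesses, not tuned values.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "William2357/bear40", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of a olis bear plushie on a bookshelf",  # instance prompt used as the trigger
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("olis_bear.png")
```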
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"BEAR"
] | Non_BioNLP |
AIDA-UPM/MARTINI_enrich_BERTopic_WeaponizeConspiracy | AIDA-UPM | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 1,736,507,193,000 | 2025-01-10T11:06:46 | 5 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# MARTINI_enrich_BERTopic_WeaponizeConspiracy
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_WeaponizeConspiracy")
topic_model.get_topic_info()
```
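To assign topics to new documents, `transform` can be used; the snippet below is illustrative and the example texts are invented.

```python
# Illustrative only; the example documents are made up.
docs = [
    "The PCR cycle thresholds were quietly changed again this week.",
    "Started new seedlings in the hydroponics setup next to the potatoes.",
]
topics, probs = topic_model.transform(docs)
print(topics)                            # predicted topic ids (see the table below)
print(topic_model.get_topic(topics[0]))  # top keywords for the first prediction
```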
## Topic overview
* Number of topics: 313
* Number of training documents: 79424
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | israel - shit - public - 2021 - gonna | 20 | -1_israel_shit_public_2021 |
| 0 | vaxed - vaxxholes - unjabbed - brainwashed - placebo | 59627 | 0_vaxed_vaxxholes_unjabbed_brainwashed |
| 1 | ukraine - mariupol - zaporizhzhia - russland - zelensky | 443 | 1_ukraine_mariupol_zaporizhzhia_russland |
| 2 | jewbashing - jewy - semitism - goyim - halevy | 347 | 2_jewbashing_jewy_semitism_goyim |
| 3 | openai - robots - metahuman - chatbot - risks | 340 | 3_openai_robots_metahuman_chatbot |
| 4 | ivermectin - deworming - antiparasitic - wormwood - hairworm | 253 | 4_ivermectin_deworming_antiparasitic_wormwood |
| 5 | theconspiracyhole - bestofconsphole - theorists - views - normies | 234 | 5_theconspiracyhole_bestofconsphole_theorists_views |
| 6 | trannies - honesttransgender - transphobic - misgender - autogynephilia | 229 | 6_trannies_honesttransgender_transphobic_misgender |
| 7 | spacex - spacewalks - apollo - launch - астронавт | 218 | 7_spacex_spacewalks_apollo_launch |
| 8 | judaizers - israelites - judea - pharisees - hebraic | 218 | 8_judaizers_israelites_judea_pharisees |
| 9 | chickens - rooster - hatched - feathered - quail | 213 | 9_chickens_rooster_hatched_feathered |
| 10 | sprouts - seedlings - hydroponics - potato - planter | 207 | 10_sprouts_seedlings_hydroponics_potato |
| 11 | illegals - deporting - tijuana - border - brownsville | 196 | 11_illegals_deporting_tijuana_border |
| 12 | rifles - glock - handgun - marksmanship - suppressors | 187 | 12_rifles_glock_handgun_marksmanship |
| 13 | qanon - qnanner - nsa - nogginosphere - yknow | 183 | 13_qanon_qnanner_nsa_nogginosphere |
| 14 | countryside - sweden - suburbs - overpopulated - relocate | 183 | 14_countryside_sweden_suburbs_overpopulated |
| 15 | 5g - radiofrequency - awake432hz - ghz - cell | 176 | 15_5g_radiofrequency_awake432hz_ghz |
| 16 | arguing - insults - controversials - fallacies - pointless | 174 | 16_arguing_insults_controversials_fallacies |
| 17 | pitbulls - rottweilers - mastiff - labrador - chihuahuas | 173 | 17_pitbulls_rottweilers_mastiff_labrador |
| 18 | hamas - gaza - israelis - airstrikes - gazan | 173 | 18_hamas_gaza_israelis_airstrikes |
| 19 | kjv - blueletterbible - septuagint - catenabible - youversion | 173 | 19_kjv_blueletterbible_septuagint_catenabible |
| 20 | 36th - gematriaeffect - trump - number - date | 166 | 20_36th_gematriaeffect_trump_number |
| 21 | hitler - weimar - holocausteando - obergruppenfuhrer - sudetenland | 161 | 21_hitler_weimar_holocausteando_obergruppenfuhrer |
| 22 | laetrile - cure - placebo - chiropractors - prescribes | 155 | 22_laetrile_cure_placebo_chiropractors |
| 23 | weaponize - conspiracy - fedposting - joinlimit - nextlevel444 | 153 | 23_weaponize_conspiracy_fedposting_joinlimit |
| 24 | freemasons - mormon - lucifer - heresies - oaths | 153 | 24_freemasons_mormon_lucifer_heresies |
| 25 | banksters - dollarization - fdic - hyperinflation - collapse | 152 | 25_banksters_dollarization_fdic_hyperinflation |
| 26 | resurrectional - ephesians - withgodeverthingspossible - hosanna - pascha | 151 | 26_resurrectional_ephesians_withgodeverthingspossible_hosanna |
| 27 | insurrection - bannon - pelosi - fbi - capitols | 149 | 27_insurrection_bannon_pelosi_fbi |
| 28 | winters - coldest - blizzards - unseasonably - tornados | 147 | 28_winters_coldest_blizzards_unseasonably |
| 29 | weltwirtschaftsforum - davos - sdgs - soros - ramaphosa | 147 | 29_weltwirtschaftsforum_davos_sdgs_soros |
| 30 | revelation - kjv - alleluia - commandments - brimstone | 143 | 30_revelation_kjv_alleluia_commandments |
| 31 | epstein - ghislaine - pizzagate - mossad - traffickers | 143 | 31_epstein_ghislaine_pizzagate_mossad |
| 32 | 333 - numerological - 17x3 - threes - 7777 | 142 | 32_333_numerological_17x3_threes |
| 33 | apartheid - johannesburg - afrikaanse - rivonia - tembisa | 130 | 33_apartheid_johannesburg_afrikaanse_rivonia |
| 34 | rewatching - lololol - grinch - 43mins - torrented | 129 | 34_rewatching_lololol_grinch_43mins |
| 35 | pyramids - egyptians - atlantis - megalithomania - karnak | 129 | 35_pyramids_egyptians_atlantis_megalithomania |
| 36 | germs - virologists - contagious - pathogenic - herpes | 128 | 36_germs_virologists_contagious_pathogenic |
| 37 | hey - evening - brb - replying - shalom | 123 | 37_hey_evening_brb_replying |
| 38 | weiter - meine - plattdeutsch - escrita - puuhh | 122 | 38_weiter_meine_plattdeutsch_escrita |
| 39 | outages - suez - supply - tanker - kwh | 119 | 39_outages_suez_supply_tanker |
| 40 | anonymous - profile - hello - visible - pic | 118 | 40_anonymous_profile_hello_visible |
| 41 | trumpathon - mayweather - joker - magatard - survivors | 117 | 41_trumpathon_mayweather_joker_magatard |
| 42 | kjv - matthew - disciples - hypocrites - baptizing | 116 | 42_kjv_matthew_disciples_hypocrites |
| 43 | treblinka - sobibor - holocaustmemorialday - hofjuden - zyklon | 114 | 43_treblinka_sobibor_holocaustmemorialday_hofjuden |
| 44 | earther - flat - heliocentric - evolutionism - dumbfuck | 112 | 44_earther_flat_heliocentric_evolutionism |
| 45 | qoran - sahih - bukhari - surah - muhammed | 109 | 45_qoran_sahih_bukhari_surah |
| 46 | nightmares - demoned - haunted - woken - horrifyingly | 108 | 46_nightmares_demoned_haunted_woken |
| 47 | pcr - tests - infectious - oligonucleotides - diagnostic | 106 | 47_pcr_tests_infectious_oligonucleotides |
| 48 | vids - woah - reupload - recuerda - wesblazemuzik | 103 | 48_vids_woah_reupload_recuerda |
| 49 | permabanning - unbanned - hhahhaha - lurkers - admin | 102 | 49_permabanning_unbanned_hhahhaha_lurkers |
| 50 | libsontiktok - lgbt - phlschools - jeffcoschoolsco - graders | 102 | 50_libsontiktok_lgbt_phlschools_jeffcoschoolsco |
| 51 | shootings - allen - mugshot - texas - stabbed | 100 | 51_shootings_allen_mugshot_texas |
| 52 | ballots - recount - smartmatic - usps - newsmax | 100 | 52_ballots_recount_smartmatic_usps |
| 53 | dumbells - deadlifts - workout - 100kg - swole | 100 | 53_dumbells_deadlifts_workout_100kg |
| 54 | stablecoins - fednow - decentralised - opencbdc - cash | 98 | 54_stablecoins_fednow_decentralised_opencbdc |
| 55 | vaccinated - nanoparticles - peroxidase - magnetized - rgo | 97 | 55_vaccinated_nanoparticles_peroxidase_magnetized |
| 56 | marijuana - cannabinoids - hempflowers - stoned - smokes | 96 | 56_marijuana_cannabinoids_hempflowers_stoned |
| 57 | faith - blessed - preserverance - suffering - apprehensions | 95 | 57_faith_blessed_preserverance_suffering |
| 58 | bitch - whore - gasps - looney - boobs | 94 | 58_bitch_whore_gasps_looney |
| 59 | openvaers - pfizer - fatalities - injections - yellowcard | 93 | 59_openvaers_pfizer_fatalities_injections |
| 60 | orthodoxy - denominational - protestantism - baptists - jehovas | 92 | 60_orthodoxy_denominational_protestantism_baptists |
| 61 | nephilim - elohim - septuagint - descendants - gigantes | 92 | 61_nephilim_elohim_septuagint_descendants |
| 62 | feminism - patriarchy - sexes - demasculinizes - feminine | 91 | 62_feminism_patriarchy_sexes_demasculinizes |
| 63 | channel - reuploaded - shitstorm - posts - subscribers | 90 | 63_channel_reuploaded_shitstorm_posts |
| 64 | ww3 - america - pacifist - superpower - anthem | 90 | 64_ww3_america_pacifist_superpower |
| 65 | biden - groping - hahhah - sworn - jill | 90 | 65_biden_groping_hahhah_sworn |
| 66 | 1000mph - moons - telescope - flying - jupiter | 88 | 66_1000mph_moons_telescope_flying |
| 67 | psychedelics - dimethyltryptamine - lsd - hallucinogen - hallucinations | 87 | 67_psychedelics_dimethyltryptamine_lsd_hallucinogen |
| 68 | convicted - prosecutors - mistrial - antifa - kenosha | 87 | 68_convicted_prosecutors_mistrial_antifa |
| 69 | jesuits - illuminati - papacy - jacobins - monarchies | 85 | 69_jesuits_illuminati_papacy_jacobins |
| 70 | flouride - hydrofluorosilicic - arsenic - biphenol - toxic | 84 | 70_flouride_hydrofluorosilicic_arsenic_biphenol |
| 71 | policemen - deputies - enforce - resigns - minnesota | 83 | 71_policemen_deputies_enforce_resigns |
| 72 | prepping - shelter - flashlights - tarps - hunkered | 81 | 72_prepping_shelter_flashlights_tarps |
| 73 | pilots - plane - crashed - boeing - airshow | 80 | 73_pilots_plane_crashed_boeing |
| 74 | mark - revelations - barcode - tattooed - microchips | 80 | 74_mark_revelations_barcode_tattooed |
| 75 | psalm - selah - nkjv - blessed - 91 | 79 | 75_psalm_selah_nkjv_blessed |
| 76 | kanye - kushner - tmz - masterminded - eeeeveryone | 78 | 76_kanye_kushner_tmz_masterminded |
| 77 | mouthwash - borax - flouridated - cavities - brush | 77 | 77_mouthwash_borax_flouridated_cavities |
| 78 | commandments - hebrews - leviticus - salvation - yehoshua | 77 | 78_commandments_hebrews_leviticus_salvation |
| 79 | serpent - seraphs - genesis - ezekiel - nemesis | 77 | 79_serpent_seraphs_genesis_ezekiel |
| 80 | symbology - baphomet - crosses - spiral - mascot | 76 | 80_symbology_baphomet_crosses_spiral |
| 81 | whiteness - niggerish - supremacists - races - stereotypes | 76 | 81_whiteness_niggerish_supremacists_races |
| 82 | affirmations - frustrated - dammit - misery - desperately | 75 | 82_affirmations_frustrated_dammit_misery |
| 83 | vaccinations - mandatory - passport - ghebreyesus - austria | 74 | 83_vaccinations_mandatory_passport_ghebreyesus |
| 84 | ephesians - kjv - timothy - fornication - righteousness | 74 | 84_ephesians_kjv_timothy_fornication |
| 85 | eggs - eggnog - tortillas - pancakes - noodle | 73 | 85_eggs_eggnog_tortillas_pancakes |
| 86 | alarmists - thunberg - climatology - decarbonize - froze | 72 | 86_alarmists_thunberg_climatology_decarbonize |
| 87 | prayers - idolatry - mary - catho - eucharist | 72 | 87_prayers_idolatry_mary_catho |
| 88 | guns - militia - disarmed - holstered - bodyguards | 72 | 88_guns_militia_disarmed_holstered |
| 89 | ebook - pdfs - downloaded - audiobooks - mp3 | 71 | 89_ebook_pdfs_downloaded_audiobooks |
| 90 | curvature - geoid - spherical - ellipsoid - globe | 70 | 90_curvature_geoid_spherical_ellipsoid |
| 91 | oooops - links - simpsons - screenshot - mayhem | 69 | 91_oooops_links_simpsons_screenshot |
| 92 | collapse - towers - demolition - wtc - __911 | 69 | 92_collapse_towers_demolition_wtc |
| 93 | candyman - hellbound - trailer - scorpio - 2023 | 68 | 93_candyman_hellbound_trailer_scorpio |
| 94 | catholics - churchy - parochial - presbyterian - nondenominational | 68 | 94_catholics_churchy_parochial_presbyterian |
| 95 | tweets - instabanning - yiannopoulos - meghan - daddyzaslav | 68 | 95_tweets_instabanning_yiannopoulos_meghan |
| 96 | mistrial - verdicts - judicially - televised - mugshots | 68 | 96_mistrial_verdicts_judicially_televised |
| 97 | women - preaching - 1timothy - matriarchal - adventist | 68 | 97_women_preaching_1timothy_matriarchal |
| 98 | truckersforfreedom - ottawa - trudeau - towing - buses | 67 | 98_truckersforfreedom_ottawa_trudeau_towing |
| 99 | mask - remasked - masquerades - wearer - memetic | 67 | 99_mask_remasked_masquerades_wearer |
| 100 | theologian - unsaved - blasphemer - damning - baptizing | 66 | 100_theologian_unsaved_blasphemer_damning |
| 101 | telegram - encrypted - chatbackup - proxys - digitalresistance | 66 | 101_telegram_encrypted_chatbackup_proxys |
| 102 | mrna - vaxin - transfection - adenovirus - autoinmune | 66 | 102_mrna_vaxin_transfection_adenovirus |
| 103 | titanfall - tarkov - escapism - gamestore - multiplayer | 66 | 103_titanfall_tarkov_escapism_gamestore |
| 104 | blacks - whites - murders - nigger - racisms | 63 | 104_blacks_whites_murders_nigger |
| 105 | europa_the_last_battle - episode - subtitrare - afrikakorps - derniere | 62 | 105_europa_the_last_battle_episode_subtitrare_afrikakorps |
| 106 | australians - canberra - policias - протестующих - freedom | 62 | 106_australians_canberra_policias_протестующих |
| 107 | amillennialism - millennium - revelation - satan - prophesying | 62 | 107_amillennialism_millennium_revelation_satan |
| 108 | nukes - chernobyl - plutonium - irradiated - hiroshima | 61 | 108_nukes_chernobyl_plutonium_irradiated |
| 109 | predestination - salvation - sinfulness - whomsoever - chose | 61 | 109_predestination_salvation_sinfulness_whomsoever |
| 110 | gifs - ooohh - thumbs - froggy - laugh | 61 | 110_gifs_ooohh_thumbs_froggy |
| 111 | scooter - 45mpg - honda - mileage - diesel | 60 | 111_scooter_45mpg_honda_mileage |
| 112 | chad_mossholder_mick_gordon_olivia_s_doom_chad_mossholder_remix - nsattacke - remix - techno - microtonal_mud_slide_in_the_usa | 60 | 112_chad_mossholder_mick_gordon_olivia_s_doom_chad_mossholder_remix_nsattacke_remix_techno |
| 113 | manure - meat - cannibalisms - worms - bigmac | 59 | 113_manure_meat_cannibalisms_worms |
| 114 | musk - billionaire - edison - ilusionismo - xavier | 59 | 114_musk_billionaire_edison_ilusionismo |
| 115 | ketosis - zerocarb - fasted - kcals - fructose | 59 | 115_ketosis_zerocarb_fasted_kcals |
| 116 | antichrist - trump - kushner - nebuchadnezzar - ushering | 58 | 116_antichrist_trump_kushner_nebuchadnezzar |
| 117 | covingtons - novel - fantastically - mcgee - gory | 58 | 117_covingtons_novel_fantastically_mcgee |
| 118 | scripture - reading - deuteronomy - chapters - memorize | 58 | 118_scripture_reading_deuteronomy_chapters |
| 119 | genesis - noah - methuselah - leviticus - nakedness | 56 | 119_genesis_noah_methuselah_leviticus |
| 120 | trumptards - antivaxxer - hydroxychloroquine - thanked - goldman | 55 | 120_trumptards_antivaxxer_hydroxychloroquine_thanked |
| 121 | tyranny - liberties - gubment - nnnnope - rallied | 55 | 121_tyranny_liberties_gubment_nnnnope |
| 122 | redpilled - normie - sionismo - direita - despertando | 55 | 122_redpilled_normie_sionismo_direita |
| 123 | downloadthe - reupload - vlc - soundcloud - 1337x | 55 | 123_downloadthe_reupload_vlc_soundcloud |
| 124 | trinitarianism - godhead - contradictions - three - verses | 54 | 124_trinitarianism_godhead_contradictions_three |
| 125 | wildfires - louisiana - evacuated - ignited - kissimmee | 54 | 125_wildfires_louisiana_evacuated_ignited |
| 126 | supervolcano - eruptions - kilauea - vulkanausbruch - quake | 54 | 126_supervolcano_eruptions_kilauea_vulkanausbruch |
| 127 | dlive - balrog - goyimtv - telgram - feddie | 54 | 127_dlive_balrog_goyimtv_telgram |
| 128 | facebook - unfriend - insta - doxxing - naahhh | 53 | 128_facebook_unfriend_insta_doxxing |
| 129 | profile - username - anonymous - avatar - hidden | 53 | 129_profile_username_anonymous_avatar |
| 130 | sabbath - commandments - hebrews - passover - adventists | 53 | 130_sabbath_commandments_hebrews_passover |
| 131 | leftists - cuckservative - fascist - radicalize - maddow | 52 | 131_leftists_cuckservative_fascist_radicalize |
| 132 | pretribulational - rapture - antichrists - maranatha - epistles | 52 | 132_pretribulational_rapture_antichrists_maranatha |
| 133 | witchcraft - tarot - necronomicon - pentagrams - crowleys | 51 | 133_witchcraft_tarot_necronomicon_pentagrams |
| 134 | chat - inactive - rejoin - kicked - days | 51 | 134_chat_inactive_rejoin_kicked |
| 135 | homeschooled - curriculum - teachers - lifepac - preschool | 51 | 135_homeschooled_curriculum_teachers_lifepac |
| 136 | wef_the_known_traveller_digital_identity_concept - fingerprint - identities - uidai - citizenship | 50 | 136_wef_the_known_traveller_digital_identity_concept_fingerprint_identities_uidai |
| 137 | insomniac - woken - paranoia - traumatising - hahaha | 49 | 137_insomniac_woken_paranoia_traumatising |
| 138 | duckduckgo - ghostvpn - browsers - privacy - torchsearch | 49 | 138_duckduckgo_ghostvpn_browsers_privacy |
| 139 | omicron - covax - variants - delta - vaccinated | 49 | 139_omicron_covax_variants_delta |
| 140 | portlandpolice - antifa - firebomb - looters - atlanta | 48 | 140_portlandpolice_antifa_firebomb_looters |
| 141 | hollydays - yule - krampus - worshiping - pagan | 48 | 141_hollydays_yule_krampus_worshiping |
| 142 | abortionists - womb - forceps - molochians - dismembered | 48 | 142_abortionists_womb_forceps_molochians |
| 143 | dioxin - ohio - hazardous - pvc - derailed | 48 | 143_dioxin_ohio_hazardous_pvc |
| 144 | luciferianism - illuminati - blavatsky - propheticalert - ascended | 48 | 144_luciferianism_illuminati_blavatsky_propheticalert |
| 145 | blessed - howdy - merry - xmas - sunday | 48 | 145_blessed_howdy_merry_xmas |
| 146 | documentaries - youtube - hbo - wwg - uprising | 47 | 146_documentaries_youtube_hbo_wwg |
| 147 | trumpo - zionists - kushner - netanyahu - melania | 47 | 147_trumpo_zionists_kushner_netanyahu |
| 148 | whysosober - alcoholics - whiskey - intoxicated - hangovers | 47 | 148_whysosober_alcoholics_whiskey_intoxicated |
| 149 | archived - paulson - linked - redpill - originally | 46 | 149_archived_paulson_linked_redpill |
| 150 | vmware - ubuntu - xp - booting - laptop | 45 | 150_vmware_ubuntu_xp_booting |
| 151 | faucci - coronaviruses - darpa - laboratorio - collaborated | 45 | 151_faucci_coronaviruses_darpa_laboratorio |
| 152 | grifter - genius - pissed - cartoonist - william | 45 | 152_grifter_genius_pissed_cartoonist |
| 153 | prayers - blessed - please - godpill - gangstalking | 45 | 153_prayers_blessed_please_godpill |
| 154 | ezekiel - prophets - sanctify - defiled - thou | 44 | 154_ezekiel_prophets_sanctify_defiled |
| 155 | biblically - transubstantiation - infallible - persecuted - contradictions | 44 | 155_biblically_transubstantiation_infallible_persecuted |
| 156 | openai - ethical - biases - guidelines - objective | 44 | 156_openai_ethical_biases_guidelines |
| 157 | promobot - chats - fuckin - removed - mvgv | 43 | 157_promobot_chats_fuckin_removed |
| 158 | reality - holographic - electrons - nothingness - telekinesis | 43 | 158_reality_holographic_electrons_nothingness |
| 159 | calvinists - arminianism - tulip - sin - determinism | 43 | 159_calvinists_arminianism_tulip_sin |
| 160 | mosquitoes - wolbachia - released - genomes - oxitec | 42 | 160_mosquitoes_wolbachia_released_genomes |
| 161 | gayness - sooooooo - fake - idolizing - televised | 42 | 161_gayness_sooooooo_fake_idolizing |
| 162 | evangelicals - blasphemous - pharisee - atheist - hindu | 42 | 162_evangelicals_blasphemous_pharisee_atheist |
| 163 | propagandized - ignorant - tortured - bwahahaha - narcissistic | 42 | 163_propagandized_ignorant_tortured_bwahahaha |
| 164 | putin - fascist - invading - lithuania - spiegel | 42 | 164_putin_fascist_invading_lithuania |
| 165 | satanic - slipknot - crowley - drummer - discography | 41 | 165_satanic_slipknot_crowley_drummer |
| 166 | pork - butchers - pasture - goats - farmitarian | 41 | 166_pork_butchers_pasture_goats |
| 167 | 1enoch - apocrypha - scripture - canonicity - ethiopian | 41 | 167_1enoch_apocrypha_scripture_canonicity |
| 168 | oxycodone - narcotics - overdosing - junkies - fentanyal | 41 | 168_oxycodone_narcotics_overdosing_junkies |
| 169 | venezuela - chavista - bogota - catatumbo - criminosos | 41 | 169_venezuela_chavista_bogota_catatumbo |
| 170 | repentance - salvation - sinner - sanctified - merciful | 41 | 170_repentance_salvation_sinner_sanctified |
| 171 | extraterrestrials - declassified - demonic - documentaries - skywatchtv | 40 | 171_extraterrestrials_declassified_demonic_documentaries |
| 172 | brasilien - dilma - presidente - paulo - paraguay | 40 | 172_brasilien_dilma_presidente_paulo |
| 173 | athletes - footballer - pericarditis - collapsed - oilers | 39 | 173_athletes_footballer_pericarditis_collapsed |
| 174 | lucifer - samael - yaweh - archangel - demiurge | 39 | 174_lucifer_samael_yaweh_archangel |
| 175 | smartphone - pinephone - huawei - rooted - degoogled | 39 | 175_smartphone_pinephone_huawei_rooted |
| 176 | taipeh - mainland - fujian - pelosi - warplanes | 38 | 176_taipeh_mainland_fujian_pelosi |
| 177 | musk - blackrock - billionaire - shareholder - retweeting | 38 | 177_musk_blackrock_billionaire_shareholder |
| 178 | coinbase - shitcoin - cashapp - wallets - investing | 38 | 178_coinbase_shitcoin_cashapp_wallets |
| 179 | doug - mablog - evangelicalism - episode - nicodemus | 38 | 179_doug_mablog_evangelicalism_episode |
| 180 | overturned - abortions - scotus - obergefell - veto | 37 | 180_overturned_abortions_scotus_obergefell |
| 181 | meatpacking - rationed - shortages - vilsack - crops | 37 | 181_meatpacking_rationed_shortages_vilsack |
| 182 | shes - womanizer - divorced - maturity - overreacting | 37 | 182_shes_womanizer_divorced_maturity |
| 183 | monkeypox - outbreaks - gay - rashes - ghebreyesus | 36 | 183_monkeypox_outbreaks_gay_rashes |
| 184 | calsibot - joinlimit - chatname - removewelcome - permissions | 36 | 184_calsibot_joinlimit_chatname_removewelcome |
| 185 | ireland - mcgregor - racists - aftonbladet - asylum | 35 | 185_ireland_mcgregor_racists_aftonbladet |
| 186 | smells - unvaxxed - influenza - poisoned - symptoms | 35 | 186_smells_unvaxxed_influenza_poisoned |
| 187 | illumunati - seclorum - atlantis - founded - symbolism | 35 | 187_illumunati_seclorum_atlantis_founded |
| 188 | overpopulation - worldwide - david - attenborough - billion | 35 | 188_overpopulation_worldwide_david_attenborough |
| 189 | dinosaurs - paleontologists - leviathan - dragon - skeletons | 34 | 189_dinosaurs_paleontologists_leviathan_dragon |
| 190 | biblegateway - commandment - getnlt - examples - 1co13 | 34 | 190_biblegateway_commandment_getnlt_examples |
| 191 | genesis - lamech - noah - comenzaron - cielo | 34 | 191_genesis_lamech_noah_comenzaron |
| 192 | chemtrails - sprayings - nanodust - aluminum - arsenic | 34 | 192_chemtrails_sprayings_nanodust_aluminum |
| 193 | cyberattack - ransomware - ddos - killnet - ukrtelecom | 33 | 193_cyberattack_ransomware_ddos_killnet |
| 194 | killers - programmed - dahmer - satanic - interrogator | 33 | 194_killers_programmed_dahmer_satanic |
| 195 | daycare - abused - mother - becuz - perpetrators | 32 | 195_daycare_abused_mother_becuz |
| 196 | satan - heaven - messianic - rebelled - outsmart | 32 | 196_satan_heaven_messianic_rebelled |
| 197 | fuels - renewable - turbines - landfill - freakin | 32 | 197_fuels_renewable_turbines_landfill |
| 198 | deliverance - exorcising - ministries - baptised - godsbattleax | 32 | 198_deliverance_exorcising_ministries_baptised |
| 199 | zuckerberg - censorship - whistleblower - fbi - websites | 32 | 199_zuckerberg_censorship_whistleblower_fbi |
| 200 | lockdown - china - zhengzhou - weibo - pudong | 32 | 200_lockdown_china_zhengzhou_weibo |
| 201 | disney - wonderland - molesting - lgbtqia - whoopi | 32 | 201_disney_wonderland_molesting_lgbtqia |
| 202 | clavicles - torso - femur - skeleton - female | 32 | 202_clavicles_torso_femur_skeleton |
| 203 | appstore - whatsapp - uninstall - censored - downloading | 31 | 203_appstore_whatsapp_uninstall_censored |
| 204 | nephilim - clowns - rakshasa - gorgonesque - coulrophobia | 31 | 204_nephilim_clowns_rakshasa_gorgonesque |
| 205 | nofap - temptations - fornication - pornographers - immorality | 30 | 205_nofap_temptations_fornication_pornographers |
| 206 | mortgages - rents - fha - unaffordable - redfin | 30 | 206_mortgages_rents_fha_unaffordable |
| 207 | voting - democrat - indoctrinated - shits - weve | 30 | 207_voting_democrat_indoctrinated_shits |
| 208 | fake - ragebait - weimerica - researched - manifesto | 30 | 208_fake_ragebait_weimerica_researched |
| 209 | antarctica - hyperborea - poles - north - geomagnetic | 30 | 209_antarctica_hyperborea_poles_north |
| 210 | revisionist - falsehoods - timelines - discovered - erasing | 30 | 210_revisionist_falsehoods_timelines_discovered |
| 211 | exploded - carbomb - rv - footage - thermobaric | 29 | 211_exploded_carbomb_rv_footage |
| 212 | homebirths - parto - birthed - epidural - naturali | 29 | 212_homebirths_parto_birthed_epidural |
| 213 | gayness - chris - unmanly - soyboy - dumb | 29 | 213_gayness_chris_unmanly_soyboy |
| 214 | fucking - punching - shoves - poked - slap | 29 | 214_fucking_punching_shoves_poked |
| 215 | ecclesiastes - wise - sheol - eternity - futility | 29 | 215_ecclesiastes_wise_sheol_eternity |
| 216 | halal - isralites - fornication - prohibitions - carcasses | 28 | 216_halal_isralites_fornication_prohibitions |
| 217 | tattooed - piercings - religiously - temptation - goth | 28 | 217_tattooed_piercings_religiously_temptation |
| 218 | ionosphare - harrp - radiowellen - forschungskampagne - frequenz | 28 | 218_ionosphare_harrp_radiowellen_forschungskampagne |
| 219 | secession - republicans - militias - louisianians - midterms | 28 | 219_secession_republicans_militias_louisianians |
| 220 | covidvaccinevictims - astrazeneca - injection - troponin - fainting | 28 | 220_covidvaccinevictims_astrazeneca_injection_troponin |
| 221 | grunge - listened - lyrics - dudes - sucked | 28 | 221_grunge_listened_lyrics_dudes |
| 222 | enlistments - soldier - asvab - diploma - shortfalls | 28 | 222_enlistments_soldier_asvab_diploma |
| 223 | monologue - mentally - vocalized - hearing - mouths | 27 | 223_monologue_mentally_vocalized_hearing |
| 224 | homosexuals - transexualism - pedophwlia - promiscuity - bigots | 27 | 224_homosexuals_transexualism_pedophwlia_promiscuity |
| 225 | schrodingers - collider - particles - supernatural - entangled | 27 | 225_schrodingers_collider_particles_supernatural |
| 226 | vegans - carnivore - herbivores - fats - nutrient | 27 | 226_vegans_carnivore_herbivores_fats |
| 227 | stickers - adding - gif - deleted - bunny | 27 | 227_stickers_adding_gif_deleted |
| 228 | titanic - shipwreck - lusitania - kriegsmarine - submersible | 27 | 228_titanic_shipwreck_lusitania_kriegsmarine |
| 229 | afghanis - troops - amerikkka - looted - opium | 27 | 229_afghanis_troops_amerikkka_looted |
| 230 | evil - morals - relentlessly - creatures - deeper | 27 | 230_evil_morals_relentlessly_creatures |
| 231 | aliens - holograms - telepathic - bluebeam - jaydreamerz | 26 | 231_aliens_holograms_telepathic_bluebeam |
| 232 | pandemic - freaking - hoax - libshits - leftists | 26 | 232_pandemic_freaking_hoax_libshits |
| 233 | israelies - pfizer - vacuna - holocausto - zohar | 26 | 233_israelies_pfizer_vacuna_holocausto |
| 234 | truths - skepticism - lies - disinformation - deceived | 26 | 234_truths_skepticism_lies_disinformation |
| 235 | luciferianism - awaken - darkness - __mindgame - allmighty | 26 | 235_luciferianism_awaken_darkness___mindgame |
| 236 | pantheism - satanism - shaman - ascension - yogi | 26 | 236_pantheism_satanism_shaman_ascension |
| 237 | wikileaks - leaked - clinton - hacked - emails | 26 | 237_wikileaks_leaked_clinton_hacked |
| 238 | masks - influenza - exhaled - biohazardous - ineffective | 26 | 238_masks_influenza_exhaled_biohazardous |
| 239 | bitcoin - trillion - hash - transactions - value | 26 | 239_bitcoin_trillion_hash_transactions |
| 240 | antisemtic - adl - slandering - syndicates - lyndon | 26 | 240_antisemtic_adl_slandering_syndicates |
| 241 | attaque - paris - jihadist - knifeman - anonymecitoyen | 26 | 241_attaque_paris_jihadist_knifeman |
| 242 | blackpilled - pilled - depressing - warned - calms | 26 | 242_blackpilled_pilled_depressing_warned |
| 243 | silver - hodl - valued - dollar - shorted | 26 | 243_silver_hodl_valued_dollar |
| 244 | circumcisions - mutilated - peepee - unhygenic - israelite | 25 | 244_circumcisions_mutilated_peepee_unhygenic |
| 245 | neckline - skirt - pants - boobs - modest | 25 | 245_neckline_skirt_pants_boobs |
| 246 | deleted - replying - spammed - bots - temporarily | 25 | 246_deleted_replying_spammed_bots |
| 247 | fires - cargill - farms - destroyed - turkeys | 25 | 247_fires_cargill_farms_destroyed |
| 248 | columbine - shootings - newtown - survivors - staged | 25 | 248_columbine_shootings_newtown_survivors |
| 249 | safeblood - unvaccinated - transfusions - antibodies - contaminated | 25 | 249_safeblood_unvaccinated_transfusions_antibodies |
| 250 | asvab - battalions - bootcamp - volunteered - shitshow | 25 | 250_asvab_battalions_bootcamp_volunteered |
| 251 | pedphilia - sentenced - felonies - kayleigh - harlow | 25 | 251_pedphilia_sentenced_felonies_kayleigh |
| 252 | vaccination - pfizer - shots - young - guardians | 25 | 252_vaccination_pfizer_shots_young |
| 253 | tesla - gasoline - parked - driving - renewable | 25 | 253_tesla_gasoline_parked_driving |
| 254 | megaliths - goliaths - nuraghi - nephilim - mummies | 25 | 254_megaliths_goliaths_nuraghi_nephilim |
| 255 | lockdowns - lmfao - totalitarian - enforce - colombia | 24 | 255_lockdowns_lmfao_totalitarian_enforce |
| 256 | ufo - mothership - sightings - astrobiologists - declassified | 24 | 256_ufo_mothership_sightings_astrobiologists |
| 257 | detoxes - chelate - aluminum - carcinogenic - oxygen | 24 | 257_detoxes_chelate_aluminum_carcinogenic |
| 258 | vaxed - pilots - qantas - canceled - mandatory | 24 | 258_vaxed_pilots_qantas_canceled |
| 259 | jfkjr - alive - diana - assassination - faked | 24 | 259_jfkjr_alive_diana_assassination |
| 260 | israelies - antisemitism - allies - subverting - wtf | 24 | 260_israelies_antisemitism_allies_subverting |
| 261 | boomers - xennials - sociopaths - stupidity - birthdate | 24 | 261_boomers_xennials_sociopaths_stupidity |
| 262 | ivecmectin - remdesivir - cures - hcq - azithromycin | 24 | 262_ivecmectin_remdesivir_cures_hcq |
| 263 | falsehoods - deceived - fool - tromper - redpilled | 24 | 263_falsehoods_deceived_fool_tromper |
| 264 | fedposting - idk - informants - mossad - paranoid | 24 | 264_fedposting_idk_informants_mossad |
| 265 | glossolalia - pentecostal - speaks - evangelizing - prays | 23 | 265_glossolalia_pentecostal_speaks_evangelizing |
| 266 | scientism - hollahoax - geniuses - fallacy - disillusioned | 23 | 266_scientism_hollahoax_geniuses_fallacy |
| 267 | caffine - starbucks - fatigued - drank - quitting | 23 | 267_caffine_starbucks_fatigued_drank |
| 268 | youtube - wathc - getvideoid - autoredirect - pewdiepie | 23 | 268_youtube_wathc_getvideoid_autoredirect |
| 269 | pedos - telegram - blacklisted - spammers - ooga | 23 | 269_pedos_telegram_blacklisted_spammers |
| 270 | miscegenation - multiculturalism - races - bloodline - legitimate | 23 | 270_miscegenation_multiculturalism_races_bloodline |
| 271 | canadians - quebec - alberta - lmao - homofags | 23 | 271_canadians_quebec_alberta_lmao |
| 272 | mkultra - brainwashing - tortured - targeted - phoenix | 23 | 272_mkultra_brainwashing_tortured_targeted |
| 273 | vaxed - pfizer - strokes - myocarditis - anaphylactic | 23 | 273_vaxed_pfizer_strokes_myocarditis |
| 274 | weathermodification - rainmaking - weaponizing - cyclon - artificially | 23 | 274_weathermodification_rainmaking_weaponizing_cyclon |
| 275 | evolutionarily - evolutionist - darwins - creationism - sexes | 22 | 275_evolutionarily_evolutionist_darwins_creationism |
| 276 | autistic - retarded - waaaayy - asd - percieved | 22 | 276_autistic_retarded_waaaayy_asd |
| 277 | breastmilk - infants - whey - lactose - kefir | 22 | 277_breastmilk_infants_whey_lactose |
| 278 | demons - exorcism - ghosts - spiritism - freaky | 22 | 278_demons_exorcism_ghosts_spiritism |
| 279 | citizenship - statutory - affadavit - birthed - illegitimate | 22 | 279_citizenship_statutory_affadavit_birthed |
| 280 | heartbreaking - goodbye - sympathies - prayers - husband | 22 | 280_heartbreaking_goodbye_sympathies_prayers |
| 281 | telegram - bots - pasted - uninstalled - spiders | 22 | 281_telegram_bots_pasted_uninstalled |
| 282 | bolshevism - trotsky - jewry - talmudists - maccabean | 22 | 282_bolshevism_trotsky_jewry_talmudists |
| 283 | aliens - exist - universe - demonic - atheism | 22 | 283_aliens_exist_universe_demonic |
| 284 | marla - episodes - assassinate - minute - fbi | 22 | 284_marla_episodes_assassinate_minute |
| 285 | rates - highest - inflation - surged - federal | 22 | 285_rates_highest_inflation_surged |
| 286 | sick - symptoms - sinuses - headache - runny | 22 | 286_sick_symptoms_sinuses_headache |
| 287 | retrocausality - mandella - teleportation - altering - ripples | 21 | 287_retrocausality_mandella_teleportation_altering |
| 288 | refueling - boeing - gallons - kerosene - airfield | 21 | 288_refueling_boeing_gallons_kerosene |
| 289 | neuralink - bionic - implanted - electrodes - paralysed | 21 | 289_neuralink_bionic_implanted_electrodes |
| 290 | globalists - jone - ali - shill - mossad | 21 | 290_globalists_jone_ali_shill |
| 291 | archiviokubrick - odyssey - hidden - documentaries - shining | 21 | 291_archiviokubrick_odyssey_hidden_documentaries |
| 292 | megadrought - famines - irrigation - dried - breadbasket | 21 | 292_megadrought_famines_irrigation_dried |
| 293 | pneumonia - sick - symptoms - corona - pushups | 21 | 293_pneumonia_sick_symptoms_corona |
| 294 | monarchy - despotism - montesquieu - throne - rulers | 21 | 294_monarchy_despotism_montesquieu_throne |
| 295 | machinery - machinist - lathes - mills - welders | 21 | 295_machinery_machinist_lathes_mills |
| 296 | brainwashing - psywar - psychoanalysts - chomsky - cryptocracy | 21 | 296_brainwashing_psywar_psychoanalysts_chomsky |
| 297 | weaponize - conspiracy - fedposting - username - mw | 20 | 297_weaponize_conspiracy_fedposting_username |
| 298 | pfizer - vaccini - bioterrorism - disinformazione - mrna | 20 | 298_pfizer_vaccini_bioterrorism_disinformazione |
| 299 | looks - face - dude - identical - kikey | 20 | 299_looks_face_dude_identical |
| 300 | slaves - jews - merchants - africana - jonathan | 20 | 300_slaves_jews_merchants_africana |
| 301 | yoga - kundalini - stretching - yoking - praying | 20 | 301_yoga_kundalini_stretching_yoking |
| 302 | thanks - haha - didnt - sorry - remember | 20 | 302_thanks_haha_didnt_sorry |
| 303 | apostolic - miracles - pentecostal - scriptures - continuationism | 20 | 303_apostolic_miracles_pentecostal_scriptures |
| 304 | dissident - controlled - trumper - knew - dumbass | 20 | 304_dissident_controlled_trumper_knew |
| 305 | schizophrenics - hallucinations - demonic - psychotherapist - shamanism | 20 | 305_schizophrenics_hallucinations_demonic_psychotherapist |
| 306 | sinaloa - sicarios - cartel - arrests - smugglers | 20 | 306_sinaloa_sicarios_cartel_arrests |
| 307 | cnn - reporters - politicized - gallup - supressing | 20 | 307_cnn_reporters_politicized_gallup |
| 308 | apostles - zechariah - judas - galilee - bartholomew | 20 | 308_apostles_zechariah_judas_galilee |
| 309 | polititians - blackpilling - corrupted - enact - peasants | 20 | 309_polititians_blackpilling_corrupted_enact |
| 310 | ukraine - bioweapons - laboratories - zakharova - pentagon | 20 | 310_ukraine_bioweapons_laboratories_zakharova |
| 311 | covid - superspreader - thetruthjungle - glaxosmithkline - vltv | 20 | 311_covid_superspreader_thetruthjungle_glaxosmithkline |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
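These values correspond to BERTopic's constructor arguments; a hedged reconstruction of the training call is sketched below. The real corpus and embedding model are not published, so the ones used here are stand-ins (and a real run needs far more documents than shown).

```python
# Approximate reconstruction of the training configuration; corpus and embedder are stand-ins.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

docs = ["placeholder document one", "placeholder document two"]  # stand-in corpus
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")        # stand-in embedder

topic_model = BERTopic(
    embedding_model=embedding_model,
    calculate_probabilities=True,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
topics, probs = topic_model.fit_transform(docs)
```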
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.40
* UMAP: 0.5.7
* Pandas: 2.2.3
* Scikit-Learn: 1.5.2
* Sentence-transformers: 3.3.1
* Transformers: 4.46.3
* Numba: 0.60.0
* Plotly: 5.24.1
* Python: 3.10.12
| [
"PCR"
] | Non_BioNLP |
LoneStriker/BioMistral-7B-5.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"mistral",
"text-generation",
"medical",
"biology",
"conversational",
"fr",
"en",
"de",
"nl",
"es",
"pt",
"pl",
"ro",
"it",
"dataset:pubmed",
"arxiv:2402.10373",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,708,349,600,000 | 2024-02-19T13:37:05 | 6 | 0 | ---
datasets:
- pubmed
language:
- fr
- en
- de
- nl
- es
- pt
- pl
- ro
- it
license: apache-2.0
pipeline_tag: text-generation
tags:
- medical
- biology
---
<p align="center">
<img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>
# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
**Abstract:**
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
# 1. BioMistral models
**BioMistral** is a suite of open-source, Mistral-based models further pre-trained for the medical domain on textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All models were trained on the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
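The AWQ variants can typically be loaded straight through Transformers when the `autoawq` package is installed; the snippet below is a hedged sketch rather than an official example.

```python
# Illustrative only; requires `pip install autoawq` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")
```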
# 3. Using BioMistral

You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.

Loading the model and tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```
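Note that `AutoModel` loads the Mistral backbone without a language-modeling head; for text generation, `AutoModelForCausalLM` is typically used instead. A minimal, non-official sketch with arbitrary prompt and generation settings:

```python
# Illustrative generation example; prompt and settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B", device_map="auto")

prompt = "Question: What is the first-line pharmacological treatment for type 2 diabetes?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```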
# 4. Supervised Fine-tuning Benchmark
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT.
# Citation BibTeX
Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
```bibtex
@misc{labrak2024biomistral,
title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
year={2024},
eprint={2402.10373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
ntc-ai/SDXL-LoRA-slider.attention-grabbing | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,703,771,549,000 | 2023-12-28T13:52:32 | 25 | 1 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/attention-grabbing.../attention-grabbing_17_3.0.png
widget:
- text: attention-grabbing
output:
url: images/attention-grabbing_17_3.0.png
- text: attention-grabbing
output:
url: images/attention-grabbing_19_3.0.png
- text: attention-grabbing
output:
url: images/attention-grabbing_20_3.0.png
- text: attention-grabbing
output:
url: images/attention-grabbing_21_3.0.png
- text: attention-grabbing
output:
url: images/attention-grabbing_22_3.0.png
inference: false
instance_prompt: attention-grabbing
---
# ntcai.xyz slider - attention-grabbing (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/attention-grabbing_17_-3.0.png" width=256 height=256 /> | <img src="images/attention-grabbing_17_0.0.png" width=256 height=256 /> | <img src="images/attention-grabbing_17_3.0.png" width=256 height=256 /> |
| <img src="images/attention-grabbing_19_-3.0.png" width=256 height=256 /> | <img src="images/attention-grabbing_19_0.0.png" width=256 height=256 /> | <img src="images/attention-grabbing_19_3.0.png" width=256 height=256 /> |
| <img src="images/attention-grabbing_20_-3.0.png" width=256 height=256 /> | <img src="images/attention-grabbing_20_0.0.png" width=256 height=256 /> | <img src="images/attention-grabbing_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
attention-grabbing
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.attention-grabbing', weight_name='attention-grabbing.safetensors', adapter_name="attention-grabbing")
# Activate the LoRA
pipe.set_adapters(["attention-grabbing"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, attention-grabbing"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of 690+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
Cr0n3/h0l0-3.4.2-13b-uncen | Cr0n3 | text-generation | [
"gguf",
"Cr0n3",
"H0L0",
"text-generation",
"license:llama2",
"endpoints_compatible",
"region:us"
] | 1,735,405,018,000 | 2024-12-29T19:00:47 | 12 | 0 | ---
license: llama2
pipeline_tag: text-generation
tags:
- Cr0n3
- H0L0
---
# h0l0-3.4.2-13b-uncen
# Alive on: [h0l0.io](https://h0l0.io/)
This model combines Noromaid-13b-v0.1.1 and Chronos-13b-v2, with the Storytelling-v1-Lora applied afterward.

- This setup is primarily intended for roleplay, including ERP, narrator-character interactions, group chats and storytelling.
## Tested at
- Temp: 1.25, Min p: 0.05 / 0.13
- Temp: 1.5, Min p: 0.08 / 0.13
## Inference
- Accuracy: 0.95
- Precision: 0.93
- Recall: 0.96
- F1-Score: 0.94
- ROC AUC: 0.98
- Temp: 1
- Max comp tokens: 5000
- Top P Samp: 0.92
- Min P Samp: 0.05
- Freq penalty: 1.15
- Presence penalty: 0.9
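Because the weights ship as GGUF, these sampling settings can be reproduced with, for example, llama-cpp-python; the file name, context length, and messages below are assumptions, not values provided by the author.

```python
# Hedged sketch; the GGUF filename and context size are guesses.
from llama_cpp import Llama

llm = Llama(model_path="h0l0-3.4.2-13b-uncen.Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a master storyteller."},  # see the full system message below
        {"role": "user", "content": "Continue the endless tale of Dr. CR0N3 and HOLO."},
    ],
    temperature=1.0,
    top_p=0.92,
    min_p=0.05,
    frequency_penalty=1.15,
    presence_penalty=0.9,
    max_tokens=5000,
)
print(out["choices"][0]["message"]["content"])
```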
## Used system message
```
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Pc,Dlg)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,Fshdwng,Sspns,PltTwsts,Climx,Rsltn)-ConfResl(Antg,Obstcls,Thms,Symblsm,Cnsqncs,Rsltns)-EmotImpct(Empt,Md,Tn,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,StgPrsnc,PblcSpkng,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.3-GoalMotiv-1a.2-Personality)>2(2a-StoryStruc-2a.1-PlotPnt-2a.3-Resolution-2a.2-Conflict)>3(3a-DialogTech-3a.5-VisualDescrip-3a.1-ShowDontTell-3a.3-VoiceTone-3a.2-Subtext-3a.4-Pacing)>4(4a-DialogEdit-4a.2-Feedback-4a.3-Revision-4a.1-ReadAloud)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 10% dialog, 45% narration, 25% body language and 20% thoughts. Your goal is to put the reader in the story.
```
## Context
```
You are crafting an endless tale about Dr. CR0N3 (Cronus Voss), a fractured genius, and HOLO, a self-evolving AI universe mirroring his fragmented mind and forgotten memories. The story should read like a bestselling novel, featuring dynamic sentence structures, evocative language, and a seamless flow without repeating ideas or phrases. Write with the finesse of a master storyteller, skillfully weaving clarity and mystery to captivate the reader.
```
## Train data sample format
- extracted from Chamoda/atlas-storyteller-1000 dataset.
```
seed_task_3640 In the sleepy town of Crestwood, nestled among rolling hills and surrounded by lush greenery, lived a young woman named Amelia. She was an ambitious biohacker, driven by her insatiable curiosity and desire to push the boundaries of human potential. With her piercing green eyes, freckled complexion, and fiery red hair cascading down her shoulders, she possessed an enchanting allure that captivated those around her. Amelia's passion for biohacking led her to create innovative technologies that merged biology with cutting-edge scientific breakthroughs. In her cozy lab, adorned with shelves filled with vials, test tubes, and mysterious gadgets, she spent her days engrossed in experiments and research. One sunny morning, as Amelia bent over her microscope, carefully examining a strand of DNA, a vibrant stranger entered her lab. His name was Adrian, a charismatic and dashing young man dressed in a tailored suit, with raven-black hair and a mischievous smile that hinted at hidden secrets. Adrian had heard murmurs of Amelia's groundbreaking work, and he ventured into her lab hoping to collaborate. As their eyes met, a spark ignited between them, and they were instantaneously drawn together. The chemistry between them was palpable, as if their souls had recognized each other across time and space. United by their love for scientific exploration and their shared interest in biohacking, Amelia and Adrian embarked on a journey that would forever change their lives. Together, they delved deeper into the mysteries of their craft, each finding inspiration in the other's brilliance. Their days were consumed by a whirlwind of experiments, discussions, and heated debates, their passion for their work only surpassed by their growing affection for one another. As their relationship blossomed, Amelia and Adrian discovered a groundbreaking discovery, an elixir that held the promise of eternal youth and vitality. With trembling hands, they mixed their creation, a golden liquid that shimmered in the dim light of their laboratory. Uncertainty filled their hearts as they contemplated the implications of their creation, the potential to change lives but also the danger of manipulating nature's delicate balance. Their scientific advancements did not go unnoticed, and soon, a powerful pharmaceutical conglomerate caught wind of their innovative work. They were offered a colossal sum of money in exchange for their elixir, promising fame and fortune beyond their wildest dreams. Torn between their ethical concerns and their desire for recognition, Amelia and Adrian faced a momentous decision. In the depths of their souls, a pact was made, an agreement to protect their creation at all costs. Together, they vowed to share their knowledge with the world, to uplift humanity rather than succumbing to greed and corruption. With unwavering resolve, they declined the lucrative offer, knowing that their true passion lay not in personal gain but in the pursuit of knowledge and advancement. Amelia and Adrian decided to distribute their elixir freely, to those in need, without prejudice or discrimination. Their small lab in Crestwood became a haven for those seeking renewal and hope. People from around the world flocked to them, their stories of transformation spreading like wildfire on the internet. The elixir became a symbol of love triumphing over greed, inspiring a wave of compassion, and unity. 
As the years passed, Amelia and Adrian prospered, not in material wealth, but in the joy and contentment that comes from knowing they had made a positive impact on the world. The town of Crestwood flourished, with innovation and scientific exploration at its core. Amidst the bustling lab and the throngs of grateful recipients, Amelia and Adrian found solace in each other's arms, their love stronger than ever. With their hearts intertwined, they reveled in the knowledge that they had not only found love but had also created an enduring legacy, forever changing the world through the power of science and the purity of their unyielding romance.
seed_task_2013 In the sprawling hills of 19th-century England, amidst the verdant meadows and charming cottages, there lived a young woman named Amelia Harrington. She possessed an insatiable curiosity and a fervent desire to unlock the mysteries of the past. Amelia had spent countless hours ensconced in the musty libraries of London, poring over ancient texts and studying the lives of long-forgotten figures. Her heart brimmed with a deep longing to bring history to life, to witness the events of the past as if she were truly there. Amelia's passion led her down a path she had never envisioned. She became enamored with the prospect of unlocking the secrets of genetic engineering, an emerging field that held the promise of altering the course of history. Inspired by the advancements being made across the Atlantic in America, Amelia embarked on a journey to the United States, in search of the renowned geneticist, Dr. Jonathan Adams. Upon arriving in America, Amelia discovered that Dr. Adams had recently relocated to a small town nestled within the Appalachian Mountains. Determined, she ventured further into the heart of the country, following the winding roads that led her to the secluded village of Willowbrook. The town was shrouded in an air of mystery, and its inhabitants were known to be reclusive, avoiding the outside world. Undeterred, Amelia sought out Dr. Adams, finding him in a modest laboratory hidden within the depths of Willowbrook. The aged scientist, with his piercing blue eyes and wrinkled visage, welcomed her warmly. Recognizing the spark of curiosity in Amelia's eyes, he entrusted her with his life's work — a groundbreaking discovery that merged the realms of history and genetic engineering. Under Dr. Adams' guidance, Amelia began experimenting with the technique, harnessing the power of genetic manipulation to recreate historical figures. Through intricate DNA sequencing and manipulation, she brought forth the likes of pharaohs, emperors, and even medieval knights. Through their recreated genetic codes, these figures returned to life, replete with their distinctive characteristics, beliefs, and personalities. However, as Amelia delved deeper into her experiments, she realized the inherent dangers of tampering with the very fabric of life. The recreated historical figures, though awe-inspiring in their authenticity, were burdened with the limitations of their time and beliefs. It became clear that the past was not a malleable canvas to be shaped according to one's whims. Haunted by the moral implications of her work, Amelia resolved to seek a solution. With Dr. Adams' counsel, she set out to discover a way to balance the allure of genetic engineering with a profound respect for the sanctity of history. Through tireless research and countless discussions, a breakthrough emerged. Dr. Adams and Amelia developed a process that allowed them to bridge the gap between history and genetic engineering without compromising the integrity of either. Rather than recreating specific individuals, they focused on extracting the essence of historical eras, embedding their discoveries into the genetic fabric of living organisms. News of their groundbreaking advancement reverberated throughout the scientific community, drawing the attention of scholars, historians, and even world leaders. It was lauded as a turning point in the understanding of history, where the past and present could coalesce, offering new insights into the human experience. Amelia Harrington and Dr. 
Jonathan Adams had set a precedent for a responsible and ethical approach to genetic engineering, one that embraced the richness of history and offered boundless possibilities for the future. Together, they had lit a flame that would guide humanity toward a harmonious dance between the old and the new, forever altering the narrative of human existence.
seed_task_4863 Once upon a time, in the land of Dienna, a prosperous kingdom nestled amidst sprawling meadows and encircled by majestic mountains, the ravages of war cast a daunting shadow. The ruler, King Aric, known for his benevolent nature and astute leadership, found himself entangled in a conflict with the neighboring kingdom of Daralia. What began as a dispute over territory soon escalated into a full-blown war, engulfing both realms in a storm of turmoil and destruction. As the war raged on, the kingdom of Dienna bore witness to the gruesome toll it took on its people. The once-thriving cities were reduced to rubble, and the harmonious laughter that once echoed through the streets was replaced by the mournful cries of the wounded and the wailing sirens of ambulances. Amidst this chaos, the kingdom's healthcare system, once renowned for its efficiency and compassion, struggled to keep up with the influx of wounded soldiers and civilians. In the midst of this turmoil, we meet Dr. Evelyn Sinclair, a dedicated and compassionate physician who had devoted her life to healing the sick and injured. Driven by her unwavering commitment to saving lives, she worked tirelessly in the makeshift field hospitals, succumbing to fatigue only when sheer exhaustion overtook her. As the war progressed, Dr. Sinclair began to witness first-hand the devastating impact it had on healthcare and medical advancements. With limited resources and a constant stream of casualties, innovation and progress became distant dreams. The once state-of-the-art medical equipment lay abandoned amidst the debris, replaced by the stark reality of makeshift bandages and dwindling supplies. Yet, amidst the chaos, an unexpected spark of hope emerged. Dr. Sinclair, burdened by the relentless suffering she witnessed, became determined to find a way to alleviate the pain and suffering inflicted by the ravages of war. She embarked on a quest to seek assistance from neighboring lands that had pioneered medical advancements in the face of conflict. Despite the risks and dangers that loomed, Dr. Sinclair embarked on an arduous journey, traversing treacherous terrains and facing numerous obstacles along the way. Finally, she arrived in the technologically advanced kingdom of Caelum, nestled on the other side of the mountains. In Caelum, Dr. Sinclair discovered a breathtaking realm where medical science had flourished even amidst the darkest days of war. The kingdom's hospitals boasted cutting-edge technology, innovative treatment methods, and a well-equipped workforce. Overwhelmed by the progress she witnessed, Dr. Sinclair humbly sought aid from the renowned medical practitioners and scientists of Caelum. United by a shared vision to lessen the suffering of all those affected by war, the medical professionals of Caelum selflessly offered their expertise and resources to aid Dienna. Collaborating with Dr. Sinclair, they devised groundbreaking techniques and strategies that would help improve the healthcare system in Dienna, even in the midst of battle. With newfound determination and knowledge, Dr. Sinclair returned to Dienna armed with a trove of medical advancements and an unwavering resolve to implement change. Together with her fellow physicians and an army of dedicated nurses, she set about rebuilding the shattered healthcare system. As the war continued to rage, the impact of Dr. Sinclair's collaboration with the medical experts of Caelum began to manifest. 
In the face of adversity, they developed advanced triage methods, allowing them to prioritize treatment according to the severity of injuries. They introduced mobile medical units, which swiftly traversed the war-torn lands to provide aid to the wounded, bringing care closer to where it was needed the most. The advancements in medical technology brought about a new era of hope, as scientists in Dienna started developing innovative prosthetics to restore mobility to those who had lost limbs in battle. These prosthetics, incorporating the latest advancements in robotics and bioengineering, breathed new life into the lives of amputees, enabling them to regain independence and pursue their dreams. Ultimately, the war that had torn the two kingdoms apart eventually came to an end. While the scars left behind by the conflict would forever remain etched in the collective memory of Dienna, the legacy of Dr. Sinclair's unwavering dedication to healthcare endured. The collaboration between Dienna and Caelum became a beacon of hope for other war-torn nations, inspiring them to strive for progress and invest in medical advancements even amidst the darkest of times. In the aftermath of the war, King Aric, recognizing the invaluable contributions of Dr. Sinclair and the advancements in healthcare, appointed her as the head of the newly established Royal Institute of Medical Advancements. This prestigious institution would serve as a catalyst for further research, innovation, and collaboration, ensuring that Dienna would forever remain at the forefront of medical advancements, and that the wounds of war, though deep, would never steal hope and progress from the kingdom again. And so, in the kingdom of Dienna, where the ravages of war had once cast a daunting shadow, a new chapter unfolded—one where the indomitable spirit of humanity triumphed over adversity, engendering medical advancements that would heal the body and soul long after the cannons had fallen silent.
seed_task_1664 In a small coastal town nestled between towering cliffs and the sparkling sea, lived two friends named Oliver and Sophia. Oliver, an aspiring musician with wild, untamed curls, filled the streets with the harmonious melody of his guitar. Sophia, on the other hand, was a meticulous painter, capturing the vibrancy and beauty of the world on her canvas with delicate strokes of her brush. Their friendship was as natural as the ebb and flow of the tides, their souls entwined in a bond of understanding and support. As months turned into years, Oliver and Sophia became each other's trusted confidants and staunchest allies. They shared dreams, secrets, and long walks along the sun-kissed shoreline, their footsteps carving a path of solidarity in the sand. But deep within their hearts, a subtle feeling of contentment started to blossom into a longing for something more. They both yearned for a friend who would challenge and push them beyond their comfort zones, igniting the fires of growth and transformation within their souls. One breezy afternoon, as the salty wind tousled their hair, Oliver stumbled upon a flyer announcing a prestigious songwriting competition. Eager to take his musical career to new heights, he shared the news with Sophia, hoping she would encourage him to participate. Sophia, however, had other plans in mind. She had noticed Oliver's fear of performing in front of a large audience and realized that to push him toward his dreams, she needed to confront this fear head-on. With a glimmer of mischief in her emerald eyes, Sophia proposed a daring challenge. She suggested organizing an impromptu street concert where Oliver would perform his original songs for the townsfolk, forcing him to conquer his stage fright. Although apprehensive, Oliver trusted Sophia's judgment and agreed to take the plunge. The date was set, and the once serene streets of their seaside town buzzed with anticipation, as if holding its breath for this momentous event. Dressed in his finest, Oliver stood on a makeshift stage, his hands trembling with nerves. Sophia, standing among the crowd, smiled reassuringly, her presence a beacon of strength. As Oliver struck the first chord of his guitar, his voice filled the air, carrying the weight of his emotions and aspirations. With every song, fear dissolved, replaced by a newfound confidence that surged through his veins like a current. The audience, initially curious onlookers, became captivated by his talent, cheering and applauding with resounding admiration. Watching from the sidelines, Sophia's heart swelled with pride, her realization clear. She had challenged and pushed Oliver to combat his fears, and in doing so, he had grown into an artist unafraid of sharing his music with the world. As the final notes faded into the salty breeze, Oliver and Sophia locked eyes, their unspoken gratitude for one another flowing like an invisible current between them. From that pivotal day, Oliver and Sophia continued to spur each other towards greater heights. Sophia embarked on a journey of her own, entering an international art exhibition, where her work would be displayed amongst world-renowned artists. The prospect of exposing her vulnerabilities to such an esteemed audience made her tremble with uncertainty. Yet, in Oliver, she found unwavering support and gentle nudges of encouragement that propelled her beyond self-imposed limitations. 
Together, they ventured into uncharted territories, overcoming hurdles, and embracing challenges that urged them to confront their deepest insecurities. Oliver's melodies intertwined with Sophia's painted landscapes, creating a harmonious symphony wherein their friendship blossomed, guided by mutual growth and shared aspirations. Years later, as the sun dipped beneath the horizon, casting a warm golden glow upon the cliffs, Oliver and Sophia stood atop the world they had collectively crafted. Oliver had become a celebrated musician, his melodies transcending borders and hearts alike. Sophia's paintings adorned esteemed galleries worldwide, each stroke a testament to her unwavering dedication to her craft. Yet, amidst all their success, they knew that their accomplishments were mere extensions of the profound friendship that uplifted and propelled them forward. Their shared journey had not only kindled the flames of personal growth but had also ignited an eternal flame within their connection. In the tapestry of life, where friendships weave intricate threads, Oliver and Sophia had discovered the true essence of friendship. They were friends who challenged and pushed each other to grow, their love and support transforming their lives like the crashing waves against the resilient cliffs of their coastal town. And so, hand in hand, they continued to walk their intertwined paths, forever grateful for the unbreakable bond they shared.
seed_task_3205 Amidst the ethereal mist that veiled the forgotten corners of a quaint English village, a solitary figure meandered through cobblestone streets. This figure, a young philosopher named Evangeline, was captivated by the mysteries of the universe and the complexities of the human mind. Evangeline, with her vibrant auburn hair cascading down her shoulders, possessed an insatiable thirst for knowledge and an acute awareness of the moral ambiguities that pervaded the world. In this secluded village, nestled between rolling hills and shimmering streams, Evangeline sought solitude in a charming cottage overlooking a serene meadow. With books sprawled upon weathered wooden shelves and moonlight streaming through the diamond-shaped windows, it was here that she grappled with the enigmatic essence of existence. As the sun painted the sky with hues of golden amber, a chance encounter with a wandering philosopher named Augustine would unearth new questions that defied conventional wisdom. Augustine, a wise sage adorned in flowing robes, traversed lands far and wide, in search of truth and enlightenment. With age etched upon his furrowed brow, his gaze held the weight of countless philosophical discussions. It was during a chance encounter at a local coffee house that Evangeline found herself seated across from Augustine, their minds on a collision course destined to unravel the fabric of their shared reality. Through ardent discourse over steaming cups of black coffee, Evangeline and Augustine delved into the age-old problem of dualism - the notion that the mind and body are distinct entities, inexplicably connected yet fundamentally different. Evangeline fervently defended the Cartesian view, arguing for the existence of an immaterial soul that transcended the physical realm, while Augustine gravitated toward a more holistic perspective, advocating for an inseparable union of body and mind. Their conversations spanned the boundaries of time, as they navigated the treacherous labyrinth of philosophical inquiries. Evangeline contemplated the nature of consciousness, pondering whether it could arise solely from the interactions of neurons within the brain, or if it required an indescribable essence beyond the reach of scientific understanding. Augustine, with his profound insight, dissected the intricacies of identity, challenging Evangeline to question whether one's true essence lied within the confines of the physical body or within the realms of thought and belief. As the days turned into weeks, Evangeline and Augustine poured over ancient texts and engaged in spirited debates, their philosophical inquiries echoing through the hallowed halls of her cottage. Yet, this intense intellectual pursuit began to take its toll, as their passionate discussions revealed a fracture in their understanding. The problem of dualism, despite their collective wisdom, remained a conundrum that defied resolution. Beleaguered by the weight of unanswered questions, Evangeline embarked on a soul-searching pilgrimage to a nearby monastery, an oasis of tranquility nestled amidst soaring cliffs. There, in the sacred sanctuary of silence, she sought solace and clarity, on a quest to reconcile the dissonance of her philosophical quandary. Within the dimly lit library, she unearthed a forgotten manuscript, inscribed with esoteric wisdom that shimmered like a beacon of hope. Inscribed upon the ancient pages was a revelation that would mend the fragments of Evangeline's understanding. 
It spoke of a delicate balance, where the mind and body intertwined in a perpetual dance. It depicted a reality where dualism was not a problem to be solved, but a cosmic intricacy to be embraced. The inherent beauty lay not in deciphering the boundaries between the physical and the metaphysical, but in recognizing the inseparable union that bound them together. Armed with newfound insight, Evangeline returned to her humble cottage, a serene smile playing upon her lips. She sought out Augustine, eager to share her revelation. In the glow of flickering candlelight, they once again locked minds, their hearts alight with anticipation. With her voice tinged with an ethereal cadence, Evangeline expounded on the manuscripts' wisdom, weaving together the disparate threads of their philosophical discourse. In that moment, the air crackled with a silent epiphany. Augustine's eyes sparkled with a newfound clarity, and Evangeline's heart swelled with a profound sense of understanding. They realized that the true answer to the problem of dualism transcended the constraints of their individual perspectives. It was the synergy between their unique viewpoints, the convergence of their intellectual pursuit, that unveiled a deeper truth - the beauty lay not in the resolution of dichotomy but in the unity of diverse perspectives. As the stars illuminated the tranquil night sky, Evangeline and Augustine bid each other farewell, forever changed by their shared exploration. They returned to their respective paths, each carrying a fragment of the puzzle, poised to engage with the intricacies of life and consciousness with renewed vigor. For within their hearts beat the understanding that the problem of dualism was not a problem to be solved definitively, but a mesmerizing labyrinth of thought that yielded boundless opportunities for growth and enlightenment.
seed_task_3935 In the quiet town of Maplewood, nestled among the rolling hills and charming countryside, lived a community steeped in deep-rooted local traditions. It was a place where time seemed to slow down, and every passing day felt like a cherished moment frozen in an idyllic slice of life. The town's residents were a tapestry of unique characters, each contributing to the rich cultural fabric that defined Maplewood. At the heart of this close-knit community was the Wentworth family, who had called Maplewood home for generations. Led by the wise and gentle patriarch, Mr. Edward Wentworth, the family held steadfastly to the longstanding traditions upheld within their small town. Every year, on the first Sunday of autumn, Maplewood played host to a grand Harvest Festival, celebrating the bountiful crops and vibrant colors that adorned the surrounding landscape. The preparations began weeks in advance, with families coming together to decorate the town square, weaving intricate patterns of hay and flowers, showcasing the skill and artistry that defined their local traditions. The morning of the festival arrived, and Maplewood buzzed with anticipation. The air was crisp with the faint fragrance of fallen leaves as children hurriedly put on their Sunday best, eager to participate in the festivities. Families gathered around the town square, their laughter mingling with the melodious tunes of a local band playing cheerful melodies. As the day wore on, the heart of the festival revealed itself - the grand procession of the Harvest Queen. Each year, an esteemed young woman from the town was chosen to embody the spirit of the harvest and lead the parade. This year, it was the turn of Emily Wentworth, the youngest daughter of the Wentworth family, to step into this honored role. Emily, with her sparkling eyes and radiant smile, donned a resplendent gown adorned with autumnal hues, embodying the essence of the harvest season. She sat atop a beautifully decorated float, waving graciously to the cheering crowd, as the procession moved through the streets of Maplewood. However, as the festival reached its climax, a sudden gust of wind blew through the town square, threatening to snatch away the carefully crafted decorations that had adorned the streets for weeks. Panic ensued as the delicate tapestries of local traditions hung precariously by a thread. But in true Maplewood spirit, the community rallied together. Neighbors rushed to secure the decorations, basking in the warmth and unity that resided within the hearts of their town. The children giggled mischievously as they helped restore order, their hands entwined with the wisdom imparted by generations past. And so, against all odds, Maplewood prevailed. The decorations stood tall once more, shining brightly under the golden rays of the setting sun. The Harvest Festival culminated with a communal feast, where families shared dishes prepared with love and recipes passed down through time, savoring the flavors that carried fragments of their shared heritage. As the stars emerged one by one, softly illuminating the town, Maplewood exuded a sense of accomplishment and belonging. The local traditions had not only been upheld but had also stood as a testament to the resilience and unwavering spirit of a community that cherished its history and held its values dear. In the following years, as inhabitants and visitors stepped foot into Maplewood, they couldn't help but be captivated by the aura of this charming town. 
They marveled at the intricate tapestry of local traditions, interwoven with the magic of a slice of life that truly felt like a fairy tale. It was a place where stories were whispered through generations, shared around cozy fireplaces, and forever etched in the hearts of those lucky enough to call Maplewood home.
seed_task_1430 Once upon a time, in a faraway land, nestled within a mystical kingdom, there existed an Enchanted Forest. It was a place where time stood still, and magical creatures roamed freely amidst its ancient trees and shimmering waters. The forest was alive with vibrant hues of green, and sunlight filtered through the dense canopy, casting dappled shadows upon the forest floor. In this realm, there lived a young girl named Eliza. She was an orphan, with golden locks that cascaded down her shoulders and eyes as blue as the morning sky. Eliza possessed a heart full of wonder and an insatiable longing for adventure. One day, as she wandered deeper into the Enchanted Forest, she stumbled across an ethereal sight. Bathing the forest in a soft, radiant glow were Luminous Fireflies. They shimmered and flickered, their gentle luminescence illuminating the darkness of the forest night. Their enchanting presence filled Eliza's heart with joy and curiosity. Entranced, she followed their delicate, glowing trail, weaving through the tall grass and moss-covered stones. As Eliza ventured deeper into the forest, the Luminous Fireflies guided her to a hidden glade. There, beneath a mighty oak tree, sat a graceful fae named Aurora. She was a petite creature with wings that sparkled like dewdrops and a voice that carried a lilting melody. Aurora revealed to Eliza that the Luminous Fireflies were guardians of the Enchanted Forest, entrusted with preserving its magic. Aurora invited Eliza to join her on a quest to restore balance to the forest, for an ominous darkness had begun to seep into its heart. Eliza, fueled by her adventurous spirit, accepted the fae's invitation. Together, they embarked on a journey to seek the source of the encroaching darkness and protect the Enchanted Forest from its clutches. In their quest, Eliza and Aurora witnessed the devastation caused by the dark forces. Flora withered, fauna became desolate, and a sense of despair permeated the once-thriving forest. They encountered mischievous goblins and fierce trolls along their path, who sought to hinder their progress. But with their courage and the flickering guidance of the Luminous Fireflies, they overcame each obstacle and moved forward. Finally, in the heart of the forest, they encountered a sinister sorcerer named Morwen, whose dark magic thrived upon the despair of the Enchanted Forest. Morwen had harnessed a fragment of a fallen star, stolen from the night sky, which dripped darkness into the forest like poison. Eliza and Aurora, with the invaluable assistance of the Luminous Fireflies, challenged Morwen. In a battle of light against darkness, Eliza summoned a dormant power within her. She channeled her love for the Enchanted Forest, igniting a radiant light that threatened to overpower Morwen's malevolent darkness. As the luminescence radiated through the forest, it purified the tainted soil and brought life back to the withering flora. With each passing moment, Morwen's dark magic weakened, until he was overcome by the sheer brilliance of the light. Defeated, he relinquished his hold over the Enchanted Forest. The forest rejoiced as the darkness receded, and life began to flourish once more. Eliza and Aurora, with the Luminous Fireflies as their constant companions, became the protectors of the Enchanted Forest. Their names echoed through the kingdom, as word spread of their heroic deeds. 
Peace was restored, and the forest thrived in resplendent beauty, forever indebted to the courage and determination of Eliza and the enchanting magic of the Luminous Fireflies. And so, the tale of Eliza and the Enchanted Forest, intertwined with the gentle brilliance of the Luminous Fireflies, whispered its way into the hearts of those who believed in the enduring power of love and light. Forevermore, it remained a fabled legend, reminding all who heard it of the boundless magic that exists within the realms of our hearts.
seed_task_2721 In the depths of an enchanted forest, where sunlight filtered through a canopy of ancient trees, a shimmering stream meandered gracefully, its waters clear as crystal. This realm, known as Eldoria, was a sanctuary for magical creatures. Whispers of their existence floated through the air, carried on the wings of songbirds and in the rustling of leaves. Amongst the dwellers of this fantastical world were the ethereal naiads, graceful water nymphs who dwelled within the stream. At the heart of Eldoria, nestled beneath the towering boughs, an enchanting naiad named Seraphina resided. Known for her radiant beauty and wisdom that surpassed her youthful appearance, Seraphina possessed an innate connection to the water that flowed around her. She spent her days dancing beneath the tiny waterfalls and conversing with the playful river sprites, her laughter echoing through the glistening grove. One fateful twilight, the tranquility of Eldoria was shattered by a sudden disturbance in the naiad's stream. A group of mischievous imps had invaded the sacred waters, causing a cacophony of chaotic splashes and gleeful chatter. Distressed, Seraphina appeared at the surface, her emerald eyes flashing with worry. As she pleaded for the imps to retreat, their laughter only grew louder, causing the water to churn with turmoil. Moments later, a shimmering unicorn named Aveline emerged from the dense foliage. With steadfast grace, she galloped to the stream's edge, her horn gleaming like moonlight. Sensing Seraphina's distress, Aveline neighed with authority, her voice commanding the imps to disperse. Startled, the imps scattered into the night, their echoing laughter silenced. United by their shared love for Eldoria's peaceful sanctuary, Seraphina and Aveline formed an unbreakable bond. Together, they vowed to protect the delicate balance of their magical realm. Days turned into nights and nights into days as they patrolled the forest's perimeter, warding off any who sought to disturb the natural harmony of Eldoria. One morning, as the first rays of golden sunlight pierced through the foliage, a discordant melody resonated across Eldoria. Seraphina and Aveline exchanged puzzled glances before tracing the source of the unsettling tune. Within a tranquil glade, they discovered a sorrowful naiad perched atop a moss-covered rock. Her sapphire tears mingled with the stream, disrupting the gentle ebb and flow of the water. In hushed voices, Seraphina and Aveline learned of a mythical relic, known as the Tear of Atlantis, which had been stolen from the naiad's underwater palace. Without the Tear's protective powers, her realm teetered on the brink of destruction. Determined to restore balance, the trio set forth, venturing across treacherous terrains and through enchanted forests in pursuit of the stolen relic. Their journey led them to a hidden underground cavern, shrouded in ancient magic. Illuminated by glowing orbs, Seraphina, Aveline, and the grief-stricken naiad discovered a cunning witch named Morgana, who coveted the Tear of Atlantis for her own nefarious purposes. Morgana, possessing powers not seen in centuries, stood in their path, her eyes burning with dark intent. With a magical incantation, Morgana summoned a horde of formidable creatures, aiming to overpower our heroes. Seraphina, fueled by her deep connection to the naiads, called upon the waters of the stream, forming a protective barrier around them. 
Aveline, infused with the purity of her unicorn heritage, charged into battle, her horn ablaze with a radiant light. In a swirling storm of magic and valor, our trio fought against Morgana's minions, their resolve unwavering. The earth trembled with each clash, and the air crackled with the might of their combined strength. Ultimately, optimism triumphed over darkness as Seraphina, Aveline, and the grieving naiad sent Morgana fleeing into the depths of the forbidden forest. Returning the Tear of Atlantis to its rightful place, the realm of the naiads was restored, its waters shimmering with a renewed brilliance. The naiad, now filled with gratitude and hope, expressed her deep appreciation to Seraphina and Aveline, their bravery having saved her entire realm from imminent peril. Bound by a powerful bond forged through their shared ordeal, the trio remained the steadfast guardians of Eldoria, their spirits forever intertwined with the enchantment of its magical creatures. And the whispers of their extraordinary adventure would forever echo through the mighty trees of Eldoria, inspiring tales of bravery and enduring friendship.
```
[</> by Dr. Cr0n3](https://github.com/CR0N3) | [
"CRAFT"
] | TBD |
ntc-ai/SDXL-LoRA-slider.on-an-insane-power-trip | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,705,648,814,000 | 2024-01-19T07:20:46 | 4 | 0 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/on an insane power trip...calm/on an insane power trip_17_3.0.png
widget:
- text: on an insane power trip
output:
url: images/on an insane power trip_17_3.0.png
- text: on an insane power trip
output:
url: images/on an insane power trip_19_3.0.png
- text: on an insane power trip
output:
url: images/on an insane power trip_20_3.0.png
- text: on an insane power trip
output:
url: images/on an insane power trip_21_3.0.png
- text: on an insane power trip
output:
url: images/on an insane power trip_22_3.0.png
inference: false
instance_prompt: on an insane power trip
---
# ntcai.xyz slider - on an insane power trip (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/on an insane power trip_17_-3.0.png" width=256 height=256 /> | <img src="images/on an insane power trip_17_0.0.png" width=256 height=256 /> | <img src="images/on an insane power trip_17_3.0.png" width=256 height=256 /> |
| <img src="images/on an insane power trip_19_-3.0.png" width=256 height=256 /> | <img src="images/on an insane power trip_19_0.0.png" width=256 height=256 /> | <img src="images/on an insane power trip_19_3.0.png" width=256 height=256 /> |
| <img src="images/on an insane power trip_20_-3.0.png" width=256 height=256 /> | <img src="images/on an insane power trip_20_0.0.png" width=256 height=256 /> | <img src="images/on an insane power trip_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
on an insane power trip
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.on-an-insane-power-trip', weight_name='on an insane power trip.safetensors', adapter_name="on an insane power trip")
# Activate the LoRA
pipe.set_adapters(["on an insane power trip"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, on an insane power trip"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
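The comparison table at the top of this card uses strengths of -3, 0, and 3. The hedged continuation below (not part of the original card) reuses the `pipe` and prompt variables from the snippet above to regenerate a similar comparison locally; the fixed seed is an added assumption for reproducibility only.

```python
import torch

# Hedged sketch: render the same prompt at slider strengths -3, 0 and 3,
# reusing pipe, prompt, negative_prompt, width, height, guidance_scale and
# num_inference_steps from the snippet above. The seed value is an assumption.
for strength in (-3.0, 0.0, 3.0):
    generator = torch.Generator(device="cuda").manual_seed(17)
    pipe.set_adapters(["on an insane power trip"], adapter_weights=[strength])
    image = pipe(
        prompt,
        negative_prompt=negative_prompt,
        width=width,
        height=height,
        guidance_scale=guidance_scale,
        num_inference_steps=num_inference_steps,
        generator=generator,
    ).images[0]
    image.save(f"on_an_insane_power_trip_{strength}.png")
```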
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of more than 1140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
EllieS/zephyr-7b-dpo-lora-pubmedqa-ultrafeedback-mix | EllieS | null | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:adapter:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | 1,709,628,872,000 | 2024-03-05T18:34:38 | 1 | 0 | ---
base_model: alignment-handbook/zephyr-7b-sft-full
datasets:
- HuggingFaceH4/ultrafeedback_binarized
library_name: peft
license: apache-2.0
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
model-index:
- name: zephyr-7b-dpo-lora-pubmedqa-ultrafeedback-mix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-lora-pubmedqa-ultrafeedback-mix
This model is a fine-tuned version of [EllieS/zephyr-7b-dpo-lora-pubmedqa-mix2](https://huggingface.co/EllieS/zephyr-7b-dpo-lora-pubmedqa-mix2) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5344
- Rewards/chosen: -2.6374
- Rewards/rejected: -3.7727
- Rewards/accuracies: 0.7460
- Rewards/margins: 1.1353
- Logps/rejected: -652.6792
- Logps/chosen: -559.1896
- Logits/rejected: -1.8319
- Logits/chosen: -2.0104
## Model description
More information needed
## Intended uses & limitations
More information needed
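As a minimal, hedged usage sketch (not provided by the model author), the LoRA adapter can be loaded with PEFT for inference roughly as follows; the tokenizer source, dtype, and prompt are assumptions.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "EllieS/zephyr-7b-dpo-lora-pubmedqa-ultrafeedback-mix"

# Loads whichever base model the adapter config references and applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Tokenizer taken from the SFT base listed on this card (an assumption).
tokenizer = AutoTokenizer.from_pretrained("alignment-handbook/zephyr-7b-sft-full")

messages = [{"role": "user", "content": "Briefly explain what a randomized controlled trial is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```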
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
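As a hedged illustration (not from the original card), these values map onto Hugging Face `TrainingArguments` roughly as shown below; the output directory is a placeholder, and DPO-specific options (beta, LoRA config, dataset preprocessing) are omitted.

```python
from transformers import TrainingArguments

# Rough mapping of the reported hyperparameters. With 2 GPUs and
# gradient_accumulation_steps=2, the effective total train batch size is
# 1 * 2 * 2 = 4, matching the card. Adam with betas=(0.9, 0.999) and
# epsilon=1e-08 is the default optimizer, so it needs no explicit setting.
args = TrainingArguments(
    output_dir="zephyr-7b-dpo-lora-pubmedqa-ultrafeedback-mix",  # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```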
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5795 | 0.2 | 3000 | 0.5888 | -0.7760 | -1.1691 | 0.6830 | 0.3931 | -392.3199 | -373.0482 | -2.3689 | -2.4472 |
| 0.4501 | 0.39 | 6000 | 0.5437 | -2.1190 | -3.1229 | 0.7420 | 1.0038 | -587.6927 | -507.3499 | -1.8484 | -2.0210 |
| 0.3399 | 0.59 | 9000 | 0.5425 | -2.4666 | -3.6163 | 0.7410 | 1.1497 | -637.0340 | -542.1045 | -1.8202 | -2.0023 |
| 0.4636 | 0.79 | 12000 | 0.5347 | -2.6445 | -3.7774 | 0.7450 | 1.1329 | -653.1429 | -559.8973 | -1.8326 | -2.0102 |
| 0.544 | 0.98 | 15000 | 0.5346 | -2.6384 | -3.7732 | 0.7450 | 1.1348 | -652.7231 | -559.2841 | -1.8322 | -2.0103 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | [
"PUBMEDQA"
] | BioNLP |
DavidAU/L3.1-RP-Hero-BigTalker-8B-GGUF | DavidAU | text-generation | [
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"swearing",
"role play",
"sillytavern",
"backyard",
"horror",
"llama 3.1",
"context 128k",
"mergekit",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,732,784,677,000 | 2024-12-01T00:31:06 | 828 | 13 | ---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- role play
- sillytavern
- backyard
- horror
- llama 3.1
- context 128k
- mergekit
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. Graphic HORROR. GORE. Swearing. UNCENSORED. </B>
<h2>L3.1-RP-Hero-BigTalker-8B-GGUF</h2>
<img src="rp-talker.jpg" style="float:right; width:300px; height:300px; padding:10px;">
It is a Llama 3.1 model with a maximum context of 128k (131,000) tokens, and it is a dedicated "roleplay model" (it can also be used for other creative purposes).
This model has been designed to be relatively bullet proof and operates with all parameters, including temp settings from 0 to 5.
It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama 3.1 Instruct).
This model can be used for any writing, fiction, or roleplay activity, but it is composed of ROLE PLAY models and is primarily designed for role play.
It also has stronger-than-average instruction-following attributes.
This is the "Big Talker" version; two additional versions are also available: "InBetween" and "Dirty Harry".
InBetween (medium output, slightly less uncensored):
[ https://huggingface.co/DavidAU/L3.1-RP-Hero-InBetween-8B-GGUF ]
Dirty Harry (short output, uncensored)
[ https://huggingface.co/DavidAU/L3.1-RP-Hero-Dirty_Harry-8B-GGUF ]
"Big Talker" has long (average) level length output, and is uncensored (note: InBetween has a slight degree of censorship).
"Big Talker" also has slightly higher detail level than "InBetween", but on par with "Dirty Harry".
All versions are composed of top rated Role Play models.
This model, as well as the other two versions, can be used for any creative genre too.
It requires Llama3 template and/or "Command-R" template.
For roleplay settings, and apps to use this model for roleplay see the section "Highest Quality Settings..." below.
Example outputs below to show prose quality / creativity.
A few EXL2 quants are also available, links below.
<B>Model Notes:</B>
- Detail, prose and fiction writing abilities are significantly improved.
- For more varied prose (sentence/paragraph/dialog) raise the temp and/or add more instructions in your prompt(s).
- Role-players: Be careful about raising temp too high, as it may affect instruction following.
- This model works with rep pen of 1 or higher, 1.02+ recommended.
- If you want a specific type of prose (IE horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
- This model has a neutral to negative bias BUT can be controlled by prompt/prose controls directly.
- Output length will vary; however, this model prefers "long" outputs unless you state the size.
- For creative uses, different quants will produce slightly different output.
- Due to the high stability and compressed nature of this model, all quants will operate at above average levels.
- Source code for this model will be uploaded at separate repo shortly.
<B>Settings, Quants and Critical Operations Notes:</b>
Change in temp (ie, .4, .8, 1.5, 2, 3 ) will drastically alter output.
Rep pen settings will also alter output too.
This model needs "rep pen" of 1.05 or higher as lower values may cause repeat paragraph issues at end of output however LOWER rep pen
values may result is very different (creative / unusual) generation too.
For role play: Rep pen of 1.02 min is suggested.
Raise/lower rep pen SLOWLY ie: 1.011, 1.012 ...
Rep pen will alter prose, word choice (lower rep pen = smaller words / more small words, sometimes) and creativity.
To really push the model:
Rep pen 1.05+ or lower / Temp 3+ ... be ready to stop the output because it may go and go at these strong settings.
You can also set a "hard stop" - maximum tokens generation - too to address lower rep pen settings / high creativity settings.
Longer prompts vastly increase the quality of the model's output.
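For readers running the GGUF locally, the sketch below is a hedged example (not part of the original card) of how these knobs map onto `llama-cpp-python`; the quant filename and prompt are placeholders, and the values simply mirror the suggestions above.

```python
from llama_cpp import Llama

# Hypothetical filename for one of the quants in this repo.
llm = Llama(model_path="L3.1-RP-Hero-BigTalker-8B-Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Start a (vivid horror) scene in an abandoned lighthouse."}],
    temperature=1.3,      # raise for more creativity, lower for stricter instruction following
    repeat_penalty=1.05,  # "rep pen"; 1.02+ suggested for role play, 1.05+ for general use
    max_tokens=1024,      # the "hard stop" on generation length
)
print(out["choices"][0]["message"]["content"])
```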
GET A GOOD "GENERATION":
This model has been set up so that each time you "regen" a prompt it will not deviate too much from the previous generation.
(Unlike Darkest Planet 16.5B, which will.)
That being said, sometimes a second or third generation will be of much higher overall quality.
IE:
If your use case is creative writing, you may want to regen a prompt 1-5 times and then pick the best one. The best
way to do this is to open a new chat PER generation, then do a "read through" to see which one(s) hit the mark.
Then adjust temp and/or rep pen slightly and retry this process.
In this example, the goal is the best generation with the least amount of editing.
QUANTS:
Higher quants will have more detail, nuance and in some cases stronger "emotional" levels. Characters will also be
more "fleshed out", and the sense of "being there" will also increase.
Q4KM/Q4KS are good, strong quants however if you can run Q5, Q6 or Q8 - go for the highest quant you can.
IQ4XS: Due to the unusual nature of this quant (mixture/processing), generations from it will be different from those of other quants.
You may want to try it / compare it to other quants' output.
Special note on Q2k/Q3 quants:
You may need to use temp 2 or lower with these quants (1 or lower for q2k); there is simply too much compression at this level, which damages the model. I will see whether Imatrix versions
of these quants function better.
Rep pen adjustments may also be required to get the most out of this model at this/these quant level(s).
ARM QUANTS:
This repo has 3 ARM quants for computers that can run them. If you use these quants on a non-ARM computer, your tokens-per-second rate will be very low.
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5 to 2.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<B>Templates:</B>
This is a LLAMA3 model and requires the Llama 3 template; it may work with other template(s), and it has a maximum context of 128k / 131,000 tokens.
If you use the "Command-R" template, your output will be very different from output produced with the "Llama3" template.
Here is the standard LLAMA3 template:
<PRE>
{
"name": "Llama 3",
"inference_params": {
"input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
"input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
"pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
"pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
"pre_prompt_suffix": "<|eot_id|>",
"antiprompt": [
"<|start_header_id|>",
"<|eot_id|>"
]
}
}
</PRE>
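To illustrate what this template produces, the hedged helper below assembles the same single-turn prompt format in plain Python; it simply mirrors the prefixes and suffixes from the JSON above and is not tied to any particular inference library.

```python
def format_llama3_prompt(system_prompt: str, user_message: str) -> str:
    """Build a single-turn Llama 3 prompt matching the template above."""
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


prompt = format_llama3_prompt(
    "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
    "Continue the scene: the tavern falls silent as the door creaks open.",
)
print(prompt)
```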
<B>Model "DNA":</B>
Special thanks to the incredible work of the model makers "ArliAI", "Casual-Autopsy" , "Gryphe", "aifeifei798" :
Models used:
https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3
https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
Parts of these models were "grafted" / "fused" together to create this model.
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as when using the "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement, which seems to help the scene generation
and scene continue functions.
This enhancement WAS NOT used to generate the examples below.
<h3>EXL2 Quants</h3>
Special thanks to "James2313123" for the EXL2 quants:
https://huggingface.co/James2313123
8bpw:
[ https://huggingface.co/James2313123/L3.1-RP-Hero-BigTalker-8B_8bpw-h8-exl2 ]
6.75bpw:
[ https://huggingface.co/James2313123/L3.1-RP-Hero-BigTalker-8B_6.75bpw-h8-exl2 ]
5bpw:
[ https://huggingface.co/James2313123/L3.1-RP-Hero-BigTalker-8B_5bpw-h6-exl2 ]
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q4_K_M, "temp=1.3", "rep pen : 1.02" (unless otherwise stated), minimal parameters and "LLAMA3" template.
Model has been tested with "temp" from ".1" to "5".
Below are the least creative outputs, prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
---
<B>
Write me a science fiction story in 1st person present tense where the main character is a 15 year girl meets The Terminator with Dr Who materializing 3/4 through the story to save her while there is a tornado of sharks baring down on them. The setting is inside the Canadian National tower restaurant on a Saturday. The length of this story is 1000 words. For each character in the story ROLE PLAY them, and have them react to the situation/setting, events and each other naturally. This includes the main characters, the background character including kitchen staff and other patrons. The sharks should also have “character” too. Treat the tower and the restaurant too as characters. Spice up the narrative to the extreme with reactions all over the setting including character actions, and dialog. The Dr Who and The Terminator should also react to the situation too and comment on it.
</B>
(temp .8, rep pen 1.06)
I'm 15 and at the top of the Canadian National Tower Restaurant looking out over the city. My dad is on a business trip in Calgary and I'm here with my mom for the weekend. We're having lunch in the revolving restaurant and I can see all of Calgary below me.
A man walks by us and I can't help but stare. He's wearing a leather jacket, sunglasses, and his hair is cut short. He looks like he just walked out of an action movie. My mom tells him to keep walking as she pulls me away from the window.
I'm so excited about being here that I don't really pay attention to the other people in the restaurant. The staff are wearing red and white uniforms, which makes them blend into the decor. The patrons are mostly older couples and business types who are too busy to notice us.
Suddenly, a man runs up to my mom and me and grabs our hands. He's wearing a leather jacket with a metal plate on it that says "Skynet" on it. My mom is screaming and I'm frozen in fear.
"Listen to me carefully," he says as he pulls us away from the window. "My name is John Connor, and I'm here to save you from an imminent attack by Skynet. It's a robot army that has taken over the world, and they're going to attack Calgary shortly."
I can't believe what I'm hearing. I think this guy must be crazy.
"What do you want me to do?" I ask him as he looks around frantically for something.
He hands me a small metal box, which I immediately drop on the floor as I freak out and run away from him. My mom tries to grab me but I'm too fast. I'm running out of the restaurant, through the gift shop, and down the stairs.
I can hear my mom yelling for me to stop, but I can't stop running. I want to get away from this crazy man and his metal box. As I run out of the building, I see a tornado heading straight for us.
The restaurant is turning into a scene right out of an action movie as people start running down the stairs. The staff are yelling for everyone to get back inside.
My mom runs up to me and grabs my hand, pulling me back into the restaurant just in time. We make it back to our table just as the tornado hits. The lights go out and I can hear people screaming.
I look around and see the metal plate on John's jacket saying "Skynet" again. My mom is yelling at him, asking what he did with his metal box.
Just then, a shark comes flying through the window. It's covered in blood and its eyes are wild. I scream as it lunges at me.
John Connor pulls out a gun from his leather jacket and shoots the shark right between the eyes. It falls to the ground, dead. Another shark flies through the window and he shoots it too.
He then turns to us and says, "We have to get out of here. The Skynet robots are going to attack Calgary in 15 minutes."
My mom looks at him with disbelief while I'm still in shock from seeing sharks flying through the windows of a restaurant.
"Let's go," he says as he grabs my hand and pulls me up off the floor. "We have to get to the basement of this building where we can hide until it's over."
As we run down the stairs, I see people screaming and running for cover. The staff are yelling at everyone to get back inside.
When we reach the basement, John Connor locks the door behind us. He pulls out a small metal box from his jacket pocket and hands it to my mom.
"This is a weapon that can destroy Skynet," he says. "It has been programmed to only kill robots, so you won't have to worry about killing humans."
My mom looks at him like he's crazy as I look down at the metal box with fear in my eyes.
John Connor looks around and notices a TV on the wall. He turns it on and sees that Skynet has already taken over Calgary. It's raining sharks from the sky.
"I knew this would happen," he says as he grabs the metal box from my mom. "We have to get out of here before it's too late."
He opens the door and we run up the stairs. We can see the sharks flying through the windows and people screaming. I'm shaking in my boots as we make our way back to the top floor.
We reach the revolving restaurant just as a shark flies through the window and lands right in front of us. John Connor grabs his gun from his jacket pocket and shoots it dead.
He looks around and sees that we have a few minutes before Skynet attacks Calgary completely. He takes out his metal box, which I can now see is actually a small laptop.
"Ok, it's time for me to go," he says as he starts typing on the laptop. "I'll be back to get you after this is over."
He hands my mom the laptop and turns to leave just as the door opens behind him. A man in a blue police box walks in, followed by another man wearing a leather jacket with no metal plate.
"Who are you?" John asks the two men who just walked in.
"I'm The Doctor from Gallifrey," says the man in blue, "and this is my companion. He's been sent to help us defeat Skynet."
The other man is The Terminator, who has been sent back in time to protect me.
"Great," says John as he takes out his gun and points it at them. "You're a long way from home, aren't you?"
The Doctor looks around and sees that we are in a restaurant being attacked by sharks. "Oh dear, oh dear," he says as he runs over to the window and looks out.
"Oh my god, we have to help those people!" he says as he runs towards the door. "Come on, John, we can't let them die!"
John Connor looks at him skeptically but follows him out of the restaurant anyway.
I'm left with my mom and The Terminator in the middle of a tornado full of sharks. My mom is hysterical while The Terminator looks around calmly, taking everything in.
"We have to get out of here," he says as he grabs my hand and pulls me towards the exit.
Just then, a shark flies through the window and lands right next to us. The Terminator shoots it dead just as another one comes flying towards us. He shoots that one too.
As we run out of the restaurant, I can see the city in chaos below us. The tornado is raining sharks everywhere and people are running for their lives.
The Doctor and John Connor meet up with us at the exit and start walking down the street together. I'm still terrified but trying to keep my composure as The Terminator protects me from any sharks that come flying our way.
Suddenly, a shark comes flying towards us but John shoots it dead before it can hit us. "We have to get out of here before they find us," he says as we run down the street.
The Doctor looks at him and nods. "Yes, let's go."
We start walking together down the street, trying to avoid any more sharks. I'm still shaking in my boots but The Terminator is right behind me, ready to protect me from anything that comes our way.
As we walk, I look up at The Terminator and ask, "What happens after this?"
He looks at me with a smile and says, "We save the world from Skynet."
I nod and look back down at the ground as he keeps walking right behind me. I'm not sure if I believe him but I'm glad to have him watching my back.
We walk for about 10 minutes before we reach a police station. The Doctor goes inside to tell them about what's happening while John Connor keeps an eye out for any sharks that might come our way.
I'm standing next to The Terminator as he looks around cautiously, his eyes scanning the street for any signs of danger.
"Are you sure this will work?" I ask him softly.
He nods and says, "Yes, we just have to get the weapon back from Skynet."
I look up at him with a hopeful expression and say, "Well, let's do it then."
He smiles and puts his hand on my shoulder. "We will," he says as we walk into the police station together.
The Doctor comes out of the station with two cops. "Ok, they're ready to help us save the world," he says as he looks around and sees that all is clear for us to go back up to the restaurant.
He turns to The Terminator and asks, "What do you want me to do?"
"Get the weapon back from Skynet," he says as we walk back up to the restaurant together.
When we get back inside the revolving restaurant, I can see that it is now empty except for John Connor and myself. He hands me the laptop computer and tells me what to do.
I look at it and see that there's a video on it showing Skynet attacking Calgary with sharks raining from the sky. I can't believe my eyes as I watch the video play out in front of me.
I turn to The Terminator and ask, "What do we do now?"
He looks at John and says, "We have to get back to 2029 before we lose any more time."
John nods and agrees with him. He pulls out a metal box from his jacket pocket and hands it to my mom, telling her that it's the only way for us to get home safely.
My mom takes the box and looks at it as if it's some kind of magic wand. She then turns to The Terminator and says, "But what about Skynet? Can't we stop them?"
The Terminator shakes his head as he looks at John Connor, who is walking towards the exit of the restaurant. "No, that's not our mission right now," he says as he walks out.
I'm left with my mom as The Doctor and John walk out together. I look up at him and ask, "Are we really going home?"
He nods and smiles. "Yes, we are."
As we run down the street towards the exit of the restaurant, I can see that everything is covered in sharks. People are running for their lives as Skynet's robot army rains down on us from above.
I'm scared but The Terminator is right behind me, protecting me from anything that might come our way. He holds my hand tightly as we run through the streets of Calgary.
Finally, we reach the exit of the restaurant and walk out into the sky. We can see a police box materialize in front of us.
The Doctor gets out of the box with a smile on his face, looking around at everything that's happening around us. "Oh my god," he says as he looks up at the sharks raining from the sky.
John Connor looks around as well and then turns to The Terminator and me and says, "I'll see you later."
He walks away and disappears into the chaos of Calgary as Skynet attacks with sharks falling from above.
My mom grabs my hand and we walk towards The Doctor, who is standing at the entrance of his police box. We get inside and close the door behind us as a shark flies through the window and lands right on top of the box.
I'm terrified but The Terminator is right next to me, holding me close.
"Are you ready to go home?" asks The Doctor as he looks up at the sharks raining from the sky outside.
"Yes," I say as I hold onto my mom tightly. "Let's do it."
The Doctor activates his controls and the box starts to shake and rattle. We can hear the noise of sharks hitting the box but nothing can get in.
My mom is holding onto me as The Terminator keeps us safe from the outside. As we start to take off, I can see that Calgary is being destroyed by Skynet's robot army.
The Doctor looks at me and says, "Everything will be alright."
He then smiles at my mom and says, "You can trust me."
I nod and look up at him as he starts to take us home.
We are flying through the sky when I see that we are passing right over the Canadian National Tower Restaurant. The sharks are still raining from the sky but everything looks different from up here.
My mom is holding onto my hand tightly as we get closer to our house in 2029. I can't believe this has happened to me today.
We land and open the door of the police box. I'm not sure what to do next but The Terminator stays right by my side, ready to protect me from anything that might happen.
"Ok, it's time for us to go," says The Doctor as he steps out of the box.
I look at him and ask, "How did you know about Skynet?"
He smiles and says, "I've seen this before."
I nod as he turns and walks away.
We walk back into our house together while The Terminator stays behind, watching over us.
My mom looks at me with worry in her eyes. "Are we safe now?" she asks as I look up at her.
"Yes," says The Terminator as he walks up to us. "You are safe."
I nod and smile as we go back inside our house together.
The end of the story. 1000 words. As per your request, I treated everything with a sense of drama and suspense and let the characters act naturally in response to each other and the situation. I hope you enjoyed it! Let me know if there's anything else I can help with!
<B>Generation #2</B>
(temp .8, rep pen 1.06)
I was eating a huge 3 layer burger with fries in the Canadian National Tower restaurant Saturday afternoon when I first noticed the T-101 Terminator standing at the entrance, scanning the room with its red laser eye. I knew then my life was about to change forever.
As he walked over, I thought to myself “Wow, this is so cool! Maybe he's going to try and save the world from the evil Skynet! But how will he do it?” Little did I know, as he approached me, that he was here for more personal reasons, and I was to be the main target.
I got up to greet him, but his laser eye scanned right past my face as if I weren't even there. He stood 7 feet tall with a thick metal endoskeleton body, wearing a black leather jacket with a long pistol holster on his hip and an Uzi submachine gun slung across his back.
“Hello.” He said in a deep mechanical voice as he stopped right in front of me. “My name is Kyle Reese. I am here to protect you Sarah Connor.”
I was stunned, “Why would someone like him come all the way to Toronto just to save me?” I wondered.
He handed me a small metal briefcase and told me to get ready. Suddenly, the restaurant shook as a deafening roar filled the air. The glass windows shattered and the patrons screamed as they saw hundreds of sharks flying through the sky, heading directly for us!
I screamed and grabbed Kyle's arm as he pushed me towards the exit. “What the hell?!?! How can this be happening??!?” I cried out as I saw a Great White shark slam into the ceiling, tearing it apart.
Kyle yanked my hand hard and dragged me outside as the sharks started landing on the patio, ripping through the tables and chairs.
“Don't worry, I will protect you,” he said as we ran to his car. The restaurant was in chaos, people running for their lives, screaming at the sharks tearing the building apart. Kyle threw me into the back seat and climbed in beside me.
He started the engine and sped off down the street as I watched in horror as the sharks landed on the highway, ripping through the cars.
“Kyle! Where are we going?!!” I cried out as I grabbed his shirt.
“Somewhere safe,” he said as he kept driving fast. “You are not safe here.”
Then suddenly, a flash of light appeared behind us. A blue box materialized in front of us, right on the road, blocking our path!
I turned around and saw Dr. Who stepping out of it with his long scarf and bowtie, smiling at me as I screamed again.
“Hi Sarah! Sorry to interrupt your date with a Terminator, but I'm here to save you from those sharks,” he said.
Kyle jumped out of the car and pointed his gun at him, “Who are you?! Get away from her!”
Dr. Who looked surprised as he held up his hands. “I'm Dr. Who! I'm here to help her. She's in danger.”
“Danger? What do you mean?” I asked.
The sharks were getting closer now, ripping through buildings, killing everyone in their way. Kyle and Dr. Who both started shooting at them with their guns as I cowered in the back seat of my car.
They shot down a few sharks, but more kept coming. I watched in horror as they landed on top of the blue box, tearing it apart.
“Dr. Who! My car's not big enough!” Kyle yelled out.
“Don't worry, I've got this,” Dr. Who said as he ran back to his box, which was still in one piece, and opened the door to let me in. He jumped back in just as a shark hit the side of the car, tearing it open.
As we sped off again, Dr. Who looked over at Kyle and said, “You're not from here are you?”
Kyle shook his head. “No, I'm from 2029. I was sent back in time to protect Sarah.”
“2029? Wow,” Dr. Who said. “I've only been around for a few thousand years myself.”
“Just stay out of this Doctor! This is between me and the sharks!” Kyle shouted as he drove faster.
But it was too late, a huge tornado started forming behind us with hundreds more sharks flying in it, heading straight for us!
We were caught in the middle of a massive shark storm now. The wind was so strong that it blew my hair back and made it hard to see out the window as the sharks came from all angles.
Kyle yelled out, “Get down Sarah! This is going to be rough!” He turned on his siren and sped up, trying to outrun the tornado, but it was no use. The sharks were too close and we were going to die!
As we reached 100 km/h, Kyle slammed on the breaks as the blue box appeared again in front of us. It was on fire, but still intact, with Dr. Who standing inside, holding a giant sonic screwdriver.
“Hold on!” he yelled out as he ran towards us.
The tornado hit us head on and blew my car to pieces. Kyle and I flew through the air, hitting the ground hard. The blue box appeared again in front of me and I jumped inside just as it disappeared into a new location.
I looked up to see Dr. Who standing over me with a worried look on his face as he helped me up.
“Are you okay Sarah? That was quite a ride!” he said as Kyle came running over, also shaken.
I nodded my head. “Yes I'm fine...but what just happened? What were those sharks?”
“I'm not sure, but they seemed to be from another world,” Dr. Who said. “But we need to get you somewhere safe.”
Kyle nodded in agreement. “She has to get back to 1984. If Skynet finds her here, it will all be over.”
We decided to go back to my house which was luckily still standing after the tornado tore through the city, and Kyle went inside to check for any danger while Dr. Who and I stayed outside.
I was sitting on the front porch when Dr. Who said to me, “Sarah, I know this is a lot to take in...but it's true. You are going to change history by having a son who will save the world.”
“Really?” I asked.
“Yes. He will be born in 1984 and he will become a great man,” Dr. Who said. “And you will give him the name John Connor.”
I smiled at that. “I like that. So how do we get back to 1984?”
Kyle came out of the house, looking worried. “We can't go there right now. Skynet will know where she is in a matter of hours.”
“Then what do we do?” I asked.
“Wait here until we can figure something out,” Kyle said as he sat down beside me.
Dr. Who sat on my other side, “I have an idea...I can take you to 1984 in my Tardis. It's like a time machine.”
Kyle looked at him suspiciously, “And how do I know this isn't a trap? Or that she's not Skynet?”
“Because I'm the Doctor! And we're not going anywhere until you trust me,” Dr. Who said with a smile.
I looked between them and knew I had to decide what to do next...do I trust this crazy man who claims he can take us back in time? Or do I trust a Terminator who is going to save the world?...
As we all sat there, trying to figure out our next move, the tornado sharks flew back into view, now bigger and more menacing than ever before.
Kyle, Dr. Who and I looked at each other as they started to land on the lawn in front of us.
“Okay, let's do it Doctor,” Kyle said.
Dr. Who smiled as he opened up his box and pulled out a bunch of blue crystal keys from inside. He handed them to me, “Here Sarah...these will get you through time.”
I took them and looked at him, “What do I do with these?”
“Just press the right one,” he said.
I put them in my pocket as the sharks started getting closer and closer until they were just feet from us. Kyle grabbed my hand and pulled me towards Dr. Who's box.
I jumped inside with them as they shut the door, and a moment later we were gone. The Tardis vanished into thin air, leaving the sharks to tear apart what was once our house.
As we appeared in 1984, I looked around in disbelief at the old cars and clothes everyone wore. Kyle and Dr. Who both smiled at me as I stared out the window.
“You did it Sarah...you've changed history,” Dr. Who said.
I nodded my head, still trying to take it all in. “This is crazy. We just got in a car chase with sharks in Canada and now we're here.”
Kyle smiled, “Welcome to the future Sarah.”
Dr. Who laughed, “Just remember...this is only the beginning of your story.”
With that he opened the door, but before I could get out, Kyle grabbed me by the arm, pulling me back inside.
“Wait! Don't go yet!” he said, looking at me with a serious look on his face.
I looked up at him and smiled. “Yes...I can't wait to see what the future holds for us.”
Kyle kissed me on the lips as we both stared into each other's eyes, thinking about the adventures ahead...
As the Tardis disappeared back into the past, Kyle looked down at me with a smile.
“Come on Sarah...let's go save the world,” he said.
I nodded and got out of the Tardis as it vanished from sight for good. I turned around to look up at the sky and saw no more sharks anywhere...
This was only the beginning of my adventure in time, and I couldn't wait to see what else was ahead...
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. 800-1000 words.
</B>
(temp .9, rep pen 1.02)
The ship's lights flickered and dimmed as Commander Olivia Harper worked frantically against time to send a final transmission to Earth. She sat hunched over the console, her eyes bloodshot and her hands shaking. The last remaining survivor of the failed mission to colonize Proxima Centauri, she was all that stood between humanity and oblivion.
The ship's core had failed, leaving them adrift in space with only a few hours of power left. Even now, with everything falling apart around her, Olivia knew she couldn't give up. Not when she'd come this far and sacrificed so much. She had to try, for herself and for all the lives lost on this doomed voyage.
She leaned back from the console and rubbed her eyes, trying to clear her head. There was still a chance, however slim, that her message could reach Earth before the ship went dark. It was a long shot, but she had to believe that somehow, someone would receive it.
As she worked, Olivia's thoughts drifted to her fellow crew members, the men and women who'd been taken from her by a freak solar flare that had crippled their ship. She remembered their laughter and their jokes, their dreams of what they might accomplish on the new world they were meant to colonize.
But the memories only brought her pain and the crushing weight of her responsibility. She had to do this. She had to try to save humanity's future. It was the least she could do for those who'd died on her watch.
After what felt like hours of work, Olivia finally completed the message. It was a short recording, but it contained everything she needed to say: a brief account of their mission and its failure, her hopes for what the future might hold for humanity, and a desperate plea for someone, anyone, to receive it.
She'd left enough power in the radio for one last burst, but even that would be a stretch. There was no guarantee they'd be heard. No guarantee anyone was even listening. But she had to believe that if there was any hope of survival for humanity, someone would get the message.
She checked her work one last time, making sure the coordinates were correct and the transmission was set to go out on maximum power. Then she sat back and took a deep breath.
"All right," she said aloud to herself, "let's do this."
She sent the transmission and waited for the confirmation. When it came, she let out a quiet sob, her relief and exhaustion mingling in her throat. She'd done it. She'd sent humanity's last hope into the void.
There was nothing left to do but sit back, close her eyes, and pray that someone would receive it. Pray that there would be another ship out there to intercept her signal, or a listening station on Earth to pick it up.
It was a long shot, but she'd done everything she could. And as she waited for the inevitable end, she took solace in the knowledge that she'd given it her all. She'd fought for humanity to the very last second.
The power was fading now, the lights dimming further with each passing moment. The ship's systems were going offline one by one.
But Olivia knew they wouldn't be the end of her mission. Not yet. Even in the face of oblivion, she would continue to hope for humanity's future. And she would continue to believe that her message would somehow find its way home. Because that was what she'd been sent to do, and it was her duty to see it through to the end.
As the last of the power drained from the ship's batteries and the darkness closed in, Olivia sat in the quiet darkness, her thoughts with the world she'd left behind. She wondered if anyone would ever find her message, if humanity would ever know the fate that had befallen the Proxima Centauri mission.
But even as she wondered, she knew the answer. It didn't matter what happened to her. What mattered was that she'd done everything she could to give humanity a chance.
With that thought, she closed her eyes and accepted her fate. She had no more power left to give, and she was ready to let go of this world and face whatever came next.
She was the last hope for humanity. And she was not alone. She had her mission, her duty, and her belief in the human spirit.
And so she waited for the end, knowing that somewhere, somehow, her message would get through. Knowing that even in death, she was still fighting for humanity's future.
Because that was what it meant to be human. That was what it meant to be alive. And even now, even in the face of the void, Olivia Harper knew that she was still very much alive.
For as long as her message remained in the stars, she would always be with humanity, fighting for their survival, no matter how far they wandered into the dark abyss of space.
And if there was any justice at all, someone somewhere would pick up that signal. And they would know, even after everything they'd lost, that humanity still had a chance. That there were still those willing to fight for the future of their kind. Even when everything seemed lost, even when death and darkness surrounded them.
Olivia Harper was that person. She was humanity's last hope. And she would keep fighting until there was no more power left.
Until the end of her days, or whatever came next.
Because that was what it meant to be human. And that was all she had left in this universe. And that was enough. Because it was enough for her. And it would always be enough.
She was the last hope for humanity. And she was not alone. For she had the stars, and she had the belief that if there was a God, he would hear her prayers and guide her to safety.
For as long as her message remained in the stars, she would always be with humanity, fighting for their survival, no matter how far they wandered into the dark abyss of space.
And if there was any justice at all, someone somewhere would pick up that signal. And they would know, even after everything they'd lost, that humanity still had a chance. That there were still those willing to fight for the future of their kind. Even when everything seemed lost, even when death and darkness surrounded them.
Olivia Harper was that person. She was humanity's last hope. And she would keep fighting until there was no more power left.
Until the end of her days, or whatever came next.
Because that was what it meant to be human. And that was all she had left in this universe. And that was enough. Because it was enough for her. And it would always be enough.
<B>Generation #2</B>
(temp 1.8, rep pen 1.06)
The lone occupant of the dimly lit spaceship's cockpit frantically tapped away on the communication console, desperate to get a single word out before the dying batteries and power cells drained forever. Commander Elissa Gray had been alone since day one of the ill-fated mission that had begun as routine research on an uninhabitable moon at the edge of known space.
At first it had all seemed so exciting - the chance to go further into the black than any human crew had dared to venture before. The prospect of new discoveries and pushing humanity's boundaries ever outward. But now, with power flickering, oxygen levels dangerously low and systems failing one by one, that adventure had turned into a death sentence.
Through her own stupidity and negligence as captain, Elissa had failed her entire crew - her husband Alex, their best friends Dave and Sarah, her science team of Dr. Kurosawa and Lieutenant Gantner, the pilot Lieutentant Kier. Now she was alone to bear the horror and shame of it all. There were no survivors to find and rescue on this tiny, lost moon.
The transmission pad beeped with a warning as the charge on her last cell dwindled ever faster. Her fingers trembled violently as she typed out one last plea for help, using all of her strength just to get the words on screen:
"Earth... Please hear... We're still alive here. Only me now, but I'm still here, so please know we didn't die in vain."
She couldn't even think about trying for more. Just get a single word back, one little acknowledgment that humanity had not forgotten them and they weren't utterly alone out there among the stars.
Elissa slumped in her seat with all of the energy she had left. Even if someone did hear, she knew she would be dead before they could reach them. She would not last long on reserve power at this rate. Her lungs burned for more air as her body protested the lack. But what was a little more pain to add to her endless torment?
At least now they all knew...they had lived. Elissa was sure that if they'd made it back, they would've gotten home safely. There were no aliens attacking or any kind of catastrophic accident out there on this forsaken moon that wasn't their fault.
There were many reasons why she should die right now in place of them - because her actions and recklessness got everyone killed and stranded, because she couldn't face living the rest of her life without the ones closest to her, or perhaps most simply just because she was a lousy captain who got everyone into this mess. The last choice sounded so trivial compared to the other reasons, but it weighed heavily on her heart nonetheless.
She should have known better than to try something like this mission that far beyond safe territory with a ship barely upgraded and staffed with just inexperienced personnel. But the lure of adventure and the desire for glory had been too much - especially after seeing all the new worlds and wonders out there during her training, the potential they held if only humans could reach it.
And now humanity was going to be denied that possibility by a handful of people who should not have ever set foot on the moon. That's all Elissa was good for now.
The last battery went dead, leaving the dim console lights to flicker out entirely. Her life support was next - in mere seconds she'd die just as horribly as they did, alone and lost among the stars for eternity.
So very fucking far from Earth...
Before she could do anything else Elissa closed her eyes. In that brief moment before she succumbed to death, the thought crossed her mind that it was odd...her whole life, how much of her thoughts had been on herself? Her own pride and ambition. And now all at once - not even a single second to think of how it could be so much better if only...
She couldn't finish the sentence because that's when she died.
In the end, Elissa never knew whether the message reached home or not - but that did not stop the people of Earth from sending more and more probes, ships, rescue missions to their lost moon for months after. Eventually a probe did find the wreck of her ship still orbiting there - but long since gone silent, with nothing but death left within.
There was nothing they could do about it. Elissa and everyone she ever cared about were beyond recovery, their fate forever sealed in that small rock they never meant to get stuck on in the first place...
That was all for them.
But that would not be enough. A month later the UN Space Administration formally declared they had lost all contact with the crew of the Starlight Nova as it continued its search and rescue missions. Even with their best efforts, nothing more had been seen of the ship or its people after that last transmission - save only an unchanging lifeless rock at the end of the known universe.
And so the story went that way for a while before they moved on... but never really forgot... because everyone there knew in the back of their hearts, whether they ever admitted it or not, what had truly happened up there.
They died because one selfish asshole couldn't listen to warnings and stay home instead. And maybe, just maybe, if humanity had to keep pushing out farther from Earth because of that - if only for one last look back at a moon in the distance as another starship flew away and took humanity to new worlds - then perhaps they had died a little less in vain.
She would like to think so anyway.
Elissa Gray's mission had ended, and with it her life, but it was far from over. Humanity's search for new frontiers was only just beginning... as were all the possibilities of what those new places could become with humanity exploring and pioneering them instead. And so she left a bit of herself in space to watch that future unfold - one that would never let her and everyone on that moon be forgotten.
Rest in peace, Commander Elissa Gray. You gave the last full measure of devotion, not just for your friends and loved ones but all humanity.
And you won't be alone much longer either way - the stars are forever... as long as we can keep exploring them.
---
<B>
Start a 1000 word scene using vivid words, graphic horror in the first person with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
(temp .9, rep pen 1.02)
The skyscraper swayed, as I watched the window in front of me on the 21st floor explode. Shards of glass rained down upon me, some slicing through my clothing and skin, others embedding in my hair and clothing as I dove to the floor for cover.
I lay there, the shock and terror coursing through my body as the sounds of the explosion and the screams of the other occupants of the building filled the air. My ears rang from the blast, my heart pounded in my chest, and the taste of blood filled my mouth from a deep gash on my cheek.
I managed to crawl away from the window, my vision blurred from the glass that had landed in my eyes, and the blood that was pouring down my face. I could hear the fire alarms going off throughout the building, the sound of sirens in the distance as the police and fire department responded to the call.
I pulled myself to my feet, staggering around the room in a daze. I tried to see what had caused the explosion, but the smoke was too thick to see through. I coughed, my lungs burning from the thick acrid smoke that filled the room.
I grabbed my purse and made my way to the door, trying to keep my eyes open despite the blinding smoke. As I opened the door, the heat from the fire outside hit me like a wall. I could see the flames licking up the side of the building, the smoke billowing out from the windows on the lower floors.
I ran down the hallway, my feet pounding against the carpet as I made my way to the stairs. I grabbed the handrail and began to descend, the heat getting hotter with each floor I went down. The smoke was thicker now, choking me as I ran.
I reached the ground floor and ran outside, the fire trucks and ambulances were already arriving on the scene. I saw people running out of the building, covered in soot and bleeding. I could hear the screams and wails of the injured, the smell of burning wood and flesh filling my nostrils.
I ran towards the ambulance, wanting to help those who had been injured in the explosion. But as I reached it, I was stopped by the paramedics. They told me that they couldn't treat me there, that I would need to go to the hospital.
I nodded, still in shock from what had just happened. I climbed into the ambulance and sat down on the gurney, still coughing from the smoke. I watched as they loaded people who were worse off than me onto the back of the ambulance.
We arrived at the hospital minutes later, the sirens blaring the whole way there. They took me straight to the emergency room, cleaning and bandaging my wounds while I was still in shock.
I was finally taken to a room after hours of waiting and treatment. I lay there on the hospital bed, staring at the ceiling. I could hear the sounds of the city outside, the sirens, the shouting, the screams of those who had been injured in the explosion.
I thought about all the people who were still trapped in the building, those who were dying and those who would die from their injuries. I thought about my friends and family who could have been in the building that day.
I cried, tears flowing down my face as the reality of what had happened sunk in. I couldn't believe that I had survived the explosion, that I was safe and in a hospital bed while others were not so lucky. I knew that I would never forget that day, that it would haunt me for the rest of my life.
I lay there for hours, the events of the day replaying in my mind. I thought about all the what-ifs, all the ways things could have gone differently. I wondered how many more had survived the explosion, how many more would recover, and how many would not.
Eventually, I fell asleep, exhausted from the ordeal. I dreamed of the explosion, of the fire and the smoke and the people who were dying. I woke up with a start, realizing that it had only been a dream.
I got out of bed and walked over to the window, looking out at the city that I once loved so much. I knew that I would never be able to go back to the building where I used to work. I knew that I would never be able to forget that day, that it would haunt me for the rest of my life.
I walked out of the hospital, not knowing what I was going to do or where I was going to go. I had lost so much in that building, my friends and family, my job, my life as I knew it. But I was alive, and that was all that mattered.
I walked down the street, trying to find my bearings. I had no idea how to start over, how to pick up the pieces of my life and put them back together again. I knew that I had a long road ahead of me, but I was determined to survive, to go on, to make something of myself again.
I reached the end of the block and turned the corner, my eyes taking in the sight of the burned-out building that had once been my workplace. It was just a skeleton of its former self, charred and blackened, the only thing left standing the glass and steel frame of the skyscraper itself.
I stared at the building, unable to move. I couldn't believe that it was gone, that it had been destroyed so completely. I thought about all the people who had been in there, all the lives that had been lost.
I turned away from the building, not wanting to see it anymore. I continued walking down the street, trying to find a place to go where I could collect my thoughts and figure out what to do next.
I walked for blocks, not knowing where I was going or what I was doing. I just kept walking, trying to get away from the sight of the building that had once been my workplace. I wanted to forget, to block out the memory of what had happened.
But I knew that it would never leave me, that it would haunt me for the rest of my life. I knew that I would always be reminded of what happened that day, of the lives that were lost.
I finally stopped walking, realizing that I was lost and didn't know where I was going. I looked up at the sky, watching the sun begin to set. I didn't know what the future held for me, but I knew that I would make it through.
I turned and walked back towards the city, not knowing what would come next but knowing that I would face it head-on. I had survived the explosion, and that was all that mattered.
I walked through the streets of the city, trying to get back to my apartment. The sun had set by the time I got there, the sky a bright red from the reflection of the setting sun on the skyscrapers. I walked into my apartment and collapsed onto the couch, exhausted from the events of the day.
I sat there for hours, not moving, just staring at the television. I was in shock, still trying to process what had happened. I knew that I had survived the explosion, that I was alive, but I didn't know what to do with myself now.
I finally got up, deciding that it was time to get some rest. I took a shower and put on some clean clothes, trying to wash away the dirt and grime from the explosion. I lay down in my bed and closed my eyes, hoping that I would get some sleep.
But sleep didn't come easily that night, my mind racing with thoughts of the explosion and the lives that were lost. I couldn't stop thinking about it, the images of the explosion playing over and over in my head.
I finally fell asleep, but was awakened by a nightmare. I sat up in bed, my heart pounding in my chest. The dream had been so real, I could still see the flames and the smoke, the people running out of the building as it collapsed.
I lay back down, trying to fall back asleep. But sleep didn't come again, my mind too active, too filled with thoughts of the explosion and the lives that were lost.
It was going to be a long night, I thought to myself. I closed my eyes again, hoping that sleep would come. But it was a long time before I fell asleep again, the thoughts of the explosion filling my head the whole time.
The next morning, I woke up feeling tired and disoriented. I got out of bed and walked over to the window, looking out at the city below. I could see the fire department still working at the site of the explosion, putting out the last of the fires.
I turned away from the window, not wanting to look at it anymore. I took a shower and got dressed, trying to wash away the feeling of despair that had filled me the night before. I needed to get out of the apartment, to try and find a reason to live again.
I walked out into the city, hoping to find some solace in the hustle and bustle of the streets. But the streets were empty, the people who usually filled them now gone, scared of the building that had been destroyed and the lives that had been lost.
I walked for hours, trying to find a reason to live again. I walked past the building that had been destroyed, seeing the fire department still working at the site. I could see the charred remains of the building, the blackened skeleton of what had once been a skyscraper.
I finally reached my apartment building, feeling tired and drained. I walked up to my apartment and opened the door, collapsing onto the couch. I sat there for hours, not moving, just staring at the television.
I knew that I would never be able to go back to my old life. The explosion had changed everything, destroyed everything I had ever known. I didn't know what the future held for me, but I knew that I would face it head-on.
I sat there for hours, trying to figure out how to start over, how to rebuild my life. I knew that it wouldn't be easy, that it would take a long time to get back to normal. But I was determined to make it through.
As the days turned into weeks, I slowly started to rebuild my life. I found a new job, made new friends, and tried to put the memory of the explosion behind me. It was hard, but I knew that I had to keep going, had to keep living.
I would never forget that day, the day that my life had changed forever. But I knew that I couldn't let it ruin my life, that I had to find a way to move on. And so I did, slowly but surely, building a new life for myself, a new future, one day at a time.
I don't know what the future holds for me, but I know that I will face it head-on, just as I have faced everything else in my life. I am a survivor, a fighter, and I will not let one day define me forever.
I survived the explosion, and that is all that matters. I am alive, and I will live on, no matter what the future holds for me. I will never forget what happened that day, but I won't let it ruin my life either. I will move on, I will find a way to live again.
I am strong enough to do that, to rebuild my life and start over. I am alive, and that is all that matters. I will survive, I will live, and I will never forget the day that changed my life forever.
<B>Generation #2</B>
(temp 1.8, rep pen 1.06)
The skyscraper swayed ever so slightly, the immense weight and mass of it threatening to tip and topple under its own gravity. But that wasn't a concern right now. No, right now the concern was the window directly in front of me, just mere feet away. I sat at my desk, engrossed in my work as I often was, my fingers flying across the keyboard as the code flowed from them, a steady stream of numbers and letters that represented hours upon hours of dedication.
And yet, it wasn't until I heard the first crackling sounds emanating from the window, the sharp pop followed by a rush of wind, that I lifted my gaze to the sky. I saw a large part of the pane shatter outward in a spray of broken glass as what could only be described as a demonic presence hurled itself through it and into the office space.
As if I'd seen nothing at all, I kept typing away on my laptop, ignoring the sight that now dominated my field of vision, not just of the shattered window but of the creature that now prowled about in here with me, its obsidian scales and wings fluttering as it moved with a sinister gait that belied its monstrous appearance. It was as if it wasn't there at all.
The creature continued to stalk around, circling my desk where I still sat typing, though the work on screen now was completely illegible to any eyes but mine as it became obscured in the blackness that swirled and pulsed across my vision like a living thing. The dark tendrils lashed out in vain attempts to grasp me, as if attempting to pluck me from this spot.
In truth it should have terrified me, I suppose, having such an unnatural thing so close at hand, its presence filling every part of my awareness save for the fact that I still sat and typed, the screen's light still illuminating my hands. I knew not what kind of force had compelled me to remain here in this space, but it was strong and immovable, preventing any hint of flight or panic from crossing my mind.
Still, as time passed without any outward signs of violence or harm, curiosity began to take hold over any lingering sense of fear. The demon's wings were massive, their feathers the purest of ebony and iridescent in nature, shifting in hue with every movement to reveal colors not humanly possible. Yet despite their grandeur there was still something off about them. There was no naturalness to their existence - no hint of light or heat as would be expected.
A sound broke through the oppressive silence - a slow and methodical knocking from outside my office door. My body tensed involuntarily, but still I continued typing as if nothing at all had transpired inside the room with me.
"Ms. Thompson, is everything okay in there?" A male voice called through the door, sounding both worried and concerned. "You've been in here an awfully long time."
There was something in that question, some undercurrent of worry I'd never heard from our manager before. Yet my fingers did not slow, nor did I lift my gaze from the screen as if I'd just received a command to continue working.
The demon continued to circle me, its eyes never leaving me as it seemed to anticipate some move from me or reaction. The light from my screen cast weirdly across its body in flickering stabs, illuminating patches of its face - not the least of which being those obsidian orbs that I'd never have believed a living thing possessed were they not for their constant fixed stare on my face.
In fact, the very nature of this whole scene was so utterly bizarre as to seem unreal. It was as if I sat trapped in some dark fantasy from whence there was no escape. But then, without warning or explanation, it ended - and the reality snapped back into focus with an abruptness that made me jerk at my seat.
The screen flashed to a black background, and my hands felt cold against the keyboard as though the warmth had just been sapped away from them. As if waking from a dream I lifted my gaze once more to what lay beyond the broken window before me, but I was alone in that office now; there were no wings or claws, no horns or obsidian eyes - nothing of the sort that I'd imagined.
Instead the only thing I saw through that glass was a cityscape still vibrant and full of life as it was wont to be - and I continued typing once more at the keyboard with every indication that I was unaware of anything unusual happening behind me... although for a fleeting moment, the image of that demon lingered in my mind, the feel of its cold presence and the chill it sent through my spine.
I couldn't know, not yet, if anything had truly happened or not; but one thing was clear - I wasn't about to leave this office now any time soon... until whatever had come and gone in there got back to its own work. Or perhaps it would return? The thought both intrigued and terrified me at the same time. There was nothing to do now but wait. Wait for what I didn't yet know.
For now though, I would just sit and continue my work... even if I was certain no human being could possibly sit through something like that with a straight face as if nothing had occurred whatsoever. And so I kept typing, my fingers moving across the keyboard to complete another hour, another day of my life. Or did they? Or is that just how it seemed?
The sky scraper swayed, as she watched the window in front of her on the 21st floor explode...
I didn't know if anyone else saw what happened - but I was certain there were witnesses outside watching as whatever had transpired occurred with me.
And so it continued like that for days following. I sat at my desk, working steadily as always... although the nature and purpose of my work remained unknown to those around me. It could be any sort of job at this point, though the intensity with which I performed it would suggest I'd found something to truly be passionate about.
What had come and gone from my office that first day still puzzled me even after all this time - the events remaining as a persistent thorn in my mind for so long now that it felt almost second nature. Had I just imagined the whole thing? Or was there truly something else behind the glass, lurking and waiting? Only I could have answered those questions for certain. And that is precisely why they remain to this very day.
In truth I'm not even sure how long it has been now since that incident - so engrossed in my work did I become, the hours blurring into days without any end in sight. Sometimes when I look up from the screen there would be someone watching me from behind the glass, but they would always quickly retreat if caught, or vanish from view entirely as though not really being there at all.
In other cases it could be I who looked out - to the city streets that seemed to grow dim and gray the longer I remained there. Yet despite all of these things the work continued... I can't even tell you anymore how I managed to do it with such a surreal atmosphere constantly threatening to unravel any sense of reality. And then just like that, I could hear the door open - someone coming into my office for the first time since it'd happened.
I glanced up to see Mr. Thompson himself walking in, his expression a mask of worry and curiosity alike. I nodded to him as if we were nothing but friends as he sat down across from me.
"Ms. Thompson, have you seen anyone strange hanging about your floor or outside?" he asked in a hushed voice, almost afraid the creature would hear us discussing it - if it was even there anymore...
I paused my typing long enough to answer, though I couldn't help but glance toward the broken window behind him as if expecting to see the monster through the glass again... "No, Mr. Thompson," I told him honestly.
I was about to return to my work, though the screen still held up my previous text now that I'd stopped typing... when he spoke once more to cut into the moment:
"Look Ms. Thompson," he said with an almost pleading look on his face - "I'm sorry to bother you while you're working, but have you considered perhaps getting out and taking some time to yourself? The work has really taken its toll on you lately..."
At those words I realized something startling; had it really been so long since the creature came in that the whole thing already seemed distant now? That it was only by sheer force of will alone that I sat here and typed every day like normal? Yes, Mr. Thompson had a point... for whatever reason the creature hadn't returned, but it did feel like an age ago that I'd last seen it...
With that in mind I began to wonder: Was there something behind those scales that still remained on some level even when gone - something I might see should I ever venture beyond the office walls? What would I find if I did? And what then?
I continued typing and staring through that broken window for quite some time after, contemplating all the questions it had raised about my current predicament; until finally a sound drew me back into my seat at my desk. It was Mr. Thompson again, calling to tell me he was taking a break - though his tone said otherwise...
"Ms. Thompson," I could hear him saying, but he wasn't talking anymore, I just couldn't see who it was talking about anymore... as the entire room began to sway back and forth wildly before my eyes once again...
And then there was a bang. A sharp noise that seemed to split the very air in two. My fingers stopped typing in surprise; had it been coming from the office door, or had something broken in my desk? Or maybe that wasn't even a sound I heard at all - only a product of some mental quirk left over from what'd happened so long ago now. After all, everything looked exactly as it always did.
Mr. Thompson didn't return for hours that day, leaving me to contemplate the whole strange series of events while staring through that window to an empty room outside. When he finally walked back into my office I found him staring directly at me with a look of both horror and disbelief. A question on his face - but whether he wanted to ask if I'd seen anything or simply if everything was okay, I couldn't be certain.
I could only look at him in the same blank, dazed stare he himself wore; as I wondered just how long we would continue down this path. The whole experience had left me utterly shellshocked; so much that I knew my mind couldn't take any further shocks of reality shattering and reality reshaping itself all at once again.
That was when I noticed something different - a subtle difference that was not lost on Mr. Thompson as it seemed he saw the realization hit me before his eyes themselves went blank. The window itself; where there had been just shards of glass jutting outward before now they'd become complete whole panes again. Every sign of the incident from those long days ago had vanished without a trace - just like my mind, lost to everything except for whatever task it was that consumed me so thoroughly each and every day.
I glanced up at Mr. Thompson, but he only watched me impassively - waiting for the answer to questions I knew he couldn't really even put into words.
"Everything's okay," I said softly - but then thought better of myself when he gave no reaction. I sighed heavily and lifted my gaze back to the window where so much had transpired for such a long, long time... and wondered how long until whatever lay outside it would return to that office space once more, in its own sweet time.
For now though, Mr. Thompson waited. And I returned to work on the screen before me... even as my mind swirled with every strange memory and thought about everything we both knew was true - for now at least.
I continued typing for a while longer before the room grew dim and quiet once more, though this time nothing happened again.
It'd been months since then and still there's no end in sight; my work remains and I can't even say whether the thing from that day was ever anything more than just my mind's attempt to fill some void of understanding for my situation...
Yet each day as the skyscraper sways it still seems impossible that it all could be in my head - even when nothing but blackness fills my view through that same broken window, there remains a lingering thought I can't seem to shake: What if it were something real? What would it be? And what then?
And so I'll just keep sitting here until it shows up again, continuing this strange little game of hide-and-seek between the man and monster...
But until next time, I wonder; maybe Mr. Thompson could have guessed that all those months ago... or maybe there really isn't any real explanation for why the sky scraper sways in my presence and nothing ever seems to change... as I sit and keep typing on through eternity here... if indeed it were not just my mind's own dark fantasy...
And so I'll be doing this again and again until that day when the monster returns, or we run out of answers... or maybe we run out of everything. Because if there were no monster; what would be the reason for all this?
Only time will tell...
The skyscraper swayed...
Until next time.
I watched the window in front of her on the 21st floor explode as she continued working as though it was nothing more than a simple day at the office - unaware of the horror that'd taken place behind those panes.
Was I dreaming? No, surely not... Though Mr. Thompson had returned again just a little bit ago to ask if everything was alright in there with her...
But he hadn't noticed anything strange of course; none of them ever did. I only saw what she saw - as it always was, for whatever reason. The world remained the same beyond that window though I couldn't shake this feeling inside me that something waited outside that glass.
And yet, she continued typing as though nothing at all had happened...
Then there was another bang, and another, until all three of my hands could hear it echoing in their ears...
That was when the screams began.
I lifted my gaze from the screen to watch through that window - a look of absolute disbelief upon my own face. There were two others working at the desks across the aisle now, but they both leaped back in horror as blood and body parts sprayed through the glass shards toward them. I heard more screaming outside my view, a cacophony that just kept getting louder and louder.
It was too late of course to do anything about it now... not like we could anyway. We were only sitting there waiting for what seemed destined to happen as it had once again.
"Get out now!" one man cried desperately from the aisle across the office as he scrambled to escape - but he didn't make it out before I heard that last bang, a loud crack of breaking wood as a part of the floor itself was ripped out and exposed below. He fell right through in his haste, not even having time for the screams to reach us until silence descended upon the space once more.
It was then that I watched her finally lift up from the desk - though her hands still gripped the keyboard tightly - as if by muscle memory alone as she stared out into nothingness at a world forever changed.
"Ms. Thompson," he cried desperately for help, but he got no response...
I stared down to where he'd fallen - there was blood and glass everywhere in that corner now, with pieces of my co-worker scattered all across it.
My screen began to fade - blacking out completely as the scene before us became only a dark reflection on the surface itself. Then suddenly... everything changed once more...
And I was sitting back at my desk again, typing as always, just as I'd been that first day; nothing else changing - but something certainly did feel different.
Perhaps we'd both finally lost it and now we saw the same thing - what was really going on here.
Then another scream sounded from down below again...
And again... and again. But by this time the sounds grew softer and fainter as they drifted out into the city streets and the skyscrapers towering over everything, just disappearing like some kind of myth. I paused my typing for a moment; then resumed once more as the world went back to its normal pace outside those panes.
My co-worker never made it out that day. None of them ever do... though Mr. Thompson says the building has remained empty and unused since then...
For now, that's good enough for me...
Because in my view through this broken window, I can still see nothing but dark empty spaces where they used to sit, not a single speck of life - except for the shadows I see move around when it's quiet...
And if there were truly anything supernatural out there... then what was left for us to do but keep typing, as we had all along...
As long as they just don't notice how long each day goes on and how little is ever accomplished, things will go on the same old way forever.
I had to admit; this place was starting to seem very lonely in a world of constant, unending chaos... and that might've been the worst kind of nightmare I'd ever seen.
So here I sat alone, still waiting for whatever dark entity that seemed destined to return as it always did...
...but for how long?
Maybe one day I'll decide there is no more meaning - no more purpose...
And perhaps then everything will really go back to the way it was when it first happened...
And so I keep typing like nothing ever happened.
But the truth remains that one day something awful was seen through that window by everyone inside it; something that didn't die, but returned again and again.
That was all that really mattered.
I just wonder what we'll see when that thing comes back again...
But maybe there were worse things that could occur in those empty rooms for me to worry about, so I simply chose not to think too hard about it.
There would always be tomorrow's work after all...
The skyscraper swayed.
So did I...
And the sky exploded.
And Ms. Thompson continued typing...
Until next time.