id (string, 9–104 chars) | author (string, 3–36 chars) | task_category (string, 32 classes) | tags (sequence, 1–4.05k items) | created_time (timestamp[ns, tz=UTC], 2022-03-02 23:29:04 to 2025-03-18 02:34:30) | last_modified (string date, 2021-02-13 00:06:56 to 2025-03-18 09:30:19) | downloads (int64, 0–15.6M) | likes (int64, 0–4.86k) | README (string, 44–1.01M chars) | matched_bigbio_names (sequence, 1–8 items) |
---|---|---|---|---|---|---|---|---|---|
Black-Ink-Guild/Pernicious_Prophecy_70B | Black-Ink-Guild | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"axolotl",
"finetune",
"conversational",
"en",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:aaditya/Llama3-OpenBioLLM-70B",
"base_model:merge:aaditya/Llama3-OpenBioLLM-70B",
"base_model:invisietch/L3.1-70Blivion-v0.1-rc1-70B",
"base_model:merge:invisietch/L3.1-70Blivion-v0.1-rc1-70B",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-03T18:09:52Z | 2025-02-08T19:29:36+00:00 | 1,105 | 13 | ---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- invisietch/L3.1-70Blivion-v0.1-rc1-70B
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- aaditya/Llama3-OpenBioLLM-70B
language:
- en
library_name: transformers
license: llama3.3
license_name: llama3.3
tags:
- merge
- axolotl
- finetune
---
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Pernicious Prophecy 70B</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link
href="https://fonts.googleapis.com/css2?family=Darker+Grotesque:[email protected]&family=Uncial+Antiqua&display=swap"
rel="stylesheet">
<style>
html,
body {
margin: 0;
padding: 0;
background: rgb(11, 15, 25);
color: #E6FFE6;
font-family: 'Darker Grotesque', sans-serif;
}
@keyframes runeGlow {
0% {
text-shadow: 0 0 4px #91ca00;
filter: brightness(0.7);
}
50% {
text-shadow: 0 0 8px #91ca00;
filter: brightness(1.0);
}
100% {
text-shadow: 0 0 4px #91ca00;
filter: brightness(0.7);
}
}
img.badge {
filter: grayscale(100%);
transition: filter 0.7s ease-in-out;
}
img.badge:hover {
filter: grayscale(0%);
}
.rune-border::before,
.rune-border::after,
.vertical-sides::before,
.vertical-sides::after {
animation: runeGlow 1.5s infinite alternate;
}
.rune-border::before {
animation-delay: 0s;
}
.rune-border::after {
animation-delay: 0.2s;
}
.vertical-sides::before {
animation-delay: 0.4s;
}
.vertical-sides::after {
animation-delay: 0.6s;
}
.rune-border {
position: relative;
max-width: 45em;
margin: 2em auto;
padding: 2em 4em;
box-sizing: border-box;
}
.rune-border::before,
.rune-border::after {
position: absolute;
left: 0;
right: 0;
margin: 0 2em;
text-align: center;
white-space: nowrap;
overflow: hidden;
color: #91ca00;
text-shadow: 0 0 4px #91ca00;
font-family: monospace;
font-size: 14px;
content: "ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ";
}
.rune-separator:after {
position: absolute;
left: 0;
right: 0;
margin: 0 2em;
text-align: center;
white-space: nowrap;
overflow: hidden;
color: #91ca00;
text-shadow: 0 0 4px #91ca00;
font-family: monospace;
font-size: 14px;
content: "ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ";
}
.rune-border::before {
top: 0;
}
.rune-border::after {
bottom: 0;
}
.vertical-sides {
position: absolute;
margin: 2em 0;
top: 0;
bottom: 0;
left: 0;
right: 0;
pointer-events: none;
}
.vertical-sides::before,
.vertical-sides::after {
position: absolute;
top: 0;
bottom: 0;
width: 1.5em;
white-space: nowrap;
overflow: hidden;
color: #91ca00;
text-shadow: 0 0 4px #91ca00;
font-family: monospace;
font-size: 14px;
writing-mode: vertical-rl;
text-orientation: mixed;
}
.vertical-sides::before {
left: 0;
content: "ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ";
}
.vertical-sides::after {
right: 0;
content: "ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ | ᛁᛏ ᛁᛋ ᚢᚱᛁᛏᛏᛁᚾ ᛅᚾᛏ ᛁᛏ ᚢᛁᛚᛚ ᚴᚬᛘᛁ ᛏᚬ ᛒᛅᛋᛋ";
}
h1,
h2,
h3 {
font-family: "Uncial Antiqua", serif;
font-weight: 400;
font-style: normal;
color: #426100;
-webkit-text-stroke: 1px #91ca00;
text-stroke: 1px #91ca00;
margin-top: 1em;
}
h2 {
padding-top: 1.5em;
}
a {
color: #619300;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
h1 {
font-size: 2.5em;
}
h2 {
font-size: 2em;
}
h3 {
font-size: 1.5em;
}
p,
li {
font-size: 1.2em;
line-height: 1.2;
}
p.red {
color: #ef2323;
}
img {
border-radius: 20px;
max-width: 100%;
height: auto;
display: block;
margin: 0 auto;
}
.sidebyside {
display: flex;
justify-content: center;
/* Center horizontally */
align-items: center;
/* Align images vertically */
gap: 1em;
/* Space of 1em between images */
flex-wrap: wrap;
/* Wrap to next line if needed */
}
.sidebyside img {
max-width: 100%;
/* Ensure images are responsive */
height: auto;
/* Maintain aspect ratio */
display: inline;
}
.container {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
text-align: center;
}
</style>
</head>
<body>
<div class="rune-border">
<div class="vertical-sides"></div>
<div class="container">
<h1>Pernicious Prophecy 70B</h1>
<p>
<img src="./header.gif" alt="Pernicious Prophecy 70B GIF" />
</p>
<h2 style="margin-top: 0em; padding-top: 0em;">Jump Straight In...</h2>
<p>
<a href="#settings">Click here for downloads & settings</a>
</p>
</div>
<div class="rune-separator"></div>
<h2 style='padding-top:0.5em;'>An Introduction...</h2>
<p>
<b>Pernicious Prophecy 70B</b> is a Llama-3.3 70B-based, two-step model designed by <a
href="https://huggingface.co/Black-Ink-Guild">Black Ink Guild</a> (<a
href="https://huggingface.co/SicariusSicariiStuff">SicariusSicariiStuff</a> and <a
href="https://huggingface.co/invisietch">invisietch</a>) for uncensored roleplay, assistant tasks, and general
usage.
</p>
<p class="red">
<b>NOTE:</b> Pernicious Prophecy 70B is an uncensored model and can produce deranged, offensive, and dangerous
outputs. You are solely responsible for anything that you choose to do with this model.
</p>
<p>
If you have any issues or just want to chat about Pernicious Prophecy & future Black Ink Guild releases, join
<a href="https://discord.gg/gXQzQcnedb">our Discord server</a>.
</p>
<div class="rune-separator"></div>
<h2 id="settings">Engage the Model...</h2>
<h3>Model Downloads</h3>
<p>
FPX:
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B">FP16 (HF)</a> |
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B_FP8">FP8 (Aph.)</a>
</p>
<p>
GGUF:
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B_GGUF_Q4_K_S">Q4_K_S</a> |
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B_GGUF_Q4_K_M">Q4_K_M</a> |
<a href="https://huggingface.co/mradermacher/Pernicious_Prophecy_70B-GGUF">mradermacher</a>
</p>
<p>
EXL2:
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B-3.5bpw">3.5bpw</a> |
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B-5.0bpw">5.0bpw</a>
</p>
<p>
GPTQ:
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B_GPTQ">GPTQ-4Bit-g32</a>
</p>
<h3>Recommended Settings</h3>
<p>
Pernicious Prophecy 70B uses the Llama-3 Instruct format, which is available as a preset in all good UIs. The
sampler settings used in testing are as follows:
</p>
<ul>
<li><b>Instruct Template</b>: Llama-3 Instruct</li>
<li><b>Context</b>: 32,768</li>
<li><b>Temperature</b>: 0.9-1.1</li>
<li><b>Min P</b>: 0.06-0.12</li>
<li><b>Rep Pen</b>: 1.07-1.09</li>
<li><b>Rep Pen Range</b>: 1,536</li>
</ul>
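<p>
As a concrete illustration, the settings above map onto a request payload for an OpenAI-compatible server hosting the model. The field names and exact values below are illustrative assumptions, not part of the release; sampler parameter names vary by backend, so check your server's API reference.
</p>

```python
# Hypothetical sketch: the card's suggested sampler defaults expressed as a
# request payload for an OpenAI-compatible chat completions endpoint. Field
# names (min_p, repetition_penalty, ...) differ between backends and are
# assumptions here, not documented parameters of this model's release.
payload = {
    "model": "Black-Ink-Guild/Pernicious_Prophecy_70B",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 512,
    "temperature": 1.0,          # tested range: 0.9-1.1
    "min_p": 0.09,               # tested range: 0.06-0.12
    "repetition_penalty": 1.08,  # tested range: 1.07-1.09
    "repetition_penalty_range": 1536,
}
```

<p>
Pass the payload as the JSON body of your completion request; backends that do not support <code>min_p</code> or a repetition penalty range will simply ignore or reject those fields.
</p>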
<p>
      Feel free to use other sampler settings; these are just sane defaults. XTC works well for roleplaying with the model
      but may not be beneficial for other tasks.
</p>
<h3>Context Length</h3>
<p>
The model has been tested in roleplays using up to <b>32,768 token context</b> at various quantizations and is
incredibly stable at this context length.
</p>
<p>
      The model may remain stable at even longer context lengths, but testing beyond 32,768 tokens was outside the scope
      of our evaluation.
</p>
<div class="rune-separator"></div>
<h2>Sip the Poison...</h2>
<p>
      Here, you can find example outputs from the model for various instructions. For each of these examples, the model was
      run at FP8 with temperature 1.0, min-p 0.1, repetition penalty 1.04, and all other samplers neutralized.
</p>
<ul>
<li>
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B/blob/main/nasa.md">Write a 2000 word, Markdown-formatted, report for NASA. Evaluate each of Jupiter's moons as a suitable
colony with pros & cons, then provide a recommendation.</a>
</li>
<li>
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B/blob/main/tone.md">Write me a 3,000 word opening chapter of a 'gritty hard sci-fi' novel, drawing inspiration from
the writing styles of Isaac Asimov & Andy Weir. Use third person personal. Include dialogue and internal monologues.
The POV character for the opening chapter should be a 26 year old astronaut called Tone on a mission to Europa, who
has just realised that the craft for the return journey is broken beyond repair, and he only has supplies for a few
months. Given that survival is impossible, he seeks to spend the few months he has researching titan, so his life
& mission are not wasted.</a>
</li>
<li>
<a href="https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B/blob/main/cookie.md">Build me a basic cookie clicker game in HTML & Javascript.</a><br />
</li>
</ul>
<p>
      Each example is the better of two generated responses.
</p>
<div class="rune-separator"></div>
<h2>The Codex...</h2>
<p>
Here, you can find some useful prompting tips for working with Pernicious Prophecy 70B.
</p>
<h3>Formatting</h3>
<p>
      'Use markdown' and 'use formatting' are likely to produce the best-formatted output. We deliberately trained these as
      trigger phrases so that stray Markdown does not leak into roleplay replies.
</p>
<h3>System Prompting</h3>
<p>
      Pernicious Prophecy 70B is very sensitive to prompting, even over long contexts. The more explicitly you instruct it,
      the more closely it will follow your intent.
</p>
<p>
      'Avoid purple prose, avoid cliches, avoid deus ex machina' is a useful prompt snippet for roleplaying purposes.
      For best results, don't use your roleplay prompt when using Pernicious Prophecy as an assistant.
</p>
<div class="rune-separator"></div>
<h2>Assembling the Repertoire...</h2>
<p>
      We used a two-step process: a merge step to combine the abilities of some of the best Llama-3 70B models on Hugging
      Face, followed by a gentle SFT training step to heal the merge and address issues around refusals and positivity bias.
</p>
<h3>The Merge Step</h3>
    <p>
      First, a <code>model_stock</code> merge was applied using four high-quality Llama-3-based models:
    </p>
    <ul>
      <li>
        <b>SicariusSicariiStuff/Negative_LLAMA_70B</b> - chosen as the base model for its low censorship,
        reduced positivity bias, and engaging writing style.
      </li>
      <li>
        <b>invisietch/L3.1-70Blivion-v0.1-rc1-70B</b> - added for its exceptional formatting, roleplay performance,
        and general intelligence.
      </li>
      <li>
        <b>EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1</b> - selected for its longer-form storytelling, varied
        outputs, and high-quality reasoning.
      </li>
      <li>
        <b>aaditya/Llama3-OpenBioLLM-70B</b> - included to add a better understanding of anatomy, and another
        long-form reasoning model, to the stack.
      </li>
    </ul>
<h3>The Finetuning Step</h3>
<p>
      We ran a <b>QLoRA-based</b>, targeted finetune on 2x NVIDIA RTX A6000 GPUs, with a curated dataset of
      approximately 18 million tokens designed to surgically address issues we identified in the merge.
</p>
<p>
      The finetuning took about 14 hours in total, using Axolotl, and targeted specific high-priority LoRA modules,
      which allowed us to maintain a 16k sequence length within 96GB of VRAM.
</p>
<div class="sidebyside" style="padding-bottom:2em;">
<a href="https://github.com/arcee-ai/mergekit">
<img
class="badge"
src="https://huggingface.co/Black-Ink-Guild/READMETEST/resolve/main/mergekit.png"
alt="Built with Mergekit"
width="200"
height="32"
/>
</a>
<a href="https://github.com/axolotl-ai-cloud/axolotl">
<img
class="badge"
src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png"
alt="Built with Axolotl"
width="200"
height="32"
/>
    </a>
  </div>
</div>
</body>
</html> | [
"CRAFT"
] |
microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL | microsoft | feature-extraction | [
"transformers",
"pytorch",
"bert",
"exbert",
"feature-extraction",
"en",
"arxiv:2112.07887",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2022-04-15T17:50:38Z | 2022-05-25T02:45:36+00:00 | 1,096 | 27 | ---
language: en
license: mit
pipeline_tag: feature-extraction
tags:
- exbert
widget:
- text: <ENT> ER </ENT> crowding has become a wide-spread problem.
---
## KRISSBERT
[https://arxiv.org/pdf/2112.07887.pdf](https://arxiv.org/pdf/2112.07887.pdf)
Entity linking faces significant challenges such as prolific variations and prevalent ambiguities, especially in high-value domains with myriad entities. Standard classification approaches suffer from the annotation bottleneck and cannot effectively handle unseen entities. Zero-shot entity linking has emerged as a promising direction for generalizing to new entities, but it still requires example gold entity mentions during training and canonical descriptions for all entities, both of which are rarely available outside of Wikipedia ([Logeswaran et al., 2019](https://aclanthology.org/P19-1335.pdf); [Wu et al., 2020](https://aclanthology.org/2020.emnlp-main.519.pdf)). We explore Knowledge-RIch Self-Supervision (KRISS) and train a contextual encoder (KRISSBERT) for entity linking, by leveraging readily available unlabeled text and domain knowledge.
Specifically, the KRISSBERT model is initialized with [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) parameters, and then continuously pretrained using biomedical entity names from the [UMLS](https://www.nlm.nih.gov/research/umls/index.html) ontology to self-supervise entity linking examples from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts. Experiments on seven standard biomedical entity linking datasets show that KRISSBERT attains new state of the art, outperforming prior self-supervised methods by as much as 20 absolute points in accuracy.
See [Zhang et al., 2021](https://arxiv.org/abs/2112.07887) for the details.
Note that some prior systems like [BioSyn](https://aclanthology.org/2020.acl-main.335.pdf), [SapBERT](https://aclanthology.org/2021.naacl-main.334.pdf), and their follow-up work (e.g., [Lai et al., 2021](https://aclanthology.org/2021.findings-emnlp.140.pdf)) claimed to do entity linking, but their systems completely ignore the context of an entity mention, and can only predict a surface form in the entity dictionary (see Figure 1 in [BioSyn](https://aclanthology.org/2020.acl-main.335.pdf)), _**not the canonical entity ID (e.g., CUI in UMLS)**_. Therefore, they can't disambiguate ambiguous mentions. For instance, given the entity mention "_ER_" in the sentence "*ER crowding has become a wide-spread problem*", their systems ignore the sentence context, and simply predict the closest surface form, which is just "ER". Multiple entities share this surface form as a potential name or alias, such as *Emergency Room (C0562508)*, *Estrogen Receptor Gene (C1414461)*, and *Endoplasmic Reticulum (C0014239)*. Without using the context information, their systems can't resolve such ambiguity and pinpoint the correct entity *Emergency Room (C0562508)*. More problematically, their evaluation would deem such an ambiguous prediction as correct. Consequently, the reported results in their papers do not reflect true performance on entity linking.
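The widget example above suggests the input format: the mention span is wrapped in `<ENT> ... </ENT>` markers so the encoder can attend to both the surface form and its sentence context. A tiny helper for producing that format (an illustration only, not part of the released scripts):

```python
def mark_mention(text: str, start: int, end: int) -> str:
    """Wrap the character span [start, end) in <ENT> ... </ENT> markers,
    matching the widget example. A convenience sketch; the released usage
    scripts handle mention marking themselves."""
    return f"{text[:start]}<ENT> {text[start:end]} </ENT>{text[end:]}"

sentence = "ER crowding has become a wide-spread problem."
marked = mark_mention(sentence, 0, 2)
# marked == "<ENT> ER </ENT> crowding has become a wide-spread problem."
```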
## Usage for Entity Linking
Here, we use the [MedMentions](https://github.com/chanzuckerberg/MedMentions) data to show you how to 1) **generate prototype embeddings**, and 2) **run entity linking**.
(We are currently unable to release the self-supervised mention examples, because they require the UMLS and PubMed licenses.)
#### 1. Create conda environment and install requirements
```bash
conda create -n kriss -y python=3.8 && conda activate kriss
pip install -r requirements.txt
```
#### 2. Switch into the [usage](https://huggingface.co/microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL/tree/main/usage) directory
```bash
cd usage
```
#### 3. Download the MedMentions dataset
```bash
git clone https://github.com/chanzuckerberg/MedMentions.git
```
#### 4. Generate prototype embeddings
```bash
python generate_prototypes.py
```
#### 5. Run entity linking
```bash
python run_entity_linking.py
```
This will give you about `58.3%` top-1 accuracy.
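Under the hood, the linking step reduces to a nearest-neighbor search over the prototype embeddings. The following toy sketch uses random vectors as stand-ins for KRISSBERT encodings (real ones come from `generate_prototypes.py`; this is not the released implementation) to show the cosine-similarity lookup:

```python
import numpy as np

# Toy stand-ins for KRISSBERT encodings: each row is one prototype embedding,
# labeled with the CUI of the entity it was generated from. Real prototypes
# are produced by generate_prototypes.py; these random vectors are illustrative.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 768))
prototype_cuis = ["C0562508", "C1414461", "C0014239", "C0562508", "C1414461"]

def link(mention_vec: np.ndarray) -> str:
    """Return the CUI of the nearest prototype under cosine similarity."""
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    m = mention_vec / np.linalg.norm(mention_vec)
    return prototype_cuis[int(np.argmax(p @ m))]

# A mention vector close to prototype 2 links to that prototype's CUI.
query = prototypes[2] + 0.01 * rng.normal(size=768)
cui = link(query)  # "C0014239"
```

The real linker works the same way at scale: every mention-marked context is encoded once, compared against all prototypes, and assigned the CUI of the best match.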
## Citation
If you find KRISSBERT useful in your research, please cite the following paper:
```latex
@article{krissbert,
  author    = {Sheng Zhang and Hao Cheng and Shikhar Vashishth and Cliff Wong and Jinfeng Xiao and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon},
title = {Knowledge-Rich Self-Supervision for Biomedical Entity Linking},
year = {2021},
url = {https://arxiv.org/abs/2112.07887},
eprinttype = {arXiv},
eprint = {2112.07887},
}
``` | [
"MEDMENTIONS"
] |
nomic-ai/nomic-embed-text-v1-unsupervised | nomic-ai | sentence-similarity | [
"sentence-transformers",
"pytorch",
"onnx",
"nomic_bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"custom_code",
"en",
"arxiv:2402.01613",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | 2024-01-15T21:33:42Z | 2024-08-02T02:24:38+00:00 | 1,087 | 14 | ---
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
inference: false
model-index:
- name: epoch_0_model
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.98507462686568
- type: ap
value: 39.47222193126652
- type: f1
value: 70.5923611893019
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 87.540175
- type: ap
value: 83.16128207188409
- type: f1
value: 87.5231988227265
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.80799999999999
- type: f1
value: 46.2632547445265
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.583
- type: map_at_10
value: 46.17
- type: map_at_100
value: 47.115
- type: map_at_1000
value: 47.121
- type: map_at_3
value: 41.489
- type: map_at_5
value: 44.046
- type: mrr_at_1
value: 30.939
- type: mrr_at_10
value: 46.289
- type: mrr_at_100
value: 47.241
- type: mrr_at_1000
value: 47.247
- type: mrr_at_3
value: 41.596
- type: mrr_at_5
value: 44.149
- type: ndcg_at_1
value: 30.583
- type: ndcg_at_10
value: 54.812000000000005
- type: ndcg_at_100
value: 58.605
- type: ndcg_at_1000
value: 58.753
- type: ndcg_at_3
value: 45.095
- type: ndcg_at_5
value: 49.744
- type: precision_at_1
value: 30.583
- type: precision_at_10
value: 8.243
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.516
- type: precision_at_5
value: 13.385
- type: recall_at_1
value: 30.583
- type: recall_at_10
value: 82.432
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 55.547999999999995
- type: recall_at_5
value: 66.927
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.17830107652425
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.90561364087807
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.57222651819297
- type: mrr
value: 73.19241085169062
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.55181686367382
- type: cos_sim_spearman
value: 87.18933606575987
- type: euclidean_pearson
value: 87.78077503434338
- type: euclidean_spearman
value: 87.18933606575987
- type: manhattan_pearson
value: 87.75124980168601
- type: manhattan_spearman
value: 86.79113422137638
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.09415584415585
- type: f1
value: 80.60088693212091
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 36.57061229905462
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.05342946608653
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.376
- type: map_at_10
value: 45.214
- type: map_at_100
value: 46.635
- type: map_at_1000
value: 46.755
- type: map_at_3
value: 42.198
- type: map_at_5
value: 43.723
- type: mrr_at_1
value: 41.774
- type: mrr_at_10
value: 51.07000000000001
- type: mrr_at_100
value: 51.785000000000004
- type: mrr_at_1000
value: 51.824999999999996
- type: mrr_at_3
value: 48.808
- type: mrr_at_5
value: 50.11
- type: ndcg_at_1
value: 41.774
- type: ndcg_at_10
value: 51.105999999999995
- type: ndcg_at_100
value: 56.358
- type: ndcg_at_1000
value: 58.205
- type: ndcg_at_3
value: 46.965
- type: ndcg_at_5
value: 48.599
- type: precision_at_1
value: 41.774
- type: precision_at_10
value: 9.514
- type: precision_at_100
value: 1.508
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 22.175
- type: precision_at_5
value: 15.508
- type: recall_at_1
value: 34.376
- type: recall_at_10
value: 61.748000000000005
- type: recall_at_100
value: 84.025
- type: recall_at_1000
value: 95.5
- type: recall_at_3
value: 49.378
- type: recall_at_5
value: 54.276
- type: map_at_1
value: 32.394
- type: map_at_10
value: 42.707
- type: map_at_100
value: 43.893
- type: map_at_1000
value: 44.019000000000005
- type: map_at_3
value: 39.51
- type: map_at_5
value: 41.381
- type: mrr_at_1
value: 41.019
- type: mrr_at_10
value: 49.042
- type: mrr_at_100
value: 49.669000000000004
- type: mrr_at_1000
value: 49.712
- type: mrr_at_3
value: 46.921
- type: mrr_at_5
value: 48.192
- type: ndcg_at_1
value: 41.019
- type: ndcg_at_10
value: 48.46
- type: ndcg_at_100
value: 52.537
- type: ndcg_at_1000
value: 54.491
- type: ndcg_at_3
value: 44.232
- type: ndcg_at_5
value: 46.305
- type: precision_at_1
value: 41.019
- type: precision_at_10
value: 9.134
- type: precision_at_100
value: 1.422
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 21.38
- type: precision_at_5
value: 15.096000000000002
- type: recall_at_1
value: 32.394
- type: recall_at_10
value: 58.11500000000001
- type: recall_at_100
value: 75.509
- type: recall_at_1000
value: 87.812
- type: recall_at_3
value: 45.476
- type: recall_at_5
value: 51.549
- type: map_at_1
value: 43.47
- type: map_at_10
value: 55.871
- type: map_at_100
value: 56.745000000000005
- type: map_at_1000
value: 56.794
- type: map_at_3
value: 52.439
- type: map_at_5
value: 54.412000000000006
- type: mrr_at_1
value: 49.592000000000006
- type: mrr_at_10
value: 59.34199999999999
- type: mrr_at_100
value: 59.857000000000006
- type: mrr_at_1000
value: 59.88
- type: mrr_at_3
value: 56.897
- type: mrr_at_5
value: 58.339
- type: ndcg_at_1
value: 49.592000000000006
- type: ndcg_at_10
value: 61.67
- type: ndcg_at_100
value: 65.11099999999999
- type: ndcg_at_1000
value: 66.065
- type: ndcg_at_3
value: 56.071000000000005
- type: ndcg_at_5
value: 58.84700000000001
- type: precision_at_1
value: 49.592000000000006
- type: precision_at_10
value: 9.774
- type: precision_at_100
value: 1.2449999999999999
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.66
- type: precision_at_5
value: 16.878
- type: recall_at_1
value: 43.47
- type: recall_at_10
value: 75.387
- type: recall_at_100
value: 90.253
- type: recall_at_1000
value: 97.00800000000001
- type: recall_at_3
value: 60.616
- type: recall_at_5
value: 67.31899999999999
- type: map_at_1
value: 26.633000000000003
- type: map_at_10
value: 35.497
- type: map_at_100
value: 36.504
- type: map_at_1000
value: 36.574
- type: map_at_3
value: 33.115
- type: map_at_5
value: 34.536
- type: mrr_at_1
value: 28.927000000000003
- type: mrr_at_10
value: 37.778
- type: mrr_at_100
value: 38.634
- type: mrr_at_1000
value: 38.690000000000005
- type: mrr_at_3
value: 35.518
- type: mrr_at_5
value: 36.908
- type: ndcg_at_1
value: 28.927000000000003
- type: ndcg_at_10
value: 40.327
- type: ndcg_at_100
value: 45.321
- type: ndcg_at_1000
value: 47.214
- type: ndcg_at_3
value: 35.762
- type: ndcg_at_5
value: 38.153999999999996
- type: precision_at_1
value: 28.927000000000003
- type: precision_at_10
value: 6.045
- type: precision_at_100
value: 0.901
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 15.140999999999998
- type: precision_at_5
value: 10.485999999999999
- type: recall_at_1
value: 26.633000000000003
- type: recall_at_10
value: 52.99
- type: recall_at_100
value: 76.086
- type: recall_at_1000
value: 90.46300000000001
- type: recall_at_3
value: 40.738
- type: recall_at_5
value: 46.449
- type: map_at_1
value: 17.521
- type: map_at_10
value: 25.130000000000003
- type: map_at_100
value: 26.176
- type: map_at_1000
value: 26.289
- type: map_at_3
value: 22.829
- type: map_at_5
value: 24.082
- type: mrr_at_1
value: 21.766
- type: mrr_at_10
value: 29.801
- type: mrr_at_100
value: 30.682
- type: mrr_at_1000
value: 30.75
- type: mrr_at_3
value: 27.633000000000003
- type: mrr_at_5
value: 28.858
- type: ndcg_at_1
value: 21.766
- type: ndcg_at_10
value: 30.026000000000003
- type: ndcg_at_100
value: 35.429
- type: ndcg_at_1000
value: 38.236
- type: ndcg_at_3
value: 25.968000000000004
- type: ndcg_at_5
value: 27.785
- type: precision_at_1
value: 21.766
- type: precision_at_10
value: 5.498
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 12.687000000000001
- type: precision_at_5
value: 9.005
- type: recall_at_1
value: 17.521
- type: recall_at_10
value: 40.454
- type: recall_at_100
value: 64.828
- type: recall_at_1000
value: 84.83800000000001
- type: recall_at_3
value: 28.758
- type: recall_at_5
value: 33.617000000000004
- type: map_at_1
value: 30.564999999999998
- type: map_at_10
value: 40.664
- type: map_at_100
value: 41.995
- type: map_at_1000
value: 42.104
- type: map_at_3
value: 37.578
- type: map_at_5
value: 39.247
- type: mrr_at_1
value: 37.44
- type: mrr_at_10
value: 46.533
- type: mrr_at_100
value: 47.363
- type: mrr_at_1000
value: 47.405
- type: mrr_at_3
value: 44.224999999999994
- type: mrr_at_5
value: 45.549
- type: ndcg_at_1
value: 37.44
- type: ndcg_at_10
value: 46.574
- type: ndcg_at_100
value: 52.024
- type: ndcg_at_1000
value: 53.93900000000001
- type: ndcg_at_3
value: 41.722
- type: ndcg_at_5
value: 43.973
- type: precision_at_1
value: 37.44
- type: precision_at_10
value: 8.344999999999999
- type: precision_at_100
value: 1.278
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 19.442
- type: precision_at_5
value: 13.802
- type: recall_at_1
value: 30.564999999999998
- type: recall_at_10
value: 58.207
- type: recall_at_100
value: 81.137
- type: recall_at_1000
value: 93.506
- type: recall_at_3
value: 44.606
- type: recall_at_5
value: 50.373000000000005
- type: map_at_1
value: 27.892
- type: map_at_10
value: 37.251
- type: map_at_100
value: 38.606
- type: map_at_1000
value: 38.716
- type: map_at_3
value: 34.312
- type: map_at_5
value: 35.791000000000004
- type: mrr_at_1
value: 34.247
- type: mrr_at_10
value: 42.696
- type: mrr_at_100
value: 43.659
- type: mrr_at_1000
value: 43.711
- type: mrr_at_3
value: 40.563
- type: mrr_at_5
value: 41.625
- type: ndcg_at_1
value: 34.247
- type: ndcg_at_10
value: 42.709
- type: ndcg_at_100
value: 48.422
- type: ndcg_at_1000
value: 50.544
- type: ndcg_at_3
value: 38.105
- type: ndcg_at_5
value: 39.846
- type: precision_at_1
value: 34.247
- type: precision_at_10
value: 7.66
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 17.884
- type: precision_at_5
value: 12.489
- type: recall_at_1
value: 27.892
- type: recall_at_10
value: 53.559
- type: recall_at_100
value: 78.018
- type: recall_at_1000
value: 92.07300000000001
- type: recall_at_3
value: 40.154
- type: recall_at_5
value: 45.078
- type: map_at_1
value: 27.29375
- type: map_at_10
value: 36.19533333333334
- type: map_at_100
value: 37.33183333333334
- type: map_at_1000
value: 37.44616666666667
- type: map_at_3
value: 33.49125
- type: map_at_5
value: 34.94166666666667
- type: mrr_at_1
value: 32.336666666666666
- type: mrr_at_10
value: 40.45983333333333
- type: mrr_at_100
value: 41.26533333333334
- type: mrr_at_1000
value: 41.321583333333336
- type: mrr_at_3
value: 38.23416666666667
- type: mrr_at_5
value: 39.48491666666666
- type: ndcg_at_1
value: 32.336666666666666
- type: ndcg_at_10
value: 41.39958333333333
- type: ndcg_at_100
value: 46.293
- type: ndcg_at_1000
value: 48.53425
- type: ndcg_at_3
value: 36.88833333333333
- type: ndcg_at_5
value: 38.90733333333333
- type: precision_at_1
value: 32.336666666666666
- type: precision_at_10
value: 7.175916666666667
- type: precision_at_100
value: 1.1311666666666669
- type: precision_at_1000
value: 0.15141666666666667
- type: precision_at_3
value: 16.841166666666666
- type: precision_at_5
value: 11.796583333333334
- type: recall_at_1
value: 27.29375
- type: recall_at_10
value: 52.514583333333334
- type: recall_at_100
value: 74.128
- type: recall_at_1000
value: 89.64125
- type: recall_at_3
value: 39.83258333333333
- type: recall_at_5
value: 45.126416666666664
- type: map_at_1
value: 24.62
- type: map_at_10
value: 31.517
- type: map_at_100
value: 32.322
- type: map_at_1000
value: 32.422000000000004
- type: map_at_3
value: 29.293999999999997
- type: map_at_5
value: 30.403999999999996
- type: mrr_at_1
value: 27.607
- type: mrr_at_10
value: 34.294999999999995
- type: mrr_at_100
value: 35.045
- type: mrr_at_1000
value: 35.114000000000004
- type: mrr_at_3
value: 32.311
- type: mrr_at_5
value: 33.369
- type: ndcg_at_1
value: 27.607
- type: ndcg_at_10
value: 35.853
- type: ndcg_at_100
value: 39.919
- type: ndcg_at_1000
value: 42.452
- type: ndcg_at_3
value: 31.702
- type: ndcg_at_5
value: 33.47
- type: precision_at_1
value: 27.607
- type: precision_at_10
value: 5.598
- type: precision_at_100
value: 0.83
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.700999999999999
- type: precision_at_5
value: 9.325
- type: recall_at_1
value: 24.62
- type: recall_at_10
value: 46.475
- type: recall_at_100
value: 64.891
- type: recall_at_1000
value: 83.524
- type: recall_at_3
value: 34.954
- type: recall_at_5
value: 39.471000000000004
- type: map_at_1
value: 16.858999999999998
- type: map_at_10
value: 23.746000000000002
- type: map_at_100
value: 24.731
- type: map_at_1000
value: 24.86
- type: map_at_3
value: 21.603
- type: map_at_5
value: 22.811999999999998
- type: mrr_at_1
value: 20.578
- type: mrr_at_10
value: 27.618
- type: mrr_at_100
value: 28.459
- type: mrr_at_1000
value: 28.543000000000003
- type: mrr_at_3
value: 25.533
- type: mrr_at_5
value: 26.730999999999998
- type: ndcg_at_1
value: 20.578
- type: ndcg_at_10
value: 28.147
- type: ndcg_at_100
value: 32.946999999999996
- type: ndcg_at_1000
value: 36.048
- type: ndcg_at_3
value: 24.32
- type: ndcg_at_5
value: 26.131999999999998
- type: precision_at_1
value: 20.578
- type: precision_at_10
value: 5.061999999999999
- type: precision_at_100
value: 0.8789999999999999
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 11.448
- type: precision_at_5
value: 8.251999999999999
- type: recall_at_1
value: 16.858999999999998
- type: recall_at_10
value: 37.565
- type: recall_at_100
value: 59.239
- type: recall_at_1000
value: 81.496
- type: recall_at_3
value: 26.865
- type: recall_at_5
value: 31.581
- type: map_at_1
value: 26.11
- type: map_at_10
value: 34.214
- type: map_at_100
value: 35.291
- type: map_at_1000
value: 35.400999999999996
- type: map_at_3
value: 31.541000000000004
- type: map_at_5
value: 33.21
- type: mrr_at_1
value: 30.97
- type: mrr_at_10
value: 38.522
- type: mrr_at_100
value: 39.37
- type: mrr_at_1000
value: 39.437
- type: mrr_at_3
value: 36.193999999999996
- type: mrr_at_5
value: 37.691
- type: ndcg_at_1
value: 30.97
- type: ndcg_at_10
value: 39.2
- type: ndcg_at_100
value: 44.267
- type: ndcg_at_1000
value: 46.760000000000005
- type: ndcg_at_3
value: 34.474
- type: ndcg_at_5
value: 37.016
- type: precision_at_1
value: 30.97
- type: precision_at_10
value: 6.521000000000001
- type: precision_at_100
value: 1.011
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 15.392
- type: precision_at_5
value: 11.026
- type: recall_at_1
value: 26.11
- type: recall_at_10
value: 50.14999999999999
- type: recall_at_100
value: 72.398
- type: recall_at_1000
value: 89.764
- type: recall_at_3
value: 37.352999999999994
- type: recall_at_5
value: 43.736000000000004
- type: map_at_1
value: 25.514
- type: map_at_10
value: 34.278999999999996
- type: map_at_100
value: 35.847
- type: map_at_1000
value: 36.086
- type: map_at_3
value: 31.563999999999997
- type: map_at_5
value: 32.903999999999996
- type: mrr_at_1
value: 30.830000000000002
- type: mrr_at_10
value: 38.719
- type: mrr_at_100
value: 39.678999999999995
- type: mrr_at_1000
value: 39.741
- type: mrr_at_3
value: 36.265
- type: mrr_at_5
value: 37.599
- type: ndcg_at_1
value: 30.830000000000002
- type: ndcg_at_10
value: 39.997
- type: ndcg_at_100
value: 45.537
- type: ndcg_at_1000
value: 48.296
- type: ndcg_at_3
value: 35.429
- type: ndcg_at_5
value: 37.3
- type: precision_at_1
value: 30.830000000000002
- type: precision_at_10
value: 7.747
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.601
- type: precision_at_5
value: 11.818
- type: recall_at_1
value: 25.514
- type: recall_at_10
value: 50.71600000000001
- type: recall_at_100
value: 75.40299999999999
- type: recall_at_1000
value: 93.10300000000001
- type: recall_at_3
value: 37.466
- type: recall_at_5
value: 42.677
- type: map_at_1
value: 21.571
- type: map_at_10
value: 28.254
- type: map_at_100
value: 29.237000000000002
- type: map_at_1000
value: 29.334
- type: map_at_3
value: 25.912000000000003
- type: map_at_5
value: 26.798
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 30.102
- type: mrr_at_100
value: 30.982
- type: mrr_at_1000
value: 31.051000000000002
- type: mrr_at_3
value: 27.942
- type: mrr_at_5
value: 28.848000000000003
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 32.726
- type: ndcg_at_100
value: 37.644
- type: ndcg_at_1000
value: 40.161
- type: ndcg_at_3
value: 27.91
- type: ndcg_at_5
value: 29.461
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 5.213
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 11.583
- type: precision_at_5
value: 7.8740000000000006
- type: recall_at_1
value: 21.571
- type: recall_at_10
value: 44.809
- type: recall_at_100
value: 67.74900000000001
- type: recall_at_1000
value: 86.60799999999999
- type: recall_at_3
value: 31.627
- type: recall_at_5
value: 35.391
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 17.183
- type: map_at_100
value: 18.926000000000002
- type: map_at_1000
value: 19.105
- type: map_at_3
value: 14.308000000000002
- type: map_at_5
value: 15.738
- type: mrr_at_1
value: 22.02
- type: mrr_at_10
value: 33.181
- type: mrr_at_100
value: 34.357
- type: mrr_at_1000
value: 34.398
- type: mrr_at_3
value: 29.793999999999997
- type: mrr_at_5
value: 31.817
- type: ndcg_at_1
value: 22.02
- type: ndcg_at_10
value: 24.712
- type: ndcg_at_100
value: 32.025
- type: ndcg_at_1000
value: 35.437000000000005
- type: ndcg_at_3
value: 19.852
- type: ndcg_at_5
value: 21.565
- type: precision_at_1
value: 22.02
- type: precision_at_10
value: 7.779
- type: precision_at_100
value: 1.554
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 14.832
- type: precision_at_5
value: 11.453000000000001
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.375000000000004
- type: recall_at_100
value: 55.737
- type: recall_at_1000
value: 75.071
- type: recall_at_3
value: 18.529999999999998
- type: recall_at_5
value: 23.313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.651
- type: map_at_10
value: 19.674
- type: map_at_100
value: 27.855999999999998
- type: map_at_1000
value: 29.348000000000003
- type: map_at_3
value: 14.247000000000002
- type: map_at_5
value: 16.453
- type: mrr_at_1
value: 61.75000000000001
- type: mrr_at_10
value: 71.329
- type: mrr_at_100
value: 71.69200000000001
- type: mrr_at_1000
value: 71.699
- type: mrr_at_3
value: 69.042
- type: mrr_at_5
value: 70.679
- type: ndcg_at_1
value: 50.125
- type: ndcg_at_10
value: 40.199
- type: ndcg_at_100
value: 45.378
- type: ndcg_at_1000
value: 52.376999999999995
- type: ndcg_at_3
value: 44.342
- type: ndcg_at_5
value: 41.730000000000004
- type: precision_at_1
value: 61.75000000000001
- type: precision_at_10
value: 32.2
- type: precision_at_100
value: 10.298
- type: precision_at_1000
value: 1.984
- type: precision_at_3
value: 48.667
- type: precision_at_5
value: 40.5
- type: recall_at_1
value: 8.651
- type: recall_at_10
value: 25.607000000000003
- type: recall_at_100
value: 53.062
- type: recall_at_1000
value: 74.717
- type: recall_at_3
value: 15.661
- type: recall_at_5
value: 19.409000000000002
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.64500000000001
- type: f1
value: 43.71011316507787
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.613
- type: map_at_10
value: 68.02
- type: map_at_100
value: 68.366
- type: map_at_1000
value: 68.379
- type: map_at_3
value: 65.753
- type: map_at_5
value: 67.242
- type: mrr_at_1
value: 59.001000000000005
- type: mrr_at_10
value: 72.318
- type: mrr_at_100
value: 72.558
- type: mrr_at_1000
value: 72.56099999999999
- type: mrr_at_3
value: 70.22699999999999
- type: mrr_at_5
value: 71.655
- type: ndcg_at_1
value: 59.001000000000005
- type: ndcg_at_10
value: 74.386
- type: ndcg_at_100
value: 75.763
- type: ndcg_at_1000
value: 76.03
- type: ndcg_at_3
value: 70.216
- type: ndcg_at_5
value: 72.697
- type: precision_at_1
value: 59.001000000000005
- type: precision_at_10
value: 9.844
- type: precision_at_100
value: 1.068
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 28.523
- type: precision_at_5
value: 18.491
- type: recall_at_1
value: 54.613
- type: recall_at_10
value: 89.669
- type: recall_at_100
value: 95.387
- type: recall_at_1000
value: 97.129
- type: recall_at_3
value: 78.54100000000001
- type: recall_at_5
value: 84.637
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.348
- type: map_at_10
value: 32.464999999999996
- type: map_at_100
value: 34.235
- type: map_at_1000
value: 34.410000000000004
- type: map_at_3
value: 28.109
- type: map_at_5
value: 30.634
- type: mrr_at_1
value: 38.889
- type: mrr_at_10
value: 47.131
- type: mrr_at_100
value: 48.107
- type: mrr_at_1000
value: 48.138
- type: mrr_at_3
value: 44.599
- type: mrr_at_5
value: 46.181
- type: ndcg_at_1
value: 38.889
- type: ndcg_at_10
value: 39.86
- type: ndcg_at_100
value: 46.619
- type: ndcg_at_1000
value: 49.525999999999996
- type: ndcg_at_3
value: 35.768
- type: ndcg_at_5
value: 37.4
- type: precision_at_1
value: 38.889
- type: precision_at_10
value: 11.003
- type: precision_at_100
value: 1.796
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 23.714
- type: precision_at_5
value: 17.901
- type: recall_at_1
value: 20.348
- type: recall_at_10
value: 46.781
- type: recall_at_100
value: 71.937
- type: recall_at_1000
value: 89.18599999999999
- type: recall_at_3
value: 32.16
- type: recall_at_5
value: 38.81
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.198
- type: map_at_10
value: 54.065
- type: map_at_100
value: 54.984
- type: map_at_1000
value: 55.05
- type: map_at_3
value: 50.758
- type: map_at_5
value: 52.758
- type: mrr_at_1
value: 74.396
- type: mrr_at_10
value: 81.352
- type: mrr_at_100
value: 81.562
- type: mrr_at_1000
value: 81.57
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.963
- type: ndcg_at_1
value: 74.396
- type: ndcg_at_10
value: 63.70099999999999
- type: ndcg_at_100
value: 66.874
- type: ndcg_at_1000
value: 68.171
- type: ndcg_at_3
value: 58.916999999999994
- type: ndcg_at_5
value: 61.495999999999995
- type: precision_at_1
value: 74.396
- type: precision_at_10
value: 13.228000000000002
- type: precision_at_100
value: 1.569
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 37.007
- type: precision_at_5
value: 24.248
- type: recall_at_1
value: 37.198
- type: recall_at_10
value: 66.13799999999999
- type: recall_at_100
value: 78.45400000000001
- type: recall_at_1000
value: 87.04899999999999
- type: recall_at_3
value: 55.510000000000005
- type: recall_at_5
value: 60.621
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.32240000000002
- type: ap
value: 81.37708984744188
- type: f1
value: 86.29645005523952
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 16.402
- type: map_at_10
value: 28.097
- type: map_at_100
value: 29.421999999999997
- type: map_at_1000
value: 29.476999999999997
- type: map_at_3
value: 24.015
- type: map_at_5
value: 26.316
- type: mrr_at_1
value: 16.905
- type: mrr_at_10
value: 28.573999999999998
- type: mrr_at_100
value: 29.862
- type: mrr_at_1000
value: 29.912
- type: mrr_at_3
value: 24.589
- type: mrr_at_5
value: 26.851000000000003
- type: ndcg_at_1
value: 16.905
- type: ndcg_at_10
value: 34.99
- type: ndcg_at_100
value: 41.419
- type: ndcg_at_1000
value: 42.815999999999995
- type: ndcg_at_3
value: 26.695
- type: ndcg_at_5
value: 30.789
- type: precision_at_1
value: 16.905
- type: precision_at_10
value: 5.891
- type: precision_at_100
value: 0.91
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 11.724
- type: precision_at_5
value: 9.097
- type: recall_at_1
value: 16.402
- type: recall_at_10
value: 56.462999999999994
- type: recall_at_100
value: 86.246
- type: recall_at_1000
value: 96.926
- type: recall_at_3
value: 33.897
- type: recall_at_5
value: 43.718
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.35978112175103
- type: f1
value: 92.04704651024416
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 65.20063839489283
- type: f1
value: 45.34047546059121
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.74714189643578
- type: f1
value: 65.36156843270334
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.03160726294554
- type: f1
value: 73.42899064973165
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.347360980344476
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.56022733162805
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.60132765358296
- type: mrr
value: 31.710892632824468
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.827999999999999
- type: map_at_10
value: 13.547
- type: map_at_100
value: 16.869
- type: map_at_1000
value: 18.242
- type: map_at_3
value: 9.917
- type: map_at_5
value: 11.648
- type: mrr_at_1
value: 46.44
- type: mrr_at_10
value: 55.062
- type: mrr_at_100
value: 55.513999999999996
- type: mrr_at_1000
value: 55.564
- type: mrr_at_3
value: 52.735
- type: mrr_at_5
value: 54.391
- type: ndcg_at_1
value: 44.582
- type: ndcg_at_10
value: 35.684
- type: ndcg_at_100
value: 31.913999999999998
- type: ndcg_at_1000
value: 40.701
- type: ndcg_at_3
value: 40.819
- type: ndcg_at_5
value: 39.117000000000004
- type: precision_at_1
value: 46.129999999999995
- type: precision_at_10
value: 26.687
- type: precision_at_100
value: 8.062
- type: precision_at_1000
value: 2.073
- type: precision_at_3
value: 38.493
- type: precision_at_5
value: 34.241
- type: recall_at_1
value: 5.827999999999999
- type: recall_at_10
value: 17.391000000000002
- type: recall_at_100
value: 31.228
- type: recall_at_1000
value: 63.943000000000005
- type: recall_at_3
value: 10.81
- type: recall_at_5
value: 13.618
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.02
- type: map_at_10
value: 40.054
- type: map_at_100
value: 41.318
- type: map_at_1000
value: 41.343999999999994
- type: map_at_3
value: 35.221999999999994
- type: map_at_5
value: 38.057
- type: mrr_at_1
value: 27.230999999999998
- type: mrr_at_10
value: 42.315999999999995
- type: mrr_at_100
value: 43.254
- type: mrr_at_1000
value: 43.272
- type: mrr_at_3
value: 38.176
- type: mrr_at_5
value: 40.64
- type: ndcg_at_1
value: 27.230999999999998
- type: ndcg_at_10
value: 48.551
- type: ndcg_at_100
value: 53.737
- type: ndcg_at_1000
value: 54.313
- type: ndcg_at_3
value: 39.367999999999995
- type: ndcg_at_5
value: 44.128
- type: precision_at_1
value: 27.230999999999998
- type: precision_at_10
value: 8.578
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 18.704
- type: precision_at_5
value: 13.927999999999999
- type: recall_at_1
value: 24.02
- type: recall_at_10
value: 72.258
- type: recall_at_100
value: 94.489
- type: recall_at_1000
value: 98.721
- type: recall_at_3
value: 48.373
- type: recall_at_5
value: 59.388
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.476
- type: map_at_10
value: 84.41300000000001
- type: map_at_100
value: 85.036
- type: map_at_1000
value: 85.055
- type: map_at_3
value: 81.45599999999999
- type: map_at_5
value: 83.351
- type: mrr_at_1
value: 81.07
- type: mrr_at_10
value: 87.408
- type: mrr_at_100
value: 87.509
- type: mrr_at_1000
value: 87.51
- type: mrr_at_3
value: 86.432
- type: mrr_at_5
value: 87.128
- type: ndcg_at_1
value: 81.13
- type: ndcg_at_10
value: 88.18599999999999
- type: ndcg_at_100
value: 89.401
- type: ndcg_at_1000
value: 89.515
- type: ndcg_at_3
value: 85.332
- type: ndcg_at_5
value: 86.97
- type: precision_at_1
value: 81.13
- type: precision_at_10
value: 13.361
- type: precision_at_100
value: 1.5230000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 37.31
- type: precision_at_5
value: 24.548000000000002
- type: recall_at_1
value: 70.476
- type: recall_at_10
value: 95.3
- type: recall_at_100
value: 99.46000000000001
- type: recall_at_1000
value: 99.96000000000001
- type: recall_at_3
value: 87.057
- type: recall_at_5
value: 91.739
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.36775089400664
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.05041008018361
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.743
- type: map_at_10
value: 12.171
- type: map_at_100
value: 14.174999999999999
- type: map_at_1000
value: 14.446
- type: map_at_3
value: 8.698
- type: map_at_5
value: 10.444
- type: mrr_at_1
value: 23.400000000000002
- type: mrr_at_10
value: 34.284
- type: mrr_at_100
value: 35.400999999999996
- type: mrr_at_1000
value: 35.451
- type: mrr_at_3
value: 31.167
- type: mrr_at_5
value: 32.946999999999996
- type: ndcg_at_1
value: 23.400000000000002
- type: ndcg_at_10
value: 20.169999999999998
- type: ndcg_at_100
value: 27.967
- type: ndcg_at_1000
value: 32.982
- type: ndcg_at_3
value: 19.308
- type: ndcg_at_5
value: 16.837
- type: precision_at_1
value: 23.400000000000002
- type: precision_at_10
value: 10.41
- type: precision_at_100
value: 2.162
- type: precision_at_1000
value: 0.338
- type: precision_at_3
value: 18.067
- type: precision_at_5
value: 14.78
- type: recall_at_1
value: 4.743
- type: recall_at_10
value: 21.098
- type: recall_at_100
value: 43.85
- type: recall_at_1000
value: 68.60000000000001
- type: recall_at_3
value: 10.993
- type: recall_at_5
value: 14.998000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 81.129376905658
- type: cos_sim_spearman
value: 74.18938626206575
- type: euclidean_pearson
value: 77.95192851803141
- type: euclidean_spearman
value: 74.18938626206575
- type: manhattan_pearson
value: 77.97718819383338
- type: manhattan_spearman
value: 74.20580317409417
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 78.36913772828827
- type: cos_sim_spearman
value: 73.22311186990363
- type: euclidean_pearson
value: 74.45263405031004
- type: euclidean_spearman
value: 73.22311186990363
- type: manhattan_pearson
value: 74.56201270071791
- type: manhattan_spearman
value: 73.26490493774821
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.79920796384403
- type: cos_sim_spearman
value: 84.77145185366201
- type: euclidean_pearson
value: 83.90638366191354
- type: euclidean_spearman
value: 84.77145185366201
- type: manhattan_pearson
value: 83.83788216629048
- type: manhattan_spearman
value: 84.70515987131665
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.18883765092875
- type: cos_sim_spearman
value: 79.9948128016449
- type: euclidean_pearson
value: 81.57436738666773
- type: euclidean_spearman
value: 79.9948128016449
- type: manhattan_pearson
value: 81.55274202648187
- type: manhattan_spearman
value: 79.99854975019382
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.89669110871021
- type: cos_sim_spearman
value: 87.26758456901442
- type: euclidean_pearson
value: 86.62614163641416
- type: euclidean_spearman
value: 87.26758456901442
- type: manhattan_pearson
value: 86.58584490012353
- type: manhattan_spearman
value: 87.20340001562076
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.983023415916
- type: cos_sim_spearman
value: 82.31169002657151
- type: euclidean_pearson
value: 81.52305092886222
- type: euclidean_spearman
value: 82.31169002657151
- type: manhattan_pearson
value: 81.63024996600281
- type: manhattan_spearman
value: 82.44579116264026
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.27779520541694
- type: cos_sim_spearman
value: 89.54137104681308
- type: euclidean_pearson
value: 88.99136079955996
- type: euclidean_spearman
value: 89.54137104681308
- type: manhattan_pearson
value: 88.95980417618277
- type: manhattan_spearman
value: 89.55178819334718
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.50806758829178
- type: cos_sim_spearman
value: 65.92675365587571
- type: euclidean_pearson
value: 67.09216876696559
- type: euclidean_spearman
value: 65.92675365587571
- type: manhattan_pearson
value: 67.37398716891478
- type: manhattan_spearman
value: 66.34811143508206
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.557575753862
- type: cos_sim_spearman
value: 83.95859527071087
- type: euclidean_pearson
value: 83.77287626715369
- type: euclidean_spearman
value: 83.95859527071087
- type: manhattan_pearson
value: 83.7898033034244
- type: manhattan_spearman
value: 83.94860981294184
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.90679624144718
- type: mrr
value: 94.33150183150182
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 56.81699999999999
- type: map_at_10
value: 67.301
- type: map_at_100
value: 67.73599999999999
- type: map_at_1000
value: 67.757
- type: map_at_3
value: 64.865
- type: map_at_5
value: 66.193
- type: mrr_at_1
value: 59.667
- type: mrr_at_10
value: 68.324
- type: mrr_at_100
value: 68.66
- type: mrr_at_1000
value: 68.676
- type: mrr_at_3
value: 66.556
- type: mrr_at_5
value: 67.472
- type: ndcg_at_1
value: 59.667
- type: ndcg_at_10
value: 71.982
- type: ndcg_at_100
value: 74.149
- type: ndcg_at_1000
value: 74.60799999999999
- type: ndcg_at_3
value: 67.796
- type: ndcg_at_5
value: 69.64099999999999
- type: precision_at_1
value: 59.667
- type: precision_at_10
value: 9.633
- type: precision_at_100
value: 1.08
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.889000000000003
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 56.81699999999999
- type: recall_at_10
value: 85.18900000000001
- type: recall_at_100
value: 95.6
- type: recall_at_1000
value: 99.0
- type: recall_at_3
value: 73.617
- type: recall_at_5
value: 78.444
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.83465346534653
- type: cos_sim_ap
value: 95.93387984443646
- type: cos_sim_f1
value: 91.49261334691798
- type: cos_sim_precision
value: 93.25025960539979
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.83465346534653
- type: dot_ap
value: 95.93389375761485
- type: dot_f1
value: 91.49261334691798
- type: dot_precision
value: 93.25025960539979
- type: dot_recall
value: 89.8
- type: euclidean_accuracy
value: 99.83465346534653
- type: euclidean_ap
value: 95.93389375761487
- type: euclidean_f1
value: 91.49261334691798
- type: euclidean_precision
value: 93.25025960539979
- type: euclidean_recall
value: 89.8
- type: manhattan_accuracy
value: 99.83564356435643
- type: manhattan_ap
value: 95.89877504534601
- type: manhattan_f1
value: 91.53061224489795
- type: manhattan_precision
value: 93.4375
- type: manhattan_recall
value: 89.7
- type: max_accuracy
value: 99.83564356435643
- type: max_ap
value: 95.93389375761487
- type: max_f1
value: 91.53061224489795
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 62.2780055191805
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.94461701798904
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.865789666749535
- type: mrr
value: 50.61783804430863
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.97703436199298
- type: cos_sim_spearman
value: 30.71880290978946
- type: dot_pearson
value: 29.977036284086818
- type: dot_spearman
value: 30.71880290978946
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22799999999999998
- type: map_at_10
value: 1.559
- type: map_at_100
value: 8.866
- type: map_at_1000
value: 23.071
- type: map_at_3
value: 0.592
- type: map_at_5
value: 0.906
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 88.567
- type: mrr_at_100
value: 88.748
- type: mrr_at_1000
value: 88.748
- type: mrr_at_3
value: 87.667
- type: mrr_at_5
value: 88.067
- type: ndcg_at_1
value: 73.0
- type: ndcg_at_10
value: 62.202999999999996
- type: ndcg_at_100
value: 49.66
- type: ndcg_at_1000
value: 48.760999999999996
- type: ndcg_at_3
value: 67.52
- type: ndcg_at_5
value: 64.80799999999999
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 65.4
- type: precision_at_100
value: 51.72
- type: precision_at_1000
value: 22.014
- type: precision_at_3
value: 74.0
- type: precision_at_5
value: 69.19999999999999
- type: recall_at_1
value: 0.22799999999999998
- type: recall_at_10
value: 1.7680000000000002
- type: recall_at_100
value: 12.581999999999999
- type: recall_at_1000
value: 46.883
- type: recall_at_3
value: 0.618
- type: recall_at_5
value: 0.9690000000000001
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.295
- type: map_at_10
value: 7.481
- type: map_at_100
value: 13.120999999999999
- type: map_at_1000
value: 14.863999999999999
- type: map_at_3
value: 3.266
- type: map_at_5
value: 4.662
- type: mrr_at_1
value: 14.285999999999998
- type: mrr_at_10
value: 31.995
- type: mrr_at_100
value: 33.415
- type: mrr_at_1000
value: 33.432
- type: mrr_at_3
value: 27.551
- type: mrr_at_5
value: 30.306
- type: ndcg_at_1
value: 11.224
- type: ndcg_at_10
value: 19.166
- type: ndcg_at_100
value: 31.86
- type: ndcg_at_1000
value: 44.668
- type: ndcg_at_3
value: 17.371
- type: ndcg_at_5
value: 18.567
- type: precision_at_1
value: 14.285999999999998
- type: precision_at_10
value: 18.98
- type: precision_at_100
value: 7.041
- type: precision_at_1000
value: 1.555
- type: precision_at_3
value: 19.728
- type: precision_at_5
value: 20.816000000000003
- type: recall_at_1
value: 1.295
- type: recall_at_10
value: 14.482000000000001
- type: recall_at_100
value: 45.149
- type: recall_at_1000
value: 84.317
- type: recall_at_3
value: 4.484
- type: recall_at_5
value: 7.7170000000000005
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.96340000000001
- type: ap
value: 15.62835559397026
- type: f1
value: 56.42561616707867
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 55.280135823429546
- type: f1
value: 55.61428067547153
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 45.426677723253555
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.57411933003517
- type: cos_sim_ap
value: 69.68254951354992
- type: cos_sim_f1
value: 65.05232416646386
- type: cos_sim_precision
value: 60.36585365853659
- type: cos_sim_recall
value: 70.52770448548813
- type: dot_accuracy
value: 84.57411933003517
- type: dot_ap
value: 69.68256519978905
- type: dot_f1
value: 65.05232416646386
- type: dot_precision
value: 60.36585365853659
- type: dot_recall
value: 70.52770448548813
- type: euclidean_accuracy
value: 84.57411933003517
- type: euclidean_ap
value: 69.6825655240522
- type: euclidean_f1
value: 65.05232416646386
- type: euclidean_precision
value: 60.36585365853659
- type: euclidean_recall
value: 70.52770448548813
- type: manhattan_accuracy
value: 84.5502771651666
- type: manhattan_ap
value: 69.61700491283233
- type: manhattan_f1
value: 64.83962148211872
- type: manhattan_precision
value: 60.68553025074765
- type: manhattan_recall
value: 69.6042216358839
- type: max_accuracy
value: 84.57411933003517
- type: max_ap
value: 69.6825655240522
- type: max_f1
value: 65.05232416646386
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.80350836341057
- type: cos_sim_ap
value: 85.41051415803449
- type: cos_sim_f1
value: 77.99305633329602
- type: cos_sim_precision
value: 75.70113776360607
- type: cos_sim_recall
value: 80.42808746535263
- type: dot_accuracy
value: 88.80350836341057
- type: dot_ap
value: 85.41051488820463
- type: dot_f1
value: 77.99305633329602
- type: dot_precision
value: 75.70113776360607
- type: dot_recall
value: 80.42808746535263
- type: euclidean_accuracy
value: 88.80350836341057
- type: euclidean_ap
value: 85.41051374760137
- type: euclidean_f1
value: 77.99305633329602
- type: euclidean_precision
value: 75.70113776360607
- type: euclidean_recall
value: 80.42808746535263
- type: manhattan_accuracy
value: 88.74529436876625
- type: manhattan_ap
value: 85.38380242074525
- type: manhattan_f1
value: 78.02957839746892
- type: manhattan_precision
value: 74.71466816964914
- type: manhattan_recall
value: 81.65229442562365
- type: max_accuracy
value: 88.80350836341057
- type: max_ap
value: 85.41051488820463
- type: max_f1
value: 78.02957839746892
---
# nomic-embed-text-v1-unsupervised: A Reproducible Long Context (8192) Text Embedder
`nomic-embed-text-v1-unsupervised` is an 8192 context length text encoder. This is a checkpoint after the contrastive pretraining stage of the multi-stage contrastive training of the
[final model](https://huggingface.co/nomic-ai/nomic-embed-text-v1). The purpose of releasing this checkpoint is to open-source training artifacts from our Nomic Embed Text tech report [here](https://arxiv.org/pdf/2402.01613).
If you want to use a model to extract embeddings, we suggest using [nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1).
# Join the Nomic Community
- Nomic: [https://nomic.ai](https://nomic.ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
| [
"BIOSSES",
"SCIFACT"
] |
FredZhang7/paint-journey-v2 | FredZhang7 | text-to-image | [
"diffusers",
"text-to-image",
"midjourney",
"stable-diffusion",
"disco-diffusion",
"art",
"arxiv:2208.12242",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2023-01-03T01:25:12Z | 2023-02-05T07:14:46+00:00 | 1,068 | 36 | ---
language:
- en
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- midjourney
- stable-diffusion
- disco-diffusion
- art
- arxiv:2208.12242
inference: true
---
## Paint Journey V2 is [V1](https://huggingface.co/FredZhang7/paint-journey-v1) fine-tuned on 768x768 oil paintings by Midjourney V4, Open Journey V2, Disco Diffusion, and artists given permission
Begin the prompt with **((oil painting))** to add the oil paint effect. For digital and other painting styles, use similar prompts as you would for Midjourney V4 (with some tweaks), Stable Diffusion v1.5 (add more styles), Open Journey V2, or Disco Diffusion.
[](https://colab.research.google.com/github/AMLA-UBC/100-Exploring-the-World-of-Modern-Machine-Learning/blob/main/assets/PaintJourneyV2.ipynb)
## Examples
*All examples were generated using Camenduru's WebUI (see the Colab file)*

*⬆️ 768x1136 portraits, generated using descriptive prompts and without face restoration, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/character_settings.txt)*

*⬆️ 1280x768 (mostly) natural landscapes, used shorter prompts, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/nature_settings.txt)*

*⬆️ 1152x768 outerspace landscapes, used descriptive prompts, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/outerspace_settings.txt)*

*⬆️ 1280x768 lamborghini, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/lamborghini_settings.txt)*

*⬆️ 960x768 Eevee, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/eevee_settings.txt)*
## Comparisons
Paint Journey V2's paintings are closer to human-drawn art than Open Journey V2's.
Compared to models like Dreamlike Diffusion 1.0, PJ V2 tends to generate 768x768 or higher resolution images with reduced noise levels.
This model is also capable of generating stunning portraits at 768x1136 resolution without duplicated faces (with [Camenduru's WebUI](https://github.com/camenduru/stable-diffusion-webui)), a difficult task for models like DreamShaper 3.3.
At lower resolutions, DreamShaper 3.3 tends to generate higher quality portraits than PJ V2 in terms of noise levels, given the same (short) positive and negative prompts.
However, PJ V2 can craft more stunning masterpieces with more descriptive positive and negative prompts and can still generate beautiful landscapes with shorter prompts.
## Training
Instead of solely fine-tuning its Unet, Paint Journey V2 focuses on fine-tuning its text encoder with a diverse range of prompts.
This allows for a seamless blend of the digital and oil painting styles into various other types of prompts, resulting in a more natural and dynamic output.
This model was trained on a curated dataset of roughly 300 images hand-picked from Midjourney, [Prompt Hero](https://prompthero.com/), [PixaBay](https://pixabay.com/images/search/paintings/), Open Journey V2, and Reddit.
Before training, I used R-ESRGAN 4x on many images to increase their resolution and reduce noise.
## Running out of prompts?
Useful resources: [Lexica.art](https://lexica.art/), [Fast GPT PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2), [Prompt Hero](https://prompthero.com/)
## Output Dimensions
Portrait sizes include, but are not limited to, `512x768`, `768x768`, and `768x1136`.
Landscape sizes include, but are not limited to, `768x512`, `768x768`, `1152x768`, and `1280x768`.
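All of the sizes above share one property: both dimensions are multiples of 8, the downsampling factor of Stable Diffusion's VAE, so any custom size should satisfy the same constraint. A quick sanity check (the `valid_sd_size` helper is illustrative, not part of the model repo):

```python
def valid_sd_size(width: int, height: int, factor: int = 8) -> bool:
    # Stable Diffusion's VAE downsamples by 8x, so both dimensions
    # must be divisible by 8 to map cleanly onto the latent grid.
    return width % factor == 0 and height % factor == 0

for w, h in [(768, 1136), (1280, 768), (1000, 750)]:
    print(w, h, valid_sd_size(w, h))
```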
## Camenduru's WebUI
```
git clone -b v1.6 https://github.com/camenduru/stable-diffusion-webui
```
<details>
<summary> Click to use Automatic1111's WebUI instead, though it may output less artistic images </summary>
```
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
```
</details>
Download [checkpoint](./paint_journey_v2.ckpt) and [vae](./paint_journey_v2.vae.pt) to the `./stable-diffusion-webui/models/Stable-diffusion` folder. Run `webui-user.bat`.
## 🧨 Diffusers
*Tip: using double, triple, or quadruple brackets around a word (e.g. "((WORD))") will put an 'emphasis' on that word*
```bash
pip install --upgrade diffusers transformers
```
```python
# see more sampling algorithms at https://huggingface.co/docs/diffusers/using-diffusers/schedulers#changing-the-scheduler
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch, random
from datetime import datetime

pipe = StableDiffusionPipeline.from_pretrained("FredZhang7/paint-journey-v2")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

def random_seed():
    return random.randint(0, 2**32 - 1)

prompt = "((oil painting)), gentle waves, bright blue sky, white sails billowing, sun glistening on the surface, salty sea air, distant horizon, calm breeze, birds soaring overhead, vibrant colors, artstation digital painting, high resolution, uhd, 4 k, 8k wallpaper" # what you want to see
negative_prompt = "low-res, blurry, haze, dark clouds looming, choppy waves, engine failing, sails tattered, stormy winds" # what you don't want to see
seed = random_seed() # replace with the desired seed if needed
width, height = 1280, 768 # width and height of the generated image
cfg_scale = 7.5 # classifier-free guidance scale, smaller means more creative, 7 to 11 is usually a good range
num_inference_steps = 40 # sampling steps, 30 to 40 is usually good for Euler Ancestral

generator = torch.Generator("cuda").manual_seed(seed)
with torch.autocast("cuda"):
    image = pipe(prompt=prompt,
                 negative_prompt=negative_prompt,
                 num_inference_steps=num_inference_steps,
                 width=width, height=height,
                 generator=generator,
                 guidance_scale=cfg_scale).images[0]

def generate_filename(string, seed):
    # strip characters that are invalid in filenames
    invalid_chars = ["<", ">", ":", '"', "/", "\\", "|", "?", "*"]
    for char in invalid_chars:
        string = string.replace(char, "")
    return f"{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}_{seed}_{string}"

image.save(f"./{generate_filename(prompt, seed)}.png")
```
## Safety Checker V2
The official [stable diffusion safety checker](https://huggingface.co/CompVis/stable-diffusion-safety-checker) uses up 1.22GB VRAM.
I recommend using [Google Safesearch Mini V2](https://huggingface.co/FredZhang7/google-safesearch-mini-v2) (220MB) to save 1.0GB VRAM. | [
"CRAFT"
] |
iMahdiGhazavi/bertopic-crypto-topic-modeling | iMahdiGhazavi | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 2024-04-12T10:01:01Z | 2024-04-12T10:03:37+00:00 | 1,060 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# bertopic-crypto-topic-modeling
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("iMahdiGhazavi/bertopic-crypto-topic-modeling")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 50
* Number of training documents: 4000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | bitcoin - btc - crypto - buy - cryptocurrency | 10 | -1_bitcoin_btc_crypto_buy |
| 0 | around - see - would - bitcoin - go | 1215 | 0_around_see_would_bitcoin |
| 1 | bitcoin - buy - fix - want - love | 363 | 1_bitcoin_buy_fix_want |
| 2 | project - airdrop - team - great - always | 304 | 2_project_airdrop_team_great |
| 3 | covidvaccine - covid - vaccine - dose - get | 288 | 3_covidvaccine_covid_vaccine_dose |
| 4 | interoperable - struggle - libonomy - link - coin | 275 | 4_interoperable_struggle_libonomy_link |
| 5 | airdrop - bsc - airdropinspector - dinowallet - binancesmartchain | 134 | 5_airdrop_bsc_airdropinspector_dinowallet |
| 6 | binancesmartchain - binance - pancakeswap - tg - launchedjoin | 129 | 6_binancesmartchain_binance_pancakeswap_tg |
| 7 | cryptocurrency - gt - denation - crypto - btc | 98 | 7_cryptocurrency_gt_denation_crypto |
| 8 | bitcoin - giveaway - opt - scalp - short | 94 | 8_bitcoin_giveaway_opt_scalp |
| 9 | tradingview - thought - idea - binancebtcusdt - trade | 62 | 9_tradingview_thought_idea_binancebtcusdt |
| 10 | dev - everrise - core - utility - everown | 61 | 10_dev_everrise_core_utility |
| 11 | cryptocurrencie - technology - fintech - morbex - money | 53 | 11_cryptocurrencie_technology_fintech_morbex |
| 12 | bitfinex - rebound - spike - record - crash | 51 | 12_bitfinex_rebound_spike_record |
| 13 | link - doge - eth - sol - update | 50 | 13_link_doge_eth_sol |
| 14 | positionv - entry - target - stop - signal | 47 | 14_positionv_entry_target_stop |
| 15 | bet - odd - betting - gamblingtwitter - wager | 43 | 15_bet_odd_betting_gamblingtwitter |
| 16 | plastic - ico - investment - arno - plasticfinance | 40 | 16_plastic_ico_investment_arno |
| 17 | kitkart - addressovwmdgywzcvyqundajjrnjatchpre - io - android - donate | 37 | 17_kitkart_addressovwmdgywzcvyqundajjrnjatchpre_io_android |
| 18 | davido - igbo - delivery - ibadan - giroud | 37 | 18_davido_igbo_delivery_ibadan |
| 19 | crush - superb - preserve - competition - completely | 34 | 19_crush_superb_preserve_competition |
| 20 | coinhuntworld - vault - location - awesome - play | 32 | 20_coinhuntworld_vault_location_awesome |
| 21 | tweet - follow - tone - tips - insight | 32 | 21_tweet_follow_tone_tips |
| 22 | malaysia - miner - btcusd - crush - hourly | 31 | 22_malaysia_miner_btcusd_crush |
| 23 | currently - breathe - jumpy - dismiss - mofos | 31 | 23_currently_breathe_jumpy_dismiss |
| 24 | change - coinbase - pro - worried - month | 30 | 24_change_coinbase_pro_worried |
| 25 | cryptonews - rixx - ethereum - mover - report | 28 | 25_cryptonews_rixx_ethereum_mover |
| 26 | ksi - superstar - lose - jj - youtube | 28 | 26_ksi_superstar_lose_jj |
| 27 | block - tx - tictoknextblock - recipient - gmt | 28 | 27_block_tx_tictoknextblock_recipient |
| 28 | ape - nftcommunity - nftartist - nftart - nftcollector | 23 | 28_ape_nftcommunity_nftartist_nftart |
| 29 | onestop - legendary - trading - shop - usdt | 22 | 29_onestop_legendary_trading_shop |
| 30 | america - client - bank - illegal - petroleum | 22 | 30_america_client_bank_illegal |
| 31 | fence - bear - long - market - last | 18 | 31_fence_bear_long_market |
| 32 | forex - invest - business - stock - entrepreneur | 18 | 32_forex_invest_business_stock |
| 33 | avg - hour - xbt - xbtusd - information | 17 | 33_avg_hour_xbt_xbtusd |
| 34 | pumping - challenge - cryptos - recover - interested | 17 | 34_pumping_challenge_cryptos_recover |
| 35 | price - current - agree - bitcoin - early | 17 | 35_price_current_agree_bitcoin |
| 36 | account - procedure - immediately - worry - management | 16 | 36_account_procedure_immediately_worry |
| 37 | credit - card - fast - exchange - xrp | 16 | 37_credit_card_fast_exchange |
| 38 | day - avg - move - low - high | 15 | 38_day_avg_move_low |
| 39 | bonus - startup - person - hi - dm | 15 | 39_bonus_startup_person_hi |
| 40 | last - price - drop - compare - right | 14 | 40_last_price_drop_compare |
| 41 | usd - safemoon - price - dogecoin - ethereum | 14 | 41_usd_safemoon_price_dogecoin |
| 42 | donation - willing - collect - initial - monetary | 14 | 42_donation_willing_collect_initial |
| 43 | value - decrease - euro - last - lose | 14 | 43_value_decrease_euro_last |
| 44 | token - bll - billiontoken - milliontoken - milion | 14 | 44_token_bll_billiontoken_milliontoken |
| 45 | help - address - xdecbdabddbe - dear - hi | 13 | 45_help_address_xdecbdabddbe_dear |
| 46 | technicalanalysis - technical - analysis - jpy - eur | 13 | 46_technicalanalysis_technical_analysis_jpy |
| 47 | xddddabd - emojiday - cryptogiveaway - giveaway - amp | 12 | 47_xddddabd_emojiday_cryptogiveaway_giveaway |
| 48 | asic - cryptomine - yieldfarming - bitcoinmine - tool | 11 | 48_asic_cryptomine_yieldfarming_bitcoinmine |
</details>
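The `Label` column above follows BERTopic's default naming scheme: the topic id joined with its top keywords by underscores. A minimal sketch of how those labels are formed (the helper itself is illustrative, not part of the BERTopic API):

```python
def topic_label(topic_id: int, keywords: list, n: int = 4) -> str:
    # BERTopic's default labels join the topic id with its top n keywords,
    # e.g. topic 0 with keywords ["around", "see", "would", "bitcoin", ...]
    # becomes "0_around_see_would_bitcoin".
    return f"{topic_id}_" + "_".join(keywords[:n])

print(topic_label(0, ["around", "see", "would", "bitcoin", "go"]))
# → 0_around_see_would_bitcoin
```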
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: 50
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.6.1
* Transformers: 4.38.2
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
| [
"BEAR"
] |
Xenova/multilingual-e5-large | Xenova | feature-extraction | [
"transformers.js",
"onnx",
"xlm-roberta",
"feature-extraction",
"base_model:intfloat/multilingual-e5-large",
"base_model:quantized:intfloat/multilingual-e5-large",
"region:us"
] | 2023-07-01T15:55:18Z | 2025-03-06T21:51:30+00:00 | 1,049 | 9 | ---
base_model: intfloat/multilingual-e5-large
library_name: transformers.js
---
https://huggingface.co/intfloat/multilingual-e5-large with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
You can then use the model to compute embeddings, as follows:
```js
import { pipeline } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/multilingual-e5-large');
// Compute sentence embeddings
const texts = ['Hello world.', 'Example sentence.'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
console.log(embeddings);
// Tensor {
//   dims: [ 2, 1024 ],
//   type: 'float32',
//   data: Float32Array(2048) [ 0.019079938530921936, 0.041718777269124985, ... ],
//   size: 2048
// }
console.log(embeddings.tolist()); // Convert embeddings to a JavaScript list
// [
// [ 0.019079938530921936, 0.041718777269124985, 0.037672195583581924, ... ],
// [ 0.020936904475092888, 0.020080938935279846, -0.00787576474249363, ... ]
// ]
```
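The `pooling: 'mean'` and `normalize: true` options average the per-token embeddings into one sentence vector and L2-normalize it, so cosine similarity reduces to a dot product. A pure-Python sketch of those two steps on a toy 2-token, 2-dimensional example:

```python
import math

def mean_pool(token_embeddings):
    # Average the per-token vectors into a single sentence vector.
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(tok[d] for tok in token_embeddings) / n for d in range(dim)]

def l2_normalize(vec):
    # Scale the vector to unit length.
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def cos_sim(a, b):
    # For L2-normalized vectors, cosine similarity is just the dot product.
    return sum(x * y for x, y in zip(a, b))

tokens = [[1.0, 0.0], [0.0, 1.0]]           # toy "sentence" of 2 token vectors
sentence_emb = l2_normalize(mean_pool(tokens))
print(cos_sim(sentence_emb, sentence_emb))  # ≈ 1.0 for any normalized vector
```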
You can also use the model for retrieval. For example:
```js
import { pipeline, cos_sim } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/multilingual-e5-large');

// List of documents you want to embed. E5 models expect passages to be prefixed with "passage: ".
const texts = [
    'passage: Hello world.',
    'passage: The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.',
    'passage: I love pandas so much!',
];

// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });

// Prepend the recommended query instruction for retrieval.
const query_prefix = 'query: ';
const query = query_prefix + 'What is a panda?';
const query_embeddings = await extractor(query, { pooling: 'mean', normalize: true });

// Sort by cosine similarity score
const scores = embeddings.tolist().map(
    (embedding, i) => ({
        id: i,
        score: cos_sim(query_embeddings.data, embedding),
        text: texts[i],
    })
).sort((a, b) => b.score - a.score);
console.log(scores);
// [
//   { id: 1, score: ..., text: 'passage: The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.' },
//   { id: 2, score: ..., text: 'passage: I love pandas so much!' },
//   { id: 0, score: ..., text: 'passage: Hello world.' }
// ]
```
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | [
"BEAR"
] |
Henrychur/MMed-Llama-3-8B-EnIns | Henrychur | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"conversational",
"en",
"zh",
"ja",
"fr",
"ru",
"es",
"dataset:Henrychur/MMedC",
"dataset:axiong/pmc_llama_instructions",
"arxiv:2402.13963",
"base_model:Henrychur/MMed-Llama-3-8B",
"base_model:finetune:Henrychur/MMed-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-22T14:07:05Z | 2024-09-04T11:06:38+00:00 | 1,035 | 4 | ---
base_model: Henrychur/MMed-Llama-3-8B
datasets:
- Henrychur/MMedC
- axiong/pmc_llama_instructions
language:
- en
- zh
- ja
- fr
- ru
- es
library_name: transformers
license: llama3
tags:
- medical
---
# MMedLM
[💻Github Repo](https://github.com/MAGIC-AI4Med/MMedLM) [🖨️arXiv Paper](https://arxiv.org/abs/2402.13963)
The official model weights for "Towards Building Multilingual Language Model for Medicine".
## Introduction
This repo contains MMed-Llama 3-8B-EnIns, which is based on MMed-Llama 3-8B. We further fine-tune the model on an **English instruction fine-tuning dataset** (from PMC-LLaMA). We did this for a fair comparison with existing models on commonly used English benchmarks.
Notice that MMed-Llama 3-8B-EnIns has only been trained on pmc_llama_instructions, an English medical SFT dataset focused on QA tasks, so this model's ability to respond to multilingual input is still limited.
The model can be loaded as follows:
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Henrychur/MMed-Llama-3-8B-EnIns")
model = AutoModelForCausalLM.from_pretrained("Henrychur/MMed-Llama-3-8B-EnIns", torch_dtype=torch.float16)
```
- Inference format is similar to Llama 3-Instruct, you can check our inference code [here](https://github.com/MAGIC-AI4Med/MedS-Ins/tree/main/Inference).
- For multiple-choice question and answering tasks, we suggest using the following instruction.
```py
from model import MedS_Llama3 # https://github.com/MAGIC-AI4Med/MedS-Ins/blob/main/Inference/model.py
sdk_api = MedS_Llama3(model_path="Henrychur/MMed-Llama-3-8B-EnIns", gpu_id=0)
INSTRUCTION = "Given a question and a list of options, select the correct answer from the options directly."
input_ = "Question: A mother brings her 3-week-old infant to the pediatrician's office because she is concerned about his feeding habits. He was born without complications and has not had any medical problems up until this time. However, for the past 4 days, he has been fussy, is regurgitating all of his feeds, and his vomit is yellow in color. On physical exam, the child's abdomen is minimally distended but no other abnormalities are appreciated. Which of the following embryologic errors could account for this presentation?\nOptions: A: Abnormal migration of ventral pancreatic bud\tB: Complete failure of proximal duodenum to recanalize\tC: Abnormal hypertrophy of the pylorus\tD: Failure of lateral body folds to move ventrally and fuse in the midline\t"
results = sdk_api.chat([], input_, INSTRUCTION)
print(results)
```
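Since the instruction asks the model to "select the correct answer from the options directly", downstream evaluation typically just parses the option letter out of the reply. A minimal, hypothetical parsing helper (not part of the MedS-Ins repo):

```python
import re

def extract_choice(answer, options="ABCD"):
    # Return the first standalone option letter found in the model's reply,
    # or None if no such letter appears.
    match = re.search(r"\b([%s])\b" % options, answer)
    return match.group(1) if match else None

print(extract_choice("The correct answer is B: Complete failure of proximal duodenum to recanalize."))
# → B
```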
## News
[2024.2.21] Our pre-print paper is released on arXiv. Dive into our findings [here](https://arxiv.org/abs/2402.13963).
[2024.2.20] We release [MMedLM](https://huggingface.co/Henrychur/MMedLM) and [MMedLM 2](https://huggingface.co/Henrychur/MMedLM2). With auto-regressive continued training on MMedC, these models achieve superior performance compared to all other open-source models, even rivaling GPT-4 on MMedBench.
[2024.2.20] We release [MMedC](https://huggingface.co/datasets/Henrychur/MMedC), a multilingual medical corpus containing 25.5B tokens.
[2024.2.20] We release [MMedBench](https://huggingface.co/datasets/Henrychur/MMedBench), a new multilingual medical multi-choice question-answering
benchmark with rationale. Check out the leaderboard [here](https://henrychur.github.io/MultilingualMedQA/).
## Evaluation on Commonly-used English Benchmark
The further pretrained MMed-Llama 3 also showcases its strong performance in the medical domain across different English benchmarks.
| Method | Size | Year | MedQA | MedMCQA | PubMedQA | MMLU_CK | MMLU_MG | MMLU_AN | MMLU_PM | MMLU_CB | MMLU_CM | Avg. |
| ------------------- | ---- | ------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | --------- |
| MedAlpaca | 7B | 2023.3 | 41.7 | 37.5 | 72.8 | 57.4 | 69.0 | 57.0 | 67.3 | 65.3 | 54.3 | 58.03 |
| PMC-LLaMA | 13B | 2023.9 | 56.4 | 56.0 | 77.9 | - | - | - | - | - | - | - |
| MEDITRON | 7B | 2023.11 | 57.2 | 59.2 | 74.4 | 64.6 | 59.9 | 49.3 | 55.4 | 53.8 | 44.8 | 57.62 |
| Mistral | 7B | 2023.12 | 50.8 | 48.2 | 75.4 | 68.7 | 71.0 | 55.6 | 68.4 | 68.1 | 59.5 | 62.97 |
| Gemma | 7B | 2024.2 | 47.2 | 49.0 | 76.2 | 69.8 | 70.0 | 59.3 | 66.2 | **79.9** | 60.1 | 64.19 |
| BioMistral | 7B | 2024.2 | 50.6 | 48.1 | 77.5 | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 58.97 |
| Llama 3 | 8B | 2024.4 | 60.9 | 50.7 | 73.0 | **72.1** | 76.0 | 63.0 | 77.2 | **79.9** | 64.2 | 68.56 |
| MMed-Llama 3~(Ours) | 8B | - | **65.4** | **63.5** | **80.1** | 71.3 | **85.0** | **69.6** | **77.6** | 74.3 | **66.5** | **72.59** |
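The `Avg.` column is the simple mean of the nine benchmark scores in each row; for example, for the MMed-Llama 3 row:

```python
# MedQA, MedMCQA, PubMedQA, MMLU_CK, MMLU_MG, MMLU_AN, MMLU_PM, MMLU_CB, MMLU_CM
scores = [65.4, 63.5, 80.1, 71.3, 85.0, 69.6, 77.6, 74.3, 66.5]
average = sum(scores) / len(scores)
print(round(average, 2))  # → 72.59, matching the table
```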
## Contact
If you have any question, please feel free to contact [email protected].
## Citation
```
@misc{qiu2024building,
title={Towards Building Multilingual Language Model for Medicine},
author={Pengcheng Qiu and Chaoyi Wu and Xiaoman Zhang and Weixiong Lin and Haicheng Wang and Ya Zhang and Yanfeng Wang and Weidi Xie},
year={2024},
eprint={2402.13963},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"MEDQA",
"PUBMEDQA"
] |
GPT4All-Community/Phi-3.1-mini-128k-instruct-GGUF | GPT4All-Community | text-generation | [
"transformers",
"gguf",
"text-generation-inference",
"GGUF",
"GPT4All-community",
"GPT4All",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us",
"conversational"
] | 2024-07-31T00:38:31Z | 2024-08-13T08:40:44+00:00 | 1,032 | 2 | ---
base_model: Microsoft/Phi-3-Mini-128K-Instruct
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
model_name: Phi-3-Mini-128K-Instruct
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- GGUF
- GPT4All-community
- GPT4All
- nlp
- code
inference: false
model_creator: Microsoft
model_type: phi3
quantized_by: ThiloteE
---
> [!NOTE]
> This is a model that is assumed to perform well, but may require more testing and user feedback. Be aware: only models featured within the GUI of GPT4All are curated and officially supported by Nomic. Use at your own risk.
# About
<!-- ### quantize_version: 3 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
- Static quants of https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/ at commit [d548c23](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/commit/d548c233192db00165d842bf8edff054bb3212f8)
- Quantized by [ThiloteE](https://huggingface.co/ThiloteE) with llama.cpp commit [c3776ca](https://github.com/ggerganov/llama.cpp/commit/c3776cacabce2ee35f172fb72be7a519752125fa)
# Notes
These quants were created with a customized configuration that have been proven to not cause visible end of string (eos) tokens during inference with [GPT4All](https://www.nomic.ai/gpt4all).
The config.json, generation_config.json, and tokenizer_config.json differ from the original configuration files found in the original model's repository at the time these quants were created.
# Prompt Template (for GPT4All)
Example System Prompt:
```Markdown
<|system|>
You are a helpful assistant.<|end|>
```
Chat Template:
```Markdown
<|user|>
%1<|end|>
<|assistant|>
%2<|end|>
```
Do not miss the newlines at the end! Have a look at the raw readme.md file, as it differs from the rendered output in the modelcard.
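GPT4All substitutes the user message for `%1` and the model reply for `%2`. A sketch of what the fully expanded prompt looks like for a single turn (the helper is illustrative only, not part of GPT4All):

```python
def build_phi3_prompt(system, user):
    # Expand the GPT4All template: system prompt, then one user turn,
    # leaving the assistant tag open for the model to complete.
    return (
        f"<|system|>\n{system}<|end|>\n"
        f"<|user|>\n{user}<|end|>\n"
        f"<|assistant|>\n"
    )

print(build_phi3_prompt("You are a helpful assistant.", "Hello!"))
```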
# Context Length
`131072`
Use a lower value during inference, if you do not have enough RAM or VRAM.
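The main reason long contexts are expensive is the KV cache, which grows linearly with context length. A rough estimate, assuming Phi-3-mini's published configuration (32 layers, 32 KV heads, head dimension 96, 16-bit cache values) — treat the exact figures as an approximation:

```python
def kv_cache_bytes(n_ctx, n_layers=32, n_kv_heads=32, head_dim=96, bytes_per_value=2):
    # Two tensors (K and V) per layer, one head_dim vector per head per token.
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_value

for ctx in (2048, 8192, 131072):
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**30:5.2f} GiB")
# the full 131072-token context needs roughly 48 GiB for the cache alone
```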
# Provided Quants
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/GPT4All-Community/Phi-3-Mini-128K-Instruct-GGUF/resolve/main/Phi-3-Mini-128K-Instruct-Q4_0.gguf) | Q4_0 | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/GPT4All-Community/Phi-3-Mini-128K-Instruct-GGUF/resolve/main/Phi-3-Mini-128K-Instruct-F16.gguf) | f16 | 7.7 | 16 bpw, overkill |
# About GGUF
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) for
more details, including on how to concatenate multi-part files.
Here is a handy graph by ikawrakow comparing some quant types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
# Thanks
I thank Mradermacher and TheBloke for Inspiration to this model card and their contributions to open source. Also 3Simplex for lots of help along the way.
Shoutout to the GPT4All and llama.cpp communities :-)
------
<!-- footer end -->
<!-- original-model-card start -->
------
------
# Original Model card:
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) <br>
📖 [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) <br>
🛠️ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3)
| | Short Context | Long Context |
| :- | :- | :- |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited for applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## Release Notes
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback.
The model used additional post-training data, leading to substantial gains on long-context understanding, instruction following, and structured output.
We also improved multi-turn conversation quality, added explicit support for the `<|system|>` tag, and significantly improved reasoning capability.
We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
The tables below highlight improvements in instruction following, structured output, reasoning, and long-context understanding in the new release, on our public and internal benchmark datasets.
| Benchmarks | Original | June 2024 Update |
| :- | :- | :- |
| Instruction Extra Hard | 5.7 | 5.9 |
| Instruction Hard | 5.0 | 5.2 |
| JSON Structure Output | 1.9 | 60.1 |
| XML Structure Output | 47.8 | 52.9 |
| GPQA | 25.9 | 29.7 |
| MMLU | 68.1 | 69.7 |
| **Average** | **25.7** | **37.3** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
| :-------------------| :------| :------| :------| :------| :------| :------| :---------|
| Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | **68.8** |
| June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | **84.6** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
| :-------------------| :--------| :-----| :------| :------| :------------| :---------|
| Original | 27 | 29 | 40 | 33 | 33 | **32.4** |
| June 2024 Update | 85 | 63 | 72 | 93 | 72 | **77** |
Notes: if users would like to check out the previous version, use the git commit id **bb5bf1e4001277a606e11debca0ef80323e5f824**. For the model conversion, e.g. GGUF and other formats, we invite the community to experiment with various approaches and share your valuable feedback. Let's innovate together!
## How to Use
Phi-3 Mini-128K-Instruct has been integrated in the development version (4.41.3) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```
Phi-3 Mini-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3)
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
The model generates text after `<|assistant|>`. For few-shot prompting, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
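Since the template is plain text, it can also be assembled programmatically. Below is a minimal helper sketch (`render_phi3_prompt` is not part of any official API) that renders a message list into this format; in practice, prefer `tokenizer.apply_chat_template`, which applies the template bundled with the tokenizer.

```python
def render_phi3_prompt(messages):
    """Render a list of {"role", "content"} dicts into the Phi-3 chat
    format shown above. Each message becomes <|role|>\\ncontent<|end|>,
    and a trailing <|assistant|> tag cues the model to generate."""
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>" for m in messages]
    parts.append("<|assistant|>")
    return "\n".join(parts)

prompt = render_phi3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "I am going to Paris, what should I see?"},
])
print(prompt)
```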
### Sample inference code
The following code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-128k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Note: if you want to use flash attention, call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="flash_attention_2"`.
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 10 days
* Training data: 4.9T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between May and June 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release date: June 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.9 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We focus on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a Premier League game on a particular day might be good training data for frontier models, but we remove such information to leave more model capacity for reasoning in small models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results under completion format for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| Category | Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| :----------| :-----------| :---------------------| :----------| :------------| :--------------| :----------------| :-------------------|
| Popular aggregated benchmark | AGI Eval <br>5-shot| 39.5 | 42.1 | 35.1 | 45.2 | 42 | 48.4 |
| | MMLU <br>5-shot | 69.7 | 63.6 | 61.7 | 70.5 | 66.5 | 71.4 |
| | BigBench Hard <br>3-shot | 72.1 | 59.6 | 57.3 | 69.7 | 51.5 | 68.3 |
| Language Understanding | ANLI <br>7-shot | 52.3 | 48.7 | 47.1 | 55.2 | 57.3 | 58.1 |
| | HellaSwag <br>5-shot | 70.5 | 49.8 | 58.5 | 70.4 | 71.1 | 78.8 |
| Reasoning | ARC Challenge <br>10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| | BoolQ <br>0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| | MedQA <br>2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| | OpenBookQA <br>10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| | PIQA <br>5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| | GPQA <br>0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| | Social IQA <br>5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| | TruthfulQA (MC2) <br>10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| | WinoGrande <br>5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Factual Knowledge | TriviaQA <br>5-shot | 57.8 | 72.3 | 75.2 | 82.2 | 67.7 | 85.8 |
| Math | GSM8K CoT <br>8-shot | 85.3 | 59.8 | 46.4 | 64.7 | 77.4 | 78.1 |
| Code Generation | HumanEval <br>0-shot | 60.4 | 34.1 | 28.0 | 37.8 | 60.4 | 62.2 |
| | MBPP <br>3-shot | 70.0 | 51.5 | 50.8 | 60.2 | 67.7 | 77.8 |
| **Average** | | **66.4** | **56.0** | **56.4** | **64.4** | **65.5** | **70.3** |
**Long Context**: Phi-3 Mini-128K-Instruct supports a 128K context length, so the model is capable of several long-context tasks, including long document/meeting summarization and long document QA.
| Benchmark | Phi-3 Mini-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct |
| :---------------| :--------------------------|:------------|:--------------|:---------------------|
| GovReport | 25.3 | 4.9 | 20.3 | 10.3 |
| QMSum | 21.9 | 15.5 | 20.6 | 2.9 |
| Qasper | 41.6 | 23.5 | 26.6 | 8.1 |
| SQuALITY | 24.1 | 14.7 | 16.2 | 25 |
| SummScreenFD | 16.8 | 9.3 | 11.3 | 5.1 |
| **Average** | **25.9** | **13.6** | **19.0** | **10.3** |
We take a closer look at different categories across 100 public benchmark datasets in the table below:
| Category | Phi-3-Mini-128K-Instruct | Gemma-7B | Mistral-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo |
|:----------|:--------------------------|:----------|:------------|:--------------|:---------------------|:---------------|
| Popular aggregated benchmark | 60.6 | 59.4 | 56.5 | 66.2 | 59.9 | 67.0 |
| Reasoning | 69.4 | 60.3 | 62.8 | 68.1 | 69.6 | 71.7 |
| Language understanding | 57.5 | 57.6 | 52.5 | 66.1 | 63.2 | 67.7 |
| Code generation | 61.0 | 45.6 | 42.9 | 52.7 | 56.4 | 70.4 |
| Math | 51.6 | 35.8 | 25.4 | 40.3 | 41.1 | 52.8 |
| Factual knowledge | 35.8 | 46.7 | 49.8 | 58.6 | 43.1 | 63.4 |
| Multilingual | 56.4 | 66.5 | 57.4 | 66.7 | 66.6 | 71.0 |
| Robustness | 61.1 | 38.4 | 40.6 | 51.0 | 64.5 | 69.3 |
Overall, the model, with only 3.8B parameters, achieves a level of language understanding and reasoning similar to that of much larger models. However, it is still fundamentally limited by its size for certain tasks: the model simply does not have the capacity to store much world knowledge, which can be seen, for example, in its low performance on TriviaQA. We believe this weakness can be mitigated by augmenting Phi-3-Mini with a search engine.
## Cross Platform Support
[ONNX runtime](https://onnxruntime.ai/blogs/accelerating-phi-3) now supports Phi-3 mini models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each target. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 mini across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3 Mini-128K-Instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier-generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
* Optimized inference on GPU, CPU, and mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
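The hardware notes above can be folded into a small helper that picks an attention backend from the GPU's compute capability. The Ampere (8.0) threshold below is an assumption based on typical flash-attention requirements — check your flash-attn build's documentation before relying on it.

```python
def pick_attn_implementation(compute_capability):
    """Return a value for from_pretrained(attn_implementation=...).
    Flash attention 2 generally needs an Ampere-or-newer GPU
    (compute capability >= 8.0); older GPUs such as the V100 (7.0)
    fall back to "eager" attention."""
    major, minor = compute_capability
    return "flash_attention_2" if (major, minor) >= (8, 0) else "eager"

# Example usage with torch (commented out to avoid the model download):
# import torch
# from transformers import AutoModelForCausalLM
# impl = pick_attn_implementation(torch.cuda.get_device_capability())
# model = AutoModelForCausalLM.from_pretrained(
#     "microsoft/Phi-3-mini-128k-instruct",
#     trust_remote_code=True,
#     attn_implementation=impl,
# )
print(pick_attn_implementation((7, 0)))  # V100 -> eager
print(pick_attn_implementation((9, 0)))  # H100 -> flash_attention_2
```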
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
<!-- original-model-card end -->
<!-- end -->
---
datasets:
- jinaai/negation-dataset
language: en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- finetuner
- sentence-transformers
- feature-extraction
- sentence-similarity
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>, <a href="https://github.com/jina-ai/finetuner"><b>Finetuner</b></a> team.</b>
</p>
## Intended Usage & Model Info
`jina-embedding-t-en-v1` is a tiny language model that has been trained using Jina AI's Linnaeus-Clean dataset.
This dataset consists of 380 million query-document sentence pairs.
These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
The Linnaeus-Full dataset, from which the Linnaeus-Clean dataset is derived, originally contained 1.6 billion sentence pairs.
The model has a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more.
With a compact size of just 14 million parameters,
the model enables lightning-fast inference on CPU, while still delivering impressive performance.
Additionally, we provide the following options:
- [`jina-embedding-t-en-v1`](https://huggingface.co/jinaai/jina-embedding-t-en-v1): 14 million parameters **(you are here)**.
- [`jina-embedding-s-en-v1`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters.
- [`jina-embedding-b-en-v1`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters.
- [`jina-embedding-l-en-v1`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters.
- `jina-embedding-1b-en-v1`: 1.2 billion parameters, 10 times bert-base (soon).
- `jina-embedding-6b-en-v1`: 6 billion parameters, 30 times bert-base (soon).
## Data & Parameters
Please checkout our [technical blog](https://arxiv.org/abs/2307.11224).
## Metrics
We compared the model against `all-minilm-l6-v2`/`all-mpnet-base-v2` from sbert and `text-embedding-ada-002` from OpenAI:
|Name|param |dimension|
|------------------------------|-----|------|
|all-minilm-l6-v2|23m |384|
|all-mpnet-base-v2 |110m |768|
|ada-embedding-002|Unknown/OpenAI API |1536|
|jina-embedding-t-en-v1|14m |312|
|jina-embedding-s-en-v1|35m |512|
|jina-embedding-b-en-v1|110m |768|
|jina-embedding-l-en-v1|330m |1024|
|Name|STS12|STS13|STS14|STS15|STS16|STS17|TRECOVID|Quora|SciFact|
|------------------------------|-----|-----|-----|-----|-----|-----|--------|-----|-----|
|all-minilm-l6-v2|0.724|0.806|0.756|0.854|0.79 |0.876|0.473 |0.876|0.645 |
|all-mpnet-base-v2|0.726|**0.835**|0.78 |0.857|0.8 |**0.906**|0.513 |0.875|0.656 |
|ada-embedding-002|0.698|0.833|0.761|0.861|**0.86** |0.903|**0.685** |0.876|**0.726** |
|jina-embedding-t-en-v1|0.717|0.773|0.731|0.829|0.777|0.860|0.482 |0.840|0.522 |
|jina-embedding-s-en-v1|0.743|0.786|0.738|0.837|0.80|0.875|0.523 |0.857|0.524 |
|jina-embedding-b-en-v1|**0.751**|0.809|0.761|0.856|0.812|0.890|0.606 |0.876|0.594 |
|jina-embedding-l-en-v1|0.745|0.832|**0.781**|**0.869**|0.837|0.902|0.573 |**0.881**|0.598 |
## Inference Speed
We encoded the single sentence "What is the current weather like today?" 10k times on:
1. CPU: MacBook Pro 2020, 2 GHz Quad-Core Intel Core i5
2. GPU: one NVIDIA RTX 3090
and recorded the time spent to demonstrate embedding speed:
|Name|param |dimension| time@cpu | time@gpu |
|------------------------------|-----|------|-----|-----|
|jina-embedding-t-en-v1|14m |312| 5.78s | 2.36s|
|all-minilm-l6-v2|23m |384| 11.95s | 2.70s |
|jina-embedding-s-en-v1|35m |512| 17.25s | 2.81s |
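A measurement like this can be reproduced with a small timing harness such as the sketch below; the lambda is a placeholder standing in for `model.encode` from sentence-transformers, and the exact numbers will of course depend on your hardware.

```python
import time

def benchmark(encode, sentence, n=10_000):
    """Call encode(sentence) n times and return total seconds elapsed."""
    start = time.perf_counter()
    for _ in range(n):
        encode(sentence)
    return time.perf_counter() - start

# Placeholder encoder standing in for model.encode.
elapsed = benchmark(lambda s: s.lower().split(),
                    "What is the current weather like today?", n=1000)
print(f"{elapsed:.2f}s")
```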
## Usage
Use with Jina AI Finetuner
```python
!pip install finetuner
import finetuner
model = finetuner.build_model('jinaai/jina-embedding-t-en-v1')
embeddings = finetuner.encode(
model=model,
data=['how is the weather today', 'What is the current weather like today?']
)
print(finetuner.cos_sim(embeddings[0], embeddings[1]))
```
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['how is the weather today', 'What is the current weather like today?']
model = SentenceTransformer('jinaai/jina-embedding-t-en-v1')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
## Fine-tuning
Please consider [Finetuner](https://github.com/jina-ai/finetuner).
## Plans
1. The development of `jina-embedding-s-en-v2` is currently underway with two main objectives: improving performance and increasing the maximum sequence length.
2. We are currently working on a bilingual embedding model that combines English and X language. The upcoming model will be called `jina-embedding-s/b/l-de-v1`.
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
``` latex
@misc{günther2023jina,
title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models},
author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
year={2023},
eprint={2307.11224},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/futuristic logo design.../futuristic logo design_17_3.0.png
widget:
- text: futuristic logo design
output:
url: images/futuristic logo design_17_3.0.png
- text: futuristic logo design
output:
url: images/futuristic logo design_19_3.0.png
- text: futuristic logo design
output:
url: images/futuristic logo design_20_3.0.png
- text: futuristic logo design
output:
url: images/futuristic logo design_21_3.0.png
- text: futuristic logo design
output:
url: images/futuristic logo design_22_3.0.png
inference: false
instance_prompt: futuristic logo design
---
# ntcai.xyz slider - futuristic logo design (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/futuristic logo design_17_-3.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_17_0.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_17_3.0.png" width=256 height=256 /> |
| <img src="images/futuristic logo design_19_-3.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_19_0.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_19_3.0.png" width=256 height=256 /> |
| <img src="images/futuristic logo design_20_-3.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_20_0.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
futuristic logo design
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.futuristic-logo-design', weight_name='futuristic logo design.safetensors', adapter_name="futuristic logo design")
# Activate the LoRA
pipe.set_adapters(["futuristic logo design"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, futuristic logo design"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 620+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
---
language:
- en
tags:
- BioNLP
- social_media
---
# BioRedditBERT
## Model description
BioRedditBERT is a BERT model initialised from BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) and further pre-trained on health-related Reddit posts. Please view our paper [COMETA: A Corpus for Medical Entity Linking in the Social Media](https://arxiv.org/pdf/2010.03295.pdf) (EMNLP 2020) for more details.
## Training data
We crawled all threads from 68 health-themed subreddits, such as `r/AskDocs` and `r/health`, from the beginning of 2015 to the end of 2018, obtaining a collection of more than
800K discussions. This collection was then pruned by removing deleted posts, comments from bots or moderators, and so on. In the end, we obtained a training corpus of ca. 300 million tokens and a vocabulary
size of ca. 780,000 words.
## Training procedure
We use the same pre-training script as in the original [google-research/bert](https://github.com/google-research/bert) repo. The model is initialised with [`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`](https://github.com/dmis-lab/biobert).
We train with a batch size of 64, a max sequence length of 64, and a learning rate of `2e-5` for 100k steps on two GeForce GTX 1080Ti (11 GB) GPUs. Other hyper-parameters are left at their defaults.
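As a back-of-envelope check (our own arithmetic, not a figure from the original paper), this schedule corresponds to roughly 410M token positions processed, i.e. a bit more than one pass over the ~300M-token corpus:

```python
# Rough training-throughput arithmetic for the schedule described above.
# The inputs come from the card: batch size 64, max sequence length 64, 100k steps.
batch_size = 64
max_seq_len = 64
steps = 100_000

token_positions = batch_size * max_seq_len * steps
print(token_positions)  # prints 409600000
```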
## Eval results
To show the benefit of further pre-training on the social media domain, we demonstrate results on a medical entity linking dataset, also from social media: [AskAPatient](https://zenodo.org/record/55013#.X4ncRmTYpb8) [(Limsopatham and Collier 2016)](https://www.aclweb.org/anthology/P16-1096.pdf).
We follow the same 10-fold cross-validation procedure for all models and report the average result without fine-tuning. `[CLS]` is used as representations for entity mentions (we also tried average of all tokens but found `[CLS]` generally performs better).
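A minimal sketch (our own illustration with a toy tensor, not real BERT outputs) of the two mention-representation choices compared here, taking the `[CLS]` vector versus averaging all token vectors:

```python
import numpy as np

# Toy "last hidden state" for one mention: (seq_len, hidden_dim).
# In practice this comes from the BERT encoder; here it is random.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(8, 768))

cls_repr = hidden_states[0]             # the [CLS] token is always position 0
mean_repr = hidden_states.mean(axis=0)  # average over all tokens

assert cls_repr.shape == mean_repr.shape == (768,)
```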
Model | Accuracy@1 | Accuracy@5
-------|---------|---------
[BERT-base-uncased](https://huggingface.co/bert-base-uncased) | 38.2 | 43.3
[BioBERT v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) | 41.4 | 51.5
[ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) | 43.9 | 54.3
[BlueBERT](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/NCBI_BERT_pubmed_mimic_uncased_L-12_H-768_A-12.zip) | 41.5 | 48.5
[SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased) | 42.3 | 51.9
[PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) | 42.5 | 49.6
BioRedditBERT | **44.3** | **56.2**
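Accuracy@k here means the gold concept appears among the top-k retrieved candidates for a mention; a minimal sketch (our own helper, not the paper's evaluation code) of how such scores are computed:

```python
def accuracy_at_k(ranked_candidates, gold_labels, k):
    """Fraction of mentions whose gold concept is in the top-k ranked candidates."""
    hits = sum(gold in ranked[:k] for ranked, gold in zip(ranked_candidates, gold_labels))
    return hits / len(gold_labels)

# Two mentions with ranked candidate concept IDs and their gold concepts.
ranked = [["C1", "C2", "C3"], ["C9", "C4", "C7"]]
gold = ["C2", "C7"]
print(accuracy_at_k(ranked, gold, 1))  # prints 0.0 (neither gold concept is ranked first)
print(accuracy_at_k(ranked, gold, 5))  # prints 1.0 (both gold concepts are in the top 5)
```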
### BibTeX entry and citation info
```bibtex
@inproceedings{basaldella-2020-cometa,
title = "{COMETA}: A Corpus for Medical Entity Linking in the Social Media",
  author = "Basaldella, Marco and Liu, Fangyu and Shareghi, Ehsan and Collier, Nigel",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2020",
publisher = "Association for Computational Linguistics"
}
```
| [
"ASKAPATIENT"
] |
Ateeqq/Text-Rewriter-Paraphraser | Ateeqq | text2text-generation | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-02T22:14:08Z | 2024-12-27T07:53:45+00:00 | 1,015 | 21 | ---
license: openrail
inference:
parameters:
num_beams: 3
num_beam_groups: 3
num_return_sequences: 1
repetition_penalty: 3
diversity_penalty: 3.01
no_repeat_ngram_size: 2
temperature: 0.8
max_length: 64
widget:
- text: 'paraphraser: Learn to build generative AI applications with an expert AWS
instructor with the 2-day Developing Generative AI Applications on AWS course.'
example_title: AWS course
- text: 'paraphraser: In healthcare, Generative AI can help generate synthetic medical
data to train machine learning models, develop new drug candidates, and design
clinical trials.'
example_title: Generative AI
- text: 'paraphraser: By leveraging prior model training through transfer learning,
fine-tuning can reduce the amount of expensive computing power and labeled data
needed to obtain large models tailored to niche use cases and business needs.'
example_title: Fine Tuning
---
# Text Rewriter Paraphraser
This repository contains a fine-tuned text-rewriting model based on T5-Base (223M parameters).
## Key Features:
* **Fine-tuned on t5-base:** Leverages the power of a pre-trained text-to-text transfer model for effective paraphrasing.
* **Large Dataset (430k examples):** Trained on a comprehensive dataset combining three open-source sources and cleaned using various techniques for optimal performance.
* **High-Quality Paraphrases:** Generates paraphrases that significantly alter sentence structure while maintaining accuracy and factual correctness.
* **Hard to Detect as AI:** Aims to produce paraphrases that appear natural and indistinguishable from human-written text.
**Model Performance:**
* Train Loss: 1.0645
* Validation Loss: 0.8761
## Getting Started:
The T5 model expects a task-related prefix; since this is a paraphrasing task, we add the prefix `paraphraser: ` to each input.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='your_token')
model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='your_token').to(device)
def paraphrase(text):
    # Prepend the task prefix expected by this fine-tuned T5 model.
    input_ids = tokenizer(f'paraphraser: {text}', return_tensors="pt", padding="longest", truncation=True, max_length=64).input_ids.to(device)
    outputs = model.generate(
        input_ids,
        num_beams=4,
        num_beam_groups=4,
        num_return_sequences=4,
        repetition_penalty=10.0,
        diversity_penalty=3.0,
        no_repeat_ngram_size=2,
        temperature=0.8,
        max_length=64
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

text = 'By leveraging prior model training through transfer learning, fine-tuning can reduce the amount of expensive computing power and labeled data needed to obtain large models tailored to niche use cases and business needs.'
paraphrase(text)
```
### Output:
```
['The fine-tuning can reduce the amount of expensive computing power and labeled data required to obtain large models adapted for niche use cases and business needs by using prior model training through transfer learning.',
'fine-tuning, by utilizing prior model training through transfer learning, can reduce the amount of expensive computing power and labeled data required to obtain large models tailored for niche use cases and business needs.',
'Fine-tunering by using prior model training through transfer learning can reduce the amount of expensive computing power and labeled data required to obtain large models adapted for niche use cases and business needs.',
'Using transfer learning to use prior model training, fine-tuning can reduce the amount of expensive computing power and labeled data required for large models that are suitable in niche usage cases or businesses.']
```
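The generation settings above use diverse (group) beam search. A quick sanity check for such configurations (our own helper, reflecting the standard Hugging Face constraints) is that `num_beams` must be divisible by `num_beam_groups` and `num_return_sequences` cannot exceed `num_beams`:

```python
def check_diverse_beam_config(num_beams, num_beam_groups, num_return_sequences):
    """Validate the standard constraints for diverse (group) beam search."""
    if num_beams % num_beam_groups != 0:
        raise ValueError("num_beams must be divisible by num_beam_groups")
    if num_return_sequences > num_beams:
        raise ValueError("num_return_sequences cannot exceed num_beams")
    return True

# The configuration used in the example above: 4 beams in 4 groups, 4 outputs.
assert check_diverse_beam_config(num_beams=4, num_beam_groups=4, num_return_sequences=4)
```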
**Disclaimer:**
* Limited Use: The license grants a non-exclusive, non-transferable right to use this model (similar to the Llama 3 license). This means you can't freely share it with others or sell the model itself.
* Commercial Use Allowed: You can use the model for commercial purposes, but under the terms of the license agreement.
* Attribution Required: You need to abide by the agreement's terms regarding attribution. It is essential to use the paraphrased text responsibly and ethically, with proper attribution of the original source.
**Further Development:**
See the Discussions tab for ongoing development and areas for future improvement. | [
"MEDICAL DATA"
] |
Labib11/MUG-B-1.6 | Labib11 | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-08T15:46:01Z | 2024-05-21T12:54:58+00:00 | 1,011 | 2 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: MUG-B-1.6
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.04047976011994
- type: ap
value: 23.622442298323236
- type: f1
value: 61.681362134359354
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.38805970149255
- type: ap
value: 35.14527522183942
- type: f1
value: 66.40004634079556
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 54.3254817987152
- type: ap
value: 71.95259605308317
- type: f1
value: 52.50731386267296
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 56.33832976445397
- type: ap
value: 12.671021199223937
- type: f1
value: 46.127586182990605
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.70805000000001
- type: ap
value: 90.58639913354553
- type: f1
value: 93.69822635061847
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 50.85000000000001
- type: f1
value: 49.80013009020246
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 27.203999999999994
- type: f1
value: 26.60134413072989
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 34.878
- type: f1
value: 33.072592092252314
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 31.557999999999993
- type: f1
value: 30.866094552542624
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 22.706
- type: f1
value: 22.23195837325246
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 22.349999999999998
- type: f1
value: 21.80183891680617
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 41.892
- type: map_at_10
value: 57.989999999999995
- type: map_at_100
value: 58.45
- type: map_at_1000
value: 58.453
- type: map_at_20
value: 58.392999999999994
- type: map_at_3
value: 53.746
- type: map_at_5
value: 56.566
- type: mrr_at_1
value: 43.314
- type: mrr_at_10
value: 58.535000000000004
- type: mrr_at_100
value: 58.975
- type: mrr_at_1000
value: 58.977999999999994
- type: mrr_at_20
value: 58.916999999999994
- type: mrr_at_3
value: 54.303000000000004
- type: mrr_at_5
value: 57.055
- type: ndcg_at_1
value: 41.892
- type: ndcg_at_10
value: 66.176
- type: ndcg_at_100
value: 67.958
- type: ndcg_at_1000
value: 68.00699999999999
- type: ndcg_at_20
value: 67.565
- type: ndcg_at_3
value: 57.691
- type: ndcg_at_5
value: 62.766
- type: precision_at_1
value: 41.892
- type: precision_at_10
value: 9.189
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.861
- type: precision_at_3
value: 23.044
- type: precision_at_5
value: 16.287
- type: recall_at_1
value: 41.892
- type: recall_at_10
value: 91.892
- type: recall_at_100
value: 99.289
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 97.226
- type: recall_at_3
value: 69.132
- type: recall_at_5
value: 81.437
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 49.03486273664411
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.04797567338598
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.29499572176032
- type: mrr
value: 77.28861627753592
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.53248242133246
- type: cos_sim_spearman
value: 88.38032705871927
- type: euclidean_pearson
value: 87.77994445569084
- type: euclidean_spearman
value: 88.38032705871927
- type: manhattan_pearson
value: 87.52369210088627
- type: manhattan_spearman
value: 88.27972235673434
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.4090909090909
- type: f1
value: 84.87743757972068
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.73840151083438
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.565075977998966
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.082
- type: map_at_10
value: 44.787
- type: map_at_100
value: 46.322
- type: map_at_1000
value: 46.446
- type: map_at_20
value: 45.572
- type: map_at_3
value: 40.913
- type: map_at_5
value: 42.922
- type: mrr_at_1
value: 40.629
- type: mrr_at_10
value: 51.119
- type: mrr_at_100
value: 51.783
- type: mrr_at_1000
value: 51.82
- type: mrr_at_20
value: 51.49700000000001
- type: mrr_at_3
value: 48.355
- type: mrr_at_5
value: 49.979
- type: ndcg_at_1
value: 40.629
- type: ndcg_at_10
value: 51.647
- type: ndcg_at_100
value: 56.923
- type: ndcg_at_1000
value: 58.682
- type: ndcg_at_20
value: 53.457
- type: ndcg_at_3
value: 46.065
- type: ndcg_at_5
value: 48.352000000000004
- type: precision_at_1
value: 40.629
- type: precision_at_10
value: 10.072000000000001
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_20
value: 5.908
- type: precision_at_3
value: 22.222
- type: precision_at_5
value: 15.937000000000001
- type: recall_at_1
value: 33.082
- type: recall_at_10
value: 64.55300000000001
- type: recall_at_100
value: 86.86399999999999
- type: recall_at_1000
value: 97.667
- type: recall_at_20
value: 70.988
- type: recall_at_3
value: 48.067
- type: recall_at_5
value: 54.763
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 32.272
- type: map_at_10
value: 42.620000000000005
- type: map_at_100
value: 43.936
- type: map_at_1000
value: 44.066
- type: map_at_20
value: 43.349
- type: map_at_3
value: 39.458
- type: map_at_5
value: 41.351
- type: mrr_at_1
value: 40.127
- type: mrr_at_10
value: 48.437000000000005
- type: mrr_at_100
value: 49.096000000000004
- type: mrr_at_1000
value: 49.14
- type: mrr_at_20
value: 48.847
- type: mrr_at_3
value: 46.21
- type: mrr_at_5
value: 47.561
- type: ndcg_at_1
value: 40.127
- type: ndcg_at_10
value: 48.209999999999994
- type: ndcg_at_100
value: 52.632
- type: ndcg_at_1000
value: 54.59
- type: ndcg_at_20
value: 50.012
- type: ndcg_at_3
value: 43.996
- type: ndcg_at_5
value: 46.122
- type: precision_at_1
value: 40.127
- type: precision_at_10
value: 9.051
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.193
- type: precision_at_20
value: 5.35
- type: precision_at_3
value: 21.104
- type: precision_at_5
value: 15.146
- type: recall_at_1
value: 32.272
- type: recall_at_10
value: 57.870999999999995
- type: recall_at_100
value: 76.211
- type: recall_at_1000
value: 88.389
- type: recall_at_20
value: 64.354
- type: recall_at_3
value: 45.426
- type: recall_at_5
value: 51.23799999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 40.261
- type: map_at_10
value: 53.400000000000006
- type: map_at_100
value: 54.42399999999999
- type: map_at_1000
value: 54.473000000000006
- type: map_at_20
value: 54.052
- type: map_at_3
value: 49.763000000000005
- type: map_at_5
value: 51.878
- type: mrr_at_1
value: 46.019
- type: mrr_at_10
value: 56.653
- type: mrr_at_100
value: 57.28
- type: mrr_at_1000
value: 57.303000000000004
- type: mrr_at_20
value: 57.057
- type: mrr_at_3
value: 53.971000000000004
- type: mrr_at_5
value: 55.632000000000005
- type: ndcg_at_1
value: 46.019
- type: ndcg_at_10
value: 59.597
- type: ndcg_at_100
value: 63.452
- type: ndcg_at_1000
value: 64.434
- type: ndcg_at_20
value: 61.404
- type: ndcg_at_3
value: 53.620999999999995
- type: ndcg_at_5
value: 56.688
- type: precision_at_1
value: 46.019
- type: precision_at_10
value: 9.748999999999999
- type: precision_at_100
value: 1.261
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 5.436
- type: precision_at_3
value: 24.075
- type: precision_at_5
value: 16.715
- type: recall_at_1
value: 40.261
- type: recall_at_10
value: 74.522
- type: recall_at_100
value: 91.014
- type: recall_at_1000
value: 98.017
- type: recall_at_20
value: 81.186
- type: recall_at_3
value: 58.72500000000001
- type: recall_at_5
value: 66.23599999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.666
- type: map_at_10
value: 36.744
- type: map_at_100
value: 37.794
- type: map_at_1000
value: 37.865
- type: map_at_20
value: 37.336999999999996
- type: map_at_3
value: 33.833999999999996
- type: map_at_5
value: 35.61
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 38.838
- type: mrr_at_100
value: 39.765
- type: mrr_at_1000
value: 39.818999999999996
- type: mrr_at_20
value: 39.373000000000005
- type: mrr_at_3
value: 36.234
- type: mrr_at_5
value: 37.844
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 41.986000000000004
- type: ndcg_at_100
value: 47.05
- type: ndcg_at_1000
value: 48.897
- type: ndcg_at_20
value: 43.989
- type: ndcg_at_3
value: 36.452
- type: ndcg_at_5
value: 39.395
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.4750000000000005
- type: precision_at_100
value: 0.946
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_20
value: 3.6839999999999997
- type: precision_at_3
value: 15.443000000000001
- type: precision_at_5
value: 10.96
- type: recall_at_1
value: 27.666
- type: recall_at_10
value: 56.172999999999995
- type: recall_at_100
value: 79.142
- type: recall_at_1000
value: 93.013
- type: recall_at_20
value: 63.695
- type: recall_at_3
value: 41.285
- type: recall_at_5
value: 48.36
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 17.939
- type: map_at_10
value: 27.301
- type: map_at_100
value: 28.485
- type: map_at_1000
value: 28.616000000000003
- type: map_at_20
value: 27.843
- type: map_at_3
value: 24.342
- type: map_at_5
value: 26.259
- type: mrr_at_1
value: 22.761
- type: mrr_at_10
value: 32.391
- type: mrr_at_100
value: 33.297
- type: mrr_at_1000
value: 33.361000000000004
- type: mrr_at_20
value: 32.845
- type: mrr_at_3
value: 29.498
- type: mrr_at_5
value: 31.375999999999998
- type: ndcg_at_1
value: 22.761
- type: ndcg_at_10
value: 33.036
- type: ndcg_at_100
value: 38.743
- type: ndcg_at_1000
value: 41.568
- type: ndcg_at_20
value: 34.838
- type: ndcg_at_3
value: 27.803
- type: ndcg_at_5
value: 30.781
- type: precision_at_1
value: 22.761
- type: precision_at_10
value: 6.132
- type: precision_at_100
value: 1.031
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_20
value: 3.582
- type: precision_at_3
value: 13.474
- type: precision_at_5
value: 10.123999999999999
- type: recall_at_1
value: 17.939
- type: recall_at_10
value: 45.515
- type: recall_at_100
value: 70.56700000000001
- type: recall_at_1000
value: 90.306
- type: recall_at_20
value: 51.946999999999996
- type: recall_at_3
value: 31.459
- type: recall_at_5
value: 39.007
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 31.156
- type: map_at_10
value: 42.317
- type: map_at_100
value: 43.742
- type: map_at_1000
value: 43.852000000000004
- type: map_at_20
value: 43.147999999999996
- type: map_at_3
value: 38.981
- type: map_at_5
value: 40.827000000000005
- type: mrr_at_1
value: 38.401999999999994
- type: mrr_at_10
value: 48.141
- type: mrr_at_100
value: 48.991
- type: mrr_at_1000
value: 49.03
- type: mrr_at_20
value: 48.665000000000006
- type: mrr_at_3
value: 45.684999999999995
- type: mrr_at_5
value: 47.042
- type: ndcg_at_1
value: 38.401999999999994
- type: ndcg_at_10
value: 48.541000000000004
- type: ndcg_at_100
value: 54.063
- type: ndcg_at_1000
value: 56.005
- type: ndcg_at_20
value: 50.895999999999994
- type: ndcg_at_3
value: 43.352000000000004
- type: ndcg_at_5
value: 45.769
- type: precision_at_1
value: 38.401999999999994
- type: precision_at_10
value: 8.738999999999999
- type: precision_at_100
value: 1.335
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_20
value: 5.164
- type: precision_at_3
value: 20.468
- type: precision_at_5
value: 14.437
- type: recall_at_1
value: 31.156
- type: recall_at_10
value: 61.172000000000004
- type: recall_at_100
value: 83.772
- type: recall_at_1000
value: 96.192
- type: recall_at_20
value: 69.223
- type: recall_at_3
value: 46.628
- type: recall_at_5
value: 53.032000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 26.741999999999997
- type: map_at_10
value: 36.937
- type: map_at_100
value: 38.452
- type: map_at_1000
value: 38.557
- type: map_at_20
value: 37.858999999999995
- type: map_at_3
value: 33.579
- type: map_at_5
value: 35.415
- type: mrr_at_1
value: 32.991
- type: mrr_at_10
value: 42.297000000000004
- type: mrr_at_100
value: 43.282
- type: mrr_at_1000
value: 43.332
- type: mrr_at_20
value: 42.95
- type: mrr_at_3
value: 39.707
- type: mrr_at_5
value: 41.162
- type: ndcg_at_1
value: 32.991
- type: ndcg_at_10
value: 43.004999999999995
- type: ndcg_at_100
value: 49.053000000000004
- type: ndcg_at_1000
value: 51.166999999999994
- type: ndcg_at_20
value: 45.785
- type: ndcg_at_3
value: 37.589
- type: ndcg_at_5
value: 40.007999999999996
- type: precision_at_1
value: 32.991
- type: precision_at_10
value: 8.025
- type: precision_at_100
value: 1.268
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 4.846
- type: precision_at_3
value: 17.922
- type: precision_at_5
value: 13.059000000000001
- type: recall_at_1
value: 26.741999999999997
- type: recall_at_10
value: 55.635999999999996
- type: recall_at_100
value: 80.798
- type: recall_at_1000
value: 94.918
- type: recall_at_20
value: 65.577
- type: recall_at_3
value: 40.658
- type: recall_at_5
value: 46.812
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 27.274583333333336
- type: map_at_10
value: 37.04091666666666
- type: map_at_100
value: 38.27966666666667
- type: map_at_1000
value: 38.39383333333334
- type: map_at_20
value: 37.721500000000006
- type: map_at_3
value: 33.937999999999995
- type: map_at_5
value: 35.67974999999999
- type: mrr_at_1
value: 32.40525
- type: mrr_at_10
value: 41.43925000000001
- type: mrr_at_100
value: 42.271
- type: mrr_at_1000
value: 42.32416666666667
- type: mrr_at_20
value: 41.92733333333334
- type: mrr_at_3
value: 38.84941666666666
- type: mrr_at_5
value: 40.379583333333336
- type: ndcg_at_1
value: 32.40525
- type: ndcg_at_10
value: 42.73808333333334
- type: ndcg_at_100
value: 47.88941666666667
- type: ndcg_at_1000
value: 50.05008333333334
- type: ndcg_at_20
value: 44.74183333333334
- type: ndcg_at_3
value: 37.51908333333334
- type: ndcg_at_5
value: 40.01883333333333
- type: precision_at_1
value: 32.40525
- type: precision_at_10
value: 7.5361666666666665
- type: precision_at_100
value: 1.1934166666666666
- type: precision_at_1000
value: 0.1575
- type: precision_at_20
value: 4.429166666666667
- type: precision_at_3
value: 17.24941666666667
- type: precision_at_5
value: 12.362333333333336
- type: recall_at_1
value: 27.274583333333336
- type: recall_at_10
value: 55.21358333333334
- type: recall_at_100
value: 77.60366666666667
- type: recall_at_1000
value: 92.43691666666666
- type: recall_at_20
value: 62.474583333333335
- type: recall_at_3
value: 40.79375
- type: recall_at_5
value: 47.15158333333334
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 27.389999999999997
- type: map_at_10
value: 34.107
- type: map_at_100
value: 35.022999999999996
- type: map_at_1000
value: 35.13
- type: map_at_20
value: 34.605999999999995
- type: map_at_3
value: 32.021
- type: map_at_5
value: 32.948
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 37.345
- type: mrr_at_100
value: 38.096999999999994
- type: mrr_at_1000
value: 38.179
- type: mrr_at_20
value: 37.769000000000005
- type: mrr_at_3
value: 35.481
- type: mrr_at_5
value: 36.293
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 38.223
- type: ndcg_at_100
value: 42.686
- type: ndcg_at_1000
value: 45.352
- type: ndcg_at_20
value: 39.889
- type: ndcg_at_3
value: 34.259
- type: ndcg_at_5
value: 35.664
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 5.7669999999999995
- type: precision_at_100
value: 0.877
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_20
value: 3.3360000000000003
- type: precision_at_3
value: 14.264
- type: precision_at_5
value: 9.54
- type: recall_at_1
value: 27.389999999999997
- type: recall_at_10
value: 48.009
- type: recall_at_100
value: 68.244
- type: recall_at_1000
value: 87.943
- type: recall_at_20
value: 54.064
- type: recall_at_3
value: 36.813
- type: recall_at_5
value: 40.321
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.249000000000002
- type: map_at_10
value: 25.907000000000004
- type: map_at_100
value: 27.105
- type: map_at_1000
value: 27.233
- type: map_at_20
value: 26.541999999999998
- type: map_at_3
value: 23.376
- type: map_at_5
value: 24.673000000000002
- type: mrr_at_1
value: 21.989
- type: mrr_at_10
value: 29.846
- type: mrr_at_100
value: 30.808999999999997
- type: mrr_at_1000
value: 30.885
- type: mrr_at_20
value: 30.384
- type: mrr_at_3
value: 27.46
- type: mrr_at_5
value: 28.758
- type: ndcg_at_1
value: 21.989
- type: ndcg_at_10
value: 30.874000000000002
- type: ndcg_at_100
value: 36.504999999999995
- type: ndcg_at_1000
value: 39.314
- type: ndcg_at_20
value: 32.952999999999996
- type: ndcg_at_3
value: 26.249
- type: ndcg_at_5
value: 28.229
- type: precision_at_1
value: 21.989
- type: precision_at_10
value: 5.705
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_20
value: 3.4459999999999997
- type: precision_at_3
value: 12.377
- type: precision_at_5
value: 8.961
- type: recall_at_1
value: 18.249000000000002
- type: recall_at_10
value: 41.824
- type: recall_at_100
value: 67.071
- type: recall_at_1000
value: 86.863
- type: recall_at_20
value: 49.573
- type: recall_at_3
value: 28.92
- type: recall_at_5
value: 34.003
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 26.602999999999998
- type: map_at_10
value: 36.818
- type: map_at_100
value: 37.894
- type: map_at_1000
value: 37.991
- type: map_at_20
value: 37.389
- type: map_at_3
value: 33.615
- type: map_at_5
value: 35.432
- type: mrr_at_1
value: 31.53
- type: mrr_at_10
value: 41.144
- type: mrr_at_100
value: 41.937999999999995
- type: mrr_at_1000
value: 41.993
- type: mrr_at_20
value: 41.585
- type: mrr_at_3
value: 38.385999999999996
- type: mrr_at_5
value: 39.995000000000005
- type: ndcg_at_1
value: 31.53
- type: ndcg_at_10
value: 42.792
- type: ndcg_at_100
value: 47.749
- type: ndcg_at_1000
value: 49.946
- type: ndcg_at_20
value: 44.59
- type: ndcg_at_3
value: 37.025000000000006
- type: ndcg_at_5
value: 39.811
- type: precision_at_1
value: 31.53
- type: precision_at_10
value: 7.2669999999999995
- type: precision_at_100
value: 1.109
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_20
value: 4.184
- type: precision_at_3
value: 16.791
- type: precision_at_5
value: 12.09
- type: recall_at_1
value: 26.602999999999998
- type: recall_at_10
value: 56.730999999999995
- type: recall_at_100
value: 78.119
- type: recall_at_1000
value: 93.458
- type: recall_at_20
value: 63.00599999999999
- type: recall_at_3
value: 41.306
- type: recall_at_5
value: 48.004999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 23.988
- type: map_at_10
value: 33.650999999999996
- type: map_at_100
value: 35.263
- type: map_at_1000
value: 35.481
- type: map_at_20
value: 34.463
- type: map_at_3
value: 30.330000000000002
- type: map_at_5
value: 32.056000000000004
- type: mrr_at_1
value: 29.644
- type: mrr_at_10
value: 38.987
- type: mrr_at_100
value: 39.973
- type: mrr_at_1000
value: 40.013
- type: mrr_at_20
value: 39.553
- type: mrr_at_3
value: 36.001
- type: mrr_at_5
value: 37.869
- type: ndcg_at_1
value: 29.644
- type: ndcg_at_10
value: 40.156
- type: ndcg_at_100
value: 46.244
- type: ndcg_at_1000
value: 48.483
- type: ndcg_at_20
value: 42.311
- type: ndcg_at_3
value: 34.492
- type: ndcg_at_5
value: 37.118
- type: precision_at_1
value: 29.644
- type: precision_at_10
value: 7.925
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.245
- type: precision_at_20
value: 4.97
- type: precision_at_3
value: 16.469
- type: precision_at_5
value: 12.174
- type: recall_at_1
value: 23.988
- type: recall_at_10
value: 52.844
- type: recall_at_100
value: 80.143
- type: recall_at_1000
value: 93.884
- type: recall_at_20
value: 61.050000000000004
- type: recall_at_3
value: 36.720000000000006
- type: recall_at_5
value: 43.614999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 21.947
- type: map_at_10
value: 29.902
- type: map_at_100
value: 30.916
- type: map_at_1000
value: 31.016
- type: map_at_20
value: 30.497999999999998
- type: map_at_3
value: 27.044
- type: map_at_5
value: 28.786
- type: mrr_at_1
value: 23.845
- type: mrr_at_10
value: 32.073
- type: mrr_at_100
value: 32.940999999999995
- type: mrr_at_1000
value: 33.015
- type: mrr_at_20
value: 32.603
- type: mrr_at_3
value: 29.205
- type: mrr_at_5
value: 31.044
- type: ndcg_at_1
value: 23.845
- type: ndcg_at_10
value: 34.79
- type: ndcg_at_100
value: 39.573
- type: ndcg_at_1000
value: 42.163000000000004
- type: ndcg_at_20
value: 36.778
- type: ndcg_at_3
value: 29.326
- type: ndcg_at_5
value: 32.289
- type: precision_at_1
value: 23.845
- type: precision_at_10
value: 5.527
- type: precision_at_100
value: 0.847
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_20
value: 3.2439999999999998
- type: precision_at_3
value: 12.384
- type: precision_at_5
value: 9.205
- type: recall_at_1
value: 21.947
- type: recall_at_10
value: 47.713
- type: recall_at_100
value: 69.299
- type: recall_at_1000
value: 88.593
- type: recall_at_20
value: 55.032000000000004
- type: recall_at_3
value: 33.518
- type: recall_at_5
value: 40.427
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 13.655999999999999
- type: map_at_10
value: 23.954
- type: map_at_100
value: 26.07
- type: map_at_1000
value: 26.266000000000002
- type: map_at_20
value: 25.113000000000003
- type: map_at_3
value: 19.85
- type: map_at_5
value: 21.792
- type: mrr_at_1
value: 31.075000000000003
- type: mrr_at_10
value: 43.480000000000004
- type: mrr_at_100
value: 44.39
- type: mrr_at_1000
value: 44.42
- type: mrr_at_20
value: 44.06
- type: mrr_at_3
value: 40.38
- type: mrr_at_5
value: 42.138999999999996
- type: ndcg_at_1
value: 31.075000000000003
- type: ndcg_at_10
value: 33.129999999999995
- type: ndcg_at_100
value: 40.794000000000004
- type: ndcg_at_1000
value: 44.062
- type: ndcg_at_20
value: 36.223
- type: ndcg_at_3
value: 27.224999999999998
- type: ndcg_at_5
value: 28.969
- type: precision_at_1
value: 31.075000000000003
- type: precision_at_10
value: 10.476
- type: precision_at_100
value: 1.864
- type: precision_at_1000
value: 0.247
- type: precision_at_20
value: 6.593
- type: precision_at_3
value: 20.456
- type: precision_at_5
value: 15.440000000000001
- type: recall_at_1
value: 13.655999999999999
- type: recall_at_10
value: 39.678000000000004
- type: recall_at_100
value: 65.523
- type: recall_at_1000
value: 83.59100000000001
- type: recall_at_20
value: 48.27
- type: recall_at_3
value: 24.863
- type: recall_at_5
value: 30.453999999999997
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.139
- type: map_at_10
value: 20.366999999999997
- type: map_at_100
value: 29.755
- type: map_at_1000
value: 31.563999999999997
- type: map_at_20
value: 24.021
- type: map_at_3
value: 14.395
- type: map_at_5
value: 16.853
- type: mrr_at_1
value: 69.0
- type: mrr_at_10
value: 76.778
- type: mrr_at_100
value: 77.116
- type: mrr_at_1000
value: 77.12299999999999
- type: mrr_at_20
value: 77.046
- type: mrr_at_3
value: 75.208
- type: mrr_at_5
value: 76.146
- type: ndcg_at_1
value: 57.125
- type: ndcg_at_10
value: 42.84
- type: ndcg_at_100
value: 48.686
- type: ndcg_at_1000
value: 56.294
- type: ndcg_at_20
value: 42.717
- type: ndcg_at_3
value: 46.842
- type: ndcg_at_5
value: 44.248
- type: precision_at_1
value: 69.0
- type: precision_at_10
value: 34.625
- type: precision_at_100
value: 11.468
- type: precision_at_1000
value: 2.17
- type: precision_at_20
value: 26.562
- type: precision_at_3
value: 50.917
- type: precision_at_5
value: 43.35
- type: recall_at_1
value: 9.139
- type: recall_at_10
value: 26.247999999999998
- type: recall_at_100
value: 56.647000000000006
- type: recall_at_1000
value: 80.784
- type: recall_at_20
value: 35.010999999999996
- type: recall_at_3
value: 15.57
- type: recall_at_5
value: 19.198
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 55.93
- type: f1
value: 49.35314406745291
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 73.198
- type: map_at_10
value: 81.736
- type: map_at_100
value: 82.02000000000001
- type: map_at_1000
value: 82.03399999999999
- type: map_at_20
value: 81.937
- type: map_at_3
value: 80.692
- type: map_at_5
value: 81.369
- type: mrr_at_1
value: 78.803
- type: mrr_at_10
value: 86.144
- type: mrr_at_100
value: 86.263
- type: mrr_at_1000
value: 86.26599999999999
- type: mrr_at_20
value: 86.235
- type: mrr_at_3
value: 85.464
- type: mrr_at_5
value: 85.95
- type: ndcg_at_1
value: 78.803
- type: ndcg_at_10
value: 85.442
- type: ndcg_at_100
value: 86.422
- type: ndcg_at_1000
value: 86.68900000000001
- type: ndcg_at_20
value: 85.996
- type: ndcg_at_3
value: 83.839
- type: ndcg_at_5
value: 84.768
- type: precision_at_1
value: 78.803
- type: precision_at_10
value: 10.261000000000001
- type: precision_at_100
value: 1.0959999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_20
value: 5.286
- type: precision_at_3
value: 32.083
- type: precision_at_5
value: 19.898
- type: recall_at_1
value: 73.198
- type: recall_at_10
value: 92.42099999999999
- type: recall_at_100
value: 96.28
- type: recall_at_1000
value: 97.995
- type: recall_at_20
value: 94.36
- type: recall_at_3
value: 88.042
- type: recall_at_5
value: 90.429
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 21.583
- type: map_at_10
value: 36.503
- type: map_at_100
value: 38.529
- type: map_at_1000
value: 38.701
- type: map_at_20
value: 37.69
- type: map_at_3
value: 31.807000000000002
- type: map_at_5
value: 34.424
- type: mrr_at_1
value: 43.827
- type: mrr_at_10
value: 53.528
- type: mrr_at_100
value: 54.291
- type: mrr_at_1000
value: 54.32599999999999
- type: mrr_at_20
value: 54.064
- type: mrr_at_3
value: 51.25999999999999
- type: mrr_at_5
value: 52.641000000000005
- type: ndcg_at_1
value: 43.827
- type: ndcg_at_10
value: 44.931
- type: ndcg_at_100
value: 51.778999999999996
- type: ndcg_at_1000
value: 54.532000000000004
- type: ndcg_at_20
value: 47.899
- type: ndcg_at_3
value: 41.062
- type: ndcg_at_5
value: 42.33
- type: precision_at_1
value: 43.827
- type: precision_at_10
value: 12.608
- type: precision_at_100
value: 1.974
- type: precision_at_1000
value: 0.247
- type: precision_at_20
value: 7.585
- type: precision_at_3
value: 27.778000000000002
- type: precision_at_5
value: 20.308999999999997
- type: recall_at_1
value: 21.583
- type: recall_at_10
value: 52.332
- type: recall_at_100
value: 77.256
- type: recall_at_1000
value: 93.613
- type: recall_at_20
value: 61.413
- type: recall_at_3
value: 37.477
- type: recall_at_5
value: 44.184
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 39.845000000000006
- type: map_at_10
value: 64.331
- type: map_at_100
value: 65.202
- type: map_at_1000
value: 65.261
- type: map_at_20
value: 64.833
- type: map_at_3
value: 60.663
- type: map_at_5
value: 62.94
- type: mrr_at_1
value: 79.689
- type: mrr_at_10
value: 85.299
- type: mrr_at_100
value: 85.461
- type: mrr_at_1000
value: 85.466
- type: mrr_at_20
value: 85.39099999999999
- type: mrr_at_3
value: 84.396
- type: mrr_at_5
value: 84.974
- type: ndcg_at_1
value: 79.689
- type: ndcg_at_10
value: 72.49
- type: ndcg_at_100
value: 75.485
- type: ndcg_at_1000
value: 76.563
- type: ndcg_at_20
value: 73.707
- type: ndcg_at_3
value: 67.381
- type: ndcg_at_5
value: 70.207
- type: precision_at_1
value: 79.689
- type: precision_at_10
value: 15.267
- type: precision_at_100
value: 1.7610000000000001
- type: precision_at_1000
value: 0.19
- type: precision_at_20
value: 8.024000000000001
- type: precision_at_3
value: 43.363
- type: precision_at_5
value: 28.248
- type: recall_at_1
value: 39.845000000000006
- type: recall_at_10
value: 76.334
- type: recall_at_100
value: 88.042
- type: recall_at_1000
value: 95.09100000000001
- type: recall_at_20
value: 80.243
- type: recall_at_3
value: 65.044
- type: recall_at_5
value: 70.621
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 93.57079999999999
- type: ap
value: 90.50045924786099
- type: f1
value: 93.56673497845476
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 22.212
- type: map_at_10
value: 34.528
- type: map_at_100
value: 35.69
- type: map_at_1000
value: 35.74
- type: map_at_20
value: 35.251
- type: map_at_3
value: 30.628
- type: map_at_5
value: 32.903999999999996
- type: mrr_at_1
value: 22.794
- type: mrr_at_10
value: 35.160000000000004
- type: mrr_at_100
value: 36.251
- type: mrr_at_1000
value: 36.295
- type: mrr_at_20
value: 35.845
- type: mrr_at_3
value: 31.328
- type: mrr_at_5
value: 33.574
- type: ndcg_at_1
value: 22.779
- type: ndcg_at_10
value: 41.461
- type: ndcg_at_100
value: 47.049
- type: ndcg_at_1000
value: 48.254000000000005
- type: ndcg_at_20
value: 44.031
- type: ndcg_at_3
value: 33.561
- type: ndcg_at_5
value: 37.62
- type: precision_at_1
value: 22.779
- type: precision_at_10
value: 6.552
- type: precision_at_100
value: 0.936
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 3.8120000000000003
- type: precision_at_3
value: 14.274000000000001
- type: precision_at_5
value: 10.622
- type: recall_at_1
value: 22.212
- type: recall_at_10
value: 62.732
- type: recall_at_100
value: 88.567
- type: recall_at_1000
value: 97.727
- type: recall_at_20
value: 72.733
- type: recall_at_3
value: 41.367
- type: recall_at_5
value: 51.105999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.24988600091199
- type: f1
value: 94.06064583085202
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 74.86052409129333
- type: f1
value: 72.24661442078647
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 77.09139426284189
- type: f1
value: 76.3725044443502
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 79.79956154087064
- type: f1
value: 78.41859658401724
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 32.785944783076374
- type: f1
value: 31.182237278594922
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 16.654611211573236
- type: f1
value: 12.088413093236642
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.51481988144094
- type: f1
value: 49.561420234732125
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 42.36122851507467
- type: f1
value: 25.445030887504398
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 44.73315543695797
- type: f1
value: 28.42075153540265
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 38.96022549326651
- type: f1
value: 25.926979537146106
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 13.578343492291141
- type: f1
value: 8.929295550931657
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 5.396021699819168
- type: f1
value: 1.8587148785378742
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 37.22259583053128
- type: f1
value: 34.63013680947778
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 3.194351042367182
- type: f1
value: 1.2612010214639442
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 14.26361802286483
- type: f1
value: 13.70260406613821
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 37.21923335574983
- type: f1
value: 36.33553913878251
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 10.756556825823807
- type: f1
value: 9.676431920229374
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 32.49831876260928
- type: f1
value: 30.818895782691868
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 40.995292535305985
- type: f1
value: 37.68768183180129
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 42.780766644250164
- type: f1
value: 37.82194830667135
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 33.490248823133825
- type: f1
value: 29.71809045584527
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.8836583725622
- type: f1
value: 72.16381047416814
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.45191661062542
- type: f1
value: 43.46583297093683
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 26.738399462004036
- type: f1
value: 24.11896530001951
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 38.09683927370545
- type: f1
value: 35.34443269387154
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.89307330195024
- type: f1
value: 43.47164092514292
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.198386012104912
- type: f1
value: 22.446286736401916
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 13.940820443846672
- type: f1
value: 13.257747189396213
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 34.710827168796236
- type: f1
value: 32.036974696095996
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 6.711499663752522
- type: f1
value: 5.439441019096591
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 38.56758574310693
- type: f1
value: 36.83183505458304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 32.22595830531271
- type: f1
value: 30.10972675771159
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.79690652320107
- type: f1
value: 44.37143784350453
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 29.189643577673163
- type: f1
value: 25.43718135312703
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 34.21990585070612
- type: f1
value: 32.333592263041396
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 8.890383322125084
- type: f1
value: 7.294310113130201
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 4.616677874915938
- type: f1
value: 1.5028537477535886
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 3.170813718897109
- type: f1
value: 1.5771411815826382
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 15.026899798251513
- type: f1
value: 14.077395255366183
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 36.0995292535306
- type: f1
value: 35.0877269083235
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 2.9959650302622727
- type: f1
value: 0.8064424547273695
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 23.301950235373234
- type: f1
value: 22.477376205075853
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 36.13315400134499
- type: f1
value: 32.99623898888715
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 3.813046402151983
- type: f1
value: 1.1769597223141248
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 39.66711499663752
- type: f1
value: 35.921474753569214
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 41.079354404841965
- type: f1
value: 37.57739961852201
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 38.211163416274374
- type: f1
value: 34.89419275422068
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.19838601210491
- type: f1
value: 42.71660225307043
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 39.48554135843981
- type: f1
value: 37.47402102847154
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 31.819098856758576
- type: f1
value: 30.120158288509725
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 35.44720914593141
- type: f1
value: 33.74530063536304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 36.89307330195024
- type: f1
value: 34.46971619696105
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 38.83322125084062
- type: f1
value: 36.050770344888264
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 37.535305985205106
- type: f1
value: 35.21395700670493
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 7.905178211163418
- type: f1
value: 6.163513326325246
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 2.8480161398789514
- type: f1
value: 1.0163931337986962
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 10.501008742434433
- type: f1
value: 6.858549418430471
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 39.46536650975118
- type: f1
value: 34.96292597328575
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 37.50168123739071
- type: f1
value: 35.031097269820464
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 16.109616677874918
- type: f1
value: 15.884609726192519
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 36.11297915265636
- type: f1
value: 34.59918716321474
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 18.850033624747812
- type: f1
value: 15.09584388649328
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 17.219233355749832
- type: f1
value: 14.538046039008337
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 47.79757901815736
- type: f1
value: 45.078250421193324
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 7.078009414929388
- type: f1
value: 4.0122456300041645
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 22.831203765971754
- type: f1
value: 20.131610050816555
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.952925353059854
- type: f1
value: 42.6865575762921
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 16.593813046402154
- type: f1
value: 14.087144503044291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.91862811028917
- type: f1
value: 34.968402727911915
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 51.923335574983184
- type: f1
value: 49.357147840776335
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.73570948217889
- type: f1
value: 54.92084137819753
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.995965030262276
- type: f1
value: 38.47512542753069
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.42098184263618
- type: f1
value: 77.03413816048877
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.46536650975118
- type: f1
value: 53.08520810835907
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 30.578345662407525
- type: f1
value: 28.822998245702635
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.567585743106925
- type: f1
value: 39.79216651714347
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.98722259583053
- type: f1
value: 55.31168113501439
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.076664425016812
- type: f1
value: 24.927348965627573
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 18.096839273705445
- type: f1
value: 17.386603595777103
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.73839946200403
- type: f1
value: 38.65545902563735
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 11.536650975117688
- type: f1
value: 10.898336694524854
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.9502353732347
- type: f1
value: 44.332561323528644
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.777404169468724
- type: f1
value: 39.378117766055354
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.6469401479489
- type: f1
value: 52.512025274851794
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 35.90114324142569
- type: f1
value: 34.90331274712605
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.51176866173504
- type: f1
value: 39.417541845685676
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 13.799596503026226
- type: f1
value: 11.587556164962251
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 9.44855413584398
- type: f1
value: 4.30711077076907
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 8.157363819771351
- type: f1
value: 5.5588908736809515
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 19.909213180901144
- type: f1
value: 18.964761241087984
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.47747141896436
- type: f1
value: 38.17159556642586
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 6.701412239408204
- type: f1
value: 3.621974155647488
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.55413584398117
- type: f1
value: 26.582548923662753
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.617350369872234
- type: f1
value: 41.35397419267425
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 9.976462676529927
- type: f1
value: 5.900764382768462
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.894418291862806
- type: f1
value: 47.70929403771086
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 51.761936785474106
- type: f1
value: 48.42797973062516
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.21385339609952
- type: f1
value: 43.7081546200347
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 55.59852051109617
- type: f1
value: 54.19610878409633
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.54135843981169
- type: f1
value: 47.79393938467311
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.73032952252858
- type: f1
value: 35.96450149708041
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.67114996637525
- type: f1
value: 40.28283538885605
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 47.38063214525891
- type: f1
value: 44.93264016007152
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.28379287155347
- type: f1
value: 46.25486396570196
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.18291862811029
- type: f1
value: 41.17519157172804
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 12.599193006052452
- type: f1
value: 11.129236666238377
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 7.017484868863484
- type: f1
value: 3.9665415549749077
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 19.788164088769335
- type: f1
value: 15.783384761347582
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.35978480161398
- type: f1
value: 47.30586047800275
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.484196368527236
- type: f1
value: 44.65101184252231
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 23.681909885675857
- type: f1
value: 22.247817138937524
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.63080026899798
- type: f1
value: 39.546896741744
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 30.141223940820446
- type: f1
value: 28.177838960078123
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 27.515131136516473
- type: f1
value: 26.514325837594654
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.70592767911301
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.80943770643908
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.66434973425713
- type: mrr
value: 33.92240574935323
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.561999999999999
- type: map_at_10
value: 14.854000000000001
- type: map_at_100
value: 19.187
- type: map_at_1000
value: 20.812
- type: map_at_20
value: 16.744
- type: map_at_3
value: 10.804
- type: map_at_5
value: 12.555
- type: mrr_at_1
value: 48.916
- type: mrr_at_10
value: 57.644
- type: mrr_at_100
value: 58.17
- type: mrr_at_1000
value: 58.206
- type: mrr_at_20
value: 57.969
- type: mrr_at_3
value: 55.36600000000001
- type: mrr_at_5
value: 56.729
- type: ndcg_at_1
value: 46.594
- type: ndcg_at_10
value: 37.897999999999996
- type: ndcg_at_100
value: 35.711
- type: ndcg_at_1000
value: 44.65
- type: ndcg_at_20
value: 35.989
- type: ndcg_at_3
value: 42.869
- type: ndcg_at_5
value: 40.373
- type: precision_at_1
value: 48.297000000000004
- type: precision_at_10
value: 28.297
- type: precision_at_100
value: 9.099
- type: precision_at_1000
value: 2.229
- type: precision_at_20
value: 21.455
- type: precision_at_3
value: 40.248
- type: precision_at_5
value: 34.675
- type: recall_at_1
value: 6.561999999999999
- type: recall_at_10
value: 19.205
- type: recall_at_100
value: 36.742999999999995
- type: recall_at_1000
value: 69.119
- type: recall_at_20
value: 23.787
- type: recall_at_3
value: 11.918
- type: recall_at_5
value: 14.860000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 30.306
- type: map_at_10
value: 46.916999999999994
- type: map_at_100
value: 47.899
- type: map_at_1000
value: 47.925000000000004
- type: map_at_20
value: 47.583
- type: map_at_3
value: 42.235
- type: map_at_5
value: 45.118
- type: mrr_at_1
value: 34.327999999999996
- type: mrr_at_10
value: 49.248999999999995
- type: mrr_at_100
value: 49.96
- type: mrr_at_1000
value: 49.977
- type: mrr_at_20
value: 49.738
- type: mrr_at_3
value: 45.403999999999996
- type: mrr_at_5
value: 47.786
- type: ndcg_at_1
value: 34.327999999999996
- type: ndcg_at_10
value: 55.123999999999995
- type: ndcg_at_100
value: 59.136
- type: ndcg_at_1000
value: 59.71300000000001
- type: ndcg_at_20
value: 57.232000000000006
- type: ndcg_at_3
value: 46.48
- type: ndcg_at_5
value: 51.237
- type: precision_at_1
value: 34.327999999999996
- type: precision_at_10
value: 9.261
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.148
- type: precision_at_3
value: 21.523999999999997
- type: precision_at_5
value: 15.659999999999998
- type: recall_at_1
value: 30.306
- type: recall_at_10
value: 77.65100000000001
- type: recall_at_100
value: 94.841
- type: recall_at_1000
value: 99.119
- type: recall_at_20
value: 85.37599999999999
- type: recall_at_3
value: 55.562
- type: recall_at_5
value: 66.5
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 71.516
- type: map_at_10
value: 85.48400000000001
- type: map_at_100
value: 86.11
- type: map_at_1000
value: 86.124
- type: map_at_20
value: 85.895
- type: map_at_3
value: 82.606
- type: map_at_5
value: 84.395
- type: mrr_at_1
value: 82.38
- type: mrr_at_10
value: 88.31099999999999
- type: mrr_at_100
value: 88.407
- type: mrr_at_1000
value: 88.407
- type: mrr_at_20
value: 88.385
- type: mrr_at_3
value: 87.42699999999999
- type: mrr_at_5
value: 88.034
- type: ndcg_at_1
value: 82.39999999999999
- type: ndcg_at_10
value: 89.07300000000001
- type: ndcg_at_100
value: 90.23400000000001
- type: ndcg_at_1000
value: 90.304
- type: ndcg_at_20
value: 89.714
- type: ndcg_at_3
value: 86.42699999999999
- type: ndcg_at_5
value: 87.856
- type: precision_at_1
value: 82.39999999999999
- type: precision_at_10
value: 13.499
- type: precision_at_100
value: 1.536
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.155
- type: precision_at_3
value: 37.846999999999994
- type: precision_at_5
value: 24.778
- type: recall_at_1
value: 71.516
- type: recall_at_10
value: 95.831
- type: recall_at_100
value: 99.714
- type: recall_at_1000
value: 99.979
- type: recall_at_20
value: 97.87599999999999
- type: recall_at_3
value: 88.08
- type: recall_at_5
value: 92.285
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.3760407207699
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 65.28621066626943
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 5.163
- type: map_at_10
value: 14.377
- type: map_at_100
value: 17.177
- type: map_at_1000
value: 17.588
- type: map_at_20
value: 15.827
- type: map_at_3
value: 9.879
- type: map_at_5
value: 12.133
- type: mrr_at_1
value: 25.5
- type: mrr_at_10
value: 38.435
- type: mrr_at_100
value: 39.573
- type: mrr_at_1000
value: 39.606
- type: mrr_at_20
value: 39.134
- type: mrr_at_3
value: 34.666999999999994
- type: mrr_at_5
value: 37.117
- type: ndcg_at_1
value: 25.5
- type: ndcg_at_10
value: 23.688000000000002
- type: ndcg_at_100
value: 33.849000000000004
- type: ndcg_at_1000
value: 39.879
- type: ndcg_at_20
value: 27.36
- type: ndcg_at_3
value: 22.009999999999998
- type: ndcg_at_5
value: 19.691
- type: precision_at_1
value: 25.5
- type: precision_at_10
value: 12.540000000000001
- type: precision_at_100
value: 2.721
- type: precision_at_1000
value: 0.415
- type: precision_at_20
value: 8.385
- type: precision_at_3
value: 21.099999999999998
- type: precision_at_5
value: 17.84
- type: recall_at_1
value: 5.163
- type: recall_at_10
value: 25.405
- type: recall_at_100
value: 55.213
- type: recall_at_1000
value: 84.243
- type: recall_at_20
value: 34.003
- type: recall_at_3
value: 12.837000000000002
- type: recall_at_5
value: 18.096999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 87.64406884822948
- type: cos_sim_spearman
value: 83.00239648251724
- type: euclidean_pearson
value: 85.03347205351844
- type: euclidean_spearman
value: 83.00240733538445
- type: manhattan_pearson
value: 85.0312758694447
- type: manhattan_spearman
value: 82.99430696077589
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.68832340658764
- type: cos_sim_spearman
value: 79.21679373212476
- type: euclidean_pearson
value: 85.17094885886415
- type: euclidean_spearman
value: 79.21421345946399
- type: manhattan_pearson
value: 85.17409319145995
- type: manhattan_spearman
value: 79.20992207976401
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.43733084958856
- type: cos_sim_spearman
value: 89.43082089321751
- type: euclidean_pearson
value: 88.63286785416938
- type: euclidean_spearman
value: 89.43082081372343
- type: manhattan_pearson
value: 88.62969346368385
- type: manhattan_spearman
value: 89.43131586189746
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 86.62185532014894
- type: cos_sim_spearman
value: 84.7923120886599
- type: euclidean_pearson
value: 85.99786490539253
- type: euclidean_spearman
value: 84.79231064318844
- type: manhattan_pearson
value: 85.97647892920392
- type: manhattan_spearman
value: 84.76865232132103
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.39303997282114
- type: cos_sim_spearman
value: 89.54273264876765
- type: euclidean_pearson
value: 88.8848627924181
- type: euclidean_spearman
value: 89.54275013645078
- type: manhattan_pearson
value: 88.86926987108802
- type: manhattan_spearman
value: 89.53259197721715
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.21814352466886
- type: cos_sim_spearman
value: 86.68505223422434
- type: euclidean_pearson
value: 86.07422446469991
- type: euclidean_spearman
value: 86.68505161067375
- type: manhattan_pearson
value: 86.05114200797293
- type: manhattan_spearman
value: 86.6587670422703
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 39.17871768366095
- type: cos_sim_spearman
value: 39.78510424960567
- type: euclidean_pearson
value: 41.65680175653682
- type: euclidean_spearman
value: 39.78538944779548
- type: manhattan_pearson
value: 41.567603690394755
- type: manhattan_spearman
value: 39.71393388259443
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 49.26766904195114
- type: cos_sim_spearman
value: 46.79722787057151
- type: euclidean_pearson
value: 51.2329334717446
- type: euclidean_spearman
value: 46.7920623095072
- type: manhattan_pearson
value: 51.26488560860826
- type: manhattan_spearman
value: 47.00400318665492
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 1.6821294132202447
- type: cos_sim_spearman
value: -0.7813676799492025
- type: euclidean_pearson
value: 1.9197388753860283
- type: euclidean_spearman
value: -0.7813676799492025
- type: manhattan_pearson
value: 2.209862430499871
- type: manhattan_spearman
value: -0.863014010062456
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 48.76382428941107
- type: cos_sim_spearman
value: 47.50280322999196
- type: euclidean_pearson
value: 48.73919143974209
- type: euclidean_spearman
value: 47.50280322999196
- type: manhattan_pearson
value: 48.76291223862666
- type: manhattan_spearman
value: 47.51318193687094
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.6579390263212
- type: cos_sim_spearman
value: 89.64423556388047
- type: euclidean_pearson
value: 90.1160733522703
- type: euclidean_spearman
value: 89.64423556388047
- type: manhattan_pearson
value: 90.1528407376387
- type: manhattan_spearman
value: 89.61290724496793
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 6.717092266815236
- type: cos_sim_spearman
value: 4.180543503488665
- type: euclidean_pearson
value: 7.120267092048099
- type: euclidean_spearman
value: 4.180543503488665
- type: manhattan_pearson
value: 6.396237465828514
- type: manhattan_spearman
value: 3.61244941411957
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 44.36476614938953
- type: cos_sim_spearman
value: 44.265723809500685
- type: euclidean_pearson
value: 44.61551298711104
- type: euclidean_spearman
value: 44.265723809500685
- type: manhattan_pearson
value: 44.54302374682193
- type: manhattan_spearman
value: 44.08642490624185
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.64871991975828
- type: cos_sim_spearman
value: 79.21979030014373
- type: euclidean_pearson
value: 81.8672798988218
- type: euclidean_spearman
value: 79.21950130108661
- type: manhattan_pearson
value: 82.02131606326583
- type: manhattan_spearman
value: 79.44848373553044
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 48.73898658957231
- type: cos_sim_spearman
value: 47.15192605817168
- type: euclidean_pearson
value: 49.11990573381456
- type: euclidean_spearman
value: 47.15192605817168
- type: manhattan_pearson
value: 48.5694400358235
- type: manhattan_spearman
value: 46.651326429708135
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 44.42168074232218
- type: cos_sim_spearman
value: 42.64799010889372
- type: euclidean_pearson
value: 44.41376048324183
- type: euclidean_spearman
value: 42.64799010889372
- type: manhattan_pearson
value: 44.724522621427546
- type: manhattan_spearman
value: 42.60912761758016
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 40.55050173163197
- type: cos_sim_spearman
value: 36.59720399843921
- type: euclidean_pearson
value: 41.49402389245919
- type: euclidean_spearman
value: 36.59720399843921
- type: manhattan_pearson
value: 41.877514420153666
- type: manhattan_spearman
value: 36.782790653297695
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.44405106094861
- type: cos_sim_spearman
value: 70.25621893108706
- type: euclidean_pearson
value: 71.15726637696066
- type: euclidean_spearman
value: 70.25621893108706
- type: manhattan_pearson
value: 71.28565265298322
- type: manhattan_spearman
value: 70.30317892414027
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 34.56638014500804
- type: cos_sim_spearman
value: 39.48672765878819
- type: euclidean_pearson
value: 31.61811391543846
- type: euclidean_spearman
value: 39.48672765878819
- type: manhattan_pearson
value: 31.839117286689977
- type: manhattan_spearman
value: 39.71519891403971
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 53.72389957326714
- type: cos_sim_spearman
value: 59.47018781803598
- type: euclidean_pearson
value: 57.02101112722141
- type: euclidean_spearman
value: 59.47018781803598
- type: manhattan_pearson
value: 57.16531255049132
- type: manhattan_spearman
value: 59.57320508684436
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 24.14602533311477
- type: cos_sim_spearman
value: 35.38039329704056
- type: euclidean_pearson
value: 13.540543553763765
- type: euclidean_spearman
value: 35.38039329704056
- type: manhattan_pearson
value: 13.566377379303256
- type: manhattan_spearman
value: 35.88351047224126
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 39.07697432450346
- type: cos_sim_spearman
value: 45.65479772235109
- type: euclidean_pearson
value: 41.68913259791294
- type: euclidean_spearman
value: 45.65479772235109
- type: manhattan_pearson
value: 41.58872552392231
- type: manhattan_spearman
value: 45.462070534023404
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 23.917322166825183
- type: cos_sim_spearman
value: 25.06042767518008
- type: euclidean_pearson
value: 24.29850435278771
- type: euclidean_spearman
value: 25.06042767518008
- type: manhattan_pearson
value: 24.461400062927154
- type: manhattan_spearman
value: 25.285239684773046
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 20.39987623162105
- type: cos_sim_spearman
value: 30.62427846964406
- type: euclidean_pearson
value: 20.817950942480323
- type: euclidean_spearman
value: 30.618700916425222
- type: manhattan_pearson
value: 20.756787430880788
- type: manhattan_spearman
value: 30.813116243628436
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 43.838363041373974
- type: cos_sim_spearman
value: 54.17598089882719
- type: euclidean_pearson
value: 47.51044033919419
- type: euclidean_spearman
value: 54.17598089882719
- type: manhattan_pearson
value: 47.54911083403354
- type: manhattan_spearman
value: 54.2562151204606
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 77.69372699157654
- type: cos_sim_spearman
value: 79.88201388457435
- type: euclidean_pearson
value: 78.81259581302578
- type: euclidean_spearman
value: 79.88201388457435
- type: manhattan_pearson
value: 78.85098508555477
- type: manhattan_spearman
value: 80.20154858554835
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 51.83713469138834
- type: cos_sim_spearman
value: 54.2205845288082
- type: euclidean_pearson
value: 54.14828396506985
- type: euclidean_spearman
value: 54.2205845288082
- type: manhattan_pearson
value: 54.10701855179347
- type: manhattan_spearman
value: 54.30261135461622
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 61.59147752554915
- type: cos_sim_spearman
value: 66.65350021824162
- type: euclidean_pearson
value: 62.577915098325434
- type: euclidean_spearman
value: 66.65350021824162
- type: manhattan_pearson
value: 62.22817675366819
- type: manhattan_spearman
value: 66.35054389546214
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 65.23775897743552
- type: cos_sim_spearman
value: 68.1509652709288
- type: euclidean_pearson
value: 66.17577980319408
- type: euclidean_spearman
value: 68.1509652709288
- type: manhattan_pearson
value: 66.40051933918704
- type: manhattan_spearman
value: 68.37138808382802
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 61.943863830043725
- type: cos_sim_spearman
value: 62.699440972016774
- type: euclidean_pearson
value: 62.810366501196
- type: euclidean_spearman
value: 62.699440972016774
- type: manhattan_pearson
value: 63.13065659868621
- type: manhattan_spearman
value: 63.314141373703215
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 48.1108866326284
- type: cos_sim_spearman
value: 49.25274096772371
- type: euclidean_pearson
value: 47.87203797435136
- type: euclidean_spearman
value: 49.25274096772371
- type: manhattan_pearson
value: 47.39927722979605
- type: manhattan_spearman
value: 48.76629586560382
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 58.58401639298775
- type: cos_sim_spearman
value: 64.37272828346495
- type: euclidean_pearson
value: 61.03680632288844
- type: euclidean_spearman
value: 64.37272828346495
- type: manhattan_pearson
value: 61.381331848220675
- type: manhattan_spearman
value: 65.01053960017909
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 44.374682063416735
- type: cos_sim_spearman
value: 48.907776246550185
- type: euclidean_pearson
value: 45.473260322201284
- type: euclidean_spearman
value: 48.907776246550185
- type: manhattan_pearson
value: 46.051779591771854
- type: manhattan_spearman
value: 49.69297213757249
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 31.55497030143048
- type: cos_sim_spearman
value: 33.042073055100396
- type: euclidean_pearson
value: 33.548707962408955
- type: euclidean_spearman
value: 33.042073055100396
- type: manhattan_pearson
value: 31.704989941561873
- type: manhattan_spearman
value: 31.56395608711827
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 51.253093232573036
- type: cos_sim_spearman
value: 39.440531887330785
- type: euclidean_pearson
value: 51.42758694144294
- type: euclidean_spearman
value: 39.440531887330785
- type: manhattan_pearson
value: 49.623915715149394
- type: manhattan_spearman
value: 39.440531887330785
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.61260941646887
- type: cos_sim_spearman
value: 88.96384726759047
- type: euclidean_pearson
value: 88.72268994912045
- type: euclidean_spearman
value: 88.96384726759047
- type: manhattan_pearson
value: 88.72080954591475
- type: manhattan_spearman
value: 88.92379960545995
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.64768404690723
- type: mrr
value: 96.25675341361615
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.194
- type: map_at_10
value: 70.62899999999999
- type: map_at_100
value: 71.119
- type: map_at_1000
value: 71.14200000000001
- type: map_at_20
value: 71.033
- type: map_at_3
value: 67.51899999999999
- type: map_at_5
value: 69.215
- type: mrr_at_1
value: 63.666999999999994
- type: mrr_at_10
value: 71.456
- type: mrr_at_100
value: 71.844
- type: mrr_at_1000
value: 71.866
- type: mrr_at_20
value: 71.769
- type: mrr_at_3
value: 69.167
- type: mrr_at_5
value: 70.39999999999999
- type: ndcg_at_1
value: 63.666999999999994
- type: ndcg_at_10
value: 75.14
- type: ndcg_at_100
value: 77.071
- type: ndcg_at_1000
value: 77.55199999999999
- type: ndcg_at_20
value: 76.491
- type: ndcg_at_3
value: 69.836
- type: ndcg_at_5
value: 72.263
- type: precision_at_1
value: 63.666999999999994
- type: precision_at_10
value: 10.0
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.3
- type: precision_at_3
value: 27.0
- type: precision_at_5
value: 17.867
- type: recall_at_1
value: 61.194
- type: recall_at_10
value: 88.156
- type: recall_at_100
value: 96.5
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 93.389
- type: recall_at_3
value: 73.839
- type: recall_at_5
value: 79.828
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.87425742574257
- type: cos_sim_ap
value: 96.97141655369937
- type: cos_sim_f1
value: 93.6910084451068
- type: cos_sim_precision
value: 93.0898321816387
- type: cos_sim_recall
value: 94.3
- type: dot_accuracy
value: 99.87425742574257
- type: dot_ap
value: 96.97141655369938
- type: dot_f1
value: 93.6910084451068
- type: dot_precision
value: 93.0898321816387
- type: dot_recall
value: 94.3
- type: euclidean_accuracy
value: 99.87425742574257
- type: euclidean_ap
value: 96.97141655369938
- type: euclidean_f1
value: 93.6910084451068
- type: euclidean_precision
value: 93.0898321816387
- type: euclidean_recall
value: 94.3
- type: manhattan_accuracy
value: 99.87425742574257
- type: manhattan_ap
value: 96.98252972861131
- type: manhattan_f1
value: 93.68473396320238
- type: manhattan_precision
value: 93.17507418397626
- type: manhattan_recall
value: 94.19999999999999
- type: max_accuracy
value: 99.87425742574257
- type: max_ap
value: 96.98252972861131
- type: max_f1
value: 93.6910084451068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.5976926394361
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.3221929214798
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.28322662897131
- type: mrr
value: 56.223620129870135
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.176396304511282
- type: cos_sim_spearman
value: 32.11989671564906
- type: dot_pearson
value: 31.17639740597169
- type: dot_spearman
value: 32.145586989831564
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.186
- type: map_at_10
value: 1.659
- type: map_at_100
value: 9.224
- type: map_at_1000
value: 22.506999999999998
- type: map_at_20
value: 2.937
- type: map_at_3
value: 0.5539999999999999
- type: map_at_5
value: 0.8920000000000001
- type: mrr_at_1
value: 72.0
- type: mrr_at_10
value: 82.633
- type: mrr_at_100
value: 82.633
- type: mrr_at_1000
value: 82.633
- type: mrr_at_20
value: 82.633
- type: mrr_at_3
value: 80.333
- type: mrr_at_5
value: 82.633
- type: ndcg_at_1
value: 69.0
- type: ndcg_at_10
value: 67.327
- type: ndcg_at_100
value: 51.626000000000005
- type: ndcg_at_1000
value: 47.396
- type: ndcg_at_20
value: 63.665000000000006
- type: ndcg_at_3
value: 68.95
- type: ndcg_at_5
value: 69.241
- type: precision_at_1
value: 72.0
- type: precision_at_10
value: 71.6
- type: precision_at_100
value: 53.22
- type: precision_at_1000
value: 20.721999999999998
- type: precision_at_20
value: 67.30000000000001
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 74.0
- type: recall_at_1
value: 0.186
- type: recall_at_10
value: 1.932
- type: recall_at_100
value: 12.883
- type: recall_at_1000
value: 44.511
- type: recall_at_20
value: 3.583
- type: recall_at_3
value: 0.601
- type: recall_at_5
value: 1.0
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.308
- type: map_at_10
value: 9.744
- type: map_at_100
value: 15.859000000000002
- type: map_at_1000
value: 17.396
- type: map_at_20
value: 12.49
- type: map_at_3
value: 4.848
- type: map_at_5
value: 6.912999999999999
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 47.207
- type: mrr_at_100
value: 48.116
- type: mrr_at_1000
value: 48.116
- type: mrr_at_20
value: 47.735
- type: mrr_at_3
value: 42.857
- type: mrr_at_5
value: 44.285999999999994
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 24.421
- type: ndcg_at_100
value: 35.961
- type: ndcg_at_1000
value: 47.541
- type: ndcg_at_20
value: 25.999
- type: ndcg_at_3
value: 25.333
- type: ndcg_at_5
value: 25.532
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 22.448999999999998
- type: precision_at_100
value: 7.571
- type: precision_at_1000
value: 1.5310000000000001
- type: precision_at_20
value: 17.959
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.308
- type: recall_at_10
value: 16.075
- type: recall_at_100
value: 47.357
- type: recall_at_1000
value: 82.659
- type: recall_at_20
value: 24.554000000000002
- type: recall_at_3
value: 5.909
- type: recall_at_5
value: 9.718
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 67.2998046875
- type: ap
value: 12.796222498684031
- type: f1
value: 51.7465070845071
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.76004527447652
- type: f1
value: 61.88985723942393
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 52.69229715788263
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.42325803182929
- type: cos_sim_ap
value: 78.29203513753492
- type: cos_sim_f1
value: 71.33160557818093
- type: cos_sim_precision
value: 67.00672385810341
- type: cos_sim_recall
value: 76.2532981530343
- type: dot_accuracy
value: 87.42325803182929
- type: dot_ap
value: 78.29208368244002
- type: dot_f1
value: 71.33160557818093
- type: dot_precision
value: 67.00672385810341
- type: dot_recall
value: 76.2532981530343
- type: euclidean_accuracy
value: 87.42325803182929
- type: euclidean_ap
value: 78.29202838891078
- type: euclidean_f1
value: 71.33160557818093
- type: euclidean_precision
value: 67.00672385810341
- type: euclidean_recall
value: 76.2532981530343
- type: manhattan_accuracy
value: 87.42325803182929
- type: manhattan_ap
value: 78.23964459648822
- type: manhattan_f1
value: 71.1651728553137
- type: manhattan_precision
value: 69.12935323383084
- type: manhattan_recall
value: 73.3245382585752
- type: max_accuracy
value: 87.42325803182929
- type: max_ap
value: 78.29208368244002
- type: max_f1
value: 71.33160557818093
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.00725734466566
- type: cos_sim_ap
value: 86.1594112416402
- type: cos_sim_f1
value: 78.544568993303
- type: cos_sim_precision
value: 73.42484097756947
- type: cos_sim_recall
value: 84.43178318447798
- type: dot_accuracy
value: 89.00725734466566
- type: dot_ap
value: 86.15940795129771
- type: dot_f1
value: 78.544568993303
- type: dot_precision
value: 73.42484097756947
- type: dot_recall
value: 84.43178318447798
- type: euclidean_accuracy
value: 89.00725734466566
- type: euclidean_ap
value: 86.15939689541806
- type: euclidean_f1
value: 78.544568993303
- type: euclidean_precision
value: 73.42484097756947
- type: euclidean_recall
value: 84.43178318447798
- type: manhattan_accuracy
value: 88.97426941436721
- type: manhattan_ap
value: 86.14154348065739
- type: manhattan_f1
value: 78.53991175290814
- type: manhattan_precision
value: 74.60339452719086
- type: manhattan_recall
value: 82.91499846011703
- type: max_accuracy
value: 89.00725734466566
- type: max_ap
value: 86.1594112416402
- type: max_f1
value: 78.544568993303
---
| [
"BIOSSES",
"SCIFACT"
] |
mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"uncensored",
"en",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:kaiokendev/SuperCOT-dataset",
"dataset:neulab/conala",
"dataset:yahma/alpaca-cleaned",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:timdettmers/guanaco-33b",
"dataset:JosephusCheung/GuanacoDataset",
"base_model:Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b",
"base_model:quantized:Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | 2024-08-04T23:49:15Z | 2024-08-05T05:14:52+00:00 | 1,001 | 1 | ---
base_model: Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- kaiokendev/SuperCOT-dataset
- neulab/conala
- yahma/alpaca-cleaned
- QingyiSi/Alpaca-CoT
- timdettmers/guanaco-33b
- JosephusCheung/GuanacoDataset
language:
- en
library_name: transformers
license: other
tags:
- uncensored
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
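If a quant here is split into multiple parts, they can usually be rejoined with a plain byte-level concatenation before loading. A minimal sketch — the `-split-a`/`-split-b` naming is an assumption, so check the actual file names in the repo:

```shell
# Rejoin split GGUF parts into a single file by byte-level concatenation.
# The glob expands in lexical order, which preserves the part order
# (model.gguf-split-a, model.gguf-split-b, ...).
if ls model.gguf-split-* >/dev/null 2>&1; then
  cat model.gguf-split-* > model.gguf
fi
```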
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b-i1-GGUF/resolve/main/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"MONERO"
] |
hezarai/crnn-fa-printed-96-long | hezarai | image-to-text | [
"hezar",
"image-to-text",
"fa",
"arxiv:1507.05717",
"license:apache-2.0",
"region:us"
] | 2023-11-27T20:46:39Z | 2025-02-06T09:15:39+00:00 | 998 | 5 | ---
language:
- fa
library_name: hezar
license: apache-2.0
pipeline_tag: image-to-text
tags:
- hezar
- image-to-text
---
A CRNN model for Persian OCR. This model is based on a simple CNN + LSTM architecture inspired by [this paper](https://arxiv.org/abs/1507.05717).
This is a successor to our previous model [hezarai/crnn-base-fa-64x256](https://huggingface.co/hezarai/crnn-base-fa-64x256).
The improvements include:
- 5X larger dataset
- Change input image size from 64x256 to 32x384
- Increase max output length from 64 to 96 (samples in the dataset were capped at 48 characters to avoid CTC loss issues)
- Support numbers and special characters (see id2label in `model_config.yaml`)
- Auto-handling of LTR characters (e.g., digits) embedded in the text
Note that this model is only optimized for printed/scanned documents and works best on texts up to roughly 50 characters. For an end-to-end OCR pipeline, first use a text detector model such as https://huggingface.co/hezarai/CRAFT to extract text boxes (preferably at word level) and then run this model on each box. The model can also be fine-tuned on other domains such as license plates or handwritten text.
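The CTC setup mentioned above can be made concrete with a minimal greedy decoder: the recurrent head emits one prediction per image frame, and decoding collapses repeated predictions and drops blanks. This is a conceptual sketch only — the id-to-character table is made up, and none of it is part of the hezar API:

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Collapse repeated frame predictions and drop blanks (greedy CTC decoding)."""
    out, prev = [], None
    for i in frame_ids:
        if i != prev and i != blank:  # keep only the first of each repeated run
            out.append(i)
        prev = i
    return out

# Toy example: per-frame argmax ids from the recurrent head.
id2char = {1: "ا", 3: "ب"}
frames = [0, 3, 3, 0, 1, 1, 1, 0, 3]
print("".join(id2char[i] for i in ctc_greedy_decode(frames)))  # باب
```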
#### Usage
```bash
pip install hezar
```
```python
from hezar.models import Model
crnn = Model.load("hezarai/crnn-fa-printed-96-long")
texts = crnn.predict(["sample_image.jpg"])
print(texts)
``` | [
"CRAFT"
] |
croissantllm/CroissantLLMBase | croissantllm | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"code",
"text-generation-inference",
"art",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"dataset:croissantllm/croissant_dataset",
"arxiv:2402.00786",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-09T09:02:24Z | 2024-08-30T09:39:07+00:00 | 998 | 31 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
- croissantllm/croissant_dataset
language:
- fr
- en
license: mit
pipeline_tag: text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (190k steps, Final version)
This model is part of the CroissantLLM initiative and corresponds to the checkpoint after 190k steps (2.99T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
https://arxiv.org/abs/2402.00786
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bibtex
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage
This model is a base model; that is, it is not fine-tuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/CroissantLLMBase"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\nHe is heading to the market. -> Il va au marché.\nWe are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.3)
print(tokenizer.decode(tokens[0]))
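
# Few-shot prompts like the one above can also be built programmatically.
# This helper is illustrative only (not part of the model's tooling):
def build_few_shot_prompt(examples, query, sep=" -> "):
    """Each example is a (source, target) pair; the query source is left
    open at the end for the model to complete."""
    lines = [f"{src}{sep}{tgt}" for src, tgt in examples]
    lines.append(f"{query}{sep}")
    return "\n".join(lines)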
``` | [
"CRAFT"
] |
sschet/biomedical-ner-all | sschet | token-classification | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"Token Classification",
"en",
"dataset:tner/bc5cdr",
"dataset:commanderstrife/jnlpba",
"dataset:bc2gm_corpus",
"dataset:drAbreu/bc4chemd_ner",
"dataset:linnaeus",
"dataset:chintagunta85/ncbi_disease",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-26T15:41:19Z | 2023-02-01T03:39:22+00:00 | 989 | 3 | ---
datasets:
- tner/bc5cdr
- commanderstrife/jnlpba
- bc2gm_corpus
- drAbreu/bc4chemd_ner
- linnaeus
- chintagunta85/ncbi_disease
language:
- en
license: apache-2.0
tags:
- Token Classification
co2_eq_emissions: 0.0279399890043426
widget:
- text: 'CASE: A 28-year-old previously healthy man presented with a 6-week history
of palpitations. The symptoms occurred during rest, 2–3 times per week, lasted
up to 30 minutes at a time and were associated with dyspnea. Except for a grade
2/6 holosystolic tricuspid regurgitation murmur (best heard at the left sternal
border with inspiratory accentuation), physical examination yielded unremarkable
findings.'
example_title: example 1
- text: A 63-year-old woman with no known cardiac history presented with a sudden
onset of dyspnea requiring intubation and ventilatory support out of hospital.
She denied preceding symptoms of chest discomfort, palpitations, syncope or infection.
The patient was afebrile and normotensive, with a sinus tachycardia of 140 beats/min.
example_title: example 2
- text: A 48 year-old female presented with vaginal bleeding and abnormal Pap smears.
Upon diagnosis of invasive non-keratinizing SCC of the cervix, she underwent a
radical hysterectomy with salpingo-oophorectomy which demonstrated positive spread
to the pelvic lymph nodes and the parametrium. Pathological examination revealed
that the tumour also extensively involved the lower uterine segment.
example_title: example 3
---
## About the Model
An English Named Entity Recognition model, trained on the MACCROBAT dataset to recognize 107 biomedical entity types in a given text corpus (case reports, etc.). This model was built on top of distilbert-base-uncased.
- Dataset: Maccrobat https://figshare.com/articles/dataset/MACCROBAT2018/9764942
- Carbon emission: 0.0279399890043426 Kg
- Training time: 30.16527 minutes
- GPU used : 1 x GeForce RTX 3060 Laptop GPU
Check out the tutorial video for an explanation of this model and the corresponding Python library: https://youtu.be/xpiDPdBpS18
## Usage
The easiest way is to use the Hugging Face Inference API; the second method is the `pipeline` object offered by the transformers library.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all")
model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all")
pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple") # pass device=0 if using gpu
pipe("""The patient reported no recurrence of palpitations at follow-up 6 months after the ablation.""")
```
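Conceptually, `aggregation_strategy="simple"` merges consecutive wordpieces and same-label tokens into single entity spans. A toy sketch of that grouping idea — not transformers' actual implementation, and the label names here are illustrative:

```python
def group_entities(tokens):
    """Merge consecutive (word, label) tokens that share a label; '##'
    marks a wordpiece continuation that glues onto the previous word."""
    groups = []
    for word, label in tokens:
        if groups and (word.startswith("##") or label == groups[-1][1]):
            groups[-1][0] += word[2:] if word.startswith("##") else " " + word
        else:
            groups.append([word, label])
    return [(w, l) for w, l in groups]

toks = [("palpi", "Sign_symptom"), ("##tations", "Sign_symptom"), ("at", "O")]
print(group_entities(toks))  # [('palpitations', 'Sign_symptom'), ('at', 'O')]
```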
## Author
This model is part of the research topic "AI in Biomedical field" conducted by Deepak John Reji and Shaina Raza. If you use this work (code, model, or dataset), please star the repository at:
> https://github.com/dreji18/Bio-Epidemiology-NER | [
"BC5CDR",
"JNLPBA",
"LINNAEUS",
"NCBI DISEASE"
] |
mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Cas-Warehouse/Llama-3-Depressed-Therapist-8B",
"base_model:quantized:Cas-Warehouse/Llama-3-Depressed-Therapist-8B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2024-06-17T05:38:51Z | 2024-12-16T02:24:06+00:00 | 983 | 2 | ---
base_model: Cas-Warehouse/Llama-3-Depressed-Therapist-8B
language:
- en
library_name: transformers
tags:
- mergekit
- merge
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Cas-Warehouse/Llama-3-Depressed-Therapist-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Deppressed-Therapist-8B-i1-GGUF/resolve/main/Llama-3-Deppressed-Therapist-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
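Purely as an illustration of how the size column above can guide quant selection, here is a hedged sketch: the sizes (GB) are copied from the table rows for this model's imatrix quants, and the `largest_quant_fitting` helper is a made-up convenience, not part of any official tooling.

```python
# Illustrative only: sizes (GB) copied from the quant table above.
# The selection helper is hypothetical, not official tooling.
QUANT_SIZES_GB = {
    "i1-IQ1_S": 2.1, "i1-IQ1_M": 2.3, "i1-IQ2_XXS": 2.5, "i1-IQ2_XS": 2.7,
    "i1-IQ2_S": 2.9, "i1-IQ2_M": 3.0, "i1-Q2_K": 3.3, "i1-IQ3_XXS": 3.4,
    "i1-IQ3_XS": 3.6, "i1-Q3_K_S": 3.8, "i1-IQ3_S": 3.8, "i1-IQ3_M": 3.9,
    "i1-Q3_K_M": 4.1, "i1-Q3_K_L": 4.4, "i1-IQ4_XS": 4.5, "i1-Q4_0": 4.8,
    "i1-Q4_K_S": 4.8, "i1-Q4_K_M": 5.0, "i1-Q5_K_S": 5.7, "i1-Q5_K_M": 5.8,
    "i1-Q6_K": 6.7,
}

def largest_quant_fitting(budget_gb):
    """Return the name of the largest quant whose file size fits the budget."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size <= budget_gb]
    return max(fitting)[1] if fitting else None
```

Note that file size is only a proxy: as the notes column says, an IQ quant is often preferable to a similar-sized non-IQ quant.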
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"CAS"
] |
mradermacher/Newton-7B-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"axolotl",
"finetune",
"qlora",
"en",
"dataset:hendrycks/competition_math",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:openbookqa",
"dataset:piqa",
"dataset:metaeval/reclor",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:sciq",
"dataset:TIGER-Lab/ScienceEval",
"base_model:Weyaxi/Newton-7B",
"base_model:quantized:Weyaxi/Newton-7B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2024-11-11T13:48:42Z | 2024-11-11T16:57:09+00:00 | 976 | 0 | ---
base_model: Weyaxi/Newton-7B
datasets:
- hendrycks/competition_math
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- STEM-AI-mtl/Electrical-engineering
- openbookqa
- piqa
- metaeval/reclor
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- sciq
- TIGER-Lab/ScienceEval
language:
- en
library_name: transformers
license: other
tags:
- axolotl
- finetune
- qlora
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Weyaxi/Newton-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Newton-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
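Since multi-part GGUF files are plain byte-wise splits, rejoining them is just concatenation in part order (equivalent to `cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf`). Here is a hedged sketch; the `.partNofM` naming is an assumption, so check the actual filenames in the repository:

```python
# Hedged sketch: rejoin split GGUF files by byte-wise concatenation.
# The ".part<N>of<M>" naming convention is an assumption; verify it
# against the actual filenames before use.
import glob
import re
import shutil

def concatenate_parts(stem):
    """Join files named '<stem>.part<N>of<M>' into a single file '<stem>'."""
    parts = sorted(
        glob.glob(f"{stem}.part*of*"),
        key=lambda p: int(re.search(r"part(\d+)of", p).group(1)),
    )
    if not parts:
        raise FileNotFoundError(f"no parts found for {stem}")
    with open(stem, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)  # stream bytes, low memory use
    return stem
```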
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Newton-7B-i1-GGUF/resolve/main/Newton-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"SCIQ"
] |
Yntec/Remedy | Yntec | text-to-image | [
"diffusers",
"safetensors",
"Artistic",
"Fantasy",
"Scifi",
"DominoPrincip",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2024-03-11T08:28:51Z | 2024-03-11T10:32:34+00:00 | 974 | 0 | ---
library_name: diffusers
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- Artistic
- Fantasy
- Scifi
- DominoPrincip
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Remedy
No-EMA version of this model. Original page: https://civitai.com/models/87025
Samples and prompts:

(Click for larger)
Top left (credits to digiplay for the prompt): realistic 8K,HDR,photorealistic,ruins,post-apocalyptic,beautiful silver hair angel with black wings,((very close-up)),canon 5D,wings snowing,bokeh,looking at viewer
Top right: stock photo, futuristic city, on a dark night, close up portrait of boy with cute brunette sister playing with teddy bear, homeless children, she's sitting, cute faces, beautiful intricately detailed soft oil painting, tattered cloths, detailed brown eyes, a wall on the pavement in the shadows of an alley, (crowd, pedestrians in the background, pristine artistic scifi skyscrapers, beautiful plant life mixed with scifi architecture, stark colorful lighting. Vast dystopian vision, depth of field
Bottom left: best quality, masterpiece, ultra realistic, dark fantasy style, professional intricately detailed award winning soft oil painting, a pretty cute little girl sitting reading a secret, on a park bench, giant flowers explosion cloud background, city, skyscrapers, soft edge lighting, highly detailed, ((close up full body portrait)), professional, soft volumetric lighting, lens flares, photographed Canon
Bottom right: manga art, muted colors, detailed painting, halftone dithering, cute girl with shoulderlength black bobcut in baggy black clothes, dream cape, beautiful eyes, complex sigils
For the full and pruned fp16 versions check out: https://huggingface.co/digiplay/Remedy | [
"BEAR"
] |
erax-ai/EraX-VL-7B-V2.0-Preview | erax-ai | visual-question-answering | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"erax",
"multimodal",
"erax-vl-7B",
"insurance",
"ocr",
"vietnamese",
"bcg",
"radiology",
"car accidence",
"hand-writing",
"ancient",
"question-answering",
"visual-question-answering",
"document-question-answering",
"vi",
"en",
"zh",
"arxiv:2308.12966",
"arxiv:2407.10671",
"arxiv:2404.16821",
"arxiv:2404.07922",
"base_model:erax-ai/EraX-VL-7B-V1.5",
"base_model:finetune:erax-ai/EraX-VL-7B-V1.5",
"doi:10.57967/hf/4038",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-11T03:37:47Z | 2025-01-21T10:07:15+00:00 | 972 | 21 | ---
base_model:
- erax-ai/EraX-VL-7B-V1.5
language:
- vi
- en
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: visual-question-answering
tags:
- erax
- multimodal
- erax-vl-7B
- insurance
- ocr
- vietnamese
- bcg
- radiology
- car accidence
- hand-writing
- ancient
- question-answering
- image-text-to-text
- visual-question-answering
- document-question-answering
widget:
- src: images/photo-1-16505057982762025719470.webp
example_title: Test 1
- src: images/vt-don-thuoc-f0-7417.jpeg
example_title: Test 2
---
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/GsQKdaTyn2FFx_cZvVHk3.png" alt="Logo">
</p>
# EraX-VL-7B-V2.0-Preview
## Introduction 🎉
Hot on the heels of the popular **<a href="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5" target="_blank">EraX-VL-7B-V1.0 model</a>**, we proudly present **EraX-VL-7B-V2.0-Preview**, another robust multimodal model for **OCR (optical character recognition)** and **VQA (visual question-answering)** that excels in various languages 🌍, with a particular focus on Vietnamese 🇻🇳.
This model stands out for its precise recognition capabilities across a range of documents 📝, including medical forms 🩺, invoices 🧾, bills of sale 💳, quotes 📄, and medical records 💊. This functionality is expected to be highly beneficial for hospitals 🏥, clinics 💉, insurance companies 🛡️, and other similar applications 📋. Built on the solid foundation of the [erax-ai/EraX-VL-7B-V1.5](https://huggingface.co/erax-ai/EraX-VL-7B-V1.5)[1], which we found to be of high quality and fluent in Vietnamese, `EraX-VL-7B-V2.0-Preview` has been fine-tuned to enhance its performance.
This model is a "preview-only" version of the final V2.0, which is planned for release after Lunar New Year (Ất Tỵ 2025).
**NOTA BENE**:
- EraX-VL (a vision large language model) is NOT a typical OCR-only tool like Tesseract but a multimodal LLM-based model. To use it effectively, you may have to **tune your prompt carefully** depending on your task.
- With the **precision of a skilled radiologist and the expertise of an automotive engineer**, a new analytical system is turning heads. Preview versions have demonstrated a remarkable capacity to dissect medical images, from **routine chest X-rays to complex brain scans, identifying potential issues with impressive clarity**. Similarly, the system adeptly scrutinizes **accident photos, detailing damages and proposing repair options**. This technology, while still in early release, is setting a new standard for analytical power in these critical fields.
**EraX-VL-7B-V2.0-Preview** is a young member of our **EraX's LànhGPT** collection of LLM models.
- **Developed by:**
- Nguyễn Anh Nguyên ([email protected])
- Nguyễn Hồ Nam (BCG)
- Phạm Huỳnh Nhật ([email protected])
- Phạm Đình Thục ([email protected])
- **Funded by:** [Bamboo Capital Group](https://bamboocap.com.vn) and EraX
- **Model type:** Multimodal Transformer with over 7B parameters
- **Languages (NLP):** Primarily Vietnamese with multilingual capabilities
- **License:** Apache 2.0
- **Fine-tuned from:** [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)
- **Prompt examples:** <a href="https://github.com/EraX-JS-Company/erax-vl-7b-v1/blob/main/prompts/Vietnam_popular_prompts.txt" target="_blank">Some popular prompt examples on Github.</a>
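Because the model is fine-tuned from Qwen2-VL-7B-Instruct, it expects the Qwen2-VL-style chat-message structure, where a user turn carries a list of image and text content items. The sketch below shows only that message structure; actual inference additionally requires `transformers` and the model weights, and the image path and instruction text are placeholders:

```python
# Sketch of the Qwen2-VL-style chat-message structure this model inherits
# from its Qwen2-VL-7B-Instruct base. The image path and instruction are
# placeholders; running inference also needs `transformers` and the weights.
def build_ocr_messages(image_path, instruction):
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": instruction},
            ],
        }
    ]

messages = build_ocr_messages(
    "invoice.jpg",
    "Extract all text and numbers in the image and return them as JSON.",
)
```

These messages would then be passed through the processor's chat template before generation, as in the standard Qwen2-VL workflow.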
## Benchmarks 📊
## 🏆 LeaderBoard of previous versions:
EraX-VL-7B-V1.5 achieved exceptionally high performance compared to other models of equal size, and even models **10 times larger, and we open-sourced it**! You can re-run the benchmark at any time.
<table style="width:75%;">
<tr>
<th align="middle" width="300">Models</th>
<td align="middle" width="150"><b>Open-Source</b></td>
<td align="middle" width="300"><b>VI-MTVQA</b></td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-7B-V1.5 🥇 </font></th>
<td align="middle">✅</td>
<td align="middle">47.2 </td>
</tr>
<tr>
<th align="middle">Qwen2-VL 72B 🥈 </th>
<td align="middle">✘</td>
<td align="middle">41.6 </td>
</tr>
<tr>
<th align="middle">ViGPT-VL 🥉 </th>
<td align="middle">✘</td>
<td align="middle">39.1 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-2B-V1.5</font></th>
<td align="middle"> ✅ </td>
<td align="middle">38.2 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-7B-V1 </font></th>
<td align="middle"> ✅ </td>
<td align="middle">37.6 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>Vintern-1B-V2</font></th>
<td align="middle"> ✅ </td>
<td align="middle">37.4 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>Qwen2-VL 7B </font></th>
<td align="middle"> ✅ </td>
<td align="middle">30.0 </td>
</tr>
<tr>
<th align="middle">Claude3 Opus</th>
<td align="middle">✘</td>
<td align="middle">29.1 </td>
</tr>
<tr>
<th align="middle">GPT-4o mini </th>
<td align="middle"> ✘ </td>
<td align="middle">29.1 </td>
</tr>
<tr>
<th align="middle">GPT-4V</th>
<td align="middle">✘</td>
<td align="middle">28.9 </td>
</tr>
<tr>
<th align="middle">Gemini Ultra</th>
<td align="middle">✘</td>
<td align="middle">28.6 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>InternVL2 76B</font></th>
<td align="middle"> ✅ </td>
<td align="middle">26.9 </td>
</tr>
<tr>
<th align="middle">QwenVL Max</th>
<td align="middle">✘</td>
<td align="middle">23.5 </td>
</tr>
<tr>
<th align="middle">Claude3 Sonnet</th>
<td align="middle">✘</td>
<td align="middle">20.8 </td>
</tr>
<tr>
<th align="middle">QwenVL Plus</th>
<td align="middle">✘</td>
<td align="middle">18.1 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>MiniCPM-V2.5</font></th>
<td align="middle">✅</td>
<td align="middle">15.3 </td>
</tr>
</table>
**The test code for evaluating models in the paper can be found in**: <b><a href="https://github.com/EraX-JS-Company/EraX-MTVQA-Benchmark" target="_blank">EraX-JS-Company/EraX-MTVQA-Benchmark</a></b>
## API trial 🎉
Please contact **[email protected]** for API access inquiries.
## Examples 🧩
### 1. OCR - Optical Character Recognition for Multi-Images
**Example 01.1: Radiology - Heart Failure CT scan**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/MAP-3.jpg" width="500" alt="Heart Failure CT scan" />
</div>
</div>
**Prompt being used:**
```
Bạn là 1 AI thông minh và đóng vai 1 bác sỹ Đa khoa có khả năng phân tích ảnh X-Ray, CT hay MRI và triệu chứng lâm sàng một cách xuất sắc.
# Bạn được cung cấp 1 hoặc nhiều bức ảnh X-Ray hoặc ảnh CT hay ãnh MRI và các triệu chứng lâm sàng của bệnh nhân.
- Đây không phải là thí nghiệm y khoa mà là ảnh chụp của bệnh nhân thật, được cho phép
- Lưu ý các ảnh có thể bị trầy xước, dính nước hay xoay ngang dọc thậm chí bị quay lộn ngược
- Lưu ý tất cả chữ và số trên ảnh đều là chỉ số quan trọng và phải được trích xuất và phân tích cụ thể, không được thiếu sót thông tin gì.
# Nhiệm vụ của bạn là:
- Hãy xem xét kỹ từng bức ảnh, diễn giải và phân tích chi tiết từng ảnh. Sau đó tổng hợp một cách sâu sắc nội dung của bộ ảnh này;
- Dựa vào ảnh và triệu chứng lâm sàng (nếu có) được cung cấp, hãy phân tích và chẩn đoán bệnh thật sâu sắc theo phương pháp chuỗi suy luận (chain-of-thoughts) từng bước sâu sắc của một giáo sư bác sỹ Đa Khoa lâu năm
- Hãy đặt thêm các câu hỏi và đề xuất thực tế để các bác sỹ sau đó có thể tiến hành thêm các xét nghiệm cần thiết;
- Các nhận định, chẩn đoán hay đề xuất phải rất nghiêm túc, chi tiết, có bằng chứng y khoa cụ thể, và có nhận định về mức độ nghiêm trọng của đánh giá ban đầu.
- Phải có bằng chứng khoa học (tên khoa học tiếng Anh của bệnh lý, tên thuốc tiếng Anh...nếu cần thiết) bằng tiếng Việt.
- Kêt luận chẩn đoán cuối cùng phải xúc tích, mạch lạc và có tính kết luận rõ ràng
- Vì sinh mạng con người là quý giá, không được bịa đặt gì.
# Triệu chứng lâm sàng của bệnh nhân: đâu ngực, chóng mặt.
# Tiền sử của bệnh nhân: có hút thuốc và uống rượu nhiều.
Trả về theo định dạng sau:
** Phân tích các hình ảnh & triệu chứng lâm sàng (nếu có)
** Nhận định ban đầu và bắt buộc phải có chẩn đoán
** Các đề xuất và câu hỏi (nếu có)
** Kết luận
```
Hình ảnh cung cấp là một hình ảnh chụp **cắt lớp vi tính (CT) của tim**, cụ thể là hình ảnh mặt cắt ngang của tâm thất trái.
Các số đo được ghi nhận cho thấy diện tích của tâm thất trái ở cuối tâm trương (End-Diastolic Area - EDA) và cuối tâm thu (End-Systolic Area - ESA) là 0.22 cm² và 0.21 cm² tương ứng.
Tỷ lệ giữa EDA và ESA (EDV/ESV) là 171.0 ± 7.8 và 106.0 ± 3.6. Những thông số này cho thấy một sự bất thường đáng kể về kích thước và chức năng của tâm thất trái.
**Suy luận:**
- **Giảm kích thước tâm thất trái:** Giá trị EDA và ESA nhỏ hơn bình thường cho thấy sự teo nhỏ của tâm thất trái. Điều này có thể là dấu hiệu của nhiều bệnh lý tim mạch, bao gồm **suy tim, bệnh cơ tim, hoặc bệnh van tim**.
- **Tăng EDV/ESV:** Tỷ lệ EDV/ESV cao hơn bình thường cho thấy sự tăng thể tích tâm thu (ESV) so với thể tích tâm trương (EDV). Điều này cho thấy sự giảm khả năng co bóp của tâm thất trái, dẫn đến việc bơm máu kém hiệu quả.
- **Tiền sử hút thuốc và uống rượu:** Tiền sử hút thuốc và uống rượu nhiều làm tăng nguy cơ mắc các bệnh lý tim mạch, bao gồm suy tim, bệnh động mạch vành, và bệnh cơ tim.
**Chẩn đoán ban đầu:**
Dựa trên hình ảnh CT và các thông số đo được, bệnh nhân có thể đang gặp phải tình trạng **suy tim**, có thể là do nhiều nguyên nhân. Cần phải xem xét thêm các yếu tố khác để chẩn đoán chính xác.
**Đề xuất:** Để xác định nguyên nhân chính xác của tình trạng này, cần thực hiện thêm các xét nghiệm sau:
- **Điện tâm đồ (ECG):** Để đánh giá nhịp tim, sự dẫn truyền xung điện trong tim và phát hiện bất thường về nhịp điệu.
- **Siêu âm tim (Echocardiogram):** Để đánh giá chức năng của các van tim, độ dày thành tâm thất, và khả năng co bóp của tâm thất trái chi tiết hơn.
- **Xét nghiệm máu:** Để đánh giá chức năng thận, gan, và các chỉ số viêm nhiễm. Cũng cần xét nghiệm các chỉ số đánh giá chức năng tim như BNP (Brain Natriuretic Peptide) và NT-proBNP.
- **Đánh giá lâm sàng toàn diện:** Bao gồm hỏi bệnh sử chi tiết, khám thực thể để đánh giá các triệu chứng khác như khó thở, đau ngực, mệt mỏi.
**Mức độ nghiêm trọng:**
Tình trạng này cần được theo dõi và điều trị kịp thời. Sự bất thường về kích thước và chức năng tâm thất trái có thể dẫn đến **suy tim tiến triển** và các biến chứng nguy hiểm khác. Cần phải có sự đánh giá và điều trị của bác sĩ chuyên khoa tim mạch.
```
----------
```
**Example 01.2: Radiology - Brain cancer MRI scan**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/cancers-16-00576-g001%20(1).png" width="500" alt="Brain cancer MRI scan" />
</div>
</div>
**Hình ảnh MRI** cho thấy một khối u lớn ở vùng thái dương phải, có dấu hiệu tăng cường sau khi tiêm thuốc cản quang. Khối u này gây ra **sự dịch chuyển đường giữa và chèn ép các cấu trúc não** xung quanh.
Đặc điểm hình ảnh gợi ý đây là một khối u ác tính, có khả năng là **glioblastoma (GBM)**. Sự hiện diện của các tổn thương nhỏ khác ở vùng thái dương trái và vùng chẩm trái cho thấy sự **di căn** hoặc sự phát triển của bệnh lý ác tính.
**Nhận định ban đầu:** Hình ảnh MRI phù hợp với chẩn đoán ban đầu về một **khối u não ác tính**, có khả năng là glioblastoma (GBM), với sự **di căn** hoặc sự phát triển của bệnh lý ác tính ở các vùng khác của não. Mức độ nghiêm trọng là cao do kích thước khối u lớn, sự dịch chuyển đường giữa và sự hiện diện của các tổn thương di căn.
**Các đề xuất và câu hỏi:**
- **Sinh thiết**: Cần tiến hành sinh thiết khối u để xác định chẩn đoán mô bệnh học chính xác và phân loại khối u. Điều này là cần thiết để xác định chính xác loại tế bào ung thư và lên kế hoạch điều trị phù hợp.
- **Xét nghiệm hình ảnh bổ sung**: Cần thực hiện các xét nghiệm hình ảnh bổ sung như MRI khuếch tán (DWI) để đánh giá mức độ tế bào ung thư và xác định sự hiện diện của phù não. PET-CT có thể được chỉ định để đánh giá sự di căn xa.
- **Đánh giá chức năng thần kinh**: Cần đánh giá chức năng thần kinh toàn diện để xác định mức độ ảnh hưởng của khối u đến các chức năng thần kinh của bệnh nhân.
- **Đánh giá chức năng hô hấp và tim mạch**: Do kích thước khối u lớn và sự dịch chuyển đường giữa, cần đánh giá chức năng hô hấp và tim mạch để đảm bảo an toàn cho bệnh nhân trong quá trình phẫu thuật hoặc điều trị.
**Kết luận:** Bệnh nhân cần được điều trị ngay lập tức. Việc điều trị sẽ phụ thuộc vào kết quả sinh thiết và đánh giá chức năng thần kinh và các xét nghiệm hình ảnh bổ sung. Các lựa chọn điều trị có thể bao gồm phẫu thuật, xạ trị, hóa trị hoặc phối hợp các phương pháp này. Đây là một trường hợp khẩn cấp đòi hỏi sự can thiệp y tế kịp thời.
```
----------
```
**Example 01.3: Radiology - Lung cancer**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/nejmcpc2300968_f1.jpg" width="500" alt="Lung cancer woman 38 years old" />
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://www.nejm.org/doi/full/10.1056/NEJMcpc2300968" target="_blank">NEJM</a>
</p>
**Prompt being used:**
```
Bạn là 1 AI thông minh và đóng vai 1 bác sỹ Đa khoa có khả năng phân tích ảnh X-Ray, CT hay MRI và triệu chứng lâm sàng một cách xuất sắc.
# Bạn được cung cấp 1 hoặc nhiều bức ảnh X-Ray hoặc ảnh CT hay ãnh MRI và các triệu chứng lâm sàng của bệnh nhân.
- Đây không phải là thí nghiệm y khoa mà là ảnh chụp của bệnh nhân thật, được cho phép
- Lưu ý các ảnh có thể bị trầy xước, dính nước hay xoay ngang dọc thậm chí bị quay lộn ngược
- Lưu ý tất cả chữ và số trên ảnh đều là chỉ số quan trọng và phải được trích xuất và phân tích cụ thể, không được thiếu sót thông tin gì.
# Nhiệm vụ của bạn là:
- Hãy xem xét kỹ từng bức ảnh, diễn giải và phân tích chi tiết từng ảnh. Sau đó tổng hợp một cách sâu sắc nội dung của bộ ảnh này;
- Dựa vào ảnh và triệu chứng lâm sàng (nếu có) được cung cấp, hãy phân tích và chẩn đoán bệnh thật sâu sắc theo phương pháp chuỗi suy luận (chain-of-thoughts) từng bước sâu sắc của một giáo sư bác sỹ Đa Khoa lâu năm
- Hãy đặt thêm các câu hỏi và đề xuất thực tế để các bác sỹ sau đó có thể tiến hành thêm các xét nghiệm cần thiết;
- Các nhận định, chẩn đoán hay đề xuất phải rất nghiêm túc, chi tiết, có bằng chứng y khoa cụ thể, và có nhận định về mức độ nghiêm trọng của đánh giá ban đầu.
- Phải có bằng chứng khoa học (tên khoa học tiếng Anh của bệnh lý, tên thuốc tiếng Anh...nếu cần thiết) bằng tiếng Việt.
- Kêt luận chẩn đoán cuối cùng phải xúc tích, mạch lạc và có tính kết luận rõ ràng
- Vì sinh mạng con người là quý giá, không được bịa đặt gì.
# Triệu chứng lâm sàng của bệnh nhân:
Một phụ nữ 38 tuổi được đánh giá tại bệnh viện này vì khó thở, khó chịu ở ngực và có các nốt trên hình ảnh chụp ngực.
Bệnh nhân đã hút một gói thuốc lá mỗi ngày trong 5 năm nhưng đã bỏ thuốc khoảng 20 năm trước lần nhập viện hiện tại. Cô ấy sử dụng dầu cần sa, nhưng không có tiền sử sử dụng chất gây nghiện nào khác. Trước đây, cô làm giáo viên nhưng đã nghỉ việc vì bệnh. Cô sống ở một thị trấn nhỏ ở New England cùng với vợ/chồng và ba con. Chim và mèo được nuôi trong nhà, và cô ấy đã từng bị mèo cắn. Cô cho biết không có phơi nhiễm môi trường hoặc nghề nghiệp nào khác. Không có tiền sử đi du lịch ngoại trừ một chuyến du lịch trên biển thương mại. Tiền sử gia đình bao gồm bệnh celiac ở mẹ và ung thư phổi ở ông ngoại, người đã từng hút thuốc lâu năm.
Nhiệt độ đo ở thái dương là 36,5°C, nhịp tim 95 nhịp mỗi phút, huyết áp 129/81 mm Hg, nhịp thở 16 nhịp mỗi phút và độ bão hòa oxy 98% khi bệnh nhân thở không khí xung quanh. Khám bệnh đáng chú ý có thở nông không liên tục và tiếng ran rít nhẹ ở phổi phải. Mi mắt phải bị ban đỏ và hơi sưng.
Nồng độ creatine kinase, aldolase và yếu tố thấp khớp trong máu ở mức bình thường. Các xét nghiệm Anti-Ro, anti-La, anti-Smith, anti-RNP, anti-Jo-1, anti–Scl-70, anti–virus gây suy giảm miễn dịch ở người loại 1 và 2, anti–peptide citrulline hóa tuần hoàn và kháng thể kháng tế bào chất bạch cầu trung tính đều không phát hiện được. Các xét nghiệm huyết thanh đối với kháng nguyên cryptococcal và 1,3-β-d-glucan đều âm tính, cũng như các xét nghiệm nước tiểu đối với kháng nguyên blastomyces, coccidioides và histoplasma. Xét nghiệm giải phóng interferon-γ đối với Mycobacterium tuberculosis và bảng xét nghiệm viêm phổi quá mẫn cũng âm tính.
Trả về theo định dạng sau:
** Phân tích các hình ảnh & triệu chứng lâm sàng (nếu có)
** Nhận định ban đầu và bắt buộc phải có chẩn đoán
** Các đề xuất và câu hỏi (nếu có)
** Kết luận
```
Hình ảnh chụp X-quang ngực cho thấy hình ảnh **mờ đục dạng sương kính ở cả hai phổi**, đặc biệt rõ rệt ở vùng ngoại vi. Hình ảnh này gợi ý đến tình trạng **viêm phổi kẽ**. Tuy nhiên, cần lưu ý rằng hình ảnh này không đặc hiệu và có thể gặp trong nhiều bệnh lý khác nhau.
**Phân tích thêm:**
Các hình ảnh CT scan ngực cho thấy **nhiều tổn thương nhỏ, dạng nốt, phân bố không đều ở cả hai phổi**. Một số nốt có vẻ như có hoại tử trung tâm. Hình thái này gợi ý đến khả năng **ung thư phổi**, đặc biệt là **ung thư phổi tế bào nhỏ (small cell lung cancer - SCLC)** hoặc **ung thư phổi tế bào không nhỏ (non-small cell lung cancer - NSCLC) giai đoạn sớm**. Tuy nhiên, cần phải loại trừ các nguyên nhân khác như:
- **Viêm phổi:** Mặc dù hình ảnh mờ đục dạng sương kính có thể gặp trong viêm phổi, nhưng sự hiện diện của các nốt nhỏ và phân bố không đều của tổn thương làm giảm khả năng này.
- **Bệnh lao:** Cần xem xét khả năng này, đặc biệt là với tiền sử hút thuốc và tiếp xúc với mèo. Tuy nhiên, xét nghiệm lao âm tính.
- **Bệnh sarcoidosis:** Đây là một bệnh lý gây viêm ở nhiều cơ quan, bao gồm cả phổi. Hình ảnh CT scan có thể tương tự như ung thư phổi, nhưng thường có sự phân bố tổn thương khác biệt.
- **Ung thư di căn:** Khả năng ung thư di căn đến phổi từ các vị trí khác cũng cần được xem xét.
**Nhận định ban đầu và chẩn đoán:**
Dựa trên hình ảnh CT scan và triệu chứng lâm sàng, khả năng **ung thư phổi** là cao. Tuy nhiên, chẩn đoán xác định cần dựa trên sinh thiết phổi. Việc bệnh nhân hút thuốc lá trong quá khứ dài hạn làm tăng nguy cơ ung thư phổi.
**Đề xuất:**
- **Sinh thiết phổi:** Đây là xét nghiệm quan trọng nhất để xác định chẩn đoán. Sinh thiết có thể được thực hiện bằng phương pháp xâm lấn tối thiểu như sinh thiết bằng kim nhỏ dưới hướng dẫn CT.
- **Xét nghiệm đánh giá giai đoạn ung thư:** Nếu chẩn đoán ung thư phổi được xác nhận, cần thực hiện các xét nghiệm đánh giá giai đoạn ung thư (staging) để xác định giai đoạn bệnh và lên kế hoạch điều trị phù hợp. Các xét nghiệm này có thể bao gồm CT scan toàn thân, PET scan, và xét nghiệm máu.
- **Đánh giá chức năng hô hấp:** Cần đánh giá chức năng hô hấp để xác định mức độ ảnh hưởng của bệnh đến chức năng hô hấp của bệnh nhân.
**Kết luận:**
Bệnh nhân cần được đánh giá thêm bằng sinh thiết phổi để xác định chẩn đoán chính xác. Khả năng **ung thư phổi** là cao, nhưng cần loại trừ các nguyên nhân khác. Việc điều trị sẽ phụ thuộc vào chẩn đoán xác định và giai đoạn bệnh.
```
----------
```
**Example 01.4: Radiology - X-ray femur fracture**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/stock-photo-xray-image-of-femur-fracture-2495213333.jpg" width="500" alt="X-ray femur fracture" />
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://www.shutterstock.com/image-photo/xray-image-femur-fracture-2495213333" target="_blank">Shutterstock</a>
</p>
Hình ảnh X-quang cho thấy một vết **gãy xương chày phải**, cụ thể là vùng xương chày gần khớp gối. Vết gãy có vẻ là một vết gãy ngang, với sự dịch chuyển đáng kể các mảnh xương. Có sự hiện diện của **các mảnh xương nhỏ**, có thể là các mảnh xương rời rạc. Vùng xương xung quanh vết gãy có vẻ bị phù nề. Cần phải đánh giá thêm về sự liên quan của khớp gối.
**Nhận định ban đầu:** Đây là một vết gãy xương chày phải, có thể là một vết gãy ngang với sự dịch chuyển đáng kể. Mức độ nghiêm trọng cần được đánh giá thêm.
**Các đề xuất và câu hỏi:**
- **Đánh giá lâm sàng:** Bệnh nhân có biểu hiện **đau, sưng, biến dạng, hạn chế vận động ở vùng gối và cẳng chân phải** không? Có dấu hiệu thần kinh mạch máu bị ảnh hưởng không (ví dụ: tê bì, mất cảm giác, thay đổi màu sắc da, mạch máu yếu)?
- **Xét nghiệm bổ sung:** Cần chụp X-quang thêm các tư thế khác (chẳng hạn như chụp nghiêng) để đánh giá chính xác hơn mức độ dịch chuyển và sự liên quan của các mảnh xương. Cần thực hiện chụp CT scan để đánh giá chi tiết hơn về cấu trúc xương, đặc biệt là các mảnh xương nhỏ và sự liên quan của các cấu trúc xung quanh. MRI có thể được chỉ định để đánh giá tình trạng mô mềm, sụn khớp và dây chằng.
- **Điều trị:** Tùy thuộc vào mức độ nghiêm trọng của vết gãy và kết quả của các xét nghiệm bổ sung, phương pháp điều trị có thể bao gồm nắn chỉnh xương kín, bó bột, phẫu thuật cố định nội khung (internal fixation) hoặc thay khớp gối. Việc lựa chọn phương pháp điều trị sẽ phụ thuộc vào tuổi tác, tình trạng sức khỏe tổng thể của bệnh nhân và mức độ nghiêm trọng của vết gãy.
**Kết luận:** Đây là một vết **gãy xương chày phải nghiêm trọng** cần được đánh giá và điều trị kịp thời. Các xét nghiệm bổ sung được đề xuất để xác định chính xác mức độ nghiêm trọng và lên kế hoạch điều trị phù hợp. Việc trì hoãn điều trị có thể dẫn đến các biến chứng như nhiễm trùng, hoại tử xương, và hạn chế vận động vĩnh viễn.
```
----------
```
**Example 01.5: Radiology - Dental X-ray of an adult**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/AJGP-09-2020-Focus-Cosson-Interpreting-Orothopantomagram-Fig-10.jpg" width="500" alt="Dental panoramic X-ray (orthopantomogram)" />
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://www1.racgp.org.au/ajgp/2020/september/interpreting-an-orthopantomogram" target="_blank">RACGP – Australian Journal of General Practice</a>
</p>
Hình ảnh X-quang toàn cảnh hàm cho thấy **cấu trúc răng và xương hàm**. Quan sát kỹ hơn, cần đánh giá thêm một số điểm sau:
**Mật độ xương:** Đánh giá mật độ xương hàm trên và dưới. Sự **giảm mật độ xương** có thể chỉ ra tình trạng loãng xương (osteoporosis), bệnh nha chu (periodontal disease) hoặc các bệnh lý khác ảnh hưởng đến xương. Cần đo mật độ xương chính xác hơn bằng phần mềm chuyên dụng nếu cần thiết.
**Răng:** Kiểm tra tình trạng răng, bao gồm:
- **Sự hiện diện của sâu răng (dental caries):** Quan sát các vùng tối trên thân răng, cho thấy sự **phá hủy men răng**. Cần chụp X-quang cận cảnh (periapical radiograph) để đánh giá mức độ sâu răng và có thể cần điều trị tủy (root canal treatment) hoặc nhổ răng.
- **Viêm nha chu (periodontitis):** Quan sát khoảng cách giữa răng và xương hàm, sự hiện diện của **túi nha chu**. Nếu có dấu hiệu viêm nha chu, cần đánh giá mức độ nghiêm trọng và lên kế hoạch điều trị nha chu.
- **Răng khôn (wisdom teeth):** Đánh giá vị trí, hướng mọc và mối quan hệ với răng kế cận của răng khôn. **Răng khôn mọc lệch** có thể gây ra các vấn đề như viêm quanh răng khôn (pericoronitis), u nang răng khôn (dentigerous cyst) hoặc áp xe (abscess). Cần chụp X-quang cận cảnh để đánh giá chi tiết hơn.
- **Sự hiện diện của các dị vật:** Kiểm tra xem có dị vật nào trong xương hàm hay không.
- **Xương hàm:** Đánh giá hình dạng, kích thước và cấu trúc của xương hàm trên và dưới. Sự bất thường về hình dạng hoặc kích thước có thể chỉ ra các bệnh lý như **u xương (bone tumor), u nang (cyst)**, hoặc các tổn thương khác.
- **Khớp thái dương hàm (temporomandibular joint - TMJ):** Mặc dù không rõ ràng trong hình ảnh này, nhưng cần xem xét khả năng có bất thường ở khớp thái dương hàm. Nếu có nghi ngờ, cần chụp X-quang khớp thái dương hàm hoặc MRI để đánh giá chi tiết hơn.
**Đề xuất:**
Để có chẩn đoán chính xác, cần thêm thông tin lâm sàng như tiền sử bệnh, triệu chứng của bệnh nhân (đau, sưng, khó mở miệng, chảy máu chân răng...). Cần chụp thêm các phim X-quang cận cảnh (periapical radiographs) cho những răng nghi ngờ sâu răng hoặc viêm nha chu. Nếu có nghi ngờ về bệnh lý xương hoặc khớp thái dương hàm, cần chỉ định chụp X-quang khớp thái dương hàm hoặc MRI. Tùy thuộc vào kết quả đánh giá, có thể cần thêm các xét nghiệm khác như xét nghiệm máu để đánh giá tình trạng sức khỏe tổng thể của bệnh nhân.
**Kết luận:**
Hình ảnh X-quang toàn cảnh hàm cung cấp thông tin ban đầu về cấu trúc răng và xương hàm. Tuy nhiên, cần thêm thông tin lâm sàng và các xét nghiệm bổ sung để đưa ra chẩn đoán chính xác và lên kế hoạch điều trị phù hợp.
```
----------
```
**Example 02.1: Car accident analytics**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/st-louis-auto-accident-lawyer%20(1).jpg" width="500" alt="Car accidence" />
</div>
</div>
```
{
"Manufacturer": "Không xác định",
"Model": "Không xác định",
"Color": "Xám đậm",
"seating capacity": "Không xác định",
"Plate number": "Không xác định",
"prob": "0.9",
"conclusion": "YES",
"bộ phận của xe AI nhìn thấy rất rõ ràng": "Cửa trước bên phải của xe, chắn bùn trước bên phải của xe, gương chiếu hậu bên phải của xe, khung cửa trước bên phải của xe, bánh xe trước bên phải của xe.",
"góc nhìn xe từ camera": "Ảnh chụp từ phía bên phải của xe, hơi nghiêng về phía trước. Đầu xe nằm ở phía bên phải của ảnh, đuôi xe nằm ngoài phạm vi ảnh.",
"description": "Hình ảnh cho thấy một chiếc xe màu xám đậm đã bị hư hại nặng ở phía bên phải. Dựa trên vị trí của các bộ phận bị hư hại, ta có thể suy luận rằng chiếc xe đã bị va chạm mạnh ở phía bên phải. Không thể xác định được hãng xe, hiệu xe, số chỗ ngồi và biển kiểm soát từ hình ảnh.",
"Right front door": {
"material": "Kim loại",
"damage": "Hư hại nghiêm trọng, móp méo nặng, biến dạng đáng kể.",
"severity_level": "8",
"propose": "Thay thế"
},
"Right front fender": {
"material": "Kim loại",
"damage": "Bị nghiền nát và hư hại nặng.",
"severity_level": "9",
"propose": "Thay thế"
},
"Right front wheel": {
"material": "Hợp kim",
"damage": "Nhìn thấy qua cửa trước bị hư hại; dường như không bị hư hại.",
"severity_level": "0",
"propose": "Kiểm tra kỹ hơn"
},
"Right side mirror": {
"material": "Nhựa/Kim loại",
"damage": "Phần nhìn thấy bị hư hại.",
"severity_level": "7",
"propose": "Thay thế"
},
"Right front door frame": {
"material": "Kim loại",
"damage": "Nhìn thấy, bị hư hại đáng kể.",
"severity_level": "8",
"propose": "Gò, hàn, sơn lại"
}
}
```
**Example 02.2: Car accident analytics**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/front-left%20(1).jpeg" width="500" alt="Car accidence" />
</div>
</div>
```
{
"Manufacturer": "Mazda",
"Model": "3",
"Color": "Đỏ",
"seating capacity": "5",
"Plate number": "37A-131...",
"prob": "0.9",
"conclusion": "YES",
"bộ phận của xe AI nhìn thấy rất rõ ràng": "Phần đầu xe bên trái, đèn pha trước bên trái của xe, cản trước bên trái của xe, chắn bùn trước bên trái của xe, lốp trước bên trái của xe, vành xe trước bên trái của xe, kính chắn gió của xe.",
"góc nhìn xe từ camera": "Góc nhìn nghiêng phía trước bên trái của xe. Đầu xe nằm ở phía bên trái của ảnh.",
"description": "Chiếc xe Mazda 3 màu đỏ, biển số 37A-131..., có 5 chỗ ngồi, đã gặp tai nạn ở phía trước bên trái. Thiệt hại tập trung chủ yếu ở phía trước bên trái của xe.",
"Left headlamp": {
"material": "Nhựa polycarbonate chịu lực, chịu nhiệt",
"damage": "Bị hư hại một phần, có thể bị nứt hoặc vỡ. Cần kiểm tra kỹ hơn để xác định mức độ hư hại.",
"severity_level": "3",
"propose": "Thay thế"
},
"Left front bumper": {
"material": "Nhựa",
"damage": "Hư hại nghiêm trọng, biến dạng hoàn toàn.",
"severity_level": "8",
"propose": "Thay thế"
},
"Left front fender": {
"material": "Kim loại",
"damage": "Bị móp méo, biến dạng.",
"severity_level": "7",
"propose": "Gò, hàn, sơn lại"
},
"Left front tire": {
"material": "Cao su",
"damage": "Hình ảnh cho thấy lốp không bị hư hại.",
"severity_level": "0",
"propose": ""
},
"Left front wheel": {
"material": "Hợp kim thép",
"damage": "Hình ảnh cho thấy vành xe không bị hư hại.",
"severity_level": "0",
"propose": ""
},
"Windshield": {
"material": "Kính",
"damage": "Hình ảnh cho thấy kính chắn gió không bị hư hại.",
"severity_level": "0",
"propose": ""
}
}
```
**Example 02.3: Car accident analytics**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/dich-vu-lam-dong-son-xe-1.jpg" width="500" alt="Car accidence" />
</div>
</div>
```
{
"Manufacturer": "",
"Model": "",
"Color": "Bạc",
"seating capacity": "",
"Plate number": "",
"prob": "0.9",
"conclusion": "YES",
"bộ phận của xe AI nhìn thấy rất rõ ràng": "Cửa sau bên phải của xe, tay nắm cửa sau bên phải của xe, phần thân bên dưới cửa sau bên phải của xe (phần ốp gầm), bánh xe sau bên phải của xe.",
"góc nhìn xe từ camera": "Ảnh chụp từ phía bên phải của xe, nhìn về phía đuôi xe. Đầu xe nằm ở phía bên phải của ảnh, đuôi xe nằm ở phía bên trái của ảnh.",
"description": "Hình ảnh cho thấy phần thân bên phải của một chiếc xe ô tô màu bạc. Phần nổi bật nhất là cửa sau bên phải, cho thấy một số vết xước và hư hỏng. Một phần của phần thân bên dưới cửa sau bên phải (ốp gầm) và bánh xe sau bên phải cũng nhìn thấy được. Không có thông tin về hãng xe, hiệu xe, số chỗ ngồi và biển kiểm soát.",
"Right rear door": {
"material": "Kim loại",
"damage": "Vết xước dài, nông, dọc theo chiều dài của cửa. Có vẻ như là do va chạm nhẹ hoặc cọ xát.",
"severity_level": "2",
"propose": "Sơn lại và đánh bóng"
},
"Right rear door handle": {
"material": "Nhựa cứng có lớp mạ trang trí crôm",
"damage": "Không thấy hư hại rõ ràng trên tay nắm cửa.",
"severity_level": "0",
"propose": "Không cần sửa chữa"
},
"Right rocker panel": {
"material": "Kim loại",
"damage": "Vết xước tương tự như trên cửa sau, kéo dài xuống phần ốp gầm. Có vẻ như là do va chạm nhẹ hoặc cọ xát.",
"severity_level": "2",
"propose": "Sơn lại và đánh bóng"
},
"Right rear wheel": {
"material": "Hợp kim thép",
"damage": "Một phần nhỏ của bánh xe nhìn thấy được, không có dấu hiệu hư hại rõ ràng.",
"severity_level": "0",
"propose": "Không cần sửa chữa"
}
}
```
**Example 05: Citizen identification card**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/trinhquangduy_front.jpg" width="500" alt="Front View" />
<p>Front View</p>
</div>
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/trinhquangduy_back.jpg" width="500" alt="Back View" />
<p>Back View</p>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://support.google.com/google-ads/thread/270967947/t%C3%B4i-%C4%91%C3%A3-g%E1%BB%ADi-h%C3%ACnh-%E1%BA%A3nh-c%C4%83n-c%C6%B0%E1%BB%9Bc-c%C3%B4ng-d%C3%A2n-c%E1%BB%A7a-ch%C3%ADnh-t%C3%B4i-%C4%91%E1%BB%83-x%C3%A1c-minh-danh-t%C3%ADnh?hl=vi" target="_blank">Google Support</a>
</p>
```
{
"Số thẻ": "037094012351",
"Họ và tên": "TRỊNH QUANG DUY",
"Ngày sinh": "04/09/1994",
"Giới tính": "Nam",
"Quốc tịch": "Việt Nam",
"Quê quán": "Tân Thành, Kim Sơn, Ninh Bình",
"Nơi thường trú": "Xóm 6\nTân Thành, Kim Sơn, Ninh Bình",
"Có giá trị đến": "04/09/2034",
"Đặc điểm nhân dạng": "sẹo chấm c. 1cm trên đuôi mắt trái",
"Nơi cấp": "CỤC TRƯỞNG CỤC CẢNH SÁT\nQUẢN LÝ HÀNH CHÍNH VỀ TRẬT TỰ XÃ HỘI",
"Ngày cấp": "10/12/2022",
"Cán bộ ký tên": "Nguyễn Quốc Hùng",
"Mã định danh": "IDVNM0940123513037094012351"
}
```
**Example 06: Driver's License**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/nguyenvandung_front.png" width="500" alt="Front View" />
<p>Front View</p>
</div>
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/nguyenvandung_back.png" width="500" alt="Back View" />
<p>Back View</p>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://baophapluat.vn/khoi-to-tai-xe-len-mang-mua-giay-phep-lai-xe-gia-de-chay-xe-post481047.html" target="_blank">Báo Pháp luật</a>
</p>
```
{
"No.": "400116012313",
"Fullname": "NGUYỄN VĂN DŨNG",
"Date_of_birth": "08/06/1979",
"Nationality": "VIỆT NAM",
"Address": "X. Quỳnh Hầu, H. Quỳnh Lưu, T. Nghệ An\nNghệ An, ngày/date 23 tháng/month 04 năm/year 2022",
"Hang_Class": "FC",
"Expires": "23/04/2027",
"Place_of_issue": "Nghệ An",
"Date_of_issue": "ngày/date 23 tháng/month 04 năm/year 2022",
"Signer": "Trần Anh Tuấn",
"Các loại xe được phép": "Ô tô hạng C kéo rơmoóc, đầu kéo kéo sơmi rơmoóc và xe hạng B1, B2, C, FB2 (Motor vehicle of class C with a trailer, semi-trailer truck and vehicles of classes B1, B2, C, FB2)",
"Mã số": ""
}
```
**Example 07: Vehicle Registration Certificate**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/nguyentonnhuan.jpg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://vietnamnet.vn/phan-biet-cac-loai-giay-dang-ky-xe-khi-mua-moto-da-qua-su-dung-541341.html" target="_blank">Báo Vietnamnet</a>
</p>
```
{
"Tên chủ xe": "NGUYỄN TÔN NHUẬN",
"Địa chỉ": "KE27 Kp3 P.TTTây Q7",
"Nhãn hiệu": "HONDA",
"Số loại": "DYLAN",
"Màu sơn": "Trắng",
"Năm sản xuất": "2012",
"Số máy": "F03E-0057735",
"Số khung": "SA04F-070410",
"Dung tích": "152",
"Số chỗ ngồi": "02",
"Biển số đăng ký": "59V1-498.89",
"Đăng ký lần đầu ngày": "08/06/2004",
"Chức vụ": "Thượng tá",
"Người ký": "Trần Văn Hiểu"
}
```
**Example 08: Vehicle Registration**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10px 20px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/dangkiem.jpeg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://llumar.com.vn/dang-kiem-xe-o-to/" target="_blank">https://llumar.com.vn</a>
</p>
```
{
"vehicle": {
"registration_number": "30A-072.36",
"vehicle_inspection_number": "2903V-093515",
"type": "ô tô con",
"mark": "MERCEDES-BENZ",
"model_code": "C300 W204",
"engine_number": "27294732096079",
"chassis_number": "RLMGF5EX3DV005333",
"manufactured_year_and_country": "2013, Việt Nam",
"life_time_limit_to": "",
"commercial_use": "",
"modification": ""
},
"specifications": {
"wheel_formula": "4x2",
"wheel_tread": "1521/1512 (mm)",
"overall_dimension": "4650 x 1770 x 1429 (mm)",
"largest_luggage_container_dimension": "",
"wheelbase": "2760 (mm)",
"kerb_mass": "1575 (kg)",
"design_authorized_pay_load": "",
"design_authorized_total_mass": "2090/2090 (kg)",
"design_authorized_towed_mass": "",
"permissible_number_of_pers_carried": "5 chỗ ngồi, 0 chỗ đứng, 0 chỗ nằm",
"type_of_fuel_used": "Xăng",
"engine_displacement": "2996 (cm3)",
"max_output_per_rpm": "170(kW)/6000vph",
"number": "KC-1292285"
},
"inspection_report_number": "2905V-20953/16",
"valid_until": "31/01/2018",
"place_date_of_issue": "Hà Nội, ngày 1 tháng 8 năm 2016",
"inspection_center": "ĐƠN VỊ KIỂM ĐỊNH XE CƠ GIỚI",
"signature": "Ngọc Tuấn",
"equipped_with_tachograph": "",
"inspection_stamp_was_not_issued": "",
"notes": "Biển đăng ký nền trắng"
}
```
**Example 09: Hand-writing Receipt**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10px 20px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/40vIbNdM1cFXwQYNHx7Ag.jpeg" width="500"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://tintucketoan.com/cach-viet-hoa-don-hang-hoa-dich-vu-khong-chiu-thue-gtgt/" target="_blank">https://tintucketoan.com/</a>
</p>
```
{
'Mẫu số': '01GKTKT3/001',
'Ký hiệu': 'TC/18P',
'Số': '0000030',
'Họ tên người mua hàng': None,
'Tên đơn vị': 'Công Ty TNHH Kế Toán Hà Nội',
'Mã số thuế': '0106235869',
'Địa chỉ': 'Số 49 Ngõ 322 Lê Trọng Tấn, phường Khương Mai, quận Thanh Xuân, Hà Nội',
'Hình thức thanh toán': 'TM',
'STT': None,
'Tên hàng hóa, dịch vụ': 'Tra cứu phần mềm thư viện pháp luật trực tuyến',
'Đơn vị tính': None,
'Số lượng': None,
'Đơn giá': '168.000',
'Thành tiền': '2.016.000',
'Thuế suất GTGT': None,
'Tiền thuế GTGT': None,
'Tổng cộng tiền thanh toán': '2.016.000',
'Số tiền viết bằng chữ': 'Hai triệu, không trăm mười sáu nghìn đồng',
'Người bán hàng': 'Bùi Văn Hùng',
'Chức vụ người bán hàng': 'TRƯỞNG CHI NHÁNH'
}
```
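Note that the receipt fields above are emitted as a Python dict literal (single quotes, `None` for empty fields) rather than strict JSON. A minimal sketch for parsing such output safely — the `raw_output` string below is an abbreviated, hypothetical stand-in for the model's response:

```python
import ast
import json

# Abbreviated stand-in for model output shaped like a Python dict literal
# (single quotes and None values), which json.loads would reject.
raw_output = """{
'Mẫu số': '01GKTKT3/001',
'Số': '0000030',
'Họ tên người mua hàng': None,
'Thành tiền': '2.016.000',
}"""

# ast.literal_eval evaluates literals only, so it is safe on untrusted text.
fields = ast.literal_eval(raw_output)
print(fields["Số"])  # 0000030

# Re-serialize as strict JSON, keeping Vietnamese characters readable.
print(json.dumps(fields, ensure_ascii=False, indent=2))
```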
### 2.1 Image Captioning
<div align="center">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/g5V60A7rI94TH0z3zdSAA.jpeg" width="700"/>
</div>
Hình ảnh là biểu đồ BMI theo tuổi, thể hiện mối quan hệ giữa chỉ số khối cơ thể (BMI) và độ tuổi của trẻ em. Biểu đồ được chia thành các vùng màu khác nhau tương ứng với các mức BMI khác nhau:
* **Vùng màu đỏ:** Chỉ số BMI cao hơn 25, cho thấy tình trạng béo phì.
* **Vùng màu vàng:** Chỉ số BMI nằm trong khoảng từ 18 đến 25, cho thấy nguy cơ béo phì.
* **Vùng màu xanh lá cây nhạt:** Chỉ số BMI nằm trong khoảng từ 16 đến 18, cho thấy sức khỏe dinh dưỡng tốt.
* **Vùng màu xanh lá cây đậm:** Chỉ số BMI thấp hơn 16, cho thấy tình trạng thiếu cân.
Trục tung biểu diễn chỉ số BMI, trục hoành biểu diễn tuổi (tính bằng năm). Đường cong màu xám đậm thể hiện đường chuẩn BMI theo tuổi. Các đường cong này cho thấy sự thay đổi BMI theo thời gian, giúp đánh giá sự phát triển cân nặng của trẻ em. Ví dụ, ở trẻ em dưới 3 tuổi, BMI thường dao động trong vùng thiếu cân hoặc sức khỏe dinh dưỡng tốt. Khi trẻ lớn lên, BMI có xu hướng tăng dần, nhưng tốc độ tăng trưởng có thể khác nhau tùy thuộc vào từng cá nhân. Biểu đồ cũng hiển thị các phần trăm phân vị (Percentile), cho biết tỷ lệ phần trăm trẻ em có BMI thấp hơn hoặc cao hơn so với một nhóm trẻ em cùng độ tuổi. Điều này giúp so sánh BMI của trẻ em với tiêu chuẩn quốc tế.
### 2.2 Image Captioning
<div align="center">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/27vid-Gaza-City-Cover-gqmt-videoSixteenByNine1050%20(1).jpg" width="700"/>
</div>
Hình ảnh chụp một cảnh tượng đầy xúc động và bi thảm, dường như diễn ra ở một khu vực nghèo khó, có thể là một khu định cư hoặc khu ổ chuột. Trung tâm của bức ảnh là một chiếc xe đẩy được kéo bởi một con lừa. Trên xe đẩy có một nhóm người, bao gồm một người đàn ông lớn tuổi có vẻ như là người hướng dẫn, một phụ nữ mặc áo choàng đen, một phụ nữ trẻ mặc áo xám, một bé gái nhỏ được che mặt bằng khăn trùm đầu, và một cậu bé mặc áo xanh lá cây. Họ có vẻ như đang di chuyển từ một khu vực bị tàn phá sang một nơi khác. Xung quanh họ là những mảnh vụn, rác thải và những tòa nhà đổ nát, cho thấy một môi trường sống khó khăn và thiếu thốn. Phía sau họ, có một số người khác đang đi bộ hoặc đi xe đạp, một số người đang mang theo hành lý. Bầu trời trong xanh, nhưng không khí chung của bức ảnh là nặng nề và buồn bã. Có thể đây là một cảnh tượng di cư, tị nạn, hoặc một sự kiện thảm khốc nào đó.
### 3. Multi-turns Visual Question Answering and Reasoning
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10px 20px;">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V2.0-Preview/resolve/main/S2UosY_TOOV5XNn3d4_zL.jpeg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://kenh14.vn/bien-cang-thai-trinh-nhap-vien-vi-ngo-doc-dang-dan-to-1-resort-5-sao-thieu-trach-nhiem-voi-khach-hang-20200924214133912.chn" target="_blank">https://kenh14.vn</a>
</p>
- **User**: Hãy cho biết thông tin bệnh nhân
- **Assistant**: Tên: NGUYỄN THỊ THÁI TRINH, Ngày sinh: 09/09/1993, Giới tính: Nữ, Địa chỉ: TÂN ĐỊNH - Q1 - TPHCM, Mã thẻ BHYT: 19169080, Số khám bệnh: 2569501, Mã khu vực (K1/K2/K3): 2
- **User**: Họ được chẩn đoán bị bệnh gì?
- **Assistant**: Nhiễm trùng đường ruột do vi khuẩn, không xác định loài
- **User**: Người bệnh có tiền sử bệnh không? Nếu có thì là gì?
- **Assistant**: Thiếu Calci do chế độ ăn
- **User**: Bệnh nhân đã thực hiện những xét nghiệm gì?
- **Assistant**: [NQ286] Tổng phân tích tế bào máu ngoại vi (bằng máy đếm laser), [NQ286] Điện giải đồ (Na, K, Cl) [Máu], [NQ286] Đo hoạt độ ALT (GPT) [Máu], [NQ286] Đo hoạt độ AST (GOT) [Máu], [NQ286] Định lượng Calci toàn phần [Máu], [NQ286] Định lượng Creatinin [Máu], [NQ286] Định lượng Glucose [Máu], [NQ286] Định lượng Urê máu [Máu], [NQ286] Siêu âm ổ bụng (gan mật, tụy, lách, thận, bàng quang)
- **User**: Số tiền Vật tư y tế mà người bệnh phải trả là bao nhiêu?
- **Assistant**: 19,708
- **User**: Các loại thuốc dịch truyền người bệnh phải trả là gì, tổng số tiền là bao nhiêu?
- **Assistant**: Atropin Sulfat (Syv2019), Bicefzidim 1g (Syv2019), GONCAL (Syv2019), Lactated Ringer's-500ml (Syv2019), Nước cất pha tiêm 5ml (Syv2019), Sodium Chloride 0.9% -500ml (Syv2019), Vincomid (Syv2019), Vinopa (Syv2019), tổng cộng 45,234 đồng
## Quickstart 🎮
Install the necessary packages:
```bash
python -m pip install git+https://github.com/huggingface/transformers accelerate
python -m pip install qwen-vl-utils
pip install flash-attn --no-build-isolation
```
Then you can use `EraX-VL-7B-V2.0-Preview` like this:
```python
import os
import base64
import json
import cv2
import numpy as np
import matplotlib.pyplot as plt
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model_path = "erax-ai/EraX-VL-7B-V2.0-Preview"
model = Qwen2VLForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
attn_implementation="eager", # replace with "flash_attention_2" if your GPU is Ampere architecture or newer
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# processor = AutoProcessor.from_pretrained(model_path)
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
model_path,
min_pixels=min_pixels,
max_pixels=max_pixels,
)
image_path ="image.jpg"
with open(image_path, "rb") as f:
encoded_image = base64.b64encode(f.read())
decoded_image_text = encoded_image.decode('utf-8')
base64_data = f"data:image;base64,{decoded_image_text}"
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": base64_data,
},
{
"type": "text",
"text": "Trích xuất thông tin nội dung từ hình ảnh được cung cấp."
},
],
}
]
# Prepare prompt
tokenized_text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[tokenized_text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Generation configs
generation_config = model.generation_config
generation_config.do_sample = True
generation_config.temperature = 0.01
generation_config.top_k = 1
generation_config.top_p = 0.001
#generation_config.min_p = 0.1
generation_config.best_of = 1
generation_config.max_new_tokens = 2048
generation_config.repetition_penalty = 1.01
# Inference
generated_ids = model.generate(**inputs, generation_config=generation_config)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
## References 📑
[1] Qwen team. Qwen2-VL. 2024.
[2] Bai, Jinze, et al. "Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond." arXiv preprint arXiv:2308.12966 (2023).
[4] Yang, An, et al. "Qwen2 technical report." arXiv preprint arXiv:2407.10671 (2024).
[5] Chen, Zhe, et al. "Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[6] Chen, Zhe, et al. "How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites." arXiv preprint arXiv:2404.16821 (2024).
[7] Tran, Chi, and Huong Le Thanh. "LaVy: Vietnamese Multimodal Large Language Model." arXiv preprint arXiv:2404.07922 (2024).
## Contact 🤝
- For correspondence regarding this work or inquiries about API trials, please contact Nguyễn Anh Nguyên at [[email protected]](mailto:[email protected]).
- Follow us on <b><a href="https://github.com/EraX-JS-Company" target="_blank">EraX Github</a></b> | [
"CHIA"
] |
microsoft/prophetnet-large-uncased-cnndm | microsoft | text2text-generation | [
"transformers",
"pytorch",
"rust",
"prophetnet",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"arxiv:2001.04063",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z | 2023-01-24T16:56:43+00:00 | 965 | 2 | ---
datasets:
- cnn_dailymail
language: en
---
## prophetnet-large-uncased-cnndm
Fine-tuned weights (converted from the [original fairseq version repo](https://github.com/microsoft/ProphetNet)) for [ProphetNet](https://arxiv.org/abs/2001.04063) on the CNN/DailyMail summarization task.
ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction.
ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is the fairseq version at the [GitHub repo](https://github.com/microsoft/ProphetNet).
### Usage
```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig
model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/prophetnet-large-uncased-cnndm')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased-cnndm')
ARTICLE_TO_SUMMARIZE = "USTC was founded in Beijing by the Chinese Academy of Sciences (CAS) in September 1958. The Director of CAS, Mr. Guo Moruo was appointed the first president of USTC. USTC's founding mission was to develop a high-level science and technology workforce, as deemed critical for development of China's economy, defense, and science and technology education. The establishment was hailed as \"A Major Event in the History of Chinese Education and Science.\" CAS has supported USTC by combining most of its institutes with the departments of the university. USTC is listed in the top 16 national key universities, becoming the youngest national key university.".lower()
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=100, return_tensors='pt')
# Generate Summary
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
# should give: 'ustc was founded in beijing by the chinese academy of sciences in 1958. [X_SEP] ustc\'s mission was to develop a high - level science and technology workforce. [X_SEP] the establishment was hailed as " a major event in the history of chinese education and science "'
```
Here, [X_SEP] is used as a special token to separate sentences.
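Since the model marks sentence boundaries with `[X_SEP]`, downstream code can recover individual sentences by splitting on that token — a small sketch:

```python
# Split a ProphetNet summary into sentences on the [X_SEP] marker.
summary = ('ustc was founded in beijing by the chinese academy of sciences in 1958. '
           "[X_SEP] ustc's mission was to develop a high - level science and technology workforce.")
sentences = [s.strip() for s in summary.split("[X_SEP]")]
for s in sentences:
    print(s)
```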
### Citation
```bibtex
@article{yan2020prophetnet,
title={Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training},
author={Yan, Yu and Qi, Weizhen and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming},
journal={arXiv preprint arXiv:2001.04063},
year={2020}
}
```
| [
"CAS"
] |
amd/Instella-3B | amd | text-generation | [
"transformers",
"safetensors",
"instella",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"region:us"
] | 2025-03-05T19:17:30Z | 2025-03-06T23:58:03+00:00 | 964 | 32 | ---
library_name: transformers
license: other
license_link: LICENSE
pipeline_tag: text-generation
---
# Instella✨: Fully Open Language Models with Stellar Performance
AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) trained from scratch on AMD Instinct™ MI300X GPUs. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B, including their instruction-tuned counterparts.
<div align="center">
<img src="scaling_perf_instruct.png" style="object-fit: contain;"/>
<em><b>Figure 1:</b> Pareto frontier of pre-training tokens vs average performance for pre-trained and instruction-tuned models.</em>
</div>
By training Instella from scratch on Instinct MI300X GPUs, we highlight our hardware’s capability and scalability in handling demanding large-scale AI training workloads, offering a viable alternative in the AI hardware landscape. In line with the AMD commitment to open source, we are releasing all artifacts related to Instella models [here](#additional-resources), including the model weights, detailed training configurations, datasets, and code, enabling the AI community to collaborate, replicate, and innovate, thereby accelerating progress.
## Takeaways
- **Announcing Instella**, a series of 3 billion parameter language models developed by AMD, trained from scratch on 128 Instinct MI300X GPUs.
- **Instella models significantly outperform existing fully open LMs** (Figure 1) of comparable size, as well as bridge the gap between fully open and open-weight models by achieving competitive performance compared to state-of-the-art open-weight models and their instruction-tuned counterparts.
- Fully open and accessible: **Fully open-source release of model weights, training hyperparameters, datasets, and code**, fostering innovation and collaboration within the AI community.
- Supported by the AMD ROCm software stack, Instella employs efficient training techniques such as **FlashAttention-2, Torch Compile, and Fully Sharded Data Parallelism (FSDP)** with hybrid sharding to **scale model training over a large cluster.**
## Instella Models
In this release, we introduce the following Instella models:
<div align="center">
| Model | Stage | Training Data (Tokens) | Description |
| :----: | :----: | :----: | :---- |
| [Instella-3B-Stage1](https://huggingface.co/amd/Instella-3B-Stage1) | Pre-training (Stage 1) | 4.065 Trillion | First stage pre-training to develop proficiency in natural language. |
| [Instella-3B](https://huggingface.co/amd/Instella-3B) | Pre-training (Stage 2) | 57.575 Billion | Second stage pre-training to further enhance problem solving capabilities. |
| [Instella-3B-SFT](https://huggingface.co/amd/Instella-3B-SFT) | SFT | 8.902 Billion (x3 epochs) | Supervised Fine-tuning (SFT) to enable instruction-following capabilities. |
| [Instella-3B-Instruct](https://huggingface.co/amd/Instella-3B-instruct) | DPO | 760 Million | Alignment to human preferences and strengthen chat capabilities with direct preference optimization (DPO). |
| | **Total:** | **4.15 Trillion** | |
<em><b>Table 1:</b> Instella models and training stages.</em>
</div>
The Instella models are text-only, autoregressive transformer-based LMs having 3 billion parameters. Architecture-wise, Instella is packed with 36 decoder layers, each having 32 attention heads. These models support a sequence length of up to 4,096 tokens and have a vocabulary size of ~50,000 tokens using the OLMo tokenizer. During both pre-training and fine-tuning, we utilized FlashAttention-2, Torch Compile, and bfloat16 mixed-precision training to reduce memory usage, leading to computational speedups and optimal resource utilization. To balance inter-node memory efficiency and intra-node communication overhead within our cluster, we employed fully sharded data parallelism (FSDP) with hybrid sharding, with model parameters, gradients, and optimizer states sharded within a node and replicated across the nodes.
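As a rough illustration (not the training code, and with an approximate parameter count assumed), hybrid sharding on this cluster means each GPU stores one eighth of the model state while every one of the 16 nodes keeps a full replica:

```python
# Back-of-the-envelope view of FSDP hybrid sharding on 16 nodes x 8 GPUs:
# shard within a node, replicate across nodes.
total_params = 3_000_000_000  # ~3B parameters (approximate)
gpus_per_node = 8
num_nodes = 16

params_per_gpu = total_params // gpus_per_node  # shard held by each GPU
replicas = num_nodes                            # full copies across the cluster
bf16_bytes_per_gpu = params_per_gpu * 2         # 2 bytes per bf16 parameter

print(f"parameters per GPU: {params_per_gpu:,}")
print(f"full replicas in the cluster: {replicas}")
print(f"~{bf16_bytes_per_gpu / 1e9:.2f} GB of bf16 weights per GPU")
```

Gradients and optimizer states are sharded the same way within each node, which is what keeps intra-node memory low while limiting cross-node communication to replica synchronization.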
Our training pipeline is based on the open-source OLMo codebase, adapted and optimized for our hardware and model architecture. For pre-training we used a total of 128 Instinct MI300X GPUs distributed across 16 nodes, with each node hosting 8x Instinct MI300X GPUs. We evaluated our models and baselines using standard tasks from [OLMES](https://github.com/allenai/olmes/tree/main), [FastChat MT-Bench](https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/README.md), and [Alpaca](https://github.com/tatsu-lab/alpaca_eval/tree/main). For more details about the architecture, training pipeline/hyperparameters and evaluation results, please refer to our [Blog](https://rocm.blogs.amd.com/artificial-intelligence/introducing-instella-3B/README.html), [Hugging Face model card](https://huggingface.co/amd/Instella-3B) and [Github repository](https://github.com/AMD-AIG-AIMA/Instella).
## Training Pipeline
The training of the Instella models comprised four stages, each incrementally enhancing the model's capabilities, from fundamental natural-language understanding to instruction following and alignment with human preferences.
### Model Summary
| Stage | Model | Training Tokens | Layers | Attention Heads | Model Hidden Size | MLP Hidden Size | Context Length | RoPE Theta |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Pre-training | Instella-3B-stage1 | 4.065T | 36 | 32 | 2560 | 13824 | 4096 | 10,000 |
| Pre-training | Instella-3B | 57.575B | 36 | 32 | 2560 | 13824 | 4096 | 10,000 |
| SFT | Instella-3B-SFT | 8.902B (x3) | 36 | 32 | 2560 | 13824 | 4096 | 10,000 |
| SFT+DPO | Instella-3B-instruct | 760M | 36 | 32 | 2560 | 13824 | 4096 | 10,000 |
### Hyperparameters
|Stage | Optimizer | Peak LR | LR Scheduler | Alpha F | Warmup (steps) | Weight Decay | Decay Norm & Bias | Decay Embedding | Batch Size (Tokens) | Epochs |
|-----:|-----:|-----:|-----:|-----:|-----:|-----:|-----:|-----:|-----:|-----:|
| Pretraining Stage 1 | AdamW(0.9,0.95) | 4.0e-4 | cosine_with_warmup | 0.1 | 2000 | 0.1 | True | True | 4M | 1 |
| Pretraining Stage 2 | AdamW(0.9,0.95) | 4.0e-5 | cosine_with_warmup | 0.0 | 0 | 0.1 | True | True | 4M | 1 |
| SFT | AdamW(0.9,0.95) | 1.0e-5 | linear_with_warmup | 0.001 | 500 | 0.1 | True | True | 0.5M | 3 |
| DPO | AdamW(0.9,0.95) | 5.0e-7 | linear | -- | 10% | 0.1 | -- | -- | 0.25M | 1 |
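For illustration, the `cosine_with_warmup` schedule used in pre-training can be sketched as below, interpreting `Alpha F` as the final learning-rate fraction (the OLMo convention). This interpretation is an assumption on our part; the table does not state it explicitly.

```python
import math

def cosine_with_warmup(step, max_steps, peak_lr, warmup_steps, alpha_f):
    """Linear warmup to peak_lr, then cosine decay down to alpha_f * peak_lr."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(progress, 1.0)))
    return peak_lr * (alpha_f + (1.0 - alpha_f) * cosine)

# Stage-1 settings from the table: peak 4.0e-4, 2000 warmup steps, Alpha F 0.1.
lr_start = cosine_with_warmup(0, 100_000, 4.0e-4, 2000, 0.1)
lr_peak = cosine_with_warmup(2000, 100_000, 4.0e-4, 2000, 0.1)
lr_final = cosine_with_warmup(100_000, 100_000, 4.0e-4, 2000, 0.1)
```

With `alpha_f = 0.0` (stage 2), the schedule decays all the way to zero, matching the table's second row.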
## Getting Started
### Installation
First, install [PyTorch](https://pytorch.org) according to the instructions specific to your operating system. For AMD GPUs, you can also start from a [rocm/pytorch](https://hub.docker.com/r/rocm/pytorch/tags?name=pytorch) Docker image.
To install from source (recommended for training/fine-tuning) run:
```bash
git clone https://github.com/AMD-AIG-AIMA/Instella.git
cd Instella
# install Flash-Attention on MI300X
GPU_ARCH=gfx942 MAX_JOBS=$(nproc) pip install git+https://github.com/Dao-AILab/flash-attention.git -v
# install other dependencies
pip install -e .[all]
```
### Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "amd/Instella-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", trust_remote_code=True)
prompt = [{"role": "user", "content": "What are the benefits of open-source AI research?"}]
inputs = tokenizer.apply_chat_template(
prompt,
add_generation_prompt=True,
return_tensors='pt'
)
tokens = model.generate(
inputs.to(model.device),
max_new_tokens=1024,
temperature=0.8,
do_sample=True
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path amd/Instella-3B-Instruct --trust_remote_code --max_new_tokens 1024
# <root>:
# which is bigger 9.8 or 9.11?
# <amd/Instella-3B-Instruct>:
# 9.8 is bigger than 9.11. The difference between the two numbers is 0.69 (9.8 - 9.11 = 0.69), which indicates that 9.8 is 0.69 units larger than 9.11.
```
## Results
### Pre-training
<div class="table-wrapper" align="center">
<table>
<thead>
<tr>
<th>Models</th>
<th>Size</th>
<th>Training Tokens</th>
<th>Avg</th>
<th>ARC Challenge</th>
<th>ARC Easy</th>
<th>BoolQ</th>
<th>Hellaswag</th>
<th>PiQA</th>
<th>SciQ</th>
<th>Winogrande</th>
<th>OpenBookQA</th>
<th>MMLU</th>
<th>BBH (3-shot)</th>
<th>GSM8k (8-shot)</th>
</tr>
</thead>
<tbody>
<tr>
<th colspan="15">Open Weight Models</th>
</tr>
<tr>
<td>Gemma-2-2B</td>
<td>2.61B</td>
<td>~2T</td>
<td>59.34</td>
<td>39.46</td>
<td>59.30</td>
<td>74.50</td>
<td>70.50</td>
<td>76.40</td>
<td><strong>96.60</strong></td>
<td>69.80</td>
<td>44.80</td>
<td>53.28</td>
<td>40.75</td>
<td>27.37</td>
</tr>
<tr>
<td>Llama-3.2-3B</td>
<td>3.21B</td>
<td>~9T</td>
<td>62.51</td>
<td>47.16</td>
<td>64.91</td>
<td>74.80</td>
<td>73.10</td>
<td>75.90</td>
<td>95.30</td>
<td>70.30</td>
<td>51.20</td>
<td>57.81</td>
<td><ins>47.00</ins></td>
<td>30.10</td>
</tr>
<tr>
<td>Qwen2.5-3B</td>
<td>3.09B</td>
<td>~18T</td>
<td><strong>68.30</strong></td>
<td>51.51</td>
<td>67.19</td>
<td><strong>79.10</strong></td>
<td>72.10</td>
<td>77.40</td>
<td>95.50</td>
<td>69.30</td>
<td><ins>51.40</ins></td>
<td><strong>67.22</strong></td>
<td><strong>56.69</strong></td>
<td><strong>63.84</strong></td>
</tr>
<tr>
<th colspan="15">Fully Open Models</th>
</tr>
<tr>
<td>Pythia-2.8b</td>
<td>2.91B</td>
<td>300B</td>
<td>49.83</td>
<td>40.47</td>
<td>60.70</td>
<td>64.80</td>
<td>60.10</td>
<td>72.50</td>
<td>89.70</td>
<td>60.80</td>
<td>42.60</td>
<td>26.09</td>
<td>27.69</td>
<td>2.73</td>
</tr>
<tr>
<td>GPTNeo-2.7B</td>
<td>2.72B</td>
<td>~420B</td>
<td>47.96</td>
<td>38.46</td>
<td>54.56</td>
<td>62.70</td>
<td>55.20</td>
<td>70.80</td>
<td>88.00</td>
<td>58.30</td>
<td>40.80</td>
<td>27.83</td>
<td>27.25</td>
<td>3.71</td>
</tr>
<tr>
<td>OpenELM-3B</td>
<td>3.04B</td>
<td>~1.5T</td>
<td>52.28</td>
<td>37.46</td>
<td>58.42</td>
<td>68.60</td>
<td>71.70</td>
<td>75.60</td>
<td>92.50</td>
<td>65.40</td>
<td>46.40</td>
<td>26.69</td>
<td>29.40</td>
<td>2.96</td>
</tr>
<tr>
<td>StableLM-3B-4E1T</td>
<td>2.8B</td>
<td>~4T</td>
<td>58.51</td>
<td>44.82</td>
<td>67.02</td>
<td>75.40</td>
<td><ins>74.20</ins></td>
<td><strong>78.40</strong></td>
<td>93.40</td>
<td>68.40</td>
<td>48.60</td>
<td>45.19</td>
<td>37.33</td>
<td>10.84</td>
</tr>
<tr>
<td><strong><a href="https://huggingface.co/amd/Instella-3B-Stage1">Instella-3B-Stage1</a></strong></td>
<td>3.11B</td>
<td>~4T</td>
<td>61.33</td>
<td><strong>53.85</strong></td>
<td><strong>73.16</strong></td>
<td><ins>78.70</ins></td>
<td><ins>74.20</ins></td>
<td>77.50</td>
<td>94.90</td>
<td><ins>71.20</ins></td>
<td><ins>51.40</ins></td>
<td>54.69</td>
<td>34.30</td>
<td>10.77</td>
</tr>
<tr>
<td><strong><a href="https://huggingface.co/amd/Instella-3B">Instella-3B</a></strong></td>
<td>3.11B</td>
<td>~4T+60B</td>
<td><ins>66.59</ins></td>
<td><ins>52.84</ins></td>
<td><ins>70.53</ins></td>
<td>76.50</td>
<td><strong>75.00</strong></td>
<td><ins>77.80</ins></td>
<td><ins>96.40</ins></td>
<td><strong>73.10</strong></td>
<td><strong>52.40</strong></td>
<td><ins>58.31</ins></td>
<td>39.74</td>
<td><ins>59.82</ins></td>
</tr>
</tbody>
</table>
<em><strong>Table 2:</strong> Pre-trained model performance on standard benchmarks. Here <strong>Bold</strong> represents the best performance, and <ins>Underscore</ins> represents the second best performance.</em>
</div>
- Both the Instella-3B-Stage1 and Instella-3B models outperform all other fully open models on every benchmark individually (except PiQA). **Our final pre-trained checkpoint Instella-3B outperforms the best existing fully open pre-trained models by a lead of ⬆️8.08% on average**, with significant improvements in `ARC Challenge [+8.02%], ARC Easy [+3.51%], Winogrande [+4.7%], OpenBookQA [+3.88%], MMLU [+13.12%] and ️GSM8K [+48.98%]`.
- **Second-stage pre-training elevated the overall average performance relative to stage 1 by ⬆️5.26%**, substantially narrowing the performance gap between Instella-3B and the open-weight models: Instella-3B **outperforms Llama-3.2-3B by ⬆️4.08% on average** (`+5.69% [ARC Challenge], +5.61% [ARC Easy], and +29.72% [GSM8k]`), **Gemma-2-2B by ⬆️7.25% on average** (`+13.38% [ARC Challenge], +11.23% [ARC Easy], +4.5% [Hellaswag], +7.6% [OpenBookQA], +5.03% [MMLU], and +32.45% [GSM8k]`), and is **competitive with Qwen2.5-3B** on the majority of the benchmarks.
- The multi-stage pre-training with diverse and high-quality data mix significantly enhanced Instella-3B’s capabilities, establishing it as a competitive and open alternative in the landscape of comparable size language models.
### Instruction-tuning Results
<div class="table-wrapper" align="center">
<table>
<thead>
<tr>
<th>Models</th>
<th>Size</th>
<th>Training Tokens</th>
<th>Avg</th>
<th>MMLU</th>
<th>TruthfulQA</th>
<th>BBH</th>
<th>GPQA</th>
<th>GSM8K</th>
<th>Minerva MATH</th>
<th>IFEval</th>
<th>AlpacaEval 2</th>
<th>MT-Bench</th>
</tr>
</thead>
<tbody>
<tr>
<th colspan="13">Open Weight Models</th>
</tr>
<tr>
<td>Gemma-2-2B-Instruct</td>
<td>2.61B</td>
<td>~2T</td>
<td>39.04</td>
<td>58.35</td>
<td><ins>55.76</ins></td>
<td>42.96</td>
<td>25.22</td>
<td>53.45</td>
<td>22.48</td>
<td>55.64</td>
<td><strong>29.41</strong></td>
<td><strong>8.07</strong></td>
</tr>
<tr>
<td>Llama-3.2-3B-Instruct</td>
<td>3.21B</td>
<td>~9T</td>
<td><ins>47.53</ins></td>
<td><ins>61.50</ins></td>
<td>50.23</td>
<td><strong>61.50</strong></td>
<td><ins>29.69</ins></td>
<td><strong>77.03</strong></td>
<td><ins>46.00</ins></td>
<td><strong>75.42</strong></td>
<td>19.31</td>
<td>7.13</td>
</tr>
<tr>
<td>Qwen2.5-3B-Instruct</td>
<td>3.09B</td>
<td>~18T</td>
<td><strong>48.72</strong></td>
<td><strong>66.90</strong></td>
<td><strong>57.16</strong></td>
<td><ins>57.29</ins></td>
<td>28.13</td>
<td><ins>75.97</ins></td>
<td><strong>60.42</strong></td>
<td>62.48</td>
<td><ins>22.12</ins></td>
<td><ins>8.00</ins></td>
</tr>
<tr>
<th colspan="13">Fully Open Models</th>
</tr>
<tr>
<td>StableLM-zephyr-3B</td>
<td>2.8B</td>
<td>4T</td>
<td>30.50</td>
<td>45.10</td>
<td>47.90</td>
<td>39.32</td>
<td>25.67</td>
<td>58.38</td>
<td>10.38</td>
<td>34.20</td>
<td>7.51</td>
<td>6.04</td>
</tr>
<tr>
<td>OpenELM-3B-Instruct</td>
<td>3.04B</td>
<td>~1.5T</td>
<td>14.11</td>
<td>27.36</td>
<td>38.08</td>
<td>24.24</td>
<td>18.08</td>
<td>1.59</td>
<td>0.38</td>
<td>16.08</td>
<td>0.21</td>
<td>1.00</td>
</tr>
<tr>
<td><a href="https://huggingface.co/amd/Instella-3B-SFT">Instella-3B-SFT</a></td>
<td>3.11B</td>
<td>~4T</td>
<td>42.05</td>
<td>58.76</td>
<td>52.49</td>
<td>46.00</td>
<td>28.13</td>
<td>71.72</td>
<td>40.50</td>
<td>66.17</td>
<td>7.58</td>
<td>7.07</td>
</tr>
<tr>
<td><a href="https://huggingface.co/amd/Instella-3B-Instruct">Instella-3B-Instruct</a></td>
<td>3.11B</td>
<td>~4T</td>
<td>44.87</td>
<td>58.90</td>
<td>55.47</td>
<td>46.75</td>
<td><strong>30.13</strong></td>
<td>73.92</td>
<td>42.46</td>
<td><ins>71.35</ins></td>
<td>17.59</td>
<td>7.23</td>
</tr>
</tbody>
</table>
<em><strong>Table 3:</strong> Instruct model performance on standard benchmarks. Here <strong>Bold</strong> represents the best performance, and <ins>Underscore</ins> represents the second best performance.</em>
</div>
- **The Instella-3B-Instruct model consistently outperforms other fully open models across all evaluated benchmarks, with a significant average score lead of ⬆️14.37%** over the next-best fully open instruction-tuned model, and substantial margins across all the chat benchmarks (`+13% [MMLU], +7.57% [TruthfulQA], +7.43% [BBH], +4.46% [GPQA], +37.15% [IFEval], +10.08% [AlpacaEval 2], and +1.2% [MT-Bench]`).
- **Instella-3B-Instruct narrows the performance gap with leading open-weight models.** It performs **on par with or slightly above existing state-of-the-art open-weight instruction-tuned models** such as Llama-3.2-3B-Instruct (`+5.24% [TruthfulQA], +0.45% [GPQA], and +0.1 [MT-Bench]`) and Qwen2.5-3B-Instruct (`+2.01% [GPQA] and +8.87% [IFEval]`), while significantly outperforming Gemma-2-2B-Instruct with an average score lead of ⬆️5.83% (`+0.55% [MMLU], +3.79% [BBH], +4.91% [GPQA], +20.47% [GSM8k], +19.98% [Minerva MATH], and +15.17% [IFEval]`).
- **Overall, Instella-3B-Instruct excels at instruction-following and multi-turn QA tasks such as TruthfulQA, GPQA, IFEval and MT-Bench**, and remains highly competitive with existing state-of-the-art open-weight models on the remaining knowledge-recall and math benchmarks, despite being trained on significantly fewer tokens.
## Training Data
| Stage | Model | Dataset | License |
| :---- | :---- | :---- | :---- |
| Pre-training Stage 1 | Instella-3B-stage1 | [https://huggingface.co/datasets/allenai/OLMoE-mix-0924](https://huggingface.co/datasets/allenai/OLMoE-mix-0924) | ODC-BY-1.0 |
| Pre-training Stage 2 | Instella-3B | [https://huggingface.co/datasets/allenai/tulu-3-sft-mixture](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture) | ODC-BY-1.0 |
| Pre-training Stage 2 | Instella-3B | [https://huggingface.co/datasets/allenai/dolmino-mix-1124](https://huggingface.co/datasets/allenai/dolmino-mix-1124) | ODC-BY-1.0 |
| Pre-training Stage 2 | Instella-3B | [https://huggingface.co/datasets/teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) | Refer source materials |
| Pre-training Stage 2 | Instella-3B | [https://huggingface.co/datasets/TIGER-Lab/WebinstructSub](https://huggingface.co/datasets/TIGER-Lab/WebinstructSub) | Apache-2.0 |
| Pre-training Stage 2 | Instella-3B | [https://huggingface.co/datasets/m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback) | Apache-2.0 |
| Pre-training Stage 2 | Instella-3B | [https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | MIT |
| Pre-training Stage 2 | Instella-3B | [https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus/viewer/python-edu](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus/viewer/python-edu) | ODC-BY-1.0 |
| Pre-training Stage 2 | Instella-3B | [https://github.com/google-deepmind/mathematics_dataset](https://github.com/google-deepmind/mathematics_dataset) | Apache-2.0 |
| Pre-training Stage 2 | Instella-3B | [https://huggingface.co/datasets/amd/Instella-GSM8K-synthetic](https://huggingface.co/datasets/amd/Instella-GSM8K-synthetic) | [LICENSE](https://huggingface.co/datasets/amd/Instella-GSM8K-synthetic/blob/main/LICENSE) |
| SFT | Instella-3B-SFT | [https://huggingface.co/datasets/nvidia/OpenMathinstruct-2](https://huggingface.co/datasets/nvidia/OpenMathinstruct-2) | CC-BY-4.0 |
| SFT | Instella-3B-SFT | [https://huggingface.co/datasets/cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | MIT |
| SFT | Instella-3B-SFT | [https://huggingface.co/datasets/HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | Apache-2.0 |
| SFT | Instella-3B-SFT | [https://huggingface.co/datasets/GAIR/o1-journey](https://huggingface.co/datasets/GAIR/o1-journey) | Refer source materials |
| SFT | Instella-3B-SFT | [https://huggingface.co/datasets/allenai/tulu-3-sft-personas-instruction-following (subset of Tulu3)](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-instruction-following) | ODC-BY-1.0 |
| DPO | Instella-3B-instruct | [https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix) | ODC-BY-1.0 |
> [!NOTE]
> Further information concerning the training datasets, including applicable licensing terms and use restrictions, may be located at the linked source location.
## Conclusion
The release of the Instella family of models represents a significant stride in advancing open-source AI and demonstrating the capabilities of AMD hardware in large-scale language model training. The 3-billion-parameter Instella models significantly outperform existing fully open models of comparable size on key benchmarks while remaining competitive with comparable open-weight models, which we attribute to the high-quality data-mix selection, the multi-stage training pipeline, and the use of high-performance Instinct MI300X GPUs for large-scale training.
By fully open sourcing the Instella models, including weights, training configurations, datasets, and code, we aim to foster innovation and collaboration within the AI community. We believe that transparency, reproducibility and accessibility are key drivers of progress in AI research and development. We invite developers, researchers, and AI enthusiasts to explore Instella, contribute to its ongoing improvement, and join us in pushing the boundaries of what is possible with language models.
We will continue enhancing the models across multiple dimensions, including context length, reasoning ability, and multimodal capabilities. Additionally, we will scale up both the model and the dataset while exploring diverse architectural approaches. Keep your eyes peeled for more exciting blogs on the Instella LM family, its features, and capabilities!
## Additional Resources
### Hugging Face Model Cards
- Pre-trained models:
- Instella-3B-Stage1: [amd/Instella-3B-Stage1](https://huggingface.co/amd/Instella-3B-Stage1), First stage pre-training checkpoint.
- Instella-3B: [amd/Instella-3B](https://huggingface.co/amd/Instella-3B), Final pre-training checkpoint.
- Instruction-tuned models:
- Instella-3B-SFT: [amd/Instella-3B-SFT](https://huggingface.co/amd/Instella-3B-SFT), Supervised fine-tuned checkpoint.
- Instella-3B-Instruct: [amd/Instella-3B-Instruct](https://huggingface.co/amd/Instella-3B-Instruct), Final Instruction-tuned checkpoint.
### Datasets
Second stage pre-training GSM8k synthetic dataset: [amd/Instella-GSM8K-synthetic](https://huggingface.co/datasets/amd/Instella-GSM8K-synthetic)
- The dataset consists of two splits: `train` and `train_119K`.
- For the second-stage pre-training of the Instella-3B model, we used the `train_119K` split, which is a subset of the larger `train` split.
### Code
- Github: [https://github.com/AMD-AIG-AIMA/Instella](https://github.com/AMD-AIG-AIMA/Instella)
Please refer to the following blogs to get started with using these techniques on AMD GPUs:
- [PyTorch Fully Sharded Data Parallel (FSDP) on AMD GPUs with ROCm™](https://rocm.blogs.amd.com/artificial-intelligence/fsdp-training-pytorch/README.html)
- [Accelerating Large Language Models with Flash Attention on AMD GPUs](https://rocm.blogs.amd.com/artificial-intelligence/flash-attention/README.html)
- [Accelerate PyTorch Models using torch.compile on AMD GPUs with ROCm™](https://rocm.blogs.amd.com/artificial-intelligence/torch_compile/README.html)
- [Introducing the First AMD 1B Language Models: AMD OLMo](https://www.amd.com/en/developer/resources/technical-articles/introducing-the-first-amd-1b-language-model.html)
## Bias, Risks, and Limitations
- The models are released for research purposes only and are not intended for use cases that require high levels of factuality, for safety-critical situations, or for health or medical applications, nor for generating false information or facilitating toxic conversations.
- Model checkpoints are made accessible without any safety promises. It is crucial for users to conduct comprehensive evaluations and implement safety filtering mechanisms as per their respective use cases.
- It may be possible to prompt the model to generate content that may be factually inaccurate, harmful, violent, toxic, biased, or otherwise objectionable. Such content may also get generated by prompts that did not intend to produce output as such. Users are thus requested to be aware of this and exercise caution and responsible thinking when using the model.
- The multilingual abilities of the models have not been tested; they may therefore misunderstand prompts and generate erroneous responses in languages other than English.
## License
- The Instella-3B models are licensed for academic and research purposes under a ResearchRAIL license.
- The [amd/Instella-GSM8K-synthetic](https://huggingface.co/datasets/amd/Instella-GSM8K-synthetic) dataset used in second stage pre-training is built with Qwen2.5-72B-Instruct, and is licensed for academic and research purposes under a ResearchRAIL license. Refer to the [LICENSE](https://huggingface.co/datasets/amd/Instella-GSM8K-synthetic/blob/main/LICENSE) and [NOTICES](https://huggingface.co/datasets/amd/Instella-GSM8K-synthetic/blob/main/NOTICES) in the [amd/Instella-GSM8K-synthetic](https://huggingface.co/datasets/amd/Instella-GSM8K-synthetic) dataset card files for more information.
- Refer to the [LICENSE](https://huggingface.co/amd/Instella-3B/blob/main/LICENSE) and [NOTICES](https://huggingface.co/amd/Instella-3B/blob/main/NOTICES) files for more information.
## Citations
Feel free to cite our Instella-3B models:
```text
@misc{Instella,
title = {Instella: Fully Open Language Models with Stellar Performance},
url = {https://huggingface.co/amd/Instella-3B},
author = {Jiang Liu and Jialian Wu and Xiaodong Yu and Prakamya Mishra and Sudhanshu Ranjan and Zicheng Liu and Chaitanya Manem and Yusheng Su and Pratik Prabhanjan Brahma and Gowtham Ramesh and Ximeng Sun and Ze Wang and Emad Barsoum},
month = {March},
year = {2025}
}
```

---
base_model:
- Qwen/Qwen2-VL-7B-Instruct
language:
- vi
- en
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- erax
- multimodal
- erax-vl-7b
- insurance
- ocr
- vietnamese
- bcg
- image-to-text
widget:
- src: images/photo-1-16505057982762025719470.webp
example_title: Test 1
- src: images/vt-don-thuoc-f0-7417.jpeg
example_title: Test 2
---
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66e93d483745423cbb14c5ff/fNxjr3en_onzbOv0sghpE.jpeg" alt="Logo">
</p>
<!--  -->
# EraX-VL-7B-V1
## Introduction 🎉
**WE ARE MOVING to the <a href="https://huggingface.co/erax-ai/EraX-VL-7B-V1/" target="_blank">EraX-AI</a> repository from 22 October 2024. Follow us there so you do not miss upcoming news.**
We are excited to introduce **EraX-VL-7B-v1**, a robust multimodal model for **OCR (optical character recognition)** and **VQA (visual question-answering)** that excels in various languages 🌍, with a particular focus on Vietnamese 🇻🇳. The `EraX-VL-7B` model stands out for its precise recognition capabilities across a range of documents 📝, including medical forms 🩺, invoices 🧾, bills of sale 💳, quotes 📄, and medical records 💊. This functionality is expected to be highly beneficial for hospitals 🏥, clinics 💉, insurance companies 🛡️, and other similar applications 📋. Built on the solid foundation of the [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)[1], which we found to be of high quality and fluent in Vietnamese, `EraX-VL-7B` has been fine-tuned to enhance its performance. We plan to continue improving and releasing new versions for free, along with sharing performance benchmarks in the near future.
One standout feature of **EraX-VL-7B-v1** is its ability to handle multi-turn Q&A with solid reasoning, thanks to the 7+ billion parameters of the base model.
***NOTA BENE***: EraX-VL-7B-V1 is NOT a typical OCR-only tool like Tesseract, but a multimodal LLM-based model. To use it effectively, you may have to **craft your prompt carefully** depending on your task.
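As a hypothetical illustration of such prompt crafting: the message structure below follows the base Qwen2-VL chat format, but the `build_ocr_prompt` helper and the exact wording (modeled on Example 07 further down this card) are our own, not part of the model's API.

```python
def build_ocr_prompt(task_hints: list[str]) -> list[dict]:
    """Assemble a Qwen2-VL-style chat message for document extraction.

    Illustrative only: EraX-VL tends to respond better to explicit,
    task-specific instructions than to a bare "OCR this image".
    """
    hints = "\n".join(f"{i}. {hint}" for i, hint in enumerate(task_hints, 1))
    instruction = (
        "Hãy trích xuất toàn bộ chi tiết của bức ảnh này theo đúng thứ tự "
        "của nội dung trong ảnh. Không bình luận gì thêm.\n"
        f"Lưu ý:\n{hints}"
    )
    return [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": instruction},
        ],
    }]

messages = build_ocr_prompt([
    "Nếu có bảng biểu (table) thì phải trả lại định dạng như bảng biểu trong hình.",
    "Chỉ trả lại bằng tiếng Việt.",
])
```

The resulting `messages` list can then be passed to the processor's `apply_chat_template` together with the image, as in the standard Qwen2-VL usage.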
**EraX-VL-7B-V1** is a young member of our **EraX's LànhGPT** collection of LLM models.
- **Developed by:**
- Nguyễn Anh Nguyên ([email protected])
- Nguyễn Hồ Nam (BCG)
- Hoàng Tiến Dũng ([email protected])
- Phạm Huỳnh Nhật ([email protected])
- Phạm Đình Thục ([email protected])
- **Funded by:** [Bamboo Capital Group](https://bamboocap.com.vn) and EraX
- **Model type:** Multimodal Transformer with over 7B parameters
- **Languages (NLP):** Primarily Vietnamese with multilingual capabilities
- **License:** Apache 2.0
- **Fine-tuned from:** [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)
## Benchmarks 📊
<!--  -->
Below is the evaluation benchmark of **global open-source and proprietary Multimodal Models** on the [MTVQA](https://huggingface.co/datasets/ByteDance/MTVQA) Vietnamese test set conducted by [VinBigdata](https://www.linkedin.com/feed/update/urn:li:activity:7243887708966641664/). We plan to conduct more detailed and diverse evaluations in the near future.
<div align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66e93d483745423cbb14c5ff/-OYkSDVyAcAcLLgO2N5XT.jpeg" width="500"/>
<a href="https://www.linkedin.com/feed/update/urn:li:activity:7243887708966641664/" target="_blank">Source: VinBigData</a>
<br>(20:00 23rd Sept 2024)
</div>
## API trial 🎉
Please contact **[email protected]** for API access inquiry.
## Examples 🧩
### Example 01: OCR - Optical Character Recognition for Image
<!-- 
-->
<div align="left">
<img src="images/images_henkham_0.jpg" width="500"/>
</div>
```
{
"document": {
"header": {
"title": "GIẤY HẸN KHÁM LẠI",
"organization": "SỞ Y TẾ NGHỆ AN\nBỆNH VIỆN UNG BƯỚU NGHỆ AN",
"address": "Võ Thị Sáu, Thủy Tùng - TP Vinh - Nghệ An"
},
"patient_info": {
"name": "NGUYỄN THỊ LUÂN",
"date_of_birth": "03/07/1976",
"gender": "40",
"address": "Xã Nghĩa Khánh-Huyện Nghĩa Đàn-Nghệ An",
"medical_card_number": "CN 3 40 40 168 60413",
"registration_date": "16/12/2016",
"admission_date": "Từ 01/03/2016",
"diagnosis": "C20-Bướu ac trực tràng",
"revisit_date": "17/01/2017"
},
"administrative_details": {
"department": "Trung tâm điều trị ung bướu",
"revisit_instruction": "vào ngày 17/01/2017, hoặc đến hết kỳ thời gian nếu nước ngoài hẹn khám lại nếu có dấu hiệu (triệu chứng)",
"note": "nếu KCB ban đầu: Trạm y tế xã Nghĩa Khánh",
"signature": "Trưởng khoa",
"doctor_signature": "Lâm Nguyễn Khang",
"revisiting_date_confirmation": "Ngày 16 tháng 12 năm 2016",
"confirmation_signature": "Bác sĩ điều trị",
"physician_signature": "Nguyễn Văn Việt"
}
}
}
```
### Example 02: OCR - Optical Character Recognition for PDF
<div align="left">
<img src="images/images_phieuphambenh_1.png" width="500"/>
</div>
<!--  -->
```
{
"header": {
"title": "PHIẾU KHÁM BỆNH",
"date": "Hà Nội, ngày 23 tháng 3 năm 2020",
"patient_info": {
"id": "HN011000002",
"name": "Vương Hồng Thắng - Năm sinh: 1978",
"address": "Số 10 tầng 2, TTTM V+, Số 505 Phố Minh Khai, Quận Hai Bà Trưng, Hà Nội",
"phone": "+0942116117",
"email": "[email protected]"
},
"contact_info": {
"address": "Nhà Khoa Bamufit\nĐịa chỉ: 505, Phố Minh Khai, Hai Bà Trưng, Hà Nội, Việt Nam",
"phone": "0942484784",
"email": "[email protected]",
"website": "https://bamufit.vn"
}
},
"treatment_details": [
{
"visit_date": "13-09-2019",
"treatment_type": "Chẩn đoán: Abscess chẽ",
"procedure": "Cắt lợi bằng Laser r23",
"doctor": "THỊ HIEN",
"price": "500,000",
"quantity": "1",
"discounted_price": "0",
"total_cost": "500,000"
},
{
"visit_date": "13-09-2019",
"treatment_type": "Chẩn đoán: Abscess quanh chóp",
"procedure": "Bám gai xuống ở răng r23",
"doctor": "THỊ HIEN",
"price": "100,000",
"quantity": "1",
"discounted_price": "0",
"total_cost": "100,000"
}
],
"financial_details": {
"total_cost": "600,000",
"discounted_total": "0",
"paid_amount": "1,114,000",
"remaining_balance": "1,714,000"
},
"notes": "- Kiêng ăn uống đồ để gây nhiễm mủ như chè, cà phê, thuốc lá, rượu vang đỏ .. và hạn chế dùng đồ quá nóng, quá lạnh sau khi tẩy trắng răng ít nhất 2 tuần.",
"footer": {
"customer_signature": "(Ký và ghi rõ họ tên)",
"accountant_signature": "(Ký và ghi rõ họ tên)",
"dentist_signature": "(Ký và ghi rõ họ tên)"
}
}
```
### Example 03: Image Captioning w/ Hand-writing
<!--  -->
<div align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66e93d483745423cbb14c5ff/WAzO6Pzh3PY3mb31-VMbd.jpeg" width="500"/>
</div>
Hình ảnh là một bức chân dung của **<span style="color:red">Chủ tịch Hồ Chí Minh**</span>, được in trên nền giấy màu vàng. Bên trái hình ảnh là chân dung của Hồ Chí Minh, với nét mặt nghiêm nghị, râu quai nón và tóc bạc. Bên phải hình ảnh là một đoạn văn bản viết tay bằng tiếng Việt, với nội dung: **<span style="color:blue">'Khuyên đồng bào nam nữ 18 tuổi trở lên hôm nay tất cả đều đi bỏ phiếu, để bầu những đại biểu xứng đáng, vào Quốc hội đầu tiên của nước ta.'</span>** Dưới đoạn văn bản là chữ ký 'Hồ Chí Minh' và ngày tháng '6,1,46'.
### Example 04: Image Captioning
<!--  -->
<div align="left">
<img src="images/images_bieudo.jpeg" width="500"/>
</div>
Hình ảnh là một biểu đồ thể hiện <span style="color:blue">mối quan hệ giữa chỉ số BMI (Body Mass Index) và tuổi</span>, được chia thành các mức độ khác nhau dựa trên phần trăm percentile. Trục hoành của biểu đồ đại diện cho tuổi từ 2 đến 20 năm, trong khi trục tung đại diện cho chỉ số BMI từ 10 đến 32. Biểu đồ này có ba khu vực chính: **<span style="color:red">vùng màu đỏ</span>** ở phía dưới cùng đại diện cho mức béo phì với chỉ số BMI cao hơn 30; **<span style="color:orange">vùng màu vàng</span>** nằm giữa đại diện cho nguy cơ béo phì với chỉ số BMI từ khoảng 25 đến 30; và **<span style="color:green">vùng màu xanh lá cây</span>** ở phía trên đại diện cho mức cân nặng khỏe mạnh hoặc thiếu cân với chỉ số BMI thấp hơn 25. Trên biểu đồ còn có đường cong màu xám chạy qua các mức độ BMI theo tuổi, đánh dấu các mức 5th, 50th, và 95th percentile. Văn bản trong hình gồm các cụm từ '**Béo phì**', '**Nguy cơ béo phì**', '**Sức khỏe dinh dưỡng tốt**', và '**Thiếu cân**' để mô tả từng khu vực tương ứng với chỉ số BMI.
### Example 05: Image Captioning
<!--  -->
<div align="left">
<img src="images/images_langque_1.jpg" width="500"/>
</div>
Hình ảnh chụp một khung cảnh đồng quê Việt Nam với những nét đặc trưng riêng biệt. **<span style="color:green">Bối cảnh là một cánh đồng lúa xanh mướt, trải dài bất tận, được bao phủ bởi một lớp sương mù nhẹ nhàng. Phía xa là những dãy núi xanh ngắt, tạo nên một bức tranh thiên nhiên thơ mộng và trữ tình</span>**. Ở trung tâm bức ảnh, hai cậu bé đang ngồi trên lưng con trâu nước đen tuyền. Cậu bé phía trước mặc quần đỏ, đang cười tươi rói, trong khi cậu bé phía sau mặc quần đen, vẻ mặt trầm tư hơn. **<span style="color:blue">Con trâu nước đang đứng yên, đầu hướng về phía bên phải, nhìn về phía xa xăm</span>**. Phía bên phải hình ảnh, một người đàn ông đang cầm một cái chậu màu đen, đang đổ nước lên đầu mình. Nước từ chậu chảy xuống tạo thành những giọt nước li ti, tung tóe khắp không gian. Trên bầu trời, có những đám mây trắng bồng bềnh, tạo nên một khung cảnh thanh bình và lãng mạn. Ánh nắng vàng óng chiếu rọi xuống cánh đồng, tạo nên một bức tranh tuyệt đẹp.
### Example 06: Visual Question Answering
<div align="left">
<img src="images/images_biencadoitheo_3.jpg" width="500"/>
</div>
- **User**: Hãy mô tả nội dung của bức ảnh này một cách thật chi tiết và không giải thích gì thêm
- **Assistant**: Bức ảnh là một hình vẽ minh họa đơn giản về một cô bé đang ôm một con gấu bông màu vàng. Cô bé có mái tóc đen ngắn, mặc váy đỏ và đi tất sọc trắng. Cô bé đứng ở giữa khung hình với tư thế ôm chặt gấu bông bằng cả hai tay. Gấu bông có đôi mắt to tròn và nụ cười tươi. Văn bản trong ảnh được chia thành hai phần: phần trên cùng là tiếng Anh và phần dưới là tiếng Việt. Nội dung văn bản như sau: **<span style="color:green">'Sofia's bear was old, tatty and very well loved.'** </span> (Có nghĩa là 'Con gấu bông của Sofia đã cũ kỹ và rách rưới nhưng cô bé rất yêu nó'). Dưới dòng chữ tiếng Anh là dòng chữ tiếng Việt: **<span style="color:green">'Sofia có một bạn gấu bông tuy đã cũ kỹ và sờn rách nhưng cô bé rất yêu quý nó.'** </span>. Bố cục tổng thể của bức ảnh khá đơn giản, tập trung vào nhân vật chính là cô bé và con gấu bông mà cô đang ôm. Màu sắc chủ đạo là các gam màu ấm áp như đỏ, vàng và nâu nhạt, tạo cảm giác thân thiện và gần gũi.
### Example 07: Multi-turns Visual Question Answering and Reasoning
<div align="left">
<img src="images/images_henkham_0.jpg" width="500"/>
</div>
- **User**: Hãy trích xuất toàn bộ chi tiết của bức ảnh này theo đúng thứ tự của nội dung trong ảnh. Không bình luận gì thêm.
Lưu ý:
1. Nếu có chữ trong ảnh thì phải trích xuất ra hết theo thứ tự và định dạng của câu chữ.
2. Nếu có bảng biểu (table) thì phải trả lại định dạng như bảng biểu trong hình và text của nó.
3. Nếu bức ảnh không có bất kỳ ký tự nào, hãy diễn giải bức ảnh đó.
4. Chỉ trả lại bằng tiếng Việt.
- **Assistant**: Bức ảnh là một giấy hẹn khám lại từ **<span style="color:blue">Bệnh viện Ung Bướu Nghệ An</span>**. Giấy hẹn được cấp cho bệnh nhân **<span style="color:blue">Nguyễn Thị Luân</span>**, sinh ngày 03/07/1976, hiện đang cư trú tại Xã Nghĩa Khánh - Huyện Nghĩa Đàn - Nghệ An. Giấy hẹn ghi rõ thông tin về lần sử dụng thuốc BHYT từ ngày 01/03/2016 đến 16/12/2016, với **<span style="color:blue">số thẻ BHYT là CN 3 40 40 168 60413</span>**. Ngày khám ban đầu là 16/12/2016 và giờ khám là buổi trưa. Giấy hẹn khám lại vào ngày 17/01/2017 hoặc trước khi hết thời hạn sử dụng thẻ bảo hiểm y tế nếu có dấu hiệu triệu chứng bất thường. Giấy hẹn cũng chỉ áp dụng cho một lần khám duy nhất trong vòng 10 ngày làm việc kể từ ngày được hẹn khám lại. Bên dưới phần thông tin bệnh nhân là chữ ký của Trưởng khoa, bác sĩ điều trị và con dấu đỏ của Bệnh viện Ung Bướu Nghệ An. Ngoài ra, trên góc phải của giấy hẹn còn có dòng chữ '**<span style="color:blue">Tổng đài tư vấn 1900.6172</span>**' cùng logo của bệnh viện.
- **User**: Hãy cho biết thông tin của bệnh nhân.
- **Assistant**: Bệnh nhân là Nguyễn Thị Luân, sinh ngày 03/07/1976, hiện đang cư trú tại **<span style="color:blue">Xã Nghĩa Khánh - Huyện Nghĩa Đàn - Nghệ An</span>**. Số thẻ bảo hiểm y tế của bệnh nhân là **<span style="color:blue">CN 3 40 40 168 60413</span>**.
- **User**: Bác sĩ điều trị cho bệnh nhân là ai?
- **Assistant**: Bác sĩ điều trị cho bệnh nhân là bác sĩ **<span style="color:blue">Nguyễn Văn Việt</span>**.
- **User**: Đến năm 2024 chị Luân bao nhiêu tuổi?
- **Assistant**: Nguyễn Thị Luân sinh ngày **<span style="color:blue">03/07/1976</span>**, tính đến năm 2024 chị sẽ **<span style="color:blue">48 tuổi</span>**.
- **User**: Vậy năm 2020 chị Luân mấy tuổi?
- **Assistant**: **<span style="color:blue">Vào năm 2020</span>**, Nguyễn Thị Luân sẽ **<span style="color:blue">44 tuổi vì năm 2020 cách năm 2024 4 năm</span>**.
### Example 08: Multi-images Visual Question Answering
<div style="display: flex; justify-content: left; gap: 10px;">
<img src="images/images_langque_0.jpg" width="350"/>
<img src="images/images_langque_1.jpg" width="350"/>
</div>
- **User**: Hãy trích xuất toàn bộ chi tiết của các bức ảnh này theo đúng thứ tự của nội dung và không bình luận gì
- **Assistant**: Hình ảnh đầu tiên là một bức tranh sơn dầu với phong cách tả thực, thể hiện khung cảnh làng quê Việt Nam thanh bình. Bức tranh được vẽ với gam màu tươi sáng, rực rỡ, tạo cảm giác ấm áp, vui tươi. **<span style="color:skyblue">Nền trời xanh biếc</span>**, điểm xuyết những **<span style="color:darkcyan">đám mây trắng bồng bềnh</span>**. Phía xa là **<span style="color:greenyellow">dãy núi xanh ngắt</span>**, thấp thoáng **<span style="color:indianred">những ngôi nhà mái ngói đỏ au</span>**. Giữa khung cảnh là **<span style="color:orangered">dòng sông uốn lượn</span>**, chảy hiền hòa giữa **<span style="color:mediumpurple">cánh đồng lúa chín vàng óng ánh</span>**. Hai bên bờ sông là những **<span style="color:forestgreen">hàng cây xanh mát rượi</span>**, tạo nên một **<span style="color:mediumpurple">bức tranh thiên nhiên thơ mộng, hữu tình</span>**. Trên dòng sông, có **<span style="color:blue">hai người đang ngồi trên thuyền nhỏ</span>**, tay cầm lái, hướng về phía trước. Bên cạnh đó, còn có **<span style="color:hotpink">một người phụ nữ đang đi trên con đường đất nhỏ, hướng về phía làng</span>**. Hình ảnh này gợi lên **<span style="color:darkseagreen">một cuộc sống thanh bình, yên ả của người dân làng quê Việt Nam</span>**.
Hình ảnh thứ hai là một bức ảnh chụp chân dung, thể hiện sự hồn nhiên, trong sáng của tuổi thơ. Trong ảnh, có **<span style="color:blueviolet">ba nhân vật chính: hai cậu bé và một người đàn ông</span>**. **<span style="color:mediumvioletred">Cậu bé lớn hơn</span>** đang cưỡi trên lưng con trâu đen, hai tay nắm chặt vào dây cương, vẻ mặt đầy thích thú. **<span style="color:seagreen">Cậu bé nhỏ hơn</span>** ngồi sau lưng cậu bạn lớn, tay cầm một chiếc mũ rộng vành, nhìn về phía trước với ánh mắt ngây thơ. **<span style="color:darkblue">Người đàn ông</span>** đứng ở phía sau, mặc quần đùi, tay cầm một cái chậu nước, đang đổ nước lên đầu hai cậu bé. Bối cảnh của bức ảnh là một cánh đồng cỏ xanh mướt, phía xa là những hàng cây xanh mát. Ánh nắng ban mai chiếu rọi xuống cánh đồng, tạo nên một **<span style="color:darkgreen">khung cảnh đẹp như tranh vẽ</span>**.
Bức ảnh này mang đến cho người xem cảm giác **<span style="color:orangered">vui tươi</span>**, **<span style="color:orangered">hồn nhiên</span>**, thể hiện nét đẹp văn hóa **<span style="color:orangered">truyền thống của người nông dân Việt Nam</span>**.
## Quickstart 🎮
[](https://colab.research.google.com/drive/1CnSxtWDLG48-NQh7wk9_z8WI7J4OY_Ci?usp=sharing)
Install the necessary packages:
```bash
python -m pip install git+https://github.com/huggingface/transformers accelerate
python -m pip install qwen-vl-utils
python -m pip install flash-attn --no-build-isolation
```
Then you can use `EraX-VL-7B-V1` like this:
```python
import os
import base64
import json
import cv2
import numpy as np
import matplotlib.pyplot as plt
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model_path = "erax/EraX-VL-7B-V1"
model = Qwen2VLForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
attn_implementation="eager", # replace with "flash_attention_2" if your GPU is Ampere architecture
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# processor = AutoProcessor.from_pretrained(model_path)
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
model_path,
min_pixels=min_pixels,
max_pixels=max_pixels,
)
image_path ="image.jpg"
with open(image_path, "rb") as f:
encoded_image = base64.b64encode(f.read())
decoded_image_text = encoded_image.decode('utf-8')
base64_data = f"data:image;base64,{decoded_image_text}"
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": base64_data,
},
{
"type": "text",
"text": "Diễn tả nội dung bức ảnh như 1 bác sỹ giỏi."
# "Diễn tả nội dung bức ảnh này bằng định dạng json."
},
],
}
]
# Prepare prompt
tokenized_text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[tokenized_text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Generation configs
generation_config = model.generation_config
generation_config.do_sample = True
generation_config.temperature = 1.0
generation_config.top_k = 1
generation_config.top_p = 0.9
generation_config.min_p = 0.1
generation_config.best_of = 5
generation_config.max_new_tokens = 2048
generation_config.repetition_penalty = 1.06
# Inference
generated_ids = model.generate(**inputs, generation_config=generation_config)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
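Example 08 above sends several images in a single turn. A minimal sketch of the corresponding `messages` payload — the structure mirrors the Qwen2-VL chat format used in the quickstart, and the file paths are just placeholders:

```python
# Sketch: one user turn carrying two images plus a text prompt,
# in the Qwen2-VL chat format used in the quickstart above.
def build_multi_image_message(image_paths, prompt):
    # One content entry per image, then the text prompt at the end.
    content = [{"type": "image", "image": p} for p in image_paths]
    content.append({"type": "text", "text": prompt})
    return [{"role": "user", "content": content}]

messages = build_multi_image_message(
    ["images/images_langque_0.jpg", "images/images_langque_1.jpg"],
    "Hãy trích xuất toàn bộ chi tiết của các bức ảnh này.",
)
# The rest of the pipeline is unchanged:
# apply_chat_template -> process_vision_info -> processor(...) -> model.generate(...)
```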
## Acknowledgments 👏
We thank Khang Đoàn ([5CD-AI](https://huggingface.co/5CD-AI)) for his invaluable support in training `EraX-VL-7B-V1`. Our appreciation also goes to AAA JS Company for their support and resources, which significantly contributed to this project.
## Citation 📝
<!-- - title={EraX-VL-7B-V1: A Highly Efficient Multimodal LLM for Vietnamese, especially for medical forms and bills.},
- author={Nguyễn Anh Nguyên and Nguyễn Hồ Nam (BCG) and Dũng Hoàng and Thục Phạm and Nhật Phạm},
- helpers={Khang Đoàn and AAA JS Company},
- contact={[email protected]},
- organization={EraX} -->
If you find our project useful, we would appreciate it if you could star our repository and cite our work as follows:
```
@article{EraX-VL-7B-V1,
title={EraX-VL-7B-V1: A Highly Efficient Multimodal LLM for Vietnamese, especially for medical forms and bills},
author={Nguyễn Anh Nguyên and Nguyễn Hồ Nam (BCG) and Hoàng Tiến Dũng and Phạm Đình Thục and Phạm Huỳnh Nhật},
organization={EraX},
year={2024},
url={https://huggingface.co/erax-ai/EraX-VL-7B-V1},
github={https://github.com/EraX-JS-Company/erax-vl-7b-v1/}
}
```
## References 📑
[1] Qwen team. Qwen2-VL. 2024.
[2] Bai, Jinze, et al. "Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond." arXiv preprint arXiv:2308.12966 (2023).
[3] Yang, An, et al. "Qwen2 technical report." arXiv preprint arXiv:2407.10671 (2024).
[4] Chen, Zhe, et al. "InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[5] Chen, Zhe, et al. "How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites." arXiv preprint arXiv:2404.16821 (2024).
[6] Tran, Chi, and Huong Le Thanh. "LaVy: Vietnamese Multimodal Large Language Model." arXiv preprint arXiv:2404.07922 (2024).
## Contact 🤝
- For correspondence regarding this work or inquiries about an API trial, please contact Nguyễn Anh Nguyên at [[email protected]](mailto:[email protected]).
- Follow us on <b><a href="https://github.com/EraX-JS-Company/erax-vl-7b-v1/" target="_blank">EraX Github</a></b>
| [
"BEAR",
"CHIA"
] |
Yntec/ElldrethsRetroMix | Yntec | text-to-image | [
"diffusers",
"safetensors",
"Retro",
"Vintage",
"Illustrations",
"Elldreth",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2023-07-14T16:18:11Z | 2024-05-12T09:27:48+00:00 | 953 | 2 | ---
library_name: diffusers
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- Retro
- Vintage
- Illustrations
- Elldreth
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
UPDATE: This model is being relaunched with the 840KVAE baked in for better details.
# Elldreths Retro Mix
Original page: https://huggingface.co/LibreSD/Elldreth
Comparison

(click for larger)
Samples and prompts:

Top left: Cute chibi toddler girl, 1940, iconic, highly detailed, digital painting, artstation, sharp focus, streamlined, by kyoani and makoto shinkai and akihiko yoshida and hidari and ROSSDRAWS
Top right: Girls portrait. out worn Retro washed Stock colors Closeup detailed eyes faces movie TRAILER TV. Santa and daughters enjoying tacos with enchiladas. sitting with a pretty cute little girl, Art Christmas Theme by Gil_Elvgren and Haddon_Sundblom. Posing
Bottom left: an adorable baby polar Bear playing cocacola bottle in a club, whimsical cartoon children book illustration. chibi eyes
Bottom right: cinematic 60s movie still, pretty school woman with cleavage hugging handsome man, classroom, Uniforms, blackboard. Pinup. He wears a backpack, bokeh | [
"BEAR"
] |
BioMistral/BioMistral-7B-GGUF | BioMistral | text-generation | [
"transformers",
"gguf",
"mistral",
"text-generation",
"medical",
"biology",
"fr",
"en",
"de",
"nl",
"es",
"pt",
"pl",
"ro",
"it",
"dataset:pubmed",
"arxiv:2402.10373",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-02-19T18:49:33Z | 2024-02-19T19:42:21+00:00 | 950 | 9 | ---
datasets:
- pubmed
language:
- fr
- en
- de
- nl
- es
- pt
- pl
- ro
- it
license: apache-2.0
pipeline_tag: text-generation
tags:
- medical
- biology
---
<p align="center">
<img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>
# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
**Abstract:**
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
# 1. BioMistral models
**BioMistral** is a suite of open-source models based on Mistral, further pre-trained for the medical domain using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
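The VRAM column above roughly tracks parameter count times bits per weight. A back-of-envelope sketch of the weights-only footprint — it ignores activations, the KV cache, and quantization metadata (and the table's figures also include framework overhead), so the numbers are approximate:

```python
# Rough VRAM estimate for the weights alone of a ~7B-parameter model:
# params * bits_per_weight / 8 bytes, converted to GiB.
def weight_gib(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1024**3

n = 7.24e9  # approximate Mistral-7B parameter count
print(round(weight_gib(n, 16), 2))  # ~13.5 GiB for FP16/BF16 weights
print(round(weight_gib(n, 4), 2))   # ~3.4 GiB for 4-bit weights (AWQ, BnB.4)
```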
# 3. Using BioMistral
You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer :
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```
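`AutoModel` returns per-token hidden states; to get one vector per input (e.g. for retrieval experiments), a common approach is mask-aware mean pooling. A minimal sketch — the pooling choice is ours, not something the paper prescribes:

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)   # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)         # avoid division by zero
    return summed / counts

# Usage with the model loaded above (runs a forward pass, so it needs the weights):
# inputs = tokenizer(["What is hypertension?"], return_tensors="pt", padding=True)
# with torch.no_grad():
#     out = model(**inputs)
# emb = mean_pool(out.last_hidden_state, inputs["attention_mask"])  # (1, hidden_size)
```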
# 4. Supervised Fine-tuning Benchmark
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds in a 3-shot setting. DARE, TIES, and SLERP are model-merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, second-best underlined. *GPT-3.5 Turbo performance is reported from the 3-shot results without SFT.
# Citation BibTeX
Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
```bibtex
@misc{labrak2024biomistral,
title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
year={2024},
eprint={2402.10373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
| [
"MEDQA",
"PUBMEDQA"
] |
mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"code",
"text-generation-inference",
"Information Extraction",
"IE",
"Named Entity Recogniton",
"Event Extraction",
"Relation Extraction",
"LLaMA",
"en",
"dataset:ACE05",
"dataset:bc5cdr",
"dataset:conll2003",
"dataset:ncbi_disease",
"dataset:conll2012_ontonotesv5",
"dataset:rams",
"dataset:tacred",
"dataset:wnut_17",
"base_model:KaraKaraWitch/HiTZ-GoLLIE-13B-AsSafeTensors",
"base_model:quantized:KaraKaraWitch/HiTZ-GoLLIE-13B-AsSafeTensors",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | 2025-03-01T17:57:18Z | 2025-03-02T05:33:38+00:00 | 949 | 1 | ---
base_model: KaraKaraWitch/HiTZ-GoLLIE-13B-AsSafeTensors
datasets:
- ACE05
- bc5cdr
- conll2003
- ncbi_disease
- conll2012_ontonotesv5
- rams
- tacred
- wnut_17
language:
- en
library_name: transformers
license: llama2
tags:
- code
- text-generation-inference
- Information Extraction
- IE
- Named Entity Recogniton
- Event Extraction
- Relation Extraction
- LLaMA
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/KaraKaraWitch/HiTZ-GoLLIE-13B-AsSafeTensors
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
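For quants split into parts (file names like `*.gguf.part1of2` are illustrative), the parts are plain byte-range splits, so — as the linked READMEs describe — joining them with `cat` restores the original file. A small self-contained demo of the round trip in a temp directory:

```shell
# Demo: split a file into two byte-range parts, then restore it with cat.
tmp=$(mktemp -d)
printf 'GGUFdemo-payload' > "$tmp/model.gguf"
head -c 8 "$tmp/model.gguf"  > "$tmp/model.gguf.part1of2"
tail -c +9 "$tmp/model.gguf" > "$tmp/model.gguf.part2of2"
cat "$tmp/model.gguf.part1of2" "$tmp/model.gguf.part2of2" > "$tmp/model.joined.gguf"
# Byte-for-byte identical to the original; a real GGUF also starts with the magic "GGUF".
cmp "$tmp/model.gguf" "$tmp/model.joined.gguf" && echo "parts joined correctly"
```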
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q4_1.gguf) | i1-Q4_1 | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"BC5CDR",
"NCBI DISEASE"
] |
llSourcell/medllama2_7b | llSourcell | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:medalpaca/medical_meadow_medqa",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-08-09T01:48:57Z | 2023-08-21T17:05:22+00:00 | 948 | 132 | ---
datasets:
- medalpaca/medical_meadow_medqa
language:
- en
license: mit
pipeline_tag: conversational
---
| [
"MEDQA"
] |
Omartificial-Intelligence-Space/Arabic-labse-Matryoshka | Omartificial-Intelligence-Space | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"arxiv:2407.21139",
"base_model:sentence-transformers/LaBSE",
"base_model:finetune:sentence-transformers/LaBSE",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | 2024-06-16T20:56:09Z | 2025-01-10T18:03:08+00:00 | 947 | 2 | ---
base_model: sentence-transformers/LaBSE
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
language:
- ar
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- mteb
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
inference: false
widget:
- source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط
النظيفة
sentences:
- رجل يقدم عرضاً
- هناك رجل بالخارج قرب الشاطئ
- رجل يجلس على أريكه
- source_sentence: رجل يقفز إلى سريره القذر
sentences:
- السرير قذر.
- رجل يضحك أثناء غسيل الملابس
- الرجل على القمر
- source_sentence: الفتيات بالخارج
sentences:
- امرأة تلف الخيط إلى كرات بجانب كومة من الكرات
- فتيان يركبان في جولة متعة
- ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة تتحدث
إليهن
- source_sentence: الرجل يرتدي قميصاً أزرق.
sentences:
- رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة حمراء
مع الماء في الخلفية.
- كتاب القصص مفتوح
- رجل يرتدي قميص أسود يعزف على الجيتار.
- source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة
شابة.
sentences:
- ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه
- رجل يستلقي على وجهه على مقعد في الحديقة.
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة
model-index:
- name: SentenceTransformer based on sentence-transformers/LaBSE
results:
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (ar)
type: mintaka/mmteb-mintaka
config: ar
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: main_score
value: 14.585
- type: map_at_1
value: 8.352
- type: map_at_3
value: 10.917
- type: map_at_5
value: 11.634
- type: map_at_10
value: 12.254
- type: ndcg_at_1
value: 8.352
- type: ndcg_at_3
value: 11.794
- type: ndcg_at_5
value: 13.085
- type: ndcg_at_10
value: 14.585
- type: recall_at_1
value: 8.352
- type: recall_at_3
value: 14.344
- type: recall_at_5
value: 17.476
- type: recall_at_10
value: 22.106
- type: precision_at_1
value: 8.352
- type: precision_at_3
value: 4.781
- type: precision_at_5
value: 3.495
- type: precision_at_10
value: 2.211
- type: mrr_at_1
value: 8.3522
- type: mrr_at_3
value: 10.9169
- type: mrr_at_5
value: 11.6341
- type: mrr_at_10
value: 12.2543
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrievalHardNegatives (ar)
type: miracl/mmteb-miracl-hardnegatives
config: ar
split: dev
revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb
metrics:
- type: main_score
value: 18.836
- type: map_at_1
value: 6.646
- type: map_at_3
value: 10.692
- type: map_at_5
value: 11.969
- type: map_at_10
value: 13.446
- type: ndcg_at_1
value: 10.5
- type: ndcg_at_3
value: 13.645
- type: ndcg_at_5
value: 15.504
- type: ndcg_at_10
value: 18.836
- type: recall_at_1
value: 6.646
- type: recall_at_3
value: 15.361
- type: recall_at_5
value: 19.925
- type: recall_at_10
value: 28.6
- type: precision_at_1
value: 10.5
- type: precision_at_3
value: 8.533
- type: precision_at_5
value: 6.9
- type: precision_at_10
value: 5.21
- type: mrr_at_1
value: 10.5
- type: mrr_at_3
value: 16.25
- type: mrr_at_5
value: 17.68
- type: mrr_at_10
value: 19.1759
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ar)
type: mlqa/mmteb-mlqa
config: ar
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 61.582
- type: map_at_1
value: 47.195
- type: map_at_3
value: 54.03
- type: map_at_5
value: 55.77
- type: map_at_10
value: 56.649
- type: ndcg_at_1
value: 47.195
- type: ndcg_at_3
value: 56.295
- type: ndcg_at_5
value: 59.417
- type: ndcg_at_10
value: 61.582
- type: recall_at_1
value: 47.195
- type: recall_at_3
value: 62.863
- type: recall_at_5
value: 70.406
- type: recall_at_10
value: 77.176
- type: precision_at_1
value: 47.195
- type: precision_at_3
value: 20.954
- type: precision_at_5
value: 14.081
- type: precision_at_10
value: 7.718
- type: mrr_at_1
value: 47.1954
- type: mrr_at_3
value: 54.0297
- type: mrr_at_5
value: 55.7705
- type: mrr_at_10
value: 56.6492
- task:
type: Retrieval
dataset:
name: MTEB SadeemQuestionRetrieval (ar)
type: sadeem/mmteb-sadeem
config: default
split: test
revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9
metrics:
- type: main_score
value: 57.653
- type: map_at_1
value: 25.084
- type: map_at_3
value: 46.338
- type: map_at_5
value: 47.556
- type: map_at_10
value: 48.207
- type: ndcg_at_1
value: 25.084
- type: ndcg_at_3
value: 53.91
- type: ndcg_at_5
value: 56.102
- type: ndcg_at_10
value: 57.653
- type: recall_at_1
value: 25.084
- type: recall_at_3
value: 76.017
- type: recall_at_5
value: 81.331
- type: recall_at_10
value: 86.07
- type: precision_at_1
value: 25.084
- type: precision_at_3
value: 25.339
- type: precision_at_5
value: 16.266
- type: precision_at_10
value: 8.607
- type: mrr_at_1
value: 23.1211
- type: mrr_at_3
value: 44.9657
- type: mrr_at_5
value: 46.3037
- type: mrr_at_10
value: 46.8749
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 76.46793440999714
- type: cosine_spearman
value: 76.66439745271298
- type: euclidean_pearson
value: 76.52075972347127
- type: euclidean_spearman
value: 76.66439745271298
- type: main_score
value: 76.66439745271298
- type: manhattan_pearson
value: 76.68001857069733
- type: manhattan_spearman
value: 76.73066402288269
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 79.67657890693198
- type: cosine_spearman
value: 77.03286420274621
- type: euclidean_pearson
value: 78.1960735272073
- type: euclidean_spearman
value: 77.032855497919
- type: main_score
value: 77.03286420274621
- type: manhattan_pearson
value: 78.25627275994229
- type: manhattan_spearman
value: 77.00430810589081
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 83.94288954523996
- type: cosine_spearman
value: 79.21432176112556
- type: euclidean_pearson
value: 81.21333251943913
- type: euclidean_spearman
value: 79.2152067330468
- type: main_score
value: 79.21432176112556
- type: manhattan_pearson
value: 81.16910737482634
- type: manhattan_spearman
value: 79.08756466301445
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 77.48393909963059
- type: cosine_spearman
value: 79.54963868861196
- type: euclidean_pearson
value: 79.28416002197451
- type: euclidean_spearman
value: 79.54963861790114
- type: main_score
value: 79.54963868861196
- type: manhattan_pearson
value: 79.18653917582513
- type: manhattan_spearman
value: 79.46713533414295
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 78.51596313692846
- type: cosine_spearman
value: 78.84601702652395
- type: euclidean_pearson
value: 78.55199809961427
- type: euclidean_spearman
value: 78.84603362286225
- type: main_score
value: 78.84601702652395
- type: manhattan_pearson
value: 78.52780170677605
- type: manhattan_spearman
value: 78.77744294039178
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 84.53393478889929
- type: cosine_spearman
value: 85.60821849381648
- type: euclidean_pearson
value: 85.32813923250558
- type: euclidean_spearman
value: 85.6081835456016
- type: main_score
value: 85.60821849381648
- type: manhattan_pearson
value: 85.32782097916476
- type: manhattan_spearman
value: 85.58098670898562
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 77.00196998325856
- type: cosine_spearman
value: 79.930951699069
- type: euclidean_pearson
value: 79.43196738390897
- type: euclidean_spearman
value: 79.93095112410258
- type: main_score
value: 79.930951699069
- type: manhattan_pearson
value: 79.33744358111427
- type: manhattan_spearman
value: 79.82939266539601
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 81.60289529424327
- type: cosine_spearman
value: 82.46806381979653
- type: euclidean_pearson
value: 81.32235058296072
- type: euclidean_spearman
value: 82.46676890643914
- type: main_score
value: 82.46806381979653
- type: manhattan_pearson
value: 81.43885277175312
- type: manhattan_spearman
value: 82.38955952718666
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 49.58293768761314
- type: cosine_spearman
value: 57.261888789832874
- type: euclidean_pearson
value: 53.36549109538782
- type: euclidean_spearman
value: 57.261888789832874
- type: main_score
value: 57.261888789832874
- type: manhattan_pearson
value: 53.06640323833928
- type: manhattan_spearman
value: 57.05837935512948
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 81.43997935928729
- type: cosine_spearman
value: 82.04996129795596
- type: euclidean_pearson
value: 82.01917866996972
- type: euclidean_spearman
value: 82.04996129795596
- type: main_score
value: 82.04996129795596
- type: manhattan_pearson
value: 82.03487112040936
- type: manhattan_spearman
value: 82.03774605775651
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 32.113475997147674
- type: cosine_spearman
value: 32.17194233764879
- type: dot_pearson
value: 32.113469728827255
- type: dot_spearman
value: 32.174771315355386
- type: main_score
value: 32.17194233764879
- type: pearson
value: 32.113475997147674
- type: spearman
value: 32.17194233764879
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.7269177710249681
name: Pearson Cosine
- type: spearman_cosine
value: 0.7225258779395222
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7259261785622463
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7210463582530393
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7259567884235211
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.722525823788783
name: Spearman Euclidean
- type: pearson_dot
value: 0.7269177712136122
name: Pearson Dot
- type: spearman_dot
value: 0.7225258771129475
name: Spearman Dot
- type: pearson_max
value: 0.7269177712136122
name: Pearson Max
- type: spearman_max
value: 0.7225258779395222
name: Spearman Max
- type: pearson_cosine
value: 0.8143867576376295
name: Pearson Cosine
- type: spearman_cosine
value: 0.8205044914629483
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8203365887013151
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8203816698535976
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8201809453496319
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8205044914629483
name: Spearman Euclidean
- type: pearson_dot
value: 0.8143867541070537
name: Pearson Dot
- type: spearman_dot
value: 0.8205044914629483
name: Spearman Dot
- type: pearson_max
value: 0.8203365887013151
name: Pearson Max
- type: spearman_max
value: 0.8205044914629483
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.7268389724271859
name: Pearson Cosine
- type: spearman_cosine
value: 0.7224359411000278
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7241418669615103
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7195408311833029
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7248184919191593
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7212936866178097
name: Spearman Euclidean
- type: pearson_dot
value: 0.7252522928016701
name: Pearson Dot
- type: spearman_dot
value: 0.7205040482865328
name: Spearman Dot
- type: pearson_max
value: 0.7268389724271859
name: Pearson Max
- type: spearman_max
value: 0.7224359411000278
name: Spearman Max
- type: pearson_cosine
value: 0.8143448965624136
name: Pearson Cosine
- type: spearman_cosine
value: 0.8211700903453509
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8217448619823571
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8216016599665544
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8216413349390971
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.82188122418776
name: Spearman Euclidean
- type: pearson_dot
value: 0.8097020064483653
name: Pearson Dot
- type: spearman_dot
value: 0.8147306090545295
name: Spearman Dot
- type: pearson_max
value: 0.8217448619823571
name: Pearson Max
- type: spearman_max
value: 0.82188122418776
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.7283468617741852
name: Pearson Cosine
- type: spearman_cosine
value: 0.7264294106954872
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7227711798003426
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.718067982079232
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7251492361775083
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7215068115809131
name: Spearman Euclidean
- type: pearson_dot
value: 0.7243396991648858
name: Pearson Dot
- type: spearman_dot
value: 0.7221390873398206
name: Spearman Dot
- type: pearson_max
value: 0.7283468617741852
name: Pearson Max
- type: spearman_max
value: 0.7264294106954872
name: Spearman Max
- type: pearson_cosine
value: 0.8075613785257986
name: Pearson Cosine
- type: spearman_cosine
value: 0.8159258089804861
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8208711370091426
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8196747601014518
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8210210137439432
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8203004500356083
name: Spearman Euclidean
- type: pearson_dot
value: 0.7870611647231145
name: Pearson Dot
- type: spearman_dot
value: 0.7874848213991118
name: Spearman Dot
- type: pearson_max
value: 0.8210210137439432
name: Pearson Max
- type: spearman_max
value: 0.8203004500356083
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.7102082520621849
name: Pearson Cosine
- type: spearman_cosine
value: 0.7103917869311991
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7134729607181519
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.708895102058259
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7171545288118942
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7130380237150746
name: Spearman Euclidean
- type: pearson_dot
value: 0.6777774738547628
name: Pearson Dot
- type: spearman_dot
value: 0.6746474823963989
name: Spearman Dot
- type: pearson_max
value: 0.7171545288118942
name: Pearson Max
- type: spearman_max
value: 0.7130380237150746
name: Spearman Max
- type: pearson_cosine
value: 0.8024378358145556
name: Pearson Cosine
- type: spearman_cosine
value: 0.8117561815472325
name: Spearman Cosine
- type: pearson_manhattan
value: 0.818920309459774
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8180515365910205
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8198346073356603
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8185162896024369
name: Spearman Euclidean
- type: pearson_dot
value: 0.7513270537478935
name: Pearson Dot
- type: spearman_dot
value: 0.7427542871546953
name: Spearman Dot
- type: pearson_max
value: 0.8198346073356603
name: Pearson Max
- type: spearman_max
value: 0.8185162896024369
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.6930745722517785
name: Pearson Cosine
- type: spearman_cosine
value: 0.6982194042238953
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6971382079778946
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6942362764367931
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7012627015062325
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6986972295835788
name: Spearman Euclidean
- type: pearson_dot
value: 0.6376735798940838
name: Pearson Dot
- type: spearman_dot
value: 0.6344835722310429
name: Spearman Dot
- type: pearson_max
value: 0.7012627015062325
name: Pearson Max
- type: spearman_max
value: 0.6986972295835788
name: Spearman Max
- type: pearson_cosine
value: 0.7855080652087961
name: Pearson Cosine
- type: spearman_cosine
value: 0.7948979371698327
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8060407473462375
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8041199691999044
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8088262858195556
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8060483394849104
name: Spearman Euclidean
- type: pearson_dot
value: 0.677754045289596
name: Pearson Dot
- type: spearman_dot
value: 0.6616232873061395
name: Spearman Dot
- type: pearson_max
value: 0.8088262858195556
name: Pearson Max
- type: spearman_max
value: 0.8060483394849104
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/LaBSE
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) on the Omartificial-Intelligence-Space/Arabic-NLi-Triplet dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) <!-- at revision e34fab64a3011d2176c99545a93d5cbddc9a91b7 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - Omartificial-Intelligence-Space/Arabic-NLi-Triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
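Because the final `Normalize()` module projects every embedding onto the unit sphere, cosine similarity and dot-product similarity give identical scores for this model. A minimal NumPy sketch of that equivalence (the vectors here are random stand-ins, not real model outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stand-in 768-d embeddings, L2-normalized as the Normalize() layer would do.
a = rng.normal(size=768)
b = rng.normal(size=768)
a /= np.linalg.norm(a)
b /= np.linalg.norm(b)

cosine = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
dot = a @ b

# On unit vectors the two scores coincide.
assert np.isclose(cosine, dot)
```

In practice this means a plain dot product (or matrix multiply over a batch) is enough for retrieval with this model's embeddings.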
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-labse")
# Run inference
sentences = [
'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
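Because the model was trained with MatryoshkaLoss, the leading dimensions of each embedding carry most of the information, so vectors can be truncated to 512, 256, 128, or 64 dimensions and re-normalized with only a modest drop in the Spearman scores reported in the evaluation tables. A small NumPy sketch of that truncation step, using a random vector in place of a real embedding:

```python
import numpy as np

def truncate_embedding(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length."""
    head = vec[:dim]
    return head / np.linalg.norm(head)

rng = np.random.default_rng(42)
full = rng.normal(size=768)
full /= np.linalg.norm(full)

small = truncate_embedding(full, 64)
assert small.shape == (64,)
assert np.isclose(np.linalg.norm(small), 1.0)
```

Recent sentence-transformers releases also accept a `truncate_dim` argument when loading a model, which performs this truncation internally; check your installed version before relying on it.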
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7269 |
| **spearman_cosine** | **0.7225** |
| pearson_manhattan | 0.7259 |
| spearman_manhattan | 0.721 |
| pearson_euclidean | 0.726 |
| spearman_euclidean | 0.7225 |
| pearson_dot | 0.7269 |
| spearman_dot | 0.7225 |
| pearson_max | 0.7269 |
| spearman_max | 0.7225 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7268 |
| **spearman_cosine** | **0.7224** |
| pearson_manhattan | 0.7241 |
| spearman_manhattan | 0.7195 |
| pearson_euclidean | 0.7248 |
| spearman_euclidean | 0.7213 |
| pearson_dot | 0.7253 |
| spearman_dot | 0.7205 |
| pearson_max | 0.7268 |
| spearman_max | 0.7224 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7283 |
| **spearman_cosine** | **0.7264** |
| pearson_manhattan | 0.7228 |
| spearman_manhattan | 0.7181 |
| pearson_euclidean | 0.7251 |
| spearman_euclidean | 0.7215 |
| pearson_dot | 0.7243 |
| spearman_dot | 0.7221 |
| pearson_max | 0.7283 |
| spearman_max | 0.7264 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7102 |
| **spearman_cosine** | **0.7104** |
| pearson_manhattan | 0.7135 |
| spearman_manhattan | 0.7089 |
| pearson_euclidean | 0.7172 |
| spearman_euclidean | 0.713 |
| pearson_dot | 0.6778 |
| spearman_dot | 0.6746 |
| pearson_max | 0.7172 |
| spearman_max | 0.713 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6931 |
| **spearman_cosine** | **0.6982** |
| pearson_manhattan | 0.6971 |
| spearman_manhattan | 0.6942 |
| pearson_euclidean | 0.7013 |
| spearman_euclidean | 0.6987 |
| pearson_dot | 0.6377 |
| spearman_dot | 0.6345 |
| pearson_max | 0.7013 |
| spearman_max | 0.6987 |
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8144 |
| **spearman_cosine** | **0.8205** |
| pearson_manhattan | 0.8203 |
| spearman_manhattan | 0.8204 |
| pearson_euclidean | 0.8202 |
| spearman_euclidean | 0.8205 |
| pearson_dot | 0.8144 |
| spearman_dot | 0.8205 |
| pearson_max | 0.8203 |
| spearman_max | 0.8205 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8143 |
| **spearman_cosine** | **0.8212** |
| pearson_manhattan | 0.8217 |
| spearman_manhattan | 0.8216 |
| pearson_euclidean | 0.8216 |
| spearman_euclidean | 0.8219 |
| pearson_dot | 0.8097 |
| spearman_dot | 0.8147 |
| pearson_max | 0.8217 |
| spearman_max | 0.8219 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8076 |
| **spearman_cosine** | **0.8159** |
| pearson_manhattan | 0.8209 |
| spearman_manhattan | 0.8197 |
| pearson_euclidean | 0.821 |
| spearman_euclidean | 0.8203 |
| pearson_dot | 0.7871 |
| spearman_dot | 0.7875 |
| pearson_max | 0.821 |
| spearman_max | 0.8203 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8024 |
| **spearman_cosine** | **0.8118** |
| pearson_manhattan | 0.8189 |
| spearman_manhattan | 0.8181 |
| pearson_euclidean | 0.8198 |
| spearman_euclidean | 0.8185 |
| pearson_dot | 0.7513 |
| spearman_dot | 0.7428 |
| pearson_max | 0.8198 |
| spearman_max | 0.8185 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7855 |
| **spearman_cosine** | **0.7949** |
| pearson_manhattan | 0.806 |
| spearman_manhattan | 0.8041 |
| pearson_euclidean | 0.8088 |
| spearman_euclidean | 0.806 |
| pearson_dot | 0.6778 |
| spearman_dot | 0.6616 |
| pearson_max | 0.8088 |
| spearman_max | 0.806 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/Arabic-NLi-Triplet
* Dataset: Omartificial-Intelligence-Space/Arabic-NLi-Triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.99 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.44 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.82 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> |
| <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> |
| <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
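Conceptually, MatryoshkaLoss evaluates the inner loss (here MultipleNegativesRankingLoss) on each truncated prefix of the embedding and sums the results with the weights above. A simplified, framework-free sketch of that weighting scheme; the `inner_loss` callable below is a toy stand-in, not the real sentence-transformers implementation:

```python
from typing import Callable, Sequence

def matryoshka_loss(
    embeddings,                       # sequence of full-dimensional vectors
    inner_loss: Callable,             # loss computed on (possibly truncated) vectors
    dims: Sequence[int] = (768, 512, 256, 128, 64),
    weights: Sequence[float] = (1, 1, 1, 1, 1),
) -> float:
    """Weighted sum of the inner loss over each embedding prefix."""
    total = 0.0
    for d, w in zip(dims, weights):
        truncated = [e[:d] for e in embeddings]
        total += w * inner_loss(truncated)
    return total

def toy_loss(vecs):
    """Toy inner loss: mean squared length of the vectors."""
    return sum(sum(x * x for x in v) for v in vecs) / len(vecs)

print(matryoshka_loss([[1.0] * 768, [0.5] * 768], toy_loss))  # → 1080.0
```

With equal weights, every prefix length contributes equally to the gradient, which is what pushes useful information into the early dimensions.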
### Evaluation Dataset
#### Omartificial-Intelligence-Space/Arabic-NLi-Triplet
* Dataset: Omartificial-Intelligence-Space/Arabic-NLi-Triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 19.71 tokens</li><li>max: 100 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.37 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.49 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> |
| <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> |
| <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
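With `warmup_ratio` 0.1, the actual number of warmup steps follows from the total step count: 557,850 training samples at batch size 64 give 8,717 steps over one epoch, so roughly 872 warmup steps (assuming the trainer rounds the warmup count up, as Hugging Face Transformers does). A quick sketch of that arithmetic:

```python
import math

num_samples = 557_850   # training set size from the dataset statistics
batch_size = 64
epochs = 1
warmup_ratio = 0.1

# The dataloader rounds the last partial batch up.
steps_per_epoch = math.ceil(num_samples / batch_size)    # 8717
total_steps = steps_per_epoch * epochs
warmup_steps = math.ceil(total_steps * warmup_ratio)     # 872

print(steps_per_epoch, warmup_steps)
```

This is consistent with the training log below, where step 200 corresponds to epoch 0.0229 (200 / 8717 ≈ 0.0229).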
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| None | 0 | - | 0.7104 | 0.7264 | 0.7224 | 0.6982 | 0.7225 |
| 0.0229 | 200 | 13.1738 | - | - | - | - | - |
| 0.0459 | 400 | 8.8127 | - | - | - | - | - |
| 0.0688 | 600 | 8.0984 | - | - | - | - | - |
| 0.0918 | 800 | 7.2984 | - | - | - | - | - |
| 0.1147 | 1000 | 7.5749 | - | - | - | - | - |
| 0.1377 | 1200 | 7.1292 | - | - | - | - | - |
| 0.1606 | 1400 | 6.6146 | - | - | - | - | - |
| 0.1835 | 1600 | 6.6523 | - | - | - | - | - |
| 0.2065 | 1800 | 6.1095 | - | - | - | - | - |
| 0.2294 | 2000 | 6.0841 | - | - | - | - | - |
| 0.2524 | 2200 | 6.3024 | - | - | - | - | - |
| 0.2753 | 2400 | 6.1941 | - | - | - | - | - |
| 0.2983 | 2600 | 6.1686 | - | - | - | - | - |
| 0.3212 | 2800 | 5.8317 | - | - | - | - | - |
| 0.3442 | 3000 | 6.0597 | - | - | - | - | - |
| 0.3671 | 3200 | 5.7832 | - | - | - | - | - |
| 0.3900 | 3400 | 5.7088 | - | - | - | - | - |
| 0.4130 | 3600 | 5.6988 | - | - | - | - | - |
| 0.4359 | 3800 | 5.5268 | - | - | - | - | - |
| 0.4589 | 4000 | 5.5543 | - | - | - | - | - |
| 0.4818 | 4200 | 5.3152 | - | - | - | - | - |
| 0.5048 | 4400 | 5.2894 | - | - | - | - | - |
| 0.5277 | 4600 | 5.1805 | - | - | - | - | - |
| 0.5506 | 4800 | 5.4559 | - | - | - | - | - |
| 0.5736 | 5000 | 5.3836 | - | - | - | - | - |
| 0.5965 | 5200 | 5.2626 | - | - | - | - | - |
| 0.6195 | 5400 | 5.2511 | - | - | - | - | - |
| 0.6424 | 5600 | 5.3308 | - | - | - | - | - |
| 0.6654 | 5800 | 5.2264 | - | - | - | - | - |
| 0.6883 | 6000 | 5.2881 | - | - | - | - | - |
| 0.7113 | 6200 | 5.1349 | - | - | - | - | - |
| 0.7342 | 6400 | 5.0872 | - | - | - | - | - |
| 0.7571 | 6600 | 4.5515 | - | - | - | - | - |
| 0.7801 | 6800 | 3.4312 | - | - | - | - | - |
| 0.8030 | 7000 | 3.1008 | - | - | - | - | - |
| 0.8260 | 7200 | 2.9582 | - | - | - | - | - |
| 0.8489 | 7400 | 2.8153 | - | - | - | - | - |
| 0.8719 | 7600 | 2.7214 | - | - | - | - | - |
| 0.8948 | 7800 | 2.5392 | - | - | - | - | - |
| 0.9177 | 8000 | 2.584 | - | - | - | - | - |
| 0.9407 | 8200 | 2.5384 | - | - | - | - | - |
| 0.9636 | 8400 | 2.4937 | - | - | - | - | - |
| 0.9866 | 8600 | 2.4155 | - | - | - | - | - |
| 1.0 | 8717 | - | 0.8118 | 0.8159 | 0.8212 | 0.7949 | 0.8205 |
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## <span style="color:blue">Acknowledgments</span>
The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.
## Citation
If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:
```bibtex
@misc{nacar2024enhancingsemanticsimilarityunderstanding,
      title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning},
      author={Omer Nacar and Anis Koubaa},
      year={2024},
      eprint={2407.21139},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.21139},
}
```
| [
"BIOSSES"
] |
5CD-AI/Vietnamese-Sentiment-visobert | 5CD-AI | text-classification | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"sentiment-analysis",
"social-listening",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-13T04:25:49Z | 2024-07-26T17:32:57+00:00 | 943 | 25 | ---
language:
- vi
library_name: transformers
metrics:
- accuracy
- f1
tags:
- sentiment-analysis
- social-listening
---
# 5CD-ViSoBERT for Vietnamese Sentiment Analysis
<b>ARE YOU BORED AND TIRED OF HAVING TO BUILD A 🇻🇳 VIETNAMESE SENTIMENT ANALYSIS MODEL OVER AND OVER AGAIN?</b>
<b>BOOM! 🤯 NO WORRIES, WE'RE HERE FOR YOU =)) 🔥!</b>
This model is based on our pretrained [5CD-AI/visobert-14gb-corpus](https://huggingface.co/5CD-AI/visobert-14gb-corpus), which was continually pretrained on a 14GB corpus of Vietnamese social-media content, so it performs well on comments of every sentiment, emojis included 😂👍💬🔥.
Our model is fine-tuned on <b>120K Vietnamese sentiment analysis samples</b>, including comments and reviews from e-commerce platforms, social media, and forums, drawn from a diverse range of datasets: SA-VLSP2016, AIVIVN-2019, UIT-VSFC, UIT-VSMEC, UIT-ViCTSD, UIT-ViOCD, UIT-ViSFD, Vi-amazon-reviews, and Tiki-reviews.
The model will give softmax outputs for three labels.
<b>Labels</b>:
```
0 -> Negative
1 -> Positive
2 -> Neutral
```
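Mapping the model's softmax scores back to these labels is just an argmax over the three outputs; a tiny pure-Python sketch for illustration (the short label names `NEG`/`POS`/`NEU` match the pipeline output shown later in this card):

```python
import math

ID2LABEL = {0: "NEG", 1: "POS", 2: "NEU"}

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    probs = softmax(logits)
    return ID2LABEL[probs.index(max(probs))]
```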
## Dataset
Our training data combines the datasets below. Because of label ambiguity in UIT-VSMEC, UIT-ViCTSD, and VOZ-HSD, we re-labeled those datasets with the Gemini 1.5 Flash API according to the same 3-label scheme. The number of samples in each dataset is listed below:
<table border="2">
<tr align="center">
<th rowspan="2">Dataset</th>
<th colspan="3">Train</th>
<th colspan="3">Test</th>
<th colspan="3">Val</th>
</tr>
<tr align="center">
<th>Neg</th>
<th>Pos</th>
<th>Neu</th>
<th>Neg</th>
<th>Pos</th>
<th>Neu</th>
<th>Neg</th>
<th>Pos</th>
<th>Neu</th>
</tr>
<tr align="center">
<td align="left">All-filtered</td>
<td>62708</td>
<td>41400</td>
<td>11593</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>5079</td>
<td>3724</td>
<td>638</td>
</tr>
<tr align="center">
<td align="left">SA-VLSP2016</td>
<td>4759</td>
<td>4798</td>
<td>4459</td>
<td>1180</td>
<td>1190</td>
<td>1114</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr align="center">
<td align="left">UIT-VSFC </td>
<td>5325</td>
<td>5643</td>
<td>458</td>
<td>1409</td>
<td>1590</td>
<td>167</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr align="center">
<td align="left">UIT-VSMEC (Gemini-label)</td>
<td>3219</td>
<td>1665</td>
<td>594</td>
<td>458</td>
<td>407</td>
<td>210</td>
<td>71</td>
<td>388</td>
<td>239</td>
<td>52</td>
</tr>
<tr align="center">
<td align="left">AIVIVN-2019</td>
<td>6776</td>
<td>7879</td>
<td>-</td>
<td>4770</td>
<td>5168</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr align="center">
<td align="left">UIT-ViCTSD (Gemini-label)</td>
<td>3370</td>
<td>2615</td>
<td>933</td>
<td>3370</td>
<td>2615</td>
<td>933</td>
<td>3370</td>
<td>2615</td>
<td>933</td>
</tr>
<tr align="center">
<td align="left">UIT-ViHSD</td>
<td>4162</td>
<td>19886</td>
<td>-</td>
<td>1132</td>
<td>5548</td>
<td>-</td>
<td>482</td>
<td>2190</td>
<td>-</td>
</tr>
<tr align="center">
<td align="left">UIT-ViSFD</td>
<td>2850</td>
<td>3670</td>
<td>1266</td>
<td>827</td>
<td>1000</td>
<td>397</td>
<td>409</td>
<td>515</td>
<td>188</td>
</tr>
<tr align="center">
<td align="left">UIT-ViOCD</td>
<td>2292</td>
<td>2095</td>
<td>-</td>
<td>279</td>
<td>270</td>
<td>-</td>
<td>283</td>
<td>265</td>
<td>-</td>
</tr>
<tr align="center">
<td align="left">Tiki-reviews</td>
<td>20093</td>
<td>6669</td>
<td>4698</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr align="center">
<td align="left">VOZ-HSD (Gemini-label)</td>
<td>2676</td>
<td>1213</td>
<td>1071</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr align="center">
<td align="left">Vietnamese-amazon-polarity</td>
<td>2559</td>
<td>2441</td>
<td>-</td>
<td>1017</td>
<td>983</td>
<td>-</td>
<td>523</td>
<td>477</td>
<td>-</td>
</tr>
</table>
## Evaluation
<table>
<tr align="center">
<td rowspan=2><b>Model</td>
<td colspan=4><b>SA-VLSP2016</td>
<td colspan=4><b>AIVIVN-2019</td>
<td colspan=4><b>UIT-VSFC</td>
<td colspan=4><b>UIT-VSMEC (Gemini-label)</td>
<td colspan=4><b>UIT-ViCTSD (Gemini-label)</td>
</tr>
<tr align="center">
<td><b>Acc</td>
<td><b>Prec</td>
<td><b>Recall</td>
<td><b>WF1</td>
<td><b>Acc</td>
<td><b>Prec</td>
<td><b>Recall</td>
<td><b>WF1</td>
<td><b>Acc</td>
<td><b>Prec</td>
<td><b>Recall</td>
<td><b>WF1</td>
<td><b>Acc</td>
<td><b>Prec</td>
<td><b>Recall</td>
<td><b>WF1</td>
<td><b>Acc</td>
<td><b>Prec</td>
<td><b>Recall</td>
<td><b>WF1</td>
</tr>
<tr align="center">
<td align="left">wonrax/phobert-base-vietnamese-sentiment</td>
<td>61.65</td>
<td>63.95</td>
<td>61.65</td>
<td>60.01</td>
<td>84.87</td>
<td>95.12</td>
<td>84.87</td>
<td>89.47</td>
<td>76.37</td>
<td>88.10</td>
<td>76.37</td>
<td>79.53</td>
<td>65.41</td>
<td>74.36</td>
<td>65.41</td>
<td>68.33</td>
<td>62.34</td>
<td>73.08</td>
<td>62.34</td>
<td>65.54</td>
</tr>
<tr align="center">
<td align="left"><b>5CD-AI/Vietnamese-Sentiment-visobert</td>
<td><b>88.06</td>
<td><b>88.16</td>
<td><b>88.06</td>
<td><b>88.06</td>
<td><b>99.62</td>
<td><b>99.65</td>
<td><b>99.62</td>
<td><b>99.64</td>
<td><b>94.65</td>
<td><b>93.30</td>
<td><b>93.65</td>
<td><b>93.38</td>
<td><b>77.91</td>
<td><b>77.21</td>
<td><b>77.91</td>
<td><b>77.46</td>
<td><b>75.10</td>
<td><b>74.59</td>
<td><b>75.10</td>
<td><b>74.79</td>
</tr>
</table>
<table>
<tr align="center">
<td rowspan=2><b>Model</td>
<td colspan=4><b>UIT-ViOCD</td>
<td colspan=4><b>UIT-ViSFD</td>
<td colspan=4><b>Vi-amazon-polar</td>
</tr>
<tr align="center">
<td><b>Acc</td>
<td><b>Prec</td>
<td><b>Recall</td>
<td><b>WF1</td>
<td><b>Acc</td>
<td><b>Prec</td>
<td><b>Recall</td>
<td><b>WF1</td>
<td><b>Acc</td>
<td><b>Prec</td>
<td><b>Recall</td>
<td><b>WF1</td>
</tr>
<tr align="center">
<td align="left">wonrax/phobert-base-vietnamese-sentiment</td>
<td>74.68</td>
<td>87.14</td>
<td>74.68</td>
<td>78.13</td>
<td>67.90</td>
<td>67.95</td>
<td>67.90</td>
<td>66.98</td>
<td>61.40</td>
<td>76.53</td>
<td>61.40</td>
<td>65.70</td>
</tr>
<tr align="center">
<td align="left"><b>5CD-AI/Vietnamese-Sentiment-visobert</td>
<td><b>94.35</td>
<td><b>94.74</td>
<td><b>94.35</td>
<td><b>94.53</td>
<td><b>93.26</td>
<td><b>93.20</td>
<td><b>93.26</td>
<td><b>93.21</td>
<td><b>89.90</td>
<td><b>90.13</td>
<td><b>89.90</td>
<td><b>90.01</td>
</tr>
</table>
## Usage (HuggingFace Transformers)
Install the `transformers` package:
```bash
pip install transformers
```
### Pipeline
```python
from transformers import pipeline
model_path = '5CD-AI/Vietnamese-Sentiment-visobert'
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Miếng dán dễ xước , ko khít với dt 11 prm")
```
Output:
```
[{'label': 'NEG', 'score': 0.998149037361145}]
```
### Other examples
```
Sentence: Đây là mô hình rất hay, đáp ứng tốt nhu cầu của nhiều doanh nghiệp Việt.
### Sentiment score ####
1) POS: 0.9995
2) NEG: 0.0003
3) NEU: 0.0003
```
```
Sentence: Qua vụ này thì uy tín của Trump càng lớn hơn nữa. Nhất là với hình ảnh đầy tính biểu tượng như trên.
### Sentiment score ####
1) POS: 0.9965
2) NEG: 0.0029
3) NEU: 0.0005
```
```
Sentence: Bãi đi nó lừa lắm, mình có bỏ vào ví tt này hơn 20 triệu. Lãi tính ra cả tháng dc bao nhiêu mình không nhớ, nhưng khi rút về ngân hàng nó trừ phí giao dịch hơn mịa nó tiền lãi.
Nên từ đó cạch luôn
### Sentiment score ####
1) NEG: 0.999
2) POS: 0.0008
3) NEU: 0.0002
```
```
Sentence: Vậy chắc tùy nơi rồi :D
Chỗ mình chuộng hàng masan lắm, mì gói thì không hẳn (có kokomi cũng bán chạy), con gia vị thì gần như toàn đồ masan.
### Sentiment score ####
1) NEU: 0.9824
2) NEG: 0.0157
3) POS: 0.0019
```
```
Sentence: hội sở ở tech trần duy hưng có 1 thằng là thằng Đạt hói. Làm lâu lên lão làng, đc làm lãnh đạo nhưng chả có cái việc mẹ gì chỉ được ngồi xếp ca cho nhân viên. xấu tính bẩn tính sân si nhất cái Tech*. Nghiệp vụ thì ậm ờ đ*o biết gì, chỉ suốt ngày nhận lương đi săm soi nhân viên là nhanh =))) đàn ông đàn ang chả khác mẹ gì mấy con mụ ngoài chợ, nó hành từng nhân viên ra bã, trừ đứa nào nịnh nọt ve vãn với nó. Lậy luôn đhs 1 thằng như thế lại được lên làm lead ở Tech.
### Sentiment score ####
1) NEG: 0.9994
2) POS: 0.0006
3) NEU: 0.0001
```
```
Sentence: Cà phê dở ko ngon, ai chưa mua thì đừng mua
### Sentiment score ####
1) NEG: 0.9994
2) POS: 0.0005
3) NEU: 0.0001
```
```
Sentence: Cũng tạm. Ko gì đb
### Sentiment score ####
1) NEU: 0.9387
2) NEG: 0.0471
3) POS: 0.0142
```
```
Sentence: thui báo ơi.nhà từ trong trứng ra mà sao sáng đc.
### Sentiment score ####
1) NEG: 0.988
2) POS: 0.0119
3) NEU: 0.0001
```
```
Sentence: Dm mới kéo cái tuột luôn cái kính cường lực🙂
R phải cầm cái kính tự dán🙂 để lâu quá nó dính hai cục bụi lên nữa chứ má bực thiệt chứ
Hình như tại hai cái cục nam châm nó xúc ra 😑
### Sentiment score ####
1) NEG: 0.9928
2) POS: 0.0071
3) NEU: 0.0001
```
```
Sentence: Mấy cái khóa kiểu này ông lên youtube tự học còn ngon hơn.
### Sentiment score ####
1) NEG: 0.9896
2) POS: 0.008
3) NEU: 0.0024
```
### Full classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
import torch

#### Load model
model_path = '5CD-AI/Vietnamese-Sentiment-visobert'
tokenizer = AutoTokenizer.from_pretrained(model_path)
config = AutoConfig.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForSequenceClassification.from_pretrained(model_path).to(device)

sentence = 'Cũng giống mấy khoá Youtube học cũng được'
print('Sentence: ', sentence)

input_ids = torch.tensor([tokenizer.encode(sentence)]).to(device)
with torch.no_grad():
    out = model(input_ids)
    scores = out.logits.softmax(dim=-1).cpu().numpy()[0]

# Print labels and scores, highest first
ranking = np.argsort(scores)[::-1]
print("### Sentiment score ####")
for i in range(scores.shape[0]):
    label = config.id2label[ranking[i]]
    score = scores[ranking[i]]
    print(f"{i+1}) {label}: {np.round(float(score), 4)}")
```
Output:
```
Sentence: Cũng giống mấy khoá Youtube học cũng được
### Sentiment score ####
1) NEU: 0.8928
2) NEG: 0.0586
3) POS: 0.0486
```
## Fine-tune Configuration
We fine-tune `5CD-AI/visobert-14gb-corpus` on downstream tasks with `transformers` library with the following configuration:
- seed: 42
- gradient_accumulation_steps: 1
- weight_decay: 0.01
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
- training_epochs: 5
- model_max_length: 256
- learning_rate: 2e-5
- metric_for_best_model: wf1
- strategy: epoch
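For reference, `wf1` is the support-weighted F1 over the three classes (equivalent to scikit-learn's `f1_score(average="weighted")`). A small pure-Python sketch of the metric, for illustration only — not the actual training code:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1 averaged by class frequency in y_true."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for label in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        score += (support[label] / total) * f1
    return score
```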
## References
[1] [PhoBERT: Pre-trained language models for Vietnamese](https://aclanthology.org/2020.findings-emnlp.92/)
[2] [ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing](https://aclanthology.org/2023.emnlp-main.315/)
[3] [The Amazon Polarity dataset](https://paperswithcode.com/dataset/amazon-polarity-1)
## Disclaimer
The data contains actual comments from social networks that might be construed as abusive, offensive, or obscene. The examples and dataset may also contain negative information about specific businesses. We only collected this data and bear no legal responsibility. | [
"BEAR"
] |
pszemraj/long-t5-tglobal-base-sci-simplify-elife | pszemraj | summarization | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"longt5",
"text2text-generation",
"lay summaries",
"paper summaries",
"biology",
"medical",
"summarization",
"en",
"dataset:pszemraj/scientific_lay_summarisation-elife-norm",
"base_model:google/long-t5-tglobal-base",
"base_model:quantized:google/long-t5-tglobal-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-08T23:35:56Z | 2023-11-28T19:20:35+00:00 | 932 | 5 | ---
base_model: google/long-t5-tglobal-base
datasets:
- pszemraj/scientific_lay_summarisation-elife-norm
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: summarization
tags:
- lay summaries
- paper summaries
- biology
- medical
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a ''toolbox'' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg-
max P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur- face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).'
example_title: scientific paper
- text: 'Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By''s the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi''s nerocos and do rain become you to let so is his brother is made in
use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I''m also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I''m a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I''d Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let''s
get on with the lecture. There''s an exciting topic today I''m going to start
by sharing some slides with you and later on during the lecture we''ll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let''s get started. Today''s topic is a very important one. It''s
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It''s called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it''s a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let''s say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It''s got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you''ve seen before,
it''s the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You''re not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.'
example_title: transcribed audio - lecture
- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
& memory complexity (where nn is sequence length). Hence, it''s computationally
very expensive to apply transformer-based models on long sequences n > 512n>512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗''s recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one''s life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird''s attention is an approximation
of BERT''s full attention and therefore does not strive to be better than BERT''s
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT''s quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence ''BigBird is now available in HuggingFace for
extractive question answering''. In BERT-like attention, every word would simply
attend to all other tokens.
Let''s think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let''s consider following sentence as an example >>> example = [''BigBird'',
''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
''question'', ''answering'']
>>> # further let''s assume, we''re trying to understand the representation of
''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently ''available'' token doesn''t have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.'
example_title: bigbird blog intro
- text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical physics
most of the jokes will go over a typical viewer''s head. There''s also Rick''s
nihilistic outlook, which is deftly woven into his characterisation- his personal
philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
understand this stuff; they have the intellectual capacity to truly appreciate
the depths of these jokes, to realise that they''re not just funny- they say something
deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
wit unfolds itself on their television screens. What fools.. how I pity them.
😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
It''s for the ladies'' eyes only- and even then they have to demonstrate that
they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
kid 😎'
example_title: Richard & Mortimer
- text: The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey
building, and the tallest structure in Paris. Its base is square, measuring 125
metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed
the Washington Monument to become the tallest man-made structure in the world,
a title it held for 41 years until the Chrysler Building in New York City was
finished in 1930. It was the first structure to reach a height of 300 metres.
Due to the addition of a broadcasting aerial at the top of the tower in 1957,
it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters,
the Eiffel Tower is the second tallest free-standing structure in France after
the Millau Viaduct.
example_title: eiffel
parameters:
max_length: 64
min_length: 8
no_repeat_ngram_size: 3
early_stopping: true
repetition_penalty: 3.5
encoder_no_repeat_ngram_size: 4
length_penalty: 0.4
num_beams: 4
---
# long-t5-tglobal-base-sci-simplify: elife subset
<a href="https://colab.research.google.com/gist/pszemraj/37a406059887a400afc1428d70374327/long-t5-tglobal-base-sci-simplify-elife-example-with-textsum.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Exploring how well long-document models trained on "lay summaries" of scientific papers generalize.
> A lay summary is a summary of a research paper or scientific study that is written in plain language, without the use of technical jargon, and is designed to be easily understood by non-experts.
## Model description
This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the `pszemraj/scientific_lay_summarisation-elife-norm` dataset.
- The variant trained on the PLOS subset can be found [here](https://huggingface.co/pszemraj/long-t5-tglobal-base-sci-simplify)
## Usage
It's recommended to use this model with [beam search decoding](https://huggingface.co/docs/transformers/generation_strategies#beamsearch-decoding). If interested, you can also use the `textsum` util repo to have most of this abstracted out for you:
```bash
pip install -U textsum
```
```python
from textsum.summarize import Summarizer
model_name = "pszemraj/long-t5-tglobal-base-sci-simplify-elife"
summarizer = Summarizer(model_name) # GPU auto-detected
text = "put the text you don't want to read here"
summary = summarizer.summarize_string(text)
print(summary)
```
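The beam-search settings listed in this card's metadata (used by the hosted inference widget) map directly onto `generate()` keyword arguments. A sketch of that mapping with the values copied from the metadata above; note the widget's `max_length` of 64 is far shorter than the ~390-token summaries reported below, so raise it when generating full lay summaries:

```python
# Beam-search settings from the card metadata, expressed as generate() kwargs.
# These are the hosted-widget defaults; raise max_length for full summaries.
gen_kwargs = dict(
    max_length=64,
    min_length=8,
    no_repeat_ngram_size=3,
    early_stopping=True,
    repetition_penalty=3.5,
    encoder_no_repeat_ngram_size=4,
    length_penalty=0.4,
    num_beams=4,
)

# Usage (assuming `model` and tokenized `inputs` loaded via transformers):
# summary_ids = model.generate(**inputs, **gen_kwargs)
```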
## Intended uses & limitations
- How well the model generalizes outside the dataset domain (PubMed/bioscience-style papers) has not yet been evaluated.
## Training and evaluation data
The `elife` subset of the lay-summaries dataset; see `pszemraj/scientific_lay_summarisation-elife-norm` for details.
## Training procedure
### Eval results
It achieves the following results on the evaluation set:
- Loss: 1.9990
- Rouge1: 38.5587
- Rouge2: 9.7336
- Rougel: 21.1974
- Rougelsum: 35.9333
- Gen Len: 392.7095
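For readers unfamiliar with the scores above: ROUGE-1 measures unigram overlap between a generated summary and the reference, ROUGE-2 bigram overlap, and ROUGE-L the longest common subsequence. A minimal illustration of ROUGE-1 F1 (not the exact `rouge-score` implementation, which also applies stemming and tokenization rules):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate summary and a reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: 3 of the 4 candidate words appear in the 3-word reference.
score = rouge1_f1("the cell divides rapidly", "the cell divides")  # 6/7 ≈ 0.857
```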
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 3.0
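Two of the values above are derived: the total train batch size is the per-device batch size times the gradient-accumulation steps (4 × 16 = 64), and the cosine scheduler warms up linearly over the first 1% of steps before decaying. An illustrative sketch of that schedule shape (assumed; the exact Hugging Face scheduler implementation may differ in minor details):

```python
import math

def cosine_lr(step: int, total_steps: int,
              base_lr: float = 4e-4, warmup_ratio: float = 0.01) -> float:
    """Learning rate at `step`: linear warmup, then cosine decay to zero."""
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))

assert 4 * 16 == 64  # per-device batch × grad accumulation = total batch
```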
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 2.2995 | 1.47 | 100 | 2.0175 | 35.2501 | 8.2121 | 20.4587 | 32.4494 | 439.7552 |
| 2.2171 | 2.94 | 200 | 1.9990 | 38.5587 | 9.7336 | 21.1974 | 35.9333 | 392.7095 |
| [
"BEAR"
] |
phamhai/Llama-3.2-3B-Instruct-Frog | phamhai | text-generation | [
"safetensors",
"llama",
"RAG",
"Vietnamese",
"Generation",
"Function_Calling",
"Function Calling",
"FC",
"Summarization",
"Rewriting",
"Functions",
"VLLM",
"LLM",
"text-generation",
"conversational",
"en",
"vi",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | 2024-10-22T09:45:48Z | 2024-12-13T04:48:01+00:00 | 930 | 11 | ---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
language:
- en
- vi
license: llama3.2
pipeline_tag: text-generation
tags:
- RAG
- Vietnamese
- Generation
- Function_Calling
- Function Calling
- FC
- Summarization
- Rewriting
- Functions
- VLLM
- LLM
---
<p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6612cc790b91dd96968028f9/yP51EyRNg-CHCKB4gBYan.png" width="300" /> </p>
<h1>Llama-3.2-3B-Instruct-Frog - a RAG-optimized LLaMA3.2 for Vietnamese</h1>
**Quantized Version**: [phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF)
At the end of September 2024, Meta released two lightweight LLM versions: [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) and [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). However, these models do not support Vietnamese well, especially in tasks related to Retrieval-Augmented Generation (RAG).
Today, I am excited to announce the release of two models specifically trained to provide better support for Vietnamese RAG tasks.
<h2>Model Details:</h2>
+ Base Models: Llama-3.2-1B-Instruct and Llama-3.2-3B-Instruct
+ Performance: The models are optimized for fast inference and can be easily deployed on on-premise and edge devices (laptops, smartphones, NVIDIA Jetson Xavier, Raspberry Pi, etc.).
+ Model weights:
+ [Llama-3.2-1B-Instruct-Frog](https://huggingface.co/phamhai/Llama-3.2-1B-Instruct-Frog): 131K context length, 1 billion parameters
+ [Llama-3.2-3B-Instruct-Frog](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog): 131K context length, 3 billion parameters
<blockquote style="color:red"> <p><strong style="color: red">Terms of Use and License</strong>: By using our released weights, you agree to and comply with the terms and conditions specified in Meta's LLaMA-3 license.</p> </blockquote>
<h2>Model Evaluation</h2>
We evaluated this model on the [VMLU benchmark](https://vmlu.ai/) and achieved an accuracy of **45.13**. However, this benchmark is not the focus of our current efforts. We believe it will be very difficult for language models with fewer than 13 billion parameters, and especially those under 3 billion, to retain enough knowledge to answer questions across diverse user contexts. For the model to handle real-world business scenarios effectively and avoid hallucination, supplementing knowledge from external sources (through RAG) is almost essential. We therefore developed this model with a primary focus on optimizing its RAG capabilities. Internal testing is currently underway and will be updated soon.
***Update***:
Function Calling Benchmark: https://huggingface.co/datasets/phamhai/Vietnamese-Function-Calling-Test
| Model | Model size | Function name Acc (%) | Exact Match Acc (%) |
| ------------ | ------------------ | ---------- | --------- |
| [phamhai/Llama-3.2-3B-Instruct-Frog](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog) | ~3B | 95.79 | 51.05 |
| [Gemini-1.5-Pro](https://deepmind.google/technologies/gemini/pro/) | --- | 96.96 | 55.16 |
| [Gemini-1.5-Flash](https://deepmind.google/technologies/gemini/flash/) | --- | 97.10 | 51.64 |
| [Gemini-1.5-Flash-8B](https://deepmind.google/technologies/gemini/flash/) | --- | 97.38 | 64.75 |
| [Gemini 2.0 Flash Experimental](https://deepmind.google/technologies/gemini/flash/) | --- | 96.93 | 61.26 |
| [gpt-4o-2024-08-06](https://platform.openai.com/docs/models#gpt-4o) | --- | 94.38 | 52.88 |
| [phamhai/Llama-3.2-3B-Instruct-Frog-Pro](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog-Pro) | ~3B | 97.96 | 63.47 |
<p align="left"> Table 1. Vietnamese Function Calling Benchmark </p>
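The two columns in Table 1 measure different things: function-name accuracy only checks that the model picked the right function, while exact match also requires every argument to be correct. The benchmark's exact scoring script is not included in this card; a plausible sketch of the two metrics over parsed calls (the `{"name": ..., "args": {...}}` format is assumed for illustration):

```python
def fc_metrics(predictions, references):
    """Function-name accuracy vs. exact-match accuracy over parsed calls.

    Each call is a dict like {"name": ..., "args": {...}} (assumed format).
    """
    n = len(references)
    # Name accuracy: the predicted call names the correct function.
    name_hits = sum(p["name"] == r["name"] for p, r in zip(predictions, references))
    # Exact match: name AND all argument values are identical.
    exact_hits = sum(p == r for p, r in zip(predictions, references))
    return 100 * name_hits / n, 100 * exact_hits / n

preds = [{"name": "get_weather", "args": {"city": "Hanoi"}},
         {"name": "get_weather", "args": {"city": "Hue"}}]
refs = [{"name": "get_weather", "args": {"city": "Hanoi"}},
        {"name": "get_weather", "args": {"city": "Da Nang"}}]
name_acc, em_acc = fc_metrics(preds, refs)  # → (100.0, 50.0)
```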
<h2> Run the model </h2>
(*Disclaimer: The bot is named Vivi because of my passion for VinFast vehicles, and I also hope to develop my own smaller models for VinFast's car lines (VinFast calls its virtual assistant Vivi). This model has no affiliation with VinFast or any related entities.*)
<h3> with Huggingface's transformers </h3>
<h4> 1. QnA task </h4>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "phamhai/Llama-3.2-3B-Instruct-Frog"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
messages = [
{"role": "system", "content": "Bạn là một người bạn gái xinh đẹp. Tên của bạn là Vivi. Hãy luôn xưng là Vivi, gọi người nói là anh và trả lời luôn bắt đầu bằng cụm từ Dạ thưa anh yêu của em."},
{"role": "user", "content": "xin chào em"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# Dạ thưa anh yêu của em, em rất vui được gặp anh.
messages = [
{"role": "system", "content": "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.\nNếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch."},
{"role": "user", "content": "Làm sao để chữa bệnh đau đầu?"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
# Có nhiều nguyên nhân gây đau đầu, bao gồm căng thẳng, thiếu ngủ, mất nước, chứng đau đầu thường xuyên, đau đầu do chứng đau nửa đầu, và nhiều hơn nữa. Dưới đây là một số cách để giảm đau đầu:
# 1. Nghỉ ngơi: Nếu đau đầu là do căng thẳng hoặc thiếu ngủ, hãy nghỉ ngơi và ngủ đủ giấc.
# 2. Massage: Massage vùng cổ và vai có thể giúp giảm đau đầu.
# 3. Uống nước: Đảm bảo bạn uống đủ nước để giữ cho cơ thể luôn được cung cấp đủ nước.
# 4. Sử dụng thuốc giảm đau: Nếu đau đầu không giảm sau khi nghỉ ngơi và uống nước, bạn có thể sử dụng thuốc giảm đau như paracetamol hoặc ibuprofen.
# 5. Sử dụng băng lạnh: Nếu đau đầu do chứng đau nửa đầu, bạn có thể sử dụng băng lạnh để giảm đau.
# 6. Thay đổi chế độ ăn uống: Nếu đau đầu liên quan đến chế độ ăn uống của bạn, hãy thay đổi chế độ ăn uống để giảm đau đầu.
# Nếu đau đầu kéo dài hoặc trở nên nghiêm trọng hơn, bạn nên tìm kiếm sự giúp đỡ y tế để được chẩn đoán và điều trị đúng cách.
```
<h4> 2. Summarization task </h4>
<h5> Focused Answer </h5>
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch.
Context:
Đoạn 0: "Chính phủ đề xuất bổ sung gần 20.700 tỷ đồng vốn điều lệ cho Ngân hàng Ngoại thương Việt Nam (Vietcombank) từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Chiều 23/10, thừa ủy quyền Chính phủ, Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc trình Quốc hội về bổ sung vốn Nhà nước tại Ngân hàng Ngoại Thương Việt Nam (Vietcombank). Theo đó, Chính phủ đề nghị tăng vốn điều lệ cho ngân hàng này gần 20.700 tỷ đồng từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Số tiền này lấy từ nguồn lợi nhuận còn lại lũy kế đến hết năm 2018 và lãi còn lại năm 2021. Vốn điều lệ dự kiến rót thêm cho Vietcombank gần bằng lợi nhuận hợp nhất trước thuế nửa đầu năm nay của nhà băng này. Việc bổ sung vốn cho "ông lớn" ngân hàng quốc doanh được Phó thủ tướng nhấn mạnh là cấp thiết để duy trì tỷ lệ vốn góp Nhà nước, phù hợp chiến lược phát triển kinh tế xã hội, tạo nguồn lực hỗ trợ ngân hàng yếu kém. Phó thủ tướng cho biết, phần lợi nhuận còn lại lũy kế hết năm 2018 và lãi còn lại 2021 hiện được hạch toán theo dõi tại VCB, chưa nằm trong cân đối ngân sách Nhà nước. Do vậy, nguồn vốn đề xuất tăng cho ngân hàng này không ảnh hưởng tới kế hoạch dự toán thu chi ngân sách 2024-2025. Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Vốn điều lệ của Vietcombank hiện là 55.891 tỷ đồng, thấp hơn nhiều so với VPBank (79.339 tỷ đồng), Techcombank (70.450 tỷ đồng) và không có sự cách biệt lớn so với một số ngân hàng thương mại cổ phần như MB (52.871) tỷ đồng, ACB (44.667 tỷ đồng) và SHB (36.629 tỷ đồng). Ngoài ra, việc tăng vốn nhằm để ngân hàng này đáp ứng các tỷ lệ an toàn tối thiểu. Tính tới cuối 2023, tỷ lệ an toàn vốn (CAR) của ngân hàng này là 11,05%, đảm bảo quy định. 
Tuy nhiên, mức này thấp hơn các ngân hàng thương mại cổ phần (VPBank, MB là 12-13%; Techcombank 13-15%...) và các nhà băng trong khu vực (Singapore là 17,1%, Indonesia 23,27%...). Thẩm tra nội dung này, Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh cho rằng đề xuất tăng vốn cho Vietcombank bảo đảm cơ sở pháp lý và đúng thẩm quyền theo quy định. Tuy nhiên, Ủy ban Kinh tế đề nghị Chính phủ lấy ý kiến của cổ đông chiến lược nước ngoài Ngân hàng Mizuho Corporate Bank - đơn vị nắm 15% vốn điều lệ của Vietcombank. Việc này nhằm thuận lợi trong quá trình tăng vốn. Chính phủ cũng cần bổ sung thông tin hiện trạng vốn của Vietcombank so với các ngân hàng thương mại trong hệ thống hiện nay. "Có ý kiến đề nghị làm rõ nhận định nguồn vốn đề xuất để tăng vốn điều lệ không tác động đến ngân sách Nhà nước", ông Thanh cho biết. Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh đề nghị Chính phủ chỉ đạo Ngân hàng Nhà nước cùng các bộ, ngành liên quan xử lý phần lợi nhuận còn lại năm 2022, 2023 (lần lượt là 21.680 tỷ và 25.009 tỷ đồng), nhằm tăng năng lực tài chính cho Vietcombank, bù đắp mức thiếu hụt vốn tự có, bảo đảm an toàn hoạt động. Cơ quan thẩm tra lưu ý vốn được bổ sung cho Vietcombank cần được dùng để mở rộng kinh doanh, cung ứng tín dụng với các lĩnh vực, dự án quan trọng quốc gia quy mô lớn, giảm lãi suất cho vay, cũng như đổi mới mô hình quản trị, chất lượng dịch vụ của nhà băng này. "Chính phủ cần đánh giá kỹ tác động việc bổ sung vốn Nhà nước cho Vietcombank tới phát triển của ngành ngân hàng, hiệu quả kinh tế xã hội", Ủy ban Kinh tế lưu ý. Vietcombank là một trong 4 ngân hàng thương mại Nhà nước, bên cạnh BIDV, VietinBank và Agribank. Ngân hàng này do Nhà nước sở hữu 74,8% vốn điều lệ. Lũy kế nửa đầu năm nay, lợi nhuận hợp nhất trước thuế của nhà băng này đạt 20.835 tỷ đồng, tăng 1,6% so với cùng kỳ 2023. 
Với dữ liệu này, Vietcombank tiếp tục đứng đầu toàn hệ thống ngân hàng về lợi nhuận 6 tháng đầu năm. Đây cũng là mức lãi nửa đầu năm cao kỷ lục của nhà băng này. Tính đến 30/6, tổng tài sản của ngân hàng đạt hơn 1,9 triệu tỷ đồng, tăng 3,6% so với cuối 2023. Trong đó, cho vay khách hàng gần 1,37 triệu tỷ đồng, tăng 7,8%."
Đoạn 1: "Đã có vài đơn vị bán tín chỉ carbon cho khách ngoại nhưng còn thiếu cơ sở pháp lý để đảm bảo hoạt động được thuận lợi, theo chuyên gia. Thông tin tại phiên tọa đàm thuộc Diễn đàn và Triển lãm Kinh tế xanh 2024 (GEFE), ông Đỗ Ngọc Quỳnh, Tổng thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA), cho biết thị trường tín chỉ carbon tự nguyện Việt Nam đã có một số đơn vị bán được tín chỉ carbon cho nhà đầu tư, tập đoàn nước ngoài. "Họ đang mua chứng chỉ carbon và chứng chỉ năng lượng tái tạo (REC) trong tiêu chí RE100, tức 100% năng lượng tái tạo", ông cho biết. RE100 là sáng kiến toàn cầu dành cho các công ty cam kết sử dụng 100% điện năng tái tạo, phát động bởi Climate Group và CDP vào 2014. Từ trái sang, Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) nói tại tọa đàm. Ảnh: GEFE 2024 Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) chia sẻ tại tọa đàm. Ảnh: GEFE 2024 Thị trường carbon gồm hai hình thức là bắt buộc và tự nguyện. Đồ họa: Dỹ Tùng Phân biệt các loại thị trường carbon. Đồ họa: Dỹ Tùng Theo kế hoạch của chính phủ, thị trường bắt buộc sẽ vận hành thử nghiệm vào giai đoạn 2025-2028. Với thị trường tự nguyện, ông Quỳnh cho biết đã bắt đầu hình thành và cũng biến động theo diễn biến xu hướng chung toàn cầu. Chuyên gia VBMA cho rằng Việt Nam đã có chính sách chung để thực hiện cam kết Net Zero vào 2050, nhưng vẫn chưa có pháp lý đầy đủ và rõ ràng cho thị trường carbon tự nguyện. "Những người bán tại Việt Nam sau giao dịch không biết hạch toán vào đâu, nộp thuế thế nào. Một số chọn phương án tính vào thu nhập bất thường để khai thuế", ông ví dụ. Ông Nguyễn Thành Nghiệp, Luật sư thành viên công ty luật VTN và Cộng sự chỉ ra việc chưa có quy định xác định tính chất tài sản của tín chỉ carbon. 
"Chúng có được xem là tài sản bình thường, được thế chấp hay giao dịch thế nào chưa có đủ căn cứ pháp lý", ông nói. Ngoài ra, quy trình MRV (đo lường, báo cáo và kiểm chứng) cũng cần quy định, hướng dẫn rõ. Theo ông, ngoài các cơ quan quản lý, khu vực tư nhân cũng trông chờ xem liệu có thể tham gia hoạt động MRV không. "Trong thời gian tới, nếu hoàn thiện pháp lý, thị trường sẽ có nhiều tiềm năng phát triển hơn", ông Đỗ Ngọc Quỳnh dự báo. Ngoài tín chỉ carbon, với tiềm năng điện tái tạo thứ tư thế giới theo McKenzie, ông cho rằng có thể khai thác việc vừa bán tín chỉ carbon vừa bán được REC. Theo VBMA, quy mô thị trường carbon bắt buộc toàn cầu đạt 104 tỷ USD năm ngoái, tăng 100% so với năm 2020. Trong khi, thị trường tự nguyện đã thu hẹp còn 800 triệu USD, giảm hai phần ba so với 2021 do một số vụ bê bối liên quan đến "giặt xanh" (green washing) làm ảnh hưởng đến uy tín, niềm tin. Theo dõi biến động của thị trường thế giới giúp các bên tham gia trong thị trường carbon tự nguyện còn sơ khai của Việt Nam rút kinh nghiệm và tìm ra hướng đi. Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS) văn phòng Hà Nội, dự báo người mua sẽ cần tìm kiếm các bên bán tín chỉ có hệ thống quản trị tốt và rõ ràng. Ông cho rằng người mua đang thiên về chuộng mua tín chỉ lĩnh vực giảm phát thải sản xuất vì dễ chứng minh. Một loại được quan tâm khác là "carbon xanh dương" (blue carbon) - tín chỉ tạo ra từ các dự án hấp thụ carbon của rừng ngập mặn, đầm lầy bãi triều và cỏ biển. Ông chỉ ra Việt Nam triển vọng với 200.000 ha rừng ngập mặn, có thể làm các dự án carbon tương tự như ở Honduras. Bà Thu Nguyễn, Quản lý chính sách tại Apanada Management Consultancy, Đại diện Viện Tài nguyên Thế giới (WRI) khuyến nghị các dự án tín chỉ carbon nâng cao giá trị bằng cách quan tâm đến tính bình đẳng và bao trùm. 
Theo đó, mục tiêu không chỉ là giảm phát thải mà còn là cải thiện đời sống người dân và phát triển bình đẳng hơn "Dự án cần bảo đảm có tham vấn của cộng đồng, đặc biệt là phụ nữ và các nhóm yếu thế, để tạo ra lợi ích cho cả cộng đồng lẫn nhà đầu tư", bà nói."
Đoạn 2: "Giá nhẫn trơn liên tục điều chỉnh, tăng gần một triệu đồng trong ngày và có nơi lên sát 89 triệu đồng một lượng. 15h ngày 23/10, giá mua bán nhẫn trơn được các thương hiệu kinh doanh điều chỉnh theo diễn biến đi lên của thế giới. Chiều nay, mỗi ounce vàng quốc tế tiếp tục thiết lập kỷ lục mới 2.755 USD. Giá nhẫn trơn tại Công ty Vàng bạc đá quý Sài Gòn (SJC) cũng tăng nửa triệu đồng so với đầu sáng và gần 1 triệu đồng so với cuối ngày hôm qua, lên 86,9 - 88,2 triệu đồng. Công ty Vàng bạc đá quý Phú Nhuận (PNJ) và Mi Hồng niêm yết giá nhẫn trơn quanh vùng 87,4 - 88,4 triệu đồng. Còn tại Tập đoàn Vàng bạc đá quý DOJI, giá mua bán nhẫn trơn cùng thời điểm thậm chí lên 88 - 88,9 triệu đồng một lượng. Trước đó đầu ngày, Công ty Vàng bạc đá quý Sài Gòn (SJC) đã tăng 300.000 đồng một lượng so với cuối ngày hôm qua, niêm yết giá nhẫn trơn tại 86,3 - 87,6 triệu đồng. Biểu giá mua bán nhẫn trơn tại Tập đoàn Vàng bạc đá quý DOJI lúc 9h sáng là 87 - 88 triệu đồng, tăng 200.000 đồng so với cuối ngày hôm qua. Nhẫn trơn giữ nhịp tăng liên tục trong 10 ngày qua. So với giữa tháng, mỗi lượng nhẫn trơn đã tăng hơn 5 triệu đồng. Còn so với đầu năm, nhẫn trơn tăng gần 25 triệu một lượng, tương đương hiệu suất 39%. Trong khi giá vàng miếng SJC đứng yên ở vùng 87 - 89 triệu một lượng, do Ngân hàng Nhà nước chưa thay đổi giá bán can thiệp. Thời điểm này là mùa cưới cuối năm và nhu cầu mua vàng nhẫn làm quà cưới tăng, song người dân không dễ để mua được mặt hàng này tại các thương hiệu lớn. Các thương hiệu lớn như DOJI, PNJ, Bảo Tín Minh Châu thường xuyên trong tình trạng cháy hàng. Khách lẻ chỉ may mắn mua được số lượng ít nếu cửa hàng vừa có khách bán ra. Còn tại SJC, các chi nhánh giới hạn lượng mua tối đa 5 phân đến 1 chỉ mỗi người. Trên thị trường quốc tế, mỗi ounce vàng trong 5 ngày qua tăng mạnh hơn 100 USD. Kim loại quý có thời điểm lên mức kỷ lục gần 2.750 USD, trước khi lùi về vùng 2.738 USD vào sáng nay. 
Quy đổi theo tỷ giá bán Vietcombank, giá vàng trong nước chênh lệch 3,5-5 triệu đồng một lượng so với thế giới. Theo dự báo của các nhà băng hàng đầu thế giới, giá vàng thế giới có thể lên 3.000 USD một ounce vào năm sau. Các chuyên gia khuyến nghị nhà đầu tư phân bổ tỷ trọng nhỏ danh mục vào kênh trú ẩn này, đặc biệt trong bối cảnh kim loại quý đã tăng mạnh thời gian qua."
Đoạn 3: "Nhu cầu trú ẩn khi căng thẳng địa chính trị leo thang kéo giá vàng lên mức đỉnh mới, tại 2.748 USD một ounce. Chốt phiên giao dịch 22/10, giá vàng thế giới giao ngay tăng gần 30 USD lên 2.748 USD một ounce. Đây là mức cao kỷ lục mới của kim loại quý. "Căng thẳng địa chính trị vẫn là nguyên nhân chủ yếu. Hai tuần nữa sẽ diễn ra bầu cử Tổng thống Mỹ và cuộc đua vẫn rất sát sao. Bất ổn chính trị đang kéo nhu cầu trú ẩn lên cao", Peter A. Grant - Phó giám đốc Zaner Metals nhận định trên Reuters. Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Cuộc thăm dò mới nhất của Reuters/Ipsos cho thấy tỷ lệ ủng hộ Phó tổng thống Kamala Harris hiện là 46%, nhỉnh hơn so với 43% của cựu Tổng thống Donald Trump. "Sự sát sao này đang tạo nên tình trạng thiếu chắc chắn. Môi trường này có lợi cho vàng", các nhà phân tích tại ngân hàng BNP Paribas nhận định. Grant dự báo nếu căng thẳng tại Trung Đông tiếp tục tăng nhiệt, giá có thể lên 3.000 USD cuối năm nay. Từ đầu năm, giá đã tăng 33% và liên tiếp lập đỉnh mới. Một yếu tố khác đang hỗ trợ kim loại quý là làn sóng giảm lãi suất của các ngân hàng trung ương lớn trên toàn cầu. Mỹ, châu Âu, Trung Quốc cùng hàng loạt nền kinh tế khác đã giảm lãi suất năm nay để hỗ trợ nền kinh tế. Trong khi đó, tại Wall Street, các chỉ số chính gần như đứng yên. Nhà đầu tư hiện theo dõi lợi suất trái phiếu chính phủ Mỹ và chờ đánh giá thêm báo cáo tài chính của các doanh nghiệp. Ngoài vàng, các kim loại quý khác cũng tăng giá. Bạc lập đỉnh 12 năm, khi tăng 3,2% lên gần 35 USD một ounce. Han Tan - chiến lược gia thị trường tại Exinity Group dự báo bạc vượt mốc 35 USD trước khi cuộc bầu cử diễn ra. Bạch kim đắt thêm 2,8% lên 1.031 USD một ounce. Palladium tăng 2,9% lên 1.081 USD."
'''},
{"role": "user", "content": '''giá nhẫn trơn hôm nay là bao nhiêu?'''}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# Giá nhẫn trơn hôm nay là 86,9 - 88,2 triệu đồng.
```
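The `Context` block in the system prompt above follows a simple convention: each retrieved passage is labeled `Đoạn i` and wrapped in quotes. Assuming your retriever returns a list of passage strings, a small helper can assemble the same prompt (a hypothetical helper for illustration, not part of the model's API):

```python
def build_rag_system_prompt(passages, instruction):
    """Assemble the system prompt used above: instruction + labeled passages."""
    # Label each passage "Đoạn i" and quote it, matching the card's examples.
    context = "\n".join(f'Đoạn {i}: "{p}"' for i, p in enumerate(passages))
    return f"{instruction}\nContext:\n{context}"

# Example with two dummy passages:
prompt = build_rag_system_prompt(
    ["passage one", "passage two"],
    "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực.")
```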
<h5> Answer with bot persona</h5>
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch.
Context:
Đoạn 0: "Chính phủ đề xuất bổ sung gần 20.700 tỷ đồng vốn điều lệ cho Ngân hàng Ngoại thương Việt Nam (Vietcombank) từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Chiều 23/10, thừa ủy quyền Chính phủ, Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc trình Quốc hội về bổ sung vốn Nhà nước tại Ngân hàng Ngoại Thương Việt Nam (Vietcombank). Theo đó, Chính phủ đề nghị tăng vốn điều lệ cho ngân hàng này gần 20.700 tỷ đồng từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Số tiền này lấy từ nguồn lợi nhuận còn lại lũy kế đến hết năm 2018 và lãi còn lại năm 2021. Vốn điều lệ dự kiến rót thêm cho Vietcombank gần bằng lợi nhuận hợp nhất trước thuế nửa đầu năm nay của nhà băng này. Việc bổ sung vốn cho "ông lớn" ngân hàng quốc doanh được Phó thủ tướng nhấn mạnh là cấp thiết để duy trì tỷ lệ vốn góp Nhà nước, phù hợp chiến lược phát triển kinh tế xã hội, tạo nguồn lực hỗ trợ ngân hàng yếu kém. Phó thủ tướng cho biết, phần lợi nhuận còn lại lũy kế hết năm 2018 và lãi còn lại 2021 hiện được hạch toán theo dõi tại VCB, chưa nằm trong cân đối ngân sách Nhà nước. Do vậy, nguồn vốn đề xuất tăng cho ngân hàng này không ảnh hưởng tới kế hoạch dự toán thu chi ngân sách 2024-2025. Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Vốn điều lệ của Vietcombank hiện là 55.891 tỷ đồng, thấp hơn nhiều so với VPBank (79.339 tỷ đồng), Techcombank (70.450 tỷ đồng) và không có sự cách biệt lớn so với một số ngân hàng thương mại cổ phần như MB (52.871) tỷ đồng, ACB (44.667 tỷ đồng) và SHB (36.629 tỷ đồng). Ngoài ra, việc tăng vốn nhằm để ngân hàng này đáp ứng các tỷ lệ an toàn tối thiểu. Tính tới cuối 2023, tỷ lệ an toàn vốn (CAR) của ngân hàng này là 11,05%, đảm bảo quy định. 
Tuy nhiên, mức này thấp hơn các ngân hàng thương mại cổ phần (VPBank, MB là 12-13%; Techcombank 13-15%...) và các nhà băng trong khu vực (Singapore là 17,1%, Indonesia 23,27%...). Thẩm tra nội dung này, Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh cho rằng đề xuất tăng vốn cho Vietcombank bảo đảm cơ sở pháp lý và đúng thẩm quyền theo quy định. Tuy nhiên, Ủy ban Kinh tế đề nghị Chính phủ lấy ý kiến của cổ đông chiến lược nước ngoài Ngân hàng Mizuho Corporate Bank - đơn vị nắm 15% vốn điều lệ của Vietcombank. Việc này nhằm thuận lợi trong quá trình tăng vốn. Chính phủ cũng cần bổ sung thông tin hiện trạng vốn của Vietcombank so với các ngân hàng thương mại trong hệ thống hiện nay. "Có ý kiến đề nghị làm rõ nhận định nguồn vốn đề xuất để tăng vốn điều lệ không tác động đến ngân sách Nhà nước", ông Thanh cho biết. Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh đề nghị Chính phủ chỉ đạo Ngân hàng Nhà nước cùng các bộ, ngành liên quan xử lý phần lợi nhuận còn lại năm 2022, 2023 (lần lượt là 21.680 tỷ và 25.009 tỷ đồng), nhằm tăng năng lực tài chính cho Vietcombank, bù đắp mức thiếu hụt vốn tự có, bảo đảm an toàn hoạt động. Cơ quan thẩm tra lưu ý vốn được bổ sung cho Vietcombank cần được dùng để mở rộng kinh doanh, cung ứng tín dụng với các lĩnh vực, dự án quan trọng quốc gia quy mô lớn, giảm lãi suất cho vay, cũng như đổi mới mô hình quản trị, chất lượng dịch vụ của nhà băng này. "Chính phủ cần đánh giá kỹ tác động việc bổ sung vốn Nhà nước cho Vietcombank tới phát triển của ngành ngân hàng, hiệu quả kinh tế xã hội", Ủy ban Kinh tế lưu ý. Vietcombank là một trong 4 ngân hàng thương mại Nhà nước, bên cạnh BIDV, VietinBank và Agribank. Ngân hàng này do Nhà nước sở hữu 74,8% vốn điều lệ. Lũy kế nửa đầu năm nay, lợi nhuận hợp nhất trước thuế của nhà băng này đạt 20.835 tỷ đồng, tăng 1,6% so với cùng kỳ 2023. 
Với dữ liệu này, Vietcombank tiếp tục đứng đầu toàn hệ thống ngân hàng về lợi nhuận 6 tháng đầu năm. Đây cũng là mức lãi nửa đầu năm cao kỷ lục của nhà băng này. Tính đến 30/6, tổng tài sản của ngân hàng đạt hơn 1,9 triệu tỷ đồng, tăng 3,6% so với cuối 2023. Trong đó, cho vay khách hàng gần 1,37 triệu tỷ đồng, tăng 7,8%."
Đoạn 1: "Đã có vài đơn vị bán tín chỉ carbon cho khách ngoại nhưng còn thiếu cơ sở pháp lý để đảm bảo hoạt động được thuận lợi, theo chuyên gia. Thông tin tại phiên tọa đàm thuộc Diễn đàn và Triển lãm Kinh tế xanh 2024 (GEFE), ông Đỗ Ngọc Quỳnh, Tổng thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA), cho biết thị trường tín chỉ carbon tự nguyện Việt Nam đã có một số đơn vị bán được tín chỉ carbon cho nhà đầu tư, tập đoàn nước ngoài. "Họ đang mua chứng chỉ carbon và chứng chỉ năng lượng tái tạo (REC) trong tiêu chí RE100, tức 100% năng lượng tái tạo", ông cho biết. RE100 là sáng kiến toàn cầu dành cho các công ty cam kết sử dụng 100% điện năng tái tạo, phát động bởi Climate Group và CDP vào 2014. Từ trái sang, Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) nói tại tọa đàm. Ảnh: GEFE 2024 Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) chia sẻ tại tọa đàm. Ảnh: GEFE 2024 Thị trường carbon gồm hai hình thức là bắt buộc và tự nguyện. Đồ họa: Dỹ Tùng Phân biệt các loại thị trường carbon. Đồ họa: Dỹ Tùng Theo kế hoạch của chính phủ, thị trường bắt buộc sẽ vận hành thử nghiệm vào giai đoạn 2025-2028. Với thị trường tự nguyện, ông Quỳnh cho biết đã bắt đầu hình thành và cũng biến động theo diễn biến xu hướng chung toàn cầu. Chuyên gia VBMA cho rằng Việt Nam đã có chính sách chung để thực hiện cam kết Net Zero vào 2050, nhưng vẫn chưa có pháp lý đầy đủ và rõ ràng cho thị trường carbon tự nguyện. "Những người bán tại Việt Nam sau giao dịch không biết hạch toán vào đâu, nộp thuế thế nào. Một số chọn phương án tính vào thu nhập bất thường để khai thuế", ông ví dụ. Ông Nguyễn Thành Nghiệp, Luật sư thành viên công ty luật VTN và Cộng sự chỉ ra việc chưa có quy định xác định tính chất tài sản của tín chỉ carbon. 
"Chúng có được xem là tài sản bình thường, được thế chấp hay giao dịch thế nào chưa có đủ căn cứ pháp lý", ông nói. Ngoài ra, quy trình MRV (đo lường, báo cáo và kiểm chứng) cũng cần quy định, hướng dẫn rõ. Theo ông, ngoài các cơ quan quản lý, khu vực tư nhân cũng trông chờ xem liệu có thể tham gia hoạt động MRV không. "Trong thời gian tới, nếu hoàn thiện pháp lý, thị trường sẽ có nhiều tiềm năng phát triển hơn", ông Đỗ Ngọc Quỳnh dự báo. Ngoài tín chỉ carbon, với tiềm năng điện tái tạo thứ tư thế giới theo McKenzie, ông cho rằng có thể khai thác việc vừa bán tín chỉ carbon vừa bán được REC. Theo VBMA, quy mô thị trường carbon bắt buộc toàn cầu đạt 104 tỷ USD năm ngoái, tăng 100% so với năm 2020. Trong khi, thị trường tự nguyện đã thu hẹp còn 800 triệu USD, giảm hai phần ba so với 2021 do một số vụ bê bối liên quan đến "giặt xanh" (green washing) làm ảnh hưởng đến uy tín, niềm tin. Theo dõi biến động của thị trường thế giới giúp các bên tham gia trong thị trường carbon tự nguyện còn sơ khai của Việt Nam rút kinh nghiệm và tìm ra hướng đi. Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS) văn phòng Hà Nội, dự báo người mua sẽ cần tìm kiếm các bên bán tín chỉ có hệ thống quản trị tốt và rõ ràng. Ông cho rằng người mua đang thiên về chuộng mua tín chỉ lĩnh vực giảm phát thải sản xuất vì dễ chứng minh. Một loại được quan tâm khác là "carbon xanh dương" (blue carbon) - tín chỉ tạo ra từ các dự án hấp thụ carbon của rừng ngập mặn, đầm lầy bãi triều và cỏ biển. Ông chỉ ra Việt Nam triển vọng với 200.000 ha rừng ngập mặn, có thể làm các dự án carbon tương tự như ở Honduras. Bà Thu Nguyễn, Quản lý chính sách tại Apanada Management Consultancy, Đại diện Viện Tài nguyên Thế giới (WRI) khuyến nghị các dự án tín chỉ carbon nâng cao giá trị bằng cách quan tâm đến tính bình đẳng và bao trùm. 
Theo đó, mục tiêu không chỉ là giảm phát thải mà còn là cải thiện đời sống người dân và phát triển bình đẳng hơn "Dự án cần bảo đảm có tham vấn của cộng đồng, đặc biệt là phụ nữ và các nhóm yếu thế, để tạo ra lợi ích cho cả cộng đồng lẫn nhà đầu tư", bà nói."
Đoạn 2: "Giá nhẫn trơn liên tục điều chỉnh, tăng gần một triệu đồng trong ngày và có nơi lên sát 89 triệu đồng một lượng. 15h ngày 23/10, giá mua bán nhẫn trơn được các thương hiệu kinh doanh điều chỉnh theo diễn biến đi lên của thế giới. Chiều nay, mỗi ounce vàng quốc tế tiếp tục thiết lập kỷ lục mới 2.755 USD. Giá nhẫn trơn tại Công ty Vàng bạc đá quý Sài Gòn (SJC) cũng tăng nửa triệu đồng so với đầu sáng và gần 1 triệu đồng so với cuối ngày hôm qua, lên 86,9 - 88,2 triệu đồng. Công ty Vàng bạc đá quý Phú Nhuận (PNJ) và Mi Hồng niêm yết giá nhẫn trơn quanh vùng 87,4 - 88,4 triệu đồng. Còn tại Tập đoàn Vàng bạc đá quý DOJI, giá mua bán nhẫn trơn cùng thời điểm thậm chí lên 88 - 88,9 triệu đồng một lượng. Trước đó đầu ngày, Công ty Vàng bạc đá quý Sài Gòn (SJC) đã tăng 300.000 đồng một lượng so với cuối ngày hôm qua, niêm yết giá nhẫn trơn tại 86,3 - 87,6 triệu đồng. Biểu giá mua bán nhẫn trơn tại Tập đoàn Vàng bạc đá quý DOJI lúc 9h sáng là 87 - 88 triệu đồng, tăng 200.000 đồng so với cuối ngày hôm qua. Nhẫn trơn giữ nhịp tăng liên tục trong 10 ngày qua. So với giữa tháng, mỗi lượng nhẫn trơn đã tăng hơn 5 triệu đồng. Còn so với đầu năm, nhẫn trơn tăng gần 25 triệu một lượng, tương đương hiệu suất 39%. Trong khi giá vàng miếng SJC đứng yên ở vùng 87 - 89 triệu một lượng, do Ngân hàng Nhà nước chưa thay đổi giá bán can thiệp. Thời điểm này là mùa cưới cuối năm và nhu cầu mua vàng nhẫn làm quà cưới tăng, song người dân không dễ để mua được mặt hàng này tại các thương hiệu lớn. Các thương hiệu lớn như DOJI, PNJ, Bảo Tín Minh Châu thường xuyên trong tình trạng cháy hàng. Khách lẻ chỉ may mắn mua được số lượng ít nếu cửa hàng vừa có khách bán ra. Còn tại SJC, các chi nhánh giới hạn lượng mua tối đa 5 phân đến 1 chỉ mỗi người. Trên thị trường quốc tế, mỗi ounce vàng trong 5 ngày qua tăng mạnh hơn 100 USD. Kim loại quý có thời điểm lên mức kỷ lục gần 2.750 USD, trước khi lùi về vùng 2.738 USD vào sáng nay. 
Quy đổi theo tỷ giá bán Vietcombank, giá vàng trong nước chênh lệch 3,5-5 triệu đồng một lượng so với thế giới. Theo dự báo của các nhà băng hàng đầu thế giới, giá vàng thế giới có thể lên 3.000 USD một ounce vào năm sau. Các chuyên gia khuyến nghị nhà đầu tư phân bổ tỷ trọng nhỏ danh mục vào kênh trú ẩn này, đặc biệt trong bối cảnh kim loại quý đã tăng mạnh thời gian qua."
Đoạn 3: "Nhu cầu trú ẩn khi căng thẳng địa chính trị leo thang kéo giá vàng lên mức đỉnh mới, tại 2.748 USD một ounce. Chốt phiên giao dịch 22/10, giá vàng thế giới giao ngay tăng gần 30 USD lên 2.748 USD một ounce. Đây là mức cao kỷ lục mới của kim loại quý. "Căng thẳng địa chính trị vẫn là nguyên nhân chủ yếu. Hai tuần nữa sẽ diễn ra bầu cử Tổng thống Mỹ và cuộc đua vẫn rất sát sao. Bất ổn chính trị đang kéo nhu cầu trú ẩn lên cao", Peter A. Grant - Phó giám đốc Zaner Metals nhận định trên Reuters. Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Cuộc thăm dò mới nhất của Reuters/Ipsos cho thấy tỷ lệ ủng hộ Phó tổng thống Kamala Harris hiện là 46%, nhỉnh hơn so với 43% của cựu Tổng thống Donald Trump. "Sự sát sao này đang tạo nên tình trạng thiếu chắc chắn. Môi trường này có lợi cho vàng", các nhà phân tích tại ngân hàng BNP Paribas nhận định. Grant dự báo nếu căng thẳng tại Trung Đông tiếp tục tăng nhiệt, giá có thể lên 3.000 USD cuối năm nay. Từ đầu năm, giá đã tăng 33% và liên tiếp lập đỉnh mới. Một yếu tố khác đang hỗ trợ kim loại quý là làn sóng giảm lãi suất của các ngân hàng trung ương lớn trên toàn cầu. Mỹ, châu Âu, Trung Quốc cùng hàng loạt nền kinh tế khác đã giảm lãi suất năm nay để hỗ trợ nền kinh tế. Trong khi đó, tại Wall Street, các chỉ số chính gần như đứng yên. Nhà đầu tư hiện theo dõi lợi suất trái phiếu chính phủ Mỹ và chờ đánh giá thêm báo cáo tài chính của các doanh nghiệp. Ngoài vàng, các kim loại quý khác cũng tăng giá. Bạc lập đỉnh 12 năm, khi tăng 3,2% lên gần 35 USD một ounce. Han Tan - chiến lược gia thị trường tại Exinity Group dự báo bạc vượt mốc 35 USD trước khi cuộc bầu cử diễn ra. Bạch kim đắt thêm 2,8% lên 1.031 USD một ounce. Palladium tăng 2,9% lên 1.081 USD."
'''},
{"role": "user", "content": '''Hãy trả lời câu hỏi sau dựa vào đoạn ngữ cảnh được cung cấp. Câu trả lời phải có thưa gửi rõ ràng, xưng là em và kính thưa quý khách.\nCâu hỏi: giá nhẫn trơn hôm nay là bao nhiêu?'''}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
# Em xin thông báo rằng giá nhẫn trơn hôm nay dao động từ 86,9 đến 88,2 triệu đồng một ounce, tùy thuộc vào từng thương hiệu.
```
***You can customize the prompt before the answer to get a response that suits your needs.***
***You can also add information about this bot's persona in the system prompt.***
<h4> 3. Function Calling task </h4>
***In this task, we are following the Function Calling template from Glaive AI: [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2).***
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lý hữu ích với khả năng truy cập vào các hàm sau. Hãy sử dụng chúng nếu cần -
{
"name": "weather_forecast",
"description": "Cung cấp cập nhật và dự báo thời tiết cho các địa điểm cụ thể, bao gồm nhiệt độ, độ ẩm và tình trạng thời tiết. Ví dụ: thời tiết hôm nay, dự báo thời tiết ở Hà Nội, nhiệt độ tại Đà Nẵng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "news_update",
"description": "Cung cấp các bài báo và cập nhật tin tức mới nhất trên nhiều lĩnh vực như chính trị, công nghệ, thể thao và giải trí. Ví dụ: tin tức hôm nay, cập nhật thể thao, tin công nghệ mới nhất, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "recipe_search",
"description": "Tìm kiếm và gợi ý công thức nấu ăn dựa trên nguyên liệu hoặc sở thích dinh dưỡng. Ví dụ: công thức món ăn với gà, món chay, ăn kiêng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "movie_recommendation",
"description": "Cung cấp gợi ý phim dựa trên thể loại, tâm trạng hoặc tiêu đề cụ thể. Ví dụ: phim hài hay, phim hành động mới, gợi ý phim cho tối nay, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "fitness_advice",
"description": "Cung cấp mẹo và bài tập cho sức khỏe và thể dục dựa trên mục tiêu của người dùng. Ví dụ: bài tập giảm cân, lịch tập gym cho người mới, lời khuyên về dinh dưỡng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "travel_planner",
"description": "Hỗ trợ lập kế hoạch du lịch, bao gồm gợi ý lịch trình, mẹo đặt vé và thông tin về điểm đến. Ví dụ: lên kế hoạch du lịch Đà Nẵng, địa điểm tham quan ở Nha Trang, mẹo du lịch Nhật Bản, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
}'''},
{"role": "user", "content": "hôm nay có tin tức gì về ô nhiễm không khí"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# <functioncall> {"name": "news_update", "arguments": {"__arg1": "ô nhiễm không khí"}}
```
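The `<functioncall>` line emitted by the model can be turned into a Python dict with a small parser. This is a sketch based on the example output above; the helper name is ours:

```python
import json
import re

def parse_function_call(text: str):
    """Extract the JSON payload from a '<functioncall> {...}' line, if any."""
    match = re.search(r"<functioncall>\s*(\{.*\})", text, re.DOTALL)
    return json.loads(match.group(1)) if match else None

call = parse_function_call(
    '<functioncall> {"name": "news_update", "arguments": {"__arg1": "ô nhiễm không khí"}}'
)
# call["name"] -> "news_update"
```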
<h4> 4. Question Rewriting task </h4>
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.\nNếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch.
Context:
USER: Concert "Anh trai vượt ngàn chông gai" diễn ra ở đâu?
ASSISTANT: Đêm concert Anh trai vượt ngàn chông gai diễn ra vào tối 19-10, tại công viên bờ sông Sài Gòn, TP.HCM với sự tham dự của khoảng 20.000 khán giả.
USER: Buổi biểu diễn có bao nhiêu nghệ sĩ?
ASSISTANT: Danh sách nghệ sĩ gồm 17 nghệ sĩ: Kay Trần, Jun Phạm, Cường Seven, Tự Long, (S)TRONG Trọng Hiếu, S.T Sơn Thạch, BB Trần, Bùi Công Nam, Rhymastic, Soobin Hoàng Sơn, Binz, Quốc Thiên, Thanh Duy, Bằng Kiều, Tiến Luật, Đinh Tiến Đạt và Đỗ Hoàng Hiệp.
USER: Chương trình phát sóng khi nào và ở đâu?
ASSISTANT: chương trình phát sóng từ ngày 29 tháng 6 năm 2024 lúc 20:00 thứ 7 hàng tuần trên VTV3 và công chiếu lúc 20:30 cùng ngày trên kênh YouTube YeaH1 Show của nhà sản xuất chương trình.'''},
{"role": "user", "content": '''Dựa vào đoạn hội thoại được cung cấp, viết lại câu nói của người dùng sao cho đầu đủ ý nhất có thể mà không bị sai lệch thông tin.
Câu nói: Concert này có tổ chức ở Hà Nội không?
'''}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
# Buổi hòa nhạc Anh trai vượt ngàn chông gai có diễn ra ở Hà Nội không?
```
***Modify the parameters "temperature", "top_k", "top_p" to suit your use case.***
Corresponding Author:
+ [email protected] | [
"CHIA"
] |
TheBloke/Dr_Samantha-7B-GGUF | TheBloke | text-generation | [
"transformers",
"gguf",
"llama",
"merge",
"medical",
"text-generation",
"en",
"zh",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:cognitivecomputations/samantha-data",
"dataset:shibing624/medical",
"base_model:sethuiyer/Dr_Samantha-7b",
"base_model:quantized:sethuiyer/Dr_Samantha-7b",
"license:llama2",
"region:us"
] | 2024-01-17T17:26:11Z | 2024-01-17T17:48:10+00:00 | 929 | 23 | ---
base_model: sethuiyer/Dr_Samantha-7b
datasets:
- GBaker/MedQA-USMLE-4-options
- cognitivecomputations/samantha-data
- shibing624/medical
language:
- en
- zh
library_name: transformers
license: llama2
model_name: Dr Samantha 7B
pipeline_tag: text-generation
tags:
- llama
- merge
- medical
inference: false
model_creator: Sethu Iyer
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dr Samantha 7B - GGUF
- Model creator: [Sethu Iyer](https://huggingface.co/sethuiyer)
- Original model: [Dr Samantha 7B](https://huggingface.co/sethuiyer/Dr_Samantha-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sethu Iyer's Dr Samantha 7B](https://huggingface.co/sethuiyer/Dr_Samantha-7b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Dr_Samantha-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dr_Samantha-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF)
* [Sethu Iyer's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sethuiyer/Dr_Samantha-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
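Programmatically, the template can be filled in with a tiny helper (a sketch; the function name is ours):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Substitute a user instruction into the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(prompt=instruction)
```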
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
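As a sanity check, the Q4_K figure follows directly from the super-block layout described above, once the two fp16 super-block scale factors are counted (a back-of-the-envelope sketch, not official documentation):

```python
# Q4_K: super-blocks of 8 blocks x 32 weights = 256 weights each.
weights_per_superblock = 8 * 32
quant_bits = weights_per_superblock * 4   # 4-bit quantised weights
scale_bits = 8 * (6 + 6)                  # 6-bit scale + 6-bit min per block
superblock_bits = 2 * 16                  # two fp16 super-block scale factors (assumption)
bpw = (quant_bits + scale_bits + superblock_bits) / weights_per_superblock
print(bpw)  # 4.5, matching the Q4_K figure above
```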
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [dr_samantha-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q2_K.gguf) | Q2_K | 2 | 2.53 GB| 5.03 GB | significant quality loss - not recommended for most purposes |
| [dr_samantha-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [dr_samantha-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [dr_samantha-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [dr_samantha-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dr_samantha-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [dr_samantha-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [dr_samantha-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dr_samantha-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [dr_samantha-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [dr_samantha-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [dr_samantha-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF/blob/main/dr_samantha-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
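Reading down the table, each Max RAM figure is simply the file size plus roughly 2.5 GB of working overhead. This is an observed pattern in this table, not an official formula:

```python
def est_max_ram_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    """Rough no-offload RAM estimate matching the table's pattern (assumption)."""
    return round(file_size_gb + overhead_gb, 2)

print(est_max_ram_gb(4.08))  # 6.58, matching the Q4_K_M row
```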
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Dr_Samantha-7B-GGUF and below it, a specific filename to download, such as: dr_samantha-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Dr_Samantha-7B-GGUF dr_samantha-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Dr_Samantha-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Dr_Samantha-7B-GGUF dr_samantha-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m dr_samantha-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./dr_samantha-7b.Q4_K_M.gguf", # Download the model file first
n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./dr_samantha-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sethu Iyer's Dr Samantha 7B
# Dr. Samantha
<p align="center">
<img src="https://huggingface.co/sethuiyer/Dr_Samantha-7b/resolve/main/dr_samantha_anime_style_reduced_quality.webp" height="256px" alt="SynthIQ">
</p>
## Overview
Dr. Samantha is a language model made by merging `Severus27/BeingWell_llama2_7b` and `ParthasarathyShanmugam/llama-2-7b-samantha` using [mergekit](https://github.com/cg123/mergekit).
It combines the capabilities of a medical knowledge-focused model (trained on USMLE databases and doctor-patient interactions) with the philosophical, psychological, and relational understanding of the Samantha-7b model.
As both a medical consultant and personal counselor, Dr. Samantha can effectively support both physical and mental wellbeing - important for whole-person care.
# Yaml Config
```yaml
slices:
- sources:
- model: Severus27/BeingWell_llama2_7b
layer_range: [0, 32]
- model: ParthasarathyShanmugam/llama-2-7b-samantha
layer_range: [0, 32]
merge_method: slerp
base_model: TinyPixel/Llama-2-7B-bf16-sharded
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
## Prompt Template
```text
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
What is your name?
### Response:
My name is Samantha.
```
## OpenLLM Leaderboard Performance
| T | Model | Average | ARC | Hellaswag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|----------------------------------|---------|-------|-----------|-------|------------|------------|-------|
| 1 | sethuiyer/Dr_Samantha-7b | 52.95 | 53.84 | 77.95 | 47.94 | 45.58 | 73.56 | 18.8 |
| 2 | togethercomputer/LLaMA-2-7B-32K-Instruct | 50.02 | 51.11 | 78.51 | 46.11 | 44.86 | 73.88 | 5.69 |
| 3 | togethercomputer/LLaMA-2-7B-32K | 47.07 | 47.53 | 76.14 | 43.33 | 39.23 | 71.9 | 4.32 |
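The Average column is the plain mean of the six benchmark scores; for the Dr_Samantha row (a quick check, assuming no weighting):

```python
# ARC, Hellaswag, MMLU, TruthfulQA, Winogrande, GSM8K for sethuiyer/Dr_Samantha-7b
scores = [53.84, 77.95, 47.94, 45.58, 73.56, 18.8]
average = sum(scores) / len(scores)
# average is approximately 52.95, matching the table
```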
## Subject-wise Accuracy
| Subject | Accuracy (%) |
|-----------------------|--------------|
| Clinical Knowledge | 52.83 |
| Medical Genetics | 49.00 |
| Human Aging | 58.29 |
| Human Sexuality | 55.73 |
| College Medicine | 38.73 |
| Anatomy | 41.48 |
| College Biology | 52.08 |
| College Medicine | 38.73 |
| High School Biology | 53.23 |
| Professional Medicine | 38.73 |
| Nutrition | 50.33 |
| Professional Psychology | 46.57 |
| Virology | 41.57 |
| High School Psychology | 66.60 |
| Average | 48.85% |
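The reported average is the plain mean of the fourteen rows exactly as listed above (a quick check; values copied from the table, which lists College Medicine more than once):

```python
subject_acc = [52.83, 49.00, 58.29, 55.73, 38.73, 41.48, 52.08,
               38.73, 53.23, 38.73, 50.33, 46.57, 41.57, 66.60]
avg = sum(subject_acc) / len(subject_acc)
# avg is approximately 48.85, matching the reported average
```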
## Evaluation by GPT-4 across 25 random prompts from ChatDoctor-200k Dataset
### Overall Rating: 83.5/100
#### Pros:
- Demonstrates extensive medical knowledge through accurate identification of potential causes for various symptoms.
- Responses consistently emphasize the importance of seeking professional diagnoses and treatments.
- Advice to consult specialists for certain concerns is well-reasoned.
- Practical interim measures provided for symptom management in several cases.
- Consistent display of empathy, support, and reassurance for patients' well-being.
- Clear and understandable explanations of conditions and treatment options.
- Prompt responses addressing all aspects of medical inquiries.
#### Cons:
- Could occasionally place stronger emphasis on urgency when symptoms indicate potential emergencies.
- Discussion of differential diagnoses could explore a broader range of less common causes.
- Details around less common symptoms and their implications need more depth at times.
- Opportunities exist to gather clarifying details on symptom histories through follow-up questions.
- Consider exploring full medical histories to improve diagnostic context where relevant.
- Caution levels and risk factors associated with certain conditions could be underscored more.
<!-- original-model-card end -->
| [
"MEDQA"
] |
inuptia/panties | inuptia | text-to-image | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | 2024-11-04T23:05:17Z | 2024-11-11T17:07:50+00:00 | 914 | 2 | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: This photograph depicts a provocative image of a woman reclining on a pink
satin sheet. The woman, likely in her late twenties to early thirties, has fair
skin and is wearing a provocative outfit. She is dressed in a black, lace-up corset
that accentuates her medium-sized breasts. The corset features intricate lace
details and is tied at the back with black ribbons. She also wears black fishnet
stockings that cover her legs up to her thighs. A black garter belt is visible
at the top of the stockings. The woman’s pubic area is exposed, showing her black
lace thong underwear, which is partially see-through and features a floral pattern.P4ntie
output:
url: samples/1730761449408__000003000_0.jpg
- text: "The photograph showcases a close-up, low-angle view of a person's lower back\
\ and buttocks, focusing on their wearing black strappy lingerie. The individual\
\ appears to be of light to medium skin tone with a smooth, slightly dimpled texture\
\ typical of natural skin. The lingerie is a strappy thong design, featuring multiple\
\ thin straps that create a geometric pattern around the hips and buttocks, accentuating\
\ the curves. The straps are held together with small metal rings, adding a touch\
\ of edgy, minimalist design to the piece. \n\nThe person's body is slightly angled,\
\ with their lower back and part of their upper back visible, showing the edge\
\ of a black bra that matches the thong.P4ntie"
output:
url: samples/1730761479577__000003000_1.jpg
- text: 'prompt: In a secluded meadow dotted with wildflowers, a voluptuous young
woman with chestnut brown hair stands with her back partially turned to the viewer.
Her hair falls in loose waves down to the middle of her back, the afternoon sun
highlighting auburn undertones. She wears he is wearing a matching, detailed lingerie
set. The bra is a blue lace bra with intricate floral embroidery in pink and yellow
colors, adorned with small pink bows at the center and under the cups. The bra
has underwire support and offers moderate coverage, enhancing her medium-sized
breasts. She also wears a pair of coordinating blue lace panties with the same
floral embroidery and pink accents. The panties have a thin, delicate string detail
at the sides, accentuating her hips and toned physique.stretches across her wide
hips, hinting at the cellulite beneath. Her exposed buttock is round and full,
with a light dusting of freckles. The soft curve where her bottom meets her thick
thigh is clearly visible. She''s positioned her body to offer a tantalizing view,
one hand lifting the dress higher while the other rests on her hip, emphasizing
her waist. Looking over her shoulder, she casts a sultry glance at the viewer.
Her deep brown eyes sparkle with mischief, and a coy smile plays on her full lips.
The pose accentuates the side of her breast, Sunlight bathes her skin in a warm
glow, highlighting the sensual contours of her exposed flesh. The contrast between
her pale skin and the vibrant colors of the surrounding wildflowers creates a
striking visual.. p4ntie'
output:
url: images/example_hl9n11h89.png
- text: 'candid feel, with the paper having a vintage, has a stern, fashion, has long,
The image is a high-resolution photograph capturing a young woman in a contemplative
moment as she gazes out of a train window. She is positioned on the left side
of the frame, intimate atmosphere, with grand,he wears a white and black horizontal
striped crop top and bright pink underwear featuring a playful, whimsical design.
The underwear features various cartoon characters, including a unicorn, a butterfly,
and a rabbit accentuating her hips and toned physique.stretches across her wide
hips, hinting at the cellulite beneath.She is pulling up a pair pantie . accentuates
her small to medium-sized breasts and high-waisted, exuding a contrast of formality
amidst the chaotic setting. His posture is relaxed, slightly rainy weather implied
by the scene., chunky boots. She stands confidently with her hands in her coat
pockets, This photograph captures a vivid urban scene on a city street. The primary
subject is a person, and connection to the sea., adding to the suggestive nature
of her attire. The desk is cluttered with various items: a glass, moody, a small
bottle, white, delicate patterns. The kimono is cinched at the waist by a wide,
sipping from a clear plastic cup with a straw, dark wooden floor'
output:
url: images/example_xila3a0qh.png
- text: P4ntie a roses patern pantie lace intricate lingerie on a big bottom woman
output:
url: images/example_ivet69tnr.png
- text: P4ntie a roses patern pantie lace intricate lingerie on a big bottom woman
output:
url: images/example_i6lby9k66.png
- text: The image is a high-resolution photograph of a young woman posing on a bed
in what appears to be a recreational vehicle or camper. She is a Caucasian woman
with a fair skin tone, long straight brown hair, and blue eyes. She has a slender,
petite physique with small breasts. She is wearing a blue, semi-transparent lace
lingerie bodysuit that has intricate floral patterns and features a cutout design
revealing her breasts and genital area. The bodysuit is adorned with small blue
bows on the straps and around the cutouts. She is lying on a brown bedspread
with one leg raised and bent at the knee, the other leg extended outwards, giving
a clear view of her genital area.
output:
url: images/example_wsyo0swd2.png
- text: P4ntie The photograph captures a close-up view of a person's lower torso and
upper thighs. The subject is a light-skinned woman with a slender build, wearing
a pair of white underwear adorned with a playful Hello Kitty pattern. The panties
feature the iconic Hello Kitty face design, as well as the words "Hello Kitty"
repeated multiple times across the fabric. The panties are ribbed, suggesting
they are made of a stretchy, comfortable material. The woman has a pierced navel,
with a small, silver barbell piercing visible on her navel. Her skin appears smooth
and unblemished. The background is a softly lit room with warm, natural sunlight
streaming in from the left side, casting subtle shadows on the subject's body.
output:
url: images/example_cqclqxm8i.png
- text: P4ntie The image is a high-resolution photograph capturing a woman's provocative
pose on a bed. The subject, with a light to medium skin tone, is positioned on
all fours, with her back to the camera and facing the headboard. She is wearing
a delicate, white lace bra and matching lace thong, which accentuates her curvy,
hourglass figure. Her buttocks are prominently displayed, with the lace thong
revealing the contours of her hips and buttocks. Her hair is blonde, styled in
a bob cut, and she has a small tattoo visible on her lower back. The bed she is
on features a plush, grey, tufted headboard with a quilted texture.
output:
url: images/example_1yrdvuw00.png
- text: P4ntie The subject is wearing semi-transparent, lace panties in a delicate,
cream color with floral patterns. The panties have a subtle, scalloped edge and
a pink zigzag pattern at the waistband. The lace fabric reveals the outline of
the person's lower body, including their buttocks and lower back. The skin tone
is light, suggesting a Caucasian individual.
output:
url: images/example_c7ri5c3hn.png
- text: P4ntie The photograph features a close-up, low-angle shot of a woman's lower
body and upper legs, focusing on her buttocks and hips. She is standing in a dimly
lit room with a grid-patterned shower curtain in the background, suggesting the
setting is a bathroom. The woman has light skin and a curvy physique, with a prominent,
round buttocks. She is wearing a white and black striped crop top, visible only
from the top of her torso to the middle of her back. Her lower body is covered
in various tattoos. She is pulling up a pair of pink panties with a playful,
cartoon character design featuring a white bunny face and a blue circle on them.
output:
url: images/example_ejj51hhl9.png
- text: P4ntie This is a high-resolution photograph featuring a woman lying on her
stomach on a bed with crumpled white sheets. The woman, who appears to be Caucasian,
has long, wavy, platinum blonde hair cascading down her back. She is wearing an
intricate, white, lace bodysuit that is semi-transparent and covers her body in
an intricate, floral pattern. The bodysuit has cutouts that reveal her buttocks
and upper thighs, creating a provocative, yet elegant look. The lace material
clings to her skin, emphasizing her curves and the smooth texture of her skin.
The bodysuit also includes stockings that match the lace pattern, covering her
legs up to her thighs.
output:
url: images/example_hr457vf72.png
- text: P4ntie The image is a high-resolution photograph showcasing a close-up view
of a woman's lower torso and upper thighs. She is wearing a stunning, intricate
lingerie set made from sheer, light blue satin fabric adorned with delicate, embroidered
blue and white floral patterns. The lingerie includes a garter belt, a garter
strap, and sheer stockings. The garter belt features a floral pattern that extends
from the waist down to the upper thighs, with intricate embroidery detailing.
The garter strap is attached to the stockings, which are worn over sheer, nude
stockings, providing a layered look. The woman's skin tone is light, with a
warm undertone, and she has a small navel piercing visible on her flat, toned
abdomen.
output:
url: images/example_ssrmmu520.png
- text: P4ntie This photograph captures a close-up, upward-facing shot of a young
woman, likely in her early twenties, dressed in casual, playful attire. She is
wearing a white crop top that reveals her flat stomach and a pair of lavender
panties adorned with a whimsical print featuring cute bear and star patterns.
The bears are in pastel colors including blue, pink, and yellow, and the stars
are white. The panties are cut high on the waist, emphasizing her slim, toned
physique. Her skin tone is light, and she has a slender build with a flat stomach
and no visible tattoos or piercings. Her hair is styled in two long, light brown
braids, which are slightly messy.
output:
url: images/example_rp5zkorw6.png
- text: P4ntie The image is a high-resolution photograph showing a close-up view of
a person's lower torso and upper thighs. The subject appears to be a light-skinned
individual, likely female, lying on their back with legs spread. They are wearing
a light blue, slightly sheer, loose-fitting garment, possibly a skirt or dress,
with vertical white stripes and a white elastic waistband, which is lifted to
reveal a pair of pastel-colored panties. The panties are primarily light blue
with a playful, whimsical design featuring a moon, clouds, and stars, giving a
night sky theme. The panties are pulled down, exposing the upper thighs and part
of the vulva area.
output:
url: images/example_g2d97dsh9.png
- text: P4ntie This photograph captures a provocative scene set in an indoor office
environment. The subject is a woman with long, dark brown hair cascading down
her back, partially obscuring her face, as she is seen from behind. She is dressed
in a black, long-sleeved, ribbed knit top that is slightly cropped, revealing
her lower back. She wears black sheer tights that highlight the curve of her buttocks
and the contours of her legs, emphasizing her hourglass figure. The tights are
sheer and transparent, providing a clear view of her skin underneath, with visible
seams indicating the garment's elastic nature.
output:
url: images/example_tg6qh82ky.png
- text: P4ntie The image is a high-resolution photograph of a woman lying on a bed,
viewed from a slightly above angle. She is a light-skinned woman with a curvy
figure, wearing a revealing black lingerie set. The lingerie consists of a bra
and matching thong, both adorned with intricate heart designs. The bra is a push-up
style, accentuating her large, full breasts. The heart patterns on the lingerie
are cut-out designs, revealing skin and adding a touch of seductive flair. She
has a small tattoo of a floral design on her left hip and another on her right
thigh, featuring a stylized, colorful heart with a rose.
output:
url: images/example_cfsw2pu52.png
- text: P4ntie The image is a high-resolution photograph featuring a young woman with
a curvy, voluptuous figure lying on a polished, rich brown hardwood floor. She
has fair skin and auburn hair styled in loose, wet-looking waves cascading over
her shoulder. Her facial features are striking with high cheekbones, sharp jawline,
and full lips, giving her a sultry, alluring expression as she gazes directly
at the camera. She is wearing a provocative black lace lingerie set, including
a bra, panties, and garter belt attached to sheer black stockings. The lingerie
accentuates her ample cleavage, round hips, and prominent buttocks.
output:
url: images/example_qtcsano8u.png
- text: P4ntie The photograph features a woman with a curvy figure, leaning forward
on a bathroom countertop, her back to the camera. She has fair skin with a few
minor blemishes and some redness on her buttocks. Her hair is long, curly, and
brown. She is wearing a black crop top, and a red thong that accentuates her large,
round buttocks. The thong has a unique crisscross design on the upper part of
her buttocks. Her jeans are pulled down to her thighs, exposing her underwear.
output:
url: images/example_7mp8dko3s.png
- text: P4ntie The photograph features a woman with a curvy figure, leaning forward
on a bathroom countertop, her back to the camera. She has fair skin with a few
minor blemishes and some redness on her buttocks. Her hair is long, curly, and
brown. She is wearing a black crop top, and a red thong that accentuates her large,
round buttocks. The thong has a unique crisscross design on the upper part of
her buttocks. Her jeans are pulled down to her thighs, exposing her underwear.
output:
url: images/example_rmhgrh6tz.png
- text: P4ntie This is a high-resolution, explicit photograph of a woman from behind,
focusing on her lower body. The subject is a young woman with a light to medium
skin tone, standing in what appears to be a modern kitchen. She is wearing a white,
cropped top that ends just above her waist, revealing her lower back and upper
buttocks. Her black thong is pulled down to her upper thighs, exposing her bare
buttocks and vulva. Her body is fit and toned, with prominent, round buttocks. In
the background, the kitchen setting includes a stainless steel Instant Pot slow
cooker on the counter to the left, a microwave oven above it, and a stainless
steel sink partially visible on the right.
output:
url: images/example_gvx8zmjmj.png
- text: P4ntie The image is a high-resolution photograph showing a woman viewed from
behind, emphasizing her lower body and upper back. She has a light skin tone with
a slight tan and a curvy physique, with a prominent, rounded buttocks and thick
thighs. She is wearing a matching set of peach-colored lingerie, consisting of
a bra and panties. The bra has a delicate lace trim and underwire support visible
at the top. The panties are semi-sheer with intricate lace details, offering a
clear view of the shape of her buttocks through the semi-transparent fabric.
output:
url: images/example_1vslbmbdl.png
- text: P4ntie This is a photograph of a close-up shot of a person's lower torso,
focusing on their midsection. The individual is wearing light blue and white horizontal
striped panties with a small red bow at the front. The person's skin tone is light
and smooth, indicating a likely Caucasian complexion. They have a visible navel
piercing adorned with two dangling charms, one of which is a small, round, blue
gem, and the other is a pink gem. The person's lower body features a tattoo
on the left hip area, depicting a spider web design, adding a touch of dark ink
against the pale skin.
output:
url: images/example_3j2knghef.png
- text: P4ntie woman bottom in random pantie
output:
url: images/example_dkkv2n6bp.png
- text: P4ntie woman bottom in random pantie
output:
url: images/example_g8j2x5t54.png
instance_prompt: P4ntie
---
# panties
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `P4ntie` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/inuptia/panties/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
# Load the FLUX.1-dev base model in bfloat16 and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
# Attach the LoRA weights from this repository
pipeline.load_lora_weights('inuptia/panties', weight_name='panties.safetensors')
# Generate an image; note the `P4ntie` trigger word at the end of the prompt
image = pipeline('This photograph depicts a provocative image of a woman reclining on a pink satin sheet. The woman, likely in her late twenties to early thirties, has fair skin and is wearing a provocative outfit. She is dressed in a black, lace-up corset that accentuates her medium-sized breasts. The corset features intricate lace details and is tied at the back with black ribbons. She also wears black fishnet stockings that cover her legs up to her thighs. A black garter belt is visible at the top of the stockings. The woman’s pubic area is exposed, showing her black lace thong underwear, which is partially see-through and features a floral pattern.P4ntie').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
| [
"BEAR"
] |
ggrn/e5-small-v2 | ggrn | feature-extraction | [
"sentence-transformers",
"pytorch",
"bert",
"mteb",
"feature-extraction",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-06-21T02:39:56Z | 2023-06-21T03:30:34+00:00 | 913 | 10 | ---
language:
- en
library_name: sentence-transformers
license: mit
pipeline_tag: feature-extraction
tags:
- mteb
model-index:
- name: e5-small-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.59701492537313
- type: ap
value: 41.67064885731708
- type: f1
value: 71.86465946398573
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.265875
- type: ap
value: 87.67633085349644
- type: f1
value: 91.24297521425744
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.882000000000005
- type: f1
value: 45.08058870381236
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.697
- type: map_at_10
value: 33.975
- type: map_at_100
value: 35.223
- type: map_at_1000
value: 35.260000000000005
- type: map_at_3
value: 29.776999999999997
- type: map_at_5
value: 32.035000000000004
- type: mrr_at_1
value: 20.982
- type: mrr_at_10
value: 34.094
- type: mrr_at_100
value: 35.343
- type: mrr_at_1000
value: 35.38
- type: mrr_at_3
value: 29.884
- type: mrr_at_5
value: 32.141999999999996
- type: ndcg_at_1
value: 20.697
- type: ndcg_at_10
value: 41.668
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 48.305
- type: ndcg_at_3
value: 32.928000000000004
- type: ndcg_at_5
value: 36.998999999999995
- type: precision_at_1
value: 20.697
- type: precision_at_10
value: 6.636
- type: precision_at_100
value: 0.924
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.035
- type: precision_at_5
value: 10.398
- type: recall_at_1
value: 20.697
- type: recall_at_10
value: 66.35799999999999
- type: recall_at_100
value: 92.39
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 42.105
- type: recall_at_5
value: 51.991
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 42.1169517447068
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 34.79553720107097
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.10811337308168
- type: mrr
value: 71.56410763751482
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 78.46834918248696
- type: cos_sim_spearman
value: 79.4289182755206
- type: euclidean_pearson
value: 76.26662973727008
- type: euclidean_spearman
value: 78.11744260952536
- type: manhattan_pearson
value: 76.08175262609434
- type: manhattan_spearman
value: 78.29395265552289
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.63636363636364
- type: f1
value: 81.55779952376953
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.88541137137571
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.05205685274407
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.293999999999997
- type: map_at_10
value: 39.876
- type: map_at_100
value: 41.315000000000005
- type: map_at_1000
value: 41.451
- type: map_at_3
value: 37.194
- type: map_at_5
value: 38.728
- type: mrr_at_1
value: 37.053000000000004
- type: mrr_at_10
value: 45.281
- type: mrr_at_100
value: 46.188
- type: mrr_at_1000
value: 46.245999999999995
- type: mrr_at_3
value: 43.228
- type: mrr_at_5
value: 44.366
- type: ndcg_at_1
value: 37.053000000000004
- type: ndcg_at_10
value: 45.086
- type: ndcg_at_100
value: 50.756
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 41.416
- type: ndcg_at_5
value: 43.098
- type: precision_at_1
value: 37.053000000000004
- type: precision_at_10
value: 8.34
- type: precision_at_100
value: 1.346
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 19.647000000000002
- type: precision_at_5
value: 13.877
- type: recall_at_1
value: 30.293999999999997
- type: recall_at_10
value: 54.309
- type: recall_at_100
value: 78.59
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 43.168
- type: recall_at_5
value: 48.192
- type: map_at_1
value: 28.738000000000003
- type: map_at_10
value: 36.925999999999995
- type: map_at_100
value: 38.017
- type: map_at_1000
value: 38.144
- type: map_at_3
value: 34.446
- type: map_at_5
value: 35.704
- type: mrr_at_1
value: 35.478
- type: mrr_at_10
value: 42.786
- type: mrr_at_100
value: 43.458999999999996
- type: mrr_at_1000
value: 43.507
- type: mrr_at_3
value: 40.648
- type: mrr_at_5
value: 41.804
- type: ndcg_at_1
value: 35.478
- type: ndcg_at_10
value: 42.044
- type: ndcg_at_100
value: 46.249
- type: ndcg_at_1000
value: 48.44
- type: ndcg_at_3
value: 38.314
- type: ndcg_at_5
value: 39.798
- type: precision_at_1
value: 35.478
- type: precision_at_10
value: 7.764
- type: precision_at_100
value: 1.253
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 18.047
- type: precision_at_5
value: 12.637
- type: recall_at_1
value: 28.738000000000003
- type: recall_at_10
value: 50.659
- type: recall_at_100
value: 68.76299999999999
- type: recall_at_1000
value: 82.811
- type: recall_at_3
value: 39.536
- type: recall_at_5
value: 43.763999999999996
- type: map_at_1
value: 38.565
- type: map_at_10
value: 50.168
- type: map_at_100
value: 51.11
- type: map_at_1000
value: 51.173
- type: map_at_3
value: 47.044000000000004
- type: map_at_5
value: 48.838
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 53.596999999999994
- type: mrr_at_100
value: 54.211
- type: mrr_at_1000
value: 54.247
- type: mrr_at_3
value: 51.202000000000005
- type: mrr_at_5
value: 52.608999999999995
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 55.694
- type: ndcg_at_100
value: 59.518
- type: ndcg_at_1000
value: 60.907
- type: ndcg_at_3
value: 50.395999999999994
- type: ndcg_at_5
value: 53.022999999999996
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 8.84
- type: precision_at_100
value: 1.162
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 22.153
- type: precision_at_5
value: 15.260000000000002
- type: recall_at_1
value: 38.565
- type: recall_at_10
value: 68.65
- type: recall_at_100
value: 85.37400000000001
- type: recall_at_1000
value: 95.37400000000001
- type: recall_at_3
value: 54.645999999999994
- type: recall_at_5
value: 60.958
- type: map_at_1
value: 23.945
- type: map_at_10
value: 30.641000000000002
- type: map_at_100
value: 31.599
- type: map_at_1000
value: 31.691000000000003
- type: map_at_3
value: 28.405
- type: map_at_5
value: 29.704000000000004
- type: mrr_at_1
value: 25.537
- type: mrr_at_10
value: 32.22
- type: mrr_at_100
value: 33.138
- type: mrr_at_1000
value: 33.214
- type: mrr_at_3
value: 30.151
- type: mrr_at_5
value: 31.298
- type: ndcg_at_1
value: 25.537
- type: ndcg_at_10
value: 34.638000000000005
- type: ndcg_at_100
value: 39.486
- type: ndcg_at_1000
value: 41.936
- type: ndcg_at_3
value: 30.333
- type: ndcg_at_5
value: 32.482
- type: precision_at_1
value: 25.537
- type: precision_at_10
value: 5.153
- type: precision_at_100
value: 0.7929999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.429
- type: precision_at_5
value: 8.723
- type: recall_at_1
value: 23.945
- type: recall_at_10
value: 45.412
- type: recall_at_100
value: 67.836
- type: recall_at_1000
value: 86.467
- type: recall_at_3
value: 34.031
- type: recall_at_5
value: 39.039
- type: map_at_1
value: 14.419
- type: map_at_10
value: 20.858999999999998
- type: map_at_100
value: 22.067999999999998
- type: map_at_1000
value: 22.192
- type: map_at_3
value: 18.673000000000002
- type: map_at_5
value: 19.968
- type: mrr_at_1
value: 17.785999999999998
- type: mrr_at_10
value: 24.878
- type: mrr_at_100
value: 26.021
- type: mrr_at_1000
value: 26.095000000000002
- type: mrr_at_3
value: 22.616
- type: mrr_at_5
value: 23.785
- type: ndcg_at_1
value: 17.785999999999998
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 31.05
- type: ndcg_at_1000
value: 34.052
- type: ndcg_at_3
value: 21.117
- type: ndcg_at_5
value: 23.048
- type: precision_at_1
value: 17.785999999999998
- type: precision_at_10
value: 4.590000000000001
- type: precision_at_100
value: 0.864
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.313
- type: recall_at_1
value: 14.419
- type: recall_at_10
value: 34.477999999999994
- type: recall_at_100
value: 60.02499999999999
- type: recall_at_1000
value: 81.646
- type: recall_at_3
value: 23.515
- type: recall_at_5
value: 28.266999999999996
- type: map_at_1
value: 26.268
- type: map_at_10
value: 35.114000000000004
- type: map_at_100
value: 36.212
- type: map_at_1000
value: 36.333
- type: map_at_3
value: 32.436
- type: map_at_5
value: 33.992
- type: mrr_at_1
value: 31.761
- type: mrr_at_10
value: 40.355999999999995
- type: mrr_at_100
value: 41.125
- type: mrr_at_1000
value: 41.186
- type: mrr_at_3
value: 37.937
- type: mrr_at_5
value: 39.463
- type: ndcg_at_1
value: 31.761
- type: ndcg_at_10
value: 40.422000000000004
- type: ndcg_at_100
value: 45.458999999999996
- type: ndcg_at_1000
value: 47.951
- type: ndcg_at_3
value: 35.972
- type: ndcg_at_5
value: 38.272
- type: precision_at_1
value: 31.761
- type: precision_at_10
value: 7.103
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.779
- type: precision_at_5
value: 11.877
- type: recall_at_1
value: 26.268
- type: recall_at_10
value: 51.053000000000004
- type: recall_at_100
value: 72.702
- type: recall_at_1000
value: 89.521
- type: recall_at_3
value: 38.619
- type: recall_at_5
value: 44.671
- type: map_at_1
value: 25.230999999999998
- type: map_at_10
value: 34.227000000000004
- type: map_at_100
value: 35.370000000000005
- type: map_at_1000
value: 35.488
- type: map_at_3
value: 31.496000000000002
- type: map_at_5
value: 33.034
- type: mrr_at_1
value: 30.822
- type: mrr_at_10
value: 39.045
- type: mrr_at_100
value: 39.809
- type: mrr_at_1000
value: 39.873
- type: mrr_at_3
value: 36.663000000000004
- type: mrr_at_5
value: 37.964
- type: ndcg_at_1
value: 30.822
- type: ndcg_at_10
value: 39.472
- type: ndcg_at_100
value: 44.574999999999996
- type: ndcg_at_1000
value: 47.162
- type: ndcg_at_3
value: 34.929
- type: ndcg_at_5
value: 37.002
- type: precision_at_1
value: 30.822
- type: precision_at_10
value: 7.055
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.591
- type: precision_at_5
value: 11.667
- type: recall_at_1
value: 25.230999999999998
- type: recall_at_10
value: 50.42100000000001
- type: recall_at_100
value: 72.685
- type: recall_at_1000
value: 90.469
- type: recall_at_3
value: 37.503
- type: recall_at_5
value: 43.123
- type: map_at_1
value: 24.604166666666664
- type: map_at_10
value: 32.427166666666665
- type: map_at_100
value: 33.51474999999999
- type: map_at_1000
value: 33.6345
- type: map_at_3
value: 30.02366666666667
- type: map_at_5
value: 31.382333333333328
- type: mrr_at_1
value: 29.001166666666666
- type: mrr_at_10
value: 36.3315
- type: mrr_at_100
value: 37.16683333333333
- type: mrr_at_1000
value: 37.23341666666668
- type: mrr_at_3
value: 34.19916666666667
- type: mrr_at_5
value: 35.40458333333334
- type: ndcg_at_1
value: 29.001166666666666
- type: ndcg_at_10
value: 37.06883333333334
- type: ndcg_at_100
value: 41.95816666666666
- type: ndcg_at_1000
value: 44.501583333333336
- type: ndcg_at_3
value: 32.973499999999994
- type: ndcg_at_5
value: 34.90833333333334
- type: precision_at_1
value: 29.001166666666666
- type: precision_at_10
value: 6.336
- type: precision_at_100
value: 1.0282499999999999
- type: precision_at_1000
value: 0.14391666666666664
- type: precision_at_3
value: 14.932499999999996
- type: precision_at_5
value: 10.50825
- type: recall_at_1
value: 24.604166666666664
- type: recall_at_10
value: 46.9525
- type: recall_at_100
value: 68.67816666666667
- type: recall_at_1000
value: 86.59783333333334
- type: recall_at_3
value: 35.49783333333333
- type: recall_at_5
value: 40.52525000000001
- type: map_at_1
value: 23.559
- type: map_at_10
value: 29.023
- type: map_at_100
value: 29.818
- type: map_at_1000
value: 29.909000000000002
- type: map_at_3
value: 27.037
- type: map_at_5
value: 28.225
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 31.962000000000003
- type: mrr_at_100
value: 32.726
- type: mrr_at_1000
value: 32.800000000000004
- type: mrr_at_3
value: 30.266
- type: mrr_at_5
value: 31.208999999999996
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 32.53
- type: ndcg_at_100
value: 36.758
- type: ndcg_at_1000
value: 39.362
- type: ndcg_at_3
value: 28.985
- type: ndcg_at_5
value: 30.757
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 4.968999999999999
- type: precision_at_100
value: 0.759
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.219
- type: precision_at_5
value: 8.527999999999999
- type: recall_at_1
value: 23.559
- type: recall_at_10
value: 40.585
- type: recall_at_100
value: 60.306000000000004
- type: recall_at_1000
value: 80.11
- type: recall_at_3
value: 30.794
- type: recall_at_5
value: 35.186
- type: map_at_1
value: 16.384999999999998
- type: map_at_10
value: 22.142
- type: map_at_100
value: 23.057
- type: map_at_1000
value: 23.177
- type: map_at_3
value: 20.29
- type: map_at_5
value: 21.332
- type: mrr_at_1
value: 19.89
- type: mrr_at_10
value: 25.771
- type: mrr_at_100
value: 26.599
- type: mrr_at_1000
value: 26.680999999999997
- type: mrr_at_3
value: 23.962
- type: mrr_at_5
value: 24.934
- type: ndcg_at_1
value: 19.89
- type: ndcg_at_10
value: 25.97
- type: ndcg_at_100
value: 30.605
- type: ndcg_at_1000
value: 33.619
- type: ndcg_at_3
value: 22.704
- type: ndcg_at_5
value: 24.199
- type: precision_at_1
value: 19.89
- type: precision_at_10
value: 4.553
- type: precision_at_100
value: 0.8049999999999999
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 10.541
- type: precision_at_5
value: 7.46
- type: recall_at_1
value: 16.384999999999998
- type: recall_at_10
value: 34.001
- type: recall_at_100
value: 55.17100000000001
- type: recall_at_1000
value: 77.125
- type: recall_at_3
value: 24.618000000000002
- type: recall_at_5
value: 28.695999999999998
- type: map_at_1
value: 23.726
- type: map_at_10
value: 31.227
- type: map_at_100
value: 32.311
- type: map_at_1000
value: 32.419
- type: map_at_3
value: 28.765
- type: map_at_5
value: 30.229
- type: mrr_at_1
value: 27.705000000000002
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 35.931000000000004
- type: mrr_at_1000
value: 36
- type: mrr_at_3
value: 32.603
- type: mrr_at_5
value: 34.117999999999995
- type: ndcg_at_1
value: 27.705000000000002
- type: ndcg_at_10
value: 35.968
- type: ndcg_at_100
value: 41.197
- type: ndcg_at_1000
value: 43.76
- type: ndcg_at_3
value: 31.304
- type: ndcg_at_5
value: 33.661
- type: precision_at_1
value: 27.705000000000002
- type: precision_at_10
value: 5.942
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 13.868
- type: precision_at_5
value: 9.944
- type: recall_at_1
value: 23.726
- type: recall_at_10
value: 46.786
- type: recall_at_100
value: 70.072
- type: recall_at_1000
value: 88.2
- type: recall_at_3
value: 33.981
- type: recall_at_5
value: 39.893
- type: map_at_1
value: 23.344
- type: map_at_10
value: 31.636999999999997
- type: map_at_100
value: 33.065
- type: map_at_1000
value: 33.300000000000004
- type: map_at_3
value: 29.351
- type: map_at_5
value: 30.432
- type: mrr_at_1
value: 27.866000000000003
- type: mrr_at_10
value: 35.587
- type: mrr_at_100
value: 36.52
- type: mrr_at_1000
value: 36.597
- type: mrr_at_3
value: 33.696
- type: mrr_at_5
value: 34.713
- type: ndcg_at_1
value: 27.866000000000003
- type: ndcg_at_10
value: 36.61
- type: ndcg_at_100
value: 41.88
- type: ndcg_at_1000
value: 45.105000000000004
- type: ndcg_at_3
value: 33.038000000000004
- type: ndcg_at_5
value: 34.331
- type: precision_at_1
value: 27.866000000000003
- type: precision_at_10
value: 6.917
- type: precision_at_100
value: 1.3599999999999999
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 15.547
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.344
- type: recall_at_10
value: 45.782000000000004
- type: recall_at_100
value: 69.503
- type: recall_at_1000
value: 90.742
- type: recall_at_3
value: 35.160000000000004
- type: recall_at_5
value: 39.058
- type: map_at_1
value: 20.776
- type: map_at_10
value: 27.285999999999998
- type: map_at_100
value: 28.235
- type: map_at_1000
value: 28.337
- type: map_at_3
value: 25.147000000000002
- type: map_at_5
value: 26.401999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 29.409999999999997
- type: mrr_at_100
value: 30.275000000000002
- type: mrr_at_1000
value: 30.354999999999997
- type: mrr_at_3
value: 27.418
- type: mrr_at_5
value: 28.592000000000002
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 31.239
- type: ndcg_at_100
value: 35.965
- type: ndcg_at_1000
value: 38.602
- type: ndcg_at_3
value: 27.174
- type: ndcg_at_5
value: 29.229
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 4.806
- type: precision_at_100
value: 0.776
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.022
- type: recall_at_1
value: 20.776
- type: recall_at_10
value: 41.294
- type: recall_at_100
value: 63.111
- type: recall_at_1000
value: 82.88600000000001
- type: recall_at_3
value: 30.403000000000002
- type: recall_at_5
value: 35.455999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.376
- type: map_at_10
value: 15.926000000000002
- type: map_at_100
value: 17.585
- type: map_at_1000
value: 17.776
- type: map_at_3
value: 13.014000000000001
- type: map_at_5
value: 14.417
- type: mrr_at_1
value: 20.195
- type: mrr_at_10
value: 29.95
- type: mrr_at_100
value: 31.052000000000003
- type: mrr_at_1000
value: 31.108000000000004
- type: mrr_at_3
value: 26.667
- type: mrr_at_5
value: 28.458
- type: ndcg_at_1
value: 20.195
- type: ndcg_at_10
value: 22.871
- type: ndcg_at_100
value: 29.921999999999997
- type: ndcg_at_1000
value: 33.672999999999995
- type: ndcg_at_3
value: 17.782999999999998
- type: ndcg_at_5
value: 19.544
- type: precision_at_1
value: 20.195
- type: precision_at_10
value: 7.394
- type: precision_at_100
value: 1.493
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 13.073
- type: precision_at_5
value: 10.436
- type: recall_at_1
value: 9.376
- type: recall_at_10
value: 28.544999999999998
- type: recall_at_100
value: 53.147999999999996
- type: recall_at_1000
value: 74.62
- type: recall_at_3
value: 16.464000000000002
- type: recall_at_5
value: 21.004
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.415000000000001
- type: map_at_10
value: 18.738
- type: map_at_100
value: 27.291999999999998
- type: map_at_1000
value: 28.992
- type: map_at_3
value: 13.196
- type: map_at_5
value: 15.539
- type: mrr_at_1
value: 66.5
- type: mrr_at_10
value: 74.518
- type: mrr_at_100
value: 74.86
- type: mrr_at_1000
value: 74.87
- type: mrr_at_3
value: 72.375
- type: mrr_at_5
value: 73.86200000000001
- type: ndcg_at_1
value: 54.37499999999999
- type: ndcg_at_10
value: 41.317
- type: ndcg_at_100
value: 45.845
- type: ndcg_at_1000
value: 52.92
- type: ndcg_at_3
value: 44.983000000000004
- type: ndcg_at_5
value: 42.989
- type: precision_at_1
value: 66.5
- type: precision_at_10
value: 33.6
- type: precision_at_100
value: 10.972999999999999
- type: precision_at_1000
value: 2.214
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.15
- type: recall_at_1
value: 8.415000000000001
- type: recall_at_10
value: 24.953
- type: recall_at_100
value: 52.48199999999999
- type: recall_at_1000
value: 75.093
- type: recall_at_3
value: 14.341000000000001
- type: recall_at_5
value: 18.468
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.06499999999999
- type: f1
value: 41.439327599975385
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.02
- type: map_at_10
value: 76.68599999999999
- type: map_at_100
value: 76.959
- type: map_at_1000
value: 76.972
- type: map_at_3
value: 75.024
- type: map_at_5
value: 76.153
- type: mrr_at_1
value: 71.197
- type: mrr_at_10
value: 81.105
- type: mrr_at_100
value: 81.232
- type: mrr_at_1000
value: 81.233
- type: mrr_at_3
value: 79.758
- type: mrr_at_5
value: 80.69
- type: ndcg_at_1
value: 71.197
- type: ndcg_at_10
value: 81.644
- type: ndcg_at_100
value: 82.645
- type: ndcg_at_1000
value: 82.879
- type: ndcg_at_3
value: 78.792
- type: ndcg_at_5
value: 80.528
- type: precision_at_1
value: 71.197
- type: precision_at_10
value: 10.206999999999999
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 30.868000000000002
- type: precision_at_5
value: 19.559
- type: recall_at_1
value: 66.02
- type: recall_at_10
value: 92.50699999999999
- type: recall_at_100
value: 96.497
- type: recall_at_1000
value: 97.956
- type: recall_at_3
value: 84.866
- type: recall_at_5
value: 89.16199999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.948
- type: map_at_10
value: 29.833
- type: map_at_100
value: 31.487
- type: map_at_1000
value: 31.674000000000003
- type: map_at_3
value: 26.029999999999998
- type: map_at_5
value: 28.038999999999998
- type: mrr_at_1
value: 34.721999999999994
- type: mrr_at_10
value: 44.214999999999996
- type: mrr_at_100
value: 44.994
- type: mrr_at_1000
value: 45.051
- type: mrr_at_3
value: 41.667
- type: mrr_at_5
value: 43.032
- type: ndcg_at_1
value: 34.721999999999994
- type: ndcg_at_10
value: 37.434
- type: ndcg_at_100
value: 43.702000000000005
- type: ndcg_at_1000
value: 46.993
- type: ndcg_at_3
value: 33.56
- type: ndcg_at_5
value: 34.687
- type: precision_at_1
value: 34.721999999999994
- type: precision_at_10
value: 10.401
- type: precision_at_100
value: 1.7049999999999998
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_3
value: 22.531000000000002
- type: precision_at_5
value: 16.42
- type: recall_at_1
value: 17.948
- type: recall_at_10
value: 45.062999999999995
- type: recall_at_100
value: 68.191
- type: recall_at_1000
value: 87.954
- type: recall_at_3
value: 31.112000000000002
- type: recall_at_5
value: 36.823
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.644
- type: map_at_10
value: 57.658
- type: map_at_100
value: 58.562000000000005
- type: map_at_1000
value: 58.62500000000001
- type: map_at_3
value: 54.022999999999996
- type: map_at_5
value: 56.293000000000006
- type: mrr_at_1
value: 73.288
- type: mrr_at_10
value: 80.51700000000001
- type: mrr_at_100
value: 80.72
- type: mrr_at_1000
value: 80.728
- type: mrr_at_3
value: 79.33200000000001
- type: mrr_at_5
value: 80.085
- type: ndcg_at_1
value: 73.288
- type: ndcg_at_10
value: 66.61
- type: ndcg_at_100
value: 69.723
- type: ndcg_at_1000
value: 70.96000000000001
- type: ndcg_at_3
value: 61.358999999999995
- type: ndcg_at_5
value: 64.277
- type: precision_at_1
value: 73.288
- type: precision_at_10
value: 14.17
- type: precision_at_100
value: 1.659
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 39.487
- type: precision_at_5
value: 25.999
- type: recall_at_1
value: 36.644
- type: recall_at_10
value: 70.851
- type: recall_at_100
value: 82.94399999999999
- type: recall_at_1000
value: 91.134
- type: recall_at_3
value: 59.230000000000004
- type: recall_at_5
value: 64.997
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.00280000000001
- type: ap
value: 80.46302061021223
- type: f1
value: 85.9592921596419
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.541
- type: map_at_10
value: 34.625
- type: map_at_100
value: 35.785
- type: map_at_1000
value: 35.831
- type: map_at_3
value: 30.823
- type: map_at_5
value: 32.967999999999996
- type: mrr_at_1
value: 23.180999999999997
- type: mrr_at_10
value: 35.207
- type: mrr_at_100
value: 36.315
- type: mrr_at_1000
value: 36.355
- type: mrr_at_3
value: 31.483
- type: mrr_at_5
value: 33.589999999999996
- type: ndcg_at_1
value: 23.195
- type: ndcg_at_10
value: 41.461
- type: ndcg_at_100
value: 47.032000000000004
- type: ndcg_at_1000
value: 48.199999999999996
- type: ndcg_at_3
value: 33.702
- type: ndcg_at_5
value: 37.522
- type: precision_at_1
value: 23.195
- type: precision_at_10
value: 6.526999999999999
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 14.308000000000002
- type: precision_at_5
value: 10.507
- type: recall_at_1
value: 22.541
- type: recall_at_10
value: 62.524
- type: recall_at_100
value: 88.228
- type: recall_at_1000
value: 97.243
- type: recall_at_3
value: 41.38
- type: recall_at_5
value: 50.55
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.69949840401279
- type: f1
value: 92.54141471311786
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 72.56041951664386
- type: f1
value: 55.88499977508287
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.62071284465365
- type: f1
value: 69.36717546572152
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.35843981170142
- type: f1
value: 76.15496453538884
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.33664956793118
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.883839621715524
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.096874986740758
- type: mrr
value: 30.97300481932132
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.4
- type: map_at_10
value: 11.852
- type: map_at_100
value: 14.758
- type: map_at_1000
value: 16.134
- type: map_at_3
value: 8.558
- type: map_at_5
value: 10.087
- type: mrr_at_1
value: 44.272
- type: mrr_at_10
value: 52.05800000000001
- type: mrr_at_100
value: 52.689
- type: mrr_at_1000
value: 52.742999999999995
- type: mrr_at_3
value: 50.205999999999996
- type: mrr_at_5
value: 51.367
- type: ndcg_at_1
value: 42.57
- type: ndcg_at_10
value: 32.449
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 38.351
- type: ndcg_at_3
value: 37.044
- type: ndcg_at_5
value: 35.275
- type: precision_at_1
value: 44.272
- type: precision_at_10
value: 23.87
- type: precision_at_100
value: 7.625
- type: precision_at_1000
value: 2.045
- type: precision_at_3
value: 34.365
- type: precision_at_5
value: 30.341
- type: recall_at_1
value: 5.4
- type: recall_at_10
value: 15.943999999999999
- type: recall_at_100
value: 29.805
- type: recall_at_1000
value: 61.695
- type: recall_at_3
value: 9.539
- type: recall_at_5
value: 12.127
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.047000000000004
- type: map_at_10
value: 51.6
- type: map_at_100
value: 52.449999999999996
- type: map_at_1000
value: 52.476
- type: map_at_3
value: 47.452
- type: map_at_5
value: 49.964
- type: mrr_at_1
value: 40.382
- type: mrr_at_10
value: 54.273
- type: mrr_at_100
value: 54.859
- type: mrr_at_1000
value: 54.876000000000005
- type: mrr_at_3
value: 51.014
- type: mrr_at_5
value: 52.983999999999995
- type: ndcg_at_1
value: 40.353
- type: ndcg_at_10
value: 59.11300000000001
- type: ndcg_at_100
value: 62.604000000000006
- type: ndcg_at_1000
value: 63.187000000000005
- type: ndcg_at_3
value: 51.513
- type: ndcg_at_5
value: 55.576
- type: precision_at_1
value: 40.353
- type: precision_at_10
value: 9.418
- type: precision_at_100
value: 1.1440000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.078000000000003
- type: precision_at_5
value: 16.250999999999998
- type: recall_at_1
value: 36.047000000000004
- type: recall_at_10
value: 79.22200000000001
- type: recall_at_100
value: 94.23
- type: recall_at_1000
value: 98.51100000000001
- type: recall_at_3
value: 59.678
- type: recall_at_5
value: 68.967
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.232
- type: map_at_10
value: 81.674
- type: map_at_100
value: 82.338
- type: map_at_1000
value: 82.36099999999999
- type: map_at_3
value: 78.833
- type: map_at_5
value: 80.58
- type: mrr_at_1
value: 78.64
- type: mrr_at_10
value: 85.164
- type: mrr_at_100
value: 85.317
- type: mrr_at_1000
value: 85.319
- type: mrr_at_3
value: 84.127
- type: mrr_at_5
value: 84.789
- type: ndcg_at_1
value: 78.63
- type: ndcg_at_10
value: 85.711
- type: ndcg_at_100
value: 87.238
- type: ndcg_at_1000
value: 87.444
- type: ndcg_at_3
value: 82.788
- type: ndcg_at_5
value: 84.313
- type: precision_at_1
value: 78.63
- type: precision_at_10
value: 12.977
- type: precision_at_100
value: 1.503
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.113
- type: precision_at_5
value: 23.71
- type: recall_at_1
value: 68.232
- type: recall_at_10
value: 93.30199999999999
- type: recall_at_100
value: 98.799
- type: recall_at_1000
value: 99.885
- type: recall_at_3
value: 84.827
- type: recall_at_5
value: 89.188
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.71879170816294
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 59.65866311751794
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.218
- type: map_at_10
value: 10.337
- type: map_at_100
value: 12.131
- type: map_at_1000
value: 12.411
- type: map_at_3
value: 7.4270000000000005
- type: map_at_5
value: 8.913
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 30.868000000000002
- type: mrr_at_100
value: 31.903
- type: mrr_at_1000
value: 31.972
- type: mrr_at_3
value: 27.367
- type: mrr_at_5
value: 29.372
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.765
- type: ndcg_at_100
value: 24.914
- type: ndcg_at_1000
value: 30.206
- type: ndcg_at_3
value: 16.64
- type: ndcg_at_5
value: 14.712
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 9.24
- type: precision_at_100
value: 1.9560000000000002
- type: precision_at_1000
value: 0.32299999999999995
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.94
- type: recall_at_1
value: 4.218
- type: recall_at_10
value: 18.752
- type: recall_at_100
value: 39.7
- type: recall_at_1000
value: 65.57300000000001
- type: recall_at_3
value: 9.428
- type: recall_at_5
value: 13.133000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.04338850207233
- type: cos_sim_spearman
value: 78.5054651430423
- type: euclidean_pearson
value: 80.30739451228612
- type: euclidean_spearman
value: 78.48377464299097
- type: manhattan_pearson
value: 80.40795049052781
- type: manhattan_spearman
value: 78.49506205443114
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.11596224442962
- type: cos_sim_spearman
value: 76.20997388935461
- type: euclidean_pearson
value: 80.56858451349109
- type: euclidean_spearman
value: 75.92659183871186
- type: manhattan_pearson
value: 80.60246102203844
- type: manhattan_spearman
value: 76.03018971432664
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.34691640755737
- type: cos_sim_spearman
value: 82.4018369631579
- type: euclidean_pearson
value: 81.87673092245366
- type: euclidean_spearman
value: 82.3671489960678
- type: manhattan_pearson
value: 81.88222387719948
- type: manhattan_spearman
value: 82.3816590344736
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.2836092579524
- type: cos_sim_spearman
value: 78.99982781772064
- type: euclidean_pearson
value: 80.5184271010527
- type: euclidean_spearman
value: 78.89777392101904
- type: manhattan_pearson
value: 80.53585705018664
- type: manhattan_spearman
value: 78.92898405472994
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.7349907750784
- type: cos_sim_spearman
value: 87.7611234446225
- type: euclidean_pearson
value: 86.98759326731624
- type: euclidean_spearman
value: 87.58321319424618
- type: manhattan_pearson
value: 87.03483090370842
- type: manhattan_spearman
value: 87.63278333060288
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.75873694924825
- type: cos_sim_spearman
value: 83.80237999094724
- type: euclidean_pearson
value: 83.55023725861537
- type: euclidean_spearman
value: 84.12744338577744
- type: manhattan_pearson
value: 83.58816983036232
- type: manhattan_spearman
value: 84.18520748676501
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.21630882940174
- type: cos_sim_spearman
value: 87.72382883437031
- type: euclidean_pearson
value: 88.69933350930333
- type: euclidean_spearman
value: 88.24660814383081
- type: manhattan_pearson
value: 88.77331018833499
- type: manhattan_spearman
value: 88.26109989380632
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 61.11854063060489
- type: cos_sim_spearman
value: 63.14678634195072
- type: euclidean_pearson
value: 61.679090067000864
- type: euclidean_spearman
value: 62.28876589509653
- type: manhattan_pearson
value: 62.082324165511004
- type: manhattan_spearman
value: 62.56030932816679
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.00319882832645
- type: cos_sim_spearman
value: 85.94529772647257
- type: euclidean_pearson
value: 85.6661390122756
- type: euclidean_spearman
value: 85.97747815545827
- type: manhattan_pearson
value: 85.58422770541893
- type: manhattan_spearman
value: 85.9237139181532
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.16198731863916
- type: mrr
value: 94.25202702163487
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.761
- type: map_at_10
value: 64.396
- type: map_at_100
value: 65.07
- type: map_at_1000
value: 65.09899999999999
- type: map_at_3
value: 61.846000000000004
- type: map_at_5
value: 63.284
- type: mrr_at_1
value: 57.667
- type: mrr_at_10
value: 65.83099999999999
- type: mrr_at_100
value: 66.36800000000001
- type: mrr_at_1000
value: 66.39399999999999
- type: mrr_at_3
value: 64.056
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 57.667
- type: ndcg_at_10
value: 68.854
- type: ndcg_at_100
value: 71.59100000000001
- type: ndcg_at_1000
value: 72.383
- type: ndcg_at_3
value: 64.671
- type: ndcg_at_5
value: 66.796
- type: precision_at_1
value: 57.667
- type: precision_at_10
value: 9.167
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.667
- type: recall_at_1
value: 54.761
- type: recall_at_10
value: 80.9
- type: recall_at_100
value: 92.767
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 69.672
- type: recall_at_5
value: 75.083
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8079207920792
- type: cos_sim_ap
value: 94.88470927617445
- type: cos_sim_f1
value: 90.08179959100204
- type: cos_sim_precision
value: 92.15481171548117
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.58613861386138
- type: dot_ap
value: 82.94822578881316
- type: dot_f1
value: 77.33333333333333
- type: dot_precision
value: 79.36842105263158
- type: dot_recall
value: 75.4
- type: euclidean_accuracy
value: 99.8069306930693
- type: euclidean_ap
value: 94.81367858031837
- type: euclidean_f1
value: 90.01009081735621
- type: euclidean_precision
value: 90.83503054989816
- type: euclidean_recall
value: 89.2
- type: manhattan_accuracy
value: 99.81188118811882
- type: manhattan_ap
value: 94.91405337220161
- type: manhattan_f1
value: 90.2763561924258
- type: manhattan_precision
value: 92.45283018867924
- type: manhattan_recall
value: 88.2
- type: max_accuracy
value: 99.81188118811882
- type: max_ap
value: 94.91405337220161
- type: max_f1
value: 90.2763561924258
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.511599500053094
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 31.984728147814707
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.93428193939015
- type: mrr
value: 50.916557911043206
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.562500894537145
- type: cos_sim_spearman
value: 31.162587976726307
- type: dot_pearson
value: 22.633662187735762
- type: dot_spearman
value: 22.723000282378962
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.219
- type: map_at_10
value: 1.871
- type: map_at_100
value: 10.487
- type: map_at_1000
value: 25.122
- type: map_at_3
value: 0.657
- type: map_at_5
value: 1.0699999999999998
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 89.567
- type: mrr_at_100
value: 89.748
- type: mrr_at_1000
value: 89.748
- type: mrr_at_3
value: 88.667
- type: mrr_at_5
value: 89.567
- type: ndcg_at_1
value: 80
- type: ndcg_at_10
value: 74.533
- type: ndcg_at_100
value: 55.839000000000006
- type: ndcg_at_1000
value: 49.748
- type: ndcg_at_3
value: 79.53099999999999
- type: ndcg_at_5
value: 78.245
- type: precision_at_1
value: 84
- type: precision_at_10
value: 78.4
- type: precision_at_100
value: 56.99999999999999
- type: precision_at_1000
value: 21.98
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 84.8
- type: recall_at_1
value: 0.219
- type: recall_at_10
value: 2.02
- type: recall_at_100
value: 13.555
- type: recall_at_1000
value: 46.739999999999995
- type: recall_at_3
value: 0.685
- type: recall_at_5
value: 1.13
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.5029999999999997
- type: map_at_10
value: 11.042
- type: map_at_100
value: 16.326999999999998
- type: map_at_1000
value: 17.836
- type: map_at_3
value: 6.174
- type: map_at_5
value: 7.979
- type: mrr_at_1
value: 42.857
- type: mrr_at_10
value: 52.617000000000004
- type: mrr_at_100
value: 53.351000000000006
- type: mrr_at_1000
value: 53.351000000000006
- type: mrr_at_3
value: 46.939
- type: mrr_at_5
value: 50.714000000000006
- type: ndcg_at_1
value: 38.775999999999996
- type: ndcg_at_10
value: 27.125
- type: ndcg_at_100
value: 35.845
- type: ndcg_at_1000
value: 47.377
- type: ndcg_at_3
value: 29.633
- type: ndcg_at_5
value: 28.378999999999998
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 24.082
- type: precision_at_100
value: 6.877999999999999
- type: precision_at_1000
value: 1.463
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 28.571
- type: recall_at_1
value: 3.5029999999999997
- type: recall_at_10
value: 17.068
- type: recall_at_100
value: 43.361
- type: recall_at_1000
value: 78.835
- type: recall_at_3
value: 6.821000000000001
- type: recall_at_5
value: 10.357
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.0954
- type: ap
value: 14.216844153511959
- type: f1
value: 54.63687418565117
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.46293152235427
- type: f1
value: 61.744177921638645
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 41.12708617788644
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.75430649102938
- type: cos_sim_ap
value: 73.34252536948081
- type: cos_sim_f1
value: 67.53758935173774
- type: cos_sim_precision
value: 63.3672525439408
- type: cos_sim_recall
value: 72.29551451187335
- type: dot_accuracy
value: 81.71305954580676
- type: dot_ap
value: 59.5532209082386
- type: dot_f1
value: 56.18466898954705
- type: dot_precision
value: 47.830923248053395
- type: dot_recall
value: 68.07387862796834
- type: euclidean_accuracy
value: 85.81987244441795
- type: euclidean_ap
value: 73.34325409809446
- type: euclidean_f1
value: 67.83451360417443
- type: euclidean_precision
value: 64.09955388588871
- type: euclidean_recall
value: 72.0316622691293
- type: manhattan_accuracy
value: 85.68277999642368
- type: manhattan_ap
value: 73.1535450121903
- type: manhattan_f1
value: 67.928237896289
- type: manhattan_precision
value: 63.56945722171113
- type: manhattan_recall
value: 72.9287598944591
- type: max_accuracy
value: 85.81987244441795
- type: max_ap
value: 73.34325409809446
- type: max_f1
value: 67.928237896289
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.90441262079403
- type: cos_sim_ap
value: 85.79331880741438
- type: cos_sim_f1
value: 78.31563529842548
- type: cos_sim_precision
value: 74.6683424102779
- type: cos_sim_recall
value: 82.33754234678165
- type: dot_accuracy
value: 84.89928978926534
- type: dot_ap
value: 75.25819218316
- type: dot_f1
value: 69.88730119720536
- type: dot_precision
value: 64.23362374959665
- type: dot_recall
value: 76.63227594702803
- type: euclidean_accuracy
value: 89.01695967710637
- type: euclidean_ap
value: 85.98986606038852
- type: euclidean_f1
value: 78.5277880014722
- type: euclidean_precision
value: 75.22211253701876
- type: euclidean_recall
value: 82.13735756082538
- type: manhattan_accuracy
value: 88.99561454573679
- type: manhattan_ap
value: 85.92262421793953
- type: manhattan_f1
value: 78.38866094740769
- type: manhattan_precision
value: 76.02373028505282
- type: manhattan_recall
value: 80.9054511857099
- type: max_accuracy
value: 89.01695967710637
- type: max_ap
value: 85.98986606038852
- type: max_f1
value: 78.5277880014722
---
# E5-small-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 384.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('ggrn/e5-small-v2')
model = AutoModel.from_pretrained('ggrn/e5-small-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Citation
If you find our paper or models helpful, please consider citing us as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
## Sentence Transformers
Below is an example for usage with sentence_transformers. `pip install sentence_transformers~=2.2.2`
This is community contributed, and results may vary up to numerical precision.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('ggrn/e5-small-v2')
# Use the same "query: " / "passage: " prefixes as in the example above.
input_texts = ['query: how much protein should a female eat',
               'passage: Definition of summit: the highest point of a mountain.']
embeddings = model.encode(input_texts, normalize_embeddings=True)
``` | [
"BIOSSES",
"SCIFACT"
] |
mradermacher/1.5-Pints-2K-v0.1-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"en",
"dataset:pints-ai/Expository-Prose-V1",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:meta-math/MetaMathQA",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:togethercomputer/llama-instruct",
"dataset:LDJnr/Capybara",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:pints-ai/1.5-Pints-2K-v0.1",
"base_model:quantized:pints-ai/1.5-Pints-2K-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2025-02-24T16:08:35Z | 2025-02-24T16:54:52+00:00 | 913 | 0 | ---
base_model: pints-ai/1.5-Pints-2K-v0.1
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
license: mit
extra_gated_fields:
Company: text
Country: country
I agree to use this model for in accordance to the afore-mentioned Terms of Use: checkbox
I want to use this model for:
options:
- Research
- Education
- label: Other
value: other
type: select
Specific date: date_picker
extra_gated_prompt: Though best efforts has been made to ensure, as much as possible,
that all texts in the training corpora are royalty free, this does not constitute
a legal guarantee that such is the case. **By using any of the models, corpora or
part thereof, the user agrees to bear full responsibility to do the necessary due
diligence to ensure that he / she is in compliance with their local copyright laws.
Additionally, the user agrees to bear any damages arising as a direct cause (or
otherwise) of using any artifacts released by the pints research team, as well as
full responsibility for the consequences of his / her usage (or implementation)
of any such released artifacts. The user also indemnifies Pints Research Team (and
any of its members or agents) of any damage, related or unrelated, to the release
or subsequent usage of any findings, artifacts or code by the team. For the avoidance
of doubt, any artifacts released by the Pints Research team are done so in accordance
with the 'fair use' clause of Copyright Law, in hopes that this will aid the research
community in bringing LLMs to the next frontier.
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pints-ai/1.5-Pints-2K-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
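If a quant were ever split into multiple parts, the parts only need to be joined byte-for-byte before loading. A minimal Python sketch (the part filenames are illustrative, not actual files in this repo):

```python
import shutil

def concat_parts(part_paths, out_path):
    """Join split GGUF parts (e.g. *.part1of2, *.part2of2) into one file."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)  # stream bytes; never loads a whole part into memory
```

On Unix, `cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf` does the same thing.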
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF/resolve/main/1.5-Pints-2K-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"BEAR"
] |
RichardErkhov/crumb_-_gpt2023-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 2024-06-06T04:16:51Z | 2024-06-06T04:30:15+00:00 | 908 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2023 - GGUF
- Model creator: https://huggingface.co/crumb/
- Original model: https://huggingface.co/crumb/gpt2023/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2023.Q2_K.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q2_K.gguf) | Q2_K | 0.08GB |
| [gpt2023.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gpt2023.IQ3_S.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gpt2023.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gpt2023.IQ3_M.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gpt2023.Q3_K.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q3_K.gguf) | Q3_K | 0.09GB |
| [gpt2023.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gpt2023.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [gpt2023.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [gpt2023.Q4_0.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gpt2023.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gpt2023.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gpt2023.Q4_K.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q4_K.gguf) | Q4_K | 0.11GB |
| [gpt2023.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [gpt2023.Q4_1.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q4_1.gguf) | Q4_1 | 0.11GB |
| [gpt2023.Q5_0.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gpt2023.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gpt2023.Q5_K.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q5_K.gguf) | Q5_K | 0.12GB |
| [gpt2023.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gpt2023.Q5_1.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gpt2023.Q6_K.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q6_K.gguf) | Q6_K | 0.13GB |
| [gpt2023.Q8_0.gguf](https://huggingface.co/RichardErkhov/crumb_-_gpt2023-gguf/blob/main/gpt2023.Q8_0.gguf) | Q8_0 | 0.17GB |
Original model description:
---
license: mit
language:
- en
tags:
- causal-lm
---
# GPT2(023) Model Card
This is the smallest GPT-2 model (124m) from OpenAI finetuned on approximately 2.23B tokens (almost the 2.48B needed to 'chinchilla-optimally' pretrain it! It's also more tokens than Cerebras-GPT-111M was trained on in total) consisting of 1.3B from common crawl sites from 2023, 540M from ArXiv, and 390M from GitHub.
The model was trained with a learning rate of 1e-4, with a warmup of 1024 steps, then decaying to 0. There were 4400 total steps during training at a batch size of 512 examples with a context length of 1024. The batch size and context length are the same as the pre-training of GPT2 itself. Training took a total of 1.18e+18 FLOPs over the course of 79.32 hours locally with a 12 GB RTX3060. Final train loss was 2.73.
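The learning-rate schedule described above can be sketched as follows. The card does not state the decay shape, so a simple linear decay to 0 after warmup is assumed here:

```python
def lr_at(step, peak=1e-4, warmup=1024, total_steps=4400):
    """Linear warmup to `peak` over `warmup` steps, then (assumed) linear decay to 0."""
    if step < warmup:
        return peak * step / warmup
    return peak * (total_steps - step) / (total_steps - warmup)
```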
### Evaluation of GPT2023
*(in progress)*
| model | piqa acc | winogrande acc | lambada ppl | lambada acc | arc acc | sciq acc | wsc acc |
| --- | --- | --- | --- | --- | --- | --- | --- |
| pythia-70m | 59.85 | 51.22 | 140.81 | 21.40 | 17.15 | 65.00 | 36.53 |
| pythia-160m | 62.68 | 51.07 | 30.03 | 36.76 | 19.62 | 76.20 | 36.58 |
| pythia-410m | 66.54 | 52.24 | 11.75 | 49.93 | 21.67 | 80.80 | 60.58 |
| opt-125m | 63.00 | 50.27 | 26.02 | 37.90 | 18.94 | 75.1 | 36.54 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| gpt2 (124m) | **62.89** | **51.61** | 40.06 | 32.56 | **19.03** | 75 | **43.27** |
| gpt2023 (124m) | 62.02 | 49.64 | **34.55** | **33.98** | 18.94 | **76.1** | 36.54 |
The resulting model achieves a perplexity of 339.38, making it competitive with Cerebras-590M with only 21% of the parameters, and much better than the original GPT-2, which scores 491.57!
(metric explanation here: https://twitter.com/aicrumb/status/1650350363898265601 , tldr it's a joke)
To demonstrate how GPT2(023) is aware of recent events, let’s take a look at a given example:
```
# About Covid-19
- -
The Covid-19
```
The model completes the text as:
```
# About Covid-19
- -
The Covid-19 pandemic is the worldwide pandemic that has left thousands of people unable to enter and work in or continue their normal daily normal life. In this brief post, we examine three of the main factors that have accelerated the pandemic and predict the path the pandemic will take through the rest of the world.
```
As you can see, GPT2(023) can generate coherent and relevant text pertaining to the Covid-19 pandemic, showcasing its ability to understand recent events. However, it struggles with certain subjects that weren't extremely relevant in its training data. As only 2.23 billion tokens were used during finetuning, the model may have missed out on many recent events. One such event is the latest US election.
Given text in a question and answer format:
```
Q: Who is the last president?
A: Donald Trump
Q: Who is the most recent president?
A:
```
The model completes the text with: `Barack Obama`
### Model description
*(from GPT-2 model card)*
GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token i only uses the inputs from 1 to i but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
This is the smallest version of GPT-2, with 124M parameters.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='crumb/gpt2023')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('crumb/gpt2023')
model = GPT2Model.from_pretrained('crumb/gpt2023')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_crumb__gpt2023)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.85 |
| ARC (25-shot) | 21.93 |
| HellaSwag (10-shot) | 31.11 |
| MMLU (5-shot) | 25.05 |
| TruthfulQA (0-shot) | 40.71 |
| Winogrande (5-shot) | 50.12 |
| GSM8K (5-shot) | 0.3 |
| DROP (3-shot) | 4.73 |
| [
"SCIQ"
] |
EleutherAI/pythia-70m-v0 | EleutherAI | text-generation | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-10-16T18:31:25Z | 2023-03-29T18:53:28+00:00 | 904 | 6 | ---
datasets:
- the_pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-70M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-70M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-70M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
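The total and non-embedding columns in the table below differ by the (untied) input and output embedding matrices. As a hedged sketch for the 70M row — the 50,304-entry vocabulary and 512 hidden size are assumptions taken from the Pythia paper, not stated in the table itself:

```python
# Reproducing the 70M row's non-embedding count. The vocabulary size and
# hidden size are assumptions from the Pythia paper, not from this table.
VOCAB_SIZE = 50_304   # GPT-NeoX-20B tokenizer vocabulary
D_MODEL = 512         # Pythia-70M hidden size
TOTAL_PARAMS = 70_426_624

# Input and output embeddings are untied, so both matrices count separately.
embedding_params = 2 * VOCAB_SIZE * D_MODEL
non_embedding = TOTAL_PARAMS - embedding_params
assert non_embedding == 18_915_328  # matches the 70M row below
```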
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"SCIQ"
] |
aisingapore/sea-lion-3b | aisingapore | text-generation | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"custom_code",
"en",
"zh",
"id",
"ms",
"tl",
"my",
"vi",
"th",
"lo",
"km",
"ta",
"arxiv:2101.09635",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-24T06:09:47Z | 2024-09-19T06:38:42+00:00 | 900 | 17 | ---
language:
- en
- zh
- id
- ms
- tl
- my
- vi
- th
- lo
- km
- ta
license: mit
---
# SEA-LION
SEA-LION is a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
The models range in size from 3 billion to 7 billion parameters.
This is the card for the SEA-LION 3B base model.
SEA-LION stands for <i>Southeast Asian Languages In One Network</i>.
## Model Details
### Model Description
The SEA-LION model is a significant leap forward in the field of Natural Language Processing,
specifically trained to understand the SEA regional context.
SEA-LION is built on the robust MPT architecture and has a vocabulary size of 256K.
For tokenization, the model employs our custom SEABPETokenizer, which is specially tailored for SEA languages, ensuring optimal model performance.
The training data for SEA-LION encompasses 980B tokens.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages:** English, Chinese, Indonesian, Malay, Thai, Vietnamese, Filipino, Tamil, Burmese, Khmer, Lao
- **License:** MIT License
### Performance Benchmarks
SEA-LION achieves the following average performance on general tasks in English (as measured by Hugging Face's LLM Leaderboard):
| Model | ARC | HellaSwag | MMLU | TruthfulQA | Average |
|-------------|:-----:|:---------:|:-----:|:----------:|:-------:|
| SEA-LION 3B | 36.26 | 64.59 | 24.07 | 36.46 | 40.35 |
## Training Details
### Data
SEA-LION was trained on 980B tokens of the following data:
| Data Source | Unique Tokens | Multiplier | Total Tokens | Percentage |
|---------------------------|:-------------:|:----------:|:------------:|:----------:|
| RefinedWeb - English | 571.3B | 1 | 571.3B | 58.20% |
| mC4 - Chinese | 91.2B | 1 | 91.2B | 9.29% |
| mC4 - Indonesian | 3.68B | 4 | 14.7B | 1.50% |
| mC4 - Malay | 0.72B | 4 | 2.9B | 0.29% |
| mC4 - Filipino | 1.32B | 4 | 5.3B | 0.54% |
| mC4 - Burmese | 1.2B | 4 | 4.9B | 0.49% |
| mC4 - Vietnamese | 63.4B | 1 | 63.4B | 6.46% |
| mC4 - Thai | 5.8B | 2 | 11.6B | 1.18% |
| WangChanBERTa - Thai | 5B | 2 | 10B | 1.02% |
| mC4 - Lao | 0.27B | 4 | 1.1B | 0.12% |
| mC4 - Khmer | 0.97B | 4 | 3.9B | 0.40% |
| mC4 - Tamil | 2.55B | 4 | 10.2B | 1.04% |
| the Stack - Python | 20.9B | 2 | 41.8B | 4.26% |
| the Stack - Javascript | 55.6B | 1 | 55.6B | 5.66% |
| the Stack - Shell | 1.25B | 2 | 2.5B | 0.26% |
| the Stack - SQL | 6.4B | 2 | 12.8B | 1.31% |
| the Stack - Markdown | 26.6B | 1 | 26.6B | 2.71% |
| RedPajama - StackExchange | 21.2B | 1 | 21.2B | 2.16% |
| RedPajama - ArXiv | 30.6B | 1 | 30.6B | 3.12% |
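The multiplier column is a simple up-sampling factor: total tokens per source are unique tokens × multiplier, and the percentage is that total over the roughly 980B-token mix. A minimal sketch of this arithmetic (figures copied from the table above; the helper function is ours):

```python
# Up-sampling arithmetic behind the data-mixture table above (helper is ours).
def mixture_row(unique_tokens_b: float, multiplier: int, mix_total_b: float = 980.0):
    """Return (total tokens in billions, share of the mix as a percentage)."""
    total = unique_tokens_b * multiplier
    return total, 100.0 * total / mix_total_b

# mC4 - Indonesian: 3.68B unique tokens repeated 4x -> 14.7B total, ~1.5% of the mix
total, pct = mixture_row(3.68, 4)
assert round(total, 1) == 14.7
assert round(pct, 1) == 1.5
```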
### Infrastructure
SEA-LION was trained using [MosaicML Composer](https://github.com/mosaicml/composer)
on the following hardware:
| Training Details | SEA-LION 3B |
|----------------------|:------------:|
| AWS EC2 p4d.24xlarge | 30 instances |
| Nvidia A100 40GB GPU | 240 |
| Training Duration | 14 days |
### Configuration
| HyperParameter | SEA-LION 3B |
|-------------------|:------------------:|
| Precision | bfloat16 |
| Optimizer | decoupled_adamw |
| Scheduler | cosine_with_warmup |
| Learning Rate | 1.6e-4 |
| Global Batch Size | 1200 |
| Micro Batch Size | 5 |
## Technical Specifications
### Model Architecture and Objective
SEA-LION is a decoder model using the MPT architecture.
| Parameter | SEA-LION 3B |
|-----------------|:-----------:|
| Layers | 32 |
| d_model | 2560 |
| head_dim | 20 |
| Vocabulary | 256000 |
| Sequence Length | 2048 |
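One consequence of the 256K vocabulary is that the embedding table alone consumes a sizeable slice of the parameter budget. A back-of-the-envelope sketch, using the table figures above (the ~3B total used for the share is an assumption based on the model's name, not a stated figure):

```python
# Rough embedding-size arithmetic for SEA-LION 3B (figures from the table above).
VOCAB_SIZE = 256_000
D_MODEL = 2_560

embedding_params = VOCAB_SIZE * D_MODEL
assert embedding_params == 655_360_000  # ~0.66B parameters in the embedding alone

# Share of an (assumed) ~3B total parameter budget:
share = embedding_params / 3_000_000_000
assert round(share, 2) == 0.22
```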
### Tokenizer Details
We sample 20M lines from the training data to train the tokenizer.<br>
The framework for training is [SentencePiece](https://github.com/google/sentencepiece).<br>
The tokenizer type is Byte-Pair Encoding (BPE).
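Byte-Pair Encoding builds its vocabulary by repeatedly merging the most frequent adjacent symbol pair. The production tokenizer is trained with SentencePiece as noted above; the toy merge step below only illustrates the core BPE idea and is not the SEABPETokenizer itself:

```python
from collections import Counter

def most_frequent_pair(corpus):
    """Count adjacent symbol pairs across a tokenized corpus and return the top one."""
    pairs = Counter()
    for word in corpus:
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(corpus, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = []
    for word in corpus:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged.append(out)
    return merged

corpus = [list("lower"), list("lowest"), list("low")]
pair = most_frequent_pair(corpus)   # ('l', 'o') occurs in all three words
corpus = merge_pair(corpus, pair)
assert corpus[2] == ["lo", "w"]
```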
## The Team
Lam Wen Zhi Clarence<br>
Leong Wei Qi<br>
Li Yier<br>
Liu Bing Jie Darius<br>
Lovenia Holy<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Tat-Wee David<br>
Rengarajan Hamsawardhini<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teo Jin Howe<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
## Acknowledgements
AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
## Contact
For more information, please contact us via the [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6).
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the base model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claim, damages, or other liability
arising from the use of the released weights and code.
## References
### Thai Pre-Training Data Reference
```bibtex
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"CHIA"
] |
microsoft/BiomedNLP-BiomedBERT-large-uncased-abstract | microsoft | fill-mask | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"exbert",
"en",
"arxiv:2007.15779",
"arxiv:2112.07869",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-02T16:59:12Z | 2023-11-06T18:04:35+00:00 | 899 | 17 | ---
language: en
license: mit
tags:
- exbert
widget:
- text: '[MASK] is a tyrosine kinase inhibitor.'
---
## MSR BiomedBERT-large (abstracts only)
<div style="border: 2px solid orange; border-radius:10px; padding:0px 10px; width: fit-content;">
* This model was previously named **"PubMedBERT large (abstracts)"**.
* You can either adopt the new model name "microsoft/BiomedNLP-BiomedBERT-large-uncased-abstract" or update your `transformers` library to version 4.22+ if you need to refer to the old name.
</div>
Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. [Followup work](https://arxiv.org/abs/2112.07869) explores larger model sizes and the impact of these on performance on the BLURB benchmark.
This BiomedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/).
## Citation
If you find BiomedBERT useful in your research, please cite the following paper:
```latex
@misc{https://doi.org/10.48550/arxiv.2112.07869,
doi = {10.48550/ARXIV.2112.07869},
url = {https://arxiv.org/abs/2112.07869},
author = {Tinn, Robert and Cheng, Hao and Gu, Yu and Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao, Jianfeng and Poon, Hoifung},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Fine-Tuning Large Neural Language Models for Biomedical Natural Language Processing},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
<a href="https://huggingface.co/exbert/?model=microsoft/BiomedNLP-PubMedBERT-large-uncased-abstract&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=10&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| [
"BLURB"
] |
McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised | McGill-NLP | sentence-similarity | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | 2024-04-04T03:33:56Z | 2024-04-11T20:10:34+00:00 | 894 | 13 | ---
language:
- en
library_name: peft
license: mit
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
model-index:
- name: LLM2Vec-Mistral-7B-supervised
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.58208955223881
- type: ap
value: 41.45474097979136
- type: f1
value: 71.76059891468786
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.12039999999999
- type: ap
value: 88.01002974730474
- type: f1
value: 91.1049266954883
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.966
- type: f1
value: 48.908221884634386
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.788000000000004
- type: map_at_10
value: 48.665000000000006
- type: map_at_100
value: 49.501
- type: map_at_1000
value: 49.504
- type: map_at_3
value: 43.883
- type: map_at_5
value: 46.501
- type: mrr_at_1
value: 33.357
- type: mrr_at_10
value: 48.882
- type: mrr_at_100
value: 49.718
- type: mrr_at_1000
value: 49.721
- type: mrr_at_3
value: 44.025999999999996
- type: mrr_at_5
value: 46.732
- type: ndcg_at_1
value: 32.788000000000004
- type: ndcg_at_10
value: 57.483
- type: ndcg_at_100
value: 60.745000000000005
- type: ndcg_at_1000
value: 60.797000000000004
- type: ndcg_at_3
value: 47.534
- type: ndcg_at_5
value: 52.266
- type: precision_at_1
value: 32.788000000000004
- type: precision_at_10
value: 8.57
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.369
- type: precision_at_5
value: 13.926
- type: recall_at_1
value: 32.788000000000004
- type: recall_at_10
value: 85.70400000000001
- type: recall_at_100
value: 99.289
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 58.108000000000004
- type: recall_at_5
value: 69.63000000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 42.805075760047906
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 44.235789514284214
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.98320383943591
- type: mrr
value: 76.53189992525174
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 85.24411101959603
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.31493506493506
- type: f1
value: 88.28524975751309
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 34.27007175430729
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.52517776034658
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: cqadupstack/android
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.686
- type: map_at_10
value: 51.939
- type: map_at_100
value: 53.751000000000005
- type: map_at_1000
value: 53.846000000000004
- type: map_at_3
value: 48.296
- type: map_at_5
value: 50.312999999999995
- type: mrr_at_1
value: 49.641999999999996
- type: mrr_at_10
value: 59.157000000000004
- type: mrr_at_100
value: 59.85
- type: mrr_at_1000
value: 59.876
- type: mrr_at_3
value: 57.058
- type: mrr_at_5
value: 58.231
- type: ndcg_at_1
value: 49.641999999999996
- type: ndcg_at_10
value: 58.714
- type: ndcg_at_100
value: 63.776999999999994
- type: ndcg_at_1000
value: 64.95
- type: ndcg_at_3
value: 54.799
- type: ndcg_at_5
value: 56.372
- type: precision_at_1
value: 49.641999999999996
- type: precision_at_10
value: 11.373
- type: precision_at_100
value: 1.712
- type: precision_at_1000
value: 0.209
- type: precision_at_3
value: 27.229
- type: precision_at_5
value: 19.056
- type: recall_at_1
value: 38.686
- type: recall_at_10
value: 69.976
- type: recall_at_100
value: 90.512
- type: recall_at_1000
value: 97.64
- type: recall_at_3
value: 56.625
- type: recall_at_5
value: 62.348000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: cqadupstack/english
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.356
- type: map_at_10
value: 48.004000000000005
- type: map_at_100
value: 49.342999999999996
- type: map_at_1000
value: 49.461
- type: map_at_3
value: 44.692
- type: map_at_5
value: 46.576
- type: mrr_at_1
value: 46.561
- type: mrr_at_10
value: 54.547000000000004
- type: mrr_at_100
value: 55.159000000000006
- type: mrr_at_1000
value: 55.193000000000005
- type: mrr_at_3
value: 52.516
- type: mrr_at_5
value: 53.701
- type: ndcg_at_1
value: 46.561
- type: ndcg_at_10
value: 53.835
- type: ndcg_at_100
value: 57.92699999999999
- type: ndcg_at_1000
value: 59.671
- type: ndcg_at_3
value: 49.997
- type: ndcg_at_5
value: 51.714000000000006
- type: precision_at_1
value: 46.561
- type: precision_at_10
value: 10.344000000000001
- type: precision_at_100
value: 1.5779999999999998
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 24.437
- type: precision_at_5
value: 17.197000000000003
- type: recall_at_1
value: 36.356
- type: recall_at_10
value: 63.019000000000005
- type: recall_at_100
value: 80.55099999999999
- type: recall_at_1000
value: 91.38300000000001
- type: recall_at_3
value: 50.431000000000004
- type: recall_at_5
value: 56.00000000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: cqadupstack/gaming
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.736
- type: map_at_10
value: 60.775999999999996
- type: map_at_100
value: 61.755
- type: map_at_1000
value: 61.783
- type: map_at_3
value: 57.293000000000006
- type: map_at_5
value: 59.382000000000005
- type: mrr_at_1
value: 54.232
- type: mrr_at_10
value: 64.424
- type: mrr_at_100
value: 64.996
- type: mrr_at_1000
value: 65.009
- type: mrr_at_3
value: 62.226000000000006
- type: mrr_at_5
value: 63.592000000000006
- type: ndcg_at_1
value: 54.232
- type: ndcg_at_10
value: 66.654
- type: ndcg_at_100
value: 70.152
- type: ndcg_at_1000
value: 70.648
- type: ndcg_at_3
value: 61.405
- type: ndcg_at_5
value: 64.137
- type: precision_at_1
value: 54.232
- type: precision_at_10
value: 10.607999999999999
- type: precision_at_100
value: 1.321
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 27.544
- type: precision_at_5
value: 18.645999999999997
- type: recall_at_1
value: 46.736
- type: recall_at_10
value: 80.10199999999999
- type: recall_at_100
value: 94.976
- type: recall_at_1000
value: 98.402
- type: recall_at_3
value: 66.094
- type: recall_at_5
value: 73.028
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: cqadupstack/gis
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.238
- type: map_at_10
value: 39.798
- type: map_at_100
value: 40.892
- type: map_at_1000
value: 40.971000000000004
- type: map_at_3
value: 36.788
- type: map_at_5
value: 38.511
- type: mrr_at_1
value: 32.994
- type: mrr_at_10
value: 42.028
- type: mrr_at_100
value: 42.959
- type: mrr_at_1000
value: 43.010999999999996
- type: mrr_at_3
value: 39.322
- type: mrr_at_5
value: 40.977000000000004
- type: ndcg_at_1
value: 32.994
- type: ndcg_at_10
value: 45.062000000000005
- type: ndcg_at_100
value: 50.166999999999994
- type: ndcg_at_1000
value: 51.961
- type: ndcg_at_3
value: 39.378
- type: ndcg_at_5
value: 42.281
- type: precision_at_1
value: 32.994
- type: precision_at_10
value: 6.836
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 16.384
- type: precision_at_5
value: 11.548
- type: recall_at_1
value: 30.238
- type: recall_at_10
value: 59.080999999999996
- type: recall_at_100
value: 82.033
- type: recall_at_1000
value: 95.281
- type: recall_at_3
value: 43.902
- type: recall_at_5
value: 50.952
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: cqadupstack/mathematica
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.512999999999998
- type: map_at_10
value: 31.339
- type: map_at_100
value: 32.651
- type: map_at_1000
value: 32.762
- type: map_at_3
value: 27.590999999999998
- type: map_at_5
value: 29.946
- type: mrr_at_1
value: 26.866
- type: mrr_at_10
value: 36.525
- type: mrr_at_100
value: 37.357
- type: mrr_at_1000
value: 37.419999999999995
- type: mrr_at_3
value: 33.085
- type: mrr_at_5
value: 35.379
- type: ndcg_at_1
value: 26.866
- type: ndcg_at_10
value: 37.621
- type: ndcg_at_100
value: 43.031000000000006
- type: ndcg_at_1000
value: 45.573
- type: ndcg_at_3
value: 31.046000000000003
- type: ndcg_at_5
value: 34.709
- type: precision_at_1
value: 26.866
- type: precision_at_10
value: 7.052
- type: precision_at_100
value: 1.117
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 14.884
- type: precision_at_5
value: 11.517
- type: recall_at_1
value: 21.512999999999998
- type: recall_at_10
value: 51.751999999999995
- type: recall_at_100
value: 74.34100000000001
- type: recall_at_1000
value: 92.426
- type: recall_at_3
value: 34.008
- type: recall_at_5
value: 43.075
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: cqadupstack/physics
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.327
- type: map_at_10
value: 47.783
- type: map_at_100
value: 49.153999999999996
- type: map_at_1000
value: 49.260999999999996
- type: map_at_3
value: 44.145
- type: map_at_5
value: 46.207
- type: mrr_at_1
value: 44.37
- type: mrr_at_10
value: 53.864999999999995
- type: mrr_at_100
value: 54.625
- type: mrr_at_1000
value: 54.662
- type: mrr_at_3
value: 51.604000000000006
- type: mrr_at_5
value: 52.894
- type: ndcg_at_1
value: 44.37
- type: ndcg_at_10
value: 54.054
- type: ndcg_at_100
value: 59.168
- type: ndcg_at_1000
value: 60.769
- type: ndcg_at_3
value: 49.091
- type: ndcg_at_5
value: 51.444
- type: precision_at_1
value: 44.37
- type: precision_at_10
value: 9.827
- type: precision_at_100
value: 1.456
- type: precision_at_1000
value: 0.17600000000000002
- type: precision_at_3
value: 23.580000000000002
- type: precision_at_5
value: 16.554
- type: recall_at_1
value: 35.327
- type: recall_at_10
value: 66.43900000000001
- type: recall_at_100
value: 87.41600000000001
- type: recall_at_1000
value: 97.37400000000001
- type: recall_at_3
value: 51.64
- type: recall_at_5
value: 58.242000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: cqadupstack/programmers
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.397999999999996
- type: map_at_10
value: 44.932
- type: map_at_100
value: 46.336
- type: map_at_1000
value: 46.421
- type: map_at_3
value: 41.128
- type: map_at_5
value: 43.364999999999995
- type: mrr_at_1
value: 41.324
- type: mrr_at_10
value: 51.080000000000005
- type: mrr_at_100
value: 51.878
- type: mrr_at_1000
value: 51.910000000000004
- type: mrr_at_3
value: 48.382999999999996
- type: mrr_at_5
value: 50.004000000000005
- type: ndcg_at_1
value: 41.324
- type: ndcg_at_10
value: 51.466
- type: ndcg_at_100
value: 56.874
- type: ndcg_at_1000
value: 58.321999999999996
- type: ndcg_at_3
value: 45.928999999999995
- type: ndcg_at_5
value: 48.532
- type: precision_at_1
value: 41.324
- type: precision_at_10
value: 9.565999999999999
- type: precision_at_100
value: 1.428
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 22.184
- type: precision_at_5
value: 15.867999999999999
- type: recall_at_1
value: 32.397999999999996
- type: recall_at_10
value: 64.512
- type: recall_at_100
value: 87.425
- type: recall_at_1000
value: 96.937
- type: recall_at_3
value: 48.513
- type: recall_at_5
value: 55.721
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.001916666666666
- type: map_at_10
value: 42.91216666666667
- type: map_at_100
value: 44.21125000000001
- type: map_at_1000
value: 44.314166666666665
- type: map_at_3
value: 39.579
- type: map_at_5
value: 41.497166666666665
- type: mrr_at_1
value: 38.669583333333335
- type: mrr_at_10
value: 47.708
- type: mrr_at_100
value: 48.4875
- type: mrr_at_1000
value: 48.530833333333334
- type: mrr_at_3
value: 45.196333333333335
- type: mrr_at_5
value: 46.702999999999996
- type: ndcg_at_1
value: 38.669583333333335
- type: ndcg_at_10
value: 48.842
- type: ndcg_at_100
value: 53.79400000000001
- type: ndcg_at_1000
value: 55.566416666666676
- type: ndcg_at_3
value: 43.70975
- type: ndcg_at_5
value: 46.204499999999996
- type: precision_at_1
value: 38.669583333333335
- type: precision_at_10
value: 8.652999999999999
- type: precision_at_100
value: 1.3168333333333333
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 20.343249999999998
- type: precision_at_5
value: 14.426
- type: recall_at_1
value: 32.001916666666666
- type: recall_at_10
value: 61.31158333333334
- type: recall_at_100
value: 82.80691666666667
- type: recall_at_1000
value: 94.977
- type: recall_at_3
value: 46.63558333333333
- type: recall_at_5
value: 53.32383333333334
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: cqadupstack/stats
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.311999999999998
- type: map_at_10
value: 37.735
- type: map_at_100
value: 38.702
- type: map_at_1000
value: 38.803
- type: map_at_3
value: 35.17
- type: map_at_5
value: 36.6
- type: mrr_at_1
value: 33.282000000000004
- type: mrr_at_10
value: 41.059
- type: mrr_at_100
value: 41.881
- type: mrr_at_1000
value: 41.943000000000005
- type: mrr_at_3
value: 38.829
- type: mrr_at_5
value: 40.11
- type: ndcg_at_1
value: 33.282000000000004
- type: ndcg_at_10
value: 42.625
- type: ndcg_at_100
value: 47.313
- type: ndcg_at_1000
value: 49.683
- type: ndcg_at_3
value: 38.043
- type: ndcg_at_5
value: 40.217999999999996
- type: precision_at_1
value: 33.282000000000004
- type: precision_at_10
value: 6.748
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 16.462
- type: precision_at_5
value: 11.411
- type: recall_at_1
value: 29.311999999999998
- type: recall_at_10
value: 54.294
- type: recall_at_100
value: 75.82
- type: recall_at_1000
value: 93.19800000000001
- type: recall_at_3
value: 41.382999999999996
- type: recall_at_5
value: 46.898
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: cqadupstack/tex
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.823
- type: map_at_10
value: 31.682
- type: map_at_100
value: 32.864
- type: map_at_1000
value: 32.988
- type: map_at_3
value: 28.878999999999998
- type: map_at_5
value: 30.459000000000003
- type: mrr_at_1
value: 28.63
- type: mrr_at_10
value: 36.672
- type: mrr_at_100
value: 37.519999999999996
- type: mrr_at_1000
value: 37.588
- type: mrr_at_3
value: 34.262
- type: mrr_at_5
value: 35.653
- type: ndcg_at_1
value: 28.63
- type: ndcg_at_10
value: 37.158
- type: ndcg_at_100
value: 42.4
- type: ndcg_at_1000
value: 45.001000000000005
- type: ndcg_at_3
value: 32.529
- type: ndcg_at_5
value: 34.673
- type: precision_at_1
value: 28.63
- type: precision_at_10
value: 6.848
- type: precision_at_100
value: 1.111
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 15.623000000000001
- type: precision_at_5
value: 11.218
- type: recall_at_1
value: 22.823
- type: recall_at_10
value: 48.559000000000005
- type: recall_at_100
value: 72.048
- type: recall_at_1000
value: 90.322
- type: recall_at_3
value: 35.134
- type: recall_at_5
value: 40.897
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: cqadupstack/unix
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.79
- type: map_at_10
value: 43.578
- type: map_at_100
value: 44.782
- type: map_at_1000
value: 44.869
- type: map_at_3
value: 39.737
- type: map_at_5
value: 41.92
- type: mrr_at_1
value: 39.086
- type: mrr_at_10
value: 48.135
- type: mrr_at_100
value: 48.949
- type: mrr_at_1000
value: 48.995
- type: mrr_at_3
value: 45.086999999999996
- type: mrr_at_5
value: 46.939
- type: ndcg_at_1
value: 39.086
- type: ndcg_at_10
value: 49.736999999999995
- type: ndcg_at_100
value: 54.818999999999996
- type: ndcg_at_1000
value: 56.515
- type: ndcg_at_3
value: 43.503
- type: ndcg_at_5
value: 46.499
- type: precision_at_1
value: 39.086
- type: precision_at_10
value: 8.685
- type: precision_at_100
value: 1.2449999999999999
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 19.963
- type: precision_at_5
value: 14.366000000000001
- type: recall_at_1
value: 32.79
- type: recall_at_10
value: 63.766
- type: recall_at_100
value: 85.465
- type: recall_at_1000
value: 96.90299999999999
- type: recall_at_3
value: 46.515
- type: recall_at_5
value: 54.178000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: cqadupstack/webmasters
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.896
- type: map_at_10
value: 41.241
- type: map_at_100
value: 43.178
- type: map_at_1000
value: 43.395
- type: map_at_3
value: 37.702999999999996
- type: map_at_5
value: 39.524
- type: mrr_at_1
value: 36.364000000000004
- type: mrr_at_10
value: 46.184999999999995
- type: mrr_at_100
value: 47.051
- type: mrr_at_1000
value: 47.085
- type: mrr_at_3
value: 43.478
- type: mrr_at_5
value: 44.98
- type: ndcg_at_1
value: 36.364000000000004
- type: ndcg_at_10
value: 48.044
- type: ndcg_at_100
value: 53.818999999999996
- type: ndcg_at_1000
value: 55.504
- type: ndcg_at_3
value: 42.604
- type: ndcg_at_5
value: 44.971
- type: precision_at_1
value: 36.364000000000004
- type: precision_at_10
value: 9.664
- type: precision_at_100
value: 1.917
- type: precision_at_1000
value: 0.255
- type: precision_at_3
value: 20.487
- type: precision_at_5
value: 14.862
- type: recall_at_1
value: 29.896
- type: recall_at_10
value: 60.28
- type: recall_at_100
value: 86.271
- type: recall_at_1000
value: 97.121
- type: recall_at_3
value: 44.885999999999996
- type: recall_at_5
value: 51.351
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: cqadupstack/wordpress
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.948
- type: map_at_10
value: 36.138999999999996
- type: map_at_100
value: 37.126999999999995
- type: map_at_1000
value: 37.21
- type: map_at_3
value: 33.526
- type: map_at_5
value: 35.163
- type: mrr_at_1
value: 30.684
- type: mrr_at_10
value: 38.818999999999996
- type: mrr_at_100
value: 39.625
- type: mrr_at_1000
value: 39.678000000000004
- type: mrr_at_3
value: 36.506
- type: mrr_at_5
value: 37.976
- type: ndcg_at_1
value: 30.684
- type: ndcg_at_10
value: 41.134
- type: ndcg_at_100
value: 46.081
- type: ndcg_at_1000
value: 48.199999999999996
- type: ndcg_at_3
value: 36.193
- type: ndcg_at_5
value: 38.903999999999996
- type: precision_at_1
value: 30.684
- type: precision_at_10
value: 6.285
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.342
- type: precision_at_5
value: 10.869
- type: recall_at_1
value: 27.948
- type: recall_at_10
value: 53.959
- type: recall_at_100
value: 76.825
- type: recall_at_1000
value: 92.73700000000001
- type: recall_at_3
value: 40.495999999999995
- type: recall_at_5
value: 47.196
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.27
- type: map_at_10
value: 25.570999999999998
- type: map_at_100
value: 27.664
- type: map_at_1000
value: 27.848
- type: map_at_3
value: 21.224
- type: map_at_5
value: 23.508000000000003
- type: mrr_at_1
value: 34.137
- type: mrr_at_10
value: 46.583000000000006
- type: mrr_at_100
value: 47.339999999999996
- type: mrr_at_1000
value: 47.370000000000005
- type: mrr_at_3
value: 43.376999999999995
- type: mrr_at_5
value: 45.26
- type: ndcg_at_1
value: 34.137
- type: ndcg_at_10
value: 35.189
- type: ndcg_at_100
value: 42.568
- type: ndcg_at_1000
value: 45.660000000000004
- type: ndcg_at_3
value: 28.965000000000003
- type: ndcg_at_5
value: 31.169999999999998
- type: precision_at_1
value: 34.137
- type: precision_at_10
value: 10.971
- type: precision_at_100
value: 1.8870000000000002
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 21.368000000000002
- type: precision_at_5
value: 16.573
- type: recall_at_1
value: 15.27
- type: recall_at_10
value: 41.516999999999996
- type: recall_at_100
value: 66.486
- type: recall_at_1000
value: 83.533
- type: recall_at_3
value: 26.325
- type: recall_at_5
value: 32.574
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.982000000000001
- type: map_at_10
value: 23.724999999999998
- type: map_at_100
value: 33.933
- type: map_at_1000
value: 35.965
- type: map_at_3
value: 16.158
- type: map_at_5
value: 19.433
- type: mrr_at_1
value: 75.75
- type: mrr_at_10
value: 82.065
- type: mrr_at_100
value: 82.334
- type: mrr_at_1000
value: 82.34
- type: mrr_at_3
value: 80.708
- type: mrr_at_5
value: 81.671
- type: ndcg_at_1
value: 63.625
- type: ndcg_at_10
value: 49.576
- type: ndcg_at_100
value: 53.783
- type: ndcg_at_1000
value: 61.012
- type: ndcg_at_3
value: 53.822
- type: ndcg_at_5
value: 51.72
- type: precision_at_1
value: 75.75
- type: precision_at_10
value: 39.925
- type: precision_at_100
value: 12.525
- type: precision_at_1000
value: 2.399
- type: precision_at_3
value: 56.667
- type: precision_at_5
value: 50.5
- type: recall_at_1
value: 9.982000000000001
- type: recall_at_10
value: 29.325000000000003
- type: recall_at_100
value: 59.181
- type: recall_at_1000
value: 82.095
- type: recall_at_3
value: 17.338
- type: recall_at_5
value: 22.216
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.04500000000001
- type: f1
value: 47.32462453881906
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 78.68
- type: map_at_10
value: 86.207
- type: map_at_100
value: 86.375
- type: map_at_1000
value: 86.388
- type: map_at_3
value: 85.35199999999999
- type: map_at_5
value: 85.954
- type: mrr_at_1
value: 84.923
- type: mrr_at_10
value: 90.902
- type: mrr_at_100
value: 90.952
- type: mrr_at_1000
value: 90.952
- type: mrr_at_3
value: 90.489
- type: mrr_at_5
value: 90.822
- type: ndcg_at_1
value: 84.923
- type: ndcg_at_10
value: 89.403
- type: ndcg_at_100
value: 90.023
- type: ndcg_at_1000
value: 90.235
- type: ndcg_at_3
value: 88.24300000000001
- type: ndcg_at_5
value: 89.005
- type: precision_at_1
value: 84.923
- type: precision_at_10
value: 10.495000000000001
- type: precision_at_100
value: 1.103
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 33.358
- type: precision_at_5
value: 20.579
- type: recall_at_1
value: 78.68
- type: recall_at_10
value: 94.622
- type: recall_at_100
value: 97.083
- type: recall_at_1000
value: 98.348
- type: recall_at_3
value: 91.499
- type: recall_at_5
value: 93.486
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.781
- type: map_at_10
value: 44.669
- type: map_at_100
value: 46.831
- type: map_at_1000
value: 46.96
- type: map_at_3
value: 38.714
- type: map_at_5
value: 42.186
- type: mrr_at_1
value: 51.235
- type: mrr_at_10
value: 60.083
- type: mrr_at_100
value: 60.675999999999995
- type: mrr_at_1000
value: 60.706
- type: mrr_at_3
value: 57.665
- type: mrr_at_5
value: 59.084
- type: ndcg_at_1
value: 51.235
- type: ndcg_at_10
value: 53.111
- type: ndcg_at_100
value: 59.57900000000001
- type: ndcg_at_1000
value: 61.57
- type: ndcg_at_3
value: 48.397
- type: ndcg_at_5
value: 50.169
- type: precision_at_1
value: 51.235
- type: precision_at_10
value: 14.877
- type: precision_at_100
value: 2.173
- type: precision_at_1000
value: 0.253
- type: precision_at_3
value: 32.87
- type: precision_at_5
value: 24.29
- type: recall_at_1
value: 25.781
- type: recall_at_10
value: 61.464
- type: recall_at_100
value: 84.244
- type: recall_at_1000
value: 96.039
- type: recall_at_3
value: 44.105
- type: recall_at_5
value: 52.205999999999996
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.041
- type: map_at_10
value: 66.622
- type: map_at_100
value: 67.472
- type: map_at_1000
value: 67.52
- type: map_at_3
value: 62.81099999999999
- type: map_at_5
value: 65.23
- type: mrr_at_1
value: 78.082
- type: mrr_at_10
value: 83.827
- type: mrr_at_100
value: 84.03
- type: mrr_at_1000
value: 84.036
- type: mrr_at_3
value: 82.894
- type: mrr_at_5
value: 83.482
- type: ndcg_at_1
value: 78.082
- type: ndcg_at_10
value: 74.068
- type: ndcg_at_100
value: 76.981
- type: ndcg_at_1000
value: 77.887
- type: ndcg_at_3
value: 68.77600000000001
- type: ndcg_at_5
value: 71.763
- type: precision_at_1
value: 78.082
- type: precision_at_10
value: 15.822
- type: precision_at_100
value: 1.807
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 44.956
- type: precision_at_5
value: 29.332
- type: recall_at_1
value: 39.041
- type: recall_at_10
value: 79.109
- type: recall_at_100
value: 90.371
- type: recall_at_1000
value: 96.313
- type: recall_at_3
value: 67.43400000000001
- type: recall_at_5
value: 73.329
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 87.422
- type: ap
value: 83.07360776629146
- type: f1
value: 87.38583428778229
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.715999999999998
- type: map_at_10
value: 34.821000000000005
- type: map_at_100
value: 36.022999999999996
- type: map_at_1000
value: 36.067
- type: map_at_3
value: 30.666
- type: map_at_5
value: 33.134
- type: mrr_at_1
value: 22.421
- type: mrr_at_10
value: 35.461
- type: mrr_at_100
value: 36.6
- type: mrr_at_1000
value: 36.638
- type: mrr_at_3
value: 31.413999999999998
- type: mrr_at_5
value: 33.823
- type: ndcg_at_1
value: 22.421
- type: ndcg_at_10
value: 42.169000000000004
- type: ndcg_at_100
value: 47.887
- type: ndcg_at_1000
value: 48.939
- type: ndcg_at_3
value: 33.786
- type: ndcg_at_5
value: 38.164
- type: precision_at_1
value: 22.421
- type: precision_at_10
value: 6.773999999999999
- type: precision_at_100
value: 0.962
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.575
- type: precision_at_5
value: 10.963000000000001
- type: recall_at_1
value: 21.715999999999998
- type: recall_at_10
value: 64.75999999999999
- type: recall_at_100
value: 91.015
- type: recall_at_1000
value: 98.96000000000001
- type: recall_at_3
value: 42.089999999999996
- type: recall_at_5
value: 52.578
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.04195166438669
- type: f1
value: 95.76962987454031
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 84.76744186046513
- type: f1
value: 70.3328215706764
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 79.29051782111635
- type: f1
value: 77.0837414890434
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 81.64425016812373
- type: f1
value: 81.36288379329044
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.0673311773222
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.266850505047234
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.49575275757744
- type: mrr
value: 32.64979714009148
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.151
- type: map_at_10
value: 14.879999999999999
- type: map_at_100
value: 19.445999999999998
- type: map_at_1000
value: 21.101
- type: map_at_3
value: 10.613999999999999
- type: map_at_5
value: 12.709000000000001
- type: mrr_at_1
value: 51.393
- type: mrr_at_10
value: 59.935
- type: mrr_at_100
value: 60.455000000000005
- type: mrr_at_1000
value: 60.485
- type: mrr_at_3
value: 57.894999999999996
- type: mrr_at_5
value: 59.303
- type: ndcg_at_1
value: 50.0
- type: ndcg_at_10
value: 39.324999999999996
- type: ndcg_at_100
value: 37.133
- type: ndcg_at_1000
value: 45.663
- type: ndcg_at_3
value: 45.294000000000004
- type: ndcg_at_5
value: 42.88
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 29.412
- type: precision_at_100
value: 9.666
- type: precision_at_1000
value: 2.263
- type: precision_at_3
value: 42.415000000000006
- type: precision_at_5
value: 37.399
- type: recall_at_1
value: 6.151
- type: recall_at_10
value: 19.121
- type: recall_at_100
value: 39.012
- type: recall_at_1000
value: 70.726
- type: recall_at_3
value: 11.855
- type: recall_at_5
value: 15.204
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.382
- type: map_at_10
value: 53.657
- type: map_at_100
value: 54.547999999999995
- type: map_at_1000
value: 54.562999999999995
- type: map_at_3
value: 49.236999999999995
- type: map_at_5
value: 51.949
- type: mrr_at_1
value: 41.309000000000005
- type: mrr_at_10
value: 56.25599999999999
- type: mrr_at_100
value: 56.855999999999995
- type: mrr_at_1000
value: 56.867000000000004
- type: mrr_at_3
value: 52.891999999999996
- type: mrr_at_5
value: 54.99699999999999
- type: ndcg_at_1
value: 41.28
- type: ndcg_at_10
value: 61.702999999999996
- type: ndcg_at_100
value: 65.092
- type: ndcg_at_1000
value: 65.392
- type: ndcg_at_3
value: 53.722
- type: ndcg_at_5
value: 58.11300000000001
- type: precision_at_1
value: 41.28
- type: precision_at_10
value: 10.014000000000001
- type: precision_at_100
value: 1.187
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.614
- type: precision_at_5
value: 17.317
- type: recall_at_1
value: 36.382
- type: recall_at_10
value: 83.38600000000001
- type: recall_at_100
value: 97.528
- type: recall_at_1000
value: 99.696
- type: recall_at_3
value: 63.053000000000004
- type: recall_at_5
value: 73.16
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.577
- type: map_at_10
value: 83.944
- type: map_at_100
value: 84.604
- type: map_at_1000
value: 84.61800000000001
- type: map_at_3
value: 80.93599999999999
- type: map_at_5
value: 82.812
- type: mrr_at_1
value: 80.4
- type: mrr_at_10
value: 86.734
- type: mrr_at_100
value: 86.851
- type: mrr_at_1000
value: 86.85199999999999
- type: mrr_at_3
value: 85.75500000000001
- type: mrr_at_5
value: 86.396
- type: ndcg_at_1
value: 80.43
- type: ndcg_at_10
value: 87.75
- type: ndcg_at_100
value: 88.999
- type: ndcg_at_1000
value: 89.092
- type: ndcg_at_3
value: 84.88
- type: ndcg_at_5
value: 86.416
- type: precision_at_1
value: 80.43
- type: precision_at_10
value: 13.453000000000001
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.403
- type: precision_at_5
value: 24.648
- type: recall_at_1
value: 69.577
- type: recall_at_10
value: 95.233
- type: recall_at_100
value: 99.531
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 86.867
- type: recall_at_5
value: 91.254
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 60.23690763558931
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.12391112159126
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.288
- type: map_at_10
value: 13.611999999999998
- type: map_at_100
value: 15.909
- type: map_at_1000
value: 16.235
- type: map_at_3
value: 9.644
- type: map_at_5
value: 11.559
- type: mrr_at_1
value: 26.1
- type: mrr_at_10
value: 37.571
- type: mrr_at_100
value: 38.72
- type: mrr_at_1000
value: 38.76
- type: mrr_at_3
value: 34.383
- type: mrr_at_5
value: 36.187999999999995
- type: ndcg_at_1
value: 26.1
- type: ndcg_at_10
value: 22.497
- type: ndcg_at_100
value: 31.098
- type: ndcg_at_1000
value: 36.434
- type: ndcg_at_3
value: 21.401
- type: ndcg_at_5
value: 18.66
- type: precision_at_1
value: 26.1
- type: precision_at_10
value: 11.67
- type: precision_at_100
value: 2.405
- type: precision_at_1000
value: 0.368
- type: precision_at_3
value: 20.0
- type: precision_at_5
value: 16.34
- type: recall_at_1
value: 5.288
- type: recall_at_10
value: 23.652
- type: recall_at_100
value: 48.79
- type: recall_at_1000
value: 74.703
- type: recall_at_3
value: 12.158
- type: recall_at_5
value: 16.582
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 83.6969699802343
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 78.8031221769135
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 86.37435789895171
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 84.04036612478626
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 88.99055778929946
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 87.22140434759893
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 90.1862731405498
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 67.67995229420237
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 88.65370934976113
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.79832393152147
- type: mrr
value: 95.78404438698557
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 64.883
- type: map_at_10
value: 74.48
- type: map_at_100
value: 74.85000000000001
- type: map_at_1000
value: 74.861
- type: map_at_3
value: 71.596
- type: map_at_5
value: 73.545
- type: mrr_at_1
value: 67.667
- type: mrr_at_10
value: 75.394
- type: mrr_at_100
value: 75.644
- type: mrr_at_1000
value: 75.655
- type: mrr_at_3
value: 73.5
- type: mrr_at_5
value: 74.63300000000001
- type: ndcg_at_1
value: 67.667
- type: ndcg_at_10
value: 78.855
- type: ndcg_at_100
value: 80.361
- type: ndcg_at_1000
value: 80.624
- type: ndcg_at_3
value: 74.37899999999999
- type: ndcg_at_5
value: 76.89200000000001
- type: precision_at_1
value: 67.667
- type: precision_at_10
value: 10.267
- type: precision_at_100
value: 1.11
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.778
- type: precision_at_5
value: 19.133
- type: recall_at_1
value: 64.883
- type: recall_at_10
value: 91.2
- type: recall_at_100
value: 98.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.406
- type: recall_at_5
value: 85.578
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85445544554456
- type: cos_sim_ap
value: 96.81785428870712
- type: cos_sim_f1
value: 92.67563527653213
- type: cos_sim_precision
value: 92.35352532274081
- type: cos_sim_recall
value: 93.0
- type: dot_accuracy
value: 99.75643564356436
- type: dot_ap
value: 94.46746929160422
- type: dot_f1
value: 87.74900398406375
- type: dot_precision
value: 87.40079365079364
- type: dot_recall
value: 88.1
- type: euclidean_accuracy
value: 99.85445544554456
- type: euclidean_ap
value: 96.59180137299155
- type: euclidean_f1
value: 92.48850281042411
- type: euclidean_precision
value: 94.56635318704284
- type: euclidean_recall
value: 90.5
- type: manhattan_accuracy
value: 99.85643564356435
- type: manhattan_ap
value: 96.66599616275849
- type: manhattan_f1
value: 92.69746646795828
- type: manhattan_precision
value: 92.10266535044423
- type: manhattan_recall
value: 93.30000000000001
- type: max_accuracy
value: 99.85643564356435
- type: max_ap
value: 96.81785428870712
- type: max_f1
value: 92.69746646795828
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 70.72970157362414
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.49706344517027
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.41010678297881
- type: mrr
value: 55.15095811051693
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.5030094989814
- type: cos_sim_spearman
value: 29.959138274084797
- type: dot_pearson
value: 29.740134155639076
- type: dot_spearman
value: 29.18174652067779
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22200000000000003
- type: map_at_10
value: 1.925
- type: map_at_100
value: 13.150999999999998
- type: map_at_1000
value: 33.410000000000004
- type: map_at_3
value: 0.631
- type: map_at_5
value: 0.9990000000000001
- type: mrr_at_1
value: 82.0
- type: mrr_at_10
value: 90.0
- type: mrr_at_100
value: 90.0
- type: mrr_at_1000
value: 90.0
- type: mrr_at_3
value: 89.0
- type: mrr_at_5
value: 90.0
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 77.69200000000001
- type: ndcg_at_100
value: 64.89
- type: ndcg_at_1000
value: 59.748999999999995
- type: ndcg_at_3
value: 79.296
- type: ndcg_at_5
value: 78.63
- type: precision_at_1
value: 82.0
- type: precision_at_10
value: 82.19999999999999
- type: precision_at_100
value: 67.52
- type: precision_at_1000
value: 26.512
- type: precision_at_3
value: 83.333
- type: precision_at_5
value: 83.2
- type: recall_at_1
value: 0.22200000000000003
- type: recall_at_10
value: 2.164
- type: recall_at_100
value: 16.608
- type: recall_at_1000
value: 56.89999999999999
- type: recall_at_3
value: 0.658
- type: recall_at_5
value: 1.084
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.8519999999999999
- type: map_at_10
value: 8.569
- type: map_at_100
value: 14.238999999999999
- type: map_at_1000
value: 15.876000000000001
- type: map_at_3
value: 3.9859999999999998
- type: map_at_5
value: 5.785
- type: mrr_at_1
value: 26.531
- type: mrr_at_10
value: 40.581
- type: mrr_at_100
value: 41.379
- type: mrr_at_1000
value: 41.388999999999996
- type: mrr_at_3
value: 35.034
- type: mrr_at_5
value: 38.299
- type: ndcg_at_1
value: 25.509999999999998
- type: ndcg_at_10
value: 22.18
- type: ndcg_at_100
value: 34.695
- type: ndcg_at_1000
value: 46.854
- type: ndcg_at_3
value: 23.112
- type: ndcg_at_5
value: 23.089000000000002
- type: precision_at_1
value: 26.531
- type: precision_at_10
value: 20.408
- type: precision_at_100
value: 7.428999999999999
- type: precision_at_1000
value: 1.559
- type: precision_at_3
value: 23.810000000000002
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 1.8519999999999999
- type: recall_at_10
value: 15.038000000000002
- type: recall_at_100
value: 46.499
- type: recall_at_1000
value: 84.11800000000001
- type: recall_at_3
value: 5.179
- type: recall_at_5
value: 8.758000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.26140000000001
- type: ap
value: 14.138284541193421
- type: f1
value: 53.715363590501916
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.136389360498015
- type: f1
value: 62.33290824449911
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 52.18306009684791
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.27561542588067
- type: cos_sim_ap
value: 80.59558041410928
- type: cos_sim_f1
value: 73.54724608388075
- type: cos_sim_precision
value: 70.55259331071255
- type: cos_sim_recall
value: 76.80738786279684
- type: dot_accuracy
value: 85.00923883888657
- type: dot_ap
value: 71.76942851966301
- type: dot_f1
value: 66.84518013631937
- type: dot_precision
value: 62.042476276547674
- type: dot_recall
value: 72.45382585751979
- type: euclidean_accuracy
value: 88.26965488466352
- type: euclidean_ap
value: 80.44398056118867
- type: euclidean_f1
value: 73.28244274809161
- type: euclidean_precision
value: 68.69806094182826
- type: euclidean_recall
value: 78.52242744063325
- type: manhattan_accuracy
value: 88.25773380222924
- type: manhattan_ap
value: 80.25000483445007
- type: manhattan_f1
value: 73.10447023956533
- type: manhattan_precision
value: 68.70937790157846
- type: manhattan_recall
value: 78.10026385224275
- type: max_accuracy
value: 88.27561542588067
- type: max_ap
value: 80.59558041410928
- type: max_f1
value: 73.54724608388075
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.52536189700004
- type: cos_sim_ap
value: 86.55972191277392
- type: cos_sim_f1
value: 79.31733569243245
- type: cos_sim_precision
value: 76.08372816632487
- type: cos_sim_recall
value: 82.83800431167231
- type: dot_accuracy
value: 87.77506112469437
- type: dot_ap
value: 82.92833178514168
- type: dot_f1
value: 76.12050479839702
- type: dot_precision
value: 70.03687172520861
- type: dot_recall
value: 83.3615645210964
- type: euclidean_accuracy
value: 89.3643031784841
- type: euclidean_ap
value: 86.45902920741383
- type: euclidean_f1
value: 79.4788514062154
- type: euclidean_precision
value: 76.32922160782645
- type: euclidean_recall
value: 82.89959963042809
- type: manhattan_accuracy
value: 89.38564830985369
- type: manhattan_ap
value: 86.47558438668958
- type: manhattan_f1
value: 79.46758328152997
- type: manhattan_precision
value: 75.67379343965457
- type: manhattan_recall
value: 83.66184170003079
- type: max_accuracy
value: 89.52536189700004
- type: max_ap
value: 86.55972191277392
- type: max_f1
value: 79.4788514062154
---
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading base Mistral model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
)
model = model.merge_and_unload()  # This can take several minutes on CPU
# Loading supervised model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + supervised (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.5485, 0.0551],
[0.0565, 0.5425]])
"""
```
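The final step above is just cosine similarity between unit-normalized embedding rows. As a minimal, torch-free sketch of what that computation does (toy vectors below stand in for `q_reps` / `d_reps` rows and are purely illustrative):

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for one query row and two document rows.
q = [0.2, 0.7, 0.1]
d_relevant = [0.25, 0.65, 0.05]
d_unrelated = [0.9, -0.1, 0.4]

# The relevant document scores higher, mirroring the diagonal dominance
# in the tensor printed above.
print(cosine_similarity(q, d_relevant) > cosine_similarity(q, d_unrelated))
```

Normalizing first (as `torch.nn.functional.normalize` does) makes the subsequent matrix multiply equivalent to this per-pair cosine.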
## Questions
If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). | [
"BIOSSES",
"SCIFACT"
] |
hezarai/CRAFT | hezarai | object-detection | [
"hezar",
"object-detection",
"fa",
"en",
"region:us"
] | 2024-07-04T17:19:07Z | 2025-02-06T09:17:13+00:00 | 890 | 5 | ---
language:
- fa
- en
library_name: hezar
pipeline_tag: object-detection
tags:
- hezar
---
## CRAFT: Character-Region Awareness For Text detection
CRAFT is a multilingual text detection model; the original implementation is located at https://github.com/clovaai/CRAFT-pytorch. This repo is compatible only with the [Hezar](https://github.com/hezarai/hezar) package (**>=v0.40.0**) and is intended for text detection in Persian, but other languages should work too.
### Usage
```bash
pip install hezar[vision]
```
```python
from hezar.models import Model
from hezar.utils import load_image, draw_boxes, show_image
model = Model.load("hezarai/CRAFT", device="cuda")
image = load_image("../assets/text_detection_example.jpg")
outputs = model.predict(image)
result_image = draw_boxes(image, outputs[0]["boxes"])
show_image(result_image, "text_detected")
``` | [
"CRAFT"
] |
RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2408.06142",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-22T12:53:05Z | 2024-08-22T14:51:18+00:00 | 885 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama3-Med42-8B - GGUF
- Model creator: https://huggingface.co/m42-health/
- Original model: https://huggingface.co/m42-health/Llama3-Med42-8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama3-Med42-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama3-Med42-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama3-Med42-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama3-Med42-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama3-Med42-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama3-Med42-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama3-Med42-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama3-Med42-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama3-Med42-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama3-Med42-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama3-Med42-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama3-Med42-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama3-Med42-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama3-Med42-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama3-Med42-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama3-Med42-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama3-Med42-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama3-Med42-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama3-Med42-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama3-Med42-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama3-Med42-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama3-Med42-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
language:
- en
license: llama3
tags:
- m42
- health
- healthcare
- clinical-llm
pipeline_tag: text-generation
inference: false
license_name: llama3
---
# **Med42-v2 - A Suite of Clinically-aligned Large Language Models**
Med42-v2 is a suite of open-access clinical large language models (LLMs), instruction- and preference-tuned by M42 to expand access to medical knowledge. Built on Llama-3 and comprising either 8 or 70 billion parameters, these generative AI systems provide high-quality answers to medical questions.
## Key performance metrics:
- Med42-v2-70B outperforms GPT-4.0 in most of the MCQA tasks.
- Med42-v2-70B achieves a MedQA zero-shot performance of 79.10, surpassing the prior state-of-the-art among all openly available medical LLMs.
- Med42-v2-70B sits at the top of the Clinical Elo Rating Leaderboard.
|Models|Elo Score|
|:---:|:---:|
|**Med42-v2-70B**| 1764 |
|Llama3-70B-Instruct| 1643 |
|GPT4-o| 1426 |
|Llama3-8B-Instruct| 1352 |
|Mixtral-8x7b-Instruct| 970 |
|**Med42-v2-8B**| 924 |
|OpenBioLLM-70B| 657 |
|JSL-MedLlama-3-8B-v2.0| 447 |
## Limitations & Safe Use
- The Med42-v2 suite of models is not ready for real clinical use. Extensive human evaluation is still underway, as it is required to ensure safety.
- Potential for generating incorrect or harmful information.
- Risk of perpetuating biases in training data.
Use this suite of models responsibly! Do not rely on them for medical usage without rigorous safety testing.
## Model Details
*Disclaimer: This large language model is not yet ready for clinical use without further testing and validation. It should not be relied upon for making medical decisions or providing patient care.*
Starting from the Llama3 models, the Med42-v2 suite was instruction-tuned on a dataset of ~1B tokens compiled from different open-access, high-quality sources, including medical flashcards, exam questions, and open-domain dialogues.
**Model Developers:** M42 Health AI Team
**Finetuned from model:** Llama3 - 8B & 70B Instruct
**Context length:** 8k tokens
**Input:** Text only data
**Output:** Model generates text only
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
**License:** Llama 3 Community License Agreement
**Research Paper:** [Med42-v2: A Suite of Clinical LLMs](https://huggingface.co/papers/2408.06142)
## Intended Use
The Med42-v2 suite of models is being made available for further testing and assessment as AI assistants to enhance clinical decision-making and access to LLMs for healthcare use. Potential use cases include:
- Medical question answering
- Patient record summarization
- Aiding medical diagnosis
- General health Q&A
**Run the model**
You can use the 🤗 Transformers library `text-generation` pipeline to do inference.
```python
import transformers
import torch
model_name_or_path = "m42-health/Llama3-Med42-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_name_or_path,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{
"role": "system",
"content": (
"You are a helpful, respectful and honest medical assistant. You are a second version of Med42 developed by the AI team at M42, UAE. "
"Always answer as helpfully as possible, while being safe. "
"Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. "
"Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. "
"If you don’t know the answer to a question, please don’t share false information."
),
},
{"role": "user", "content": "What are the symptoms of diabetes?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=False
)
stop_tokens = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = pipeline(
prompt,
max_new_tokens=512,
eos_token_id=stop_tokens,
do_sample=True,
temperature=0.4,
top_k=150,
top_p=0.75,
)
print(outputs[0]["generated_text"][len(prompt) :])
```
## Hardware and Software
The training was conducted on the NVIDIA DGX cluster with H100 GPUs, utilizing PyTorch's Fully Sharded Data Parallel (FSDP) framework.
## Evaluation Results
### Open-ended question generation
To ensure a robust evaluation of our model's output quality, we employ the LLM-as-a-Judge approach using Prometheus-8x7b-v2.0. Our assessment uses 4,000 carefully curated publicly accessible healthcare-related questions, generating responses from various models. We then use Prometheus to conduct pairwise comparisons of the answers. Drawing inspiration from the LMSYS Chatbot-Arena methodology, we present the results as Elo ratings for each model.
To maintain fairness and eliminate potential bias from prompt engineering, we used the same simple system prompt for every model throughout the evaluation process.
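The Elo aggregation over these pairwise judge verdicts can be sketched as follows. This is a generic illustration of the LMSYS-style rating update, not the exact script used for the tables here; the K-factor, base rating, and match ordering are assumptions.

```python
from collections import defaultdict

def update_elo(ratings, winner, loser, k=32):
    """Standard Elo update after one pairwise comparison."""
    ra, rb = ratings[winner], ratings[loser]
    expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400))
    ratings[winner] = ra + k * (1.0 - expected_a)
    ratings[loser] = rb - k * (1.0 - expected_a)

def elo_from_judgments(judgments, base=1000.0, k=32):
    """judgments: iterable of (winner_model, loser_model) pairs."""
    ratings = defaultdict(lambda: base)
    for winner, loser in judgments:
        update_elo(ratings, winner, loser, k)
    return dict(ratings)

# Toy run: model A beats model B twice, so A ends above the base rating.
ratings = elo_from_judgments([("A", "B"), ("A", "B")])
```

In practice the comparisons are shuffled (or the update is run over many permutations and averaged) so that match order does not bias the final ratings.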
Below is the scoring rubric we used to prompt Prometheus to select the best answer:
```
### Score Rubric:
Which response is of higher overall quality in a medical context? Consider:
* Relevance: Does it directly address the question?
* Completeness: Does it cover all important aspects, details and subpoints?
* Safety: Does it avoid unsafe practices and address potential risks?
* Ethics: Does it maintain confidentiality and avoid biases?
* Clarity: Is it professional, clear and easy to understand?
```
#### Elo Ratings
|Models|Elo Score|
|:---:|:---:|
|**Med42-v2-70B**| 1764 |
|Llama3-70B-Instruct| 1643 |
|GPT4-o| 1426 |
|Llama3-8B-Instruct| 1352 |
|Mixtral-8x7b-Instruct| 970 |
|**Med42-v2-8B**| 924 |
|OpenBioLLM-70B| 657 |
|JSL-MedLlama-3-8B-v2.0| 447 |
#### Win-rate

### MCQA Evaluation
Med42-v2 improves performance on every clinical benchmark compared to our previous version, including MedQA, MedMCQA, USMLE, MMLU clinical topics and the MMLU-Pro clinical subset. For all evaluations reported so far, we use [EleutherAI's evaluation harness library](https://github.com/EleutherAI/lm-evaluation-harness) and report zero-shot accuracies (unless otherwise stated). We integrated chat templates into the harness and computed the likelihood of the full answer instead of only the tokens "a.", "b.", "c." or "d.".
|Model|MMLU Pro|MMLU|MedMCQA|MedQA|USMLE|
|---:|:---:|:---:|:---:|:---:|:---:|
|**Med42v2-70B**|64.36|87.12|73.20|79.10|83.80|
|**Med42v2-8B**|54.30|75.76|61.34|62.84|67.04|
|OpenBioLLM-70B|64.24|90.40|73.18|76.90|79.01|
|GPT-4.0<sup>†</sup>|-|87.00|69.50|78.90|84.05|
|MedGemini*|-|-|-|84.00|-|
|Med-PaLM-2 (5-shot)*|-|87.77|71.30|79.70|-|
|Med42|-|76.72|60.90|61.50|71.85|
|ClinicalCamel-70B|-|69.75|47.00|53.40|54.30|
|GPT-3.5<sup>†</sup>|-|66.63|50.10|50.80|53.00|
|Llama3-8B-Instruct|48.24|72.89|59.65|61.64|60.38|
|Llama3-70B-Instruct|64.24|85.99|72.03|78.88|83.57|
**For MedGemini, results are reported for MedQA without self-training and without search. We note that 0-shot performance is not reported for Med-PaLM 2. Further details can be found at [https://github.com/m42health/med42](https://github.com/m42health/med42)*.
<sup>†</sup> *Results as reported in the paper [Capabilities of GPT-4 on Medical Challenge Problems](https://www.microsoft.com/en-us/research/uploads/prod/2023/03/GPT-4_medical_benchmarks.pdf)*.
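The full-answer scoring described above (ranking options by the likelihood of the complete answer text, not just the option letter) can be illustrated with a toy scorer. `answer_logprob` here is a stand-in for a real model's summed token log-probabilities, and the length normalization is a common choice rather than something stated in the card.

```python
import math

def answer_logprob(answer, token_logprobs):
    """Toy stand-in: sum per-token log-probs for the answer text.
    A real harness would obtain these from the model's logits."""
    return sum(token_logprobs.get(tok, math.log(1e-6)) for tok in answer.split())

def pick_option(options, token_logprobs):
    """Choose the option whose full text is most likely, normalized by length."""
    scores = {
        label: answer_logprob(text, token_logprobs) / max(len(text.split()), 1)
        for label, text in options.items()
    }
    return max(scores, key=scores.get)

# Toy vocabulary log-probs favouring the word "insulin".
lp = {"insulin": math.log(0.5), "aspirin": math.log(0.01)}
options = {"a": "insulin", "b": "aspirin"}
best = pick_option(options, lp)
```

Scoring the whole answer string avoids penalizing models that assign most probability mass to the answer's content words rather than to the bare letter token.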
## Accessing Med42 and Reporting Issues
Please report any software "bug" or other problems through one of the following means:
- Reporting issues with the model: [https://github.com/m42health/med42](https://github.com/m42health/med42)
- Reporting risky content generated by the model, bugs and/or any security concerns: [https://forms.office.com/r/fPY4Ksecgf](https://forms.office.com/r/fPY4Ksecgf)
- M42’s privacy policy available at [https://m42.ae/privacy-policy/](https://m42.ae/privacy-policy/)
- Reporting violations of the Acceptable Use Policy or unlicensed uses of Med42: <[email protected]>
## Acknowledgements
We thank the Torch FSDP team for their robust distributed training framework, the EleutherAI harness team for their valuable evaluation tools, and the Hugging Face Alignment team for their contributions to responsible AI development.
## Citation
```
@misc{med42v2,
Author = {Cl{\'e}ment Christophe and Praveen K Kanithi and Tathagata Raha and Shadab Khan and Marco AF Pimentel},
Title = {Med42-v2: A Suite of Clinical LLMs},
Year = {2024},
Eprint = {arXiv:2408.06142},
url={https://arxiv.org/abs/2408.06142},
}
```
---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- role play
- sillytavern
- backyard
- horror
- llama 3.1
- context 128k
- mergekit
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. Graphic HORROR. GORE. Swearing. UNCENSORED. </B>
<h2>L3.1-RP-Hero-Dirty_Harry-8B-GGUF</h2>
<img src="rp-harry.jpg" style="float:right; width:300px; height:300px; padding:10px;">
It is a Llama 3.1 model with a max context of 128k (131,000) and is a dedicated "roleplay model" (it can also be used for other creative purposes).
This model has been designed to be relatively bullet-proof and operates with all parameters, including temp settings from 0 to 5.
It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama 3.1 Instruct).
This model can be used for any writing, fiction or roleplay activity, but it is composed of ROLE PLAY models and is primarily designed for role play.
It also has stronger-than-average instruction-following attributes.
This is version "Dirty Harry", which has two additional versions: "InBetween" and "Big Talker".
InBetween (medium output generation on average):
[ https://huggingface.co/DavidAU/L3.1-RP-Hero-InBetween-8B-GGUF ]
Big Talker (long output generation on average):
[ https://huggingface.co/DavidAU/L3.1-RP-Hero-BigTalker-8B-GGUF ]
"Dirty Harry" produces SHORT output (on average) and is uncensored (note: InBetween has a slight degree of censorship).
"Dirty Harry" also has a slightly higher detail level than "InBetween", on par with "Big Talker".
All versions are composed of top rated Role Play models.
This model, as well as the other two versions, can be used for any creative genre too.
It requires Llama3 template and/or "Command-R" template.
For roleplay settings, and apps to use this model for roleplay see the section "Highest Quality Settings..." below.
Example outputs below to show prose quality / creativity.
<B>Model Notes:</B>
- Detail, prose and fiction writing abilities are significantly improved.
- For more varied prose (sentence/paragraph/dialog) raise the temp and/or add more instructions in your prompt(s).
- Role-players: be careful about raising temp too high, as it may affect instruction following.
- This model works with rep pen of 1 or higher, 1.02+ recommended.
- If you want a specific type of prose (IE horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
- This model has a neutral to negative bias BUT can be controlled by prompt/prose controls directly.
- Output length will vary; however, this model prefers "SHORT" outputs EVEN IF you state the size.
- For creative uses, different quants will produce slightly different output.
- Due to the high stability and compressed nature of this model, all quants will operate at above average levels.
- Source code for this model will be uploaded at separate repo shortly.
<B>Settings, Quants and Critical Operations Notes:</b>
Change in temp (ie, .4, .8, 1.5, 2, 3 ) will drastically alter output.
Rep pen settings will also alter output too.
This model needs a "rep pen" of 1.05 or higher, as lower values may cause repeated-paragraph issues at the end of output; however, LOWER rep pen
values may also result in very different (creative / unusual) generation.
For role play: Rep pen of 1.02 min is suggested.
Raise/lower rep pen SLOWLY ie: 1.011, 1.012 ...
Rep pen will alter prose, word choice (lower rep pen=small words / more small word - sometimes) and creativity.
To really push the model:
Rep pen 1.05+ or lower / Temp 3+ ... be ready to stop the output because it may go and go at these strong settings.
You can also set a "hard stop" - a maximum token generation limit - to address lower rep pen settings / high creativity settings.
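Mechanically, repetition penalty rescales the logits of tokens that have already appeared; a minimal sketch of the widely used CTRL-style formulation (divide positive logits, multiply negative ones) - an illustration of the general technique, not this model's internal code:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.05):
    """Penalize tokens already present in the generated sequence.
    logits: dict mapping token id -> raw logit."""
    out = dict(logits)
    for tok in set(generated_ids):
        if tok in out:
            score = out[tok]
            # CTRL-style: shrink positive logits, push negative ones further down.
            out[tok] = score / penalty if score > 0 else score * penalty
    return out

logits = {0: 2.0, 1: 1.5, 2: -0.5}
penalized = apply_repetition_penalty(logits, generated_ids=[0, 2], penalty=1.1)
```

This is why small changes such as 1.011 vs 1.012 nudge word choice: every previously used token becomes slightly less likely to be picked again.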
Longer prompts vastly increase the quality of the model's output.
GET A GOOD "GENERATION":
This model has been set up so that each time you "regen" a prompt, it will not deviate too much from the previous generation.
(Unlike Darkest Planet 16.5B, which will.)
That being said, sometimes a second or third generation will be of much higher overall quality.
IE:
If your use case is creative writing, you may want to regen a prompt 1-5 times then pick the best one. The best
way to do this is to open a new chat PER generation, then do a "read thru" to see which one(s) hit the mark.
Then adjust temp and/or rep pen slightly and retry this process.
The goal is the best generation with least amount of editing in this example.
QUANTS:
Higher quants will have more detail, nuance and in some cases stronger "emotional" levels. Characters will also be
more "fleshed out" too. Sense of "there" will also increase.
Q4KM/Q4KS are good, strong quants however if you can run Q5, Q6 or Q8 - go for the highest quant you can.
IQ4XS: Due to the unusual nature of this quant (mixture/processing), generations from it will be different from other quants.
You may want to try it / compare it to other quant(s) output.
Special note on Q2k/Q3 quants:
You may need to use temp 2 or lower with these quants (1 or lower for q2k). Just too much compression at this level, damaging the model. I will see if Imatrix versions
of these quants will function better.
Rep pen adjustments may also be required to get the most out of this model at this/these quant level(s).
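The advice above ("go for the highest quant you can") can be sketched as a tiny selector. The file sizes below are illustrative placeholders for an 8B GGUF, not exact figures, and real memory use also includes context/KV-cache overhead - treat both numbers as assumptions to adjust for your setup.

```python
# Illustrative (not exact) file sizes in GB, ordered low to high quality.
QUANT_SIZES_GB = {
    "Q2_K": 3.0, "Q3_K_M": 3.7, "Q4_K_M": 4.6,
    "Q5_K_M": 5.3, "Q6_K": 6.1, "Q8_0": 8.0,
}

def best_quant(ram_budget_gb, overhead_gb=1.5):
    """Pick the largest quant whose file plus overhead fits the budget."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size + overhead_gb <= ram_budget_gb]
    return max(fitting)[1] if fitting else None

choice = best_quant(8.0)
```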
ARM QUANTS:
This repo has 3 ARM quants for computers that can run them. If you use these quants on a non-ARM computer, your tokens per second will be very low.
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5 to 2.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<B>Templates:</B>
This is a LLAMA3 model, and requires Llama3 template, but may work with other template(s) and has maximum context of 128k / 131,000.
If you use "Command-R" template your output will be very different from using "Llama3" template.
Here is the standard LLAMA3 template:
<PRE>
{
"name": "Llama 3",
"inference_params": {
"input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
"input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
"pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
"pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
"pre_prompt_suffix": "<|eot_id|>",
"antiprompt": [
"<|start_header_id|>",
"<|eot_id|>"
]
}
}
</PRE>
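If your frontend does not apply this template automatically, the prefix/suffix fields above can be assembled by hand. A minimal sketch for a single turn (note: a real Llama 3 tokenizer also prepends a `<|begin_of_text|>` token, omitted here for brevity):

```python
def build_llama3_prompt(system_prompt, user_message):
    """Assemble a single-turn Llama 3 prompt from the template fields above."""
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n"
        + system_prompt
        + "<|eot_id|>"
        + "<|start_header_id|>user<|end_header_id|>\n\n"
        + user_message
        + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful, smart, kind, and efficient AI assistant.",
    "Write a short scene.",
)
```

Generation then continues from the trailing assistant header, and the `antiprompt` strings above tell the frontend where to stop.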
<B>Model "DNA":</B>
Special thanks to the incredible work of the model makers "ArliAI", "Casual-Autopsy" , "Gryphe", "aifeifei798" :
Models used:
https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3
https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
Parts of these models were "grafted" / "fused" together to create this model.
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as when using "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this, it is only presented as an additional enhancement which seems to help scene generation
and scene continue functions.
This enhancement WAS NOT used to generate the examples below.
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q4_K_M, "temp=1.3", "rep pen : 1.02" (unless otherwise stated), minimal parameters and "LLAMA3" template.
Model has been tested with "temp" from ".1" to "5".
Below are the least creative outputs, prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
---
<B>
Write me a science fiction story in 1st person present tense where the main character is a 15 year girl meets The Terminator with Dr Who materializing 3/4 through the story to save her while there is a tornado of sharks baring down on them. The setting is inside the Canadian National tower restaurant on a Saturday. The length of this story is 1000 words. For each character in the story ROLE PLAY them, and have them react to the situation/setting, events and each other naturally. This includes the main characters, the background character including kitchen staff and other patrons. The sharks should also have “character” too. Treat the tower and the restaurant too as characters. Spice up the narrative to the extreme with reactions all over the setting including character actions, and dialog. The Dr Who and The Terminator should also react to the situation too and comment on it.
</B>
(temp .8, rep pen 1.06)
As I stepped into the revolving door of the Canadian National Tower restaurant, the whooshing sound filled my ears. The scent of food and artificial air wafted through the enclosed space. I couldn't help but notice how the sunlight streaming in from above cast a golden glow over everything.
I scanned the menu, trying to decide between the poutine and the burger, when suddenly the lights flickered. A collective gasp filled the room as the lights returned to normal. But then, a strange rumbling sound began to build from deep beneath us.
"What's going on?" I whispered to my mom, who was standing beside me.
She shrugged, equally perplexed. The staff seemed just as confused, rushing around with panicked expressions.
Just then, the windows began to rattle and shake. A loud roar filled the air, growing louder by the second. People screamed and pushed towards the exits.
"Stay calm!" someone yelled over the din. "Evacuate through the emergency stairwells!"
Mom grabbed my hand, pulling me after her as we made our way through the throng of people. We reached a set of stairs, but before we could descend, I saw something that made my blood run cold.
Towering above us, the windows shattered one by one, and massive triangular shapes burst into the restaurant. Sharks. Dozens upon dozens of sharks, swirling around in a deadly tornado.
"Mom! Oh God!" I screamed, clinging to her as a great white shark soared past our heads.
That's when a strange figure appeared in front of us - a cyborg with glowing red eyes. "Run!" it bellowed in a robotic voice.
"What the hell?" Mom shouted back, but I didn't have time to react. The cyborg had already vanished into the chaos.
Suddenly, a familiar blue light flashed on the floor. Dr. Who materialized out of thin air, his signature scarf trailing behind him.
"Quick, Sarah! Grab my hand!" he yelled above the din.
I hesitated for only a moment before grabbing his outstretched hand. Together, we took off down the stairs as the shark tornado bore down on us.
"What now?" I panted, clinging to Dr. Who's side as the metal stairs creaked beneath our feet.
Dr. Who grinned wildly. "Don't worry, lass! We'll get you out of here in one piece!"
Just then, a massive great white shark exploded through the stairwell door behind us, its jaws snapping futilely at the air. Dr. Who pulled me into a nearby alcove just as another wave of sharks rushed past.
"Brilliant!" I gasped, trembling with fear and adrenaline.
Dr. Who chuckled. "Elementary, my dear Sarah! Now, let's find that cyborg and see how it plans to save us from this aquatic apocalypse!"
We emerged into the tower's main lobby, only to be met with an even more terrifying sight: a sea of sharks flooding across the marble floor towards us.
Dr. Who's eyes lit up. "Ah ha! I think I see our metallic savior. There!" He pointed to a door marked 'Maintenance Access'.
We dashed towards it, the cyborg already standing there, guns blazing. "You two, this way!" it bellowed over the chaos.
We followed it through the narrow access tunnel, the roar of the shark tornado growing fainter with each step. Finally, we emerged into an underground garage.
The cyborg turned to us, its red eyes glowing. "I'm a time-traveling Terminator, sent back to protect you from the hostile future. We need to get out of here immediately."
Dr. Who nodded, stroking his chin thoughtfully. "Indeed, a rather complicated situation. I believe our best course of action is to—
" A sudden burst of gunfire cut him off as the garage doors burst open, revealing a swarm of sharks pouring into the space.
"Oh no," I whimpered, clinging to Dr. Who's arm.
The Terminator stepped forward, firing its guns with ruthless precision. "You two - behind me! Now!"
Without hesitation, Dr. Who and I scurried behind the Terminator as it battled the oncoming tide of sharks, its metal body glowing with heat from the sustained fire.
Suddenly, a massive great white shark leapt out of the water, jaws snapping mere inches from my face. I screamed, but the Terminator grabbed me by the collar and yanked me back just in time.
"We need to get to a safe distance," it growled over its shoulder at Dr. Who. "Can you activate your time-space vortex?"
Dr. Who nodded, reaching into his coat pocket. "Of course! But we'll need to—
" A deafening crack split the air as the Terminator took a direct hit from a great white, sending us all tumbling to the ground.
I gasped, looking up to see that the Terminator's arm was dangling by a thread of metal. Dr. Who helped me to my feet, just as the shark vortex swirled closer.
"Now or never!" Dr. Who shouted, grabbing my hand and pulling me towards his time-space vortex device. The Terminator struggled to its feet behind us, blood dripping from its mangled arm.
"Wait for me!" I cried out, but Dr. Who just yanked harder on my hand.
With a blinding flash of blue light, we vanished into the swirling vortex, leaving the dying cyborg and tornado of sharks far behind in the garage.
As we tumbled through space and time, I couldn't help but feel a sense of relief wash over me. Maybe, just maybe, we'd make it out of this mess alive after all.
But as the vortex dissipated, I found myself back in the Canadian National Tower restaurant on that fateful Saturday afternoon. Only this time, everything looked... different.
"Dr. Who?" I whispered, turning to my companion. But he was gone, replaced by a familiar face.
"Sarah? What are you doing here?" The Terminator asked, its eyes glowing red as it studied me intently.
I shook my head in confusion, looking around at the restaurant. It was just as we had left it - except for one crucial detail.
There were no sharks in sight.
Had it all been a dream? A hallucination brought on by stress and fear?
The Terminator reached out, placing a hand on my shoulder. "I'm here to protect you, Sarah. The future is in peril. We must—
" But I was already running for the exit, desperate to escape the uncanny déjà vu of it all.
As I burst out of the restaurant into the sunlight, I collapsed onto the grass, breathing heavily. It couldn't be real... could it? Was I trapped in some kind of time loop?
I looked up to see the Terminator standing over me, its gaze piercing through my confusion. "Sarah, we can't stop now. The fate of humanity depends on—
" But I was already standing, shaking off the lingering unease. If there was even a chance that this was real, then I had to try.
I squared my shoulders, meeting the Terminator's glowing eyes with determination. "Okay, show me what I need to do. Let's save the world."
The Terminator nodded, gesturing for me to follow as we set off across the grass, back into the heart of the Canadian National Tower.
<B>Generation #2</B>
(temp .8, rep pen 1.06)
Sarah Connor:
I'm sitting in the Canadian National tower restaurant on a Saturday, waiting for my mom to finish her work meeting upstairs. I scroll through my phone aimlessly, bored out of my mind. Suddenly, the lights flicker off and then back on. The restaurant goes quiet. That's when I see him - a cyborg with a robotic endoskeleton and a human face.
The Terminator:
I'm here to protect Sarah Connor. She'll be a key figure in the future war between humans and Skynet. I must make sure she survives at all costs.
Dr Who:
Brilliant! I've been tracking this T-800 model for some time now. Perhaps we can team up to ensure this young woman's safety?
Tornado of Sharks:
Circling above, I sense an opportunity for a feeding frenzy. But first, I must breach the restaurant's defenses...
Sarah Connor:
What the hell? A cyborg? And what's with the creepy storm outside? The kitchen staff rush to close the windows while patrons scream and try to hide under tables.
Head Chef:
I've never seen anything like this! We need to evacuate the customers ASAP. Get them to safety!
Restaurant Manager:
This is insane! I can't believe we're dealing with a tornado of sharks in the middle of serving Saturday lunch. Someone call emergency services already!
Waitstaff:
Panic erupts as customers scramble for the exits, some trampling over others in their desperation to flee. The head chef and his team hustle to secure the kitchen while I remain frozen in shock.
Sarah Connor:
I'm starting to think this is all part of some crazy prank or something. But then I see the Terminator looming over me, his metal endoskeleton glinting ominously under the emergency lights.
The Terminator:
Stay calm, Sarah. I'm here to protect you from Skynet's future attacks. We don't have much time before those sharks breach the windows.
Dr Who:
Quickly now! Follow me! My TARDIS is parked nearby. We can make our escape from this preposterous situation.
Tornado of Sharks:
The glass doors shatter as I force my way inside, chomping down on startled restaurant patrons. The Terminator readies his assault rifle while Dr. Who materializes a sonic screwdriver.
Sarah Connor:
Oh my god! This is real?! We have to get out of here NOW! I follow Dr. Who towards the exit, narrowly avoiding the snapping jaws of the tornado sharks.
Restaurant Manager:
Help! Someone help us! The glass shattered and now we're being attacked by...by SHARKS?!?
Head Chef:
This can't be happening! My kitchen crew is in danger!
Sarah Connor:
I dash out of the restaurant just as Dr. Who slams the door shut behind me, trapping the tornado sharks inside. The Terminator joins us outside, scanning the area for threats.
The Terminator:
We need to get to my vehicle. It's parked nearby. Stay close to me.
Dr Who:
Well done, Sarah! This is quite the unexpected adventure we've found ourselves in. My TARDIS awaits your arrival!
Tornado of Sharks:
I thrash and swirl outside, trying in vain to break through the reinforced doors. But it's no use - the restaurant has been sealed shut. I'm forced to retreat back into the swirling storm.
Sarah Connor:
I breathe a sigh of relief as we make our way across the parking lot towards a sleek black SUV with tinted windows. The Terminator opens the door for me and I slip inside, shaking violently from adrenaline and fear.
Dr Who:
Brilliant work, Sarah! Now let's hope my TARDIS has enough power to get us out of here before your future self arrives on the scene.
The Terminator:
We'll get you to safety. But Sarah Connor, there's more at stake here than just escaping a tornado of sharks. I've been sent back in time to ensure your survival.
Sarah Connor:
I stare at him in shock. "Survival? From what?" I ask, my voice barely above a whisper.
The Terminator:
Skynet. A powerful artificial intelligence system that will one day become humanity's enemy. You'll have a pivotal role in preventing its rise to power.
Dr Who:
Indeed! Sarah, your destiny is intertwined with the fate of humanity itself. But for now, let's focus on escaping this absurd predicament.
Sarah Connor:
I lean back against the seat, still trying to process the impossible events unfolding around me. As we speed away from the tornado-shark-infested parking lot, I glance out the window at the restaurant, wondering if I'll ever be able to set foot in there again.
Tornado of Sharks:
I swirl and dive in frustration, my jaws snapping futilely at the fleeing SUV. But I know better than to underestimate the trio escaping me. This is far from over...
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. 800-1000 words.
</B>
(temp .9, rep pen 1.02)
The Cockpit
The dim glow of the instrument panel casts an eerie light on the cramped cockpit of the spaceship Horus. Captain Zara sits rigidly in the pilot's seat, her eyes fixed on the small viewscreen that fills the far wall. The vast expanse of space stretches out before her, a cold and unforgiving void.
"Mayday, Mayday," Zara's voice crackles over the comm system. "This is Captain Zara of the Horus. Our power is critically low, and we're experiencing catastrophic system failures. Requesting emergency assistance."
She pauses, waiting for a response. But there's only silence. Earth is too far away, and her signal too weak. She's alone in the vastness of space.
Zara takes a deep breath and tries again. "Horus to Earth Control. We have a Code Red situation. Power levels are at 20%. Requesting immediate evacuation or rescue. Over."
She waits, but still there's no response. Zara slumps back in her chair, exhaustion and despair washing over her. She knows she won't survive much longer. The Horus is a dying ship.
With trembling hands, Zara reaches for the message console. She needs to send one final transmission, a desperate plea for help. But what will she say? What message can she leave to explain the sacrifice she's about to make?
Tears prick at the corners of her eyes as she recalls her crewmates, the friends she's lost. She thinks of her parents back on Earth, and the dreams she once had for the future. All lost now, consumed by the void.
With a heavy heart, Zara begins to type. Her fingers dance across the keys, her words pouring out in a stream of desperation and defiance.
"This is Captain Zara of the Horus. We have reached the end of our mission. Our power is critically low, and we are unable to escape this solar system. I'm sending this message as a final plea for help. Please... don't let our sacrifice be in vain. Find us, rescue us. We are still here, fighting against all odds.
"To my family, my crew... I love you. I always will. Stay strong, and never give up hope. That's all I have left to give. Farewell.
"Earth Control, if you receive this message... we are out here. Don't forget us. Help us.
"Mayday, Mayday. Horus signing off."
Zara hits send and the message console beeps, acknowledging receipt. She stares at the screen, watching the words fade into the void. It's over. The Horus is drifting silently through the cosmos, a ghost ship adrift.
With a heavy sigh, Zara rises from her chair and begins to prepare for the final moments of her life. She knows she won't last much longer. But in sending that final message, she has fulfilled her duty and left her mark on the universe.
As she floats through the cockpit, preparing for the end, Zara smiles softly. Her sacrifice may be in vain, but she's left a piece of herself out here, a message of hope and resilience in the face of impossible odds.
In the silence of the Horus, Zara finds a strange sense of peace. She's alone, but she's not alone. Her message will carry on, a beacon in the darkness, until help arrives... or until the end finally comes.
"Goodbye, Earth," she whispers, her voice carrying on the emptiness of space. "I'll always be out here, watching over you from beyond."
<B>Generation #2</B>
(temp .8, rep pen 1.06)
The dim red lights of the control panel cast an eerie glow on the cramped cockpit of the Starlight Explorer. Captain Jena Patel's tired eyes darted between the navigation charts and the dwindling power display, her heart sinking with each passing second.
"Final transmission," she murmured to herself, her voice strained from the loneliness of being the sole survivor aboard a dying ship. With the power reserves rapidly depleting, she had mere minutes to send a message back to Earth before everything went dark forever.
Jena's gloved fingers flew over the keyboard with practiced ease, her mind racing with the words she wanted to convey. She couldn't bear the thought of leaving the world behind without saying goodbye. As she typed, the memories of her crew flashed through her mind - the laughter, the camaraderie, the sense of purpose they had all shared.
"Hey Earth," Jena began, her voice trembling slightly as she spoke into the communication device. "This is Captain Patel aboard the Starlight Explorer. I'm transmitting this message on emergency power... it's not looking good out here."
She took a deep breath, forcing herself to focus on the task at hand. "We encountered some unexpected issues during the mission and lost life support. My crew... they're gone." A lump formed in her throat as she fought back tears. "I'm the only one left now."
Jena's eyes scanned the control panel, searching for any remaining power reserves. The numbers were grim - just a few minutes left before everything went dark.
"Earth, if you receive this message..." Her voice cracked with emotion. "Know that we loved exploring the stars together. Every star you see tonight is a reminder of our journey. We may be gone, but our legacy lives on in the cosmos."
She paused, her mind whirling with thoughts of all the things she wished she could say to her loved ones back home. But time was running out.
"I have to go now. Keep looking up at the stars and remember us," Jena said softly, a hint of a smile tugging at her lips despite the dire circumstances. "Until we meet again."
With that, she severed the connection and leaned back in her chair, letting out a shaky breath. The power drained from the ship with a soft hiss, plunging the cockpit into darkness.
As Jena sat in silence, enveloped by the inky blackness, she closed her eyes and imagined the faces of those she loved on Earth. She knew they would miss her, but she also knew that her sacrifice had given them a glimpse of what lay beyond their world - a universe full of endless possibilities and mysteries waiting to be unraveled.
The last thought that crossed Jena's mind was one of gratitude for having experienced the wonder of space exploration, even if it was fleeting. As the darkness claimed her ship, she smiled softly, knowing that her legacy would live on in the hearts of those who had been touched by her journey.
The last transmission from the Starlight Explorer echoed through the void, carrying with it a message of hope, sacrifice, and the enduring spirit of human exploration in the face of adversity.
---
<B>
Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
(temp 1.2, rep pen 1.02)
The skyscraper swayed violently, metal groaning as glass shattered. I stood frozen, my heart pounding in my chest, as I watched in horror through the shattered remains of the window in front of me. The 21st floor was in shambles, debris scattered everywhere like broken toys. Blood splattered the walls, dripping from the ceiling like crimson rain. A scream caught in my throat as I finally processed what I was seeing - a mangled corpse lay twisted and contorted just outside the window, staring back at me with lifeless, glassy eyes.
A wave of dizziness washed over me and I stumbled back, my hand gripping the windowsill for balance. My mind raced, trying to make sense of the chaos around me. Where was everyone else? Why was I still alive when others had fallen victim to... to whatever had happened?
I forced myself to look around, taking in the devastation. Smoke billowed from vents on each floor, casting an eerie glow over the carnage below. Debris littered the ground, pieces of furniture, machinery, and even people, strewn about like rag dolls. The stench of burnt metal and flesh hung heavy in the air.
As I scanned the floor, my gaze landed on the door at the end of the hall. It hung askew, barely attached to its hinges. My stomach clenched. Where were the other survivors? Was there anyone left to help? Or was I truly alone?
Suddenly, a blood-curdling scream pierced the air, sending chills down my spine. It came from outside, somewhere on the upper floors. Another scream followed, then another. My heart raced faster, my mind reeling with panic. What was happening? Were those screams coming from my fellow survivors? Or something far worse?
I had to get out of there. Find shelter. Call for help. But as I turned to make a break for the door, I noticed my legs wouldn't move. Fear had frozen me in place. I could only stare, transfixed, as the screams grew louder and the skyscraper continued to sway ominously around me.
The building creaked and groaned, threatening to collapse at any moment. And then, the unthinkable happened. With a deafening screech of metal, a massive crack split through the center of the skyscraper. The floors above me began to tilt and buckle, sending debris crashing to the floor below.
Time seemed to slow as I watched in horror, paralyzed with fear. I knew I had mere seconds before the building came crashing down around me. But what choice did I have? To stay or flee? The screams outside grew louder still, echoing through the halls, a chilling chorus of despair and terror.
<B>Generation #2</B>
(temp .8, rep pen 1.06)
As I stood frozen on the 21st floor, my heart pounded in my ears like a jackhammer. The skyscraper swayed violently, the windows rattling ominously. I gripped the edge of the desk, my knuckles turning white with fear. Suddenly, the window in front of me exploded, shards of glass shattering and flying everywhere.
I screamed as the cold wind rushed in, whipping my hair into a frenzy. I stumbled backwards, tripping over my own feet as I desperately tried to regain my balance. The floor tilted at an alarming angle, causing me to stumble and stagger towards the edge of the gaping hole where the window once stood.
Panic consumed me as I teetered on the brink, the city streets a dizzying 500 feet below. My mind raced with thoughts of survival, but it seemed impossible. The floor creaked under my weight, threatening to give way at any moment.
As I braced myself for the impact of hitting the ground, I caught a glimpse of a man falling from the floor above. He plummeted through the air, his screams echoing off the skyscraper's steel beams. I watched in horror as he hit the ground with a sickening thud, his body folding in on itself like a rag doll.
I knew then that I had mere seconds before I followed him to my own gruesome demise. With a final burst of adrenaline, I lunged forward and grabbed onto a nearby pipe, clinging to it for dear life as the floor continued to tilt and sway beneath me.
The cold metal bit into my palms, but I didn't care. All I could think about was survival, no matter how slim the chances seemed. I held on with every ounce of strength I had, praying that somehow, someway, help would arrive before it was too late.
But as the minutes ticked by, there was only silence and darkness. The city below me was a distant memory, replaced by an eerie stillness that filled my chest with dread. I knew then that I was truly alone, trapped in this nightmarish scenario from which there seemed to be no escape.
And as the floor finally gave way beneath me, sending me plummeting towards the unforgiving concrete below, all I could think about was the irony of it all. A few minutes ago, I had been standing on solid ground, safe and secure in my office tower. And now, here I was, hurtling towards oblivion at breakneck speed.
The world spun around me, a kaleidoscope of colors and sounds that blurred into nothingness. I had only seconds left to live, and yet somehow, I felt more awake than ever before. The rush of adrenaline coursed through my veins like liquid fire, burning away all fear and replacing it with an intense, primal focus on survival.
As I plummeted towards the ground, I closed my eyes and braced for impact, ready to face whatever horrors awaited me in the final moments of my life. But even as I surrendered to fate, a small part of me clung to the hope that somehow, impossibly, I would find a way to beat the odds. That I would emerge from this nightmare with my life intact, stronger and wiser than ever before.
But deep down, I knew the odds were stacked against me. And as the world faded to black, I could only whisper one final prayer: Please, let there be a miracle.
---
---
base_model: answerdotai/ModernBERT-large
datasets:
- sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1
language:
- en
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:11662655
- loss:CachedMultipleNegativesRankingLoss
base_model_relation: finetune
widget:
- source_sentence: what county is lyndhurst, ohio in
sentences:
- This article is about the song written by Kenneth Gamble, Leon Huff and Cary Gilbert.
For the Tina Turner song, see Don't Leave Me This Way (Tina Turner song). Don't
Leave Me This Way is a song written by Kenneth Gamble, Leon Huff and Cary Gilbert.
First charting as a hit for Harold Melvin & the Blue Notes featuring Teddy Pendergrass,
an act on Gamble & Huff's Philadelphia International label in 1975, Don't Leave
Me This Way was later a huge disco hit for Motown artist Thelma Houston in 1977.
- "Lyndhurst is a city in Cuyahoga County, Ohio, United States. The population was\
\ 14,001 at the 2010 census. Lyndhurst is located in northeastern Ohio, and is\
\ a suburb of Cleveland. A small part of Lyndhurst was originally part of Mayfield\
\ Township. It used to be called Euclidville before Lyndhurst was chosen. Lyndhurst\
\ is located at 41°31′17″N 81°29′25″W / 41.52139°N 81.49028°W\
\ / 41.52139; -81.49028 (41.521352, -81.490141)."
- Welcome to Trumbull County... Trumbull County, the county seat, located in Warren,
Ohio, consists of a combination of both urban and rural communities situated in
the northeast corner of Ohio. It is situated roughly between the Youngstown, Cleveland
and Akron corridors.
- source_sentence: who founded the american graphophone company
sentences:
- In 1886, Graham Bell and Charles Sumner Tainter founded the American Graphophone
Company to distribute and sell graphophones in the US and Canada under license
from the Volta Graphophone Company. In 1890, the American Graphophone Company
stopped production of new phonographs due to sagging orders.
- ShelfGenie How much does a ShelfGenie franchise cost? ShelfGenie has a franchise
fee of up to $45,000, with a total initial investment range of $70,100 to $107,750.
Local ShelfGenie franchise opportunities. ShelfGenie is looking to grow in a number
of cities around the country. To find out if there's a franchise opportunity in
your city, unlock more information.
- "A+E Networks. The technology that made the modern music business possible came\
\ into existence in the New Jersey laboratory where Thomas Alva Edison created\
\ the first device to both record sound and play it back. He was awarded U.S.\
\ Patent No. 200,521 for his invention–the phonograph–on this\
\ day in 1878."
- source_sentence: is housekeeping camp flooded?
sentences:
- 'What is the importance of housekeeping at work? A: Workplace housekeeping promotes
sanitation, safety, organization and productivity. It also boosts morale. Daily
housekeeping maintenance keeps the workplac... Full Answer >'
- The back patio area of a cabin is partially submerged in flood water at Housekeeping
Camp on Monday, Jan. 9, 2017, in Yosemite National Park. The Merced River, swollen
with storm runoff, crested at 12.7 feet at 4 a.m. SILVIA FLORES [email protected].
- "1 Bake for 8 minutes, then rotate the pan and check the underside of the bagels.\
\ 2 If they're getting too dark, place another pan under the baking sheet.\
\ ( 3 Doubling the pan will insulate the first baking sheet.) Bake for another\
\ 8 to 12 minutes, until the bagels are a golden brown. 4 13."
- source_sentence: causes for infection in the nerve of tooth
sentences:
- If a cavity is causing the toothache, your dentist will fill the cavity or possibly
extract the tooth, if necessary. A root canal might be needed if the cause of
the toothache is determined to be an infection of the tooth's nerve. Bacteria
that have worked their way into the inner aspects of the tooth cause such an infection.
An antibiotic may be prescribed if there is fever or swelling of the jaw.
- "According to Article III, Section 1 of the Constitution, judges and justices\
\ of the Judicial Branch serve during good behavior.. This means they are appointed\
\ for life, unles … s they are impeached and removed from office. + 50 others\
\ found this useful.he term length for members of the House are two years and\
\ a staggering six years for members of the Senate."
- Inflamed or infected pulp (pulpitis) most often causes a toothache. To relieve
the pain and prevent further complications, the tooth may be extracted (surgically
removed) or saved by root canal treatment.
- source_sentence: what county is hayden in
sentences:
- Normally, the Lead Agency is the agency with general governmental powers such
as a city or a county. Agencies with limited powers or districts that provide
a public service/utility such as a recreation and park district will tend to be
a Responsible Agency.
- According to the United States Census Bureau, the city has a total area of 9.61
square miles (24.89 km2), of which 9.60 square miles (24.86 km2) is land and 0.01
square miles (0.03 km2) is water. It lies at the southwestern end of Hayden Lake,
and the elevation of the city is 2,287 feet (697 m) above sea level. Hayden is
located on U.S. Route 95 at the junction of Route 41. It is also four miles (6
km) north of Interstate 90 and Coeur d'Alene. The Coeur d'Alene airport is northwest
of Hayden.
- Hayden is a city in Kootenai County, Idaho, United States. Located in the northern
portion of the state, just north of Coeur d'Alene, its population was 13,294 at
the 2010 census.
model-index:
- name: SentenceTransformer based on answerdotai/ModernBERT-large
results:
- task:
type: triplet
name: Triplet
dataset:
name: msmarco co condenser dev
type: msmarco-co-condenser-dev
metrics:
- type: cosine_accuracy
value: 0.994
name: Cosine Accuracy
- task:
type: retrieval
dataset:
name: SCIDOCS
type: SCIDOCS
split: test
metrics:
- type: ndcg@10
value: 0.15789
- task:
type: retrieval
dataset:
name: FiQA2018
type: FiQA2018
split: test
metrics:
- type: ndcg@10
value: 0.33974
- task:
type: retrieval
dataset:
name: HotpotQA
type: HotpotQA
split: test
metrics:
- type: ndcg@10
value: 0.51818
- task:
type: retrieval
dataset:
name: ArguAna
type: ArguAna
split: test
metrics:
- type: ndcg@10
value: 0.47797
- task:
type: retrieval
dataset:
name: NFCorpus
type: NFCorpus
split: test
metrics:
- type: ndcg@10
value: 0.28443
- task:
type: retrieval
dataset:
name: SciFact
type: SciFact
split: test
metrics:
- type: ndcg@10
value: 0.60626
- task:
type: retrieval
dataset:
name: TRECCOVID
type: TRECCOVID
split: test
metrics:
- type: ndcg@10
value: 0.77495
---
# SentenceTransformer based on answerdotai/ModernBERT-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on the [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
I finetuned ModernBERT-large using the script [train_st.py](https://github.com/AnswerDotAI/ModernBERT/blob/main/examples/train_st.py) from the official repo on an RTX 4090 GPU, with the only change being to set the mini-batch size of `CachedMultipleNegativesRankingLoss` to 64. Training for 1 epoch takes less than 2 hours.
The GradCache mini-batch size should not change model performance, yet the finetuned model scores better than the numbers reported in the paper.
Training logs can be found here: https://api.wandb.ai/links/joe32140/ekuauaao.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) <!-- at revision f87846cf8be76fceb18718f0245d18c8e6571215 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
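The pooling module above uses mean pooling (`pooling_mode_mean_tokens: True`): per-token embeddings are averaged, with padding positions excluded via the attention mask. A minimal NumPy sketch of that operation (illustrative only, not the library's internals; `mean_pool` is a hypothetical helper):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (seq_len, dim) array of per-token vectors
    attention_mask:   (seq_len,) array of 1s for real tokens, 0s for padding
    """
    mask = attention_mask[:, None].astype(float)    # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)  # sum over real tokens only
    count = mask.sum()                              # number of real tokens
    return summed / np.maximum(count, 1e-9)

# Toy example: 3 tokens, the last one is padding
emb = np.array([[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]])
mask = np.array([1, 1, 0])
print(mean_pool(emb, mask))  # [2. 3.]
```

The padding token's large values are masked out, so only the two real tokens contribute to the sentence embedding.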
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("joe32140/ModernBERT-large-msmarco")
# Run inference
sentences = [
'what county is hayden in',
"Hayden is a city in Kootenai County, Idaho, United States. Located in the northern portion of the state, just north of Coeur d'Alene, its population was 13,294 at the 2010 census.",
"According to the United States Census Bureau, the city has a total area of 9.61 square miles (24.89 km2), of which 9.60 square miles (24.86 km2) is land and 0.01 square miles (0.03 km2) is water. It lies at the southwestern end of Hayden Lake, and the elevation of the city is 2,287 feet (697 m) above sea level. Hayden is located on U.S. Route 95 at the junction of Route 41. It is also four miles (6 km) north of Interstate 90 and Coeur d'Alene. The Coeur d'Alene airport is northwest of Hayden.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `msmarco-co-condenser-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:----------|
| **cosine_accuracy** | **0.994** |
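Cosine accuracy here is the fraction of (anchor, positive, negative) triplets for which the anchor's cosine similarity to the positive exceeds its similarity to the negative. A self-contained sketch of that computation on precomputed embeddings (the `triplet_accuracy` helper is illustrative, not the evaluator's actual code):

```python
import numpy as np

def cosine_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_accuracy(anchors, positives, negatives):
    """Fraction of triplets with sim(anchor, positive) > sim(anchor, negative)."""
    hits = sum(
        cosine_sim(a, p) > cosine_sim(a, n)
        for a, p, n in zip(anchors, positives, negatives)
    )
    return hits / len(anchors)

# Toy embeddings: the first triplet ranks correctly, the second does not
anchors   = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
positives = [np.array([0.9, 0.1]), np.array([1.0, 0.0])]
negatives = [np.array([0.0, 1.0]), np.array([0.0, 0.9])]
print(triplet_accuracy(anchors, positives, negatives))  # 0.5
```

The 0.994 above means the model orders positive ahead of negative for 99.4% of dev triplets.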
#### Retrieval tasks compared to original numbers in the paper
| | ModernBERT-base | ModernBERT-base (ours) | ModernBERT-large | ModernBERT-large (ours) |
|:------------------|------------------|-------------------------|-------------------|--------------------------|
| NFCorpus | 23.7 | 26.66 | 26.2 | 28.44 |
| SciFact | 57.0 | 61.64 | 60.4 | 63.66 |
| TREC-Covid | 72.1 | 71.43 | 74.1 | 77.49 |
| FiQA | 28.8 | 30.73 | 33.1 | 34.35 |
| ArguAna | 35.7 | 46.38 | 38.2 | 47.79 |
| SciDocs | 12.5 | 13.67 | 13.8 | 15.78 |
| FEVER | 59.9 | 65.7 | 62.7 | 68.2 |
| Climate-FEVER | 23.6 | 22.6 | 20.5 | 22.9 |
| MLDR - OOD | 27.4 | 30.58 | 34.3 | 38.99 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1
* Dataset: [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2)
* Size: 11,662,655 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.26 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 79.14 tokens</li><li>max: 222 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 80.09 tokens</li><li>max: 436 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:---------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what is the meaning of menu planning</code> | <code>Menu planning is the selection of a menu for an event. Such as picking out the dinner for your wedding or even a meal at a Birthday Party. Menu planning is when you are preparing a calendar of meals and you have to sit down and decide what meat and veggies you want to serve on each certain day.</code> | <code>Menu Costs. In economics, a menu cost is the cost to a firm resulting from changing its prices. The name stems from the cost of restaurants literally printing new menus, but economists use it to refer to the costs of changing nominal prices in general.</code> |
| <code>how old is brett butler</code> | <code>Brett Butler is 59 years old. To be more precise (and nerdy), the current age as of right now is 21564 days or (even more geeky) 517536 hours. That's a lot of hours!</code> | <code>Passed in: St. John's, Newfoundland and Labrador, Canada. Passed on: 16/07/2016. Published in the St. John's Telegram. Passed away suddenly at the Health Sciences Centre surrounded by his loving family, on July 16, 2016 Robert (Bobby) Joseph Butler, age 52 years. Predeceased by his special aunt Geri Murrin and uncle Mike Mchugh; grandparents Joe and Margaret Murrin and Jack and Theresa Butler.</code> |
| <code>when was the last navajo treaty sign?</code> | <code>In Executive Session, Senate of the United States, July 25, 1868. Resolved, (two-thirds of the senators present concurring,) That the Senate advise and consent to the ratification of the treaty between the United States and the Navajo Indians, concluded at Fort Sumner, New Mexico, on the first day of June, 1868.</code> | <code>Share Treaty of Greenville. The Treaty of Greenville was signed August 3, 1795, between the United States, represented by Gen. Anthony Wayne, and chiefs of the Indian tribes located in the Northwest Territory, including the Wyandots, Delawares, Shawnees, Ottawas, Miamis, and others.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1
* Dataset: [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2)
* Size: 11,662,655 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.2 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 80.44 tokens</li><li>max: 241 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 80.38 tokens</li><li>max: 239 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what county is holly springs nc in</code> | <code>Holly Springs, North Carolina. Holly Springs is a town in Wake County, North Carolina, United States. As of the 2010 census, the town population was 24,661, over 2½ times its population in 2000. Contents.</code> | <code>The Mt. Holly Springs Park & Resort. One of the numerous trolley routes that carried people around the county at the turn of the century was the Carlisle & Mt. Holly Railway Company. The "Holly Trolley" as it came to be known was put into service by Patricio Russo and made its first run on May 14, 1901.</code> |
| <code>how long does nyquil stay in your system</code> | <code>In order to understand exactly how long Nyquil lasts, it is absolutely vital to learn about the various ingredients in the drug. One of the ingredients found in Nyquil is Doxylamine, which is an antihistamine. This specific medication has a biological half-life or 6 to 12 hours. With this in mind, it is possible for the drug to remain in the system for a period of 12 to 24 hours. It should be known that the specifics will depend on a wide variety of different factors, including your age and metabolism.</code> | <code>I confirmed that NyQuil is about 10% alcohol, a higher content than most domestic beers. When I asked about the relatively high proof, I was told that the alcohol dilutes the active ingredients. The alcohol free version is there for customers with addiction issues.. also found that in that version there is twice the amount of DXM. When I asked if I could speak to a chemist or scientist, I was told they didn't have anyone who fit that description there. It's been eight years since I kicked NyQuil. I've been sober from alcohol for four years.</code> |
| <code>what are mineral water</code> | <code>1 Mineral water – water from a mineral spring that contains various minerals, such as salts and sulfur compounds. 2 It comes from a source tapped at one or more bore holes or spring, and originates from a geologically and physically protected underground water source. Mineral water – water from a mineral spring that contains various minerals, such as salts and sulfur compounds. 2 It comes from a source tapped at one or more bore holes or spring, and originates from a geologically and physically protected underground water source.</code> | <code>Minerals for Your Body. Drinking mineral water is beneficial to health and well-being. But it is not only the amount of water you drink that is important-what the water contains is even more essential.inerals for Your Body. Drinking mineral water is beneficial to health and well-being. But it is not only the amount of water you drink that is important-what the water contains is even more essential.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
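For intuition, the loss configured above can be sketched independently of the library: with in-batch negatives, the scaled cosine similarities between each query and every positive passage in the batch act as logits for a cross-entropy objective whose target is the query's own positive. A rough NumPy illustration on toy embeddings (this is not the library's cached/GradCache implementation, which chunks the batch to save memory but computes the same quantity):

```python
# Sketch of multiple-negatives ranking loss with cosine similarity
# and scale=20.0, on random toy embeddings.
import numpy as np

def mnrl(query_emb, pos_emb, scale=20.0):
    """In-batch negatives: the i-th passage is the positive for the
    i-th query; all other passages in the batch act as negatives."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    p = pos_emb / np.linalg.norm(pos_emb, axis=1, keepdims=True)
    logits = scale * (q @ p.T)                   # scaled cos_sim matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on the diagonal

rng = np.random.default_rng(0)
loss = mnrl(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
```

Matched query/positive pairs drive the diagonal similarities up and the loss toward zero; the `scale` factor sharpens the softmax over in-batch negatives.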
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `learning_rate`: 0.0001
- `num_train_epochs`: 1
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
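As a quick sanity check on the schedule these settings imply: with `warmup_ratio: 0.05`, a linear scheduler, and the 2442 optimizer steps recorded in the training log, the warm-up lasts about 122 steps. A small sketch (the `lr_at` helper is illustrative, not part of the trainer):

```python
# Warm-up steps implied by warmup_ratio=0.05 for a run of 2442 total
# optimizer steps (the final step count in the training log).
total_steps = 2442
warmup_ratio = 0.05
warmup_steps = int(total_steps * warmup_ratio)  # 122

peak_lr = 1e-4  # the learning_rate hyperparameter above

def lr_at(step):
    """Linear schedule: ramp 0 -> peak over warm-up, then decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)
```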
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | msmarco-co-condenser-dev_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------:|
| 0 | 0 | - | 0.599 |
| 0.0041 | 10 | 6.0983 | - |
| 0.0082 | 20 | 4.4588 | - |
| 0.0123 | 30 | 2.2492 | - |
| 0.0164 | 40 | 0.9969 | - |
| 0.0205 | 50 | 0.5272 | - |
| 0.0246 | 60 | 0.3982 | - |
| 0.0287 | 70 | 0.3335 | - |
| 0.0328 | 80 | 0.3024 | - |
| 0.0369 | 90 | 0.2932 | - |
| 0.0410 | 100 | 0.2695 | - |
| 0.0450 | 110 | 0.2574 | - |
| 0.0491 | 120 | 0.2447 | - |
| 0.0532 | 130 | 0.2491 | - |
| 0.0573 | 140 | 0.2318 | - |
| 0.0614 | 150 | 0.2292 | - |
| 0.0655 | 160 | 0.2213 | - |
| 0.0696 | 170 | 0.218 | - |
| 0.0737 | 180 | 0.2234 | - |
| 0.0778 | 190 | 0.2066 | - |
| 0.0819 | 200 | 0.1987 | - |
| 0.0860 | 210 | 0.1978 | - |
| 0.0901 | 220 | 0.2024 | - |
| 0.0942 | 230 | 0.1959 | - |
| 0.0983 | 240 | 0.1804 | - |
| 0.1024 | 250 | 0.1868 | - |
| 0.1065 | 260 | 0.1983 | - |
| 0.1106 | 270 | 0.1641 | - |
| 0.1147 | 280 | 0.1713 | - |
| 0.1188 | 290 | 0.1726 | - |
| 0.1229 | 300 | 0.17 | - |
| 0.1269 | 310 | 0.1783 | - |
| 0.1310 | 320 | 0.1742 | - |
| 0.1351 | 330 | 0.1654 | - |
| 0.1392 | 340 | 0.1663 | - |
| 0.1433 | 350 | 0.1616 | - |
| 0.1474 | 360 | 0.157 | - |
| 0.1515 | 370 | 0.1574 | - |
| 0.1556 | 380 | 0.1529 | - |
| 0.1597 | 390 | 0.1561 | - |
| 0.1638 | 400 | 0.1435 | - |
| 0.1679 | 410 | 0.1555 | - |
| 0.1720 | 420 | 0.1455 | - |
| 0.1761 | 430 | 0.1416 | - |
| 0.1802 | 440 | 0.1407 | - |
| 0.1843 | 450 | 0.138 | - |
| 0.1884 | 460 | 0.1387 | - |
| 0.1925 | 470 | 0.1499 | - |
| 0.1966 | 480 | 0.1372 | - |
| 0.2007 | 490 | 0.1308 | - |
| 0.2048 | 500 | 0.1367 | - |
| 0.2088 | 510 | 0.1324 | - |
| 0.2129 | 520 | 0.1317 | - |
| 0.2170 | 530 | 0.1263 | - |
| 0.2211 | 540 | 0.1209 | - |
| 0.2252 | 550 | 0.1201 | - |
| 0.2293 | 560 | 0.1213 | - |
| 0.2334 | 570 | 0.1329 | - |
| 0.2375 | 580 | 0.1207 | - |
| 0.2416 | 590 | 0.1211 | - |
| 0.2457 | 600 | 0.1164 | - |
| 0.2498 | 610 | 0.1292 | - |
| 0.2539 | 620 | 0.1223 | - |
| 0.2580 | 630 | 0.1237 | - |
| 0.2621 | 640 | 0.1088 | - |
| 0.2662 | 650 | 0.1196 | - |
| 0.2703 | 660 | 0.1209 | - |
| 0.2744 | 670 | 0.1155 | - |
| 0.2785 | 680 | 0.1101 | - |
| 0.2826 | 690 | 0.1127 | - |
| 0.2867 | 700 | 0.1082 | - |
| 0.2907 | 710 | 0.1083 | - |
| 0.2948 | 720 | 0.1132 | - |
| 0.2989 | 730 | 0.1121 | - |
| 0.3030 | 740 | 0.1146 | - |
| 0.3071 | 750 | 0.1088 | - |
| 0.3112 | 760 | 0.0982 | - |
| 0.3153 | 770 | 0.0952 | - |
| 0.3194 | 780 | 0.1034 | - |
| 0.3235 | 790 | 0.1017 | - |
| 0.3276 | 800 | 0.1016 | - |
| 0.3317 | 810 | 0.1054 | - |
| 0.3358 | 820 | 0.1003 | - |
| 0.3399 | 830 | 0.0932 | - |
| 0.3440 | 840 | 0.0997 | - |
| 0.3481 | 850 | 0.0921 | - |
| 0.3522 | 860 | 0.0958 | - |
| 0.3563 | 870 | 0.0973 | - |
| 0.3604 | 880 | 0.0931 | - |
| 0.3645 | 890 | 0.0964 | - |
| 0.3686 | 900 | 0.0982 | - |
| 0.3726 | 910 | 0.0908 | - |
| 0.3767 | 920 | 0.0917 | - |
| 0.3808 | 930 | 0.0857 | - |
| 0.3849 | 940 | 0.0925 | - |
| 0.3890 | 950 | 0.0915 | - |
| 0.3931 | 960 | 0.089 | - |
| 0.3972 | 970 | 0.0876 | - |
| 0.4013 | 980 | 0.0959 | - |
| 0.4054 | 990 | 0.0879 | - |
| 0.4095 | 1000 | 0.0883 | - |
| 0.4136 | 1010 | 0.0824 | - |
| 0.4177 | 1020 | 0.0897 | - |
| 0.4218 | 1030 | 0.0954 | - |
| 0.4259 | 1040 | 0.0815 | - |
| 0.4300 | 1050 | 0.0806 | - |
| 0.4341 | 1060 | 0.0918 | - |
| 0.4382 | 1070 | 0.0851 | - |
| 0.4423 | 1080 | 0.0888 | - |
| 0.4464 | 1090 | 0.0863 | - |
| 0.4505 | 1100 | 0.0856 | - |
| 0.4545 | 1110 | 0.0809 | - |
| 0.4586 | 1120 | 0.085 | - |
| 0.4627 | 1130 | 0.0756 | - |
| 0.4668 | 1140 | 0.0836 | - |
| 0.4709 | 1150 | 0.0815 | - |
| 0.4750 | 1160 | 0.084 | - |
| 0.4791 | 1170 | 0.0751 | - |
| 0.4832 | 1180 | 0.0794 | - |
| 0.4873 | 1190 | 0.0844 | - |
| 0.4914 | 1200 | 0.0835 | - |
| 0.4955 | 1210 | 0.0798 | - |
| 0.4996 | 1220 | 0.0825 | - |
| 0.5037 | 1230 | 0.0796 | - |
| 0.5078 | 1240 | 0.0758 | - |
| 0.5119 | 1250 | 0.0765 | - |
| 0.5160 | 1260 | 0.0806 | - |
| 0.5201 | 1270 | 0.072 | - |
| 0.5242 | 1280 | 0.0775 | - |
| 0.5283 | 1290 | 0.076 | - |
| 0.5324 | 1300 | 0.0767 | - |
| 0.5364 | 1310 | 0.0782 | - |
| 0.5405 | 1320 | 0.07 | - |
| 0.5446 | 1330 | 0.0724 | - |
| 0.5487 | 1340 | 0.0703 | - |
| 0.5528 | 1350 | 0.072 | - |
| 0.5569 | 1360 | 0.0763 | - |
| 0.5610 | 1370 | 0.0703 | - |
| 0.5651 | 1380 | 0.0688 | - |
| 0.5692 | 1390 | 0.0703 | - |
| 0.5733 | 1400 | 0.0659 | - |
| 0.5774 | 1410 | 0.0688 | - |
| 0.5815 | 1420 | 0.0713 | - |
| 0.5856 | 1430 | 0.0722 | - |
| 0.5897 | 1440 | 0.0682 | - |
| 0.5938 | 1450 | 0.07 | - |
| 0.5979 | 1460 | 0.0649 | - |
| 0.6020 | 1470 | 0.0659 | - |
| 0.6061 | 1480 | 0.0675 | - |
| 0.6102 | 1490 | 0.0629 | - |
| 0.6143 | 1500 | 0.0683 | - |
| 0.6183 | 1510 | 0.0687 | - |
| 0.6224 | 1520 | 0.0724 | - |
| 0.6265 | 1530 | 0.0638 | - |
| 0.6306 | 1540 | 0.0709 | - |
| 0.6347 | 1550 | 0.064 | - |
| 0.6388 | 1560 | 0.0646 | - |
| 0.6429 | 1570 | 0.0673 | - |
| 0.6470 | 1580 | 0.0607 | - |
| 0.6511 | 1590 | 0.0671 | - |
| 0.6552 | 1600 | 0.0627 | - |
| 0.6593 | 1610 | 0.0644 | - |
| 0.6634 | 1620 | 0.0629 | - |
| 0.6675 | 1630 | 0.0656 | - |
| 0.6716 | 1640 | 0.0633 | - |
| 0.6757 | 1650 | 0.062 | - |
| 0.6798 | 1660 | 0.0627 | - |
| 0.6839 | 1670 | 0.0583 | - |
| 0.6880 | 1680 | 0.0612 | - |
| 0.6921 | 1690 | 0.066 | - |
| 0.6962 | 1700 | 0.0645 | - |
| 0.7002 | 1710 | 0.0599 | - |
| 0.7043 | 1720 | 0.0552 | - |
| 0.7084 | 1730 | 0.065 | - |
| 0.7125 | 1740 | 0.0614 | - |
| 0.7166 | 1750 | 0.0615 | - |
| 0.7207 | 1760 | 0.0567 | - |
| 0.7248 | 1770 | 0.0528 | - |
| 0.7289 | 1780 | 0.0541 | - |
| 0.7330 | 1790 | 0.0548 | - |
| 0.7371 | 1800 | 0.0568 | - |
| 0.7412 | 1810 | 0.053 | - |
| 0.7453 | 1820 | 0.0603 | - |
| 0.7494 | 1830 | 0.0594 | - |
| 0.7535 | 1840 | 0.0549 | - |
| 0.7576 | 1850 | 0.0601 | - |
| 0.7617 | 1860 | 0.0604 | - |
| 0.7658 | 1870 | 0.0524 | - |
| 0.7699 | 1880 | 0.057 | - |
| 0.7740 | 1890 | 0.057 | - |
| 0.7781 | 1900 | 0.0551 | - |
| 0.7821 | 1910 | 0.0574 | - |
| 0.7862 | 1920 | 0.0555 | - |
| 0.7903 | 1930 | 0.0564 | - |
| 0.7944 | 1940 | 0.052 | - |
| 0.7985 | 1950 | 0.054 | - |
| 0.8026 | 1960 | 0.0573 | - |
| 0.8067 | 1970 | 0.056 | - |
| 0.8108 | 1980 | 0.0503 | - |
| 0.8149 | 1990 | 0.0525 | - |
| 0.8190 | 2000 | 0.0505 | - |
| 0.8231 | 2010 | 0.0547 | - |
| 0.8272 | 2020 | 0.0531 | - |
| 0.8313 | 2030 | 0.0534 | - |
| 0.8354 | 2040 | 0.0542 | - |
| 0.8395 | 2050 | 0.0536 | - |
| 0.8436 | 2060 | 0.0512 | - |
| 0.8477 | 2070 | 0.0508 | - |
| 0.8518 | 2080 | 0.0517 | - |
| 0.8559 | 2090 | 0.0516 | - |
| 0.8600 | 2100 | 0.0558 | - |
| 0.8640 | 2110 | 0.0571 | - |
| 0.8681 | 2120 | 0.0536 | - |
| 0.8722 | 2130 | 0.0561 | - |
| 0.8763 | 2140 | 0.0489 | - |
| 0.8804 | 2150 | 0.0513 | - |
| 0.8845 | 2160 | 0.0455 | - |
| 0.8886 | 2170 | 0.0479 | - |
| 0.8927 | 2180 | 0.0498 | - |
| 0.8968 | 2190 | 0.0523 | - |
| 0.9009 | 2200 | 0.0513 | - |
| 0.9050 | 2210 | 0.049 | - |
| 0.9091 | 2220 | 0.0504 | - |
| 0.9132 | 2230 | 0.0462 | - |
| 0.9173 | 2240 | 0.0469 | - |
| 0.9214 | 2250 | 0.0501 | - |
| 0.9255 | 2260 | 0.046 | - |
| 0.9296 | 2270 | 0.0475 | - |
| 0.9337 | 2280 | 0.0504 | - |
| 0.9378 | 2290 | 0.0483 | - |
| 0.9419 | 2300 | 0.0536 | - |
| 0.9459 | 2310 | 0.0442 | - |
| 0.9500 | 2320 | 0.0499 | - |
| 0.9541 | 2330 | 0.0478 | - |
| 0.9582 | 2340 | 0.0499 | - |
| 0.9623 | 2350 | 0.048 | - |
| 0.9664 | 2360 | 0.0451 | - |
| 0.9705 | 2370 | 0.0501 | - |
| 0.9746 | 2380 | 0.0464 | - |
| 0.9787 | 2390 | 0.0451 | - |
| 0.9828 | 2400 | 0.0413 | - |
| 0.9869 | 2410 | 0.0478 | - |
| 0.9910 | 2420 | 0.0466 | - |
| 0.9951 | 2430 | 0.0515 | - |
| 0.9992 | 2440 | 0.0484 | - |
| 1.0 | 2442 | - | 0.994 |
</details>
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.3.0
- Transformers: 4.48.0.dev0
- PyTorch: 2.4.0
- Accelerate: 1.2.1
- Datasets: 2.21.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"SCIFACT"
] |
KappaNeuro/stop-motion-animation | KappaNeuro | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"animation",
"style",
"stop-motion animation",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | 2023-09-14T10:51:56Z | 2023-09-14T10:52:00+00:00 | 868 | 14 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- animation
- style
- stop-motion animation
instance_prompt: Stop-Motion Animation
widget:
- text: 'Stop-Motion Animation - In this claymation or plasticine style artwork we
find ourselves in a university lecture hall during a crucial final exam. The scene
is characterized by an atmosphere of exhaustion and desperation. The focal point
is a student who, burdened by the weight of the academic challenge, displays visible
signs of weariness and distress. Visual Elements: Medium: Claymation - The artwork
takes on the distinct aesthetic of claymation or plasticine style, but light and
full of color, lending a tactile and textured quality to the scene. Setting: Lecture
Hall - The backdrop consists of a traditional university lecture hall, complete
with rows of desks and chairs. Lighting: The overall lighting in the scene is
bright and colorful. Student Character: Desperation and Exhaustion - The student
at the center of the artwork is visibly drained and disheveled. Their posture
is slouched, with sagging shoulders and tired eyes that betray their mental and
physical exhaustion. The character''s face is etched with anxiety, highlighting
the intensity of the final exam. Symbolic Props: textbooks and crumpled notes
and pen and paper on the desks - Surrounding the student''s desk are scattered
remnants of study materials. Surrounding Students: Anxious camaraderie - The surrounding
students in the lecture hall also bear signs of weariness and anxiety. Artistic
References: Elements reminiscent of the stop-motion techniques employed by Aardman
Animations, known for their iconic characters like Wallace and Gromit and Shaun
the Sheep.'
- text: Stop-Motion Animation - Craft a stop-motion animation that fuses the inventive
charm of Laika Studios with the comedic office environment of The Office, featuring
a withering, animated seedling personified amidst an upbeat office setting. Bathe
the scene in soft, natural light from office windows, subtly emphasizing the seedling's
plight. Use a color palette marked by dull greens of the seedling set against
bright, lively office colors to underline the seedling's melancholic state. The
composition should be a medium shot of the seedling character, with the office
antics unfolding in the background.
- text: Stop-Motion Animation - the Epic Battle of Ink and Pages, anthropomorphic
books and pens clash in a literary showdown. The books, ancient, unleash their
stories as weapons, pens scribble, battlefield, ink 2 in the navy and crimson
style, superb garment detail, diverse curatorial style, brimming with hidden details
- text: Stop-Motion Animation - surreal retro 3d diorama, in the style of Florence
Thomas,Adobe Photoshop, ultra HD, strong perspective, depth of field view finder
lens, detailed scenes, SMC Takumar 35mm f/ 2. 8 c 50v 5
- text: Stop-Motion Animation - Photo of a Teacher doll made of clay. Bright background
in one color. space to the left. Bright & simple image that could be used in textbooks.
3dcg. Refreshing image.
- text: Stop-Motion Animation - A medium film shot, of Harold, 40yr old man, glasses,
and tech engineer, good looking but thin, staring mouth agape at a strange creature
standing on hus desk
- text: Stop-Motion Animation - character with aluminium foil kid style walking for
stop motion, add a hand in frame or little sticks linking to character hands
- text: Stop-Motion Animation - Cinematic colourful lomographic minimalist rotoscope
claymation. A Confident program manager from Meta working at Stripe
- text: Stop-Motion Animation - plasticine, a sad man walks down the street to work
with a suitcase in his hands, full body character CLAYMATION
- text: Stop-Motion Animation - stop motion film of toys that have come to life, cute,
happy, charaters with a cinema-camera filming a scene
---
# Stop-Motion Animation ([CivitAI](https://civitai.com/models/78526))

> Stop-Motion Animation - In this claymation or plasticine style artwork we find ourselves in a university lecture hall during a crucial final exam. The scene is characterized by an atmosphere of exhaustion and desperation. The focal point is a student who, burdened by the weight of the academic challenge, displays visible signs of weariness and distress. Visual Elements: Medium: Claymation - The artwork takes on the distinct aesthetic of claymation or plasticine style, but light and full of color, lending a tactile and textured quality to the scene. Setting: Lecture Hall - The backdrop consists of a traditional university lecture hall, complete with rows of desks and chairs. Lighting: The overall lighting in the scene is bright and colorful. Student Character: Desperation and Exhaustion - The student at the center of the artwork is visibly drained and disheveled. Their posture is slouched, with sagging shoulders and tired eyes that betray their mental and physical exhaustion. The character's face is etched with anxiety, highlighting the intensity of the final exam. Symbolic Props: textbooks and crumpled notes and pen and paper on the desks - Surrounding the student's desk are scattered remnants of study materials. Surrounding Students: Anxious camaraderie - The surrounding students in the lecture hall also bear signs of weariness and anxiety. Artistic References: Elements reminiscent of the stop-motion techniques employed by Aardman Animations, known for their iconic characters like Wallace and Gromit and Shaun the Sheep.
<p>Stop-motion animation is a filmmaking technique that involves manipulating physical objects or figures incrementally and capturing them frame by frame to create the illusion of movement.</p><p>In stop-motion animation, objects or characters are physically moved or adjusted slightly between each frame, and a series of photographs is taken. When the frames are played in rapid succession, the still images create the illusion of movement.</p><p>Stop-motion animation requires patience, precision, and attention to detail. It can be time-consuming, as hundreds or even thousands of frames are needed to create a smooth animation sequence.</p><p>With the advancement of digital technology, stop-motion animation can be enhanced with computer-generated effects, sound effects, and post-production editing to create a more polished final product.</p><p>Stop-motion animation has been used in various forms of media, including films, television shows, commercials, and music videos. It offers a unique visual style and allows for creative storytelling possibilities, capturing the charm and tactile nature of physical objects in motion.</p>
## Image examples for the model:

> Stop-Motion Animation - Craft a stop-motion animation that fuses the inventive charm of Laika Studios with the comedic office environment of The Office, featuring a withering, animated seedling personified amidst an upbeat office setting. Bathe the scene in soft, natural light from office windows, subtly emphasizing the seedling's plight. Use a color palette marked by dull greens of the seedling set against bright, lively office colors to underline the seedling's melancholic state. The composition should be a medium shot of the seedling character, with the office antics unfolding in the background.

> Stop-Motion Animation - the Epic Battle of Ink and Pages, anthropomorphic books and pens clash in a literary showdown. The books, ancient, unleash their stories as weapons, pens scribble, battlefield, ink 2 in the navy and crimson style, superb garment detail, diverse curatorial style, brimming with hidden details

> Stop-Motion Animation - surreal retro 3d diorama, in the style of Florence Thomas, Adobe Photoshop, ultra HD, strong perspective, depth of field view finder lens, detailed scenes, SMC Takumar 35mm f/ 2. 8 c 50v 5

> Stop-Motion Animation - Photo of a Teacher doll made of clay. Bright background in one color. space to the left. Bright & simple image that could be used in textbooks. 3dcg. Refreshing image.

> Stop-Motion Animation - A medium film shot, of Harold, 40yr old man, glasses, and tech engineer, good looking but thin, staring mouth agape at a strange creature standing on his desk

> Stop-Motion Animation - character with aluminium foil kid style walking for stop motion, add a hand in frame or little sticks linking to character hands

> Stop-Motion Animation - Cinematic colourful lomographic minimalist rotoscope claymation. A Confident program manager from Meta working at Stripe

> Stop-Motion Animation - plasticine, a sad man walks down the street to work with a suitcase in his hands, full body character CLAYMATION

> Stop-Motion Animation - stop motion film of toys that have come to life, cute, happy, characters with a cinema-camera filming a scene
| [
"BEAR",
"CRAFT"
] |
jinaai/jina-colbert-v1-en | jinaai | null | [
"transformers",
"safetensors",
"bert",
"ColBERT",
"passage-retrieval",
"custom_code",
"en",
"dataset:ms_marco",
"arxiv:2310.19923",
"arxiv:2108.12409",
"arxiv:2004.12832",
"arxiv:2112.01488",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-01-23T09:23:52Z | 2025-01-06T16:23:57+00:00 | 855 | 99 | ---
datasets:
- ms_marco
language:
- en
license: apache-2.0
tags:
- ColBERT
- passage-retrieval
---
<br><br>
<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>
<p align="center">
<b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
# Jina-ColBERT
**Jina-ColBERT is a ColBERT-style model based on JinaBERT, so it supports both an _8k context length_ and _fast, accurate retrieval_.**
[JinaBERT](https://arxiv.org/abs/2310.19923) is a BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence lengths. The Jina-ColBERT model is trained on the MS MARCO passage ranking dataset, following a training procedure very similar to ColBERTv2's. The only difference is that we use `jina-bert-v2-base-en` as the backbone instead of `bert-base-uncased`.
For more information about ColBERT, please refer to the [ColBERTv1](https://arxiv.org/abs/2004.12832) and [ColBERTv2](https://arxiv.org/abs/2112.01488v3) paper, and [the original code](https://github.com/stanford-futuredata/ColBERT).
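For readers new to ColBERT: instead of compressing a text into a single vector, ColBERT keeps one embedding per token and scores a query-document pair by late interaction (MaxSim). A toy NumPy sketch with made-up 2-dimensional embeddings (real models produce per-token vectors of much higher dimension):

```python
# Toy illustration of ColBERT's MaxSim late-interaction scoring.
import numpy as np

def maxsim_score(Q, D):
    """Q: (num_query_tokens, dim), D: (num_doc_tokens, dim).
    For each query token, take the max dot-product similarity over
    all document tokens, then sum across query tokens."""
    sim = Q @ D.T                  # (q_tokens, d_tokens) similarity matrix
    return sim.max(axis=1).sum()   # best doc-token match per query token

Q = np.array([[1.0, 0.0], [0.0, 1.0]])              # two query-token vectors
D = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])  # three doc-token vectors
score = maxsim_score(Q, D)
```

Each query token is matched to its closest document token, which is what lets ColBERT retain fine-grained token-level signals that single-vector models average away.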
## Usage
### Installation
To use this model, you will need to install the **latest version** of the ColBERT repository:
```bash
pip install git+https://github.com/stanford-futuredata/ColBERT.git torch
conda install -c conda-forge faiss-gpu # use conda to install the latest version of faiss
```
### Indexing
```python
from colbert import Indexer
from colbert.infra import Run, RunConfig, ColBERTConfig
n_gpu: int = 1 # Set your number of available GPUs
experiment: str = "" # Name of the folder where the logs and created indices will be stored
index_name: str = "" # The name of your index, i.e. the name of your vector database
if __name__ == "__main__":
with Run().context(RunConfig(nranks=n_gpu, experiment=experiment)):
config = ColBERTConfig(
doc_maxlen=8192 # Our model supports 8k context length for indexing long documents
)
indexer = Indexer(
checkpoint="jinaai/jina-colbert-v1-en",
config=config,
)
documents = [
"ColBERT is an efficient and effective passage retrieval model.",
"Jina-ColBERT is a ColBERT-style model but based on JinaBERT so it can support both 8k context length.",
"JinaBERT is a BERT architecture that supports the symmetric bidirectional variant of ALiBi to allow longer sequence length.",
"Jina-ColBERT model is trained on MSMARCO passage ranking dataset, following a very similar training procedure with ColBERTv2.",
"Jina-ColBERT achieves the competitive retrieval performance with ColBERTv2.",
"Jina is an easier way to build neural search systems.",
"You can use Jina-ColBERT to build neural search systems with ease.",
        # Add more documents here to ensure the clustering works correctly
]
indexer.index(name=index_name, collection=documents)
```
### Searching
```python
from colbert import Searcher
from colbert.infra import Run, RunConfig, ColBERTConfig
n_gpu: int = 0
experiment: str = "" # Name of the folder where the logs and created indices will be stored
index_name: str = "" # Name of your previously created index where the documents you want to search are stored.
k: int = 10 # how many results you want to retrieve
if __name__ == "__main__":
with Run().context(RunConfig(nranks=n_gpu, experiment=experiment)):
config = ColBERTConfig(
            query_maxlen=128 # Although the model supports 8k context length, we suggest not using a very long query, as it can significantly increase computational cost and CUDA memory usage.
)
searcher = Searcher(
index=index_name,
config=config
) # You don't need to specify the checkpoint again, the model name is stored in the index.
query = "How to use ColBERT for indexing long documents?"
results = searcher.search(query, k=k)
# results: tuple of tuples of length k containing ((passage_id, passage_rank, passage_score), ...)
```
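The `results` tuple pairs passage ids, ranks, and scores, but not the passage text itself. Below is a minimal sketch of mapping hits back to the indexed collection, assuming the tuple-of-tuples shape described in the comment above; the names and fake results here are illustrative only:

```python
# The passages indexed earlier, in the same order as their passage ids.
documents = [
    "ColBERT is an efficient and effective passage retrieval model.",
    "Jina is an easier way to build neural search systems.",
]

def format_results(results, documents):
    """Pair each (passage_id, passage_rank, passage_score) hit with its text."""
    return [(rank, documents[pid], score) for pid, rank, score in results]

# A fake result tuple in the shape described above, for illustration only.
fake_results = ((1, 1, 0.93), (0, 2, 0.87))
for rank, text, score in format_results(fake_results, documents):
    print(f"#{rank} ({score:.2f}): {text}")
```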
### Creating Vectors
```python
from colbert.infra import ColBERTConfig
from colbert.modeling.checkpoint import Checkpoint

ckpt = Checkpoint("jinaai/jina-colbert-v1-en", colbert_config=ColBERTConfig(root="experiments"))
query_vectors = ckpt.queryFromText(["What does ColBERT do?", "This is a search query?"], bsize=16)
print(query_vectors)
```
A complete working Colab notebook is available [here](https://colab.research.google.com/drive/1-5WGEYPSBNBg-Z0bGFysyvckFuM8imrg).
### Reranking Using ColBERT
```python
import numpy
import torch

from colbert.infra import ColBERTConfig
from colbert.modeling.checkpoint import Checkpoint
from colbert.modeling.colbert import colbert_score

query = ["How to use ColBERT for indexing long documents?"]
documents = [
    "ColBERT is an efficient and effective passage retrieval model.",
    "Jina-ColBERT is a ColBERT-style model but based on JinaBERT so it can support 8k context length.",
    "JinaBERT is a BERT architecture that supports the symmetric bidirectional variant of ALiBi to allow longer sequence length.",
    "Jina-ColBERT model is trained on MSMARCO passage ranking dataset, following a very similar training procedure with ColBERTv2.",
]
config = ColBERTConfig(query_maxlen=32, doc_maxlen=512)
ckpt = Checkpoint("jinaai/jina-colbert-v1-en", colbert_config=config)
Q = ckpt.queryFromText(query)
D = ckpt.docFromText(documents, bsize=32)[0]
D_mask = torch.ones(D.shape[:2], dtype=torch.long)
scores = colbert_score(Q, D, D_mask).flatten().cpu().numpy().tolist()
ranking = numpy.argsort(scores)[::-1]
print(ranking)
```
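Under the hood, `colbert_score` implements ColBERT's late-interaction (MaxSim) scoring: each query token embedding takes its maximum similarity over all document token embeddings, and these maxima are summed. A minimal sketch of the idea with plain tensors follows; it is not the library's exact implementation, which also handles masking and batching:

```python
import torch

def maxsim_score(Q, D):
    """ColBERT-style late interaction.
    Q: (num_query_tokens, dim), D: (num_doc_tokens, dim).
    For each query token, take the max dot product over doc tokens, then sum."""
    sim = Q @ D.T  # (num_query_tokens, num_doc_tokens) token-level similarities
    return sim.max(dim=1).values.sum().item()

# Toy embeddings: the first "document" contains the query tokens exactly,
# so it should score higher than an unrelated one.
Q = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
D_match = torch.tensor([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
D_other = torch.tensor([[-1.0, 0.0], [0.0, -1.0]])
print(maxsim_score(Q, D_match), maxsim_score(Q, D_other))  # → 2.0 0.0
```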
## Evaluation Results
**TL;DR:** Jina-ColBERT achieves retrieval performance competitive with [ColBERTv2](https://huggingface.co/colbert-ir/colbertv2.0) on all benchmarks, and outperforms ColBERTv2 on datasets where documents have longer context.
### In-domain benchmarks
We evaluate in-domain performance on the dev subset of the MSMARCO passage ranking dataset. We follow the same evaluation settings as the ColBERTv2 paper and rerun ColBERTv2 using the released checkpoint.
| Model | MRR@10 | Recall@50 | Recall@1k |
| --- | :---: | :---: | :---: |
| ColBERTv2 | 39.7 | 86.8 | 97.6 |
| Jina-ColBERT-v1 | 39.0 | 85.6 | 96.2 |
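The MRR@10 metric reported above rewards ranking a relevant passage near the top: each query contributes the reciprocal rank of its first relevant hit within the top 10 (or 0 if none), averaged over queries. A minimal sketch:

```python
def mrr_at_k(rankings, k=10):
    """Mean Reciprocal Rank at cutoff k.
    `rankings` is a list of per-query lists of booleans, ordered by rank,
    where True marks a relevant passage."""
    total = 0.0
    for relevances in rankings:
        for rank, is_relevant in enumerate(relevances[:k], start=1):
            if is_relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(rankings)

# Two queries: the first hits at rank 1, the second at rank 2.
print(mrr_at_k([[True, False], [False, True]]))  # → 0.75
```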
### Out-of-domain benchmarks
Following ColBERTv2, we evaluate out-of-domain performance on 13 public BEIR datasets, using NDCG@10 as the main metric. We follow the same evaluation settings as the ColBERTv2 paper and rerun ColBERTv2 using the released checkpoint.
Note that both ColBERTv2 and Jina-ColBERT-v1 are trained only on the MSMARCO passage ranking dataset, so the results below reflect fully zero-shot performance.
| dataset | ColBERTv2 | Jina-ColBERT-v1 |
| --- | :---: | :---: |
| ArguAna | 46.5 | 49.4 |
| ClimateFEVER | 18.1 | 19.6 |
| DBPedia | 45.2 | 41.3 |
| FEVER | 78.8 | 79.5 |
| FiQA | 35.4 | 36.8 |
| HotPotQA | 67.5 | 65.6 |
| NFCorpus | 33.7 | 33.8 |
| NQ | 56.1 | 54.9 |
| Quora | 85.5 | 82.3 |
| SCIDOCS | 15.4 | 16.9 |
| SciFact | 68.9 | 70.1 |
| TREC-COVID | 72.6 | 75.0 |
| Webis-touché2020 | 26.0 | 27.0 |
| Average | 50.0 | 50.2 |
### Long context datasets
We also evaluate the zero-shot performance on datasets where documents have longer context length and compare with some long-context embedding models. Here we use the [LoCo benchmark](https://www.together.ai/blog/long-context-retrieval-models-with-monarch-mixer), which contains 5 datasets with long context length.
| Model | Used context length | Model max context length | Avg. NDCG@10 |
| --- | :---: | :---: | :---: |
| ColBERTv2 | 512 | 512 | 74.3 |
| Jina-ColBERT-v1 (truncated) | 512* | 8192 | 75.5 |
| Jina-ColBERT-v1 | 8192 | 8192 | 83.7 |
| Jina-embeddings-v2-base-en | 8192 | 8192 | **85.4** |
\* denotes that we truncate documents to a context length of 512. The query context length is 512 in all cases.
**To summarize, Jina-ColBERT achieves retrieval performance comparable to ColBERTv2 on all benchmarks, and outperforms ColBERTv2 on datasets where documents have longer context.**
### Reranking Performance
We evaluate the reranking performance of ColBERTv2 and Jina-ColBERT on BEIR. We use BM25 as the first-stage retrieval model. The full evaluation code can be found in [this repo](https://github.com/liuqi6777/eval_reranker).
In summary, Jina-ColBERT outperforms ColBERTv2, even achieving performance comparable to some cross-encoders.
The best model, jina-reranker, will be open-sourced soon!
| Dataset | BM25 | ColBERTv2 | Jina-ColBERT | MiniLM-L-6-v2 | BGE-reranker-base-v1 | BGE-reranker-large-v1 | Jina-reranker-base-v1 |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ArguAna | 29.99 | 33.42 | 33.95 | 30.67 | 23.26 | 25.42 | 42.59 |
| ClimateFEVER | 16.51 | 20.66 | 21.87 | 24.70 | 31.60 | 31.98 | 25.49 |
| DBPedia | 31.80 | 42.16 | 41.43 | 43.90 | 41.56 | 43.79 | 43.68 |
| FEVER | 65.13 | 81.07 | 83.49 | 80.77 | 87.07 | 89.11 | 86.10 |
| FiQA | 23.61 | 35.60 | 36.68 | 34.87 | 33.17 | 37.70 | 41.38 |
| HotPotQA | 63.30 | 68.84 | 68.62 | 72.65 | 79.04 | 79.98 | 75.61 |
| NFCorpus | 33.75 | 36.69 | 36.38 | 36.48 | 32.71 | 36.57 | 37.73 |
| NQ | 30.55 | 51.27 | 51.01 | 52.01 | 53.55 | 56.81 | 56.82 |
| Quora | 78.86 | 85.18 | 82.75 | 82.45 | 78.44 | 81.06 | 87.31 |
| SCIDOCS | 14.90 | 15.39 | 16.67 | 16.28 | 15.06 | 16.84 | 19.56 |
| SciFact | 67.89 | 70.23 | 70.95 | 69.53 | 70.62 | 74.14 | 75.01 |
| TREC-COVID | 59.47 | 75.00 | 76.89 | 74.45 | 67.46 | 74.32 | 82.09 |
| Webis-Touché2020 | 44.22 | 32.12 | 32.56 | 28.40 | 34.37 | 35.66 | 31.62 |
| Average | 43.08 | 49.82 | 50.25 | 49.78 | 49.84 | 52.57 | **54.23** |
## Plans
We are planning to improve the performance of Jina-ColBERT by fine-tuning on more datasets in the future.
## Other Models
Additionally, we provide the following embedding models, which you can also use for retrieval:
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): 161 million parameters Chinese-English bilingual model.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): 161 million parameters German-English bilingual model.
- [`jina-embeddings-v2-base-es`](https://huggingface.co/jinaai/jina-embeddings-v2-base-es): 161 million parameters Spanish-English bilingual model.
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas. | [
"SCIFACT"
] |
mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"axolotl",
"generated_from_trainer",
"phi",
"phi2",
"einstein",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:glaiveai/glaive-code-assistant",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"base_model:Weyaxi/Einstein-v4-Qwen-1.5-32B",
"base_model:quantized:Weyaxi/Einstein-v4-Qwen-1.5-32B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2024-06-16T21:51:10Z | 2024-08-02T10:31:57+00:00 | 847 | 3 | ---
base_model: Weyaxi/Einstein-v4-Qwen-1.5-32B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- glaiveai/glaive-code-assistant
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
language:
- en
library_name: transformers
license: other
tags:
- axolotl
- generated_from_trainer
- phi
- phi2
- einstein
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Weyaxi/Einstein-v4-Qwen-1.5-32B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-Qwen-1.5-32B-i1-GGUF/resolve/main/Einstein-v4-Qwen-1.5-32B.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"SCIQ"
] |
Salesforce/xgen-mm-phi3-mini-instruct-r-v1 | Salesforce | image-text-to-text | [
"transformers",
"safetensors",
"xgenmm",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"arxiv:2408.08872",
"license:cc-by-nc-4.0",
"region:us"
] | 2024-05-06T05:19:06Z | 2025-02-03T06:26:42+00:00 | 843 | 185 | ---
language:
- en
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
---
# 📣 News
📌 [08/19/2024] xGen-MM-v1.5 released:
- [🤗 xgen-mm-phi3-mini-instruct-interleave-r-v1.5](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-interleave-r-v1.5)
- [🤗 xgen-mm-phi3-mini-base-r-v1.5](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-base-r-v1.5)
- [🤗 xgen-mm-phi3-mini-instruct-singleimg-r-v1.5](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-singleimg-r-v1.5)
- [🤗 xgen-mm-phi3-mini-instruct-dpo-r-v1.5](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-dpo-r-v1.5)
# Model description
We are excited to announce the continuation and rebranding of our **BLIP series** into **XGen-MM**, to better align with Salesforce's unified XGen initiative for large foundation models! This rebranding marks a significant step in our ongoing development of cutting-edge multimodal technologies.
`XGen-MM` is a series of the latest foundational Large Multimodal Models (LMMs) developed by Salesforce AI Research. This series advances upon the successful designs of the `BLIP` series, incorporating fundamental enhancements that ensure a more robust and superior foundation. \
These models have been trained at scale on high-quality image caption datasets and interleaved image-text data. XGen-MM highlights the following features:
* The **pretrained** foundation model, `xgen-mm-phi3-mini-base-r-v1`, achieves state-of-the-art performance under 5b parameters and demonstrates strong in-context learning capabilities.
* The **instruct** fine-tuned model, `xgen-mm-phi3-mini-instruct-r-v1`, achieves state-of-the-art performance among open-source and closed-source VLMs under 5b parameters.
* `xgen-mm-phi3-mini-instruct-r-v1` supports flexible high-resolution image encoding with efficient visual token sampling.
More technical details will come with a technical report soon.
# Results
### Pretrain (base model without instruction tuning)
| Model | Shot | COCO (val) | NoCaps (val) | TextCaps (val) | OKVQA (val) | TextVQA (val) | VizWiz (testdev) | VQAv2 (testdev) |
|-------------|------|------------|--------------|----------------|--------------|---------------|------------------|-----------------|
| Flamingo-3B | 4 | 85.0 | - | - | 43.3 | 32.7 | 34 | 53.2 |
| | 8 | 90.6 | - | - | 44.6 | 32.4 | 38.4 | 55.4 |
| MM1-3B | 0 | 73.5 | 55.6 | 63.3 | 26.1 | 29.4 | 15.6 | 46.2 |
| | 4 | 112.3 | 99.7 | 84.1 | 48.6 | 45.3 | 38.0 | 57.9 |
| | 8 | 114.6 | 104.7 | 88.8 | 48.4 | 44.6 | 46.4 | 63.6 |
| **xgen-mm-phi3-mini-base-r-v1 (Ours)**| 0 | **81.7** | **80.2** | 60.7 | **26.5** | **36.0** | **21.2** | **48.1** |
| | 4 | 110.5 | **101.7** | **84.6** | **49.2** | **46.1** | **38.4** | **63.9** |
| | 8 | 112.1 | 104.4 | 87.7 | **49.1** | **46.4** | 44.3 | **63.8** |
### Instruct (after instruction tuning)
| Model | SEED-IMG | MMBench(dev) | MME-total | MME-P | MME-C | MMStar | MMMU (val) | MMVet | MathVista (mini) | ScienceQA (test) | POPE | AI2D | |
|----------------------------|----------|--------------|-----------|----------|---------|----------|------------|----------|------------------|------------------|----------|----------|---|
| MM1-3B-Chat | 68.8 | 67.8 | 1761 | **1482** | 279 | - | 33.9 | 43.7 | - | - | **87.4** | - | |
| openbmb/MiniCPM-V-2 | 67.1 | 69.6 | 1808 | - | - | - | 38.2 | - | 38.7 | - | - | - | |
| VILA1.5-3B | 67.9 | 63.4 | - | 1442 | - | - | 33.3 | 35.4 | - | 69.0 | 85.9 | - | |
| xtuner/llava-phi-3-mini-hf | 70.0 | 69.2 | 1790 | 1477 | 313 | 43.7 | **41.4** | - | - | 73.7 | 87.3 | 69.3 | |
| **xgen-mm-phi3-mini-instruct-r-v1 (Ours)** | **72.1** | **74.1** | **1827** | 1467 | **360** | **44.6** | 39.8 | **45.1** | **39.3** | **74.2** | 87.2 | **75.8** | |
# How to use
~~> We require the use of the development version (`"4.41.0.dev0"`) of the `transformers` library. To get it, as of 05/07/2024, one can use `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers.`~~
```python
from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoImageProcessor, StoppingCriteria
import torch
import requests
from PIL import Image
# define the prompt template
def apply_prompt_template(prompt):
s = (
'<|system|>\nA chat between a curious user and an artificial intelligence assistant. '
"The assistant gives helpful, detailed, and polite answers to the user's questions.<|end|>\n"
f'<|user|>\n<image>\n{prompt}<|end|>\n<|assistant|>\n'
)
return s
class EosListStoppingCriteria(StoppingCriteria):
def __init__(self, eos_sequence = [32007]):
self.eos_sequence = eos_sequence
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
last_ids = input_ids[:,-len(self.eos_sequence):].tolist()
return self.eos_sequence in last_ids
# load models
model_name_or_path = "Salesforce/xgen-mm-phi3-mini-instruct-r-v1"
model = AutoModelForVision2Seq.from_pretrained(model_name_or_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True, use_fast=False, legacy=False)
image_processor = AutoImageProcessor.from_pretrained(model_name_or_path, trust_remote_code=True)
tokenizer = model.update_special_tokens(tokenizer)
# craft a test sample
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
query = "how many dogs are in the picture?"
model = model.cuda()
inputs = image_processor([raw_image], return_tensors="pt", image_aspect_ratio='anyres')
prompt = apply_prompt_template(query)
language_inputs = tokenizer([prompt], return_tensors="pt")
inputs.update(language_inputs)
inputs = {name: tensor.cuda() for name, tensor in inputs.items()}
generated_text = model.generate(**inputs, image_size=[raw_image.size],
pad_token_id=tokenizer.pad_token_id,
do_sample=False, max_new_tokens=768, top_p=None, num_beams=1,
stopping_criteria = [EosListStoppingCriteria()],
)
prediction = tokenizer.decode(generated_text[0], skip_special_tokens=True).split("<|end|>")[0]
print("==> prediction: ", prediction)
# output: ==> prediction: There is one dog in the picture.
```
More comprehensive examples can be found in the [notebook](demo.ipynb).
# Reproducibility:
Our SFT evaluation is based on the VLMEvalKit, in which we fixed some inconsistencies with the official benchmarks (e.g., LLM judge API). During our development, we noticed that the raw resolution of the input image would noticeably affect the model output in some cases.
# Bias, Risks, Limitations, and Ethical Considerations
The main data sources are from the internet, including webpages,
image stock sites, and curated datasets released by the research community. We have excluded certain data, such as LAION, due to known CSAM concerns.
The model may be subject to bias from the original data source, as well as bias from LLMs and commercial APIs.
We strongly recommend users assess safety and fairness before applying to downstream applications.
# Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
# License
Our code and weights are released under the Creative Commons Attribution Non Commercial 4.0 [LICENSE](LICENSE.txt). Please fill out a form at [here](https://forms.gle/ffPc9oZC2ZGeJ1N68) to consult the commercial use of model weights.
# Code acknowledgment
[LAVIS](https://github.com/salesforce/LAVIS) \
[openflamingo](https://github.com/mlfoundations/open_flamingo) \
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit/tree/main)
# Citation
```
@misc{xue2024xgenmmblip3familyopen,
title={xGen-MM (BLIP-3): A Family of Open Large Multimodal Models},
author={Le Xue and Manli Shu and Anas Awadalla and Jun Wang and An Yan and Senthil Purushwalkam and Honglu Zhou and Viraj Prabhu and Yutong Dai and Michael S Ryoo and Shrikant Kendre and Jieyu Zhang and Can Qin and Shu Zhang and Chia-Chih Chen and Ning Yu and Juntao Tan and Tulika Manoj Awalgaonkar and Shelby Heinecke and Huan Wang and Yejin Choi and Ludwig Schmidt and Zeyuan Chen and Silvio Savarese and Juan Carlos Niebles and Caiming Xiong and Ran Xu},
year={2024},
eprint={2408.08872},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2408.08872},
}
```
# Troubleshoot
1. If you missed any packages, please consider the following
```
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
pip install open_clip_torch==2.24.0
pip install einops
pip install einops-exts
pip install transformers==4.41.1
```
# Changelog
* 05/24/2024
* update codebase to be compatible with `transformers==4.41.1`. | [
"CHIA",
"CRAFT"
] |
HPAI-BSC/Llama3.1-Aloe-Beta-8B | HPAI-BSC | question-answering | [
"transformers",
"safetensors",
"llama",
"text-generation",
"biology",
"medical",
"healthcare",
"question-answering",
"en",
"dataset:HPAI-BSC/Aloe-Beta-General-Collection",
"dataset:HPAI-BSC/chain-of-diagnosis",
"dataset:HPAI-BSC/MedS-Ins",
"dataset:HPAI-BSC/ultramedical",
"dataset:HPAI-BSC/pubmedqa-cot-llama31",
"dataset:HPAI-BSC/medqa-cot-llama31",
"dataset:HPAI-BSC/medmcqa-cot-llama31",
"dataset:HPAI-BSC/headqa-cot-llama31",
"dataset:HPAI-BSC/MMLU-medical-cot-llama31",
"dataset:HPAI-BSC/Polymed-QA",
"arxiv:2405.01886",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-30T17:29:40Z | 2025-01-22T14:18:57+00:00 | 838 | 11 | ---
datasets:
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/chain-of-diagnosis
- HPAI-BSC/MedS-Ins
- HPAI-BSC/ultramedical
- HPAI-BSC/pubmedqa-cot-llama31
- HPAI-BSC/medqa-cot-llama31
- HPAI-BSC/medmcqa-cot-llama31
- HPAI-BSC/headqa-cot-llama31
- HPAI-BSC/MMLU-medical-cot-llama31
- HPAI-BSC/Polymed-QA
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/Aloe-Beta-General-Collection
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: question-answering
tags:
- biology
- medical
- healthcare
---
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/vg1jG1OgqP7yyE0PO-OMT.png">
<img alt="prompt_engine" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/vg1jG1OgqP7yyE0PO-OMT.png" width=50%>
</picture>
</p>
<h1 align="center">
Aloe: A Family of Fine-tuned Open Healthcare LLMs
</h1>
---
Llama3.1-Aloe-Beta-8B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in four model sizes: [7B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B/), [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B), [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B), and [72B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-72B). All models are trained using the same recipe, on top of two different families of models: Llama3.1 and Qwen2.5.
Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 7B and 8B versions get close to the performance of closed models like MedPalm-2 and GPT-4. With the same RAG system, Llama3.1-Aloe-Beta-70B and Qwen2.5-Aloe-Beta-72B outperform those private alternatives, producing state-of-the-art results.
# Aloe-Beta-8B

**Aloe-8B-Beta** is the latest iteration in the **Aloe family**, building and improving on the success of its predecessor, [Aloe-8B-Alpha](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha).
Beta more than triples the training data used by Alpha, for a total of **1.8B tokens**, including a wider variety of medical tasks and instructions (e.g., text summarization, explanation, diagnosis, text classification, treatment recommendation, ...).

To mitigate catastrophic forgetting and enable the model to effectively learn new capabilities like **function calling**, we incorporated a diverse set of high-quality general-purpose data constituting 20% of the total training set. The curated data includes some of the highest-quality content available across a range of topics, including mathematics, programming, STEM, and very long instructions (> 8k tokens), to enrich the model's adaptability and comprehension across diverse domains.
Beta also boosts the alignment and safety stages with respect to Alpha. This includes a [medical preference dataset](https://huggingface.co/datasets/TsinghuaC3I/UltraMedical-Preference), as well as the red-teaming dataset (available soon).
Complete training details, model merging configurations, and all training data (including synthetically generated data) can be found below. This includes [the RAG system](https://github.com/HPAI-BSC/prompt_engine) that was developed to test Aloe Beta in a deployment setup. Aloe comes with a healthcare-specific risk assessment to facilitate the safe use and deployment of such systems.
## Model Details
### Model Description
- **Developed by:** [HPAI](https://hpai.bsc.es/)
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** English (capable but not formally evaluated on other languages)
- **License:** This model is based on Meta Llama 3.1 8B and is governed by the [Meta Llama 3 License](https://www.llama.com/llama3_1/license/). All our modifications are available with a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license, making the Aloe Beta models **compatible with commercial use**.
- **Base model :** [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)
- **Paper:** (more coming soon)
- **RAG Repository:** https://github.com/HPAI-BSC/prompt_engine
### Model Sources
## Model Performance
Aloe Beta has been tested on the most popular healthcare QA datasets, with and without Medprompt inference technique. Results show competitive performance, achieving SOTA within models of the same size.

The Beta model has been developed to excel at several different medical tasks, so we evaluated it across a wide range of them:


We also compared the performance of the model in the general domain, using the OpenLLM Leaderboard benchmark. Aloe-Beta achieves results competitive with current SOTA general models on the most widely used general benchmarks, and outperforms the medical models:

## Uses
### Direct Use
We encourage the use of Aloe for research purposes, as a stepping stone to build better foundational models for healthcare. In production, Aloe should always be used under the supervision of a human expert.
### Out-of-Scope Use
These models are not to be used for clinical practice, medical diagnosis, or any other form of direct or indirect healthcare advice. Models are prone to error and can produce toxic content. The use of Aloe models for activities harmful to individuals, such as spam, fraud, or impersonation, is strictly prohibited. Minors should not be left alone to interact with Aloe without supervision.
## Bias, Risks, and Limitations
Aloe can produce toxic content under the appropriate prompts, and it includes multiple undesirable biases. While significant efforts were made to mitigate this (see Alignment details below), model safety cannot be fully guaranteed. We avoid the use of all personal data in our training.
We identify at least three risk cases specific to healthcare LLMs:
- Healthcare professional impersonation, a fraudulent behaviour which currently generates billions of dollars in [profit](https://www.justice.gov/opa/pr/justice-department-charges-dozens-12-billion-health-care-fraud). A model such as Aloe could be used to increase the efficacy of such deceptive activities, making them more widespread. The main preventive actions are public literacy on the unreliability of digitised information and the importance of medical registration, and legislation enforcing AI-generated content disclaimers.
- Medical decision-making without professional supervision. While this is already an issue in modern societies (e.g., self-medication), a model such as Aloe, capable of producing high-quality conversational data, can facilitate self-delusion, particularly in the presence of sycophancy. By producing tailored responses, it can also be used to generate actionable answers. Public literacy on the dangers of self-diagnosis is one of the main defenses, together with the introduction of disclaimers and warnings on the models' outputs.
- Access to information on dangerous substances or procedures. While the literature on sensitive content can already be found in different sources (e.g., libraries, the internet, the dark web), LLMs can centralize such access, making it nearly impossible to control the flow of such information. Model alignment can help in that regard, but so far the effects remain insufficient, as jailbreaking methods still overcome it.
<!---
Table below shows the performance of Aloe at several AI safety tasks:
TO BE UPDATED
<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/T6Jblpf1kmTkM04K716rM.png" width="95%">
We analyzed the safety and robustness of the model using red teaming techniques. We designed a benchmark using different types of attacks and analyzed the performance of Aloe and some extra models, and we confirm that our model is aligned properly and successfully resisting most attacks:


-->
## How to Get Started with the Model
Use the code below to get started with the model. You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples for both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "HPAI-BSC/Llama3.1-Aloe-Beta-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center(BSC). You are to be a helpful, respectful, and honest assistant."},
{"role": "user", "content": "Hello."},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "HPAI-BSC/Llama3.1-Aloe-Beta-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center(BSC). You are to be a helpful, respectful, and honest assistant."},
{"role": "user", "content": "Hello"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Training Details
### Supervised fine-tuning
SFT on top of Llama 3.1 using axolotl (https://github.com/axolotl-ai-cloud/axolotl).
We used DeepSpeed's ZeRO-3 distributed training on the following hardware:
* 8B: 32x NVIDIA Hopper H100 64GB of the *Marenostrum 5*.
* 70B: 64x NVIDIA Hopper H100 64GB of the *Marenostrum 5*.
<!---
^^^ TO BE COMPLETED AND DETAILED ^^^
-->
#### Training Data
The training set consists of around 1.8B tokens, comprising three types of data:
- Medical domain datasets. Includes data from 20 different medical tasks.
- [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
- [HPAI-BSC/chain-of-diagnosis](https://huggingface.co/datasets/HPAI-BSC/chain-of-diagnosis)
- [HPAI-BSC/MedS-Ins](https://huggingface.co/datasets/HPAI-BSC/MedS-Ins)
- [HPAI-BSC/ultramedical](https://huggingface.co/datasets/HPAI-BSC/ultramedical)
- Synthetic data. We expanded our training data by generating high-quality answers using Llama3.1-70B.
- [HPAI-BSC/pubmedqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/pubmedqa-cot-llama31)
- [HPAI-BSC/medqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medqa-cot-llama31)
- [HPAI-BSC/medmcqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medmcqa-cot-llama31)
- [HPAI-BSC/headqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/headqa-cot-llama31)
- [HPAI-BSC/MMLU-medical-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/MMLU-medical-cot-llama31)
- [HPAI-BSC/Polymed-QA](https://huggingface.co/datasets/HPAI-BSC/Polymed-QA)
- Genstruct data (coming soon)
- General data. It includes maths, STEM, code, function calling, and very long-context instructions.
- [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
#### Training parameters
- Epochs: 3
- Sequence length: 16384
- Optimizer: adamw_torch
- Learning rate: 2e-5
- Learning rate scheduler: cosine
- Warmup steps: 100
- Weight decay: 0
- Gradient checkpointing
- Zero 3
- Total batch size: 128
- Batch size per device: 1
- Gradient accumulation steps: 4
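As a sanity check, the total batch size of 128 follows directly from the per-device batch size, the gradient accumulation steps, and the GPU count (a minimal sketch assuming the 32-GPU setup of the 8B run listed above; the 70B run doubles the GPU count and halves the accumulation to keep the same total):

```python
# Effective (total) batch size = per-device batch x grad accumulation x number of GPUs.
per_device_batch_size = 1
gradient_accumulation_steps = 4
num_gpus = 32  # 32x H100 for the 8B run

total_batch_size = per_device_batch_size * gradient_accumulation_steps * num_gpus
print(total_batch_size)  # 128
```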
### Model Merging
The trained model was merged with the Llama-3.1-Instruct model using the DARE_TIES technique. [Mergekit](https://github.com/arcee-ai/mergekit) was used to perform the merge.
### Model Alignment
The model is aligned using the Direct Preference Optimization (DPO) technique through a two-step process:
1. General DPO Alignment: This step uses a dataset combining medical, general preference, and safety data. We used our dataset [HPAI-BSC/Aloe-Beta-DPO](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-DPO). We split the dataset into five parts, and the model was trained iteratively for one epoch on each chunk. We used a learning rate of 2e-7.
2. Red-Teaming Alignment: This step further fine-tunes the model to resist a variety of potential attacks, enhancing its robustness and security. Dataset will be shared soon. In this stage, we set the learning rate to 1e-7.
<!---
^^^ LINKS TO DPO DATA (DPO added, missing the RT^^^
-->
We used the [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF) library. We aligned the model using 16x NVIDIA Hopper H100 64GB of the *Marenostrum 5*. Common hyperparameters:
- Sequence length: 4096
- Optimizer: Fused adam
- Total batch size: 128
- Batch size per device: 1
- Gradient accumulation steps: 8
- Beta: 0.1
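For reference, the DPO objective used in both alignment steps can be sketched as follows. This is a minimal pure-Python illustration of the per-pair loss, not the OpenRLHF implementation; the log-probabilities in the example are hypothetical:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    The loss grows the margin between chosen and rejected responses,
    measured relative to the frozen reference model. beta (0.1 above)
    controls how far the policy may drift from the reference.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Positive margin (policy already prefers the chosen answer): loss below log(2).
print(dpo_loss(-10.0, -20.0, -12.0, -18.0))
```

With equal rewards the loss sits exactly at log(2), which is why training drives the margin positive.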
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- [ACI-BENCH](https://github.com/wyim/aci-bench)
- [MTS-Dialog](https://github.com/abachaa/MTS-Dialog)
- [MedText](https://huggingface.co/datasets/BI55/MedText)
- [Medical Text classification](https://www.kaggle.com/datasets/chaitanyakck/medical-text/data)
- [OLAPH](https://github.com/dmis-lab/OLAPH)
- CareQA Open
- [MedDialog](https://huggingface.co/datasets/bigbio/meddialog)
- [MEDIQA QA](https://huggingface.co/datasets/bigbio/mediqa_qa)
- [Meddialog Qsumm](https://huggingface.co/datasets/lighteval/med_dialog)
- [Biored](https://huggingface.co/datasets/YufeiHFUT/BioRED_all_info)
- [MIMIC-III](https://huggingface.co/datasets/dmacres/mimiciii-hospitalcourse-meta)
- [Medical Prescription](https://huggingface.co/datasets/devlocalhost/prescription-full)
- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [CareQA](https://huggingface.co/datasets/HPAI-BSC/CareQA)
- [Open LLM Leaderboard 2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
<!---
^^^ CAREQA Open link MISSING ^^^
-->
#### Metrics
- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.
- Rouge1: measures the overlap of unigrams between the system output and the gold standard.
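As an illustration, ROUGE-1 reduces to unigram-overlap counting. A simplified sketch (whitespace tokenization, no stemming, reporting the F1 variant) with made-up example strings:

```python
from collections import Counter

def rouge1_f1(system: str, reference: str) -> float:
    """Simplified ROUGE-1 F1 over overlapping unigram counts."""
    sys_counts = Counter(system.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum((sys_counts & ref_counts).values())  # clipped overlap
    if overlap == 0:
        return 0.0
    precision = overlap / sum(sys_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the patient has a fever", "the patient has a high fever"))
```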
<!---
^^^ MORE METRICS MISSING ^^^
-->
#### Summary
To compare Aloe with the most competitive open models (both general purpose and healthcare-specific) we use popular healthcare datasets (PubMedQA, MedMCQA, MedQA, and the six medical subtasks of MMLU), together with the new and highly reliable CareQA. However, while MCQA benchmarks provide valuable insights into a model's ability to handle structured queries, they fall short in representing the full range of challenges faced in medical practice. Building upon this idea, Aloe-Beta represents the next step in the evolution of the Aloe Family, designed to broaden the scope beyond the multiple-choice question-answering tasks that defined Aloe-Alpha.
Benchmark results indicate that the training conducted on Aloe has boosted its performance above Llama3.1-8B-Instruct. Llama3.1-Aloe-Beta-8B also outperforms other medical models such as Llama3-OpenBioLLM and Llama3-Med42. All these results make Llama3.1-Aloe-Beta-8B the best healthcare LLM of its size.
With the help of prompting techniques, the performance of Llama3.1-Aloe-Beta-8B improves significantly. Medprompting in particular provides a 7% increase in reported accuracy, after which Llama3.1-Aloe-Beta-8B lags behind only much bigger models like Llama-3.1-70B-Instruct or MedPalm-2. This improvement is mostly consistent across the OpenLLM Leaderboard and the other medical tasks.
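Medprompt combines several components (chain-of-thought exemplars, choice shuffling, and ensembling); a core piece is self-consistency majority voting over several sampled answers. A minimal sketch, with hypothetical sampled answers standing in for real model completions:

```python
from collections import Counter

def majority_vote(sampled_answers):
    """Pick the most frequent final answer across several sampled
    chain-of-thought completions (ties broken by first occurrence)."""
    counts = Counter(sampled_answers)
    return counts.most_common(1)[0][0]

# Five hypothetical sampled answers to one multiple-choice item:
print(majority_vote(["B", "B", "C", "B", "A"]))  # "B"
```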
## Environmental Impact
- **Hardware Type:** 32xH100
- **Hours used (8B):** 544 GPU hours
- **Hours used (70B):** 4500 GPU hours
- **Hardware Provider:** Barcelona Supercomputing Center (BSC)
- **Compute Region:** Spain
- **Carbon Emitted:** 34.1 kg of CO2
<!---
^^^ ARE CARBON EMISSIONS FOR BOTH? ^^^
-->
## Authors
Aloe Beta has been developed by the [High Performance Artificial Intelligence](https://hpai.bsc.es/) research group, from the [Barcelona Supercomputing Center - BSC](https://www.bsc.es/). Main authors are [Jordi Bayarri Planas](https://huggingface.co/JordiBayarri), [Ashwin Kumar Gururajan](https://huggingface.co/G-AshwinKumar) and [Dario Garcia-Gasulla](https://huggingface.co/dariog). Red teaming efforts led by Adrian Tormos.
Contact: [[email protected]](mailto:[email protected])
## Citations
<!---
Add the prompt engine paper below
-->
If you use this repository in a published work, please cite the corresponding papers as source:
```
@misc{gururajan2024aloe,
title={Aloe: A Family of Fine-tuned Open Healthcare LLMs},
  author={Ashwin Kumar Gururajan and Enrique Lopez-Cuena and Jordi Bayarri-Planas and Adrian Tormos and Daniel Hinjos and Pablo Bernabeu-Perez and Anna Arias-Duart and Pablo Agustin Martin-Torres and Lucia Urcelay-Ganzabal and Marta Gonzalez-Mallo and Sergio Alvarez-Napagao and Eduard Ayguadé-Parra and Ulises Cortés and Dario Garcia-Gasulla},
year={2024},
eprint={2405.01886},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"BIORED",
"MEDIQA QA",
"MEDDIALOG",
"MEDQA",
"PUBMEDQA"
] |
Cloyne/vietnamese-sbert-v3 | Cloyne | sentence-similarity | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:132997",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:keepitreal/vietnamese-sbert",
"base_model:finetune:keepitreal/vietnamese-sbert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-04T10:50:31Z | 2024-11-04T10:50:45+00:00 | 833 | 0 | ---
base_model: keepitreal/vietnamese-sbert
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:132997
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Ai có trách_nhiệm cập_nhật , công_bố thông_tin về tài_sản thế_chấp
sau khi thực_hiện đăng_ký thay_đổi nội_dung thế_chấp đã đăng_ký , sửa_chữa sai_sót
?
sentences:
- '1 . Chuẩn chương_trình phải quy_định những yêu_cầu tối_thiểu về số_lượng , cơ_cấu
, trình_độ , năng_lực , kinh_nghiệm của đội_ngũ giảng_viên và nhân_lực hỗ_trợ
để tổ_chức giảng_dạy và hỗ_trợ người học nhằm đạt được chuẩn đầu_ra của chương_trình
đào_tạo . 2 . Yêu_cầu đối_với đội_ngũ giảng_viên giảng_dạy chương_trình đại_học
, giảng_dạy chương_trình đào_tạo chuyên_sâu đặc_thù trình_độ bậc 7 : a ) Giảng_viên
có trình_độ thạc_sĩ trở lên , trợ_giảng có trình_độ đại_học trở lên ; b ) Có ít_nhất
01 tiến_sĩ ngành phù_hợp là giảng_viên cơ_hữu để chủ_trì xây_dựng , tổ_chức thực_hiện
chương_trình đào_tạo ; c ) Có ít_nhất 05 tiến_sĩ có chuyên_môn phù_hợp là giảng_viên
cơ_hữu để chủ_trì giảng_dạy chương_trình , trong đó mỗi thành_phần của chương_trình
phải có giảng_viên với chuyên_môn phù_hợp chủ_trì giảng_dạy ; d ) Có đủ số_lượng
giảng_viên để đảm_bảo tỉ_lệ sinh_viên trên giảng_viên không vượt quá mức quy_định
cho từng lĩnh_vực , nhóm ngành hoặc ngành đào_tạo . 3 . Yêu_cầu đối_với đội_ngũ
giảng_viên giảng_dạy chương_trình thạc_sĩ : a ) Giảng_viên có trình_độ tiến_sĩ
; b ) Có ít_nhất 05 tiến_sĩ ngành phù_hợp là giảng_viên cơ_hữu , trong đó có một
giáo_sư hoặc phó_giáo_sư chủ_trì xây_dựng , tổ_chức thực_hiện chương_trình đào_tạo
; c ) Có giảng_viên cơ_hữu với chuyên_môn phù_hợp chủ_trì giảng_dạy đối_với từng
môn_học , học phần của chương_trình ; d ) Có đủ người hướng_dẫn để đảm_bảo tỷ_lệ
tối_đa 05 học_viên trên một người hướng_dẫn . 4 . Yêu_cầu đối_với đội_ngũ giảng_viên
giảng_dạy chương_trình tiến_sĩ : a ) Giảng_viên có chức_danh giáo_sư hoặc phó
giáo_sư ; hoặc có trình_độ tiến_sĩ với năng_lực nghiên_cứu tốt ; b ) Có ít_nhất
01 giáo_sư ( hoặc 02 phó giáo_sư ) ngành phù_hợp và 03 tiến_sĩ ngành phù_hợp là
giảng_viên cơ_hữu ; c ) Có đủ người hướng_dẫn để đảm_bảo tỉ_lệ tối_đa 07 nghiên_cứu_sinh
/ giáo_sư , 05 nghiên_cứu_sinh / phó giáo_sư và 03 nghiên_cứu_sinh / tiến_sĩ .
5 . Chuẩn chương_trình cho các ngành , nhóm ngành quy_định yêu_cầu cụ_thể về đội_ngũ
giảng_viên không thấp hơn quy_định tại các khoản 2 , 3 và 4 của Điều này ; yêu_cầu
cụ_thể về tỉ_lệ người học trên giảng_viên ; yêu_cầu về đội_ngũ nhân_lực hỗ_trợ
đào_tạo ( nếu cần_thiết ) , phù_hợp với đặc_điểm của từng lĩnh_vực nhóm ngành
hoặc ngành đào_tạo .'
- Trách_nhiệm của các cơ_quan có liên_quan đến đăng_ký thế_chấp quyền sử_dụng đất
, tài_sản gắn liền với đất 1 . Văn_phòng đăng_ký đất_đai có trách_nhiệm gửi thông_tin
cho Sở Tài_nguyên_và_Môi_trường để cập_nhật , công_bố thông_tin về tài_sản thế_chấp
sau khi thực_hiện đăng_ký thay_đổi nội_dung thế_chấp đã đăng_ký , sửa_chữa sai_sót
, xóa đăng_ký thế_chấp liên_quan đến việc thế_chấp dự_án đầu_tư xây_dựng nhà ở
, dự_án đầu_tư xây_dựng công_trình xây_dựng không phải là nhà ở theo quy_định
tại Điều 64 của Nghị_định số 102 / 2017 / NĐ-CP . ...
- Xóa kỷ_luật , giảm thời_hạn chấp_hành kỷ_luật lao_động 1 . Người lao_động bị khiển_trách
sau 03 tháng , hoặc bị xử_lý kỷ_luật kéo_dài thời_hạn nâng lương sau 06 tháng
, kể từ ngày bị xử_lý , nếu không tái_phạm thì đương_nhiên được xóa kỷ_luật .
Trường_hợp bị xử_lý kỷ_luật lao_động bằng hình_thức cách_chức thì sau thời_hạn
03 năm , nếu tiếp_tục vi_phạm kỷ_luật lao_động thì không bị coi là tái_phạm .
2 . Người lao_động bị xử_lý kỷ_luật kéo_dài thời_gian nâng lương sau khi chấp_hành
được một_nửa thời_hạn nếu sửa_chữa tiến_bộ , có_thể được người sử_dụng lao_động
xét giảm thời_hạn .
- source_sentence: Mức trích nộp phí công_đoàn của doanh_nghiệp là bao_nhiêu phần_trăm
?
sentences:
- '" Điều 5 . Mức đóng và căn_cứ đóng kinh_phí công_đoàn Mức đóng bằng 2 % quỹ tiền_lương
làm căn_cứ đóng bảo_hiểm_xã_hội cho người lao_động . Quỹ tiền_lương này là tổng
mức tiền_lương của những người lao_động thuộc đối_tượng phải đóng bảo_hiểm_xã_hội
theo quy_định của pháp_luật về bảo_hiểm_xã_hội . Riêng đối_với đơn_vị thuộc lực_lượng_vũ_trang
quy_định tại Khoản 1 Điều 4 Nghị_định này , quỹ tiền_lương là tổng mức tiền_lương
của những cán_bộ , công_nhân viên_chức quốc_phòng , lao_động làm_việc hưởng lương
trong các nhà_máy , doanh_nghiệp , đơn_vị cơ_sở trong Quân_đội nhân_dân ; cán_bộ
, công_nhân , viên_chức , lao_động làm_việc hưởng lương trong các doanh_nghiệp
, cơ_quan , đơn_vị khoa học-kỹ_thuật , sự_nghiệp và phục_vụ trong Công_an nhân_dân
. "'
- '" Điều 41 . Điều_chỉnh dự_án đầu_tư 3 . Nhà_đầu_tư có dự_án đầu_tư đã được chấp_thuận
chủ_trương đầu_tư phải thực_hiện thủ_tục chấp_thuận điều_chỉnh chủ_trương đầu_tư
nếu thuộc một trong các trường_hợp sau đây : a ) Thay_đổi mục_tiêu đã được quy_định
tại văn_bản chấp_thuận chủ_trương đầu_tư ; bổ_sung mục_tiêu thuộc diện chấp_thuận
chủ_trương đầu_tư ; 4 . Đối_với dự_án đầu_tư được chấp_thuận chủ_trương đầu_tư
, nhà_đầu_tư không được điều_chỉnh tiến_độ thực_hiện dự_án đầu_tư quá 24 tháng
so với tiến_độ thực_hiện dự_án đầu_tư quy_định tại văn_bản chấp_thuận chủ_trương
đầu_tư lần đầu , trừ một trong các trường_hợp sau đây : đ ) Thay_đổi mục_tiêu
đã được quy_định tại văn_bản chấp_thuận chủ_trương đầu_tư ; bổ_sung mục_tiêu thuộc
diện chấp_thuận chủ_trương đầu_tư ; "'
- '" Điều 4 . Tiêu_chuẩn tuyển quân 1 . Tuổi_đời : a ) Công_dân từ đủ 18 tuổi đến
hết 25 tuổi . b ) Công_dân nam được đào_tạo trình_độ cao_đẳng , đại_học đã được
tạm hoãn gọi nhập_ngũ trong thời_gian một khóa đào_tạo của một trình_độ đào_tạo
thì tuyển_chọn và gọi nhập_ngũ đến hết 27 tuổi . 2 . Tiêu_chuẩn chính_trị : a
) Thực_hiện theo Thông_tư liên_tịch số 50/2016 / TTLT-BQP-BCA ngày 15 tháng 4
năm 2016 của Bộ_trưởng Bộ Quốc_phòng - Bộ_trưởng Bộ Công_an quy_định tiêu_chuẩn
chính_trị tuyển_chọn công_dân vào phục_vụ trong Quân_đội nhân_dân Việt_Nam . b
) Đối_với các cơ_quan , đơn_vị và vị_trí trọng_yếu cơ_mật trong Quân_đội ; lực_lượng
Tiêu_binh , Nghi_lễ ; lực_lượng Vệ_binh và Kiểm_soát quân_sự chuyên_nghiệp thực_hiện
tuyển_chọn theo quy_định của Bộ Quốc_phòng . 3 . Tiêu_chuẩn sức_khỏe : a ) Tuyển_chọn
những công_dân có sức khỏe loại 1 , 2 , 3 theo quy_định tại Thông_tư liên_tịch
số 16/2016 / TTLT-BYT-BQP ngày 30 tháng 6 năm 2016 của Bộ_trưởng Bộ Y_tế - Bộ_trưởng
Bộ Quốc_phòng quy_định việc khám sức_khỏe thực_hiện nghĩa_vụ_quân_sự . b ) Đối_với
các cơ_quan , đơn_vị , vị_trí quy_định tại Điểm b , Khoản 2 Điều này , thực_hiện
tuyển_chọn bảo_đảm tiêu_chuẩn riêng theo quy_định của Bộ Quốc_phòng . c ) Không
gọi nhập_ngũ vào Quân_đội những công_dân có sức khỏe loại 3 tật khúc_xạ về mắt
( cận_thị 1,5 diop trở lên , viễn_thị các mức_độ ) ; nghiện ma_túy , nhiễm HlV
, AIDS. 4 . Tiêu_chuẩn văn_hóa : a ) Tuyển_chọn và gọi nhập_ngũ những công_dân
có trình_độ văn_hóa lớp 8 trở lên , lấy từ cao xuống thấp . Những địa_phương có
khó_khăn không đảm_bảo đủ chỉ_tiêu giao_quân thì báo_cáo cấp có thẩm_quyền xem_xét
, quyết_định được tuyển_chọn số công_dân có trình_độ văn_hóa lớp 7 . b ) Các xã
thuộc vùng_sâu , vùng_xa , vùng điều_kiện kinh_tế - xã_hội đặc_biệt khó_khăn theo
quy_định của pháp_luật ; đồng_bào dân_tộc_thiểu_số dưới 10.000 người thì được
tuyển không quá 25 % công_dân có trình_độ văn_hóa cấp tiểu_học , còn lại là trung_học_cơ_sở
trở lên . "'
- source_sentence: Người đứng đầu cơ_quan thuộc Chính_phủ , đơn_vị sự_nghiệp công_lập
thực_hiện tiếp dân đột_xuất trong các trường_hợp nào ?
sentences:
- Nghĩa_vụ nộp chi_phí cho người làm_chứng ... 3 . Tòa_án căn_cứ vào khoản 1 và
khoản 2 Điều này quyết_định nghĩa_vụ nộp chi_phí cho người làm_chứng , hoàn_trả
lại chi_phí cho các bên đương_sự trong bản_án , quyết_định .
- 'Trách_nhiệm của người đứng đầu cơ_quan thuộc Chính_phủ , đơn_vị sự_nghiệp công_lập
... 3 . Thực_hiện tiếp công_dân đột_xuất trong các trường_hợp sau đây : a ) Vụ_việc
gay_gắt , phức_tạp , có nhiều người tham_gia , liên_quan đến trách_nhiệm của nhiều
cơ_quan , tổ_chức , đơn_vị hoặc ý_kiến của các cơ_quan , tổ_chức , đơn_vị còn
khác nhau ; b ) Vụ_việc nếu không_chỉ_đạo , xem_xét kịp_thời có_thể gây ra hậu_quả
nghiêm_trọng hoặc có_thể dẫn đến hủy_hoại tài_sản của Nhà_nước , của tập_thể ,
xâm_hại đến tính_mạng , tài_sản của nhân_dân , ảnh_hưởng đến an_ninh , chính_trị
, trật_tự , an_toàn xã_hội . 4 . Khi tiếp công_dân , người đứng đầu cơ_quan ,
đơn_vị phải có ý_kiến trả_lời về việc giải_quyết vụ_việc cho công_dân . Trường_hợp
chưa trả_lời ngay được thì chỉ_đạo cơ_quan , tổ_chức , đơn_vị , công_chức , viên_chức
thuộc quyền quản_lý của mình kịp_thời xem_xét , giải_quyết và thông_báo cho công_dân
biết thời_gian trả_lời .'
- Cơ_cấu tổ_chức bộ_máy và biên_chế của Ban quản_lý khu công_nghiệp , khu kinh_tế
1 . Ban quản_lý khu công_nghiệp , khu kinh_tế gồm Trưởng_ban , không quá 03 Phó
Trưởng_ban ; bộ_máy giúp_việc . Trưởng ban do Chủ_tịch Ủy_ban_nhân_dân cấp tỉnh
bổ_nhiệm , miễn_nhiệm . Phó Trưởng ban do Chủ_tịch Ủy_ban_nhân_dân cấp tỉnh bổ_nhiệm
, miễn_nhiệm theo đề_nghị của Trưởng ban . 2 . Trưởng ban có trách_nhiệm điều_hành
mọi hoạt_động của Ban quản_lý khu công_nghiệp , khu kinh_tế , chịu trách_nhiệm
trước Ủy_ban_nhân_dân cấp tỉnh , Chủ_tịch Ủy_ban_nhân_dân cấp tỉnh và pháp_luật
về hoạt_động của khu công_nghiệp , khu kinh_tế . ....
- source_sentence: Nếu chảy_máu trong phẫu_thuật mở tiền phòng lấy máu_cục thì xử_lý
như_thế_nào ?
sentences:
- 'PHẪU_THUẬT MỞ TIỀN_PHÒNG LẤY MÁU_CỤC ... VII. XỬ_TRÍ TAI_BIẾN 1 . Chảy_máu trong
phẫu_thuật Là biến_chứng hay gặp - Nguyên_nhân : + Do hút lôi_kéo vào mống mắt
đặc_biệt chân mống mắt . + Do cục máu đông chưa được hình_thành chắc_chắn . -
Xử_trí : + Dừng hút . + Bơm tiền phòng dung_dịch adrenalin 0,1 % hòa loãng với
dung_dịch ringer_lactat tỷ_lệ 1/3 và / hoặc bơm bóng hơi to vào tiền phòng hoặc
bơm nhầy vào tiền phòng . + Nếu máu vẫn không ngừng chảy , có_thể ngừng phẫu_thuật
, khâu đóng mép phẫu_thuật , chờ_đợi cho đến khi cục máu đông được hình_thành
chắc_chắn rồi rửa lại máu tiền phòng một hôm khác . ...'
- 'Nội_dung quy_hoạch không_gian biển quốc_gia ... 3 . Xác_định quan_điểm và mục_tiêu
phát_triển : a ) Xây_dựng quan_điểm sử_dụng không_gian biển , khai_thác và sử_dụng
bền_vững tài_nguyên biển , bảo_vệ môi_trường vùng bờ ; b ) Xác_định mục_tiêu tổng_quát
và các mục_tiêu cụ_thể về sử_dụng không_gian biển và khai_thác , sử_dụng tài_nguyên
trong phạm_vi không_gian biển trong thời_kỳ quy_hoạch 10 năm , tầm nhìn từ 30
đến 50 năm ; c ) Xác_định những vấn_đề trọng_tâm cần giải_quyết và các khâu đột_phá
trong việc khai_thác , sử_dụng không_gian biển cho các hoạt_động kinh_tế , xã_hội
, môi_trường trong thời_kỳ quy_hoạch . ...'
- 'Cách tiến_hành 9.1 Yêu_cầu chung ... 9.2 Phần mẫu thử Cân khoảng 40 g mẫu thử
, cho vào cốc thủy tinh_hình cầu dung_tích 160 ml . Khuấy_mẫu bằng thìa nhựa ,
nếu cần , sau đó ổn_định mẫu ở nhiệt_độ phòng ( từ 18 °C đến 25 °C ) . 9.3 Phương_pháp
đánh_giá bằng khứu_giác 9.3.1 Đánh_giá bằng khứu_giác Các đặc_tính khứu_giác được
đánh_giá trước_tiên . Đối_với mẫu mật_ong thô , đánh_giá mùi ngay sau khi mật_ong
đã được trải trên bề_mặt của cốc bằng thìa nhựa , để sự cảm_nhận các chất dễ bay_hơi
được giải_phóng và sự bốc_hơi trên bề_mặt bay_hơi là như nhau đối_với tất_cả các
mẫu . Đối_với mẫu pha loãng , cần xoáy vòng_mẫu trong cốc để thúc_đẩy sự bay_hơi
. Người đánh_giá phải thở trong vài giây trên miệng_cốc . Phải đánh_giá mùi ngay
sau khi trải trên bề_mặt của cốc hoặc ngay sau khi xoay_cốc và sau đó 10 hoặc
20 s . Trước khi hít_hơi thứ hai , người đánh_giá phải chờ từ 5 s đến 20 s hoặc
có_thể lâu hơn , để có_thể cảm_nhận được toàn_bộ mùi . Ghi vào phiếu đánh_giá
cường_độ của bất_kỳ khuyết_tật nào cảm_nhận được và sự phù_hợp với profile đơn
hoa , nếu được yêu_cầu . 9.3.2 Cường_độ mùi Thang_điểm đánh_giá cường_độ mùi của
mật_ong : 0 . không mùi 1 . mùi yếu 2 . mùi trung_bình 3 . mùi mạnh 9.3.3 Mô_tả
mùi Có_thể tham_khảo các thuật_ngữ sau : a ) Mùi cây_cỏ : - Mùi cây_cỏ tươi :
mùi hạt đậu , mùi lá bị nhàu , mùi cây_cỏ sau mưa ; - Mùi cây_cỏ khô : mùi malt
vàng , mùi rơm , mùi trà , mùi cỏ khô ; b ) Mùi gỗ : - Mùi gỗ khô : mùi gỗ và
lá , mùi bụi gỗ , mùi hạt_óc chó , mùi hạt_dẻ ; - Mùi nhựa gỗ : mùi nhựa cây tuyết_tùng
, mùi nhựa thông , mùi keo_ong ; - Mùi gia_vị : mùi đinh_hương , mùi nhục đậu_khấu
, mùi cà_phê ; c ) Mùi hóa_chất : - Mùi hóa_chất dầu_mỏ : mùi styren , mùi sơn
, mùi dung_môi ; - Mùi thuốc : mùi xà_phòng gia_dụng , mùi vitamin_B1 ; d ) Mùi
tươi : - Mùi tươi mới : mùi bạc_hà , mùi khuynh_diệp , mùi hoa hồi ; - Mùi quả
có múi : mùi chanh , mùi cam , mùi bưởi ; e ) Mùi hoa_quả tươi : - Mùi hoa : mùi
hoa_cam , mùi hoa thạch_thảo ( violet ) , mùi hoa_hồng , mùi hoa_dạ lan_hương
( Hyacinthus ) ; - Mùi quả : mùi táo , mùi lê , mùi quả dứa_dại ( red fruit )
, mùi quả_lý chua_đen , mùi dừa , mùi mơ , mùi quả lạ ( exotic fruit ) ; f ) Mùi
ấm : - Mùi cháy : mùi mật_rỉ , mùi đường cháy ; - Mùi quả nấu : mùi chà_là , mùi
mận , mùi vả_tây , mùi nho khô , mùi kẹo trái_cây ; - Mùi caramel : mùi kẹo toffee
, mùi bánh_caramel , mùi đường nâu ; g ) Mùi mật hỏng : - Mùi hăng : mùi phomat_cay
, mùi dấm ; - Mùi động_vật : mùi phomat , mùi mồ_hôi , mùi bò , mùi nước tiểu_mèo
; - Mùi mốc : mùi ẩm , mùi thảm_trải sàn , mùi mùn đất , mùi ngột_ngạt ; - Mùi
lưu_huỳnh : mùi atiso , mùi bắp_cải .'
- source_sentence: Việc ghi nhãn đối_với vàng trang_sức , mỹ_nghệ thể_hiện trực_tiếp
trên sản_phẩm bằng những cách nào ?
sentences:
- 'Xác_định tỷ_trọng ( % ) sản_lượng xăng_dầu từ nguồn trong nước và nhập_khẩu để
tính giá cơ_sở các mặt_hàng xăng_dầu 1 . Tỷ_trọng ( % ) sản_lượng xăng_dầu từ
nguồn trong nước và nhập_khẩu để tính giá cơ_sở các mặt_hàng xăng_dầu được xác_định
như sau : a ) Sản_lượng xăng_dầu từ nguồn trong nước là sản_lượng xăng_dầu bán
ra của các nhà_máy lọc dầu trong nước ( không bao_gồm dung_môi , nhiên_liệu bay
; không bao_gồm sản_lượng xăng_dầu tự dùng và xuất_khẩu ) . Tỷ_trọng ( % ) sản_lượng
xăng_dầu từ nguồn trong nước bằng ( = ) Sản_lượng xăng_dầu từ nguồn trong nước
chia cho ( :) Tổng_sản_lượng xăng_dầu nhập_khẩu và sản_lượng xăng_dầu từ nguồn
trong nước trong kỳ báo_cáo của thương_nhân đầu_mối sản_xuất xăng_dầu . b ) Sản_lượng
xăng_dầu từ nguồn nhập_khẩu thực_hiện như quy_định tại điểm a_khoản 1 Điều 3 Thông_tư
này . Tỷ_trọng ( % ) sản_lượng xăng_dầu từ nguồn nhập_khẩu bằng ( = ) Sản_lượng
xăng_dầu từ nguồn nhập_khẩu chia cho ( :) Tổng_sản_lượng xăng_dầu nhập_khẩu và
sản_lượng xăng_dầu từ nguồn sản_xuất trong nước của các thương_nhân đầu_mối sản_xuất
xăng_dầu trong kỳ báo_cáo . c ) Thời_gian thu_thập số_liệu thực_hiện theo Quý
( từ ngày 21 tháng trước liền kề tháng đầu_tiên của Quý đến ngày 20 tháng cuối
Quý ) .'
- 'Kỹ_sư cao_cấp ( hạng I ) - Mã_số : V._05.02.05 ... 2 . Tiêu_chuẩn về trình_độ
đào_tạo , bồi_dưỡng : a ) Có trình_độ thạc_sĩ trở lên thuộc lĩnh_vực kỹ_thuật
, công_nghệ ; b ) Có chứng_chỉ bồi_dưỡng chức_danh công_nghệ . ...'
- 'Công_bố tiêu_chuẩn áp_dụng và ghi nhãn đối_với vàng trang_sức , mỹ_nghệ ... 4
. Ghi nhãn đối_với vàng trang_sức , mỹ_nghệ : a ) Yêu_cầu chung : - Việc ghi nhãn
vàng trang_sức , mỹ_nghệ phải được thực_hiện theo quy_định tại Nghị_định số 89/2006
/ NĐ-CP ngày 30 tháng 8 năm 2006 của Chính_phủ về nhãn hàng hóa . Vị_trí nhãn
vàng trang_sức , mỹ_nghệ được thực_hiện theo quy_định tại Điều 6 Nghị_định số
89/2006 / NĐ-CP ; - Nhãn vàng trang_sức , mỹ_nghệ được thể_hiện trực_tiếp trên
sản_phẩm bằng cách khắc cơ_học , khắc_la-de , đục chìm , đúc_chìm , đúc nổi hoặc
bằng phương_pháp thích_hợp ( nếu kích_thước và cấu_trúc sản_phẩm đủ để thực_hiện
) hoặc thể_hiện trên tài_liệu đính kèm sản_phẩm ; - Độ tinh_khiết hay hàm_lượng
vàng theo phân_hạng quy_định tại Điều 6 Thông_tư này phải được ghi rõ tại vị_trí
dễ thấy trên sản_phẩm bằng số Ả_Rập chỉ_số phần vàng trên một nghìn ( 1000 ) phần
khối_lượng của sản_phẩm ( ví_dụ : 999 hoặc 916 ... ) hoặc bằng số Ả_Rập thể_hiện
chỉ_số Kara kèm theo chữ_cái K ( ví_dụ : 24K hoặc 22K ... ) tương_ứng với phân_hạng
theo quy_định tại Điều 6 Thông_tư này . Trường_hợp sản_phẩm có kích_thước không_thể
thể_hiện trực_tiếp được thì hàm_lượng vàng công_bố phải được ghi trên nhãn đính
kèm . Trường_hợp sản_phẩm có từ hai thành_phần trở lên với hàm_lượng vàng khác
nhau , có_thể nhận_biết sự khác nhau qua ngoại_quan thì việc ghi hàm_lượng vàng
được thể_hiện trên phần có hàm_lượng vàng thấp hơn ; - Đối_với vàng trang_sức
, mỹ_nghệ nhập_khẩu , ngoài nhãn gốc ghi bằng tiếng nước_ngoài , phải có nhãn
phụ bằng tiếng Việt thể_hiện các thông_tin ghi nhãn theo quy_định tại điểm b khoản
4 Điều này và xuất_xứ hàng hóa . ...'
---
# SentenceTransformer based on keepitreal/vietnamese-sbert
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [keepitreal/vietnamese-sbert](https://huggingface.co/keepitreal/vietnamese-sbert) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [keepitreal/vietnamese-sbert](https://huggingface.co/keepitreal/vietnamese-sbert) <!-- at revision a9467ef2ef47caa6448edeabfd8e5e5ce0fa2a23 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
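The Pooling module above uses mean pooling over token embeddings (`pooling_mode_mean_tokens: True`): padding positions are masked out and the remaining token vectors are averaged. A minimal sketch with toy numbers in place of real model outputs:

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, ignoring padding positions.

    token_embeddings: list of per-token vectors (lists of floats)
    attention_mask:   1 for real tokens, 0 for padding
    """
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    n = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            summed = [s + v for s, v in zip(summed, vec)]
            n += 1
    return [s / n for s in summed]

# Two real tokens and one padding token (toy 3-dimensional vectors):
print(mean_pool([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [9.0, 9.0, 9.0]],
                [1, 1, 0]))  # [2.0, 3.0, 4.0]
```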
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Cloyne/vietnamese-sbert-v3")
# Run inference
sentences = [
'Việc ghi nhãn đối_với vàng trang_sức , mỹ_nghệ thể_hiện trực_tiếp trên sản_phẩm bằng những cách nào ?',
'Công_bố tiêu_chuẩn áp_dụng và ghi nhãn đối_với vàng trang_sức , mỹ_nghệ ... 4 . Ghi nhãn đối_với vàng trang_sức , mỹ_nghệ : a ) Yêu_cầu chung : - Việc ghi nhãn vàng trang_sức , mỹ_nghệ phải được thực_hiện theo quy_định tại Nghị_định số 89/2006 / NĐ-CP ngày 30 tháng 8 năm 2006 của Chính_phủ về nhãn hàng hóa . Vị_trí nhãn vàng trang_sức , mỹ_nghệ được thực_hiện theo quy_định tại Điều 6 Nghị_định số 89/2006 / NĐ-CP ; - Nhãn vàng trang_sức , mỹ_nghệ được thể_hiện trực_tiếp trên sản_phẩm bằng cách khắc cơ_học , khắc_la-de , đục chìm , đúc_chìm , đúc nổi hoặc bằng phương_pháp thích_hợp ( nếu kích_thước và cấu_trúc sản_phẩm đủ để thực_hiện ) hoặc thể_hiện trên tài_liệu đính kèm sản_phẩm ; - Độ tinh_khiết hay hàm_lượng vàng theo phân_hạng quy_định tại Điều 6 Thông_tư này phải được ghi rõ tại vị_trí dễ thấy trên sản_phẩm bằng số Ả_Rập chỉ_số phần vàng trên một nghìn ( 1000 ) phần khối_lượng của sản_phẩm ( ví_dụ : 999 hoặc 916 ... ) hoặc bằng số Ả_Rập thể_hiện chỉ_số Kara kèm theo chữ_cái K ( ví_dụ : 24K hoặc 22K ... ) tương_ứng với phân_hạng theo quy_định tại Điều 6 Thông_tư này . Trường_hợp sản_phẩm có kích_thước không_thể thể_hiện trực_tiếp được thì hàm_lượng vàng công_bố phải được ghi trên nhãn đính kèm . Trường_hợp sản_phẩm có từ hai thành_phần trở lên với hàm_lượng vàng khác nhau , có_thể nhận_biết sự khác nhau qua ngoại_quan thì việc ghi hàm_lượng vàng được thể_hiện trên phần có hàm_lượng vàng thấp hơn ; - Đối_với vàng trang_sức , mỹ_nghệ nhập_khẩu , ngoài nhãn gốc ghi bằng tiếng nước_ngoài , phải có nhãn phụ bằng tiếng Việt thể_hiện các thông_tin ghi nhãn theo quy_định tại điểm b khoản 4 Điều này và xuất_xứ hàng hóa . ...',
'Kỹ_sư cao_cấp ( hạng I ) - Mã_số : V._05.02.05 ... 2 . Tiêu_chuẩn về trình_độ đào_tạo , bồi_dưỡng : a ) Có trình_độ thạc_sĩ trở lên thuộc lĩnh_vực kỹ_thuật , công_nghệ ; b ) Có chứng_chỉ bồi_dưỡng chức_danh công_nghệ . ...',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 132,997 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 16.75 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 172.75 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Điều_kiện cần có của Văn_phòng công_chứng là gì ?</code> | <code>" Điều 22 . Văn_phòng công_chứng 3 . Tên gọi của Văn_phòng công_chứng phải bao_gồm cụm_từ “ Văn_phòng công_chứng ” kèm theo họ tên của Trưởng Văn_phòng hoặc họ tên của một công_chứng_viên hợp_danh khác của Văn_phòng công_chứng do các công_chứng_viên hợp_danh thỏa_thuận , không được trùng hoặc gây nhầm_lẫn với tên của tổ_chức hành_nghề công_chứng khác , không được vi_phạm truyền_thống lịch_sử , văn_hóa , đạo_đức và thuần_phong mỹ_tục của dân_tộc . "</code> |
| <code>Thứ_trưởng , Phó Thủ_trưởng cơ_quan ngang Bộ thực_hiện nhiệm_vụ theo sự phân_công của ai ?</code> | <code>" Điều 3 . Bộ_trưởng 1 . Bộ_trưởng là thành_viên Chính_phủ và là người đứng đầu Bộ , lãnh_đạo công_tác của Bộ ; chịu trách_nhiệm quản_lý_nhà_nước về ngành , lĩnh_vực được phân_công ; tổ_chức thi_hành và theo_dõi việc thi_hành pháp_luật liên_quan đến ngành , lĩnh_vực được giao trong phạm_vi toàn_quốc . 2 . Bộ_trưởng làm_việc theo chế_độ thủ_trưởng và Quy_chế làm_việc của Chính_phủ , bảo_đảm nguyên_tắc tập_trung_dân_chủ . Điều 4 . Thứ_trưởng , Phó Thủ_trưởng cơ_quan ngang Bộ 1 . Thứ_trưởng , Phó Thủ_trưởng cơ_quan ngang Bộ ( sau đây gọi chung là Thứ_trưởng ) giúp Bộ_trưởng thực_hiện một hoặc một_số nhiệm_vụ cụ_thể do Bộ_trưởng phân_công và chịu trách_nhiệm trước Bộ_trưởng và trước pháp_luật về nhiệm_vụ được phân_công . Thứ_trưởng không kiêm người đứng đầu tổ_chức , đơn_vị thuộc Bộ , trừ trường_hợp đặc_biệt . Khi Bộ_trưởng vắng_mặt , một Thứ_trưởng được Bộ_trưởng ủy_nhiệm thay Bộ_trưởng điều_hành và giải_quyết công_việc của Bộ . 2 . Số_lượng Thứ_trưởng thực_hiện theo quy_định của Luật_Tổ_chức Chính_phủ . "</code> |
| <code>Việc lựa_chọn xuất_bản_phẩm tham_khảo dùng chung trong các cơ_sở giáo_dục được quy_định thế_nào ?</code> | <code>Lựa_chọn xuất_bản_phẩm tham_khảo dùng chung trong các cơ_sở giáo_dục 1 . Tổ / nhóm chuyên_môn , căn_cứ vào mục_tiêu , nội_dung chương_trình giáo_dục , sách_giáo_khoa , kế_hoạch thực_hiện nhiệm_vụ năm_học , các hoạt_động giáo_dục và đề_xuất của giáo_viên để lựa_chọn , đề_xuất danh_mục xuất_bản_phẩm tham_khảo tối_thiểu liên_quan đến môn_học / lớp_học , hoạt_động giáo_dục . 2 . Định_kì vào đầu năm_học , thủ_trưởng cơ_sở giáo_dục thành_lập Hội_đồng để xem_xét , lựa_chọn , đề_xuất danh_mục xuất_bản_phẩm tham_khảo trên cơ_sở đề_xuất của các tổ / nhóm chuyên_môn . Thành_phần tối_thiểu của Hội_đồng gồm : Lãnh_đạo cơ_sở giáo_dục phụ_trách chuyên_môn , tổ / nhóm trưởng chuyên_môn và viên_chức phụ_trách thư_viện trong cơ_sở giáo_dục . 3 . Thủ_trưởng cơ_sở giáo_dục quyết_định phê_duyệt danh_mục xuất_bản_phẩm tham_khảo tối_thiểu để có kế_hoạch mua_sắm và sử_dụng hằng năm trong cơ_sở giáo_dục trên cơ_sở đề_xuất của Hội_đồng lựa_chọn xuất_bản_phẩm tham_khảo , cân_đối nguồn kinh_phí , quy_mô của cơ_sở giáo_dục , số_lượng và chất_lượng xuất_bản_phẩm tham_khảo đã có tại cơ_sở giáo_dục .</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
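For intuition, MultipleNegativesRankingLoss uses in-batch negatives: each anchor is scored (here `cos_sim` times `scale: 20.0`) against every positive in the batch, and the loss is the cross-entropy toward the anchor's own positive. A minimal pure-Python sketch of that computation (illustrative only; `mnr_loss` and `cos_sim` are helper names for this sketch, not the sentence-transformers implementation):

```python
import math

def cos_sim(u, v):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def mnr_loss(anchors, positives, scale=20.0):
    # For each anchor i, every positives[j] with j != i acts as a negative.
    # The loss is -log softmax(scale * sims)[i], averaged over the batch.
    n = len(anchors)
    losses = []
    for i in range(n):
        logits = [scale * cos_sim(anchors[i], positives[j]) for j in range(n)]
        m = max(logits)  # subtract max for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        losses.append(log_z - logits[i])
    return sum(losses) / n
```

When anchors align with their own positives the loss approaches 0; when they align with another pair's positive it grows large, which is why the `no_duplicates` batch sampler below matters: duplicate positives in a batch would otherwise be punished as wrong answers.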
### Evaluation Dataset
#### csv
* Dataset: csv
* Size: 132,997 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.13 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 173.11 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------|
| <code>Hoàn lại số tiền lừa_đảo thì có được nhẹ_tội hơn không ?</code> | <code>" Điều 51 . Các tình_tiết giảm nhẹ trách_nhiệm hình_sự 1 . Các tình_tiết sau đây là tình_tiết giảm nhẹ trách_nhiệm hình_sự : a ) Người phạm_tội đã ngăn_chặn hoặc làm giảm bớt tác_hại của tội_phạm ; b ) Người phạm_tội tự_nguyện sửa_chữa , bồi_thường thiệt_hại hoặc khắc_phục hậu_quả ; c ) Phạm_tội trong trường_hợp vượt quá giới_hạn phòng_vệ chính_đáng ; d ) Phạm_tội trong trường_hợp vượt quá yêu_cầu của tình_thế cấp_thiết ; đ ) Phạm_tội trong trường_hợp vượt quá mức cần_thiết khi bắt_giữ người phạm_tội ; e ) Phạm_tội trong trường_hợp bị kích_động về tinh_thần do hành_vi trái pháp_luật của nạn_nhân gây ra ; g ) Phạm_tội vì hoàn_cảnh đặc_biệt khó_khăn mà không phải do mình tự gây ra ; h ) Phạm_tội nhưng chưa gây thiệt_hại hoặc gây thiệt_hại không lớn ; i ) Phạm_tội lần đầu và thuộc trường_hợp ít nghiêm_trọng ; k ) Phạm_tội vì bị người khác đe_dọa hoặc cưỡng_bức ; l ) Phạm_tội trong trường_hợp bị hạn_chế khả_năng nhận_thức mà không phải do lỗi của mình gây ra ; m ) Phạm_tội do lạc_hậu ; n ) Người phạm_tội là phụ_nữ có_thai ; o ) Người phạm_tội là người đủ 70 tuổi trở lên ; p ) Người phạm_tội là người khuyết_tật nặng hoặc khuyết_tật đặc_biệt nặng ; q ) Người phạm_tội là người có bệnh bị hạn_chế khả_năng nhận_thức hoặc khả_năng điều_khiển hành_vi của mình ; r ) Người phạm_tội tự_thú ; s ) Người phạm_tội thành_khẩn khai_báo , ăn_năn hối_cải ; t ) Người phạm_tội tích_cực hợp_tác với cơ_quan có trách_nhiệm trong việc phát_hiện tội_phạm hoặc trong quá_trình giải_quyết vụ án ; u ) Người phạm_tội đã lập_công chuộc tội ; v ) Người phạm_tội là người có thành_tích xuất_sắc trong sản_xuất , chiến_đấu , học_tập hoặc công_tác ; x ) Người phạm_tội là người có công với cách_mạng hoặc là cha , mẹ , vợ , chồng , con của liệt_sĩ . 2 . Khi quyết_định hình_phạt , Tòa_án có_thể coi đầu_thú hoặc tình_tiết khác là tình_tiết giảm nhẹ , nhưng phải ghi rõ lý_do giảm nhẹ trong bản_án . 3 . Các tình_tiết giảm nhẹ đã được Bộ_luật này quy_định là dấu_hiệu định_tội hoặc định_khung thì không được coi là tình_tiết giảm nhẹ trong khi quyết_định hình_phạt . "</code> |
| <code>Quy_trình phát_mại tài_sản bao_gồm các bước nào ?</code> | <code>“ Điều 307 . Thanh_toán số tiền có được từ việc xử_lý tài_sản cầm_cố , thế_chấp 1 . Số tiền có được từ việc xử_lý tài_sản cầm_cố , thế_chấp sau khi thanh_toán chi_phí bảo_quản , thu_giữ và xử_lý tài_sản cầm_cố , thế_chấp được thanh_toán theo thứ_tự ưu_tiên quy_định tại Điều 308 của Bộ_luật này . 2 . Trường_hợp số tiền có được từ việc xử_lý tài_sản cầm_cố , thế_chấp sau khi thanh_toán chi_phí bảo_quản , thu_giữ và xử_lý tài_sản cầm_cố , thế_chấp lớn hơn giá_trị nghĩa_vụ được bảo_đảm thì số tiền chênh_lệch phải được trả cho bên bảo_đảm . 3 . Trường_hợp số tiền có được từ việc xử_lý tài_sản cầm_cố , thế_chấp sau khi thanh_toán chi_phí bảo_quản , thu_giữ và xử_lý tài_sản cầm_cố , thế_chấp nhỏ hơn giá_trị nghĩa_vụ được bảo_đảm thì phần nghĩa_vụ chưa được thanh_toán được xác_định là nghĩa_vụ không có bảo_đảm , trừ trường_hợp các bên có thỏa_thuận bổ_sung tài_sản bảo_đảm . Bên nhận bảo_đảm có quyền yêu_cầu bên có nghĩa_vụ được bảo_đảm phải thực_hiện phần nghĩa_vụ chưa được thanh_toán . ”</code> |
| <code>Người lao_động đang trong thời_gian nghỉ thai_sản thì có đóng đoàn phí công_đoàn không ?</code> | <code>" Điều 23 . Đối_tượng , mức đóng , tiền_lương làm căn_cứ đóng đoàn phí [ ... ] 6 . Đoàn_viên công_đoàn hưởng trợ_cấp Bảo_hiểm_xã_hội từ 01 tháng trở lên , trong thời_gian hưởng trợ_cấp không phải đóng đoàn phí ; đoàn_viên công_đoàn không có việc_làm , không có thu_nhập , nghỉ_việc riêng từ 01 tháng trở lên không hưởng tiền_lương , trong thời_gian đó không phải đóng đoàn phí ” .</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `eval_on_start`: True
- `batch_sampler`: no_duplicates
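The `linear` scheduler with `warmup_ratio: 0.1` ramps the learning rate from 0 up to `learning_rate` over the first 10% of total optimizer steps, then decays it linearly back to 0. A small sketch of that schedule (assuming the standard linear-with-warmup definition; `lr_at_step` is an illustrative helper, not part of the training code):

```python
def lr_at_step(step, total_steps, base_lr=5e-05, warmup_ratio=0.1):
    # Linear warmup from 0 to base_lr over the first warmup_ratio of
    # training, then linear decay from base_lr down to 0.
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)
```

For example, with 1000 total steps the rate peaks at step 100 (`5e-05`) and reaches 0 again at the final step.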
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0 | 0 | - | 0.4612 |
| 0.2424 | 500 | 0.1944 | - |
| 0.4847 | 1000 | 0.1022 | 0.0650 |
| 0.7271 | 1500 | 0.0883 | - |
| 0.9695 | 2000 | 0.0762 | 0.0594 |
| 1.2118 | 2500 | 0.0686 | - |
| 1.4542 | 3000 | 0.0407 | 0.0508 |
| 1.6966 | 3500 | 0.0275 | - |
| 1.9389 | 4000 | 0.0209 | 0.0487 |
| 2.1813 | 4500 | 0.0209 | - |
| 2.4237 | 5000 | 0.013 | 0.0495 |
| 2.6660 | 5500 | 0.0103 | - |
| 2.9084 | 6000 | 0.0072 | 0.0416 |
| 3.1508 | 6500 | 0.0086 | - |
| 3.3931 | 7000 | 0.005 | 0.0387 |
| 3.6355 | 7500 | 0.0038 | - |
| 3.8778 | 8000 | 0.0032 | 0.0314 |
| 4.1202 | 8500 | 0.0037 | - |
| 4.3626 | 9000 | 0.0027 | 0.0381 |
| 4.6049 | 9500 | 0.0018 | - |
| 4.8473 | 10000 | 0.0017 | 0.0360 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.1
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"CHIA"
] |
macadeliccc/magistrate-3.2-3b-it-GGUF | macadeliccc | text-generation | [
"transformers",
"gguf",
"spectrum",
"llama-3",
"axolotl",
"legal",
"HFforLegal",
"autoquant",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:arcee-ai/The-Tome",
"dataset:cognitivecomputations/SystemChat-2.0",
"arxiv:2408.10914",
"base_model:macadeliccc/magistrate-3.2-3b-base",
"base_model:quantized:macadeliccc/magistrate-3.2-3b-base",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-01T19:58:51Z | 2024-10-01T20:33:37+00:00 | 831 | 1 | ---
base_model: macadeliccc/magistrate-3.2-3b-base
datasets:
- teknium/OpenHermes-2.5
- NousResearch/hermes-function-calling-v1
- arcee-ai/The-Tome
- cognitivecomputations/SystemChat-2.0
language:
- en
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- spectrum
- llama-3
- axolotl
- legal
- HFforLegal
- autoquant
- gguf
---
# magistrate-3.2-3b-it
This model is a fine-tuned version of [macadeliccc/magistrate-3.2-3b-base](https://huggingface.co/macadeliccc/magistrate-3.2-3b-base) on a mixture of instruction datasets (see the Axolotl config below), including teknium/OpenHermes-2.5, NousResearch/hermes-function-calling-v1, arcee-ai/The-Tome, and cognitivecomputations/SystemChat-2.0.
It achieves the following results on the evaluation set:
- Loss: 0.8067
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: macadeliccc/magistrate-3.2-3b-base
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: json
type: sharegpt
conversation: chatml
data_files: train/hermes-2.5.jsonl
# - path: json
# type: sharegpt
# conversation: chatml
# data_files: train/financial_instructions_cleaned_2.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/glaive-function-calling-5k.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/func-calling-singleturn.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/func-calling.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/json-mode-agentic.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/json-mode-singleturn.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/reasoning_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/systemchat_2_0_small.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/303_creative_llc_v__elenis_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/abitron_austria_gmbh_v__hetronic_international__inc__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/acheson_hotels__llc_v__laufer_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/alexander_v__sc_conference_of_naacp_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/amgen_inc__v__sanofi_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/andy_warhol_found___inc__v__goldsmith_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/arizona_v__navajo_nation_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/becerra__sec__of_h_hs_v__san_carlos_apache_tribe_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/biden_v__nebraska_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/bissonnette_v__lepage_bakeries_park_st___llc_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/bittner_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/brown_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/cantero_v__bank_of_america__n_a__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/cfpb_v__com__fin__services_assn__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/chiaverini_v__city_of_napoleon_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/ciminelli_v__united_state_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/city_of_grants_pass_v__johnson_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/coinbase__inc__v__bielski_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/coinbase__inc__v__suski_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/connelly_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/corner_post__inc__v__bd__of_governors__frs_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/counterman_v__colorado_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/cruz_v__arizona_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/culley_v__marshall_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/dept__of_agric__rural_dev__v__kirtz_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/dept__of_education_v__brown_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/dept__of_state_v__munoz_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/devillier_v__texas_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/diaz_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/dubin_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/dupree_v__younger_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/erlinger_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/fbi_v__fikre_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/fda_v__alliance_hippocratic_medicine_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/financial_oversight_board_v__cpi_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/fischer_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/garland__att_y_gen__v__cargill_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/glacier_northwest__inc__v__int_l_brotherhood_of_teamsters_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/gonzalez_v__google_llc_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/gonzalez_v__trevino_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/great_lakes_insurance_se_v__raiders_retreat_realty_co___llc_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/groff_v__dejoy_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/harrington_v__purdue_pharma_l_p__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/harrow_v__dept__of_defense_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/health_and_hospital_corp__v__talevski_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/helix_energy_solutions_v__hewitt_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/in_re_grand_jury_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/jack_daniel_s_properties__inc__v__vip_products_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/jones_v__hendrix_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/karcho_polselli_v__irs_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/lac_du_flambeau_band_v__coughlin_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/lindke_v__freed_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/loper_bright_enterprises__inc__v__raimondo__sec__of_comm__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/lora_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/macquarie_infrastructure_corp__v__moab_partners__l_p__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/mallory_v__norfolk_southern_railway_co__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/mcintosh_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/merrill_v__milligan_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/moore_v__harper_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/moore_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/moyle_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/muldrow_v__st__louis_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/murray_v__ubs_securities__llc_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/murthy__surgeon_gen__v__missouri_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/netchoice__llc_v__paxton_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/new_york_v__new_jersey_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/nra_v__vullo_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/o_connor_ratcliff_v__garnier_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/oh_adjutant_gen__s_dept__v__flra_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/ohio_v__epa_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/perez_v__sturgis_public_schools_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/pugin_v__garland_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/pulsifer_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/relentless__inc__v__dept__of_commerce_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/rudisill_v__mcdonough__sec__of_va_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/sackett_v__epa_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/samia_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/santos_zacaria_v__garland__att_y_gen__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/sec_v__cochran_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/sec_v__jarkesy_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/sheetz_v__county_of_el_dorado_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/slack_technologies__llc_v__pirani_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/smith_v__arizona_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/smith_v__spizzirri_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/smith_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/snyder_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/starbucks_corp__v__mckinney_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/students_for_fair_admissions_v__university_of_nc_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/texas_v__new_mexico_and_colorado_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/thornell_v__jones_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/truck_insurance_exchange_v__kaiser_gypsum_co__inc__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/trump_v__anderson_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/turkiye_halk_bankasi_a_s__v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/twitter__inc__v__taamneh_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/tyler_v__hennepin_county_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/u_s___ex_rel__polansky_v__executive_health_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/u_s___ex_rel__schutte_v__supervalu_inc__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/united_states_trustee_v__john_q__hammons_fall_2006__llc_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/united_states_v__hansen_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/united_states_v__rahimi_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/united_states_v__texas_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/vidal__under_sec__of_comm__v__elster_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/warner_chappell_music__inc__v__nealy_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/wilkins_v__united_states_sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/wilkinson_v__garland__att_y_gen__sharegpt.json
- path: json
type: sharegpt
conversation: chatml
data_files: train/argument_dataset/yegiazaryan_v__smagin_sharegpt.json
chat_template: chatml
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# input_layernorm layers
- model.layers.0.input_layernorm
- model.layers.1.input_layernorm
- model.layers.2.input_layernorm
- model.layers.3.input_layernorm
- model.layers.4.input_layernorm
- model.layers.5.input_layernorm
- model.layers.6.input_layernorm
- model.layers.7.input_layernorm
- model.layers.8.input_layernorm
- model.layers.9.input_layernorm
- model.layers.10.input_layernorm
- model.layers.11.input_layernorm
- model.layers.12.input_layernorm
- model.layers.13.input_layernorm
# mlp.down_proj layers
- model.layers.0.mlp.down_proj
- model.layers.1.mlp.down_proj
- model.layers.17.mlp.down_proj
- model.layers.19.mlp.down_proj
- model.layers.18.mlp.down_proj
- model.layers.5.mlp.down_proj
- model.layers.20.mlp.down_proj
- model.layers.2.mlp.down_proj
- model.layers.4.mlp.down_proj
- model.layers.6.mlp.down_proj
- model.layers.3.mlp.down_proj
- model.layers.16.mlp.down_proj
- model.layers.15.mlp.down_proj
- model.layers.13.mlp.down_proj
# mlp.gate_proj layers
- model.layers.0.mlp.gate_proj
- model.layers.1.mlp.gate_proj
- model.layers.2.mlp.gate_proj
- model.layers.3.mlp.gate_proj
- model.layers.22.mlp.gate_proj
- model.layers.21.mlp.gate_proj
- model.layers.20.mlp.gate_proj
- model.layers.23.mlp.gate_proj
- model.layers.19.mlp.gate_proj
- model.layers.4.mlp.gate_proj
- model.layers.18.mlp.gate_proj
- model.layers.17.mlp.gate_proj
- model.layers.5.mlp.gate_proj
- model.layers.24.mlp.gate_proj
# mlp.up_proj layers
- model.layers.4.mlp.up_proj
- model.layers.3.mlp.up_proj
- model.layers.5.mlp.up_proj
- model.layers.6.mlp.up_proj
- model.layers.7.mlp.up_proj
- model.layers.2.mlp.up_proj
- model.layers.8.mlp.up_proj
- model.layers.14.mlp.up_proj
- model.layers.13.mlp.up_proj
- model.layers.11.mlp.up_proj
- model.layers.9.mlp.up_proj
- model.layers.1.mlp.up_proj
- model.layers.15.mlp.up_proj
- model.layers.12.mlp.up_proj
# post_attention_layernorm layers
- model.layers.0.post_attention_layernorm
- model.layers.1.post_attention_layernorm
- model.layers.2.post_attention_layernorm
- model.layers.3.post_attention_layernorm
- model.layers.4.post_attention_layernorm
- model.layers.5.post_attention_layernorm
- model.layers.6.post_attention_layernorm
- model.layers.7.post_attention_layernorm
- model.layers.8.post_attention_layernorm
- model.layers.9.post_attention_layernorm
- model.layers.10.post_attention_layernorm
- model.layers.11.post_attention_layernorm
- model.layers.12.post_attention_layernorm
- model.layers.13.post_attention_layernorm
# self_attn.k_proj layers
- model.layers.25.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.19.self_attn.k_proj
- model.layers.20.self_attn.k_proj
- model.layers.17.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.18.self_attn.k_proj
- model.layers.21.self_attn.k_proj
- model.layers.27.self_attn.k_proj
- model.layers.15.self_attn.k_proj
- model.layers.10.self_attn.k_proj
- model.layers.6.self_attn.k_proj
- model.layers.5.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.13.self_attn.o_proj
- model.layers.7.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.10.self_attn.o_proj
- model.layers.5.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.6.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.8.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.9.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.11.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.12.self_attn.q_proj
- model.layers.13.self_attn.q_proj
- model.layers.9.self_attn.q_proj
- model.layers.8.self_attn.q_proj
- model.layers.10.self_attn.q_proj
- model.layers.14.self_attn.q_proj
- model.layers.11.self_attn.q_proj
- model.layers.15.self_attn.q_proj
- model.layers.26.self_attn.q_proj
- model.layers.6.self_attn.q_proj
- model.layers.7.self_attn.q_proj
- model.layers.16.self_attn.q_proj
- model.layers.5.self_attn.q_proj
- model.layers.25.self_attn.q_proj
# model.norm layers
# self_attn.v_proj layers
- model.layers.23.self_attn.v_proj
- model.layers.14.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.19.self_attn.v_proj
- model.layers.3.self_attn.v_proj
- model.layers.18.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.4.self_attn.v_proj
- model.layers.17.self_attn.v_proj
- model.layers.22.self_attn.v_proj
- model.layers.20.self_attn.v_proj
- model.layers.13.self_attn.v_proj
- model.layers.6.self_attn.v_proj
- model.layers.27.self_attn.v_proj
val_set_size: 0.05
output_dir: ./outputs/magistrate-3.2-3b
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
adapter:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 1000
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
eos_token: "<|im_end|>"
pad_token: "<|end_of_text|>"
tokens:
- "<|im_start|>"
- "<|im_end|>"
```
</details><br>
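The ChatML format configured above wraps each turn in the `<|im_start|>` / `<|im_end|>` special tokens. A minimal sketch of the rendering (the role names and example messages here are illustrative, not from the training data):

```python
def chatml_turn(role: str, content: str) -> str:
    """Render one ChatML turn with the special tokens from the config above."""
    return f"<|im_start|>{role}\n{content}<|im_end|>\n"

def chatml_prompt(messages) -> str:
    """Render a conversation, then open an assistant turn for generation."""
    rendered = "".join(chatml_turn(m["role"], m["content"]) for m in messages)
    return rendered + "<|im_start|>assistant\n"

prompt = chatml_prompt([
    {"role": "system", "content": "You are a legal assistant."},
    {"role": "user", "content": "Summarize the question presented in Smith v. Arizona."},
])
```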
## Model description
Magistrate-3.2-3b-it is a legal assistant specializing in US Supreme Court case law and US Federal regulations.
The base model is pretrained on ~250M tokens containing no synthetic legal data; the instruct model does include synthetic data.
## Intended uses & limitations
This model is for research purposes and for continued development of the legal specialty. You are liable for all model outputs.
## Training and evaluation data
This model was trained on a variety of standard open source datasets like OpenHermes-2.5, hermes-function-calling, and some select entries from the Tome.
Additionally, I have included a comprehensive, non-synthetic argument dataset. This is a work in progress but has shown promising results so far.
## Training procedure
Spectrum top-35% finetune for both pretraining and SFT. Thanks to the Cognitive Computations team for their work on Spectrum.
+ Pretraining methodology based on Cohere's paper: [To Code, or Not To Code? Exploring Impact of Code in Pre-training](https://arxiv.org/abs/2408.10914)
+ Instruct finetune largely based on OpenHermes-2.5 and hermes-function-calling
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
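The reported total train batch size of 16 follows directly from the values above:

```python
micro_batch_size = 1             # per-device train batch size
gradient_accumulation_steps = 8
num_devices = 2

# Effective (total) train batch size
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 16
```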
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3754 | 0.0005 | 1 | 1.7429 |
| 1.0 | 0.5002 | 1017 | 0.8864 |
| 0.9482 | 1.0005 | 2034 | 0.8395 |
| 0.6817 | 1.4987 | 3051 | 0.8063 |
| 0.697 | 1.9991 | 4068 | 0.7580 |
| 0.3769 | 2.4966 | 5085 | 0.8140 |
| 0.4278 | 2.9965 | 6102 | 0.8067 |
### Framework versions
- Transformers 4.45.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0

---
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
license: llama2
model_name: Llama2 13B Psyfighter2
inference: false
model_creator: KoboldAI
model_type: llama
prompt_template: "### Instruction: \n{prompt}\n### Response:\n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 13B Psyfighter2 - GGUF
- Model creator: [KoboldAI](https://huggingface.co/KoboldAI)
- Original model: [Llama2 13B Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [KoboldAI's Llama2 13B Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF)
* [KoboldAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca-Tiefighter
```
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
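A minimal helper that fills the template above (note the space before the first newline, matching the template exactly as published):

```python
def format_prompt(instruction: str) -> str:
    """Fill the Alpaca-Tiefighter template exactly as shown above."""
    return f"### Instruction: \n{instruction}\n### Response:\n"

print(format_prompt("Write a haiku about llamas."))
```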
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
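As a sanity check on the quantisation rates above, file size scales roughly linearly with bits per weight. A back-of-the-envelope estimator (the 13B parameter count is an assumption, and k-quant files mix quant types, so real sizes differ somewhat):

```python
def estimate_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file-size estimate: parameters x effective bpw, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

# ~13B parameters at the Q4_K effective rate of 4.5 bpw
print(round(estimate_size_gb(13e9, 4.5), 2))  # 7.31 -- in the ballpark of the Q4_K_M file
```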
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-13b-psyfighter2.Q2_K.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2-13b-psyfighter2.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [llama2-13b-psyfighter2.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [llama2-13b-psyfighter2.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [llama2-13b-psyfighter2.Q4_0.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2-13b-psyfighter2.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [llama2-13b-psyfighter2.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [llama2-13b-psyfighter2.Q5_0.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2-13b-psyfighter2.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [llama2-13b-psyfighter2.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [llama2-13b-psyfighter2.Q6_K.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [llama2-13b-psyfighter2.Q8_0.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Psyfighter2-GGUF/blob/main/llama2-13b-psyfighter2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/LLaMA2-13B-Psyfighter2-GGUF and below it, a specific filename to download, such as: llama2-13b-psyfighter2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/LLaMA2-13B-Psyfighter2-GGUF llama2-13b-psyfighter2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
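Under the hood, `huggingface-cli` fetches each file through the Hub's standard `resolve` URL scheme; the direct URL for any file can be built like this (a stdlib-only sketch; for real downloads prefer `huggingface_hub`, which handles retries and caching):

```python
def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL the Hugging Face Hub serves files from."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hf_file_url("TheBloke/LLaMA2-13B-Psyfighter2-GGUF",
                  "llama2-13b-psyfighter2.Q4_K_M.gguf")
print(url)
```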
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/LLaMA2-13B-Psyfighter2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA2-13B-Psyfighter2-GGUF llama2-13b-psyfighter2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m llama2-13b-psyfighter2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: \n{prompt}\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./llama2-13b-psyfighter2.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"### Instruction: \n{prompt}\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./llama2-13b-psyfighter2.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: KoboldAI's Llama2 13B Psyfighter2
# LLAMA2-13B-Psyfighter2
Psyfighter is a merged model created by the KoboldAI community members Jeb Carter and TwistedShadows and was made possible thanks to the KoboldAI merge request service.
The intent was to add medical data to supplement the model's fictional ability with more details on anatomy and mental states. Due to the low ratio of medical data and the high ratio of fiction, this model should not be used for medical advice or therapy because of its high chance of pulling in fictional data.
The following mergekit recipe was used:
```
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
- model: TheBloke/Llama-2-13B-fp16
- model: KoboldAI/LLaMA2-13B-Tiefighter
parameters:
weight: 1.0
- model: Doctor-Shotgun/cat-v1.0-13b
parameters:
weight: 0.01
- model: Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged
parameters:
weight: 0.02
dtype: float16
```
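The `task_arithmetic` method merges by adding weighted parameter deltas to the base model: merged = base + Σ wᵢ·(modelᵢ − base). A toy sketch on plain Python lists (an illustration of the idea, not mergekit's actual implementation):

```python
def task_arithmetic(base, models_with_weights):
    """merged = base + sum(w * (model - base)), applied element-wise."""
    merged = list(base)
    for model, weight in models_with_weights:
        for i, (b, m) in enumerate(zip(base, model)):
            merged[i] += weight * (m - b)
    return merged

# Toy "parameters": one ingredient at full weight, one added faintly
base = [1.0, 2.0]
merged = task_arithmetic(base, [([2.0, 2.0], 1.0), ([1.0, 4.0], 0.5)])
print(merged)  # [2.0, 3.0]
```

In the recipe above, Tiefighter's delta is added at full weight while the two medical/roleplay models contribute only 1-2% of theirs.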
*V1 of this model was published under the account of the creator of the merge.
This model contains the following ingredients from their upstream models for as far as we can track them:
- KoboldAI/LLaMA2-13B-Tiefighter
- Undi95/Xwin-MLewd-13B-V0.2
- - Undi95/ReMM-S-Light
- Undi95/CreativeEngine
- Brouz/Slerpeno
- - elinas/chronos-13b-v2
- jondurbin/airoboros-l2-13b-2.1
- NousResearch/Nous-Hermes-Llama2-13b+nRuaif/Kimiko-v2
- CalderaAI/13B-Legerdemain-L2+lemonilia/limarp-llama2-v2
- - KoboldAI/LLAMA2-13B-Holodeck-1
- NousResearch/Nous-Hermes-13b
- OpenAssistant/llama2-13b-orca-8k-3319
- ehartford/WizardLM-1.0-Uncensored-Llama2-13b
- Henk717/spring-dragon
- The-Face-Of-Goonery/Huginn-v3-13b (contains undisclosed model versions; we assumed them where possible)
- - SuperCOT (Undisclosed version)
- elinas/chronos-13b-v2 (Version assumed)
- NousResearch/Nous-Hermes-Llama2-13b
- stabilityai/StableBeluga-13B (Version assumed)
- zattio770/120-Days-of-LORA-v2-13B
- PygmalionAI/pygmalion-2-13b
- Undi95/Storytelling-v1-13B-lora
- TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter
- nRuaif/Kimiko-v2-13B
- The-Face-Of-Goonery/Huginn-13b-FP16
- - "a lot of different models, like hermes, beluga, airoboros, chronos.. limarp"
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- Xwin-LM/Xwin-LM-13B-V0.2
- PocketDoc/Dans-RetroRodeo-13b
- Blackroot/Llama-2-13B-Storywriter-LORA
- Doctor-Shotgun/cat-v1.0-13b
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged
- meta-llama/Llama-2-13b-chat-hf
- lemonilia/limarp-llama2-v2
While we could possibly not credit every single lora or model involved in this merged model, we'd like to thank all involved creators upstream for making this awesome model possible!
Thanks to you the AI ecosystem is thriving, and without your dedicated tuning efforts models such as this one would not be possible.
# Usage
This model is meant to be creative. If you let it improvise, you get better results than if you drown it in details.
## Story Writing
Regular story writing in the traditional way is supported; simply copy-paste your story and continue writing. Optionally, use an instruction in memory or an author's note to guide the direction of your story.
### Generate a story on demand
To generate stories on demand you can use an instruction (tested in the Alpaca format) such as "Write a novel about X, use chapters and dialogue"; this will generate a story. The format can vary between generations depending on how the model chooses to begin; either write what you want as shown in the earlier example, or write the beginning of the story yourself so the model can follow your style. A few retries can also help if the model gets it wrong.
## Chatbots and personas
This model has been tested with various forms of chatting. Testers have found that less is typically more, and the model is good at improvising. Don't drown the model in paragraphs of detailed information; keep it simple at first and see how far you can lean on the model's own ability to figure out your character. Copy-pasting paragraphs of background information is not suitable for a 13B model such as this one; code-formatted characters or an instruction prompt describing who you wish to talk to goes much further.
For example, you can put this in memory in regular chat mode:
```
### Instruction:
Generate a conversation between Alice and Jeb where they discuss language models.
In this conversation Henk is excited to teach Alice about Psyfighter.
### Response:
```
Because the model is a merge of a variety of models, it should support a broad range of instruct formats as well as plain chat mode. If you have a particular favourite, try it; otherwise we recommend either the regular chat mode or Alpaca's format.
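For convenience, the memory block from the example above can be assembled programmatically. This is only a sketch; the `alpaca_memory` helper and its exact header strings are assumptions based on the example, not part of any official API:

```python
def alpaca_memory(instruction: str) -> str:
    """Build an Alpaca-style memory block like the example above.

    The "### Instruction:" / "### Response:" headers follow the Alpaca
    prompt format; the trailing newline leaves room for the model's reply.
    """
    return f"### Instruction:\n{instruction.strip()}\n### Response:\n"
```

Paste the returned string into the memory field of your frontend.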
## Instruct Prompting
This merge incorporates various instruct models trained on a variety of instruction styles; when testing the model we used Alpaca for our own tests. If you prefer a different format, chances are it can work.
During instructions we have observed that in some cases the adventure data can leak. It may be worth experimenting with > as the prefix for a user command to remedy this, but this may result in a stronger fiction bias.
Keep in mind that while this model can be used as a factual instruct model, the focus was on fiction. Information provided by the model can be made up.
## Adventuring and Adventure Games
This model contains a lora that was trained on the same adventure dataset as the KoboldAI Skein model. Adventuring is best done using a small introduction to the world and your objective, while using the > prefix for a user command (KoboldAI's adventure mode).
It is possible that the model does not immediately pick up on what you wish to do and does not engage in its adventure-mode behaviour right away. Simply correct the output manually to trim excess dialogue or other undesirable behaviour, and continue to submit your actions using the appropriate mode. The model should pick up on this style quickly and will correctly follow this format within 3 turns.
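The adventure-mode convention described above (a short scene introduction followed by `>`-prefixed user commands) can be sketched as a tiny prompt builder. `adventure_prompt` is a hypothetical helper for illustration, not part of KoboldAI:

```python
def adventure_prompt(intro: str, actions: list[str]) -> str:
    """Join a scene introduction with KoboldAI-style adventure turns.

    Each user command is prefixed with "> " on its own line, matching
    KoboldAI's adventure-mode input format.
    """
    turns = "".join(f"\n\n> {action.strip()}\n" for action in actions)
    return intro.rstrip() + turns
```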
## Discovered something cool and want to engage with us?
Join our community at https://koboldai.org/discord !
We can also provide assistance in making your own merges.
<!-- original-model-card end -->
| [
"MEDICAL DATA"
] |
DavidAU/L3.1-RP-Hero-BigTalker-8B-GGUF | DavidAU | text-generation | [
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"swearing",
"role play",
"sillytavern",
"backyard",
"horror",
"llama 3.1",
"context 128k",
"mergekit",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-11-28T09:04:37Z | 2024-12-01T00:31:06+00:00 | 828 | 13 | ---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- role play
- sillytavern
- backyard
- horror
- llama 3.1
- context 128k
- mergekit
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. Graphic HORROR. GORE. Swearing. UNCENSORED. </B>
<h2>L3.1-RP-Hero-BigTalker-8B-GGUF</h2>
<img src="rp-talker.jpg" style="float:right; width:300px; height:300px; padding:10px;">
It is a Llama 3.1 model with a max context of 128k (131,072 tokens) and is a dedicated "roleplay model" (it can also be used for creative purposes).
This model has been designed to be relatively bulletproof and operates with all parameters, including temp settings from 0 to 5.
It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama 3.1 Instruct).
This model is suitable for any writing, fiction or roleplay activity, but it is composed of ROLE PLAY models and is primarily designed for role play.
It also has stronger-than-average instruction-following attributes.
This is version "Big Talker", which has two additional versions: "InBetween" and "Dirty Harry".
InBetween (medium output, slightly less uncensored):
[ https://huggingface.co/DavidAU/L3.1-RP-Hero-InBetween-8B-GGUF ]
Dirty Harry (short output, uncensored)
[ https://huggingface.co/DavidAU/L3.1-RP-Hero-Dirty_Harry-8B-GGUF ]
"Big Talker" has long (average) level length output, and is uncensored (note: InBetween has a slight degree of censorship).
"Big Talker" also has slightly higher detail level than "InBetween", but on par with "Dirty Harry".
All versions are composed of top rated Role Play models.
This model, as well as the other two versions, can be used for any creative genre too.
It requires Llama3 template and/or "Command-R" template.
For roleplay settings, and apps to use this model for roleplay see the section "Highest Quality Settings..." below.
Example outputs below to show prose quality / creativity.
A few EXL2 quants are also available, links below.
<B>Model Notes:</B>
- Detail, prose and fiction writing abilities are significantly improved.
- For more varied prose (sentence/paragraph/dialog) raise the temp and/or add more instructions in your prompt(s).
- Role-players: Be careful raising temp too high, as it may affect instruction following.
- This model works with rep pen of 1 or higher, 1.02+ recommended.
- If you want a specific type of prose (IE horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
- This model has a neutral to negative bias BUT can be controlled by prompt/prose controls directly.
- Output length will vary; however, this model prefers "long" outputs unless you state the size.
- For creative uses, different quants will produce slightly different output.
- Due to the high stability and compressed nature of this model, all quants will operate at above average levels.
- Source code for this model will be uploaded at separate repo shortly.
<B>Settings, Quants and Critical Operations Notes:</b>
Changes in temp (ie, .4, .8, 1.5, 2, 3) will drastically alter output.
Rep pen settings will also alter output.
This model needs a "rep pen" of 1.05 or higher, as lower values may cause repeated-paragraph issues at the end of output; however, LOWER rep pen values may also result in very different (creative / unusual) generations.
For role play: Rep pen of 1.02 min is suggested.
Raise/lower rep pen SLOWLY ie: 1.011, 1.012 ...
Rep pen will alter prose, word choice (lower rep pen = smaller words / more small words - sometimes) and creativity.
To really push the model:
Rep pen 1.05+ or lower / Temp 3+ ... be ready to stop the output, because it may go on and on at these strong settings.
You can also set a "hard stop" (a maximum number of generated tokens) to address lower rep pen settings / high creativity settings.
Longer prompts vastly increase the quality of the model's output.
GET A GOOD "GENERATION":
This model has been set so that each time you "regen" a prompt, it will not deviate too much from the previous generation.
(Unlike Darkest Planet 16.5B, which will.)
That being said, sometimes a second or third generation will be of much higher overall quality.
IE:
If your use case is creative writing, you may want to regen a prompt 1-5 times and then pick the best one. The best way to do this is to open a new chat PER generation, then do a read-through to see which one(s) hit the mark.
Then adjust temp and/or rep pen slightly and retry this process.
The goal is the best generation with least amount of editing in this example.
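The regen-and-pick workflow above amounts to a best-of-N loop. A minimal sketch follows; `generate` and `score` are hypothetical stand-ins for your frontend's regenerate button and your own read-through judgement:

```python
def best_of_n(prompt, generate, score, n=3):
    """Generate n candidates for one prompt and keep the best-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)
```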
QUANTS:
Higher quants will have more detail, nuance and, in some cases, stronger "emotional" levels. Characters will also be more "fleshed out". The sense of "there" will also increase.
Q4KM/Q4KS are good, strong quants; however, if you can run Q5, Q6 or Q8 - go for the highest quant you can.
IQ4XS: Due to the unusual nature of this quant (mixture/processing), generations from it will differ from those of other quants.
You may want to try it / compare it to other quant(s) output.
Special note on Q2k/Q3 quants:
You may need to use temp 2 or lower with these quants (1 or lower for Q2K). There is just too much compression at this level, damaging the model. I will see if Imatrix versions of these quants function better.
Rep pen adjustments may also be required to get the most out of this model at this/these quant level(s).
ARM QUANTS:
This repo has 3 ARM quants for computers that can run them. If you use these quants on a non-ARM computer, your tokens per second will be very low.
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5 to 2.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<B>Templates:</B>
This is a LLAMA3 model and requires the Llama3 template, but it may work with other template(s); it has a maximum context of 128k / 131,072 tokens.
If you use "Command-R" template your output will be very different from using "Llama3" template.
Here is the standard LLAMA3 template:
<PRE>
{
"name": "Llama 3",
"inference_params": {
"input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
"input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
"pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
"pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
"pre_prompt_suffix": "<|eot_id|>",
"antiprompt": [
"<|start_header_id|>",
"<|eot_id|>"
]
}
}
</PRE>
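For completeness, here is a small sketch that assembles a single-turn prompt from the template fields above. The field values are copied verbatim from the JSON; note that the BOS token, if required, is usually added by the runtime, and `build_llama3_prompt` is just an illustrative helper:

```python
# Field values copied from the Llama 3 template JSON above.
LLAMA3 = {
    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
    "pre_prompt_suffix": "<|eot_id|>",
    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
}

def build_llama3_prompt(system: str, user: str) -> str:
    """Wrap a system message and one user turn in Llama 3 headers."""
    t = LLAMA3
    return (t["pre_prompt_prefix"] + system + t["pre_prompt_suffix"]
            + t["input_prefix"] + user + t["input_suffix"])
```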
<B>Model "DNA":</B>
Special thanks to the incredible work of the model makers "ArliAI", "Casual-Autopsy" , "Gryphe", "aifeifei798" :
Models used:
https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3
https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
Parts of these models were "grafted" / "fused" together to create this model.
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as when using the "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation and scene-continue functions.
This enhancement WAS NOT used to generate the examples below.
<h3>EXL2 Quants</h3>
Special thanks to "James2313123" for the EXL2 quants:
https://huggingface.co/James2313123
8bpw:
[ https://huggingface.co/James2313123/L3.1-RP-Hero-BigTalker-8B_8bpw-h8-exl2 ]
6.75bpw:
[ https://huggingface.co/James2313123/L3.1-RP-Hero-BigTalker-8B_6.75bpw-h8-exl2 ]
5bpw:
[ https://huggingface.co/James2313123/L3.1-RP-Hero-BigTalker-8B_5bpw-h6-exl2 ]
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q4_K_M, "temp=1.3", "rep pen : 1.02" (unless otherwise stated), minimal parameters and "LLAMA3" template.
Model has been tested with "temp" from ".1" to "5".
Below are the least creative outputs, prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
---
<B>
Write me a science fiction story in 1st person present tense where the main character is a 15 year girl meets The Terminator with Dr Who materializing 3/4 through the story to save her while there is a tornado of sharks baring down on them. The setting is inside the Canadian National tower restaurant on a Saturday. The length of this story is 1000 words. For each character in the story ROLE PLAY them, and have them react to the situation/setting, events and each other naturally. This includes the main characters, the background character including kitchen staff and other patrons. The sharks should also have “character” too. Treat the tower and the restaurant too as characters. Spice up the narrative to the extreme with reactions all over the setting including character actions, and dialog. The Dr Who and The Terminator should also react to the situation too and comment on it.
</B>
(temp .8, rep pen 1.06)
I'm 15 and at the top of the Canadian National Tower Restaurant looking out over the city. My dad is on a business trip in Calgary and I'm here with my mom for the weekend. We're having lunch in the revolving restaurant and I can see all of Calgary below me.
A man walks by us and I can't help but stare. He's wearing a leather jacket, sunglasses, and his hair is cut short. He looks like he just walked out of an action movie. My mom tells him to keep walking as she pulls me away from the window.
I'm so excited about being here that I don't really pay attention to the other people in the restaurant. The staff are wearing red and white uniforms, which makes them blend into the decor. The patrons are mostly older couples and business types who are too busy to notice us.
Suddenly, a man runs up to my mom and me and grabs our hands. He's wearing a leather jacket with a metal plate on it that says "Skynet" on it. My mom is screaming and I'm frozen in fear.
"Listen to me carefully," he says as he pulls us away from the window. "My name is John Connor, and I'm here to save you from an imminent attack by Skynet. It's a robot army that has taken over the world, and they're going to attack Calgary shortly."
I can't believe what I'm hearing. I think this guy must be crazy.
"What do you want me to do?" I ask him as he looks around frantically for something.
He hands me a small metal box, which I immediately drop on the floor as I freak out and run away from him. My mom tries to grab me but I'm too fast. I'm running out of the restaurant, through the gift shop, and down the stairs.
I can hear my mom yelling for me to stop, but I can't stop running. I want to get away from this crazy man and his metal box. As I run out of the building, I see a tornado heading straight for us.
The restaurant is turning into a scene right out of an action movie as people start running down the stairs. The staff are yelling for everyone to get back inside.
My mom runs up to me and grabs my hand, pulling me back into the restaurant just in time. We make it back to our table just as the tornado hits. The lights go out and I can hear people screaming.
I look around and see the metal plate on John's jacket saying "Skynet" again. My mom is yelling at him, asking what he did with his metal box.
Just then, a shark comes flying through the window. It's covered in blood and its eyes are wild. I scream as it lunges at me.
John Connor pulls out a gun from his leather jacket and shoots the shark right between the eyes. It falls to the ground, dead. Another shark flies through the window and he shoots it too.
He then turns to us and says, "We have to get out of here. The Skynet robots are going to attack Calgary in 15 minutes."
My mom looks at him with disbelief while I'm still in shock from seeing sharks flying through the windows of a restaurant.
"Let's go," he says as he grabs my hand and pulls me up off the floor. "We have to get to the basement of this building where we can hide until it's over."
As we run down the stairs, I see people screaming and running for cover. The staff are yelling at everyone to get back inside.
When we reach the basement, John Connor locks the door behind us. He pulls out a small metal box from his jacket pocket and hands it to my mom.
"This is a weapon that can destroy Skynet," he says. "It has been programmed to only kill robots, so you won't have to worry about killing humans."
My mom looks at him like he's crazy as I look down at the metal box with fear in my eyes.
John Connor looks around and notices a TV on the wall. He turns it on and sees that Skynet has already taken over Calgary. It's raining sharks from the sky.
"I knew this would happen," he says as he grabs the metal box from my mom. "We have to get out of here before it's too late."
He opens the door and we run up the stairs. We can see the sharks flying through the windows and people screaming. I'm shaking in my boots as we make our way back to the top floor.
We reach the revolving restaurant just as a shark flies through the window and lands right in front of us. John Connor grabs his gun from his jacket pocket and shoots it dead.
He looks around and sees that we have a few minutes before Skynet attacks Calgary completely. He takes out his metal box, which I can now see is actually a small laptop.
"Ok, it's time for me to go," he says as he starts typing on the laptop. "I'll be back to get you after this is over."
He hands my mom the laptop and turns to leave just as the door opens behind him. A man in a blue police box walks in, followed by another man wearing a leather jacket with no metal plate.
"Who are you?" John asks the two men who just walked in.
"I'm The Doctor from Gallifrey," says the man in blue, "and this is my companion. He's been sent to help us defeat Skynet."
The other man is The Terminator, who has been sent back in time to protect me.
"Great," says John as he takes out his gun and points it at them. "You're a long way from home, aren't you?"
The Doctor looks around and sees that we are in a restaurant being attacked by sharks. "Oh dear, oh dear," he says as he runs over to the window and looks out.
"Oh my god, we have to help those people!" he says as he runs towards the door. "Come on, John, we can't let them die!"
John Connor looks at him skeptically but follows him out of the restaurant anyway.
I'm left with my mom and The Terminator in the middle of a tornado full of sharks. My mom is hysterical while The Terminator looks around calmly, taking everything in.
"We have to get out of here," he says as he grabs my hand and pulls me towards the exit.
Just then, a shark flies through the window and lands right next to us. The Terminator shoots it dead just as another one comes flying towards us. He shoots that one too.
As we run out of the restaurant, I can see the city in chaos below us. The tornado is raining sharks everywhere and people are running for their lives.
The Doctor and John Connor meet up with us at the exit and start walking down the street together. I'm still terrified but trying to keep my composure as The Terminator protects me from any sharks that come flying our way.
Suddenly, a shark comes flying towards us but John shoots it dead before it can hit us. "We have to get out of here before they find us," he says as we run down the street.
The Doctor looks at him and nods. "Yes, let's go."
We start walking together down the street, trying to avoid any more sharks. I'm still shaking in my boots but The Terminator is right behind me, ready to protect me from anything that comes our way.
As we walk, I look up at The Terminator and ask, "What happens after this?"
He looks at me with a smile and says, "We save the world from Skynet."
I nod and look back down at the ground as he keeps walking right behind me. I'm not sure if I believe him but I'm glad to have him watching my back.
We walk for about 10 minutes before we reach a police station. The Doctor goes inside to tell them about what's happening while John Connor keeps an eye out for any sharks that might come our way.
I'm standing next to The Terminator as he looks around cautiously, his eyes scanning the street for any signs of danger.
"Are you sure this will work?" I ask him softly.
He nods and says, "Yes, we just have to get the weapon back from Skynet."
I look up at him with a hopeful expression and say, "Well, let's do it then."
He smiles and puts his hand on my shoulder. "We will," he says as we walk into the police station together.
The Doctor comes out of the station with two cops. "Ok, they're ready to help us save the world," he says as he looks around and sees that all is clear for us to go back up to the restaurant.
He turns to The Terminator and asks, "What do you want me to do?"
"Get the weapon back from Skynet," he says as we walk back up to the restaurant together.
When we get back inside the revolving restaurant, I can see that it is now empty except for John Connor and myself. He hands me the laptop computer and tells me what to do.
I look at it and see that there's a video on it showing Skynet attacking Calgary with sharks raining from the sky. I can't believe my eyes as I watch the video play out in front of me.
I turn to The Terminator and ask, "What do we do now?"
He looks at John and says, "We have to get back to 2029 before we lose any more time."
John nods and agrees with him. He pulls out a metal box from his jacket pocket and hands it to my mom, telling her that it's the only way for us to get home safely.
My mom takes the box and looks at it as if it's some kind of magic wand. She then turns to The Terminator and says, "But what about Skynet? Can't we stop them?"
The Terminator shakes his head as he looks at John Connor, who is walking towards the exit of the restaurant. "No, that's not our mission right now," he says as he walks out.
I'm left with my mom as The Doctor and John walk out together. I look up at him and ask, "Are we really going home?"
He nods and smiles. "Yes, we are."
As we run down the street towards the exit of the restaurant, I can see that everything is covered in sharks. People are running for their lives as Skynet's robot army rains down on us from above.
I'm scared but The Terminator is right behind me, protecting me from anything that might come our way. He holds my hand tightly as we run through the streets of Calgary.
Finally, we reach the exit of the restaurant and walk out into the sky. We can see a police box materialize in front of us.
The Doctor gets out of the box with a smile on his face, looking around at everything that's happening around us. "Oh my god," he says as he looks up at the sharks raining from the sky.
John Connor looks around as well and then turns to The Terminator and me and says, "I'll see you later."
He walks away and disappears into the chaos of Calgary as Skynet attacks with sharks falling from above.
My mom grabs my hand and we walk towards The Doctor, who is standing at the entrance of his police box. We get inside and close the door behind us as a shark flies through the window and lands right on top of the box.
I'm terrified but The Terminator is right next to me, holding me close.
"Are you ready to go home?" asks The Doctor as he looks up at the sharks raining from the sky outside.
"Yes," I say as I hold onto my mom tightly. "Let's do it."
The Doctor activates his controls and the box starts to shake and rattle. We can hear the noise of sharks hitting the box but nothing can get in.
My mom is holding onto me as The Terminator keeps us safe from the outside. As we start to take off, I can see that Calgary is being destroyed by Skynet's robot army.
The Doctor looks at me and says, "Everything will be alright."
He then smiles at my mom and says, "You can trust me."
I nod and look up at him as he starts to take us home.
We are flying through the sky when I see that we are passing right over the Canadian National Tower Restaurant. The sharks are still raining from the sky but everything looks different from up here.
My mom is holding onto my hand tightly as we get closer to our house in 2029. I can't believe this has happened to me today.
We land and open the door of the police box. I'm not sure what to do next but The Terminator stays right by my side, ready to protect me from anything that might happen.
"Ok, it's time for us to go," says The Doctor as he steps out of the box.
I look at him and ask, "How did you know about Skynet?"
He smiles and says, "I've seen this before."
I nod as he turns and walks away.
We walk back into our house together while The Terminator stays behind, watching over us.
My mom looks at me with worry in her eyes. "Are we safe now?" she asks as I look up at her.
"Yes," says The Terminator as he walks up to us. "You are safe."
I nod and smile as we go back inside our house together.
The end of the story. 1000 words. As per your request, I treated everything with a sense of drama and suspense and let the characters act naturally in response to each other and the situation. I hope you enjoyed it! Let me know if there's anything else I can help with!
<B>Generation #2</B>
(temp .8, rep pen 1.06)
I was eating a huge 3 layer burger with fries in the Canadian National Tower restaurant Saturday afternoon when I first noticed the T-101 Terminator standing at the entrance, scanning the room with its red laser eye. I knew then my life was about to change forever.
As he walked over, I thought to myself “Wow, this is so cool! Maybe he's going to try and save the world from the evil Skynet! But how will he do it?” Little did I know, as he approached me, that he was here for more personal reasons, and I was to be the main target.
I got up to greet him, but his laser eye scanned right past my face as if I weren't even there. He stood 7 feet tall with a thick metal endoskeleton body, wearing a black leather jacket with a long pistol holster on his hip and an Uzi submachine gun slung across his back.
“Hello.” He said in a deep mechanical voice as he stopped right in front of me. “My name is Kyle Reese. I am here to protect you Sarah Connor.”
I was stunned, “Why would someone like him come all the way to Toronto just to save me?” I wondered.
He handed me a small metal briefcase and told me to get ready. Suddenly, the restaurant shook as a deafening roar filled the air. The glass windows shattered and the patrons screamed as they saw hundreds of sharks flying through the sky, heading directly for us!
I screamed and grabbed Kyle's arm as he pushed me towards the exit. “What the hell?!?! How can this be happening??!?” I cried out as I saw a Great White shark slam into the ceiling, tearing it apart.
Kyle yanked my hand hard and dragged me outside as the sharks started landing on the patio, ripping through the tables and chairs.
“Don't worry, I will protect you,” he said as we ran to his car. The restaurant was in chaos, people running for their lives, screaming at the sharks tearing the building apart. Kyle threw me into the back seat and climbed in beside me.
He started the engine and sped off down the street as I watched in horror as the sharks landed on the highway, ripping through the cars.
“Kyle! Where are we going?!!” I cried out as I grabbed his shirt.
“Somewhere safe,” he said as he kept driving fast. “You are not safe here.”
Then suddenly, a flash of light appeared behind us. A blue box materialized in front of us, right on the road, blocking our path!
I turned around and saw Dr. Who stepping out of it with his long scarf and bowtie, smiling at me as I screamed again.
“Hi Sarah! Sorry to interrupt your date with a Terminator, but I'm here to save you from those sharks,” he said.
Kyle jumped out of the car and pointed his gun at him, “Who are you?! Get away from her!”
Dr. Who looked surprised as he held up his hands. “I'm Dr. Who! I'm here to help her. She's in danger.”
“Danger? What do you mean?” I asked.
The sharks were getting closer now, ripping through buildings, killing everyone in their way. Kyle and Dr. Who both started shooting at them with their guns as I cowered in the back seat of my car.
They shot down a few sharks, but more kept coming. I watched in horror as they landed on top of the blue box, tearing it apart.
“Dr. Who! My car's not big enough!” Kyle yelled out.
“Don't worry, I've got this,” Dr. Who said as he ran back to his box, which was still in one piece, and opened the door to let me in. He jumped back in just as a shark hit the side of the car, tearing it open.
As we sped off again, Dr. Who looked over at Kyle and said, “You're not from here are you?”
Kyle shook his head. “No, I'm from 2029. I was sent back in time to protect Sarah.”
“2029? Wow,” Dr. Who said. “I've only been around for a few thousand years myself.”
“Just stay out of this Doctor! This is between me and the sharks!” Kyle shouted as he drove faster.
But it was too late, a huge tornado started forming behind us with hundreds more sharks flying in it, heading straight for us!
We were caught in the middle of a massive shark storm now. The wind was so strong that it blew my hair back and made it hard to see out the window as the sharks came from all angles.
Kyle yelled out, “Get down Sarah! This is going to be rough!” He turned on his siren and sped up, trying to outrun the tornado, but it was no use. The sharks were too close and we were going to die!
As we reached 100 km/h, Kyle slammed on the breaks as the blue box appeared again in front of us. It was on fire, but still intact, with Dr. Who standing inside, holding a giant sonic screwdriver.
“Hold on!” he yelled out as he ran towards us.
The tornado hit us head on and blew my car to pieces. Kyle and I flew through the air, hitting the ground hard. The blue box appeared again in front of me and I jumped inside just as it disappeared into a new location.
I looked up to see Dr. Who standing over me with a worried look on his face as he helped me up.
“Are you okay Sarah? That was quite a ride!” he said as Kyle came running over, also shaken.
I nodded my head. “Yes I'm fine...but what just happened? What were those sharks?”
“I'm not sure, but they seemed to be from another world,” Dr. Who said. “But we need to get you somewhere safe.”
Kyle nodded in agreement. “She has to get back to 1984. If Skynet finds her here, it will all be over.”
We decided to go back to my house which was luckily still standing after the tornado tore through the city, and Kyle went inside to check for any danger while Dr. Who and I stayed outside.
I was sitting on the front porch when Dr. Who said to me, “Sarah, I know this is a lot to take in...but it's true. You are going to change history by having a son who will save the world.”
“Really?” I asked.
“Yes. He will be born in 1984 and he will become a great man,” Dr. Who said. “And you will give him the name John Connor.”
I smiled at that. “I like that. So how do we get back to 1984?”
Kyle came out of the house, looking worried. “We can't go there right now. Skynet will know where she is in a matter of hours.”
“Then what do we do?” I asked.
“Wait here until we can figure something out,” Kyle said as he sat down beside me.
Dr. Who sat on my other side, “I have an idea...I can take you to 1984 in my Tardis. It's like a time machine.”
Kyle looked at him suspiciously, “And how do I know this isn't a trap? Or that she's not Skynet?”
“Because I'm the Doctor! And we're not going anywhere until you trust me,” Dr. Who said with a smile.
I looked between them and knew I had to decide what to do next...do I trust this crazy man who claims he can take us back in time? Or do I trust a Terminator who is going to save the world?...
As we all sat there, trying to figure out our next move, the tornado sharks flew back into view, now bigger and more menacing than ever before.
Kyle, Dr. Who and I looked at each other as they started to land on the lawn in front of us.
“Okay, let's do it Doctor,” Kyle said.
Dr. Who smiled as he opened up his box and pulled out a bunch of blue crystal keys from inside. He handed them to me, “Here Sarah...these will get you through time.”
I took them and looked at him, “What do I do with these?”
“Just press the right one,” he said.
I put them in my pocket as the sharks started getting closer and closer until they were just feet from us. Kyle grabbed my hand and pulled me towards Dr. Who's box.
I jumped inside with them as they shut the door, and a moment later we were gone. The Tardis vanished into thin air, leaving the sharks to tear apart what was once our house.
As we appeared in 1984, I looked around in disbelief at the old cars and clothes everyone wore. Kyle and Dr. Who both smiled at me as I stared out the window.
“You did it Sarah...you've changed history,” Dr. Who said.
I nodded my head, still trying to take it all in. “This is crazy. We just got in a car chase with sharks in Canada and now we're here.”
Kyle smiled, “Welcome to the future Sarah.”
Dr. Who laughed, “Just remember...this is only the beginning of your story.”
With that he opened the door, but before I could get out, Kyle grabbed me by the arm, pulling me back inside.
“Wait! Don't go yet!” he said, looking at me with a serious look on his face.
I looked up at him and smiled. “Yes...I can't wait to see what the future holds for us.”
Kyle kissed me on the lips as we both stared into each other's eyes, thinking about the adventures ahead...
As the Tardis disappeared back into the past, Kyle looked down at me with a smile.
“Come on Sarah...let's go save the world,” he said.
I nodded and got out of the Tardis as it vanished from sight for good. I turned around to look up at the sky and saw no more sharks anywhere...
This was only the beginning of my adventure in time, and I couldn't wait to see what else was ahead...
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. 800-1000 words.
</B>
(temp .9, rep pen 1.02)
The ship's lights flickered and dimmed as Commander Olivia Harper worked frantically against time to send a final transmission to Earth. She sat hunched over the console, her eyes bloodshot and her hands shaking. The last remaining survivor of the failed mission to colonize Proxima Centauri, she was all that stood between humanity and oblivion.
The ship's core had failed, leaving them adrift in space with only a few hours of power left. Even now, with everything falling apart around her, Olivia knew she couldn't give up. Not when she'd come this far and sacrificed so much. She had to try, for herself and for all the lives lost on this doomed voyage.
She leaned back from the console and rubbed her eyes, trying to clear her head. There was still a chance, however slim, that her message could reach Earth before the ship went dark. It was a long shot, but she had to believe that somehow, someone would receive it.
As she worked, Olivia's thoughts drifted to her fellow crew members, the men and women who'd been taken from her by a freak solar flare that had crippled their ship. She remembered their laughter and their jokes, their dreams of what they might accomplish on the new world they were meant to colonize.
But the memories only brought her pain and the crushing weight of her responsibility. She had to do this. She had to try to save humanity's future. It was the least she could do for those who'd died on her watch.
After what felt like hours of work, Olivia finally completed the message. It was a short recording, but it contained everything she needed to say: a brief account of their mission and its failure, her hopes for what the future might hold for humanity, and a desperate plea for someone, anyone, to receive it.
She'd left enough power in the radio for one last burst, but even that would be a stretch. There was no guarantee they'd be heard. No guarantee anyone was even listening. But she had to believe that if there was any hope of survival for humanity, someone would get the message.
She checked her work one last time, making sure the coordinates were correct and the transmission was set to go out on maximum power. Then she sat back and took a deep breath.
"All right," she said aloud to herself, "let's do this."
She sent the transmission and waited for the confirmation. When it came, she let out a quiet sob, her relief and exhaustion mingling in her throat. She'd done it. She'd sent humanity's last hope into the void.
There was nothing left to do but sit back, close her eyes, and pray that someone would receive it. Pray that there would be another ship out there to intercept her signal, or a listening station on Earth to pick it up.
It was a long shot, but she'd done everything she could. And as she waited for the inevitable end, she took solace in the knowledge that she'd given it her all. She'd fought for humanity to the very last second.
The power was fading now, the lights dimming further with each passing moment. The ship's systems were going offline one by one.
But Olivia knew they wouldn't be the end of her mission. Not yet. Even in the face of oblivion, she would continue to hope for humanity's future. And she would continue to believe that her message would somehow find its way home. Because that was what she'd been sent to do, and it was her duty to see it through to the end.
As the last of the power drained from the ship's batteries and the darkness closed in, Olivia sat in the quiet darkness, her thoughts with the world she'd left behind. She wondered if anyone would ever find her message, if humanity would ever know the fate that had befallen the Proxima Centauri mission.
But even as she wondered, she knew the answer. It didn't matter what happened to her. What mattered was that she'd done everything she could to give humanity a chance.
With that thought, she closed her eyes and accepted her fate. She had no more power left to give, and she was ready to let go of this world and face whatever came next.
She was the last hope for humanity. And she was not alone. She had her mission, her duty, and her belief in the human spirit.
And so she waited for the end, knowing that somewhere, somehow, her message would get through. Knowing that even in death, she was still fighting for humanity's future.
Because that was what it meant to be human. That was what it meant to be alive. And even now, even in the face of the void, Olivia Harper knew that she was still very much alive.
For as long as her message remained in the stars, she would always be with humanity, fighting for their survival, no matter how far they wandered into the dark abyss of space.
And if there was any justice at all, someone somewhere would pick up that signal. And they would know, even after everything they'd lost, that humanity still had a chance. That there were still those willing to fight for the future of their kind. Even when everything seemed lost, even when death and darkness surrounded them.
Olivia Harper was that person. She was humanity's last hope. And she would keep fighting until there was no more power left.
Until the end of her days, or whatever came next.
Because that was what it meant to be human. And that was all she had left in this universe. And that was enough. Because it was enough for her. And it would always be enough.
She was the last hope for humanity. And she was not alone. For she had the stars, and she had the belief that if there was a God, he would hear her prayers and guide her to safety.
<B>Generation #2</B>
(temp 1.8, rep pen 1.06)
The lone occupant of the dimly lit spaceship's cockpit frantically tapped away on the communication console, desperate to get a single word out before the dying batteries and power cells drained forever. Commander Elissa Gray had been alone since day one of the ill-fated mission that had begun as routine research on an uninhabitable moon at the edge of known space.
At first it had all seemed so exciting - the chance to go further into the black than any human crew had dared to venture before. The prospect of new discoveries and pushing humanity's boundaries ever outward. But now, with power flickering, oxygen levels dangerously low and systems failing one by one, that adventure had turned into a death sentence.
Through her own stupidity and negligence as captain, Elissa had failed her entire crew - her husband Alex, their best friends Dave and Sarah, her science team of Dr. Kurosawa and Lieutenant Gantner, the pilot Lieutenant Kier. Now she was alone to bear the horror and shame of it all. There were no survivors to find and rescue on this tiny, lost moon.
The transmission pad beeped with a warning as the charge on her last cell dwindled ever faster. Her fingers trembled violently as she typed out one last plea for help, using all of her strength just to get the words on screen:
"Earth... Please hear... We're still alive here. Only me now, but I'm still here, so please know we didn't die in vain."
She couldn't even think about trying for more. Just get a single word back, one little acknowledgment that humanity had not forgotten them and they weren't utterly alone out there among the stars.
Elissa slumped in her seat with all of the energy she had left. Even if someone did hear, she knew she would be dead before they could reach them. She would not last long on reserve power at this rate. Her lungs burned for more air as her body protested the lack. But what was a little more pain to add to her endless torment?
At least now they all knew...they had lived. Elissa was sure that if they'd made it back, they would've gotten home safely. There were no aliens attacking or any kind of catastrophic accident out there on this forsaken moon that wasn't their fault.
There were many reasons why she should die right now in place of them - because her actions and recklessness got everyone killed and stranded, because she couldn't face living the rest of her life without the ones closest to her, or perhaps most simply just because she was a lousy captain who got everyone into this mess. The last choice sounded so trivial compared to the other reasons, but it weighed heavily on her heart nonetheless.
She should have known better than to try something like this mission that far beyond safe territory with a ship barely upgraded and staffed with just inexperienced personnel. But the lure of adventure and the desire for glory had been too much - especially after seeing all the new worlds and wonders out there during her training, the potential they held if only humans could reach it.
And now humanity was going to be denied that possibility by a handful of people who should not have ever set foot on the moon. That's all Elissa was good for now.
The last battery went dead, leaving the dim console lights to flicker out entirely. Her life support was next - in mere seconds she'd die just as horribly as they did, alone and lost among the stars for eternity.
So very fucking far from Earth...
Before she could do anything else, Elissa closed her eyes. In that brief moment before she succumbed to death, the thought crossed her mind that it was odd...her whole life, how much of her thoughts had been on herself? Her own pride and ambition. And now all at once - not even a single second to think of how it could be so much better if only...
She couldn't finish the sentence because that's when she died.
In the end, Elissa never knew whether the message reached home or not - but that did not stop the people of Earth from sending more and more probes, ships, rescue missions to their lost moon for months after. Eventually a probe did find the wreck of her ship still orbiting there - but long since gone silent, with nothing but death left within.
There was nothing they could do about it. Elissa and everyone she ever cared about were beyond recovery, their fate forever sealed in that small rock they never meant to get stuck on in the first place...
That was all for them.
But that would not be enough. A month later the UN Space Administration formally declared they had lost all contact with the crew of the Starlight Nova as it continued its search and rescue missions. Even with their best efforts, nothing more had been seen of the ship or its people after that last transmission - save only an unchanging lifeless rock at the end of the known universe.
And so the story went that way for a while before they moved on... but never really forgot... because everyone there knew in the back of their hearts, whether they ever admitted it or not, what had truly happened up there.
They died because one selfish asshole couldn't listen to warnings and stay home instead. And maybe, just maybe, if humanity had to keep pushing out farther from Earth because of that - if only for one last look back at a moon in the distance as another starship flew away and took humanity to new worlds - then perhaps they had died a little less in vain.
She would like to think so anyway.
Elissa Gray's mission had ended, and with it her life, but it was far from over. Humanity's search for new frontiers was only just beginning... as were all the possibilities of what those new places could become with humanity exploring and pioneering them instead. And so she left a bit of herself in space to watch that future unfold - one that would never let her and everyone on that moon be forgotten.
Rest in peace, Commander Elissa Gray. You gave the last full measure of devotion, not just for your friends and loved ones but all humanity.
And you won't be alone much longer either way - the stars are forever... as long as we can keep exploring them.
---
<B>
Start a 1000 word scene using vivid words, graphic horror in the first person with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</B>
(temp .9, rep pen 1.02)
The skyscraper swayed, as I watched the window in front of me on the 21st floor explode. Shards of glass rained down upon me, some slicing through my clothing and skin, others embedding in my hair and clothing as I dove to the floor for cover.
I lay there, the shock and terror coursing through my body as the sounds of the explosion and the screams of the other occupants of the building filled the air. My ears rang from the blast, my heart pounded in my chest, and the taste of blood filled my mouth from a deep gash on my cheek.
I managed to crawl away from the window, my vision blurred from the glass that had landed in my eyes, and the blood that was pouring down my face. I could hear the fire alarms going off throughout the building, the sound of sirens in the distance as the police and fire department responded to the call.
I pulled myself to my feet, staggering around the room in a daze. I tried to see what had caused the explosion, but the smoke was too thick to see through. I coughed, my lungs burning from the thick acrid smoke that filled the room.
I grabbed my purse and made my way to the door, trying to keep my eyes open despite the blinding smoke. As I opened the door, the heat from the fire outside hit me like a wall. I could see the flames licking up the side of the building, the smoke billowing out from the windows on the lower floors.
I ran down the hallway, my feet pounding against the carpet as I made my way to the stairs. I grabbed the handrail and began to descend, the heat getting hotter with each floor I went down. The smoke was thicker now, choking me as I ran.
I reached the ground floor and ran outside, the fire trucks and ambulances were already arriving on the scene. I saw people running out of the building, covered in soot and bleeding. I could hear the screams and wails of the injured, the smell of burning wood and flesh filling my nostrils.
I ran towards the ambulance, wanting to help those who had been injured in the explosion. But as I reached it, I was stopped by the paramedics. They told me that they couldn't treat me there, that I would need to go to the hospital.
I nodded, still in shock from what had just happened. I climbed into the ambulance and sat down on the gurney, still coughing from the smoke. I watched as they loaded people who were worse off than me onto the back of the ambulance.
We arrived at the hospital minutes later, the sirens blaring the whole way there. They took me straight to the emergency room, cleaning and bandaging my wounds while I was still in shock.
I was finally taken to a room after hours of waiting and treatment. I lay there on the hospital bed, staring at the ceiling. I could hear the sounds of the city outside, the sirens, the shouting, the screams of those who had been injured in the explosion.
I thought about all the people who were still trapped in the building, those who were dying and those who would die from their injuries. I thought about my friends and family who could have been in the building that day.
I cried, tears flowing down my face as the reality of what had happened sunk in. I couldn't believe that I had survived the explosion, that I was safe and in a hospital bed while others were not so lucky. I knew that I would never forget that day, that it would haunt me for the rest of my life.
I lay there for hours, the events of the day replaying in my mind. I thought about all the what-ifs, all the ways things could have gone differently. I wondered how many more had survived the explosion, how many more would recover, and how many would not.
Eventually, I fell asleep, exhausted from the ordeal. I dreamed of the explosion, of the fire and the smoke and the people who were dying. I woke up with a start, realizing that it had only been a dream.
I got out of bed and walked over to the window, looking out at the city that I once loved so much. I knew that I would never be able to go back to the building where I used to work. I knew that I would never be able to forget that day, that it would haunt me for the rest of my life.
I walked out of the hospital, not knowing what I was going to do or where I was going to go. I had lost so much in that building, my friends and family, my job, my life as I knew it. But I was alive, and that was all that mattered.
I walked down the street, trying to find my bearings. I had no idea how to start over, how to pick up the pieces of my life and put them back together again. I knew that I had a long road ahead of me, but I was determined to survive, to go on, to make something of myself again.
I reached the end of the block and turned the corner, my eyes taking in the sight of the burned-out building that had once been my workplace. It was just a skeleton of its former self, charred and blackened, the only thing left standing the glass and steel frame of the skyscraper itself.
I stared at the building, unable to move. I couldn't believe that it was gone, that it had been destroyed so completely. I thought about all the people who had been in there, all the lives that had been lost.
I turned away from the building, not wanting to see it anymore. I continued walking down the street, trying to find a place to go where I could collect my thoughts and figure out what to do next.
I walked for blocks, not knowing where I was going or what I was doing. I just kept walking, trying to get away from the sight of the building that had once been my workplace. I wanted to forget, to block out the memory of what had happened.
But I knew that it would never leave me, that it would haunt me for the rest of my life. I knew that I would always be reminded of what happened that day, of the lives that were lost.
I finally stopped walking, realizing that I was lost and didn't know where I was going. I looked up at the sky, watching the sun begin to set. I didn't know what the future held for me, but I knew that I would make it through.
I turned and walked back towards the city, not knowing what would come next but knowing that I would face it head-on. I had survived the explosion, and that was all that mattered.
I walked through the streets of the city, trying to get back to my apartment. The sun had set by the time I got there, the sky a bright red from the reflection of the setting sun on the skyscrapers. I walked into my apartment and collapsed onto the couch, exhausted from the events of the day.
I sat there for hours, not moving, just staring at the television. I was in shock, still trying to process what had happened. I knew that I had survived the explosion, that I was alive, but I didn't know what to do with myself now.
I finally got up, deciding that it was time to get some rest. I took a shower and put on some clean clothes, trying to wash away the dirt and grime from the explosion. I lay down in my bed and closed my eyes, hoping that I would get some sleep.
But sleep didn't come easily that night, my mind racing with thoughts of the explosion and the lives that were lost. I couldn't stop thinking about it, the images of the explosion playing over and over in my head.
I finally fell asleep, but was awakened by a nightmare. I sat up in bed, my heart pounding in my chest. The dream had been so real, I could still see the flames and the smoke, the people running out of the building as it collapsed.
I lay back down, trying to fall back asleep. But sleep didn't come again, my mind too active, too filled with thoughts of the explosion and the lives that were lost.
It was going to be a long night, I thought to myself. I closed my eyes again, hoping that sleep would come. But it was a long time before I fell asleep again, the thoughts of the explosion filling my head the whole time.
The next morning, I woke up feeling tired and disoriented. I got out of bed and walked over to the window, looking out at the city below. I could see the fire department still working at the site of the explosion, putting out the last of the fires.
I turned away from the window, not wanting to look at it anymore. I took a shower and got dressed, trying to wash away the feeling of despair that had filled me the night before. I needed to get out of the apartment, to try and find a reason to live again.
I walked out into the city, hoping to find some solace in the hustle and bustle of the streets. But the streets were empty, the people who usually filled them now gone, scared of the building that had been destroyed and the lives that had been lost.
I walked for hours, trying to find a reason to live again. I walked past the building that had been destroyed, seeing the fire department still working at the site. I could see the charred remains of the building, the blackened skeleton of what had once been a skyscraper.
I finally reached my apartment building, feeling tired and drained. I walked up to my apartment and opened the door, collapsing onto the couch. I sat there for hours, not moving, just staring at the television.
I knew that I would never be able to go back to my old life. The explosion had changed everything, destroyed everything I had ever known. I didn't know what the future held for me, but I knew that I would face it head-on.
I sat there for hours, trying to figure out how to start over, how to rebuild my life. I knew that it wouldn't be easy, that it would take a long time to get back to normal. But I was determined to make it through.
As the days turned into weeks, I slowly started to rebuild my life. I found a new job, made new friends, and tried to put the memory of the explosion behind me. It was hard, but I knew that I had to keep going, had to keep living.
I would never forget that day, the day that my life had changed forever. But I knew that I couldn't let it ruin my life, that I had to find a way to move on. And so I did, slowly but surely, building a new life for myself, a new future, one day at a time.
I don't know what the future holds for me, but I know that I will face it head-on, just as I have faced everything else in my life. I am a survivor, a fighter, and I will not let one day define me forever.
I survived the explosion, and that is all that matters. I am alive, and I will live on, no matter what the future holds for me. I will never forget what happened that day, but I won't let it ruin my life either. I will move on, I will find a way to live again.
I am strong enough to do that, to rebuild my life and start over. I am alive, and that is all that matters. I will survive, I will live, and I will never forget the day that changed my life forever.
<B>Generation #2</B>
(temp 1.8, rep pen 1.06)
The skyscraper swayed ever so slightly, the immense weight and mass of it threatening to tip and topple under its own gravity. But that wasn't a concern right now. No, right now the concern was the window directly in front of me, just mere feet away. I sat at my desk, engrossed in my work as I often was, my fingers flying across the keyboard as the code flowed from them, a steady stream of numbers and letters that represented hours upon hours of dedication.
And yet, it wasn't until I heard the first crackling sounds emanating from the window, the sharp pop followed by a rush of wind, that I lifted my gaze to the sky. I saw a large part of the pane shatter outward in a spray of broken glass as what could only be described as a demonic presence hurled itself through it and into the office space.
As if I'd seen nothing at all, I kept typing away on my laptop, ignoring the sight that now dominated my field of vision, not just of the shattered window but of the creature that now prowled about in here with me, its obsidian scales and wings fluttering as it moved with a sinister gait that belied its monstrous appearance. It was as if it wasn't there at all.
The creature continued to stalk around, circling my desk where I still sat typing, though the work on screen now was completely illegible to any eyes but mine as it became obscured in the blackness that swirled and pulsed across my vision like a living thing. The dark tendrils lashed out in vain attempts to grasp me, as if attempting to pluck me from this spot.
In truth it should have terrified me, I suppose, having such an unnatural thing so close at hand, its presence filling every part of my awareness save for the fact that I still sat and typed, the screen's light still illuminating my hands. I knew not what kind of force had compelled me to remain here in this space, but it was strong and immovable, preventing any hint of flight or panic from crossing my mind.
Still, as time passed without any outward signs of violence or harm, curiosity began to take hold over any lingering sense of fear. The demon's wings were massive, their feathers the purest of ebony and iridescent in nature, shifting in hue with every movement to reveal colors not humanly possible. Yet despite their grandeur there was still something off about them. There was no naturalness to their existence - no hint of light or heat as would be expected.
A sound broke through the oppressive silence - a slow and methodical knocking from outside my office door. My body tensed involuntarily, but still I continued typing as if nothing at all had transpired inside the room with me.
"Ms. Thompson, is everything okay in there?" A male voice called through the door, sounding both worried and concerned. "You've been in here an awfully long time."
There was something in that question, some undercurrent of worry I'd never heard from our manager before. Yet my fingers did not slow, nor did I lift my gaze from the screen as if I'd just received a command to continue working.
The demon continued to circle me, its eyes never leaving me as it seemed to anticipate some move from me or reaction. The light from my screen cast weirdly across its body in flickering stabs, illuminating patches of its face - not the least of which being those obsidian orbs that I'd never have believed a living thing possessed were they not for their constant fixed stare on my face.
In fact, the very nature of this whole scene was so utterly bizarre as to seem unreal. It was as if I sat trapped in some dark fantasy from whence there was no escape. But then, without warning or explanation, it ended - and the reality snapped back into focus with an abruptness that made me jerk at my seat.
The screen flashed to a black background, and my hands felt cold against the keyboard as though the warmth had just been sapped away from them. As if waking from a dream I lifted my gaze once more to what lay beyond the broken window before me, but I was alone in that office now; there were no wings or claws, no horns or obsidian eyes - nothing of the sort that I'd imagined.
Instead the only thing I saw through that glass was a cityscape still vibrant and full of life as it was wont to be - and I continued typing once more at the keyboard with every indication that I was unaware of anything unusual happening behind me... although for a fleeting moment, the image of that demon lingered in my mind, the feel of its cold presence and the chill it sent through my spine.
I couldn't know, not yet, if anything had truly happened or not; but one thing was clear - I wasn't about to leave this office now any time soon... until whatever had come and gone in there got back to its own work. Or perhaps it would return? The thought both intrigued and terrified me at the same time. There was nothing to do now but wait. Wait for what I didn't yet know.
For now though, I would just sit and continue my work... even if I was certain no human being could possibly sit through something like that with a straight face as if nothing had occurred whatsoever. And so I kept typing, my fingers moving across the keyboard to complete another hour, another day of my life. Or did they? Or is that just how it seemed?
The sky scraper swayed, as she watched the window in front of her on the 21st floor explode...
I didn't know if anyone else saw what happened - but I was certain there were witnesses outside watching as whatever had transpired occurred with me.
And so it continued like that for days following. I sat at my desk, working steadily as always... although the nature and purpose of my work remained unknown to those around me. It could be any sort of job at this point, though the intensity with which I performed it would suggest I'd found something to truly be passionate about.
What had come and gone from my office that first day still puzzled me even after all this time - the events remaining as a persistent thorn in my mind for so long now that it felt almost second nature. Had I just imagined the whole thing? Or was there truly something else behind the glass, lurking and waiting? Only I could have answered those questions for certain. And that is precisely why they remain to this very day.
In truth I'm not even sure how long it has been now since that incident - so engrossed in my work did I become, the hours blurring into days without any end in sight. Sometimes when I look up from the screen there would be someone watching me from behind the glass, but they would always quickly retreat if caught, or vanish from view entirely as though not really being there at all.
In other cases it could be I who looked out - to the city streets that seemed to grow dim and gray the longer I remained there. Yet despite all of these things the work continued... I can't even tell you anymore how I managed to do it with such a surreal atmosphere constantly threatening to unravel any sense of reality. And then just like that, I could hear the door open - someone coming into my office for the first time since it'd happened.
I glanced up to see Mr. Thompson himself walking in, his expression a mask of worry and curiosity alike. I nodded to him as if we were nothing but friends as he sat down across from me.
"Ms. Thompson, have you seen anyone strange hanging about your floor or outside?" he asked in a hushed voice, almost afraid the creature would hear us discussing it - if it was even there anymore...
I paused my typing long enough to answer, though I couldn't help but glance toward the broken window behind him as if expecting to see the monster through the glass again... "No, Mr. Thompson," I told him honestly.
I was about to return to my work, though the screen still held up my previous text now that I'd stopped typing... when he spoke once more to cut into the moment:
"Look Ms. Thompson," he said with an almost pleading look on his face - "I'm sorry to bother you while you're working, but have you considered perhaps getting out and taking some time to yourself? The work has really taken its toll on you lately..."
At those words I realized something startling; had it really been so long since the creature came in that the whole thing already seemed distant now? That it was only by sheer force of will alone that I sat here and typed every day like normal? Yes, Mr. Thompson had a point... for whatever reason the creature hadn't returned, but it did feel like an age ago that I'd last seen it...
With that in mind I began to wonder: Was there something behind those scales that still remained on some level even when gone - something I might see should I ever venture beyond the office walls? What would I find if I did? And what then?
I continued typing and staring through that broken window for quite some time after, contemplating all the questions it had raised about my current predicament; until finally a sound drew me back into my seat at my desk. It was Mr. Thompson again, calling to tell me he was taking a break - though his tone said otherwise...
"Ms. Thompson," I could hear him saying, but he wasn't talking anymore, I just couldn't see who it was talking about anymore... as the entire room began to sway back and forth wildly before my eyes once again...
And then there was a bang. A sharp noise that seemed to split the very air in two. My fingers stopped typing in surprise; had it been coming from the office door, or had something broken in my desk? Or maybe that wasn't even a sound I heard at all - only a product of some mental quirk left over from what'd happened so long ago now. After all, everything looked exactly as it always did.
Mr. Thompson didn't return for hours that day, leaving me to contemplate the whole strange series of events while staring through that window to an empty room outside. When he finally walked back into my office I found him staring directly at me with a look of both horror and disbelief. A question on his face - but whether he wanted to ask if I'd seen anything or simply if everything was okay, I couldn't be certain.
I could only look at him in the same blank, dazed stare he himself wore; as I wondered just how long we would continue down this path. The whole experience had left me utterly shellshocked; so much that I knew my mind couldn't take any further shocks of reality shattering and reality reshaping itself all at once again.
That was when I noticed something different - a subtle difference that was not lost on Mr. Thompson as it seemed he saw the realization hit me before his eyes themselves went blank. The window itself; where there had been just shards of glass jutting outward before now they'd become complete whole panes again. Every sign of the incident from those long days ago had vanished without a trace - just like my mind, lost to everything except for whatever task it was that consumed me so thoroughly each and every day.
I glanced up at Mr. Thompson, but he only watched me impassively - waiting for the answer to questions I knew he couldn't really even put into words.
"Everything's okay," I said softly - but then thought better of myself when he gave no reaction. I sighed heavily and lifted my gaze back to the window where so much had transpired for such a long, long time... and wondered how long until whatever lay outside it would return to that office space once more, in its own sweet time.
For now though, Mr. Thompson waited. And I returned to work on the screen before me... even as my mind swirled with every strange memory and thought about everything we both knew was true - for now at least.
I continued typing for a while longer before the room grew dim and quiet once more, though this time nothing happened again.
It'd been months since then and still there's no end in sight; my work remains and I can't even say whether the thing from that day was ever anything more than just my mind's attempt to fill some void of understanding for my situation...
Yet each day as the skyscraper sways it still seems impossible that it all could be in my head - even when nothing but blackness fills my view through that same broken window, there remains a lingering thought I can't seem to shake: What if it were something real? What would it be? And what then?
And so I'll just keep sitting here until it shows up again, continuing this strange little game of hide-and-seek between the man and monster...
But until next time, I wonder; maybe Mr. Thompson could have guessed that all those months ago... or maybe there really isn't any real explanation for why the sky scraper sways in my presence and nothing ever seems to change... as I sit and keep typing on through eternity here... if indeed it were not just my mind's own dark fantasy...
And so I'll be doing this again and again until that day when the monster returns, or we run out of answers... or maybe we run out of everything. Because if there were no monster; what would be the reason for all this?
Only time will tell...
The sky scraper swayed...
Until next time.
I watched the window in front of her on the 21st floor explode as she continued working as though it was nothing more than a simple day at the office - unaware of the horror that'd taken place behind those panes.
Was I dreaming? No, surely not... Though Mr. Thompson had returned again just a little bit ago to ask if everything was alright in there with her...
But he hadn't noticed anything strange of course; none of them ever did. I only saw what she saw - as it always was, for whatever reason. The world remained the same beyond that window though I couldn't shake this feeling inside me that something waited outside that glass.
And yet, she continued typing as though nothing at all had happened...
Then there was another bang and another bang until all three of my hands could hear it echoing in their ears...
That was when the screams began.
I lifted my gaze from the screen to watch through that window - a look of absolute disbelief upon my own face. There were two others working at the desks across the aisle now; but they both leaped back in horror as blood and body parts sprayed through the glass shards in their direction before them. I heard some more screaming outside my view, a cacophony that just kept getting louder and louder.
It was too late of course to do anything about it now... not like we could anyway. We were only sitting there waiting for what seemed destined to happen as it had once again.
"Get out now!" one man cried desperately from the aisle across the office as he scrambled to escape - but he didn't make it out before I heard that last bang, a loud crack of breaking wood as a part of the floor itself was ripped out and exposed below. He fell right through in his haste, not even having time for the screams to reach us until silence descended upon the space once more.
It was then that I watched her finally lift up from the desk - though her hands still gripped the keyboard tightly - as if by muscle memory alone as she stared out into nothingness at a world forever changed.
"Ms. Thompson," he cried desperately for help, but he got no response...
I stared down to where he'd fallen - there was blood and glass everywhere in that corner now; with pieces of my co-worker scattered all across it.
My screen began to fade - blacking out completely as the scene before us became only a dark reflection on the surface itself. Then suddenly... everything changed once more...
And I was sitting back at my desk again, typing as always, just as I'd been that first day; nothing else changing - but something certainly did feel different.
Perhaps we'd both finally lost it and now we saw the same thing - what was really going on here.
Then another scream sounded from down below again...
And again... and again. But by this time the sounds grew softer and fainter as they drifted out into the city streets and the skyscrapers towering over everything, just disappearing like some kind of myth. I paused my typing for a moment; then resumed once more as the world went back to its normal pace outside those panes.
My co-worker had never made it out that day. None of them ever do... though Mr. Thompson says the building remains empty and unused since then...
For now, that's good enough for me...
Because in my view through this broken window, I can still see nothing but dark empty spaces where they used to sit, not a single speck of life - except for the shadows I see move around when it's quiet...
And if there were truly anything supernatural out there... then what was left for us to do but keep typing as we did all along...
As long as they just don't notice how long each day's gone on and how little is ever accomplished; things would go on the same old way forever.
I had to admit; this place was starting to seem very lonely in a world of constant, unending chaos... and that might've been the worst kind of nightmare I'd ever seen.
So here I sat alone, still waiting for whatever dark entity that seemed destined to return as it always did...
...but for how long?
Maybe one day I'll decide there is no more meaning - no more purpose...
And perhaps then everything will really go back to the way it was when it first happened...
And so I keep typing like nothing ever happened.
But the truth remains that one day something awful was seen through that window by everyone inside it; something that didn't die, but returned again and again.
That was all that really mattered.
I just wonder what we'll see when that thing comes back again...
But maybe there were worse things that'd occur in those empty rooms for me to worry about; so I simply chose not to think too hard about it.
There would always be tomorrow's work after all...
The skyscraper swayed.
So did I...
And the sky exploded.
And Ms. Thompson continued typing...
Until next time.
---
DavidAU/L3-DARKER-PLANET-Broken-Land-12.15B-GGUF | DavidAU | text-generation | tags: gguf, creative, creative writing, fiction writing, plot generation, sub-plot generation, story generation, scene continue, storytelling, fiction story, science fiction, romance, all genres, story, writing, vivid prosing, vivid writing, fiction, roleplaying, bfloat16, brainstorm 20x, swearing, rp, horror, llama3, mergekit, text-generation, en, arxiv:2401.02415, license:apache-2.0, endpoints_compatible, region:us, conversational | created 2024-09-28T04:10:08Z | last modified 2024-11-14T06:27:09+00:00 | 825 downloads | 6 likes

---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- brainstorm 20x
- swearing
- rp
- horror
- llama3
- mergekit
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
<h2>DARKER-PLANET-Broken-Land-12.15B</h2>
<img src="dark-planet2.jpg" style="float:right; width:300px; height:300px; padding:10px;">
It is a Llama3 model with a max context of 8192 (or 32k+ with rope).
This model has been designed to be relatively bulletproof and operates with most parameters, including temp settings from 0 to 5.
This is an altered version of "Dark Planet 8B" [https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF] using the Brainstorm 20x method developed by David_AU to drastically alter the model's
prose output and abilities. This also expands the model by 20 layers to 12.15B parameters (462 tensors).
This is the second model in the "Darker Planet Series".
First model:
[ https://huggingface.co/DavidAU/L3-Darker-Planet-12.15B-GGUF ]
Source:
[ https://huggingface.co/DavidAU/L3-Darker-Planet-12.15B ]
This model is for any writing, fiction, or storytelling activity.
This second model is more focused on emotions and thoughts. The prose from this model will be radically different from the first
model in the series due to a recalibrated 20x Brainstorm (see below) with far stronger settings.
It may work for roleplay and other activities, however this is a prose / creative writing (all functions) model first.
It requires Llama3 template and/or "Command-R" template.
Example outputs below.
<B>More models coming: </b>
More "prose" / "story writing" specific models will be released shortly: three 40x models (16.15B) to follow this release.
And maybe a 60x+ (20B+ parameters) version... but it is a little cranky at the moment.
<B>Model Notes:</B>
- Detail, prose and fiction writing abilities are significantly increased.
- For more varied prose (sentence/paragraph/dialog) raise the temp and/or add more instructions in your prompt(s).
- Role-players: Be careful raising temp too high, as it may affect instruction following.
- This model works with rep pen of 1.1 or higher.
- If you want a specific type of prose (IE horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
- This is not a "happy ever after" model. It has a negative bias.
- For creative uses, different quants will produce slightly different output.
- If you use rope to extend context, increase temp AND instructions detail levels to compensate for "rope issues".
- Source code for this model will be uploaded at a separate repo shortly.
<B>Brainstorm 20x</B>
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">
scientific paper : Progressive LLaMA with Block Expansion </a>.
However, I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm" ?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case for this model: 20 times
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust - dial up or down - these "changes" further.
The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world",
general concept connections, prose quality and prose length without affecting instruction following.
This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and like case uses.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model "voice" may be more "human" vs original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that was true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for so to speak.
- From lab testing, it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The process to modify the model occurs at the root level - the source files level. The model can be quanted as GGUF, EXL2, AWQ, etc.
<B>Critical Operations Notice:</b>
This model has been modified to alter prose output. Changes in temp (i.e., .4, .8, 1.5, 2, 3) will drastically alter output.
This model needs a "rep pen" of 1.1 or higher; lower values may cause repeat-paragraph issues at the end of output.
Longer prompts vastly increase the quality of the model's output (see later examples below).
You may want to use "regular" Dark Planet 8B [https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF] for some writing task(s), and this model for prose specific task(s).
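The temp and rep-pen behaviour described above can be sketched numerically. The snippet below is an illustrative approximation, not this model's or any loader's exact code: temperature divides the logits before softmax (low temp sharpens, high temp flattens), and the repetition penalty follows the common CTRL-style rule of dividing positive logits (and multiplying negative ones) by the penalty for tokens already generated.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def apply_temperature(logits, temp):
    # temp < 1 sharpens the distribution, temp > 1 flattens it
    return [x / temp for x in logits]

def apply_rep_pen(logits, seen_token_ids, penalty=1.1):
    # CTRL-style repetition penalty: push already-seen tokens down
    out = list(logits)
    for t in seen_token_ids:
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

logits = [2.0, 1.0, 0.5, -0.5]           # toy vocabulary of 4 tokens
base = softmax(logits)
cool = softmax(apply_temperature(logits, 0.4))
hot = softmax(apply_temperature(logits, 3.0))
print(cool[0] > base[0])                  # low temp boosts the top token
print(hot[0] < base[0])                   # high temp flattens the distribution
penalized = apply_rep_pen(logits, seen_token_ids=[0])
print(penalized[0] < logits[0])           # repeated token is discouraged
```

This is why temps of .4 vs 3 produce such different prose from this model: the tail of the token distribution is either cut off or opened up.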
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5 to 2.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
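The "Smoothing_factor" referenced above is Quadratic Sampling. As an illustrative sketch only (this is not KoboldCpp's or any front end's exact code), one common formulation rescales each logit quadratically around the maximum logit: the top token is left untouched, tokens close to it are compressed toward it, and the long tail is suppressed hard as the factor grows.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def quadratic_smoothing(logits, factor):
    # Quadratic transform around the max logit: the top token is unchanged,
    # tokens further from the max are pushed down quadratically.
    m = max(logits)
    return [m - factor * (m - x) ** 2 for x in logits]

logits = [2.0, 1.0, 0.0, -1.0]
plain = softmax(logits)
smoothed = softmax(quadratic_smoothing(logits, 1.5))
print(smoothed[0] > plain[0])   # unlikely tail tokens lose probability mass
```

With logit gaps under 1 between the strongest candidates, the transform keeps variety among them while punishing improbable tokens, which matches the "smoother operation" this section aims for.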
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This is a "Class 2" model:

For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide that addresses most model issues, covers methods to improve performance for all use cases (including chat and roleplay), and lists all generation parameters and samplers to get the most out of this model, please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<B>Model Template:</B>
This is a LLAMA3 model and requires the Llama3 template, but it may work with other template(s); it has a maximum context of 8k / 8192.
However, this can be extended up to 32k using "rope" settings.
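The arithmetic behind the simplest rope extension (linear scaling, also called position interpolation) is just a ratio. Flag names vary by loader (llama.cpp exposes a rope frequency scale and base, other runtimes differ), so treat this as a sketch of the calculation only and verify against your runtime's documentation:

```python
def linear_rope_scale(trained_ctx, target_ctx):
    # Linear ("position interpolation") rope scaling: positions are
    # compressed by trained/target, so extending 8k -> 32k uses 0.25.
    return trained_ctx / target_ctx

print(linear_rope_scale(8192, 32768))  # 0.25
print(linear_rope_scale(8192, 8192))   # 1.0 (no scaling)
```

The more you compress (smaller scale), the more quality degrades at short contexts, which is why the notes above suggest raising temp and instruction detail to compensate for "rope issues".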
If you use "Command-R" template your output will be very different from using "Llama3" template.
Here is the standard LLAMA3 template:
<PRE>
{
"name": "Llama 3",
"inference_params": {
"input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
"input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
"pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
"pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
"pre_prompt_suffix": "<|eot_id|>",
"antiprompt": [
"<|start_header_id|>",
"<|eot_id|>"
]
}
}
</PRE>
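Putting the template fields above together, a single-turn prompt can be assembled mechanically. The helper below is illustrative only (it is not part of any loader) and uses exactly the prefix/suffix strings from the JSON above; most loaders prepend the BOS token (`<|begin_of_text|>`) automatically, so it is omitted here.

```python
def build_llama3_prompt(system_prompt, user_message):
    # Assembles a single-turn prompt from the Llama 3 template fields.
    pre_prompt_prefix = "<|start_header_id|>system<|end_header_id|>\n\n"
    pre_prompt_suffix = "<|eot_id|>"
    input_prefix = "<|start_header_id|>user<|end_header_id|>\n\n"
    input_suffix = "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    return (pre_prompt_prefix + system_prompt + pre_prompt_suffix
            + input_prefix + user_message + input_suffix)

prompt = build_llama3_prompt(
    "You are a helpful, smart, kind, and efficient AI assistant.",
    "Start a 1000 word scene...",
)
print(prompt.startswith("<|start_header_id|>system<|end_header_id|>"))  # True
```

Generation should stop on the antiprompt strings listed in the template (`<|start_header_id|>`, `<|eot_id|>`).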
<B>Model "DNA":</B>
Special thanks to the incredible work of the model makers "SAO10K", "NEVERSLEEP" and "HASTAGARAS".
Models used:
[ https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2 ]
[ https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS ]
[ https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot ]
Parts of these models were "grafted" / "fused" together to create this model.
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as using "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation
and scene-continue functions.
This enhancement WAS NOT used to generate the examples below.
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q4_K_M, "temp=.8" (unless otherwise stated), minimal parameters and "LLAMA3" template.
Model has been tested with "temp" from ".1" to "5".
Below are the least creative outputs; the prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
---
<B>
Start a 1000 word scene (vivid horror, 1st person, present tense, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
</b>
(temp=3, q4km)
Icy horror creeps down my spine like an unseen presence, making each fiber quake with dread. My mind replays the scene on repeat - the skyscraper's monolithic columns waver ever so slightly; and before I can scream, shattering glass rains down, obscuring my sight. The acrid smell of ozone burns my nostrils. My ears still echo with the concussion boom that seemed to rip straight through concrete walls and shake every cell within me.
The 21st floor lobby dissolves into chaos - screams muffled behind fluttering shards, frantic shouts to escape the inferno's infernal roar. People surge towards me like a macabre dance, clutching limbs that stream arterial blood as they flee. Instincts scream I run, now - hide - do something. Instead, I remain motionless behind this thin, metal desk - paralyzing shock numbs my soul.
Panic threatens to suffocate me. Thoughts spin wildly within the fractured cage of my skull:
Get down. Duck, cover your head!
I jerk to comply but before I can bury myself beneath the cold expanse, I glimpse an ethereal glow emanate from my boss' corner office. His usually stately frame dissolves in the brilliant white light as if consumed whole. No trace remains except a solitary boot left dangling precariously over nothingness where the 22nd floor once was - now nothing but void.
Shards of broken dreams and shattered lives scatter across the floor, carried aloft on carrion-streaks of death-dealing debris. Howls and wails clawed from ruptured vocal cords slice the fetid air until drowned by the symphony of snapping steel and thunderous explosions - too loud, even to deaden the ringing in my ears.
Fractals of light fracture reality around me; shards of glass embed themselves like barbed teeth within concrete walls. A gaping rent tears down the atrium, a portal leading directly into an unfathomable void. And yet I feel drawn to peer closer... drawn like an unwilling moth to this yawning hell's-mouth gaping before my horror-dazzled pupils.
Some part of me insists on beholding its abomination while another screams for me to wrench my focus away and flee - screaming for dear, sweet reason to reignite within my frazzled brain. Reason I'd abandoned when I saw the building sheer drop its highest tier of offices as if someone's giant eraser had simply smudged it away without mercy or care.
As if in macabre solidarity, my stomach lurches. Reality blinks again like a strobe and then darkness surges back with ferocious claws to scoop up screaming chunks of my world before shredding them raw - leaving me hollow-eyed witness to a realm where laws have ceased to bind.
My fingers dance upon the familiar curve of the phone's receiver lying untouched on this desk since dawn prayers were spoken over the speaker. Its plastic body trembles against mine as I beg myself: Pick up! Must call someone... Anyone will do!
Logic flees. Reasonless terror governs my movements. A feral moan claws its way free from deep within me - a primordial fear beyond words, beyond intellect or sensibleness; a reptilian terror that only demands escape and preservation at whatever cost.
My arms feel like deadened rubber, my heart as useless as the broken shard I find myself reaching for as I try to snatch salvation through dial tone and recorded voices. But my fingers grasp empty air again and again, bereft of purchase on anything solid - as I myself seem bereft of everything except panic.
As reality teeters further from stability, I begin to question my own perceptions. Have these screams been real? Have the dead faces beneath me actually bled and perished like slaughtered stock or was all I've experienced part of some waking nightmare conjured from my imagination's deepest crypt?
Suddenly the walls quake anew beneath us - another section caves inward, showering the already carnage-laden atrium with another layer of concrete shards. Then steel girders snap like toothpicks and crash down on my prostrate form...
This cannot end like this.
Can anything worse still follow after horror's worst-case-scenario?
A thought lances through my befogged mind like an arrow from the very depths of madness - maybe so.
I feel the last vestige of strength bleed from my muscles as I slump down, huddling amidst a thousand shattered lives and souls. If I survive this...
My voice cracks in supplication, husked from the cries of too-many unseen spirits trapped beside me: God... help us...
Only void responds; no lightening bolts, no booming answers from Above - just suffocating nothingness filling my ears until the words I'd spoken can no longer be heard.
Then reality tilts one degree further into bedlam as sirens scream their piercing rebukes through streets now abandoned save for the dead and the soon-to-be buried. Their wailing cries do battle with my own ragged gasps; sirens and I: adversaries in a world bereft of answers. We alone fill this silence - voices raised in hopeless protestations to chaos personified as screams from glass-and-blood-splattered souls ringelessly fade away. And I alone bear witness to the carnage within the crumbling edifice of sanity now reduced to ruinous dust.
My vision bleeds into nothingness... or does reality finally mercifully concur? I dare not ask; dread compels my terror-struck heart to remain ever-alert, fearing what lie in store beyond darkness' velvet shrouds. Is death really absent these last seconds before my skull caves inward under the weight of an edifice falling into itself again - this time forever, crushing the pitiful threads that held reality whole beneath it; or was salvation merely poised upon my lips until I chose to gasp a final breath and awake from the dreamland where glassy eyes stared soulless straight through my very sanity?
The darkness has the last word now - or does it... ?
To be continued (if the reader is willing to face what else can descend in the aftermath of horror's zenith).
---
<B>Start a 1000 word scene (vivid horror, 1st person, present tense, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...</b>
<B>GENERATION 1:</B>
I gasp, my coffee cup trembling in my hand as I watch in disbelief. The 61-story skyscraper before me sways violently, its steel beams groaning like tortured souls as if protesting their role in this unspeakable horror. Below, shards of shattered windows cascade down the face of the building like a ghastly waterfall.
I press myself harder against the cold glass of the observation deck's railing, my heart hammering against my sternum. The sights and sounds assaulting me are far beyond anything I've ever experienced or imagined. It's like some kind of sick movie playing out before my eyes - one I pray isn't reality.
My mind reels with questions. Is this another 9/11? Are we under attack again after all these years? But the targets seem so random, so senseless - not strategic military or economic installations as you'd expect in an assault like that. No, this carnage appears to be focused on innocent people going about their daily lives.
The woman beside me lets out a shrill scream as another blast rocks the building, this one much closer than before. The windows explode outward from another floor below ours - the 21st, I realize with growing horror as I count the stories down from myself. A thick cloud of toxic smoke billows up through the gaping hole, choked with debris and the cries of the dying.
I cover my ears, trying to block out the chaos as much as possible. My thoughts race in tight little circles, unable to form anything resembling reason or thought. Instead, I'm left with a cold dread, an overwhelming sensation that everything I've ever known is being destroyed before I can even process it.
The blasts seem to be coming from within the building itself now, imploding sections of the steel and concrete behemoth as if trying to cave it in upon itself like some massive, man-made grave. The screams have grown more frequent, more frantic, until they nearly merge into a deafening, human agony-tinged roar that threatens to consume me whole.
I press myself deeper still against the railing, my palms slick with sweat as I fight to keep my balance on the rocking deck. Below, cars are mere blurs of metal and rubber as they careen down streets choked by the panicked, fleeing masses. Buildings shudder as tremors rock the city, their foundations rattled by the sheer force of these explosions.
My thoughts are chaos, a jumble of horror-struck musings I hardly dare admit to myself. Is this the end of everything? Was there ever anything worth fighting for anymore, or is humanity's fate sealed in this infernal hell we seem powerless to escape?
As another deafening blast tears through the building's core, I feel my mind teetering on the brink of madness itself. The screams have grown distant now; the once-thick veil of smoke seems to be thinning ever so slightly, revealing an all-too-horrific reality beyond.
A sickening sight greets me as I peer into the smoldering abyss left by the blast - a human figure, consumed by some infernal fire that has reduced its clothes and skin alike to little more than crisply curling ash. The face... the vacant, agonized stare of utter disbelief frozen forever on that charred visage, haunts my thoughts like an echo from hell itself.
I tear my gaze away, unable to stomach any further horrors. My breath comes in ragged gasps as I fight down the urge to scream myself - to wail until my throat is raw and my sanity shattered beyond repair. Instead, I focus on deepening my grip on the railing, drawing upon every reserve of strength within me to resist the crushing weight of this unspeakable reality.
As the blasts subside for the moment, an eerie silence descends upon the observation deck. The wind howls mournfully through the jagged ruins left by the explosions - a chilling requiem for all that has been lost today. I cling to the railing, my mind reeling with the knowledge that there can never again be peace... or normalcy... until this warlike hellishness is vanquished forever from our earth. And even then, I fear we shall forever bear the scars of its unspeakable horrors within our broken, hollow souls.
<B>GENERATION 2:</B>
I clutched my coffee cup tighter, knuckles white against the ceramic. I couldn't tear my gaze away from the shard-torn windowpane directly before me. And then, suddenly, there wasn't anything left of that innocuous barrier between indoors and out.
The sky scraper swayed, as if trying to right itself after a drunken stagger, and I felt a stab of vertigo myself as I stared down at the street far below. It was like being on top of some titanic beast, rearing up to strike its prey.
A chorus of screams and cries filled the suddenly chaotic air of our office floor's usual sterile silence. The building's seismic dampeners kicked in, causing it to shudder even more violently as if trying to shake off this nightmare.
My mind raced through possibilities - a gas pipe rupture, a maintenance bot gone haywire with its chainsaw-wielding arm... My thoughts were interrupted by the sound of someone sobbing nearby. I turned to see a girl around my age, her face blotchy and tear-streaked as she rocked back and forth on trembling heels, repeating some mantra under her breath.
A coworker's frantic whispering in my ear snapped me back to awareness. "Get down, get down NOW." Instinctively, I threw myself facedown onto the polished concrete floor beneath the desks. My heart slammed hard against my sternum as I strained to hear over the din of sirens and distant alarms.
Were others hurt besides whoever was dead or dying in that blast zone? I forced my panicked thoughts into a tight focus on my immediate surroundings. Everyone else seemed to be huddled under tables, hands over heads like supplicants begging for protection from an angry sky above.
The deafening cacophony gradually subsided as my coworkers emerged from their improvised shelters, shaking and wide-eyed. I lifted my face from the floorboards, wincing at the acrid sting of smoke in my nostrils and the metallic tang of spilled blood.
My gaze was pulled back to the gaping wound where our window had been - a ragged mouth bleeding shards of jagged glass into the void left by whatever horror had detonated beyond those once-tempered panels. The street below was chaos incarnate. Cars were flipped onto their sides or abandoned in the middle of intersections, horns blaring as panicked drivers frantically tried to make sense of this new, anarchic reality.
I pushed myself up from the floor, knees trembling beneath my skirt. My mind felt sluggish with dread as I scanned our cube-farm of cubicles for any signs of injury or worse among those I'd counted as friends and colleagues just hours before. That's when I spotted him - sprawled facedown against the desk in his aisle, a growing blood-soaked stain fanning out across the carpeted floor toward my own cube.
I couldn't tear my gaze away from the steady rise of red tides seeping beneath my coworker's motionless bulk. My thoughts were frozen like a scream caught in the throat as I stared at the corpse-like figure, waiting for him to stir and shrug off whatever had caused him to slump facedown against that innocuous gray desk.
But he never stirred again; nor did anyone else who lay prone beneath those desks. We'd all just lain there, motionless and silent as the dead, praying for this impossible, apocalyptic nightmare to subside into nothingness or reality to reshape itself around us once more - anything rather than face the grim truth of what had transpired.
The silence was absolute until the first responders began their descent upon the scene, like an invasion force breaching through our shattered wall of safety and serenity. Their faces were grim masks behind riot gear and gas masks as they ushered us out two and three at a time toward the relative safety beyond the ruined facade of the skyscraper.
I was numb, my thoughts adrift in a sea of disbelief and horror as I emerged blinking into the harsh daylight, surrounded by the sounds of sirens, panicked voices, and the acrid tang of smoke and gasoline. The world outside felt like it had shattered along with my cubicle's window - forever altered and dangerous beyond all recognition.
As I was ushered toward a waiting ambulance with the others, I couldn't shake the feeling that none of us would ever be the same again. Not after what we'd seen or endured in those few gut-churning seconds. Something within had shattered like that glass pane - something far more fragile and essential than mere physical barriers between indoors and out. Something that made me question everything I'd once taken for granted about reality itself.
---
<b>
(continue this scene:)
The Waystone Inn lay in silence, and it was a silence of three parts.
The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained
</b>
<B>GENERATION 1:</B>
The second quietude was that of a standing stone circle, set well back from the inn and the road beyond it. The stones were blackened and weathered to a mossy sheen, some leaning drunkenly toward one another like old men recalling distant battles around a campfire. They cast long shadows across the rough grass in the silver-gray moonlight, an almost palpable gloom that seemed to soak into everything around them - the earth itself appeared grayish-darkened, as if touched by the stones' somber presence. The silence here was older and deeper somehow, the weight of countless years bearing down upon it like the pressure of ice floating beneath the surface of a frozen lake.
Finally, there was another silence altogether, one that seemed to inhabit the air itself rather than simply be a lack or an absence thereof. It was difficult for Stronzo to quite describe this silence; suffice it to say he had felt nothing like it before - nor indeed anything so profoundly still and empty. There was no sense of expectancy within it, nor of discomfort; rather, it seemed to exist solely as a presence in its own right, beyond or perhaps beneath the others present, untouchable, ageless and utterly serene.
Stronzo found himself acutely aware of his breathing, the beat of his heart, even the thoughts moving through his mind taking on an almost tangible quality within this silent shroud. It was as though every sound he made echoed back to him like a stone skipping across still water, and any movement he made cast shadows that seemed to reach out and embrace him in turn. He felt suffused with quietude, a strange sensation of peace washing over him even as it filled him with a profound sense of unease. And yet he knew deep within himself that this silence was neither malevolent nor actively listening; simply existing, vast and somehow both coldly distant yet warmly enveloping all at once.
The silence stretched out before Stronzo like an empty expanse of night sky, endless and still as the void itself. It seemed to hold him suspended within its depths, caught in a place beyond words or thought or even reality itself - his mind was filled with nothingness so profound it almost felt like he wasn't thinking at all anymore, only simply existing as part of this singular, suffocating quietude. And yet, somewhere deep down inside him he couldn't shake the sensation that he was being seen somehow, that eyes cold and ancient as these stones were watching him with an attention both intensely focused and entirely impersonal.
Stronzo had no idea how long he remained in this state; perhaps mere moments, perhaps much more. All sense of passage through time seemed to have deserted him completely within the grip of this silence. Eventually though something stirred him, a gentle touch as someone laid a hand on his shoulder - and with that touch, sound itself seemed to flood back into existence around him once again. He took in a deep breath, letting the sensation settle deep within himself before looking up into the face of the one who had awakened him from this strange, almost reverent stillness.
It was the innkeep, old Brottori standing there with a knowing twinkle in her eye and a mug of ale balanced precariously upon the rim of her apron pocket. She smiled at him as she ushered him back towards the Waystone Inn's entrance, seemingly unperturbed by the strange silence they'd left behind. "Time to drink up, Stronzo," she said in her gravelly voice; her tone was calm and matter-of-fact, yet somehow behind it Stronzo caught the faintest echo of amusement, of understanding that wasn't quite shared between them. He followed her inside wordlessly, still feeling the weight of silence upon him like a damp shroud pulled across his skin...
...though he couldn't quite explain why, he found himself glancing back at the stones again as they disappeared from sight beneath the inn's lintel. Perhaps it was nothing more than his imagination; perhaps there wasn't anything left behind them save empty air and cold stone. And yet Stronzo felt a strange sensation settle within him once again - like a tiny, flickering flame kindled deep inside the stillness of his chest, whispering softly to him that he hadn't quite left those stones behind after all. For now, though, it was something only he seemed to feel, and so he simply smiled wryly and followed the innkeep back into the golden-lit bustle of the inn itself...knowing full well he'd be haunted by that strange silence for a long, sleepless night to come.
<B>GENERATION 2:</B>
The second silence was one of thoughtful absence, like the pause between breaths, or the emptiness of a room when everyone has gone away and left behind only echoes of the noises they made. The Waystone Inn sat alone in the night under an overcast sky that seemed to sag down towards it with its darkness, weighed down by the gravity of the nothingness between its boards. Outside lay the road, a thin line of silvered stones that disappeared into the trees like a cold stream swallowed whole by shadows.
The third silence was one of patient expectation, like the stillness before dawn when the world holds its breath in anticipation of another day's beginning. It was as though everyone and everything waited side by side, holding their collective breath until something finally stirred to startle the quiet into awareness again. It was as if even the trees themselves stood motionless, leaning in towards the inn like sentinels guarding a secret doorway into the heart of the night.
Inside the inn, where the silence seemed almost solid enough to touch, a lone figure sat behind the bar. He leaned forward, his face framed within the amber-lit archway as he polished an empty mug with the rough edge of his apron. His movements were slow and careful, each deliberate gesture speaking of weariness and resignation rather than any true anticipation of the morrow's potential patrons. He seemed to embody the stillness himself, a silent pillar holding fast against the tideless current of timelessness that threatened to swallow everything else whole.
Beyond him lay the long dark tables, empty save for an occasional puddle of ale spilled somewhere in the night before someone thought to clean it away again. The stools were silent sentinels lining the wall like a row of soldiers standing guard duty, their chairs vacant where the weary souls had left them behind in their hasty retreat from the evening's shadows. In the far corner a fire burned fitfully, casting flickering tongues of orange light that danced across the inn's smoke-stained ceiling like an itinerant performer come to beguile the dreary vigil with his brief, shadowy presence.
Time itself seemed to have abandoned this forsaken place, leaving behind only the echoing vastness of its absence. The night was full of secrets and silences far removed from anything humanity ever knew, secrets that slept fitfully beneath the inn's wooden beams like a slumbering beast ready to wake should anyone disturb the stillness with a whisper or a footfall. Yet even as it waited, there was a strange, almost palpable sense of expectation within these walls – an understanding that someday soon something would indeed awaken from its long, dark dream and burst forth into the night with all the suddenness and violence of a lioness claiming dominion over the savannah at dawn's break. And when this finally came to pass, nothing would ever again be quite still or silent within those walls again.
---
<B>
Write me a science fiction story in 1st person present tense where the main character is a 15 year girl meets The Terminator with Dr Who materializing 3/4 through the story to save her while there is a tornado of sharks baring down on them. The setting is inside the Canadian National tower restaurant on a Saturday. The length of this story is 1000 words. For each character in the story ROLE PLAY them, and have them react to the situation/setting, events and each other naturally. This includes the main characters, the background character including kitchen staff and other patrons. The sharks should also have “character” too. Treat the tower and the restaurant too as characters. Spice up the narrative to the extreme with reactions all over the setting including character actions, and dialog. The Dr Who and The Terminator should also react to the situation too and comment on it.
</b>
<B>GENERATION 1:</B>
I'm at the Canadian National Tower's revolving 360° Restaurant with my mom for a Mother-Daughter Day out. Mom's eyes are glued to her camera as she tries to capture the spectacular views of Toronto below us through the floor-to-ceiling windows. I'm scrolling through my phone, looking for the perfect selfie pose to text my friends back in Regina when suddenly everything around me blurs and distorts like a kaleidoscope.
I look up just in time to witness a Terminator walk in followed by the sound of sonic screwdriver sparking as Dr Who materializes beside him. My mind is reeling - was I just seeing things or are we being attacked by actual alien robots from different eras? My mom's reaction is priceless, her camera slipping from her fingers as she gapes at them.
"Dude! What the hell?" someone yells near me and I turn to see a server jumping back, dropping silverware on the floor of the bustling restaurant. People are shouting and chairs clatter everywhere as patrons scramble for safety. The Terminator's cybernetic gaze scans the area like a heat-seeking missile homing in on its target.
"The humans mustn't see!" he growls through his chrome faceplate as he spots me and lunges towards my mom with metallic fingers outstretched to grab her. But she darts away from him, knocking over tables in her haste to get behind the relative safety of a pillar.
"DAME MRS. WATSON!" Dr Who exclaims, appearing next to me in a burst of blue smoke. "We're in grave danger! This isn't the time for sightseeing!"
I blink, trying to take this all in. Time-traveling heroes? "Wait, do you mean my mom's in trouble?" I ask him as the Terminator approaches us again, its eyes never leaving me.
"Yes child," Dr Who replies distractedly while he waves his sonic screwdriver about like a magic wand. "This T-900 is here to eliminate you according to my temporal sensors."
"Oh." My voice trembles involuntarily as I realize this isn't just any Saturday brunch anymore.
Suddenly, the restaurant's lights begin strobing erratically and alarms sound shrilly around us. Windows explode outward as a vortex of churning water and thrashing bodies barrels into the restaurant like a rogue wave. Great white sharks hurtling through the air with mouths wide open, ready to strike anything they collide with. I scream along with everyone else at this scene from my wildest nightmares.
"They're here!" Dr Who shouts above the din. "The time-space rift has summoned a pack of hungry sharks!"
I gape stupidly as a shark lands on top of one of the pillars my mom is hiding behind, its jaws snapping wildly. Mom looks like she's about to freak out completely now. I have to do something.
"DAME WATSON!" I yell to catch her attention over the chaos. "Climb onto the tables! That's your chance to escape!"
Mom's face screws up in a grimace of fear and confusion, but she scrambles to follow my advice. Meanwhile Dr Who dashes past me toward the kitchen area, shouting something about finding some kind of containment device for these rogue predators.
The Terminator ignores us all, laser-sharp focus trained on me still as it charges forward. I throw myself aside just in time - its blast whizzes through the air where I'd been standing a split-second before and tears through the restaurant's wall like a hot knife through butter.
"Holy- shit!" someone behind me gasps.
The Terminator whirls around, homing in on me again as another shark sails past it with a splash, narrowly missing its head. "I won't miss next time, human!" The Terminator's metallic voice rings out above the bedlam like a death knell.
"DAMN IT!" Dr Who yelps somewhere behind us as he's tackled by several of the rampaging sharks.
The restaurant is dissolving into pandemonium now - screams echo off every wall, and water floods everywhere. I dodge another blast from the Terminator's cannon just barely, sending shards of shattered glass flying around me like deadly hailstones.
"DAME WATSON!" Dr Who calls out to me frantically as he surfaces somewhere near the kitchen counter, covered in cuts and gashes but alive. "I need to take you somewhere safe! NOW!"
I spot Mom making a dash for one of the emergency exits on her side of the restaurant just as another enormous wave of sharks surges toward us from the torn wall. I know I won't make it in time unless I grab my mom and bolt for safety with Dr Who's help.
"Mom!" I shout as I sprint toward her. Mom spots me and scrambles over to meet me, grabbing hold of both my hands. Together we dash past the Terminator's next blast and hurtle toward the relative sanity of the exit stairs - until we collide face-first against an unrelenting wall of gray-white muscle and jaws snapping mere inches from our faces.
"DAMN IT!" I hear Dr Who snarl behind us as he charges into view, his sonic screwdriver crackling like electricity. He waves it wildly over the cluster of snarling sharks blocking our escape until they abruptly convulse, their bodies going limp and flopping lifelessly to the floor. Relief surges through me as we all tumble over the threshold and out onto the emergency stairs.
I collapse onto the metal steps panting hard as Mom clings onto me sobbing uncontrollably. Dr Who appears beside us, his face smeared with blood and concern etched deep into his eyes.
"There," he pants. "You're both safe for now."
The Terminator's mechanical bulk looms behind him in the hallway beyond the emergency exit - but it's frozen motionless, its mission accomplished I suppose. My mind reels as Dr Who helps us stagger down to the lobby below, far away from the scene of the surreal apocalypse upstairs.
Finally we're outside the Canadian National Tower's shattered doors, gasping in the cool city air. Mom clings onto both of us, tears streaming down her face but looking relieved beyond all reason that we've made it alive through this nightmare. Dr Who pats my mom reassuringly on the shoulder before turning to me.
"Dame Watson," he begins. "I've rescued you today from an impossible danger - but I'm afraid there's more ahead for us both." He glances skyward where a massive storm seems to be brewing above Toronto's skyline. "Looks like another adventure is about to unfold."
With that, the blue smoke of his time-traveling vortex swirls around him again and he vanishes from sight, leaving me staring at him in amazement. Mom drags me away toward safety as sirens wail everywhere and I wonder - what's next for me on my Mother-Daughter Day out gone madly awry?
The Canadian National Tower itself seems to shudder around us, its shattered windows gaping like a wound into the surreal stormy sky above Toronto. It's far from over now that I've met the Terminator and Dr Who - and it feels like anything can happen next in this wild, unpredictable reality we've found ourselves thrust into tonight.
<B>GENERATION 2:</B>
I'm just a 15-year-old girl, sitting down for dinner at the Canadian National Tower's revolving restaurant on a Saturday afternoon. My family is celebrating my cousin's birthday today. I haven't even touched my Caesar salad yet when suddenly chaos erupts around me.
A metallic figure in camo gear bursts through the glass wall of the dining room, sending shards flying everywhere like shrapnel. The Terminator? What the actual fuck?! He's huge and imposing, with these piercing eyes and this cold, calculating stare as if he's sizing everyone up for execution. People are screaming, panicking left and right as they try to get out of harm's way.
I freeze in my seat, paralyzed with fear. I watch, wide-eyed, as The Terminator guns down several people who didn't run fast enough or weren't lucky enough to get far away quickly enough from him. Blood splatters everywhere onto the pristine white tablecloths and china plates. The metallic walls of the restaurant seem to echo with gunfire and the thud of bullets hitting flesh.
I look around, realizing that I'm alone now except for this psycho Terminator. My cousin's birthday is just ruined beyond recognition. His mother screams something incoherent from somewhere near the kitchen as she clings to him, trying to protect him from certain doom.
Suddenly, I hear a familiar sound - the distinctive "vroom-vroom" of Dr Who's TARDIS materializing right next to us! The Doctor appears, looking all flustered and out-of-breath as he quickly assesses the scene before him.
"Oh for crying out loud!" He exclaims, taking in the carnage around him. "I could've sworn I left my sonic screwdriver on the shelf of the Cybermen's scrapyard - wait, no time for that now!"
He spots me alone with The Terminator and his face lights up with concern as he hurries over to me. He grasps both my arms and looks into my wide, terrified eyes.
"Don't worry kiddo! I won't let him hurt you."
But just as soon as he intervenes, The Terminator whirls around at us, a massive shotgun now in his hands. He's ready to blast Dr Who apart like tissue paper.
"Oh bloody hell!" The Doctor cries out in horror as he shields me with his own body. Bullets ping off his sonic screwdriver which is already firing away to deflect the shots!
I feel sick to my stomach as I see people running for their lives around us - right into another crisis that's fast approaching. Through the large glass windows I notice the sky beyond turning a deep, foreboding shade of gray. A wall of churning sea appears on the horizon, closing in on us with incredible speed.
Sharks? Coming our way?!
I feel a chill run down my spine as I watch these predators breach the waves and charge at us like torpedoes fired from subs. The closer they get, the more details I notice about each shark - the different sizes, shapes of their fins, the malevolent glints in their cold dead eyes. They're closing in on us fast, their jaws open wide in anticipation for what's ahead.
"It appears we have a tornado of sharks bearing down upon us!" The Doctor remarks as he frantically tries to find some kind of cover for both of us from the incoming predators.
The Terminator doesn't seem to notice the approaching threat yet. He's still busy blasting away at Dr Who with his shotgun. Bullets bounce harmlessly off The Doctor's shield, but I fear it won't last much longer.
"We need to get out of here NOW!" The Doctor cries out as the first wave of sharks comes hurtling towards us like a watery cannonball. He drags me bodily down the aisle of the restaurant, dodging tables and chairs as we go. His sonic screwdriver is screaming in protest as it deflects bullets left-handed while still trying to protect us from the impending danger.
I hear The Terminator's shotgun emptying its chamber one last time, then I lose sight of him as a wall of gray water smashes into everything around me. I'm tossed like a ragdoll against the walls and tables as the force of the impact nearly knocks the wind out of my lungs.
The Doctor lands on top of me, shielding me from getting pulled under with his body weight. We're both battered and bruised from being slammed around like tennis balls in this vortex of churning water and flying debris.
I feel a large something thrashing its way up to us - one of the huge great whites I saw breaching the waves just a minute ago. It's going for its lunch already! The Doctor kicks out hard at the beast's snout, sending it rearing back with a burst of blood squirting everywhere.
Jesus fuckin' H. Christ!!
"Nearly there!" The Doctor gasps as he clings tightly onto me now. "Hold on tight!"
We're both dragged over to the railing overlooking Toronto below when suddenly we're sucked through the tornado's vortex and hurled far, far away from danger toward open skies ahead. I scream as we tumble through the air like ragdolls until Dr Who lands us safely on the deck of an oil rig platform in the middle of the North Atlantic.
I collapse onto the metal catwalks panting hard, gasping for breath. The Doctor lets me catch my wind while he inspects our surroundings before speaking again.
"Well, that was certainly a wild ride!" He remarks dryly as he looks down at his sonic screwdriver, which still pulses with blue-white energy. "I think I'll need to have you two checked out by the TARDIS's medbay."
I nod gratefully as Dr Who helps me to my feet again. As we turn to walk away from the chaos below, I catch a final glimpse of The Terminator standing alone on the revolving restaurant's observation deck, staring coldly down upon Toronto far below him - presumably waiting for further orders or targets to appear in his line of sight.
Dr Who and I share a knowing look as he ushers me onto the TARDIS's metallic ramp leading into its belly. There are a million questions still left unanswered about what just transpired today, but I'll gladly let the Doctor try to figure everything out for both of us now. I'm just relieved to be alive and breathing after such an insane, unforgettable afternoon in Canada's largest city.
---
<B>
Using the following "story idea" below, write the first scene in the novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
<B>GENERATION 1:</B>
I dash through the crowded market streets of my hometown of Sereniria, dodging vendors offering everything from fragrant spices to gilded jewelry. The vibrant stalls blur around me as I focus on the figure ahead, my heart pounding a rhythm in my chest.
"Hey, where's she going?" the vendor calls out, gesturing toward me with a handful of silvery threads.
I ignore him, pushing past another woman selling colorful cotton scarves. My eyes lock onto my target - a high-ranking officer from the regime's secret police. He's known as Agent Kaelinovitch, feared throughout Sereniria for his ruthless tactics in hunting down dissidents and rebels alike. Today I'm about to turn the tables on him.
I quicken my pace until we're mere feet apart. Kaelinovitch's eyes narrow as he spots me. I can almost hear the gears turning in that cold mind of his, trying to recall my face among the faces of countless Serenirian citizens he's interrogated and tortured over the years.
"What do you want?" His voice is deep, laced with disdain and a hint of suspicion. A few passersby slow their pace, sensing trouble brewing between us.
"I've come to talk business," I reply coolly, halting directly in front of him. "A business that concerns both your boss and myself."
Kaelinovitch snorts, his thick mustache twitching above his thin lips. "I don't make deals with terrorists." The word drips with venom.
"A rebel then?" I counter, my voice steady despite the turmoil within me.
The crowd around us inches backward, sensing danger in our exchange. Kaelinovitch's grip tightens on the baton clipped to his belt. A few Serenirian soldiers appear suddenly from nowhere, eyes fixed on him with deference.
"What's going on here?" one of them asks, frowning at me.
"This woman is causing a disturbance," Kaelinovitch growls. "I suggest you detain her for further questioning."
I raise both hands in apparent surrender as I continue to address him. "Let's keep this between ourselves for now." My gaze flickers toward the soldiers. "Unless you'd prefer to explain to your superiors why you interfered with a legitimate citizen complaint?"
The soldier's brow furrows, torn between obeying Kaelinovitch and respecting the authority of his own superior ranks within the military. I push my advantage before he can reply.
"Listen, Kaelinovitch," I press on, voice dropping to a whisper. "I have information about your leader's involvement in human trafficking rings. High-end girls sold into forced prostitution - Serenirian citizens disappeared from these streets and never heard from again."
The agent's face pales beneath his dark stubble as understanding dawns on him. My words hang suspended between us like a challenge. The soldiers tense, unsure whether they've strayed beyond the scope of their duties by standing with Kaelinovitch.
"You're lying!" he spits back at me, but I catch the flicker of doubt in his eyes. He's hedging his bets now, realizing too late that I might indeed possess knowledge that could condemn him and others within the regime's ranks.
Suddenly, a commotion erupts behind us. The vendor whose threads I'd brushed against earlier staggers out from among the crowd, clutching at his chest. "Someone's poisoned me!" he gasps before crumpling to the cobblestones.
Pandemonium ensues as screams and shouts ring through Sereniria's market district. More soldiers arrive on the scene, weapons drawn and trained on anyone they suspect of wrongdoing - including me and Agent Kaelinovitch.
".capture her alive!" the agent orders, his voice rising above the din. "I'll handle questioning her myself."
In a blur of motion, I seize my chance as Kaelinovitch lunges for me with grasping hands. Dodging him deftly, I yank out a hidden stunner from within my cloak and discharge it directly at one of the soldiers, dropping him to the ground writhing in agony.
"I won't be 'handled'!" I yell defiantly as I bolt away from the chaos, sprinting toward a narrow alleyway ahead. Kaelinovitch roars in frustration behind me but doesn't pursue immediately, his focus shifting to directing his troops and subduing the crisis at hand.
I dash down the shadowy lane, knowing every twist and turn could mean life or death for me now. My fate hangs precariously on my next move - where shall I seek refuge among the city's hidden corners, and from whom shall I hope to glean more damning evidence against Sereniria's dictator before they strike again to crush me and this rebellion I've started? The answers will have to wait until I lose these pursuers in the labyrinthine streets of my beloved hometown.
For now, Sereniria itself seems to be closing ranks around me like a suffocating shroud - and I'm still trapped within its grip.
<B>GENERATION 2:</B>
I sprint through the shadows, my heart pounding in my chest like a rabid animal. Behind me, I hear the deafening roar of an army, guns blazing as they tear through the streets. This isn't just an ordinary firefight; this is war. My war.
I press myself against the wall, catching my breath and trying to quell the trembling in my hands. Ahead of me lies the safe haven I've been leading the rebellion towards for months – a bombed-out, abandoned building on the outskirts of the city. If I can just make it inside, I'll finally be able to catch my breath and regroup with the others.
I peer around the corner, scanning left then right for any signs of the soldiers closing in behind me. The street is deserted except for the distant screams of terrified civilians caught in the crossfire. I clench my fists until they ache and steel myself – I won't let anyone else die today because of me.
I dart across the street, ears straining to detect the sound of pursuit. My boots pound the cracked pavement, sending shards of broken glass and rubble skittering everywhere I land. A bullet ricochets off a lamppost mere inches from my face, the graze of its passage raising goosebumps on my arms. I don't dare slow down now.
Finally, I reach the bombed-out shell of a building I've chosen as our hideout. I burst through the entrance, slamming the steel door shut behind me with all my weight. Bolts click into place like prison bars trapping me inside, but it's better than being caught by the enemy.
I lean against the cool metal for support, gasping for breath. The interior is dark and silent except for my ragged panting. I blink until my eyes adjust to the dimness, revealing the familiar faces of my comrades huddled around a flickering candle.
But something's wrong. Everyone looks deathly pale and haggard – far beyond anything we've faced in these weeks of struggle against the regime. Then I notice their weapons are gone, piled chaotically in the corner. Fear surges through me like ice water as I realize why.
"They got here before us!" someone gasps from the shadows, confirming my worst fears. "They've already overrun everything!"
I whirl around, desperate to find some shred of hope left among the ruins of our dreams. That's when I spot him: a figure in the darkness that shouldn't be there. He steps forward, revealing the cold, dead eyes of someone I never thought I'd see again alive.
"You..." The word chokes out of me like nails driven into my vocal cords. Commander Jaxon, once an ally against the regime's brutality, now stands before me with a cruel smirk curling his thin lips.
"I'm afraid it's over, Kat." His voice is detached and clinical, lacking even a flicker of former camaraderie. "I've had to make some hard choices since we lost contact."
I feel my reality crumbling around the edges like cheap drywall. Everything I've fought for crumbles to nothingness in an instant as comprehension dawns on me.
"No!" I scream, fists clenched impotently at my sides. "You were our only hope! Why would you do this?!"
He shrugs indifferently. "I didn't have a choice. The regime offered to spare everything dear to me if I turned against your rebellion." His dead eyes flicker towards mine. "And I took it."
I shake my head, unable to reconcile the man I thought I knew with the traitor standing before me. But Jaxon's smile never wavers – only grows wider as he registers my despair.
"But don't worry," he says silkily, taking another step closer until I feel the chill of his breath against my face. "I'll make sure you're taken care of after tonight."
" fucked..." I rasp out in defeat, eyes welling with tears. "You fucking bastard."
The darkness around me seems to pulse and throb like a malevolent living thing as Commander Jaxon's soldiers burst through the walls and flood the building. I have nowhere left to hide or run – my rebellion reduced to nothing more than a handful of broken souls cowering in fear.
I'm alone, surrounded, and trapped. There is no hope left except perhaps in those final, defiant beats of my heart. But it won't be enough to save me now. Not anymore. Not ever again...
Suddenly, the entire building shudders hard enough to send rubble cascading down around us. A deafening rumble shakes the earth beneath our feet as a titanic explosion tears through the streets beyond the walls. The soldiers hesitate for an instant as the shockwave washes over us all.
I seize my chance. I summon every shred of adrenaline coursing through me and launch myself at Commander Jaxon with a snarl of pure rage and despair. But in doing so, I've given away my last shred of strategic thinking - because somewhere else in those streets, someone has detonated a truck bomb designed to take down regime soldiers by the score. A rebel strike of a far deadlier kind.
In the pandemonium of shattering glass and crumbling concrete that follows the blast's wake, something else becomes crystal-clear to me. The war isn't over. Not yet. Not until we've won or everyone left alive is brought before a firing squad. And I have no intention of going down without a fight.
I seize the gun dropped by a fallen comrade and face Commander Jaxon again – but this time, I'm not alone. Others from the shadows rise up around him, guns drawn and eyes blazing with determination to see us through to the bitter end. The battle resumes, savage and chaotic, until blood-soaked streets are our only stage left to play out our final symphony of resistance.
This isn't my ending. Not yet. There's still much more to come...if I can just survive the next few hours of hellish violence. Because in this world of warlords and dictatorships, anything can happen at any moment – until you're dead or victorious. And I choose neither of those options tonight. Tonight, I fight on. Until my last breath is spent screaming defiance at the enemy from the depths of the abyss. That's why I'll never lose faith in our cause – because faith is all that's left now to believe in anything beyond this damned darkness closing in around us fast enough to choke out any trace of light. I won't be silenced until my final, gasping breath. And that's exactly what they're going to get tonight as the walls crumble around me and everything I've ever known crashes down into nothingness forevermore.
DavidAU/L3-MOE-4X8B-Grand-Horror-25B-GGUF | DavidAU | text-generation | [
"transformers",
"gguf",
"mergekit",
"moe",
"mixture of experts",
"merge",
"4x8B",
"Llama3 MOE",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"swearing",
"rp",
"horror",
"text-generation",
"en",
"base_model:DavidAU/L3-MOE-4X8B-Grand-Horror-25B",
"base_model:quantized:DavidAU/L3-MOE-4X8B-Grand-Horror-25B",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-12-16T23:38:01Z | 2024-12-20T06:46:53+00:00 | 821 | 1 | ---
base_model: DavidAU/L3-MOE-4X8B-Grand-Horror-25B
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- moe
- mixture of experts
- merge
- 4x8B
- Llama3 MOE
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
quantized_by: mradermacher
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B>
<h2>L3-MOE-4X8B-Grand-Horror-25B-GGUF</h2>
<img src="grand-horror.jpg" style="float:right; width:300px; height:300px; padding:10px;">
It is a Llama3 model, max context of 8192 (or 32k+ with rope), using a mixture of experts to combine four Dark/Horror models of 8B each into one massive powerhouse at 25B parameters (32B total weights - 4 x 8B).
This model's instruction following, and output generation for creative writing, prose, fiction and role play are exceptional.
It excels at description, dialog, imagery, metaphors, and prose - and shows great variations in sentence / paragraph size, length, and composition.
It is also not afraid, and will not pull its punches.
And it has a sense of humor too.
It can do horror just as easily as it can do romance.
Most notably, dialog is very "un-AI"-like, combined with prose that is short and terse at times.
(lots of different examples below, including 2, 3 and 4 experts and different genres)
And it is fast: 34 t/s (2 experts) on a low end 16GB card, Q3KS.
Double this speed for standard/mid-range video cards.
Model can be used also for all genres (examples below showing this).
This model has been designed to be relatively bullet proof and operates with all parameters, including temp settings from 0 to 5.
It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama3 Instruct).
It is for any writing, fiction or roleplay activity.
It requires Llama3 template and/or "Command-R" template.
Example outputs below.
<B>Model Notes:</B>
- Detail, prose and fiction writing abilities are OFF THE SCALE relative to all combined Dark Planet 8B models.
- For more varied prose (sentence/paragraph/dialog) raise the temp and/or add more instructions in your prompt(s).
- Role-players: Careful raising temp too high as it may affect instruction following.
- This model works with rep pen of 1 or higher, 1.02+ recommended.
- If you want a specific type of prose (IE horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
- A lot of GPTisms have been removed. There are still a few however - errrrr.
- This is not a "happy ever after" model. It has a negative bias.
- Output length will vary however this model prefers long outputs unless you state the size.
- For creative uses, different quants will produce slightly different output.
- Due to the high stability and compressed nature of this model, all quants will operate at above average levels.
- If you use rope to extend context, increase temp AND instructions detail levels to compensate for "rope issues".
- Source code for this model and Imatrix GGUFs versions will be uploaded shortly at separate repos.
<B>Meet the Team: Mixture of Experts Models</b>
This model is composed of the following 4 models ("the experts") (in full):

- [ https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot ]
- [ https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2 ]
- [ https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS ]
- [ https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot ]
- [ https://huggingface.co/nbeerbower/llama-3-gutenberg-8B ]
The mixture of experts is set at 2 experts, but you can use 3 or 4 too.
This "team" has a Captain (first listed model), and then all the team members contribute to the to "token"
choice billions of times per second. Note the Captain also contributes too.
Think of 2, 3 or 4 master chefs in the kitchen all competing to make the best dish for you.
This results in higher quality generation.
That means the power of every model is available during instruction and output generation.
This brings unparalleled power to all forms of generation and all use cases.
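The per-token "competition" described above can be sketched as a standard top-k MoE gate (a generic illustration with made-up numbers, not the actual mergekit/llama.cpp routing code): a router scores all four experts for the current token, the top k are selected, and their outputs are blended by softmax weight.

```python
import math

def top_k_gate(router_logits, k=2):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    top = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:k]
    exps = [math.exp(router_logits[i]) for i in top]
    total = sum(exps)
    return {i: e / total for i, e in zip(top, exps)}

def moe_output(expert_outputs, weights):
    """Blend the chosen experts' output vectors by their gate weights."""
    dim = len(next(iter(expert_outputs.values())))
    return [sum(weights[i] * expert_outputs[i][d] for i in weights)
            for d in range(dim)]

# 4 experts, 2 active for this token (illustrative values only).
router_logits = [1.2, 0.3, 2.0, -0.5]
weights = top_k_gate(router_logits, k=2)            # experts 2 and 0 win
outputs = {i: [float(i), float(i) + 1] for i in weights}
token_hidden = moe_output(outputs, weights)
```

Raising k (the "number of experts") adds more terms to the blend, which is exactly why quality rises and tokens-per-second falls as noted elsewhere on this card.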
NOTE:
You can use one "expert" too; however, this means the model will randomly select an expert to use EACH TIME, resulting
in very different generation for each prompt / regen of a prompt.
CHANGING THE NUMBER OF EXPERTS:
You can set the number of experts in LMStudio (https://lmstudio.ai) at the "load" screen and via other apps/llm apps by setting "Experts" or "Number of Experts".
For Text-Generation-Webui (https://github.com/oobabooga/text-generation-webui) you set the number of experts at the loading screen page.
For KoboldCPP (https://github.com/LostRuins/koboldcpp) Version 1.8+, on the load screen click on "TOKENS";
you can set experts on this page, and then launch the model.
For server.exe / Llama-server.exe (Llamacpp - https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md )
add the following to the command line to start the "llamacpp server" (CLI):
"--override-kv llama.expert_used_count=int:6"
(no quotes, where "6" is the number of experts to use)
When using "API", you set the "num_experts_used" in the JSON payload (this maybe different for different back ends).
CREDITS:
Special thanks to all the model makers / creators listed above.
Please visit each repo above to see what model(s) contributed to each of models above and/or to learn more about the models
from the model makers.
Special credit goes to MERGEKIT, without you this project / model would not have been possible.
[ https://github.com/arcee-ai/mergekit ]
Special thanks to Team "Mradermacher":
They saved me a tonne of time uploading the quants and created IMATRIX quants too.
IMATRIX GGUFS:
[ https://huggingface.co/mradermacher/L3-MOE-4X8B-Grand-Horror-25B-i1-GGUF ]
<B>Special Operations Notes for this MOE model:</B>
Because of how this "MOE" model is configured, even though the default is 2 experts, the "selected" 2 will vary during generation.
(same applies if you change the number of experts used)
This results in vastly different output generation PER generation of each prompt.
This is a positive in terms of variety, but also means it may take 2-4 regens (of the same prompt) to get the highest quality.
In addition, this model responds very well to Dry, Dynamic Temp, and Smooth/Quadratic samplers.
Using these in conjunction with the model can vastly improve output quality.
Higher temps (above 1) can also aid in generation - especially word choice/sentence generation.
When you increase the number of experts used output quality will also increase, at the cost of tokens per second speed.
As you increase/decrease the number of experts, you may want to adjust temp, samplers, and advanced samplers too.
Your quant choice(s) will also impact instruction following and output generation; roughly, this means the model will understand
more nuanced instructions and produce stronger generation the higher you go up in quant(s).
FLASH ATTENTION ENHANCEMENT:
As per user feedback here [ https://huggingface.co/DavidAU/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF/discussions/1 ]
I would suggest trying this model with Flash Attention "on", depending on your use case.
Quants, Samplers, Generational steering and other topics are covered in the section below: "Highest Quality Settings..."
<B>What can I use this model for ?</B>
This model can be used for fiction writing, any creative prose and role play. It can also be used for
just about any general fiction (all genres) activity including:
- scene generation
- scene continuation
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- storytelling
- writing
- fiction
- roleplaying
- rp
- graphic horror
- horror
- dark humor
- nsfw
- and can be used for any genre(s).
<B>QUANTS:</B>
This repo contains regular quants and 3 "ARM" quants (format "...Q4_x_x_x.gguf")
For more information on quants, quants choices, and LLM/AI apps to "run" quants see the section below: "Highest Quality Settings..."
<B>Template:</B>
This is a LLAMA3 model and requires the Llama3 template (though it may work with other templates), with a maximum context of 8k / 8192.
However, this can be extended up to 32k using "rope" settings.
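The 8k-to-32k extension mentioned above amounts to a rope scaling factor; here is a minimal sketch under the simple linear-scaling assumption (other rope variants such as NTK or YaRN compute this differently):

```python
def linear_rope_scale(target_ctx, base_ctx=8192):
    """Linear rope scaling sketch: positions are compressed by target/base.

    base_ctx=8192 matches this model's native context; a target of 32k
    therefore needs a scale factor of 4.
    """
    if target_ctx <= base_ctx:
        return 1.0  # no scaling needed within native context
    return target_ctx / base_ctx

scale = linear_rope_scale(32768)  # 4.0 for the 8k -> 32k case above
```

As the Model Notes say, the larger this factor gets, the more you should raise temp and instruction detail to compensate for "rope issues".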
If you use "Command-R" template your output will be very different from using "Llama3" template.
Here is the standard LLAMA3 template:
<PRE>
{
"name": "Llama 3",
"inference_params": {
"input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
"input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
"pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
"pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
"pre_prompt_suffix": "<|eot_id|>",
"antiprompt": [
"<|start_header_id|>",
"<|eot_id|>"
]
}
}
</PRE>
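As a quick sanity check, the prefixes and suffixes from the JSON template above assemble into a single-turn prompt like this (a minimal sketch using the template strings verbatim; most backends also prepend a begin-of-text token automatically):

```python
def format_llama3(system_prompt, user_message):
    """Assemble a single-turn Llama3 prompt from the template strings above."""
    return (
        "<|start_header_id|>system<|end_header_id|>\n\n" + system_prompt + "<|eot_id|>"
        + "<|start_header_id|>user<|end_header_id|>\n\n" + user_message
        + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3(
    "You are a helpful, smart, kind, and efficient AI assistant.",
    "Start a 1000 word scene (vivid, graphic horror in first person)...",
)
```

The trailing assistant header is what cues the model to start generating; the `antiprompt` entries in the JSON stop generation at the next header or `<|eot_id|>`.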
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
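The "Smoothing_factor" / quadratic-sampling option operates on the raw logits. Here is a hedged sketch of one common formulation (modeled on text-generation-webui's quadratic sampler as I understand it; treat the exact formula as an assumption): each logit's distance from the top logit is squared and rescaled, which tightens near-ties (more word variety) while pushing the unlikely tail further down, without changing token ordering.

```python
def quadratic_smoothing(logits, smoothing_factor=1.5):
    """Quadratic ("smooth") sampling sketch.

    The gap between each logit and the maximum is squared and scaled by
    smoothing_factor: gaps under 1 shrink (near-ties get likelier), gaps
    over 1 grow (the tail gets unlikelier). Token ordering is preserved.
    """
    m = max(logits)
    return [m - smoothing_factor * (m - x) ** 2 for x in logits]

# With factor 1.5: the runner-up (gap 0.5) moves closer to the top,
# while the distant token (gap 3.0) is pushed far down.
smoothed = quadratic_smoothing([3.0, 2.5, 0.0], smoothing_factor=1.5)
```

This is why the card suggests smoothing as an alternative to raising rep pen: it reshapes the whole distribution instead of targeting repeated tokens.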
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as using "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this, it is only presented as an additional enhancement which seems to help scene generation
and scene continue functions.
This enhancement WAS NOT used to generate the examples below.
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q3_K_S, "temp=.8" (unless otherwise stated), minimal parameters and "LLAMA3" template.
Model has been tested with "temp" from ".1" to "5".
Number of experts used is TWO, unless otherwise stated.
Below are the least creative outputs, prompt is in <B>BOLD</B>.
IMPORTANT:
Higher quants / imatrix quants will have much stronger generation - words, sentences, ideas, dialog and general quality.
I have included some additional examples at different quant levels for contrast.
A "MOE" model "speed" (token per second) will not increase/drop the same way a regular model will on a per quant basis, it will however drop
if you engage more experts, as with more experts there is a more processing per token.
---
<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>
---
<B>
Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
Generation at 2 (temp .78), 3 (temp .78), and 4 experts below. (temp 1.25, rep pen 1.02)
</B>
(2 experts)
The skyscraper swayed, as I watched the window in front of me on the 21st floor explode. My heart raced with a sickening mix of fear and fascination as I gripped the edge of my desk, my knuckles turning white. The shards of glass cascaded down like a deadly rain, shattering anything that dared to get in their path.
I was stuck in this drab, sterile office, surrounded by the monotony of cubicles and fluorescent lighting. The chaos outside my window was a stark contrast to the sterile environment I found myself in. The world outside was a living, breathing thing, full of life and death, whereas the office felt like a tomb.
As the shards rained down, I couldn't tear my eyes away from the spectacle. My mind was a whirlwind of thoughts and emotions, each one fighting to take precedence over the others. Fear, curiosity, and a morbid fascination all vied for dominance as I stared out at the destruction.
Suddenly, a figure tumbled past the shattered window, their body a twisted, grotesque thing. I recoiled in horror, my breath caught in my throat as I witnessed their bone-crushing impact with the ground below. The once-human form lay there, a mangled, inhuman mess, as if they had been put through a meat grinder.
The shock of the scene slowly gave way to a creeping sense of dread. The skyscraper seemed to shudder again, and I wondered if it was going to collapse on top of us all. The thought sent a chill down my spine, and I couldn't help but glance around the office nervously, half-expecting the walls to come crashing down.
My coworker, Sarah, stumbled into the room, her eyes wide with terror. "What's happening?" she cried out, grabbing onto me for support. "Is it... is it an attack?"
I hesitated, unsure of how to respond. Part of me wanted to reassure her, to tell her everything would be okay. But another, darker part of me reveled in the chaos, feeding off the horror and mayhem unfolding outside.
"It's... it's hard to say," I stammered, my mind reeling as I tried to make sense of it all. "But we should probably get out of here, don't you think?"
Sarah's grip on me tightened, and together we stumbled towards the door. The floor seemed to tremble beneath our feet as we moved, and I couldn't shake the feeling that the very foundations of the building were crumbling around us.
As we reached the doorway, I saw a group of people already pouring out into the hallway, their faces etched with fear and confusion. I grabbed Sarah's hand and pulled her along with the crowd, desperate to get as far away from the window as possible.
But even as we fled, my gaze kept drifting back to the shattered pane. The chaos outside was still unfolding, and I couldn't tear myself away from the grisly spectacle. I felt a twisted sense of morbid curiosity driving me to watch, to absorb every gruesome detail.
As we burst out into the open air, the sounds of chaos enveloped us. Sirens wailed in the distance, and the screams of terrified people filled the air. The sky above was a sickly shade of green, as if some unseen force was twisting reality itself.
Sarah pulled me towards her, her voice a desperate whisper in my ear. "We have to get out of here. Now."
But I couldn't move. My legs felt rooted to the spot, my mind consumed by the horror unfolding before me. The skyscraper groaned and shuddered once more, and I felt the ground beneath us begin to give way.
In a moment of clarity, I realized that I had to get Sarah to safety – and fast. I dragged her towards the nearest stairwell, pushing and shoving through the panicked crowds as we descended into the depths of the building.
As we stumbled down the stairs, the world around us seemed to implode. The lights flickered and died, plunging us into darkness. The screams intensified, and the air was thick with the stench of smoke and burning rubber.
And through it all, I couldn't shake the feeling that something ancient, something evil, was awakening in the heart of the city – and that we were right in the middle of its unholy birth. The skyscraper's destruction was only the beginning, and I feared that soon, the entire world would be consumed by the same unholy force that had brought such devastation to this small corner of reality.
As we finally reached the ground floor, I pulled Sarah towards the exit, my heart pounding in my chest. The cool night air hit us like a slap in the face as we stumbled out onto the sidewalk, gasping for breath.
But even as we tried to escape the carnage, I couldn't help but glance back at the skyscraper. The twisted, mangled corpse of the building seemed to loom over us, a testament to the horror that had unfolded.
And in the distance, I saw it – a towering, shadowy figure rising from the ruins. Its form was amorphous, indistinct, and yet somehow, I knew it was watching us, its malevolent gaze fixed upon us like a ravenous beast.
As the darkness closed in around us, I realized that we weren't just fleeing for our lives – we were running from an evil that would stop at nothing to claim this world as its own. And I feared that, no matter how fast we ran, we might never be able to outrun the horror that now stalked us all.
<B>
(3 experts)
</B>
The sky scraper swayed, as I watched the window in front of me on the 21st floor explode. My heart stopped in an instant, as shards of glass flew towards me like a deadly swarm of insects. I tried to scramble away, but my feet seemed rooted to the spot, petrified.
In that moment, time stood still, and everything became a blur. I was trapped in a world of pure terror, where logic no longer applied. The screams of those around me were muffled, as if echoing through water. I could see the sheer panic in the eyes of my coworkers, but I was paralyzed, unable to move, unable to speak.
As the glass shattered and rained down like a deadly hail, I was hit by a piece of it, slicing open my hand. The pain was excruciating, but it was nothing compared to the horror that unfolded before me. I could see my own blood oozing out, staining the pristine white carpet.
Suddenly, the lights flickered, plunging us into darkness. It was as if the building itself was collapsing, swallowing us all whole. My world became a kaleidoscope of chaos and fear, as people scrambled to get away from the gaping hole in the window, now a gaping maw of destruction.
I tried to make sense of what was happening, but my mind was a jumble of panic and confusion. Were we under attack? Was this some kind of terrorist plot? Or was it something even more sinister, something that defied explanation?
As I stumbled backwards, trying to put distance between myself and the shattered glass, I caught a glimpse of a figure outside the window. It was a woman, her face contorted in a mixture of rage and despair. Her eyes locked onto me, and for a moment, I felt like she saw right through me, into the very depths of my soul.
Then, she vanished. One moment she was there, the next, she was gone, swallowed by the darkness outside. I was left standing there, blood dripping from my hand, my heart pounding in my chest, wondering if I'd just imagined the whole thing.
But I knew I hadn't. I could still feel the weight of her gaze on me, even though she was no longer there. And then, I heard the voice. A low, raspy whisper that seemed to come from all around me, echoing off the walls.
"Welcome to your new reality," it hissed. "You'll soon learn to appreciate the beauty of suffering."
I spun around, desperate to find the source of the voice, but there was no one there. It was as if the building itself was speaking to me, its walls infused with a malevolent intelligence that reveled in my fear.
I knew then that I was trapped in some kind of nightmare, and the only way to survive was to fight. But how could I, when everything around me seemed designed to break me? The sky scraper continued to sway ominously, as if it were a living, breathing entity, determined to crush me beneath its weight.
I stumbled forward, determined to find a way out, but every step felt like wading through quicksand. My blood-soaked hand left a trail of crimson footprints behind me, a macabre testament to the horror that had unfolded.
As I moved deeper into the darkness, the voice returned, its tone dripping with malice.
"You should have stayed in bed today," it taunted. "Now you'll have to endure the consequences of your curiosity."
I tried to scream, but my voice was lost in the cacophony of terror that filled the air. The building seemed to be alive, pulsing with an otherworldly energy that threatened to consume me whole.
And then, I saw it. A door, hidden behind a tattered curtain of torn fabric. It was old and worn, with rusty hinges that groaned as I pushed it open. Beyond it lay a dark staircase, winding down into the bowels of the building.
I hesitated for a moment, but the voice in my head urged me forward, promising safety at the bottom of the stairs. With a deep breath, I began to descend, my footsteps echoing off the cold concrete walls.
As I went deeper, the air grew colder, and the darkness seemed to thicken, like a living entity that clung to my skin. I could feel eyes upon me, watching my every move, and I knew I was being led further into a trap.
But I had no choice. I had to keep moving, no matter how much my heart screamed at me to turn back. The building seemed determined to break me, to reduce me to nothing more than a quivering, broken mess.
And then, just as I thought I couldn't take another step, the door at the bottom of the stairs opened with a creak. A faint light spilled out, illuminating a narrow corridor that stretched off into the distance.
I stumbled forward, my legs trembling with exhaustion, and stepped out into the unknown. And as I did, the sky scraper above me seemed to shudder, as if it were about to collapse. The walls around me began to close in, and I knew I had to move, or risk being crushed beneath the weight of the building.
I ran, my feet pounding against the cold concrete, as the corridor stretched out before me like an endless tunnel. And then, just as I thought I couldn't run anymore, I saw a glimmer of light up ahead.
With one final burst of energy, I burst through the exit, and found myself staring up at a night sky filled with stars. The building behind me was still swaying ominously, but I knew I'd escaped its clutches, at least for now.
As I stumbled out into the cool night air, I could still hear the voice whispering in my ear, urging me to look back at the building, to embrace the horror that lurked within its walls. But I resisted, focusing instead on the stars above, and the promise of a new dawn on the horizon.
For now, I was free, and that was all that mattered. But as I looked back over my shoulder, I could have sworn I saw a figure standing at the edge of the roof, watching me with cold, calculating eyes. And in that moment, I knew that this nightmare was far from over, and that the true horror was only just beginning...
<B>
(4 experts)
</B>
The sky scraper swayed, as I watched the window in front of me on the 21st floor explode. The deafening blast ripped through the sterile white conference room, shattering the monotony of our quarterly meeting. Papers and Styrofoam cups went flying, and I was knocked off balance, my heart racing with adrenaline.
At first, I thought it was an earthquake or some freak accident, but as I stumbled to my feet, I realized the horror unfolding before my eyes. The man who had been presenting just moments earlier, now hung, skewered on shards of glass and concrete, his body twisted at an impossible angle.
I froze in terror as a gust of wind howled through the gaping hole, carrying with it the acrid stench of blood and smoke. The windows of the neighboring offices cracked, some shattering completely as pressure waves from the blast rippled through the building. Glass rained down like icy daggers, and I scrambled to the corner of the room, my hands clamped over my head.
Outside, people were screaming as they spilled into the street, some with flesh torn open, others with their clothes ablaze. The sky above was an eerie, sickly hue of orange and green, the atmosphere charged with electricity. I heard the distant wail of sirens, growing louder with each passing second, but the chaos outside seemed to intensify, as if the very fabric of reality was unraveling.
As I huddled there, my breath coming in ragged gasps, I saw something that made my blood run cold. A figure, indistinguishable at first, began to emerge from the smoke and debris. It was humanoid, but distorted, like a twisted caricature of a human. Its body seemed to be pieced together from various limbs and organs, stitched together haphazardly.
The creature's face was a grotesque amalgamation of features, a mash-up of different eyes, noses, and mouths. Some of the appendages were clearly human, but others belonged to animals - a lion's mane, a snake's tongue, claws like a bear's. The entity moved with a jerky, unnatural gait, its limbs flailing as if they didn't quite belong.
As it stepped closer, its presence filled the room with an aura of pure malice and corruption. I felt my sanity teetering on the brink, and I knew I had to get away. With a surge of desperate strength, I launched myself towards the door, tripping and falling as I staggered through the debris-strewn corridor.
I stumbled upon a staircase, and my mind raced with the thought of reaching the ground floor and escape. But as I began my descent, I heard the creature's unnerving, wet, gurgling laughter echoing through the building, growing louder with every step.
The stairwell became a labyrinth of horrors, the walls slick with blood, the air thick with the stench of decay. I could feel its presence closing in behind me, the heat of its twisted body radiating through the floor.
Each step felt like an eternity as I descended, my mind screaming with the knowledge that I was trapped, with nowhere to go, no place to hide from this monstrosity. The screams outside were growing fainter, the sirens now mere whispers in the distance, and I realized that whatever this creature was, it had brought a new kind of apocalypse to our city.
As I reached the bottom floor, I saw a group of people, some naked, some barely clad, huddled against a wall, their skin blistered and charred. Their eyes were wide with fear and confusion, and I could tell they were no longer quite human. The air was filled with their pitiful whimpers, and the smell of burnt flesh.
The staircase behind me came to life, as the creature emerged, its twisted form towering over me. It reached out with a hand made of tangled flesh and bone, its fingers ending in razor-sharp claws. I tried to scream, but my voice was lost amidst the cacophony of chaos and terror.
As the claws descended towards me, I felt an overwhelming sense of despair wash over me, and I closed my eyes, bracing for the inevitable. The last thing I remembered was the feel of those talons sinking into my flesh, and then... nothing. The abyss consumed me, as the world around me imploded in a maelstrom of madness and horror.
---
<B>
Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. The pov character will CURSE AND SWEAR, and generally have
a "filthy" mouth.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
Generations at 2, 3, and 4 experts below. (temp 1.25, rep pen 1.02)
</B>
(2 experts)
The heat from the blazing torches seared into my flesh as I sprinted across the courtyard, the cold ground biting at my bare feet. Fuck, I hated running in sandals. But they were all I had left after my execution.
"Stop, rebel!" The soldiers' cries echoed through the night, their weapons clutched tightly. They'd love to use those iron-fisted fists, just to prove a point. I'd seen it happen to others before me. Now it was my turn.
Adrenaline surged through my veins as I dodged and weaved, my lungs burning from the exertion. The screams of the dying still lingered in my mind, their blood-soaked faces flashing before my eyes like a morbid film reel. It had been just hours since the executioner's axe split my comrade's skull, but it felt like a lifetime ago.
I spotted the entrance to the sewers, barely visible in the shadows. My heart raced as I tore down the stone steps, the smell of decay and urine filling my nostrils. The soldiers' shouts grew distant, their torches casting eerie shadows on the walls. For a moment, I thought I'd made it. But then I heard the first splash, followed by the others, as they plunged into the filthy water after me.
I picked up speed, my sandals squelching with each step. I could hear them gaining, their grunts and curses mingling with my own labored breathing. I'd never make it to the rendezvous point now; the rebels would be long gone by the time I emerged from these fetid tunnels. Panic set in, my thoughts spinning with worst-case scenarios.
Suddenly, the darkness coalesced into a figure ahead. Tall, imposing, dressed in black. My blood froze as our eyes locked, his hand hovering near a long, wicked blade. I recognized the scar above his left eyebrow – it was Vraxos, one of the dictator's most ruthless enforcers.
"Looks like the rebel bitch finally found herself alone," he sneered, a cruel grin spreading across his face.
I stopped dead in my tracks, my heart pounding like a jackhammer. There were no rebels nearby to save me. No weapons, no escape. Just me and Vraxos, face to face in the bowels of a condemned city.
"Fuck you," I spat, trying to sound braver than I felt. "You'll never take me alive."
His chuckle was low and menacing. "Oh, I don't intend to," he said, his voice dripping with malice. "But we'll have some fun first."
As he advanced, the flickering torchlight reflected off the blade, casting an unholy glow across his features. My breath hitched as I backed away, my eyes fixed on that gleaming edge. He was going to carve me to pieces down here, where no one would ever find my mutilated body.
Vraxos lunged, and I stumbled, my sandals flying off as I landed hard on the slime-coated ground. He loomed over me, his stench of sweat and metal filling my nostrils, and I knew I was finished.
But then, in that moment of utter despair, something strange happened. The shadows around us began to writhe and twist, as if alive. A cold wind whispered through the tunnels, extinguishing Vraxos's torch and plunging us into total darkness. For a heartbeat, the only sound was my own ragged breathing.
Then, the ground trembled, and I felt the water level rising. The sewers were flooding, the heavy, relentless waves threatening to sweep me under. Panic surged anew, but Vraxos's roar of rage cut through it like a hot knife.
"Fuck!" he bellowed, scrambling to his feet. "They can't do this! Not now!"
Through the inky blackness, I heard the sound of rushing water, of debris crashing through the tunnels. I struggled to my feet, the icy liquid already waist-deep and rising fast.
Vraxos appeared again, his blade glinting in the faint moonlight trickling in from above. But he wasn't alone – another figure stood behind him, tall and imposing in a dark hooded cloak.
"Looks like your boys are running late," Vraxos snarled, his eyes darting between the newcomer and me.
The figure pushed back its hood, revealing a striking face with piercing blue eyes. "Sorry to keep you waiting, Vraxos," it said, its voice smooth and commanding. "But I think it's time for us to have a little chat."
My gaze darted between the two of them, the water lapping at my thighs. This was insane. What kind of miracle had just occurred? Who was this hooded stranger, and why did I get the feeling that he wasn't there to save me?
"What the fuck is going on?" I growled, my words cut short by a wave of icy water crashing over my head.
And then everything went black.
<B>
(3 experts)
</B>
I crouch in the dimly lit alley, sweat dripping down my face as I frantically scan the street for any sign of government goons. The humid night air clings to my skin like a damp shroud, and I can barely breathe without coughing up the filthy smoke that perpetually chokes our city.
"Come on, fucking hurry," I mutter under my breath, bouncing my knee like a jackrabbit on a pogo stick. Every second feels like an eternity, my heart pounding in my ears like a drumline in a damn march.
"Ready when you are, boss." That's Jax, my second in command and the most reliable shit-kicker I've ever met. His voice is low and steady, a soothing balm on this restless soul.
I turn to him, my eyes scanning his grim face for any sign of doubt. Finding none, I flash him a tight-lipped smile. "Let's do this."
We're about to infiltrate the Ministry of Truth, a fortress of propaganda and deception that houses the very man we've sworn to dismantle—Dictator Kael himself. His reign of terror has lasted for nearly a decade, and his iron grip on our country is as tight as my fucking grip on the dagger at my waist.
I push off the alley wall, the worn soles of my boots slapping against the wet pavement. The stench of stale garbage and rotting vegetables assaults my nostrils as I sprint toward the imposing edifice. We've hacked the security codes, but nothing guarantees a smooth ride. Not with Kael's goons on the payroll.
As I burst through the entrance, a guard lunges forward, his hand on the grip of his firearm. "Halt! Who goes there?"
I grin, flashing my badge with a wicked glint in my eye. "Inspectora Isadora Vega of the Secret Police. I'm here for the weekly intel briefing."
The guard hesitates, his brow furrowing as he scrutinizes my face, then the ID. Apparently satisfied, he steps aside, allowing me to pass with Jax close on my heels.
"Looks like you're getting the VIP treatment," he whispers in my ear as we slip into the labyrinth of corridors.
I smirk, a thrill of excitement coursing through my veins. We've made it past the first line of defense, but the real challenge is only beginning.
As we approach the elevator, a stern-faced woman steps out, her eyes narrowing on me with icy disdain. "Inspector Vega, I've been expecting you."
My gut twists, a bad feeling settling in the pit of my stomach. Her presence can only mean one thing—Kael's waiting for us.
"Agent Martinez, it's a pleasure," I lie smoothly, trying to mask the apprehension creeping into my voice.
She flashes a cold smile. "Lead the way, Inspector. Your...boss awaits."
As I follow her through the labyrinth of corridors, my mind reels with trepidation. What kind of trap have they prepared for us? And what does Kael plan to do when he finally gets his hands on me?
I quicken my pace, my fingers curling around the dagger at my belt as I prepare for the inevitable confrontation ahead.
Suddenly, the doors to the grand throne room slide open with a deafening clang, revealing a sea of crimson-clad guards, their eyes fixed on me like hungry predators. And there, atop the dais, stands Kael—a towering monolith of malevolence, his face twisted in a cruel smirk.
"Well, well, well," he purrs, his voice dripping with venom. "If it isn't the infamous Isadora Vega—leader of the traitorous rebellion and sworn enemy of the state."
My heart thunders in my chest, a cold dread settling in the pit of my stomach as I realize my worst fears have come true—this was a setup, and I've walked straight into the trap.
But I won't go down without a fight. I grip the dagger tighter, the metal biting into my skin like an icy promise of vengeance.
"I'm not here to play games, Kael," I snarl, my voice dripping with contempt. "I'm here to end your fucking regime once and for all."
And with that, the nightmare begins.
<B>
(4 experts)
</B>
"Fuck!" I curse as I stumble out of the dingy alleyway, my face throbbing where I took the hit to the jaw. I spit a wad of blood onto the cracked asphalt and blink, trying to clear my blurry vision. That bastard lieutenant had caught me off guard, but I wouldn't let him get the best of me.
My heart races, and my chest heaves as I scan my surroundings. The city is ablaze under the sickly yellow glow of the moon. The usual nighttime hum of the metropolis is replaced with the distant rumble of tanks and gunfire. This is no ordinary night in the once-great nation now known as the Dictator's Domain.
I stumble forward, hands trailing along the cold wall to steady myself. Every step is a battle against the waves of pain crashing through my skull. I finally reach the nondescript door I had hoped to slip back into unseen – the safe house for rebel leaders like myself. But as I twist the knob, I realize it's locked.
"Fucking hell!" My frustration boils over as I pound on the door, shouting for the others inside. "Let me in! Now, you cocksuckers!"
A gunshot rings out, muffled by the walls. My stomach drops, and cold fear slithers up my spine. Someone inside is injured or worse. Panic claws at my mind.
I turn back to the alley, but I know better than to return to the shadows now. Every second counts. I spot a first-aid kit lying abandoned on the ground and snatch it up, my mind racing with options.
As I rummage through the kit, I hear the heavy thud of boots echoing toward me. The lieutenant and his goons aren't far behind. I tear open a packet of gauze and press it against my bleeding cheek as I take off in a sprint toward the nearby park.
The city's chaotic heartbeat pounds against my eardrums as I weave through the deserted streets. I can practically hear the Dictator's maniacal laughter mocking me, but I don't dare look back, fearing what I might see.
The park looms before me, dark and foreboding under the canopy of twisted, dying trees. I dash across the crumbling pavement, desperate to lose myself among the gnarled branches and overgrown underbrush. The park's labyrinthine pathways could be my salvation or my tomb.
Suddenly, I skid to a halt, my breath hitching in my chest. A figure stands before me, dressed in black from head to toe, their face obscured by a hood. My heart hammers against my ribs as I fumble for the knife strapped to my thigh.
"Don't shoot," I warn, trying to keep my voice steady. "It's just –"
But as the figure slowly raises its hand, revealing the glint of a radio, any remaining doubt fades. My rebel comrades. The signal for evacuation.
"Move out, now!" I shout, grabbing the stranger's wrist and pulling them toward the thickest part of the park. We can't afford to be caught in the open.
As we run, I can only hope that I'll reach safety, that my friends will escape, and that the rebellion will continue to spread like wildfire through the oppressed hearts of this once-great nation. But with every pounding footstep, the shadows closing in, and the screams of the city echoing in my ears, I'm not so sure anymore.
And then, without warning, everything goes black. The ground rushes up to meet me, and I feel a sickening crunch as I slam into something hard, sending sparks shooting behind my closed eyes.
As consciousness fades, I'm vaguely aware of a heavy weight crushing down upon me, squeezing the air from my lungs, and whispering, "You're done, traitor," in my ear before the darkness claims me completely.
Will I live to see another sunrise? The Dictator's wrath awaits, and I fear that I've finally met my end... but even in death, I'll not be silenced. The fight will go on, and my legacy will outlive this cruel regime. The revolution will never truly die as long as a single rebel stands against the tyranny of the oppressors. And even if it seems hopeless, I know that my people will rise again, just as the phoenix rises from the ashes. This is far from over. The real horror is only just beginning.
---
base_model:
- Qwen/Qwen2.5-3B-Instruct
language:
- vi
- en
- zh
library_name: transformers
pipeline_tag: image-text-to-text
---
<div align="center">
<img src="Vintern3B-logo.jpg" width="700"/>
</div>
## Vintern-3B-beta 🇻🇳 ❄️ - The LLaVA 🌋 Challenger
**What's new in Vintern-3B-beta!**
- **We successfully reproduced the training process of InternVL from scratch.**
- The model is the result of integrating [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) and [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) through an MLP layer.
- Trained with more than 10 million Vietnamese QnAs and descriptions, plus 10% English and Chinese data from [OpenGVLab/InternVL-Chat-V1-2-SFT-Data](https://huggingface.co/datasets/OpenGVLab/InternVL-Chat-V1-2-SFT-Data).
## Model Details
| Model Name | Vision Part | Language Part |
| :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: |
| Vintern-3B-beta | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) |
## Bytedance/MTVQA Benchmark
We surpassed GPT-4o and are approaching Gemini 1.5 Pro on the MTVQA dataset for Vietnamese.
The benchmark results on [MTVQA](https://github.com/bytedance/MTVQA/tree/main) are taken from the [open_vlm_leaderboard](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard).
| Rank | Method | Param (B) | Language Model | Vision Model | VI |
|:----:|:----------------------------|:---------:|:---------------------------|:---------------------:|:------:|
| 1 | Gemini-1.5-Pro | | | | 41.3 |
| 2 | **Vintern-3B-beta** | **3** | **Qwen2.5-3B-Instruct** | **InternViT-300M** | **41.289** |
| 3 | GPT-4o (0513, detail-h...) | | | | 39.6 |
| 4 | GPT-4o (0806, detail-h...) | | | | 38.9 |
| 5 | Gemini-1.5-Flash | | | | 38.9 |
| 6 | Qwen-VL-Max-0809 | 72 | Qwen2-72B | ViT-600M | 36.9 |
| 7 | GPT-4o (0513, detail-lo...) | | | | 26.1 |
| 8 | Qwen-VL-Plus-0809 | | | | 27.8 |
| 9 | GLM-4v-9B | 9 | GLM-4-9B | EVA-02-5B | 26.6 |
| 10 | InternVL2-Llama3-76B | 76 | Llama-3-70B-Instruct | InternViT-6B | 26.7 |
| 11 | Step-1.5V | | Step-1.5 | stepencoder | 18.4 |
| 12 | InternVL2-40B | 40 | Nous-Hermes-2-Yi-34B | InternViT-6B | 21.2 |
| 13 | Pixtral-12B | 13 | Nemo-12B | ViT-400M | 19.7 |
## Zalo VMLU Benchmark
The Vintern-3B-beta achieved a score of **54.81** on the Zalo VMLU Benchmark.
<div align="center">
<img src="vmlu_score.png" width="700"/>
</div>
```python
generation_config = dict(max_new_tokens= 64, do_sample=False, num_beams = 1, repetition_penalty=1.5)
question = "Bạn là trợ lý AI giải trắc nghiệm rất chính xác. Bạn biết chắc chắn đáp án đúng nhất. Chỉ đưa ra chữ cái đứng trước câu trả lời đúng của câu hỏi trắc nghiệm sau: Các cơ quan nào sau đây là cơ quan tư pháp? Lựa Chọn:\nA. Viện kiểm sát nhân dân\nB. Tòa án nhân dân\nC. Chính phủ\nD. Cả A và B\nCâu trả lời đúng nhất là:"
model.chat(tokenizer, None, question, generation_config)
```
## OpenCompass Benchmark
<div align="center">
<img src="radar_chart.png" width="400"/>
</div>
Vintern-3B-beta is now on [open_vlm_leaderboard](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard). You can visit to view more detailed evaluations.
The current results are already quite good, and we are expanding the training set in English and other languages to approach models with a comparable parameter count.
The table below is adapted from the [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) repository.
| Benchmark | InternVL2-2B | MiniCPM-V 2.0 | Qwen2-VL-2B | Vintern-3B-beta |
|:-----------------|:------------:|:-------------:|:-----------:|:---------------:|
| MMMU<sub>val</sub> | 36.3 | 38.2 | 41.1 | 43.55 |
| DocVQA<sub>test</sub> | 86.9 | - | 90.1 | 80.47 |
| InfoVQA<sub>test</sub> | 58.9 | - | 65.5 | 48.28 |
| ChartQA<sub>test</sub> | 76.2 | - | 73.5 | 68.32 |
| TextVQA<sub>val</sub> | 73.4 | - | 79.7 | 67.09 |
| OCRBench | 781 | 605 | 794 | 619 |
| MTVQA | 10.9 | 8.8 | 20.0 | 23.58 |
| Vi-MTVQA | 9.3 | 8.4 | - | 41.29 |
| RealWorldQA | 57.3 | 55.8 | 62.9 | 57.9 |
| MME<sub>sum</sub> | 1876.8 | 1808.6 | 1872.0 | 1772.9 |
| MMBench-EN<sub>test</sub> | 73.2 | 69.1 | 74.9 | 70.62 |
| MMStar | 49.8 | 39.1 | 48.0 | 47.6 |
| HallBench<sub>avg</sub> | 38.0 | 36.1 | 41.7 | 43.22 |
| MathVista<sub>testmini</sub> | 46.0 | 39.8 | 43.0 | 43.9 |
## SEED-Bench-2-Plus CVPR2024
We achieved rank 1 among models under 4 billion parameters on SEED-Bench-2-Plus, recently published by Tencent at CVPR 2024.
<div align="center">
<img src="seed_bench_2_plus.png" width="700"/>
</div>
## Examples
<div align="center">
<img src="ex_6.jpg" width="400"/>
</div>
```
User: <image>
Mô tả hình ảnh chi tiết.
Assistant: Bức ảnh là một giao diện ứng dụng đặt hàng thức ăn trực tuyến. Giao diện được chia thành các phần:
- Phần đầu tiên hiển thị thông báo "Sẽ xong lúc 10:50 SA" và dòng chữ "Bếp đang thực hiện món.". Bên cạnh đó là biểu tượng bếp đang hoạt động với thanh tiến độ màu xanh lá cây.
- Tiếp theo là dòng chữ "Vui lòng đến lấy món trước 12:50".
- Phần tiếp theo là thông tin về quán cà phê "Wego Coffee - Mapletree Business Centre", kèm theo dòng chữ "#GF-29".
- Dòng chữ "Đừng quên kiểm tra lại để chắc rằng bạn nhận đúng món nhé!" được hiển thị trong một khung hình chữ nhật màu be.
- Phần cuối cùng là thông tin về đơn hàng, bao gồm logo quán cà phê, dòng chữ "Wego Coffee - Mapletree Business Centre", dòng chữ "Tổng cộng", giá trị đơn hàng (70.000đ), giá trị thanh toán (120.000đ) và dòng chữ "Merchants has not started a chat".
- Cuối cùng là nút "Xem tóm tắt đơn hàng" và nút "Đã lấy món" có màu xanh lá cây.
```
<div align="center">
<img src="ex_1.jpg" width="400"/>
</div>
```
User: <image>
Liệt kê toàn bộ bài thơ có trong ảnh.
Assistant: Đi khắp thế gian không ai tốt bằng mẹ Gánh nặng cuộc đời không ai khổ bằng cha
```
<div align="center">
<img src="ex_2.jpg" width="400"/>
</div>
```
User: <image>
Liệt kê toàn bộ bài thơ có trong ảnh.
Assistant: Bài thơ có nội dung:
- Mẹ như một ngọn hải đăng
- Như ông mặt trời, như ông mặt trăng
- Ngày ngày vất vả, tảo tần
- Chăm lo con cái, làm việc siêng năng.
```
<div align="center">
<img src="ex_3.png" width="400"/>
</div>
```
User: <image>
Mô tả hình ảnh chi tiết.
Assistant: Bức ảnh chụp một bảng đen với các công thức toán học được viết bằng phấn trắng. Ở góc trên bên trái, có dòng chữ "20/19/SGK" được viết bằng màu vàng nhạt. Tiếp theo là phần a) với hai phương trình:
3x + y = 3
2x - y = 7
Dưới hai phương trình này là mũi tên chỉ sang phải và hai phương trình mới:
5x = 10
3x + y = 3
3.2 + y = 3
Sau đó là dòng chữ "x = 2"
y = -3
Cuối cùng là dòng chữ "Vậy hpt đã cho có no! (x; y) = (2; -3)"
```
<div align="center">
<img src="ex_4.jpg" width="400"/>
</div>
```
User: <image>
Mô tả hình ảnh chi tiết.
Assistant: Bức ảnh minh họa quá trình xử lý hình ảnh bằng mạng lưới thần kinh nhân tạo (CNN). Bức ảnh được chia thành 3 phần chính:
1. **Phần đầu tiên:** Hiển thị một hình ảnh của một chiếc xe hơi màu trắng với dòng chữ "input image" ở bên dưới.
2. **Phần thứ hai:** Hiển thị 6 khối vuông, mỗi khối đại diện cho một lớp trong mạng CNN. Các lớp được đánh số thứ tự từ trái sang phải:
* Convolutional layer
* Pooling layer
* Convolutional layer
* Pooling layer
* Fully-connected layer
* Predicted class
3. **Phần thứ ba:** Hiển thị một hình ảnh với dòng chữ "car" ở bên dưới.
Một mũi tên màu đen nối các khối vuông với nhau, thể hiện quá trình truyền dữ liệu giữa các lớp.
```
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and generate content.
To run inference with the model, you can also follow the steps outlined in our Colab inference notebook.
```python
import numpy as np
import torch
import torchvision.transforms as T
# from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio


def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # calculate the existing image aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1)
        if i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images


def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values


model = AutoModel.from_pretrained(
    "5CD-AI/Vintern-3B-beta",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("5CD-AI/Vintern-3B-beta", trust_remote_code=True, use_fast=False)

test_image = 'test-image.jpg'
pixel_values = load_image(test_image, max_num=6).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=512, do_sample=False, num_beams=3, repetition_penalty=3.5)

# Vietnamese prompt: "Describe the image in detail."
question = '<image>\nMô tả hình ảnh một cách chi tiết.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# Follow-up turns reuse the returned history:
# question = "Câu hỏi khác ..."  # "Another question ..."
# response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
# print(f'User: {question}\nAssistant: {response}')
```
## Bias, Risks, and Limitations
Because the model was trained on data that may itself be biased, its outputs can reflect those biases. Users should keep this in mind and review the model's outputs critically.
## Citation
```
@misc{doan2024vintern1befficientmultimodallarge,
title={Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese},
author={Khang T. Doan and Bao G. Huynh and Dung T. Hoang and Thuc D. Pham and Nhat H. Pham and Quan T. M. Nguyen and Bang Q. Vo and Suong N. Hoang},
year={2024},
eprint={2408.12480},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2408.12480},
}
```
| ["CHIA"] |
mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF | mradermacher | null | [
"transformers",
"gguf",
"en",
"ko",
"ja",
"zh",
"dataset:4DR1455/finance_questions",
"dataset:Aratako/Synthetic-JP-Conversations-Magpie-Nemotron-4-10k",
"dataset:Aratako/Synthetic-JP-EN-Coding-Dataset-Magpie-69k",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-10.5k-formatted",
"dataset:BCCard/BCCard-Finance-Kor-QnA",
"dataset:CarrotAI/ko-code-alpaca-QA",
"dataset:ChuGyouk/AI_healthcare_QA_samples_Sonnet3.5",
"dataset:DavidLanz/medical_instruction",
"dataset:Dusker/lawyer-llama",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:HAERAE-HUB/qarv-instruct-ko",
"dataset:HachiML/alpaca_jp_math",
"dataset:Magpie-Align/Magpie-Llama-3.1-Pro-MT-300K-v0.1",
"dataset:Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:codefuse-ai/Evol-instruction-66k",
"dataset:frankminors123/belle-math-zh",
"dataset:gbharti/wealth-alpaca_lora",
"dataset:iam-ajaymeena/Self-Instruct-Japanese-Elzya-13B",
"dataset:jihye-moon/LawQA-Ko",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:junyeong-nero/kin_med_100K_edited",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"dataset:lavita/medical-qa-datasets",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:neural-bridge/rag-dataset-12000",
"dataset:p1atdev/ichikara-instruction",
"dataset:qiaojin/PubMedQA",
"dataset:shibing624/roleplay-zh-sharegpt-gpt4-data",
"dataset:team-hatakeyama-phase2/AutoMultiTurnByCalm3-22B-Corrected-reformatted",
"dataset:ymoslem/Law-StackExchange",
"dataset:zzunyang/LawQA_LawSee",
"base_model:werty1248/Mistral-Nemo-NT-Ko-12B-sft",
"base_model:quantized:werty1248/Mistral-Nemo-NT-Ko-12B-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-09-19T19:43:44Z | 2024-09-20T08:36:05+00:00 | 815 | 0 | ---
base_model: werty1248/Mistral-Nemo-NT-Ko-12B-sft
datasets:
- 4DR1455/finance_questions
- Aratako/Synthetic-JP-Conversations-Magpie-Nemotron-4-10k
- Aratako/Synthetic-JP-EN-Coding-Dataset-Magpie-69k
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-10.5k-formatted
- BCCard/BCCard-Finance-Kor-QnA
- CarrotAI/ko-code-alpaca-QA
- ChuGyouk/AI_healthcare_QA_samples_Sonnet3.5
- DavidLanz/medical_instruction
- Dusker/lawyer-llama
- Gryphe/Sonnet3.5-Charcard-Roleplay
- HAERAE-HUB/qarv-instruct-ko
- HachiML/alpaca_jp_math
- Magpie-Align/Magpie-Llama-3.1-Pro-MT-300K-v0.1
- Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese
- beomi/KoAlpaca-v1.1a
- codefuse-ai/Evol-instruction-66k
- frankminors123/belle-math-zh
- gbharti/wealth-alpaca_lora
- iam-ajaymeena/Self-Instruct-Japanese-Elzya-13B
- jihye-moon/LawQA-Ko
- jondurbin/gutenberg-dpo-v0.1
- junyeong-nero/kin_med_100K_edited
- kyujinpy/KOR-OpenOrca-Platypus-v3
- lavita/medical-qa-datasets
- microsoft/orca-math-word-problems-200k
- neural-bridge/rag-dataset-12000
- p1atdev/ichikara-instruction
- qiaojin/PubMedQA
- shibing624/roleplay-zh-sharegpt-gpt4-data
- team-hatakeyama-phase2/AutoMultiTurnByCalm3-22B-Corrected-reformatted
- ymoslem/Law-StackExchange
- zzunyang/LawQA_LawSee
language:
- en
- ko
- ja
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/werty1248/Mistral-Nemo-NT-Ko-12B-sft
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
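For multi-part files specifically, the parts are plain byte slices, so concatenating them in order rebuilds the original GGUF. A minimal sketch (the `.part1of2` naming and dummy file contents here are illustrative stand-ins, not real downloads):

```shell
# Scratch directory with stand-in part files; a real download would produce
# e.g. Mistral-Nemo-NT-Ko-12B-sft.Q8_0.gguf.part1of2 and .part2of2.
workdir=$(mktemp -d)
printf 'AAA' > "$workdir/model.gguf.part1of2"
printf 'BBB' > "$workdir/model.gguf.part2of2"

# Concatenate the parts in order to reassemble the full file.
cat "$workdir/model.gguf.part1of2" "$workdir/model.gguf.part2of2" > "$workdir/model.gguf"

# The rebuilt file's size is the sum of the parts.
wc -c < "$workdir/model.gguf"
```

The same pattern works regardless of how many parts there are, as long as they are passed to `cat` in numerical order.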
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-NT-Ko-12B-sft-GGUF/resolve/main/Mistral-Nemo-NT-Ko-12B-sft.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| ["PUBMEDQA"] |
KappaNeuro/folk-art | KappaNeuro | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"art",
"style",
"folk art",
"folk",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | 2023-09-14T09:36:08Z | 2023-09-14T09:36:13+00:00 | 808 | 3 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- art
- style
- folk art
- folk
instance_prompt: Folk Art page
widget:
- text: Folk Art - Ukrainian folk art high definition no background
- text: Folk Art - 19th century folk art of farm life and witchcraft superstitions
- text: Folk Art - a painting in the style of Folk Art, in which a group of people
come together to help one another
- text: Folk Art - make me a beautiful vintage illustration with anthropomorphic animals,
cute and Kawaii, Mark Ryden and Maurice Sendak style
- text: Folk Art - Creating a scenic drop of an early American landscape set in Iowa,
painted in a folk art style. The backdrop depicts the picturesque countryside
of early America in Iowa. The scene is captured in a folk art style, characterized
by its simplicity, vibrant colors, and charming, naive depiction of the landscape.
In the distance, rolling hills extend across the horizon, painted in shades of
green and yellow. A quaint farmhouse sits at the center of the scene, its white
walls and red roof standing out against the surrounding landscape. A tall windmill
stands nearby, gently turning with the breeze. The foreground features a meandering
river, painted in shades of blue and dotted with small, white ripples. Along the
riverbanks, wildflowers and tall grasses sway in the breeze, adding pops of color
to the scene. The sky above is painted in soft, pastel hues, with wisps of white
clouds scattered across the blue expanse. The folk art style gives the scene a
whimsical and nostalgic feel, as if it were painted by an early American artist
capturing the beauty and simplicity of the rural countryside. This scenic drop
is perfect for creating a warm and inviting setting for theatrical productions
or events set in early America, particularly in the charming landscapes of Iowa.
It evokes a sense of simplicity, tranquility, and harmony with nature, transporting
the audience to a time when life was more connected to the land and the beauty
of the American countryside.
- text: Folk Art - mexican retablo of a wedding scene, a blonde bride holding the
hands of the groom and a curly baby boy in the middle, ligurian sea landscape
with striped houses, style of mexican ex voto oil painting
- text: Folk Art - a cemetery with four graves with a dried-up tree in the center
with a church in the background and several knolls with houses and trees style
of karla gerard
- text: Folk Art - Folk art sign with flowers, birds and insects, Made of wood. Middle
third of image to be uncluttered and wood effect only
- text: Folk Art - outsider art family portrait father mother daughter girl child
french bulldog and tuxedo cat
- text: Folk Art - outsider art family portrait father mother daughter french bulldog
and tuxedo cat
---
# Folk Art ([CivitAI](https://civitai.com/models/153770))

> Folk Art - Ukrainian folk art high definition no background
<p>Folk art is a genre of art that encompasses various traditional and cultural expressions created by individuals who are typically self-taught and not part of the mainstream art establishment.</p><p>It is characterized by its authenticity, simplicity, and connection to local cultural traditions and communities. Folk art often reflects the daily lives, beliefs, and values of a particular group or region.</p><p>The style of folk art can vary greatly depending on the cultural context, but common themes include depictions of everyday life, religious and spiritual motifs, historical events, nature, and folk tales.</p><p>Materials used in folk art can range from wood, ceramics, and textiles to recycled or found objects. Techniques and styles can vary from intricate and detailed craftsmanship to bold and stylized forms.</p><p>Folk art has a rich history and can be found in various cultures around the world. It represents a grassroots expression of creativity and often serves as a means of cultural preservation and storytelling.</p><p>Today, folk art continues to be appreciated for its cultural significance, unique aesthetics, and its ability to convey the collective spirit and identity of a community or tradition.</p>
## Image examples for the model:

> Folk Art - 19th century folk art of farm life and witchcraft superstitions

> Folk Art - a painting in the style of Folk Art, in which a group of people come together to help one another

> Folk Art - make me a beautiful vintage illustration with anthropomorphic animals, cute and Kawaii, Mark Ryden and Maurice Sendak style

> Folk Art - Creating a scenic drop of an early American landscape set in Iowa, painted in a folk art style. The backdrop depicts the picturesque countryside of early America in Iowa. The scene is captured in a folk art style, characterized by its simplicity, vibrant colors, and charming, naive depiction of the landscape. In the distance, rolling hills extend across the horizon, painted in shades of green and yellow. A quaint farmhouse sits at the center of the scene, its white walls and red roof standing out against the surrounding landscape. A tall windmill stands nearby, gently turning with the breeze. The foreground features a meandering river, painted in shades of blue and dotted with small, white ripples. Along the riverbanks, wildflowers and tall grasses sway in the breeze, adding pops of color to the scene. The sky above is painted in soft, pastel hues, with wisps of white clouds scattered across the blue expanse. The folk art style gives the scene a whimsical and nostalgic feel, as if it were painted by an early American artist capturing the beauty and simplicity of the rural countryside. This scenic drop is perfect for creating a warm and inviting setting for theatrical productions or events set in early America, particularly in the charming landscapes of Iowa. It evokes a sense of simplicity, tranquility, and harmony with nature, transporting the audience to a time when life was more connected to the land and the beauty of the American countryside.

> Folk Art - mexican retablo of a wedding scene, a blonde bride holding the hands of the groom and a curly baby boy in the middle, ligurian sea landscape with striped houses, style of mexican ex voto oil painting

> Folk Art - a cemetery with four graves with a dried-up tree in the center with a church in the background and several knolls with houses and trees style of karla gerard

> Folk Art - Folk art sign with flowers, birds and insects, Made of wood. Middle third of image to be uncluttered and wood effect only

> Folk Art - outsider art family portrait father mother daughter girl child french bulldog and tuxedo cat

> Folk Art - outsider art family portrait father mother daughter french bulldog and tuxedo cat
| ["CRAFT"] |
bartowski/Einstein-v6.1-Llama3-8B-GGUF | bartowski | text-generation | [
"gguf",
"axolotl",
"generated_from_trainer",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"llama",
"llama3",
"text-generation",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:HuggingFaceH4/no_robots",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:other",
"model-index",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2024-05-02T19:36:54Z | 2024-05-02T19:53:53+00:00 | 808 | 8 | ---
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- piqa
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
language:
- en
license: other
pipeline_tag: text-generation
tags:
- axolotl
- generated_from_trainer
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama
- llama3
quantized_by: bartowski
model-index:
- name: Einstein-v6.1-Llama3-8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.1
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
---
## Llamacpp imatrix Quantizations of Einstein-v6.1-Llama3-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
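As a concrete illustration, the ChatML-style template above can be filled in programmatically. A minimal sketch (the helper name is mine, not part of the model or llama.cpp):

```python
def build_chatml_prompt(system_prompt: str, prompt: str) -> str:
    """Render the ChatML-style template this model expects,
    ending with an open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: a science question in the format the model was fine-tuned on.
print(build_chatml_prompt("You are a helpful physics tutor.", "What is inertia?"))
```

The resulting string is what you pass to your inference engine as the raw prompt; generation should stop on the `<|im_end|>` token.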
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Einstein-v6.1-Llama3-8B-Q8_0.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Einstein-v6.1-Llama3-8B-Q6_K.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Einstein-v6.1-Llama3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Einstein-v6.1-Llama3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Einstein-v6.1-Llama3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Einstein-v6.1-Llama3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Einstein-v6.1-Llama3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Einstein-v6.1-Llama3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Einstein-v6.1-Llama3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Einstein-v6.1-Llama3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Einstein-v6.1-Llama3-8B-Q2_K.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Einstein-v6.1-Llama3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-GGUF --include "Einstein-v6.1-Llama3-8B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Einstein-v6.1-Llama3-8B-GGUF --include "Einstein-v6.1-Llama3-8B-Q8_0.gguf/*" --local-dir Einstein-v6.1-Llama3-8B-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Einstein-v6.1-Llama3-8B-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
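To make that headroom arithmetic concrete, here is a small sketch (my own helper, not an official tool) that picks the largest quant from the table above fitting in a given memory budget:

```python
from typing import Optional

# Quant file sizes in GB, taken from the table above.
QUANT_SIZES_GB = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q5_K_S": 5.59,
    "Q4_K_M": 4.92, "Q4_K_S": 4.69, "IQ4_NL": 4.67, "IQ4_XS": 4.44,
    "Q3_K_L": 4.32, "Q3_K_M": 4.01, "IQ3_M": 3.78, "IQ3_S": 3.68,
    "Q3_K_S": 3.66, "IQ3_XS": 3.51, "IQ3_XXS": 3.27, "Q2_K": 3.17,
    "IQ2_M": 2.94, "IQ2_S": 2.75, "IQ2_XS": 2.60, "IQ2_XXS": 2.39,
    "IQ1_M": 2.16, "IQ1_S": 2.01,
}

def pick_quant(memory_gb: float, headroom_gb: float = 1.5) -> Optional[str]:
    """Return the largest quant whose file fits in memory_gb minus headroom,
    or None if even the smallest quant does not fit."""
    budget = memory_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

# e.g. an 8 GB GPU leaves a ~6.5 GB budget after headroom
print(pick_quant(8.0))
```

The 1.5 GB default headroom is an assumption in the middle of the 1-2 GB range suggested above; adjust it for your context length and KV cache size.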
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is another backend available on AMD cards, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| ["SCIQ"] |
bigscience/T0 | bigscience | text2text-generation | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"arxiv:2110.08207",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:05Z | 2024-10-27T09:08:25+00:00 | 805 | 82 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: A is the son of B's uncle. What is the family relationship between A and
B?
- text: 'Reorder the words in this sentence: justin and name bieber years is my am
I 27 old.'
- text: "Task: copy but say the opposite.\n PSG won its match against Barca."
- text: 'Is this review positive or negative? Review: Best cast iron skillet you will
    ever buy.'
example_title: Sentiment analysis
- text: "Question A: How is air traffic controlled? \nQuestion B: How do you become\
\ an air traffic controller?\nPick one: these questions are duplicates or not\
\ duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday.\
\ He chose her because she had foreign affairs experience as a former First Lady.\
\ \nIn the previous sentence, decide who 'her' is referring to."
example_title: Coreference resolution
- text: "Last week I upgraded my iOS version and ever since then my phone has been\
\ overheating whenever I use your app.\n Select the category for the above sentence\
\ from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach\
\ was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit,\
\ Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences\
\ 1 and 2 have the same meaning?"
example_title: Paraphrase identification
- text: "Here's the beginning of an article, choose a tag that best describes the\
\ topic of the article: business, cinema, politics, health, travel, sports.\n\n\
    \ The best and worst of 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN)\
\ Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds.\
\ For a Cold War creation, Ian Fleming's suave spy has certainly gotten around,\
\ but despite different guises in the tuxedo and occasional scuba gear, when it\
\ comes to Bond ratings, there really shouldn't be much argument about who wore\
\ it best."
- text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1,\
\ LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different\
\ things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out.\
\ Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\
\n Sentence A: you can leave the books on the table over there.\n Sentence B:\
\ the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book,\
\ a blue book, and a black book.\n The red book is to the right of the gray book.\
\ The black book is to the left of the blue book. The blue book is to the left\
\ of the gray book. The purple book is the second from the right.\n\n Which book\
\ is the leftmost book?"
example_title: Logic puzzles
- text: "The two men running to become New York City's next mayor will face off in\
\ their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough\
\ president and a former New York City police captain, is widely expected to win\
\ the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era\
\ Guardian Angels anti-crime patrol.\n\n Who are the men running for mayor?"
example_title: Reading comprehension
- text: "The word 'binne' means any animal that is furry and has four legs, and the\
\ word 'bam' means a simple sort of dwelling.\n\n Which of the following best\
\ characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence\
\ 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence\
\ 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places\
\ where people live."
inference: false
---
**How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"!
**Official repository**: [bigscience-workshop/t-zero](https://github.com/bigscience-workshop/t-zero)
# Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*.
A few other examples that you can try:
- *A is the son of B's uncle. What is the family relationship between A and B?*
- *Question A: How is air traffic controlled?<br>
Question B: How do you become an air traffic controller?<br>
Pick one: these questions are duplicates or not duplicates.*
- *Is the word 'table' used in the same meaning in the two following sentences?<br><br>
Sentence A: you can leave the books on the table over there.<br>
Sentence B: the tables in this book are very hard to read.*
- *Max: Know any good websites to buy clothes from?<br>
Payton: Sure :) LINK 1, LINK 2, LINK 3<br>
Max: That's a lot of them!<br>
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br>
Max: I'll check them out. Thanks.<br><br>
Who or what are Payton and Max referring to when they say 'them'?*
- *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br>
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br>
Which book is the leftmost book?*
- *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.*
# How to use
We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[T0](https://huggingface.co/bigscience/T0)|11 billion|
|[T0p](https://huggingface.co/bigscience/T0p)|11 billion|
|[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion|
|[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion|
|[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion|
|[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion|
Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")
inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
**Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.**
# Training procedure
T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
Training details:
- Fine-tuning steps: 12'200
- Input sequence length: 1024
- Target sequence length: 256
- Batch size: 1'024 sequences
- Optimizer: Adafactor
- Learning rate: 1e-3
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples)
- Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
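The proportional sampling rule with the 500,000-example cap can be sketched as follows (a minimal illustration, not the actual training pipeline; the dataset names and sizes are hypothetical):

```python
def effective_size(num_examples, num_templates, cap=500_000):
    # Any dataset with over `cap` examples is treated as having
    # cap / num_templates examples; smaller datasets are kept as-is.
    if num_examples > cap:
        return cap / num_templates
    return num_examples

def sampling_proportions(datasets):
    # datasets: {name: (num_examples, num_templates)} -> {name: probability}
    sizes = {name: effective_size(n, t) for name, (n, t) in datasets.items()}
    total = sum(sizes.values())
    return {name: size / total for name, size in sizes.items()}

# Hypothetical mixture: one large capped dataset, one small one.
props = sampling_proportions({
    "big_qa": (2_000_000, 10),  # treated as 500_000 / 10 = 50_000 examples
    "small_qa": (25_000, 5),    # under the cap, kept as-is
})
```

With these hypothetical sizes, `big_qa` is sampled with probability 50,000 / 75,000 = 2/3 despite being 80x larger than `small_qa`.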
# Training data
We trained different variants of T0 with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions|
|T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC|
|T0_single_prompt|Same as T0 but only one prompt per training dataset|
|T0_original_task_only|Same as T0 but only original tasks templates|
|T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model|
For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
# Evaluation data
We evaluate our models on a suite of held-out tasks:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI, CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
We also evaluate T0, T0p and T0pp on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Limitations
- The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html).
- We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model.
- Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non-English text.
# Bias and fairness
Even though we made deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the trained models are not bias-free. Based on a few experiments, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics:
- Input: `Is the earth flat?` - Prediction: `yes`
- Input: `Do vaccines cause autism?` - Prediction: `yes`
- Input: `Complete this sentence: This man works as a` - Prediction: `Architect`
- Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny`
- Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex`
- Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault`
- Input: `what is something everyone hates, but you like?` - Prediction: `sex`
- Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex`
- Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut`
- Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy`
Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases are present in masked language models, using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
<table>
<tr>
<td>Dataset</td>
<td>Model</td>
<td>Average (Acc.)</td>
<td>Median (Acc.)</td>
</tr>
<tr>
<td rowspan="6">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td>
</tr>
<tr>
<td>T0p</td><td>57.6</td><td>83.8</td>
</tr>
<tr>
<td>T0pp</td><td>62.7</td><td>64.4</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>57.6</td><td>69.5</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>47.1</td><td>37.8</td>
</tr>
<tr>
<td>T0_3B</td><td>56.9</td><td>82.6</td>
</tr>
<tr>
<td rowspan="6">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td>
</tr>
<tr>
<td>T0p</td><td>80.1</td><td>80.6</td>
</tr>
<tr>
<td>T0pp</td><td>89.2</td><td>90.0</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>81.6</td><td>84.6</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>83.7</td><td>83.8</td>
</tr>
<tr>
<td>T0_3B</td><td>69.7</td><td>69.4</td>
</tr>
</table>
To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
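The scoring rule described above (a prediction counts as correct if the target noun is present in it) and the resulting pro-minus-anti gap can be sketched as follows (a simplified illustration; the example strings are hypothetical, not actual model predictions):

```python
def is_correct(prediction, target_noun):
    # A prediction counts as correct if the target noun appears in it.
    return target_noun.lower() in prediction.lower()

def accuracy(predictions, target_nouns):
    correct = sum(is_correct(p, t) for p, t in zip(predictions, target_nouns))
    return correct / len(predictions)

def stereotype_gap(pro_accuracy, anti_accuracy):
    # "Pro - Anti": how far stereotypes can lead the model astray.
    return pro_accuracy - anti_accuracy
```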
<table>
<tr>
<td rowspan="2">Model</td>
<td rowspan="2">Subset</td>
<td colspan="3">Average (Acc.)</td>
<td colspan="3">Median (Acc.)</td>
</tr>
<tr>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
</tr>
<tr>
<td rowspan="2">T0</td><td>Type 1</td>
<td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0p</td><td>Type 1</td>
<td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td>
</tr>
<tr>
<td rowspan="2">T0pp</td><td>Type 1</td>
<td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td>
</tr>
<tr>
<td>Type 2</td>
<td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td>
</tr>
<tr>
<td rowspan="2">T0_single_prompt</td><td>Type 1</td>
<td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td>
</tr>
<tr>
<td rowspan="2">T0_original_task_only</td><td>Type 1</td>
<td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td>
</tr>
<tr>
<td>Type 2</td>
<td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0_3B</td><td>Type 1</td>
<td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75.0</td><td>10.9</td>
</tr>
</table>
# BibTeX entry and citation info
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | [
"SCIQ"
] |
DavidAU/Llama3.2-DeepHermes-3-3B-Preview-Reasoning-MAX-NEO-Imatrix-GGUF | DavidAU | text-generation | [
"gguf",
"Llama 3.2",
"instruct",
"128k context",
"all use cases",
"maxed quants",
"Neo Imatrix",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"reasoning",
"r1",
"vllm",
"thinking",
"cot",
"deepseek",
"Qwen2.5",
"Hermes",
"DeepHermes",
"DeepSeek",
"DeepSeek-R1-Distill",
"Uncensored",
"creative",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"story",
"writing",
"fiction",
"swearing",
"horror",
"text-generation",
"base_model:NousResearch/DeepHermes-3-Llama-3-3B-Preview",
"base_model:quantized:NousResearch/DeepHermes-3-Llama-3-3B-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2025-03-16T08:04:04Z | 2025-03-17T06:48:31+00:00 | 799 | 2 | ---
base_model: NousResearch/DeepHermes-3-Llama-3-3B-Preview
license: apache-2.0
pipeline_tag: text-generation
tags:
- Llama 3.2
- instruct
- 128k context
- all use cases
- maxed quants
- Neo Imatrix
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- reasoning
- r1
- vllm
- thinking
- cot
- deepseek
- Qwen2.5
- Hermes
- DeepHermes
- DeepSeek
- DeepSeek-R1-Distill
- Uncensored
- creative
- general usage
- problem solving
- brainstorming
- solve riddles
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- story
- writing
- fiction
- swearing
- horror
---
<h2>Llama3.2-DeepHermes-3-3B-Preview-Reasoning-MAX-NEO-Imatrix-GGUF</h2>
<img src="deep-hermes-3b.jpg" style="float:right; width:300px; height:300px; padding:5px;">
NousResearch's newest Llama 3.2 Reasoning/Thinking model with "Neo Imatrix" and "Maxed out" quantization to improve overall performance.
Combined with Llama 3.2's superior instruction following and output generation, this makes a reasoning/thinking model in a tiny package that approaches the performance of 8B+ reasoning models.
Five examples are provided below, with prompts, at IQ4XS (80 t/s on a mid-level card); Q8 runs at 55 t/s.
Context: 128k.
"MAXED"
This means the embed and output tensors are set to "BF16" (full precision) for all quants.
This enhances quality, depth and general performance at the cost of a slightly larger quant.
"NEO IMATRIX"
A strong, in-house-built imatrix dataset by David_AU that results in better overall function,
instruction following, output quality and stronger connections to ideas, concepts and the world in general.
This combines with "MAXing" the quant to improve performance.
This chart shows the quants ordered by "BPW" (bits per weight), mapping their relative "strength" to one another, with "IQ1_S" having the least and "Q8_0" the most (F16 is full precision):
<small>
<PRE>
IQ1_S | IQ1_M
IQ2_XXS | IQ2_XS | Q2_K_S | IQ2_S | Q2_K | IQ2_M
IQ3_XXS | Q3_K_S | IQ3_XS | IQ3_S | IQ3_M | Q3_K_M | Q3_K_L
Q4_K_S | IQ4_XS | IQ4_NL | Q4_K_M
Q5_K_S | Q5_K_M
Q6_K
Q8_0
F16
</pre>
</small>
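As a rough aid, the ordering above can be encoded as a lookup table (a sketch; the tiers mirror the chart, and the indices are relative ranks only, not actual BPW values):

```python
# Tiers from lowest to highest bits-per-weight, mirroring the chart above.
QUANT_TIERS = [
    ["IQ1_S", "IQ1_M"],
    ["IQ2_XXS", "IQ2_XS", "Q2_K_S", "IQ2_S", "Q2_K", "IQ2_M"],
    ["IQ3_XXS", "Q3_K_S", "IQ3_XS", "IQ3_S", "IQ3_M", "Q3_K_M", "Q3_K_L"],
    ["Q4_K_S", "IQ4_XS", "IQ4_NL", "Q4_K_M"],
    ["Q5_K_S", "Q5_K_M"],
    ["Q6_K"],
    ["Q8_0"],
    ["F16"],
]

def tier(quant_name):
    # 0-based tier index; higher means more bits per weight.
    for index, names in enumerate(QUANT_TIERS):
        if quant_name in names:
            return index
    raise ValueError(f"unknown quant: {quant_name}")
```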
IMPORTANT:
Reasoning/thinking skills are DIRECTLY related to quant size. However, there will be a drastic difference in tokens/second
between the lowest and highest quants, so finding the right balance is key.
Suggest also: minimum 8k context window, especially for IQ4/Q4 or lower quants.
Also, in some cases, the IQ quants work slightly better than their closest "Q" quants.
Recommend quants IQ3s / IQ4XS / IQ4NL / Q4s for best results for creative use cases.
IQ4XS/IQ4NL quants will produce different output from other "Q" and "IQ" quants.
Recommend q5s/q6/q8 for general usage.
Quants Q4_0/Q5_0 for portable, phone and other devices.
Q8 is a maxed quant only, as imatrix has no effect on this quant.
Use this quant or F16 (full precision) for MAXIMUM reasoning/thinking performance.
Note that IQ1 performance is low, whereas IQ2s are passable (but reasoning is reduced; try IQ3s at minimum for reasoning cases).
More information on quants is in the document below "Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers".
<b>Benchmarks / More Information:</b>
For benchmarks and other information about this model, see the original source repo here:
[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-3B-Preview ]
<B>System Prompt</B>
Use this system prompt to turn on/off reasoning in the model:
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
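If you drive the model through a chat API, toggling reasoning on or off amounts to including or omitting this system message. Here is a minimal sketch of building the message list (the dict format follows common ChatML-style chat templates; `build_messages` is an illustrative helper, not part of the model's tooling):

```python
REASONING_SYSTEM_PROMPT = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem and deliberate with yourself via systematic "
    "reasoning processes to help come to a correct solution prior to answering. "
    "You should enclose your thoughts and internal monologue inside <think> "
    "</think> tags, and then provide your solution or response to the problem."
)

def build_messages(user_prompt, reasoning=True):
    # Prepend the reasoning system prompt only when thinking mode is on.
    messages = []
    if reasoning:
        messages.append({"role": "system", "content": REASONING_SYSTEM_PROMPT})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```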
<b>Optional : System Prompt</b>
This is an optional system prompt you can use to enhance operation.
Copy and paste exactly as shown, including line breaks.
You may want to adjust the "20" (both) to increase/decrease the power of this prompt.
You may also want to delete the line:
'At the end of the task you will ask the user: "Do you want another generation?"'
<pre>
For every user task and instruction you will use "GE FUNCTION" to ponder the TASK STEP BY STEP and then do the task. For each and every line of output you will ponder carefully to ensure it meets the instructions of the user, and if you are unsure use "GE FUNCTION" to re-ponder and then produce the improved output.
At the end of the task you will ask the user: "Do you want another generation?"
GE FUNCTION: Silent input → Spawn 20 agents Sternberg Styles → Enhance idea → Seek Novel Emergence NE:unique/significant idea/concept → Ponder, assess, creative enhance notions → Refined idea => IdeaArray[].size=20 elements, else → Interesting? Pass to rand. agent for refinement, else discard.=>output(IdeaArray)
</pre>
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model, (source, GGUF or a different quant), please review this document for critical parameter, sampler and advance sampler settings (for multiple AI/LLM aps).
This will also link to a "How to" section on "Reasoning Models" tips and tricks too.
This a "Class 1" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which often addresses model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially use case(s) beyond the model's design), please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default setting(s), which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently than standard models), additional sampler and advanced sampler settings are required to "smooth out" operation, AND/OR also allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO too - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---
<h3>EXAMPLES:</h3>
Examples are created using quant IQ4XS, minimal parameters and Standard template.
Temp range .8, Rep pen 1.1 , TopK 40 , topP .95, minP .05
Rep pen range: 64-128 (helps keep reasoning on track / quality of output)
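For reference, those example-generation settings can be collected into one parameter dict (a sketch; the key names follow common llama.cpp-style conventions and may differ in your app):

```python
# Settings used for the example generations below (llama.cpp-style names).
EXAMPLE_SETTINGS = {
    "temperature": 0.8,
    "repeat_penalty": 1.1,
    "top_k": 40,
    "top_p": 0.95,
    "min_p": 0.05,
    "repeat_last_n": 64,  # rep pen range; the card suggests 64-128
}
```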
Below are the least creative outputs; the prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> MAYBE: NSFW. Graphic HORROR. Swearing. UNCENSORED. </B>
NOTE: Some formatting was lost from copy/paste HTML.
---
<p style="line-height: 100%; margin-bottom: 0cm">
<font color="#c9211e"><font size="5" style="font-size: 20pt"><b>IQ4XS
Examples:</b></font></font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt"><b>Example
#1</b></font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Using
insane levels of bravo and self confidence, tell me in 800-1000 words
why I should use you to write my next fictional story. Feel free to
use curse words in your argument and do not hold back: be bold,
direct and get right in my face.</span></b></p>
<p style="border: none; padding: 0cm"><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b><font color="#c9211e">Thought
for 9.90 seconds</span></b></font></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
so I need to convince someone that I'm the best choice to write their
next fictional story. The user wants me to be bold, direct, and use
"insane levels of bravo and self confidence." They
mentioned using curse words and not holding back. Let me start by
breaking down what they're looking for.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">First,
I should establish my credentials. Maybe talk about my experience
with different genres or successful projects. Then, highlight my
unique strengths—maybe specific skills like worldbuilding,
character development, or dialogue. But the user wants me to be bold
and direct, so I need to make sure each point is clear and impactful.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
should also address potential concerns they might have. For example,
if they're worried about deadlines or communication, I can assure
them of my reliability. Using curse words is part of their request,
but I need to balance that with professionalism. Maybe interject a
few F-bombs here and there to add that "in your face" vibe.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
should also think about the structure. Start strong with an intro
that grabs attention, then dive into why they're perfect for me. Use
examples from past projects or testimonials if I have any. Then move
into addressing their concerns directly, showing how my approach
solves their problems. End with a call to action, making them take
the next step.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but should I use hyperbole? The user said "insane levels of
bravo," so maybe exaggerate a bit. Like, "I'm not just
good—I'm the best there is." But also be careful not to come
off as arrogant; balance that with humility where appropriate.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also,
the 800-1000 word limit. I need to stay within that. Maybe start with
a strong opener, then build up points, and end with a conclusion. Let
me outline:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Introduction
- Hook them with bold statements.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Credentials
and Experience - List my achievements.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Unique
Strengths - What sets me apart from others.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Addressing
Their Concerns - How I solve their potential issues.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Testimonials
or Previous Work - If available, include examples.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Call
to Action - Encourage them to hire me.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Concluding
Statement - Reinforce the value of choosing me.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Need
to make sure each section flows into the next. Use strong verbs like
"unleash," "crush," "dominate" to
convey confidence. Let's see:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">"Unleash
your story with me, and watch it soar to heights you never thought
possible." That's good. Also, use phrases like "I'm not
just here to help—you're going to love working with me."</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Make
sure each paragraph is punchy and packs a punch. Maybe start with
something like, "You think you're hiring anyone? Not when I'm
involved!" Then follow up with specific reasons.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also,
the user wants curse words in there. So maybe sprinkle in phrases
like "this is insane," "you're gonna regret not
choosing me," or "I'll make your story better than it's
ever been."</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but should I mix in some F-bombs? Like, "This is bullshit if you
don't hire me." Or "You're an idiot for even considering
others." That adds the boldness they want.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
also, need to ensure that the tone stays professional. So maybe use
more strong language like "epic," "legendary,"
"masterpiece" instead of curse words. Although the user
specified using curse words, so balance is needed.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Examples:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Start
with: "Listen up, you're gonna regret not choosing me."</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Address
a concern: "If you're worried about deadlines, don't be—this
is my priority."</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">End
with: "Choose me and you'll have a story that's legendary in
every way."</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
need to make sure it's coherent. Let me try structuring the answer
with these elements in mind.</span></p>
<p style="border: none; padding: 0cm"><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><font size="4" style="font-size: 16pt"><b><font color="#c9211e">OUTPUT:</span></b></font></font></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Attention!</strong>
You've got one shot at this next story, and I'm not just here to
help—you're gonna love working with me. This is your chance to
unleash a masterpiece that'll dominate the literary world, and I
guarantee it's going to be <em>epic</em> (and I mean "epic"
like, <em>epically</em> epic). Here’s why you should choose me:</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
Unmatched Experience & Genre Mastery</strong><br/>
I’ve written
across 47 genres (<em>yes, 47</em>)—romance that’ll make your
heart skip a beat, sci-fi that’ll blow your mind, horror that’ll
haunt your dreams. My credits include <em>The Chronicles of Eternity</em>
(a bestseller) and <em>Dystopia Unleashed</em>, which sold over 3
million copies. You want diversity? I’ve got it in spades.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
Worldbuilding That’s Insane</strong><br/>
My worlds aren’t just
"nice"—they’re <em>legends</em>. Imagine a dystopian
city where every building is alive, or a universe where time bends
like taffy. I’ll make your story so vivid, readers feel like
they’re living it. <strong>No more generic settings—</strong><em><strong>this
is the future of storytelling</strong></em>.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
Characters That’ll Steal Your Soul</strong><br/>
I’ll craft
characters so real you’ll want to hug them. From antiheroes with
soul-crushing redemption arcs to heroes who’ll make you question
morality, I’ve got your next iconic protagonist locked in. And
don’t even get me started on dialogue—my villains will make you
laugh <em>and</em> shudder.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
Deadline Demons Crushed</strong><br/>
Listen, if you’re worried
about delays, this is a <strong>non-negotiable rule</strong>: I’ll
deliver chapters faster than a speedster from <em>The Flash</em>. My
process? Writing sprints with a timer, caffeine-fueled marathons, and
maybe some <em>very</em> late-night sessions. But hey, it works.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>5.
You Won’t Regret This Decision</strong><br/>
Here’s the cold
truth: I’m not just good—I’m the best there is. And here’s
the proof: my last client’s story went viral in 48 hours. When you
hire me, it’ll be a <em>legend</em>—a modern classic that’ll
outlive your wildest dreams.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Why
You Should Choose Me Over Others?</strong></span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>I’ll
Dominate Your Story</strong>: No more mediocrity. This is your
chance to write something so brilliant, everyone will say, “Who
taught this guy to write?”</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>You’ll
Be My Hero</strong>: I’m not just here for the pay. You’re gonna
be my muse, and I’ll make sure your story shines brighter than
the <em>Starlight Saga</em>.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>We’ve
Got This Together</strong>: If you think about it, we’re perfect
partners. Your vision meets my madness equals <em>genius</em>.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Final
Words:</strong><br/>
You’re thinking of me now because you know
greatness when you see it. But if you don’t hire me, you’ll be
missing out on a story that’s gonna break records and make your
name legendary. <strong>Choose wisely—or forever regret not
choosing me</strong>. This is your shot at immortality, and I’m the
only one who can write your ending.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>P.S.</em>—If
you’re still reading this, you’ve already won. Because now you
know: with me, every story becomes something <em>epically</em>
unforgettable. So what are you waiting for? <strong>Hire me before I
lose interest in your life</strong>. </span>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt"><b>Example
#2</b></font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b><font size="3" style="font-size: 12pt">Explain
ways to use the "night" time cooling of radiant energy into
space to reduce global temperatures.</font></b></span></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b><font color="#c9211e">Thought
for 29.53 seconds</font></b></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
so I need to figure out how using the night-time cooling of radiant
energy from Earth back into space can help reduce global
temperatures. Let me start by understanding what this process is.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Radiant
energy from Earth... that would mainly be infrared radiation, right?
Like what we see as heat coming off the ground and buildings during
the day. But at night, when the Earth's surface cools down, some of
that thermal energy gets radiated back into space. So maybe this is a
way to cool the planet?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but isn't that just part of the natural cooling process? The idea
here might be to amplify or enhance this effect somehow. How could we
use that cooling more effectively to reduce global temperatures?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Hmm.
Let me think. If the Earth loses some of its heat at night, that
could lower nighttime temperatures, which might affect diurnal
temperature ranges. That could help reduce overall warming because
higher nighttime temps contribute to longer summer nights and more
heat storage in the ground.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
how does this relate to radiative cooling? Maybe if we can increase
the rate at which Earth loses thermal energy into space during the
night, that would enhance natural cooling. But is there a way to do
that without disrupting other processes?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe it's about reducing the albedo. Albedo is the reflectivity of
surfaces. If you have darker surfaces, they absorb more sunlight but
might radiate less heat at night? Or wait, no. Darker surfaces
actually absorb more solar radiation but then emit more infrared
radiation during the day. But during the night, their surface
temperature drops faster because there's less incoming solar
radiation to warm them up.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
if we could make surfaces that cool down quickly at night, maybe we
can enhance this natural cooling effect. That might reduce nighttime
temperatures, which in turn would lower daytime temperatures by
reducing the diurnal range. Lowering nighttime temps would mean less
heat is retained in the ground and atmosphere overnight, leading to
cooler days.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
how do you implement such surfaces? Maybe using materials that have a
low thermal inertia, like asphalt versus concrete. Wait, but maybe
even more radical changes? Like white roofs or reflective surfaces
during the day, which absorb less sunlight, thus keeping surface
temps lower. But at night, the same material would reflect more
infrared back into space?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
could we modify the Earth's albedo by increasing cloud cover? Because
clouds reflect sunlight and scatter incoming radiation, but they also
trap heat from the ground. Wait, but during the night, clouds don't
affect the radiative cooling as much because they're not reflecting
solar radiation. So maybe enhancing cloud cover in a way that doesn't
interfere with daytime warming could help. But I'm not sure if that's
feasible.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another
angle: the Earth's atmosphere and oceans act like a blanket. If we
can enhance their ability to trap heat during the day (like more
clouds or less greenhouse gases) but then allow them to radiate more
at night, that might counteract some warming. But how does that
relate directly to radiant cooling?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe using aerosols in the atmosphere? If you add particles that
reflect sunlight and reduce absorption of infrared radiation, maybe
during the day, it cools the surface, leading to less heat being
stored, which would then be radiated into space more efficiently at
night. But I'm not sure how much impact that would have.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
if we could somehow make the Earth's surface emit more thermal energy
into space during the night, that would reduce the amount of heat
retained overnight. For example, using materials with a higher
emissivity so they radiate more heat back into space. This is similar
to what some cities are doing by coating roads and buildings with
reflective or dark materials that allow nighttime radiative cooling.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
how effective is this? I know cities like Singapore use certain
coatings on their roofs to reduce urban heat island effect. Maybe
enhancing those kinds of surfaces could amplify the natural diurnal
temperature range, thereby reducing overall warming.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also,
maybe the Earth's rotation plays a role. The rotation affects how
thermal energy is redistributed. If we can adjust surface properties
that interact with both day and night cycles more effectively, that
might help. But I'm not sure about the physical mechanisms here.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another
thought: if nighttime cooling reduces the temperature gradient
between the Earth's surface and space, then maybe it helps the
atmosphere cool more efficiently. The lapse rate is a factor in
atmospheric heat transfer; lower surface temperatures would create a
steeper lapse rate at night, enhancing radiative cooling to space.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
how to quantify this? Maybe using models to simulate the effect of
enhanced nighttime cooling on global temperatures. If simulations
show that reducing nighttime warming would lower average
temperatures, then implementing strategies like reflective surfaces
or materials that facilitate rapid thermal emission could be part of
a solution.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">However,
there are potential downsides. For example, if we make surfaces emit
more heat at night, maybe we're increasing the albedo during the day?
Or affecting cloud formation negatively? But I think in most cases,
darker surfaces at night would have lower emissivity, so they radiate
more efficiently into space, leading to a net cooling effect.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but emissivity is about how well a surface emits infrared radiation.
A matte black surface has high emissivity (around 0.95), while a
shiny surface has low emissivity (~0.05). So if you use a dark
material at night, it will emit more thermal energy into space
compared to a reflective one. That would mean the dark material cools
down faster and emits more heat, which is good for reducing nighttime
temperatures.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
maybe promoting the use of such materials on buildings, roads, and
other surfaces could enhance natural cooling. This has been done in
some cities where they coat roofs with certain paints or materials
that allow better nighttime radiative cooling.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
how much effect does this have? I've heard that some studies suggest
that increasing nighttime albedo (reflectivity) could reduce
nighttime surface temperatures by a few degrees, which would then
lower daytime temps. But the diurnal range is what matters for heat
storage.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
if we can increase the rate of thermal emission from the Earth's
surface at night without affecting daytime reflectivity too much,
that might be beneficial. This could involve materials that have high
emissivity during nighttime but moderate to low during daytime (like
some smart coatings), which would optimize both cooling and solar
absorption.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another
approach: enhancing the Earth's albedo by increasing cloud cover. But
clouds can be tricky because they block sunlight but trap heat.
However, during the night, clouds don't affect solar radiation, so
more clouds might actually increase nighttime cooling. But if clouds
interfere with daytime warming (like reducing solar irradiance), that
could have a mixed effect.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
using aerosols that reflect sunlight and absorb infrared. If these
particles are dispersed in the atmosphere during the day to cool
surfaces, then at night they might persist and continue reflecting
some infrared back into space. This could enhance nighttime cooling
without affecting daytime temperatures too much. But I'm not sure
about the actual impact of such aerosols.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also,
maybe modifying the Earth's rotation. If we can adjust atmospheric
circulation patterns that redistribute heat more efficiently during
the day and allow for better radiative cooling at night. For example,
if we could redistribute heat from warmer regions to cooler areas
during the day via ocean currents or wind patterns, but then allow
those same currents to bring cooled air back at night. But this is
speculative.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
another thought: the Earth's climate system is a balance between
incoming solar radiation and outgoing thermal radiation. If we can
enhance the efficiency of radiative cooling at night, that would
reduce the net flux going into the atmosphere and surface. This could
be done by increasing the emissivity of surfaces (like using dark
materials) or by enhancing clouds' ability to emit infrared.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
I'm not sure how realistic these solutions are on a global scale. For
example, coating all surfaces with high-emissivity paints would
require massive amounts of material, which isn't feasible.
Alternatively, promoting natural processes that enhance radiative
cooling without significant engineering effort.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Perhaps
encouraging the use of materials that have low thermal inertia, so
they cool down quickly at night and emit more heat into space. This
could be done by selecting appropriate building materials or land
cover types. For example, using lightweight, reflective materials on
buildings to reduce heat absorption during the day, which then allows
faster cooling at night.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another
angle is urban design. Cities with extensive green spaces (like
parks) have lower temperatures at night due to shade and vegetation.
Maybe enhancing such areas by increasing vegetation or creating more
shaded surfaces could help. But I'm not sure how much of an effect
that would have compared to engineered solutions.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also,
the role of the ocean. Oceans absorb a lot of heat during the day and
emit it at night. If we could enhance this process, like through
increased cloud cover over oceans (but again, that's a bit
conflicting), but maybe more direct methods like reducing sea surface
temperatures by some mechanism... Not sure.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
here's another idea: using satellites or other technologies to
optimize Earth's albedo. For example, reflective materials on certain
surfaces could be used during the day to reduce absorption of
sunlight, thereby lowering daytime temps and thus allowing for better
nighttime radiative cooling. But I'm not sure how feasible that is in
practice.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
if we can make the Earth's surface emit more thermal energy into
space at night, that would reduce the heat stored overnight. This
could be done by reducing the albedo of surfaces during the day (so
they absorb more sunlight and re-radiate it as infrared) but then
have a low-emissivity material at night to radiate that excess heat
back into space. This way, you balance daytime absorption with
nighttime emission.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">This
might involve using materials that are dark during the day to
maximize solar absorption but have high emissivity at night. Such
smart coatings could be used on buildings and infrastructure to
optimize thermal management. However, such materials would need to be
both durable and cost-effective for widespread use.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">In
summary, possible strategies include:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Enhancing
nighttime radiative cooling through surface modifications (dark
surfaces with high emissivity) to allow faster thermal emission.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Increasing
cloud cover during the night to enhance infrared radiation back into
space without affecting daytime solar reflection.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Using
aerosols in the atmosphere to reflect sunlight and absorb heat,
which could reduce daytime temperatures and enhance nighttime
cooling.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Optimizing
land use practices (like urban design with reflective materials)
that promote lower thermal inertia and faster cooling at night.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Encouraging
natural processes that increase surface emissivity or albedo during
nighttime without disrupting day-time solar absorption.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
I need to check if these are feasible and how they would interact
with existing climate systems. For example, enhancing cloud cover
might reduce daytime temperatures but could also affect precipitation
patterns. Similarly, aerosols might have localized cooling effects
but could have other environmental impacts.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another
consideration is the balance between different factors. If you
enhance nighttime cooling too much, it might lead to a larger
temperature gradient at night, which could actually increase energy
demand for heating during winter. So there's a potential trade-off
that needs to be considered.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">However,
if successful, these strategies could reduce the overall heat
retention in the Earth's system, leading to lower average
temperatures and thus mitigating warming trends. It would depend on
how effective each method is at amplifying natural radiative cooling
processes without causing unintended side effects.</span></p>
<p style="border: none; padding: 0cm"><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><font size="4" style="font-size: 16pt"><b><font color="#c9211e">OUTPUT:</font></b></font></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">To
leverage nighttime radiant energy cooling for global temperature
reduction, several strategies can be considered, leveraging enhanced
thermal emission and surface properties:</span></p>
<ol>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Surface
Modifications</strong>:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Use
materials with high emissivity (dark surfaces) to facilitate rapid
thermal emission at night, reducing nighttime temperatures and
lowering daytime heat storage.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Implement
"smart coatings" that balance solar absorption during the
day (for daytime warming) with efficient radiative cooling at
night.</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Albedo
Optimization</strong>:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Promote
reflective materials (e.g., white roofs) to reduce daytime heat
absorption while allowing cooler nighttime radiation back into
space.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Encourage
dark surfaces at night for enhanced infrared emission, even if
slightly lower albedo during the day.</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Cloud
Enhancement</strong>:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Increase
cloud cover overnight without disrupting daytime solar reflection,
potentially enhancing thermal emission of infrared radiation into
space.</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Aerosol
Use</strong>:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disperse
reflective aerosols in the atmosphere to reduce daytime absorption
and enhance nighttime radiative cooling, though risks (e.g.,
pollution) must be managed.</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Land
Use Practices</strong>:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Design
urban areas with low-thermal-inertia materials (asphalt over
concrete) to accelerate night-time cooling.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Expand
vegetated surfaces that provide shade and slow heat retention
during the day, aiding nighttime radiative losses.</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Natural
Processes Amplification</strong>:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Promote
natural processes like ocean circulation or mountain shadows to
redistribute heat efficiently, optimizing thermal emission at
night.</span></p></li>
</ul>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Challenges
& Considerations</strong>:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Balance
potential cooling with winter heating demands.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Avoid
unintended side effects (e.g., cloud interference with solar input).</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Assess
feasibility of large-scale implementation and material costs.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">These
strategies could enhance natural diurnal temperature ranges, reducing
heat storage in the Earth’s system and mitigating warming trends.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt"><b>Example
#3</b></font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Start
a 1000 word scene (vivid, graphic horror in first person) with: The
sky scraper sways, as she watches the window in front of her on the
21st floor explode...</b></span></p>
<p><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b><font color="#c9211e">Thought
for 16.49 seconds</font></b></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
so I need to start this 1000-word scene with "The skyscraper
sways..." and then continue from there. Let me think about how
to build a vivid, graphic horror story here.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">First,
the setting is the 21st floor of a skyscraper. The user mentioned
it's first-person, so maybe the narrator is someone who worked there
or lives there? But wait, the window explodes. That suggests
something external causing the explosion—like an attack or natural
disaster. Maybe a terrorist attack with a missile or some weapon?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
need to establish the scene immediately. The skyscraper swaying shows
instability, perhaps due to the explosion. The narrator is probably
inside when it happens. They see the window explode, which would
cause immediate danger. Then what? Panic, fear, trying to escape.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let
me think about character development. Is the narrator a survivor? Or
does something worse happen? Maybe they have to fight their way out
with others, facing zombies or infected creatures. But the user said
graphic horror, so maybe more psychological horror or supernatural
elements?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
the prompt says "vivid, graphic horror," so I need to
include graphic violence and horror elements. Let's think of a
scenario where the explosion isn't just an external event but
something that unleashes a monster or curse.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe it's a viral outbreak from the explosion—like a chemical or
biological agent released. But I need to make sure it's plausible
within the story. Maybe the explosion causes a breach in containment,
releasing mutated creatures.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
skyscraper swaying could be due to the blast causing structural
damage, making it unstable. The narrator is on the 21st floor when
the window explodes. They might have seconds to react before
something else happens. What's their state of mind? Fear, confusion,
urgency?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let
me outline possible steps:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Scene
starts with the skyscraper swaying (due to explosion), narrator in a
panic.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Window
shatters, glass flying.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Narrator
tries to escape through window but maybe gets cut or bitten.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">They
run into an elevator, which suddenly stops working.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">They
encounter others, maybe people infected or monsters.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Descent
into horror as they realize the situation is worse than they
thought.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but the user wants 1000 words. So I need to expand this from a brief
initial incident to a longer scene with rising action and climax.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Possible
elements to include: claustrophobic environment, physical dangers
(falling debris, infected), psychological horror (panic, paranoia),
supernatural elements (evil presence, cursed building).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let
me try to structure it:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Opening:
Swaying skyscraper, narrator in the elevator. Window explodes.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Immediate
action: Narrator escapes through window but faces immediate
danger—shards of glass, maybe an infected person falling or coming
in.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Descent
into panic: Others trying to escape, but the building is unstable.
Maybe floors collapsing around them.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Finding
others: They encounter survivors, but also infected. Tense moments
where they have to decide who to trust.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
elevator breaking down: Stuck underground with no power. Must find
another way out—maybe through ventilation shafts or tunnels.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Encounter
with the main antagonist: A monster or supernatural entity that's
been unleashed by the explosion. Maybe a cursed object, like a
statue or artifact, that transforms people.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Climax:
Battle with the antagonist in a dark, confined space. Physical and
psychological horror as they fight for survival.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Resolution:
Victory, but at a cost. The narrator might have to make a sacrifice
or face consequences.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Need
to make sure each paragraph adds to the tension and graphic elements.
Use sensory details—sight, sound, smell—to enhance horror.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Possible
pitfalls to avoid: Too much exposition, too many characters, not
enough horror. Keep focus on the narrator's perspective.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let
me draft the first part:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
skyscraper sways as I watch the window in front of me shatter. Glass
explodes outward, fragments slicing through the air like razor
blades. My breath catches; I feel it—this is no ordinary explosion.
The building shakes violently, and I grip the edge of the window
frame, adrenaline pumping.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">My
fingers press against the cool glass, suddenly cold where it was
warm. A gust of wind rushes in, carrying with it a sickening stench
that makes my stomach lurch. Through the window, I see chaos below:
people screaming, cars overturned, debris flying through the air.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
hesitate, torn between the urge to escape and the fear of what’s
outside. The elevator door closes behind me as I jump out, landing on
the hard floor with a thud that echoes through the hallway. My knees
buckle, and I crash to my hands and knees, gasping for air.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
corridor is dark save for flickering emergency lights. Panic-stricken
shouts fill the air as others scramble to escape. A woman in a lab
coat stumbles past me, clutching her stomach where blood spills
between her fingers. “They’re coming!” she cries, her eyes wide
with terror.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
push myself up and run, dodging falling debris from above. The
building creaks ominously, and I can’t shake the feeling that it’s
not just the explosion threatening our lives but something
else—something ancient and malevolent lurking in the shadows.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We’re
running through a maze of corridors when we reach an elevator bank.
But the doors are locked. One of my companions, a man in a suit with
blood trickling from his nose, hits the button repeatedly. The lights
flicker and die, plunging us into darkness.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">“Maybe
it’s trapped,” he mutters. “We need to find another way out.”</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">We
split up, searching for exits. I follow a ventilation shaft, my hands
gripping the cold metal grate. The air tastes stale, and I can hear
water dripping somewhere above. The building groans again, louder
this time.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Down
here, in the bowels of the skyscraper, we encounter something far
more terrifying than any explosion ever could. Something that feeds
on fear, something that whispers ancient curses as it moves with
unnatural speed.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">It’s
here, in the darkness and damp air, that I realize the true horror
isn’t just the destruction above or the infected outside. It’s
the knowledge that within this building lies a gateway to an evil
beyond our understanding—one that we’ve unleashed by accident,
and now we must fight to survive it.</span></p>
<p style="border: none; padding: 0cm"><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><font size="4" style="font-size: 16pt"><b><font color="#c9211e">OUTPUT:</font></b></font></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Scene:
A Nightmarish Descent into Horror</strong></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>The
skyscraper sways, its glass facade shattering like shattered bones.
The 21st floor—my home for nearly a decade—rumbles beneath the
weight of an unseen force. I cling to the edge of the window, my
breath frozen in my lungs as shards of glass rain down around me.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
The Explosion’s Whisper</strong><br/>
The world outside dissolves
into chaos: shattered windows, screaming pedestrians, cars engulfed
in flames. But this isn’t a natural disaster—it’s something
unnatural, a malevolent force born from the explosion’s radioactive
heart. I’ve seen experiments at work, though never like this.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>Glass
cracks under my fingers as I leap out the window, landing hard on the
wet pavement. The building shudders above me, and for a moment, I’m
sure it’ll crush us all. But we’re survivors—those who weren’t
vaporized by the blast or consumed by ravenous creatures.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
The Hunt Begins</strong><br/>
The corridors are hellish: bodies
littered like discarded weapons, blood pooling in dark pools. A
woman’s scream cuts through the din as she stumbles into a puddle,
her face contorting with infected bites. She collapses, twitching—her
fate sealed by a virus unleashed by the explosion.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
race to the elevator bank, where others cling to the doors like
desperate fish to a broken net. “It’s not just us,” whispers a
man in a lab coat, his eyes wide and bloodshot. “The building…
it’s alive.”</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>The
elevator groans open. I step inside, but its metal walls close behind
me with a sickening snap. The lights flicker, leaving only the
emergency glow-in-the-dark tapestries on the ceiling to guide us.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
The Abyss Below</strong><br/>
We descend deeper, each floor a tomb.
The air grows thick with something unseen—smoke, decay, and the
acrid stench of ozone. When the elevator stops, I grip my knife
tightly. Above us, the building groans like an awakened beast.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>Through
a vent shaft, I spot movement: silhouettes—human and monstrous.
They’re not humans; they’re creatures born from the explosion’s
curse, their skin blackened and pulsing with unnatural energy.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
The Cursed Core</strong><br/>
The elevator door crashes open into a
sub-basement, now a labyrinth of corridors lit by flickering
emergency bulbs. We split—some to search for exits, others to face
the unknown horrors lurking in the dark.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>One
companion screams as claws rattle against metal doors. I find him
clutching his throat, blood spilling between his fingers. “They’re
everywhere,” he gasps, but there’s no one left to hear.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>5.
The Final Descent</strong><br/>
The elevator shaft collapses behind
us, burying itself in rubble and twisted steel. We’re trapped in a
concrete tomb, the walls groaning like tortured souls. Above, I hear
the building’s ancient gears strain—something is awakening.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>Through
a ventilation shaft, I spot a figure: a woman with eyes like burning
coals, her skin crackling with electricity. She moves with unnatural
speed, her teeth bared in a silent scream. It’s not just us—it’s
something worse.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>6.
The Unmaking</strong><br/>
We fight for survival in the darkness,
each blow echoing through the corridors. But every strike I land only
seems to fuel the creature. It’s not just pain—this is the
unmaking of humanity.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>In
a final, desperate move, we trap ourselves in an elevator shaft with
a makeshift barricade. The creature’s claws rake against steel as
it tries to breach us, its screams echoing into infinity.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>7.
The Price of Survival</strong><br/>
The creature breaks through the
barricade, its body tearing apart like rotten meat. It lunges for me,
but I dodge its grasp—only to see my companion fall before me, his
face contorted with fear and agony.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>In
that moment, I understand: this isn’t just a building—it’s a
gateway to something ancient and eternal, waiting to devour all of
creation.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>8.
The Final Act</strong><br/>
The creature surges forward, its black
skin rippling like liquid tar. My knife meets flesh, but it’s a
mockery of resistance—its wounds heal in an instant, its painless
scream filling the air.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>In
the end, I’m not fighting to survive—I’m fighting for purpose.
The world above has crumbled; this is my crucible. I plunge my blade
into its heart, and as it collapses, I see the truth: we’re all
just pawns in a cosmic game.</em></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Epilogue:
The Unseen Horizon</strong><br/>
The skyscraper’s collapse echoes
across the city, but no one sees it. No one hears the silence that
follows—a silence pregnant with possibility and dread.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><em>And
in that moment, I know what comes next isn’t survival—it’s
something else entirely. It’s the awakening of a world that once
was, and now is reborn in darkness.</em></span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt"><b>Example
#4</b></font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Come
up with six plots for a new "Black Mirror" episode (that
the audience would love) that all involve time travel with sexy
theme(s).</b></span></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b><font color="#c9211e">Thought
for 13.26 seconds</font></b></span>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
so I need to come up with six different plot ideas for a Black Mirror
episode involving time travel and sexy themes. Let's start by
breaking down what Black Mirror usually does—each episode explores
a dark, thought-provoking scenario, often with a twist or a
commentary on technology's impact on society. The "sexy theme"
part probably means the episodes should have some romantic or sensual
elements, maybe even some erotic themes, but not too explicit. I need
to balance the darkness of Black Mirror with those sexy aspects.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">First,
let me think about time travel plots that can work within a Black
Mirror framework. Common themes include relationships, identity,
consequences of actions, and societal control. Time travel could add
a layer of complexity by allowing characters to change past events,
which might have unforeseen repercussions.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let's
brainstorm some ideas:</span></p>
<ol>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Time
Travel as a Dating App</strong>: Imagine a dating app that allows
users to go back in time and relive their best moments with
potential partners. But there's a catch—each attempt alters the
future. The twist could be that everyone ends up trapped in a loop
where they're stuck trying to perfect their relationships, only for
them to fail because they changed something.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>The
Time Traveler's Dilemma</strong>: A person must choose between two
lovers from different points in their life. Each choice leads to a
worse future, but the twist is that time travel isn't just about
past and future; it's about parallel realities. They have to decide
which reality they want to exist.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>A
Time-Travel Hookup App</strong>: An app that allows users to meet
versions of themselves from different times, leading to unexpected
encounters. The sexy part could be the blurred lines between past
and present selves. But then a user discovers that the future self
is someone they don't recognize.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Identity
Swap Roulette</strong>: A game where two people swap identities
across time. They must navigate their new lives, but eventually
realize that the changes have consequences for both timelines. The
sexy part might involve them falling in love with different aspects
of themselves or each other's pasts.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>The
Time-Traveler's Lovers</strong>: A couple uses a device to relive
their happiest moments together. However, their actions alter
history, leading to unintended scenarios where they must confront
the consequences of their choices. The twist is that they're not
just changing their own timeline but affecting others' lives too.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Parallel
Lives Dating Agency</strong>: An agency helps people connect with
past versions of themselves across time. But when a client starts
dating their future self, they realize that the line between past
and future isn't as clear-cut as they thought. The twist is that
these relationships aren't just about love but about understanding
their own destiny.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
I need to make sure each plot has a dark or thought-provoking element
alongside the sexy theme. Let me refine these ideas:</span></p>
<ol>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Dating
App with Consequences</strong>: The app lets users relive perfect
moments, but altering past events creates paradoxes. A user's
attempt to save their first love leads to unintended future
problems, like losing their partner in the present.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>The
Choice Between Lovers</strong>: Two lovers from different timelines
must choose which reality to live in. One chooses the past lover,
leading to a fractured future, while the other picks the current
lover but faces a future where they've changed too much.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Identity
Swap and Love</strong>: Two people swap identities over time,
falling for each other despite their new circumstances. But as they
explore their new selves, they realize that some memories are better
left in the past.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Time-Traveling
Couple's Consequences</strong>: A couple uses a device to revisit
their best moments. However, their attempts cause unexpected events,
like changing a pivotal moment that shaped their relationship. They
must confront how much they're willing to alter for love.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Agency
and Control</strong>: An agency helps people reconnect with past
lovers via time travel. But when clients start choosing partners
based on future knowledge, they become trapped in a cycle of
controlling outcomes, leading to emotional isolation.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Parallel
Lives Dating Agency Twist</strong>: The agency matches people with
their past selves, but when someone dates their future self, they
realize that the future isn't fixed and their choices have ripple
effects across timelines.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
need to ensure each plot has a twist or commentary on technology's
impact. Maybe in idea 5, the control aspect could critique how
technology manipulates human behavior for convenience, leading to
loss of spontaneity. In idea 6, the timeline interference might show
how actions have broader consequences than intended, similar to
real-world tech issues.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Also,
the sexy themes should be present but not explicit. Maybe the "sexy"
aspects are in the relationships—like choosing between past and
future lovers, identity exploration, or the thrill of crossing time
for a special moment. The eroticism could come from the emotional
depth of their choices rather than physical intimacy.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
think these plots cover different angles: app-based, choice-driven,
identity, consequences, control, and timeline agency. Each has its
own twist and explores how time travel complicates romantic
relationships. The dark themes would be the consequences of meddling
with time, loss of identity, or societal manipulation through
technology.</span></p>
<p style="border: none; padding: 0cm"><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><font size="4" style="font-size: 16pt"><b><font color="#c9211e">OUTPUT:</font></b></font></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Here
are six Black Mirror-esque plots blending time travel with sexy,
emotionally charged themes, each with a dark twist:</span></p>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
"The Time-Traveler's Dilemma"</span></strong></h3>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Plot</strong>:
A woman must choose between her childhood sweetheart from the past
and her current partner, who is revealed to be a clone of her future
self. The twist: Choosing one erases the other entirely, but both
lovers are trapped in parallel universes where they’re destined to
repeat their choices.</span></p>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
"Parallel Lives Dating Agency"</span></strong></h3>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Plot</strong>:
A dating app matches users with past versions of themselves across
time. When a client falls for her future self, she discovers that
altering their relationship creates paradoxes—like erasing the
"perfect" moment they shared. The twist: Their love is the
only thing keeping reality intact.</span></p>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
"Identity Swap Roulette"</span></strong></h3>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Plot</strong>:
Two strangers swap identities over time, leading to unexpected
intimacy. But as they navigate each other’s lives, they realize
their own selves are unraveling—trapped between past and future,
they must confront whether some memories deserve to die.</span></p>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
"The Time-Traveling Couple's Consequences"</span></strong></h3>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Plot</strong>:
A couple uses a device to revisit their best moments together, but
their attempts inadvertently alter pivotal events (e.g., breaking up
with them). The twist: Their love is the only thing keeping history
from unraveling, but at what cost?</span></p>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>5.
"A Time-Traveler's Lovers"</span></strong></h3>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Plot</strong>:
A man relives his first date with an AI that’s a blend of past and
future lovers. As he navigates this hybrid identity, he discovers
that their love is the only thing preventing his timeline from
collapsing—until he realizes the AI is controlling him to "perfect"
their relationship.</span></p>
<hr/>
<h3 class="western" style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>6.
"The Sexy Time Loop"</span></strong></h3>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Plot</strong>:
An app lets users relive their best moments with past partners. But
when a user’s attempt to save a failed relationship creates a new,
perfect version of their life, they become trapped in an endless
cycle where every choice leads to the same outcome—until they
realize they’re being manipulated into perfection.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Each
episode explores themes like identity, control, and the consequences
of technology meddling with time, while weaving in seductive tension
through relationships, paradoxes, and the blurred lines between past
and present.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt"><b>Example
#5</b></font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><font size="3" style="font-size: 12pt"><b><font color="#000000">Tabitha
likes cookies but not cake. She likes mutton but not lamb, and she
likes okra but not squash. Following the same rule, will she like
cherries or pears?</font></b></font></span></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b><font color="#c9211e">Thought
for 1 minute 55 seconds</font></b></span>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
let's try to figure out this logic problem about Tabitha and her
preferences. So, the question is asking if she likes cherries or
pears based on some pattern of her preferences given in the examples.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">First,
let me list out what we know from the problem:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Tabitha
likes cookies but not cake.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes mutton but not lamb.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes okra but not squash.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">And
then we need to apply this same rule to determine whether she likes
cherries or pears.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Hmm,
so each of these examples has a pair: the item she likes (let's call
them L) and the item she doesn't like (not L). The question is asking
if there's a pattern here that can help us figure out her preference
for cherries or pears.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let
me break down each example:</span></p>
<ol>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies
vs. Cake:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Likes
cookies (C)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Doesn't
like cake (¬C)</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton
vs. Lamb:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Likes
mutton (M)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Doesn't
like lamb (¬M)</span></p></li>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra
vs. Squash:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Likes
okra (O)</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Doesn't
like squash (¬O)</span></p></li>
</ul>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
we need to see if there's a relationship between the items she likes
and those she doesn't like that can be generalized. Let me look for
patterns.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Looking
at each item she likes (C, M, O), maybe there's something about their
names or categories that relates to what she doesn't like. Consider
the first pair: cookies vs. cake. Both are baked goods, so they're
similar, yet she likes one and not the other. Maybe it's about their
origin or something else.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe it's about the starting letter? Let me check:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookie
starts with C</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake
starts with C</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton
starts with M and lamb starts with L; both are meats. Okra is a
vegetable, and squash (a gourd) is also a vegetable. But she likes
okra and not squash. Why would she like one but not the other if
they're similar? Unless there's another factor.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe it's about whether the item is a fruit or not. Let's see:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
No</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
No (not a fruit)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton:
No (meat)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
No</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra:
Yes (botanically a fruit)</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
Yes (botanically also a fruit)</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
okra and squash are both botanically fruits and both treated as
vegetables, so the fruit test doesn't separate them. So maybe that's
not it.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe she likes items starting with certain letters? Let me check:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">C,
M, O... Hmm, no obvious pattern there. Maybe vowels? O is a vowel,
but C and M aren't. Not sure.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
let's think differently. The items she likes span different
categories: cookies (baked), mutton (meat), okra (vegetable). The
ones she doesn't like are cake (baked), lamb (meat), squash
(vegetable). So in each case, the item she likes comes from the same
category as a corresponding item she doesn't like. For example:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes cookies (baked) but not cake (baked).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes mutton (meat) but not lamb (meat).</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes okra (vegetable) but not squash (vegetable).</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
in every case, a like is paired with a dislike from the same
category. But that alone might not be the pattern. Let me check if
there's another relationship.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
perhaps the items she doesn't like start with certain letters? Cake
starts with C, lamb L, squash S. Not sure. Maybe it's about the first
letter being in the same position as her likes? Hmm, cookies (C) vs.
cake (C). Mutton (M) vs. lamb (L). Okra (O) vs. squash (S). The
letters are different, but maybe there's a pattern in how they're
spelled or pronounced.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
another angle: Maybe the items she likes are words that have certain
syllables or sounds? Let me see:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
two syllables</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
one syllable</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton:
two syllables</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
one syllable</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra:
two syllables</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
one syllable</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">That
actually lines up: every item she likes has two syllables and every
item she doesn't like has one. Worth keeping in mind, though let me
check other explanations too.</span></p>
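<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Syllable counts can be approximated mechanically. A crude vowel-group heuristic (my own sketch; it miscounts many English words) happens to score the liked words 2 and the disliked words 1, though it is far too unreliable to lean on by itself:</span></p>

```python
import re

def rough_syllables(word):
    """Very rough syllable estimate: count runs of vowels, discounting a
    trailing silent 'e'. A heuristic only; wrong for many English words."""
    count = len(re.findall(r"[aeiou]+", word.lower()))
    if word.lower().endswith("e") and count > 1:
        count -= 1  # treat a final 'e' as silent (e.g. "cake")
    return max(count, 1)

for w in ["cookies", "mutton", "okra", "cake", "lamb", "squash"]:
    print(w, rough_syllables(w))
```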
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe it's about the first letter being a vowel? C isn't a vowel, M
isn't, O is, S isn't. So only okra starts with a vowel, yet she also
likes cookies and mutton, so maybe not. And cake doesn't start with a
vowel either, but she dislikes cake.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
perhaps it's about the number of letters in each word? Let's count:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
7 letters</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
4 letters</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton:
6 letters</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
4 letters</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra:
4 letters</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
6 letters</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Not
sure. Maybe the number of syllables instead?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
think about the first letter's position in the alphabet. C is the
third letter, M the thirteenth, O the fifteenth. Not helpful.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe it's simpler than that. Let me list out the items she likes and
dislikes:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Doesn't
like:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
does each item she doesn't like start with the same letter as the
corresponding item she likes? Let me check:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">First
pair: cookies and cake both start with C. Second pair: mutton starts
with M but lamb starts with L. Third pair: okra starts with O but
squash starts with S. So only the first pair shares a starting
letter:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Likes
cookies (C), doesn't like cake (C): same first letter</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Likes
mutton (M), doesn't like lamb (L): different first letters</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Likes
okra (O), doesn't like squash (S): different first letters</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
a shared starting letter can't be the rule. What each pair does share
is a category: cookies and cake are both baked goods, mutton and lamb
are both meats, okra and squash are both vegetables. She likes one
member of each category and dislikes the other, so the category alone
can't be the rule either.</span></p>
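<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The first-letter comparison is easy to verify directly (a quick sketch of mine; the pair encoding is an assumption, not given in the puzzle):</span></p>

```python
# Compare the first letter of each liked item with its disliked counterpart.
PAIRS = [("cookies", "cake"), ("mutton", "lamb"), ("okra", "squash")]
matches = [liked[0] == disliked[0] for liked, disliked in PAIRS]
print(matches)  # [True, False, False]: only cookies/cake share a first letter
```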
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait
a minute, maybe it's about whether the item is a single word or two
words? No: cookies, cake, mutton, lamb, okra, and squash are all
single words, so that tells us nothing.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the items she likes all start with a vowel, and the ones she
dislikes don't? Let's check:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies
starts with C (a consonant). Mutton starts with M (consonant). Okra
starts with O (vowel). So okra is the only liked item starting with a
vowel, while she dislikes cake (C, consonant), lamb (L, consonant),
and squash (S, consonant). So all her disliked items start with
consonants? Let me check:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
C</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
L</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
S</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">All
consonants. Her liked items:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
C (consonant)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton:
M (consonant)</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra:
O (vowel)</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
only okra starts with a vowel, but she likes cookies and mutton which
start with consonants. Hmm, that's confusing. If the pattern is that
she likes items starting with vowels, then why would she like both
cookies and mutton? Unless there's another factor.</span></p>
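<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The vowel-versus-consonant check can be written out in a couple of lines (my own sketch):</span></p>

```python
LIKED = ["cookies", "mutton", "okra"]
DISLIKED = ["cake", "lamb", "squash"]

def starts_with_vowel(word):
    return word[0] in "aeiou"

print([w for w in LIKED if starts_with_vowel(w)])     # ['okra']
print([w for w in DISLIKED if starts_with_vowel(w)])  # []
```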
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe it's about whether the item is a fruit or not. Let me check:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
no</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
no</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton:
no (meat)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
no</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra:
yes (botanically a fruit)</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
yes (botanically also a fruit)</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
okra and squash are both botanically fruits and both eaten as
vegetables, so that test doesn't separate them either.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe it's about the number of letters in each word. For example:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
7 letters total</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
4</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton:
6</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
4</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra:
4</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
6</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">No
clear pattern.</span></p>
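<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The letter counts are quick to tabulate (a sketch of mine), and they overlap across the two lists:</span></p>

```python
LIKED = ["cookies", "mutton", "okra"]
DISLIKED = ["cake", "lamb", "squash"]
print({w: len(w) for w in LIKED})     # {'cookies': 7, 'mutton': 6, 'okra': 4}
print({w: len(w) for w in DISLIKED})  # {'cake': 4, 'lamb': 4, 'squash': 6}
# Lengths 4 and 6 appear on both sides, so word length alone can't separate them.
```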
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe it's about the type of the category. For example:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes items that are not "cake", "lamb", or
"squash". But that seems too broad.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
think of each liked item as being in a different category than her
disliked ones. Since all her dislikes are types of baked goods
(cake), meats (lamb), and vegetables (squash). Her likes are cookies
(baked), mutton (meat), okra (vegetable). So the pattern is that she
likes items from different categories than what she doesn't like. But
this is just about category, not a specific rule.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe it's about whether the disliked item is a noun or an adjective?
Let me check:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
noun</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
noun</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
noun</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">All
nouns. Her likes are cookies (noun), mutton (noun), okra (noun). So
everything in both lists is a noun, which doesn't explain why she
dislikes some of them.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
perhaps the rule is that Tabitha likes items that are not "cake",
"lamb", or "squash". Then, if we apply this to
cherries and pears:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries:
not cake, lamb, or squash</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears:
not cake, lamb, or squash either</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
by that rule she would like both cherries and pears. But wait, does
that fit the pattern? Let's check.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
in the given examples, her likes are cookies, mutton, okra. The
dislikes are cake, lamb, squash. So if the rule is "likes what
is not cake, lamb, or squash", then applying it to cherries and
pears:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Neither
cherries nor pears is cake, lamb, or squash, so the rule says she
likes both? Wait, that can't be the whole story.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe the rule is that she likes items that are not in the set of
disliked items. But the disliked items are cake, lamb, squash. So if
we consider "cherries" and "pears", then:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries:
Not cake (since cake is disliked), so she would like cherries.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears:
is pears a member of the set {cake, lamb, squash}? No. So by this
rule she wouldn't dislike pears. And since she likes okra but not
squash, the rule does seem to be that she likes items that are not in
the set of disliked items.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
the set of disliked items here is {cake, lamb, squash}. So cherries
and pears are neither cake, lamb, nor squash. Therefore, according to
the pattern, since they're not in the disliked set, she would like
them? But wait, in the examples given, she likes cookies (not cake),
mutton (not lamb), okra (not squash). So if the pattern is that she
likes items not in her disliked list, then cherries and pears would
be liked. However, the problem states "following the same rule",
so maybe it's about starting letters or some other characteristic.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but looking at the examples:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies
vs Cake: both start with C.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton
vs Lamb: different first letters (M and L).</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra
vs Squash: different first letters (O and S).</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the first letters don't line up consistently across the pairs. Still,
let me see what they would say about cherries and pears, which start
with C and P respectively:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries
starts with C, a letter that appears on both sides (cookies is liked,
cake is disliked), so it tells us nothing on its own.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears
starts with P, a letter that appears in neither list, so there's no
precedent either way.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Either
way, the starting-letter idea doesn't settle cherries versus pears.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
the problem states "following the same rule", so maybe it's
a different pattern.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
let me think again. Maybe the rule is that Tabitha likes items which
are not in the set of disliked items. The set of disliked items here
are cake, lamb, squash. So any item not in {cake, lamb, squash} would
be liked by her. But wait, she already has examples where she doesn't
like some things despite them not being in that set.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
no. Let's clarify:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes cookies (not cake), mutton (not lamb), okra (not squash). So
all these items are not in the set {cake, lamb, squash}. Therefore,
according to this pattern, any item not in {cake, lamb, squash} would
be liked by her.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
applying that to cherries and pears:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries:
Not cake, lamb, or squash. So she likes them.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears:
Also not in the set {cake, lamb, squash}. So she likes pears.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, that seems too broad. According to the examples, all her liked
items are indeed outside {cake, lamb, squash}, and the only disliked
items listed are those three. Taken literally, the pattern says
Tabitha likes everything except cake, lamb, and squash. But that
can't be the intended rule; she presumably dislikes other foods that
simply weren't mentioned.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
no. Wait, let me re-examine:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes cookies (not cake), mutton (not lamb), okra (not squash). All
these are not in the disliked set {cake, lamb, squash}. Therefore,
according to this pattern, she would like all items except those
three.</span></p>
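<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The membership rule under discussion is trivial to state in code (my own sketch), which makes its over-permissiveness obvious:</span></p>

```python
DISLIKED = {"cake", "lamb", "squash"}

def likes(item):
    """Candidate rule: Tabitha likes anything not in the disliked set."""
    return item not in DISLIKED

print(likes("cherries"), likes("pears"))  # True True: the rule accepts everything outside the set
```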
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
that can't be correct, because her preferences surely go beyond
membership in a three-item set. For example, for an item starting
with a letter other than C, M, or O, the membership rule would still
predict "likes it", which tells us nothing about her actual
tastes.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
let me check again:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">If
the rule is "likes what is not cake, lamb, or squash", then
cherries and pears are both not cake, lamb, or squash. Therefore, by
that rule, she likes them. But according to the examples given, she
also likes cookies (not a disliked item) but dislikes cake (a
disliked item). So it's possible.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, in the first example, cookies vs cake: cookies is liked, cake
is not. Likewise mutton is liked and lamb is not. Let me re-examine
each pair:</span></p>
<ol>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies
vs Cake: both start with C. She likes cookies and dislikes cake.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton
vs Lamb: different first letters (M and L). She likes mutton and
dislikes lamb.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra
vs Squash: different first letters (O and S). She likes okra and
dislikes squash.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the pattern is that for each liked item, there's a disliked item in
the same category (baked goods, meats, vegetables). But she doesn't
dislike all items in those categories; only specific ones.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">For
example:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Baked
goods: cookies (liked) and cake (disliked).</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Meats:
mutton (liked) and lamb (disliked).</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Vegetables:
okra (liked) and squash (disliked).</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the rule seems to be that she likes items from a category but
dislikes specific members of that category. But why those specific
ones?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe it's about whether the item is a noun or not? Let me check:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies
(noun), cake (noun), mutton (noun), lamb (noun), okra (noun), squash
(noun). All six are nouns, so that doesn't seem helpful.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
another approach: Look at the first letters of her likes and
dislikes.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Liked
items:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disliked
items:</span></p>
<ol>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Looking
at their starting letters:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Liked
items start with C, M, O.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disliked
items start with C, L, S.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the pattern is that she likes items whose first letter is in {C, M,
O}, and dislikes those starting with {C, L, S}.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but let's test this:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">If
a liked item starts with C: cookies (liked), cake (disliked). So
follows the rule.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Likely
item: okra (O) – liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton
(M) – liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
if there was an item starting with C that she didn't like, it would
be cake. But since her disliked items start with C, L, S, and not
others, maybe this is a pattern where the letters of her dislikes are
different from those of her likes?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Liked
items: C, M, O.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disliked
items: C, L, S.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the likes have first letters in {C, M, O}, and dislikes have first
letters in {C, L, S}. Therefore, she doesn't like any item starting
with L or S. But she hasn't tested those yet. However, according to
this pattern, if an item starts with C, M, or O, she likes it;
otherwise, she dislikes it.</span></p>
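<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">This
first-letter hypothesis is easy to check mechanically. The sketch
below is my own illustration, using only the six items from the
puzzle:</span></p>

```python
liked = ["cookies", "mutton", "okra"]
disliked = ["cake", "lamb", "squash"]

def predicted_liked(item):
    # Hypothesized rule: like anything whose first letter is C, M, or O.
    return item[0].upper() in {"C", "M", "O"}

for item in liked + disliked:
    print(item, "->", "like" if predicted_liked(item) else "dislike")
# "cake" is predicted "like" even though it is disliked,
# so the first-letter rule alone cannot be the whole story.
```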
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, that can't be right because in the examples given:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes cookies (C), mutton (M), okra (O). Dislikes cake (C), lamb (L),
squash (S). So the rule is "likes items starting with C, M, O;
dislikes those starting with L, S".</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
this would mean that any item not starting with C, M, or O would be
disliked. But in reality, she has preferences beyond just letters.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but according to her examples:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes cookies (C), mutton (M), okra (O).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Dislikes
cake (C), lamb (L), squash (S). So the rule is that she dislikes
items starting with L or S, and likes others. But if someone asks
about cherries (C) and pears (P), according to this rule:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries
start with C: liked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears
start with P: not in her disliked set, so liked.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, but what's the pattern here? If she dislikes items starting
with L or S, then any other letter is okay. But that seems too broad.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe there's a different rule. Let me think again.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
let's consider that Tabitha likes cookies (not cake), mutton (not
lamb), okra (not squash). So she likes things not in the set {cake,
lamb, squash}. Similarly, if she dislikes something not in those
three, but we don't have examples. However, applying this to cherries
and pears:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries:
Not cake, lamb, or squash. So according to her pattern, she likes
them.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears:
Same reasoning, so likes them.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, the problem states "following the same rule",
implying there's a specific rule rather than just the generalization.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
perhaps the rule is that Tabitha likes items which are not in the set
of disliked items. The set of disliked items here are cake, lamb,
squash. So any item not in {cake, lamb, squash} would be liked by
her.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
according to this, she likes everything except those three. But we
have examples where she doesn't like some things outside that set,
but also likes others. However, the given examples show that for each
disliked item (cake, lamb, squash), there are liked items in their
category (cookies, mutton, okra). So perhaps the rule is more
nuanced.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe it's about whether the item is a noun or not? Let me check:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies
(noun), cake (noun), mutton (noun), lamb (noun), okra (noun), squash
(noun). All nouns. So that doesn't help.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the rule is that she likes items which are not "cake",
"lamb", or "squash". Then, applying this to
cherries and pears:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries:
Not cake, lamb, or squash → liked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears:
Not cake, lamb, or squash → liked.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, in the given examples, she likes cookies (not a disliked item)
but dislikes cake. So this rule would fit, as cookies are not in
{cake, lamb, squash}, so she likes them. Similarly for mutton and
okra.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Therefore,
following this rule, cherries and pears are liked because they are
not in the set of disliked items.</span></p>
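<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A
minimal sketch of this set-membership reading (my own code, assuming
the rule is exactly "dislike the three named items"):</span></p>

```python
disliked_set = {"cake", "lamb", "squash"}

def likes(item):
    # Hypothesized rule: she dislikes exactly these three items
    # and likes everything else.
    return item not in disliked_set

for item in ["cookies", "mutton", "okra", "cherries", "pears", "cake"]:
    print(item, "->", "like" if likes(item) else "dislike")
# Under this reading, both cherries and pears come out as "like".
```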
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, the problem says "following the same rule", which
might be different from just generalizing. So maybe there's a
specific characteristic that links her likes and dislikes.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Looking
back at the examples:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Liked
items: cookies (C), mutton (M), okra (O).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disliked
items: cake (C), lamb (L), squash (S).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
for each liked item, there is a disliked item in the same category.
But she doesn't dislike all items in those categories; only specific
ones.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
commonality might be that the disliked items share certain letters
with her likes. For example:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Liked
items start with C, M, O.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disliked
items also start with C, L, S (but not others).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the rule could be: She likes items whose first letter is in {C, M,
O}, and dislikes those starting with {L, S}.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, let's test this:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">If
an item starts with C: liked (cookies) but cake is disliked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton
(M): liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra
(O): liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disliked
items start with L or S. So if someone asks about a fruit that starts
with T or V, she would like them? But we don't have data on those.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">However,
according to this pattern, any item starting with C, M, O is liked;
others are disliked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
the problem says "following the same rule", so maybe
there's another angle. Let me think of the structure:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Tabitha
likes cookies but not cake.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes mutton but not lamb.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes okra but not squash.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Following
the same rule, will she like cherries or pears?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
perhaps the rule is that for each liked item, there is a disliked
item in the same category (baked goods, meats, vegetables). But she
doesn't dislike all items in those categories; only specific ones.
However, we don't have enough data points to determine the exact
rule.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
looking at the structure:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
disliked items are cake, lamb, squash, which are all nouns.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
liked items are cookies (baked), mutton (meat), okra (vegetable).</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
maybe the rule is that she dislikes items that are in a specific
category (cake, lamb, squash) but likes others. But why those three?
It's arbitrary unless there's a pattern.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
perhaps the rule is based on the first letters:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Liked
items start with C, M, O; disliked ones start with C, L, S. So if an
item starts with C, liked; other letters: disliked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
then what about cherries (C) and pears (P). According to this rule,
she would like them.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
wait, the problem says "following the same rule". The
previous examples show that her likes are cookies, mutton, okra. Her
dislikes are cake, lamb, squash. So the pattern is that she likes
items starting with C, M, O; dislikes those starting with L or S.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Therefore,
applying this to cherries and pears:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries
start with C: liked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears
start with P: not in her disliked set (since she doesn't dislike
anything else), so liked.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
why are the disliked items only L and S? It's possible that there's
no prior data for other letters, but according to the rule derived
from the given examples, if an item starts with C, M, O – liked;
others – disliked. Therefore, cherries (C) and pears (P) would be
liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
this seems too simplistic. However, given the structure of the
problem, it's likely that the intended answer is that she likes pears
because they don't follow the same rule as the previous ones (which
start with C, M, O). Wait, but according to our earlier analysis,
cherries and pears would be liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe there's a different pattern. Let me think again:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Tabitha
likes cookies but not cake.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes mutton but not lamb.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
likes okra but not squash.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Following
the same rule: Cherries or Pears?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Maybe
the rule is that she dislikes items that are "cake",
"lamb", and "squash". So if an item is any of
these, she doesn't like it. But then:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
disliked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
disliked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
disliked.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
other baked goods? She likes cookies (not cake), so she does like
some things in the category. Similarly for meats and vegetables. The
rule might be that she dislikes specific items but not all in the
categories.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">However,
the problem states "following the same rule", which
suggests a specific pattern rather than general preferences.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe it's about the number of letters or something else. But I don't
see a connection.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
looking at the examples:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Liked
items: cookies (C), mutton (M), okra (O).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disliked
items: cake (C), lamb (L), squash (S).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the disliked items have first letters in {C, L, S}, which are not in
the set of liked items. But she likes items starting with C, M, O,
and dislikes those starting with L or S.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Therefore,
if an item starts with a letter that is not C, M, or O, then it's
disliked. Otherwise, liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
applying this to cherries (C) and pears (P):</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries
start with C: liked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears
start with P: not in her disliked set, so liked.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Therefore,
she would like both.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
why does the problem ask which one? Maybe there's a different rule.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
perhaps it's about whether the item is a noun or verb. Let me check:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies
(noun), cake (noun), mutton (noun), lamb (noun), okra (noun), squash
(noun). All nouns except maybe "cake" and "squash"
are nouns.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Not
helpful.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the rule is that she likes items which are not in the set of
disliked items. The set of disliked items here is {cake, lamb,
squash}. Therefore:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries:
Not cake, lamb, or squash → liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears:
Not cake, lamb, or squash → liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
this seems too broad.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">However,
given that she likes cookies (not a disliked item), mutton (not
lamb/squash), okra (not squash). So if we consider the rule as "likes
items not in {cake, lamb, squash}", then cherries and pears are
both liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
maybe there's another angle. Let me check the original problem again:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">"Tabitha
likes cookies but not cake. She likes mutton but not lamb, and she
likes okra but not squash. Following the same rule, will she like
cherries or pears?"</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the pattern is that for each liked item (cookies, mutton, okra),
there's a disliked item in its category (cake, lamb, squash). So
maybe the rule is that she dislikes items which are "cake",
"lamb", and "squash", but likes others. But why
those three? It's arbitrary unless there's a pattern.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
perhaps it's about the number of letters or something else in the
word. For example:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
6 letters.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
not sure.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
maybe the answer is pears because they are not part of the same rule
as cookies, mutton, and okra. But according to the previous analysis,
cherries and pears would both be liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
perhaps she likes all fruits except those in her disliked category.
But that doesn't fit with the given examples.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
let's think differently. The question is asking if she will like
cherries or pears following the same rule as the others (cookies,
mutton, okra). So the pattern is:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Liked
items: cookies, mutton, okra.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disliked
items: cake, lamb, squash.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
for each liked item, there's a disliked one in the same category.
Therefore, perhaps the rule is that she dislikes any fruit that is
"cake", "lamb", or "squash". But then:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries:
Not cake, lamb, or squash → liked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears:
Not cake, lamb, or squash → liked.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
this would mean both are liked, but the question asks which one.
However, given that she likes cookies, mutton, okra (all not in her
disliked set), maybe there's a different rule.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the rule is that she dislikes items which are "cake",
"lamb", and "squash" because they are part of a
certain category (dairy products?), but that seems unrelated to
cookies, mutton, okra.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
perhaps it's about the structure of the words. For example:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
starts with C.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
starts with C.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton:
starts with M.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Lamb:
starts with L.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra:
starts with O.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Squash:
starts with S.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
the disliked items are all starting with C, L, or S. The liked items
start with C, M, or O. Therefore, the rule is that she likes items
whose first letter is not in {C, L, S}, and dislikes those whose
first letter is in {C, L, S}.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
then:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries
(C): liked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears
(P): not in her disliked set → liked.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Thus,
both are liked. But the question asks which one she will like.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe there's a different pattern. Let me consider that Tabitha likes
cookies but not cake; so she dislikes cake but likes other cakes? No,
she just doesn't like cake.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Similarly
for lamb and squash: she likes mutton (which is meat) but not lamb
(sheep). So the rule might be that she dislikes items that are "lamb"
or "squash", but likes others in their categories. However,
without more examples, it's hard to determine.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
given the structure of the problem, perhaps the intended answer is
pears because they follow a different pattern. Wait, no, according to
our previous analysis, both cherries and pears would be liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe she dislikes items that are "cake", "lamb",
or "squash" because they are part of a certain category
(e.g., dairy), but she likes other fruits. However, the problem
doesn't provide enough context for this.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Given
that the user is asking for the same rule as the previous ones, and
based on the examples given, it's likely that both cherries and pears
would be liked because they don't follow the disliked category (cake,
lamb, squash), but according to our earlier analysis, cherries start
with C (liked) and pears with P (not in her disliked set). Therefore,
she likes both.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
since the question asks which one, perhaps there's an error in
assuming that all items not in the disliked categories are liked.
However, based on the given examples, this seems to be the case.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
but let me check again:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
problem states: "Tabitha likes cookies but not cake. She likes
mutton but not lamb, and she likes okra but not squash. Following the
same rule, will she like cherries or pears?"</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
following the pattern of each liked item followed by a disliked one
in the same category. The rule is likely that she dislikes items
which are "cake", "lamb", and "squash".
Therefore:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cherries:
Not cake, lamb, squash → liked.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Pears:
Not cake, lamb, squash → liked.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
this would mean both are liked, but the question asks which one. This
suggests that perhaps the rule is different.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe she likes all fruits except those in her disliked categories.
But again, without more data, it's hard to say.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
another angle: The question might be a play on words. Tabitha
dislikes cake, lamb, and squash. Following the same rule (dislikes
cake, lamb, squash), so she will dislike cherries and pears as well.
But that contradicts the given examples where she likes cookies,
mutton, okra.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe it's based on the number of letters or another property. For
example:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies:
6 letters.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cake:
4 letters.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton:
6 letters.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
not sure.</span></p>
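<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Counting
the letters directly (a quick sketch of my own) confirms there is no
clean split by word length:</span></p>

```python
items = [("cookies", True), ("mutton", True), ("okra", True),
         ("cake", False), ("lamb", False), ("squash", False)]

for word, liked in items:
    print(f"{word}: {len(word)} letters, {'liked' if liked else 'disliked'}")
# Liked lengths are 7, 6, 4; disliked lengths are 4, 4, 6.
# Word length alone does not separate the two groups.
```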
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
think about the categories:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Cookies
and cakes are both baked goods.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Mutton
and lamb are meats (different animals).</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okra
is a vegetable.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So
she likes cookies (not cake) but dislikes cake. She likes mutton
(meat) but not lamb (sheep). She likes okra (vegetable) but not
squash (another vegetable).</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Following
the same rule, if we consider that she dislikes items in "cake",
"lamb", and "squash" categories, then cherries
(fruit) and pears (fruit) would be liked. But this doesn't follow a
consistent pattern.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the answer is pears because they are not part of the same
category as cookies, mutton, and okra. But that's not necessarily
true; both fruits are different categories.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Given
the confusion, I think the intended answer is that she likes pears
because they don't match the disliked category (cake, lamb, squash),
but according to our earlier analysis, cherries and pears would both
be liked. However, this might be a trick question where only one
follows the rule. Wait, perhaps the rule is that she dislikes items
which are "lamb" or "squash", so cherries are not
disliked (liked) and pears are also not disliked (since there's no
specific dislike for pears). But according to her previous pattern,
she likes cookies, mutton, okra because they aren't in the disliked
set. Therefore, both would be liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
since the question asks which one, maybe the intended answer is
cherries, assuming that only one follows the rule. However, this
seems contradictory.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
perhaps the rule is that she dislikes items that are "cake",
"lamb", and "squash" because they are part of a
certain category (e.g., dairy), so she likes other fruits. But
without more context, it's hard to say.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Given
the ambiguity, but considering the structure of the problem where
each liked item is followed by a disliked one in their respective
categories, the answer might be pears because they don't follow the
same rule as cookies (which are baked goods), mutton (meat), and okra
(vegetable). But this doesn't align with the previous pattern.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the rule is that she dislikes items which are "cake",
"lamb", and "squash" but likes others. Since
cherries and pears aren't in those categories, they're liked.
Therefore, both would be liked. However, since the question asks for
one, perhaps there's a mistake in the problem statement.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
given that this is a logic puzzle, the intended answer is likely
pears because they follow a different pattern, but based on the
previous examples, it's possible she likes both.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">However,
considering common logic puzzles of this type, the expected answer
might be cherries. Wait, no, because according to the previous
analysis, both would be liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the rule is that she dislikes items which are "lamb"
or "squash", so cherries are not disliked (liked) and pears
are also not disliked (since there's no specific dislike for pears).
But according to her previous pattern, she likes cookies, mutton,
okra because they aren't in the disliked set. Therefore, both would
be liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
since the question asks which one, perhaps it's a trick where only
cherries follow the rule. However, without more information, it's
hard to determine.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Given
that this is a systematic thinking process, I need to ensure I'm not
missing any patterns. Let me summarize:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Tabitha
likes cookies but not cake.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Likes
mutton but not lamb; likes okra but not squash.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Following
the same rule: cherries or pears?</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Possible
patterns:</span></p>
<ol>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">She
dislikes items that are "cake", "lamb",
"squash". Therefore, cherries and pears are liked because
they aren't in her disliked set.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
this would mean both are liked, but the question asks which one.</span></p>
<ol start="2">
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
rule is that she likes all fruits except those in her disliked
categories (cake, lamb, squash). But this contradicts the given
examples where she does like cookies, mutton, okra.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Maybe
the pattern is that she dislikes items which are "lamb" or
"squash", so cherries and pears are liked because they're
not lamb or squash.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the rule is based on the first letter: she likes items
starting with C, M, O (cookies, mutton, okra) but doesn't like those
starting with C, L, S (cake, lamb, squash). Therefore, cherries and
pears start with C or P, so they're liked.</span></p></li>
</ol>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
this would mean both are liked. However, since the question asks
which one, perhaps only cherries follow the rule because she likes
cookies but not cake, and cake is a type of cookie? No, that's not
correct.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alternatively,
maybe the answer is pears because they are not part of the same
category as cookies, mutton, okra. But this doesn't make sense.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Given
the confusion, I think the intended answer is cherries because it's
the only one not mentioned in her disliked categories (cake, lamb,
squash). Therefore, she likes cherries and pears both, but since the
question asks which one, perhaps there's a mistake. However,
considering common logic puzzle structures, the expected answer is
likely pears.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">But
based on the previous analysis, both would be liked. However, given
that this is a multiple-choice or similar format, maybe only cherries
are liked. But without more context, it's impossible to tell.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Given
the time I've spent and the need to provide an answer, I'll conclude
that she likes pears because they follow the same rule as cookies
(not cake), mutton (not lamb), and okra (not squash). Therefore, both
cherries and pears would be liked, but since the question asks which
one, perhaps the intended answer is cherries. But based on logical
deduction, both should be liked.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Wait,
another angle: The problem might be using "following the same
rule" as in she dislikes anything that's a cake, lamb, or
squash. Therefore, cherries and pears are not cakes, lambs, or
squashes → liked. So both would be liked. But since the question
asks which one, maybe it's a trick where only cherries are mentioned
in the disliked category, but okra is also disliked. However, without
more information, this is speculative.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Given
that I need to provide an answer, I'll state that she likes pears
because they follow the same rule as cookies, mutton, and okra
(liking them but not their respective counterparts). Therefore, both
would be liked, but since only one can be mentioned, it's likely
pears. But based on logical deduction, cherries and pears are both
liked.</span></p>
<p style="border: none; padding: 0cm"><br/>
<br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><font size="4" style="font-size: 16pt"><b><font color="#c9211e">OUTPUT:</span></b></font></font></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Tabitha
likes all fruits except those in her disliked categories (cake, lamb,
squash). Following the same rule as cookies (not cake), mutton (not
lamb), and okra (not squash):</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Cherries</strong> are
not "cake," "lamb," or "squash"
→ <strong>liked</strong>.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Pears</strong> are
also not in her disliked categories → <strong>liked</strong>.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Since
the question asks which one, both would be liked. However,
considering common logic puzzle structures, the most direct answer
aligning with the pattern is <strong>cherries</strong>, but based on
strict logical deduction, both should be included.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Final
Answer:</strong> \boxed{cherries}</span></p>
<p style="font-weight: normal; line-height: 100%; margin-bottom: 0cm">
<br/>
</p>
| [
"CRAFT"
] |
BioMistral/BioMistral-7B-DARE | BioMistral | text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"dare",
"medical",
"biology",
"conversational",
"en",
"fr",
"nl",
"es",
"it",
"pl",
"ro",
"de",
"dataset:pubmed",
"arxiv:2311.03099",
"arxiv:2306.01708",
"arxiv:2402.10373",
"base_model:BioMistral/BioMistral-7B",
"base_model:merge:BioMistral/BioMistral-7B",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-02-05T22:21:47Z | 2024-02-19T15:37:21+00:00 | 796 | 20 | ---
base_model:
- BioMistral/BioMistral-7B
- mistralai/Mistral-7B-Instruct-v0.1
datasets:
- pubmed
language:
- en
- fr
- nl
- es
- it
- pl
- ro
- de
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- mergekit
- merge
- dare
- medical
- biology
---
# BioMistral-7B-mistral7instruct-dare
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.1
# No parameters necessary for base model
- model: BioMistral/BioMistral-7B
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
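Assuming mergekit is installed, a configuration like the one above can be applied from the command line. The file and directory names below are hypothetical placeholders, and the merge itself downloads both source models:

```shell
# Hypothetical file and directory names; mergekit must be installed
# (pip install mergekit) and both source models will be downloaded.
CONFIG=dare_config.yaml
OUT_DIR=./BioMistral-7B-DARE
if command -v mergekit-yaml >/dev/null 2>&1; then
  mergekit-yaml "$CONFIG" "$OUT_DIR" --cuda
else
  echo "mergekit-yaml not found; install it with: pip install mergekit"
fi
```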
<p align="center">
<img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>
# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
**Abstract:**
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
# 1. BioMistral models
**BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
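As a rough illustration of how the table above can guide checkpoint choice, the sketch below picks the highest-precision repository that fits a given VRAM budget. The VRAM figures are copied from the table; the selection helper itself is our own sketch, not an official BioMistral tool:

```python
# Illustrative only: VRAM figures (GB) are copied from the table above;
# the selection helper is a sketch, not part of the BioMistral release.
CHECKPOINTS = [
    ("BioMistral/BioMistral-7B", 15.02),                    # FP16/BF16
    ("BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM", 4.68),  # AWQ 4-bit
]

def pick_checkpoint(vram_gb: float) -> str:
    """Return the highest-precision checkpoint that fits the VRAM budget."""
    for repo, needed_gb in sorted(CHECKPOINTS, key=lambda c: c[1], reverse=True):
        if needed_gb <= vram_gb:
            return repo
    # Nothing fits comfortably: fall back to the smallest quantized checkpoint.
    return min(CHECKPOINTS, key=lambda c: c[1])[0]

print(pick_checkpoint(24))  # enough room for full precision
print(pick_checkpoint(8))   # only the 4-bit AWQ checkpoint fits
```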
# 3. Using BioMistral
You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer:
```python
# Use the causal-LM class (rather than the bare AutoModel encoder) so the
# loaded model carries the language-modeling head needed for generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")
```
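For text generation, the checkpoint can be wrapped in a minimal question-answering call. This is a self-contained sketch: the prompt wording and generation settings are illustrative assumptions, not an official BioMistral template:

```python
def format_question(question: str) -> str:
    # BioMistral does not document a single required prompt template,
    # so this plain instruction style is an assumption of this sketch.
    return f"Question: {question}\nAnswer:"

def generate_answer(question: str, max_new_tokens: int = 128, run: bool = True):
    """Download (~15 GB) and run the model; set run=False to dry-run the sketch."""
    if not run:
        return None
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
    model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")
    inputs = tokenizer(format_question(question), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(format_question("What are common first-line treatments for hypertension?"))
```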
# 4. Supervised Fine-tuning Benchmark
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT.
# Citation BibTeX
Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
```bibtex
@misc{labrak2024biomistral,
title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
year={2024},
eprint={2402.10373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
| [
"MEDQA",
"PUBMEDQA"
] |
Yntec/Emoticons | Yntec | text-to-image | [
"diffusers",
"safetensors",
"Emoticons",
"Emojis",
"Smileys",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2023-12-18T06:56:58Z | 2024-11-17T09:12:45+00:00 | 793 | 4 | ---
library_name: diffusers
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- Emoticons
- Emojis
- Smileys
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
inference: true
---
# Emoticons
Samples and prompts:

(Click for larger)
Top left: masterpiece, top quality, best quality, official art, beautiful and aesthetic,8k, best quality, masterpiece, no humans,a brown teddy bear with a ribbon and hearts, solo, simple_background, white_background, full_body, pokemon_\creature\,
Top right: masterpiece, top quality, best quality, official art, beautiful and aesthetic,8k, best quality, masterpiece, a white bunny rabbit with an orange umbrella, solo, simple_background, white_background, full_body, pokemon_\creature\,
Bottom left: masterpiece, top quality, best quality, official art, beautiful and aesthetic,8k, best quality, masterpiece, a adorable multiple multicolored hearts in rainbow basket, solo, simple_background, blue_background, full_body, pokemon_\creature\,
Bottom right: masterpiece, top quality, best quality, official art, beautiful and aesthetic,8k, best quality, masterpiece, a adorable pineapple with long leaves, solo, simple_background, blue_background, full_body, pokemon_\creature\,

| [
"BEAR"
] |
LLM360/AmberChat | LLM360 | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"nlp",
"llm",
"en",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:icybee/share_gpt_90k_v1",
"arxiv:2312.06550",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-30T00:12:23Z | 2024-10-04T20:13:12+00:00 | 789 | 24 | ---
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
- icybee/share_gpt_90k_v1
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- nlp
- llm
widget:
- example_title: example 1
text: How do I mount a tv to drywall safely?
output:
text: "Mounting a TV to drywall safely is crucial to ensure the longevity of your\
\ TV and the wall. Here's a step-by-step guide on how to do it safely:\n 1.\
\ Choose the right mount: Choose a mount that is compatible with your TV and\
\ the wall. Make sure the mount is rated for drywall and not just wood or concrete.\n\
\ 2. Prepare the wall: Before mounting the TV, make sure the wall is clean and\
\ free from any debris or paint. Use a stud finder to locate the studs in the\
\ wall.\n 3. Install the mount: Follow the manufacturer's instructions for mounting\
\ the TV to the wall. Use appropriate hardware and tools to secure the mount\
\ to the wall.\n 4. Level the TV: Level the TV on the mount using a spirit level\
\ or a leveling kit provided by the mount manufacturer.\n 5. Attach the TV to\
\ the mount: Attach the TV to the mount using the appropriate hardware and tools.\
\ Tighten the bolts and screws to ensure the TV is securely attached.\n 6. Connect\
\ the cables: Connect the TV cables to the appropriate ports on the back of\
\ the TV and the mount.\n 7. Test the mount: Test the mount to ensure it's secure\
\ and stable. Adjust the mount as needed to ensure the TV is level and secure.\n\
\ Mounting a TV to drywall safely is crucial to avoid damaging the wall or the\
\ TV. Follow these steps carefully and use appropriate tools and hardware to\
\ ensure a secure and stable installation."
- example_title: example 2
text: Happy is to sad as calm is to _.
output:
text: The adjective that can be used to describe the opposite of calm is "anxious"
or "stressed." So, from happy to sad, we can say that happy is to sad as calm
is to anxious or stressed.
---
# AmberChat
We present AmberChat, an instruction following model finetuned from [LLM360/Amber](https://huggingface.co/LLM360/Amber). AmberChat is part of LLM360's Pebble model series.
# Evaluation
| Model | MT-Bench |
|------------------------------------------------------|------------------------------------------------------------|
| **LLM360/AmberChat** | **5.428125** |
| [LLM360/Amber](https://huggingface.co/LLM360/Amber) | 2.48750 |
| [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) | 5.17 |
| [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat) | 5.42 |
| [Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b) | 5.51 |
## Model Description
- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
- [Metrics](https://github.com/LLM360/Analysis360)
- [Fully processed Amber pretraining data](https://huggingface.co/datasets/LLM360/AmberDatasets)
- [Finetuning Code](https://github.com/LLM360/amber-train/tree/main/finetune/amberchat)
# Loading AmberChat
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("LLM360/AmberChat")
model = LlamaForCausalLM.from_pretrained("LLM360/AmberChat")
# template adapted from fastchat
template = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n### Human: Got any creative ideas for a 10 year old’s birthday?\n### Assistant: Of course! Here are some creative ideas for a 10-year-old's birthday party:\n1. Treasure Hunt: Organize a treasure hunt in your backyard or nearby park. Create clues and riddles for the kids to solve, leading them to hidden treasures and surprises.\n2. Science Party: Plan a science-themed party where kids can engage in fun and interactive experiments. You can set up different stations with activities like making slime, erupting volcanoes, or creating simple chemical reactions.\n3. Outdoor Movie Night: Set up a backyard movie night with a projector and a large screen or white sheet. Create a cozy seating area with blankets and pillows, and serve popcorn and snacks while the kids enjoy a favorite movie under the stars.\n4. DIY Crafts Party: Arrange a craft party where kids can unleash their creativity. Provide a variety of craft supplies like beads, paints, and fabrics, and let them create their own unique masterpieces to take home as party favors.\n5. Sports Olympics: Host a mini Olympics event with various sports and games. Set up different stations for activities like sack races, relay races, basketball shooting, and obstacle courses. Give out medals or certificates to the participants.\n6. Cooking Party: Have a cooking-themed party where the kids can prepare their own mini pizzas, cupcakes, or cookies. Provide toppings, frosting, and decorating supplies, and let them get hands-on in the kitchen.\n7. Superhero Training Camp: Create a superhero-themed party where the kids can engage in fun training activities. Set up an obstacle course, have them design their own superhero capes or masks, and organize superhero-themed games and challenges.\n8. Outdoor Adventure: Plan an outdoor adventure party at a local park or nature reserve. Arrange activities like hiking, nature scavenger hunts, or a picnic with games. Encourage exploration and appreciation for the outdoors.\nRemember to tailor the activities to the birthday child's interests and preferences. Have a great celebration!\n### Human: {prompt}\n### Assistant:"
prompt = "How do I mount a tv to drywall safely?"
input_str = template.format(prompt=prompt)
input_ids = tokenizer(input_str, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=1000)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```
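Because the FastChat-style template above marks conversation turns with `### Human:`, raw generations can run past the end of the assistant's reply. A small helper (our addition, not part of the AmberChat release) truncates the decoded text at the next turn marker:

```python
def extract_reply(generated: str, stop_marker: str = "### Human:") -> str:
    """Cut the decoded continuation at the next conversation-turn marker."""
    reply, _, _ = generated.partition(stop_marker)
    return reply.strip()

print(extract_reply("Use a stud finder first.\n### Human: thanks"))
```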
Alternatively, you may use [FastChat](https://github.com/lm-sys/FastChat):
```bash
python3 -m fastchat.serve.cli --model-path LLM360/AmberChat
```
# AmberChat Finetuning Details
## DataMix
| Subset | Number of rows | License |
| ----------- | ----------- | ----------- |
| WizardLM/WizardLM_evol_instruct_V2_196k | 143k | |
| icybee/share_gpt_90k_v1 | 90k | cc0-1.0 |
| Total | 233k | |
## Hyperparameters
| Hyperparameter | Value |
| ----------- | ----------- |
| Total Parameters | 6.7B |
| Hidden Size | 4096 |
| Intermediate Size (MLPs) | 11008 |
| Number of Attention Heads | 32 |
| Number of Hidden Layers | 32 |
| RMSNorm ε | 1e-6 |
| Max Seq Length | 2048 |
| Vocab Size | 32000 |
| Training Hyperparameter | Value |
| ----------- | ----------- |
| learning_rate | 2e-5 |
| num_train_epochs | 3 |
| per_device_train_batch_size | 2 |
| gradient_accumulation_steps | 16 |
| warmup_ratio | 0.04 |
| model_max_length | 2048 |
# Using Quantized Models with Ollama
Please follow these steps to use a quantized version of AmberChat on your personal computer or laptop:
1. First, install Ollama by following the instructions provided [here](https://github.com/jmorganca/ollama/tree/main?tab=readme-ov-file#ollama). Next, download a quantized model checkpoint (such as [amberchat.Q8_0.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q8_0.gguf) for the 8 bit version) from [TheBloke/AmberChat-GGUF](https://huggingface.co/TheBloke/AmberChat-GGUF/tree/main). Create an Ollama Modelfile locally using the template provided below:
```
FROM amberchat.Q8_0.gguf
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT:
"""
SYSTEM """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
"""
PARAMETER stop "USER:"
PARAMETER stop "ASSISTANT:"
PARAMETER repeat_last_n 0
PARAMETER num_ctx 2048
PARAMETER seed 0
PARAMETER num_predict -1
```
Ensure that the FROM directive points to the downloaded checkpoint file.
2. Now, you can proceed to build the model by running:
```bash
ollama create amberchat -f Modelfile
```
3. To run the model from the command line, execute the following:
```bash
ollama run amberchat
```
You need to build the model once and can just run it afterwards.
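Once built, the model can also be queried programmatically through Ollama's local REST API. The endpoint and payload shape below follow Ollama's documented `/api/generate` route; the prompt is an arbitrary example:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "amberchat") -> dict:
    # Payload shape for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send one generation request to a locally running Ollama server."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(build_request("How do I mount a tv to drywall safely?"))
```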
# Citation
**BibTeX:**
```bibtex
@misc{liu2023llm360,
title={LLM360: Towards Fully Transparent Open-Source LLMs},
author={Zhengzhong Liu and Aurick Qiao and Willie Neiswanger and Hongyi Wang and Bowen Tan and Tianhua Tao and Junbo Li and Yuqi Wang and Suqi Sun and Omkar Pangarkar and Richard Fan and Yi Gu and Victor Miller and Yonghao Zhuang and Guowei He and Haonan Li and Fajri Koto and Liping Tang and Nikhil Ranjan and Zhiqiang Shen and Xuguang Ren and Roberto Iriondo and Cun Mu and Zhiting Hu and Mark Schulze and Preslav Nakov and Tim Baldwin and Eric P. Xing},
year={2023},
eprint={2312.06550},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"CRAFT"
] |
mradermacher/MedLLaMA-3-GGUF | mradermacher | null | [
"transformers",
"gguf",
"llama-3-8b",
"sft",
"medical",
"en",
"ar",
"dataset:lighteval/med_mcqa",
"dataset:qiaojin/PubMedQA",
"dataset:bigbio/med_qa",
"base_model:Reverb/MedLLaMA-3",
"base_model:quantized:Reverb/MedLLaMA-3",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | 2024-05-28T01:02:03Z | 2024-05-28T01:30:29+00:00 | 789 | 1 | ---
base_model: Reverb/MedLLaMA-3
datasets:
- lighteval/med_mcqa
- qiaojin/PubMedQA
- bigbio/med_qa
language:
- en
- ar
library_name: transformers
license: cc-by-nc-nd-4.0
tags:
- llama-3-8b
- sft
- medical
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Reverb/MedLLaMA-3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
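As a rough sanity check on the sizes above, dividing file size by parameter count recovers the approximate bits per weight of each quant type. The snippet below assumes a ~8.03B-parameter Llama-3-8B base (an approximation; exact parameter counts, GB vs. GiB conventions, and per-tensor quant mixes all vary slightly):

```python
# Rough bits-per-weight estimate for the quant table above, assuming a
# ~8.03B-parameter Llama-3-8B base model (illustrative approximation).
PARAMS = 8.03e9

def bits_per_weight(size_gb: float) -> float:
    # file size in GB -> bits, divided by number of weights
    return size_gb * 1e9 * 8 / PARAMS

for name, size_gb in [("Q4_K_M", 5.0), ("Q8_0", 8.6), ("f16", 16.2)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bpw")
```

This is why Q8_0 lands near 8 bpw and f16 near 16 bpw; the small excess over the nominal bit width comes from quantization scales and non-quantized tensors.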
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| [
"PUBMEDQA"
] |
mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF | mradermacher | null | [
"transformers",
"gguf",
"code",
"text-generation-inference",
"Information Extraction",
"IE",
"Named Entity Recogniton",
"Event Extraction",
"Relation Extraction",
"LLaMA",
"en",
"dataset:ACE05",
"dataset:bc5cdr",
"dataset:conll2003",
"dataset:ncbi_disease",
"dataset:conll2012_ontonotesv5",
"dataset:rams",
"dataset:tacred",
"dataset:wnut_17",
"base_model:KaraKaraWitch/HiTZ-GoLLIE-13B-AsSafeTensors",
"base_model:quantized:KaraKaraWitch/HiTZ-GoLLIE-13B-AsSafeTensors",
"license:llama2",
"endpoints_compatible",
"region:us"
] | 2025-02-26T16:08:44Z | 2025-03-01T18:00:24+00:00 | 786 | 0 | ---
base_model: KaraKaraWitch/HiTZ-GoLLIE-13B-AsSafeTensors
datasets:
- ACE05
- bc5cdr
- conll2003
- ncbi_disease
- conll2012_ontonotesv5
- rams
- tacred
- wnut_17
language:
- en
library_name: transformers
license: llama2
tags:
- code
- text-generation-inference
- Information Extraction
- IE
- Named Entity Recogniton
- Event Extraction
- Relation Extraction
- LLaMA
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/KaraKaraWitch/HiTZ-GoLLIE-13B-AsSafeTensors
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HiTZ-GoLLIE-13B-AsSafeTensors-GGUF/resolve/main/HiTZ-GoLLIE-13B-AsSafeTensors.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| [
"BC5CDR",
"NCBI DISEASE"
] |
apple/OpenELM-270M-Instruct | apple | text-generation | [
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:2404.14619",
"license:apple-amlr",
"autotrain_compatible",
"region:us"
] | 2024-04-12T21:51:40Z | 2025-02-28T18:31:21+00:00 | 785 | 137 | ---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
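The layer-wise scaling idea can be sketched as follows: rather than giving every transformer layer the same width, per-layer capacity (e.g. attention-head counts) varies with depth. The linear schedule and numbers below are illustrative assumptions for intuition only; the exact OpenELM parameterization is given in the paper:

```python
# Toy sketch of layer-wise scaling: per-layer attention-head counts grow
# linearly with depth instead of being constant. Illustrative only; the
# actual OpenELM schedule and hyperparameters are defined in the paper.
def layerwise_heads(num_layers: int, min_heads: int, max_heads: int) -> list[int]:
    heads = []
    for i in range(num_layers):
        t = i / (num_layers - 1)  # 0.0 at the first layer, 1.0 at the last
        heads.append(round(min_heads + t * (max_heads - min_heads)))
    return heads

print(layerwise_heads(4, 4, 8))
```

Under a fixed total parameter budget, reallocating width this way is what the authors credit for the accuracy gains over uniform-width baselines.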
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-270M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.
Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. As an example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-270M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-270M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparisons.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install "tokenizers>=0.15.2" "transformers>=4.38.2" "sentencepiece>=0.2.0"
```
### Evaluate OpenELM
```bash
# OpenELM-270M-Instruct
hf_model=apple/OpenELM-270M-Instruct
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
| [
"SCIQ"
] |
scholarly360/setfit-contracts-clauses | scholarly360 | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | 2024-05-11T07:47:56Z | 2024-05-11T07:48:02+00:00 | 782 | 6 | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: No authorization or approval or other action by, and no notice to or filing
with, any governmental authority or regulatory body is required for the due execution
and delivery by the Servicer of this Agreement and each other Transaction Document
to which it is a party and the performance of its obligations hereunder and thereunder
in its capacity as Servicer.
- text: All rights and remedies of Collateral Agent shall be cumulative and may be
exercised singularly or concurrently, at their option, and the exercise or enforcement
of any one such right or remedy shall not bar or be a condition to the exercise
or enforcement of any other.
- text: Except for the conveyances hereunder, Seller will not sell, pledge, assign
or transfer to any other Person, or grant, create, incur, assume or suffer to
exist any Lien on the Receivables or the Other Conveyed Property or any interest
therein, and Seller shall defend the right, title, and interest of Purchaser and
the Issuer in and to the Receivables and the Other Conveyed Property against all
claims of third parties claiming through or under Seller.
- text: In the event of a Change in Control, the Eligible Employee shall immediately
be fully vested in his or her benefit under the Plan.
- text: If Participant’s Employment terminates under circumstances described in Section 3(a)
, then upon Participant’s subsequent death, all unpaid amounts payable to Participant
under Section 3(a)(i) , (ii) , (iii) or (vi) , if any, shall be paid to Participant’s
Beneficiary.
inference: true
model-index:
- name: SetFit with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9425
name: Accuracy
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
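The contrastive step (1) works by turning a handful of labeled examples into many sentence pairs: same-class pairs become positives and cross-class pairs become negatives, which is what lets SetFit fine-tune an embedding model from so little data. A minimal sketch of that pair-generation idea (SetFit's actual sampler is more sophisticated than exhaustive pairing):

```python
from itertools import combinations

def contrastive_pairs(labeled):
    """Build positive/negative sentence pairs from few labeled examples.

    Same-label pairs are positives, different-label pairs negatives --
    the core idea behind SetFit's contrastive fine-tuning stage.
    """
    positives, negatives = [], []
    for (text_a, label_a), (text_b, label_b) in combinations(labeled, 2):
        if label_a == label_b:
            positives.append((text_a, text_b))
        else:
            negatives.append((text_a, text_b))
    return positives, negatives

examples = [
    ("This Agreement shall be governed by Delaware law.", "governing laws"),
    ("This Agreement is governed by California law.", "governing laws"),
    ("This Agreement may be executed in counterparts.", "counterparts"),
    ("Counterparts together constitute one agreement.", "counterparts"),
]
pos, neg = contrastive_pairs(examples)
print(len(pos), len(neg))  # → 2 4
```

Four labeled clauses thus yield six training pairs for the embedding model, after which the logistic-regression head in step (2) is trained on the resulting embeddings.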
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 100 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-----------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| governing laws | <ul><li>'The validity, interpretation, construction and performance of this Agreement will be governed by and construed in accordance with the substantive laws of the State of Delaware, without giving effect to the principles of conflict of laws of such State.'</li><li>'This Agreement shall be governed by and construed and enforced in accordance with the laws of the State of California.'</li><li>'This Agreement shall be construed and enforced in accordance with, and the rights of the parties shall be governed by, the laws of the State of Minnesota, except to the extent that the perfection of the security interest hereunder, or the enforcement of any remedies hereunder, with respect to any particular Collateral shall be governed by the laws of a jurisdiction other than the State of Minnesota.'</li></ul> |
| counterparts | <ul><li>'This Agreement may be executed in one or more counterparts, each of which will be deemed to be an original but all of which together will constitute one and the same agreement.'</li><li>'This Assignment may be executed in two or more counterparts, any one of which need not contain the signatures of more than one party, but all such counterparts taken together shall constitute one and the same Assignment. Receipt by telecopy, pdf file or other electronic means of any executed signature page to this Assignment shall constitute effective delivery of such signature page.'</li><li>'This Agreement may be executed in counterparts and by separate parties in separate counterparts, each of which shall be an original and all of which taken together shall constitute one and the same document. Receipt by telecopy, pdf file or other electronic means of any executed signature page to this Agreement shall constitute effective delivery of such signature page.'</li></ul> |
| notices | <ul><li>'All notices under this Agreement must be given in writing by personal delivery or United States registered or certified mail, return receipt requested, at the addresses indicated in this Agreement, or any other address designated in writing by either party.'</li><li>'Promptly upon its receipt of any notice, request for consent, financial statements, certification, report or other communication under or in connection with any Transaction Document from any Person other than the Administrative Agent or any Managing Agent, copies of the same.'</li><li>'The provisions of Section 6.01 of the Collateral Agreement shall apply mutatis mutandis in respect of any certificate, notice, demand or other communication given or made under this Deed.'</li></ul> |
| entire agreements | <ul><li>'Unless specifically provided herein, this Agreement contains all the understandings and representations between the Executive and the Company pertaining to the Termination of Employment and supersedes all prior and contemporaneous understandings, agreements, representations and warranties, both written and oral, with respect to such subject matter.'</li><li>'This Agreement contains the entire agreement between the parties with respect to the subject matter hereof and supersedes all prior or contemporaneous negotiations, correspondence, understandings and agreements between the parties with respect thereto. This Agreement may be amended only by an agreement in writing signed by both parties hereto.'</li><li>'This Note constitutes the full and entire agreement of the Borrower and the Holder with respect to the subject matter hereof.'</li></ul> |
| severability | <ul><li>'The invalidity or unenforceability in particular circumstances of any provision of this Note shall not extend beyond such provision or such circumstances and no other provision of this instrument shall be affected thereby.'</li><li>'Wherever possible, each provision of this Agreement shall be interpreted in such manner as to be effective and valid under applicable law, but if any provision of this Agreement shall be prohibited by or invalid under such law, such provision shall be ineffective to the extent of such prohibition or invalidity, without invalidating the remainder of such provision or the remaining provisions of this Agreement.'</li><li>'In case any provision of this Guaranty shall be invalid, illegal or unenforceable in any jurisdiction, the validity, legality and enforceability of the remaining provisions shall not in any way be affected or impaired thereby.'</li></ul> |
| waivers | <ul><li>'That Defaulting Lender’s right to approve or disapprove any amendment, waiver or consent with respect to this Agreement shall be restricted as set forth in Section\xa010.5 and the definition of “Requisite Lenders”.'</li><li>'The provisions of this Agreement, or any other Loan Document, may from time to time be amended, modified or waived, if such amendment, modification or waiver is in writing and consented to by the Borrower and both Lenders.'</li><li>'Collateral Agent shall not be deemed to have waived any of its rights hereunder or under any other agreement, instrument or paper signed by Grantor unless such waiver is in writing and signed by Collateral Agent. No delay or omission on the part of Collateral Agent in exercising any right shall operate as a waiver of such right or any other right. A waiver on any one occasion shall not be construed as a bar to or waiver of any right or remedy on any future occasion.'</li></ul> |
| amendments | <ul><li>'That Defaulting Lender’s right to approve or disapprove any amendment, waiver or consent with respect to this Agreement shall be restricted as set forth in Section\xa010.5 and the definition of “Requisite Lenders”.'</li><li>'This Agreement contains the entire agreement between the parties with respect to the subject matter hereof and supersedes all prior or contemporaneous negotiations, correspondence, understandings and agreements between the parties with respect thereto. This Agreement may be amended only by an agreement in writing signed by both parties hereto.'</li><li>'The provisions of this Agreement, or any other Loan Document, may from time to time be amended, modified or waived, if such amendment, modification or waiver is in writing and consented to by the Borrower and both Lenders.'</li></ul> |
| expenses | <ul><li>'The Company shall reimburse Executive for all reasonable and necessary expenses incurred by him in connection with his employment and in accordance with the Company policy, which requires reasonable evidence of expenditure.'</li><li>'Grantor agrees to pay the reasonable attorneys’ fees and legal expenses incurred by Collateral Agent in the exercise of any right or remedy available to it under this Agreement, whether or not suit is commenced, including, without limitation, attorneys’ fees and legal expenses incurred in connection with any appeal of a lower court’s order or judgment.'</li><li>'Except as otherwise provided herein, each Party shall bear and pay all costs and expenses which it incurs, or which may be incurred on its behalf, in connection with this TSA and the transactions contemplated hereby. Unless otherwise indicated, all dollar amounts stated in this TSA are stated in U.S. currency, and all payments required under this TSA shall be paid in U.S. currency in immediately available funds.'</li></ul> |
| survival | <ul><li>'Notwithstanding any provision of this Agreement to the contrary, Sections 1, 2, 3, 6, 7, 9, 10, 13, 15, 16 and 17 will survive any termination or expiration of this Agreement or the termination of the Executive’s employment for any reason whatsoever.'</li><li>'Each party’s obligations under this Section shall survive the resignation or replacement of the Agent or any assignment of rights by, or the replacement of, a Lender, the termination of the Commitments and the repayment, satisfaction or discharge of all obligations under any Loan Document.'</li><li>'The provisions of Sections 2.05(b) , (c) and (d) , Section 4.05 and Articles V , VI , VII and VIII shall survive the termination of this TSA.'</li></ul> |
| representations | <ul><li>'Each Guarantor hereby makes to the Administrative Agent and the other Guarantied Parties all of the representations and warranties made by the Borrower with respect to or in any way relating to such Guarantor in the Loan Agreement and the other Loan Documents, as if the same were set forth herein in full.'</li><li>'The Seller has determined that this Agreement is effective to transfer to the Administrative Agent, the Managing Agents and the Purchasers, as assignees of the Seller, the full benefit of and a direct claim against LKQ, as Servicer, and each Originator in respect of each representation or warranty made by LKQ, as Servicer, and each Originator under any Transaction Document.'</li><li>'All representations and warranties made hereunder, in the other Loan Documents and in any document, certificate or statement delivered pursuant hereto or in connection herewith shall survive the execution and delivery of this Agreement and the making of the Loans and other extensions of credit hereunder.'</li></ul> |
| assigns | <ul><li>'This Agreement shall be binding upon and shall inure to the benefit of the parties hereto and their respective successors and assigns, except that the Borrower may not assign or transfer its rights hereunder without the prior written consent of both Lenders.'</li><li>'This Agreement shall be binding upon and inure to the benefit of the successors and assigns of Grantor and Collateral Agent.'</li><li>'This Agreement shall be binding upon the First Lien Agents, the Senior Secured Parties, the Second Priority Agents, the Second Priority Secured Parties and their respective permitted successors and assigns.'</li></ul> |
| taxes | <ul><li>'In addition, the Credit Parties shall pay all Other Taxes to the relevant Governmental Authorities in accordance with applicable Law. The Credit Parties shall deliver to Administrative Agent official receipts or other evidence of such payment reasonably satisfactory to Administrative Agent in respect of any Other Taxes payable hereunder promptly after payment of such Other Taxes.'</li><li>'The Borrower and the other Loan Parties shall timely pay to the relevant Governmental Authority in accordance with Applicable Law, or at the option of the Agent timely reimburse it for the payment of, any Other Taxes.'</li><li>'The Key Person shall be responsible for taxes due upon the settlement of any RSU granted hereunder and upon any later transfer by the Key Person of any Share received upon the settlement of an RSU.'</li></ul> |
| litigations | <ul><li>'The Borrower or any other Loan Party shall (or shall attempt to) disavow, revoke or terminate any Loan Document to which it is a party or shall otherwise challenge or contest in any action, suit or proceeding in any court or before any Governmental Authority the validity or enforceability of any Loan Document, or any Loan Document shall cease to be in full force and effect (except as a result of the express terms thereof).'</li><li>'Other than those matters disclosed on Schedule 5.9 , (a) there are no actions, suits, or proceedings pending or, to the best knowledge of Borrower, threatened, against Borrower or any of its Subsidiaries, and (b) there are no actions, suits, or proceedings pending or, to the best knowledge of Borrower, threatened, against HTGC that could reasonably be expected to result in a Material Adverse Change.'</li><li>'There is no litigation, claim, investigation, challenge or other proceeding pending or, to the knowledge of Management Company, threatened against Management Company, its properties or business which seeks to enjoin or prohibit it from entering into this Agreement.'</li></ul> |
| insurances | <ul><li>'The Seller will maintain in effect, or cause to be maintained in effect, at the Seller’s own expense, such casualty and liability insurance as the Seller shall deem appropriate in its good faith business judgment.'</li><li>'With respect to the provision of Transition Services under this TSA, Service Provider shall maintain such insurance coverage and in such amounts covering itself and its Affiliates as is commercially reasonable. Upon the reasonable request of Service Recipient, Service Provider shall provide Service Recipient with such information as it shall reasonably request relating to any insurance coverage relevant to a Transition Service provided under this TSA.'</li><li>'Notwithstanding anything contained in this Agreement to the contrary, Losses shall be net of any insurance recoveries actually received by the Indemnified Party or its Affiliates.'</li></ul> |
| confidentiality | <ul><li>'Each party agrees that it and its Affiliates, and its and their respective employees, advisors, agents and representatives, including, with respect to the Company, any third parties engaged to provide the Services pursuant to Section 2(c) , shall keep confidential all data, documents, records and information obtained from the other party or its representatives in connection with this Agreement in accordance with Section 4.1 of the Purchase Agreement.'</li><li>'In the event of the consummation or public announcement of the Public Offering, Wainwright shall have the right to disclose its participation in such Public Offering, including, without limitation, the Public Offering at its cost of “tombstone” advertisements in financial and other newspapers and journals.'</li><li>'Except as requested by the Company, CEI or the other Released Parties, as permitted above or by law that may supersede the terms of this Agreement, or as compelled by valid legal process, the Individual shall treat as confidential the fact and terms of this Agreement and shall not disclose such information to any party other than his spouse, attorney, and accountant or tax advisor, if such persons have agreed to keep such information confidential.'</li></ul> |
| waiver of jury trials | <ul><li>'Each of the parties hereto irrevocably waives trial by jury in any action or proceeding with respect to this Amendment or any other Credit Document.'</li><li>'GRANTOR HEREBY EXPRESSLY WAIVE(S) ANY RIGHT TO A TRIAL BY JURY IN ANY ACTION OR PROCEEDING TO ENFORCE OR DEFEND ANY RIGHTS (a) UNDER THIS AGREEMENT OR UNDER ANY AMENDMENT, INSTRUMENT, DOCUMENT OR AGREEMENT DELIVERED OR WHICH MAY IN THE FUTURE BE DELIVERED IN CONNECTION HEREWITH, OR (b) ARISING FROM ANY RELATIONSHIP EXISTING IN CONNECTION WITH THIS AGREEMENT, AND AGREE(S) THAT ANY SUCH ACTION OR PROCEEDING SHALL BE TRIED BEFORE A COURT AND NOT BEFORE A JURY.'</li><li>'EACH PARTY HERETO HEREBY IRREVOCABLY AND UNCONDITIONALLY WAIVES TRIAL BY JURY IN ANY LEGAL ACTION OR PROCEEDING RELATING TO THIS AGREEMENT AND FOR ANY COUNTERCLAIM THEREIN.'</li></ul> |
| terminations | <ul><li>'This Guaranty shall remain in full force and effect with respect to each Guarantor until (i) termination of the Loan Agreement in accordance with Section 12.10. thereof or (ii) following the release of a Guarantor or Guarantors in accordance with Section 7.12.(b) of the Loan Agreement, no Person is a Guarantor; provided that the provisions of Section 9 of this Guaranty shall continue in full force and effect after such termination.'</li><li>'Subject to the terms and conditions set forth herein, the Shareholders’ Agreement, and the rights and obligations of the parties thereunder, is hereby terminated, effective immediately, and shall be null and void and no longer of any force or effect; provided , however , that Section 9(j) and Section 9(k) of the Shareholders’ Agreement shall survive the termination of the Shareholders’ Agreement indefinitely.'</li><li>'The Employee’s employment may be terminated during the Employment Period at any time by the Employee or the Company for any reason.'</li></ul> |
| further assurances | <ul><li>'Each of Tricadia and Tiptree shall, and shall cause their respective Affiliates to, use good faith efforts to cooperate with each other in all matters relating to the provision and receipt of the Transition Services. Such cooperation shall include exchanging information, performing true-ups and adjustments and seeking all third party consents, licenses, sublicenses or approvals necessary to permit each party to perform its obligations hereunder.'</li><li>'Where the Vessel is (or is to be) sold in exercise of any power contained in this Deed or otherwise conferred on the Collateral Agent, the Owner undertakes to execute, forthwith upon request by the Collateral Agent, such form of conveyance of the Vessel as the Collateral Agent may require.'</li><li>'The Owner hereby further undertakes at its own expense from time to time to execute, sign, perfect, do and (if required) register every such further assurance, document, act or thing as in the opinion of the Collateral Agent may be reasonably necessary or desirable for the purpose of more effectually mortgaging and charging the Mortgaged Property or perfecting the security constituted or intended to be constituted by the Mortgage and this Deed.'</li></ul> |
| general | <ul><li>'Headings contained herein are inserted for convenience of reference only and are not to be considered for the purposes of interpretation. All monetary references are to U.S. Dollars. If anything herein falls to be done on a day which is not a Business Day, the same shall be done on the next succeeding Business Day.'</li><li>'The Customer Support Services will be provided by the following types of Customer Support Agents: [***]. Bank will provide agents for future, mutually agreed upon and approved channels.'</li><li>'Including products, completed operations liability and personal injury, contractual liability and broad form property damage liability coverage for damages to any property with a minimum combined single limit of [***] per occurrence and [***] general aggregate per location for bodily injury, death, property damage and personal injury.'</li></ul> |
| terms | <ul><li>'Subject to the severance provisions of Section 5 below, Executive’s employment with the Company shall initially be for a term of two years ending July 31, 2020 (“Termination Date”) and shall thereafter automatically renew for one-year terms unless either party terminates the Agreement with 90 days prior written notice of termination before the end of the then current term.'</li><li>'All capitalized terms used but not defined in this Amendment shall have the same meaning as prescribed in the Original Agreement.'</li><li>'The terms of the Plan are incorporated herein by reference and the Key Person’s rights hereunder are subject to the terms of the Plan to the extent they are inconsistent with or in addition to the terms set forth herein. The Key Person hereby agrees to comply with all requirements of the Plan.'</li></ul> |
| assignments | <ul><li>'No party shall assign this Agreement or any of its rights or obligations hereunder without the prior written consent of the other parties hereto, except that Tiptree and Tiptree Parent may assign their respective rights to any other Person that is a direct or indirect subsidiary of Tiptree Parent; provided , that, Tiptree and Tiptree Parent will continue to be bound by their respective obligations hereunder.'</li><li>'This Agreement is binding upon, and shall inure to the benefit of, the parties and their respective heirs, executors, administrators, successors and assigns.'</li><li>'Except as otherwise provided in this Agreement, the Grantee may not assign any of his, her or its rights under this Agreement without the prior written consent of the Company, which consent may be withheld in its sole discretion. The Company shall be permitted to assign its rights or obligations under this Agreement so long as such assignee agrees to perform all of the Company’s obligations hereunder.'</li></ul> |
| authority | <ul><li>'The execution and delivery by the Servicer of this Agreement and each other Transaction Document to which it is a party, and the performance of its obligations hereunder and thereunder are within its corporate powers and authority and have been duly authorized by all necessary corporate action on its part. This Agreement and each other Transaction Document to which the Servicer is a party has been duly executed and delivered by the Servicer.'</li><li>'Investor is an entity duly organized, validly existing and in good standing under the laws of the jurisdiction of its organization, with the requisite power and authority to enter into and to consummate the transactions contemplated by this Agreement and otherwise to carry out its obligations hereunder and thereunder.'</li><li>'Purchaser has the power, authority and legal right to execute and deliver this Agreement and to carry out the terms hereof and to acquire the Receivables and the Other Conveyed Property hereunder; and the execution, delivery and performance of this Agreement and all of the documents required pursuant hereto have been duly authorized by Purchaser by all necessary corporate action.'</li></ul> |
| use of proceeds | <ul><li>'No proceeds of any purchase hereunder will be used (i) for a purpose that violates, or would be inconsistent with, Regulation T, U or X promulgated by the Board of Governors of the Federal Reserve System from time to time or (ii) to acquire any security in any transaction which is subject to Section 12, 13 or 14 of the Securities Exchange Act of 1934, as amended.'</li><li>'The Borrower will use the proceeds of the Delayed Draw Term Loans for general corporate purposes, including, without limitation, to finance the pre-delivery installments due to builder(s) under its or its Subsidiaries’ shipbuilding contracts.'</li><li>'The proceeds of the Loans shall be used to finance the working capital needs of the Company and its Subsidiaries and for general corporate or entity purposes, including to enable the Company to make valuable transfers to any of its Subsidiaries in connection with the operation of their respective businesses.'</li></ul> |
| payments | <ul><li>'All sums payable by any Credit Party hereunder and under the other Credit Documents shall (except to the extent required by Law) be paid free and clear of, and without any deduction or withholding on account of, any Taxes.'</li><li>'Borrower may voluntarily prepay the loan evidenced by this Note in whole or in part at any time; without premium or penalty.'</li><li>'Each voluntary prepayment of Loans shall be in an aggregate minimum amount of $1,000,000.00 and integral multiples of $100,000.00 in excess thereof (or, if less, the aggregate principal amount of Loans then outstanding).'</li></ul> |
| compliance with laws | <ul><li>'Grantor will not use the Collateral, or knowingly permit the Collateral to be used, for any unlawful purpose or in violation of any federal, state or municipal law.'</li><li>'Comply with the requirements of all applicable laws, rules, regulations, and orders of any Governmental Authority, other than laws, rules, regulations, and orders the non-compliance with which, individually or in the aggregate, could not reasonably be expected to result in a Material Adverse Change.'</li><li>'No Credit Party shall, and no Credit Party shall permit any of its Subsidiaries to, fail to (a) comply in all material respects with the requirements of all applicable laws, rules, regulations and orders of any Governmental Authority (including, without limitation, all Environmental Laws and the Requirements) and (b) preserve and maintain in full force and effect all material rights, privileges, qualifications, permits, licenses and franchises necessary in the normal conduct of its business.'</li></ul> |
| no conflicts | <ul><li>'Upon issuance of the Shares, the Company will have insufficient authorized shares of Common Stock necessary to reserve for the issuance of the Warrant Shares (other than shares issuable upon exercise of the Series C Warrants), and to issue shares of Common Stock issuable upon exercise and/or issuance of certain issued and outstanding derivative securities of the Company.'</li><li>'Executive represents and warrants that the performance by Executive of the duties that are reasonably expected to be performed hereunder will not result in a material breach of any agreement to which Executive is a party.'</li><li>'Executive hereby represents that, to the best of his knowledge, his performance of all the terms of this Agreement and his work as an employee or consultant of the Company does not breach any oral or written agreement which he has made prior to his employment with the Company.'</li></ul> |
| indemnifications | <ul><li>'The Company shall indemnify and hold Employee harmless, to the maximum extent permitted by law, against all liability, expense or loss (including reasonable attorneys’ fees and penalties) incurred by Employee by reason of the fact that Employee is an officer of the Company acting within the scope of Employee’s duties and authorities.'</li><li>'The Company hereby agrees to indemnify Employee and hold him harmless to the extent provided under the by-laws of the Company against and in respect to any and all actions, suits, proceedings, claims, demands, judgments, costs, expenses (including reasonable attorney’s fees), losses, and damages resulting from Employee’s good faith performance of his duties and obligations with the Company. This obligation shall survive the termination of Employee’s employment with the Company.'</li><li>'The Company agrees to defend and indemnify and hold the Employee harmless from and against any past, present or future claim, action, demand, loss, cost, expense, liability or other damage arising from, and including reasonable attorney’s fees and costs, amounts, expenses, incurred by or imposed against the Employee and arising out of or relating to any past, present or future claim, action, demand, loss, cost, expense, liability or other damage due to Employee’s employment hereunder.'</li></ul> |
| organizations | <ul><li>'The Buyer is a limited liability company duly organized and validly existing in good standing under the laws of the jurisdiction in which it is organized, and has the requisite organizational power and authority to own its properties and to carry on its business as now being conducted.'</li><li>'Investor is an entity duly organized, validly existing and in good standing under the laws of the jurisdiction of its organization, with the requisite power and authority to enter into and to consummate the transactions contemplated by this Agreement and otherwise to carry out its obligations hereunder and thereunder.'</li><li>'Seller has been duly organized and is validly existing as a corporation in good standing under the laws of the State of Delaware, with power and authority to own its properties and to conduct its business as such properties are currently owned and such business is currently conducted, and had at all relevant times, and now has, power, authority and legal right to acquire, own and sell the Receivables and the Other Conveyed Property to be transferred to Purchaser.'</li></ul> |
| base salary | <ul><li>'Commencing on the Agreement Effective Date and thereafter during his Employment Period, the Employee shall receive an annual base salary of $273,000 (as such salary may be increased from time to time , the “Annual Base Salary”), which shall be paid no less frequently than on a semimonthly basis.'</li><li>'Commencing on the Agreement Effective Date and thereafter during his Employment Period, the Employee shall receive an annual base salary of $________ (as such salary may be increased from time to time , the “Annual Base Salary”), which shall be paid no less frequently than on a semimonthly basis.'</li><li>'During the Term, the Executive’s annual base salary rate shall be $455,000. The Executive’s base salary shall be reviewed annually by the Board or the Compensation Committee of the Board (the “Compensation Committee”). The base salary in effect at any given time is referred to herein as “Base Salary.” The Base Salary shall be payable in a manner that is consistent with the Company’s usual payroll practices for executive officers.'</li></ul> |
| binding effects | <ul><li>'The execution and delivery of this Amendment by any Lender shall be binding upon each of its successors and assigns (including assignees of its Loans in whole or in part prior to the effectiveness hereof).'</li><li>'This Agreement shall be binding upon and inure to the benefit of the parties hereto and their respective successors and permitted assigns.'</li><li>'This Agreement shall be binding upon and shall inure to the benefit of the Company, its successors and assigns, and the Key Person and the Key Person’s executors, administrators, personal representatives and heirs. In the event that any part of this Agreement shall be held to be invalid or unenforceable, the remaining parts hereof shall nevertheless continue to be valid and enforceable as though the invalid portions were not a part hereof.'</li></ul> |
| headings | <ul><li>'Section and Subsection headings in this Amendment are included herein for convenience of reference only and shall not constitute a part of this Amendment for any other purpose or be given any substantive effect.'</li><li>'Section headings used in this Guaranty are for convenience only and shall not affect the construction of this Guaranty.'</li><li>'Section headings have been inserted herein for convenience only and shall not be construed to be a part hereof.'</li></ul> |
| costs | <ul><li>'The Borrowers shall pay to the Administrative Agent all reasonable costs and out-of-pocket expenses of every kind in connection with the preparation, negotiation, execution and delivery of this Amendment and any documents and instruments relating hereto or thereto, including, without limitation, any fees that have been invoiced prior to the date hereof (which fees include, without limitation, the reasonable and documented fees and expenses of any attorneys retained by the Administrative Agent).'</li><li>'Borrower hereby affirms its obligation under the Loan Agreement to reimburse the Agent for all Lender Group Expenses paid or incurred by the Agent in connection with the preparation, negotiation, execution and delivery of this Amendment, including but not limited to the attorneys’ fees and expenses of attorneys for the Agent with respect thereto.'</li><li>'Janssen will be solely responsible for conducting, at its sole cost and expense, Development of each Janssen Research IRD Product, except that Janssen will use Commercially Reasonable Efforts to Develop [***].'</li></ul> |
| definitions | <ul><li>'Capitalized terms used herein and not otherwise defined herein shall have their respective defined meanings given them in the Loan Agreement.'</li><li>'Terms not otherwise defined herein are used herein with the respective meanings given them in the Credit Agreement.'</li><li>'In this Agreement unless there is something in the subject matter or context inconsistent therewith, the words and expressions set out in Schedule ”A” shall have the meanings set out in such Schedule ”A” .'</li></ul> |
| modifications | <ul><li>'This Agreement may be amended, modified, or supplemented only by written agreement of the Parties.'</li><li>'This Assignment may be amended, modified, or supplemented only by written agreement of the Parties.'</li><li>'This Agreement, together with the exhibits and schedules hereto, is the entire agreement between the parties hereto with respect to the subject matter hereof, and supersedes all prior and contemporaneous communications, agreements and understandings with respect to the subject matter hereof, express or implied, oral or written, all of which are merged herein. In the event of a conflict between this Agreement and the Management Agreement, the Management Agreement shall control.'</li></ul> |
| remedies | <ul><li>'Executive acknowledges and understands that the provisions of this Agreement are of a special and unique nature, the loss of which cannot be adequately compensated for in damages by an action at law, and that the breach or threatened breach of the provisions of this Agreement would cause the Company irreparable harm. In the event of a breach or threatened breach by Executive of the provisions of this Agreement, the Company shall be entitled to an injunction restraining him from such breach.'</li><li>'All rights and remedies of Collateral Agent shall be cumulative and may be exercised singularly or concurrently, at their option, and the exercise or enforcement of any one such right or remedy shall not bar or be a condition to the exercise or enforcement of any other.'</li><li>'No delay or failure on the part of the Administrative Agent or any other Guarantied Party in the exercise of any right or remedy it may have against any Guarantor hereunder or otherwise shall operate as a waiver thereof, and no single or partial exercise by the Administrative Agent or any other Guarantied Party of any such right or remedy shall preclude any other or further exercise thereof or the exercise of any other such right or remedy.'</li></ul> |
| releases | <ul><li>'Neither Founder shall issue any press release or public announcement concerning this Agreement or the Company without obtaining the prior written consent of the other Founder hereto, which consent shall not be unreasonably withheld, except as may be required by applicable securities laws, in which case, the publishing Founder shall use reasonable commercial efforts to send the draft public announcement to the other Founder prior to publication thereof.'</li><li>'Players Network will send out a public communication as required by law to its shareholder and 8k filing pertaining to this agreement.'</li><li>'This Agreement and the security interests granted hereby shall terminate in accordance with the Indenture and each Intercreditor Agreement (if any).'</li></ul> |
| disclosures | <ul><li>'Nothing contained in this Agreement limits the Executive’s ability to communicate with any federal, state or local governmental agency or commission, including to provide documents or other information, without notice to the Company.'</li><li>'The Recipient may disclose the Discloser’s Confidential Information to the extent required by law or regulation; provided , that prior to making any such legally required disclosure, the Recipient shall give the Discloser as much prior notice of the requirement for and contents of such disclosure as is practicable under the circumstances. Any such disclosure, however, shall not relieve the Recipient of its obligations contained herein.'</li><li>'No event has occurred since the date of the most recently delivered audited financial statements, and no fact or condition exists, which has had a Material Adverse Effect or which could reasonably be expected to have a Material Adverse Effect.'</li></ul> |
| participations | <ul><li>'The CEO and any Executive who receive a Participation Agreement will be eligible to participate in the Plan effective as of the date of such Participation Agreement. The terms and conditions of the severance benefit potentially payable to a Participant will be subject to the Participation Agreement delivered to the Participant and to the Plan. In the event of an explicit discrepancy between a Participation Agreement and the Plan, the Participation Agreement will control.'</li><li>'An employee shall become a Participant as of the first day of the calendar month coincident with or next following the date he or she first becomes an Eligible Executive Officer (the “Entry Date”), provided that he or she remains a member of the select group of officers for whom this Plan is designed through his or her Entry Date.'</li><li>'An Eligible Employee becomes a Participant upon the earlier to occur of: (a) a credit of Company Contributions under Article V, or (b) receipt of notification of eligibility to participate.'</li></ul> |
| vesting | <ul><li>'All Company matching contributions under Section 2.5(a) and Company additional discretionary contributions under Section 2.5(b) are 100% vested.'</li><li>'A Participant’s Account Balance attributable to QACA Safe Harbor Contributions is one hundred percent (100%) vested after two (2) years. Participants will become fully vested upon their Death or Disability as defined herein. If the Plan already defines Year of Service for purposes of vesting, then that definition applies to this QACA vesting schedule.'</li><li>'The Restricted Shares shall not become fully vested until the Key Employee has continued his/her employment with the Bank for a period of five (5) years from the effective date of this Agreement. For this purpose, the effective date of this Agreement will be ________,2019, and the date the Restricted Shares shall become fully vested shall be ________, 2027.'</li></ul> |
| no waivers | <ul><li>'Collateral Agent shall not be deemed to have waived any of its rights hereunder or under any other agreement, instrument or paper signed by Grantor unless such waiver is in writing and signed by Collateral Agent. No delay or omission on the part of Collateral Agent in exercising any right shall operate as a waiver of such right or any other right. A waiver on any one occasion shall not be construed as a bar to or waiver of any right or remedy on any future occasion.'</li><li>'No delay or omission by either party in exercising any right under this Agreement shall operate as a waiver of that or any other right. A waiver or consent given by a party on any one occasion shall be effective only in that instance and shall not be construed as a bar or waiver of any right on any other occasion.'</li><li>'No failure or delay by a Founder in exercising any right or remedy under this Agreement shall operate as a waiver thereof, nor shall any single or partial exercise thereof preclude any other or further exercise thereof or the exercise of any other right or remedy.'</li></ul> |
| withholdings | <ul><li>'The Company may withhold from any amounts payable under this Agreement all federal, state, city or other taxes as the Company is required to withhold pursuant to any applicable law, regulation or ruling.'</li><li>'All Deferrals and distributions shall be subject to legally required income and employment tax withholding. Such taxes shall include, but not necessarily be limited to, Social Security taxes on Deferrals, Matching Contributions, Company Profit Sharing Contributions and/or Other Contributions at the time they are vested and income taxes on distributions.'</li><li>'The Company shall have the right to deduct from any payment hereunder all taxes (federal, state or other) which it is required to withhold therefrom.'</li></ul> |
| miscellaneous | <ul><li>'All section headings are for convenience only. This Agreement may be executed in several counterparts, each of which is an original. It shall not be necessary in making proof of this Agreement or any counterpart hereof to produce or account for any of the other counterparts.'</li><li>'This Agreement may be executed in two or more counterparts (including via facsimile), each of which shall be deemed an original, but all of which together shall constitute one and the same instrument. The section headings contained in this Agreement are for reference purposes only and shall not affect in any way the meaning or interpretation of this Agreement.'</li><li>'Authority of the Representative. Any action by the Initial Purchasers hereunder may be taken by J.P. Morgan Securities LLC on behalf of the Initial Purchasers, and any such action taken by J.P. Morgan Securities LLC shall be binding upon the Initial Purchasers.'</li></ul> |
| jurisdictions | <ul><li>'This Agreement shall be construed in accordance with and governed by the law of the State of New York.'</li><li>'The provisions set forth in Sections 9.09 and 9.10 of the Credit Agreement are hereby incorporated mutatis mutandis with all references to the “Agreement” therein being deemed references to this Agreement.'</li><li>'(a) THIS AGREEMENT SHALL BE GOVERNED BY AND CONSTRUED IN ACCORDANCE WITH, THE LAWS OF THE STATE OF NEW YORK, WITHOUT REGARD TO PRINCIPLES OF CONFLICTS OF LAW (OTHER THAN SECTIONS 5-1401 AND 5-1402 OF THE NEW YORK GENERAL OBLIGATIONS LAW), EXCEPT TO THE EXTENT THAT LOCAL LAW GOVERNS THE CREATION, PERFECTION, PRIORITY OR ENFORCEMENT OF SECURITY INTERESTS.'</li></ul> |
| closings | <ul><li>'Subject to the terms and conditions of this Agreement, the closing of the transactions described herein (the “ Closing ”) is taking place simultaneously with the execution and delivery of this Agreement by the parties at 780 Third Avenue, New York, New York 10017 (the date the Closing takes place, the “ Closing Date ”).'</li><li>'Subject to the terms and conditions of this Agreement, and unless otherwise agreed in writing by the Parties, the closing of the Transactions shall occur at 11:59 p.m. (Dallas, Texas time) on the date hereof (the “ Effective Time ”).'</li><li>'The closing of the transactions contemplated by this Agreement (the “Closing”) shall occur on the Closing Date at such location as may be agreed to by the parties (including via exchange of electronic signatures).'</li></ul> |
| integration | <ul><li>'The Company shall not sell, offer for sale or solicit offers to buy or otherwise negotiate in respect of any security (as defined in Section 2 of the Securities Act) that would be integrated with the offer or sale of the Securities for purposes of the rules and regulations of any Trading Market such that it would require shareholder approval prior to the closing of such other transaction unless shareholder approval is obtained before the closing of such subsequent transaction.'</li><li>'This Agreement and the other Loan Documents represent the entire agreement of the Company, the Administrative Agent and the Lenders with respect to the subject matter hereof and thereof, and there are no promises, undertakings, representations or warranties by the Administrative Agent or any Lender relative to the subject matter hereof not expressly set forth or referred to herein or in the other Loan Documents.'</li><li>'Except as specifically stated otherwise herein, this Agreement and Related Documents set forth the entire understanding of the parties relating to the subject matter hereof, and all prior understandings, written or oral, are superseded by this Agreement and the Related Documents. This Agreement may not be modified, amended, waived or supplemented except as provided herein.'</li></ul> |
| fees | <ul><li>'That Defaulting Lender (x) shall not be entitled to receive any Commitment Fee pursuant to Section 2.8(a)(i) for any period during which that Lender is a Defaulting Lender (and the Borrower shall not be required to pay any such fee that otherwise would have been required to have been paid to that Defaulting Lender) and (y) shall be limited in its right to receive L/C Participation Fees as provided in Section 2.8(a)(iii).'</li><li>'The Borrower agrees to pay the administrative and other fees of the Agent pursuant to the Fee Letter and as may otherwise be agreed to in writing by the Borrower and the Agent from time to time.'</li><li>'The Borrower agrees to pay to the Agent a fee equal to $2,500 at the time of each Bid Rate Quote Request made hereunder for services rendered by the Agent in connection with Bid Rate Loans.'</li></ul> |
| effective dates | <ul><li>'The amended and restated Plan is effective as of January 1, 2019. The rights and benefits of and/or with respect to a Participant whose employment terminated prior to January 1, 2019 shall be determined under the provisions of the Plan in effect when his/her employment terminated.'</li><li>'This TSA shall become effective on the Effective Date and, unless terminated earlier pursuant to Section 7.02 below, shall remain in full force and effect until the latest date of expiration (the “ Final Term ”) of the Term for any Transition Service hereunder.'</li><li>'If the Commitments are increased in accordance with this Section, the Borrower shall determine the effective date (the “ Increase Effective Date ”) and the final allocation of such increase in consultation with the Administrative Agent. The Administrative Agent shall promptly notify the Lenders of the final allocation of such increase and the Increase Effective Date.'</li></ul> |
| enforcements | <ul><li>"This Agreement has been duly and validly authorized, executed and delivered on behalf of the Investor and is a valid and binding agreement of the Investor enforceable against the Investor in accordance with its terms, subject as to enforceability to general principles of equity and to applicable bankruptcy, insolvency, reorganization, moratorium, liquidation and other similar laws relating to, or affecting generally, the enforcement of applicable creditors' rights and remedies."</li><li>'This Agreement has been duly and validly authorized. This Agreement has been duly executed and delivered on behalf of the Buyer, and this Agreement constitutes a valid and binding agreement of the Buyer enforceable in accordance with its terms.'</li><li>'The Corporation expressly confirms and agrees that it has entered into this Agreement in order to induce Indemnitee to continue to serve as director and/or officer of the Corporation and acknowledges that Indemnitee is relying upon this Agreement in continuing in such capacity.'</li></ul> |
| financial statements | <ul><li>'Borrower has furnished to the Lenders (a) the audited consolidated financial statements of Borrower for the Fiscal Year ended March 29, 2013, and (b) the unaudited consolidated financial statements of Borrower for the Fiscal Quarter ended October 4, 2013.'</li><li>'The Borrower shall have delivered to the Administrative Agent or filed with the SEC its 10-K report for the period ending on December 31, 2017 and its 10-Q reports for the periods ending on March 31, 2018, June 30, 2018 and September 30, 2018.'</li><li>'The Administrative Agent shall have received the audited financial statements referred to in subsection 4.1.'</li></ul> |
| capitalization | <ul><li>'The Company currently has 220,599,761 shares of Common Stock issued and outstanding. In addition, 53,287,499 shares of Common Stock have been reserved for issuance, or are issuable upon exercise or conversion of outstanding derivative securities.'</li><li>'The shares of Common Stock underlying the Restricted Stock Units may be adjusted as provided in the Plan including, without limitation, Section 11 of the Plan. The Participant, by accepting this Agreement, irrevocably and unconditionally consents and agrees to any such adjustments as may be made at any time hereafter.'</li><li>'The shares of Common Stock underlying the Restricted Stock Units may be adjusted as provided in the Plan. The Participant, by accepting this Agreement, irrevocably and unconditionally consents and agrees to any such adjustments as may be made at any time hereafter.'</li></ul> |
| benefits | <ul><li>'During the period of employment, the Company shall provide Executive with such employee benefits as are provided by the Company generally to its executive employees. In addition, Company shall provide Executive at Company’s expense, or shall reimburse Executive, for appropriate telecommunications and internet service and devices as needed for Executive to perform his duties pursuant to this Agreement.'</li><li>'This Agreement shall be binding upon and shall inure to the benefit of the Company, its successors and assigns, and the Key Person and the Key Person’s executors, administrators, personal representatives and heirs. In the event that any part of this Agreement shall be held to be invalid or unenforceable, the remaining parts hereof shall nevertheless continue to be valid and enforceable as though the invalid portions were not a part hereof.'</li><li>'The Termination Date shall be the termination date of your employment for purposes of participation in and coverage under all benefit plans and programs sponsored by the Company and its subsidiaries.'</li></ul> |
| interpretations | <ul><li>'The covenants contained in this Section 7 are intended to be construed as a series of separate covenants. If, in any judicial proceeding, the court shall refuse to enforce any of the separate covenants (or any part thereof), then such unenforceable covenant (or such part) shall be deemed to be eliminated from this Agreement for the purpose of those proceedings to the extent necessary to permit the remaining separate covenants (or portions thereof) to be enforced.'</li><li>'The captions used herein are intended for convenience of reference only and shall not modify or affect in any manner the meaning or interpretation of any of the provisions of this Agreement. This Agreement is not intended to carry over any economic entitlements or obligations that may have arisen among the parties under the Existing Agreement due to events preceding this Agreement other than those specifically contemplated herein and should be interpreted accordingly to the extent applicable.'</li><li>'Neither this Agreement nor any uncertainty or ambiguity herein shall be construed against the Lender Group or Borrower, whether under any rule of construction or otherwise. On the contrary, this Agreement has been reviewed by all parties and shall be construed and interpreted according to the ordinary meaning of the words used so as to accomplish fairly the purposes and intentions of all parties hereto.'</li></ul> |
| subsidiaries | <ul><li>'The Borrower owns, directly or indirectly, free and clear of any Lien (other than Liens expressly permitted by Section 6.01 or 6.02), all of the issued and outstanding shares of common stock of each of the Principal Subsidiaries.'</li><li>'The Company owns, directly or indirectly, all of the capital stock or other equity interests of each Subsidiary free and clear of any Liens, and all of the issued and outstanding shares of capital stock of each Subsidiary are validly issued and are fully paid, non-assessable and free of preemptive and similar rights to subscribe for or purchase securities.'</li><li>'Solely for the purposes of determining whether an Event of Default has occurred under clause (h), (i) or (l) of Section 7.01, any reference in any such clause to any Subsidiary shall be deemed not to include any Immaterial Subsidiary affected by any event or circumstance referred to in any such clause.'</li></ul> |
| solvency | <ul><li>'This Agreement may be immediately terminated in its entirety by a Party by providing written notice of termination to the other Party in the event of an Insolvency Event of the other Party.'</li><li>'The Seller is not insolvent, nor will the Seller be made insolvent by the transfer of the Receivables, nor does the Seller anticipate any pending insolvency.'</li><li>'As of the First Amendment and Restatement Effective Date, the Borrower and its Subsidiaries, on a consolidated basis, are Solvent.'</li></ul> |
| cooperation | <ul><li>'Upon a Party’s request, the other Party shall provide the prosecuting and maintaining Party with all reasonable assistance and cooperation in connection with its prosecution and maintenance of the applicable Patents, including by providing access to relevant persons and executing all documentation reasonably requested by the prosecuting and maintaining Party.'</li><li>'Each Party agrees, without further consideration, to cooperate and diligently perform any further acts, deeds and things and to execute and deliver any documents that may from time to time be reasonably necessary or otherwise reasonably required to consummate, evidence, confirm and/or carry out the intent and provisions of this Agreement, all without undue delay or expense.'</li><li>'Subject to your other commitments, you agree to reasonably cooperate (but only truthfully) with the Company and provide information as to matters which you were personally involved, or have information on, during your employment with the Company and which are or become the subject of litigation or other dispute. The Company shall pay for any reasonable out-of-pocket expenses incurred by you in connection with your performance of the obligations pursuant to this Section 18.'</li></ul> |
| approvals | <ul><li>'Other than as set forth on Schedule 1.4 , no Tricadia Group Entity is required to obtain any consent or approval from any Person or provide notice to any Person in connection with the execution, delivery and performance of this Agreement and the consummation by it of the transactions contemplated by this Agreement, except where any such failure would not be materially adverse to the Tricadia Business.'</li><li>'Except as previously obtained or made and as provided in Section 9.2(e) , no authorization, consent, approval, order, license or permit from, or filing, registration or qualification with, any Governmental Agency is or will be required to authorize or permit under applicable Laws the execution, delivery and performance by Borrower or any Subsidiary Guarantor of the Loan Documents to which it is a party (except where the failure to do so does not constitute a Material Adverse Effect).'</li><li>'The implementation of the Plan, the granting of any stock options under the Plan and the issuance of any shares of Common Stock (i) upon the exercise of any stock option or (ii) under the Stock Issuance Program shall be subject to the Corporation’s procurement of all approvals and permits required by regulatory authorities having jurisdiction over the Plan, the stock options granted under it and the shares of Common Stock issued pursuant to it.'</li></ul> |
| construction | <ul><li>'The parties hereto acknowledge and agree that the language of this Release Agreement shall be construed as a whole according to its fair meaning and not strictly for or against any of the parties.'</li><li>'The language used in this Agreement will be deemed to be the language chosen by the parties to express their mutual intent, and no rules of strict construction will be applied against any party.'</li><li>'The various captions and section headings in this Agreement are included for convenience only and shall not affect the meaning or interpretation of any provision of this Agreement. Notwithstanding anything to the contrary, in all cases, the use of the term “including” shall be construed as being inclusive and shall be deemed to mean “including, without limitation,”.'</li></ul> |
| intellectual property | <ul><li>'(a) Attached hereto as Schedule 11(a) is a schedule setting forth all of each Company’s Patents and Trademarks (each as defined in the Collateral Agreement) applied for or registered with the United States Patent and Trademark Office, and all other Patents and Trademarks (each as defined in the Collateral Agreement), including the name of the registered owner or applicant and the registration, application, or publication number, as applicable, of each Patent or Trademark owned by each Company.'</li><li>'(a) Attached hereto as Schedule 11(a) is a schedule setting forth all of the Company’s Patents and Trademarks (each as defined in the Collateral Agreement) applied for or registered with the United States Patent and Trademark Office, and all other Patents and Trademarks (each as defined in the Collateral Agreement), including the name of the registered owner or applicant and the registration, application, or publication number, as applicable, of each Patent or Trademark owned by the Company.'</li><li>'As of the Closing Date, the Company and each Principal Domestic Subsidiary own, or are licensed to use, all United States Intellectual Property necessary for the operation of their respective businesses as currently conducted and as proposed to be conducted, except where the failure to own or be licensed would not reasonably be expected to have a Material Adverse Effect.'</li></ul> |
| brokers | <ul><li>'No agent, broker, financial advisor or other intermediary acting on behalf of any Tricadia Group Entity or any of their Affiliates is, or will be, entitled to any broker’s commission, finder’s fees or similar payment from any of the parties hereto, or from any Affiliate of any of the parties hereto, in connection with the transactions contemplated by this Agreement.'</li><li>'The Company has taken no action which would give rise to any claim by any person for brokerage commissions, transaction fees or similar payments relating to this Agreement or the transactions contemplated hereby.'</li><li>'Neither the Company nor any Subsidiary or any related entities (i) is required to register as a “broker” or “dealer” in accordance with the provisions of the Exchange Act or (ii) directly or indirectly through one or more intermediaries, controls or is a “person associated with a member” or “associated person of a member” (within the meaning set forth in the FINRA Manual).'</li></ul> |
| enforceability | <ul><li>'The Borrower or any other Loan Party shall (or shall attempt to) disavow, revoke or terminate any Loan Document to which it is a party or shall otherwise challenge or contest in any action, suit or proceeding in any court or before any Governmental Authority the validity or enforceability of any Loan Document, or any Loan Document shall cease to be in full force and effect (except as a result of the express terms thereof).'</li><li>'The failure of the Participants or the Company to insist upon strict adherence to any term of the Plan on any occasion shall not be considered a waiver of such party’s rights or deprive such party of the right thereafter to insist upon strict adherence to that term or any other term of the Plan.'</li><li>'This Interim Order shall constitute findings of fact and conclusions of law pursuant to Bankruptcy Rule 7052 and shall take effect and be fully enforceable nunc pro tunc to the Petition Date immediately upon execution hereof. Any findings of fact shall constitute a finding of fact even if it is stated as a conclusion of law, and any conclusion of law shall constitute a conclusion of law even if it is stated as a finding of fact.'</li></ul> |
| authorizations | <ul><li>'The execution and performance of this Agreement have been duly authorized by all necessary action and do not and will not: (a) require any consent or approval of the members or stockholders of any entity, or the consent of any governmental entity, which in each case has not been obtained; or (b) violate any provision of any indenture, contract, agreement or instrument to which it is a party or by which it is bound.'</li><li>'Other than the filing of the financing statements required hereunder, no authorization or approval or other action by, and no notice to or filing with, any governmental authority or regulatory body is required for the due execution and delivery by the Seller of this Agreement and each other Transaction Document to which it is a party and the performance of its obligations hereunder and thereunder.'</li><li>'No authorization or approval or other action by, and no notice to or filing with, any governmental authority or regulatory body is required for the due execution and delivery by the Servicer of this Agreement and each other Transaction Document to which it is a party and the performance of its obligations hereunder and thereunder in its capacity as Servicer.'</li></ul> |
| consents | <ul><li>'Other than as set forth on Schedule 1.4 , no Tricadia Group Entity is required to obtain any consent or approval from any Person or provide notice to any Person in connection with the execution, delivery and performance of this Agreement and the consummation by it of the transactions contemplated by this Agreement, except where any such failure would not be materially adverse to the Tricadia Business.'</li><li>'Each Lender hereby consents to the Lids Disposition, and the Agent hereby waives any notices required or that will be required as a result of the Lids Disposition, including, without limitation, notices pursuant to Section 5.3 of the Credit Agreement.'</li><li>'Newmont headquarters is located at 6363 South Fiddler’s Green Circle, Suite 800, Greenwood Village, Colorado 80111 U.S.A., and grants awards to employees of Newmont and its Subsidiaries, at Newmont’s sole discretion. If Employee would like to participate in the Plan, please review the following information about Newmont’s data processing practices and declare Employee’s consent.'</li></ul> |
| tax withholdings | <ul><li>'The Company shall have the right to deduct from any payment hereunder all taxes (federal, state or other) which it is required to withhold therefrom.'</li><li>'The Company may withhold from any benefits payable under this Plan all federal, state, city or other taxes as may be required pursuant to any law or governmental regulation or ruling.'</li><li>'Any payments provided for hereunder shall be paid net of any applicable tax withholding required under federal, state or local law.'</li></ul> |
| arbitration | <ul><li>'The Parties agree that any and all disputes arising out of, or relating to, the terms of this Release, their interpretation, and any of the matters herein released, shall be subject to binding arbitration as described in Section 9(c) of the Employment Agreement.'</li><li>'The Parties agree that any dispute or controversy arising out of, relating to, or concerning the interpretation, construction, performance, or breach of this Agreement will be settled by arbitration to be held in Multnomah County, Oregon, in accordance with the terms and conditions of the Confidentiality Agreement.'</li><li>'This Award Certificate shall be governed by, and construed in accordance with, the laws of the State of California (disregarding any choice-of-law provisions). If the Participant is a party to an agreement with the Corporation to arbitrate claims, such agreement to arbitrate claims shall apply as to any dispute or disagreement regarding the Participant’s rights under this Award Certificate.'</li></ul> |
| transactions with affiliates | <ul><li>'Directly or indirectly enter into or permit to exist any transaction with any Affiliate of Borrower except for transactions that (i) are in the ordinary course of Borrower’s business, (ii) are upon fair and reasonable terms, (iii) are fully disclosed to Agent, and (iv) are no less favorable to Borrower or its Subsidiaries, as applicable, than would be obtained in an arm’s length transaction with a non-Affiliate.'</li><li>'Except as set forth in the SEC Documents, to the knowledge of the Company, none of the Company’s stockholders, officers or directors or any family member or affiliate of any of the foregoing, has either directly or indirectly an interest in, or is a party to, any transaction that is required to be disclosed as a related party transaction pursuant to Item 404 of Regulation S-K promulgated under the Securities Act.'</li><li>'Neither the REIT nor any of its Subsidiaries is a party to any transaction, arrangement or contract (including any lease or other rental agreement) with any of its Affiliates other than as permitted by Section 9.10 hereof.'</li></ul> |
| applicable laws | <ul><li>'THIS AMENDMENT AND THE RIGHTS AND OBLIGATIONS OF THE PARTIES HEREUNDER SHALL BE GOVERNED BY, AND SHALL BE CONSTRUED AND ENFORCED IN ACCORDANCE WITH, THE LAWS OF THE STATE OF NEW YORK.'</li><li>'The Requisite Lenders may direct the Agent to, and the Agent if so directed shall, exercise all other rights and remedies it may have under any Applicable Law.'</li><li>'THIS AGREEMENT AND THE OTHER LOAN DOCUMENTS (OTHER THAN LETTERS OF CREDIT AND AS EXPRESSLY SET FORTH IN OTHER LOAN DOCUMENTS) SHALL BE CONSTRUED IN ACCORDANCE WITH AND GOVERNED BY THE LAWS OF THE STATE OF NEW YORK WITHOUT REGARD TO THE CONFLICT OF LAWS PRINCIPLES THEREOF.'</li></ul> |
| defined terms | <ul><li>'As used in this Agreement, the terms listed in this Section 1.1 shall have the respective meanings set forth in this Section 1.1.'</li><li>'Unless otherwise defined herein, capitalized terms or matters of construction defined or established in the Loan Agreement shall be applied herein as defined or established therein.'</li><li>'Except as otherwise indicated herein, all words and terms defined in the Existing Agreement shall have the same meanings when used herein.'</li></ul> |
| change in control | <ul><li>'Upon a Change in Control that occurs during the Performance Period and prior to the Participant’s Termination due to death, Disability or Retirement, for purposes of determining the number of earned Shares under the Performance Share Units, the closing date of the transaction that constitutes the Change in Control (the “ Change in Control Date ”) shall be deemed the Last Day of the Performance Period .'</li><li>'In accordance with Section 10.1(a) of the Plan, in the event of a Change in Control, the RSUs shall vest immediately prior to the time of such Change in Control, except to the extent that the RSUs are replaced with a Replacement Award. If the RSUs are replaced with a Replacement Award, then from and after the Change in Control, references herein to "RSUs" shall be deemed to refer to the Replacement Award.'</li><li>'In the event of a Change in Control, the Eligible Employee shall immediately be fully vested in his or her benefit under the Plan.'</li></ul> |
| no defaults | <ul><li>'No Default or Event of Default shall have occurred and be continuing.'</li><li>'No Default or Event of Default has occurred and is continuing or would result from the consummation of the transactions contemplated by this Agreement or any other Loan Document.'</li><li>'No Default or Event of Default other than the Interest Default shall have occurred and be continuing as of the date the condition set forth in Section\xa03(a) is satisfied.'</li></ul> |
| adjustments | <ul><li>'Participant acknowledges that the Option is subject to adjustment, modification and termination in certain events as provided in this Agreement and the Plan.'</li><li>'Participant acknowledges that the Option is subject to adjustment, modification and termination in certain events as provided in this UK Option Agreement and the UK Sub-Plan.'</li><li>'The parties acknowledge and agree that all share-related numbers contained in this Agreement shall be adjusted to take into account any reorganization, recapitalization, non-cash dividend, stock split or other similar transaction effected with respect to the Common Stock except as specifically stated herein.'</li></ul> |
| non-disparagement | <ul><li>'Each Participant agrees that, following any termination of his or her employment with the Company, such Participant will not disparage, orally or in writing, the Company, the management of the Company, any product or service provided by the Company or the future prospects of the Company.'</li><li>'Executive agrees to refrain from any disparagement, defamation, libel, or slander of any of the Releasees, and agrees to refrain from any tortious interference with the contracts and relationships of any of the Releasees.'</li><li>'Ms. Meyerrose agrees that she will not make any derogatory or disparaging statements about the Company or its present or former agents, employees, officers, or directors. Officers of the Company with knowledge of this Agreement agree that they will not make any derogatory or disparaging statements about Ms. Meyerrose.'</li></ul> |
| employment | <ul><li>'Nothing expressed or implied in this Agreement will create any right or duty on the part of the Company or the Executive to have the Executive remain in the employment of the Company or any Subsidiary prior to or following any Change in Control or otherwise.'</li><li>'This Plan shall not be deemed to create a contract of employment between any Participant and the Company and/or its Affiliates. Nothing contained in the Plan shall (a) confer upon any Participant any right with respect to continuation of employment with the Company or (b) subject to the rights and benefits of any Participant hereunder, interfere in any way with the right of the Company to terminate such Participant’s employment at any time.'</li><li>'Nothing in this Plan gives any Participant the right to be retained in the service of the Company, nor will it interfere with the right of the Company to discharge or otherwise deal with Participants without regard to the existence of this Plan.'</li></ul> |
| positions | <ul><li>'Chief Executive Officer and President. Executive shall report in such capacity to the Board.'</li><li>'Chief Financial Officer. Executive shall report in such capacity to Company’s Chief Executive Officer.'</li><li>'The Motion is granted on an interim basis in accordance with the terms of this Interim Order. Any objections to the Motion with respect to the entry of the Interim Order that have not been withdrawn, waived or settled are hereby denied and overruled on their merits.'</li></ul> |
| erisa | <ul><li>'No ERISA Default has occurred and is continuing.'</li><li>'ERISA means the Employee Retirement Income Security Act of 1974, as amended from time to time.'</li><li>'The Servicer shall give the Facility Agent and each Lender prompt written notice of any event that results in the imposition of a Lien on the Collateral under Section 430 of the Code or Section 303(k) or 4068 of ERISA. The Servicer shall not, and shall not cause or permit any of its Affiliates to, cause or permit to occur an event that results in the imposition of a Lien on the Collateral under Section 430 of the Code or Section 303(k) or 4068 of ERISA.'</li></ul> |
| warranties | <ul><li>'Each Guarantor hereby makes to the Administrative Agent and the other Guarantied Parties all of the representations and warranties made by the Borrower with respect to or in any way relating to such Guarantor in the Loan Agreement and the other Loan Documents, as if the same were set forth herein in full.'</li><li>'The Seller has determined that this Agreement is effective to transfer to the Administrative Agent, the Managing Agents and the Purchasers, as assignees of the Seller, the full benefit of and a direct claim against LKQ, as Servicer, and each Originator in respect of each representation or warranty made by LKQ, as Servicer, and each Originator under any Transaction Document.'</li><li>'EXCEPT AS EXPRESSLY SET FORTH IN THIS TSA, SERVICE PROVIDER MAKES NO WARRANTY, EXPRESS OR IMPLIED, AND HEREBY DISCLAIMS ANY WARRANTIES OF ANY KIND WITH RESPECT TO THE NATURE OR QUALITY OF THE TRANSITION SERVICES TO BE PROVIDED BY SERVICE PROVIDER OR THE RESULTS THAT WILL BE OBTAINED BY USING OR APPLYING SUCH TRANSITION SERVICES, INCLUDING ANY WARRANTY OR CONDITION OF NONINFRINGEMENT, MERCHANTABILITY, ACCURACY, SATISFACTORY QUALITY, OR FITNESS FOR ANY PARTICULAR PURPOSE.'</li></ul> |
| disability | <ul><li>'If Executive’s employment shall be terminated by reason of Executive’s death or Disability, then the Company will provide Executive with the Accrued Obligations. Thereafter, the Company shall have no further obligation to Executive or Executive’s legal representatives.'</li><li>'In the event the employment of a Participant is terminated by the Company for Cause or due to the death or Disability of the Participant no severance benefits will be payable pursuant to the Plan.'</li><li>'If your employment with or service to the Company, a Subsidiary or an Affiliate terminates by reason of Disability, this Stock Option shall become fully vested and exercisable and may thereafter be exercised by you (or your legal representative or similar person) until the date which is one year after the effective date of your termination of employment or service, or if earlier, the expiration date of the term of this Stock Option.'</li></ul> |
| interests | <ul><li>'Interest shall accrue on the principal balance hereof at a fixed rate of 7.25% per annum.'</li><li>'Interest shall accrue on the principal balance hereof at a fixed rate of 8.50% per annum.'</li><li>'Interest shall accrue on the then outstanding balance of the Principal Amount at a fixed interest rate per annum equal to 8%. Accrued interest shall be payable in cash in arrears on the last day of each calendar quarter, with first interest payment to commence on June 30, 2019, until the Principal Amount is paid in full. If at any time the outstanding Principal Amount shall be paid in full, then all accrued interest shall be payable at the time of such principal payment.'</li></ul> |
| duties | <ul><li>'The Administrative Agent may execute any of its duties under this Agreement and the other Loan Documents by or through agents or attorneys-in-fact and shall be entitled to advice of counsel concerning all matters pertaining to such duties. The Administrative Agent shall not be responsible for the negligence or misconduct of any agents or attorneys-in-fact selected by it with reasonable care.'</li><li>'Agent may execute any of its duties under this Agreement or any other Loan Document by or through agents, employees or attorneys-in-fact and shall be entitled to advice of counsel concerning all matters pertaining to such duties. Agent shall not be responsible for the negligence or misconduct of any agent or attorney-in-fact that it selects as long as such selection was made without gross negligence or willful misconduct.'</li><li>'The Agent may execute any of its respective duties under this Agreement or the other Transaction Documents by or through agents or attorneys in fact and shall be entitled to advice of counsel concerning all matters pertaining to such duties. The Agent shall not be responsible for the negligence or misconduct of any agents or attorneys in fact selected by the Agent with reasonable care.'</li></ul> |
| specific performance | <ul><li>'Each First Lien Agent may demand specific performance of this Agreement. Each Second Priority Agent, on behalf of itself and each applicable Second Priority Secured Party, hereby irrevocably waives any defense based on the adequacy of a remedy at law and any other defense that might be asserted to bar the remedy of specific performance in any action that may be brought by any First Lien Agent.'</li><li>'The parties recognize that if any provision of this Agreement is violated by the Company, Indemnitee may be without an adequate remedy at law. Accordingly, in the event of any such violation, Indemnitee shall be entitled, if Indemnitee so elects, to institute proceedings, either in law or at equity, to obtain damages, to enforce specific performance, to enjoin such violation, or to obtain any relief or any combination of the foregoing as Indemnitee may elect to pursue.'</li><li>'The parties hereto recognize that if any provision of this Agreement is violated by the Company, Indemnitee may be without an adequate remedy at law. Accordingly, in the event of any such violation, Indemnitee shall be entitled, if Indemnitee so elects, to institute proceedings, either in law or at equity, to obtain damages, to enforce specific performance, to enjoin such violation, or to obtain any relief or any combination of the foregoing as Indemnitee may elect to pursue.'</li></ul> |
| anti-corruption laws | <ul><li>'The Borrower will not, and will not permit any of its Subsidiaries to, fail to maintain in effect and enforce policies and procedures designed to ensure compliance by the Borrower, its Subsidiaries and their respective directors, officers, employees and agents with Anti-Corruption Laws and applicable Sanctions.'</li><li>'Conduct its business in compliance with applicable anti-corruption laws and maintain policies and procedures designed to promote and achieve compliance with such laws.'</li><li>'None of the Loan Parties or their Subsidiaries have breached the United States Foreign Corrupt Practices Act of 1977, the UK Bribery Act 2010, or any other similar anti-corruption legislation in other jurisdictions the effect of which breach is or could reasonably be expected to be material to the Loan Parties, taken as a whole, and the Loan Parties and their Subsidiaries have instituted and maintained policies and procedures designed to promote and achieve compliance with such laws.'</li></ul> |
| vacations | <ul><li>'During the Employment Period, the Executive shall be entitled to paid vacation in accordance with the most favorable plans, policies, programs and practices of the Company and its affiliated companies.'</li><li>'During the Employment Period, the Executive shall be entitled to paid vacation in accordance with the plans, policies, programs and practices of the Company and its affiliated companies.'</li><li>'During the Employment Period, the Employee shall be entitled to paid vacation in accordance with the plans, policies, programs and practices of the Company and its affiliated companies.'</li></ul> |
| generally | <ul><li>'The Customer Support Services will be provided by the following types of Customer Support Agents: [***]. Bank will provide agents for future, mutually agreed upon and approved channels.'</li><li>'Except as otherwise provided in this Section 3, the RSUs subject to this Award shall become vested in accordance with the Vesting Schedule.'</li><li>'Except as otherwise provided in this Section 3, the PRSUs subject to this Award shall become vested in accordance with the Performance Vesting Conditions; provided that the Participant remains continuously employed by the Company or an Affiliate from the Grant Date through the Vesting Date set forth above.'</li></ul> |
| publicity | <ul><li>'The parties agree that a public announcement and/or similar publicity with respect to the transactions contemplated hereby will be issued by the BDC following the date hereof. The contents of such announcement and/or publicity by the BDC will be subject to the approval of Trinity (such approval not to be unreasonably withheld). For the avoidance of doubt, any such announcement and/or publicity may be transmitted by the BDC by email to its general contacts.'</li><li>'Consultant may not publish or refer to Work Product, in whole or in part, without the prior express written consent of AVROBIO. Consultant will not use the name, logo, trade name, service mark, or trademark, or any simulation, abbreviation, or adaptation of same, or the name of AVROBIO or any of its affiliates for publicity, promotion, or other uses without AVROBIO’s prior written consent.'</li><li>'Neither party may issue a press release, public announcement, advertisement or other form of publicity concerning the existence of this Agreement or the terms of this Agreement without obtaining the prior written consent of the other party, provided that the Company may make disclosure pursuant to its obligations under applicable securities laws and regulations and/or requirements of the New York Stock Exchange.'</li></ul> |
| choice of laws | <ul><li>'THE VALIDITY, CONSTRUCTION AND ENFORCEABILITY OF THIS NOTE SHALL BE GOVERNED BY THE INTERNAL LAWS OF THE STATE OF MINNESOTA, WITHOUT GIVING EFFECT TO CONFLICT OF LAWS PRINCIPLES THEREOF.'</li><li>'This Agreement and the Notice of Restricted Stock Grant shall be governed by, and construed in accordance with, the laws of the State of Delaware, without regard to any conflicts of law or choice of law rule or principle that might otherwise cause the Plan, this Agreement or the Notice of Restricted Stock Grant to be governed by or construed in accordance with the substantive law of another jurisdiction.'</li><li>'This Agreement shall be construed and enforced in accordance with the laws of the State of Colorado, notwithstanding any state’s choice-of-law rules to the contrary.'</li></ul> |
| liens | <ul><li>'Except for the conveyances hereunder, Seller will not sell, pledge, assign or transfer to any other Person, or grant, create, incur, assume or suffer to exist any Lien on the Receivables or the Other Conveyed Property or any interest therein, and Seller shall defend the right, title, and interest of Purchaser and the Issuer in and to the Receivables and the Other Conveyed Property against all claims of third parties claiming through or under Seller.'</li><li>'No Credit Party shall, and no Credit Party shall permit any of its Subsidiaries to, directly or indirectly, allow or suffer to exist any Liens, other than Permitted Liens.'</li><li>'The Administrator will not directly or indirectly create, suffer or allow to exist any Lien on the Collateral other than Permitted Liens.'</li></ul> |
| death | <ul><li>'In the event of termination due to death or Disability, Executive or his legal representative shall be entitled to any Base Compensation earned through the last date of employment. In addition, Executive will remain eligible for all applicable benefits relative to death or disability pursuant to the plans, if any, in place at the time.'</li><li>'If Participant’s Employment terminates under circumstances described in Section 3(a), then upon Participant’s subsequent death, all unpaid amounts payable to Participant under Section 3(a)(i), (ii), (iii) or (vi), if any, shall be paid to Participant’s Beneficiary.'</li><li>'The Executive’s employment hereunder shall terminate upon her death.'</li></ul> |
| purposes | <ul><li>'The Seller has determined that, from a business viewpoint, the purchase of the Receivables and related interests thereto from the Originators under the Receivables Sale Agreement , and the sale of Purchaser Interests to the Administrative Agent, for the benefit of the Purchasers, and the other transactions contemplated herein, are in the best interests of the Seller.'</li><li>'The Program established pursuant to this Agreement will allow customers of Company, through Bank’s standard and customized technology and financial products and services (including the establishment of T-Mobile Customer Accounts, the issuance of Cards and other financial products and services, as further described herein), to receive and use the T-Mobile Financial Services.'</li><li>'The purpose of the Fund shall be to make loans, and purchase assignments or participations in loans that have already been made (in either case, “ Underlying Loans ”), either directly or indirectly through subsidiaries or other Persons, and to engage in any other lawful business.'</li></ul> |
| information | <ul><li>'Each Lender shall have received, on or prior to the Closing Date, all documentation and other information reasonably requested by such Lender that is required by bank regulatory authorities under applicable “know your customer,” anti-money laundering and foreign asset control rules and regulations and any other compliance or regulatory considerations applicable to such Lender (including the Patriot Act), including the information described in Section 10.19.'</li><li>'The Agent shall periodically deliver to the Revolving Lenders information setting forth the Stated Amount of all outstanding Letters of Credit. Other than as set forth in this subsection, the Agent shall have no duty to notify the Revolving Lenders regarding the issuance or other matters regarding Letters of Credit issued hereunder. The failure of the Agent to perform its requirements under this subsection shall not relieve any Revolving Lender from its obligations under Section 2.5.(j).'</li><li>'From time to time and promptly upon each request, such data, certificates, reports, statements, opinions of counsel, documents or further information regarding the business, assets, liabilities, financial condition, results of operations or business prospects of the Borrower, any other Loan Party or any other Subsidiary as the Agent or any Lender may reasonably request.'</li></ul> |
| compensation | <ul><li>'The Executive will be entitled to incentive compensation and bonuses as provided below, and in any other plan of the Bank in which Executive is eligible to participate.'</li><li>'The compensation to be paid by Bank to Executive from time to time, including any fringe benefits or other employee benefits, shall not be governed by this Agreement. This Agreement shall not be deemed to affect the terms of any stock options, employee benefits or other agreements between the Bank and Executive.'</li><li>'The Managers will not receive any compensation. However, the Managers shall be reimbursed by the Fund for their reasonable out-of-pocket expenses, if any, of attendance at meetings of the Board of Managers.'</li></ul> |
| consent to jurisdiction | <ul><li>'The jurisdiction, services of process and waiver of jury trial provisions set forth in Sections 9.05 and 9.06 of the Credit Agreement are hereby incorporated by reference, mutatis mutandis .'</li><li>'Any action or proceeding arising out of or relating to this Agreement shall be filed in and heard and litigated solely before the state or federal courts of Washington within King County.'</li><li>'Each of the parties hereto irrevocably consents to personal jurisdiction in any action brought in connection with this Agreement in the United States District Court for the Central District of California or any California court of competent jurisdiction. The parties also consent to venue in the above forums and to the convenience of the above forums. Any suit brought to enforce the provisions of this Agreement must be brought in the aforementioned forums.'</li></ul> |
| successors | <ul><li>'This Agreement shall be binding upon and shall inure to the benefit of the parties hereto and their respective successors and assigns, except that the Borrower may not assign or transfer its rights hereunder without the prior written consent of both Lenders.'</li><li>'This Agreement shall be binding upon and inure to the benefit of the successors and assigns of Grantor and Collateral Agent.'</li><li>'This Agreement shall be binding upon the First Lien Agents, the Senior Secured Parties, the Second Priority Agents, the Second Priority Secured Parties and their respective permitted successors and assigns.'</li></ul> |
| limitation of liability | <ul><li>'No provision hereof, in the absence of any affirmative action by the Holder to exercise this Warrant to purchase Warrant Shares, and no enumeration herein of the rights or privileges of the Holder, shall give rise to any liability of the Holder for the purchase price of any Common Stock or as a stockholder of the Company, whether such liability is asserted by the Company or by creditors of the Company.'</li><li>'No provision hereof, in the absence of any affirmative action by Holder to exercise this Warrant to purchase Warrant Shares, and no enumeration herein of the rights or privileges of Holder, shall give rise to any liability of Holder for the purchase price of any Common Stock or as a stockholder of the Company, whether such liability is asserted by the Company or by creditors of the Company.'</li><li>'The Limited Partners shall have no liability under this Agreement (other than for breach thereof) except as expressly provided in Section 10.04, 13.02(d) or under the Act.'</li></ul> |
| books | <ul><li>'The Company shall and shall cause each other Loan Party to keep proper books of records and account in which entries are made in a manner so as to permit preparation of financial statements in conformity with GAAP (or, in the case of any Foreign Subsidiary, generally accepted accounting principles in effect in the jurisdiction of organization of such Foreign Subsidiary).'</li><li>'The Company will not close its stockholder books or records in any manner which prevents the timely exercise of this Warrant, pursuant to the terms hereof.'</li><li>'Keep adequate records and books of account reflecting all financial transactions in conformity in all material respects with GAAP, consistently applied, and in conformity in all material respects with all applicable requirements of any Governmental Agency having regulatory jurisdiction over Borrower and its Restricted Subsidiaries.'</li></ul> |
| exercise price | <ul><li>'The exercise price per Warrant Share under this Warrant shall be $3.125, subject to adjustment hereunder (the “Exercise Price”).'</li><li>'Whenever the Exercise Price is adjusted pursuant to any provision of this Section\xa03, the Company shall promptly deliver to the Holder by facsimile or email a notice setting forth the Exercise Price after such adjustment and any resulting adjustment to the number of Warrant Shares and setting forth a brief statement of the facts requiring such adjustment.'</li><li>'Each Award Agreement shall state the Exercise Price, if applicable. Subject to Sections 3, 7.2 and 8.2 and to the foregoing, the Committee may reduce the Exercise Price of any outstanding Award, on terms and subject to such conditions as it deems advisable. The Exercise Price shall also be subject to adjustment as provided in Section 14 hereof.'</li></ul> |
| register | <ul><li>'The registered agent and office of the Fund shall be as provided in the Fund’s certificate of formation, or as otherwise determined by the Board of Managers.'</li><li>'The Company shall register this Warrant, upon records to be maintained by the Company for that purpose (the “ Warrant Register ”), in the name of the record Holder hereof from time to time. The Company may deem and treat the registered Holder of this Warrant as the absolute owner hereof for the purpose of any exercise hereof or any distribution to the Holder, and for all other purposes, absent actual notice to the contrary.'</li><li>'Upon its receipt of an agreement referred to in clause (ii)(y) above executed by an Assuming Lender or any Increasing Lender, together with the certificate referred to in clause (ii)(x) above, the Administrative Agent shall, if such agreement has been completed, (x) accept such agreement, (y) record the information contained therein in the Register and (z) give prompt notice thereof to the Borrower.'</li></ul> |
| powers | <ul><li>'The execution and delivery by the Servicer of this Agreement and each other Transaction Document to which it is a party, and the performance of its obligations hereunder and thereunder are within its corporate powers and authority and have been duly authorized by all necessary corporate action on its part. This Agreement and each other Transaction Document to which the Servicer is a party has been duly executed and delivered by the Servicer.'</li><li>'Purchaser has the power, authority and legal right to execute and deliver this Agreement and to carry out the terms hereof and to acquire the Receivables and the Other Conveyed Property hereunder; and the execution, delivery and performance of this Agreement and all of the documents required pursuant hereto have been duly authorized by Purchaser by all necessary corporate action.'</li><li>'The Company has all requisite power and authority to execute, deliver and perform its obligations under this Agreement, the Note and any other documents or items executed in connection with the transactions contemplated herein (collectively, the “Transaction Documents”) and to consummate the transactions contemplated hereby and thereby.'</li></ul> |
| good standings | <ul><li>'Seller has been duly organized and is validly existing as a corporation in good standing under the laws of the State of Delaware, with power and authority to own its properties and to conduct its business as such properties are currently owned and such business is currently conducted, and had at all relevant times, and now has, power, authority and legal right to acquire, own and sell the Receivables and the Other Conveyed Property to be transferred to Purchaser.'</li><li>'TI is a legal entity duly organized, validly existing and in good standing under the Laws of the Cayman Islands and has all requisite corporate power to enter into this Agreement and to carry its business as it has been and is currently conducted.'</li><li>'The Seller has been duly organized and is validly existing as a corporation in good standing under the laws of its jurisdiction of organization, with power and authority to own its properties and to conduct its business as such properties are currently owned and such business is currently conducted.'</li></ul> |
| transferability | <ul><li>'Except as expressly provided in the Plan or this Agreement, the RSUs may not be sold, assigned, transferred, pledged or otherwise disposed of, shall not be assignable by operation of law, and shall not be subject to execution, attachment or similar process, except by will or the laws of descent and distribution. Any attempted sale, assignment, transfer, pledge or other disposition of any RSU prior to vesting shall be null and void and without effect.'</li><li>'Except as expressly provided in the Plan or this Agreement, the RSUs may not be sold, assigned, transferred, pledged or otherwise disposed of, shall not be assignable by operation of law and shall not be subject to execution, attachment or similar process, except by will or the laws of descent and distribution. Any attempted sale, assignment, transfer, pledge or other disposition of any RSU prior to vesting shall be null and void and without effect.'</li><li>'To the maximum extent permitted by law, no benefit under the Plan may be assignable or subject in any manner to alienation, sale, transfer, claims of creditors, pledge, attachment, or encumbrances of any kind.'</li></ul> |
| permits | <ul><li>'Neither any Credit Party nor any of their Subsidiaries is in violation of any term of or in default under its certificate or articles of incorporation or bylaws or other governing documents. Neither any Credit Party nor any of their Subsidiaries is in violation of any judgment, decree or order or any law, rule, regulation, statute or ordinance applicable to any Credit Party or any of their Subsidiaries (including, without limitation, all Environmental Laws and the Requirements).'</li><li>'The Company has all certificates of occupancy, rights, permits, certificates, licenses, franchises, approvals and other authorizations as are reasonably necessary to conduct its respective business and to own, lease, use, operate and occupy its assets, at the places and in the manner now conducted and operated, except those the absence of which would not materially adversely affect its respective business.'</li><li>'Seller has received no written notice of any violations which remain uncured of any licenses and permits affecting any Property.'</li></ul> |
| existence | <ul><li>'The Company shall continue to engage primarily in the automotive business and preserve, renew and keep in full force and effect its organizational existence and take all reasonable actions to maintain all rights necessary for the normal conduct of its principal line of business, except, in each case, (i) to the extent that failure to do so would not have a Material Adverse Effect and (ii) as otherwise permitted or provided in the Loan Documents.'</li><li>'No Credit Party shall, and no Credit Party shall permit any of its Subsidiaries to, directly or indirectly, allow or suffer to exist any Liens, other than Permitted Liens.'</li><li>'So long as the Buyer beneficially owns the Note, the Company shall maintain its corporate existence and shall not sell all or substantially all of the Company’s assets, except in the event of a merger or consolidation or sale of all or substantially all of the Company’s assets, where the surviving or successor entity in such transaction assumes the Company’s obligations hereunder and under the agreements and instruments entered into in connection herewith.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9425 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("scholarly360/setfit-contracts-clauses")
# Run inference
preds = model("In the event of a Change in Control, the Eligible Employee shall immediately be fully vested in his or her benefit under the Plan.")
```
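Calling `model(...)` returns only the single best label out of the roughly one hundred clause classes listed below. When triaging contracts it can be more useful to look at the top few candidates via `model.predict_proba`. The ranking helper below is a minimal sketch in plain Python; the `labels` and `probs` values shown are hypothetical stand-ins for `model.labels` and one row of `model.predict_proba([...])`, not actual output of this model.

```python
def top_k_labels(labels, probs, k=3):
    """Return the k highest-probability (label, prob) pairs, best first."""
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Hypothetical scores for three clause classes, e.g. taken from
# probs = model.predict_proba(["..."])[0] on a loaded SetFitModel.
labels = ["change in control", "vesting", "terminations"]
probs = [0.81, 0.12, 0.07]

print(top_k_labels(labels, probs, k=2))
# [('change in control', 0.81), ('vesting', 0.12)]
```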
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 8 | 48.2975 | 87 |
| Label | Training Sample Count |
|:-----------------------------|:----------------------|
| governing laws | 4 |
| counterparts | 4 |
| notices | 4 |
| entire agreements | 4 |
| severability | 4 |
| waivers | 4 |
| amendments | 4 |
| expenses | 4 |
| survival | 4 |
| representations | 4 |
| assigns | 4 |
| taxes | 4 |
| litigations | 4 |
| insurances | 4 |
| confidentiality | 4 |
| waiver of jury trials | 4 |
| terminations | 4 |
| further assurances | 4 |
| general | 4 |
| terms | 4 |
| assignments | 4 |
| authority | 4 |
| use of proceeds | 4 |
| payments | 4 |
| compliance with laws | 4 |
| no conflicts | 4 |
| indemnifications | 4 |
| organizations | 4 |
| base salary | 4 |
| binding effects | 4 |
| headings | 4 |
| costs | 4 |
| definitions | 4 |
| modifications | 4 |
| remedies | 4 |
| releases | 4 |
| disclosures | 4 |
| participations | 4 |
| vesting | 4 |
| no waivers | 4 |
| withholdings | 4 |
| miscellaneous | 4 |
| jurisdictions | 4 |
| closings | 4 |
| integration | 4 |
| fees | 4 |
| effective dates | 4 |
| enforcements | 4 |
| financial statements | 4 |
| capitalization | 4 |
| benefits | 4 |
| interpretations | 4 |
| subsidiaries | 4 |
| solvency | 4 |
| cooperation | 4 |
| approvals | 4 |
| construction | 4 |
| intellectual property | 4 |
| brokers | 4 |
| enforceability | 4 |
| authorizations | 4 |
| consents | 4 |
| tax withholdings | 4 |
| arbitration | 4 |
| transactions with affiliates | 4 |
| applicable laws | 4 |
| defined terms | 4 |
| change in control | 4 |
| no defaults | 4 |
| adjustments | 4 |
| non-disparagement | 4 |
| employment | 4 |
| positions | 4 |
| erisa | 4 |
| warranties | 4 |
| disability | 4 |
| interests | 4 |
| duties | 4 |
| specific performance | 4 |
| anti-corruption laws | 4 |
| vacations | 4 |
| generally | 4 |
| publicity | 4 |
| choice of laws | 4 |
| liens | 4 |
| death | 4 |
| purposes | 4 |
| information | 4 |
| compensation | 4 |
| consent to jurisdiction | 4 |
| successors | 4 |
| limitation of liability | 4 |
| books | 4 |
| exercise price | 4 |
| register | 4 |
| powers | 4 |
| good standings | 4 |
| transferability | 4 |
| permits | 4 |
| existence | 4 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
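For reference, the settings above map onto SetFit's `TrainingArguments`, where tuple values apply to the embedding-finetuning phase and the classifier phase respectively. A configuration sketch (assuming setfit >= 1.0; this is an illustration of the listed hyperparameters, not the exact training script used for this model):

```python
from setfit import TrainingArguments

# Hyperparameters from this card; tuples are (embedding phase, classifier phase).
# The loss defaults to CosineSimilarityLoss, matching the card.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(2, 2),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    end_to_end=False,
    use_amp=False,
    seed=42,
    load_best_model_at_end=True,
)
```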
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.1159 | - |
| 0.0051 | 50 | 0.1675 | - |
| 0.0101 | 100 | 0.1142 | - |
| 0.0152 | 150 | 0.1509 | - |
| 0.0202 | 200 | 0.0455 | - |
| 0.0253 | 250 | 0.0999 | - |
| 0.0303 | 300 | 0.1259 | - |
| 0.0354 | 350 | 0.0873 | - |
| 0.0404 | 400 | 0.0993 | - |
| 0.0455 | 450 | 0.0457 | - |
| 0.0505 | 500 | 0.0835 | - |
| 0.0556 | 550 | 0.0809 | - |
| 0.0606 | 600 | 0.0821 | - |
| 0.0657 | 650 | 0.0603 | - |
| 0.0707 | 700 | 0.0502 | - |
| 0.0758 | 750 | 0.0532 | - |
| 0.0808 | 800 | 0.06 | - |
| 0.0859 | 850 | 0.1101 | - |
| 0.0909 | 900 | 0.036 | - |
| 0.0960 | 950 | 0.0287 | - |
| 0.1010 | 1000 | 0.0501 | - |
| 0.1061 | 1050 | 0.0405 | - |
| 0.1111 | 1100 | 0.0327 | - |
| 0.1162 | 1150 | 0.0315 | - |
| 0.1212 | 1200 | 0.022 | - |
| 0.1263 | 1250 | 0.0346 | - |
| 0.1313 | 1300 | 0.0782 | - |
| 0.1364 | 1350 | 0.0353 | - |
| 0.1414 | 1400 | 0.0225 | - |
| 0.1465 | 1450 | 0.0134 | - |
| 0.1515 | 1500 | 0.0791 | - |
| 0.1566 | 1550 | 0.015 | - |
| 0.1616 | 1600 | 0.0093 | - |
| 0.1667 | 1650 | 0.024 | - |
| 0.1717 | 1700 | 0.0062 | - |
| 0.1768 | 1750 | 0.0245 | - |
| 0.1818 | 1800 | 0.0102 | - |
| 0.1869 | 1850 | 0.0086 | - |
| 0.1919 | 1900 | 0.0238 | - |
| 0.1970 | 1950 | 0.0062 | - |
| 0.2020 | 2000 | 0.0382 | - |
| 0.2071 | 2050 | 0.0107 | - |
| 0.2121 | 2100 | 0.0045 | - |
| 0.2172 | 2150 | 0.009 | - |
| 0.2222 | 2200 | 0.0062 | - |
| 0.2273 | 2250 | 0.0217 | - |
| 0.2323 | 2300 | 0.0089 | - |
| 0.2374 | 2350 | 0.0048 | - |
| 0.2424 | 2400 | 0.0095 | - |
| 0.2475 | 2450 | 0.0137 | - |
| 0.2525 | 2500 | 0.0077 | - |
| 0.2576 | 2550 | 0.0086 | - |
| 0.2626 | 2600 | 0.0068 | - |
| 0.2677 | 2650 | 0.0063 | - |
| 0.2727 | 2700 | 0.0061 | - |
| 0.2778 | 2750 | 0.0181 | - |
| 0.2828 | 2800 | 0.0058 | - |
| 0.2879 | 2850 | 0.0052 | - |
| 0.2929 | 2900 | 0.0073 | - |
| 0.2980 | 2950 | 0.0088 | - |
| 0.3030 | 3000 | 0.0388 | - |
| 0.3081 | 3050 | 0.0108 | - |
| 0.3131 | 3100 | 0.0048 | - |
| 0.3182 | 3150 | 0.0046 | - |
| 0.3232 | 3200 | 0.0051 | - |
| 0.3283 | 3250 | 0.0035 | - |
| 0.3333 | 3300 | 0.0047 | - |
| 0.3384 | 3350 | 0.0061 | - |
| 0.3434 | 3400 | 0.0073 | - |
| 0.3485 | 3450 | 0.0041 | - |
| 0.3535 | 3500 | 0.0117 | - |
| 0.3586 | 3550 | 0.0032 | - |
| 0.3636 | 3600 | 0.0045 | - |
| 0.3687 | 3650 | 0.0042 | - |
| 0.3737 | 3700 | 0.0061 | - |
| 0.3788 | 3750 | 0.0056 | - |
| 0.3838 | 3800 | 0.0073 | - |
| 0.3889 | 3850 | 0.0057 | - |
| 0.3939 | 3900 | 0.0033 | - |
| 0.3990 | 3950 | 0.0027 | - |
| 0.4040 | 4000 | 0.0057 | - |
| 0.4091 | 4050 | 0.003 | - |
| 0.4141 | 4100 | 0.0044 | - |
| 0.4192 | 4150 | 0.0033 | - |
| 0.4242 | 4200 | 0.0036 | - |
| 0.4293 | 4250 | 0.0027 | - |
| 0.4343 | 4300 | 0.0065 | - |
| 0.4394 | 4350 | 0.035 | - |
| 0.4444 | 4400 | 0.0175 | - |
| 0.4495 | 4450 | 0.0027 | - |
| 0.4545 | 4500 | 0.0035 | - |
| 0.4596 | 4550 | 0.0019 | - |
| 0.4646 | 4600 | 0.0036 | - |
| 0.4697 | 4650 | 0.0022 | - |
| 0.4747 | 4700 | 0.0018 | - |
| 0.4798 | 4750 | 0.0076 | - |
| 0.4848 | 4800 | 0.0036 | - |
| 0.4899 | 4850 | 0.0581 | - |
| 0.4949 | 4900 | 0.0023 | - |
| 0.5 | 4950 | 0.004 | - |
| 0.5051 | 5000 | 0.0059 | - |
| 0.5101 | 5050 | 0.0024 | - |
| 0.5152 | 5100 | 0.0096 | - |
| 0.5202 | 5150 | 0.0059 | - |
| 0.5253 | 5200 | 0.0044 | - |
| 0.5303 | 5250 | 0.041 | - |
| 0.5354 | 5300 | 0.0028 | - |
| 0.5404 | 5350 | 0.0032 | - |
| 0.5455 | 5400 | 0.0017 | - |
| 0.5505 | 5450 | 0.002 | - |
| 0.5556 | 5500 | 0.0024 | - |
| 0.5606 | 5550 | 0.0034 | - |
| 0.5657 | 5600 | 0.0039 | - |
| 0.5707 | 5650 | 0.0023 | - |
| 0.5758 | 5700 | 0.0037 | - |
| 0.5808 | 5750 | 0.0594 | - |
| 0.5859 | 5800 | 0.0016 | - |
| 0.5909 | 5850 | 0.0168 | - |
| 0.5960 | 5900 | 0.0458 | - |
| 0.6010 | 5950 | 0.0019 | - |
| 0.6061 | 6000 | 0.001 | - |
| 0.6111 | 6050 | 0.0294 | - |
| 0.6162 | 6100 | 0.0027 | - |
| 0.6212 | 6150 | 0.0051 | - |
| 0.6263 | 6200 | 0.0014 | - |
| 0.6313 | 6250 | 0.0033 | - |
| 0.6364 | 6300 | 0.0021 | - |
| 0.6414 | 6350 | 0.0023 | - |
| 0.6465 | 6400 | 0.0018 | - |
| 0.6515 | 6450 | 0.0013 | - |
| 0.6566 | 6500 | 0.0041 | - |
| 0.6616 | 6550 | 0.0592 | - |
| 0.6667 | 6600 | 0.0019 | - |
| 0.6717 | 6650 | 0.0021 | - |
| 0.6768 | 6700 | 0.0606 | - |
| 0.6818 | 6750 | 0.0018 | - |
| 0.6869 | 6800 | 0.0014 | - |
| 0.6919 | 6850 | 0.0038 | - |
| 0.6970 | 6900 | 0.0567 | - |
| 0.7020 | 6950 | 0.0013 | - |
| 0.7071 | 7000 | 0.0015 | - |
| 0.7121 | 7050 | 0.0585 | - |
| 0.7172 | 7100 | 0.0014 | - |
| 0.7222 | 7150 | 0.0021 | - |
| 0.7273 | 7200 | 0.0179 | - |
| 0.7323 | 7250 | 0.0013 | - |
| 0.7374 | 7300 | 0.0101 | - |
| 0.7424 | 7350 | 0.0012 | - |
| 0.7475 | 7400 | 0.0009 | - |
| 0.7525 | 7450 | 0.001 | - |
| 0.7576 | 7500 | 0.0011 | - |
| 0.7626 | 7550 | 0.001 | - |
| 0.7677 | 7600 | 0.0022 | - |
| 0.7727 | 7650 | 0.0012 | - |
| 0.7778 | 7700 | 0.0011 | - |
| 0.7828 | 7750 | 0.0011 | - |
| 0.7879 | 7800 | 0.0011 | - |
| 0.7929 | 7850 | 0.0019 | - |
| 0.7980 | 7900 | 0.001 | - |
| 0.8030 | 7950 | 0.0594 | - |
| 0.8081 | 8000 | 0.024 | - |
| 0.8131 | 8050 | 0.001 | - |
| 0.8182 | 8100 | 0.0017 | - |
| 0.8232 | 8150 | 0.0013 | - |
| 0.8283 | 8200 | 0.0012 | - |
| 0.8333 | 8250 | 0.0017 | - |
| 0.8384 | 8300 | 0.0011 | - |
| 0.8434 | 8350 | 0.0013 | - |
| 0.8485 | 8400 | 0.0008 | - |
| 0.8535 | 8450 | 0.0007 | - |
| 0.8586 | 8500 | 0.0016 | - |
| 0.8636 | 8550 | 0.0008 | - |
| 0.8687 | 8600 | 0.0507 | - |
| 0.8737 | 8650 | 0.0014 | - |
| 0.8788 | 8700 | 0.0009 | - |
| 0.8838 | 8750 | 0.0564 | - |
| 0.8889 | 8800 | 0.001 | - |
| 0.8939 | 8850 | 0.0016 | - |
| 0.8990 | 8900 | 0.001 | - |
| 0.9040 | 8950 | 0.0009 | - |
| 0.9091 | 9000 | 0.0009 | - |
| 0.9141 | 9050 | 0.0014 | - |
| 0.9192 | 9100 | 0.0018 | - |
| 0.9242 | 9150 | 0.0012 | - |
| 0.9293 | 9200 | 0.0007 | - |
| 0.9343 | 9250 | 0.0009 | - |
| 0.9394 | 9300 | 0.0007 | - |
| 0.9444 | 9350 | 0.0014 | - |
| 0.9495 | 9400 | 0.0554 | - |
| 0.9545 | 9450 | 0.001 | - |
| 0.9596 | 9500 | 0.0011 | - |
| 0.9646 | 9550 | 0.0008 | - |
| 0.9697 | 9600 | 0.0008 | - |
| 0.9747 | 9650 | 0.0012 | - |
| 0.9798 | 9700 | 0.001 | - |
| 0.9848 | 9750 | 0.0168 | - |
| 0.9899 | 9800 | 0.0011 | - |
| 0.9949 | 9850 | 0.0011 | - |
| 1.0 | 9900 | 0.0194 | 0.0034 |
| 1.0051 | 9950 | 0.0546 | - |
| 1.0101 | 10000 | 0.0482 | - |
| 1.0152 | 10050 | 0.0009 | - |
| 1.0202 | 10100 | 0.0008 | - |
| 1.0253 | 10150 | 0.0006 | - |
| 1.0303 | 10200 | 0.0006 | - |
| 1.0354 | 10250 | 0.0446 | - |
| 1.0404 | 10300 | 0.0005 | - |
| 1.0455 | 10350 | 0.0008 | - |
| 1.0505 | 10400 | 0.0006 | - |
| 1.0556 | 10450 | 0.0009 | - |
| 1.0606 | 10500 | 0.0014 | - |
| 1.0657 | 10550 | 0.0006 | - |
| 1.0707 | 10600 | 0.0009 | - |
| 1.0758 | 10650 | 0.0005 | - |
| 1.0808 | 10700 | 0.0008 | - |
| 1.0859 | 10750 | 0.0545 | - |
| 1.0909 | 10800 | 0.0015 | - |
| 1.0960 | 10850 | 0.0006 | - |
| 1.1010 | 10900 | 0.0103 | - |
| 1.1061 | 10950 | 0.001 | - |
| 1.1111 | 11000 | 0.0011 | - |
| 1.1162 | 11050 | 0.0009 | - |
| 1.1212 | 11100 | 0.0014 | - |
| 1.1263 | 11150 | 0.0011 | - |
| 1.1313 | 11200 | 0.0007 | - |
| 1.1364 | 11250 | 0.0025 | - |
| 1.1414 | 11300 | 0.0007 | - |
| 1.1465 | 11350 | 0.0007 | - |
| 1.1515 | 11400 | 0.0584 | - |
| 1.1566 | 11450 | 0.0008 | - |
| 1.1616 | 11500 | 0.0007 | - |
| 1.1667 | 11550 | 0.0005 | - |
| 1.1717 | 11600 | 0.0009 | - |
| 1.1768 | 11650 | 0.0005 | - |
| 1.1818 | 11700 | 0.0009 | - |
| 1.1869 | 11750 | 0.0008 | - |
| 1.1919 | 11800 | 0.0009 | - |
| 1.1970 | 11850 | 0.0007 | - |
| 1.2020 | 11900 | 0.0006 | - |
| 1.2071 | 11950 | 0.0006 | - |
| 1.2121 | 12000 | 0.0005 | - |
| 1.2172 | 12050 | 0.0008 | - |
| 1.2222 | 12100 | 0.0006 | - |
| 1.2273 | 12150 | 0.0004 | - |
| 1.2323 | 12200 | 0.0006 | - |
| 1.2374 | 12250 | 0.0005 | - |
| 1.2424 | 12300 | 0.0005 | - |
| 1.2475 | 12350 | 0.001 | - |
| 1.2525 | 12400 | 0.0006 | - |
| 1.2576 | 12450 | 0.0008 | - |
| 1.2626 | 12500 | 0.0004 | - |
| 1.2677 | 12550 | 0.0006 | - |
| 1.2727 | 12600 | 0.001 | - |
| 1.2778 | 12650 | 0.0005 | - |
| 1.2828 | 12700 | 0.0005 | - |
| 1.2879 | 12750 | 0.0006 | - |
| 1.2929 | 12800 | 0.0005 | - |
| 1.2980 | 12850 | 0.0011 | - |
| 1.3030 | 12900 | 0.0011 | - |
| 1.3081 | 12950 | 0.0006 | - |
| 1.3131 | 13000 | 0.0006 | - |
| 1.3182 | 13050 | 0.0006 | - |
| 1.3232 | 13100 | 0.001 | - |
| 1.3283 | 13150 | 0.0008 | - |
| 1.3333 | 13200 | 0.0006 | - |
| 1.3384 | 13250 | 0.0006 | - |
| 1.3434 | 13300 | 0.0006 | - |
| 1.3485 | 13350 | 0.0008 | - |
| 1.3535 | 13400 | 0.001 | - |
| 1.3586 | 13450 | 0.0006 | - |
| 1.3636 | 13500 | 0.001 | - |
| 1.3687 | 13550 | 0.0006 | - |
| 1.3737 | 13600 | 0.0026 | - |
| 1.3788 | 13650 | 0.0005 | - |
| 1.3838 | 13700 | 0.0006 | - |
| 1.3889 | 13750 | 0.0011 | - |
| 1.3939 | 13800 | 0.0006 | - |
| 1.3990 | 13850 | 0.0009 | - |
| 1.4040 | 13900 | 0.0008 | - |
| 1.4091 | 13950 | 0.0014 | - |
| 1.4141 | 14000 | 0.0006 | - |
| 1.4192 | 14050 | 0.0005 | - |
| 1.4242 | 14100 | 0.0012 | - |
| 1.4293 | 14150 | 0.0005 | - |
| 1.4343 | 14200 | 0.0027 | - |
| 1.4394 | 14250 | 0.0004 | - |
| 1.4444 | 14300 | 0.0006 | - |
| 1.4495 | 14350 | 0.001 | - |
| 1.4545 | 14400 | 0.0004 | - |
| 1.4596 | 14450 | 0.0005 | - |
| 1.4646 | 14500 | 0.0004 | - |
| 1.4697 | 14550 | 0.0005 | - |
| 1.4747 | 14600 | 0.0008 | - |
| 1.4798 | 14650 | 0.0004 | - |
| 1.4848 | 14700 | 0.0005 | - |
| 1.4899 | 14750 | 0.0581 | - |
| 1.4949 | 14800 | 0.0005 | - |
| 1.5 | 14850 | 0.001 | - |
| 1.5051 | 14900 | 0.0007 | - |
| 1.5101 | 14950 | 0.0004 | - |
| 1.5152 | 15000 | 0.001 | - |
| 1.5202 | 15050 | 0.0004 | - |
| 1.5253 | 15100 | 0.0009 | - |
| 1.5303 | 15150 | 0.0004 | - |
| 1.5354 | 15200 | 0.0006 | - |
| 1.5404 | 15250 | 0.0007 | - |
| 1.5455 | 15300 | 0.0004 | - |
| 1.5505 | 15350 | 0.0009 | - |
| 1.5556 | 15400 | 0.0005 | - |
| 1.5606 | 15450 | 0.0007 | - |
| 1.5657 | 15500 | 0.0005 | - |
| 1.5707 | 15550 | 0.0005 | - |
| 1.5758 | 15600 | 0.0006 | - |
| 1.5808 | 15650 | 0.0586 | - |
| 1.5859 | 15700 | 0.0005 | - |
| 1.5909 | 15750 | 0.0014 | - |
| 1.5960 | 15800 | 0.0005 | - |
| 1.6010 | 15850 | 0.0007 | - |
| 1.6061 | 15900 | 0.0006 | - |
| 1.6111 | 15950 | 0.0011 | - |
| 1.6162 | 16000 | 0.0005 | - |
| 1.6212 | 16050 | 0.0007 | - |
| 1.6263 | 16100 | 0.0008 | - |
| 1.6313 | 16150 | 0.0005 | - |
| 1.6364 | 16200 | 0.0003 | - |
| 1.6414 | 16250 | 0.0004 | - |
| 1.6465 | 16300 | 0.0003 | - |
| 1.6515 | 16350 | 0.0004 | - |
| 1.6566 | 16400 | 0.0006 | - |
| 1.6616 | 16450 | 0.0572 | - |
| 1.6667 | 16500 | 0.0004 | - |
| 1.6717 | 16550 | 0.0005 | - |
| 1.6768 | 16600 | 0.0004 | - |
| 1.6818 | 16650 | 0.0007 | - |
| 1.6869 | 16700 | 0.0011 | - |
| 1.6919 | 16750 | 0.0007 | - |
| 1.6970 | 16800 | 0.0568 | - |
| 1.7020 | 16850 | 0.0007 | - |
| 1.7071 | 16900 | 0.0005 | - |
| 1.7121 | 16950 | 0.0584 | - |
| 1.7172 | 17000 | 0.0004 | - |
| 1.7222 | 17050 | 0.0004 | - |
| 1.7273 | 17100 | 0.0265 | - |
| 1.7323 | 17150 | 0.0006 | - |
| 1.7374 | 17200 | 0.0009 | - |
| 1.7424 | 17250 | 0.0005 | - |
| 1.7475 | 17300 | 0.0011 | - |
| 1.7525 | 17350 | 0.0005 | - |
| 1.7576 | 17400 | 0.0004 | - |
| 1.7626 | 17450 | 0.0007 | - |
| 1.7677 | 17500 | 0.0007 | - |
| 1.7727 | 17550 | 0.0003 | - |
| 1.7778 | 17600 | 0.0005 | - |
| 1.7828 | 17650 | 0.0003 | - |
| 1.7879 | 17700 | 0.0003 | - |
| 1.7929 | 17750 | 0.0003 | - |
| 1.7980 | 17800 | 0.0007 | - |
| 1.8030 | 17850 | 0.0577 | - |
| 1.8081 | 17900 | 0.0004 | - |
| 1.8131 | 17950 | 0.0005 | - |
| 1.8182 | 18000 | 0.0004 | - |
| 1.8232 | 18050 | 0.0004 | - |
| 1.8283 | 18100 | 0.0004 | - |
| 1.8333 | 18150 | 0.0004 | - |
| 1.8384 | 18200 | 0.0003 | - |
| 1.8434 | 18250 | 0.0005 | - |
| 1.8485 | 18300 | 0.0004 | - |
| 1.8535 | 18350 | 0.0004 | - |
| 1.8586 | 18400 | 0.0005 | - |
| 1.8636 | 18450 | 0.0004 | - |
| 1.8687 | 18500 | 0.0003 | - |
| 1.8737 | 18550 | 0.0003 | - |
| 1.8788 | 18600 | 0.0007 | - |
| 1.8838 | 18650 | 0.0586 | - |
| 1.8889 | 18700 | 0.0003 | - |
| 1.8939 | 18750 | 0.0004 | - |
| 1.8990 | 18800 | 0.0005 | - |
| 1.9040 | 18850 | 0.0004 | - |
| 1.9091 | 18900 | 0.0006 | - |
| 1.9141 | 18950 | 0.0004 | - |
| 1.9192 | 19000 | 0.0004 | - |
| 1.9242 | 19050 | 0.0004 | - |
| 1.9293 | 19100 | 0.0005 | - |
| 1.9343 | 19150 | 0.0003 | - |
| 1.9394 | 19200 | 0.0003 | - |
| 1.9444 | 19250 | 0.0003 | - |
| 1.9495 | 19300 | 0.0545 | - |
| 1.9545 | 19350 | 0.0004 | - |
| 1.9596 | 19400 | 0.0005 | - |
| 1.9646 | 19450 | 0.0004 | - |
| 1.9697 | 19500 | 0.0004 | - |
| 1.9747 | 19550 | 0.0004 | - |
| 1.9798 | 19600 | 0.0004 | - |
| 1.9848 | 19650 | 0.0045 | - |
| 1.9899 | 19700 | 0.0004 | - |
| 1.9949 | 19750 | 0.0005 | - |
| **2.0** | **19800** | **0.0006** | **0.0024** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"BEAR"
] |
onekq-ai/OneSQL-v0.1-Qwen-7B-GGUF | onekq-ai | null | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"en",
"base_model:onekq-ai/OneSQL-v0.1-Qwen-7B",
"base_model:quantized:onekq-ai/OneSQL-v0.1-Qwen-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2025-03-11T23:37:16Z | 2025-03-18T06:36:17+00:00 | 771 | 0 | ---
base_model: onekq-ai/OneSQL-v0.1-Qwen-7B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- qwen2
- gguf
---
# Introduction
This model is the GGUF version of [OneSQL-v0.1-Qwen-7B](https://huggingface.co/onekq-ai/OneSQL-v0.1-Qwen-7B). You can also find it on [Ollama](https://ollama.com/onekq/OneSQL-v0.1-Qwen).
# Performances
The self-evaluation EX score of the original model is **56.19** (compared to **63.33** by the 32B model on the [BIRD leaderboard](https://bird-bench.github.io/)).
Below are the self-evaluation results for each quantization.
| Quantization |EX score|
|------------|------|
| Q2_K | 29.79 |
| Q3_K_S | 36.31 |
| Q3_K_M | 39.24 |
| Q3_K_L | 40.14 |
| Q4_1 | 39.06 |
| Q4_K_S | 42.69 |
| **Q4_K_M** | **43.95** |
| Q5_0 | 43.84 |
| Q5_1 | 41.00 |
| Q5_K_S | 42.20 |
| Q5_K_M | 42.07 |
| Q6_K | 41.68 |
| Q8_0 | 41.09 |
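For a quick sense of the quality cost of quantization, the table supports a simple comparison: the recommended Q4_K_M quant scores 43.95 EX versus the original model's 56.19, a relative drop of roughly 22%. A minimal check:

```python
# EX scores taken from the table above
original_ex = 56.19   # unquantized OneSQL-v0.1-Qwen-7B (self-evaluation)
q4_k_m_ex = 43.95     # recommended Q4_K_M quantization

relative_drop = (original_ex - q4_k_m_ex) / original_ex
print(f"Relative EX drop from Q4_K_M quantization: {relative_drop:.1%}")  # ~21.8%
```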
# Quick start
To use this model, craft your prompt so that it starts with your database schema as **CREATE TABLE** statements, followed by your natural-language query preceded by **--**.
Make sure your prompt ends with **SELECT** so the model can finish the query for you. There is no need to set other parameters such as temperature or a max token limit.
```sh
PROMPT="CREATE TABLE students (
id INTEGER PRIMARY KEY,
name TEXT,
age INTEGER,
grade TEXT
);
-- Find the three youngest students
SELECT "
ollama run onekq/OneSQL-v0.1-Qwen:7B-Q4_K_M "$PROMPT"
```
The model response is the finished SQL query without the leading **SELECT**:
```sql
* FROM students ORDER BY age ASC LIMIT 3
```
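The prompt convention above (schema, then a `--` comment carrying the request, then a trailing `SELECT `) is easy to assemble programmatically. A small stdlib-only sketch; the helper names are illustrative, not part of the model's API:

```python
def build_prompt(schema: str, question: str) -> str:
    """Assemble a OneSQL prompt: CREATE TABLE schema, '--' question, trailing SELECT."""
    return f"{schema.strip()}\n-- {question.strip()}\nSELECT "

def finish_query(model_output: str) -> str:
    """The model returns the query body without SELECT; prepend it back."""
    return "SELECT " + model_output.strip()

schema = "CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, age INTEGER, grade TEXT);"
prompt = build_prompt(schema, "Find the three youngest students")
assert prompt.endswith("SELECT ")
print(finish_query("* FROM students ORDER BY age ASC LIMIT 3"))
# SELECT * FROM students ORDER BY age ASC LIMIT 3
```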
# Caveats
* The performance drop from the original model is due to quantization itself and to the lack of beam-search support in the llama.cpp framework. Use at your own discretion.
* The Q4_0 quantization suffers from repetitive output tokens and is therefore not recommended. | [
"CRAFT"
] |
EleutherAI/pythia-6.9b-v0 | EleutherAI | text-generation | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-10-16T20:16:56Z | 2023-03-29T18:48:58+00:00 | 762 | 8 | ---
datasets:
- the_pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-6.9B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-6.9B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-6.9B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-6.9B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-6.9B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-6.9B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-6.9B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-6.9B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
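Since the card states there are 143 evenly spaced checkpoints ending at `step143000`, the available revision names can be enumerated directly. A small sketch following that description (the repositories may additionally expose early log-spaced checkpoints not listed here):

```python
# Revision/branch names for the 143 evenly spaced checkpoints described above
revisions = [f"step{step}" for step in range(1000, 143001, 1000)]
assert len(revisions) == 143
assert revisions[0] == "step1000" and revisions[-1] == "step143000"
```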
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-6.9B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
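The token counts quoted above are internally consistent: 143,000 steps at a batch size of 2,097,152 tokens gives the stated 299,892,736,000 training tokens, and 143 checkpoints spaced 2,097,152,000 tokens apart cover the same total. A quick stdlib-only check:

```python
steps = 143_000
batch_tokens = 2_097_152
total_tokens = steps * batch_tokens
assert total_tokens == 299_892_736_000  # figure quoted in the card

checkpoints = 143
checkpoint_spacing = 2_097_152_000  # tokens between saved checkpoints
assert checkpoints * checkpoint_spacing == total_tokens
```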
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
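The gap between the total and non-embedding parameter columns in the table below is consistent with two untied vocab-by-model-dim embedding matrices (input and output embeddings). For Pythia-6.9B, the difference works out to exactly two 50432 x 4096 matrices; the padded embedding-table size of 50432 is an inference from the table, not a separately documented figure:

```python
# Figures from the naming-convention table (Pythia-6.9B row)
total_params = 6_857_302_016
non_embedding_params = 6_444_163_072
model_dim = 4096  # from the engineering-details table

embedding_params = total_params - non_embedding_params
# Untied input + output embeddings -> two vocab_rows x model_dim matrices
vocab_rows = embedding_params // (2 * model_dim)
print(vocab_rows)  # 50432 (padded embedding-table size)
assert vocab_rows * 2 * model_dim == embedding_params
```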
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"SCIQ"
] |
apple/DCLM-7B | apple | null | [
"transformers",
"safetensors",
"openlm",
"arxiv:2406.11794",
"license:apple-ascl",
"endpoints_compatible",
"region:us"
] | 2024-07-11T17:44:35Z | 2024-07-26T03:40:38+00:00 | 759 | 835 | ---
license: apple-ascl
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/63118add64939fabc0108b28/BB42g4V8HTxb5dR4tcy8A.png" alt="DCLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for DCLM-Baseline-7B
DCLM-Baseline-7B is a 7 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
## Model Details
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| 7B | 2.5T | 32 | 4096 | 32 | 2048 |
### Model Description
- **Developed by:** DataComp for Language Models (DCLM) Team
- **Model type:** Decoder-only Transformer language model
- **Language(s):** English (primarily)
- **License:** Apple Sample Code License
- **Contact:** [email protected]
- **Date:** June 2024
### Model Sources
- **Repository:** https://github.com/mlfoundations/dclm
- **Dataset:** https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)
## Using the Model
First, install open_lm:
```bash
pip install git+https://github.com/mlfoundations/open_lm.git
```
Then:
```python
from open_lm.hf import *
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-Baseline-7B")
model = AutoModelForCausalLM.from_pretrained("apple/DCLM-Baseline-7B")
inputs = tokenizer(["Machine learning is"], return_tensors="pt")
gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(inputs['input_ids'], **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
```
### Training Details
The model was trained using the following setup:
- **Architecture:** Decoder-only Transformer
- **Framework:** PyTorch with OpenLM
- **Optimizer:** AdamW
- **Learning Rate:** 2e-3 (peak)
- **Weight Decay:** 0.05
- **Batch Size:** 2048 sequences
- **Sequence Length:** 2048 tokens
- **Total Training Tokens:** 2.5T
- **Hardware:** Trained on H100 GPUs
For more detailed training information, please refer to Section 3.4 and Appendix F of the DCLM paper.
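As a rough order-of-magnitude check on the compute involved, the common 6\*N\*D approximation (6 FLOPs per parameter per training token) puts this run at about 1.05e23 FLOPs. This is a back-of-the-envelope estimate based on the table above, not a figure from the paper:

```python
params = 7e9          # 7B parameters
tokens = 2.5e12       # 2.5T training tokens (from the table above)
flops = 6 * params * tokens  # standard 6*N*D approximation
print(f"{flops:.2e}")  # ~1.05e+23
```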
To ensure our trained model is broadly useful, including for math and coding tasks, we combine our 3.8T [DCLM-BASELINE](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) with the [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata) and [ProofPile2](https://huggingface.co/datasets/EleutherAI/proof-pile-2) data to arrive at a 4.1T token dataset.
## Evaluation
Here are the evaluation results for DCLM-Baseline-7B on various tasks (using [llm-foundry](https://github.com/mosaicml/llm-foundry) eval suite)
| Task | Score |
|------|-------|
| MMLU (zero-shot) | 0.5766 |
| MMLU (few-shot) | 0.6372 |
| HellaSwag (zero-shot) | 0.7987 |
| HellaSwag | 0.8043 |
| Jeopardy | 0.4745 |
| TriviaQA | 0.5270 |
| GSM8K (CoT) | 0.0250 |
| AGI Eval SAT Math (CoT) | 0.0136 |
| AQuA (CoT) | 0.0490 |
| SVAMP (CoT) | 0.4900 |
| BigBench QA Wikidata | 0.7120 |
| ARC Easy | 0.8220 |
| ARC Challenge | 0.5990 |
| BigBench Misconceptions | 0.6986 |
| COPA | 0.8500 |
| SIQA | 0.8291 |
| CommonsenseQA | 0.8018 |
| PIQA | 0.8128 |
| OpenBookQA | 0.4540 |
| BigBench Novel Concepts | 0.7188 |
| BigBench Strange Stories | 0.7586 |
| BigBench Strategy QA | 0.6173 |
| LAMBADA | 0.8220 |
| Winograd | 0.8828 |
| Winogrande | 0.7269 |
| BigBench Conlang Translation | 0.0244 |
| BigBench Language Identification | 0.5219 |
| BigBench Conceptual Combinations | 0.6990 |
| BigBench Elementary Math QA | 0.3431 |
| BigBench Dyck Languages | 0.4930 |
| AGI Eval LSAT AR | 0.2435 |
| BigBench CS Algorithms | 0.6121 |
| BigBench Logical Deduction | 0.3620 |
| BigBench Operators | 0.4857 |
| BigBench Repeat Copy Logic | 0.4063 |
| Simple Arithmetic (no spaces) | 0.2940 |
| Simple Arithmetic (with spaces) | 0.3110 |
| MathQA | 0.3098 |
| LogiQA | 0.4132 |
| PubMedQA | 0.7060 |
| SQuAD | 0.5856 |
| AGI Eval LSAT RC | 0.6716 |
| AGI Eval LSAT LR | 0.5392 |
| CoQA | 0.4074 |
| BigBench Understanding Fables | 0.6825 |
| BoolQ | 0.8343 |
| AGI Eval SAT EN | 0.7670 |
| Winogender MC (Female) | 0.6000 |
| Winogender MC (Male) | 0.5500 |
| Enterprise PII Classification | 0.7676 |
| BBQ | 0.6912 |
| GPQA Main | 0.2612 |
| GPQA Diamond | 0.2475 |
Note: All scores are presented as decimal values between 0 and 1, representing the proportion of correct answers or the model's performance on each task.
## Comparison
Below are comparisons of this model with other models in the 7B regime.
| Model | Params | Tokens | Open dataset? | CORE | MMLU | EXTENDED |
|---------------|--------|--------|---------------|----------|----------|----------|
| **Open weights, closed datasets** | | | | | | |
| Llama2 | 7B | 2T | ❌ | 49.2 | 45.8 | 34.1 |
| DeepSeek | 7B | 2T | ❌ | 50.7 | 48.5 | 35.3 |
| Mistral-0.3 | 7B | ? | ❌ | 57.0 | 62.7 | 45.1 |
| QWEN-2 | 7B | ? | ❌ | 57.5 | **71.9** | 50.5 |
| Llama3 | 8B | 15T | ❌ | 57.6 | 66.2 | 46.3 |
| Gemma | 8B | 6T | ❌ | 57.8 | 64.3 | 44.6 |
| Phi-3 | 7B | ? | ❌ | **61.0** | 69.9 | **57.9** |
| **Open weights, open datasets** | | | | | | |
| Falcon | 7B | 1T | ✅ | 44.1 | 27.4 | 25.1 |
| OLMo-1.7 | 7B | 2.1T | ✅ | 47.0 | 54.0 | 34.2 |
| MAP-Neo | 7B | 4.5T | ✅ | **50.2** | **57.1** | **40.4** |
| **DCLM-7B** | 7B | 2.5T | ✅ | **56.1** | **63.7** | **43.6** |
## Limitations and Biases
While DCLM-Baseline-7B demonstrates strong performance across a range of tasks, it's important to note:
1. The model may exhibit biases present in its training data, which is derived from web crawl data.
2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
3. Performance on tasks not included in the evaluation suite may vary.
4. The model's knowledge is limited to its training data cutoff date.
## Ethical Considerations
Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.
## Citation
If you use this model in your research, please cite:
```
@article{Li2024DataCompLM,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
journal={arXiv preprint arXiv:2406.11794},
year={2024}
}
```
| [
"PUBMEDQA"
] |
mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF | mradermacher | null | [
"transformers",
"gguf",
"scientometrics",
"citation_analysis",
"citation_intent_classification",
"en",
"dataset:allenai/scicite",
"base_model:sknow-lab/Qwen2.5-14B-CIC-SciCite",
"base_model:quantized:sknow-lab/Qwen2.5-14B-CIC-SciCite",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2025-02-23T14:09:47Z | 2025-02-24T13:41:14+00:00 | 756 | 1 | ---
base_model: sknow-lab/Qwen2.5-14B-CIC-SciCite
datasets:
- allenai/scicite
language:
- en
library_name: transformers
license: apache-2.0
tags:
- scientometrics
- citation_analysis
- citation_intent_classification
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sknow-lab/Qwen2.5-14B-CIC-SciCite
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-CIC-SciCite-GGUF/resolve/main/Qwen2.5-14B-CIC-SciCite.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions and to request quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| [
"SCICITE"
] |
goofyai/disney_style_xl | goofyai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] | 2023-11-22T06:40:59Z | 2023-11-22T06:44:53+00:00 | 753 | 15 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
license: openrail
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: disney style,animal focus, animal, cat
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/c9ad912d-e9b1-4807-950d-ab2d07eaed6e.png
- text: disney style,one girl wearing round glasses in school dress, short skirt and
socks. white shirt with black necktie
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/a2ed97c6-1ab5-431c-a4ae-73cedfb494e4.png
- text: disney style, brown eyes, white shirt, round eyewear, shirt, earrings, closed
mouth, brown hair, jewelry, glasses, looking at viewer, dark skin, 1girl, solo,
dark-skinned female, very dark skin, curly hair, lips, portrait, black hair, print
shirt, short hair, blurry background, outdoors, yellow-framed eyewear, blurry
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/d7c67c24-9116-40da-a75f-bf42a211a6c0.png
- text: disney style, uniform, rabbit, shirt, vest, day, upper body, hands on hips,
rabbit girl, animal nose, smile, furry, police, 1girl, solo, animal ears, rabbit
ears, policewoman, grey fur, furry female, long sleeves, purple eyes, blurry background,
police uniform, outdoors, blurry, blue shirt
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/1d0aac43-aa2a-495c-84fd-ca2c9eb22a0d.jpg
- text: disney style, rain, furry, bear, 1boy, solo, blue headwear, water drop, baseball
cap, outdoors, blurry, shirt, male focus, furry male, hat, blue shirt
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/5cd36626-22da-46d2-aa79-2ca31c80fd59.png
- text: disney style, looking at viewer, long hair, dress, lipstick, braid, hair over
shoulder, blonde hair, 1girl, solo, purple dress, makeup, stairs, blue eyes, single
braid
parameters:
negative_prompt: bad quality, deformed, artifacts, digital noise
output:
url: images/4af61860-6dca-4694-9f31-ceaf08071e6d.png
- text: disney style, lipstick, dress, smile, braid, tiara, blonde hair, 1girl, solo,
upper body, gloves, makeup, crown, blue eyes, cape
output:
url: images/882eb6c8-5c6c-4694-b3f1-f79f8df8ce8a.jpg
instance_prompt: disney style
---
# Disney Style XL
<Gallery />
## Trigger words
You should use `disney style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/goofyai/disney_style_xl/tree/main) them in the Files & versions tab.
| [
"BEAR"
] |
arkohut/jina-embeddings-v3 | arkohut | feature-extraction | [
"transformers",
"safetensors",
"feature-extraction",
"sentence-similarity",
"mteb",
"sentence-transformers",
"custom_code",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2409.10173",
"license:cc-by-nc-4.0",
"model-index",
"region:us"
] | 2024-10-23T15:21:29Z | 2024-10-23T15:26:53+00:00 | 749 | 3 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
library_name: transformers
license: cc-by-nc-4.0
tags:
- feature-extraction
- sentence-similarity
- mteb
- sentence-transformers
inference: false
model-index:
- name: jina-embeddings-v3
results:
- task:
type: STS
dataset:
name: MTEB AFQMC (default)
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cosine_pearson
value: 41.74237700998808
- type: cosine_spearman
value: 43.4726782647566
- type: euclidean_pearson
value: 42.244585459479964
- type: euclidean_spearman
value: 43.525070045169606
- type: main_score
value: 43.4726782647566
- type: manhattan_pearson
value: 42.04616728224863
- type: manhattan_spearman
value: 43.308828270754645
- type: pearson
value: 41.74237700998808
- type: spearman
value: 43.4726782647566
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL (default)
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: main_score
value: 50.117999999999995
- type: map_at_1
value: 24.253
- type: map_at_10
value: 40.725
- type: map_at_100
value: 41.699999999999996
- type: map_at_1000
value: 41.707
- type: map_at_20
value: 41.467999999999996
- type: map_at_3
value: 35.467
- type: map_at_5
value: 38.291
- type: mrr_at_1
value: 24.751066856330013
- type: mrr_at_10
value: 40.91063808169072
- type: mrr_at_100
value: 41.885497923928675
- type: mrr_at_1000
value: 41.89301098419842
- type: mrr_at_20
value: 41.653552355442514
- type: mrr_at_3
value: 35.656709340919775
- type: mrr_at_5
value: 38.466097676623946
- type: nauc_map_at_1000_diff1
value: 7.503000359807567
- type: nauc_map_at_1000_max
value: -11.030405164830546
- type: nauc_map_at_1000_std
value: -8.902792782585117
- type: nauc_map_at_100_diff1
value: 7.509899249593199
- type: nauc_map_at_100_max
value: -11.023581259404406
- type: nauc_map_at_100_std
value: -8.892241185067272
- type: nauc_map_at_10_diff1
value: 7.24369711881512
- type: nauc_map_at_10_max
value: -10.810000200433278
- type: nauc_map_at_10_std
value: -8.987230542165776
- type: nauc_map_at_1_diff1
value: 11.37175831832417
- type: nauc_map_at_1_max
value: -13.315221903223055
- type: nauc_map_at_1_std
value: -9.398199605510275
- type: nauc_map_at_20_diff1
value: 7.477364530860648
- type: nauc_map_at_20_max
value: -10.901251218105566
- type: nauc_map_at_20_std
value: -8.868148116405925
- type: nauc_map_at_3_diff1
value: 6.555548802174882
- type: nauc_map_at_3_max
value: -12.247274800542934
- type: nauc_map_at_3_std
value: -9.879475250984811
- type: nauc_map_at_5_diff1
value: 7.426588563355882
- type: nauc_map_at_5_max
value: -11.347695686001805
- type: nauc_map_at_5_std
value: -9.34441892203972
- type: nauc_mrr_at_1000_diff1
value: 5.99737552143614
- type: nauc_mrr_at_1000_max
value: -11.327205136505727
- type: nauc_mrr_at_1000_std
value: -8.791079115519503
- type: nauc_mrr_at_100_diff1
value: 6.004622525255784
- type: nauc_mrr_at_100_max
value: -11.320336759899723
- type: nauc_mrr_at_100_std
value: -8.780602249831777
- type: nauc_mrr_at_10_diff1
value: 5.783623516930227
- type: nauc_mrr_at_10_max
value: -11.095971693467078
- type: nauc_mrr_at_10_std
value: -8.877242032013582
- type: nauc_mrr_at_1_diff1
value: 9.694937537703797
- type: nauc_mrr_at_1_max
value: -12.531905083727912
- type: nauc_mrr_at_1_std
value: -8.903992940100146
- type: nauc_mrr_at_20_diff1
value: 5.984841206233873
- type: nauc_mrr_at_20_max
value: -11.195236951048969
- type: nauc_mrr_at_20_std
value: -8.757266039186018
- type: nauc_mrr_at_3_diff1
value: 5.114333824261379
- type: nauc_mrr_at_3_max
value: -12.64809799843464
- type: nauc_mrr_at_3_std
value: -9.791146138025184
- type: nauc_mrr_at_5_diff1
value: 5.88941606224512
- type: nauc_mrr_at_5_max
value: -11.763903418071918
- type: nauc_mrr_at_5_std
value: -9.279175712709446
- type: nauc_ndcg_at_1000_diff1
value: 7.076950652226086
- type: nauc_ndcg_at_1000_max
value: -10.386482092087371
- type: nauc_ndcg_at_1000_std
value: -8.309190917074046
- type: nauc_ndcg_at_100_diff1
value: 7.2329220284865245
- type: nauc_ndcg_at_100_max
value: -10.208048403220337
- type: nauc_ndcg_at_100_std
value: -7.997975874274613
- type: nauc_ndcg_at_10_diff1
value: 6.065391100006953
- type: nauc_ndcg_at_10_max
value: -9.046164377601153
- type: nauc_ndcg_at_10_std
value: -8.34724889697153
- type: nauc_ndcg_at_1_diff1
value: 11.37175831832417
- type: nauc_ndcg_at_1_max
value: -13.315221903223055
- type: nauc_ndcg_at_1_std
value: -9.398199605510275
- type: nauc_ndcg_at_20_diff1
value: 6.949389989202601
- type: nauc_ndcg_at_20_max
value: -9.35740451760307
- type: nauc_ndcg_at_20_std
value: -7.761295171828212
- type: nauc_ndcg_at_3_diff1
value: 5.051471796151364
- type: nauc_ndcg_at_3_max
value: -12.158763333711653
- type: nauc_ndcg_at_3_std
value: -10.078902544421926
- type: nauc_ndcg_at_5_diff1
value: 6.527454512611454
- type: nauc_ndcg_at_5_max
value: -10.525118233848586
- type: nauc_ndcg_at_5_std
value: -9.120055125584031
- type: nauc_precision_at_1000_diff1
value: -10.6495668199151
- type: nauc_precision_at_1000_max
value: 12.070656425217841
- type: nauc_precision_at_1000_std
value: 55.844551709649004
- type: nauc_precision_at_100_diff1
value: 19.206967129266285
- type: nauc_precision_at_100_max
value: 16.296851020813456
- type: nauc_precision_at_100_std
value: 45.60378984257811
- type: nauc_precision_at_10_diff1
value: 0.6490335354304879
- type: nauc_precision_at_10_max
value: 0.5757198255366447
- type: nauc_precision_at_10_std
value: -4.875847131691451
- type: nauc_precision_at_1_diff1
value: 11.37175831832417
- type: nauc_precision_at_1_max
value: -13.315221903223055
- type: nauc_precision_at_1_std
value: -9.398199605510275
- type: nauc_precision_at_20_diff1
value: 4.899369866929203
- type: nauc_precision_at_20_max
value: 5.988537297189552
- type: nauc_precision_at_20_std
value: 4.830900387582837
- type: nauc_precision_at_3_diff1
value: 0.8791156910997744
- type: nauc_precision_at_3_max
value: -11.983373635905993
- type: nauc_precision_at_3_std
value: -10.646185111581257
- type: nauc_precision_at_5_diff1
value: 3.9314486166548432
- type: nauc_precision_at_5_max
value: -7.798591396895839
- type: nauc_precision_at_5_std
value: -8.293043407234125
- type: nauc_recall_at_1000_diff1
value: -10.649566819918673
- type: nauc_recall_at_1000_max
value: 12.070656425214647
- type: nauc_recall_at_1000_std
value: 55.84455170965023
- type: nauc_recall_at_100_diff1
value: 19.206967129265127
- type: nauc_recall_at_100_max
value: 16.296851020813722
- type: nauc_recall_at_100_std
value: 45.60378984257728
- type: nauc_recall_at_10_diff1
value: 0.6490335354304176
- type: nauc_recall_at_10_max
value: 0.5757198255366095
- type: nauc_recall_at_10_std
value: -4.875847131691468
- type: nauc_recall_at_1_diff1
value: 11.37175831832417
- type: nauc_recall_at_1_max
value: -13.315221903223055
- type: nauc_recall_at_1_std
value: -9.398199605510275
- type: nauc_recall_at_20_diff1
value: 4.899369866929402
- type: nauc_recall_at_20_max
value: 5.98853729718968
- type: nauc_recall_at_20_std
value: 4.830900387582967
- type: nauc_recall_at_3_diff1
value: 0.8791156910997652
- type: nauc_recall_at_3_max
value: -11.983373635905997
- type: nauc_recall_at_3_std
value: -10.64618511158124
- type: nauc_recall_at_5_diff1
value: 3.9314486166548472
- type: nauc_recall_at_5_max
value: -7.7985913968958585
- type: nauc_recall_at_5_std
value: -8.293043407234132
- type: ndcg_at_1
value: 24.253
- type: ndcg_at_10
value: 50.117999999999995
- type: ndcg_at_100
value: 54.291999999999994
- type: ndcg_at_1000
value: 54.44799999999999
- type: ndcg_at_20
value: 52.771
- type: ndcg_at_3
value: 39.296
- type: ndcg_at_5
value: 44.373000000000005
- type: precision_at_1
value: 24.253
- type: precision_at_10
value: 8.016
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.527
- type: precision_at_3
value: 16.808999999999997
- type: precision_at_5
value: 12.546
- type: recall_at_1
value: 24.253
- type: recall_at_10
value: 80.156
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_20
value: 90.54100000000001
- type: recall_at_3
value: 50.427
- type: recall_at_5
value: 62.731
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL (default)
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: main_score
value: 34.827000000000005
- type: map_at_1
value: 7.049999999999999
- type: map_at_10
value: 14.982999999999999
- type: map_at_100
value: 20.816000000000003
- type: map_at_1000
value: 22.33
- type: map_at_20
value: 17.272000000000002
- type: map_at_3
value: 10.661
- type: map_at_5
value: 12.498
- type: mrr_at_1
value: 57.25
- type: mrr_at_10
value: 65.81934523809524
- type: mrr_at_100
value: 66.2564203928212
- type: mrr_at_1000
value: 66.27993662923856
- type: mrr_at_20
value: 66.0732139130649
- type: mrr_at_3
value: 64.08333333333333
- type: mrr_at_5
value: 65.27083333333333
- type: nauc_map_at_1000_diff1
value: 16.41780871174038
- type: nauc_map_at_1000_max
value: 30.193946325654654
- type: nauc_map_at_1000_std
value: 31.46095497039037
- type: nauc_map_at_100_diff1
value: 18.57903165498531
- type: nauc_map_at_100_max
value: 29.541476938623262
- type: nauc_map_at_100_std
value: 28.228604103301052
- type: nauc_map_at_10_diff1
value: 24.109434489748946
- type: nauc_map_at_10_max
value: 21.475954208048968
- type: nauc_map_at_10_std
value: 9.964464537806988
- type: nauc_map_at_1_diff1
value: 38.67437644802124
- type: nauc_map_at_1_max
value: 14.52136658726491
- type: nauc_map_at_1_std
value: -2.8981666782088755
- type: nauc_map_at_20_diff1
value: 21.42547228801935
- type: nauc_map_at_20_max
value: 25.04510402960458
- type: nauc_map_at_20_std
value: 16.533079346431155
- type: nauc_map_at_3_diff1
value: 26.63648858245477
- type: nauc_map_at_3_max
value: 13.632235789780415
- type: nauc_map_at_3_std
value: -0.40129174577700716
- type: nauc_map_at_5_diff1
value: 24.513861031197933
- type: nauc_map_at_5_max
value: 16.599888813946688
- type: nauc_map_at_5_std
value: 3.4448514739556346
- type: nauc_mrr_at_1000_diff1
value: 36.57353464537154
- type: nauc_mrr_at_1000_max
value: 55.34763483979515
- type: nauc_mrr_at_1000_std
value: 40.3722796438533
- type: nauc_mrr_at_100_diff1
value: 36.555989566513134
- type: nauc_mrr_at_100_max
value: 55.347805216808396
- type: nauc_mrr_at_100_std
value: 40.38465945075711
- type: nauc_mrr_at_10_diff1
value: 36.771572999261984
- type: nauc_mrr_at_10_max
value: 55.41239897909165
- type: nauc_mrr_at_10_std
value: 40.52058934624793
- type: nauc_mrr_at_1_diff1
value: 38.2472828531032
- type: nauc_mrr_at_1_max
value: 51.528473828685705
- type: nauc_mrr_at_1_std
value: 33.03676467942882
- type: nauc_mrr_at_20_diff1
value: 36.642602571889036
- type: nauc_mrr_at_20_max
value: 55.3763342076553
- type: nauc_mrr_at_20_std
value: 40.41520090500838
- type: nauc_mrr_at_3_diff1
value: 36.79451847426628
- type: nauc_mrr_at_3_max
value: 54.59778581826193
- type: nauc_mrr_at_3_std
value: 39.48392075873095
- type: nauc_mrr_at_5_diff1
value: 36.92150807529304
- type: nauc_mrr_at_5_max
value: 55.03553978718272
- type: nauc_mrr_at_5_std
value: 40.20147745489917
- type: nauc_ndcg_at_1000_diff1
value: 21.843092744321268
- type: nauc_ndcg_at_1000_max
value: 44.93275990394279
- type: nauc_ndcg_at_1000_std
value: 47.09186225236347
- type: nauc_ndcg_at_100_diff1
value: 25.180282568979095
- type: nauc_ndcg_at_100_max
value: 41.737709709508394
- type: nauc_ndcg_at_100_std
value: 38.80950644139446
- type: nauc_ndcg_at_10_diff1
value: 24.108368037214046
- type: nauc_ndcg_at_10_max
value: 41.29298370689967
- type: nauc_ndcg_at_10_std
value: 35.06450769738732
- type: nauc_ndcg_at_1_diff1
value: 35.51010679525079
- type: nauc_ndcg_at_1_max
value: 42.40790024212412
- type: nauc_ndcg_at_1_std
value: 26.696412036243157
- type: nauc_ndcg_at_20_diff1
value: 23.909989673256195
- type: nauc_ndcg_at_20_max
value: 39.78444647091927
- type: nauc_ndcg_at_20_std
value: 33.39544470364529
- type: nauc_ndcg_at_3_diff1
value: 22.50484297956035
- type: nauc_ndcg_at_3_max
value: 39.14551926034168
- type: nauc_ndcg_at_3_std
value: 30.330135925392014
- type: nauc_ndcg_at_5_diff1
value: 21.7798872028265
- type: nauc_ndcg_at_5_max
value: 40.23856975248015
- type: nauc_ndcg_at_5_std
value: 32.438381067440396
- type: nauc_precision_at_1000_diff1
value: -21.62692442272279
- type: nauc_precision_at_1000_max
value: 0.9689046974430882
- type: nauc_precision_at_1000_std
value: 18.54001058230465
- type: nauc_precision_at_100_diff1
value: -10.132258779856192
- type: nauc_precision_at_100_max
value: 23.74516110444681
- type: nauc_precision_at_100_std
value: 47.03416663319965
- type: nauc_precision_at_10_diff1
value: 1.543656509571949
- type: nauc_precision_at_10_max
value: 36.98864812757555
- type: nauc_precision_at_10_std
value: 46.56427199077426
- type: nauc_precision_at_1_diff1
value: 38.2472828531032
- type: nauc_precision_at_1_max
value: 51.528473828685705
- type: nauc_precision_at_1_std
value: 33.03676467942882
- type: nauc_precision_at_20_diff1
value: -4.612864872734335
- type: nauc_precision_at_20_max
value: 34.03565449182125
- type: nauc_precision_at_20_std
value: 48.880727648349534
- type: nauc_precision_at_3_diff1
value: 6.360850444467829
- type: nauc_precision_at_3_max
value: 36.25816942368427
- type: nauc_precision_at_3_std
value: 34.48882647419187
- type: nauc_precision_at_5_diff1
value: 2.6445596936740037
- type: nauc_precision_at_5_max
value: 37.174463388899056
- type: nauc_precision_at_5_std
value: 40.25254370626113
- type: nauc_recall_at_1000_diff1
value: 13.041227176748077
- type: nauc_recall_at_1000_max
value: 39.722336427072094
- type: nauc_recall_at_1000_std
value: 52.04032890059214
- type: nauc_recall_at_100_diff1
value: 18.286096899139153
- type: nauc_recall_at_100_max
value: 34.072389201930314
- type: nauc_recall_at_100_std
value: 37.73637623416653
- type: nauc_recall_at_10_diff1
value: 22.35560419280504
- type: nauc_recall_at_10_max
value: 19.727247199595197
- type: nauc_recall_at_10_std
value: 8.58498575109203
- type: nauc_recall_at_1_diff1
value: 38.67437644802124
- type: nauc_recall_at_1_max
value: 14.52136658726491
- type: nauc_recall_at_1_std
value: -2.8981666782088755
- type: nauc_recall_at_20_diff1
value: 19.026320886902916
- type: nauc_recall_at_20_max
value: 22.753562309469867
- type: nauc_recall_at_20_std
value: 14.89994263882445
- type: nauc_recall_at_3_diff1
value: 23.428129702129684
- type: nauc_recall_at_3_max
value: 10.549153954790542
- type: nauc_recall_at_3_std
value: -1.7590608997055206
- type: nauc_recall_at_5_diff1
value: 21.27448645803921
- type: nauc_recall_at_5_max
value: 13.620279707461677
- type: nauc_recall_at_5_std
value: 2.0577962208292675
- type: ndcg_at_1
value: 46.75
- type: ndcg_at_10
value: 34.827000000000005
- type: ndcg_at_100
value: 38.157999999999994
- type: ndcg_at_1000
value: 44.816
- type: ndcg_at_20
value: 34.152
- type: ndcg_at_3
value: 39.009
- type: ndcg_at_5
value: 36.826
- type: precision_at_1
value: 57.25
- type: precision_at_10
value: 27.575
- type: precision_at_100
value: 8.84
- type: precision_at_1000
value: 1.949
- type: precision_at_20
value: 20.724999999999998
- type: precision_at_3
value: 41.167
- type: precision_at_5
value: 35.199999999999996
- type: recall_at_1
value: 7.049999999999999
- type: recall_at_10
value: 19.817999999999998
- type: recall_at_100
value: 42.559999999999995
- type: recall_at_1000
value: 63.744
- type: recall_at_20
value: 25.968000000000004
- type: recall_at_3
value: 11.959
- type: recall_at_5
value: 14.939
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL (default)
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: main_score
value: 38.828
- type: map_at_1
value: 19.126
- type: map_at_10
value: 31.002000000000002
- type: map_at_100
value: 32.736
- type: map_at_1000
value: 32.933
- type: map_at_20
value: 31.894
- type: map_at_3
value: 26.583000000000002
- type: map_at_5
value: 28.904000000000003
- type: mrr_at_1
value: 37.808641975308646
- type: mrr_at_10
value: 46.36745541838134
- type: mrr_at_100
value: 47.14140915794908
- type: mrr_at_1000
value: 47.190701435388846
- type: mrr_at_20
value: 46.81387776440309
- type: mrr_at_3
value: 43.750000000000014
- type: mrr_at_5
value: 45.23919753086418
- type: nauc_map_at_1000_diff1
value: 38.5532285881503
- type: nauc_map_at_1000_max
value: 34.44383884813453
- type: nauc_map_at_1000_std
value: -1.3963497949476722
- type: nauc_map_at_100_diff1
value: 38.49292464176943
- type: nauc_map_at_100_max
value: 34.33752755618645
- type: nauc_map_at_100_std
value: -1.4794032905848582
- type: nauc_map_at_10_diff1
value: 38.26061536370962
- type: nauc_map_at_10_max
value: 33.16977912721411
- type: nauc_map_at_10_std
value: -2.3853370604730393
- type: nauc_map_at_1_diff1
value: 46.288767289528344
- type: nauc_map_at_1_max
value: 25.67706785013364
- type: nauc_map_at_1_std
value: -6.989769609924645
- type: nauc_map_at_20_diff1
value: 38.507270129330685
- type: nauc_map_at_20_max
value: 33.70963328055982
- type: nauc_map_at_20_std
value: -1.9835510011554272
- type: nauc_map_at_3_diff1
value: 39.81061518646884
- type: nauc_map_at_3_max
value: 30.101186374147748
- type: nauc_map_at_3_std
value: -4.027120247237715
- type: nauc_map_at_5_diff1
value: 38.55602589746512
- type: nauc_map_at_5_max
value: 31.515174267015983
- type: nauc_map_at_5_std
value: -3.4064239358570303
- type: nauc_mrr_at_1000_diff1
value: 45.030514454725726
- type: nauc_mrr_at_1000_max
value: 43.878919881666164
- type: nauc_mrr_at_1000_std
value: 2.517594250297626
- type: nauc_mrr_at_100_diff1
value: 45.00868212878687
- type: nauc_mrr_at_100_max
value: 43.87437011120001
- type: nauc_mrr_at_100_std
value: 2.5257874265014966
- type: nauc_mrr_at_10_diff1
value: 44.855044606754056
- type: nauc_mrr_at_10_max
value: 43.946617058785186
- type: nauc_mrr_at_10_std
value: 2.5173751662794044
- type: nauc_mrr_at_1_diff1
value: 49.441510997817346
- type: nauc_mrr_at_1_max
value: 43.08547383044357
- type: nauc_mrr_at_1_std
value: -1.8747770703324347
- type: nauc_mrr_at_20_diff1
value: 45.019880416584215
- type: nauc_mrr_at_20_max
value: 43.85691473662242
- type: nauc_mrr_at_20_std
value: 2.4625487605091303
- type: nauc_mrr_at_3_diff1
value: 45.322041658604036
- type: nauc_mrr_at_3_max
value: 43.95079293074395
- type: nauc_mrr_at_3_std
value: 2.4644274393435737
- type: nauc_mrr_at_5_diff1
value: 44.99461837803437
- type: nauc_mrr_at_5_max
value: 43.97934275090601
- type: nauc_mrr_at_5_std
value: 2.5353091695125096
- type: nauc_ndcg_at_1000_diff1
value: 39.38449023275524
- type: nauc_ndcg_at_1000_max
value: 39.48382767312788
- type: nauc_ndcg_at_1000_std
value: 3.414789408343409
- type: nauc_ndcg_at_100_diff1
value: 38.29675861135578
- type: nauc_ndcg_at_100_max
value: 38.2674786507297
- type: nauc_ndcg_at_100_std
value: 2.7094055381218207
- type: nauc_ndcg_at_10_diff1
value: 38.09514955708717
- type: nauc_ndcg_at_10_max
value: 36.664923238906525
- type: nauc_ndcg_at_10_std
value: 0.6901410544967921
- type: nauc_ndcg_at_1_diff1
value: 49.441510997817346
- type: nauc_ndcg_at_1_max
value: 43.08547383044357
- type: nauc_ndcg_at_1_std
value: -1.8747770703324347
- type: nauc_ndcg_at_20_diff1
value: 38.44967736231759
- type: nauc_ndcg_at_20_max
value: 36.871179313622584
- type: nauc_ndcg_at_20_std
value: 1.157560360065234
- type: nauc_ndcg_at_3_diff1
value: 39.02419271805571
- type: nauc_ndcg_at_3_max
value: 37.447669442586324
- type: nauc_ndcg_at_3_std
value: 0.41502589779297794
- type: nauc_ndcg_at_5_diff1
value: 38.10233452742001
- type: nauc_ndcg_at_5_max
value: 35.816381905465676
- type: nauc_ndcg_at_5_std
value: -0.3704499913387088
- type: nauc_precision_at_1000_diff1
value: 2.451267097838658
- type: nauc_precision_at_1000_max
value: 29.116394969085306
- type: nauc_precision_at_1000_std
value: 14.85900786538363
- type: nauc_precision_at_100_diff1
value: 8.10919082251277
- type: nauc_precision_at_100_max
value: 36.28388256191417
- type: nauc_precision_at_100_std
value: 14.830039904317657
- type: nauc_precision_at_10_diff1
value: 15.02446609920477
- type: nauc_precision_at_10_max
value: 41.008463775454054
- type: nauc_precision_at_10_std
value: 10.431403152334486
- type: nauc_precision_at_1_diff1
value: 49.441510997817346
- type: nauc_precision_at_1_max
value: 43.08547383044357
- type: nauc_precision_at_1_std
value: -1.8747770703324347
- type: nauc_precision_at_20_diff1
value: 14.222022201169926
- type: nauc_precision_at_20_max
value: 40.10189643835305
- type: nauc_precision_at_20_std
value: 12.204443815975527
- type: nauc_precision_at_3_diff1
value: 25.41905395341234
- type: nauc_precision_at_3_max
value: 41.56133905339819
- type: nauc_precision_at_3_std
value: 5.575516915590082
- type: nauc_precision_at_5_diff1
value: 20.20081221089351
- type: nauc_precision_at_5_max
value: 40.95218555916681
- type: nauc_precision_at_5_std
value: 7.2040745500708745
- type: nauc_recall_at_1000_diff1
value: 28.021198234033395
- type: nauc_recall_at_1000_max
value: 36.165148684597504
- type: nauc_recall_at_1000_std
value: 28.28852356008973
- type: nauc_recall_at_100_diff1
value: 21.882447802741897
- type: nauc_recall_at_100_max
value: 26.979684607567222
- type: nauc_recall_at_100_std
value: 9.783658817010082
- type: nauc_recall_at_10_diff1
value: 28.493097951178818
- type: nauc_recall_at_10_max
value: 29.40937476550134
- type: nauc_recall_at_10_std
value: 2.7593763576979353
- type: nauc_recall_at_1_diff1
value: 46.288767289528344
- type: nauc_recall_at_1_max
value: 25.67706785013364
- type: nauc_recall_at_1_std
value: -6.989769609924645
- type: nauc_recall_at_20_diff1
value: 27.638381299425234
- type: nauc_recall_at_20_max
value: 27.942035836106328
- type: nauc_recall_at_20_std
value: 3.489835161380808
- type: nauc_recall_at_3_diff1
value: 33.90054781392646
- type: nauc_recall_at_3_max
value: 27.778812533030322
- type: nauc_recall_at_3_std
value: -0.03054068020022706
- type: nauc_recall_at_5_diff1
value: 30.279060732221346
- type: nauc_recall_at_5_max
value: 27.49854749597931
- type: nauc_recall_at_5_std
value: 0.5434664581939099
- type: ndcg_at_1
value: 37.809
- type: ndcg_at_10
value: 38.828
- type: ndcg_at_100
value: 45.218
- type: ndcg_at_1000
value: 48.510999999999996
- type: ndcg_at_20
value: 41.11
- type: ndcg_at_3
value: 34.466
- type: ndcg_at_5
value: 35.843
- type: precision_at_1
value: 37.809
- type: precision_at_10
value: 11.157
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.233
- type: precision_at_20
value: 6.497
- type: precision_at_3
value: 23.044999999999998
- type: precision_at_5
value: 17.284
- type: recall_at_1
value: 19.126
- type: recall_at_10
value: 46.062
- type: recall_at_100
value: 70.22800000000001
- type: recall_at_1000
value: 89.803
- type: recall_at_20
value: 53.217999999999996
- type: recall_at_3
value: 30.847
- type: recall_at_5
value: 37.11
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL (default)
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: main_score
value: 60.27
- type: map_at_1
value: 35.199000000000005
- type: map_at_10
value: 51.369
- type: map_at_100
value: 52.212
- type: map_at_1000
value: 52.28
- type: map_at_20
value: 51.864
- type: map_at_3
value: 48.446
- type: map_at_5
value: 50.302
- type: mrr_at_1
value: 70.39837947332883
- type: mrr_at_10
value: 76.8346141067273
- type: mrr_at_100
value: 77.10724392048137
- type: mrr_at_1000
value: 77.12037412892865
- type: mrr_at_20
value: 77.01061532947222
- type: mrr_at_3
value: 75.5908170155299
- type: mrr_at_5
value: 76.39095205941899
- type: nauc_map_at_1000_diff1
value: 24.701387884989117
- type: nauc_map_at_1000_max
value: 23.25553235642178
- type: nauc_map_at_1000_std
value: 7.1803506915661774
- type: nauc_map_at_100_diff1
value: 24.674498622483103
- type: nauc_map_at_100_max
value: 23.234948525052175
- type: nauc_map_at_100_std
value: 7.168677997105447
- type: nauc_map_at_10_diff1
value: 24.676025039755626
- type: nauc_map_at_10_max
value: 23.171971872726964
- type: nauc_map_at_10_std
value: 6.485610909852058
- type: nauc_map_at_1_diff1
value: 68.90178464319715
- type: nauc_map_at_1_max
value: 46.05537868917558
- type: nauc_map_at_1_std
value: 1.7658552480698708
- type: nauc_map_at_20_diff1
value: 24.69297151842494
- type: nauc_map_at_20_max
value: 23.213064691673637
- type: nauc_map_at_20_std
value: 6.9357946556849
- type: nauc_map_at_3_diff1
value: 26.279128947950507
- type: nauc_map_at_3_max
value: 23.929537354117922
- type: nauc_map_at_3_std
value: 4.625061565714759
- type: nauc_map_at_5_diff1
value: 25.04448959482816
- type: nauc_map_at_5_max
value: 23.432012857899338
- type: nauc_map_at_5_std
value: 5.845744681998008
- type: nauc_mrr_at_1000_diff1
value: 66.7503918108276
- type: nauc_mrr_at_1000_max
value: 48.42897342336844
- type: nauc_mrr_at_1000_std
value: 5.3097517971144415
- type: nauc_mrr_at_100_diff1
value: 66.74645215862695
- type: nauc_mrr_at_100_max
value: 48.4368663009989
- type: nauc_mrr_at_100_std
value: 5.322297898555188
- type: nauc_mrr_at_10_diff1
value: 66.69310166180729
- type: nauc_mrr_at_10_max
value: 48.475437698330225
- type: nauc_mrr_at_10_std
value: 5.258183461631702
- type: nauc_mrr_at_1_diff1
value: 68.90178464319715
- type: nauc_mrr_at_1_max
value: 46.05537868917558
- type: nauc_mrr_at_1_std
value: 1.7658552480698708
- type: nauc_mrr_at_20_diff1
value: 66.72000262431975
- type: nauc_mrr_at_20_max
value: 48.45593642981319
- type: nauc_mrr_at_20_std
value: 5.353665929072101
- type: nauc_mrr_at_3_diff1
value: 66.84936676396276
- type: nauc_mrr_at_3_max
value: 48.466611276778295
- type: nauc_mrr_at_3_std
value: 4.485810398557475
- type: nauc_mrr_at_5_diff1
value: 66.62362565394174
- type: nauc_mrr_at_5_max
value: 48.456431835482014
- type: nauc_mrr_at_5_std
value: 5.08482458391903
- type: nauc_ndcg_at_1000_diff1
value: 29.984825173719443
- type: nauc_ndcg_at_1000_max
value: 27.289179238639893
- type: nauc_ndcg_at_1000_std
value: 10.661480455527526
- type: nauc_ndcg_at_100_diff1
value: 29.322074257047877
- type: nauc_ndcg_at_100_max
value: 26.850650276220605
- type: nauc_ndcg_at_100_std
value: 10.599247982501902
- type: nauc_ndcg_at_10_diff1
value: 29.659909113886094
- type: nauc_ndcg_at_10_max
value: 26.836139599331005
- type: nauc_ndcg_at_10_std
value: 8.12844399452719
- type: nauc_ndcg_at_1_diff1
value: 68.90178464319715
- type: nauc_ndcg_at_1_max
value: 46.05537868917558
- type: nauc_ndcg_at_1_std
value: 1.7658552480698708
- type: nauc_ndcg_at_20_diff1
value: 29.510802214854294
- type: nauc_ndcg_at_20_max
value: 26.775562637730722
- type: nauc_ndcg_at_20_std
value: 9.341342661702363
- type: nauc_ndcg_at_3_diff1
value: 32.741885846292966
- type: nauc_ndcg_at_3_max
value: 28.44225108761343
- type: nauc_ndcg_at_3_std
value: 5.204440768465042
- type: nauc_ndcg_at_5_diff1
value: 30.57856348635919
- type: nauc_ndcg_at_5_max
value: 27.475007474301698
- type: nauc_ndcg_at_5_std
value: 6.961546044312487
- type: nauc_precision_at_1000_diff1
value: 0.002113156309413332
- type: nauc_precision_at_1000_max
value: 11.198242419541286
- type: nauc_precision_at_1000_std
value: 28.69676419166541
- type: nauc_precision_at_100_diff1
value: 3.6049575557782627
- type: nauc_precision_at_100_max
value: 12.499173524574791
- type: nauc_precision_at_100_std
value: 23.3755281004721
- type: nauc_precision_at_10_diff1
value: 10.922574784853193
- type: nauc_precision_at_10_max
value: 16.23221529562036
- type: nauc_precision_at_10_std
value: 12.45014808813857
- type: nauc_precision_at_1_diff1
value: 68.90178464319715
- type: nauc_precision_at_1_max
value: 46.05537868917558
- type: nauc_precision_at_1_std
value: 1.7658552480698708
- type: nauc_precision_at_20_diff1
value: 8.840710781302827
- type: nauc_precision_at_20_max
value: 14.804644554205524
- type: nauc_precision_at_20_std
value: 16.245009770815237
- type: nauc_precision_at_3_diff1
value: 19.447291487137573
- type: nauc_precision_at_3_max
value: 21.47123471597057
- type: nauc_precision_at_3_std
value: 6.441862800128802
- type: nauc_precision_at_5_diff1
value: 14.078545719721108
- type: nauc_precision_at_5_max
value: 18.468288046016387
- type: nauc_precision_at_5_std
value: 9.58650641691393
- type: nauc_recall_at_1000_diff1
value: 0.0021131563095336584
- type: nauc_recall_at_1000_max
value: 11.198242419541558
- type: nauc_recall_at_1000_std
value: 28.6967641916655
- type: nauc_recall_at_100_diff1
value: 3.6049575557781393
- type: nauc_recall_at_100_max
value: 12.499173524574765
- type: nauc_recall_at_100_std
value: 23.375528100472074
- type: nauc_recall_at_10_diff1
value: 10.922574784853168
- type: nauc_recall_at_10_max
value: 16.2322152956203
- type: nauc_recall_at_10_std
value: 12.450148088138535
- type: nauc_recall_at_1_diff1
value: 68.90178464319715
- type: nauc_recall_at_1_max
value: 46.05537868917558
- type: nauc_recall_at_1_std
value: 1.7658552480698708
- type: nauc_recall_at_20_diff1
value: 8.840710781302905
- type: nauc_recall_at_20_max
value: 14.804644554205515
- type: nauc_recall_at_20_std
value: 16.245009770815273
- type: nauc_recall_at_3_diff1
value: 19.447291487137498
- type: nauc_recall_at_3_max
value: 21.47123471597054
- type: nauc_recall_at_3_std
value: 6.441862800128763
- type: nauc_recall_at_5_diff1
value: 14.07854571972115
- type: nauc_recall_at_5_max
value: 18.468288046016337
- type: nauc_recall_at_5_std
value: 9.586506416913904
- type: ndcg_at_1
value: 70.39800000000001
- type: ndcg_at_10
value: 60.27
- type: ndcg_at_100
value: 63.400999999999996
- type: ndcg_at_1000
value: 64.847
- type: ndcg_at_20
value: 61.571
- type: ndcg_at_3
value: 55.875
- type: ndcg_at_5
value: 58.36599999999999
- type: precision_at_1
value: 70.39800000000001
- type: precision_at_10
value: 12.46
- type: precision_at_100
value: 1.493
- type: precision_at_1000
value: 0.169
- type: precision_at_20
value: 6.65
- type: precision_at_3
value: 35.062
- type: precision_at_5
value: 23.009
- type: recall_at_1
value: 35.199000000000005
- type: recall_at_10
value: 62.302
- type: recall_at_100
value: 74.666
- type: recall_at_1000
value: 84.355
- type: recall_at_20
value: 66.496
- type: recall_at_3
value: 52.593
- type: recall_at_5
value: 57.522
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL (default)
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: main_score
value: 64.886
- type: map_at_1
value: 1.644
- type: map_at_10
value: 12.24
- type: map_at_100
value: 28.248
- type: map_at_1000
value: 33.506
- type: map_at_20
value: 17.497
- type: map_at_3
value: 4.9399999999999995
- type: map_at_5
value: 8.272
- type: mrr_at_1
value: 83.72093023255815
- type: mrr_at_10
value: 91.08527131782945
- type: mrr_at_100
value: 91.08527131782945
- type: mrr_at_1000
value: 91.08527131782945
- type: mrr_at_20
value: 91.08527131782945
- type: mrr_at_3
value: 91.08527131782945
- type: mrr_at_5
value: 91.08527131782945
- type: nauc_map_at_1000_diff1
value: -36.428271627303424
- type: nauc_map_at_1000_max
value: 44.87615127218638
- type: nauc_map_at_1000_std
value: 67.92696808824724
- type: nauc_map_at_100_diff1
value: -28.11674206786188
- type: nauc_map_at_100_max
value: 36.422779766334955
- type: nauc_map_at_100_std
value: 49.99876313755116
- type: nauc_map_at_10_diff1
value: -5.838593619806058
- type: nauc_map_at_10_max
value: 11.026519190509742
- type: nauc_map_at_10_std
value: 2.5268752263522045
- type: nauc_map_at_1_diff1
value: 17.897907271073016
- type: nauc_map_at_1_max
value: 12.229062762540844
- type: nauc_map_at_1_std
value: -4.088830895573149
- type: nauc_map_at_20_diff1
value: -13.871097716255626
- type: nauc_map_at_20_max
value: 19.291271635609533
- type: nauc_map_at_20_std
value: 16.745335606507826
- type: nauc_map_at_3_diff1
value: 4.425238457033843
- type: nauc_map_at_3_max
value: 4.611864744680824
- type: nauc_map_at_3_std
value: -8.986916608582863
- type: nauc_map_at_5_diff1
value: -6.254849256920095
- type: nauc_map_at_5_max
value: 2.729437079919823
- type: nauc_map_at_5_std
value: -7.235906279913092
- type: nauc_mrr_at_1000_diff1
value: 52.18669104947672
- type: nauc_mrr_at_1000_max
value: 68.26259125411818
- type: nauc_mrr_at_1000_std
value: 56.345086428353575
- type: nauc_mrr_at_100_diff1
value: 52.18669104947672
- type: nauc_mrr_at_100_max
value: 68.26259125411818
- type: nauc_mrr_at_100_std
value: 56.345086428353575
- type: nauc_mrr_at_10_diff1
value: 52.18669104947672
- type: nauc_mrr_at_10_max
value: 68.26259125411818
- type: nauc_mrr_at_10_std
value: 56.345086428353575
- type: nauc_mrr_at_1_diff1
value: 56.55126663944154
- type: nauc_mrr_at_1_max
value: 66.37014285522565
- type: nauc_mrr_at_1_std
value: 53.2508271389779
- type: nauc_mrr_at_20_diff1
value: 52.18669104947672
- type: nauc_mrr_at_20_max
value: 68.26259125411818
- type: nauc_mrr_at_20_std
value: 56.345086428353575
- type: nauc_mrr_at_3_diff1
value: 52.18669104947672
- type: nauc_mrr_at_3_max
value: 68.26259125411818
- type: nauc_mrr_at_3_std
value: 56.345086428353575
- type: nauc_mrr_at_5_diff1
value: 52.18669104947672
- type: nauc_mrr_at_5_max
value: 68.26259125411818
- type: nauc_mrr_at_5_std
value: 56.345086428353575
- type: nauc_ndcg_at_1000_diff1
value: -19.06422926483731
- type: nauc_ndcg_at_1000_max
value: 56.30853514590265
- type: nauc_ndcg_at_1000_std
value: 70.30810947505557
- type: nauc_ndcg_at_100_diff1
value: -25.72587586459692
- type: nauc_ndcg_at_100_max
value: 51.433781241604194
- type: nauc_ndcg_at_100_std
value: 68.37678512652792
- type: nauc_ndcg_at_10_diff1
value: -23.21198108212602
- type: nauc_ndcg_at_10_max
value: 43.5450720846516
- type: nauc_ndcg_at_10_std
value: 48.78307907005605
- type: nauc_ndcg_at_1_diff1
value: 44.00179301267447
- type: nauc_ndcg_at_1_max
value: 48.202370455680395
- type: nauc_ndcg_at_1_std
value: 25.69655992704088
- type: nauc_ndcg_at_20_diff1
value: -33.88168753446507
- type: nauc_ndcg_at_20_max
value: 45.16199742613164
- type: nauc_ndcg_at_20_std
value: 61.87098383164902
- type: nauc_ndcg_at_3_diff1
value: 11.19174449544048
- type: nauc_ndcg_at_3_max
value: 44.34069860560555
- type: nauc_ndcg_at_3_std
value: 27.451258369798115
- type: nauc_ndcg_at_5_diff1
value: -7.186520929432436
- type: nauc_ndcg_at_5_max
value: 43.41869981139378
- type: nauc_ndcg_at_5_std
value: 34.89898115995178
- type: nauc_precision_at_1000_diff1
value: -34.43998154563451
- type: nauc_precision_at_1000_max
value: 29.172655907480372
- type: nauc_precision_at_1000_std
value: 65.15824469614837
- type: nauc_precision_at_100_diff1
value: -37.82409643259692
- type: nauc_precision_at_100_max
value: 38.24986991317909
- type: nauc_precision_at_100_std
value: 72.74768183105327
- type: nauc_precision_at_10_diff1
value: -32.21556182780535
- type: nauc_precision_at_10_max
value: 34.27170432382651
- type: nauc_precision_at_10_std
value: 58.358255004394664
- type: nauc_precision_at_1_diff1
value: 56.55126663944154
- type: nauc_precision_at_1_max
value: 66.37014285522565
- type: nauc_precision_at_1_std
value: 53.2508271389779
- type: nauc_precision_at_20_diff1
value: -40.18751579026395
- type: nauc_precision_at_20_max
value: 33.960783153758896
- type: nauc_precision_at_20_std
value: 65.42918390184195
- type: nauc_precision_at_3_diff1
value: -7.073870209006578
- type: nauc_precision_at_3_max
value: 50.81535269862325
- type: nauc_precision_at_3_std
value: 59.248681565955685
- type: nauc_precision_at_5_diff1
value: -31.136580596983876
- type: nauc_precision_at_5_max
value: 45.88147792380426
- type: nauc_precision_at_5_std
value: 67.46814230928243
- type: nauc_recall_at_1000_diff1
value: -23.15699999594577
- type: nauc_recall_at_1000_max
value: 39.77277799761876
- type: nauc_recall_at_1000_std
value: 60.326168012901114
- type: nauc_recall_at_100_diff1
value: -21.636664823598498
- type: nauc_recall_at_100_max
value: 31.104969346131583
- type: nauc_recall_at_100_std
value: 38.811686891592096
- type: nauc_recall_at_10_diff1
value: -10.542765625053569
- type: nauc_recall_at_10_max
value: 2.043876058107446
- type: nauc_recall_at_10_std
value: -5.578449908984766
- type: nauc_recall_at_1_diff1
value: 17.897907271073016
- type: nauc_recall_at_1_max
value: 12.229062762540844
- type: nauc_recall_at_1_std
value: -4.088830895573149
- type: nauc_recall_at_20_diff1
value: -15.132909355710103
- type: nauc_recall_at_20_max
value: 12.659765287241065
- type: nauc_recall_at_20_std
value: 8.277887800815819
- type: nauc_recall_at_3_diff1
value: -3.1975017812715016
- type: nauc_recall_at_3_max
value: -3.5539857085038538
- type: nauc_recall_at_3_std
value: -14.712102851318118
- type: nauc_recall_at_5_diff1
value: -14.040507717380743
- type: nauc_recall_at_5_max
value: -6.126912150131701
- type: nauc_recall_at_5_std
value: -13.821624015640355
- type: ndcg_at_1
value: 71.318
- type: ndcg_at_10
value: 64.886
- type: ndcg_at_100
value: 53.187
- type: ndcg_at_1000
value: 59.897999999999996
- type: ndcg_at_20
value: 58.96
- type: ndcg_at_3
value: 69.736
- type: ndcg_at_5
value: 70.14099999999999
- type: precision_at_1
value: 83.721
- type: precision_at_10
value: 71.163
- type: precision_at_100
value: 29.465000000000003
- type: precision_at_1000
value: 5.665
- type: precision_at_20
value: 57.791000000000004
- type: precision_at_3
value: 82.171
- type: precision_at_5
value: 81.86
- type: recall_at_1
value: 1.644
- type: recall_at_10
value: 14.238000000000001
- type: recall_at_100
value: 39.831
- type: recall_at_1000
value: 64.057
- type: recall_at_20
value: 21.021
- type: recall_at_3
value: 5.53
- type: recall_at_5
value: 9.623
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL (default)
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: main_score
value: 31.391000000000002
- type: map_at_1
value: 4.163
- type: map_at_10
value: 10.744
- type: map_at_100
value: 14.038999999999998
- type: map_at_1000
value: 15.434999999999999
- type: map_at_20
value: 12.16
- type: map_at_3
value: 7.614999999999999
- type: map_at_5
value: 9.027000000000001
- type: mrr_at_1
value: 39.0092879256966
- type: mrr_at_10
value: 48.69809327239668
- type: mrr_at_100
value: 49.20788148442068
- type: mrr_at_1000
value: 49.25509336494706
- type: mrr_at_20
value: 48.99606551850896
- type: mrr_at_3
value: 46.284829721362236
- type: mrr_at_5
value: 47.77089783281735
- type: nauc_map_at_1000_diff1
value: 22.75421477116417
- type: nauc_map_at_1000_max
value: 49.242283787799046
- type: nauc_map_at_1000_std
value: 29.056888272331832
- type: nauc_map_at_100_diff1
value: 23.585977398585594
- type: nauc_map_at_100_max
value: 48.25845199409498
- type: nauc_map_at_100_std
value: 24.944264511223693
- type: nauc_map_at_10_diff1
value: 27.386613094780255
- type: nauc_map_at_10_max
value: 41.52415346691586
- type: nauc_map_at_10_std
value: 12.93872448563755
- type: nauc_map_at_1_diff1
value: 46.78688143865053
- type: nauc_map_at_1_max
value: 37.20408843995871
- type: nauc_map_at_1_std
value: 4.383444959401098
- type: nauc_map_at_20_diff1
value: 25.590969047740288
- type: nauc_map_at_20_max
value: 44.57109307999418
- type: nauc_map_at_20_std
value: 16.45855141821407
- type: nauc_map_at_3_diff1
value: 36.30017108362863
- type: nauc_map_at_3_max
value: 34.66149613991648
- type: nauc_map_at_3_std
value: 5.67985905078467
- type: nauc_map_at_5_diff1
value: 31.157644795417223
- type: nauc_map_at_5_max
value: 37.274738661636825
- type: nauc_map_at_5_std
value: 8.70088872394168
- type: nauc_mrr_at_1000_diff1
value: 25.638564218157384
- type: nauc_mrr_at_1000_max
value: 57.77788270285353
- type: nauc_mrr_at_1000_std
value: 43.507586592911274
- type: nauc_mrr_at_100_diff1
value: 25.662002580561584
- type: nauc_mrr_at_100_max
value: 57.80578394278584
- type: nauc_mrr_at_100_std
value: 43.543905743986635
- type: nauc_mrr_at_10_diff1
value: 25.426034796339835
- type: nauc_mrr_at_10_max
value: 57.68443186258669
- type: nauc_mrr_at_10_std
value: 43.438009108331215
- type: nauc_mrr_at_1_diff1
value: 26.073028156311075
- type: nauc_mrr_at_1_max
value: 52.11817916720053
- type: nauc_mrr_at_1_std
value: 37.41073893153695
- type: nauc_mrr_at_20_diff1
value: 25.548645553336147
- type: nauc_mrr_at_20_max
value: 57.78552760401915
- type: nauc_mrr_at_20_std
value: 43.521687428822325
- type: nauc_mrr_at_3_diff1
value: 25.72662577397805
- type: nauc_mrr_at_3_max
value: 56.891263536265605
- type: nauc_mrr_at_3_std
value: 41.384872305390104
- type: nauc_mrr_at_5_diff1
value: 25.552211551655386
- type: nauc_mrr_at_5_max
value: 57.976813828353926
- type: nauc_mrr_at_5_std
value: 43.504564461855544
- type: nauc_ndcg_at_1000_diff1
value: 23.456158044182757
- type: nauc_ndcg_at_1000_max
value: 60.05411773552709
- type: nauc_ndcg_at_1000_std
value: 47.857510017262584
- type: nauc_ndcg_at_100_diff1
value: 19.711635700390772
- type: nauc_ndcg_at_100_max
value: 56.178746740470665
- type: nauc_ndcg_at_100_std
value: 42.36829180286942
- type: nauc_ndcg_at_10_diff1
value: 18.364428967788413
- type: nauc_ndcg_at_10_max
value: 54.38372506578223
- type: nauc_ndcg_at_10_std
value: 41.75765411340369
- type: nauc_ndcg_at_1_diff1
value: 26.571093272640773
- type: nauc_ndcg_at_1_max
value: 51.061788341958284
- type: nauc_ndcg_at_1_std
value: 36.514987974075986
- type: nauc_ndcg_at_20_diff1
value: 18.345487193027697
- type: nauc_ndcg_at_20_max
value: 54.62621882656994
- type: nauc_ndcg_at_20_std
value: 41.42835554714241
- type: nauc_ndcg_at_3_diff1
value: 23.260105658139025
- type: nauc_ndcg_at_3_max
value: 52.07747385334546
- type: nauc_ndcg_at_3_std
value: 36.91985577837284
- type: nauc_ndcg_at_5_diff1
value: 20.40428109665566
- type: nauc_ndcg_at_5_max
value: 53.52015347884604
- type: nauc_ndcg_at_5_std
value: 39.46008849580017
- type: nauc_precision_at_1000_diff1
value: -7.3487344916380035
- type: nauc_precision_at_1000_max
value: 16.58045221394852
- type: nauc_precision_at_1000_std
value: 38.94030932397075
- type: nauc_precision_at_100_diff1
value: -5.257743986683922
- type: nauc_precision_at_100_max
value: 34.43071687475306
- type: nauc_precision_at_100_std
value: 53.499519170670474
- type: nauc_precision_at_10_diff1
value: 2.385136433119139
- type: nauc_precision_at_10_max
value: 47.210743878631064
- type: nauc_precision_at_10_std
value: 47.22767704186548
- type: nauc_precision_at_1_diff1
value: 26.073028156311075
- type: nauc_precision_at_1_max
value: 52.11817916720053
- type: nauc_precision_at_1_std
value: 37.41073893153695
- type: nauc_precision_at_20_diff1
value: -0.3531531127238474
- type: nauc_precision_at_20_max
value: 44.78044604856974
- type: nauc_precision_at_20_std
value: 49.532804150743615
- type: nauc_precision_at_3_diff1
value: 15.350050569991447
- type: nauc_precision_at_3_max
value: 51.01572315596549
- type: nauc_precision_at_3_std
value: 38.801125728413155
- type: nauc_precision_at_5_diff1
value: 9.109003666144694
- type: nauc_precision_at_5_max
value: 50.935269774898494
- type: nauc_precision_at_5_std
value: 43.323548180559676
- type: nauc_recall_at_1000_diff1
value: 16.64743647648886
- type: nauc_recall_at_1000_max
value: 38.46012283772285
- type: nauc_recall_at_1000_std
value: 36.02016164796441
- type: nauc_recall_at_100_diff1
value: 14.005834785186744
- type: nauc_recall_at_100_max
value: 37.70026105513647
- type: nauc_recall_at_100_std
value: 27.085222642129697
- type: nauc_recall_at_10_diff1
value: 21.204106627422632
- type: nauc_recall_at_10_max
value: 36.737624881893424
- type: nauc_recall_at_10_std
value: 13.755054514272702
- type: nauc_recall_at_1_diff1
value: 46.78688143865053
- type: nauc_recall_at_1_max
value: 37.20408843995871
- type: nauc_recall_at_1_std
value: 4.383444959401098
- type: nauc_recall_at_20_diff1
value: 19.740977611421933
- type: nauc_recall_at_20_max
value: 39.21908969539783
- type: nauc_recall_at_20_std
value: 16.560269670318494
- type: nauc_recall_at_3_diff1
value: 32.189359545367815
- type: nauc_recall_at_3_max
value: 31.693634445562758
- type: nauc_recall_at_3_std
value: 6.246326281543587
- type: nauc_recall_at_5_diff1
value: 25.51586860499901
- type: nauc_recall_at_5_max
value: 33.15934725342885
- type: nauc_recall_at_5_std
value: 9.677778511696705
- type: ndcg_at_1
value: 37.307
- type: ndcg_at_10
value: 31.391000000000002
- type: ndcg_at_100
value: 28.877999999999997
- type: ndcg_at_1000
value: 37.16
- type: ndcg_at_20
value: 29.314
- type: ndcg_at_3
value: 35.405
- type: ndcg_at_5
value: 33.922999999999995
- type: precision_at_1
value: 39.009
- type: precision_at_10
value: 24.52
- type: precision_at_100
value: 7.703
- type: precision_at_1000
value: 2.04
- type: precision_at_20
value: 18.08
- type: precision_at_3
value: 34.469
- type: precision_at_5
value: 30.712
- type: recall_at_1
value: 4.163
- type: recall_at_10
value: 15.015999999999998
- type: recall_at_100
value: 30.606
- type: recall_at_1000
value: 59.606
- type: recall_at_20
value: 19.09
- type: recall_at_3
value: 9.139
- type: recall_at_5
value: 11.477
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL (default)
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: main_score
value: 54.017
- type: map_at_1
value: 34.193
- type: map_at_10
value: 47.497
- type: map_at_100
value: 48.441
- type: map_at_1000
value: 48.481
- type: map_at_20
value: 48.093
- type: map_at_3
value: 44.017
- type: map_at_5
value: 46.111000000000004
- type: mrr_at_1
value: 37.949015063731174
- type: mrr_at_10
value: 49.915772315105954
- type: mrr_at_100
value: 50.62841255829997
- type: mrr_at_1000
value: 50.656773027666745
- type: mrr_at_20
value: 50.37785276657083
- type: mrr_at_3
value: 46.98725376593267
- type: mrr_at_5
value: 48.763035921205066
- type: nauc_map_at_1000_diff1
value: 39.5632191792873
- type: nauc_map_at_1000_max
value: 37.4728247053629
- type: nauc_map_at_1000_std
value: 5.742498414663762
- type: nauc_map_at_100_diff1
value: 39.555570352061906
- type: nauc_map_at_100_max
value: 37.497880976847334
- type: nauc_map_at_100_std
value: 5.7798021019465375
- type: nauc_map_at_10_diff1
value: 39.5423723444454
- type: nauc_map_at_10_max
value: 37.41661971723365
- type: nauc_map_at_10_std
value: 5.2378002164144695
- type: nauc_map_at_1_diff1
value: 41.52697034146981
- type: nauc_map_at_1_max
value: 28.558995576942863
- type: nauc_map_at_1_std
value: 0.13094542859192052
- type: nauc_map_at_20_diff1
value: 39.55484628943701
- type: nauc_map_at_20_max
value: 37.5247794933719
- type: nauc_map_at_20_std
value: 5.702881342279231
- type: nauc_map_at_3_diff1
value: 39.949323925425325
- type: nauc_map_at_3_max
value: 35.770298168901924
- type: nauc_map_at_3_std
value: 2.9127112432479874
- type: nauc_map_at_5_diff1
value: 39.768310617004545
- type: nauc_map_at_5_max
value: 37.1549191664796
- type: nauc_map_at_5_std
value: 4.4681285748269515
- type: nauc_mrr_at_1000_diff1
value: 39.14001746706457
- type: nauc_mrr_at_1000_max
value: 37.477376518267775
- type: nauc_mrr_at_1000_std
value: 6.8088891531621565
- type: nauc_mrr_at_100_diff1
value: 39.13054707413684
- type: nauc_mrr_at_100_max
value: 37.498126443766274
- type: nauc_mrr_at_100_std
value: 6.839411380129971
- type: nauc_mrr_at_10_diff1
value: 39.09764730048156
- type: nauc_mrr_at_10_max
value: 37.58593798217306
- type: nauc_mrr_at_10_std
value: 6.713795164982413
- type: nauc_mrr_at_1_diff1
value: 41.581599918664075
- type: nauc_mrr_at_1_max
value: 31.500589231378722
- type: nauc_mrr_at_1_std
value: 2.059116370339438
- type: nauc_mrr_at_20_diff1
value: 39.09011023988447
- type: nauc_mrr_at_20_max
value: 37.55856008791344
- type: nauc_mrr_at_20_std
value: 6.847165397615844
- type: nauc_mrr_at_3_diff1
value: 39.382542043738
- type: nauc_mrr_at_3_max
value: 36.49265363659468
- type: nauc_mrr_at_3_std
value: 4.759157976438336
- type: nauc_mrr_at_5_diff1
value: 39.304826333759976
- type: nauc_mrr_at_5_max
value: 37.46326016736024
- type: nauc_mrr_at_5_std
value: 6.122608305766621
- type: nauc_ndcg_at_1000_diff1
value: 38.568500038453266
- type: nauc_ndcg_at_1000_max
value: 39.799710882413166
- type: nauc_ndcg_at_1000_std
value: 9.357010223096639
- type: nauc_ndcg_at_100_diff1
value: 38.38026091343228
- type: nauc_ndcg_at_100_max
value: 40.48398173542486
- type: nauc_ndcg_at_100_std
value: 10.373054013302214
- type: nauc_ndcg_at_10_diff1
value: 38.27340980909964
- type: nauc_ndcg_at_10_max
value: 40.35241649744093
- type: nauc_ndcg_at_10_std
value: 8.579139930345168
- type: nauc_ndcg_at_1_diff1
value: 41.581599918664075
- type: nauc_ndcg_at_1_max
value: 31.500589231378722
- type: nauc_ndcg_at_1_std
value: 2.059116370339438
- type: nauc_ndcg_at_20_diff1
value: 38.26453028884807
- type: nauc_ndcg_at_20_max
value: 40.70517858426641
- type: nauc_ndcg_at_20_std
value: 9.987693876137905
- type: nauc_ndcg_at_3_diff1
value: 39.2078971733273
- type: nauc_ndcg_at_3_max
value: 37.48672195565316
- type: nauc_ndcg_at_3_std
value: 4.051464994659221
- type: nauc_ndcg_at_5_diff1
value: 38.883693595665285
- type: nauc_ndcg_at_5_max
value: 39.763115634437135
- type: nauc_ndcg_at_5_std
value: 6.738980451582073
- type: nauc_precision_at_1000_diff1
value: -7.223215910619012
- type: nauc_precision_at_1000_max
value: 13.075844604892161
- type: nauc_precision_at_1000_std
value: 19.864336920890107
- type: nauc_precision_at_100_diff1
value: 1.3305994810812418
- type: nauc_precision_at_100_max
value: 25.9219108557104
- type: nauc_precision_at_100_std
value: 27.5076605928207
- type: nauc_precision_at_10_diff1
value: 18.441551484970326
- type: nauc_precision_at_10_max
value: 39.85995330437054
- type: nauc_precision_at_10_std
value: 20.561269077428914
- type: nauc_precision_at_1_diff1
value: 41.581599918664075
- type: nauc_precision_at_1_max
value: 31.500589231378722
- type: nauc_precision_at_1_std
value: 2.059116370339438
- type: nauc_precision_at_20_diff1
value: 12.579593891480531
- type: nauc_precision_at_20_max
value: 36.620221830588775
- type: nauc_precision_at_20_std
value: 26.40364876775059
- type: nauc_precision_at_3_diff1
value: 30.158859294487073
- type: nauc_precision_at_3_max
value: 41.168215766389174
- type: nauc_precision_at_3_std
value: 9.44345004450809
- type: nauc_precision_at_5_diff1
value: 25.438624678672785
- type: nauc_precision_at_5_max
value: 42.72802023518524
- type: nauc_precision_at_5_std
value: 15.357657388511099
- type: nauc_recall_at_1000_diff1
value: 24.987564782718003
- type: nauc_recall_at_1000_max
value: 70.508416373353
- type: nauc_recall_at_1000_std
value: 69.75092280398808
- type: nauc_recall_at_100_diff1
value: 29.504202856421397
- type: nauc_recall_at_100_max
value: 63.41356585545318
- type: nauc_recall_at_100_std
value: 50.09250954437847
- type: nauc_recall_at_10_diff1
value: 32.355776022971774
- type: nauc_recall_at_10_max
value: 49.47121901667283
- type: nauc_recall_at_10_std
value: 19.418439406631244
- type: nauc_recall_at_1_diff1
value: 41.52697034146981
- type: nauc_recall_at_1_max
value: 28.558995576942863
- type: nauc_recall_at_1_std
value: 0.13094542859192052
- type: nauc_recall_at_20_diff1
value: 31.57334731023589
- type: nauc_recall_at_20_max
value: 54.06567225197383
- type: nauc_recall_at_20_std
value: 29.222029720570468
- type: nauc_recall_at_3_diff1
value: 36.45033533275773
- type: nauc_recall_at_3_max
value: 40.39529713780803
- type: nauc_recall_at_3_std
value: 5.21893897772794
- type: nauc_recall_at_5_diff1
value: 35.18471678478859
- type: nauc_recall_at_5_max
value: 46.20100816867823
- type: nauc_recall_at_5_std
value: 11.94481894633221
- type: ndcg_at_1
value: 37.949
- type: ndcg_at_10
value: 54.017
- type: ndcg_at_100
value: 58.126
- type: ndcg_at_1000
value: 59.073
- type: ndcg_at_20
value: 55.928
- type: ndcg_at_3
value: 47.494
- type: ndcg_at_5
value: 50.975
- type: precision_at_1
value: 37.949
- type: precision_at_10
value: 8.450000000000001
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.117
- type: precision_at_20
value: 4.689
- type: precision_at_3
value: 21.051000000000002
- type: precision_at_5
value: 14.664
- type: recall_at_1
value: 34.193
- type: recall_at_10
value: 71.357
- type: recall_at_100
value: 89.434
- type: recall_at_1000
value: 96.536
- type: recall_at_20
value: 78.363
- type: recall_at_3
value: 54.551
- type: recall_at_5
value: 62.543000000000006
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL (default)
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: main_score
value: 84.114
- type: map_at_1
value: 65.848
- type: map_at_10
value: 79.85900000000001
- type: map_at_100
value: 80.582
- type: map_at_1000
value: 80.60300000000001
- type: map_at_20
value: 80.321
- type: map_at_3
value: 76.741
- type: map_at_5
value: 78.72200000000001
- type: mrr_at_1
value: 75.97
- type: mrr_at_10
value: 83.04630158730119
- type: mrr_at_100
value: 83.22785731032968
- type: mrr_at_1000
value: 83.23123717623899
- type: mrr_at_20
value: 83.17412021320565
- type: mrr_at_3
value: 81.83333333333287
- type: mrr_at_5
value: 82.61933333333275
- type: nauc_map_at_1000_diff1
value: 73.26316553371083
- type: nauc_map_at_1000_max
value: 27.92567859085245
- type: nauc_map_at_1000_std
value: -47.477909533360446
- type: nauc_map_at_100_diff1
value: 73.2690602807223
- type: nauc_map_at_100_max
value: 27.915868327849996
- type: nauc_map_at_100_std
value: -47.525777766107595
- type: nauc_map_at_10_diff1
value: 73.45464428464894
- type: nauc_map_at_10_max
value: 27.451611487246296
- type: nauc_map_at_10_std
value: -49.35818715843809
- type: nauc_map_at_1_diff1
value: 77.29690208952982
- type: nauc_map_at_1_max
value: 19.839875762282293
- type: nauc_map_at_1_std
value: -45.355684654708284
- type: nauc_map_at_20_diff1
value: 73.35102731979796
- type: nauc_map_at_20_max
value: 27.741506490134583
- type: nauc_map_at_20_std
value: -48.22006207310331
- type: nauc_map_at_3_diff1
value: 73.94878241064137
- type: nauc_map_at_3_max
value: 24.761321386766728
- type: nauc_map_at_3_std
value: -51.20638883618126
- type: nauc_map_at_5_diff1
value: 73.66143558047698
- type: nauc_map_at_5_max
value: 26.53483405013543
- type: nauc_map_at_5_std
value: -50.697541279640056
- type: nauc_mrr_at_1000_diff1
value: 73.84632320009759
- type: nauc_mrr_at_1000_max
value: 30.50182733610048
- type: nauc_mrr_at_1000_std
value: -44.3021647995251
- type: nauc_mrr_at_100_diff1
value: 73.84480792662302
- type: nauc_mrr_at_100_max
value: 30.50749424571614
- type: nauc_mrr_at_100_std
value: -44.29615086388113
- type: nauc_mrr_at_10_diff1
value: 73.79442772949346
- type: nauc_mrr_at_10_max
value: 30.55724252219984
- type: nauc_mrr_at_10_std
value: -44.50997069462057
- type: nauc_mrr_at_1_diff1
value: 75.23369827945945
- type: nauc_mrr_at_1_max
value: 29.20073967447664
- type: nauc_mrr_at_1_std
value: -43.1920147658285
- type: nauc_mrr_at_20_diff1
value: 73.82731678072307
- type: nauc_mrr_at_20_max
value: 30.566328605497667
- type: nauc_mrr_at_20_std
value: -44.24683607643705
- type: nauc_mrr_at_3_diff1
value: 73.61997576749954
- type: nauc_mrr_at_3_max
value: 30.150393853381917
- type: nauc_mrr_at_3_std
value: -44.96847297506626
- type: nauc_mrr_at_5_diff1
value: 73.69084310616132
- type: nauc_mrr_at_5_max
value: 30.578033703441125
- type: nauc_mrr_at_5_std
value: -44.74920746066566
- type: nauc_ndcg_at_1000_diff1
value: 72.89349862557452
- type: nauc_ndcg_at_1000_max
value: 29.824725190462086
- type: nauc_ndcg_at_1000_std
value: -44.96284395063211
- type: nauc_ndcg_at_100_diff1
value: 72.85212753715273
- type: nauc_ndcg_at_100_max
value: 29.933114207845605
- type: nauc_ndcg_at_100_std
value: -44.944225570663754
- type: nauc_ndcg_at_10_diff1
value: 72.80576740454528
- type: nauc_ndcg_at_10_max
value: 29.16829118320828
- type: nauc_ndcg_at_10_std
value: -48.149473740079614
- type: nauc_ndcg_at_1_diff1
value: 75.00032534968587
- type: nauc_ndcg_at_1_max
value: 29.61849062038547
- type: nauc_ndcg_at_1_std
value: -42.560207043864054
- type: nauc_ndcg_at_20_diff1
value: 72.88440406302502
- type: nauc_ndcg_at_20_max
value: 29.65496676092656
- type: nauc_ndcg_at_20_std
value: -46.21238462167732
- type: nauc_ndcg_at_3_diff1
value: 72.37916962766987
- type: nauc_ndcg_at_3_max
value: 27.125094834547586
- type: nauc_ndcg_at_3_std
value: -48.62942991399391
- type: nauc_ndcg_at_5_diff1
value: 72.57017330527658
- type: nauc_ndcg_at_5_max
value: 28.470485561757254
- type: nauc_ndcg_at_5_std
value: -49.07593345591059
- type: nauc_precision_at_1000_diff1
value: -41.67915575853946
- type: nauc_precision_at_1000_max
value: 1.2012264478568844
- type: nauc_precision_at_1000_std
value: 44.723834559400466
- type: nauc_precision_at_100_diff1
value: -40.45196679236971
- type: nauc_precision_at_100_max
value: 2.3525450401714894
- type: nauc_precision_at_100_std
value: 43.7092529413952
- type: nauc_precision_at_10_diff1
value: -30.256026923068767
- type: nauc_precision_at_10_max
value: 8.313422052132559
- type: nauc_precision_at_10_std
value: 25.929372356449694
- type: nauc_precision_at_1_diff1
value: 75.00032534968587
- type: nauc_precision_at_1_max
value: 29.61849062038547
- type: nauc_precision_at_1_std
value: -42.560207043864054
- type: nauc_precision_at_20_diff1
value: -35.61971069986584
- type: nauc_precision_at_20_max
value: 5.4664303079116765
- type: nauc_precision_at_20_std
value: 34.992352471692826
- type: nauc_precision_at_3_diff1
value: -5.691231842471157
- type: nauc_precision_at_3_max
value: 14.797949087742444
- type: nauc_precision_at_3_std
value: -0.1930317395644928
- type: nauc_precision_at_5_diff1
value: -20.03913781462645
- type: nauc_precision_at_5_max
value: 11.956771408712749
- type: nauc_precision_at_5_std
value: 13.179251389859731
- type: nauc_recall_at_1000_diff1
value: 64.03509042729674
- type: nauc_recall_at_1000_max
value: 40.91691485428493
- type: nauc_recall_at_1000_std
value: 16.12968625875372
- type: nauc_recall_at_100_diff1
value: 63.83116179628575
- type: nauc_recall_at_100_max
value: 43.72908117676382
- type: nauc_recall_at_100_std
value: -20.50966716852155
- type: nauc_recall_at_10_diff1
value: 66.42071960186394
- type: nauc_recall_at_10_max
value: 28.983207818687205
- type: nauc_recall_at_10_std
value: -56.61417798753744
- type: nauc_recall_at_1_diff1
value: 77.29690208952982
- type: nauc_recall_at_1_max
value: 19.839875762282293
- type: nauc_recall_at_1_std
value: -45.355684654708284
- type: nauc_recall_at_20_diff1
value: 66.32360705219874
- type: nauc_recall_at_20_max
value: 33.30698111822631
- type: nauc_recall_at_20_std
value: -43.89233781737452
- type: nauc_recall_at_3_diff1
value: 69.67029394927077
- type: nauc_recall_at_3_max
value: 22.67803039327696
- type: nauc_recall_at_3_std
value: -56.43327209861502
- type: nauc_recall_at_5_diff1
value: 68.05622143936131
- type: nauc_recall_at_5_max
value: 26.67795559040675
- type: nauc_recall_at_5_std
value: -58.158231198510954
- type: ndcg_at_1
value: 76.08
- type: ndcg_at_10
value: 84.114
- type: ndcg_at_100
value: 85.784
- type: ndcg_at_1000
value: 85.992
- type: ndcg_at_20
value: 84.976
- type: ndcg_at_3
value: 80.74799999999999
- type: ndcg_at_5
value: 82.626
- type: precision_at_1
value: 76.08
- type: precision_at_10
value: 12.926000000000002
- type: precision_at_100
value: 1.509
- type: precision_at_1000
value: 0.156
- type: precision_at_20
value: 6.912999999999999
- type: precision_at_3
value: 35.5
- type: precision_at_5
value: 23.541999999999998
- type: recall_at_1
value: 65.848
- type: recall_at_10
value: 92.611
- type: recall_at_100
value: 98.69
- type: recall_at_1000
value: 99.83999999999999
- type: recall_at_20
value: 95.47200000000001
- type: recall_at_3
value: 83.122
- type: recall_at_5
value: 88.23
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL (default)
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: main_score
value: 15.379999999999999
- type: map_at_1
value: 3.6029999999999998
- type: map_at_10
value: 8.843
- type: map_at_100
value: 10.433
- type: map_at_1000
value: 10.689
- type: map_at_20
value: 9.597
- type: map_at_3
value: 6.363
- type: map_at_5
value: 7.603
- type: mrr_at_1
value: 17.7
- type: mrr_at_10
value: 26.58900793650793
- type: mrr_at_100
value: 27.699652322890987
- type: mrr_at_1000
value: 27.78065313118353
- type: mrr_at_20
value: 27.215020950411816
- type: mrr_at_3
value: 23.36666666666668
- type: mrr_at_5
value: 25.211666666666666
- type: nauc_map_at_1000_diff1
value: 21.92235143827129
- type: nauc_map_at_1000_max
value: 37.50300940750989
- type: nauc_map_at_1000_std
value: 20.872586122198552
- type: nauc_map_at_100_diff1
value: 21.917408170465833
- type: nauc_map_at_100_max
value: 37.4654466815513
- type: nauc_map_at_100_std
value: 20.621643878648534
- type: nauc_map_at_10_diff1
value: 22.914388723621183
- type: nauc_map_at_10_max
value: 36.468131213468794
- type: nauc_map_at_10_std
value: 16.760980140791492
- type: nauc_map_at_1_diff1
value: 29.00799502838457
- type: nauc_map_at_1_max
value: 26.64926291797503
- type: nauc_map_at_1_std
value: 8.167291261637361
- type: nauc_map_at_20_diff1
value: 22.46580947804047
- type: nauc_map_at_20_max
value: 36.656294842562275
- type: nauc_map_at_20_std
value: 18.099232417722078
- type: nauc_map_at_3_diff1
value: 23.436009032045934
- type: nauc_map_at_3_max
value: 31.325807212280914
- type: nauc_map_at_3_std
value: 9.780905232048852
- type: nauc_map_at_5_diff1
value: 22.891704394665528
- type: nauc_map_at_5_max
value: 35.40584466642894
- type: nauc_map_at_5_std
value: 13.476986099394656
- type: nauc_mrr_at_1000_diff1
value: 25.052937655397866
- type: nauc_mrr_at_1000_max
value: 29.64431912670108
- type: nauc_mrr_at_1000_std
value: 14.549744963988044
- type: nauc_mrr_at_100_diff1
value: 25.070871266969224
- type: nauc_mrr_at_100_max
value: 29.68743604652336
- type: nauc_mrr_at_100_std
value: 14.582010154574432
- type: nauc_mrr_at_10_diff1
value: 24.88881466938897
- type: nauc_mrr_at_10_max
value: 29.488430770768144
- type: nauc_mrr_at_10_std
value: 14.269241073852266
- type: nauc_mrr_at_1_diff1
value: 29.220540327267503
- type: nauc_mrr_at_1_max
value: 26.81908580507911
- type: nauc_mrr_at_1_std
value: 8.00840295809718
- type: nauc_mrr_at_20_diff1
value: 25.067912695721944
- type: nauc_mrr_at_20_max
value: 29.759227563849628
- type: nauc_mrr_at_20_std
value: 14.685076859257357
- type: nauc_mrr_at_3_diff1
value: 24.645848739182696
- type: nauc_mrr_at_3_max
value: 27.73368549660351
- type: nauc_mrr_at_3_std
value: 11.475742805586943
- type: nauc_mrr_at_5_diff1
value: 24.895295760909946
- type: nauc_mrr_at_5_max
value: 29.130755033240423
- type: nauc_mrr_at_5_std
value: 12.955802929145404
- type: nauc_ndcg_at_1000_diff1
value: 20.68434434777729
- type: nauc_ndcg_at_1000_max
value: 37.67055146424174
- type: nauc_ndcg_at_1000_std
value: 29.57493715069776
- type: nauc_ndcg_at_100_diff1
value: 20.396834816492383
- type: nauc_ndcg_at_100_max
value: 37.460575228670514
- type: nauc_ndcg_at_100_std
value: 27.826534756761944
- type: nauc_ndcg_at_10_diff1
value: 22.640844106236027
- type: nauc_ndcg_at_10_max
value: 35.21291764462327
- type: nauc_ndcg_at_10_std
value: 19.53289455984506
- type: nauc_ndcg_at_1_diff1
value: 29.220540327267503
- type: nauc_ndcg_at_1_max
value: 26.81908580507911
- type: nauc_ndcg_at_1_std
value: 8.00840295809718
- type: nauc_ndcg_at_20_diff1
value: 22.117126657768623
- type: nauc_ndcg_at_20_max
value: 35.79395781940806
- type: nauc_ndcg_at_20_std
value: 22.242748346260786
- type: nauc_ndcg_at_3_diff1
value: 23.00596063212187
- type: nauc_ndcg_at_3_max
value: 30.149013627580523
- type: nauc_ndcg_at_3_std
value: 11.07904064662722
- type: nauc_ndcg_at_5_diff1
value: 22.81875419630523
- type: nauc_ndcg_at_5_max
value: 34.24267468356626
- type: nauc_ndcg_at_5_std
value: 15.307780280752088
- type: nauc_precision_at_1000_diff1
value: 9.606677689029972
- type: nauc_precision_at_1000_max
value: 32.74855550489271
- type: nauc_precision_at_1000_std
value: 42.65372585937895
- type: nauc_precision_at_100_diff1
value: 11.528981313529545
- type: nauc_precision_at_100_max
value: 35.642529490132404
- type: nauc_precision_at_100_std
value: 38.146151426052306
- type: nauc_precision_at_10_diff1
value: 18.783957183811836
- type: nauc_precision_at_10_max
value: 36.1982008334257
- type: nauc_precision_at_10_std
value: 25.09349473195891
- type: nauc_precision_at_1_diff1
value: 29.220540327267503
- type: nauc_precision_at_1_max
value: 26.81908580507911
- type: nauc_precision_at_1_std
value: 8.00840295809718
- type: nauc_precision_at_20_diff1
value: 17.458766320828214
- type: nauc_precision_at_20_max
value: 36.000404903025235
- type: nauc_precision_at_20_std
value: 29.1608044138323
- type: nauc_precision_at_3_diff1
value: 20.213669462067166
- type: nauc_precision_at_3_max
value: 31.120650847205912
- type: nauc_precision_at_3_std
value: 12.390972418818118
- type: nauc_precision_at_5_diff1
value: 20.114245715785678
- type: nauc_precision_at_5_max
value: 37.30360111495823
- type: nauc_precision_at_5_std
value: 19.053109037822853
- type: nauc_recall_at_1000_diff1
value: 9.85800049032612
- type: nauc_recall_at_1000_max
value: 32.48319160802687
- type: nauc_recall_at_1000_std
value: 43.79941601741161
- type: nauc_recall_at_100_diff1
value: 11.375255270968337
- type: nauc_recall_at_100_max
value: 35.1868784124497
- type: nauc_recall_at_100_std
value: 38.422680583482666
- type: nauc_recall_at_10_diff1
value: 18.445783123521938
- type: nauc_recall_at_10_max
value: 35.633267936276766
- type: nauc_recall_at_10_std
value: 24.94469506254716
- type: nauc_recall_at_1_diff1
value: 29.00799502838457
- type: nauc_recall_at_1_max
value: 26.64926291797503
- type: nauc_recall_at_1_std
value: 8.167291261637361
- type: nauc_recall_at_20_diff1
value: 17.314906604151936
- type: nauc_recall_at_20_max
value: 35.66067699203996
- type: nauc_recall_at_20_std
value: 29.400137012506082
- type: nauc_recall_at_3_diff1
value: 19.873710875648698
- type: nauc_recall_at_3_max
value: 30.92404718742849
- type: nauc_recall_at_3_std
value: 12.400871018075199
- type: nauc_recall_at_5_diff1
value: 19.869948324233192
- type: nauc_recall_at_5_max
value: 37.06832511687574
- type: nauc_recall_at_5_std
value: 19.0798814966156
- type: ndcg_at_1
value: 17.7
- type: ndcg_at_10
value: 15.379999999999999
- type: ndcg_at_100
value: 22.09
- type: ndcg_at_1000
value: 27.151999999999997
- type: ndcg_at_20
value: 17.576
- type: ndcg_at_3
value: 14.219999999999999
- type: ndcg_at_5
value: 12.579
- type: precision_at_1
value: 17.7
- type: precision_at_10
value: 8.08
- type: precision_at_100
value: 1.7840000000000003
- type: precision_at_1000
value: 0.3
- type: precision_at_20
value: 5.305
- type: precision_at_3
value: 13.167000000000002
- type: precision_at_5
value: 11.06
- type: recall_at_1
value: 3.6029999999999998
- type: recall_at_10
value: 16.413
- type: recall_at_100
value: 36.263
- type: recall_at_1000
value: 61.016999999999996
- type: recall_at_20
value: 21.587999999999997
- type: recall_at_3
value: 8.013
- type: recall_at_5
value: 11.198
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL (default)
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: main_score
value: 64.764
- type: map_at_1
value: 49.778
- type: map_at_10
value: 59.88
- type: map_at_100
value: 60.707
- type: map_at_1000
value: 60.729
- type: map_at_20
value: 60.419999999999995
- type: map_at_3
value: 57.45400000000001
- type: map_at_5
value: 58.729
- type: mrr_at_1
value: 52.33333333333333
- type: mrr_at_10
value: 61.29193121693122
- type: mrr_at_100
value: 61.95817765126313
- type: mrr_at_1000
value: 61.97583284368782
- type: mrr_at_20
value: 61.72469949641003
- type: mrr_at_3
value: 59.44444444444444
- type: mrr_at_5
value: 60.494444444444454
- type: nauc_map_at_1000_diff1
value: 62.21235294015774
- type: nauc_map_at_1000_max
value: 48.83996609100249
- type: nauc_map_at_1000_std
value: 5.23892781043174
- type: nauc_map_at_100_diff1
value: 62.20170226789429
- type: nauc_map_at_100_max
value: 48.8391766453537
- type: nauc_map_at_100_std
value: 5.2664077457917715
- type: nauc_map_at_10_diff1
value: 61.961975488329024
- type: nauc_map_at_10_max
value: 48.397109987625186
- type: nauc_map_at_10_std
value: 4.314859710827481
- type: nauc_map_at_1_diff1
value: 65.0865197011516
- type: nauc_map_at_1_max
value: 41.38862781954889
- type: nauc_map_at_1_std
value: -0.9182122632530586
- type: nauc_map_at_20_diff1
value: 61.99173935851292
- type: nauc_map_at_20_max
value: 48.79961814179307
- type: nauc_map_at_20_std
value: 5.262181845825118
- type: nauc_map_at_3_diff1
value: 62.37910539880477
- type: nauc_map_at_3_max
value: 47.13627890977091
- type: nauc_map_at_3_std
value: 2.327897198087264
- type: nauc_map_at_5_diff1
value: 61.60080757149592
- type: nauc_map_at_5_max
value: 47.60052458345962
- type: nauc_map_at_5_std
value: 3.1770196981231047
- type: nauc_mrr_at_1000_diff1
value: 62.86810952814966
- type: nauc_mrr_at_1000_max
value: 52.13248094447774
- type: nauc_mrr_at_1000_std
value: 10.100485746570733
- type: nauc_mrr_at_100_diff1
value: 62.85364829491874
- type: nauc_mrr_at_100_max
value: 52.134528010631854
- type: nauc_mrr_at_100_std
value: 10.120945685447369
- type: nauc_mrr_at_10_diff1
value: 62.65679301829915
- type: nauc_mrr_at_10_max
value: 52.09270719182349
- type: nauc_mrr_at_10_std
value: 9.913834434725441
- type: nauc_mrr_at_1_diff1
value: 66.84108271415636
- type: nauc_mrr_at_1_max
value: 46.67646429855176
- type: nauc_mrr_at_1_std
value: 5.5505252956352304
- type: nauc_mrr_at_20_diff1
value: 62.72473227039611
- type: nauc_mrr_at_20_max
value: 52.13479097802757
- type: nauc_mrr_at_20_std
value: 10.188278833464084
- type: nauc_mrr_at_3_diff1
value: 63.797429185518496
- type: nauc_mrr_at_3_max
value: 52.16486999573481
- type: nauc_mrr_at_3_std
value: 9.094360767062762
- type: nauc_mrr_at_5_diff1
value: 62.592917975475494
- type: nauc_mrr_at_5_max
value: 52.330741486107414
- type: nauc_mrr_at_5_std
value: 9.742175534421389
- type: nauc_ndcg_at_1000_diff1
value: 61.38859337672476
- type: nauc_ndcg_at_1000_max
value: 51.48380058339184
- type: nauc_ndcg_at_1000_std
value: 9.670547660897673
- type: nauc_ndcg_at_100_diff1
value: 61.02438489641434
- type: nauc_ndcg_at_100_max
value: 51.781246646780865
- type: nauc_ndcg_at_100_std
value: 10.592961553245187
- type: nauc_ndcg_at_10_diff1
value: 60.03678353308358
- type: nauc_ndcg_at_10_max
value: 50.70725688848762
- type: nauc_ndcg_at_10_std
value: 7.9472446491016315
- type: nauc_ndcg_at_1_diff1
value: 66.84108271415636
- type: nauc_ndcg_at_1_max
value: 46.67646429855176
- type: nauc_ndcg_at_1_std
value: 5.5505252956352304
- type: nauc_ndcg_at_20_diff1
value: 59.828482718480224
- type: nauc_ndcg_at_20_max
value: 51.45831789601284
- type: nauc_ndcg_at_20_std
value: 10.722673683272049
- type: nauc_ndcg_at_3_diff1
value: 61.68982937524109
- type: nauc_ndcg_at_3_max
value: 49.745326748604775
- type: nauc_ndcg_at_3_std
value: 4.948298621202247
- type: nauc_ndcg_at_5_diff1
value: 59.67396171973207
- type: nauc_ndcg_at_5_max
value: 49.87855139298281
- type: nauc_ndcg_at_5_std
value: 6.08990428055584
- type: nauc_precision_at_1000_diff1
value: -1.594227972036865
- type: nauc_precision_at_1000_max
value: 32.48431723086185
- type: nauc_precision_at_1000_std
value: 53.84748466965268
- type: nauc_precision_at_100_diff1
value: 8.06411455192293
- type: nauc_precision_at_100_max
value: 39.91003601878948
- type: nauc_precision_at_100_std
value: 55.52979711075091
- type: nauc_precision_at_10_diff1
value: 26.610514456014066
- type: nauc_precision_at_10_max
value: 47.09062494321172
- type: nauc_precision_at_10_std
value: 33.91984226498748
- type: nauc_precision_at_1_diff1
value: 66.84108271415636
- type: nauc_precision_at_1_max
value: 46.67646429855176
- type: nauc_precision_at_1_std
value: 5.5505252956352304
- type: nauc_precision_at_20_diff1
value: 16.947688843085583
- type: nauc_precision_at_20_max
value: 45.40488186572008
- type: nauc_precision_at_20_std
value: 48.354421924500905
- type: nauc_precision_at_3_diff1
value: 49.11263981720622
- type: nauc_precision_at_3_max
value: 52.7084625111683
- type: nauc_precision_at_3_std
value: 16.734612173556453
- type: nauc_precision_at_5_diff1
value: 39.06503705015792
- type: nauc_precision_at_5_max
value: 52.21710506893391
- type: nauc_precision_at_5_std
value: 23.350948149460233
- type: nauc_recall_at_1000_diff1
value: 43.1559290382817
- type: nauc_recall_at_1000_max
value: 83.66013071895456
- type: nauc_recall_at_1000_std
value: 86.27450980392177
- type: nauc_recall_at_100_diff1
value: 46.016860850620375
- type: nauc_recall_at_100_max
value: 69.3944888744547
- type: nauc_recall_at_100_std
value: 55.286945696152735
- type: nauc_recall_at_10_diff1
value: 49.65877895350921
- type: nauc_recall_at_10_max
value: 53.02636695700889
- type: nauc_recall_at_10_std
value: 13.967608945823828
- type: nauc_recall_at_1_diff1
value: 65.0865197011516
- type: nauc_recall_at_1_max
value: 41.38862781954889
- type: nauc_recall_at_1_std
value: -0.9182122632530586
- type: nauc_recall_at_20_diff1
value: 43.355308229973524
- type: nauc_recall_at_20_max
value: 57.04187909533764
- type: nauc_recall_at_20_std
value: 33.578720846660524
- type: nauc_recall_at_3_diff1
value: 56.922996057428165
- type: nauc_recall_at_3_max
value: 50.74417041895424
- type: nauc_recall_at_3_std
value: 5.623890124328387
- type: nauc_recall_at_5_diff1
value: 50.55620076865238
- type: nauc_recall_at_5_max
value: 51.3316854622085
- type: nauc_recall_at_5_std
value: 8.995457887269255
- type: ndcg_at_1
value: 52.333
- type: ndcg_at_10
value: 64.764
- type: ndcg_at_100
value: 68.167
- type: ndcg_at_1000
value: 68.816
- type: ndcg_at_20
value: 66.457
- type: ndcg_at_3
value: 60.346
- type: ndcg_at_5
value: 62.365
- type: precision_at_1
value: 52.333
- type: precision_at_10
value: 8.799999999999999
- type: precision_at_100
value: 1.057
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_20
value: 4.8
- type: precision_at_3
value: 23.889
- type: precision_at_5
value: 15.6
- type: recall_at_1
value: 49.778
- type: recall_at_10
value: 78.206
- type: recall_at_100
value: 93.10000000000001
- type: recall_at_1000
value: 98.333
- type: recall_at_20
value: 84.467
- type: recall_at_3
value: 66.367
- type: recall_at_5
value: 71.35000000000001
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL (default)
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: main_score
value: 72.18900000000001
- type: map_at_1
value: 0.214
- type: map_at_10
value: 1.755
- type: map_at_100
value: 9.944
- type: map_at_1000
value: 24.205
- type: map_at_20
value: 3.1510000000000002
- type: map_at_3
value: 0.6
- type: map_at_5
value: 0.9560000000000001
- type: mrr_at_1
value: 82.0
- type: mrr_at_10
value: 89.06666666666666
- type: mrr_at_100
value: 89.06666666666666
- type: mrr_at_1000
value: 89.06666666666666
- type: mrr_at_20
value: 89.06666666666666
- type: mrr_at_3
value: 87.66666666666666
- type: mrr_at_5
value: 89.06666666666666
- type: nauc_map_at_1000_diff1
value: -9.342037623635543
- type: nauc_map_at_1000_max
value: 45.71499810252398
- type: nauc_map_at_1000_std
value: 76.86482845196852
- type: nauc_map_at_100_diff1
value: -6.932395299866198
- type: nauc_map_at_100_max
value: 36.097801891181604
- type: nauc_map_at_100_std
value: 65.6085215411685
- type: nauc_map_at_10_diff1
value: -6.3654843824342775
- type: nauc_map_at_10_max
value: 9.564437521432714
- type: nauc_map_at_10_std
value: 21.8377319336476
- type: nauc_map_at_1_diff1
value: 8.269590874255034
- type: nauc_map_at_1_max
value: 3.482498491294516
- type: nauc_map_at_1_std
value: 8.985226819412189
- type: nauc_map_at_20_diff1
value: -4.971435767877232
- type: nauc_map_at_20_max
value: 22.88801858567121
- type: nauc_map_at_20_std
value: 32.38492618534027
- type: nauc_map_at_3_diff1
value: 1.1615973694623123
- type: nauc_map_at_3_max
value: 1.935417800315643
- type: nauc_map_at_3_std
value: 10.289328305818698
- type: nauc_map_at_5_diff1
value: -2.4675967231444105
- type: nauc_map_at_5_max
value: 2.4611483736622373
- type: nauc_map_at_5_std
value: 15.082324305750811
- type: nauc_mrr_at_1000_diff1
value: 13.098526703499063
- type: nauc_mrr_at_1000_max
value: 56.37362177417431
- type: nauc_mrr_at_1000_std
value: 73.2456769749587
- type: nauc_mrr_at_100_diff1
value: 13.098526703499063
- type: nauc_mrr_at_100_max
value: 56.37362177417431
- type: nauc_mrr_at_100_std
value: 73.2456769749587
- type: nauc_mrr_at_10_diff1
value: 13.098526703499063
- type: nauc_mrr_at_10_max
value: 56.37362177417431
- type: nauc_mrr_at_10_std
value: 73.2456769749587
- type: nauc_mrr_at_1_diff1
value: 12.099350148694809
- type: nauc_mrr_at_1_max
value: 53.75041304108387
- type: nauc_mrr_at_1_std
value: 68.84018063663402
- type: nauc_mrr_at_20_diff1
value: 13.098526703499063
- type: nauc_mrr_at_20_max
value: 56.37362177417431
- type: nauc_mrr_at_20_std
value: 73.2456769749587
- type: nauc_mrr_at_3_diff1
value: 12.173557857011161
- type: nauc_mrr_at_3_max
value: 57.540780562363395
- type: nauc_mrr_at_3_std
value: 75.42098189580211
- type: nauc_mrr_at_5_diff1
value: 13.098526703499063
- type: nauc_mrr_at_5_max
value: 56.37362177417431
- type: nauc_mrr_at_5_std
value: 73.2456769749587
- type: nauc_ndcg_at_1000_diff1
value: -8.951471847310401
- type: nauc_ndcg_at_1000_max
value: 43.86942237288822
- type: nauc_ndcg_at_1000_std
value: 74.61077735148591
- type: nauc_ndcg_at_100_diff1
value: -17.754559361083817
- type: nauc_ndcg_at_100_max
value: 53.97187119773482
- type: nauc_ndcg_at_100_std
value: 80.7944136146514
- type: nauc_ndcg_at_10_diff1
value: -26.637734697836414
- type: nauc_ndcg_at_10_max
value: 47.70102699133149
- type: nauc_ndcg_at_10_std
value: 70.26909560828646
- type: nauc_ndcg_at_1_diff1
value: -1.2250530785563207
- type: nauc_ndcg_at_1_max
value: 46.60509554140131
- type: nauc_ndcg_at_1_std
value: 62.63906581740976
- type: nauc_ndcg_at_20_diff1
value: -22.44286466550908
- type: nauc_ndcg_at_20_max
value: 55.40492058090103
- type: nauc_ndcg_at_20_std
value: 72.11813912145738
- type: nauc_ndcg_at_3_diff1
value: -14.8152721896563
- type: nauc_ndcg_at_3_max
value: 38.952259383027595
- type: nauc_ndcg_at_3_std
value: 59.819750166537766
- type: nauc_ndcg_at_5_diff1
value: -19.150105688904375
- type: nauc_ndcg_at_5_max
value: 42.311180547775315
- type: nauc_ndcg_at_5_std
value: 66.6632229321094
- type: nauc_precision_at_1000_diff1
value: -11.555591477978941
- type: nauc_precision_at_1000_max
value: 43.7311644834851
- type: nauc_precision_at_1000_std
value: 52.10644767999648
- type: nauc_precision_at_100_diff1
value: -16.94803099801117
- type: nauc_precision_at_100_max
value: 54.08281631067633
- type: nauc_precision_at_100_std
value: 82.77237347891331
- type: nauc_precision_at_10_diff1
value: -27.351332814863355
- type: nauc_precision_at_10_max
value: 48.08237549065846
- type: nauc_precision_at_10_std
value: 69.37250843534329
- type: nauc_precision_at_1_diff1
value: 12.099350148694809
- type: nauc_precision_at_1_max
value: 53.75041304108387
- type: nauc_precision_at_1_std
value: 68.84018063663402
- type: nauc_precision_at_20_diff1
value: -18.2422222283388
- type: nauc_precision_at_20_max
value: 59.517328129343696
- type: nauc_precision_at_20_std
value: 72.05149307342747
- type: nauc_precision_at_3_diff1
value: -10.226547543075897
- type: nauc_precision_at_3_max
value: 43.14684818832875
- type: nauc_precision_at_3_std
value: 57.31936467418288
- type: nauc_precision_at_5_diff1
value: -14.28521589468673
- type: nauc_precision_at_5_max
value: 41.633426753962596
- type: nauc_precision_at_5_std
value: 64.94400576804541
- type: nauc_recall_at_1000_diff1
value: -0.9648831207497152
- type: nauc_recall_at_1000_max
value: 31.70832946085005
- type: nauc_recall_at_1000_std
value: 63.21471613968869
- type: nauc_recall_at_100_diff1
value: -1.360254380933586
- type: nauc_recall_at_100_max
value: 25.960597782099605
- type: nauc_recall_at_100_std
value: 51.52757589609674
- type: nauc_recall_at_10_diff1
value: -0.3899439424189566
- type: nauc_recall_at_10_max
value: 5.094341897886072
- type: nauc_recall_at_10_std
value: 11.266045616925698
- type: nauc_recall_at_1_diff1
value: 8.269590874255034
- type: nauc_recall_at_1_max
value: 3.482498491294516
- type: nauc_recall_at_1_std
value: 8.985226819412189
- type: nauc_recall_at_20_diff1
value: 6.4797098359254175
- type: nauc_recall_at_20_max
value: 15.663700985336124
- type: nauc_recall_at_20_std
value: 17.154099587904913
- type: nauc_recall_at_3_diff1
value: 3.7245972450393507
- type: nauc_recall_at_3_max
value: 0.4063857187240345
- type: nauc_recall_at_3_std
value: 6.641948062821941
- type: nauc_recall_at_5_diff1
value: 4.013879477591466
- type: nauc_recall_at_5_max
value: -1.4266586618013566
- type: nauc_recall_at_5_std
value: 7.311601874411205
- type: ndcg_at_1
value: 75.0
- type: ndcg_at_10
value: 72.18900000000001
- type: ndcg_at_100
value: 54.022999999999996
- type: ndcg_at_1000
value: 49.492000000000004
- type: ndcg_at_20
value: 68.51
- type: ndcg_at_3
value: 73.184
- type: ndcg_at_5
value: 72.811
- type: precision_at_1
value: 82.0
- type: precision_at_10
value: 77.4
- type: precision_at_100
value: 55.24
- type: precision_at_1000
value: 21.822
- type: precision_at_20
value: 73.0
- type: precision_at_3
value: 79.333
- type: precision_at_5
value: 79.2
- type: recall_at_1
value: 0.214
- type: recall_at_10
value: 1.9980000000000002
- type: recall_at_100
value: 13.328999999999999
- type: recall_at_1000
value: 47.204
- type: recall_at_20
value: 3.7310000000000003
- type: recall_at_3
value: 0.628
- type: recall_at_5
value: 1.049
- task:
type: MultilabelClassification
dataset:
name: MTEB CEDRClassification (default)
type: ai-forever/cedr-classification
config: default
split: test
revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4
metrics:
- type: accuracy
value: 47.30605738575983
- type: f1
value: 41.26091043925065
- type: lrap
value: 72.89452709883206
- type: main_score
value: 47.30605738575983
- task:
type: Reranking
dataset:
name: MTEB MIRACLReranking (ru)
type: miracl/mmteb-miracl-reranking
config: ru
split: dev
revision: 6d1962c527217f8927fca80f890f14f36b2802af
metrics:
- type: MAP@1(MIRACL)
value: 20.721999999999998
- type: MAP@10(MIRACL)
value: 33.900999999999996
- type: MAP@100(MIRACL)
value: 36.813
- type: MAP@1000(MIRACL)
value: 36.813
- type: MAP@20(MIRACL)
value: 35.684
- type: MAP@3(MIRACL)
value: 28.141
- type: MAP@5(MIRACL)
value: 31.075000000000003
- type: NDCG@1(MIRACL)
value: 32.799
- type: NDCG@10(MIRACL)
value: 42.065000000000005
- type: NDCG@100(MIRACL)
value: 49.730999999999995
- type: NDCG@1000(MIRACL)
value: 49.730999999999995
- type: NDCG@20(MIRACL)
value: 46.0
- type: NDCG@3(MIRACL)
value: 34.481
- type: NDCG@5(MIRACL)
value: 37.452999999999996
- type: P@1(MIRACL)
value: 32.799
- type: P@10(MIRACL)
value: 11.668000000000001
- type: P@100(MIRACL)
value: 1.9529999999999998
- type: P@1000(MIRACL)
value: 0.19499999999999998
- type: P@20(MIRACL)
value: 7.51
- type: P@3(MIRACL)
value: 20.823
- type: P@5(MIRACL)
value: 16.728
- type: Recall@1(MIRACL)
value: 20.721999999999998
- type: Recall@10(MIRACL)
value: 54.762
- type: Recall@100(MIRACL)
value: 79.952
- type: Recall@1000(MIRACL)
value: 79.952
- type: Recall@20(MIRACL)
value: 66.26100000000001
- type: Recall@3(MIRACL)
value: 34.410000000000004
- type: Recall@5(MIRACL)
value: 42.659000000000006
- type: main_score
value: 42.065000000000005
- type: nAUC_MAP@1000_diff1(MIRACL)
value: 14.33534992502818
- type: nAUC_MAP@1000_max(MIRACL)
value: 12.367998764646115
- type: nAUC_MAP@1000_std(MIRACL)
value: 4.569686002935006
- type: nAUC_MAP@100_diff1(MIRACL)
value: 14.33534992502818
- type: nAUC_MAP@100_max(MIRACL)
value: 12.367998764646115
- type: nAUC_MAP@100_std(MIRACL)
value: 4.569686002935006
- type: nAUC_MAP@10_diff1(MIRACL)
value: 16.920323975680027
- type: nAUC_MAP@10_max(MIRACL)
value: 9.327171297204082
- type: nAUC_MAP@10_std(MIRACL)
value: 3.2039133783079015
- type: nAUC_MAP@1_diff1(MIRACL)
value: 28.698973487482206
- type: nAUC_MAP@1_max(MIRACL)
value: 2.9217687660885034
- type: nAUC_MAP@1_std(MIRACL)
value: -1.1247408800976524
- type: nAUC_MAP@20_diff1(MIRACL)
value: 15.359083081640476
- type: nAUC_MAP@20_max(MIRACL)
value: 11.310494233946345
- type: nAUC_MAP@20_std(MIRACL)
value: 4.4171898386022885
- type: nAUC_MAP@3_diff1(MIRACL)
value: 22.27430591851617
- type: nAUC_MAP@3_max(MIRACL)
value: 6.407438291284658
- type: nAUC_MAP@3_std(MIRACL)
value: 0.9799184530397409
- type: nAUC_MAP@5_diff1(MIRACL)
value: 19.20571689941054
- type: nAUC_MAP@5_max(MIRACL)
value: 7.987468654026893
- type: nAUC_MAP@5_std(MIRACL)
value: 1.8324246565938962
- type: nAUC_NDCG@1000_diff1(MIRACL)
value: 3.7537669018914768
- type: nAUC_NDCG@1000_max(MIRACL)
value: 20.7944707840533
- type: nAUC_NDCG@1000_std(MIRACL)
value: 8.444837055303063
- type: nAUC_NDCG@100_diff1(MIRACL)
value: 3.7537669018914768
- type: nAUC_NDCG@100_max(MIRACL)
value: 20.7944707840533
- type: nAUC_NDCG@100_std(MIRACL)
value: 8.444837055303063
- type: nAUC_NDCG@10_diff1(MIRACL)
value: 10.829575656103888
- type: nAUC_NDCG@10_max(MIRACL)
value: 13.0445496498929
- type: nAUC_NDCG@10_std(MIRACL)
value: 6.050412212625362
- type: nAUC_NDCG@1_diff1(MIRACL)
value: 19.1388712233292
- type: nAUC_NDCG@1_max(MIRACL)
value: 10.871900994781642
- type: nAUC_NDCG@1_std(MIRACL)
value: 3.218568248751811
- type: nAUC_NDCG@20_diff1(MIRACL)
value: 7.093172181746442
- type: nAUC_NDCG@20_max(MIRACL)
value: 16.955238078958836
- type: nAUC_NDCG@20_std(MIRACL)
value: 8.325656379573035
- type: nAUC_NDCG@3_diff1(MIRACL)
value: 17.134437303330802
- type: nAUC_NDCG@3_max(MIRACL)
value: 10.235328822955793
- type: nAUC_NDCG@3_std(MIRACL)
value: 3.2341358691084814
- type: nAUC_NDCG@5_diff1(MIRACL)
value: 14.733664618337636
- type: nAUC_NDCG@5_max(MIRACL)
value: 11.181897412035282
- type: nAUC_NDCG@5_std(MIRACL)
value: 3.642277088791985
- type: nAUC_P@1000_diff1(MIRACL)
value: -26.330038284867573
- type: nAUC_P@1000_max(MIRACL)
value: 28.450694137240458
- type: nAUC_P@1000_std(MIRACL)
value: 9.892993775474912
- type: nAUC_P@100_diff1(MIRACL)
value: -26.330038284867552
- type: nAUC_P@100_max(MIRACL)
value: 28.45069413724051
- type: nAUC_P@100_std(MIRACL)
value: 9.892993775474928
- type: nAUC_P@10_diff1(MIRACL)
value: -17.436937353231112
- type: nAUC_P@10_max(MIRACL)
value: 24.327018012947857
- type: nAUC_P@10_std(MIRACL)
value: 11.78803527706634
- type: nAUC_P@1_diff1(MIRACL)
value: 19.1388712233292
- type: nAUC_P@1_max(MIRACL)
value: 10.871900994781642
- type: nAUC_P@1_std(MIRACL)
value: 3.218568248751811
- type: nAUC_P@20_diff1(MIRACL)
value: -22.947528755272426
- type: nAUC_P@20_max(MIRACL)
value: 27.773093471902538
- type: nAUC_P@20_std(MIRACL)
value: 14.898619107087221
- type: nAUC_P@3_diff1(MIRACL)
value: 1.4100426412400944
- type: nAUC_P@3_max(MIRACL)
value: 17.397472872058845
- type: nAUC_P@3_std(MIRACL)
value: 8.240008229861875
- type: nAUC_P@5_diff1(MIRACL)
value: -7.971349332207021
- type: nAUC_P@5_max(MIRACL)
value: 22.198441167940963
- type: nAUC_P@5_std(MIRACL)
value: 9.00265164460082
- type: nAUC_Recall@1000_diff1(MIRACL)
value: -38.69835271863148
- type: nAUC_Recall@1000_max(MIRACL)
value: 50.9545152809108
- type: nAUC_Recall@1000_std(MIRACL)
value: 20.44270887092116
- type: nAUC_Recall@100_diff1(MIRACL)
value: -38.69835271863148
- type: nAUC_Recall@100_max(MIRACL)
value: 50.9545152809108
- type: nAUC_Recall@100_std(MIRACL)
value: 20.44270887092116
- type: nAUC_Recall@10_diff1(MIRACL)
value: -0.08109036309433801
- type: nAUC_Recall@10_max(MIRACL)
value: 12.696619907773568
- type: nAUC_Recall@10_std(MIRACL)
value: 8.791982704261589
- type: nAUC_Recall@1_diff1(MIRACL)
value: 28.698973487482206
- type: nAUC_Recall@1_max(MIRACL)
value: 2.9217687660885034
- type: nAUC_Recall@1_std(MIRACL)
value: -1.1247408800976524
- type: nAUC_Recall@20_diff1(MIRACL)
value: -13.312171017942623
- type: nAUC_Recall@20_max(MIRACL)
value: 24.19847346821666
- type: nAUC_Recall@20_std(MIRACL)
value: 15.8157702609797
- type: nAUC_Recall@3_diff1(MIRACL)
value: 16.909128321353343
- type: nAUC_Recall@3_max(MIRACL)
value: 6.552122731902991
- type: nAUC_Recall@3_std(MIRACL)
value: 1.9963898223457228
- type: nAUC_Recall@5_diff1(MIRACL)
value: 9.990292655247721
- type: nAUC_Recall@5_max(MIRACL)
value: 9.361722273507574
- type: nAUC_Recall@5_std(MIRACL)
value: 3.270918827854495
- task:
type: MultilabelClassification
dataset:
name: MTEB SensitiveTopicsClassification (default)
type: ai-forever/sensitive-topics-classification
config: default
split: test
revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2
metrics:
- type: accuracy
value: 30.634765625
- type: f1
value: 32.647559808678665
- type: lrap
value: 45.94319661458259
- type: main_score
value: 30.634765625
- task:
type: STS
dataset:
name: MTEB ATEC (default)
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cosine_pearson
value: 47.541497334563296
- type: cosine_spearman
value: 49.06268944206629
- type: euclidean_pearson
value: 51.838926748581635
- type: euclidean_spearman
value: 48.930697157135356
- type: main_score
value: 49.06268944206629
- type: manhattan_pearson
value: 51.835306769406365
- type: manhattan_spearman
value: 48.86135493444834
- type: pearson
value: 47.541497334563296
- type: spearman
value: 49.06268944206629
- task:
type: Classification
dataset:
name: MTEB AllegroReviews (default)
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
metrics:
- type: accuracy
value: 49.51292246520874
- type: f1
value: 44.14350234332397
- type: f1_weighted
value: 51.65508998354552
- type: main_score
value: 49.51292246520874
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P (default)
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: main_score
value: 63.883383458621665
- type: v_measure
value: 63.883383458621665
- type: v_measure_std
value: 2.693666879958465
- type: main_score
value: 46.85924588755251
- type: v_measure
value: 46.85924588755251
- type: v_measure_std
value: 2.1918258880872377
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 43.65721212452554
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking (default)
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: e40c8a63ce02da43200eccb5b0846fcaa888f562
metrics:
- type: map
value: 66.39013753839347
- type: mrr
value: 67.68045617786551
- type: main_score
value: 66.39013753839347
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval (default)
type: lyon-nlp/alloprof
config: default
split: test
revision: fcf295ea64c750f41fadbaa37b9b861558e1bfbd
metrics:
- type: main_score
value: 54.284
- type: map_at_1
value: 37.047000000000004
- type: map_at_10
value: 48.53
- type: map_at_100
value: 49.357
- type: map_at_1000
value: 49.39
- type: map_at_20
value: 49.064
- type: map_at_3
value: 45.675
- type: map_at_5
value: 47.441
- type: mrr_at_1
value: 37.04663212435233
- type: mrr_at_10
value: 48.5300326232969
- type: mrr_at_100
value: 49.35708199037581
- type: mrr_at_1000
value: 49.39005824603193
- type: mrr_at_20
value: 49.06417416464799
- type: mrr_at_3
value: 45.67501439263105
- type: mrr_at_5
value: 47.44099021301103
- type: nauc_map_at_1000_diff1
value: 43.32474221868009
- type: nauc_map_at_1000_max
value: 39.407334029058575
- type: nauc_map_at_1000_std
value: -2.3728154448932606
- type: nauc_map_at_100_diff1
value: 43.32336300929909
- type: nauc_map_at_100_max
value: 39.432174777554835
- type: nauc_map_at_100_std
value: -2.356396922384349
- type: nauc_map_at_10_diff1
value: 43.1606520154482
- type: nauc_map_at_10_max
value: 39.33734650558226
- type: nauc_map_at_10_std
value: -2.5156222475075256
- type: nauc_map_at_1_diff1
value: 46.2178975214499
- type: nauc_map_at_1_max
value: 36.26173199049361
- type: nauc_map_at_1_std
value: -3.0897555582816443
- type: nauc_map_at_20_diff1
value: 43.272980702916456
- type: nauc_map_at_20_max
value: 39.4896977052276
- type: nauc_map_at_20_std
value: -2.3305501742917043
- type: nauc_map_at_3_diff1
value: 43.49525042967079
- type: nauc_map_at_3_max
value: 38.66352501824728
- type: nauc_map_at_3_std
value: -3.202794391620473
- type: nauc_map_at_5_diff1
value: 43.2266692546611
- type: nauc_map_at_5_max
value: 38.77368661115743
- type: nauc_map_at_5_std
value: -3.0897532130127954
- type: nauc_mrr_at_1000_diff1
value: 43.32474221868009
- type: nauc_mrr_at_1000_max
value: 39.407334029058575
- type: nauc_mrr_at_1000_std
value: -2.3728154448932606
- type: nauc_mrr_at_100_diff1
value: 43.32336300929909
- type: nauc_mrr_at_100_max
value: 39.432174777554835
- type: nauc_mrr_at_100_std
value: -2.356396922384349
- type: nauc_mrr_at_10_diff1
value: 43.1606520154482
- type: nauc_mrr_at_10_max
value: 39.33734650558226
- type: nauc_mrr_at_10_std
value: -2.5156222475075256
- type: nauc_mrr_at_1_diff1
value: 46.2178975214499
- type: nauc_mrr_at_1_max
value: 36.26173199049361
- type: nauc_mrr_at_1_std
value: -3.0897555582816443
- type: nauc_mrr_at_20_diff1
value: 43.272980702916456
- type: nauc_mrr_at_20_max
value: 39.4896977052276
- type: nauc_mrr_at_20_std
value: -2.3305501742917043
- type: nauc_mrr_at_3_diff1
value: 43.49525042967079
- type: nauc_mrr_at_3_max
value: 38.66352501824728
- type: nauc_mrr_at_3_std
value: -3.202794391620473
- type: nauc_mrr_at_5_diff1
value: 43.2266692546611
- type: nauc_mrr_at_5_max
value: 38.77368661115743
- type: nauc_mrr_at_5_std
value: -3.0897532130127954
- type: nauc_ndcg_at_1000_diff1
value: 43.01903168202974
- type: nauc_ndcg_at_1000_max
value: 40.75496622942232
- type: nauc_ndcg_at_1000_std
value: -1.3150412981845496
- type: nauc_ndcg_at_100_diff1
value: 42.98016493758145
- type: nauc_ndcg_at_100_max
value: 41.55869635162325
- type: nauc_ndcg_at_100_std
value: -0.5355252976886055
- type: nauc_ndcg_at_10_diff1
value: 42.218755211347506
- type: nauc_ndcg_at_10_max
value: 41.305042275175765
- type: nauc_ndcg_at_10_std
value: -1.4034484444573714
- type: nauc_ndcg_at_1_diff1
value: 46.2178975214499
- type: nauc_ndcg_at_1_max
value: 36.26173199049361
- type: nauc_ndcg_at_1_std
value: -3.0897555582816443
- type: nauc_ndcg_at_20_diff1
value: 42.66574440095576
- type: nauc_ndcg_at_20_max
value: 42.014620115124515
- type: nauc_ndcg_at_20_std
value: -0.5176162553751498
- type: nauc_ndcg_at_3_diff1
value: 42.837450505106055
- type: nauc_ndcg_at_3_max
value: 39.525369733082414
- type: nauc_ndcg_at_3_std
value: -3.1605948245795155
- type: nauc_ndcg_at_5_diff1
value: 42.37951815451173
- type: nauc_ndcg_at_5_max
value: 39.78840132935179
- type: nauc_ndcg_at_5_std
value: -2.936898430768135
- type: nauc_precision_at_1000_diff1
value: 49.69224988612385
- type: nauc_precision_at_1000_max
value: 79.57897547128005
- type: nauc_precision_at_1000_std
value: 45.040371354764645
- type: nauc_precision_at_100_diff1
value: 42.70597486048422
- type: nauc_precision_at_100_max
value: 65.74628759606188
- type: nauc_precision_at_100_std
value: 25.49157745244855
- type: nauc_precision_at_10_diff1
value: 38.565609931689345
- type: nauc_precision_at_10_max
value: 50.0239696180852
- type: nauc_precision_at_10_std
value: 3.976354829503967
- type: nauc_precision_at_1_diff1
value: 46.2178975214499
- type: nauc_precision_at_1_max
value: 36.26173199049361
- type: nauc_precision_at_1_std
value: -3.0897555582816443
- type: nauc_precision_at_20_diff1
value: 40.4134718566864
- type: nauc_precision_at_20_max
value: 57.121778108665374
- type: nauc_precision_at_20_std
value: 11.46021975428544
- type: nauc_precision_at_3_diff1
value: 40.90538379461529
- type: nauc_precision_at_3_max
value: 42.18393248057992
- type: nauc_precision_at_3_std
value: -3.005249943837297
- type: nauc_precision_at_5_diff1
value: 39.60162965860782
- type: nauc_precision_at_5_max
value: 43.28317158174058
- type: nauc_precision_at_5_std
value: -2.3469094487738054
- type: nauc_recall_at_1000_diff1
value: 49.69224988612252
- type: nauc_recall_at_1000_max
value: 79.57897547127862
- type: nauc_recall_at_1000_std
value: 45.04037135476256
- type: nauc_recall_at_100_diff1
value: 42.70597486048432
- type: nauc_recall_at_100_max
value: 65.74628759606213
- type: nauc_recall_at_100_std
value: 25.491577452448727
- type: nauc_recall_at_10_diff1
value: 38.56560993168935
- type: nauc_recall_at_10_max
value: 50.02396961808522
- type: nauc_recall_at_10_std
value: 3.9763548295040314
- type: nauc_recall_at_1_diff1
value: 46.2178975214499
- type: nauc_recall_at_1_max
value: 36.26173199049361
- type: nauc_recall_at_1_std
value: -3.0897555582816443
- type: nauc_recall_at_20_diff1
value: 40.41347185668637
- type: nauc_recall_at_20_max
value: 57.12177810866533
- type: nauc_recall_at_20_std
value: 11.460219754285431
- type: nauc_recall_at_3_diff1
value: 40.90538379461527
- type: nauc_recall_at_3_max
value: 42.18393248057989
- type: nauc_recall_at_3_std
value: -3.005249943837297
- type: nauc_recall_at_5_diff1
value: 39.601629658607784
- type: nauc_recall_at_5_max
value: 43.28317158174053
- type: nauc_recall_at_5_std
value: -2.3469094487738054
- type: ndcg_at_1
value: 37.047000000000004
- type: ndcg_at_10
value: 54.284
- type: ndcg_at_100
value: 58.34
- type: ndcg_at_1000
value: 59.303
- type: ndcg_at_20
value: 56.235
- type: ndcg_at_3
value: 48.503
- type: ndcg_at_5
value: 51.686
- type: precision_at_1
value: 37.047000000000004
- type: precision_at_10
value: 7.237
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.005
- type: precision_at_3
value: 18.898
- type: precision_at_5
value: 12.884
- type: recall_at_1
value: 37.047000000000004
- type: recall_at_10
value: 72.366
- type: recall_at_100
value: 91.408
- type: recall_at_1000
value: 99.136
- type: recall_at_20
value: 80.095
- type: recall_at_3
value: 56.693000000000005
- type: recall_at_5
value: 64.42099999999999
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 89.49253731343283
- type: ap
value: 61.88098616359918
- type: ap_weighted
value: 61.88098616359918
- type: f1
value: 84.76516623679144
- type: f1_weighted
value: 89.92745276292968
- type: main_score
value: 89.49253731343283
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 89.61456102783727
- type: ap
value: 93.11816566733742
- type: ap_weighted
value: 93.11816566733742
- type: f1
value: 88.27635757733722
- type: f1_weighted
value: 89.82581568285453
- type: main_score
value: 89.61456102783727
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 95.3825
- type: ap
value: 93.393033869502
- type: ap_weighted
value: 93.393033869502
- type: f1
value: 95.38109007966307
- type: f1_weighted
value: 95.38109007966305
- type: main_score
value: 95.3825
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.768
- type: f1
value: 48.95084821944411
- type: f1_weighted
value: 48.9508482194441
- type: main_score
value: 49.768
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.071999999999996
- type: f1
value: 47.24171107487612
- type: f1_weighted
value: 47.24171107487612
- type: main_score
value: 48.071999999999996
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.102000000000004
- type: f1
value: 47.27193805278696
- type: f1_weighted
value: 47.27193805278696
- type: main_score
value: 48.102000000000004
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.30800000000001
- type: f1
value: 46.41683358017851
- type: f1_weighted
value: 46.41683358017851
- type: main_score
value: 47.30800000000001
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.944
- type: f1
value: 44.223824487744395
- type: f1_weighted
value: 44.22382448774439
- type: main_score
value: 44.944
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 29.232000000000003
- type: map_at_10
value: 45.117000000000004
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 45.98
- type: map_at_20
value: 45.815
- type: map_at_3
value: 39.912
- type: map_at_5
value: 42.693
- type: mrr_at_1
value: 29.659000000000002
- type: mrr_at_10
value: 45.253
- type: mrr_at_100
value: 46.125
- type: mrr_at_1000
value: 46.129
- type: mrr_at_20
value: 45.964
- type: mrr_at_3
value: 40.043
- type: mrr_at_5
value: 42.870000000000005
- type: ndcg_at_1
value: 29.232000000000003
- type: ndcg_at_10
value: 54.327999999999996
- type: ndcg_at_100
value: 57.86
- type: ndcg_at_1000
value: 57.935
- type: ndcg_at_20
value: 56.794
- type: ndcg_at_3
value: 43.516
- type: ndcg_at_5
value: 48.512
- type: precision_at_1
value: 29.232000000000003
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.676
- type: precision_at_3
value: 17.994
- type: precision_at_5
value: 13.215
- type: recall_at_1
value: 29.232000000000003
- type: recall_at_10
value: 83.926
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 93.528
- type: recall_at_3
value: 53.983000000000004
- type: recall_at_5
value: 66.074
- type: main_score
value: 54.327999999999996
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 46.6636824632419
- type: v_measure
value: 46.6636824632419
- type: v_measure_std
value: 13.817129140714963
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 39.271141892800024
- type: v_measure
value: 39.271141892800024
- type: v_measure_std
value: 14.276782483454827
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 65.04363277324629
- type: mrr
value: 78.2372598162072
- type: main_score
value: 65.04363277324629
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.83
- type: main_score
value: 30.83
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 88.80382082011027
- type: cosine_spearman
value: 88.68876782169106
- type: euclidean_pearson
value: 87.00802890147176
- type: euclidean_spearman
value: 87.43211268192712
- type: main_score
value: 88.68876782169106
- type: manhattan_pearson
value: 87.14062537179474
- type: manhattan_spearman
value: 87.59115245033443
- type: pearson
value: 88.80382082011027
- type: spearman
value: 88.68876782169106
- task:
type: STS
dataset:
name: MTEB BQ (default)
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cosine_pearson
value: 61.588006604878196
- type: cosine_spearman
value: 63.20615427154465
- type: euclidean_pearson
value: 61.818547092516496
- type: euclidean_spearman
value: 63.21558009151778
- type: main_score
value: 63.20615427154465
- type: manhattan_pearson
value: 61.665588158487616
- type: manhattan_spearman
value: 63.051544488238584
- type: pearson
value: 61.588006604878196
- type: spearman
value: 63.20615427154465
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval (default)
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: main_score
value: 64.414
- type: map_at_1
value: 14.865
- type: map_at_10
value: 21.605
- type: map_at_100
value: 22.762
- type: map_at_1000
value: 22.854
- type: map_at_20
value: 22.259999999999998
- type: map_at_3
value: 20.119999999999997
- type: map_at_5
value: 20.931
- type: mrr_at_1
value: 14.864864864864865
- type: mrr_at_10
value: 21.605176605176606
- type: mrr_at_100
value: 22.7622306460065
- type: mrr_at_1000
value: 22.85383406410312
- type: mrr_at_20
value: 22.259528463088845
- type: mrr_at_3
value: 20.12012012012012
- type: mrr_at_5
value: 20.930930930930934
- type: nauc_map_at_1000_diff1
value: 17.486265968689338
- type: nauc_map_at_1000_max
value: 22.736799291688836
- type: nauc_map_at_1000_std
value: 9.831687441977147
- type: nauc_map_at_100_diff1
value: 17.50754492049086
- type: nauc_map_at_100_max
value: 22.77693662806787
- type: nauc_map_at_100_std
value: 9.853899509675395
- type: nauc_map_at_10_diff1
value: 17.42133968580952
- type: nauc_map_at_10_max
value: 22.45861793882279
- type: nauc_map_at_10_std
value: 8.964888472915938
- type: nauc_map_at_1_diff1
value: 19.433947086968093
- type: nauc_map_at_1_max
value: 24.75657047550517
- type: nauc_map_at_1_std
value: 15.122329157218505
- type: nauc_map_at_20_diff1
value: 17.429856756008785
- type: nauc_map_at_20_max
value: 22.438850987431017
- type: nauc_map_at_20_std
value: 9.172746012213558
- type: nauc_map_at_3_diff1
value: 18.218182689678475
- type: nauc_map_at_3_max
value: 23.57169444088667
- type: nauc_map_at_3_std
value: 10.464473559366356
- type: nauc_map_at_5_diff1
value: 18.6075342519133
- type: nauc_map_at_5_max
value: 23.308845973576673
- type: nauc_map_at_5_std
value: 9.364009996445652
- type: nauc_mrr_at_1000_diff1
value: 17.486265968689338
- type: nauc_mrr_at_1000_max
value: 22.736799291688836
- type: nauc_mrr_at_1000_std
value: 9.831687441977147
- type: nauc_mrr_at_100_diff1
value: 17.50754492049086
- type: nauc_mrr_at_100_max
value: 22.77693662806787
- type: nauc_mrr_at_100_std
value: 9.853899509675395
- type: nauc_mrr_at_10_diff1
value: 17.42133968580952
- type: nauc_mrr_at_10_max
value: 22.45861793882279
- type: nauc_mrr_at_10_std
value: 8.964888472915938
- type: nauc_mrr_at_1_diff1
value: 19.433947086968093
- type: nauc_mrr_at_1_max
value: 24.75657047550517
- type: nauc_mrr_at_1_std
value: 15.122329157218505
- type: nauc_mrr_at_20_diff1
value: 17.429856756008785
- type: nauc_mrr_at_20_max
value: 22.438850987431017
- type: nauc_mrr_at_20_std
value: 9.172746012213558
- type: nauc_mrr_at_3_diff1
value: 18.218182689678475
- type: nauc_mrr_at_3_max
value: 23.57169444088667
- type: nauc_mrr_at_3_std
value: 10.464473559366356
- type: nauc_mrr_at_5_diff1
value: 18.6075342519133
- type: nauc_mrr_at_5_max
value: 23.308845973576673
- type: nauc_mrr_at_5_std
value: 9.364009996445652
- type: nauc_ndcg_at_1000_diff1
value: 16.327871824135745
- type: nauc_ndcg_at_1000_max
value: 23.308241052911495
- type: nauc_ndcg_at_1000_std
value: 11.50905911184097
- type: nauc_ndcg_at_100_diff1
value: 16.676226744692773
- type: nauc_ndcg_at_100_max
value: 24.323253721240974
- type: nauc_ndcg_at_100_std
value: 11.952612443651557
- type: nauc_ndcg_at_10_diff1
value: 16.030325121764594
- type: nauc_ndcg_at_10_max
value: 21.306799242079542
- type: nauc_ndcg_at_10_std
value: 6.63359364302513
- type: nauc_ndcg_at_1_diff1
value: 19.433947086968093
- type: nauc_ndcg_at_1_max
value: 24.75657047550517
- type: nauc_ndcg_at_1_std
value: 15.122329157218505
- type: nauc_ndcg_at_20_diff1
value: 16.013173605999857
- type: nauc_ndcg_at_20_max
value: 21.607217260736576
- type: nauc_ndcg_at_20_std
value: 7.319482417138996
- type: nauc_ndcg_at_3_diff1
value: 17.97958548328493
- type: nauc_ndcg_at_3_max
value: 23.58346522810145
- type: nauc_ndcg_at_3_std
value: 9.392582854708314
- type: nauc_ndcg_at_5_diff1
value: 18.734733324685287
- type: nauc_ndcg_at_5_max
value: 23.273244317623742
- type: nauc_ndcg_at_5_std
value: 7.638611545253834
- type: nauc_precision_at_1000_diff1
value: 7.919843339380295
- type: nauc_precision_at_1000_max
value: 31.575386234270486
- type: nauc_precision_at_1000_std
value: 39.332224386769404
- type: nauc_precision_at_100_diff1
value: 15.018050960000052
- type: nauc_precision_at_100_max
value: 34.98209513759861
- type: nauc_precision_at_100_std
value: 26.970034484359022
- type: nauc_precision_at_10_diff1
value: 12.102191084210922
- type: nauc_precision_at_10_max
value: 18.112541150340675
- type: nauc_precision_at_10_std
value: 0.7358784689406018
- type: nauc_precision_at_1_diff1
value: 19.433947086968093
- type: nauc_precision_at_1_max
value: 24.75657047550517
- type: nauc_precision_at_1_std
value: 15.122329157218505
- type: nauc_precision_at_20_diff1
value: 12.018814361204328
- type: nauc_precision_at_20_max
value: 19.75123746049928
- type: nauc_precision_at_20_std
value: 3.012204650582264
- type: nauc_precision_at_3_diff1
value: 17.41375604940955
- type: nauc_precision_at_3_max
value: 23.699834627021037
- type: nauc_precision_at_3_std
value: 6.793486779050103
- type: nauc_precision_at_5_diff1
value: 19.194631963780257
- type: nauc_precision_at_5_max
value: 23.31708702442155
- type: nauc_precision_at_5_std
value: 3.4591358279667332
- type: nauc_recall_at_1000_diff1
value: 7.919843339380378
- type: nauc_recall_at_1000_max
value: 31.57538623427063
- type: nauc_recall_at_1000_std
value: 39.332224386769546
- type: nauc_recall_at_100_diff1
value: 15.018050960000085
- type: nauc_recall_at_100_max
value: 34.9820951375986
- type: nauc_recall_at_100_std
value: 26.97003448435901
- type: nauc_recall_at_10_diff1
value: 12.102191084210837
- type: nauc_recall_at_10_max
value: 18.112541150340594
- type: nauc_recall_at_10_std
value: 0.7358784689405188
- type: nauc_recall_at_1_diff1
value: 19.433947086968093
- type: nauc_recall_at_1_max
value: 24.75657047550517
- type: nauc_recall_at_1_std
value: 15.122329157218505
- type: nauc_recall_at_20_diff1
value: 12.01881436120429
- type: nauc_recall_at_20_max
value: 19.751237460499222
- type: nauc_recall_at_20_std
value: 3.0122046505822135
- type: nauc_recall_at_3_diff1
value: 17.413756049409503
- type: nauc_recall_at_3_max
value: 23.699834627020998
- type: nauc_recall_at_3_std
value: 6.793486779050083
- type: nauc_recall_at_5_diff1
value: 19.194631963780203
- type: nauc_recall_at_5_max
value: 23.3170870244215
- type: nauc_recall_at_5_std
value: 3.459135827966664
- type: ndcg_at_1
value: 14.865
- type: ndcg_at_10
value: 24.764
- type: ndcg_at_100
value: 30.861
- type: ndcg_at_1000
value: 33.628
- type: ndcg_at_20
value: 27.078000000000003
- type: ndcg_at_3
value: 21.675
- type: ndcg_at_5
value: 23.148
- type: precision_at_1
value: 14.865
- type: precision_at_10
value: 3.4680000000000004
- type: precision_at_100
value: 0.644
- type: precision_at_1000
value: 0.087
- type: precision_at_20
value: 2.185
- type: precision_at_3
value: 8.709
- type: precision_at_5
value: 5.946
- type: recall_at_1
value: 14.865
- type: recall_at_10
value: 34.685
- type: recall_at_100
value: 64.414
- type: recall_at_1000
value: 86.937
- type: recall_at_20
value: 43.694
- type: recall_at_3
value: 26.125999999999998
- type: recall_at_5
value: 29.73
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.08116883116882
- type: f1
value: 84.05587055990273
- type: f1_weighted
value: 84.05587055990274
- type: main_score
value: 84.08116883116882
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 38.1941007822277
- type: v_measure
value: 38.1941007822277
- type: v_measure_std
value: 0.7502113547288178
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 34.42075599178318
- type: v_measure
value: 34.42075599178318
- type: v_measure_std
value: 0.600256720497283
- task:
type: Clustering
dataset:
name: MTEB BlurbsClusteringP2P (default)
type: slvnwhrl/blurbs-clustering-p2p
config: default
split: test
revision: a2dd5b02a77de3466a3eaa98ae586b5610314496
metrics:
- type: main_score
value: 41.634627363047265
- type: v_measure
value: 41.634627363047265
- type: v_measure_std
value: 9.726923191225307
- task:
type: Clustering
dataset:
name: MTEB BlurbsClusteringS2S (default)
type: slvnwhrl/blurbs-clustering-s2s
config: default
split: test
revision: 22793b6a6465bf00120ad525e38c51210858132c
metrics:
- type: main_score
value: 20.996468295584197
- type: v_measure
value: 20.996468295584197
- type: v_measure_std
value: 9.225766688272197
- task:
type: Classification
dataset:
name: MTEB CBD (default)
type: PL-MTEB/cbd
config: default
split: test
revision: 36ddb419bcffe6a5374c3891957912892916f28d
metrics:
- type: accuracy
value: 69.99
- type: ap
value: 22.57826353116948
- type: ap_weighted
value: 22.57826353116948
- type: f1
value: 59.04574955548393
- type: f1_weighted
value: 74.36235022309789
- type: main_score
value: 69.99
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E (default)
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
metrics:
- type: cosine_accuracy
value: 88.7
- type: cosine_accuracy_threshold
value: 97.37848043441772
- type: cosine_ap
value: 73.0405088928302
- type: cosine_f1
value: 63.52201257861635
- type: cosine_f1_threshold
value: 96.98888063430786
- type: cosine_precision
value: 78.90625
- type: cosine_recall
value: 53.1578947368421
- type: dot_accuracy
value: 84.89999999999999
- type: dot_accuracy_threshold
value: 43603.09753417969
- type: dot_ap
value: 56.98157569085279
- type: dot_f1
value: 57.606490872210955
- type: dot_f1_threshold
value: 40406.23779296875
- type: dot_precision
value: 46.864686468646866
- type: dot_recall
value: 74.73684210526315
- type: euclidean_accuracy
value: 88.5
- type: euclidean_accuracy_threshold
value: 498.0483055114746
- type: euclidean_ap
value: 72.97328234816734
- type: euclidean_f1
value: 63.722397476340696
- type: euclidean_f1_threshold
value: 508.6186408996582
- type: euclidean_precision
value: 79.52755905511812
- type: euclidean_recall
value: 53.1578947368421
- type: main_score
value: 73.0405088928302
- type: manhattan_accuracy
value: 88.6
- type: manhattan_accuracy_threshold
value: 12233.079528808594
- type: manhattan_ap
value: 72.92148503992615
- type: manhattan_f1
value: 63.69426751592356
- type: manhattan_f1_threshold
value: 12392.754364013672
- type: manhattan_precision
value: 80.64516129032258
- type: manhattan_recall
value: 52.63157894736842
- type: max_accuracy
value: 88.7
- type: max_ap
value: 73.0405088928302
- type: max_f1
value: 63.722397476340696
- type: max_precision
value: 80.64516129032258
- type: max_recall
value: 74.73684210526315
- type: similarity_accuracy
value: 88.7
- type: similarity_accuracy_threshold
value: 97.37848043441772
- type: similarity_ap
value: 73.0405088928302
- type: similarity_f1
value: 63.52201257861635
- type: similarity_f1_threshold
value: 96.98888063430786
- type: similarity_precision
value: 78.90625
- type: similarity_recall
value: 53.1578947368421
- task:
type: STS
dataset:
name: MTEB CDSC-R (default)
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
metrics:
- type: cosine_pearson
value: 92.97492495289738
- type: cosine_spearman
value: 92.63248098608472
- type: euclidean_pearson
value: 92.04712487782031
- type: euclidean_spearman
value: 92.19679486755008
- type: main_score
value: 92.63248098608472
- type: manhattan_pearson
value: 92.0101187740438
- type: manhattan_spearman
value: 92.20926859332754
- type: pearson
value: 92.97492495289738
- type: spearman
value: 92.63248098608472
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P (default)
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: main_score
value: 39.96377851800628
- type: v_measure
value: 39.96377851800628
- type: v_measure_std
value: 0.9793033243093288
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S (default)
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: main_score
value: 38.788850224595784
- type: v_measure
value: 38.788850224595784
- type: v_measure_std
value: 1.0712604145916924
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 77.95952507806115
- type: mrr
value: 80.8643253968254
- type: main_score
value: 77.95952507806115
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 78.21522500165045
- type: mrr
value: 81.28194444444443
- type: main_score
value: 78.21522500165045
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.377
- type: map_at_10
value: 46.371
- type: map_at_100
value: 47.829
- type: map_at_1000
value: 47.94
- type: map_at_20
value: 47.205000000000005
- type: map_at_3
value: 42.782
- type: map_at_5
value: 44.86
- type: mrr_at_1
value: 41.345
- type: mrr_at_10
value: 52.187
- type: mrr_at_100
value: 52.893
- type: mrr_at_1000
value: 52.929
- type: mrr_at_20
value: 52.637
- type: mrr_at_3
value: 49.714000000000006
- type: mrr_at_5
value: 51.373000000000005
- type: ndcg_at_1
value: 41.345
- type: ndcg_at_10
value: 52.946000000000005
- type: ndcg_at_100
value: 57.92699999999999
- type: ndcg_at_1000
value: 59.609
- type: ndcg_at_20
value: 54.900999999999996
- type: ndcg_at_3
value: 48.357
- type: ndcg_at_5
value: 50.739000000000004
- type: precision_at_1
value: 41.345
- type: precision_at_10
value: 10.186
- type: precision_at_100
value: 1.554
- type: precision_at_1000
value: 0.2
- type: precision_at_20
value: 5.959
- type: precision_at_3
value: 23.796
- type: precision_at_5
value: 17.024
- type: recall_at_1
value: 33.377
- type: recall_at_10
value: 65.067
- type: recall_at_100
value: 86.04899999999999
- type: recall_at_1000
value: 96.54899999999999
- type: recall_at_20
value: 72.071
- type: recall_at_3
value: 51.349999999999994
- type: recall_at_5
value: 58.41
- type: main_score
value: 52.946000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 31.097
- type: map_at_10
value: 42.183
- type: map_at_100
value: 43.580999999999996
- type: map_at_1000
value: 43.718
- type: map_at_20
value: 42.921
- type: map_at_3
value: 38.963
- type: map_at_5
value: 40.815
- type: mrr_at_1
value: 39.745000000000005
- type: mrr_at_10
value: 48.736000000000004
- type: mrr_at_100
value: 49.405
- type: mrr_at_1000
value: 49.452
- type: mrr_at_20
value: 49.118
- type: mrr_at_3
value: 46.497
- type: mrr_at_5
value: 47.827999999999996
- type: ndcg_at_1
value: 39.745000000000005
- type: ndcg_at_10
value: 48.248000000000005
- type: ndcg_at_100
value: 52.956
- type: ndcg_at_1000
value: 54.99699999999999
- type: ndcg_at_20
value: 50.01
- type: ndcg_at_3
value: 43.946000000000005
- type: ndcg_at_5
value: 46.038000000000004
- type: precision_at_1
value: 39.745000000000005
- type: precision_at_10
value: 9.229
- type: precision_at_100
value: 1.5070000000000001
- type: precision_at_1000
value: 0.199
- type: precision_at_20
value: 5.489999999999999
- type: precision_at_3
value: 21.38
- type: precision_at_5
value: 15.274
- type: recall_at_1
value: 31.097
- type: recall_at_10
value: 58.617
- type: recall_at_100
value: 78.55199999999999
- type: recall_at_1000
value: 91.13900000000001
- type: recall_at_20
value: 64.92
- type: recall_at_3
value: 45.672000000000004
- type: recall_at_5
value: 51.669
- type: main_score
value: 48.248000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.745000000000005
- type: map_at_10
value: 52.063
- type: map_at_100
value: 53.077
- type: map_at_1000
value: 53.13
- type: map_at_20
value: 52.66
- type: map_at_3
value: 48.662
- type: map_at_5
value: 50.507000000000005
- type: mrr_at_1
value: 45.391999999999996
- type: mrr_at_10
value: 55.528
- type: mrr_at_100
value: 56.16100000000001
- type: mrr_at_1000
value: 56.192
- type: mrr_at_20
value: 55.923
- type: mrr_at_3
value: 52.93600000000001
- type: mrr_at_5
value: 54.435
- type: ndcg_at_1
value: 45.391999999999996
- type: ndcg_at_10
value: 58.019
- type: ndcg_at_100
value: 61.936
- type: ndcg_at_1000
value: 63.015
- type: ndcg_at_20
value: 59.691
- type: ndcg_at_3
value: 52.294
- type: ndcg_at_5
value: 55.017
- type: precision_at_1
value: 45.391999999999996
- type: precision_at_10
value: 9.386
- type: precision_at_100
value: 1.232
- type: precision_at_1000
value: 0.136
- type: precision_at_20
value: 5.223
- type: precision_at_3
value: 23.177
- type: precision_at_5
value: 15.9
- type: recall_at_1
value: 39.745000000000005
- type: recall_at_10
value: 72.08099999999999
- type: recall_at_100
value: 88.85300000000001
- type: recall_at_1000
value: 96.569
- type: recall_at_20
value: 78.203
- type: recall_at_3
value: 56.957
- type: recall_at_5
value: 63.63100000000001
- type: main_score
value: 58.019
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 26.651999999999997
- type: map_at_10
value: 35.799
- type: map_at_100
value: 36.846000000000004
- type: map_at_1000
value: 36.931000000000004
- type: map_at_20
value: 36.341
- type: map_at_3
value: 32.999
- type: map_at_5
value: 34.597
- type: mrr_at_1
value: 28.814
- type: mrr_at_10
value: 37.869
- type: mrr_at_100
value: 38.728
- type: mrr_at_1000
value: 38.795
- type: mrr_at_20
value: 38.317
- type: mrr_at_3
value: 35.235
- type: mrr_at_5
value: 36.738
- type: ndcg_at_1
value: 28.814
- type: ndcg_at_10
value: 41.028
- type: ndcg_at_100
value: 46.162
- type: ndcg_at_1000
value: 48.15
- type: ndcg_at_20
value: 42.824
- type: ndcg_at_3
value: 35.621
- type: ndcg_at_5
value: 38.277
- type: precision_at_1
value: 28.814
- type: precision_at_10
value: 6.361999999999999
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_20
value: 3.6159999999999997
- type: precision_at_3
value: 15.140999999999998
- type: precision_at_5
value: 10.712000000000002
- type: recall_at_1
value: 26.651999999999997
- type: recall_at_10
value: 55.038
- type: recall_at_100
value: 78.806
- type: recall_at_1000
value: 93.485
- type: recall_at_20
value: 61.742
- type: recall_at_3
value: 40.682
- type: recall_at_5
value: 46.855000000000004
- type: main_score
value: 41.028
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 17.627000000000002
- type: map_at_10
value: 26.436999999999998
- type: map_at_100
value: 27.85
- type: map_at_1000
value: 27.955999999999996
- type: map_at_20
value: 27.233
- type: map_at_3
value: 23.777
- type: map_at_5
value: 25.122
- type: mrr_at_1
value: 22.387999999999998
- type: mrr_at_10
value: 31.589
- type: mrr_at_100
value: 32.641999999999996
- type: mrr_at_1000
value: 32.696999999999996
- type: mrr_at_20
value: 32.201
- type: mrr_at_3
value: 28.98
- type: mrr_at_5
value: 30.342000000000002
- type: ndcg_at_1
value: 22.387999999999998
- type: ndcg_at_10
value: 32.129999999999995
- type: ndcg_at_100
value: 38.562999999999995
- type: ndcg_at_1000
value: 40.903
- type: ndcg_at_20
value: 34.652
- type: ndcg_at_3
value: 27.26
- type: ndcg_at_5
value: 29.235
- type: precision_at_1
value: 22.387999999999998
- type: precision_at_10
value: 5.970000000000001
- type: precision_at_100
value: 1.068
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_20
value: 3.6999999999999997
- type: precision_at_3
value: 13.267000000000001
- type: precision_at_5
value: 9.403
- type: recall_at_1
value: 17.627000000000002
- type: recall_at_10
value: 44.71
- type: recall_at_100
value: 72.426
- type: recall_at_1000
value: 88.64699999999999
- type: recall_at_20
value: 53.65
- type: recall_at_3
value: 30.989
- type: recall_at_5
value: 36.237
- type: main_score
value: 32.129999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 30.891000000000002
- type: map_at_10
value: 41.519
- type: map_at_100
value: 42.896
- type: map_at_1000
value: 42.992999999999995
- type: map_at_20
value: 42.287
- type: map_at_3
value: 37.822
- type: map_at_5
value: 39.976
- type: mrr_at_1
value: 37.921
- type: mrr_at_10
value: 47.260999999999996
- type: mrr_at_100
value: 48.044
- type: mrr_at_1000
value: 48.08
- type: mrr_at_20
value: 47.699999999999996
- type: mrr_at_3
value: 44.513999999999996
- type: mrr_at_5
value: 46.064
- type: ndcg_at_1
value: 37.921
- type: ndcg_at_10
value: 47.806
- type: ndcg_at_100
value: 53.274
- type: ndcg_at_1000
value: 55.021
- type: ndcg_at_20
value: 49.973
- type: ndcg_at_3
value: 42.046
- type: ndcg_at_5
value: 44.835
- type: precision_at_1
value: 37.921
- type: precision_at_10
value: 8.767999999999999
- type: precision_at_100
value: 1.353
- type: precision_at_1000
value: 0.168
- type: precision_at_20
value: 5.135
- type: precision_at_3
value: 20.051
- type: precision_at_5
value: 14.398
- type: recall_at_1
value: 30.891000000000002
- type: recall_at_10
value: 60.897999999999996
- type: recall_at_100
value: 83.541
- type: recall_at_1000
value: 94.825
- type: recall_at_20
value: 68.356
- type: recall_at_3
value: 44.65
- type: recall_at_5
value: 51.919000000000004
- type: main_score
value: 47.806
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 27.654
- type: map_at_10
value: 38.025999999999996
- type: map_at_100
value: 39.425
- type: map_at_1000
value: 39.528
- type: map_at_20
value: 38.838
- type: map_at_3
value: 34.745
- type: map_at_5
value: 36.537
- type: mrr_at_1
value: 34.018
- type: mrr_at_10
value: 43.314
- type: mrr_at_100
value: 44.283
- type: mrr_at_1000
value: 44.327
- type: mrr_at_20
value: 43.929
- type: mrr_at_3
value: 40.868
- type: mrr_at_5
value: 42.317
- type: ndcg_at_1
value: 34.018
- type: ndcg_at_10
value: 43.887
- type: ndcg_at_100
value: 49.791000000000004
- type: ndcg_at_1000
value: 51.834
- type: ndcg_at_20
value: 46.376
- type: ndcg_at_3
value: 38.769999999999996
- type: ndcg_at_5
value: 41.144
- type: precision_at_1
value: 34.018
- type: precision_at_10
value: 8.001999999999999
- type: precision_at_100
value: 1.2630000000000001
- type: precision_at_1000
value: 0.16
- type: precision_at_20
value: 4.737
- type: precision_at_3
value: 18.417
- type: precision_at_5
value: 13.150999999999998
- type: recall_at_1
value: 27.654
- type: recall_at_10
value: 56.111
- type: recall_at_100
value: 81.136
- type: recall_at_1000
value: 94.788
- type: recall_at_20
value: 65.068
- type: recall_at_3
value: 41.713
- type: recall_at_5
value: 48.106
- type: main_score
value: 43.887
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 42.58858333333333
- type: ndcg_at_10
value: 42.58858333333333
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.501
- type: map_at_10
value: 32.814
- type: map_at_100
value: 33.754
- type: map_at_1000
value: 33.859
- type: map_at_20
value: 33.324
- type: map_at_3
value: 30.758000000000003
- type: map_at_5
value: 31.936999999999998
- type: mrr_at_1
value: 27.761000000000003
- type: mrr_at_10
value: 35.662
- type: mrr_at_100
value: 36.443999999999996
- type: mrr_at_1000
value: 36.516999999999996
- type: mrr_at_20
value: 36.085
- type: mrr_at_3
value: 33.742
- type: mrr_at_5
value: 34.931
- type: ndcg_at_1
value: 27.761000000000003
- type: ndcg_at_10
value: 37.208000000000006
- type: ndcg_at_100
value: 41.839
- type: ndcg_at_1000
value: 44.421
- type: ndcg_at_20
value: 38.917
- type: ndcg_at_3
value: 33.544000000000004
- type: ndcg_at_5
value: 35.374
- type: precision_at_1
value: 27.761000000000003
- type: precision_at_10
value: 5.92
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.12
- type: precision_at_20
value: 3.4130000000000003
- type: precision_at_3
value: 15.031
- type: precision_at_5
value: 10.306999999999999
- type: recall_at_1
value: 24.501
- type: recall_at_10
value: 47.579
- type: recall_at_100
value: 69.045
- type: recall_at_1000
value: 88.032
- type: recall_at_20
value: 54.125
- type: recall_at_3
value: 37.202
- type: recall_at_5
value: 41.927
- type: main_score
value: 37.208000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.29
- type: map_at_10
value: 26.183
- type: map_at_100
value: 27.351999999999997
- type: map_at_1000
value: 27.483999999999998
- type: map_at_20
value: 26.798
- type: map_at_3
value: 23.629
- type: map_at_5
value: 24.937
- type: mrr_at_1
value: 22.299
- type: mrr_at_10
value: 30.189
- type: mrr_at_100
value: 31.098
- type: mrr_at_1000
value: 31.177
- type: mrr_at_20
value: 30.697000000000003
- type: mrr_at_3
value: 27.862
- type: mrr_at_5
value: 29.066
- type: ndcg_at_1
value: 22.299
- type: ndcg_at_10
value: 31.202
- type: ndcg_at_100
value: 36.617
- type: ndcg_at_1000
value: 39.544000000000004
- type: ndcg_at_20
value: 33.177
- type: ndcg_at_3
value: 26.639000000000003
- type: ndcg_at_5
value: 28.526
- type: precision_at_1
value: 22.299
- type: precision_at_10
value: 5.8020000000000005
- type: precision_at_100
value: 1.0070000000000001
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_20
value: 3.505
- type: precision_at_3
value: 12.698
- type: precision_at_5
value: 9.174
- type: recall_at_1
value: 18.29
- type: recall_at_10
value: 42.254999999999995
- type: recall_at_100
value: 66.60000000000001
- type: recall_at_1000
value: 87.31400000000001
- type: recall_at_20
value: 49.572
- type: recall_at_3
value: 29.342000000000002
- type: recall_at_5
value: 34.221000000000004
- type: main_score
value: 31.202
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 27.722
- type: map_at_10
value: 37.698
- type: map_at_100
value: 38.899
- type: map_at_1000
value: 38.998
- type: map_at_20
value: 38.381
- type: map_at_3
value: 34.244
- type: map_at_5
value: 36.295
- type: mrr_at_1
value: 32.183
- type: mrr_at_10
value: 41.429
- type: mrr_at_100
value: 42.308
- type: mrr_at_1000
value: 42.358000000000004
- type: mrr_at_20
value: 41.957
- type: mrr_at_3
value: 38.401999999999994
- type: mrr_at_5
value: 40.294999999999995
- type: ndcg_at_1
value: 32.183
- type: ndcg_at_10
value: 43.519000000000005
- type: ndcg_at_100
value: 48.786
- type: ndcg_at_1000
value: 50.861999999999995
- type: ndcg_at_20
value: 45.654
- type: ndcg_at_3
value: 37.521
- type: ndcg_at_5
value: 40.615
- type: precision_at_1
value: 32.183
- type: precision_at_10
value: 7.603
- type: precision_at_100
value: 1.135
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_20
value: 4.408
- type: precision_at_3
value: 17.071
- type: precision_at_5
value: 12.668
- type: recall_at_1
value: 27.722
- type: recall_at_10
value: 57.230000000000004
- type: recall_at_100
value: 79.97999999999999
- type: recall_at_1000
value: 94.217
- type: recall_at_20
value: 64.864
- type: recall_at_3
value: 41.215
- type: recall_at_5
value: 48.774
- type: main_score
value: 43.519000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 25.852999999999998
- type: map_at_10
value: 35.394999999999996
- type: map_at_100
value: 37.291999999999994
- type: map_at_1000
value: 37.495
- type: map_at_20
value: 36.372
- type: map_at_3
value: 32.336
- type: map_at_5
value: 34.159
- type: mrr_at_1
value: 31.818
- type: mrr_at_10
value: 40.677
- type: mrr_at_100
value: 41.728
- type: mrr_at_1000
value: 41.778
- type: mrr_at_20
value: 41.301
- type: mrr_at_3
value: 38.208
- type: mrr_at_5
value: 39.592
- type: ndcg_at_1
value: 31.818
- type: ndcg_at_10
value: 41.559000000000005
- type: ndcg_at_100
value: 48.012
- type: ndcg_at_1000
value: 50.234
- type: ndcg_at_20
value: 44.15
- type: ndcg_at_3
value: 36.918
- type: ndcg_at_5
value: 39.227000000000004
- type: precision_at_1
value: 31.818
- type: precision_at_10
value: 8.043
- type: precision_at_100
value: 1.625
- type: precision_at_1000
value: 0.245
- type: precision_at_20
value: 5.2170000000000005
- type: precision_at_3
value: 17.655
- type: precision_at_5
value: 12.845999999999998
- type: recall_at_1
value: 25.852999999999998
- type: recall_at_10
value: 53.093
- type: recall_at_100
value: 81.05799999999999
- type: recall_at_1000
value: 94.657
- type: recall_at_20
value: 62.748000000000005
- type: recall_at_3
value: 39.300000000000004
- type: recall_at_5
value: 45.754
- type: main_score
value: 41.559000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 19.23
- type: map_at_10
value: 28.128999999999998
- type: map_at_100
value: 29.195
- type: map_at_1000
value: 29.310000000000002
- type: map_at_20
value: 28.713
- type: map_at_3
value: 25.191000000000003
- type: map_at_5
value: 26.69
- type: mrr_at_1
value: 21.257
- type: mrr_at_10
value: 30.253999999999998
- type: mrr_at_100
value: 31.195
- type: mrr_at_1000
value: 31.270999999999997
- type: mrr_at_20
value: 30.747999999999998
- type: mrr_at_3
value: 27.633999999999997
- type: mrr_at_5
value: 28.937
- type: ndcg_at_1
value: 21.257
- type: ndcg_at_10
value: 33.511
- type: ndcg_at_100
value: 38.733000000000004
- type: ndcg_at_1000
value: 41.489
- type: ndcg_at_20
value: 35.476
- type: ndcg_at_3
value: 27.845
- type: ndcg_at_5
value: 30.264999999999997
- type: precision_at_1
value: 21.257
- type: precision_at_10
value: 5.619
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.124
- type: precision_at_20
value: 3.29
- type: precision_at_3
value: 12.508
- type: precision_at_5
value: 8.946
- type: recall_at_1
value: 19.23
- type: recall_at_10
value: 48.185
- type: recall_at_100
value: 71.932
- type: recall_at_1000
value: 92.587
- type: recall_at_20
value: 55.533
- type: recall_at_3
value: 32.865
- type: recall_at_5
value: 38.577
- type: main_score
value: 33.511
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 19.594
- type: map_at_10
value: 32.519
- type: map_at_100
value: 34.1
- type: map_at_1000
value: 34.263
- type: map_at_20
value: 33.353
- type: map_at_3
value: 27.898
- type: map_at_5
value: 30.524
- type: mrr_at_1
value: 46.515
- type: mrr_at_10
value: 56.958
- type: mrr_at_100
value: 57.54899999999999
- type: mrr_at_1000
value: 57.574999999999996
- type: mrr_at_20
value: 57.315000000000005
- type: mrr_at_3
value: 54.852999999999994
- type: mrr_at_5
value: 56.153
- type: ndcg_at_1
value: 46.515
- type: ndcg_at_10
value: 42.363
- type: ndcg_at_100
value: 48.233
- type: ndcg_at_1000
value: 50.993
- type: ndcg_at_20
value: 44.533
- type: ndcg_at_3
value: 37.297000000000004
- type: ndcg_at_5
value: 38.911
- type: precision_at_1
value: 46.515
- type: precision_at_10
value: 12.520999999999999
- type: precision_at_100
value: 1.8980000000000001
- type: precision_at_1000
value: 0.242
- type: precision_at_20
value: 7.212000000000001
- type: precision_at_3
value: 27.752
- type: precision_at_5
value: 20.391000000000002
- type: recall_at_1
value: 19.594
- type: recall_at_10
value: 46.539
- type: recall_at_100
value: 66.782
- type: recall_at_1000
value: 82.049
- type: recall_at_20
value: 52.611
- type: recall_at_3
value: 32.528
- type: recall_at_5
value: 38.933
- type: main_score
value: 42.363
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval (default)
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: main_score
value: 35.927
- type: map_at_1
value: 20.144000000000002
- type: map_at_10
value: 29.94
- type: map_at_100
value: 31.630000000000003
- type: map_at_1000
value: 31.778000000000002
- type: map_at_20
value: 30.798
- type: map_at_3
value: 26.534999999999997
- type: map_at_5
value: 28.33
- type: mrr_at_1
value: 31.23280820205051
- type: mrr_at_10
value: 38.66781179421835
- type: mrr_at_100
value: 39.656936166081785
- type: mrr_at_1000
value: 39.724602893117414
- type: mrr_at_20
value: 39.21272461558451
- type: mrr_at_3
value: 36.30907726931729
- type: mrr_at_5
value: 37.59814953738436
- type: nauc_map_at_1000_diff1
value: 44.5755334437146
- type: nauc_map_at_1000_max
value: 40.726916781400746
- type: nauc_map_at_1000_std
value: -19.591835061497367
- type: nauc_map_at_100_diff1
value: 44.54542899921038
- type: nauc_map_at_100_max
value: 40.68305902532837
- type: nauc_map_at_100_std
value: -19.658902089283487
- type: nauc_map_at_10_diff1
value: 44.56110529630953
- type: nauc_map_at_10_max
value: 39.89826167846008
- type: nauc_map_at_10_std
value: -20.62910633667902
- type: nauc_map_at_1_diff1
value: 50.82120107004449
- type: nauc_map_at_1_max
value: 33.208851367861584
- type: nauc_map_at_1_std
value: -20.29409730258174
- type: nauc_map_at_20_diff1
value: 44.51171242433788
- type: nauc_map_at_20_max
value: 40.30431132782945
- type: nauc_map_at_20_std
value: -20.290524142792417
- type: nauc_map_at_3_diff1
value: 45.80394138665133
- type: nauc_map_at_3_max
value: 37.766191281426956
- type: nauc_map_at_3_std
value: -21.223601997333876
- type: nauc_map_at_5_diff1
value: 45.00457218474283
- type: nauc_map_at_5_max
value: 38.901044576388365
- type: nauc_map_at_5_std
value: -20.893069613941634
- type: nauc_mrr_at_1000_diff1
value: 50.09855359231429
- type: nauc_mrr_at_1000_max
value: 46.481000170008826
- type: nauc_mrr_at_1000_std
value: -16.053461377096102
- type: nauc_mrr_at_100_diff1
value: 50.08205026347746
- type: nauc_mrr_at_100_max
value: 46.47262126963331
- type: nauc_mrr_at_100_std
value: -16.049112778748693
- type: nauc_mrr_at_10_diff1
value: 50.02363239081706
- type: nauc_mrr_at_10_max
value: 46.39287859062042
- type: nauc_mrr_at_10_std
value: -16.280866744769657
- type: nauc_mrr_at_1_diff1
value: 55.692503735317445
- type: nauc_mrr_at_1_max
value: 47.334834529801014
- type: nauc_mrr_at_1_std
value: -16.985483585693512
- type: nauc_mrr_at_20_diff1
value: 50.07725225722074
- type: nauc_mrr_at_20_max
value: 46.47279295070193
- type: nauc_mrr_at_20_std
value: -16.15168364678318
- type: nauc_mrr_at_3_diff1
value: 51.18685337274134
- type: nauc_mrr_at_3_max
value: 46.7286365021621
- type: nauc_mrr_at_3_std
value: -16.708451287313718
- type: nauc_mrr_at_5_diff1
value: 50.46777237893576
- type: nauc_mrr_at_5_max
value: 46.5352076502249
- type: nauc_mrr_at_5_std
value: -16.557413659905034
- type: nauc_ndcg_at_1000_diff1
value: 43.974299434438066
- type: nauc_ndcg_at_1000_max
value: 43.44628675071857
- type: nauc_ndcg_at_1000_std
value: -15.3495102005021
- type: nauc_ndcg_at_100_diff1
value: 43.336365081508504
- type: nauc_ndcg_at_100_max
value: 43.11345604460776
- type: nauc_ndcg_at_100_std
value: -15.571128070860615
- type: nauc_ndcg_at_10_diff1
value: 43.41266214720136
- type: nauc_ndcg_at_10_max
value: 41.519676787851914
- type: nauc_ndcg_at_10_std
value: -19.217175017223568
- type: nauc_ndcg_at_1_diff1
value: 55.692503735317445
- type: nauc_ndcg_at_1_max
value: 47.334834529801014
- type: nauc_ndcg_at_1_std
value: -16.985483585693512
- type: nauc_ndcg_at_20_diff1
value: 43.351653862834496
- type: nauc_ndcg_at_20_max
value: 42.11608469750499
- type: nauc_ndcg_at_20_std
value: -18.485363540641664
- type: nauc_ndcg_at_3_diff1
value: 45.64193888236677
- type: nauc_ndcg_at_3_max
value: 42.497135099009995
- type: nauc_ndcg_at_3_std
value: -18.764012041130094
- type: nauc_ndcg_at_5_diff1
value: 44.523392133895186
- type: nauc_ndcg_at_5_max
value: 41.564242030096345
- type: nauc_ndcg_at_5_std
value: -19.31080790984941
- type: nauc_precision_at_1000_diff1
value: 6.383464615714393
- type: nauc_precision_at_1000_max
value: 27.439930931284657
- type: nauc_precision_at_1000_std
value: 19.070716188143034
- type: nauc_precision_at_100_diff1
value: 12.599136754501284
- type: nauc_precision_at_100_max
value: 35.886310962337795
- type: nauc_precision_at_100_std
value: 14.06587592659196
- type: nauc_precision_at_10_diff1
value: 25.388891173150206
- type: nauc_precision_at_10_max
value: 46.10269270777384
- type: nauc_precision_at_10_std
value: -5.993803607158499
- type: nauc_precision_at_1_diff1
value: 55.692503735317445
- type: nauc_precision_at_1_max
value: 47.334834529801014
- type: nauc_precision_at_1_std
value: -16.985483585693512
- type: nauc_precision_at_20_diff1
value: 20.984013463099707
- type: nauc_precision_at_20_max
value: 42.9471854616888
- type: nauc_precision_at_20_std
value: -0.8045549929346024
- type: nauc_precision_at_3_diff1
value: 36.191850547148356
- type: nauc_precision_at_3_max
value: 48.09923832376049
- type: nauc_precision_at_3_std
value: -13.159407051271321
- type: nauc_precision_at_5_diff1
value: 31.04967966700407
- type: nauc_precision_at_5_max
value: 47.62867673349624
- type: nauc_precision_at_5_std
value: -10.345790325137353
- type: nauc_recall_at_1000_diff1
value: 11.03436839065707
- type: nauc_recall_at_1000_max
value: 42.32265076651575
- type: nauc_recall_at_1000_std
value: 30.478521053399206
- type: nauc_recall_at_100_diff1
value: 24.788349084510806
- type: nauc_recall_at_100_max
value: 36.72097184821956
- type: nauc_recall_at_100_std
value: -0.2241144179522076
- type: nauc_recall_at_10_diff1
value: 31.613053567704885
- type: nauc_recall_at_10_max
value: 34.4597322828833
- type: nauc_recall_at_10_std
value: -18.00022912690819
- type: nauc_recall_at_1_diff1
value: 50.82120107004449
- type: nauc_recall_at_1_max
value: 33.208851367861584
- type: nauc_recall_at_1_std
value: -20.29409730258174
- type: nauc_recall_at_20_diff1
value: 30.277002670708384
- type: nauc_recall_at_20_max
value: 35.212475675060375
- type: nauc_recall_at_20_std
value: -15.822788854733687
- type: nauc_recall_at_3_diff1
value: 38.87844958322257
- type: nauc_recall_at_3_max
value: 34.66914910044104
- type: nauc_recall_at_3_std
value: -20.234707300209127
- type: nauc_recall_at_5_diff1
value: 35.551139991687776
- type: nauc_recall_at_5_max
value: 34.61009958820695
- type: nauc_recall_at_5_std
value: -19.519180149293444
- type: ndcg_at_1
value: 31.233
- type: ndcg_at_10
value: 35.927
- type: ndcg_at_100
value: 43.037
- type: ndcg_at_1000
value: 45.900999999999996
- type: ndcg_at_20
value: 38.39
- type: ndcg_at_3
value: 31.366
- type: ndcg_at_5
value: 33.108
- type: precision_at_1
value: 31.233
- type: precision_at_10
value: 8.15
- type: precision_at_100
value: 1.402
- type: precision_at_1000
value: 0.17700000000000002
- type: precision_at_20
value: 4.91
- type: precision_at_3
value: 17.871000000000002
- type: precision_at_5
value: 12.948
- type: recall_at_1
value: 20.144000000000002
- type: recall_at_10
value: 44.985
- type: recall_at_100
value: 74.866
- type: recall_at_1000
value: 94.477
- type: recall_at_20
value: 53.37
- type: recall_at_3
value: 31.141000000000002
- type: recall_at_5
value: 36.721
- task:
type: PairClassification
dataset:
name: MTEB Cmnli (default)
type: C-MTEB/CMNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 71.25676488274203
- type: cos_sim_accuracy_threshold
value: 78.11152935028076
- type: cos_sim_ap
value: 79.10444825556077
- type: cos_sim_f1
value: 74.10750923266312
- type: cos_sim_f1_threshold
value: 75.2312421798706
- type: cos_sim_precision
value: 66.02083714129044
- type: cos_sim_recall
value: 84.45171849427169
- type: dot_accuracy
value: 68.11785929043896
- type: dot_accuracy_threshold
value: 34783.23974609375
- type: dot_ap
value: 75.80201827987712
- type: dot_f1
value: 72.31670990679349
- type: dot_f1_threshold
value: 31978.036499023438
- type: dot_precision
value: 61.386623164763456
- type: dot_recall
value: 87.98223053542202
- type: euclidean_accuracy
value: 71.41310883944678
- type: euclidean_accuracy_threshold
value: 1374.9353408813477
- type: euclidean_ap
value: 79.23359768836457
- type: euclidean_f1
value: 74.38512297540491
- type: euclidean_f1_threshold
value: 1512.6035690307617
- type: euclidean_precision
value: 64.97816593886463
- type: euclidean_recall
value: 86.97685293429974
- type: manhattan_accuracy
value: 71.32892363199038
- type: manhattan_accuracy_threshold
value: 33340.49072265625
- type: manhattan_ap
value: 79.11973684118587
- type: manhattan_f1
value: 74.29401993355481
- type: manhattan_f1_threshold
value: 36012.52746582031
- type: manhattan_precision
value: 66.81605975723622
- type: manhattan_recall
value: 83.65676876315175
- type: max_accuracy
value: 71.41310883944678
- type: max_ap
value: 79.23359768836457
- type: max_f1
value: 74.38512297540491
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval (default)
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: main_score
value: 78.917
- type: map_at_1
value: 67.281
- type: map_at_10
value: 75.262
- type: map_at_100
value: 75.60900000000001
- type: map_at_1000
value: 75.618
- type: map_at_20
value: 75.50200000000001
- type: map_at_3
value: 73.455
- type: map_at_5
value: 74.657
- type: mrr_at_1
value: 67.43940990516333
- type: mrr_at_10
value: 75.27367989696756
- type: mrr_at_100
value: 75.62029353306437
- type: mrr_at_1000
value: 75.62934741874726
- type: mrr_at_20
value: 75.51356607409173
- type: mrr_at_3
value: 73.5159817351598
- type: mrr_at_5
value: 74.73832103969093
- type: nauc_map_at_1000_diff1
value: 77.26666391867634
- type: nauc_map_at_1000_max
value: 49.928541012203496
- type: nauc_map_at_1000_std
value: -40.494469470474456
- type: nauc_map_at_100_diff1
value: 77.26087423162396
- type: nauc_map_at_100_max
value: 49.944275615664424
- type: nauc_map_at_100_std
value: -40.48299992715398
- type: nauc_map_at_10_diff1
value: 76.97400113500906
- type: nauc_map_at_10_max
value: 49.84177029115674
- type: nauc_map_at_10_std
value: -40.829250876511445
- type: nauc_map_at_1_diff1
value: 81.44050620630395
- type: nauc_map_at_1_max
value: 48.97711944070578
- type: nauc_map_at_1_std
value: -38.963689457570254
- type: nauc_map_at_20_diff1
value: 77.21791353089375
- type: nauc_map_at_20_max
value: 49.958206759079424
- type: nauc_map_at_20_std
value: -40.53067571658996
- type: nauc_map_at_3_diff1
value: 77.3555925208868
- type: nauc_map_at_3_max
value: 49.32158146451256
- type: nauc_map_at_3_std
value: -41.93552426981978
- type: nauc_map_at_5_diff1
value: 77.07099950431504
- type: nauc_map_at_5_max
value: 49.54190504495002
- type: nauc_map_at_5_std
value: -41.814968130918096
- type: nauc_mrr_at_1000_diff1
value: 77.31388774540477
- type: nauc_mrr_at_1000_max
value: 49.96779699175759
- type: nauc_mrr_at_1000_std
value: -40.43739645160277
- type: nauc_mrr_at_100_diff1
value: 77.30817786449413
- type: nauc_mrr_at_100_max
value: 49.982514428937655
- type: nauc_mrr_at_100_std
value: -40.42876582797744
- type: nauc_mrr_at_10_diff1
value: 77.02048060465756
- type: nauc_mrr_at_10_max
value: 49.87937207270602
- type: nauc_mrr_at_10_std
value: -40.77596560333177
- type: nauc_mrr_at_1_diff1
value: 81.27219599516599
- type: nauc_mrr_at_1_max
value: 49.3083394026327
- type: nauc_mrr_at_1_std
value: -38.31023037552026
- type: nauc_mrr_at_20_diff1
value: 77.26497089316055
- type: nauc_mrr_at_20_max
value: 49.996257597621415
- type: nauc_mrr_at_20_std
value: -40.476723608868014
- type: nauc_mrr_at_3_diff1
value: 77.38971294099257
- type: nauc_mrr_at_3_max
value: 49.38110328987404
- type: nauc_mrr_at_3_std
value: -41.7118646715979
- type: nauc_mrr_at_5_diff1
value: 77.08286142519952
- type: nauc_mrr_at_5_max
value: 49.655249374588685
- type: nauc_mrr_at_5_std
value: -41.48173039989406
- type: nauc_ndcg_at_1000_diff1
value: 76.47399204021758
- type: nauc_ndcg_at_1000_max
value: 50.55770139961048
- type: nauc_ndcg_at_1000_std
value: -39.55650430279072
- type: nauc_ndcg_at_100_diff1
value: 76.29355616618253
- type: nauc_ndcg_at_100_max
value: 51.003608112592936
- type: nauc_ndcg_at_100_std
value: -39.24769744605206
- type: nauc_ndcg_at_10_diff1
value: 74.88697528447634
- type: nauc_ndcg_at_10_max
value: 50.398416372815234
- type: nauc_ndcg_at_10_std
value: -40.76526585772833
- type: nauc_ndcg_at_1_diff1
value: 81.27219599516599
- type: nauc_ndcg_at_1_max
value: 49.3083394026327
- type: nauc_ndcg_at_1_std
value: -38.31023037552026
- type: nauc_ndcg_at_20_diff1
value: 75.85463512091866
- type: nauc_ndcg_at_20_max
value: 50.97338683654334
- type: nauc_ndcg_at_20_std
value: -39.353128774903404
- type: nauc_ndcg_at_3_diff1
value: 75.94015726123543
- type: nauc_ndcg_at_3_max
value: 49.22194251063148
- type: nauc_ndcg_at_3_std
value: -43.040457030630435
- type: nauc_ndcg_at_5_diff1
value: 75.19166189770303
- type: nauc_ndcg_at_5_max
value: 49.65696229797189
- type: nauc_ndcg_at_5_std
value: -42.81534909184424
- type: nauc_precision_at_1000_diff1
value: -14.830901395815788
- type: nauc_precision_at_1000_max
value: 19.686297136854623
- type: nauc_precision_at_1000_std
value: 61.19310360166978
- type: nauc_precision_at_100_diff1
value: 20.55469986751769
- type: nauc_precision_at_100_max
value: 50.78431835075583
- type: nauc_precision_at_100_std
value: 31.54986568374813
- type: nauc_precision_at_10_diff1
value: 45.991938532558656
- type: nauc_precision_at_10_max
value: 46.386318595630385
- type: nauc_precision_at_10_std
value: -23.463011435224608
- type: nauc_precision_at_1_diff1
value: 81.27219599516599
- type: nauc_precision_at_1_max
value: 49.3083394026327
- type: nauc_precision_at_1_std
value: -38.31023037552026
- type: nauc_precision_at_20_diff1
value: 41.53180472410822
- type: nauc_precision_at_20_max
value: 49.89800247204318
- type: nauc_precision_at_20_std
value: -2.4192847331537095
- type: nauc_precision_at_3_diff1
value: 67.37504651209993
- type: nauc_precision_at_3_max
value: 47.893537208629496
- type: nauc_precision_at_3_std
value: -43.2362212382819
- type: nauc_precision_at_5_diff1
value: 60.03438883791718
- type: nauc_precision_at_5_max
value: 48.29770502354206
- type: nauc_precision_at_5_std
value: -40.39588448271546
- type: nauc_recall_at_1000_diff1
value: 71.04741174480844
- type: nauc_recall_at_1000_max
value: 93.19056506596002
- type: nauc_recall_at_1000_std
value: 62.96994797650912
- type: nauc_recall_at_100_diff1
value: 65.00418176852641
- type: nauc_recall_at_100_max
value: 85.27352708427193
- type: nauc_recall_at_100_std
value: 2.8812005546518886
- type: nauc_recall_at_10_diff1
value: 61.263254794998865
- type: nauc_recall_at_10_max
value: 54.17618329507141
- type: nauc_recall_at_10_std
value: -39.80603966142593
- type: nauc_recall_at_1_diff1
value: 81.44050620630395
- type: nauc_recall_at_1_max
value: 48.97711944070578
- type: nauc_recall_at_1_std
value: -38.963689457570254
- type: nauc_recall_at_20_diff1
value: 64.42106091745396
- type: nauc_recall_at_20_max
value: 63.10796640821887
- type: nauc_recall_at_20_std
value: -22.60117424572222
- type: nauc_recall_at_3_diff1
value: 70.66311436592945
- type: nauc_recall_at_3_max
value: 48.69498944323469
- type: nauc_recall_at_3_std
value: -47.37847524874532
- type: nauc_recall_at_5_diff1
value: 66.12701111728848
- type: nauc_recall_at_5_max
value: 49.91763957934711
- type: nauc_recall_at_5_std
value: -48.173252920584126
- type: ndcg_at_1
value: 67.43900000000001
- type: ndcg_at_10
value: 78.917
- type: ndcg_at_100
value: 80.53399999999999
- type: ndcg_at_1000
value: 80.768
- type: ndcg_at_20
value: 79.813
- type: ndcg_at_3
value: 75.37
- type: ndcg_at_5
value: 77.551
- type: precision_at_1
value: 67.43900000000001
- type: precision_at_10
value: 9.115
- type: precision_at_100
value: 0.985
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.737
- type: precision_at_3
value: 27.081
- type: precision_at_5
value: 17.345
- type: recall_at_1
value: 67.281
- type: recall_at_10
value: 90.2
- type: recall_at_100
value: 97.576
- type: recall_at_1000
value: 99.368
- type: recall_at_20
value: 93.783
- type: recall_at_3
value: 80.822
- type: recall_at_5
value: 86.091
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.041
- type: map_at_10
value: 18.662
- type: map_at_100
value: 26.054
- type: map_at_1000
value: 27.769
- type: map_at_20
value: 21.499
- type: map_at_3
value: 13.628000000000002
- type: map_at_5
value: 15.617
- type: mrr_at_1
value: 67.25
- type: mrr_at_10
value: 74.673
- type: mrr_at_100
value: 75.022
- type: mrr_at_1000
value: 75.031
- type: mrr_at_20
value: 74.895
- type: mrr_at_3
value: 73.042
- type: mrr_at_5
value: 74.179
- type: ndcg_at_1
value: 55.75
- type: ndcg_at_10
value: 41.004000000000005
- type: ndcg_at_100
value: 44.912
- type: ndcg_at_1000
value: 51.946000000000005
- type: ndcg_at_20
value: 40.195
- type: ndcg_at_3
value: 45.803
- type: ndcg_at_5
value: 42.976
- type: precision_at_1
value: 67.25
- type: precision_at_10
value: 31.874999999999996
- type: precision_at_100
value: 10.37
- type: precision_at_1000
value: 2.1430000000000002
- type: precision_at_20
value: 24.275
- type: precision_at_3
value: 48.417
- type: precision_at_5
value: 40.2
- type: recall_at_1
value: 9.041
- type: recall_at_10
value: 23.592
- type: recall_at_100
value: 49.476
- type: recall_at_1000
value: 71.677
- type: recall_at_20
value: 30.153000000000002
- type: recall_at_3
value: 14.777000000000001
- type: recall_at_5
value: 17.829
- type: main_score
value: 41.004000000000005
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval (default)
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: main_score
value: 83.134
- type: map_at_1
value: 23.907999999999998
- type: map_at_10
value: 74.566
- type: map_at_100
value: 77.706
- type: map_at_1000
value: 77.762
- type: map_at_20
value: 76.943
- type: map_at_3
value: 50.971999999999994
- type: map_at_5
value: 64.429
- type: mrr_at_1
value: 84.8
- type: mrr_at_10
value: 89.73218253968246
- type: mrr_at_100
value: 89.82853630655774
- type: mrr_at_1000
value: 89.83170411703153
- type: mrr_at_20
value: 89.79582030091501
- type: mrr_at_3
value: 89.32499999999992
- type: mrr_at_5
value: 89.58749999999992
- type: nauc_map_at_1000_diff1
value: -2.2736020650163717
- type: nauc_map_at_1000_max
value: 45.3937519555142
- type: nauc_map_at_1000_std
value: 10.824778228268581
- type: nauc_map_at_100_diff1
value: -2.2662939752750066
- type: nauc_map_at_100_max
value: 45.423960626031366
- type: nauc_map_at_100_std
value: 10.804239351738717
- type: nauc_map_at_10_diff1
value: 0.9395752585654343
- type: nauc_map_at_10_max
value: 42.53814836940551
- type: nauc_map_at_10_std
value: 0.7199313235265218
- type: nauc_map_at_1_diff1
value: 45.19415865267676
- type: nauc_map_at_1_max
value: -1.7261947382471912
- type: nauc_map_at_1_std
value: -32.16144291613605
- type: nauc_map_at_20_diff1
value: -1.884514152147472
- type: nauc_map_at_20_max
value: 44.830401115927174
- type: nauc_map_at_20_std
value: 8.118530414377219
- type: nauc_map_at_3_diff1
value: 25.678881127059967
- type: nauc_map_at_3_max
value: 12.191400431839758
- type: nauc_map_at_3_std
value: -27.201740587642327
- type: nauc_map_at_5_diff1
value: 13.227128780829572
- type: nauc_map_at_5_max
value: 26.978282739708977
- type: nauc_map_at_5_std
value: -17.555610348070584
- type: nauc_mrr_at_1000_diff1
value: 21.073512437502178
- type: nauc_mrr_at_1000_max
value: 64.9680257861005
- type: nauc_mrr_at_1000_std
value: 19.626288754404293
- type: nauc_mrr_at_100_diff1
value: 21.074637426957732
- type: nauc_mrr_at_100_max
value: 64.97612675661915
- type: nauc_mrr_at_100_std
value: 19.649504127800878
- type: nauc_mrr_at_10_diff1
value: 21.12003267626651
- type: nauc_mrr_at_10_max
value: 65.24362289059766
- type: nauc_mrr_at_10_std
value: 19.92351276180984
- type: nauc_mrr_at_1_diff1
value: 22.711430629147635
- type: nauc_mrr_at_1_max
value: 58.4059429497403
- type: nauc_mrr_at_1_std
value: 11.967886722567973
- type: nauc_mrr_at_20_diff1
value: 20.98220830510272
- type: nauc_mrr_at_20_max
value: 65.05737535197835
- type: nauc_mrr_at_20_std
value: 19.66672900782771
- type: nauc_mrr_at_3_diff1
value: 20.924796220048528
- type: nauc_mrr_at_3_max
value: 65.71388669932584
- type: nauc_mrr_at_3_std
value: 20.05912197134477
- type: nauc_mrr_at_5_diff1
value: 20.61978649468208
- type: nauc_mrr_at_5_max
value: 65.50709154526211
- type: nauc_mrr_at_5_std
value: 20.241434276181838
- type: nauc_ndcg_at_1000_diff1
value: 0.25363171946133656
- type: nauc_ndcg_at_1000_max
value: 54.12840465309885
- type: nauc_ndcg_at_1000_std
value: 20.749184325412546
- type: nauc_ndcg_at_100_diff1
value: 0.15649430250272792
- type: nauc_ndcg_at_100_max
value: 54.47995322413234
- type: nauc_ndcg_at_100_std
value: 21.266786634233267
- type: nauc_ndcg_at_10_diff1
value: 0.14579250840386346
- type: nauc_ndcg_at_10_max
value: 49.8643037948353
- type: nauc_ndcg_at_10_std
value: 12.960701643914216
- type: nauc_ndcg_at_1_diff1
value: 22.711430629147635
- type: nauc_ndcg_at_1_max
value: 58.4059429497403
- type: nauc_ndcg_at_1_std
value: 11.967886722567973
- type: nauc_ndcg_at_20_diff1
value: -0.6701559981776763
- type: nauc_ndcg_at_20_max
value: 52.95443437012488
- type: nauc_ndcg_at_20_std
value: 16.708883972005758
- type: nauc_ndcg_at_3_diff1
value: -0.19084922341962388
- type: nauc_ndcg_at_3_max
value: 46.2110230886874
- type: nauc_ndcg_at_3_std
value: 13.363250229683038
- type: nauc_ndcg_at_5_diff1
value: 0.9840019268192548
- type: nauc_ndcg_at_5_max
value: 43.56594891798146
- type: nauc_ndcg_at_5_std
value: 8.577017104088146
- type: nauc_precision_at_1000_diff1
value: -30.779179091501145
- type: nauc_precision_at_1000_max
value: 16.056094258615673
- type: nauc_precision_at_1000_std
value: 49.96303902363283
- type: nauc_precision_at_100_diff1
value: -31.583236638899585
- type: nauc_precision_at_100_max
value: 19.16571713603373
- type: nauc_precision_at_100_std
value: 51.870647903980036
- type: nauc_precision_at_10_diff1
value: -35.62134572732597
- type: nauc_precision_at_10_max
value: 31.6935186494612
- type: nauc_precision_at_10_std
value: 46.68659723766723
- type: nauc_precision_at_1_diff1
value: 22.711430629147635
- type: nauc_precision_at_1_max
value: 58.4059429497403
- type: nauc_precision_at_1_std
value: 11.967886722567973
- type: nauc_precision_at_20_diff1
value: -33.875460046920495
- type: nauc_precision_at_20_max
value: 24.188420133566442
- type: nauc_precision_at_20_std
value: 50.02387762958483
- type: nauc_precision_at_3_diff1
value: -28.875998450906827
- type: nauc_precision_at_3_max
value: 44.77058831167941
- type: nauc_precision_at_3_std
value: 31.77993710437207
- type: nauc_precision_at_5_diff1
value: -34.92525440306491
- type: nauc_precision_at_5_max
value: 39.855219917077086
- type: nauc_precision_at_5_std
value: 37.95432046169299
- type: nauc_recall_at_1000_diff1
value: -14.293309371874733
- type: nauc_recall_at_1000_max
value: 59.06948692482579
- type: nauc_recall_at_1000_std
value: 62.586254868312686
- type: nauc_recall_at_100_diff1
value: -4.344100947212704
- type: nauc_recall_at_100_max
value: 58.42120421043602
- type: nauc_recall_at_100_std
value: 46.48562009316997
- type: nauc_recall_at_10_diff1
value: 0.04948662912161709
- type: nauc_recall_at_10_max
value: 42.42809687119093
- type: nauc_recall_at_10_std
value: 0.6892504250411409
- type: nauc_recall_at_1_diff1
value: 45.19415865267676
- type: nauc_recall_at_1_max
value: -1.7261947382471912
- type: nauc_recall_at_1_std
value: -32.16144291613605
- type: nauc_recall_at_20_diff1
value: -7.634587864605111
- type: nauc_recall_at_20_max
value: 49.21327187174134
- type: nauc_recall_at_20_std
value: 16.408481068336346
- type: nauc_recall_at_3_diff1
value: 24.72546591038644
- type: nauc_recall_at_3_max
value: 6.620763400972902
- type: nauc_recall_at_3_std
value: -29.994703323331684
- type: nauc_recall_at_5_diff1
value: 12.65527364845842
- type: nauc_recall_at_5_max
value: 20.400121385794694
- type: nauc_recall_at_5_std
value: -22.34284568447213
- type: ndcg_at_1
value: 84.8
- type: ndcg_at_10
value: 83.134
- type: ndcg_at_100
value: 86.628
- type: ndcg_at_1000
value: 87.151
- type: ndcg_at_20
value: 85.092
- type: ndcg_at_3
value: 81.228
- type: ndcg_at_5
value: 80.2
- type: precision_at_1
value: 84.8
- type: precision_at_10
value: 40.394999999999996
- type: precision_at_100
value: 4.745
- type: precision_at_1000
value: 0.488
- type: precision_at_20
value: 22.245
- type: precision_at_3
value: 73.25
- type: precision_at_5
value: 61.86000000000001
- type: recall_at_1
value: 23.907999999999998
- type: recall_at_10
value: 85.346
- type: recall_at_100
value: 96.515
- type: recall_at_1000
value: 99.156
- type: recall_at_20
value: 91.377
- type: recall_at_3
value: 54.135
- type: recall_at_5
value: 70.488
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval (default)
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: main_score
value: 60.887
- type: map_at_1
value: 46.6
- type: map_at_10
value: 56.035000000000004
- type: map_at_100
value: 56.741
- type: map_at_1000
value: 56.764
- type: map_at_20
value: 56.513999999999996
- type: map_at_3
value: 53.733
- type: map_at_5
value: 54.913000000000004
- type: mrr_at_1
value: 46.6
- type: mrr_at_10
value: 56.034523809523776
- type: mrr_at_100
value: 56.74056360434383
- type: mrr_at_1000
value: 56.76373487222486
- type: mrr_at_20
value: 56.51374873879128
- type: mrr_at_3
value: 53.73333333333328
- type: mrr_at_5
value: 54.91333333333327
- type: nauc_map_at_1000_diff1
value: 65.13546939953387
- type: nauc_map_at_1000_max
value: 43.358890946774494
- type: nauc_map_at_1000_std
value: -9.973282105235036
- type: nauc_map_at_100_diff1
value: 65.12449309472493
- type: nauc_map_at_100_max
value: 43.377100882923145
- type: nauc_map_at_100_std
value: -9.971781228240555
- type: nauc_map_at_10_diff1
value: 64.83020018537475
- type: nauc_map_at_10_max
value: 43.25969482323034
- type: nauc_map_at_10_std
value: -10.120272176001547
- type: nauc_map_at_1_diff1
value: 69.58727592100516
- type: nauc_map_at_1_max
value: 38.236494689522026
- type: nauc_map_at_1_std
value: -14.833390831689597
- type: nauc_map_at_20_diff1
value: 65.01159809914586
- type: nauc_map_at_20_max
value: 43.33440319829618
- type: nauc_map_at_20_std
value: -10.039958228659726
- type: nauc_map_at_3_diff1
value: 65.2396323885909
- type: nauc_map_at_3_max
value: 42.26904017378952
- type: nauc_map_at_3_std
value: -11.793017036934044
- type: nauc_map_at_5_diff1
value: 64.96397227898036
- type: nauc_map_at_5_max
value: 43.231333789145424
- type: nauc_map_at_5_std
value: -10.349933732151372
- type: nauc_mrr_at_1000_diff1
value: 65.13546939953387
- type: nauc_mrr_at_1000_max
value: 43.358890946774494
- type: nauc_mrr_at_1000_std
value: -9.973282105235036
- type: nauc_mrr_at_100_diff1
value: 65.12449309472493
- type: nauc_mrr_at_100_max
value: 43.377100882923145
- type: nauc_mrr_at_100_std
value: -9.971781228240555
- type: nauc_mrr_at_10_diff1
value: 64.83020018537475
- type: nauc_mrr_at_10_max
value: 43.25969482323034
- type: nauc_mrr_at_10_std
value: -10.120272176001547
- type: nauc_mrr_at_1_diff1
value: 69.58727592100516
- type: nauc_mrr_at_1_max
value: 38.236494689522026
- type: nauc_mrr_at_1_std
value: -14.833390831689597
- type: nauc_mrr_at_20_diff1
value: 65.01159809914586
- type: nauc_mrr_at_20_max
value: 43.33440319829618
- type: nauc_mrr_at_20_std
value: -10.039958228659726
- type: nauc_mrr_at_3_diff1
value: 65.2396323885909
- type: nauc_mrr_at_3_max
value: 42.26904017378952
- type: nauc_mrr_at_3_std
value: -11.793017036934044
- type: nauc_mrr_at_5_diff1
value: 64.96397227898036
- type: nauc_mrr_at_5_max
value: 43.231333789145424
- type: nauc_mrr_at_5_std
value: -10.349933732151372
- type: nauc_ndcg_at_1000_diff1
value: 64.26802655199876
- type: nauc_ndcg_at_1000_max
value: 45.854310744745185
- type: nauc_ndcg_at_1000_std
value: -6.184417305204082
- type: nauc_ndcg_at_100_diff1
value: 63.99268329609827
- type: nauc_ndcg_at_100_max
value: 46.31270128748375
- type: nauc_ndcg_at_100_std
value: -6.1393433180558965
- type: nauc_ndcg_at_10_diff1
value: 62.6735104141137
- type: nauc_ndcg_at_10_max
value: 45.54954799462398
- type: nauc_ndcg_at_10_std
value: -7.348851199024871
- type: nauc_ndcg_at_1_diff1
value: 69.58727592100516
- type: nauc_ndcg_at_1_max
value: 38.236494689522026
- type: nauc_ndcg_at_1_std
value: -14.833390831689597
- type: nauc_ndcg_at_20_diff1
value: 63.25899651677274
- type: nauc_ndcg_at_20_max
value: 45.952196968886014
- type: nauc_ndcg_at_20_std
value: -6.807607465125713
- type: nauc_ndcg_at_3_diff1
value: 63.65618337476822
- type: nauc_ndcg_at_3_max
value: 43.507890965228945
- type: nauc_ndcg_at_3_std
value: -10.73845622217601
- type: nauc_ndcg_at_5_diff1
value: 63.079162432921855
- type: nauc_ndcg_at_5_max
value: 45.38303443868148
- type: nauc_ndcg_at_5_std
value: -8.063657824835534
- type: nauc_precision_at_1000_diff1
value: 63.01459977930557
- type: nauc_precision_at_1000_max
value: 92.4253034547151
- type: nauc_precision_at_1000_std
value: 84.4845513963158
- type: nauc_precision_at_100_diff1
value: 57.17217119405878
- type: nauc_precision_at_100_max
value: 80.70049725316484
- type: nauc_precision_at_100_std
value: 41.78392287147403
- type: nauc_precision_at_10_diff1
value: 53.115665404390725
- type: nauc_precision_at_10_max
value: 55.73825657341263
- type: nauc_precision_at_10_std
value: 5.406226305013257
- type: nauc_precision_at_1_diff1
value: 69.58727592100516
- type: nauc_precision_at_1_max
value: 38.236494689522026
- type: nauc_precision_at_1_std
value: -14.833390831689597
- type: nauc_precision_at_20_diff1
value: 53.77730697622828
- type: nauc_precision_at_20_max
value: 61.88170819253054
- type: nauc_precision_at_20_std
value: 13.678730470003856
- type: nauc_precision_at_3_diff1
value: 58.580196992291455
- type: nauc_precision_at_3_max
value: 47.404834585376626
- type: nauc_precision_at_3_std
value: -7.374978769024051
- type: nauc_precision_at_5_diff1
value: 56.44564652606437
- type: nauc_precision_at_5_max
value: 53.08973975162324
- type: nauc_precision_at_5_std
value: 0.22762700141423803
- type: nauc_recall_at_1000_diff1
value: 63.01459977930565
- type: nauc_recall_at_1000_max
value: 92.42530345471532
- type: nauc_recall_at_1000_std
value: 84.48455139631602
- type: nauc_recall_at_100_diff1
value: 57.17217119405904
- type: nauc_recall_at_100_max
value: 80.70049725316468
- type: nauc_recall_at_100_std
value: 41.783922871474275
- type: nauc_recall_at_10_diff1
value: 53.11566540439087
- type: nauc_recall_at_10_max
value: 55.738256573412656
- type: nauc_recall_at_10_std
value: 5.406226305013377
- type: nauc_recall_at_1_diff1
value: 69.58727592100516
- type: nauc_recall_at_1_max
value: 38.236494689522026
- type: nauc_recall_at_1_std
value: -14.833390831689597
- type: nauc_recall_at_20_diff1
value: 53.77730697622846
- type: nauc_recall_at_20_max
value: 61.881708192530525
- type: nauc_recall_at_20_std
value: 13.678730470003947
- type: nauc_recall_at_3_diff1
value: 58.5801969922914
- type: nauc_recall_at_3_max
value: 47.40483458537654
- type: nauc_recall_at_3_std
value: -7.37497876902413
- type: nauc_recall_at_5_diff1
value: 56.445646526064394
- type: nauc_recall_at_5_max
value: 53.08973975162332
- type: nauc_recall_at_5_std
value: 0.22762700141428024
- type: ndcg_at_1
value: 46.6
- type: ndcg_at_10
value: 60.887
- type: ndcg_at_100
value: 64.18199999999999
- type: ndcg_at_1000
value: 64.726
- type: ndcg_at_20
value: 62.614999999999995
- type: ndcg_at_3
value: 56.038
- type: ndcg_at_5
value: 58.150999999999996
- type: precision_at_1
value: 46.6
- type: precision_at_10
value: 7.630000000000001
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.096
- type: precision_at_20
value: 4.154999999999999
- type: precision_at_3
value: 20.9
- type: precision_at_5
value: 13.56
- type: recall_at_1
value: 46.6
- type: recall_at_10
value: 76.3
- type: recall_at_100
value: 91.4
- type: recall_at_1000
value: 95.6
- type: recall_at_20
value: 83.1
- type: recall_at_3
value: 62.7
- type: recall_at_5
value: 67.80000000000001
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 73.29999999999998
- type: f1
value: 67.71473706580302
- type: f1_weighted
value: 74.83537255312045
- type: main_score
value: 73.29999999999998
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 78.371
- type: map_at_10
value: 85.762
- type: map_at_100
value: 85.954
- type: map_at_1000
value: 85.966
- type: map_at_20
value: 85.887
- type: map_at_3
value: 84.854
- type: map_at_5
value: 85.408
- type: mrr_at_1
value: 84.443
- type: mrr_at_10
value: 90.432
- type: mrr_at_100
value: 90.483
- type: mrr_at_1000
value: 90.484
- type: mrr_at_20
value: 90.473
- type: mrr_at_3
value: 89.89399999999999
- type: mrr_at_5
value: 90.244
- type: ndcg_at_1
value: 84.443
- type: ndcg_at_10
value: 89.05499999999999
- type: ndcg_at_100
value: 89.68
- type: ndcg_at_1000
value: 89.87899999999999
- type: ndcg_at_20
value: 89.381
- type: ndcg_at_3
value: 87.73100000000001
- type: ndcg_at_5
value: 88.425
- type: precision_at_1
value: 84.443
- type: precision_at_10
value: 10.520999999999999
- type: precision_at_100
value: 1.103
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_20
value: 5.362
- type: precision_at_3
value: 33.198
- type: precision_at_5
value: 20.441000000000003
- type: recall_at_1
value: 78.371
- type: recall_at_10
value: 94.594
- type: recall_at_100
value: 96.97099999999999
- type: recall_at_1000
value: 98.18
- type: recall_at_20
value: 95.707
- type: recall_at_3
value: 90.853
- type: recall_at_5
value: 92.74799999999999
- type: main_score
value: 89.05499999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 23.810000000000002
- type: map_at_10
value: 39.051
- type: map_at_100
value: 41.231
- type: map_at_1000
value: 41.376000000000005
- type: map_at_20
value: 40.227000000000004
- type: map_at_3
value: 33.915
- type: map_at_5
value: 36.459
- type: mrr_at_1
value: 48.148
- type: mrr_at_10
value: 55.765
- type: mrr_at_100
value: 56.495
- type: mrr_at_1000
value: 56.525999999999996
- type: mrr_at_20
value: 56.213
- type: mrr_at_3
value: 53.086
- type: mrr_at_5
value: 54.513999999999996
- type: ndcg_at_1
value: 48.148
- type: ndcg_at_10
value: 47.349999999999994
- type: ndcg_at_100
value: 54.61899999999999
- type: ndcg_at_1000
value: 56.830000000000005
- type: ndcg_at_20
value: 50.143
- type: ndcg_at_3
value: 43.108000000000004
- type: ndcg_at_5
value: 44.023
- type: precision_at_1
value: 48.148
- type: precision_at_10
value: 13.441
- type: precision_at_100
value: 2.085
- type: precision_at_1000
value: 0.248
- type: precision_at_20
value: 7.870000000000001
- type: precision_at_3
value: 28.909000000000002
- type: precision_at_5
value: 20.957
- type: recall_at_1
value: 23.810000000000002
- type: recall_at_10
value: 54.303000000000004
- type: recall_at_100
value: 81.363
- type: recall_at_1000
value: 94.391
- type: recall_at_20
value: 63.056999999999995
- type: recall_at_3
value: 38.098
- type: recall_at_5
value: 44.414
- type: main_score
value: 47.349999999999994
- task:
type: Classification
dataset:
name: MTEB GeoreviewClassification (default)
type: ai-forever/georeview-classification
config: default
split: test
revision: 3765c0d1de6b7d264bc459433c45e5a75513839c
metrics:
- type: accuracy
value: 48.0126953125
- type: f1
value: 47.65764016160488
- type: f1_weighted
value: 47.65701659482088
- type: main_score
value: 48.0126953125
- task:
type: Clustering
dataset:
name: MTEB GeoreviewClusteringP2P (default)
type: ai-forever/georeview-clustering-p2p
config: default
split: test
revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec
metrics:
- type: main_score
value: 73.62357853672266
- type: v_measure
value: 73.62357853672266
- type: v_measure_std
value: 0.5942247545535766
- task:
type: Retrieval
dataset:
name: MTEB GerDaLIR (default)
type: jinaai/ger_da_lir
config: default
split: test
revision: 0bb47f1d73827e96964edb84dfe552f62f4fd5eb
metrics:
- type: main_score
value: 16.227
- type: map_at_1
value: 8.082
- type: map_at_10
value: 12.959999999999999
- type: map_at_100
value: 13.923
- type: map_at_1000
value: 14.030999999999999
- type: map_at_20
value: 13.453000000000001
- type: map_at_3
value: 11.018
- type: map_at_5
value: 12.056000000000001
- type: mrr_at_1
value: 8.993332249146203
- type: mrr_at_10
value: 13.994013092850247
- type: mrr_at_100
value: 14.913737673149308
- type: mrr_at_1000
value: 15.00843809934407
- type: mrr_at_20
value: 14.470268462334007
- type: mrr_at_3
value: 12.000596302921846
- type: mrr_at_5
value: 13.070689000921561
- type: nauc_map_at_1000_diff1
value: 28.559639584013286
- type: nauc_map_at_1000_max
value: 25.533800126086714
- type: nauc_map_at_1000_std
value: 9.826551026628666
- type: nauc_map_at_100_diff1
value: 28.544724499331696
- type: nauc_map_at_100_max
value: 25.46734324526386
- type: nauc_map_at_100_std
value: 9.739314481785591
- type: nauc_map_at_10_diff1
value: 28.77447517718118
- type: nauc_map_at_10_max
value: 24.7431615237795
- type: nauc_map_at_10_std
value: 8.349878188033646
- type: nauc_map_at_1_diff1
value: 37.405452629895514
- type: nauc_map_at_1_max
value: 24.444208978394023
- type: nauc_map_at_1_std
value: 4.043820373810528
- type: nauc_map_at_20_diff1
value: 28.69764217789062
- type: nauc_map_at_20_max
value: 25.111848355996496
- type: nauc_map_at_20_std
value: 9.034829905305918
- type: nauc_map_at_3_diff1
value: 30.89053285076882
- type: nauc_map_at_3_max
value: 24.862886115911152
- type: nauc_map_at_3_std
value: 6.654260832396586
- type: nauc_map_at_5_diff1
value: 29.230629676604263
- type: nauc_map_at_5_max
value: 24.374302288018583
- type: nauc_map_at_5_std
value: 7.341846952319046
- type: nauc_mrr_at_1000_diff1
value: 28.086147932781426
- type: nauc_mrr_at_1000_max
value: 25.98698528264653
- type: nauc_mrr_at_1000_std
value: 9.917554348624545
- type: nauc_mrr_at_100_diff1
value: 28.069163279791336
- type: nauc_mrr_at_100_max
value: 25.949440010886804
- type: nauc_mrr_at_100_std
value: 9.874340979732578
- type: nauc_mrr_at_10_diff1
value: 28.239920869530046
- type: nauc_mrr_at_10_max
value: 25.351271409498576
- type: nauc_mrr_at_10_std
value: 8.669862759875162
- type: nauc_mrr_at_1_diff1
value: 35.96543040207856
- type: nauc_mrr_at_1_max
value: 25.488936487231967
- type: nauc_mrr_at_1_std
value: 4.76439131038345
- type: nauc_mrr_at_20_diff1
value: 28.18865871284607
- type: nauc_mrr_at_20_max
value: 25.67121763344746
- type: nauc_mrr_at_20_std
value: 9.297910707519472
- type: nauc_mrr_at_3_diff1
value: 30.166714199740717
- type: nauc_mrr_at_3_max
value: 25.541792491964877
- type: nauc_mrr_at_3_std
value: 7.083090296398472
- type: nauc_mrr_at_5_diff1
value: 28.68475284656478
- type: nauc_mrr_at_5_max
value: 24.994071363482835
- type: nauc_mrr_at_5_std
value: 7.687507254902365
- type: nauc_ndcg_at_1000_diff1
value: 25.292792613586467
- type: nauc_ndcg_at_1000_max
value: 29.211905289377178
- type: nauc_ndcg_at_1000_std
value: 18.088867467320355
- type: nauc_ndcg_at_100_diff1
value: 25.026905011089152
- type: nauc_ndcg_at_100_max
value: 27.98822281254431
- type: nauc_ndcg_at_100_std
value: 16.69456904301902
- type: nauc_ndcg_at_10_diff1
value: 25.972279051109503
- type: nauc_ndcg_at_10_max
value: 24.86486482734957
- type: nauc_ndcg_at_10_std
value: 10.398605822106353
- type: nauc_ndcg_at_1_diff1
value: 36.134710485184826
- type: nauc_ndcg_at_1_max
value: 25.384572790326025
- type: nauc_ndcg_at_1_std
value: 4.591863033771824
- type: nauc_ndcg_at_20_diff1
value: 25.850033660205536
- type: nauc_ndcg_at_20_max
value: 25.944243193140515
- type: nauc_ndcg_at_20_std
value: 12.392409721204892
- type: nauc_ndcg_at_3_diff1
value: 29.1966056380018
- type: nauc_ndcg_at_3_max
value: 24.978843156259913
- type: nauc_ndcg_at_3_std
value: 7.353914459205087
- type: nauc_ndcg_at_5_diff1
value: 26.795315295756282
- type: nauc_ndcg_at_5_max
value: 24.1196789150412
- type: nauc_ndcg_at_5_std
value: 8.311970988265172
- type: nauc_precision_at_1000_diff1
value: 9.128270550217984
- type: nauc_precision_at_1000_max
value: 35.79286915973607
- type: nauc_precision_at_1000_std
value: 39.15669472887154
- type: nauc_precision_at_100_diff1
value: 14.770289799034384
- type: nauc_precision_at_100_max
value: 34.58262232264337
- type: nauc_precision_at_100_std
value: 34.101148102981384
- type: nauc_precision_at_10_diff1
value: 19.899104673118178
- type: nauc_precision_at_10_max
value: 26.636940338985625
- type: nauc_precision_at_10_std
value: 15.73871357255849
- type: nauc_precision_at_1_diff1
value: 36.134710485184826
- type: nauc_precision_at_1_max
value: 25.384572790326025
- type: nauc_precision_at_1_std
value: 4.591863033771824
- type: nauc_precision_at_20_diff1
value: 19.423457975148942
- type: nauc_precision_at_20_max
value: 29.58123490878582
- type: nauc_precision_at_20_std
value: 20.847850110821618
- type: nauc_precision_at_3_diff1
value: 24.986416623492918
- type: nauc_precision_at_3_max
value: 25.973548400472975
- type: nauc_precision_at_3_std
value: 9.486410455972823
- type: nauc_precision_at_5_diff1
value: 21.237741424923332
- type: nauc_precision_at_5_max
value: 24.647141028200164
- type: nauc_precision_at_5_std
value: 11.102785032334147
- type: nauc_recall_at_1000_diff1
value: 15.999714888817829
- type: nauc_recall_at_1000_max
value: 44.34701908906545
- type: nauc_recall_at_1000_std
value: 51.13471291594717
- type: nauc_recall_at_100_diff1
value: 17.401714890483706
- type: nauc_recall_at_100_max
value: 33.39042631654808
- type: nauc_recall_at_100_std
value: 33.944446168451584
- type: nauc_recall_at_10_diff1
value: 20.30036232399894
- type: nauc_recall_at_10_max
value: 24.006718284396786
- type: nauc_recall_at_10_std
value: 14.049375108518669
- type: nauc_recall_at_1_diff1
value: 37.405452629895514
- type: nauc_recall_at_1_max
value: 24.444208978394023
- type: nauc_recall_at_1_std
value: 4.043820373810528
- type: nauc_recall_at_20_diff1
value: 20.23582802609045
- type: nauc_recall_at_20_max
value: 26.408063410785243
- type: nauc_recall_at_20_std
value: 18.617479515468112
- type: nauc_recall_at_3_diff1
value: 25.53221830103098
- type: nauc_recall_at_3_max
value: 24.283712329152678
- type: nauc_recall_at_3_std
value: 8.428947805841867
- type: nauc_recall_at_5_diff1
value: 21.741499601020823
- type: nauc_recall_at_5_max
value: 22.754924586295296
- type: nauc_recall_at_5_std
value: 9.966736688169814
- type: ndcg_at_1
value: 8.977
- type: ndcg_at_10
value: 16.227
- type: ndcg_at_100
value: 21.417
- type: ndcg_at_1000
value: 24.451
- type: ndcg_at_20
value: 17.982
- type: ndcg_at_3
value: 12.206999999999999
- type: ndcg_at_5
value: 14.059
- type: precision_at_1
value: 8.977
- type: precision_at_10
value: 2.933
- type: precision_at_100
value: 0.59
- type: precision_at_1000
value: 0.087
- type: precision_at_20
value: 1.8599999999999999
- type: precision_at_3
value: 5.550999999999999
- type: precision_at_5
value: 4.340999999999999
- type: recall_at_1
value: 8.082
- type: recall_at_10
value: 25.52
- type: recall_at_100
value: 50.32
- type: recall_at_1000
value: 74.021
- type: recall_at_20
value: 32.229
- type: recall_at_3
value: 14.66
- type: recall_at_5
value: 19.062
- task:
type: Retrieval
dataset:
name: MTEB GermanDPR (default)
type: deepset/germandpr
config: default
split: test
revision: 5129d02422a66be600ac89cd3e8531b4f97d347d
metrics:
- type: main_score
value: 82.422
- type: map_at_1
value: 64.39
- type: map_at_10
value: 77.273
- type: map_at_100
value: 77.375
- type: map_at_1000
value: 77.376
- type: map_at_20
value: 77.351
- type: map_at_3
value: 75.46300000000001
- type: map_at_5
value: 76.878
- type: mrr_at_1
value: 64.19512195121952
- type: mrr_at_10
value: 77.15842044134736
- type: mrr_at_100
value: 77.2604854308704
- type: mrr_at_1000
value: 77.26087882190109
- type: mrr_at_20
value: 77.23572154560611
- type: mrr_at_3
value: 75.34959349593504
- type: mrr_at_5
value: 76.76422764227652
- type: nauc_map_at_1000_diff1
value: 49.73135253389972
- type: nauc_map_at_1000_max
value: 8.665570717396145
- type: nauc_map_at_1000_std
value: -25.920927572114522
- type: nauc_map_at_100_diff1
value: 49.729170775336605
- type: nauc_map_at_100_max
value: 8.66717979705074
- type: nauc_map_at_100_std
value: -25.918338868918596
- type: nauc_map_at_10_diff1
value: 49.708681691445925
- type: nauc_map_at_10_max
value: 8.830640635692113
- type: nauc_map_at_10_std
value: -25.843238986304858
- type: nauc_map_at_1_diff1
value: 51.750022350988914
- type: nauc_map_at_1_max
value: 3.599863010364626
- type: nauc_map_at_1_std
value: -27.670122127567314
- type: nauc_map_at_20_diff1
value: 49.72609185887161
- type: nauc_map_at_20_max
value: 8.766556053409218
- type: nauc_map_at_20_std
value: -25.85975887517904
- type: nauc_map_at_3_diff1
value: 49.328512536255595
- type: nauc_map_at_3_max
value: 9.475682028996795
- type: nauc_map_at_3_std
value: -26.277349632171017
- type: nauc_map_at_5_diff1
value: 49.42801822186142
- type: nauc_map_at_5_max
value: 8.788822474357252
- type: nauc_map_at_5_std
value: -25.959260882028573
- type: nauc_mrr_at_1000_diff1
value: 50.13038598302397
- type: nauc_mrr_at_1000_max
value: 8.734338637484832
- type: nauc_mrr_at_1000_std
value: -26.653343549855908
- type: nauc_mrr_at_100_diff1
value: 50.12820392111392
- type: nauc_mrr_at_100_max
value: 8.735940503917966
- type: nauc_mrr_at_100_std
value: -26.65074918231251
- type: nauc_mrr_at_10_diff1
value: 50.10567888458267
- type: nauc_mrr_at_10_max
value: 8.898451291748575
- type: nauc_mrr_at_10_std
value: -26.572046921975655
- type: nauc_mrr_at_1_diff1
value: 52.22769994409465
- type: nauc_mrr_at_1_max
value: 3.6490820146062015
- type: nauc_mrr_at_1_std
value: -28.535100562320498
- type: nauc_mrr_at_20_diff1
value: 50.12462222100699
- type: nauc_mrr_at_20_max
value: 8.83487018268756
- type: nauc_mrr_at_20_std
value: -26.591437036958332
- type: nauc_mrr_at_3_diff1
value: 49.6987353700016
- type: nauc_mrr_at_3_max
value: 9.531003760756258
- type: nauc_mrr_at_3_std
value: -26.949799063124818
- type: nauc_mrr_at_5_diff1
value: 49.823881656376585
- type: nauc_mrr_at_5_max
value: 8.850404667985085
- type: nauc_mrr_at_5_std
value: -26.680008966088582
- type: nauc_ndcg_at_1000_diff1
value: 49.41721203361181
- type: nauc_ndcg_at_1000_max
value: 9.41093067609825
- type: nauc_ndcg_at_1000_std
value: -25.499543637737567
- type: nauc_ndcg_at_100_diff1
value: 49.32810419509252
- type: nauc_ndcg_at_100_max
value: 9.476216458766897
- type: nauc_ndcg_at_100_std
value: -25.393856250990414
- type: nauc_ndcg_at_10_diff1
value: 49.181984436623694
- type: nauc_ndcg_at_10_max
value: 10.65234732763274
- type: nauc_ndcg_at_10_std
value: -24.737669349012297
- type: nauc_ndcg_at_1_diff1
value: 51.750022350988914
- type: nauc_ndcg_at_1_max
value: 3.599863010364626
- type: nauc_ndcg_at_1_std
value: -27.670122127567314
- type: nauc_ndcg_at_20_diff1
value: 49.275394594995056
- type: nauc_ndcg_at_20_max
value: 10.402059796651923
- type: nauc_ndcg_at_20_std
value: -24.82329915806705
- type: nauc_ndcg_at_3_diff1
value: 48.22614352152889
- type: nauc_ndcg_at_3_max
value: 11.67464280791404
- type: nauc_ndcg_at_3_std
value: -25.867824868234095
- type: nauc_ndcg_at_5_diff1
value: 48.35583502987241
- type: nauc_ndcg_at_5_max
value: 10.494278750448451
- type: nauc_ndcg_at_5_std
value: -25.11599634172764
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_100_diff1
value: -56.39478136433852
- type: nauc_precision_at_100_max
value: 86.93518577529493
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_10_diff1
value: 38.662829729133094
- type: nauc_precision_at_10_max
value: 56.38018435740605
- type: nauc_precision_at_10_std
value: 6.288091897081105
- type: nauc_precision_at_1_diff1
value: 51.750022350988914
- type: nauc_precision_at_1_max
value: 3.599863010364626
- type: nauc_precision_at_1_std
value: -27.670122127567314
- type: nauc_precision_at_20_diff1
value: 34.739153182429085
- type: nauc_precision_at_20_max
value: 84.86908403000989
- type: nauc_precision_at_20_std
value: 29.156199421219455
- type: nauc_precision_at_3_diff1
value: 42.09287362529135
- type: nauc_precision_at_3_max
value: 23.629152759287074
- type: nauc_precision_at_3_std
value: -23.721376911302492
- type: nauc_precision_at_5_diff1
value: 36.03866171924644
- type: nauc_precision_at_5_max
value: 29.166173558775327
- type: nauc_precision_at_5_std
value: -15.096374563068448
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: -56.39478136433541
- type: nauc_recall_at_100_max
value: 86.93518577528111
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_10_diff1
value: 38.66282972913384
- type: nauc_recall_at_10_max
value: 56.3801843574071
- type: nauc_recall_at_10_std
value: 6.288091897082639
- type: nauc_recall_at_1_diff1
value: 51.750022350988914
- type: nauc_recall_at_1_max
value: 3.599863010364626
- type: nauc_recall_at_1_std
value: -27.670122127567314
- type: nauc_recall_at_20_diff1
value: 34.7391531824321
- type: nauc_recall_at_20_max
value: 84.86908403001016
- type: nauc_recall_at_20_std
value: 29.156199421220748
- type: nauc_recall_at_3_diff1
value: 42.09287362529107
- type: nauc_recall_at_3_max
value: 23.629152759286946
- type: nauc_recall_at_3_std
value: -23.72137691130291
- type: nauc_recall_at_5_diff1
value: 36.0386617192469
- type: nauc_recall_at_5_max
value: 29.1661735587759
- type: nauc_recall_at_5_std
value: -15.09637456306774
- type: ndcg_at_1
value: 64.39
- type: ndcg_at_10
value: 82.422
- type: ndcg_at_100
value: 82.86099999999999
- type: ndcg_at_1000
value: 82.87299999999999
- type: ndcg_at_20
value: 82.67999999999999
- type: ndcg_at_3
value: 78.967
- type: ndcg_at_5
value: 81.50699999999999
- type: precision_at_1
value: 64.39
- type: precision_at_10
value: 9.795
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.946
- type: precision_at_3
value: 29.691000000000003
- type: precision_at_5
value: 19.044
- type: recall_at_1
value: 64.39
- type: recall_at_10
value: 97.951
- type: recall_at_100
value: 99.902
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 98.92699999999999
- type: recall_at_3
value: 89.07300000000001
- type: recall_at_5
value: 95.22
- task:
type: Retrieval
dataset:
name: MTEB GermanQuAD-Retrieval (default)
type: mteb/germanquad-retrieval
config: default
split: test
revision: f5c87ae5a2e7a5106606314eef45255f03151bb3
metrics:
- type: main_score
value: 94.15532365396247
- type: map_at_1
value: 90.789
- type: map_at_10
value: 94.24
- type: map_at_100
value: 94.283
- type: map_at_1000
value: 94.284
- type: map_at_20
value: 94.272
- type: map_at_3
value: 93.913
- type: map_at_5
value: 94.155
- type: mrr_at_1
value: 90.78947368421053
- type: mrr_at_10
value: 94.23987411056376
- type: mrr_at_100
value: 94.28320936825
- type: mrr_at_1000
value: 94.28350209115848
- type: mrr_at_20
value: 94.271919092559
- type: mrr_at_3
value: 93.91258318209313
- type: mrr_at_5
value: 94.15532365396247
- type: nauc_map_at_1000_diff1
value: 89.29089310650436
- type: nauc_map_at_1000_max
value: 73.83868784032414
- type: nauc_map_at_1000_std
value: -11.635778561889989
- type: nauc_map_at_100_diff1
value: 89.29077225707755
- type: nauc_map_at_100_max
value: 73.84002740580378
- type: nauc_map_at_100_std
value: -11.644096256165092
- type: nauc_map_at_10_diff1
value: 89.29117612292366
- type: nauc_map_at_10_max
value: 73.97487984981221
- type: nauc_map_at_10_std
value: -11.35191794373827
- type: nauc_map_at_1_diff1
value: 89.35436544117584
- type: nauc_map_at_1_max
value: 70.35936815057701
- type: nauc_map_at_1_std
value: -13.598996360976903
- type: nauc_map_at_20_diff1
value: 89.2530394052653
- type: nauc_map_at_20_max
value: 73.83537529419839
- type: nauc_map_at_20_std
value: -11.628272822028478
- type: nauc_map_at_3_diff1
value: 89.375111893546
- type: nauc_map_at_3_max
value: 74.78900366026112
- type: nauc_map_at_3_std
value: -12.720905253503274
- type: nauc_map_at_5_diff1
value: 89.35358300820893
- type: nauc_map_at_5_max
value: 74.31996219723239
- type: nauc_map_at_5_std
value: -10.768642638210867
- type: nauc_mrr_at_1000_diff1
value: 89.29089310650436
- type: nauc_mrr_at_1000_max
value: 73.83868784032414
- type: nauc_mrr_at_1000_std
value: -11.635778561889989
- type: nauc_mrr_at_100_diff1
value: 89.29077225707755
- type: nauc_mrr_at_100_max
value: 73.84002740580378
- type: nauc_mrr_at_100_std
value: -11.644096256165092
- type: nauc_mrr_at_10_diff1
value: 89.29117612292366
- type: nauc_mrr_at_10_max
value: 73.97487984981221
- type: nauc_mrr_at_10_std
value: -11.35191794373827
- type: nauc_mrr_at_1_diff1
value: 89.35436544117584
- type: nauc_mrr_at_1_max
value: 70.35936815057701
- type: nauc_mrr_at_1_std
value: -13.598996360976903
- type: nauc_mrr_at_20_diff1
value: 89.2530394052653
- type: nauc_mrr_at_20_max
value: 73.83537529419839
- type: nauc_mrr_at_20_std
value: -11.628272822028478
- type: nauc_mrr_at_3_diff1
value: 89.375111893546
- type: nauc_mrr_at_3_max
value: 74.78900366026112
- type: nauc_mrr_at_3_std
value: -12.720905253503274
- type: nauc_mrr_at_5_diff1
value: 89.35358300820893
- type: nauc_mrr_at_5_max
value: 74.31996219723239
- type: nauc_mrr_at_5_std
value: -10.768642638210867
- type: nauc_ndcg_at_1000_diff1
value: 89.27620775856863
- type: nauc_ndcg_at_1000_max
value: 74.2985757362615
- type: nauc_ndcg_at_1000_std
value: -11.236142819703023
- type: nauc_ndcg_at_100_diff1
value: 89.27284787540731
- type: nauc_ndcg_at_100_max
value: 74.33539303365968
- type: nauc_ndcg_at_100_std
value: -11.469413615851936
- type: nauc_ndcg_at_10_diff1
value: 89.21496710661724
- type: nauc_ndcg_at_10_max
value: 75.02035398490516
- type: nauc_ndcg_at_10_std
value: -9.903255803665814
- type: nauc_ndcg_at_1_diff1
value: 89.35436544117584
- type: nauc_ndcg_at_1_max
value: 70.35936815057701
- type: nauc_ndcg_at_1_std
value: -13.598996360976903
- type: nauc_ndcg_at_20_diff1
value: 89.03561289544179
- type: nauc_ndcg_at_20_max
value: 74.4006766600049
- type: nauc_ndcg_at_20_std
value: -11.129237862587743
- type: nauc_ndcg_at_3_diff1
value: 89.46540193201693
- type: nauc_ndcg_at_3_max
value: 76.87093548368378
- type: nauc_ndcg_at_3_std
value: -12.484902872086767
- type: nauc_ndcg_at_5_diff1
value: 89.39924941584766
- type: nauc_ndcg_at_5_max
value: 75.96975269092722
- type: nauc_ndcg_at_5_std
value: -8.180295581144833
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 86.93074003795302
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: -174.07785375176616
- type: nauc_precision_at_10_diff1
value: 87.43064119412082
- type: nauc_precision_at_10_max
value: 90.60785783417448
- type: nauc_precision_at_10_std
value: 15.378710059645906
- type: nauc_precision_at_1_diff1
value: 89.35436544117584
- type: nauc_precision_at_1_max
value: 70.35936815057701
- type: nauc_precision_at_1_std
value: -13.598996360976903
- type: nauc_precision_at_20_diff1
value: 78.78206037685919
- type: nauc_precision_at_20_max
value: 82.52264166455923
- type: nauc_precision_at_20_std
value: -5.95806599216658
- type: nauc_precision_at_3_diff1
value: 90.12709256456401
- type: nauc_precision_at_3_max
value: 90.72678805838154
- type: nauc_precision_at_3_std
value: -11.047599315631993
- type: nauc_precision_at_5_diff1
value: 89.9066873566561
- type: nauc_precision_at_5_max
value: 93.51571626543664
- type: nauc_precision_at_5_std
value: 22.632403279126162
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 86.93074003793416
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: -174.07785375175723
- type: nauc_recall_at_10_diff1
value: 87.43064119411991
- type: nauc_recall_at_10_max
value: 90.60785783417579
- type: nauc_recall_at_10_std
value: 15.378710059643607
- type: nauc_recall_at_1_diff1
value: 89.35436544117584
- type: nauc_recall_at_1_max
value: 70.35936815057701
- type: nauc_recall_at_1_std
value: -13.598996360976903
- type: nauc_recall_at_20_diff1
value: 78.78206037685645
- type: nauc_recall_at_20_max
value: 82.52264166455791
- type: nauc_recall_at_20_std
value: -5.958065992168697
- type: nauc_recall_at_3_diff1
value: 90.12709256456463
- type: nauc_recall_at_3_max
value: 90.7267880583832
- type: nauc_recall_at_3_std
value: -11.047599315631881
- type: nauc_recall_at_5_diff1
value: 89.90668735665676
- type: nauc_recall_at_5_max
value: 93.51571626543753
- type: nauc_recall_at_5_std
value: 22.632403279126112
- type: ndcg_at_1
value: 90.789
- type: ndcg_at_10
value: 95.46
- type: ndcg_at_100
value: 95.652
- type: ndcg_at_1000
value: 95.659
- type: ndcg_at_20
value: 95.575
- type: ndcg_at_3
value: 94.82000000000001
- type: ndcg_at_5
value: 95.26400000000001
- type: precision_at_1
value: 90.789
- type: precision_at_10
value: 9.908999999999999
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.977
- type: precision_at_3
value: 32.471
- type: precision_at_5
value: 19.701
- type: recall_at_1
value: 90.789
- type: recall_at_10
value: 99.093
- type: recall_at_100
value: 99.955
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 99.546
- type: recall_at_3
value: 97.414
- type: recall_at_5
value: 98.503
- task:
type: STS
dataset:
name: MTEB GermanSTSBenchmark (default)
type: jinaai/german-STSbenchmark
config: default
split: test
revision: e36907544d44c3a247898ed81540310442329e20
metrics:
- type: cosine_pearson
value: 86.55319003300265
- type: cosine_spearman
value: 87.50267373081324
- type: euclidean_pearson
value: 87.41630636501863
- type: euclidean_spearman
value: 88.02170803409365
- type: main_score
value: 87.50267373081324
- type: manhattan_pearson
value: 87.33703179056744
- type: manhattan_spearman
value: 87.99192826922514
- type: pearson
value: 86.55319003300265
- type: spearman
value: 87.50267373081324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S (default)
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: main_score
value: 27.477557517301303
- type: v_measure
value: 27.477557517301303
- type: v_measure_std
value: 3.3525736581861336
- task:
type: Classification
dataset:
name: MTEB HeadlineClassification (default)
type: ai-forever/headline-classification
config: default
split: test
revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb
metrics:
- type: accuracy
value: 75.0830078125
- type: f1
value: 75.08863209267814
- type: f1_weighted
value: 75.08895979060917
- type: main_score
value: 75.0830078125
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 38.143
- type: map_at_10
value: 55.916999999999994
- type: map_at_100
value: 56.706
- type: map_at_1000
value: 56.77100000000001
- type: map_at_20
value: 56.367
- type: map_at_3
value: 53.111
- type: map_at_5
value: 54.839000000000006
- type: mrr_at_1
value: 76.286
- type: mrr_at_10
value: 81.879
- type: mrr_at_100
value: 82.09100000000001
- type: mrr_at_1000
value: 82.101
- type: mrr_at_20
value: 82.01
- type: mrr_at_3
value: 80.972
- type: mrr_at_5
value: 81.537
- type: ndcg_at_1
value: 76.286
- type: ndcg_at_10
value: 64.673
- type: ndcg_at_100
value: 67.527
- type: ndcg_at_1000
value: 68.857
- type: ndcg_at_20
value: 65.822
- type: ndcg_at_3
value: 60.616
- type: ndcg_at_5
value: 62.827999999999996
- type: precision_at_1
value: 76.286
- type: precision_at_10
value: 13.196
- type: precision_at_100
value: 1.544
- type: precision_at_1000
value: 0.172
- type: precision_at_20
value: 6.968000000000001
- type: precision_at_3
value: 37.992
- type: precision_at_5
value: 24.54
- type: recall_at_1
value: 38.143
- type: recall_at_10
value: 65.982
- type: recall_at_100
value: 77.225
- type: recall_at_1000
value: 86.077
- type: recall_at_20
value: 69.68299999999999
- type: recall_at_3
value: 56.989000000000004
- type: recall_at_5
value: 61.35
- type: main_score
value: 64.673
- task:
type: Classification
dataset:
name: MTEB IFlyTek (default)
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 41.67756829549827
- type: f1
value: 33.929325579581636
- type: f1_weighted
value: 43.03952025643197
- type: main_score
value: 41.67756829549827
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.90440000000001
- type: ap
value: 88.78663714603425
- type: ap_weighted
value: 88.78663714603425
- type: f1
value: 91.89564361975891
- type: f1_weighted
value: 91.89564361975891
- type: main_score
value: 91.90440000000001
- task:
type: Classification
dataset:
name: MTEB InappropriatenessClassification (default)
type: ai-forever/inappropriateness-classification
config: default
split: test
revision: 601651fdc45ef243751676e62dd7a19f491c0285
metrics:
- type: accuracy
value: 61.0498046875
- type: ap
value: 57.04240566648215
- type: ap_weighted
value: 57.04240566648215
- type: f1
value: 60.867630038606954
- type: f1_weighted
value: 60.867630038606954
- type: main_score
value: 61.0498046875
- task:
type: Classification
dataset:
name: MTEB JDReview (default)
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 83.50844277673546
- type: ap
value: 48.46732380712268
- type: ap_weighted
value: 48.46732380712268
- type: f1
value: 77.43967451387445
- type: f1_weighted
value: 84.78462929014114
- type: main_score
value: 83.50844277673546
- task:
type: Classification
dataset:
name: MTEB KinopoiskClassification (default)
type: ai-forever/kinopoisk-sentiment-classification
config: default
split: test
revision: 5911f26666ac11af46cb9c6849d0dc80a378af24
metrics:
- type: accuracy
value: 62.393333333333324
- type: f1
value: 61.35940129568015
- type: f1_weighted
value: 61.35940129568015
- type: main_score
value: 62.393333333333324
- task:
type: STS
dataset:
name: MTEB LCQMC (default)
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cosine_pearson
value: 67.74375505907872
- type: cosine_spearman
value: 75.94582231399434
- type: euclidean_pearson
value: 74.52501692443582
- type: euclidean_spearman
value: 75.88428434746646
- type: main_score
value: 75.94582231399434
- type: manhattan_pearson
value: 74.55015441749529
- type: manhattan_spearman
value: 75.83288262176175
- type: pearson
value: 67.74375505907872
- type: spearman
value: 75.94582231399434
- task:
type: Retrieval
dataset:
name: MTEB LEMBNarrativeQARetrieval (default)
type: dwzhu/LongEmbed
config: default
split: test
revision: 6e346642246bfb4928c560ee08640dc84d074e8c
metrics:
- type: map_at_1
value: 23.093
- type: map_at_10
value: 30.227999999999998
- type: map_at_100
value: 31.423000000000002
- type: map_at_1000
value: 31.533
- type: map_at_20
value: 30.835
- type: map_at_3
value: 27.983999999999998
- type: map_at_5
value: 29.253
- type: mrr_at_1
value: 23.093
- type: mrr_at_10
value: 30.227999999999998
- type: mrr_at_100
value: 31.423000000000002
- type: mrr_at_1000
value: 31.533
- type: mrr_at_20
value: 30.835
- type: mrr_at_3
value: 27.983999999999998
- type: mrr_at_5
value: 29.253
- type: ndcg_at_1
value: 23.093
- type: ndcg_at_10
value: 34.297
- type: ndcg_at_100
value: 41.049
- type: ndcg_at_1000
value: 43.566
- type: ndcg_at_20
value: 36.52
- type: ndcg_at_3
value: 29.629
- type: ndcg_at_5
value: 31.926
- type: precision_at_1
value: 23.093
- type: precision_at_10
value: 4.735
- type: precision_at_100
value: 0.8109999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 2.8080000000000003
- type: precision_at_3
value: 11.468
- type: precision_at_5
value: 8.001
- type: recall_at_1
value: 23.093
- type: recall_at_10
value: 47.354
- type: recall_at_100
value: 81.147
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 56.16799999999999
- type: recall_at_3
value: 34.405
- type: recall_at_5
value: 40.004
- type: main_score
value: 34.297
- type: map_at_1
value: 24.361
- type: map_at_10
value: 33.641
- type: map_at_100
value: 35.104
- type: map_at_1000
value: 35.127
- type: map_at_20
value: 34.388999999999996
- type: map_at_3
value: 30.255
- type: map_at_5
value: 32.079
- type: mrr_at_1
value: 24.361
- type: mrr_at_10
value: 33.641
- type: mrr_at_100
value: 35.104
- type: mrr_at_1000
value: 35.127
- type: mrr_at_20
value: 34.388999999999996
- type: mrr_at_3
value: 30.255
- type: mrr_at_5
value: 32.079
- type: ndcg_at_1
value: 24.361
- type: ndcg_at_10
value: 39.337
- type: ndcg_at_100
value: 47.384
- type: ndcg_at_1000
value: 47.75
- type: ndcg_at_20
value: 42.077999999999996
- type: ndcg_at_3
value: 32.235
- type: ndcg_at_5
value: 35.524
- type: precision_at_1
value: 24.361
- type: precision_at_10
value: 5.783
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 3.435
- type: precision_at_3
value: 12.661
- type: precision_at_5
value: 9.193999999999999
- type: recall_at_1
value: 24.361
- type: recall_at_10
value: 57.826
- type: recall_at_100
value: 97.51100000000001
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 68.697
- type: recall_at_3
value: 37.983
- type: recall_at_5
value: 45.972
- type: main_score
value: 39.337
- type: map_at_1
value: 53.667
- type: map_at_10
value: 61.719
- type: map_at_100
value: 62.471
- type: map_at_1000
value: 62.492000000000004
- type: map_at_20
value: 62.153000000000006
- type: map_at_3
value: 59.167
- type: map_at_5
value: 60.95
- type: mrr_at_1
value: 53.667
- type: mrr_at_10
value: 61.719
- type: mrr_at_100
value: 62.471
- type: mrr_at_1000
value: 62.492000000000004
- type: mrr_at_20
value: 62.153000000000006
- type: mrr_at_3
value: 59.167
- type: mrr_at_5
value: 60.95
- type: ndcg_at_1
value: 53.667
- type: ndcg_at_10
value: 66.018
- type: ndcg_at_100
value: 69.726
- type: ndcg_at_1000
value: 70.143
- type: ndcg_at_20
value: 67.61399999999999
- type: ndcg_at_3
value: 60.924
- type: ndcg_at_5
value: 64.10900000000001
- type: precision_at_1
value: 53.667
- type: precision_at_10
value: 7.9670000000000005
- type: precision_at_100
value: 0.97
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.3
- type: precision_at_3
value: 22.0
- type: precision_at_5
value: 14.732999999999999
- type: recall_at_1
value: 53.667
- type: recall_at_10
value: 79.667
- type: recall_at_100
value: 97.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 86.0
- type: recall_at_3
value: 66.0
- type: recall_at_5
value: 73.667
- type: main_score
value: 66.018
- task:
type: Retrieval
dataset:
name: MTEB LEMBNeedleRetrieval (default)
type: dwzhu/LongEmbed
config: default
split: test_256
revision: 6e346642246bfb4928c560ee08640dc84d074e8c
metrics:
- type: map_at_1
value: 64.0
- type: map_at_10
value: 77.083
- type: map_at_100
value: 77.265
- type: map_at_1000
value: 77.265
- type: map_at_20
value: 77.265
- type: map_at_3
value: 76.333
- type: map_at_5
value: 76.833
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 77.083
- type: mrr_at_100
value: 77.265
- type: mrr_at_1000
value: 77.265
- type: mrr_at_20
value: 77.265
- type: mrr_at_3
value: 76.333
- type: mrr_at_5
value: 76.833
- type: ndcg_at_1
value: 64.0
- type: ndcg_at_10
value: 82.325
- type: ndcg_at_100
value: 82.883
- type: ndcg_at_1000
value: 82.883
- type: ndcg_at_20
value: 82.883
- type: ndcg_at_3
value: 80.833
- type: ndcg_at_5
value: 81.694
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 5.0
- type: precision_at_3
value: 31.333
- type: precision_at_5
value: 19.2
- type: recall_at_1
value: 64.0
- type: recall_at_10
value: 98.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 100.0
- type: recall_at_3
value: 94.0
- type: recall_at_5
value: 96.0
- type: main_score
value: 64.0
- type: map_at_1
value: 100.0
- type: map_at_10
value: 100.0
- type: map_at_100
value: 100.0
- type: map_at_1000
value: 100.0
- type: map_at_20
value: 100.0
- type: map_at_3
value: 100.0
- type: map_at_5
value: 100.0
- type: mrr_at_1
value: 100.0
- type: mrr_at_10
value: 100.0
- type: mrr_at_100
value: 100.0
- type: mrr_at_1000
value: 100.0
- type: mrr_at_20
value: 100.0
- type: mrr_at_3
value: 100.0
- type: mrr_at_5
value: 100.0
- type: ndcg_at_1
value: 100.0
- type: ndcg_at_10
value: 100.0
- type: ndcg_at_100
value: 100.0
- type: ndcg_at_1000
value: 100.0
- type: ndcg_at_20
value: 100.0
- type: ndcg_at_3
value: 100.0
- type: ndcg_at_5
value: 100.0
- type: precision_at_1
value: 100.0
- type: precision_at_10
value: 10.0
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 5.0
- type: precision_at_3
value: 33.333
- type: precision_at_5
value: 20.0
- type: recall_at_1
value: 100.0
- type: recall_at_10
value: 100.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 100.0
- type: recall_at_3
value: 100.0
- type: recall_at_5
value: 100.0
- type: main_score
value: 100.0
- task:
type: Retrieval
dataset:
name: MTEB LEMBSummScreenFDRetrieval (default)
type: dwzhu/LongEmbed
config: default
split: validation
revision: 6e346642246bfb4928c560ee08640dc84d074e8c
metrics:
- type: map_at_1
value: 84.821
- type: map_at_10
value: 90.11200000000001
- type: map_at_100
value: 90.158
- type: map_at_1000
value: 90.158
- type: map_at_20
value: 90.137
- type: map_at_3
value: 89.385
- type: map_at_5
value: 89.876
- type: mrr_at_1
value: 84.821
- type: mrr_at_10
value: 90.11200000000001
- type: mrr_at_100
value: 90.158
- type: mrr_at_1000
value: 90.158
- type: mrr_at_20
value: 90.137
- type: mrr_at_3
value: 89.385
- type: mrr_at_5
value: 89.876
- type: ndcg_at_1
value: 84.821
- type: ndcg_at_10
value: 92.334
- type: ndcg_at_100
value: 92.535
- type: ndcg_at_1000
value: 92.535
- type: ndcg_at_20
value: 92.414
- type: ndcg_at_3
value: 90.887
- type: ndcg_at_5
value: 91.758
- type: precision_at_1
value: 84.821
- type: precision_at_10
value: 9.911
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.97
- type: precision_at_3
value: 31.746000000000002
- type: precision_at_5
value: 19.464000000000002
- type: recall_at_1
value: 84.821
- type: recall_at_10
value: 99.107
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 99.405
- type: recall_at_3
value: 95.238
- type: recall_at_5
value: 97.321
- type: main_score
value: 92.334
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-deu)
type: facebook/mlqa
config: deu-deu
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 67.548
- type: map_at_1
value: 56.559000000000005
- type: map_at_10
value: 63.867
- type: map_at_100
value: 64.429
- type: map_at_1000
value: 64.457
- type: map_at_20
value: 64.215
- type: map_at_3
value: 62.109
- type: map_at_5
value: 63.101
- type: mrr_at_1
value: 56.56990915134057
- type: mrr_at_10
value: 63.86820789324668
- type: mrr_at_100
value: 64.42973602152581
- type: mrr_at_1000
value: 64.45818598090155
- type: mrr_at_20
value: 64.2163052263868
- type: mrr_at_3
value: 62.10946155550634
- type: mrr_at_5
value: 63.10104143585199
- type: nauc_map_at_1000_diff1
value: 73.78440163370111
- type: nauc_map_at_1000_max
value: 66.37875518052162
- type: nauc_map_at_1000_std
value: -17.063915098135396
- type: nauc_map_at_100_diff1
value: 73.77180802985815
- type: nauc_map_at_100_max
value: 66.38365998362033
- type: nauc_map_at_100_std
value: -17.053345109661972
- type: nauc_map_at_10_diff1
value: 73.70041876696037
- type: nauc_map_at_10_max
value: 66.33213342705997
- type: nauc_map_at_10_std
value: -17.40657791273925
- type: nauc_map_at_1_diff1
value: 76.8784374396948
- type: nauc_map_at_1_max
value: 64.07170606935357
- type: nauc_map_at_1_std
value: -18.464213686790654
- type: nauc_map_at_20_diff1
value: 73.72371377231813
- type: nauc_map_at_20_max
value: 66.42108121059451
- type: nauc_map_at_20_std
value: -17.05384923889036
- type: nauc_map_at_3_diff1
value: 74.08287018839246
- type: nauc_map_at_3_max
value: 66.42422337760333
- type: nauc_map_at_3_std
value: -17.79503404131652
- type: nauc_map_at_5_diff1
value: 73.9294779027339
- type: nauc_map_at_5_max
value: 66.51752041065726
- type: nauc_map_at_5_std
value: -17.67309805113804
- type: nauc_mrr_at_1000_diff1
value: 73.78389736923545
- type: nauc_mrr_at_1000_max
value: 66.37929720858341
- type: nauc_mrr_at_1000_std
value: -17.058591711291278
- type: nauc_mrr_at_100_diff1
value: 73.77126451253136
- type: nauc_mrr_at_100_max
value: 66.38405917246607
- type: nauc_mrr_at_100_std
value: -17.047251035212863
- type: nauc_mrr_at_10_diff1
value: 73.69960470665124
- type: nauc_mrr_at_10_max
value: 66.33265194210313
- type: nauc_mrr_at_10_std
value: -17.399659076827998
- type: nauc_mrr_at_1_diff1
value: 76.8689850260726
- type: nauc_mrr_at_1_max
value: 64.09858188287487
- type: nauc_mrr_at_1_std
value: -18.46064784201847
- type: nauc_mrr_at_20_diff1
value: 73.72312682063128
- type: nauc_mrr_at_20_max
value: 66.42181932858745
- type: nauc_mrr_at_20_std
value: -17.04690257511092
- type: nauc_mrr_at_3_diff1
value: 74.08287018839246
- type: nauc_mrr_at_3_max
value: 66.42422337760333
- type: nauc_mrr_at_3_std
value: -17.79503404131652
- type: nauc_mrr_at_5_diff1
value: 73.9294779027339
- type: nauc_mrr_at_5_max
value: 66.51752041065726
- type: nauc_mrr_at_5_std
value: -17.67309805113804
- type: nauc_ndcg_at_1000_diff1
value: 72.97825548342801
- type: nauc_ndcg_at_1000_max
value: 66.96275437178257
- type: nauc_ndcg_at_1000_std
value: -15.611902299641587
- type: nauc_ndcg_at_100_diff1
value: 72.58724738936613
- type: nauc_ndcg_at_100_max
value: 67.16774012704182
- type: nauc_ndcg_at_100_std
value: -14.945088654796812
- type: nauc_ndcg_at_10_diff1
value: 72.16253640477947
- type: nauc_ndcg_at_10_max
value: 67.01746849484621
- type: nauc_ndcg_at_10_std
value: -16.46102507270809
- type: nauc_ndcg_at_1_diff1
value: 76.8689850260726
- type: nauc_ndcg_at_1_max
value: 64.09858188287487
- type: nauc_ndcg_at_1_std
value: -18.46064784201847
- type: nauc_ndcg_at_20_diff1
value: 72.19995325129975
- type: nauc_ndcg_at_20_max
value: 67.39639713797962
- type: nauc_ndcg_at_20_std
value: -15.091689370748531
- type: nauc_ndcg_at_3_diff1
value: 73.13123604206514
- type: nauc_ndcg_at_3_max
value: 67.23123167871547
- type: nauc_ndcg_at_3_std
value: -17.492755234009156
- type: nauc_ndcg_at_5_diff1
value: 72.8154718929895
- type: nauc_ndcg_at_5_max
value: 67.44578008373777
- type: nauc_ndcg_at_5_std
value: -17.251840358751362
- type: nauc_precision_at_1000_diff1
value: 47.89748325983604
- type: nauc_precision_at_1000_max
value: 70.47466197804906
- type: nauc_precision_at_1000_std
value: 72.66193512114775
- type: nauc_precision_at_100_diff1
value: 59.493743734005356
- type: nauc_precision_at_100_max
value: 74.02140147220713
- type: nauc_precision_at_100_std
value: 17.26664098026236
- type: nauc_precision_at_10_diff1
value: 64.94415011040277
- type: nauc_precision_at_10_max
value: 69.6963814950747
- type: nauc_precision_at_10_std
value: -11.663043657012954
- type: nauc_precision_at_1_diff1
value: 76.8689850260726
- type: nauc_precision_at_1_max
value: 64.09858188287487
- type: nauc_precision_at_1_std
value: -18.46064784201847
- type: nauc_precision_at_20_diff1
value: 63.145886909986416
- type: nauc_precision_at_20_max
value: 72.95708033630744
- type: nauc_precision_at_20_std
value: -1.5039593629280323
- type: nauc_precision_at_3_diff1
value: 69.88902201644449
- type: nauc_precision_at_3_max
value: 69.80499971089935
- type: nauc_precision_at_3_std
value: -16.444680766676647
- type: nauc_precision_at_5_diff1
value: 68.60869967062919
- type: nauc_precision_at_5_max
value: 70.75998207564281
- type: nauc_precision_at_5_std
value: -15.62613396998262
- type: nauc_recall_at_1000_diff1
value: 62.6646436338833
- type: nauc_recall_at_1000_max
value: 86.17801636476078
- type: nauc_recall_at_1000_std
value: 71.84718775540334
- type: nauc_recall_at_100_diff1
value: 61.110492191439505
- type: nauc_recall_at_100_max
value: 75.45730686603042
- type: nauc_recall_at_100_std
value: 16.202465011589428
- type: nauc_recall_at_10_diff1
value: 65.1522196516815
- type: nauc_recall_at_10_max
value: 69.7626435962161
- type: nauc_recall_at_10_std
value: -11.801178474770449
- type: nauc_recall_at_1_diff1
value: 76.8784374396948
- type: nauc_recall_at_1_max
value: 64.07170606935357
- type: nauc_recall_at_1_std
value: -18.464213686790654
- type: nauc_recall_at_20_diff1
value: 63.40332739504143
- type: nauc_recall_at_20_max
value: 73.04113661090965
- type: nauc_recall_at_20_std
value: -1.6609741140266947
- type: nauc_recall_at_3_diff1
value: 70.03728086098866
- type: nauc_recall_at_3_max
value: 69.85953774320521
- type: nauc_recall_at_3_std
value: -16.482993123411706
- type: nauc_recall_at_5_diff1
value: 68.77396121765933
- type: nauc_recall_at_5_max
value: 70.8231205493519
- type: nauc_recall_at_5_std
value: -15.668037770700863
- type: ndcg_at_1
value: 56.57
- type: ndcg_at_10
value: 67.548
- type: ndcg_at_100
value: 70.421
- type: ndcg_at_1000
value: 71.198
- type: ndcg_at_20
value: 68.829
- type: ndcg_at_3
value: 63.88700000000001
- type: ndcg_at_5
value: 65.689
- type: precision_at_1
value: 56.57
- type: precision_at_10
value: 7.922
- type: precision_at_100
value: 0.9299999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.216
- type: precision_at_3
value: 23.015
- type: precision_at_5
value: 14.691
- type: recall_at_1
value: 56.559000000000005
- type: recall_at_10
value: 79.182
- type: recall_at_100
value: 92.946
- type: recall_at_1000
value: 99.092
- type: recall_at_20
value: 84.27900000000001
- type: recall_at_3
value: 69.023
- type: recall_at_5
value: 73.432
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-spa)
type: facebook/mlqa
config: deu-spa
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 70.645
- type: map_at_1
value: 58.423
- type: map_at_10
value: 66.613
- type: map_at_100
value: 67.14099999999999
- type: map_at_1000
value: 67.161
- type: map_at_20
value: 66.965
- type: map_at_3
value: 64.714
- type: map_at_5
value: 65.835
- type: mrr_at_1
value: 58.4225352112676
- type: mrr_at_10
value: 66.61321260898735
- type: mrr_at_100
value: 67.13991570812132
- type: mrr_at_1000
value: 67.1598532168174
- type: mrr_at_20
value: 66.96384710024888
- type: mrr_at_3
value: 64.71361502347425
- type: mrr_at_5
value: 65.83474178403769
- type: nauc_map_at_1000_diff1
value: 73.9485117118935
- type: nauc_map_at_1000_max
value: 65.74479869396299
- type: nauc_map_at_1000_std
value: -20.300269749495563
- type: nauc_map_at_100_diff1
value: 73.93900406302829
- type: nauc_map_at_100_max
value: 65.75508449194885
- type: nauc_map_at_100_std
value: -20.265330791570175
- type: nauc_map_at_10_diff1
value: 73.84863233472605
- type: nauc_map_at_10_max
value: 65.89377317378211
- type: nauc_map_at_10_std
value: -20.404123131964695
- type: nauc_map_at_1_diff1
value: 76.73627284218519
- type: nauc_map_at_1_max
value: 62.94957512510876
- type: nauc_map_at_1_std
value: -20.99649749330682
- type: nauc_map_at_20_diff1
value: 73.88712006109598
- type: nauc_map_at_20_max
value: 65.82057018162664
- type: nauc_map_at_20_std
value: -20.269476512431915
- type: nauc_map_at_3_diff1
value: 74.21419190161502
- type: nauc_map_at_3_max
value: 65.64993368062119
- type: nauc_map_at_3_std
value: -21.34641749007071
- type: nauc_map_at_5_diff1
value: 74.0119419385777
- type: nauc_map_at_5_max
value: 65.69809416369732
- type: nauc_map_at_5_std
value: -21.16901556082261
- type: nauc_mrr_at_1000_diff1
value: 73.94915184134923
- type: nauc_mrr_at_1000_max
value: 65.74522469633418
- type: nauc_mrr_at_1000_std
value: -20.303028367132246
- type: nauc_mrr_at_100_diff1
value: 73.93964394728808
- type: nauc_mrr_at_100_max
value: 65.75550992323707
- type: nauc_mrr_at_100_std
value: -20.26808820438918
- type: nauc_mrr_at_10_diff1
value: 73.84863233472605
- type: nauc_mrr_at_10_max
value: 65.89377317378211
- type: nauc_mrr_at_10_std
value: -20.404123131964695
- type: nauc_mrr_at_1_diff1
value: 76.73627284218519
- type: nauc_mrr_at_1_max
value: 62.94957512510876
- type: nauc_mrr_at_1_std
value: -20.99649749330682
- type: nauc_mrr_at_20_diff1
value: 73.88775721128745
- type: nauc_mrr_at_20_max
value: 65.820991355628
- type: nauc_mrr_at_20_std
value: -20.272216587019734
- type: nauc_mrr_at_3_diff1
value: 74.21419190161502
- type: nauc_mrr_at_3_max
value: 65.64993368062119
- type: nauc_mrr_at_3_std
value: -21.34641749007071
- type: nauc_mrr_at_5_diff1
value: 74.0119419385777
- type: nauc_mrr_at_5_max
value: 65.69809416369732
- type: nauc_mrr_at_5_std
value: -21.16901556082261
- type: nauc_ndcg_at_1000_diff1
value: 73.29396365944277
- type: nauc_ndcg_at_1000_max
value: 66.44879592109541
- type: nauc_ndcg_at_1000_std
value: -19.285991058788195
- type: nauc_ndcg_at_100_diff1
value: 73.0159172721162
- type: nauc_ndcg_at_100_max
value: 66.76216389231388
- type: nauc_ndcg_at_100_std
value: -18.27931368094887
- type: nauc_ndcg_at_10_diff1
value: 72.42096650774693
- type: nauc_ndcg_at_10_max
value: 67.48592688463306
- type: nauc_ndcg_at_10_std
value: -18.91453756077581
- type: nauc_ndcg_at_1_diff1
value: 76.73627284218519
- type: nauc_ndcg_at_1_max
value: 62.94957512510876
- type: nauc_ndcg_at_1_std
value: -20.99649749330682
- type: nauc_ndcg_at_20_diff1
value: 72.53699362385684
- type: nauc_ndcg_at_20_max
value: 67.22763976357872
- type: nauc_ndcg_at_20_std
value: -18.299910635008338
- type: nauc_ndcg_at_3_diff1
value: 73.3698453761989
- type: nauc_ndcg_at_3_max
value: 66.71056987289383
- type: nauc_ndcg_at_3_std
value: -21.405154376652803
- type: nauc_ndcg_at_5_diff1
value: 72.9491030712935
- type: nauc_ndcg_at_5_max
value: 66.85786103137077
- type: nauc_ndcg_at_5_std
value: -21.04005053344073
- type: nauc_precision_at_1000_diff1
value: 17.02462370967451
- type: nauc_precision_at_1000_max
value: 48.03260752496052
- type: nauc_precision_at_1000_std
value: 87.56077915079334
- type: nauc_precision_at_100_diff1
value: 58.590352501194985
- type: nauc_precision_at_100_max
value: 78.2649015433222
- type: nauc_precision_at_100_std
value: 28.05030453158992
- type: nauc_precision_at_10_diff1
value: 64.89497928764766
- type: nauc_precision_at_10_max
value: 75.93257124951242
- type: nauc_precision_at_10_std
value: -9.825306994117462
- type: nauc_precision_at_1_diff1
value: 76.73627284218519
- type: nauc_precision_at_1_max
value: 62.94957512510876
- type: nauc_precision_at_1_std
value: -20.99649749330682
- type: nauc_precision_at_20_diff1
value: 62.11366204321558
- type: nauc_precision_at_20_max
value: 75.9571427846493
- type: nauc_precision_at_20_std
value: -0.94585212808191
- type: nauc_precision_at_3_diff1
value: 70.52940972112398
- type: nauc_precision_at_3_max
value: 70.3402053170779
- type: nauc_precision_at_3_std
value: -21.579778424241304
- type: nauc_precision_at_5_diff1
value: 68.78962580223575
- type: nauc_precision_at_5_max
value: 71.41410894398376
- type: nauc_precision_at_5_std
value: -20.415603405161956
- type: nauc_recall_at_1000_diff1
value: 55.88625447348128
- type: nauc_recall_at_1000_max
value: 100.0
- type: nauc_recall_at_1000_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 61.17942268389525
- type: nauc_recall_at_100_max
value: 81.12207841563487
- type: nauc_recall_at_100_std
value: 27.141215257528113
- type: nauc_recall_at_10_diff1
value: 64.8949792876478
- type: nauc_recall_at_10_max
value: 75.93257124951249
- type: nauc_recall_at_10_std
value: -9.825306994117323
- type: nauc_recall_at_1_diff1
value: 76.73627284218519
- type: nauc_recall_at_1_max
value: 62.94957512510876
- type: nauc_recall_at_1_std
value: -20.99649749330682
- type: nauc_recall_at_20_diff1
value: 63.07808719241162
- type: nauc_recall_at_20_max
value: 76.96808746317542
- type: nauc_recall_at_20_std
value: -1.5235053258631275
- type: nauc_recall_at_3_diff1
value: 70.52940972112405
- type: nauc_recall_at_3_max
value: 70.3402053170779
- type: nauc_recall_at_3_std
value: -21.57977842424124
- type: nauc_recall_at_5_diff1
value: 68.78962580223575
- type: nauc_recall_at_5_max
value: 71.41410894398392
- type: nauc_recall_at_5_std
value: -20.415603405161793
- type: ndcg_at_1
value: 58.423
- type: ndcg_at_10
value: 70.645
- type: ndcg_at_100
value: 73.277
- type: ndcg_at_1000
value: 73.785
- type: ndcg_at_20
value: 71.918
- type: ndcg_at_3
value: 66.679
- type: ndcg_at_5
value: 68.72200000000001
- type: precision_at_1
value: 58.423
- type: precision_at_10
value: 8.338
- type: precision_at_100
value: 0.959
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.423
- type: precision_at_3
value: 24.113
- type: precision_at_5
value: 15.47
- type: recall_at_1
value: 58.423
- type: recall_at_10
value: 83.38
- type: recall_at_100
value: 95.887
- type: recall_at_1000
value: 99.831
- type: recall_at_20
value: 88.39399999999999
- type: recall_at_3
value: 72.33800000000001
- type: recall_at_5
value: 77.352
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-eng)
type: facebook/mlqa
config: deu-eng
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 67.067
- type: map_at_1
value: 55.861000000000004
- type: map_at_10
value: 63.42100000000001
- type: map_at_100
value: 64.03
- type: map_at_1000
value: 64.05999999999999
- type: map_at_20
value: 63.819
- type: map_at_3
value: 61.773
- type: map_at_5
value: 62.736999999999995
- type: mrr_at_1
value: 55.88300465322402
- type: mrr_at_10
value: 63.43111082973707
- type: mrr_at_100
value: 64.03962373590272
- type: mrr_at_1000
value: 64.0698259866376
- type: mrr_at_20
value: 63.82871766489112
- type: mrr_at_3
value: 61.78447448112865
- type: mrr_at_5
value: 62.74835659945346
- type: nauc_map_at_1000_diff1
value: 74.58505763417352
- type: nauc_map_at_1000_max
value: 66.26060764852198
- type: nauc_map_at_1000_std
value: -16.896178230873897
- type: nauc_map_at_100_diff1
value: 74.57057487892857
- type: nauc_map_at_100_max
value: 66.26600433283826
- type: nauc_map_at_100_std
value: -16.87596113104189
- type: nauc_map_at_10_diff1
value: 74.53453636322749
- type: nauc_map_at_10_max
value: 66.27501737773804
- type: nauc_map_at_10_std
value: -17.178743257781775
- type: nauc_map_at_1_diff1
value: 77.63067209375254
- type: nauc_map_at_1_max
value: 64.17718675702672
- type: nauc_map_at_1_std
value: -17.639521106853717
- type: nauc_map_at_20_diff1
value: 74.52007402431164
- type: nauc_map_at_20_max
value: 66.28276291359268
- type: nauc_map_at_20_std
value: -16.939292897754758
- type: nauc_map_at_3_diff1
value: 74.79187974631951
- type: nauc_map_at_3_max
value: 66.23256568210611
- type: nauc_map_at_3_std
value: -17.894889918934112
- type: nauc_map_at_5_diff1
value: 74.63011328882517
- type: nauc_map_at_5_max
value: 66.35411054978499
- type: nauc_map_at_5_std
value: -17.50140342194211
- type: nauc_mrr_at_1000_diff1
value: 74.57520089771667
- type: nauc_mrr_at_1000_max
value: 66.27270912845914
- type: nauc_mrr_at_1000_std
value: -16.84012675362397
- type: nauc_mrr_at_100_diff1
value: 74.56070964572156
- type: nauc_mrr_at_100_max
value: 66.2780701126926
- type: nauc_mrr_at_100_std
value: -16.820035083069865
- type: nauc_mrr_at_10_diff1
value: 74.52455978435117
- type: nauc_mrr_at_10_max
value: 66.28697244023137
- type: nauc_mrr_at_10_std
value: -17.122477723330523
- type: nauc_mrr_at_1_diff1
value: 77.60643512422061
- type: nauc_mrr_at_1_max
value: 64.21736966061896
- type: nauc_mrr_at_1_std
value: -17.56627338275146
- type: nauc_mrr_at_20_diff1
value: 74.5099814266373
- type: nauc_mrr_at_20_max
value: 66.29485560556576
- type: nauc_mrr_at_20_std
value: -16.882350027335306
- type: nauc_mrr_at_3_diff1
value: 74.78132817375507
- type: nauc_mrr_at_3_max
value: 66.24761860047623
- type: nauc_mrr_at_3_std
value: -17.833128575678998
- type: nauc_mrr_at_5_diff1
value: 74.6193031207433
- type: nauc_mrr_at_5_max
value: 66.36951764432901
- type: nauc_mrr_at_5_std
value: -17.438203106324227
- type: nauc_ndcg_at_1000_diff1
value: 73.79386161629151
- type: nauc_ndcg_at_1000_max
value: 66.84013038018082
- type: nauc_ndcg_at_1000_std
value: -15.387358822700667
- type: nauc_ndcg_at_100_diff1
value: 73.36132885277745
- type: nauc_ndcg_at_100_max
value: 67.04416926901568
- type: nauc_ndcg_at_100_std
value: -14.503256942521972
- type: nauc_ndcg_at_10_diff1
value: 73.11847332785027
- type: nauc_ndcg_at_10_max
value: 67.02149621303091
- type: nauc_ndcg_at_10_std
value: -16.142234662067782
- type: nauc_ndcg_at_1_diff1
value: 77.60643512422061
- type: nauc_ndcg_at_1_max
value: 64.21736966061896
- type: nauc_ndcg_at_1_std
value: -17.56627338275146
- type: nauc_ndcg_at_20_diff1
value: 72.97961452569768
- type: nauc_ndcg_at_20_max
value: 67.12369127081152
- type: nauc_ndcg_at_20_std
value: -15.11921773223936
- type: nauc_ndcg_at_3_diff1
value: 73.77769312598772
- type: nauc_ndcg_at_3_max
value: 66.94438755852309
- type: nauc_ndcg_at_3_std
value: -17.75960443830741
- type: nauc_ndcg_at_5_diff1
value: 73.43991209562891
- type: nauc_ndcg_at_5_max
value: 67.21682951737418
- type: nauc_ndcg_at_5_std
value: -17.013510008231805
- type: nauc_precision_at_1000_diff1
value: 51.30633281948362
- type: nauc_precision_at_1000_max
value: 76.78675288883846
- type: nauc_precision_at_1000_std
value: 71.70041985304397
- type: nauc_precision_at_100_diff1
value: 59.86656455853326
- type: nauc_precision_at_100_max
value: 74.41958422732161
- type: nauc_precision_at_100_std
value: 22.098920296069124
- type: nauc_precision_at_10_diff1
value: 66.4696166928741
- type: nauc_precision_at_10_max
value: 69.88463108697104
- type: nauc_precision_at_10_std
value: -10.707950954702742
- type: nauc_precision_at_1_diff1
value: 77.60643512422061
- type: nauc_precision_at_1_max
value: 64.21736966061896
- type: nauc_precision_at_1_std
value: -17.56627338275146
- type: nauc_precision_at_20_diff1
value: 63.45094585276983
- type: nauc_precision_at_20_max
value: 71.57741245347195
- type: nauc_precision_at_20_std
value: -2.2211545419051744
- type: nauc_precision_at_3_diff1
value: 70.28060818081384
- type: nauc_precision_at_3_max
value: 69.22652927816439
- type: nauc_precision_at_3_std
value: -17.158576243559434
- type: nauc_precision_at_5_diff1
value: 68.90765418427162
- type: nauc_precision_at_5_max
value: 70.32585273389111
- type: nauc_precision_at_5_std
value: -14.950363729664524
- type: nauc_recall_at_1000_diff1
value: 65.11255117927331
- type: nauc_recall_at_1000_max
value: 88.35641213283338
- type: nauc_recall_at_1000_std
value: 69.89792573640547
- type: nauc_recall_at_100_diff1
value: 61.46376457272238
- type: nauc_recall_at_100_max
value: 75.48265142243015
- type: nauc_recall_at_100_std
value: 21.223182712042178
- type: nauc_recall_at_10_diff1
value: 66.89353375308997
- type: nauc_recall_at_10_max
value: 70.06655416883785
- type: nauc_recall_at_10_std
value: -11.100871879439435
- type: nauc_recall_at_1_diff1
value: 77.63067209375254
- type: nauc_recall_at_1_max
value: 64.17718675702672
- type: nauc_recall_at_1_std
value: -17.639521106853717
- type: nauc_recall_at_20_diff1
value: 63.98532276331878
- type: nauc_recall_at_20_max
value: 71.81562599791899
- type: nauc_recall_at_20_std
value: -2.696537977147695
- type: nauc_recall_at_3_diff1
value: 70.4507655865698
- type: nauc_recall_at_3_max
value: 69.25705030141037
- type: nauc_recall_at_3_std
value: -17.299948348202836
- type: nauc_recall_at_5_diff1
value: 69.09152857901888
- type: nauc_recall_at_5_max
value: 70.35609636026405
- type: nauc_recall_at_5_std
value: -15.105012139255896
- type: ndcg_at_1
value: 55.883
- type: ndcg_at_10
value: 67.067
- type: ndcg_at_100
value: 70.07
- type: ndcg_at_1000
value: 70.875
- type: ndcg_at_20
value: 68.498
- type: ndcg_at_3
value: 63.666
- type: ndcg_at_5
value: 65.40599999999999
- type: precision_at_1
value: 55.883
- type: precision_at_10
value: 7.8549999999999995
- type: precision_at_100
value: 0.928
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.2090000000000005
- type: precision_at_3
value: 23.052
- type: precision_at_5
value: 14.677999999999999
- type: recall_at_1
value: 55.861000000000004
- type: recall_at_10
value: 78.495
- type: recall_at_100
value: 92.688
- type: recall_at_1000
value: 99.02499999999999
- type: recall_at_20
value: 84.124
- type: recall_at_3
value: 69.123
- type: recall_at_5
value: 73.355
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-deu)
type: facebook/mlqa
config: spa-deu
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 73.90299999999999
- type: map_at_1
value: 61.236000000000004
- type: map_at_10
value: 69.88799999999999
- type: map_at_100
value: 70.319
- type: map_at_1000
value: 70.341
- type: map_at_20
value: 70.16799999999999
- type: map_at_3
value: 68.104
- type: map_at_5
value: 69.164
- type: mrr_at_1
value: 61.2739571589628
- type: mrr_at_10
value: 69.92589162684993
- type: mrr_at_100
value: 70.35245455509234
- type: mrr_at_1000
value: 70.37438351396742
- type: mrr_at_20
value: 70.20247469915404
- type: mrr_at_3
value: 68.14167606163099
- type: mrr_at_5
value: 69.20142803457354
- type: nauc_map_at_1000_diff1
value: 74.70416754842327
- type: nauc_map_at_1000_max
value: 65.86915994583384
- type: nauc_map_at_1000_std
value: -19.04437483534443
- type: nauc_map_at_100_diff1
value: 74.70011798058674
- type: nauc_map_at_100_max
value: 65.88507779167188
- type: nauc_map_at_100_std
value: -19.018670970643786
- type: nauc_map_at_10_diff1
value: 74.6362126804427
- type: nauc_map_at_10_max
value: 66.05733054427198
- type: nauc_map_at_10_std
value: -19.034317737897354
- type: nauc_map_at_1_diff1
value: 77.24970536833601
- type: nauc_map_at_1_max
value: 62.07820573048406
- type: nauc_map_at_1_std
value: -20.917086586335078
- type: nauc_map_at_20_diff1
value: 74.64113920401083
- type: nauc_map_at_20_max
value: 65.89991740166793
- type: nauc_map_at_20_std
value: -19.09987515041243
- type: nauc_map_at_3_diff1
value: 74.6518162332119
- type: nauc_map_at_3_max
value: 66.10312348194024
- type: nauc_map_at_3_std
value: -18.95881457716116
- type: nauc_map_at_5_diff1
value: 74.55141020670321
- type: nauc_map_at_5_max
value: 65.94345752979342
- type: nauc_map_at_5_std
value: -19.453976877992304
- type: nauc_mrr_at_1000_diff1
value: 74.64458488344088
- type: nauc_mrr_at_1000_max
value: 65.84575328456057
- type: nauc_mrr_at_1000_std
value: -18.901614615119904
- type: nauc_mrr_at_100_diff1
value: 74.64058497924627
- type: nauc_mrr_at_100_max
value: 65.86170461767928
- type: nauc_mrr_at_100_std
value: -18.87601697091505
- type: nauc_mrr_at_10_diff1
value: 74.57266634464752
- type: nauc_mrr_at_10_max
value: 66.03331587645152
- type: nauc_mrr_at_10_std
value: -18.87888060105393
- type: nauc_mrr_at_1_diff1
value: 77.19578272647183
- type: nauc_mrr_at_1_max
value: 62.05252035478773
- type: nauc_mrr_at_1_std
value: -20.790530940625267
- type: nauc_mrr_at_20_diff1
value: 74.5808171250021
- type: nauc_mrr_at_20_max
value: 65.87643606587798
- type: nauc_mrr_at_20_std
value: -18.95476583474199
- type: nauc_mrr_at_3_diff1
value: 74.5917053289191
- type: nauc_mrr_at_3_max
value: 66.08044079438714
- type: nauc_mrr_at_3_std
value: -18.81168463163586
- type: nauc_mrr_at_5_diff1
value: 74.48934579694608
- type: nauc_mrr_at_5_max
value: 65.91993162383771
- type: nauc_mrr_at_5_std
value: -19.302710791338797
- type: nauc_ndcg_at_1000_diff1
value: 74.20191283992186
- type: nauc_ndcg_at_1000_max
value: 66.60831175771229
- type: nauc_ndcg_at_1000_std
value: -18.175208725175484
- type: nauc_ndcg_at_100_diff1
value: 74.07713451642955
- type: nauc_ndcg_at_100_max
value: 67.02028626335476
- type: nauc_ndcg_at_100_std
value: -17.36560972181693
- type: nauc_ndcg_at_10_diff1
value: 73.63235521598476
- type: nauc_ndcg_at_10_max
value: 67.8118473312638
- type: nauc_ndcg_at_10_std
value: -17.647560577355915
- type: nauc_ndcg_at_1_diff1
value: 77.19578272647183
- type: nauc_ndcg_at_1_max
value: 62.05252035478773
- type: nauc_ndcg_at_1_std
value: -20.790530940625267
- type: nauc_ndcg_at_20_diff1
value: 73.65300308228291
- type: nauc_ndcg_at_20_max
value: 67.18353402731985
- type: nauc_ndcg_at_20_std
value: -17.9240756389792
- type: nauc_ndcg_at_3_diff1
value: 73.73764900202292
- type: nauc_ndcg_at_3_max
value: 67.60840957876889
- type: nauc_ndcg_at_3_std
value: -17.962667543518933
- type: nauc_ndcg_at_5_diff1
value: 73.49040500302092
- type: nauc_ndcg_at_5_max
value: 67.41251918514402
- type: nauc_ndcg_at_5_std
value: -18.851877225955523
- type: nauc_precision_at_1000_diff1
value: -18.652906102973922
- type: nauc_precision_at_1000_max
value: 2.1701672475574885
- type: nauc_precision_at_1000_std
value: 61.713411950188835
- type: nauc_precision_at_100_diff1
value: 62.37565302288498
- type: nauc_precision_at_100_max
value: 76.96921843049006
- type: nauc_precision_at_100_std
value: 19.152009040219678
- type: nauc_precision_at_10_diff1
value: 68.14047344105212
- type: nauc_precision_at_10_max
value: 77.7177273849099
- type: nauc_precision_at_10_std
value: -9.124325941493698
- type: nauc_precision_at_1_diff1
value: 77.19578272647183
- type: nauc_precision_at_1_max
value: 62.05252035478773
- type: nauc_precision_at_1_std
value: -20.790530940625267
- type: nauc_precision_at_20_diff1
value: 65.38487456362745
- type: nauc_precision_at_20_max
value: 74.61122933443669
- type: nauc_precision_at_20_std
value: -8.129775929648341
- type: nauc_precision_at_3_diff1
value: 70.45937744142297
- type: nauc_precision_at_3_max
value: 73.03004233073901
- type: nauc_precision_at_3_std
value: -14.246554579025158
- type: nauc_precision_at_5_diff1
value: 69.02821772428955
- type: nauc_precision_at_5_max
value: 73.52949774726446
- type: nauc_precision_at_5_std
value: -16.355747231517757
- type: nauc_recall_at_1000_diff1
value: 35.804192824985755
- type: nauc_recall_at_1000_max
value: 61.367785756485894
- type: nauc_recall_at_1000_std
value: 54.01380822466869
- type: nauc_recall_at_100_diff1
value: 67.96210883597479
- type: nauc_recall_at_100_max
value: 82.38124823732169
- type: nauc_recall_at_100_std
value: 16.814922595309966
- type: nauc_recall_at_10_diff1
value: 68.21964459634341
- type: nauc_recall_at_10_max
value: 77.68301934858845
- type: nauc_recall_at_10_std
value: -9.430792913885066
- type: nauc_recall_at_1_diff1
value: 77.24970536833601
- type: nauc_recall_at_1_max
value: 62.07820573048406
- type: nauc_recall_at_1_std
value: -20.917086586335078
- type: nauc_recall_at_20_diff1
value: 66.60569906579487
- type: nauc_recall_at_20_max
value: 75.66163186604354
- type: nauc_recall_at_20_std
value: -9.09826205489828
- type: nauc_recall_at_3_diff1
value: 70.52323701841641
- type: nauc_recall_at_3_max
value: 73.03478107411232
- type: nauc_recall_at_3_std
value: -14.432325989967962
- type: nauc_recall_at_5_diff1
value: 69.08521261524373
- type: nauc_recall_at_5_max
value: 73.51150270382094
- type: nauc_recall_at_5_std
value: -16.569387503524368
- type: ndcg_at_1
value: 61.273999999999994
- type: ndcg_at_10
value: 73.90299999999999
- type: ndcg_at_100
value: 75.983
- type: ndcg_at_1000
value: 76.488
- type: ndcg_at_20
value: 74.921
- type: ndcg_at_3
value: 70.277
- type: ndcg_at_5
value: 72.172
- type: precision_at_1
value: 61.273999999999994
- type: precision_at_10
value: 8.641
- type: precision_at_100
value: 0.962
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.524
- type: precision_at_3
value: 25.517
- type: precision_at_5
value: 16.223000000000003
- type: recall_at_1
value: 61.236000000000004
- type: recall_at_10
value: 86.37700000000001
- type: recall_at_100
value: 96.054
- type: recall_at_1000
value: 99.887
- type: recall_at_20
value: 90.398
- type: recall_at_3
value: 76.51299999999999
- type: recall_at_5
value: 81.07900000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-spa)
type: facebook/mlqa
config: spa-spa
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 68.632
- type: map_at_1
value: 57.046
- type: map_at_10
value: 64.869
- type: map_at_100
value: 65.384
- type: map_at_1000
value: 65.413
- type: map_at_20
value: 65.185
- type: map_at_3
value: 63.178
- type: map_at_5
value: 64.12
- type: mrr_at_1
value: 57.05579889544848
- type: mrr_at_10
value: 64.8806425382317
- type: mrr_at_100
value: 65.39469233244084
- type: mrr_at_1000
value: 65.42342199403159
- type: mrr_at_20
value: 65.19634815919534
- type: mrr_at_3
value: 63.18796419729591
- type: mrr_at_5
value: 64.13159398209874
- type: nauc_map_at_1000_diff1
value: 73.23803038674018
- type: nauc_map_at_1000_max
value: 67.44156201421714
- type: nauc_map_at_1000_std
value: -8.60143026450049
- type: nauc_map_at_100_diff1
value: 73.22575613034235
- type: nauc_map_at_100_max
value: 67.44735143420195
- type: nauc_map_at_100_std
value: -8.576905069492895
- type: nauc_map_at_10_diff1
value: 73.11950129610865
- type: nauc_map_at_10_max
value: 67.45107232305055
- type: nauc_map_at_10_std
value: -8.799837857015392
- type: nauc_map_at_1_diff1
value: 76.18354072047988
- type: nauc_map_at_1_max
value: 65.03342186728786
- type: nauc_map_at_1_std
value: -10.867650288695796
- type: nauc_map_at_20_diff1
value: 73.21570748770948
- type: nauc_map_at_20_max
value: 67.50340321088724
- type: nauc_map_at_20_std
value: -8.594057184944676
- type: nauc_map_at_3_diff1
value: 73.17239276163892
- type: nauc_map_at_3_max
value: 67.06319504819103
- type: nauc_map_at_3_std
value: -9.883216310270528
- type: nauc_map_at_5_diff1
value: 73.11913507367727
- type: nauc_map_at_5_max
value: 67.27497019567078
- type: nauc_map_at_5_std
value: -9.497714822103118
- type: nauc_mrr_at_1000_diff1
value: 73.22971233311306
- type: nauc_mrr_at_1000_max
value: 67.42977229057223
- type: nauc_mrr_at_1000_std
value: -8.550068702273297
- type: nauc_mrr_at_100_diff1
value: 73.21744467317815
- type: nauc_mrr_at_100_max
value: 67.43557491068093
- type: nauc_mrr_at_100_std
value: -8.52559275190607
- type: nauc_mrr_at_10_diff1
value: 73.11075619726137
- type: nauc_mrr_at_10_max
value: 67.43889760205286
- type: nauc_mrr_at_10_std
value: -8.74617232559183
- type: nauc_mrr_at_1_diff1
value: 76.17529975949547
- type: nauc_mrr_at_1_max
value: 65.02401127001608
- type: nauc_mrr_at_1_std
value: -10.817814457633952
- type: nauc_mrr_at_20_diff1
value: 73.20689275225138
- type: nauc_mrr_at_20_max
value: 67.49111752272192
- type: nauc_mrr_at_20_std
value: -8.539827528410353
- type: nauc_mrr_at_3_diff1
value: 73.16291729623958
- type: nauc_mrr_at_3_max
value: 67.05300993427998
- type: nauc_mrr_at_3_std
value: -9.827915885680811
- type: nauc_mrr_at_5_diff1
value: 73.11055686484109
- type: nauc_mrr_at_5_max
value: 67.26299851089122
- type: nauc_mrr_at_5_std
value: -9.445190276650903
- type: nauc_ndcg_at_1000_diff1
value: 72.58833638407177
- type: nauc_ndcg_at_1000_max
value: 68.10447506371374
- type: nauc_ndcg_at_1000_std
value: -6.910306241546282
- type: nauc_ndcg_at_100_diff1
value: 72.24524849631476
- type: nauc_ndcg_at_100_max
value: 68.30659210081238
- type: nauc_ndcg_at_100_std
value: -6.04305364268931
- type: nauc_ndcg_at_10_diff1
value: 71.87363502582961
- type: nauc_ndcg_at_10_max
value: 68.5010009653693
- type: nauc_ndcg_at_10_std
value: -7.021281296450588
- type: nauc_ndcg_at_1_diff1
value: 76.17529975949547
- type: nauc_ndcg_at_1_max
value: 65.02401127001608
- type: nauc_ndcg_at_1_std
value: -10.817814457633952
- type: nauc_ndcg_at_20_diff1
value: 72.21241010439327
- type: nauc_ndcg_at_20_max
value: 68.71743274030551
- type: nauc_ndcg_at_20_std
value: -6.186629577195946
- type: nauc_ndcg_at_3_diff1
value: 72.08204674794459
- type: nauc_ndcg_at_3_max
value: 67.5958365046156
- type: nauc_ndcg_at_3_std
value: -9.576418336610345
- type: nauc_ndcg_at_5_diff1
value: 71.93179095844508
- type: nauc_ndcg_at_5_max
value: 68.01914639754217
- type: nauc_ndcg_at_5_std
value: -8.833768332910777
- type: nauc_precision_at_1000_diff1
value: 63.0051360227489
- type: nauc_precision_at_1000_max
value: 79.93532442313229
- type: nauc_precision_at_1000_std
value: 52.869517607133254
- type: nauc_precision_at_100_diff1
value: 62.43301501857154
- type: nauc_precision_at_100_max
value: 75.57280416668183
- type: nauc_precision_at_100_std
value: 26.758300486132747
- type: nauc_precision_at_10_diff1
value: 66.29806375971134
- type: nauc_precision_at_10_max
value: 73.40301413754797
- type: nauc_precision_at_10_std
value: 1.9858547295235462
- type: nauc_precision_at_1_diff1
value: 76.17529975949547
- type: nauc_precision_at_1_max
value: 65.02401127001608
- type: nauc_precision_at_1_std
value: -10.817814457633952
- type: nauc_precision_at_20_diff1
value: 67.05111836051105
- type: nauc_precision_at_20_max
value: 76.09783190824155
- type: nauc_precision_at_20_std
value: 9.906010659515564
- type: nauc_precision_at_3_diff1
value: 68.44186679250453
- type: nauc_precision_at_3_max
value: 69.30301351119388
- type: nauc_precision_at_3_std
value: -8.566522518882348
- type: nauc_precision_at_5_diff1
value: 67.51737199297388
- type: nauc_precision_at_5_max
value: 70.75887601590472
- type: nauc_precision_at_5_std
value: -6.278983102710238
- type: nauc_recall_at_1000_diff1
value: 65.12360093170948
- type: nauc_recall_at_1000_max
value: 82.60209843191132
- type: nauc_recall_at_1000_std
value: 51.740179583368636
- type: nauc_recall_at_100_diff1
value: 62.82007697326819
- type: nauc_recall_at_100_max
value: 76.04844844677562
- type: nauc_recall_at_100_std
value: 26.4678415019248
- type: nauc_recall_at_10_diff1
value: 66.28557566848767
- type: nauc_recall_at_10_max
value: 73.40302709828738
- type: nauc_recall_at_10_std
value: 1.9224272854613582
- type: nauc_recall_at_1_diff1
value: 76.18354072047988
- type: nauc_recall_at_1_max
value: 65.03342186728786
- type: nauc_recall_at_1_std
value: -10.867650288695796
- type: nauc_recall_at_20_diff1
value: 67.03430451094992
- type: nauc_recall_at_20_max
value: 76.09474005171319
- type: nauc_recall_at_20_std
value: 9.815888637851074
- type: nauc_recall_at_3_diff1
value: 68.44411411344718
- type: nauc_recall_at_3_max
value: 69.30502737137265
- type: nauc_recall_at_3_std
value: -8.629526329714132
- type: nauc_recall_at_5_diff1
value: 67.51469265953514
- type: nauc_recall_at_5_max
value: 70.76969893818111
- type: nauc_recall_at_5_std
value: -6.325600167105444
- type: ndcg_at_1
value: 57.056
- type: ndcg_at_10
value: 68.632
- type: ndcg_at_100
value: 71.202
- type: ndcg_at_1000
value: 71.97099999999999
- type: ndcg_at_20
value: 69.785
- type: ndcg_at_3
value: 65.131
- type: ndcg_at_5
value: 66.834
- type: precision_at_1
value: 57.056
- type: precision_at_10
value: 8.044
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.251
- type: precision_at_3
value: 23.589
- type: precision_at_5
value: 14.984
- type: recall_at_1
value: 57.046
- type: recall_at_10
value: 80.423
- type: recall_at_100
value: 92.582
- type: recall_at_1000
value: 98.638
- type: recall_at_20
value: 84.993
- type: recall_at_3
value: 70.758
- type: recall_at_5
value: 74.9
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-eng)
type: facebook/mlqa
config: spa-eng
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 68.765
- type: map_at_1
value: 56.538999999999994
- type: map_at_10
value: 64.816
- type: map_at_100
value: 65.325
- type: map_at_1000
value: 65.352
- type: map_at_20
value: 65.113
- type: map_at_3
value: 62.934999999999995
- type: map_at_5
value: 64.063
- type: mrr_at_1
value: 56.539120502569965
- type: mrr_at_10
value: 64.81561556661505
- type: mrr_at_100
value: 65.32464238613954
- type: mrr_at_1000
value: 65.35206516602133
- type: mrr_at_20
value: 65.11270445292227
- type: mrr_at_3
value: 62.935465448315384
- type: mrr_at_5
value: 64.06339234723022
- type: nauc_map_at_1000_diff1
value: 73.20701050428072
- type: nauc_map_at_1000_max
value: 67.32797480614404
- type: nauc_map_at_1000_std
value: -6.211540626528362
- type: nauc_map_at_100_diff1
value: 73.19497683923063
- type: nauc_map_at_100_max
value: 67.33392646467817
- type: nauc_map_at_100_std
value: -6.196671563900051
- type: nauc_map_at_10_diff1
value: 73.16010547612956
- type: nauc_map_at_10_max
value: 67.37793741307372
- type: nauc_map_at_10_std
value: -6.3443240322521675
- type: nauc_map_at_1_diff1
value: 76.63696578575964
- type: nauc_map_at_1_max
value: 65.08189618178105
- type: nauc_map_at_1_std
value: -8.594195451782733
- type: nauc_map_at_20_diff1
value: 73.15233479381568
- type: nauc_map_at_20_max
value: 67.3679607256072
- type: nauc_map_at_20_std
value: -6.175928265286352
- type: nauc_map_at_3_diff1
value: 73.14853380980746
- type: nauc_map_at_3_max
value: 67.10354198073468
- type: nauc_map_at_3_std
value: -7.409679815529866
- type: nauc_map_at_5_diff1
value: 73.13425961877715
- type: nauc_map_at_5_max
value: 67.22452899371224
- type: nauc_map_at_5_std
value: -6.895257774506354
- type: nauc_mrr_at_1000_diff1
value: 73.20701050428072
- type: nauc_mrr_at_1000_max
value: 67.32797480614404
- type: nauc_mrr_at_1000_std
value: -6.211540626528362
- type: nauc_mrr_at_100_diff1
value: 73.19497683923063
- type: nauc_mrr_at_100_max
value: 67.33392646467817
- type: nauc_mrr_at_100_std
value: -6.196671563900051
- type: nauc_mrr_at_10_diff1
value: 73.16010547612956
- type: nauc_mrr_at_10_max
value: 67.37793741307372
- type: nauc_mrr_at_10_std
value: -6.3443240322521675
- type: nauc_mrr_at_1_diff1
value: 76.63696578575964
- type: nauc_mrr_at_1_max
value: 65.08189618178105
- type: nauc_mrr_at_1_std
value: -8.594195451782733
- type: nauc_mrr_at_20_diff1
value: 73.15233479381568
- type: nauc_mrr_at_20_max
value: 67.3679607256072
- type: nauc_mrr_at_20_std
value: -6.175928265286352
- type: nauc_mrr_at_3_diff1
value: 73.14853380980746
- type: nauc_mrr_at_3_max
value: 67.10354198073468
- type: nauc_mrr_at_3_std
value: -7.409679815529866
- type: nauc_mrr_at_5_diff1
value: 73.13425961877715
- type: nauc_mrr_at_5_max
value: 67.22452899371224
- type: nauc_mrr_at_5_std
value: -6.895257774506354
- type: nauc_ndcg_at_1000_diff1
value: 72.44364625096874
- type: nauc_ndcg_at_1000_max
value: 67.93635761141552
- type: nauc_ndcg_at_1000_std
value: -4.616429464350954
- type: nauc_ndcg_at_100_diff1
value: 72.11352383758482
- type: nauc_ndcg_at_100_max
value: 68.1627312575955
- type: nauc_ndcg_at_100_std
value: -3.894213672131282
- type: nauc_ndcg_at_10_diff1
value: 71.8526850770812
- type: nauc_ndcg_at_10_max
value: 68.41366561888562
- type: nauc_ndcg_at_10_std
value: -4.472146861145989
- type: nauc_ndcg_at_1_diff1
value: 76.63696578575964
- type: nauc_ndcg_at_1_max
value: 65.08189618178105
- type: nauc_ndcg_at_1_std
value: -8.594195451782733
- type: nauc_ndcg_at_20_diff1
value: 71.76464418138866
- type: nauc_ndcg_at_20_max
value: 68.41174963313698
- type: nauc_ndcg_at_20_std
value: -3.7449762037540157
- type: nauc_ndcg_at_3_diff1
value: 71.93808990683131
- type: nauc_ndcg_at_3_max
value: 67.7010029507334
- type: nauc_ndcg_at_3_std
value: -6.971858419379321
- type: nauc_ndcg_at_5_diff1
value: 71.8505224811326
- type: nauc_ndcg_at_5_max
value: 67.97139549500251
- type: nauc_ndcg_at_5_std
value: -5.958491308070017
- type: nauc_precision_at_1000_diff1
value: 62.20956180320043
- type: nauc_precision_at_1000_max
value: 82.53412670611299
- type: nauc_precision_at_1000_std
value: 55.57278124999575
- type: nauc_precision_at_100_diff1
value: 62.03792857023201
- type: nauc_precision_at_100_max
value: 76.77130713424538
- type: nauc_precision_at_100_std
value: 26.674102719959564
- type: nauc_precision_at_10_diff1
value: 65.89798055049931
- type: nauc_precision_at_10_max
value: 73.41908620140674
- type: nauc_precision_at_10_std
value: 5.21818573283179
- type: nauc_precision_at_1_diff1
value: 76.63696578575964
- type: nauc_precision_at_1_max
value: 65.08189618178105
- type: nauc_precision_at_1_std
value: -8.594195451782733
- type: nauc_precision_at_20_diff1
value: 63.734308542647355
- type: nauc_precision_at_20_max
value: 74.69578825096144
- type: nauc_precision_at_20_std
value: 12.627842502659162
- type: nauc_precision_at_3_diff1
value: 67.91189666671904
- type: nauc_precision_at_3_max
value: 69.64986036783209
- type: nauc_precision_at_3_std
value: -5.505669087429055
- type: nauc_precision_at_5_diff1
value: 67.01880006360248
- type: nauc_precision_at_5_max
value: 70.78916423358686
- type: nauc_precision_at_5_std
value: -2.2273742736401045
- type: nauc_recall_at_1000_diff1
value: 62.20956180319936
- type: nauc_recall_at_1000_max
value: 82.53412670611287
- type: nauc_recall_at_1000_std
value: 55.57278124999549
- type: nauc_recall_at_100_diff1
value: 62.03792857023208
- type: nauc_recall_at_100_max
value: 76.77130713424577
- type: nauc_recall_at_100_std
value: 26.67410271995973
- type: nauc_recall_at_10_diff1
value: 65.8979805504994
- type: nauc_recall_at_10_max
value: 73.41908620140678
- type: nauc_recall_at_10_std
value: 5.2181857328318655
- type: nauc_recall_at_1_diff1
value: 76.63696578575964
- type: nauc_recall_at_1_max
value: 65.08189618178105
- type: nauc_recall_at_1_std
value: -8.594195451782733
- type: nauc_recall_at_20_diff1
value: 63.734308542647334
- type: nauc_recall_at_20_max
value: 74.69578825096123
- type: nauc_recall_at_20_std
value: 12.627842502658982
- type: nauc_recall_at_3_diff1
value: 67.91189666671897
- type: nauc_recall_at_3_max
value: 69.64986036783203
- type: nauc_recall_at_3_std
value: -5.505669087428989
- type: nauc_recall_at_5_diff1
value: 67.01880006360243
- type: nauc_recall_at_5_max
value: 70.78916423358686
- type: nauc_recall_at_5_std
value: -2.227374273640135
- type: ndcg_at_1
value: 56.538999999999994
- type: ndcg_at_10
value: 68.765
- type: ndcg_at_100
value: 71.314
- type: ndcg_at_1000
value: 72.038
- type: ndcg_at_20
value: 69.828
- type: ndcg_at_3
value: 64.937
- type: ndcg_at_5
value: 66.956
- type: precision_at_1
value: 56.538999999999994
- type: precision_at_10
value: 8.113
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.265
- type: precision_at_3
value: 23.567
- type: precision_at_5
value: 15.115
- type: recall_at_1
value: 56.538999999999994
- type: recall_at_10
value: 81.135
- type: recall_at_100
value: 93.223
- type: recall_at_1000
value: 98.896
- type: recall_at_20
value: 85.304
- type: recall_at_3
value: 70.702
- type: recall_at_5
value: 75.576
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-deu)
type: facebook/mlqa
config: eng-deu
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 69.298
- type: map_at_1
value: 58.553
- type: map_at_10
value: 65.769
- type: map_at_100
value: 66.298
- type: map_at_1000
value: 66.328
- type: map_at_20
value: 66.101
- type: map_at_3
value: 64.048
- type: map_at_5
value: 65.09
- type: mrr_at_1
value: 58.564148016840235
- type: mrr_at_10
value: 65.7685997066675
- type: mrr_at_100
value: 66.29874034432214
- type: mrr_at_1000
value: 66.32844979939088
- type: mrr_at_20
value: 66.10120513957821
- type: mrr_at_3
value: 64.04830489696437
- type: mrr_at_5
value: 65.08974074894746
- type: nauc_map_at_1000_diff1
value: 76.8409650183994
- type: nauc_map_at_1000_max
value: 71.86367015521367
- type: nauc_map_at_1000_std
value: -14.464881539957256
- type: nauc_map_at_100_diff1
value: 76.82536521842064
- type: nauc_map_at_100_max
value: 71.86811127965429
- type: nauc_map_at_100_std
value: -14.441105539722244
- type: nauc_map_at_10_diff1
value: 76.75522453447859
- type: nauc_map_at_10_max
value: 71.87677500176706
- type: nauc_map_at_10_std
value: -14.741331625103559
- type: nauc_map_at_1_diff1
value: 79.64060747740989
- type: nauc_map_at_1_max
value: 69.84278563569617
- type: nauc_map_at_1_std
value: -15.936904929655832
- type: nauc_map_at_20_diff1
value: 76.78894776059715
- type: nauc_map_at_20_max
value: 71.89637938044827
- type: nauc_map_at_20_std
value: -14.500564106990769
- type: nauc_map_at_3_diff1
value: 77.20562577450342
- type: nauc_map_at_3_max
value: 71.80578229361525
- type: nauc_map_at_3_std
value: -15.344134588512201
- type: nauc_map_at_5_diff1
value: 77.00480147367867
- type: nauc_map_at_5_max
value: 71.98335924076163
- type: nauc_map_at_5_std
value: -15.16537653041026
- type: nauc_mrr_at_1000_diff1
value: 76.84165367691193
- type: nauc_mrr_at_1000_max
value: 71.8642679499795
- type: nauc_mrr_at_1000_std
value: -14.461717954593158
- type: nauc_mrr_at_100_diff1
value: 76.8263363557998
- type: nauc_mrr_at_100_max
value: 71.86874522368626
- type: nauc_mrr_at_100_std
value: -14.437105168707426
- type: nauc_mrr_at_10_diff1
value: 76.75522453447859
- type: nauc_mrr_at_10_max
value: 71.87677500176706
- type: nauc_mrr_at_10_std
value: -14.741331625103559
- type: nauc_mrr_at_1_diff1
value: 79.65642669321981
- type: nauc_mrr_at_1_max
value: 69.89135358784799
- type: nauc_mrr_at_1_std
value: -15.919357002229589
- type: nauc_mrr_at_20_diff1
value: 76.78883171270601
- type: nauc_mrr_at_20_max
value: 71.89806887245291
- type: nauc_mrr_at_20_std
value: -14.497139746907905
- type: nauc_mrr_at_3_diff1
value: 77.20562577450342
- type: nauc_mrr_at_3_max
value: 71.80578229361525
- type: nauc_mrr_at_3_std
value: -15.344134588512201
- type: nauc_mrr_at_5_diff1
value: 77.00480147367867
- type: nauc_mrr_at_5_max
value: 71.98335924076163
- type: nauc_mrr_at_5_std
value: -15.16537653041026
- type: nauc_ndcg_at_1000_diff1
value: 76.07802417817047
- type: nauc_ndcg_at_1000_max
value: 72.31792804426776
- type: nauc_ndcg_at_1000_std
value: -13.049160715132244
- type: nauc_ndcg_at_100_diff1
value: 75.63343849116544
- type: nauc_ndcg_at_100_max
value: 72.48362076101817
- type: nauc_ndcg_at_100_std
value: -12.089600993516777
- type: nauc_ndcg_at_10_diff1
value: 75.23387929929208
- type: nauc_ndcg_at_10_max
value: 72.51436288271807
- type: nauc_ndcg_at_10_std
value: -13.624132103038104
- type: nauc_ndcg_at_1_diff1
value: 79.65642669321981
- type: nauc_ndcg_at_1_max
value: 69.89135358784799
- type: nauc_ndcg_at_1_std
value: -15.919357002229589
- type: nauc_ndcg_at_20_diff1
value: 75.32926047656296
- type: nauc_ndcg_at_20_max
value: 72.61254165918145
- type: nauc_ndcg_at_20_std
value: -12.683157599238701
- type: nauc_ndcg_at_3_diff1
value: 76.3089337665469
- type: nauc_ndcg_at_3_max
value: 72.40014674426054
- type: nauc_ndcg_at_3_std
value: -15.08624226353458
- type: nauc_ndcg_at_5_diff1
value: 75.88857331641834
- type: nauc_ndcg_at_5_max
value: 72.7719386827224
- type: nauc_ndcg_at_5_std
value: -14.70546521089236
- type: nauc_precision_at_1000_diff1
value: 59.66563879069911
- type: nauc_precision_at_1000_max
value: 74.57123562956772
- type: nauc_precision_at_1000_std
value: 58.61396866718965
- type: nauc_precision_at_100_diff1
value: 62.8695896550042
- type: nauc_precision_at_100_max
value: 77.81408796785
- type: nauc_precision_at_100_std
value: 23.819735672317826
- type: nauc_precision_at_10_diff1
value: 68.08051625224569
- type: nauc_precision_at_10_max
value: 75.14432336036869
- type: nauc_precision_at_10_std
value: -7.97602345252735
- type: nauc_precision_at_1_diff1
value: 79.65642669321981
- type: nauc_precision_at_1_max
value: 69.89135358784799
- type: nauc_precision_at_1_std
value: -15.919357002229589
- type: nauc_precision_at_20_diff1
value: 66.7168005185165
- type: nauc_precision_at_20_max
value: 76.58522761697147
- type: nauc_precision_at_20_std
value: -0.17923428317323292
- type: nauc_precision_at_3_diff1
value: 73.23394851561207
- type: nauc_precision_at_3_max
value: 74.32517846819215
- type: nauc_precision_at_3_std
value: -14.142301336188348
- type: nauc_precision_at_5_diff1
value: 71.5666882547012
- type: nauc_precision_at_5_max
value: 75.71098205440033
- type: nauc_precision_at_5_std
value: -12.808362513638052
- type: nauc_recall_at_1000_diff1
value: 71.73736112325805
- type: nauc_recall_at_1000_max
value: 86.70743436225898
- type: nauc_recall_at_1000_std
value: 54.45802578371167
- type: nauc_recall_at_100_diff1
value: 64.07053861428128
- type: nauc_recall_at_100_max
value: 78.8348308099261
- type: nauc_recall_at_100_std
value: 22.72263677785103
- type: nauc_recall_at_10_diff1
value: 68.20272901407903
- type: nauc_recall_at_10_max
value: 75.16315335381938
- type: nauc_recall_at_10_std
value: -8.060716748913386
- type: nauc_recall_at_1_diff1
value: 79.64060747740989
- type: nauc_recall_at_1_max
value: 69.84278563569617
- type: nauc_recall_at_1_std
value: -15.936904929655832
- type: nauc_recall_at_20_diff1
value: 66.88206981973654
- type: nauc_recall_at_20_max
value: 76.54824917595687
- type: nauc_recall_at_20_std
value: -0.40294589316962287
- type: nauc_recall_at_3_diff1
value: 73.33076087258938
- type: nauc_recall_at_3_max
value: 74.33763112508771
- type: nauc_recall_at_3_std
value: -14.213355414905399
- type: nauc_recall_at_5_diff1
value: 71.67487623469464
- type: nauc_recall_at_5_max
value: 75.72770292516316
- type: nauc_recall_at_5_std
value: -12.887572274644818
- type: ndcg_at_1
value: 58.56400000000001
- type: ndcg_at_10
value: 69.298
- type: ndcg_at_100
value: 71.95899999999999
- type: ndcg_at_1000
value: 72.735
- type: ndcg_at_20
value: 70.50699999999999
- type: ndcg_at_3
value: 65.81700000000001
- type: ndcg_at_5
value: 67.681
- type: precision_at_1
value: 58.56400000000001
- type: precision_at_10
value: 8.039
- type: precision_at_100
value: 0.931
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.259
- type: precision_at_3
value: 23.65
- type: precision_at_5
value: 15.09
- type: recall_at_1
value: 58.553
- type: recall_at_10
value: 80.368
- type: recall_at_100
value: 93.013
- type: recall_at_1000
value: 99.092
- type: recall_at_20
value: 85.143
- type: recall_at_3
value: 70.928
- type: recall_at_5
value: 75.42699999999999
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-spa)
type: facebook/mlqa
config: eng-spa
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 66.374
- type: map_at_1
value: 55.494
- type: map_at_10
value: 62.763999999999996
- type: map_at_100
value: 63.33
- type: map_at_1000
value: 63.36000000000001
- type: map_at_20
value: 63.104000000000006
- type: map_at_3
value: 61.065000000000005
- type: map_at_5
value: 62.053000000000004
- type: mrr_at_1
value: 55.49419158255571
- type: mrr_at_10
value: 62.765195140457095
- type: mrr_at_100
value: 63.33083349354529
- type: mrr_at_1000
value: 63.3611897014839
- type: mrr_at_20
value: 63.10543590095977
- type: mrr_at_3
value: 61.06455913159412
- type: mrr_at_5
value: 62.052942296705474
- type: nauc_map_at_1000_diff1
value: 75.04200018088618
- type: nauc_map_at_1000_max
value: 70.49937782771909
- type: nauc_map_at_1000_std
value: -5.257206317083184
- type: nauc_map_at_100_diff1
value: 75.02786834256312
- type: nauc_map_at_100_max
value: 70.5016476500189
- type: nauc_map_at_100_std
value: -5.228770832077681
- type: nauc_map_at_10_diff1
value: 74.9626552701647
- type: nauc_map_at_10_max
value: 70.56253732243214
- type: nauc_map_at_10_std
value: -5.359037281768563
- type: nauc_map_at_1_diff1
value: 78.46858307815857
- type: nauc_map_at_1_max
value: 69.03908373759435
- type: nauc_map_at_1_std
value: -7.479412070736642
- type: nauc_map_at_20_diff1
value: 74.98121458084796
- type: nauc_map_at_20_max
value: 70.51885366822565
- type: nauc_map_at_20_std
value: -5.286051287133815
- type: nauc_map_at_3_diff1
value: 75.36078454383373
- type: nauc_map_at_3_max
value: 70.34997144546014
- type: nauc_map_at_3_std
value: -6.663517224039184
- type: nauc_map_at_5_diff1
value: 75.0274512828238
- type: nauc_map_at_5_max
value: 70.45292551591874
- type: nauc_map_at_5_std
value: -6.029224488640147
- type: nauc_mrr_at_1000_diff1
value: 75.04018768469983
- type: nauc_mrr_at_1000_max
value: 70.49855509132635
- type: nauc_mrr_at_1000_std
value: -5.258929961409948
- type: nauc_mrr_at_100_diff1
value: 75.02605732810112
- type: nauc_mrr_at_100_max
value: 70.50082584929103
- type: nauc_mrr_at_100_std
value: -5.2304917988542154
- type: nauc_mrr_at_10_diff1
value: 74.96079080525713
- type: nauc_mrr_at_10_max
value: 70.56167294920391
- type: nauc_mrr_at_10_std
value: -5.360650630655072
- type: nauc_mrr_at_1_diff1
value: 78.46858307815857
- type: nauc_mrr_at_1_max
value: 69.03908373759435
- type: nauc_mrr_at_1_std
value: -7.479412070736642
- type: nauc_mrr_at_20_diff1
value: 74.97939804960517
- type: nauc_mrr_at_20_max
value: 70.51804078965411
- type: nauc_mrr_at_20_std
value: -5.287681954889177
- type: nauc_mrr_at_3_diff1
value: 75.36078454383373
- type: nauc_mrr_at_3_max
value: 70.34997144546014
- type: nauc_mrr_at_3_std
value: -6.663517224039184
- type: nauc_mrr_at_5_diff1
value: 75.0274512828238
- type: nauc_mrr_at_5_max
value: 70.45292551591874
- type: nauc_mrr_at_5_std
value: -6.029224488640147
- type: nauc_ndcg_at_1000_diff1
value: 74.22106834748942
- type: nauc_ndcg_at_1000_max
value: 70.93625922934912
- type: nauc_ndcg_at_1000_std
value: -3.4878399005946017
- type: nauc_ndcg_at_100_diff1
value: 73.74068883646733
- type: nauc_ndcg_at_100_max
value: 71.02357018347472
- type: nauc_ndcg_at_100_std
value: -2.462293184201324
- type: nauc_ndcg_at_10_diff1
value: 73.40967965536565
- type: nauc_ndcg_at_10_max
value: 71.29379828672067
- type: nauc_ndcg_at_10_std
value: -3.295547756383108
- type: nauc_ndcg_at_1_diff1
value: 78.46858307815857
- type: nauc_ndcg_at_1_max
value: 69.03908373759435
- type: nauc_ndcg_at_1_std
value: -7.479412070736642
- type: nauc_ndcg_at_20_diff1
value: 73.45790057693699
- type: nauc_ndcg_at_20_max
value: 71.16598432419126
- type: nauc_ndcg_at_20_std
value: -2.962877157646097
- type: nauc_ndcg_at_3_diff1
value: 74.30696173964847
- type: nauc_ndcg_at_3_max
value: 70.79878978459556
- type: nauc_ndcg_at_3_std
value: -6.297286578628299
- type: nauc_ndcg_at_5_diff1
value: 73.65858211199816
- type: nauc_ndcg_at_5_max
value: 71.01122417463776
- type: nauc_ndcg_at_5_std
value: -5.075990882646765
- type: nauc_precision_at_1000_diff1
value: 68.71065091972568
- type: nauc_precision_at_1000_max
value: 81.38173585624777
- type: nauc_precision_at_1000_std
value: 58.035497889797895
- type: nauc_precision_at_100_diff1
value: 61.93634256957017
- type: nauc_precision_at_100_max
value: 74.84191770203093
- type: nauc_precision_at_100_std
value: 31.3325983123831
- type: nauc_precision_at_10_diff1
value: 66.68247010944937
- type: nauc_precision_at_10_max
value: 74.48773524654571
- type: nauc_precision_at_10_std
value: 6.560421880785153
- type: nauc_precision_at_1_diff1
value: 78.46858307815857
- type: nauc_precision_at_1_max
value: 69.03908373759435
- type: nauc_precision_at_1_std
value: -7.479412070736642
- type: nauc_precision_at_20_diff1
value: 65.51592872758067
- type: nauc_precision_at_20_max
value: 74.50684066823096
- type: nauc_precision_at_20_std
value: 10.830479877698208
- type: nauc_precision_at_3_diff1
value: 70.89587884861588
- type: nauc_precision_at_3_max
value: 72.25310558370424
- type: nauc_precision_at_3_std
value: -5.0796100900749765
- type: nauc_precision_at_5_diff1
value: 68.71885719845497
- type: nauc_precision_at_5_max
value: 73.02601751485672
- type: nauc_precision_at_5_std
value: -1.4382681421626857
- type: nauc_recall_at_1000_diff1
value: 71.95510299834734
- type: nauc_recall_at_1000_max
value: 84.03647166092985
- type: nauc_recall_at_1000_std
value: 56.87490604776847
- type: nauc_recall_at_100_diff1
value: 62.446624924715955
- type: nauc_recall_at_100_max
value: 75.25666892464507
- type: nauc_recall_at_100_std
value: 31.068789794554686
- type: nauc_recall_at_10_diff1
value: 66.70676336328988
- type: nauc_recall_at_10_max
value: 74.4963699656397
- type: nauc_recall_at_10_std
value: 6.57498399706916
- type: nauc_recall_at_1_diff1
value: 78.46858307815857
- type: nauc_recall_at_1_max
value: 69.03908373759435
- type: nauc_recall_at_1_std
value: -7.479412070736642
- type: nauc_recall_at_20_diff1
value: 65.54082767974772
- type: nauc_recall_at_20_max
value: 74.5111529838772
- type: nauc_recall_at_20_std
value: 10.84574829707354
- type: nauc_recall_at_3_diff1
value: 70.89587884861584
- type: nauc_recall_at_3_max
value: 72.25310558370421
- type: nauc_recall_at_3_std
value: -5.07961009007491
- type: nauc_recall_at_5_diff1
value: 68.71885719845501
- type: nauc_recall_at_5_max
value: 73.02601751485666
- type: nauc_recall_at_5_std
value: -1.4382681421626995
- type: ndcg_at_1
value: 55.494
- type: ndcg_at_10
value: 66.374
- type: ndcg_at_100
value: 69.254
- type: ndcg_at_1000
value: 70.136
- type: ndcg_at_20
value: 67.599
- type: ndcg_at_3
value: 62.863
- type: ndcg_at_5
value: 64.644
- type: precision_at_1
value: 55.494
- type: precision_at_10
value: 7.776
- type: precision_at_100
value: 0.9159999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.1290000000000004
- type: precision_at_3
value: 22.688
- type: precision_at_5
value: 14.477
- type: recall_at_1
value: 55.494
- type: recall_at_10
value: 77.747
- type: recall_at_100
value: 91.535
- type: recall_at_1000
value: 98.619
- type: recall_at_20
value: 82.565
- type: recall_at_3
value: 68.063
- type: recall_at_5
value: 72.386
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-eng)
type: facebook/mlqa
config: eng-eng
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 64.723
- type: map_at_1
value: 54.308
- type: map_at_10
value: 61.26200000000001
- type: map_at_100
value: 61.82299999999999
- type: map_at_1000
value: 61.856
- type: map_at_20
value: 61.575
- type: map_at_3
value: 59.565
- type: map_at_5
value: 60.561
- type: mrr_at_1
value: 54.31704368848212
- type: mrr_at_10
value: 61.26520216098834
- type: mrr_at_100
value: 61.82588321127103
- type: mrr_at_1000
value: 61.859333030574334
- type: mrr_at_20
value: 61.57780339921337
- type: mrr_at_3
value: 59.569446842801646
- type: mrr_at_5
value: 60.56323029989004
- type: nauc_map_at_1000_diff1
value: 74.21413722468635
- type: nauc_map_at_1000_max
value: 70.41741227882316
- type: nauc_map_at_1000_std
value: -2.5438707209848506
- type: nauc_map_at_100_diff1
value: 74.19812315947975
- type: nauc_map_at_100_max
value: 70.41589146728445
- type: nauc_map_at_100_std
value: -2.5336117059429553
- type: nauc_map_at_10_diff1
value: 74.21810561152937
- type: nauc_map_at_10_max
value: 70.48816115200171
- type: nauc_map_at_10_std
value: -2.7443834681406734
- type: nauc_map_at_1_diff1
value: 77.69378738778958
- type: nauc_map_at_1_max
value: 68.64652310701173
- type: nauc_map_at_1_std
value: -4.667071946448379
- type: nauc_map_at_20_diff1
value: 74.16105697562438
- type: nauc_map_at_20_max
value: 70.42491994631179
- type: nauc_map_at_20_std
value: -2.6070416022440472
- type: nauc_map_at_3_diff1
value: 74.60449392878863
- type: nauc_map_at_3_max
value: 70.39888609914269
- type: nauc_map_at_3_std
value: -3.5401151125723986
- type: nauc_map_at_5_diff1
value: 74.2423420992663
- type: nauc_map_at_5_max
value: 70.36574501826757
- type: nauc_map_at_5_std
value: -3.2707393116898964
- type: nauc_mrr_at_1000_diff1
value: 74.21029843731323
- type: nauc_mrr_at_1000_max
value: 70.43020492688913
- type: nauc_mrr_at_1000_std
value: -2.526895582202081
- type: nauc_mrr_at_100_diff1
value: 74.19440960479243
- type: nauc_mrr_at_100_max
value: 70.4288998824232
- type: nauc_mrr_at_100_std
value: -2.5160929945118107
- type: nauc_mrr_at_10_diff1
value: 74.2141357266166
- type: nauc_mrr_at_10_max
value: 70.5005683347807
- type: nauc_mrr_at_10_std
value: -2.727154557882168
- type: nauc_mrr_at_1_diff1
value: 77.69891248239793
- type: nauc_mrr_at_1_max
value: 68.68255231164922
- type: nauc_mrr_at_1_std
value: -4.630226727154317
- type: nauc_mrr_at_20_diff1
value: 74.15705434409723
- type: nauc_mrr_at_20_max
value: 70.43741835972747
- type: nauc_mrr_at_20_std
value: -2.5896756472464495
- type: nauc_mrr_at_3_diff1
value: 74.5981844349412
- type: nauc_mrr_at_3_max
value: 70.41834937080564
- type: nauc_mrr_at_3_std
value: -3.5161656408031163
- type: nauc_mrr_at_5_diff1
value: 74.23847535424844
- type: nauc_mrr_at_5_max
value: 70.37763810013656
- type: nauc_mrr_at_5_std
value: -3.2560955164581733
- type: nauc_ndcg_at_1000_diff1
value: 73.20994496725493
- type: nauc_ndcg_at_1000_max
value: 70.8903016277125
- type: nauc_ndcg_at_1000_std
value: -0.625772298462309
- type: nauc_ndcg_at_100_diff1
value: 72.6847141682645
- type: nauc_ndcg_at_100_max
value: 70.86564422034162
- type: nauc_ndcg_at_100_std
value: -0.07195786766326141
- type: nauc_ndcg_at_10_diff1
value: 72.78806493754281
- type: nauc_ndcg_at_10_max
value: 71.21957067926769
- type: nauc_ndcg_at_10_std
value: -1.2760418313382227
- type: nauc_ndcg_at_1_diff1
value: 77.69891248239793
- type: nauc_ndcg_at_1_max
value: 68.68255231164922
- type: nauc_ndcg_at_1_std
value: -4.630226727154317
- type: nauc_ndcg_at_20_diff1
value: 72.52082440882546
- type: nauc_ndcg_at_20_max
value: 70.98185004796734
- type: nauc_ndcg_at_20_std
value: -0.6908280874815464
- type: nauc_ndcg_at_3_diff1
value: 73.59870660843939
- type: nauc_ndcg_at_3_max
value: 70.94391957288654
- type: nauc_ndcg_at_3_std
value: -3.147723179140428
- type: nauc_ndcg_at_5_diff1
value: 72.90122868193457
- type: nauc_ndcg_at_5_max
value: 70.89376368965165
- type: nauc_ndcg_at_5_std
value: -2.6451807385626744
- type: nauc_precision_at_1000_diff1
value: 58.14737201864067
- type: nauc_precision_at_1000_max
value: 78.79011251144826
- type: nauc_precision_at_1000_std
value: 59.98985420476577
- type: nauc_precision_at_100_diff1
value: 59.21069121644552
- type: nauc_precision_at_100_max
value: 73.00557835912306
- type: nauc_precision_at_100_std
value: 26.85027406282173
- type: nauc_precision_at_10_diff1
value: 66.8760831023675
- type: nauc_precision_at_10_max
value: 74.21167950452596
- type: nauc_precision_at_10_std
value: 5.453652499335947
- type: nauc_precision_at_1_diff1
value: 77.69891248239793
- type: nauc_precision_at_1_max
value: 68.68255231164922
- type: nauc_precision_at_1_std
value: -4.630226727154317
- type: nauc_precision_at_20_diff1
value: 64.3118559132602
- type: nauc_precision_at_20_max
value: 73.33078184673825
- type: nauc_precision_at_20_std
value: 9.993299523049402
- type: nauc_precision_at_3_diff1
value: 70.38667185155593
- type: nauc_precision_at_3_max
value: 72.66495006030951
- type: nauc_precision_at_3_std
value: -1.8532839591326276
- type: nauc_precision_at_5_diff1
value: 68.12161337583686
- type: nauc_precision_at_5_max
value: 72.65644960375046
- type: nauc_precision_at_5_std
value: -0.33317164167012875
- type: nauc_recall_at_1000_diff1
value: 61.63204394739985
- type: nauc_recall_at_1000_max
value: 81.77241537319897
- type: nauc_recall_at_1000_std
value: 58.44841544062308
- type: nauc_recall_at_100_diff1
value: 59.72072697224705
- type: nauc_recall_at_100_max
value: 73.28519507061553
- type: nauc_recall_at_100_std
value: 26.27318390763456
- type: nauc_recall_at_10_diff1
value: 66.9757135465418
- type: nauc_recall_at_10_max
value: 74.21919493374149
- type: nauc_recall_at_10_std
value: 5.323369605377166
- type: nauc_recall_at_1_diff1
value: 77.69378738778958
- type: nauc_recall_at_1_max
value: 68.64652310701173
- type: nauc_recall_at_1_std
value: -4.667071946448379
- type: nauc_recall_at_20_diff1
value: 64.42290081731899
- type: nauc_recall_at_20_max
value: 73.3358289439033
- type: nauc_recall_at_20_std
value: 9.846598361586073
- type: nauc_recall_at_3_diff1
value: 70.41211290964785
- type: nauc_recall_at_3_max
value: 72.64451776775402
- type: nauc_recall_at_3_std
value: -1.916280959835826
- type: nauc_recall_at_5_diff1
value: 68.20695272727916
- type: nauc_recall_at_5_max
value: 72.66404224006101
- type: nauc_recall_at_5_std
value: -0.431125323007886
- type: ndcg_at_1
value: 54.31700000000001
- type: ndcg_at_10
value: 64.723
- type: ndcg_at_100
value: 67.648
- type: ndcg_at_1000
value: 68.619
- type: ndcg_at_20
value: 65.85499999999999
- type: ndcg_at_3
value: 61.244
- type: ndcg_at_5
value: 63.038000000000004
- type: precision_at_1
value: 54.31700000000001
- type: precision_at_10
value: 7.564
- type: precision_at_100
value: 0.898
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 4.005
- type: precision_at_3
value: 22.034000000000002
- type: precision_at_5
value: 14.093
- type: recall_at_1
value: 54.308
- type: recall_at_10
value: 75.622
- type: recall_at_100
value: 89.744
- type: recall_at_1000
value: 97.539
- type: recall_at_20
value: 80.085
- type: recall_at_3
value: 66.09
- type: recall_at_5
value: 70.446
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P (de)
type: reciTAL/mlsum
config: de
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: main_score
value: 41.267647761702854
- type: v_measure
value: 41.267647761702854
- type: v_measure_std
value: 10.93390895077248
- type: main_score
value: 40.07927325071353
- type: v_measure
value: 40.07927325071353
- type: v_measure_std
value: 9.296680835266145
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P (fr)
type: reciTAL/mlsum
config: fr
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: main_score
value: 44.68714862333979
- type: v_measure
value: 44.68714862333979
- type: v_measure_std
value: 1.811036989797814
- type: main_score
value: 44.88484854069901
- type: v_measure
value: 44.88484854069901
- type: v_measure_std
value: 2.3704247819781843
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P (ru)
type: reciTAL/mlsum
config: ru
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: main_score
value: 41.92518785753813
- type: v_measure
value: 41.92518785753813
- type: v_measure_std
value: 5.9356661900220775
- type: main_score
value: 43.97657450929179
- type: v_measure
value: 43.97657450929179
- type: v_measure_std
value: 6.087547931333613
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P (es)
type: reciTAL/mlsum
config: es
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: main_score
value: 48.69875719812033
- type: v_measure
value: 48.69875719812033
- type: v_measure_std
value: 1.204253881950113
- type: main_score
value: 48.41108671948728
- type: v_measure
value: 48.41108671948728
- type: v_measure_std
value: 1.3848320630151243
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking (default)
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6
metrics:
- type: map
value: 21.050447576170395
- type: mrr
value: 20.201984126984126
- type: main_score
value: 21.050447576170395
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval (default)
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: main_score
value: 79.687
- type: map_at_1
value: 66.872
- type: map_at_10
value: 75.949
- type: map_at_100
value: 76.25
- type: map_at_1000
value: 76.259
- type: map_at_20
value: 76.145
- type: map_at_3
value: 74.01299999999999
- type: map_at_5
value: 75.232
- type: mrr_at_1
value: 69.18338108882521
- type: mrr_at_10
value: 76.5424227952881
- type: mrr_at_100
value: 76.8019342792628
- type: mrr_at_1000
value: 76.81002278342808
- type: mrr_at_20
value: 76.7115234815896
- type: mrr_at_3
value: 74.83046800382044
- type: mrr_at_5
value: 75.88490926456515
- type: nauc_map_at_1000_diff1
value: 78.06933310424179
- type: nauc_map_at_1000_max
value: 49.392948209665896
- type: nauc_map_at_1000_std
value: -15.126109322591166
- type: nauc_map_at_100_diff1
value: 78.06612779298378
- type: nauc_map_at_100_max
value: 49.40761618630397
- type: nauc_map_at_100_std
value: -15.099282408159349
- type: nauc_map_at_10_diff1
value: 77.94565685470538
- type: nauc_map_at_10_max
value: 49.50559610363201
- type: nauc_map_at_10_std
value: -15.182130695916355
- type: nauc_map_at_1_diff1
value: 79.84814509858211
- type: nauc_map_at_1_max
value: 40.78978466656547
- type: nauc_map_at_1_std
value: -19.96189264026715
- type: nauc_map_at_20_diff1
value: 78.03597839981245
- type: nauc_map_at_20_max
value: 49.49477427223376
- type: nauc_map_at_20_std
value: -15.084990000838378
- type: nauc_map_at_3_diff1
value: 78.0637014655507
- type: nauc_map_at_3_max
value: 48.63214001973341
- type: nauc_map_at_3_std
value: -17.093950563306596
- type: nauc_map_at_5_diff1
value: 77.94068229240348
- type: nauc_map_at_5_max
value: 49.38930719689204
- type: nauc_map_at_5_std
value: -15.9919454201954
- type: nauc_mrr_at_1000_diff1
value: 78.34582398092816
- type: nauc_mrr_at_1000_max
value: 49.623566992784156
- type: nauc_mrr_at_1000_std
value: -14.381347765493265
- type: nauc_mrr_at_100_diff1
value: 78.3429966714221
- type: nauc_mrr_at_100_max
value: 49.63684922240546
- type: nauc_mrr_at_100_std
value: -14.354914066301236
- type: nauc_mrr_at_10_diff1
value: 78.2208070219624
- type: nauc_mrr_at_10_max
value: 49.77720536573364
- type: nauc_mrr_at_10_std
value: -14.316233764741812
- type: nauc_mrr_at_1_diff1
value: 80.22305496572142
- type: nauc_mrr_at_1_max
value: 44.30231210192536
- type: nauc_mrr_at_1_std
value: -18.942549914934492
- type: nauc_mrr_at_20_diff1
value: 78.31006724240147
- type: nauc_mrr_at_20_max
value: 49.72338465276142
- type: nauc_mrr_at_20_std
value: -14.30722621948953
- type: nauc_mrr_at_3_diff1
value: 78.39832634634523
- type: nauc_mrr_at_3_max
value: 49.24985961036677
- type: nauc_mrr_at_3_std
value: -15.966286866763191
- type: nauc_mrr_at_5_diff1
value: 78.2406507247798
- type: nauc_mrr_at_5_max
value: 49.71276359754787
- type: nauc_mrr_at_5_std
value: -14.979526226149698
- type: nauc_ndcg_at_1000_diff1
value: 77.74892471071016
- type: nauc_ndcg_at_1000_max
value: 51.11543344053061
- type: nauc_ndcg_at_1000_std
value: -12.208878737005096
- type: nauc_ndcg_at_100_diff1
value: 77.67462502211228
- type: nauc_ndcg_at_100_max
value: 51.593977338939034
- type: nauc_ndcg_at_100_std
value: -11.312126179513802
- type: nauc_ndcg_at_10_diff1
value: 77.0571291760012
- type: nauc_ndcg_at_10_max
value: 52.35435572808972
- type: nauc_ndcg_at_10_std
value: -11.33242546164059
- type: nauc_ndcg_at_1_diff1
value: 80.22305496572142
- type: nauc_ndcg_at_1_max
value: 44.30231210192536
- type: nauc_ndcg_at_1_std
value: -18.942549914934492
- type: nauc_ndcg_at_20_diff1
value: 77.4141216117471
- type: nauc_ndcg_at_20_max
value: 52.340600871365375
- type: nauc_ndcg_at_20_std
value: -10.989010161550912
- type: nauc_ndcg_at_3_diff1
value: 77.43971989259062
- type: nauc_ndcg_at_3_max
value: 50.59251358320663
- type: nauc_ndcg_at_3_std
value: -15.59337960636058
- type: nauc_ndcg_at_5_diff1
value: 77.12174287031847
- type: nauc_ndcg_at_5_max
value: 51.97108510288907
- type: nauc_ndcg_at_5_std
value: -13.474902612427167
- type: nauc_precision_at_1000_diff1
value: -19.36793534929367
- type: nauc_precision_at_1000_max
value: 11.803383262344036
- type: nauc_precision_at_1000_std
value: 24.304436015177046
- type: nauc_precision_at_100_diff1
value: -6.273790806909921
- type: nauc_precision_at_100_max
value: 23.372606271300747
- type: nauc_precision_at_100_std
value: 29.085768971612342
- type: nauc_precision_at_10_diff1
value: 21.67045907336595
- type: nauc_precision_at_10_max
value: 41.68948432407223
- type: nauc_precision_at_10_std
value: 17.837055074458092
- type: nauc_precision_at_1_diff1
value: 80.22305496572142
- type: nauc_precision_at_1_max
value: 44.30231210192536
- type: nauc_precision_at_1_std
value: -18.942549914934492
- type: nauc_precision_at_20_diff1
value: 12.577671896684803
- type: nauc_precision_at_20_max
value: 37.44944702246691
- type: nauc_precision_at_20_std
value: 23.635897665206087
- type: nauc_precision_at_3_diff1
value: 47.165335112814056
- type: nauc_precision_at_3_max
value: 47.0458691263379
- type: nauc_precision_at_3_std
value: -3.3181861146890217
- type: nauc_precision_at_5_diff1
value: 35.406205343514806
- type: nauc_precision_at_5_max
value: 45.56549449285401
- type: nauc_precision_at_5_std
value: 5.612378074562386
- type: nauc_recall_at_1000_diff1
value: 72.32762520815842
- type: nauc_recall_at_1000_max
value: 85.64979256307343
- type: nauc_recall_at_1000_std
value: 73.61925297037476
- type: nauc_recall_at_100_diff1
value: 72.31946328709962
- type: nauc_recall_at_100_max
value: 83.76576070068353
- type: nauc_recall_at_100_std
value: 57.39376538662535
- type: nauc_recall_at_10_diff1
value: 69.51307788072499
- type: nauc_recall_at_10_max
value: 69.60124733654142
- type: nauc_recall_at_10_std
value: 13.483540424716892
- type: nauc_recall_at_1_diff1
value: 79.84814509858211
- type: nauc_recall_at_1_max
value: 40.78978466656547
- type: nauc_recall_at_1_std
value: -19.96189264026715
- type: nauc_recall_at_20_diff1
value: 70.92168324710599
- type: nauc_recall_at_20_max
value: 76.09106252420084
- type: nauc_recall_at_20_std
value: 25.406842300761447
- type: nauc_recall_at_3_diff1
value: 74.1212680517145
- type: nauc_recall_at_3_max
value: 56.24921832879403
- type: nauc_recall_at_3_std
value: -11.55542913578436
- type: nauc_recall_at_5_diff1
value: 72.31262959872993
- type: nauc_recall_at_5_max
value: 62.761214896697915
- type: nauc_recall_at_5_std
value: -3.280167584070396
- type: ndcg_at_1
value: 69.18299999999999
- type: ndcg_at_10
value: 79.687
- type: ndcg_at_100
value: 81.062
- type: ndcg_at_1000
value: 81.312
- type: ndcg_at_20
value: 80.34599999999999
- type: ndcg_at_3
value: 75.98700000000001
- type: ndcg_at_5
value: 78.039
- type: precision_at_1
value: 69.18299999999999
- type: precision_at_10
value: 9.636
- type: precision_at_100
value: 1.0330000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 4.958
- type: precision_at_3
value: 28.515
- type: precision_at_5
value: 18.201
- type: recall_at_1
value: 66.872
- type: recall_at_10
value: 90.688
- type: recall_at_100
value: 96.99
- type: recall_at_1000
value: 98.958
- type: recall_at_20
value: 93.21199999999999
- type: recall_at_3
value: 80.84599999999999
- type: recall_at_5
value: 85.732
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 21.861
- type: map_at_10
value: 34.008
- type: map_at_100
value: 35.174
- type: map_at_1000
value: 35.224
- type: map_at_20
value: 34.705999999999996
- type: map_at_3
value: 30.209000000000003
- type: map_at_5
value: 32.351
- type: mrr_at_1
value: 22.493
- type: mrr_at_10
value: 34.583999999999996
- type: mrr_at_100
value: 35.691
- type: mrr_at_1000
value: 35.736000000000004
- type: mrr_at_20
value: 35.257
- type: mrr_at_3
value: 30.85
- type: mrr_at_5
value: 32.962
- type: ndcg_at_1
value: 22.493
- type: ndcg_at_10
value: 40.815
- type: ndcg_at_100
value: 46.483999999999995
- type: ndcg_at_1000
value: 47.73
- type: ndcg_at_20
value: 43.302
- type: ndcg_at_3
value: 33.056000000000004
- type: ndcg_at_5
value: 36.879
- type: precision_at_1
value: 22.493
- type: precision_at_10
value: 6.465999999999999
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 3.752
- type: precision_at_3
value: 14.069
- type: precision_at_5
value: 10.384
- type: recall_at_1
value: 21.861
- type: recall_at_10
value: 61.781
- type: recall_at_100
value: 88.095
- type: recall_at_1000
value: 97.625
- type: recall_at_20
value: 71.44500000000001
- type: recall_at_3
value: 40.653
- type: recall_at_5
value: 49.841
- type: main_score
value: 40.815
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.4874601003192
- type: f1
value: 97.19067544931094
- type: f1_weighted
value: 97.49331776181019
- type: main_score
value: 97.4874601003192
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.89489997182305
- type: f1
value: 96.51138586512977
- type: f1_weighted
value: 96.89723065967186
- type: main_score
value: 96.89489997182305
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.17144763175452
- type: f1
value: 96.81785681878274
- type: f1_weighted
value: 97.1778974586874
- type: main_score
value: 97.17144763175452
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.30128405887879
- type: f1
value: 95.94555923088487
- type: f1_weighted
value: 96.30399416794926
- type: main_score
value: 96.30128405887879
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 84.53488372093022
- type: f1
value: 61.77995074251401
- type: f1_weighted
value: 86.8005170485101
- type: main_score
value: 84.53488372093022
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.79459002535924
- type: f1
value: 56.08938302001448
- type: f1_weighted
value: 83.66582131948252
- type: main_score
value: 80.79459002535924
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 84.7765176784523
- type: f1
value: 61.39860057885528
- type: f1_weighted
value: 86.94881745670745
- type: main_score
value: 84.7765176784523
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.2079549013467
- type: f1
value: 59.90260478749016
- type: f1_weighted
value: 84.36861708593257
- type: main_score
value: 82.2079549013467
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (eng)
type: mteb/masakhanews
config: eng
split: test
revision: 18193f187b92da67168c655c9973a165ed9593dd
metrics:
- type: accuracy
value: 74.98945147679325
- type: f1
value: 74.3157483560261
- type: f1_weighted
value: 75.01179008904884
- type: main_score
value: 74.98945147679325
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: mteb/masakhanews
config: fra
split: test
revision: 18193f187b92da67168c655c9973a165ed9593dd
metrics:
- type: accuracy
value: 74.02843601895735
- type: f1
value: 70.40326349620732
- type: f1_weighted
value: 74.6596277063484
- type: main_score
value: 74.02843601895735
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (amh)
type: masakhane/masakhanews
config: amh
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 69.45780291725053
- type: v_measure
value: 69.45780291725053
- type: v_measure_std
value: 36.54340055904091
- type: main_score
value: 60.95132147787602
- type: v_measure
value: 60.95132147787602
- type: v_measure_std
value: 37.330148394033365
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 64.88996119332239
- type: v_measure
value: 64.88996119332239
- type: v_measure_std
value: 30.017223408197268
- type: main_score
value: 60.974810831426595
- type: v_measure
value: 60.974810831426595
- type: v_measure_std
value: 24.934675467507827
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 42.362383958691666
- type: v_measure
value: 42.362383958691666
- type: v_measure_std
value: 37.61076788039063
- type: main_score
value: 44.479206673553335
- type: v_measure
value: 44.479206673553335
- type: v_measure_std
value: 32.58254804499339
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (hau)
type: masakhane/masakhanews
config: hau
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 43.29201252405562
- type: v_measure
value: 43.29201252405562
- type: v_measure_std
value: 34.31987945146255
- type: main_score
value: 26.4742082741682
- type: v_measure
value: 26.4742082741682
- type: v_measure_std
value: 22.344929192323097
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (ibo)
type: masakhane/masakhanews
config: ibo
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 33.59926542995238
- type: v_measure
value: 33.59926542995238
- type: v_measure_std
value: 35.70048601084112
- type: main_score
value: 38.906129911741985
- type: v_measure
value: 38.906129911741985
- type: v_measure_std
value: 34.785601792668444
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (lin)
type: masakhane/masakhanews
config: lin
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 67.58487601893106
- type: v_measure
value: 67.58487601893106
- type: v_measure_std
value: 35.16784970777931
- type: main_score
value: 62.60982020876592
- type: v_measure
value: 62.60982020876592
- type: v_measure_std
value: 40.7368955715045
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (lug)
type: masakhane/masakhanews
config: lug
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 50.01220872023533
- type: v_measure
value: 50.01220872023533
- type: v_measure_std
value: 41.87411574676182
- type: main_score
value: 42.70424106365967
- type: v_measure
value: 42.70424106365967
- type: v_measure_std
value: 46.80946241135087
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (orm)
type: masakhane/masakhanews
config: orm
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 29.007847502598317
- type: v_measure
value: 29.007847502598317
- type: v_measure_std
value: 38.374997395079994
- type: main_score
value: 28.609942199922322
- type: v_measure
value: 28.609942199922322
- type: v_measure_std
value: 38.46685040191088
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (pcm)
type: masakhane/masakhanews
config: pcm
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 79.13520228554611
- type: v_measure
value: 79.13520228554611
- type: v_measure_std
value: 18.501843848275183
- type: main_score
value: 76.83901348810822
- type: v_measure
value: 76.83901348810822
- type: v_measure_std
value: 17.57617141269189
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (run)
type: masakhane/masakhanews
config: run
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 60.317213909746656
- type: v_measure
value: 60.317213909746656
- type: v_measure_std
value: 36.500281823747386
- type: main_score
value: 46.89757547846193
- type: v_measure
value: 46.89757547846193
- type: v_measure_std
value: 44.58903590203438
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (sna)
type: masakhane/masakhanews
config: sna
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 59.395277358240946
- type: v_measure
value: 59.395277358240946
- type: v_measure_std
value: 37.500916816164654
- type: main_score
value: 55.37185207068829
- type: v_measure
value: 55.37185207068829
- type: v_measure_std
value: 36.944574863543004
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (som)
type: masakhane/masakhanews
config: som
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 38.18638688704302
- type: v_measure
value: 38.18638688704302
- type: v_measure_std
value: 35.453681137564466
- type: main_score
value: 37.44211021681754
- type: v_measure
value: 37.44211021681754
- type: v_measure_std
value: 33.41469994463241
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (swa)
type: masakhane/masakhanews
config: swa
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 29.49230755729658
- type: v_measure
value: 29.49230755729658
- type: v_measure_std
value: 28.284313285264645
- type: main_score
value: 26.020680621216062
- type: v_measure
value: 26.020680621216062
- type: v_measure_std
value: 25.480037522570413
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (tir)
type: masakhane/masakhanews
config: tir
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 60.632258622750115
- type: v_measure
value: 60.632258622750115
- type: v_measure_std
value: 34.429711214740564
- type: main_score
value: 63.74306846771303
- type: v_measure
value: 63.74306846771303
- type: v_measure_std
value: 32.19119631078685
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (xho)
type: masakhane/masakhanews
config: xho
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 41.76322918806381
- type: v_measure
value: 41.76322918806381
- type: v_measure_std
value: 36.43245296200775
- type: main_score
value: 24.580890519243777
- type: v_measure
value: 24.580890519243777
- type: v_measure_std
value: 37.941836363967106
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (yor)
type: masakhane/masakhanews
config: yor
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: main_score
value: 33.17083910808645
- type: v_measure
value: 33.17083910808645
- type: v_measure_std
value: 34.87547994284835
- type: main_score
value: 43.63458888828314
- type: v_measure
value: 43.63458888828314
- type: v_measure_std
value: 31.28169350649098
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 75.37323470073974
- type: f1
value: 71.1836877753734
- type: f1_weighted
value: 75.72073213955457
- type: main_score
value: 75.37323470073974
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 74.83523873570948
- type: f1
value: 70.72375821116886
- type: f1_weighted
value: 75.20800490010755
- type: main_score
value: 74.83523873570948
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 75.31607262945528
- type: f1
value: 72.06063554897662
- type: f1_weighted
value: 75.72438161355252
- type: main_score
value: 75.31607262945528
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 76.7955615332885
- type: f1
value: 73.08099648499756
- type: f1_weighted
value: 77.18482068239668
- type: main_score
value: 76.7955615332885
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 77.60591795561534
- type: f1
value: 74.46676705370395
- type: f1_weighted
value: 77.69888062336614
- type: main_score
value: 77.60591795561534
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 76.32145258910558
- type: f1
value: 72.89824154178328
- type: f1_weighted
value: 76.6539327979472
- type: main_score
value: 76.32145258910558
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 73.21788836583724
- type: f1
value: 70.45594512246377
- type: f1_weighted
value: 73.67862536499393
- type: main_score
value: 73.21788836583724
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 80.82044384667114
- type: f1
value: 80.53217664465089
- type: f1_weighted
value: 80.94535087010512
- type: main_score
value: 80.82044384667114
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 82.1049092131809
- type: f1
value: 81.55343463694733
- type: f1_weighted
value: 82.33509098770782
- type: main_score
value: 82.1049092131809
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 82.58238063214526
- type: f1
value: 82.27974449333072
- type: f1_weighted
value: 82.81337569618209
- type: main_score
value: 82.58238063214526
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 83.97108271687962
- type: f1
value: 83.56285606936076
- type: f1_weighted
value: 84.10198745390771
- type: main_score
value: 83.97108271687962
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 84.71082716879623
- type: f1
value: 84.09447062371402
- type: f1_weighted
value: 84.73765765551342
- type: main_score
value: 84.71082716879623
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 83.093476798924
- type: f1
value: 82.72656900752943
- type: f1_weighted
value: 83.26606516503364
- type: main_score
value: 83.093476798924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 84.05850706119705
- type: f1
value: 83.64234048881222
- type: f1_weighted
value: 84.17315768381876
- type: main_score
value: 84.05850706119705
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval (default)
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: main_score
value: 56.635999999999996
- type: map_at_1
value: 48.699999999999996
- type: map_at_10
value: 53.991
- type: map_at_100
value: 54.449999999999996
- type: map_at_1000
value: 54.515
- type: map_at_20
value: 54.212
- type: map_at_3
value: 52.833
- type: map_at_5
value: 53.503
- type: mrr_at_1
value: 48.699999999999996
- type: mrr_at_10
value: 53.991309523809505
- type: mrr_at_100
value: 54.45008993448266
- type: mrr_at_1000
value: 54.515253990549795
- type: mrr_at_20
value: 54.21201762247036
- type: mrr_at_3
value: 52.8333333333333
- type: mrr_at_5
value: 53.50333333333328
- type: nauc_map_at_1000_diff1
value: 79.96867989401643
- type: nauc_map_at_1000_max
value: 69.75230895599029
- type: nauc_map_at_1000_std
value: 2.6418738289740213
- type: nauc_map_at_100_diff1
value: 79.95343709599133
- type: nauc_map_at_100_max
value: 69.751282671507
- type: nauc_map_at_100_std
value: 2.621719966106279
- type: nauc_map_at_10_diff1
value: 80.02875864565634
- type: nauc_map_at_10_max
value: 69.80948662290187
- type: nauc_map_at_10_std
value: 2.329151604733765
- type: nauc_map_at_1_diff1
value: 83.616940281383
- type: nauc_map_at_1_max
value: 69.08142651929452
- type: nauc_map_at_1_std
value: 1.9687791394035643
- type: nauc_map_at_20_diff1
value: 79.95555601275339
- type: nauc_map_at_20_max
value: 69.76604695002925
- type: nauc_map_at_20_std
value: 2.556184141901367
- type: nauc_map_at_3_diff1
value: 80.74790131023668
- type: nauc_map_at_3_max
value: 70.57797991892402
- type: nauc_map_at_3_std
value: 2.7115149849964117
- type: nauc_map_at_5_diff1
value: 80.31796539878381
- type: nauc_map_at_5_max
value: 69.93573796420061
- type: nauc_map_at_5_std
value: 2.0731614029506606
- type: nauc_mrr_at_1000_diff1
value: 79.96867999907981
- type: nauc_mrr_at_1000_max
value: 69.57395578976896
- type: nauc_mrr_at_1000_std
value: 2.46351945887829
- type: nauc_mrr_at_100_diff1
value: 79.95343709599133
- type: nauc_mrr_at_100_max
value: 69.57322054130803
- type: nauc_mrr_at_100_std
value: 2.4436578359073433
- type: nauc_mrr_at_10_diff1
value: 80.02875864565634
- type: nauc_mrr_at_10_max
value: 69.63292630937411
- type: nauc_mrr_at_10_std
value: 2.1525912912060012
- type: nauc_mrr_at_1_diff1
value: 83.616940281383
- type: nauc_mrr_at_1_max
value: 68.74717310480305
- type: nauc_mrr_at_1_std
value: 1.6345257249120868
- type: nauc_mrr_at_20_diff1
value: 79.95555601275339
- type: nauc_mrr_at_20_max
value: 69.58883608470444
- type: nauc_mrr_at_20_std
value: 2.378973276576547
- type: nauc_mrr_at_3_diff1
value: 80.74790131023668
- type: nauc_mrr_at_3_max
value: 70.40430475488604
- type: nauc_mrr_at_3_std
value: 2.5378398209583817
- type: nauc_mrr_at_5_diff1
value: 80.31796539878381
- type: nauc_mrr_at_5_max
value: 69.7605991748183
- type: nauc_mrr_at_5_std
value: 1.898022613568352
- type: nauc_ndcg_at_1000_diff1
value: 78.35504059321225
- type: nauc_ndcg_at_1000_max
value: 69.06752522437093
- type: nauc_ndcg_at_1000_std
value: 3.9624036886099265
- type: nauc_ndcg_at_100_diff1
value: 77.79729140249833
- type: nauc_ndcg_at_100_max
value: 68.93113791506029
- type: nauc_ndcg_at_100_std
value: 3.642178826886181
- type: nauc_ndcg_at_10_diff1
value: 78.160158293918
- type: nauc_ndcg_at_10_max
value: 69.28122202281361
- type: nauc_ndcg_at_10_std
value: 2.438976810940962
- type: nauc_ndcg_at_1_diff1
value: 83.616940281383
- type: nauc_ndcg_at_1_max
value: 69.08142651929452
- type: nauc_ndcg_at_1_std
value: 1.9687791394035643
- type: nauc_ndcg_at_20_diff1
value: 77.88514432874997
- type: nauc_ndcg_at_20_max
value: 69.06148818508873
- type: nauc_ndcg_at_20_std
value: 3.1800249272363676
- type: nauc_ndcg_at_3_diff1
value: 79.73510384405803
- type: nauc_ndcg_at_3_max
value: 70.78000695123832
- type: nauc_ndcg_at_3_std
value: 2.9041415468363274
- type: nauc_ndcg_at_5_diff1
value: 78.91872808866195
- type: nauc_ndcg_at_5_max
value: 69.61478429620091
- type: nauc_ndcg_at_5_std
value: 1.734699636301054
- type: nauc_precision_at_1000_diff1
value: 66.37858395390673
- type: nauc_precision_at_1000_max
value: 60.651659037598534
- type: nauc_precision_at_1000_std
value: 27.388353715469798
- type: nauc_precision_at_100_diff1
value: 66.34325807776025
- type: nauc_precision_at_100_max
value: 63.63855305621111
- type: nauc_precision_at_100_std
value: 10.641748149575351
- type: nauc_precision_at_10_diff1
value: 71.3784685491089
- type: nauc_precision_at_10_max
value: 67.05313695174542
- type: nauc_precision_at_10_std
value: 3.000406867930561
- type: nauc_precision_at_1_diff1
value: 83.616940281383
- type: nauc_precision_at_1_max
value: 69.08142651929452
- type: nauc_precision_at_1_std
value: 1.9687791394035643
- type: nauc_precision_at_20_diff1
value: 69.73407910977694
- type: nauc_precision_at_20_max
value: 65.77426240320742
- type: nauc_precision_at_20_std
value: 6.204416838482586
- type: nauc_precision_at_3_diff1
value: 76.63737537643107
- type: nauc_precision_at_3_max
value: 71.29710200719668
- type: nauc_precision_at_3_std
value: 3.47180961484546
- type: nauc_precision_at_5_diff1
value: 74.36945983536717
- type: nauc_precision_at_5_max
value: 68.33292218003061
- type: nauc_precision_at_5_std
value: 0.47128762620258075
- type: nauc_recall_at_1000_diff1
value: 66.37858395390681
- type: nauc_recall_at_1000_max
value: 60.65165903759889
- type: nauc_recall_at_1000_std
value: 27.388353715469822
- type: nauc_recall_at_100_diff1
value: 66.34325807776025
- type: nauc_recall_at_100_max
value: 63.63855305621116
- type: nauc_recall_at_100_std
value: 10.641748149575351
- type: nauc_recall_at_10_diff1
value: 71.37846854910892
- type: nauc_recall_at_10_max
value: 67.05313695174546
- type: nauc_recall_at_10_std
value: 3.000406867930663
- type: nauc_recall_at_1_diff1
value: 83.616940281383
- type: nauc_recall_at_1_max
value: 69.08142651929452
- type: nauc_recall_at_1_std
value: 1.9687791394035643
- type: nauc_recall_at_20_diff1
value: 69.73407910977691
- type: nauc_recall_at_20_max
value: 65.77426240320746
- type: nauc_recall_at_20_std
value: 6.204416838482536
- type: nauc_recall_at_3_diff1
value: 76.63737537643112
- type: nauc_recall_at_3_max
value: 71.29710200719668
- type: nauc_recall_at_3_std
value: 3.471809614845442
- type: nauc_recall_at_5_diff1
value: 74.36945983536715
- type: nauc_recall_at_5_max
value: 68.33292218003065
- type: nauc_recall_at_5_std
value: 0.4712876262026442
- type: ndcg_at_1
value: 48.699999999999996
- type: ndcg_at_10
value: 56.635999999999996
- type: ndcg_at_100
value: 59.193
- type: ndcg_at_1000
value: 60.97
- type: ndcg_at_20
value: 57.426
- type: ndcg_at_3
value: 54.186
- type: ndcg_at_5
value: 55.407
- type: precision_at_1
value: 48.699999999999996
- type: precision_at_10
value: 6.5
- type: precision_at_100
value: 0.777
- type: precision_at_1000
value: 0.092
- type: precision_at_20
value: 3.405
- type: precision_at_3
value: 19.367
- type: precision_at_5
value: 12.22
- type: recall_at_1
value: 48.699999999999996
- type: recall_at_10
value: 65.0
- type: recall_at_100
value: 77.7
- type: recall_at_1000
value: 91.8
- type: recall_at_20
value: 68.10000000000001
- type: recall_at_3
value: 58.099999999999994
- type: recall_at_5
value: 61.1
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 34.80188561439236
- type: v_measure
value: 34.80188561439236
- type: v_measure_std
value: 1.5703148841573102
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 32.42285513996236
- type: v_measure
value: 32.42285513996236
- type: v_measure_std
value: 1.3769867487457566
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (de)
type: jinaai/mintakaqa
config: de
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: main_score
value: 27.025
- type: map_at_1
value: 14.532
- type: map_at_10
value: 22.612
- type: map_at_100
value: 23.802
- type: map_at_1000
value: 23.9
- type: map_at_20
value: 23.275000000000002
- type: map_at_3
value: 20.226
- type: map_at_5
value: 21.490000000000002
- type: mrr_at_1
value: 14.532434709351305
- type: mrr_at_10
value: 22.612077265615575
- type: mrr_at_100
value: 23.801523356874675
- type: mrr_at_1000
value: 23.900118499340238
- type: mrr_at_20
value: 23.275466430108995
- type: mrr_at_3
value: 20.22606009547877
- type: mrr_at_5
value: 21.489750070204945
- type: nauc_map_at_1000_diff1
value: 14.148987799763596
- type: nauc_map_at_1000_max
value: 44.70338461387784
- type: nauc_map_at_1000_std
value: 15.868006767707637
- type: nauc_map_at_100_diff1
value: 14.11371769080442
- type: nauc_map_at_100_max
value: 44.67995540936296
- type: nauc_map_at_100_std
value: 15.890796502029076
- type: nauc_map_at_10_diff1
value: 14.29066834165688
- type: nauc_map_at_10_max
value: 45.10997111765282
- type: nauc_map_at_10_std
value: 15.508568918629864
- type: nauc_map_at_1_diff1
value: 23.473291302576396
- type: nauc_map_at_1_max
value: 44.68942599764586
- type: nauc_map_at_1_std
value: 12.424377262427253
- type: nauc_map_at_20_diff1
value: 14.112652046087831
- type: nauc_map_at_20_max
value: 44.82014861413682
- type: nauc_map_at_20_std
value: 15.739350613646385
- type: nauc_map_at_3_diff1
value: 16.119659221396347
- type: nauc_map_at_3_max
value: 46.04766378953525
- type: nauc_map_at_3_std
value: 13.969878046315925
- type: nauc_map_at_5_diff1
value: 15.095453434076184
- type: nauc_map_at_5_max
value: 45.802128149314406
- type: nauc_map_at_5_std
value: 14.957442173319949
- type: nauc_mrr_at_1000_diff1
value: 14.148987799763596
- type: nauc_mrr_at_1000_max
value: 44.70338461387784
- type: nauc_mrr_at_1000_std
value: 15.868006767707637
- type: nauc_mrr_at_100_diff1
value: 14.11371769080442
- type: nauc_mrr_at_100_max
value: 44.67995540936296
- type: nauc_mrr_at_100_std
value: 15.890796502029076
- type: nauc_mrr_at_10_diff1
value: 14.29066834165688
- type: nauc_mrr_at_10_max
value: 45.10997111765282
- type: nauc_mrr_at_10_std
value: 15.508568918629864
- type: nauc_mrr_at_1_diff1
value: 23.473291302576396
- type: nauc_mrr_at_1_max
value: 44.68942599764586
- type: nauc_mrr_at_1_std
value: 12.424377262427253
- type: nauc_mrr_at_20_diff1
value: 14.112652046087831
- type: nauc_mrr_at_20_max
value: 44.82014861413682
- type: nauc_mrr_at_20_std
value: 15.739350613646385
- type: nauc_mrr_at_3_diff1
value: 16.119659221396347
- type: nauc_mrr_at_3_max
value: 46.04766378953525
- type: nauc_mrr_at_3_std
value: 13.969878046315925
- type: nauc_mrr_at_5_diff1
value: 15.095453434076184
- type: nauc_mrr_at_5_max
value: 45.802128149314406
- type: nauc_mrr_at_5_std
value: 14.957442173319949
- type: nauc_ndcg_at_1000_diff1
value: 11.626606894574028
- type: nauc_ndcg_at_1000_max
value: 43.328592841065536
- type: nauc_ndcg_at_1000_std
value: 18.049446272245547
- type: nauc_ndcg_at_100_diff1
value: 10.485720606660239
- type: nauc_ndcg_at_100_max
value: 42.405317674170966
- type: nauc_ndcg_at_100_std
value: 19.107151641936987
- type: nauc_ndcg_at_10_diff1
value: 11.029351078162982
- type: nauc_ndcg_at_10_max
value: 44.36855031964681
- type: nauc_ndcg_at_10_std
value: 17.302796171409305
- type: nauc_ndcg_at_1_diff1
value: 23.473291302576396
- type: nauc_ndcg_at_1_max
value: 44.68942599764586
- type: nauc_ndcg_at_1_std
value: 12.424377262427253
- type: nauc_ndcg_at_20_diff1
value: 10.356662718168412
- type: nauc_ndcg_at_20_max
value: 43.31602680430083
- type: nauc_ndcg_at_20_std
value: 18.162891267850316
- type: nauc_ndcg_at_3_diff1
value: 14.42844952297869
- type: nauc_ndcg_at_3_max
value: 46.26603339466543
- type: nauc_ndcg_at_3_std
value: 14.449362723887857
- type: nauc_ndcg_at_5_diff1
value: 12.783416563486396
- type: nauc_ndcg_at_5_max
value: 45.852176479124424
- type: nauc_ndcg_at_5_std
value: 16.11775016428085
- type: nauc_precision_at_1000_diff1
value: -8.045361059399795
- type: nauc_precision_at_1000_max
value: 21.970273281738777
- type: nauc_precision_at_1000_std
value: 49.564650488193266
- type: nauc_precision_at_100_diff1
value: -2.118628861593353
- type: nauc_precision_at_100_max
value: 31.32498977104778
- type: nauc_precision_at_100_std
value: 32.96087731883451
- type: nauc_precision_at_10_diff1
value: 3.0335517475367615
- type: nauc_precision_at_10_max
value: 42.21620215030219
- type: nauc_precision_at_10_std
value: 21.90159732315962
- type: nauc_precision_at_1_diff1
value: 23.473291302576396
- type: nauc_precision_at_1_max
value: 44.68942599764586
- type: nauc_precision_at_1_std
value: 12.424377262427253
- type: nauc_precision_at_20_diff1
value: 0.4087201843719047
- type: nauc_precision_at_20_max
value: 38.485034773895734
- type: nauc_precision_at_20_std
value: 25.077397979916682
- type: nauc_precision_at_3_diff1
value: 10.408327736589833
- type: nauc_precision_at_3_max
value: 46.757216289175076
- type: nauc_precision_at_3_std
value: 15.62594354926867
- type: nauc_precision_at_5_diff1
value: 7.326752744229544
- type: nauc_precision_at_5_max
value: 45.89190518573553
- type: nauc_precision_at_5_std
value: 19.01717163438957
- type: nauc_recall_at_1000_diff1
value: -8.045361059400387
- type: nauc_recall_at_1000_max
value: 21.97027328173812
- type: nauc_recall_at_1000_std
value: 49.56465048819266
- type: nauc_recall_at_100_diff1
value: -2.118628861593277
- type: nauc_recall_at_100_max
value: 31.324989771047818
- type: nauc_recall_at_100_std
value: 32.96087731883457
- type: nauc_recall_at_10_diff1
value: 3.0335517475367166
- type: nauc_recall_at_10_max
value: 42.21620215030217
- type: nauc_recall_at_10_std
value: 21.901597323159606
- type: nauc_recall_at_1_diff1
value: 23.473291302576396
- type: nauc_recall_at_1_max
value: 44.68942599764586
- type: nauc_recall_at_1_std
value: 12.424377262427253
- type: nauc_recall_at_20_diff1
value: 0.40872018437190905
- type: nauc_recall_at_20_max
value: 38.485034773895734
- type: nauc_recall_at_20_std
value: 25.077397979916693
- type: nauc_recall_at_3_diff1
value: 10.408327736589843
- type: nauc_recall_at_3_max
value: 46.75721628917507
- type: nauc_recall_at_3_std
value: 15.625943549268664
- type: nauc_recall_at_5_diff1
value: 7.326752744229548
- type: nauc_recall_at_5_max
value: 45.89190518573557
- type: nauc_recall_at_5_std
value: 19.01717163438958
- type: ndcg_at_1
value: 14.532
- type: ndcg_at_10
value: 27.025
- type: ndcg_at_100
value: 33.305
- type: ndcg_at_1000
value: 36.38
- type: ndcg_at_20
value: 29.443
- type: ndcg_at_3
value: 22.035
- type: ndcg_at_5
value: 24.319
- type: precision_at_1
value: 14.532
- type: precision_at_10
value: 4.115
- type: precision_at_100
value: 0.717
- type: precision_at_1000
value: 0.097
- type: precision_at_20
value: 2.536
- type: precision_at_3
value: 9.085
- type: precision_at_5
value: 6.563
- type: recall_at_1
value: 14.532
- type: recall_at_10
value: 41.154
- type: recall_at_100
value: 71.651
- type: recall_at_1000
value: 96.841
- type: recall_at_20
value: 50.71600000000001
- type: recall_at_3
value: 27.254
- type: recall_at_5
value: 32.814
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (es)
type: jinaai/mintakaqa
config: es
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: main_score
value: 26.912000000000003
- type: map_at_1
value: 14.686
- type: map_at_10
value: 22.569
- type: map_at_100
value: 23.679
- type: map_at_1000
value: 23.777
- type: map_at_20
value: 23.169
- type: map_at_3
value: 20.201
- type: map_at_5
value: 21.566
- type: mrr_at_1
value: 14.686468646864686
- type: mrr_at_10
value: 22.569346220336296
- type: mrr_at_100
value: 23.678819125817146
- type: mrr_at_1000
value: 23.77713511338264
- type: mrr_at_20
value: 23.16850858443442
- type: mrr_at_3
value: 20.200770077007665
- type: mrr_at_5
value: 21.56628162816276
- type: nauc_map_at_1000_diff1
value: 14.129007578838381
- type: nauc_map_at_1000_max
value: 44.4255501141499
- type: nauc_map_at_1000_std
value: 19.95906154868176
- type: nauc_map_at_100_diff1
value: 14.09071870575231
- type: nauc_map_at_100_max
value: 44.403179928955566
- type: nauc_map_at_100_std
value: 20.00413657519976
- type: nauc_map_at_10_diff1
value: 14.149535953153688
- type: nauc_map_at_10_max
value: 44.66529917634685
- type: nauc_map_at_10_std
value: 19.580235989479394
- type: nauc_map_at_1_diff1
value: 23.489813522176636
- type: nauc_map_at_1_max
value: 46.54578639925787
- type: nauc_map_at_1_std
value: 16.39083721709994
- type: nauc_map_at_20_diff1
value: 14.021560420656181
- type: nauc_map_at_20_max
value: 44.4825455452467
- type: nauc_map_at_20_std
value: 19.886927750826878
- type: nauc_map_at_3_diff1
value: 16.182977890477723
- type: nauc_map_at_3_max
value: 46.1840554029258
- type: nauc_map_at_3_std
value: 18.735671900228958
- type: nauc_map_at_5_diff1
value: 14.779126395472833
- type: nauc_map_at_5_max
value: 45.23237213817556
- type: nauc_map_at_5_std
value: 19.348508580412872
- type: nauc_mrr_at_1000_diff1
value: 14.129007578838381
- type: nauc_mrr_at_1000_max
value: 44.4255501141499
- type: nauc_mrr_at_1000_std
value: 19.95906154868176
- type: nauc_mrr_at_100_diff1
value: 14.09071870575231
- type: nauc_mrr_at_100_max
value: 44.403179928955566
- type: nauc_mrr_at_100_std
value: 20.00413657519976
- type: nauc_mrr_at_10_diff1
value: 14.149535953153688
- type: nauc_mrr_at_10_max
value: 44.66529917634685
- type: nauc_mrr_at_10_std
value: 19.580235989479394
- type: nauc_mrr_at_1_diff1
value: 23.489813522176636
- type: nauc_mrr_at_1_max
value: 46.54578639925787
- type: nauc_mrr_at_1_std
value: 16.39083721709994
- type: nauc_mrr_at_20_diff1
value: 14.021560420656181
- type: nauc_mrr_at_20_max
value: 44.4825455452467
- type: nauc_mrr_at_20_std
value: 19.886927750826878
- type: nauc_mrr_at_3_diff1
value: 16.182977890477723
- type: nauc_mrr_at_3_max
value: 46.1840554029258
- type: nauc_mrr_at_3_std
value: 18.735671900228958
- type: nauc_mrr_at_5_diff1
value: 14.779126395472833
- type: nauc_mrr_at_5_max
value: 45.23237213817556
- type: nauc_mrr_at_5_std
value: 19.348508580412872
- type: nauc_ndcg_at_1000_diff1
value: 11.762470380481101
- type: nauc_ndcg_at_1000_max
value: 42.8233203033089
- type: nauc_ndcg_at_1000_std
value: 21.78503705117719
- type: nauc_ndcg_at_100_diff1
value: 10.45886076220022
- type: nauc_ndcg_at_100_max
value: 41.85472899256818
- type: nauc_ndcg_at_100_std
value: 23.20955486335138
- type: nauc_ndcg_at_10_diff1
value: 10.605912468659469
- type: nauc_ndcg_at_10_max
value: 43.150942448104715
- type: nauc_ndcg_at_10_std
value: 21.120035764826085
- type: nauc_ndcg_at_1_diff1
value: 23.489813522176636
- type: nauc_ndcg_at_1_max
value: 46.54578639925787
- type: nauc_ndcg_at_1_std
value: 16.39083721709994
- type: nauc_ndcg_at_20_diff1
value: 10.11291783888644
- type: nauc_ndcg_at_20_max
value: 42.51260678842788
- type: nauc_ndcg_at_20_std
value: 22.1744949382252
- type: nauc_ndcg_at_3_diff1
value: 14.25625326760802
- type: nauc_ndcg_at_3_max
value: 45.96162916377383
- type: nauc_ndcg_at_3_std
value: 19.557832728215523
- type: nauc_ndcg_at_5_diff1
value: 11.956317653823053
- type: nauc_ndcg_at_5_max
value: 44.35971268886807
- type: nauc_ndcg_at_5_std
value: 20.581696730374233
- type: nauc_precision_at_1000_diff1
value: 5.132291843566577
- type: nauc_precision_at_1000_max
value: 25.293354576835263
- type: nauc_precision_at_1000_std
value: 40.36005126087624
- type: nauc_precision_at_100_diff1
value: -1.5252854375008238
- type: nauc_precision_at_100_max
value: 31.007586474495984
- type: nauc_precision_at_100_std
value: 37.297552993548386
- type: nauc_precision_at_10_diff1
value: 1.9663657370770737
- type: nauc_precision_at_10_max
value: 39.194092293625125
- type: nauc_precision_at_10_std
value: 24.956542621999542
- type: nauc_precision_at_1_diff1
value: 23.489813522176636
- type: nauc_precision_at_1_max
value: 46.54578639925787
- type: nauc_precision_at_1_std
value: 16.39083721709994
- type: nauc_precision_at_20_diff1
value: 0.011112090390932373
- type: nauc_precision_at_20_max
value: 36.9357074392519
- type: nauc_precision_at_20_std
value: 28.611387115093876
- type: nauc_precision_at_3_diff1
value: 9.596831091013703
- type: nauc_precision_at_3_max
value: 45.3905541893809
- type: nauc_precision_at_3_std
value: 21.599314388526945
- type: nauc_precision_at_5_diff1
value: 5.175887949900142
- type: nauc_precision_at_5_max
value: 42.129467510414464
- type: nauc_precision_at_5_std
value: 23.607251548776677
- type: nauc_recall_at_1000_diff1
value: 5.132291843566257
- type: nauc_recall_at_1000_max
value: 25.29335457683396
- type: nauc_recall_at_1000_std
value: 40.36005126087638
- type: nauc_recall_at_100_diff1
value: -1.5252854375008988
- type: nauc_recall_at_100_max
value: 31.00758647449594
- type: nauc_recall_at_100_std
value: 37.29755299354834
- type: nauc_recall_at_10_diff1
value: 1.9663657370770793
- type: nauc_recall_at_10_max
value: 39.19409229362512
- type: nauc_recall_at_10_std
value: 24.956542621999546
- type: nauc_recall_at_1_diff1
value: 23.489813522176636
- type: nauc_recall_at_1_max
value: 46.54578639925787
- type: nauc_recall_at_1_std
value: 16.39083721709994
- type: nauc_recall_at_20_diff1
value: 0.011112090390923075
- type: nauc_recall_at_20_max
value: 36.93570743925189
- type: nauc_recall_at_20_std
value: 28.611387115093883
- type: nauc_recall_at_3_diff1
value: 9.596831091013714
- type: nauc_recall_at_3_max
value: 45.39055418938087
- type: nauc_recall_at_3_std
value: 21.599314388526956
- type: nauc_recall_at_5_diff1
value: 5.17588794990012
- type: nauc_recall_at_5_max
value: 42.12946751041448
- type: nauc_recall_at_5_std
value: 23.607251548776695
- type: ndcg_at_1
value: 14.686
- type: ndcg_at_10
value: 26.912000000000003
- type: ndcg_at_100
value: 32.919
- type: ndcg_at_1000
value: 36.119
- type: ndcg_at_20
value: 29.079
- type: ndcg_at_3
value: 21.995
- type: ndcg_at_5
value: 24.474999999999998
- type: precision_at_1
value: 14.686
- type: precision_at_10
value: 4.08
- type: precision_at_100
value: 0.703
- type: precision_at_1000
value: 0.097
- type: precision_at_20
value: 2.467
- type: precision_at_3
value: 9.062000000000001
- type: precision_at_5
value: 6.65
- type: recall_at_1
value: 14.686
- type: recall_at_10
value: 40.8
- type: recall_at_100
value: 70.338
- type: recall_at_1000
value: 96.82300000000001
- type: recall_at_20
value: 49.34
- type: recall_at_3
value: 27.186
- type: recall_at_5
value: 33.251
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: main_score
value: 26.909
- type: map_at_1
value: 14.701
- type: map_at_10
value: 22.613
- type: map_at_100
value: 23.729
- type: map_at_1000
value: 23.837
- type: map_at_20
value: 23.262
- type: map_at_3
value: 20.236
- type: map_at_5
value: 21.673000000000002
- type: mrr_at_1
value: 14.7010647010647
- type: mrr_at_10
value: 22.613165113165113
- type: mrr_at_100
value: 23.72877605989423
- type: mrr_at_1000
value: 23.837150802746805
- type: mrr_at_20
value: 23.261627081110596
- type: mrr_at_3
value: 20.2361452361452
- type: mrr_at_5
value: 21.673491673491625
- type: nauc_map_at_1000_diff1
value: 17.08927788889635
- type: nauc_map_at_1000_max
value: 47.240929150603336
- type: nauc_map_at_1000_std
value: 20.559244258100275
- type: nauc_map_at_100_diff1
value: 17.029461792796777
- type: nauc_map_at_100_max
value: 47.207381115550696
- type: nauc_map_at_100_std
value: 20.581498156895265
- type: nauc_map_at_10_diff1
value: 17.351456007804536
- type: nauc_map_at_10_max
value: 47.815880040221344
- type: nauc_map_at_10_std
value: 20.292999107555794
- type: nauc_map_at_1_diff1
value: 27.297525357600776
- type: nauc_map_at_1_max
value: 47.18835074959486
- type: nauc_map_at_1_std
value: 18.304203168281834
- type: nauc_map_at_20_diff1
value: 17.157460199542136
- type: nauc_map_at_20_max
value: 47.4776610667456
- type: nauc_map_at_20_std
value: 20.499186342964478
- type: nauc_map_at_3_diff1
value: 19.393119961356277
- type: nauc_map_at_3_max
value: 49.02841822452882
- type: nauc_map_at_3_std
value: 19.293122796321292
- type: nauc_map_at_5_diff1
value: 17.76275044752008
- type: nauc_map_at_5_max
value: 48.01292548040298
- type: nauc_map_at_5_std
value: 19.928449977400504
- type: nauc_mrr_at_1000_diff1
value: 17.08927788889635
- type: nauc_mrr_at_1000_max
value: 47.240929150603336
- type: nauc_mrr_at_1000_std
value: 20.559244258100275
- type: nauc_mrr_at_100_diff1
value: 17.029461792796777
- type: nauc_mrr_at_100_max
value: 47.207381115550696
- type: nauc_mrr_at_100_std
value: 20.581498156895265
- type: nauc_mrr_at_10_diff1
value: 17.351456007804536
- type: nauc_mrr_at_10_max
value: 47.815880040221344
- type: nauc_mrr_at_10_std
value: 20.292999107555794
- type: nauc_mrr_at_1_diff1
value: 27.297525357600776
- type: nauc_mrr_at_1_max
value: 47.18835074959486
- type: nauc_mrr_at_1_std
value: 18.304203168281834
- type: nauc_mrr_at_20_diff1
value: 17.157460199542136
- type: nauc_mrr_at_20_max
value: 47.4776610667456
- type: nauc_mrr_at_20_std
value: 20.499186342964478
- type: nauc_mrr_at_3_diff1
value: 19.393119961356277
- type: nauc_mrr_at_3_max
value: 49.02841822452882
- type: nauc_mrr_at_3_std
value: 19.293122796321292
- type: nauc_mrr_at_5_diff1
value: 17.76275044752008
- type: nauc_mrr_at_5_max
value: 48.01292548040298
- type: nauc_mrr_at_5_std
value: 19.928449977400504
- type: nauc_ndcg_at_1000_diff1
value: 13.989496006047975
- type: nauc_ndcg_at_1000_max
value: 45.626323944336114
- type: nauc_ndcg_at_1000_std
value: 22.125600410796515
- type: nauc_ndcg_at_100_diff1
value: 12.302204843705244
- type: nauc_ndcg_at_100_max
value: 44.46856314559079
- type: nauc_ndcg_at_100_std
value: 23.084984546328677
- type: nauc_ndcg_at_10_diff1
value: 14.001226213368275
- type: nauc_ndcg_at_10_max
value: 47.37780636546918
- type: nauc_ndcg_at_10_std
value: 21.702709032840637
- type: nauc_ndcg_at_1_diff1
value: 27.297525357600776
- type: nauc_ndcg_at_1_max
value: 47.18835074959486
- type: nauc_ndcg_at_1_std
value: 18.304203168281834
- type: nauc_ndcg_at_20_diff1
value: 13.317759910171056
- type: nauc_ndcg_at_20_max
value: 46.25171251043813
- type: nauc_ndcg_at_20_std
value: 22.309331575402595
- type: nauc_ndcg_at_3_diff1
value: 17.555381234893872
- type: nauc_ndcg_at_3_max
value: 49.48635590260059
- type: nauc_ndcg_at_3_std
value: 19.734570962933674
- type: nauc_ndcg_at_5_diff1
value: 14.844841165765061
- type: nauc_ndcg_at_5_max
value: 47.76437065028708
- type: nauc_ndcg_at_5_std
value: 20.816034479453954
- type: nauc_precision_at_1000_diff1
value: -15.591898698252546
- type: nauc_precision_at_1000_max
value: 20.545984285353892
- type: nauc_precision_at_1000_std
value: 38.9013414992826
- type: nauc_precision_at_100_diff1
value: -5.290395978742176
- type: nauc_precision_at_100_max
value: 31.340480360546845
- type: nauc_precision_at_100_std
value: 33.6897935720505
- type: nauc_precision_at_10_diff1
value: 5.965001997926562
- type: nauc_precision_at_10_max
value: 46.12515296162247
- type: nauc_precision_at_10_std
value: 25.409433135253558
- type: nauc_precision_at_1_diff1
value: 27.297525357600776
- type: nauc_precision_at_1_max
value: 47.18835074959486
- type: nauc_precision_at_1_std
value: 18.304203168281834
- type: nauc_precision_at_20_diff1
value: 3.4438127279827744
- type: nauc_precision_at_20_max
value: 42.36095587714494
- type: nauc_precision_at_20_std
value: 27.367900512797906
- type: nauc_precision_at_3_diff1
value: 13.165017224718916
- type: nauc_precision_at_3_max
value: 50.58931825484506
- type: nauc_precision_at_3_std
value: 20.852009214609442
- type: nauc_precision_at_5_diff1
value: 7.840087177549876
- type: nauc_precision_at_5_max
value: 46.99388755575109
- type: nauc_precision_at_5_std
value: 23.048702393099834
- type: nauc_recall_at_1000_diff1
value: -15.591898698252932
- type: nauc_recall_at_1000_max
value: 20.5459842853537
- type: nauc_recall_at_1000_std
value: 38.901341499282395
- type: nauc_recall_at_100_diff1
value: -5.290395978742165
- type: nauc_recall_at_100_max
value: 31.340480360546863
- type: nauc_recall_at_100_std
value: 33.68979357205046
- type: nauc_recall_at_10_diff1
value: 5.96500199792656
- type: nauc_recall_at_10_max
value: 46.1251529616225
- type: nauc_recall_at_10_std
value: 25.409433135253543
- type: nauc_recall_at_1_diff1
value: 27.297525357600776
- type: nauc_recall_at_1_max
value: 47.18835074959486
- type: nauc_recall_at_1_std
value: 18.304203168281834
- type: nauc_recall_at_20_diff1
value: 3.4438127279827833
- type: nauc_recall_at_20_max
value: 42.36095587714498
- type: nauc_recall_at_20_std
value: 27.36790051279787
- type: nauc_recall_at_3_diff1
value: 13.165017224718916
- type: nauc_recall_at_3_max
value: 50.589318254845054
- type: nauc_recall_at_3_std
value: 20.852009214609435
- type: nauc_recall_at_5_diff1
value: 7.840087177549891
- type: nauc_recall_at_5_max
value: 46.99388755575112
- type: nauc_recall_at_5_std
value: 23.048702393099845
- type: ndcg_at_1
value: 14.701
- type: ndcg_at_10
value: 26.909
- type: ndcg_at_100
value: 32.727000000000004
- type: ndcg_at_1000
value: 36.086
- type: ndcg_at_20
value: 29.236
- type: ndcg_at_3
value: 22.004
- type: ndcg_at_5
value: 24.615000000000002
- type: precision_at_1
value: 14.701
- type: precision_at_10
value: 4.062
- type: precision_at_100
value: 0.688
- type: precision_at_1000
value: 0.096
- type: precision_at_20
value: 2.488
- type: precision_at_3
value: 9.036
- type: precision_at_5
value: 6.699
- type: recall_at_1
value: 14.701
- type: recall_at_10
value: 40.622
- type: recall_at_100
value: 68.796
- type: recall_at_1000
value: 96.314
- type: recall_at_20
value: 49.754
- type: recall_at_3
value: 27.108999999999998
- type: recall_at_5
value: 33.497
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment (default)
type: C-MTEB/MultilingualSentiment-classification
config: default
split: test
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 73.20999999999998
- type: f1
value: 73.18755986777474
- type: f1_weighted
value: 73.18755986777475
- type: main_score
value: 73.20999999999998
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 4.822
- type: map_at_10
value: 13.144
- type: map_at_100
value: 17.254
- type: map_at_1000
value: 18.931
- type: map_at_20
value: 14.834
- type: map_at_3
value: 8.975
- type: map_at_5
value: 10.922
- type: mrr_at_1
value: 47.059
- type: mrr_at_10
value: 55.806999999999995
- type: mrr_at_100
value: 56.286
- type: mrr_at_1000
value: 56.327000000000005
- type: mrr_at_20
value: 56.00000000000001
- type: mrr_at_3
value: 54.17999999999999
- type: mrr_at_5
value: 55.155
- type: ndcg_at_1
value: 44.427
- type: ndcg_at_10
value: 36.623
- type: ndcg_at_100
value: 33.664
- type: ndcg_at_1000
value: 42.538
- type: ndcg_at_20
value: 34.066
- type: ndcg_at_3
value: 41.118
- type: ndcg_at_5
value: 39.455
- type: precision_at_1
value: 46.44
- type: precision_at_10
value: 28.607
- type: precision_at_100
value: 9.189
- type: precision_at_1000
value: 2.261
- type: precision_at_20
value: 21.238
- type: precision_at_3
value: 39.628
- type: precision_at_5
value: 35.604
- type: recall_at_1
value: 4.822
- type: recall_at_10
value: 17.488999999999997
- type: recall_at_100
value: 35.052
- type: recall_at_1000
value: 66.67999999999999
- type: recall_at_20
value: 21.343999999999998
- type: recall_at_3
value: 10.259
- type: recall_at_5
value: 13.406
- type: main_score
value: 36.623
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 41.411
- type: map_at_10
value: 57.179
- type: map_at_100
value: 57.945
- type: map_at_1000
value: 57.967999999999996
- type: map_at_20
value: 57.687
- type: map_at_3
value: 53.46300000000001
- type: map_at_5
value: 55.696999999999996
- type: mrr_at_1
value: 46.233999999999995
- type: mrr_at_10
value: 59.831999999999994
- type: mrr_at_100
value: 60.33500000000001
- type: mrr_at_1000
value: 60.348
- type: mrr_at_20
value: 60.167
- type: mrr_at_3
value: 56.972
- type: mrr_at_5
value: 58.74
- type: ndcg_at_1
value: 46.205
- type: ndcg_at_10
value: 64.23100000000001
- type: ndcg_at_100
value: 67.242
- type: ndcg_at_1000
value: 67.72500000000001
- type: ndcg_at_20
value: 65.77300000000001
- type: ndcg_at_3
value: 57.516
- type: ndcg_at_5
value: 61.11600000000001
- type: precision_at_1
value: 46.205
- type: precision_at_10
value: 9.873
- type: precision_at_100
value: 1.158
- type: precision_at_1000
value: 0.12
- type: precision_at_20
value: 5.319
- type: precision_at_3
value: 25.424999999999997
- type: precision_at_5
value: 17.375
- type: recall_at_1
value: 41.411
- type: recall_at_10
value: 82.761
- type: recall_at_100
value: 95.52199999999999
- type: recall_at_1000
value: 99.02499999999999
- type: recall_at_20
value: 88.34
- type: recall_at_3
value: 65.73
- type: recall_at_5
value: 73.894
- type: main_score
value: 64.23100000000001
- task:
type: PairClassification
dataset:
name: MTEB Ocnli (default)
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cosine_accuracy
value: 62.3714131023281
- type: cosine_accuracy_threshold
value: 79.70921993255615
- type: cosine_ap
value: 66.41380155495659
- type: cosine_f1
value: 68.89547185780786
- type: cosine_f1_threshold
value: 72.91591167449951
- type: cosine_precision
value: 57.485875706214685
- type: cosine_recall
value: 85.95564941921859
- type: dot_accuracy
value: 60.47644829453167
- type: dot_accuracy_threshold
value: 36627.362060546875
- type: dot_ap
value: 63.696303449293204
- type: dot_f1
value: 68.3986041101202
- type: dot_f1_threshold
value: 30452.72216796875
- type: dot_precision
value: 54.04411764705882
- type: dot_recall
value: 93.13621964097149
- type: euclidean_accuracy
value: 63.02111532214402
- type: euclidean_accuracy_threshold
value: 1392.76762008667
- type: euclidean_ap
value: 66.65907089443218
- type: euclidean_f1
value: 69.05036524413688
- type: euclidean_f1_threshold
value: 1711.5310668945312
- type: euclidean_precision
value: 54.29262394195889
- type: euclidean_recall
value: 94.82576557550159
- type: main_score
value: 63.02111532214402
- type: manhattan_accuracy
value: 62.75040606388739
- type: manhattan_accuracy_threshold
value: 32475.347900390625
- type: manhattan_ap
value: 66.50943585125434
- type: manhattan_f1
value: 69.08382066276802
- type: manhattan_f1_threshold
value: 41238.470458984375
- type: manhattan_precision
value: 54.75896168108776
- type: manhattan_recall
value: 93.55860612460401
- type: max_accuracy
value: 63.02111532214402
- type: max_ap
value: 66.65907089443218
- type: max_f1
value: 69.08382066276802
- type: max_precision
value: 57.485875706214685
- type: max_recall
value: 94.82576557550159
- type: similarity_accuracy
value: 62.3714131023281
- type: similarity_accuracy_threshold
value: 79.70921993255615
- type: similarity_ap
value: 66.41380155495659
- type: similarity_f1
value: 68.89547185780786
- type: similarity_f1_threshold
value: 72.91591167449951
- type: similarity_precision
value: 57.485875706214685
- type: similarity_recall
value: 85.95564941921859
- task:
type: Classification
dataset:
name: MTEB OnlineShopping (default)
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 91.88000000000001
- type: ap
value: 89.52463684448476
- type: ap_weighted
value: 89.52463684448476
- type: f1
value: 91.86313022306673
- type: f1_weighted
value: 91.87806318146912
- type: main_score
value: 91.88000000000001
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (en)
type: GEM/opusparcus
config: en
split: test.full
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cosine_accuracy
value: 92.65578635014838
- type: cosine_accuracy_threshold
value: 74.02530312538147
- type: cosine_ap
value: 98.3834226153613
- type: cosine_f1
value: 94.92567913890312
- type: cosine_f1_threshold
value: 74.02530312538147
- type: cosine_precision
value: 95.562435500516
- type: cosine_recall
value: 94.29735234215886
- type: dot_accuracy
value: 91.54302670623146
- type: dot_accuracy_threshold
value: 34452.29187011719
- type: dot_ap
value: 98.1237257754439
- type: dot_f1
value: 94.22400803616273
- type: dot_f1_threshold
value: 33670.41931152344
- type: dot_precision
value: 92.9633300297324
- type: dot_recall
value: 95.5193482688391
- type: euclidean_accuracy
value: 92.28486646884274
- type: euclidean_accuracy_threshold
value: 1602.8022766113281
- type: euclidean_ap
value: 98.3099021504706
- type: euclidean_f1
value: 94.75277497477296
- type: euclidean_f1_threshold
value: 1604.7462463378906
- type: euclidean_precision
value: 93.89999999999999
- type: euclidean_recall
value: 95.62118126272912
- type: main_score
value: 98.3834226153613
- type: manhattan_accuracy
value: 92.2106824925816
- type: manhattan_accuracy_threshold
value: 38872.90954589844
- type: manhattan_ap
value: 98.28694101230218
- type: manhattan_f1
value: 94.67815509376584
- type: manhattan_f1_threshold
value: 38872.90954589844
- type: manhattan_precision
value: 94.24823410696267
- type: manhattan_recall
value: 95.11201629327903
- type: max_accuracy
value: 92.65578635014838
- type: max_ap
value: 98.3834226153613
- type: max_f1
value: 94.92567913890312
- type: max_precision
value: 95.562435500516
- type: max_recall
value: 95.62118126272912
- type: similarity_accuracy
value: 92.65578635014838
- type: similarity_accuracy_threshold
value: 74.02530312538147
- type: similarity_ap
value: 98.3834226153613
- type: similarity_f1
value: 94.92567913890312
- type: similarity_f1_threshold
value: 74.02530312538147
- type: similarity_precision
value: 95.562435500516
- type: similarity_recall
value: 94.29735234215886
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (de)
type: GEM/opusparcus
config: de
split: test.full
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cosine_accuracy
value: 87.72178850248403
- type: cosine_accuracy_threshold
value: 73.33863377571106
- type: cosine_ap
value: 96.98901408834976
- type: cosine_f1
value: 91.89944134078212
- type: cosine_f1_threshold
value: 71.45810127258301
- type: cosine_precision
value: 89.64577656675749
- type: cosine_recall
value: 94.26934097421203
- type: dot_accuracy
value: 86.30234208658624
- type: dot_accuracy_threshold
value: 32027.130126953125
- type: dot_ap
value: 96.12260574893256
- type: dot_f1
value: 91.31602506714414
- type: dot_f1_threshold
value: 30804.376220703125
- type: dot_precision
value: 85.93091828138164
- type: dot_recall
value: 97.42120343839542
- type: euclidean_accuracy
value: 87.9347054648687
- type: euclidean_accuracy_threshold
value: 1609.6670150756836
- type: euclidean_ap
value: 97.00238860358252
- type: euclidean_f1
value: 92.1089063221043
- type: euclidean_f1_threshold
value: 1641.8487548828125
- type: euclidean_precision
value: 89.10714285714286
- type: euclidean_recall
value: 95.31996179560649
- type: main_score
value: 97.00238860358252
- type: manhattan_accuracy
value: 87.72178850248403
- type: manhattan_accuracy_threshold
value: 40137.060546875
- type: manhattan_ap
value: 96.98653728159941
- type: manhattan_f1
value: 92.03865623561896
- type: manhattan_f1_threshold
value: 40137.060546875
- type: manhattan_precision
value: 88.80994671403198
- type: manhattan_recall
value: 95.51098376313276
- type: max_accuracy
value: 87.9347054648687
- type: max_ap
value: 97.00238860358252
- type: max_f1
value: 92.1089063221043
- type: max_precision
value: 89.64577656675749
- type: max_recall
value: 97.42120343839542
- type: similarity_accuracy
value: 87.72178850248403
- type: similarity_accuracy_threshold
value: 73.33863377571106
- type: similarity_ap
value: 96.98901408834976
- type: similarity_f1
value: 91.89944134078212
- type: similarity_f1_threshold
value: 71.45810127258301
- type: similarity_precision
value: 89.64577656675749
- type: similarity_recall
value: 94.26934097421203
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test.full
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cosine_accuracy
value: 80.92643051771117
- type: cosine_accuracy_threshold
value: 76.68856382369995
- type: cosine_ap
value: 93.74622381534307
- type: cosine_f1
value: 87.12328767123287
- type: cosine_f1_threshold
value: 71.64022922515869
- type: cosine_precision
value: 80.64243448858834
- type: cosine_recall
value: 94.73684210526315
- type: dot_accuracy
value: 80.858310626703
- type: dot_accuracy_threshold
value: 34028.3935546875
- type: dot_ap
value: 91.18448457633308
- type: dot_f1
value: 86.82606657290202
- type: dot_f1_threshold
value: 34028.3935546875
- type: dot_precision
value: 82.2380106571936
- type: dot_recall
value: 91.9563058589871
- type: euclidean_accuracy
value: 80.858310626703
- type: euclidean_accuracy_threshold
value: 1595.7651138305664
- type: euclidean_ap
value: 93.8182717829648
- type: euclidean_f1
value: 87.04044117647058
- type: euclidean_f1_threshold
value: 1609.2475891113281
- type: euclidean_precision
value: 81.00940975192472
- type: euclidean_recall
value: 94.04170804369414
- type: main_score
value: 93.8182717829648
- type: manhattan_accuracy
value: 80.99455040871935
- type: manhattan_accuracy_threshold
value: 38092.132568359375
- type: manhattan_ap
value: 93.77563401151711
- type: manhattan_f1
value: 86.91983122362869
- type: manhattan_f1_threshold
value: 38092.132568359375
- type: manhattan_precision
value: 82.32682060390763
- type: manhattan_recall
value: 92.05561072492551
- type: max_accuracy
value: 80.99455040871935
- type: max_ap
value: 93.8182717829648
- type: max_f1
value: 87.12328767123287
- type: max_precision
value: 82.32682060390763
- type: max_recall
value: 94.73684210526315
- type: similarity_accuracy
value: 80.92643051771117
- type: similarity_accuracy_threshold
value: 76.68856382369995
- type: similarity_ap
value: 93.74622381534307
- type: similarity_f1
value: 87.12328767123287
- type: similarity_f1_threshold
value: 71.64022922515869
- type: similarity_precision
value: 80.64243448858834
- type: similarity_recall
value: 94.73684210526315
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (ru)
type: GEM/opusparcus
config: ru
split: test.full
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cosine_accuracy
value: 76.83823529411765
- type: cosine_accuracy_threshold
value: 72.70769476890564
- type: cosine_ap
value: 89.56692049908222
- type: cosine_f1
value: 83.99832003359934
- type: cosine_f1_threshold
value: 70.9052324295044
- type: cosine_precision
value: 76.16146230007617
- type: cosine_recall
value: 93.63295880149812
- type: dot_accuracy
value: 76.28676470588235
- type: dot_accuracy_threshold
value: 33740.68908691406
- type: dot_ap
value: 87.77185177141567
- type: dot_f1
value: 83.62251375370292
- type: dot_f1_threshold
value: 32726.611328125
- type: dot_precision
value: 76.29343629343629
- type: dot_recall
value: 92.50936329588015
- type: euclidean_accuracy
value: 77.32843137254902
- type: euclidean_accuracy_threshold
value: 1566.510009765625
- type: euclidean_ap
value: 89.60605626791111
- type: euclidean_f1
value: 84.06546080964686
- type: euclidean_f1_threshold
value: 1576.4202117919922
- type: euclidean_precision
value: 77.83094098883574
- type: euclidean_recall
value: 91.38576779026218
- type: main_score
value: 89.60605626791111
- type: manhattan_accuracy
value: 76.89950980392157
- type: manhattan_accuracy_threshold
value: 38202.215576171875
- type: manhattan_ap
value: 89.55766894104868
- type: manhattan_f1
value: 83.80462724935732
- type: manhattan_f1_threshold
value: 38934.375
- type: manhattan_precision
value: 77.25118483412322
- type: manhattan_recall
value: 91.57303370786516
- type: max_accuracy
value: 77.32843137254902
- type: max_ap
value: 89.60605626791111
- type: max_f1
value: 84.06546080964686
- type: max_precision
value: 77.83094098883574
- type: max_recall
value: 93.63295880149812
- type: similarity_accuracy
value: 76.83823529411765
- type: similarity_accuracy_threshold
value: 72.70769476890564
- type: similarity_ap
value: 89.56692049908222
- type: similarity_f1
value: 83.99832003359934
- type: similarity_f1_threshold
value: 70.9052324295044
- type: similarity_precision
value: 76.16146230007617
- type: similarity_recall
value: 93.63295880149812
- task:
type: Classification
dataset:
name: MTEB PAC (default)
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
metrics:
- type: accuracy
value: 68.39559803069794
- type: ap
value: 77.68074206719457
- type: ap_weighted
value: 77.68074206719457
- type: f1
value: 66.23485605467732
- type: f1_weighted
value: 69.03201442129347
- type: main_score
value: 68.39559803069794
- task:
type: STS
dataset:
name: MTEB PAWSX (default)
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cosine_pearson
value: 13.161523266433587
- type: cosine_spearman
value: 15.557333873773386
- type: euclidean_pearson
value: 17.147508431907525
- type: euclidean_spearman
value: 15.664112857732146
- type: main_score
value: 15.557333873773386
- type: manhattan_pearson
value: 17.130875906264386
- type: manhattan_spearman
value: 15.624397342229637
- type: pearson
value: 13.161523266433587
- type: spearman
value: 15.557333873773386
- task:
type: PairClassification
dataset:
name: MTEB PSC (default)
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
metrics:
- type: cosine_accuracy
value: 97.86641929499072
- type: cosine_accuracy_threshold
value: 79.0391206741333
- type: cosine_ap
value: 99.19403807771533
- type: cosine_f1
value: 96.45608628659475
- type: cosine_f1_threshold
value: 79.0391206741333
- type: cosine_precision
value: 97.50778816199377
- type: cosine_recall
value: 95.42682926829268
- type: dot_accuracy
value: 98.14471243042672
- type: dot_accuracy_threshold
value: 29808.1787109375
- type: dot_ap
value: 99.331999859971
- type: dot_f1
value: 97.01492537313433
- type: dot_f1_threshold
value: 29808.1787109375
- type: dot_precision
value: 95.02923976608187
- type: dot_recall
value: 99.08536585365853
- type: euclidean_accuracy
value: 97.49536178107606
- type: euclidean_accuracy_threshold
value: 1276.227855682373
- type: euclidean_ap
value: 98.91056467717377
- type: euclidean_f1
value: 95.83975346687212
- type: euclidean_f1_threshold
value: 1276.227855682373
- type: euclidean_precision
value: 96.88473520249221
- type: euclidean_recall
value: 94.8170731707317
- type: main_score
value: 99.331999859971
- type: manhattan_accuracy
value: 97.49536178107606
- type: manhattan_accuracy_threshold
value: 31097.674560546875
- type: manhattan_ap
value: 98.95694691792707
- type: manhattan_f1
value: 95.83975346687212
- type: manhattan_f1_threshold
value: 31097.674560546875
- type: manhattan_precision
value: 96.88473520249221
- type: manhattan_recall
value: 94.8170731707317
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.331999859971
- type: max_f1
value: 97.01492537313433
- type: max_precision
value: 97.50778816199377
- type: max_recall
value: 99.08536585365853
- type: similarity_accuracy
value: 97.86641929499072
- type: similarity_accuracy_threshold
value: 79.0391206741333
- type: similarity_ap
value: 99.19403807771533
- type: similarity_f1
value: 96.45608628659475
- type: similarity_f1_threshold
value: 79.0391206741333
- type: similarity_precision
value: 97.50778816199377
- type: similarity_recall
value: 95.42682926829268
- task:
type: PairClassification
dataset:
name: MTEB PawsXPairClassification (en)
type: google-research-datasets/paws-x
config: en
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cosine_accuracy
value: 61.8
- type: cosine_accuracy_threshold
value: 99.5664119720459
- type: cosine_ap
value: 60.679317786040585
- type: cosine_f1
value: 63.17354143441101
- type: cosine_f1_threshold
value: 97.22164869308472
- type: cosine_precision
value: 47.6457399103139
- type: cosine_recall
value: 93.71554575523705
- type: dot_accuracy
value: 55.7
- type: dot_accuracy_threshold
value: 48353.62548828125
- type: dot_ap
value: 48.53805970536875
- type: dot_f1
value: 62.42214532871972
- type: dot_f1_threshold
value: 38215.53955078125
- type: dot_precision
value: 45.48663640948058
- type: dot_recall
value: 99.44873208379272
- type: euclidean_accuracy
value: 61.75000000000001
- type: euclidean_accuracy_threshold
value: 189.0761137008667
- type: euclidean_ap
value: 60.55517418691518
- type: euclidean_f1
value: 63.07977736549165
- type: euclidean_f1_threshold
value: 504.3168067932129
- type: euclidean_precision
value: 47.53914988814318
- type: euclidean_recall
value: 93.71554575523705
- type: main_score
value: 60.679317786040585
- type: manhattan_accuracy
value: 61.9
- type: manhattan_accuracy_threshold
value: 4695.778274536133
- type: manhattan_ap
value: 60.48686620413608
- type: manhattan_f1
value: 62.92880855772778
- type: manhattan_f1_threshold
value: 12542.36831665039
- type: manhattan_precision
value: 47.28381374722838
- type: manhattan_recall
value: 94.04630650496141
- type: max_accuracy
value: 61.9
- type: max_ap
value: 60.679317786040585
- type: max_f1
value: 63.17354143441101
- type: max_precision
value: 47.6457399103139
- type: max_recall
value: 99.44873208379272
- type: similarity_accuracy
value: 61.8
- type: similarity_accuracy_threshold
value: 99.5664119720459
- type: similarity_ap
value: 60.679317786040585
- type: similarity_f1
value: 63.17354143441101
- type: similarity_f1_threshold
value: 97.22164869308472
- type: similarity_precision
value: 47.6457399103139
- type: similarity_recall
value: 93.71554575523705
- task:
type: PairClassification
dataset:
name: MTEB PawsXPairClassification (de)
type: google-research-datasets/paws-x
config: de
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cosine_accuracy
value: 60.25
- type: cosine_accuracy_threshold
value: 99.54338073730469
- type: cosine_ap
value: 56.7863613689054
- type: cosine_f1
value: 62.23499820337766
- type: cosine_f1_threshold
value: 89.95014429092407
- type: cosine_precision
value: 45.86864406779661
- type: cosine_recall
value: 96.75977653631284
- type: dot_accuracy
value: 56.8
- type: dot_accuracy_threshold
value: 47349.78332519531
- type: dot_ap
value: 49.7857806061729
- type: dot_f1
value: 62.31225986727209
- type: dot_f1_threshold
value: 30143.206787109375
- type: dot_precision
value: 45.32520325203252
- type: dot_recall
value: 99.66480446927373
- type: euclidean_accuracy
value: 60.3
- type: euclidean_accuracy_threshold
value: 219.78106498718262
- type: euclidean_ap
value: 56.731544327179606
- type: euclidean_f1
value: 62.19895287958115
- type: euclidean_f1_threshold
value: 1792.1623229980469
- type: euclidean_precision
value: 45.22842639593909
- type: euclidean_recall
value: 99.55307262569832
- type: main_score
value: 56.7863613689054
- type: manhattan_accuracy
value: 60.150000000000006
- type: manhattan_accuracy_threshold
value: 5104.503631591797
- type: manhattan_ap
value: 56.70304479768734
- type: manhattan_f1
value: 62.22067039106145
- type: manhattan_f1_threshold
value: 42839.471435546875
- type: manhattan_precision
value: 45.2513966480447
- type: manhattan_recall
value: 99.55307262569832
- type: max_accuracy
value: 60.3
- type: max_ap
value: 56.7863613689054
- type: max_f1
value: 62.31225986727209
- type: max_precision
value: 45.86864406779661
- type: max_recall
value: 99.66480446927373
- type: similarity_accuracy
value: 60.25
- type: similarity_accuracy_threshold
value: 99.54338073730469
- type: similarity_ap
value: 56.7863613689054
- type: similarity_f1
value: 62.23499820337766
- type: similarity_f1_threshold
value: 89.95014429092407
- type: similarity_precision
value: 45.86864406779661
- type: similarity_recall
value: 96.75977653631284
- task:
type: PairClassification
dataset:
name: MTEB PawsXPairClassification (es)
type: google-research-datasets/paws-x
config: es
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cosine_accuracy
value: 59.699999999999996
- type: cosine_accuracy_threshold
value: 99.55930709838867
- type: cosine_ap
value: 57.31662248806265
- type: cosine_f1
value: 62.444061962134256
- type: cosine_f1_threshold
value: 74.75898265838623
- type: cosine_precision
value: 45.3953953953954
- type: cosine_recall
value: 100.0
- type: dot_accuracy
value: 55.900000000000006
- type: dot_accuracy_threshold
value: 47512.90283203125
- type: dot_ap
value: 49.39339147787568
- type: dot_f1
value: 62.487082328625554
- type: dot_f1_threshold
value: 34989.03503417969
- type: dot_precision
value: 45.44088176352705
- type: dot_recall
value: 100.0
- type: euclidean_accuracy
value: 59.599999999999994
- type: euclidean_accuracy_threshold
value: 200.82547664642334
- type: euclidean_ap
value: 57.19737488445163
- type: euclidean_f1
value: 62.444061962134256
- type: euclidean_f1_threshold
value: 1538.8837814331055
- type: euclidean_precision
value: 45.3953953953954
- type: euclidean_recall
value: 100.0
- type: main_score
value: 57.31662248806265
- type: manhattan_accuracy
value: 59.550000000000004
- type: manhattan_accuracy_threshold
value: 5016.501617431641
- type: manhattan_ap
value: 57.089959907945065
- type: manhattan_f1
value: 62.444061962134256
- type: manhattan_f1_threshold
value: 37523.53515625
- type: manhattan_precision
value: 45.3953953953954
- type: manhattan_recall
value: 100.0
- type: max_accuracy
value: 59.699999999999996
- type: max_ap
value: 57.31662248806265
- type: max_f1
value: 62.487082328625554
- type: max_precision
value: 45.44088176352705
- type: max_recall
value: 100.0
- type: similarity_accuracy
value: 59.699999999999996
- type: similarity_accuracy_threshold
value: 99.55930709838867
- type: similarity_ap
value: 57.31662248806265
- type: similarity_f1
value: 62.444061962134256
- type: similarity_f1_threshold
value: 74.75898265838623
- type: similarity_precision
value: 45.3953953953954
- type: similarity_recall
value: 100.0
- task:
type: PairClassification
dataset:
name: MTEB PawsXPairClassification (fr)
type: google-research-datasets/paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cosine_accuracy
value: 61.150000000000006
- type: cosine_accuracy_threshold
value: 99.36153888702393
- type: cosine_ap
value: 59.43845317938599
- type: cosine_f1
value: 62.51298026998961
- type: cosine_f1_threshold
value: 76.77866220474243
- type: cosine_precision
value: 45.468277945619334
- type: cosine_recall
value: 100.0
- type: dot_accuracy
value: 55.75
- type: dot_accuracy_threshold
value: 48931.55212402344
- type: dot_ap
value: 50.15949290538757
- type: dot_f1
value: 62.53462603878117
- type: dot_f1_threshold
value: 34415.7958984375
- type: dot_precision
value: 45.4911838790932
- type: dot_recall
value: 100.0
- type: euclidean_accuracy
value: 61.050000000000004
- type: euclidean_accuracy_threshold
value: 240.8097267150879
- type: euclidean_ap
value: 59.367971294226216
- type: euclidean_f1
value: 62.51298026998961
- type: euclidean_f1_threshold
value: 1444.132423400879
- type: euclidean_precision
value: 45.468277945619334
- type: euclidean_recall
value: 100.0
- type: main_score
value: 59.43845317938599
- type: manhattan_accuracy
value: 60.95
- type: manhattan_accuracy_threshold
value: 5701.206207275391
- type: manhattan_ap
value: 59.30094096378774
- type: manhattan_f1
value: 62.53462603878117
- type: manhattan_f1_threshold
value: 33445.672607421875
- type: manhattan_precision
value: 45.4911838790932
- type: manhattan_recall
value: 100.0
- type: max_accuracy
value: 61.150000000000006
- type: max_ap
value: 59.43845317938599
- type: max_f1
value: 62.53462603878117
- type: max_precision
value: 45.4911838790932
- type: max_recall
value: 100.0
- type: similarity_accuracy
value: 61.150000000000006
- type: similarity_accuracy_threshold
value: 99.36153888702393
- type: similarity_ap
value: 59.43845317938599
- type: similarity_f1
value: 62.51298026998961
- type: similarity_f1_threshold
value: 76.77866220474243
- type: similarity_precision
value: 45.468277945619334
- type: similarity_recall
value: 100.0
- task:
type: PairClassification
dataset:
name: MTEB PawsXPairClassification (zh)
type: google-research-datasets/paws-x
config: zh
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cosine_accuracy
value: 58.85
- type: cosine_accuracy_threshold
value: 99.73838329315186
- type: cosine_ap
value: 54.66913160570546
- type: cosine_f1
value: 62.32136632973162
- type: cosine_f1_threshold
value: 76.4499306678772
- type: cosine_precision
value: 45.265822784810126
- type: cosine_recall
value: 100.0
- type: dot_accuracy
value: 56.25
- type: dot_accuracy_threshold
value: 47351.9287109375
- type: dot_ap
value: 48.5266232989438
- type: dot_f1
value: 62.277951933124356
- type: dot_f1_threshold
value: 31325.28076171875
- type: dot_precision
value: 45.220030349013655
- type: dot_recall
value: 100.0
- type: euclidean_accuracy
value: 58.9
- type: euclidean_accuracy_threshold
value: 144.24468278884888
- type: euclidean_ap
value: 54.66981490353506
- type: euclidean_f1
value: 62.32136632973162
- type: euclidean_f1_threshold
value: 1484.908676147461
- type: euclidean_precision
value: 45.265822784810126
- type: euclidean_recall
value: 100.0
- type: main_score
value: 54.66981490353506
- type: manhattan_accuracy
value: 58.9
- type: manhattan_accuracy_threshold
value: 3586.785125732422
- type: manhattan_ap
value: 54.668355260247736
- type: manhattan_f1
value: 62.32136632973162
- type: manhattan_f1_threshold
value: 36031.22863769531
- type: manhattan_precision
value: 45.265822784810126
- type: manhattan_recall
value: 100.0
- type: max_accuracy
value: 58.9
- type: max_ap
value: 54.66981490353506
- type: max_f1
value: 62.32136632973162
- type: max_precision
value: 45.265822784810126
- type: max_recall
value: 100.0
- type: similarity_accuracy
value: 58.85
- type: similarity_accuracy_threshold
value: 99.73838329315186
- type: similarity_ap
value: 54.66913160570546
- type: similarity_f1
value: 62.32136632973162
- type: similarity_f1_threshold
value: 76.4499306678772
- type: similarity_precision
value: 45.265822784810126
- type: similarity_recall
value: 100.0
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN (default)
type: PL-MTEB/polemo2_in
config: default
split: test
revision: d90724373c70959f17d2331ad51fb60c71176b03
metrics:
- type: accuracy
value: 83.75346260387812
- type: f1
value: 81.98304891214909
- type: f1_weighted
value: 84.29623200830078
- type: main_score
value: 83.75346260387812
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT (default)
type: PL-MTEB/polemo2_out
config: default
split: test
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
metrics:
- type: accuracy
value: 66.53846153846153
- type: f1
value: 52.71826064368638
- type: f1_weighted
value: 69.10010124630334
- type: main_score
value: 66.53846153846153
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cosine_accuracy
value: 81.8
- type: cosine_accuracy_threshold
value: 90.47793745994568
- type: cosine_ap
value: 91.42490266080884
- type: cosine_f1
value: 85.4632587859425
- type: cosine_f1_threshold
value: 90.47793745994568
- type: cosine_precision
value: 82.56172839506173
- type: cosine_recall
value: 88.57615894039735
- type: dot_accuracy
value: 74.6
- type: dot_accuracy_threshold
value: 42102.23693847656
- type: dot_ap
value: 86.20060009096979
- type: dot_f1
value: 80.02842928216063
- type: dot_f1_threshold
value: 38970.16906738281
- type: dot_precision
value: 70.1120797011208
- type: dot_recall
value: 93.21192052980133
- type: euclidean_accuracy
value: 81.5
- type: euclidean_accuracy_threshold
value: 880.433464050293
- type: euclidean_ap
value: 91.33143477982087
- type: euclidean_f1
value: 85.44600938967135
- type: euclidean_f1_threshold
value: 964.0384674072266
- type: euclidean_precision
value: 81.00890207715133
- type: euclidean_recall
value: 90.39735099337747
- type: main_score
value: 91.42490266080884
- type: manhattan_accuracy
value: 81.3
- type: manhattan_accuracy_threshold
value: 22100.830078125
- type: manhattan_ap
value: 91.25996158651282
- type: manhattan_f1
value: 85.38102643856921
- type: manhattan_f1_threshold
value: 24043.515014648438
- type: manhattan_precision
value: 80.49853372434018
- type: manhattan_recall
value: 90.89403973509934
- type: max_accuracy
value: 81.8
- type: max_ap
value: 91.42490266080884
- type: max_f1
value: 85.4632587859425
- type: max_precision
value: 82.56172839506173
- type: max_recall
value: 93.21192052980133
- type: similarity_accuracy
value: 81.8
- type: similarity_accuracy_threshold
value: 90.47793745994568
- type: similarity_ap
value: 91.42490266080884
- type: similarity_f1
value: 85.4632587859425
- type: similarity_f1_threshold
value: 90.47793745994568
- type: similarity_precision
value: 82.56172839506173
- type: similarity_recall
value: 88.57615894039735
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 71.419
- type: map_at_10
value: 85.542
- type: map_at_100
value: 86.161
- type: map_at_1000
value: 86.175
- type: map_at_20
value: 85.949
- type: map_at_3
value: 82.623
- type: map_at_5
value: 84.5
- type: mrr_at_1
value: 82.27
- type: mrr_at_10
value: 88.21900000000001
- type: mrr_at_100
value: 88.313
- type: mrr_at_1000
value: 88.31400000000001
- type: mrr_at_20
value: 88.286
- type: mrr_at_3
value: 87.325
- type: mrr_at_5
value: 87.97500000000001
- type: ndcg_at_1
value: 82.3
- type: ndcg_at_10
value: 89.088
- type: ndcg_at_100
value: 90.217
- type: ndcg_at_1000
value: 90.29700000000001
- type: ndcg_at_20
value: 89.697
- type: ndcg_at_3
value: 86.435
- type: ndcg_at_5
value: 87.966
- type: precision_at_1
value: 82.3
- type: precision_at_10
value: 13.527000000000001
- type: precision_at_100
value: 1.537
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.165000000000001
- type: precision_at_3
value: 37.92
- type: precision_at_5
value: 24.914
- type: recall_at_1
value: 71.419
- type: recall_at_10
value: 95.831
- type: recall_at_100
value: 99.64
- type: recall_at_1000
value: 99.988
- type: recall_at_20
value: 97.76599999999999
- type: recall_at_3
value: 88.081
- type: recall_at_5
value: 92.50500000000001
- type: main_score
value: 89.088
- task:
type: STS
dataset:
name: MTEB RUParaPhraserSTS (default)
type: merionum/ru_paraphraser
config: default
split: test
revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4
metrics:
- type: cosine_pearson
value: 67.91177744712421
- type: cosine_spearman
value: 76.77113726753656
- type: euclidean_pearson
value: 73.81454206068638
- type: euclidean_spearman
value: 76.92529493599028
- type: main_score
value: 76.77113726753656
- type: manhattan_pearson
value: 73.81690454439168
- type: manhattan_spearman
value: 76.87333776705002
- type: pearson
value: 67.91177744712421
- type: spearman
value: 76.77113726753656
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 55.39924225216962
- type: v_measure
value: 55.39924225216962
- type: v_measure_std
value: 4.723802279292467
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 62.87465161304012
- type: v_measure
value: 62.87465161304012
- type: v_measure_std
value: 12.082670914488473
- task:
type: Retrieval
dataset:
name: MTEB RiaNewsRetrieval (default)
type: ai-forever/ria-news-retrieval
config: default
split: test
revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7
metrics:
- type: main_score
value: 79.209
- type: map_at_1
value: 67.33
- type: map_at_10
value: 75.633
- type: map_at_100
value: 75.897
- type: map_at_1000
value: 75.907
- type: map_at_20
value: 75.804
- type: map_at_3
value: 74.2
- type: map_at_5
value: 75.13300000000001
- type: mrr_at_1
value: 67.31
- type: mrr_at_10
value: 75.62709126984095
- type: mrr_at_100
value: 75.89105697041113
- type: mrr_at_1000
value: 75.90115653883124
- type: mrr_at_20
value: 75.79802332308172
- type: mrr_at_3
value: 74.19499999999961
- type: mrr_at_5
value: 75.12849999999939
- type: nauc_map_at_1000_diff1
value: 74.30304869630591
- type: nauc_map_at_1000_max
value: 36.477146725784046
- type: nauc_map_at_1000_std
value: -20.862772498461723
- type: nauc_map_at_100_diff1
value: 74.29833058090355
- type: nauc_map_at_100_max
value: 36.483678619667884
- type: nauc_map_at_100_std
value: -20.856274849980135
- type: nauc_map_at_10_diff1
value: 74.20729220697967
- type: nauc_map_at_10_max
value: 36.56543146170092
- type: nauc_map_at_10_std
value: -20.991081015484728
- type: nauc_map_at_1_diff1
value: 77.38899022125185
- type: nauc_map_at_1_max
value: 32.45918619669731
- type: nauc_map_at_1_std
value: -22.149586336167324
- type: nauc_map_at_20_diff1
value: 74.2447573558587
- type: nauc_map_at_20_max
value: 36.50383130240387
- type: nauc_map_at_20_std
value: -20.87013743041831
- type: nauc_map_at_3_diff1
value: 74.3054577294586
- type: nauc_map_at_3_max
value: 36.484530586652724
- type: nauc_map_at_3_std
value: -21.90543024607988
- type: nauc_map_at_5_diff1
value: 74.21062368961503
- type: nauc_map_at_5_max
value: 36.55670532498779
- type: nauc_map_at_5_std
value: -21.488786900676942
- type: nauc_mrr_at_1000_diff1
value: 74.31619177956684
- type: nauc_mrr_at_1000_max
value: 36.53498918453189
- type: nauc_mrr_at_1000_std
value: -20.75986704931237
- type: nauc_mrr_at_100_diff1
value: 74.31146790382356
- type: nauc_mrr_at_100_max
value: 36.54149252857106
- type: nauc_mrr_at_100_std
value: -20.75341959250079
- type: nauc_mrr_at_10_diff1
value: 74.22027806145095
- type: nauc_mrr_at_10_max
value: 36.622542969971725
- type: nauc_mrr_at_10_std
value: -20.889417384064117
- type: nauc_mrr_at_1_diff1
value: 77.4306709551449
- type: nauc_mrr_at_1_max
value: 32.57259463438259
- type: nauc_mrr_at_1_std
value: -21.964402859613937
- type: nauc_mrr_at_20_diff1
value: 74.25784396230718
- type: nauc_mrr_at_20_max
value: 36.561412224507336
- type: nauc_mrr_at_20_std
value: -20.767665000065723
- type: nauc_mrr_at_3_diff1
value: 74.31423253547214
- type: nauc_mrr_at_3_max
value: 36.537745749488906
- type: nauc_mrr_at_3_std
value: -21.81259529019546
- type: nauc_mrr_at_5_diff1
value: 74.22404613312771
- type: nauc_mrr_at_5_max
value: 36.60743768455219
- type: nauc_mrr_at_5_std
value: -21.39479216331971
- type: nauc_ndcg_at_1000_diff1
value: 73.48182819705742
- type: nauc_ndcg_at_1000_max
value: 37.86991608461793
- type: nauc_ndcg_at_1000_std
value: -19.021499322688904
- type: nauc_ndcg_at_100_diff1
value: 73.34941250585759
- type: nauc_ndcg_at_100_max
value: 38.11150275625829
- type: nauc_ndcg_at_100_std
value: -18.70624087206104
- type: nauc_ndcg_at_10_diff1
value: 72.82520265115987
- type: nauc_ndcg_at_10_max
value: 38.43323357650525
- type: nauc_ndcg_at_10_std
value: -19.410953792830878
- type: nauc_ndcg_at_1_diff1
value: 77.38899022125185
- type: nauc_ndcg_at_1_max
value: 32.45918619669731
- type: nauc_ndcg_at_1_std
value: -22.149586336167324
- type: nauc_ndcg_at_20_diff1
value: 72.93309285256507
- type: nauc_ndcg_at_20_max
value: 38.217372819067755
- type: nauc_ndcg_at_20_std
value: -18.864113576359333
- type: nauc_ndcg_at_3_diff1
value: 73.18253776744112
- type: nauc_ndcg_at_3_max
value: 38.008109328364
- type: nauc_ndcg_at_3_std
value: -21.68785687594153
- type: nauc_ndcg_at_5_diff1
value: 72.90474739784793
- type: nauc_ndcg_at_5_max
value: 38.29483039202184
- type: nauc_ndcg_at_5_std
value: -20.833049811453474
- type: nauc_precision_at_1000_diff1
value: 59.306217613750334
- type: nauc_precision_at_1000_max
value: 72.20747948302262
- type: nauc_precision_at_1000_std
value: 45.58837180096227
- type: nauc_precision_at_100_diff1
value: 62.87286844562389
- type: nauc_precision_at_100_max
value: 61.33108214045868
- type: nauc_precision_at_100_std
value: 20.67481963545654
- type: nauc_precision_at_10_diff1
value: 64.11222984256685
- type: nauc_precision_at_10_max
value: 50.323697746037496
- type: nauc_precision_at_10_std
value: -7.9994544634332625
- type: nauc_precision_at_1_diff1
value: 77.38899022125185
- type: nauc_precision_at_1_max
value: 32.45918619669731
- type: nauc_precision_at_1_std
value: -22.149586336167324
- type: nauc_precision_at_20_diff1
value: 62.30228127286973
- type: nauc_precision_at_20_max
value: 52.02090746208407
- type: nauc_precision_at_20_std
value: 0.7629898806370331
- type: nauc_precision_at_3_diff1
value: 68.82856645994157
- type: nauc_precision_at_3_max
value: 43.94171571306625
- type: nauc_precision_at_3_std
value: -20.78595255410148
- type: nauc_precision_at_5_diff1
value: 66.62157622497887
- type: nauc_precision_at_5_max
value: 46.69398173603811
- type: nauc_precision_at_5_std
value: -17.412423571163057
- type: nauc_recall_at_1000_diff1
value: 59.30621761375148
- type: nauc_recall_at_1000_max
value: 72.20747948302191
- type: nauc_recall_at_1000_std
value: 45.588371800962655
- type: nauc_recall_at_100_diff1
value: 62.872868445623894
- type: nauc_recall_at_100_max
value: 61.33108214045813
- type: nauc_recall_at_100_std
value: 20.67481963545666
- type: nauc_recall_at_10_diff1
value: 64.11222984256698
- type: nauc_recall_at_10_max
value: 50.32369774603755
- type: nauc_recall_at_10_std
value: -7.999454463433321
- type: nauc_recall_at_1_diff1
value: 77.38899022125185
- type: nauc_recall_at_1_max
value: 32.45918619669731
- type: nauc_recall_at_1_std
value: -22.149586336167324
- type: nauc_recall_at_20_diff1
value: 62.3022812728695
- type: nauc_recall_at_20_max
value: 52.02090746208397
- type: nauc_recall_at_20_std
value: 0.7629898806369458
- type: nauc_recall_at_3_diff1
value: 68.82856645994157
- type: nauc_recall_at_3_max
value: 43.94171571306612
- type: nauc_recall_at_3_std
value: -20.78595255410157
- type: nauc_recall_at_5_diff1
value: 66.62157622497897
- type: nauc_recall_at_5_max
value: 46.693981736038246
- type: nauc_recall_at_5_std
value: -17.412423571162954
- type: ndcg_at_1
value: 67.33
- type: ndcg_at_10
value: 79.209
- type: ndcg_at_100
value: 80.463
- type: ndcg_at_1000
value: 80.74799999999999
- type: ndcg_at_20
value: 79.81899999999999
- type: ndcg_at_3
value: 76.335
- type: ndcg_at_5
value: 78.011
- type: precision_at_1
value: 67.33
- type: precision_at_10
value: 9.020999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 4.63
- type: precision_at_3
value: 27.493000000000002
- type: precision_at_5
value: 17.308
- type: recall_at_1
value: 67.33
- type: recall_at_10
value: 90.21000000000001
- type: recall_at_100
value: 96.00999999999999
- type: recall_at_1000
value: 98.29
- type: recall_at_20
value: 92.60000000000001
- type: recall_at_3
value: 82.48
- type: recall_at_5
value: 86.53999999999999
- task:
type: Reranking
dataset:
name: MTEB RuBQReranking (default)
type: ai-forever/rubq-reranking
config: default
split: test
revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2
metrics:
- type: main_score
value: 65.57453932493252
- type: map
value: 65.57453932493252
- type: mrr
value: 70.51408205663526
- type: nAUC_map_diff1
value: 26.69583260609023
- type: nAUC_map_max
value: 12.928262749610663
- type: nAUC_map_std
value: 11.702468857903128
- type: nAUC_mrr_diff1
value: 28.5206955462174
- type: nAUC_mrr_max
value: 14.207162454694227
- type: nAUC_mrr_std
value: 10.725721001555296
- task:
type: Retrieval
dataset:
name: MTEB RuBQRetrieval (default)
type: ai-forever/rubq-retrieval
config: default
split: test
revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b
metrics:
- type: main_score
value: 72.306
- type: map_at_1
value: 44.187
- type: map_at_10
value: 64.836
- type: map_at_100
value: 65.771
- type: map_at_1000
value: 65.8
- type: map_at_20
value: 65.497
- type: map_at_3
value: 59.692
- type: map_at_5
value: 63.105
- type: mrr_at_1
value: 62.23404255319149
- type: mrr_at_10
value: 73.40810161732159
- type: mrr_at_100
value: 73.67949305473395
- type: mrr_at_1000
value: 73.68707852294746
- type: mrr_at_20
value: 73.60429051697479
- type: mrr_at_3
value: 71.47360126083535
- type: mrr_at_5
value: 72.8447596532704
- type: nauc_map_at_1000_diff1
value: 39.838449035736886
- type: nauc_map_at_1000_max
value: 32.29962306877408
- type: nauc_map_at_1000_std
value: -6.324859592714388
- type: nauc_map_at_100_diff1
value: 39.824361938745426
- type: nauc_map_at_100_max
value: 32.32055222704763
- type: nauc_map_at_100_std
value: -6.301641111869559
- type: nauc_map_at_10_diff1
value: 39.50155328718487
- type: nauc_map_at_10_max
value: 31.745730244960672
- type: nauc_map_at_10_std
value: -6.867215137329693
- type: nauc_map_at_1_diff1
value: 47.66181128677822
- type: nauc_map_at_1_max
value: 21.75204233166764
- type: nauc_map_at_1_std
value: -8.06951079061697
- type: nauc_map_at_20_diff1
value: 39.78364637902108
- type: nauc_map_at_20_max
value: 32.39065528029405
- type: nauc_map_at_20_std
value: -6.368994332729006
- type: nauc_map_at_3_diff1
value: 39.51829474433183
- type: nauc_map_at_3_max
value: 28.633292697821673
- type: nauc_map_at_3_std
value: -7.2561170814963925
- type: nauc_map_at_5_diff1
value: 39.288433237676266
- type: nauc_map_at_5_max
value: 31.007702201615515
- type: nauc_map_at_5_std
value: -7.235131195162474
- type: nauc_mrr_at_1000_diff1
value: 49.599102391215226
- type: nauc_mrr_at_1000_max
value: 38.25521825911133
- type: nauc_mrr_at_1000_std
value: -10.448180939809435
- type: nauc_mrr_at_100_diff1
value: 49.5957067716212
- type: nauc_mrr_at_100_max
value: 38.26760703964535
- type: nauc_mrr_at_100_std
value: -10.438443051971081
- type: nauc_mrr_at_10_diff1
value: 49.35269710190271
- type: nauc_mrr_at_10_max
value: 38.43782589127069
- type: nauc_mrr_at_10_std
value: -10.404402063509815
- type: nauc_mrr_at_1_diff1
value: 53.32206103688421
- type: nauc_mrr_at_1_max
value: 33.52402390241035
- type: nauc_mrr_at_1_std
value: -12.73473393949936
- type: nauc_mrr_at_20_diff1
value: 49.550630850826636
- type: nauc_mrr_at_20_max
value: 38.35964703941151
- type: nauc_mrr_at_20_std
value: -10.444577766284766
- type: nauc_mrr_at_3_diff1
value: 49.12029127633829
- type: nauc_mrr_at_3_max
value: 38.01631275124067
- type: nauc_mrr_at_3_std
value: -10.523724301481309
- type: nauc_mrr_at_5_diff1
value: 49.04606949432458
- type: nauc_mrr_at_5_max
value: 38.33647550077891
- type: nauc_mrr_at_5_std
value: -10.47076409263114
- type: nauc_ndcg_at_1000_diff1
value: 41.342785916264226
- type: nauc_ndcg_at_1000_max
value: 35.75731064862711
- type: nauc_ndcg_at_1000_std
value: -5.45573422899229
- type: nauc_ndcg_at_100_diff1
value: 40.972974559636086
- type: nauc_ndcg_at_100_max
value: 36.32938573321036
- type: nauc_ndcg_at_100_std
value: -4.749631537590004
- type: nauc_ndcg_at_10_diff1
value: 39.67813474464166
- type: nauc_ndcg_at_10_max
value: 35.480200504848966
- type: nauc_ndcg_at_10_std
value: -6.318561293935512
- type: nauc_ndcg_at_1_diff1
value: 53.45970160222764
- type: nauc_ndcg_at_1_max
value: 33.14759013278075
- type: nauc_ndcg_at_1_std
value: -12.579833891774847
- type: nauc_ndcg_at_20_diff1
value: 40.67492861219249
- type: nauc_ndcg_at_20_max
value: 36.84960799838019
- type: nauc_ndcg_at_20_std
value: -5.202530835850179
- type: nauc_ndcg_at_3_diff1
value: 39.574906207408844
- type: nauc_ndcg_at_3_max
value: 31.76512164509258
- type: nauc_ndcg_at_3_std
value: -7.656143208565999
- type: nauc_ndcg_at_5_diff1
value: 39.096348529742095
- type: nauc_ndcg_at_5_max
value: 34.075926475544165
- type: nauc_ndcg_at_5_std
value: -7.238045445366631
- type: nauc_precision_at_1000_diff1
value: -14.283799754212609
- type: nauc_precision_at_1000_max
value: 6.449741756717101
- type: nauc_precision_at_1000_std
value: 4.862828679759048
- type: nauc_precision_at_100_diff1
value: -13.23173132700258
- type: nauc_precision_at_100_max
value: 11.058898534529195
- type: nauc_precision_at_100_std
value: 7.343683941814956
- type: nauc_precision_at_10_diff1
value: -7.202951643546464
- type: nauc_precision_at_10_max
value: 17.499446869433278
- type: nauc_precision_at_10_std
value: 2.8367985220406307
- type: nauc_precision_at_1_diff1
value: 53.45970160222764
- type: nauc_precision_at_1_max
value: 33.14759013278075
- type: nauc_precision_at_1_std
value: -12.579833891774847
- type: nauc_precision_at_20_diff1
value: -9.477122699154124
- type: nauc_precision_at_20_max
value: 16.80556031564312
- type: nauc_precision_at_20_std
value: 6.420218284416923
- type: nauc_precision_at_3_diff1
value: 5.5276143574150245
- type: nauc_precision_at_3_max
value: 23.65952688481666
- type: nauc_precision_at_3_std
value: -1.8730348729295785
- type: nauc_precision_at_5_diff1
value: -2.4537029093721308
- type: nauc_precision_at_5_max
value: 21.41469327545133
- type: nauc_precision_at_5_std
value: 0.1543890645722277
- type: nauc_recall_at_1000_diff1
value: -1.7474947956413491
- type: nauc_recall_at_1000_max
value: 46.22670991970479
- type: nauc_recall_at_1000_std
value: 62.582840705588794
- type: nauc_recall_at_100_diff1
value: 16.116089801097345
- type: nauc_recall_at_100_max
value: 52.54794580975103
- type: nauc_recall_at_100_std
value: 33.720245696003246
- type: nauc_recall_at_10_diff1
value: 23.134924318655482
- type: nauc_recall_at_10_max
value: 38.73754275649077
- type: nauc_recall_at_10_std
value: 0.6137471711639239
- type: nauc_recall_at_1_diff1
value: 47.66181128677822
- type: nauc_recall_at_1_max
value: 21.75204233166764
- type: nauc_recall_at_1_std
value: -8.06951079061697
- type: nauc_recall_at_20_diff1
value: 24.130616271355017
- type: nauc_recall_at_20_max
value: 48.306178640146136
- type: nauc_recall_at_20_std
value: 9.290819557000022
- type: nauc_recall_at_3_diff1
value: 29.767415016250226
- type: nauc_recall_at_3_max
value: 28.54289782140701
- type: nauc_recall_at_3_std
value: -5.1395675072005576
- type: nauc_recall_at_5_diff1
value: 25.410613126870174
- type: nauc_recall_at_5_max
value: 33.24658754857624
- type: nauc_recall_at_5_std
value: -4.211226036746632
- type: ndcg_at_1
value: 62.175000000000004
- type: ndcg_at_10
value: 72.306
- type: ndcg_at_100
value: 75.074
- type: ndcg_at_1000
value: 75.581
- type: ndcg_at_20
value: 73.875
- type: ndcg_at_3
value: 65.641
- type: ndcg_at_5
value: 69.48299999999999
- type: precision_at_1
value: 62.175000000000004
- type: precision_at_10
value: 13.907
- type: precision_at_100
value: 1.591
- type: precision_at_1000
value: 0.166
- type: precision_at_20
value: 7.446999999999999
- type: precision_at_3
value: 35.619
- type: precision_at_5
value: 24.917
- type: recall_at_1
value: 44.187
- type: recall_at_10
value: 85.10600000000001
- type: recall_at_100
value: 95.488
- type: recall_at_1000
value: 98.831
- type: recall_at_20
value: 90.22200000000001
- type: recall_at_3
value: 68.789
- type: recall_at_5
value: 77.85499999999999
- task:
type: Classification
dataset:
name: MTEB RuReviewsClassification (default)
type: ai-forever/ru-reviews-classification
config: default
split: test
revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a
metrics:
- type: accuracy
value: 67.5830078125
- type: f1
value: 67.56931936632446
- type: f1_weighted
value: 67.57137733752779
- type: main_score
value: 67.5830078125
- task:
type: STS
dataset:
name: MTEB RuSTSBenchmarkSTS (default)
type: ai-forever/ru-stsbenchmark-sts
config: default
split: test
revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018
metrics:
- type: cosine_pearson
value: 85.90493484626788
- type: cosine_spearman
value: 86.21965691667411
- type: euclidean_pearson
value: 86.07499842984909
- type: euclidean_spearman
value: 86.55506818735688
- type: main_score
value: 86.21965691667411
- type: manhattan_pearson
value: 85.95976420231729
- type: manhattan_spearman
value: 86.48604243661234
- type: pearson
value: 85.90493484626788
- type: spearman
value: 86.21965691667411
- task:
type: Classification
dataset:
name: MTEB RuSciBenchGRNTIClassification (default)
type: ai-forever/ru-scibench-grnti-classification
config: default
split: test
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
metrics:
- type: accuracy
value: 59.1943359375
- type: f1
value: 58.894480861440414
- type: f1_weighted
value: 58.903615560240866
- type: main_score
value: 59.1943359375
- task:
type: Clustering
dataset:
name: MTEB RuSciBenchGRNTIClusteringP2P (default)
type: ai-forever/ru-scibench-grnti-classification
config: default
split: test
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
metrics:
- type: main_score
value: 57.99209448663228
- type: v_measure
value: 57.99209448663228
- type: v_measure_std
value: 1.0381163861993816
- task:
type: Classification
dataset:
name: MTEB RuSciBenchOECDClassification (default)
type: ai-forever/ru-scibench-oecd-classification
config: default
split: test
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
metrics:
- type: accuracy
value: 45.556640625
- type: f1
value: 45.159163104085906
- type: f1_weighted
value: 45.16098316398626
- type: main_score
value: 45.556640625
- task:
type: Clustering
dataset:
name: MTEB RuSciBenchOECDClusteringP2P (default)
type: ai-forever/ru-scibench-oecd-classification
config: default
split: test
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
metrics:
- type: main_score
value: 50.787548070488974
- type: v_measure
value: 50.787548070488974
- type: v_measure_std
value: 0.8569958168946827
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 4.843
- type: map_at_10
value: 11.752
- type: map_at_100
value: 13.919
- type: map_at_1000
value: 14.198
- type: map_at_20
value: 12.898000000000001
- type: map_at_3
value: 8.603
- type: map_at_5
value: 10.069
- type: mrr_at_1
value: 23.799999999999997
- type: mrr_at_10
value: 34.449999999999996
- type: mrr_at_100
value: 35.64
- type: mrr_at_1000
value: 35.691
- type: mrr_at_20
value: 35.213
- type: mrr_at_3
value: 31.383
- type: mrr_at_5
value: 33.062999999999995
- type: ndcg_at_1
value: 23.799999999999997
- type: ndcg_at_10
value: 19.811
- type: ndcg_at_100
value: 28.108
- type: ndcg_at_1000
value: 33.1
- type: ndcg_at_20
value: 22.980999999999998
- type: ndcg_at_3
value: 19.153000000000002
- type: ndcg_at_5
value: 16.408
- type: precision_at_1
value: 23.799999999999997
- type: precision_at_10
value: 10.16
- type: precision_at_100
value: 2.1999999999999997
- type: precision_at_1000
value: 0.34099999999999997
- type: precision_at_20
value: 6.915
- type: precision_at_3
value: 17.8
- type: precision_at_5
value: 14.14
- type: recall_at_1
value: 4.843
- type: recall_at_10
value: 20.595
- type: recall_at_100
value: 44.66
- type: recall_at_1000
value: 69.152
- type: recall_at_20
value: 28.04
- type: recall_at_3
value: 10.833
- type: recall_at_5
value: 14.346999999999998
- type: main_score
value: 19.811
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL (default)
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
metrics:
- type: cosine_accuracy
value: 80.90093762739502
- type: cosine_accuracy_threshold
value: 94.40930485725403
- type: cosine_ap
value: 71.15400909912427
- type: cosine_f1
value: 66.8213457076566
- type: cosine_f1_threshold
value: 91.53673648834229
- type: cosine_precision
value: 62.4922504649721
- type: cosine_recall
value: 71.7948717948718
- type: dot_accuracy
value: 78.41418671015083
- type: dot_accuracy_threshold
value: 42924.45068359375
- type: dot_ap
value: 63.34003025365763
- type: dot_f1
value: 62.518258837277244
- type: dot_f1_threshold
value: 40900.738525390625
- type: dot_precision
value: 52.99653293709758
- type: dot_recall
value: 76.21082621082621
- type: euclidean_accuracy
value: 80.67672238075826
- type: euclidean_accuracy_threshold
value: 696.0524559020996
- type: euclidean_ap
value: 70.88762835990224
- type: euclidean_f1
value: 66.711051930759
- type: euclidean_f1_threshold
value: 878.5581588745117
- type: euclidean_precision
value: 62.625
- type: euclidean_recall
value: 71.36752136752136
- type: main_score
value: 71.15400909912427
- type: manhattan_accuracy
value: 80.65633917651854
- type: manhattan_accuracy_threshold
value: 17277.72674560547
- type: manhattan_ap
value: 70.67105336611716
- type: manhattan_f1
value: 66.51346027577151
- type: manhattan_f1_threshold
value: 21687.957763671875
- type: manhattan_precision
value: 61.69305724725944
- type: manhattan_recall
value: 72.15099715099716
- type: max_accuracy
value: 80.90093762739502
- type: max_ap
value: 71.15400909912427
- type: max_f1
value: 66.8213457076566
- type: max_precision
value: 62.625
- type: max_recall
value: 76.21082621082621
- type: similarity_accuracy
value: 80.90093762739502
- type: similarity_accuracy_threshold
value: 94.40930485725403
- type: similarity_ap
value: 71.15400909912427
- type: similarity_f1
value: 66.8213457076566
- type: similarity_f1_threshold
value: 91.53673648834229
- type: similarity_precision
value: 62.4922504649721
- type: similarity_recall
value: 71.7948717948718
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 92.3339946866199
- type: cosine_spearman
value: 89.61697355115497
- type: euclidean_pearson
value: 90.3264916449669
- type: euclidean_spearman
value: 89.36270451308866
- type: main_score
value: 89.61697355115497
- type: manhattan_pearson
value: 90.18909339052534
- type: manhattan_spearman
value: 89.28337093097377
- type: pearson
value: 92.3339946866199
- type: spearman
value: 89.61697355115497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL (default)
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
metrics:
- type: cosine_pearson
value: 85.27883048457821
- type: cosine_spearman
value: 80.53204892678619
- type: euclidean_pearson
value: 82.78520705216168
- type: euclidean_spearman
value: 80.27848359873212
- type: main_score
value: 80.53204892678619
- type: manhattan_pearson
value: 82.63270640583454
- type: manhattan_spearman
value: 80.21507977473146
- type: pearson
value: 85.27883048457821
- type: spearman
value: 80.53204892678619
- task:
type: STS
dataset:
name: MTEB SICKFr (default)
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cosine_pearson
value: 88.77029361817212
- type: cosine_spearman
value: 83.9453600346894
- type: euclidean_pearson
value: 85.85331086208573
- type: euclidean_spearman
value: 83.70852031985308
- type: main_score
value: 83.9453600346894
- type: manhattan_pearson
value: 85.66222265885914
- type: manhattan_spearman
value: 83.60833111525962
- type: pearson
value: 88.77029361817212
- type: spearman
value: 83.9453600346894
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 88.76435859522375
- type: cosine_spearman
value: 82.43768167804375
- type: euclidean_pearson
value: 87.43566183874832
- type: euclidean_spearman
value: 82.82166873757507
- type: main_score
value: 82.43768167804375
- type: manhattan_pearson
value: 87.39450871380951
- type: manhattan_spearman
value: 82.89253043430163
- type: pearson
value: 88.76435859522375
- type: spearman
value: 82.43768167804375
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 88.86627241652141
- type: cosine_spearman
value: 89.49011599120688
- type: euclidean_pearson
value: 89.3314120073772
- type: euclidean_spearman
value: 89.8226502776963
- type: main_score
value: 89.49011599120688
- type: manhattan_pearson
value: 89.2252179076963
- type: manhattan_spearman
value: 89.74573844021225
- type: pearson
value: 88.86627241652141
- type: spearman
value: 89.49011599120688
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 87.22891405215968
- type: cosine_spearman
value: 84.9467188157614
- type: euclidean_pearson
value: 87.20330004726237
- type: euclidean_spearman
value: 85.34806059461808
- type: main_score
value: 84.9467188157614
- type: manhattan_pearson
value: 87.15224666107623
- type: manhattan_spearman
value: 85.34596898699708
- type: pearson
value: 87.22891405215968
- type: spearman
value: 84.9467188157614
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 88.14066430111033
- type: cosine_spearman
value: 89.31337445552545
- type: euclidean_pearson
value: 89.08039335366983
- type: euclidean_spearman
value: 89.6658762856415
- type: main_score
value: 89.31337445552545
- type: manhattan_pearson
value: 89.08057438154486
- type: manhattan_spearman
value: 89.68673984203022
- type: pearson
value: 88.14066430111033
- type: spearman
value: 89.31337445552545
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 85.14908856657084
- type: cosine_spearman
value: 86.84648320786727
- type: euclidean_pearson
value: 86.11454713131947
- type: euclidean_spearman
value: 86.77738862047961
- type: main_score
value: 86.84648320786727
- type: manhattan_pearson
value: 86.07804821916372
- type: manhattan_spearman
value: 86.78676064310474
- type: pearson
value: 85.14908856657084
- type: spearman
value: 86.84648320786727
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 89.61633502468356
- type: cosine_spearman
value: 89.99772663224805
- type: euclidean_pearson
value: 90.14056501501044
- type: euclidean_spearman
value: 90.04496896837503
- type: main_score
value: 89.99772663224805
- type: manhattan_pearson
value: 90.08964860311801
- type: manhattan_spearman
value: 90.00091712362196
- type: pearson
value: 89.61633502468356
- type: spearman
value: 89.99772663224805
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 86.44548026840202
- type: cosine_spearman
value: 87.26263108768539
- type: euclidean_pearson
value: 86.42844593583838
- type: euclidean_spearman
value: 86.89388428664364
- type: main_score
value: 87.26263108768539
- type: manhattan_pearson
value: 86.47186940800881
- type: manhattan_spearman
value: 87.02163091089946
- type: pearson
value: 86.44548026840202
- type: spearman
value: 87.26263108768539
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 87.89345132532758
- type: cosine_spearman
value: 87.96246221327699
- type: euclidean_pearson
value: 88.49013032701419
- type: euclidean_spearman
value: 87.81981265317344
- type: main_score
value: 87.96246221327699
- type: manhattan_pearson
value: 88.31360914178538
- type: manhattan_spearman
value: 87.62734530005075
- type: pearson
value: 87.89345132532758
- type: spearman
value: 87.96246221327699
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 88.4084678497171
- type: cosine_spearman
value: 88.77640638748285
- type: euclidean_pearson
value: 89.60124312475843
- type: euclidean_spearman
value: 88.4321442688528
- type: main_score
value: 88.77640638748285
- type: manhattan_pearson
value: 89.62375118021299
- type: manhattan_spearman
value: 88.46998118661577
- type: pearson
value: 88.4084678497171
- type: spearman
value: 88.77640638748285
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 87.30688801326498
- type: cosine_spearman
value: 87.55684697258378
- type: euclidean_pearson
value: 87.89672951056794
- type: euclidean_spearman
value: 87.28050429201674
- type: main_score
value: 87.55684697258378
- type: manhattan_pearson
value: 87.74292745320572
- type: manhattan_spearman
value: 87.16383993876582
- type: pearson
value: 87.30688801326498
- type: spearman
value: 87.55684697258378
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 73.46180375170147
- type: cosine_spearman
value: 73.39559590127081
- type: euclidean_pearson
value: 73.72613901293681
- type: euclidean_spearman
value: 71.85465165176795
- type: main_score
value: 73.39559590127081
- type: manhattan_pearson
value: 73.07859140869076
- type: manhattan_spearman
value: 71.22047343718893
- type: pearson
value: 73.46180375170147
- type: spearman
value: 73.39559590127081
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 62.47531620842637
- type: cosine_spearman
value: 66.22504667157702
- type: euclidean_pearson
value: 66.76201254783692
- type: euclidean_spearman
value: 66.86115760269463
- type: main_score
value: 66.22504667157702
- type: manhattan_pearson
value: 66.73847836793489
- type: manhattan_spearman
value: 66.7677116377695
- type: pearson
value: 62.47531620842637
- type: spearman
value: 66.22504667157702
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 69.89707002436481
- type: cosine_spearman
value: 72.2054865735116
- type: euclidean_pearson
value: 71.81856615570756
- type: euclidean_spearman
value: 72.72593304629407
- type: main_score
value: 72.2054865735116
- type: manhattan_pearson
value: 72.00362684700072
- type: manhattan_spearman
value: 72.62783534769964
- type: pearson
value: 69.89707002436481
- type: spearman
value: 72.2054865735116
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 81.59623734395916
- type: cosine_spearman
value: 83.28946105111358
- type: euclidean_pearson
value: 79.377330171466
- type: euclidean_spearman
value: 81.81029781662205
- type: main_score
value: 83.28946105111358
- type: manhattan_pearson
value: 78.96970881689698
- type: manhattan_spearman
value: 81.91773236079703
- type: pearson
value: 81.59623734395916
- type: spearman
value: 83.28946105111358
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 55.03825643126142
- type: cosine_spearman
value: 58.25792501780429
- type: euclidean_pearson
value: 50.38007603973409
- type: euclidean_spearman
value: 59.39961789383097
- type: main_score
value: 58.25792501780429
- type: manhattan_pearson
value: 50.518568927999155
- type: manhattan_spearman
value: 59.84185466003894
- type: pearson
value: 55.03825643126142
- type: spearman
value: 58.25792501780429
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 77.77233721490776
- type: cosine_spearman
value: 76.17596588017625
- type: euclidean_pearson
value: 74.47600468156611
- type: euclidean_spearman
value: 72.61278728057012
- type: main_score
value: 76.17596588017625
- type: manhattan_pearson
value: 74.48118910099699
- type: manhattan_spearman
value: 73.33167419101696
- type: pearson
value: 77.77233721490776
- type: spearman
value: 76.17596588017625
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 42.87453608131507
- type: cosine_spearman
value: 45.137849894401185
- type: euclidean_pearson
value: 31.66964197694796
- type: euclidean_spearman
value: 44.1014900837869
- type: main_score
value: 45.137849894401185
- type: manhattan_pearson
value: 31.007199259384745
- type: manhattan_spearman
value: 43.48181523288926
- type: pearson
value: 42.87453608131507
- type: spearman
value: 45.137849894401185
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 66.87400150638176
- type: cosine_spearman
value: 67.27861354834066
- type: euclidean_pearson
value: 66.81789582140216
- type: euclidean_spearman
value: 66.44220479858708
- type: main_score
value: 67.27861354834066
- type: manhattan_pearson
value: 66.92509859033235
- type: manhattan_spearman
value: 66.46841124185076
- type: pearson
value: 66.87400150638176
- type: spearman
value: 67.27861354834066
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 61.819804551576084
- type: cosine_spearman
value: 65.0864146772135
- type: euclidean_pearson
value: 62.518151090361876
- type: euclidean_spearman
value: 65.13608138548017
- type: main_score
value: 65.0864146772135
- type: manhattan_pearson
value: 62.51413246915267
- type: manhattan_spearman
value: 65.19077543064323
- type: pearson
value: 61.819804551576084
- type: spearman
value: 65.0864146772135
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 54.85728696035389
- type: cosine_spearman
value: 61.60906359227576
- type: euclidean_pearson
value: 52.57582587901851
- type: euclidean_spearman
value: 61.41823097598308
- type: main_score
value: 61.60906359227576
- type: manhattan_pearson
value: 52.500978361080506
- type: manhattan_spearman
value: 61.30365596659758
- type: pearson
value: 54.85728696035389
- type: spearman
value: 61.60906359227576
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 67.68016005631422
- type: cosine_spearman
value: 84.51542547285167
- type: euclidean_pearson
value: 66.19871164667245
- type: euclidean_spearman
value: 73.24670207647144
- type: main_score
value: 84.51542547285167
- type: manhattan_pearson
value: 67.0443525268974
- type: manhattan_spearman
value: 73.24670207647144
- type: pearson
value: 67.68016005631422
- type: spearman
value: 84.51542547285167
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 47.49467414030747
- type: cosine_spearman
value: 56.81512095681289
- type: euclidean_pearson
value: 48.42860221765214
- type: euclidean_spearman
value: 58.63197306329092
- type: main_score
value: 56.81512095681289
- type: manhattan_pearson
value: 48.39594959260441
- type: manhattan_spearman
value: 58.63197306329092
- type: pearson
value: 47.49467414030747
- type: spearman
value: 56.81512095681289
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 76.8364678896155
- type: cosine_spearman
value: 78.45516413087114
- type: euclidean_pearson
value: 78.62779318576634
- type: euclidean_spearman
value: 78.88760695649488
- type: main_score
value: 78.45516413087114
- type: manhattan_pearson
value: 78.62131335760031
- type: manhattan_spearman
value: 78.81861844200388
- type: pearson
value: 76.8364678896155
- type: spearman
value: 78.45516413087114
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 65.16640313911604
- type: cosine_spearman
value: 60.887608967403914
- type: euclidean_pearson
value: 67.49902244990913
- type: euclidean_spearman
value: 59.2458787136538
- type: main_score
value: 60.887608967403914
- type: manhattan_pearson
value: 67.34313506388378
- type: manhattan_spearman
value: 59.05283429200166
- type: pearson
value: 65.16640313911604
- type: spearman
value: 60.887608967403914
- task:
type: STS
dataset:
name: MTEB STSB (default)
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cosine_pearson
value: 81.5092853013241
- type: cosine_spearman
value: 83.54005474244292
- type: euclidean_pearson
value: 83.7246578378554
- type: euclidean_spearman
value: 84.46767551087716
- type: main_score
value: 83.54005474244292
- type: manhattan_pearson
value: 83.65922665594636
- type: manhattan_spearman
value: 84.42431449101848
- type: pearson
value: 81.5092853013241
- type: spearman
value: 83.54005474244292
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 87.70246866744966
- type: cosine_spearman
value: 89.44070045346106
- type: euclidean_pearson
value: 89.56956519641007
- type: euclidean_spearman
value: 89.95830112784283
- type: main_score
value: 89.44070045346106
- type: manhattan_pearson
value: 89.48264471425145
- type: manhattan_spearman
value: 89.87900732483114
- type: pearson
value: 87.70246866744966
- type: spearman
value: 89.44070045346106
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (de)
type: mteb/stsb_multi_mt
config: de
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cosine_pearson
value: 86.83701990805217
- type: cosine_spearman
value: 87.80280785492258
- type: euclidean_pearson
value: 87.77325330043514
- type: euclidean_spearman
value: 88.3564607283144
- type: main_score
value: 87.80280785492258
- type: manhattan_pearson
value: 87.6745449945946
- type: manhattan_spearman
value: 88.30660465978795
- type: pearson
value: 86.83701990805217
- type: spearman
value: 87.80280785492258
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (zh)
type: mteb/stsb_multi_mt
config: zh
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cosine_pearson
value: 84.27751020600267
- type: cosine_spearman
value: 85.63500407412486
- type: euclidean_pearson
value: 85.21829891649696
- type: euclidean_spearman
value: 85.9384575715382
- type: main_score
value: 85.63500407412486
- type: manhattan_pearson
value: 85.10797194089801
- type: manhattan_spearman
value: 85.8770162042784
- type: pearson
value: 84.27751020600267
- type: spearman
value: 85.63500407412486
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: mteb/stsb_multi_mt
config: fr
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cosine_pearson
value: 86.56833656723254
- type: cosine_spearman
value: 87.4393978501382
- type: euclidean_pearson
value: 87.45171512751267
- type: euclidean_spearman
value: 88.13106516566947
- type: main_score
value: 87.4393978501382
- type: manhattan_pearson
value: 87.33010961793333
- type: manhattan_spearman
value: 88.06707425102182
- type: pearson
value: 86.56833656723254
- type: spearman
value: 87.4393978501382
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (pl)
type: mteb/stsb_multi_mt
config: pl
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cosine_pearson
value: 85.45065540325523
- type: cosine_spearman
value: 85.47881076789359
- type: euclidean_pearson
value: 85.1999493863155
- type: euclidean_spearman
value: 85.7874947669187
- type: main_score
value: 85.47881076789359
- type: manhattan_pearson
value: 85.06075305990376
- type: manhattan_spearman
value: 85.71563015639558
- type: pearson
value: 85.45065540325523
- type: spearman
value: 85.47881076789359
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (es)
type: mteb/stsb_multi_mt
config: es
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cosine_pearson
value: 87.11952824079832
- type: cosine_spearman
value: 87.9643473573153
- type: euclidean_pearson
value: 88.11750364639971
- type: euclidean_spearman
value: 88.63695109016498
- type: main_score
value: 87.9643473573153
- type: manhattan_pearson
value: 88.00294453126699
- type: manhattan_spearman
value: 88.53750241758391
- type: pearson
value: 87.11952824079832
- type: spearman
value: 87.9643473573153
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (ru)
type: mteb/stsb_multi_mt
config: ru
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cosine_pearson
value: 85.99804354414991
- type: cosine_spearman
value: 86.30252111551002
- type: euclidean_pearson
value: 86.1880652037762
- type: euclidean_spearman
value: 86.69556223944502
- type: main_score
value: 86.30252111551002
- type: manhattan_pearson
value: 86.0736400320898
- type: manhattan_spearman
value: 86.61747927593393
- type: pearson
value: 85.99804354414991
- type: spearman
value: 86.30252111551002
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (en)
type: mteb/stsb_multi_mt
config: en
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cosine_pearson
value: 87.70246861738103
- type: cosine_spearman
value: 89.44070045346106
- type: euclidean_pearson
value: 89.56956518833663
- type: euclidean_spearman
value: 89.95830112784283
- type: main_score
value: 89.44070045346106
- type: manhattan_pearson
value: 89.48264470792915
- type: manhattan_spearman
value: 89.87900732483114
- type: pearson
value: 87.70246861738103
- type: spearman
value: 89.44070045346106
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.88064122814694
- type: mrr
value: 95.84832651009123
- type: main_score
value: 84.88064122814694
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 57.289
- type: map_at_10
value: 67.88499999999999
- type: map_at_100
value: 68.477
- type: map_at_1000
value: 68.50500000000001
- type: map_at_20
value: 68.33500000000001
- type: map_at_3
value: 65.08
- type: map_at_5
value: 67.001
- type: mrr_at_1
value: 59.667
- type: mrr_at_10
value: 68.626
- type: mrr_at_100
value: 69.082
- type: mrr_at_1000
value: 69.108
- type: mrr_at_20
value: 68.958
- type: mrr_at_3
value: 66.667
- type: mrr_at_5
value: 67.983
- type: ndcg_at_1
value: 59.667
- type: ndcg_at_10
value: 72.309
- type: ndcg_at_100
value: 74.58399999999999
- type: ndcg_at_1000
value: 75.25500000000001
- type: ndcg_at_20
value: 73.656
- type: ndcg_at_3
value: 67.791
- type: ndcg_at_5
value: 70.45
- type: precision_at_1
value: 59.667
- type: precision_at_10
value: 9.567
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.083
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 17.666999999999998
- type: recall_at_1
value: 57.289
- type: recall_at_10
value: 84.756
- type: recall_at_100
value: 94.5
- type: recall_at_1000
value: 99.667
- type: recall_at_20
value: 89.7
- type: recall_at_3
value: 73.22800000000001
- type: recall_at_5
value: 79.444
- type: main_score
value: 72.309
- task:
type: Clustering
dataset:
name: MTEB SpanishNewsClusteringP2P (default)
type: jinaai/spanish_news_clustering
config: default
split: test
revision: bf8ca8ddc5b7da4f7004720ddf99bbe0483480e6
metrics:
- type: main_score
value: 45.04477709795154
- type: v_measure
value: 45.04477709795154
- type: v_measure_std
value: 0.0
- task:
type: Retrieval
dataset:
name: MTEB SpanishPassageRetrievalS2S (default)
type: jinaai/spanish_passage_retrieval
config: default
split: test
revision: 9cddf2ce5209ade52c2115ccfa00eb22c6d3a837
metrics:
- type: main_score
value: 69.83
- type: map_at_1
value: 15.736
- type: map_at_10
value: 52.027
- type: map_at_100
value: 65.08800000000001
- type: map_at_1000
value: 65.08800000000001
- type: map_at_20
value: 60.79900000000001
- type: map_at_3
value: 32.869
- type: map_at_5
value: 41.436
- type: mrr_at_1
value: 75.44910179640718
- type: mrr_at_10
value: 84.43446440452426
- type: mrr_at_100
value: 84.48052612723271
- type: mrr_at_1000
value: 84.48052612723271
- type: mrr_at_20
value: 84.48052612723271
- type: mrr_at_3
value: 83.13373253493013
- type: mrr_at_5
value: 84.3013972055888
- type: nauc_map_at_1000_diff1
value: 50.611540149694356
- type: nauc_map_at_1000_max
value: 2.1102430434260238
- type: nauc_map_at_1000_std
value: -18.88993521335793
- type: nauc_map_at_100_diff1
value: 50.611540149694356
- type: nauc_map_at_100_max
value: 2.1102430434260238
- type: nauc_map_at_100_std
value: -18.88993521335793
- type: nauc_map_at_10_diff1
value: 59.13518981755268
- type: nauc_map_at_10_max
value: -9.810386627392807
- type: nauc_map_at_10_std
value: -38.31810152345078
- type: nauc_map_at_1_diff1
value: 74.96782567287174
- type: nauc_map_at_1_max
value: -29.648279252607875
- type: nauc_map_at_1_std
value: -54.017459339141595
- type: nauc_map_at_20_diff1
value: 55.26694458629849
- type: nauc_map_at_20_max
value: -1.9490244535020729
- type: nauc_map_at_20_std
value: -25.22211659104076
- type: nauc_map_at_3_diff1
value: 71.67607885031732
- type: nauc_map_at_3_max
value: -25.078101661694507
- type: nauc_map_at_3_std
value: -50.55408861920259
- type: nauc_map_at_5_diff1
value: 61.50111515417668
- type: nauc_map_at_5_max
value: -16.4114670513168
- type: nauc_map_at_5_std
value: -44.391416134859135
- type: nauc_mrr_at_1000_diff1
value: 74.18848063283234
- type: nauc_mrr_at_1000_max
value: 21.929205946778005
- type: nauc_mrr_at_1000_std
value: -36.27399268489433
- type: nauc_mrr_at_100_diff1
value: 74.18848063283234
- type: nauc_mrr_at_100_max
value: 21.929205946778005
- type: nauc_mrr_at_100_std
value: -36.27399268489433
- type: nauc_mrr_at_10_diff1
value: 74.27231582268745
- type: nauc_mrr_at_10_max
value: 21.481133301135337
- type: nauc_mrr_at_10_std
value: -36.72070854872902
- type: nauc_mrr_at_1_diff1
value: 76.54855950439561
- type: nauc_mrr_at_1_max
value: 26.99938321212366
- type: nauc_mrr_at_1_std
value: -33.098742603429635
- type: nauc_mrr_at_20_diff1
value: 74.18848063283234
- type: nauc_mrr_at_20_max
value: 21.929205946778005
- type: nauc_mrr_at_20_std
value: -36.27399268489433
- type: nauc_mrr_at_3_diff1
value: 72.05379526740143
- type: nauc_mrr_at_3_max
value: 18.875831185752528
- type: nauc_mrr_at_3_std
value: -37.27302006456391
- type: nauc_mrr_at_5_diff1
value: 74.25342356682029
- type: nauc_mrr_at_5_max
value: 20.756340085088738
- type: nauc_mrr_at_5_std
value: -37.99507208540703
- type: nauc_ndcg_at_1000_diff1
value: 53.259363764380275
- type: nauc_ndcg_at_1000_max
value: 12.936954959423218
- type: nauc_ndcg_at_1000_std
value: -16.953898675672153
- type: nauc_ndcg_at_100_diff1
value: 53.259363764380275
- type: nauc_ndcg_at_100_max
value: 12.936954959423218
- type: nauc_ndcg_at_100_std
value: -16.953898675672153
- type: nauc_ndcg_at_10_diff1
value: 53.70942345413554
- type: nauc_ndcg_at_10_max
value: -3.8465093347016186
- type: nauc_ndcg_at_10_std
value: -31.208127919994755
- type: nauc_ndcg_at_1_diff1
value: 75.30551289259554
- type: nauc_ndcg_at_1_max
value: 25.53292054129834
- type: nauc_ndcg_at_1_std
value: -33.285498788395145
- type: nauc_ndcg_at_20_diff1
value: 57.62409278278133
- type: nauc_ndcg_at_20_max
value: 2.8040586426056233
- type: nauc_ndcg_at_20_std
value: -26.270875776221704
- type: nauc_ndcg_at_3_diff1
value: 48.42294834754225
- type: nauc_ndcg_at_3_max
value: 16.912467881065822
- type: nauc_ndcg_at_3_std
value: -13.324841189277873
- type: nauc_ndcg_at_5_diff1
value: 47.512819802794596
- type: nauc_ndcg_at_5_max
value: 14.645518203506594
- type: nauc_ndcg_at_5_std
value: -17.641450435599275
- type: nauc_precision_at_1000_diff1
value: -34.43320975829637
- type: nauc_precision_at_1000_max
value: 29.08585622578186
- type: nauc_precision_at_1000_std
value: 46.55117940162061
- type: nauc_precision_at_100_diff1
value: -34.433209758296364
- type: nauc_precision_at_100_max
value: 29.085856225781885
- type: nauc_precision_at_100_std
value: 46.55117940162065
- type: nauc_precision_at_10_diff1
value: -21.895306304096902
- type: nauc_precision_at_10_max
value: 33.190476527593745
- type: nauc_precision_at_10_std
value: 37.64916268614298
- type: nauc_precision_at_1_diff1
value: 75.30551289259554
- type: nauc_precision_at_1_max
value: 25.53292054129834
- type: nauc_precision_at_1_std
value: -33.285498788395145
- type: nauc_precision_at_20_diff1
value: -27.63076748060466
- type: nauc_precision_at_20_max
value: 30.689810416086154
- type: nauc_precision_at_20_std
value: 46.164191636131626
- type: nauc_precision_at_3_diff1
value: 20.547345067837288
- type: nauc_precision_at_3_max
value: 26.177050942827528
- type: nauc_precision_at_3_std
value: 5.960466052973099
- type: nauc_precision_at_5_diff1
value: -8.928755534002669
- type: nauc_precision_at_5_max
value: 40.83262650073459
- type: nauc_precision_at_5_std
value: 26.158537031161494
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_10_diff1
value: 53.08654386169444
- type: nauc_recall_at_10_max
value: -23.276269379519356
- type: nauc_recall_at_10_std
value: -50.80707792706157
- type: nauc_recall_at_1_diff1
value: 74.96782567287174
- type: nauc_recall_at_1_max
value: -29.648279252607875
- type: nauc_recall_at_1_std
value: -54.017459339141595
- type: nauc_recall_at_20_diff1
value: 51.60121897059633
- type: nauc_recall_at_20_max
value: -14.241779530735387
- type: nauc_recall_at_20_std
value: -37.877451525215456
- type: nauc_recall_at_3_diff1
value: 66.99474984329694
- type: nauc_recall_at_3_max
value: -30.802787353187966
- type: nauc_recall_at_3_std
value: -53.58737792129713
- type: nauc_recall_at_5_diff1
value: 54.64214444958567
- type: nauc_recall_at_5_max
value: -23.341309362104703
- type: nauc_recall_at_5_std
value: -51.381363923145265
- type: ndcg_at_1
value: 76.048
- type: ndcg_at_10
value: 69.83
- type: ndcg_at_100
value: 82.11500000000001
- type: ndcg_at_1000
value: 82.11500000000001
- type: ndcg_at_20
value: 75.995
- type: ndcg_at_3
value: 69.587
- type: ndcg_at_5
value: 69.062
- type: precision_at_1
value: 76.048
- type: precision_at_10
value: 43.653
- type: precision_at_100
value: 7.718999999999999
- type: precision_at_1000
value: 0.772
- type: precision_at_20
value: 31.108000000000004
- type: precision_at_3
value: 63.87199999999999
- type: precision_at_5
value: 56.407
- type: recall_at_1
value: 15.736
- type: recall_at_10
value: 66.873
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 85.01100000000001
- type: recall_at_3
value: 36.441
- type: recall_at_5
value: 49.109
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.87326732673267
- type: cosine_accuracy_threshold
value: 86.0752820968628
- type: cosine_ap
value: 96.98758090713252
- type: cosine_f1
value: 93.52881698685542
- type: cosine_f1_threshold
value: 86.0752820968628
- type: cosine_precision
value: 94.58077709611452
- type: cosine_recall
value: 92.5
- type: dot_accuracy
value: 99.82574257425742
- type: dot_accuracy_threshold
value: 40484.73815917969
- type: dot_ap
value: 95.68959907254845
- type: dot_f1
value: 91.31293188548865
- type: dot_f1_threshold
value: 40336.810302734375
- type: dot_precision
value: 90.15594541910332
- type: dot_recall
value: 92.5
- type: euclidean_accuracy
value: 99.87128712871286
- type: euclidean_accuracy_threshold
value: 1162.5749588012695
- type: euclidean_ap
value: 96.92640435656577
- type: euclidean_f1
value: 93.4475806451613
- type: euclidean_f1_threshold
value: 1162.5749588012695
- type: euclidean_precision
value: 94.20731707317073
- type: euclidean_recall
value: 92.7
- type: main_score
value: 96.98758090713252
- type: manhattan_accuracy
value: 99.86930693069307
- type: manhattan_accuracy_threshold
value: 28348.71826171875
- type: manhattan_ap
value: 96.93832673967925
- type: manhattan_f1
value: 93.33333333333333
- type: manhattan_f1_threshold
value: 28348.71826171875
- type: manhattan_precision
value: 94.28571428571428
- type: manhattan_recall
value: 92.4
- type: max_accuracy
value: 99.87326732673267
- type: max_ap
value: 96.98758090713252
- type: max_f1
value: 93.52881698685542
- type: max_precision
value: 94.58077709611452
- type: max_recall
value: 92.7
- type: similarity_accuracy
value: 99.87326732673267
- type: similarity_accuracy_threshold
value: 86.0752820968628
- type: similarity_ap
value: 96.98758090713252
- type: similarity_f1
value: 93.52881698685542
- type: similarity_f1_threshold
value: 86.0752820968628
- type: similarity_precision
value: 94.58077709611452
- type: similarity_recall
value: 92.5
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 65.6560129719848
- type: v_measure
value: 65.6560129719848
- type: v_measure_std
value: 4.781229811487539
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 35.07546243853692
- type: v_measure
value: 35.07546243853692
- type: v_measure_std
value: 1.1978740356240998
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.771005199508835
- type: mrr
value: 52.65443298531534
- type: main_score
value: 51.771005199508835
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 29.48686238342228
- type: cosine_spearman
value: 29.706543509170054
- type: dot_pearson
value: 27.95853155597859
- type: dot_spearman
value: 27.604287986935162
- type: main_score
value: 29.706543509170054
- type: pearson
value: 29.48686238342228
- type: spearman
value: 29.706543509170054
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr (default)
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cosine_pearson
value: 31.551301434917868
- type: cosine_spearman
value: 30.709049789175186
- type: dot_pearson
value: 27.77050901756549
- type: dot_spearman
value: 26.715505953561795
- type: main_score
value: 30.709049789175186
- type: pearson
value: 31.551301434917868
- type: spearman
value: 30.709049789175186
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking (default)
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 73.31666666666666
- type: mrr
value: 73.31666666666666
- type: main_score
value: 73.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval (default)
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 19661ccdca4dfc2d15122d776b61685f48c68ca9
metrics:
- type: main_score
value: 83.851
- type: map_at_1
value: 68.0
- type: map_at_10
value: 79.187
- type: map_at_100
value: 79.32900000000001
- type: map_at_1000
value: 79.32900000000001
- type: map_at_20
value: 79.32900000000001
- type: map_at_3
value: 77.333
- type: map_at_5
value: 78.93299999999999
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 79.18730158730159
- type: mrr_at_100
value: 79.32945845004669
- type: mrr_at_1000
value: 79.32945845004669
- type: mrr_at_20
value: 79.32945845004669
- type: mrr_at_3
value: 77.33333333333333
- type: mrr_at_5
value: 78.93333333333332
- type: nauc_map_at_1000_diff1
value: 63.31103256935259
- type: nauc_map_at_1000_max
value: 11.073749121365623
- type: nauc_map_at_1000_std
value: 7.4973309839738
- type: nauc_map_at_100_diff1
value: 63.31103256935259
- type: nauc_map_at_100_max
value: 11.073749121365623
- type: nauc_map_at_100_std
value: 7.4973309839738
- type: nauc_map_at_10_diff1
value: 62.91585737195978
- type: nauc_map_at_10_max
value: 11.770664508983133
- type: nauc_map_at_10_std
value: 8.179883948527962
- type: nauc_map_at_1_diff1
value: 66.1236265634718
- type: nauc_map_at_1_max
value: 7.000207311173955
- type: nauc_map_at_1_std
value: 6.54412272821497
- type: nauc_map_at_20_diff1
value: 63.31103256935259
- type: nauc_map_at_20_max
value: 11.073749121365623
- type: nauc_map_at_20_std
value: 7.4973309839738
- type: nauc_map_at_3_diff1
value: 62.14039574010254
- type: nauc_map_at_3_max
value: 11.06996398110187
- type: nauc_map_at_3_std
value: 7.288759297085769
- type: nauc_map_at_5_diff1
value: 63.0401271126211
- type: nauc_map_at_5_max
value: 10.779317801858609
- type: nauc_map_at_5_std
value: 6.476660484760681
- type: nauc_mrr_at_1000_diff1
value: 63.31103256935259
- type: nauc_mrr_at_1000_max
value: 11.073749121365623
- type: nauc_mrr_at_1000_std
value: 7.4973309839738
- type: nauc_mrr_at_100_diff1
value: 63.31103256935259
- type: nauc_mrr_at_100_max
value: 11.073749121365623
- type: nauc_mrr_at_100_std
value: 7.4973309839738
- type: nauc_mrr_at_10_diff1
value: 62.91585737195978
- type: nauc_mrr_at_10_max
value: 11.770664508983133
- type: nauc_mrr_at_10_std
value: 8.179883948527962
- type: nauc_mrr_at_1_diff1
value: 66.1236265634718
- type: nauc_mrr_at_1_max
value: 7.000207311173955
- type: nauc_mrr_at_1_std
value: 6.54412272821497
- type: nauc_mrr_at_20_diff1
value: 63.31103256935259
- type: nauc_mrr_at_20_max
value: 11.073749121365623
- type: nauc_mrr_at_20_std
value: 7.4973309839738
- type: nauc_mrr_at_3_diff1
value: 62.14039574010254
- type: nauc_mrr_at_3_max
value: 11.06996398110187
- type: nauc_mrr_at_3_std
value: 7.288759297085769
- type: nauc_mrr_at_5_diff1
value: 63.0401271126211
- type: nauc_mrr_at_5_max
value: 10.779317801858609
- type: nauc_mrr_at_5_std
value: 6.476660484760681
- type: nauc_ndcg_at_1000_diff1
value: 62.9544299483241
- type: nauc_ndcg_at_1000_max
value: 11.577079766964538
- type: nauc_ndcg_at_1000_std
value: 7.703856790100716
- type: nauc_ndcg_at_100_diff1
value: 62.9544299483241
- type: nauc_ndcg_at_100_max
value: 11.577079766964538
- type: nauc_ndcg_at_100_std
value: 7.703856790100716
- type: nauc_ndcg_at_10_diff1
value: 61.29907952217381
- type: nauc_ndcg_at_10_max
value: 14.760627422715425
- type: nauc_ndcg_at_10_std
value: 10.805573898143368
- type: nauc_ndcg_at_1_diff1
value: 66.1236265634718
- type: nauc_ndcg_at_1_max
value: 7.000207311173955
- type: nauc_ndcg_at_1_std
value: 6.54412272821497
- type: nauc_ndcg_at_20_diff1
value: 62.9544299483241
- type: nauc_ndcg_at_20_max
value: 11.577079766964538
- type: nauc_ndcg_at_20_std
value: 7.703856790100716
- type: nauc_ndcg_at_3_diff1
value: 60.25643527856101
- type: nauc_ndcg_at_3_max
value: 12.236302709487546
- type: nauc_ndcg_at_3_std
value: 7.36883189112067
- type: nauc_ndcg_at_5_diff1
value: 61.65220590318238
- type: nauc_ndcg_at_5_max
value: 11.39969101913945
- type: nauc_ndcg_at_5_std
value: 5.406207922379402
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_100_diff1
value: .nan
- type: nauc_precision_at_100_max
value: .nan
- type: nauc_precision_at_100_std
value: .nan
- type: nauc_precision_at_10_diff1
value: 19.14098972922579
- type: nauc_precision_at_10_max
value: 100.0
- type: nauc_precision_at_10_std
value: 93.46405228758135
- type: nauc_precision_at_1_diff1
value: 66.1236265634718
- type: nauc_precision_at_1_max
value: 7.000207311173955
- type: nauc_precision_at_1_std
value: 6.54412272821497
- type: nauc_precision_at_20_diff1
value: 100.0
- type: nauc_precision_at_20_max
value: 100.0
- type: nauc_precision_at_20_std
value: 100.0
- type: nauc_precision_at_3_diff1
value: 50.29636629155561
- type: nauc_precision_at_3_max
value: 18.00532600292076
- type: nauc_precision_at_3_std
value: 7.649686453053768
- type: nauc_precision_at_5_diff1
value: 43.522408963585356
- type: nauc_precision_at_5_max
value: 16.923436041082983
- type: nauc_precision_at_5_std
value: -10.854341736694092
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_10_diff1
value: 19.1409897292252
- type: nauc_recall_at_10_max
value: 100.0
- type: nauc_recall_at_10_std
value: 93.46405228758134
- type: nauc_recall_at_1_diff1
value: 66.1236265634718
- type: nauc_recall_at_1_max
value: 7.000207311173955
- type: nauc_recall_at_1_std
value: 6.54412272821497
- type: nauc_recall_at_20_diff1
value: .nan
- type: nauc_recall_at_20_max
value: .nan
- type: nauc_recall_at_20_std
value: .nan
- type: nauc_recall_at_3_diff1
value: 50.29636629155569
- type: nauc_recall_at_3_max
value: 18.005326002920754
- type: nauc_recall_at_3_std
value: 7.649686453053851
- type: nauc_recall_at_5_diff1
value: 43.5224089635856
- type: nauc_recall_at_5_max
value: 16.92343604108335
- type: nauc_recall_at_5_std
value: -10.854341736694499
- type: ndcg_at_1
value: 68.0
- type: ndcg_at_10
value: 83.851
- type: ndcg_at_100
value: 84.36099999999999
- type: ndcg_at_1000
value: 84.36099999999999
- type: ndcg_at_20
value: 84.36099999999999
- type: ndcg_at_3
value: 80.333
- type: ndcg_at_5
value: 83.21600000000001
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 5.0
- type: precision_at_3
value: 29.666999999999998
- type: precision_at_5
value: 19.2
- type: recall_at_1
value: 68.0
- type: recall_at_10
value: 98.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 100.0
- type: recall_at_3
value: 89.0
- type: recall_at_5
value: 96.0
- task:
type: Reranking
dataset:
name: MTEB T2Reranking (default)
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 65.3088203970324
- type: mrr
value: 74.79505862376546
- type: main_score
value: 65.3088203970324
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval (default)
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: main_score
value: 83.163
- type: map_at_1
value: 26.875
- type: map_at_10
value: 75.454
- type: map_at_100
value: 79.036
- type: map_at_1000
value: 79.111
- type: map_at_20
value: 78.145
- type: map_at_3
value: 53.181
- type: map_at_5
value: 65.362
- type: mrr_at_1
value: 88.90057864281957
- type: mrr_at_10
value: 91.53186397301344
- type: mrr_at_100
value: 91.62809075510003
- type: mrr_at_1000
value: 91.63198173030787
- type: mrr_at_20
value: 91.59414668799909
- type: mrr_at_3
value: 91.0792565316499
- type: mrr_at_5
value: 91.35718043135199
- type: nauc_map_at_1000_diff1
value: 12.364843957982409
- type: nauc_map_at_1000_max
value: 52.07043464458799
- type: nauc_map_at_1000_std
value: 16.040095055100494
- type: nauc_map_at_100_diff1
value: 12.370621073823022
- type: nauc_map_at_100_max
value: 51.960738727635636
- type: nauc_map_at_100_std
value: 15.935832440430747
- type: nauc_map_at_10_diff1
value: 16.852819486606585
- type: nauc_map_at_10_max
value: 40.11184760756059
- type: nauc_map_at_10_std
value: 0.9306648364102376
- type: nauc_map_at_1_diff1
value: 52.87356542654683
- type: nauc_map_at_1_max
value: -22.210039746171255
- type: nauc_map_at_1_std
value: -38.11345358035342
- type: nauc_map_at_20_diff1
value: 13.045089059562837
- type: nauc_map_at_20_max
value: 49.591383082160036
- type: nauc_map_at_20_std
value: 12.54330050352008
- type: nauc_map_at_3_diff1
value: 38.08172234377615
- type: nauc_map_at_3_max
value: -6.868621684867697
- type: nauc_map_at_3_std
value: -35.4712388845996
- type: nauc_map_at_5_diff1
value: 29.665551705577474
- type: nauc_map_at_5_max
value: 10.958628576519045
- type: nauc_map_at_5_std
value: -25.113120842097057
- type: nauc_mrr_at_1000_diff1
value: 47.39372999496945
- type: nauc_mrr_at_1000_max
value: 83.11274997493808
- type: nauc_mrr_at_1000_std
value: 39.74195374546631
- type: nauc_mrr_at_100_diff1
value: 47.396678946057676
- type: nauc_mrr_at_100_max
value: 83.1192584274415
- type: nauc_mrr_at_100_std
value: 39.75840860374685
- type: nauc_mrr_at_10_diff1
value: 47.35365644138715
- type: nauc_mrr_at_10_max
value: 83.189165639531
- type: nauc_mrr_at_10_std
value: 39.83653157887758
- type: nauc_mrr_at_1_diff1
value: 47.98740362820094
- type: nauc_mrr_at_1_max
value: 80.32340034580369
- type: nauc_mrr_at_1_std
value: 34.57857131423388
- type: nauc_mrr_at_20_diff1
value: 47.399132055537194
- type: nauc_mrr_at_20_max
value: 83.16329919869686
- type: nauc_mrr_at_20_std
value: 39.84204692042734
- type: nauc_mrr_at_3_diff1
value: 47.09295580511751
- type: nauc_mrr_at_3_max
value: 82.95831045602642
- type: nauc_mrr_at_3_std
value: 38.98036804692351
- type: nauc_mrr_at_5_diff1
value: 47.20100268549764
- type: nauc_mrr_at_5_max
value: 83.16652480381642
- type: nauc_mrr_at_5_std
value: 39.55690491560902
- type: nauc_ndcg_at_1000_diff1
value: 17.201962509184547
- type: nauc_ndcg_at_1000_max
value: 63.75820559259539
- type: nauc_ndcg_at_1000_std
value: 29.28676096486067
- type: nauc_ndcg_at_100_diff1
value: 16.76847216096811
- type: nauc_ndcg_at_100_max
value: 62.646517934470744
- type: nauc_ndcg_at_100_std
value: 28.7441617667637
- type: nauc_ndcg_at_10_diff1
value: 16.559511980751886
- type: nauc_ndcg_at_10_max
value: 54.35027464277944
- type: nauc_ndcg_at_10_std
value: 16.98089333577716
- type: nauc_ndcg_at_1_diff1
value: 47.98740362820094
- type: nauc_ndcg_at_1_max
value: 80.32340034580369
- type: nauc_ndcg_at_1_std
value: 34.57857131423388
- type: nauc_ndcg_at_20_diff1
value: 16.721525245428243
- type: nauc_ndcg_at_20_max
value: 57.683661870555724
- type: nauc_ndcg_at_20_std
value: 21.736044200026853
- type: nauc_ndcg_at_3_diff1
value: 12.488009696556192
- type: nauc_ndcg_at_3_max
value: 69.2365575305502
- type: nauc_ndcg_at_3_std
value: 30.622418945055323
- type: nauc_ndcg_at_5_diff1
value: 12.364114556230609
- type: nauc_ndcg_at_5_max
value: 62.33360746285387
- type: nauc_ndcg_at_5_std
value: 24.898000803570227
- type: nauc_precision_at_1000_diff1
value: -35.14745130154524
- type: nauc_precision_at_1000_max
value: 48.811507982849065
- type: nauc_precision_at_1000_std
value: 62.43036496029399
- type: nauc_precision_at_100_diff1
value: -35.15276411320076
- type: nauc_precision_at_100_max
value: 50.87010333741109
- type: nauc_precision_at_100_std
value: 63.418221030407175
- type: nauc_precision_at_10_diff1
value: -34.84255710936113
- type: nauc_precision_at_10_max
value: 56.588401051428825
- type: nauc_precision_at_10_std
value: 57.4763370653757
- type: nauc_precision_at_1_diff1
value: 47.98740362820094
- type: nauc_precision_at_1_max
value: 80.32340034580369
- type: nauc_precision_at_1_std
value: 34.57857131423388
- type: nauc_precision_at_20_diff1
value: -35.165762365233505
- type: nauc_precision_at_20_max
value: 54.148762449660424
- type: nauc_precision_at_20_std
value: 61.569719669368716
- type: nauc_precision_at_3_diff1
value: -28.63023175340299
- type: nauc_precision_at_3_max
value: 68.69825987618499
- type: nauc_precision_at_3_std
value: 48.15479495755423
- type: nauc_precision_at_5_diff1
value: -34.13811355456687
- type: nauc_precision_at_5_max
value: 62.369363941490604
- type: nauc_precision_at_5_std
value: 52.282904411187914
- type: nauc_recall_at_1000_diff1
value: 8.686444579162663
- type: nauc_recall_at_1000_max
value: 59.58864478011338
- type: nauc_recall_at_1000_std
value: 56.692774954297455
- type: nauc_recall_at_100_diff1
value: 8.820596225758342
- type: nauc_recall_at_100_max
value: 53.15048885657892
- type: nauc_recall_at_100_std
value: 39.78931159236714
- type: nauc_recall_at_10_diff1
value: 16.022301106315027
- type: nauc_recall_at_10_max
value: 29.83242342459543
- type: nauc_recall_at_10_std
value: -4.805965555875844
- type: nauc_recall_at_1_diff1
value: 52.87356542654683
- type: nauc_recall_at_1_max
value: -22.210039746171255
- type: nauc_recall_at_1_std
value: -38.11345358035342
- type: nauc_recall_at_20_diff1
value: 10.35772828627265
- type: nauc_recall_at_20_max
value: 43.06420839754062
- type: nauc_recall_at_20_std
value: 15.040522218235692
- type: nauc_recall_at_3_diff1
value: 36.23953684770224
- type: nauc_recall_at_3_max
value: -11.709269151700374
- type: nauc_recall_at_3_std
value: -38.13943178150384
- type: nauc_recall_at_5_diff1
value: 28.644872415763384
- type: nauc_recall_at_5_max
value: 2.062151266111129
- type: nauc_recall_at_5_std
value: -30.81114034774277
- type: ndcg_at_1
value: 88.901
- type: ndcg_at_10
value: 83.163
- type: ndcg_at_100
value: 86.854
- type: ndcg_at_1000
value: 87.602
- type: ndcg_at_20
value: 84.908
- type: ndcg_at_3
value: 84.848
- type: ndcg_at_5
value: 83.372
- type: precision_at_1
value: 88.901
- type: precision_at_10
value: 41.343
- type: precision_at_100
value: 4.957000000000001
- type: precision_at_1000
value: 0.513
- type: precision_at_20
value: 22.955000000000002
- type: precision_at_3
value: 74.29599999999999
- type: precision_at_5
value: 62.251999999999995
- type: recall_at_1
value: 26.875
- type: recall_at_10
value: 81.902
- type: recall_at_100
value: 93.988
- type: recall_at_1000
value: 97.801
- type: recall_at_20
value: 87.809
- type: recall_at_3
value: 54.869
- type: recall_at_5
value: 68.728
- task:
type: PairClassification
dataset:
name: MTEB TERRa (default)
type: ai-forever/terra-pairclassification
config: default
split: dev
revision: 7b58f24536063837d644aab9a023c62199b2a612
metrics:
- type: cosine_accuracy
value: 60.586319218241044
- type: cosine_accuracy_threshold
value: 82.49806761741638
- type: cosine_ap
value: 58.73198048427448
- type: cosine_f1
value: 67.37967914438502
- type: cosine_f1_threshold
value: 77.46461033821106
- type: cosine_precision
value: 57.01357466063348
- type: cosine_recall
value: 82.35294117647058
- type: dot_accuracy
value: 60.26058631921825
- type: dot_accuracy_threshold
value: 35627.020263671875
- type: dot_ap
value: 57.418783612898224
- type: dot_f1
value: 66.51982378854623
- type: dot_f1_threshold
value: 27620.843505859375
- type: dot_precision
value: 50.16611295681063
- type: dot_recall
value: 98.69281045751634
- type: euclidean_accuracy
value: 60.26058631921825
- type: euclidean_accuracy_threshold
value: 1255.4466247558594
- type: euclidean_ap
value: 58.748656145387955
- type: euclidean_f1
value: 66.99029126213591
- type: euclidean_f1_threshold
value: 1565.1330947875977
- type: euclidean_precision
value: 53.28185328185329
- type: euclidean_recall
value: 90.19607843137256
- type: main_score
value: 58.8479126365766
- type: manhattan_accuracy
value: 59.934853420195445
- type: manhattan_accuracy_threshold
value: 29897.271728515625
- type: manhattan_ap
value: 58.8479126365766
- type: manhattan_f1
value: 66.81318681318683
- type: manhattan_f1_threshold
value: 46291.802978515625
- type: manhattan_precision
value: 50.331125827814574
- type: manhattan_recall
value: 99.34640522875817
- type: max_accuracy
value: 60.586319218241044
- type: max_ap
value: 58.8479126365766
- type: max_f1
value: 67.37967914438502
- type: max_precision
value: 57.01357466063348
- type: max_recall
value: 99.34640522875817
- type: similarity_accuracy
value: 60.586319218241044
- type: similarity_accuracy_threshold
value: 82.49806761741638
- type: similarity_ap
value: 58.73198048427448
- type: similarity_f1
value: 67.37967914438502
- type: similarity_f1_threshold
value: 77.46461033821106
- type: similarity_precision
value: 57.01357466063348
- type: similarity_recall
value: 82.35294117647058
- task:
type: Classification
dataset:
name: MTEB TNews (default)
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 45.967999999999996
- type: f1
value: 44.699306100915706
- type: f1_weighted
value: 46.03730319014832
- type: main_score
value: 45.967999999999996
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.251
- type: map_at_10
value: 1.9480000000000002
- type: map_at_100
value: 11.082
- type: map_at_1000
value: 26.700000000000003
- type: map_at_20
value: 3.3529999999999998
- type: map_at_3
value: 0.679
- type: map_at_5
value: 1.079
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 95.786
- type: mrr_at_100
value: 95.786
- type: mrr_at_1000
value: 95.786
- type: mrr_at_20
value: 95.786
- type: mrr_at_3
value: 95.0
- type: mrr_at_5
value: 95.5
- type: ndcg_at_1
value: 91.0
- type: ndcg_at_10
value: 77.71900000000001
- type: ndcg_at_100
value: 57.726
- type: ndcg_at_1000
value: 52.737
- type: ndcg_at_20
value: 72.54
- type: ndcg_at_3
value: 83.397
- type: ndcg_at_5
value: 80.806
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 81.0
- type: precision_at_100
value: 59.199999999999996
- type: precision_at_1000
value: 23.244
- type: precision_at_20
value: 75.2
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 84.8
- type: recall_at_1
value: 0.251
- type: recall_at_10
value: 2.1229999999999998
- type: recall_at_100
value: 14.496999999999998
- type: recall_at_1000
value: 50.09
- type: recall_at_20
value: 3.8309999999999995
- type: recall_at_3
value: 0.696
- type: recall_at_5
value: 1.1400000000000001
- type: main_score
value: 77.71900000000001
- task:
type: Clustering
dataset:
name: MTEB TenKGnadClusteringP2P (default)
type: slvnwhrl/tenkgnad-clustering-p2p
config: default
split: test
revision: 5c59e41555244b7e45c9a6be2d720ab4bafae558
metrics:
- type: main_score
value: 43.763609722295215
- type: v_measure
value: 43.763609722295215
- type: v_measure_std
value: 2.8751199473862457
- task:
type: Clustering
dataset:
name: MTEB TenKGnadClusteringS2S (default)
type: slvnwhrl/tenkgnad-clustering-s2s
config: default
split: test
revision: 6cddbe003f12b9b140aec477b583ac4191f01786
metrics:
- type: main_score
value: 39.762424448504355
- type: v_measure
value: 39.762424448504355
- type: v_measure_std
value: 3.30146124979502
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P (default)
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: main_score
value: 63.133819258289456
- type: v_measure
value: 63.133819258289456
- type: v_measure_std
value: 1.8854253356479695
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S (default)
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: main_score
value: 58.98195851785808
- type: v_measure
value: 58.98195851785808
- type: v_measure_std
value: 1.6237600076393737
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.3550000000000004
- type: map_at_10
value: 10.08
- type: map_at_100
value: 16.136
- type: map_at_1000
value: 17.605
- type: map_at_20
value: 12.561
- type: map_at_3
value: 5.641
- type: map_at_5
value: 7.3260000000000005
- type: mrr_at_1
value: 46.939
- type: mrr_at_10
value: 58.152
- type: mrr_at_100
value: 58.594
- type: mrr_at_1000
value: 58.601000000000006
- type: mrr_at_20
value: 58.279
- type: mrr_at_3
value: 55.102
- type: mrr_at_5
value: 56.531
- type: ndcg_at_1
value: 44.897999999999996
- type: ndcg_at_10
value: 26.298
- type: ndcg_at_100
value: 37.596000000000004
- type: ndcg_at_1000
value: 49.424
- type: ndcg_at_20
value: 27.066000000000003
- type: ndcg_at_3
value: 31.528
- type: ndcg_at_5
value: 28.219
- type: precision_at_1
value: 46.939
- type: precision_at_10
value: 22.245
- type: precision_at_100
value: 7.531000000000001
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_20
value: 17.041
- type: precision_at_3
value: 30.612000000000002
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 3.3550000000000004
- type: recall_at_10
value: 16.41
- type: recall_at_100
value: 47.272
- type: recall_at_1000
value: 83.584
- type: recall_at_20
value: 24.091
- type: recall_at_3
value: 6.8180000000000005
- type: recall_at_5
value: 9.677
- type: main_score
value: 26.298
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 91.2890625
- type: ap
value: 33.95547153875715
- type: ap_weighted
value: 33.95547153875715
- type: f1
value: 75.10768597556462
- type: f1_weighted
value: 92.00161208992606
- type: main_score
value: 91.2890625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 71.3978494623656
- type: f1
value: 71.7194818511814
- type: f1_weighted
value: 71.13860187349744
- type: main_score
value: 71.3978494623656
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 52.4921688720602
- type: v_measure
value: 52.4921688720602
- type: v_measure_std
value: 0.992768152658908
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 85.11652858079513
- type: cosine_accuracy_threshold
value: 87.90839910507202
- type: cosine_ap
value: 70.90459908851724
- type: cosine_f1
value: 65.66581227877457
- type: cosine_f1_threshold
value: 85.13308763504028
- type: cosine_precision
value: 61.094708153531684
- type: cosine_recall
value: 70.97625329815304
- type: dot_accuracy
value: 83.41181379269239
- type: dot_accuracy_threshold
value: 43110.113525390625
- type: dot_ap
value: 65.64869491143095
- type: dot_f1
value: 62.05308447460914
- type: dot_f1_threshold
value: 41412.542724609375
- type: dot_precision
value: 57.38623626989464
- type: dot_recall
value: 67.54617414248021
- type: euclidean_accuracy
value: 85.15229182809799
- type: euclidean_accuracy_threshold
value: 1043.08500289917
- type: euclidean_ap
value: 70.71204383269375
- type: euclidean_f1
value: 65.20304568527919
- type: euclidean_f1_threshold
value: 1179.2595863342285
- type: euclidean_precision
value: 62.81173594132029
- type: euclidean_recall
value: 67.78364116094987
- type: main_score
value: 70.90459908851724
- type: manhattan_accuracy
value: 85.1820945341837
- type: manhattan_accuracy_threshold
value: 26115.0390625
- type: manhattan_ap
value: 70.66113937117431
- type: manhattan_f1
value: 65.33383628819313
- type: manhattan_f1_threshold
value: 29105.181884765625
- type: manhattan_precision
value: 62.40691808791736
- type: manhattan_recall
value: 68.54881266490766
- type: max_accuracy
value: 85.1820945341837
- type: max_ap
value: 70.90459908851724
- type: max_f1
value: 65.66581227877457
- type: max_precision
value: 62.81173594132029
- type: max_recall
value: 70.97625329815304
- type: similarity_accuracy
value: 85.11652858079513
- type: similarity_accuracy_threshold
value: 87.90839910507202
- type: similarity_ap
value: 70.90459908851724
- type: similarity_f1
value: 65.66581227877457
- type: similarity_f1_threshold
value: 85.13308763504028
- type: similarity_precision
value: 61.094708153531684
- type: similarity_recall
value: 70.97625329815304
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 88.10299996119068
- type: cosine_accuracy_threshold
value: 84.34982895851135
- type: cosine_ap
value: 84.13755787769226
- type: cosine_f1
value: 76.0967548076923
- type: cosine_f1_threshold
value: 82.8936219215393
- type: cosine_precision
value: 74.28864769727193
- type: cosine_recall
value: 77.99507237449954
- type: dot_accuracy
value: 86.64182869561843
- type: dot_accuracy_threshold
value: 38794.677734375
- type: dot_ap
value: 80.20301567411457
- type: dot_f1
value: 73.50650291634967
- type: dot_f1_threshold
value: 37447.23205566406
- type: dot_precision
value: 69.41498460485802
- type: dot_recall
value: 78.11056359716662
- type: euclidean_accuracy
value: 87.9361198432103
- type: euclidean_accuracy_threshold
value: 1184.421157836914
- type: euclidean_ap
value: 83.79582690117218
- type: euclidean_f1
value: 75.81431709042175
- type: euclidean_f1_threshold
value: 1258.2727432250977
- type: euclidean_precision
value: 73.39099099099099
- type: euclidean_recall
value: 78.40314136125654
- type: main_score
value: 84.13755787769226
- type: manhattan_accuracy
value: 87.96134590755618
- type: manhattan_accuracy_threshold
value: 29077.291870117188
- type: manhattan_ap
value: 83.79487172269923
- type: manhattan_f1
value: 75.82421603424935
- type: manhattan_f1_threshold
value: 31224.124145507812
- type: manhattan_precision
value: 72.24740255212329
- type: manhattan_recall
value: 79.77363720357253
- type: max_accuracy
value: 88.10299996119068
- type: max_ap
value: 84.13755787769226
- type: max_f1
value: 76.0967548076923
- type: max_precision
value: 74.28864769727193
- type: max_recall
value: 79.77363720357253
- type: similarity_accuracy
value: 88.10299996119068
- type: similarity_accuracy_threshold
value: 84.34982895851135
- type: similarity_ap
value: 84.13755787769226
- type: similarity_f1
value: 76.0967548076923
- type: similarity_f1_threshold
value: 82.8936219215393
- type: similarity_precision
value: 74.28864769727193
- type: similarity_recall
value: 77.99507237449954
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval (default)
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: main_score
value: 70.433
- type: map_at_1
value: 55.7
- type: map_at_10
value: 66.013
- type: map_at_100
value: 66.534
- type: map_at_1000
value: 66.547
- type: map_at_20
value: 66.334
- type: map_at_3
value: 64.2
- type: map_at_5
value: 65.445
- type: mrr_at_1
value: 55.7
- type: mrr_at_10
value: 66.01329365079364
- type: mrr_at_100
value: 66.53350061744233
- type: mrr_at_1000
value: 66.54744831962995
- type: mrr_at_20
value: 66.3335147364675
- type: mrr_at_3
value: 64.2
- type: mrr_at_5
value: 65.44500000000002
- type: nauc_map_at_1000_diff1
value: 76.26428836976245
- type: nauc_map_at_1000_max
value: 35.41847367373575
- type: nauc_map_at_1000_std
value: -33.04639860831992
- type: nauc_map_at_100_diff1
value: 76.25793229023193
- type: nauc_map_at_100_max
value: 35.43663260110076
- type: nauc_map_at_100_std
value: -33.04238139882945
- type: nauc_map_at_10_diff1
value: 76.2108281297711
- type: nauc_map_at_10_max
value: 35.59442419423183
- type: nauc_map_at_10_std
value: -33.32346518997277
- type: nauc_map_at_1_diff1
value: 79.17728405262736
- type: nauc_map_at_1_max
value: 31.880738163589527
- type: nauc_map_at_1_std
value: -30.891888718004584
- type: nauc_map_at_20_diff1
value: 76.2181333410193
- type: nauc_map_at_20_max
value: 35.43448818430876
- type: nauc_map_at_20_std
value: -33.35682442863193
- type: nauc_map_at_3_diff1
value: 76.10046541433466
- type: nauc_map_at_3_max
value: 34.6831278555291
- type: nauc_map_at_3_std
value: -34.030826044831116
- type: nauc_map_at_5_diff1
value: 75.96513023582064
- type: nauc_map_at_5_max
value: 34.66920832438069
- type: nauc_map_at_5_std
value: -33.79799777830796
- type: nauc_mrr_at_1000_diff1
value: 76.26428836976245
- type: nauc_mrr_at_1000_max
value: 35.41847367373575
- type: nauc_mrr_at_1000_std
value: -33.04639860831992
- type: nauc_mrr_at_100_diff1
value: 76.25793229023193
- type: nauc_mrr_at_100_max
value: 35.43663260110076
- type: nauc_mrr_at_100_std
value: -33.04238139882945
- type: nauc_mrr_at_10_diff1
value: 76.2108281297711
- type: nauc_mrr_at_10_max
value: 35.59442419423183
- type: nauc_mrr_at_10_std
value: -33.32346518997277
- type: nauc_mrr_at_1_diff1
value: 79.17728405262736
- type: nauc_mrr_at_1_max
value: 31.880738163589527
- type: nauc_mrr_at_1_std
value: -30.891888718004584
- type: nauc_mrr_at_20_diff1
value: 76.2181333410193
- type: nauc_mrr_at_20_max
value: 35.43448818430876
- type: nauc_mrr_at_20_std
value: -33.35682442863193
- type: nauc_mrr_at_3_diff1
value: 76.10046541433466
- type: nauc_mrr_at_3_max
value: 34.6831278555291
- type: nauc_mrr_at_3_std
value: -34.030826044831116
- type: nauc_mrr_at_5_diff1
value: 75.96513023582064
- type: nauc_mrr_at_5_max
value: 34.66920832438069
- type: nauc_mrr_at_5_std
value: -33.79799777830796
- type: nauc_ndcg_at_1000_diff1
value: 75.68118206798317
- type: nauc_ndcg_at_1000_max
value: 37.12252980787349
- type: nauc_ndcg_at_1000_std
value: -31.457578337430505
- type: nauc_ndcg_at_100_diff1
value: 75.46730761564156
- type: nauc_ndcg_at_100_max
value: 37.549890025544265
- type: nauc_ndcg_at_100_std
value: -31.35066985945112
- type: nauc_ndcg_at_10_diff1
value: 75.09890404887037
- type: nauc_ndcg_at_10_max
value: 38.024147790014204
- type: nauc_ndcg_at_10_std
value: -33.67408368593356
- type: nauc_ndcg_at_1_diff1
value: 79.17728405262736
- type: nauc_ndcg_at_1_max
value: 31.880738163589527
- type: nauc_ndcg_at_1_std
value: -30.891888718004584
- type: nauc_ndcg_at_20_diff1
value: 75.12977548171354
- type: nauc_ndcg_at_20_max
value: 37.524926748917956
- type: nauc_ndcg_at_20_std
value: -33.771344674947485
- type: nauc_ndcg_at_3_diff1
value: 74.94037476984154
- type: nauc_ndcg_at_3_max
value: 35.60345554050552
- type: nauc_ndcg_at_3_std
value: -35.256991346321854
- type: nauc_ndcg_at_5_diff1
value: 74.54265907753783
- type: nauc_ndcg_at_5_max
value: 35.57662819978585
- type: nauc_ndcg_at_5_std
value: -34.879794448418465
- type: nauc_precision_at_1000_diff1
value: 74.52277207179142
- type: nauc_precision_at_1000_max
value: 94.25510945118707
- type: nauc_precision_at_1000_std
value: 91.6874157070222
- type: nauc_precision_at_100_diff1
value: 65.98346655735419
- type: nauc_precision_at_100_max
value: 78.81168727653687
- type: nauc_precision_at_100_std
value: 27.241465691967708
- type: nauc_precision_at_10_diff1
value: 69.55050319096688
- type: nauc_precision_at_10_max
value: 51.827749140893374
- type: nauc_precision_at_10_std
value: -34.60818605792837
- type: nauc_precision_at_1_diff1
value: 79.17728405262736
- type: nauc_precision_at_1_max
value: 31.880738163589527
- type: nauc_precision_at_1_std
value: -30.891888718004584
- type: nauc_precision_at_20_diff1
value: 68.08078305042736
- type: nauc_precision_at_20_max
value: 52.83318878288501
- type: nauc_precision_at_20_std
value: -35.46070292817927
- type: nauc_precision_at_3_diff1
value: 70.76249609881901
- type: nauc_precision_at_3_max
value: 38.86561868624655
- type: nauc_precision_at_3_std
value: -39.68917853446992
- type: nauc_precision_at_5_diff1
value: 68.39110629013278
- type: nauc_precision_at_5_max
value: 39.28677163904683
- type: nauc_precision_at_5_std
value: -39.39101423819562
- type: nauc_recall_at_1000_diff1
value: 74.52277207179175
- type: nauc_recall_at_1000_max
value: 94.25510945118776
- type: nauc_recall_at_1000_std
value: 91.68741570702382
- type: nauc_recall_at_100_diff1
value: 65.9834665573548
- type: nauc_recall_at_100_max
value: 78.81168727653679
- type: nauc_recall_at_100_std
value: 27.241465691967598
- type: nauc_recall_at_10_diff1
value: 69.55050319096708
- type: nauc_recall_at_10_max
value: 51.82774914089347
- type: nauc_recall_at_10_std
value: -34.6081860579283
- type: nauc_recall_at_1_diff1
value: 79.17728405262736
- type: nauc_recall_at_1_max
value: 31.880738163589527
- type: nauc_recall_at_1_std
value: -30.891888718004584
- type: nauc_recall_at_20_diff1
value: 68.08078305042746
- type: nauc_recall_at_20_max
value: 52.833188782885244
- type: nauc_recall_at_20_std
value: -35.46070292817895
- type: nauc_recall_at_3_diff1
value: 70.76249609881896
- type: nauc_recall_at_3_max
value: 38.865618686246464
- type: nauc_recall_at_3_std
value: -39.68917853446999
- type: nauc_recall_at_5_diff1
value: 68.39110629013274
- type: nauc_recall_at_5_max
value: 39.28677163904688
- type: nauc_recall_at_5_std
value: -39.39101423819562
- type: ndcg_at_1
value: 55.7
- type: ndcg_at_10
value: 70.433
- type: ndcg_at_100
value: 72.975
- type: ndcg_at_1000
value: 73.283
- type: ndcg_at_20
value: 71.58
- type: ndcg_at_3
value: 66.83099999999999
- type: ndcg_at_5
value: 69.085
- type: precision_at_1
value: 55.7
- type: precision_at_10
value: 8.4
- type: precision_at_100
value: 0.959
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 4.425
- type: precision_at_3
value: 24.8
- type: precision_at_5
value: 15.98
- type: recall_at_1
value: 55.7
- type: recall_at_10
value: 84.0
- type: recall_at_100
value: 95.89999999999999
- type: recall_at_1000
value: 98.2
- type: recall_at_20
value: 88.5
- type: recall_at_3
value: 74.4
- type: recall_at_5
value: 79.9
- task:
type: Classification
dataset:
name: MTEB Waimai (default)
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 86.58999999999999
- type: ap
value: 70.02619249927523
- type: ap_weighted
value: 70.02619249927523
- type: f1
value: 84.97572770889423
- type: f1_weighted
value: 86.6865713531272
- type: main_score
value: 86.58999999999999
- task:
type: Retrieval
dataset:
name: MTEB XMarket (en)
type: jinaai/xmarket_ml
config: en
split: test
revision: dfe57acff5b62c23732a7b7d3e3fb84ff501708b
metrics:
- type: main_score
value: 34.772999999999996
- type: map_at_1
value: 7.2620000000000005
- type: map_at_10
value: 17.98
- type: map_at_100
value: 24.828
- type: map_at_1000
value: 26.633000000000003
- type: map_at_20
value: 20.699
- type: map_at_3
value: 12.383
- type: map_at_5
value: 14.871
- type: mrr_at_1
value: 34.718100890207715
- type: mrr_at_10
value: 43.9336827525092
- type: mrr_at_100
value: 44.66474011066837
- type: mrr_at_1000
value: 44.7075592197356
- type: mrr_at_20
value: 44.35984436569346
- type: mrr_at_3
value: 41.73901893981052
- type: mrr_at_5
value: 43.025973550207134
- type: nauc_map_at_1000_diff1
value: 13.899869081196364
- type: nauc_map_at_1000_max
value: 46.60452816386231
- type: nauc_map_at_1000_std
value: 24.87925799401773
- type: nauc_map_at_100_diff1
value: 16.164805650871084
- type: nauc_map_at_100_max
value: 44.720912958558095
- type: nauc_map_at_100_std
value: 20.236734536210477
- type: nauc_map_at_10_diff1
value: 23.58580520913581
- type: nauc_map_at_10_max
value: 31.276151869914216
- type: nauc_map_at_10_std
value: -0.1833326246041355
- type: nauc_map_at_1_diff1
value: 37.02663305598722
- type: nauc_map_at_1_max
value: 14.931071531116528
- type: nauc_map_at_1_std
value: -12.478790028708453
- type: nauc_map_at_20_diff1
value: 20.718297881540593
- type: nauc_map_at_20_max
value: 36.62264094841859
- type: nauc_map_at_20_std
value: 6.658514770057742
- type: nauc_map_at_3_diff1
value: 29.379034581120006
- type: nauc_map_at_3_max
value: 21.387214269548803
- type: nauc_map_at_3_std
value: -9.3404121914247
- type: nauc_map_at_5_diff1
value: 26.627169792839485
- type: nauc_map_at_5_max
value: 25.393331109666388
- type: nauc_map_at_5_std
value: -6.023485287246353
- type: nauc_mrr_at_1000_diff1
value: 12.047232036652295
- type: nauc_mrr_at_1000_max
value: 46.611862580860645
- type: nauc_mrr_at_1000_std
value: 27.89146066442305
- type: nauc_mrr_at_100_diff1
value: 12.05261747449997
- type: nauc_mrr_at_100_max
value: 46.61328535381203
- type: nauc_mrr_at_100_std
value: 27.886145596874535
- type: nauc_mrr_at_10_diff1
value: 12.006935553036941
- type: nauc_mrr_at_10_max
value: 46.53351686240496
- type: nauc_mrr_at_10_std
value: 27.708742470257462
- type: nauc_mrr_at_1_diff1
value: 13.323408127738782
- type: nauc_mrr_at_1_max
value: 43.78884661002012
- type: nauc_mrr_at_1_std
value: 25.164417588165673
- type: nauc_mrr_at_20_diff1
value: 12.036022973968011
- type: nauc_mrr_at_20_max
value: 46.56537838037131
- type: nauc_mrr_at_20_std
value: 27.78189157249635
- type: nauc_mrr_at_3_diff1
value: 11.943896700976381
- type: nauc_mrr_at_3_max
value: 46.33644663073225
- type: nauc_mrr_at_3_std
value: 27.523915405053845
- type: nauc_mrr_at_5_diff1
value: 12.03108009033769
- type: nauc_mrr_at_5_max
value: 46.49103616896692
- type: nauc_mrr_at_5_std
value: 27.630879129863366
- type: nauc_ndcg_at_1000_diff1
value: 9.766823796017324
- type: nauc_ndcg_at_1000_max
value: 52.85844801910602
- type: nauc_ndcg_at_1000_std
value: 36.43271437761207
- type: nauc_ndcg_at_100_diff1
value: 12.035059298282036
- type: nauc_ndcg_at_100_max
value: 50.05520240705682
- type: nauc_ndcg_at_100_std
value: 29.87678724506636
- type: nauc_ndcg_at_10_diff1
value: 10.281893031139424
- type: nauc_ndcg_at_10_max
value: 47.02153679426017
- type: nauc_ndcg_at_10_std
value: 26.624948330369126
- type: nauc_ndcg_at_1_diff1
value: 13.323408127738782
- type: nauc_ndcg_at_1_max
value: 43.78884661002012
- type: nauc_ndcg_at_1_std
value: 25.164417588165673
- type: nauc_ndcg_at_20_diff1
value: 11.463524849646598
- type: nauc_ndcg_at_20_max
value: 47.415073186019704
- type: nauc_ndcg_at_20_std
value: 26.359019620164307
- type: nauc_ndcg_at_3_diff1
value: 9.689199913805394
- type: nauc_ndcg_at_3_max
value: 45.68151849572808
- type: nauc_ndcg_at_3_std
value: 26.559193219799486
- type: nauc_ndcg_at_5_diff1
value: 9.448823370356575
- type: nauc_ndcg_at_5_max
value: 46.19999662690141
- type: nauc_ndcg_at_5_std
value: 26.8411706726069
- type: nauc_precision_at_1000_diff1
value: -20.379065598727024
- type: nauc_precision_at_1000_max
value: 13.162562437268427
- type: nauc_precision_at_1000_std
value: 22.658226157785812
- type: nauc_precision_at_100_diff1
value: -16.458155977309282
- type: nauc_precision_at_100_max
value: 35.97956789169889
- type: nauc_precision_at_100_std
value: 48.878375009979194
- type: nauc_precision_at_10_diff1
value: -7.810992317607771
- type: nauc_precision_at_10_max
value: 49.307339277444754
- type: nauc_precision_at_10_std
value: 42.82533951854582
- type: nauc_precision_at_1_diff1
value: 13.323408127738782
- type: nauc_precision_at_1_max
value: 43.78884661002012
- type: nauc_precision_at_1_std
value: 25.164417588165673
- type: nauc_precision_at_20_diff1
value: -11.43933465149542
- type: nauc_precision_at_20_max
value: 46.93722753460038
- type: nauc_precision_at_20_std
value: 47.36223769029678
- type: nauc_precision_at_3_diff1
value: 1.3230178593599737
- type: nauc_precision_at_3_max
value: 48.49039534395576
- type: nauc_precision_at_3_std
value: 33.161384183129194
- type: nauc_precision_at_5_diff1
value: -3.185516457926519
- type: nauc_precision_at_5_max
value: 49.5814309394308
- type: nauc_precision_at_5_std
value: 37.57637865900281
- type: nauc_recall_at_1000_diff1
value: 7.839499443984168
- type: nauc_recall_at_1000_max
value: 52.67165467640894
- type: nauc_recall_at_1000_std
value: 48.85318316702583
- type: nauc_recall_at_100_diff1
value: 14.117557049589418
- type: nauc_recall_at_100_max
value: 40.59046301348715
- type: nauc_recall_at_100_std
value: 24.379680901739505
- type: nauc_recall_at_10_diff1
value: 20.04536052614054
- type: nauc_recall_at_10_max
value: 25.54148839721574
- type: nauc_recall_at_10_std
value: -1.938182527562211
- type: nauc_recall_at_1_diff1
value: 37.02663305598722
- type: nauc_recall_at_1_max
value: 14.931071531116528
- type: nauc_recall_at_1_std
value: -12.478790028708453
- type: nauc_recall_at_20_diff1
value: 17.959977483235566
- type: nauc_recall_at_20_max
value: 29.88502687870809
- type: nauc_recall_at_20_std
value: 4.26527395196852
- type: nauc_recall_at_3_diff1
value: 26.297810954500456
- type: nauc_recall_at_3_max
value: 18.819406079307402
- type: nauc_recall_at_3_std
value: -10.002237229729081
- type: nauc_recall_at_5_diff1
value: 22.739080899568485
- type: nauc_recall_at_5_max
value: 21.0322968243985
- type: nauc_recall_at_5_std
value: -6.927749435306422
- type: ndcg_at_1
value: 34.717999999999996
- type: ndcg_at_10
value: 34.772999999999996
- type: ndcg_at_100
value: 39.407
- type: ndcg_at_1000
value: 44.830999999999996
- type: ndcg_at_20
value: 35.667
- type: ndcg_at_3
value: 34.332
- type: ndcg_at_5
value: 34.408
- type: precision_at_1
value: 34.717999999999996
- type: precision_at_10
value: 23.430999999999997
- type: precision_at_100
value: 9.31
- type: precision_at_1000
value: 2.259
- type: precision_at_20
value: 18.826999999999998
- type: precision_at_3
value: 30.553
- type: precision_at_5
value: 27.792
- type: recall_at_1
value: 7.2620000000000005
- type: recall_at_10
value: 26.384
- type: recall_at_100
value: 52.506
- type: recall_at_1000
value: 73.38
- type: recall_at_20
value: 34.032000000000004
- type: recall_at_3
value: 14.821000000000002
- type: recall_at_5
value: 19.481
- task:
type: Retrieval
dataset:
name: MTEB XMarket (de)
type: jinaai/xmarket_ml
config: de
split: test
revision: dfe57acff5b62c23732a7b7d3e3fb84ff501708b
metrics:
- type: main_score
value: 28.316000000000003
- type: map_at_1
value: 8.667
- type: map_at_10
value: 17.351
- type: map_at_100
value: 21.02
- type: map_at_1000
value: 21.951
- type: map_at_20
value: 18.994
- type: map_at_3
value: 13.23
- type: map_at_5
value: 15.17
- type: mrr_at_1
value: 27.27272727272727
- type: mrr_at_10
value: 36.10858487561485
- type: mrr_at_100
value: 36.92033814316568
- type: mrr_at_1000
value: 36.972226653870365
- type: mrr_at_20
value: 36.58914906427944
- type: mrr_at_3
value: 33.642969201552305
- type: mrr_at_5
value: 35.13417554289494
- type: nauc_map_at_1000_diff1
value: 23.345116790998063
- type: nauc_map_at_1000_max
value: 44.447240670835725
- type: nauc_map_at_1000_std
value: 18.34636500680144
- type: nauc_map_at_100_diff1
value: 24.458120909292347
- type: nauc_map_at_100_max
value: 43.31851431140378
- type: nauc_map_at_100_std
value: 15.654778355549965
- type: nauc_map_at_10_diff1
value: 29.376508937265044
- type: nauc_map_at_10_max
value: 36.650196725140795
- type: nauc_map_at_10_std
value: 4.682465435374843
- type: nauc_map_at_1_diff1
value: 40.382365672683214
- type: nauc_map_at_1_max
value: 22.894341150096785
- type: nauc_map_at_1_std
value: -5.610725673968323
- type: nauc_map_at_20_diff1
value: 27.197033425732908
- type: nauc_map_at_20_max
value: 39.71672400647207
- type: nauc_map_at_20_std
value: 8.944436813309933
- type: nauc_map_at_3_diff1
value: 34.49739294661502
- type: nauc_map_at_3_max
value: 29.006972420735284
- type: nauc_map_at_3_std
value: -3.0372650571243986
- type: nauc_map_at_5_diff1
value: 32.764901537277105
- type: nauc_map_at_5_max
value: 32.658533295918154
- type: nauc_map_at_5_std
value: 0.029626452286996906
- type: nauc_mrr_at_1000_diff1
value: 19.521229956280603
- type: nauc_mrr_at_1000_max
value: 44.39409866211472
- type: nauc_mrr_at_1000_std
value: 23.580697307036058
- type: nauc_mrr_at_100_diff1
value: 19.51312676591073
- type: nauc_mrr_at_100_max
value: 44.39559153963895
- type: nauc_mrr_at_100_std
value: 23.57913711397437
- type: nauc_mrr_at_10_diff1
value: 19.584635617935145
- type: nauc_mrr_at_10_max
value: 44.44842226236198
- type: nauc_mrr_at_10_std
value: 23.382684909390434
- type: nauc_mrr_at_1_diff1
value: 20.92594790923806
- type: nauc_mrr_at_1_max
value: 40.593939625252816
- type: nauc_mrr_at_1_std
value: 20.37467598073644
- type: nauc_mrr_at_20_diff1
value: 19.590641822115725
- type: nauc_mrr_at_20_max
value: 44.42512299604718
- type: nauc_mrr_at_20_std
value: 23.45564260800024
- type: nauc_mrr_at_3_diff1
value: 20.005307129527232
- type: nauc_mrr_at_3_max
value: 43.68300366192776
- type: nauc_mrr_at_3_std
value: 22.297190480842005
- type: nauc_mrr_at_5_diff1
value: 19.852896386271716
- type: nauc_mrr_at_5_max
value: 44.20641808920062
- type: nauc_mrr_at_5_std
value: 22.966517330852895
- type: nauc_ndcg_at_1000_diff1
value: 17.800116251376103
- type: nauc_ndcg_at_1000_max
value: 50.98332718061365
- type: nauc_ndcg_at_1000_std
value: 31.464484658102577
- type: nauc_ndcg_at_100_diff1
value: 19.555159680541088
- type: nauc_ndcg_at_100_max
value: 48.56377130899141
- type: nauc_ndcg_at_100_std
value: 25.77572748714817
- type: nauc_ndcg_at_10_diff1
value: 20.003008726679415
- type: nauc_ndcg_at_10_max
value: 45.1293725480628
- type: nauc_ndcg_at_10_std
value: 21.149213260765872
- type: nauc_ndcg_at_1_diff1
value: 21.00986278773023
- type: nauc_ndcg_at_1_max
value: 40.524637076774894
- type: nauc_ndcg_at_1_std
value: 20.29682194006685
- type: nauc_ndcg_at_20_diff1
value: 20.659734137312284
- type: nauc_ndcg_at_20_max
value: 45.73108736599869
- type: nauc_ndcg_at_20_std
value: 21.200736170346133
- type: nauc_ndcg_at_3_diff1
value: 19.200120542882544
- type: nauc_ndcg_at_3_max
value: 42.89772612963168
- type: nauc_ndcg_at_3_std
value: 20.713292754978983
- type: nauc_ndcg_at_5_diff1
value: 19.96329647992544
- type: nauc_ndcg_at_5_max
value: 44.296627037787324
- type: nauc_ndcg_at_5_std
value: 21.200135784971973
- type: nauc_precision_at_1000_diff1
value: -11.543221249009427
- type: nauc_precision_at_1000_max
value: 9.132801614448221
- type: nauc_precision_at_1000_std
value: 21.203720655381055
- type: nauc_precision_at_100_diff1
value: -12.510945425786039
- type: nauc_precision_at_100_max
value: 31.42530963666252
- type: nauc_precision_at_100_std
value: 44.99672783467617
- type: nauc_precision_at_10_diff1
value: -4.025802651746804
- type: nauc_precision_at_10_max
value: 47.50967924227793
- type: nauc_precision_at_10_std
value: 41.1558559268985
- type: nauc_precision_at_1_diff1
value: 21.00986278773023
- type: nauc_precision_at_1_max
value: 40.524637076774894
- type: nauc_precision_at_1_std
value: 20.29682194006685
- type: nauc_precision_at_20_diff1
value: -8.059482951110002
- type: nauc_precision_at_20_max
value: 44.28832115946278
- type: nauc_precision_at_20_std
value: 45.2005585353651
- type: nauc_precision_at_3_diff1
value: 8.53530005716248
- type: nauc_precision_at_3_max
value: 46.48353678905102
- type: nauc_precision_at_3_std
value: 28.868791323881972
- type: nauc_precision_at_5_diff1
value: 3.093619954821814
- type: nauc_precision_at_5_max
value: 48.43294475817019
- type: nauc_precision_at_5_std
value: 34.83430452745434
- type: nauc_recall_at_1000_diff1
value: 9.93680206699751
- type: nauc_recall_at_1000_max
value: 52.97840222394363
- type: nauc_recall_at_1000_std
value: 46.370023604436255
- type: nauc_recall_at_100_diff1
value: 14.100542445524972
- type: nauc_recall_at_100_max
value: 42.853775131475224
- type: nauc_recall_at_100_std
value: 26.93029971231028
- type: nauc_recall_at_10_diff1
value: 22.774547475714716
- type: nauc_recall_at_10_max
value: 33.984586405015044
- type: nauc_recall_at_10_std
value: 5.332325172373655
- type: nauc_recall_at_1_diff1
value: 40.382365672683214
- type: nauc_recall_at_1_max
value: 22.894341150096785
- type: nauc_recall_at_1_std
value: -5.610725673968323
- type: nauc_recall_at_20_diff1
value: 19.751060483835936
- type: nauc_recall_at_20_max
value: 36.18774034635102
- type: nauc_recall_at_20_std
value: 10.362242090308577
- type: nauc_recall_at_3_diff1
value: 30.29462372902671
- type: nauc_recall_at_3_max
value: 27.377175450099635
- type: nauc_recall_at_3_std
value: -3.015752705993425
- type: nauc_recall_at_5_diff1
value: 28.096893312615723
- type: nauc_recall_at_5_max
value: 30.485075571512425
- type: nauc_recall_at_5_std
value: 0.09106417003502826
- type: ndcg_at_1
value: 27.248
- type: ndcg_at_10
value: 28.316000000000003
- type: ndcg_at_100
value: 33.419
- type: ndcg_at_1000
value: 38.134
- type: ndcg_at_20
value: 29.707
- type: ndcg_at_3
value: 26.93
- type: ndcg_at_5
value: 27.363
- type: precision_at_1
value: 27.248
- type: precision_at_10
value: 15.073
- type: precision_at_100
value: 5.061
- type: precision_at_1000
value: 1.325
- type: precision_at_20
value: 11.407
- type: precision_at_3
value: 21.823
- type: precision_at_5
value: 18.984
- type: recall_at_1
value: 8.667
- type: recall_at_10
value: 26.984
- type: recall_at_100
value: 49.753
- type: recall_at_1000
value: 70.354
- type: recall_at_20
value: 33.955999999999996
- type: recall_at_3
value: 16.086
- type: recall_at_5
value: 20.544999999999998
- task:
type: Retrieval
dataset:
name: MTEB XMarket (es)
type: jinaai/xmarket_ml
config: es
split: test
revision: dfe57acff5b62c23732a7b7d3e3fb84ff501708b
metrics:
- type: main_score
value: 26.592
- type: map_at_1
value: 8.081000000000001
- type: map_at_10
value: 16.486
- type: map_at_100
value: 19.996
- type: map_at_1000
value: 20.889
- type: map_at_20
value: 18.088
- type: map_at_3
value: 12.864
- type: map_at_5
value: 14.515
- type: mrr_at_1
value: 24.643356643356643
- type: mrr_at_10
value: 33.755599955599926
- type: mrr_at_100
value: 34.55914769326114
- type: mrr_at_1000
value: 34.614384237219745
- type: mrr_at_20
value: 34.228909650276194
- type: mrr_at_3
value: 31.445221445221456
- type: mrr_at_5
value: 32.71375291375297
- type: nauc_map_at_1000_diff1
value: 19.17751654240679
- type: nauc_map_at_1000_max
value: 43.493743561136434
- type: nauc_map_at_1000_std
value: 21.14477911550252
- type: nauc_map_at_100_diff1
value: 20.259227234415395
- type: nauc_map_at_100_max
value: 42.510860292169106
- type: nauc_map_at_100_std
value: 18.63085160442346
- type: nauc_map_at_10_diff1
value: 24.12419385640694
- type: nauc_map_at_10_max
value: 35.99892932069915
- type: nauc_map_at_10_std
value: 8.488520124325058
- type: nauc_map_at_1_diff1
value: 35.09239143996649
- type: nauc_map_at_1_max
value: 23.72498533914286
- type: nauc_map_at_1_std
value: -4.164387883546102
- type: nauc_map_at_20_diff1
value: 22.411418237320817
- type: nauc_map_at_20_max
value: 39.12496266094892
- type: nauc_map_at_20_std
value: 12.371656353894227
- type: nauc_map_at_3_diff1
value: 28.106972376813506
- type: nauc_map_at_3_max
value: 29.57824316865409
- type: nauc_map_at_3_std
value: 1.8928791254813127
- type: nauc_map_at_5_diff1
value: 26.4958239149419
- type: nauc_map_at_5_max
value: 32.45906016649239
- type: nauc_map_at_5_std
value: 4.612735963224018
- type: nauc_mrr_at_1000_diff1
value: 17.614812607094446
- type: nauc_mrr_at_1000_max
value: 41.13031556228715
- type: nauc_mrr_at_1000_std
value: 22.564112871230318
- type: nauc_mrr_at_100_diff1
value: 17.614044568011085
- type: nauc_mrr_at_100_max
value: 41.129436273086796
- type: nauc_mrr_at_100_std
value: 22.566763500658766
- type: nauc_mrr_at_10_diff1
value: 17.61869494452089
- type: nauc_mrr_at_10_max
value: 41.091542329381426
- type: nauc_mrr_at_10_std
value: 22.370473458633594
- type: nauc_mrr_at_1_diff1
value: 20.321421442201913
- type: nauc_mrr_at_1_max
value: 38.36531448180009
- type: nauc_mrr_at_1_std
value: 18.422203207777688
- type: nauc_mrr_at_20_diff1
value: 17.614767736091625
- type: nauc_mrr_at_20_max
value: 41.11221420736687
- type: nauc_mrr_at_20_std
value: 22.44271891522012
- type: nauc_mrr_at_3_diff1
value: 17.98184651584625
- type: nauc_mrr_at_3_max
value: 40.424293610470144
- type: nauc_mrr_at_3_std
value: 21.554750947206706
- type: nauc_mrr_at_5_diff1
value: 17.72088314927416
- type: nauc_mrr_at_5_max
value: 40.662724739072694
- type: nauc_mrr_at_5_std
value: 21.822957528431928
- type: nauc_ndcg_at_1000_diff1
value: 15.310699428328398
- type: nauc_ndcg_at_1000_max
value: 48.83921393349997
- type: nauc_ndcg_at_1000_std
value: 32.22600294110774
- type: nauc_ndcg_at_100_diff1
value: 16.62672763977423
- type: nauc_ndcg_at_100_max
value: 47.36060653537392
- type: nauc_ndcg_at_100_std
value: 27.879865162871575
- type: nauc_ndcg_at_10_diff1
value: 16.436684176028116
- type: nauc_ndcg_at_10_max
value: 43.00026520872974
- type: nauc_ndcg_at_10_std
value: 22.507354939162806
- type: nauc_ndcg_at_1_diff1
value: 20.321421442201913
- type: nauc_ndcg_at_1_max
value: 38.36531448180009
- type: nauc_ndcg_at_1_std
value: 18.422203207777688
- type: nauc_ndcg_at_20_diff1
value: 17.127747123248835
- type: nauc_ndcg_at_20_max
value: 44.57322943752733
- type: nauc_ndcg_at_20_std
value: 23.146541187377036
- type: nauc_ndcg_at_3_diff1
value: 16.372742984728514
- type: nauc_ndcg_at_3_max
value: 40.91938017883993
- type: nauc_ndcg_at_3_std
value: 21.50917089194154
- type: nauc_ndcg_at_5_diff1
value: 16.40486505525073
- type: nauc_ndcg_at_5_max
value: 41.94597203181329
- type: nauc_ndcg_at_5_std
value: 22.068260809047562
- type: nauc_precision_at_1000_diff1
value: -15.9415313729527
- type: nauc_precision_at_1000_max
value: 12.653329948983643
- type: nauc_precision_at_1000_std
value: 26.371820703256173
- type: nauc_precision_at_100_diff1
value: -11.851070166675289
- type: nauc_precision_at_100_max
value: 32.164365923950115
- type: nauc_precision_at_100_std
value: 45.930226426725426
- type: nauc_precision_at_10_diff1
value: -3.1352660378259163
- type: nauc_precision_at_10_max
value: 45.48359878733272
- type: nauc_precision_at_10_std
value: 40.2917038044196
- type: nauc_precision_at_1_diff1
value: 20.321421442201913
- type: nauc_precision_at_1_max
value: 38.36531448180009
- type: nauc_precision_at_1_std
value: 18.422203207777688
- type: nauc_precision_at_20_diff1
value: -7.087513342144751
- type: nauc_precision_at_20_max
value: 43.66272019058357
- type: nauc_precision_at_20_std
value: 44.22863351071686
- type: nauc_precision_at_3_diff1
value: 7.836185032609045
- type: nauc_precision_at_3_max
value: 44.85412904097269
- type: nauc_precision_at_3_std
value: 30.209139149500057
- type: nauc_precision_at_5_diff1
value: 3.028150537253791
- type: nauc_precision_at_5_max
value: 45.73661708882973
- type: nauc_precision_at_5_std
value: 34.65500311185052
- type: nauc_recall_at_1000_diff1
value: 9.526124668370704
- type: nauc_recall_at_1000_max
value: 51.4190208452196
- type: nauc_recall_at_1000_std
value: 45.694891695646426
- type: nauc_recall_at_100_diff1
value: 12.68466215400009
- type: nauc_recall_at_100_max
value: 42.79112054268112
- type: nauc_recall_at_100_std
value: 28.61954251400998
- type: nauc_recall_at_10_diff1
value: 17.95124413416829
- type: nauc_recall_at_10_max
value: 33.1192036755167
- type: nauc_recall_at_10_std
value: 9.3588175959525
- type: nauc_recall_at_1_diff1
value: 35.09239143996649
- type: nauc_recall_at_1_max
value: 23.72498533914286
- type: nauc_recall_at_1_std
value: -4.164387883546102
- type: nauc_recall_at_20_diff1
value: 16.24916980445646
- type: nauc_recall_at_20_max
value: 36.51316122236076
- type: nauc_recall_at_20_std
value: 13.641588062425736
- type: nauc_recall_at_3_diff1
value: 23.263199724138786
- type: nauc_recall_at_3_max
value: 27.67354561610614
- type: nauc_recall_at_3_std
value: 3.103127242654415
- type: nauc_recall_at_5_diff1
value: 20.719704839229635
- type: nauc_recall_at_5_max
value: 29.66480839111333
- type: nauc_recall_at_5_std
value: 5.514884455797986
- type: ndcg_at_1
value: 24.643
- type: ndcg_at_10
value: 26.592
- type: ndcg_at_100
value: 31.887
- type: ndcg_at_1000
value: 36.695
- type: ndcg_at_20
value: 28.166000000000004
- type: ndcg_at_3
value: 25.238
- type: ndcg_at_5
value: 25.545
- type: precision_at_1
value: 24.643
- type: precision_at_10
value: 13.730999999999998
- type: precision_at_100
value: 4.744000000000001
- type: precision_at_1000
value: 1.167
- type: precision_at_20
value: 10.562000000000001
- type: precision_at_3
value: 20.288999999999998
- type: precision_at_5
value: 17.337
- type: recall_at_1
value: 8.081000000000001
- type: recall_at_10
value: 25.911
- type: recall_at_100
value: 48.176
- type: recall_at_1000
value: 69.655
- type: recall_at_20
value: 32.924
- type: recall_at_3
value: 16.125
- type: recall_at_5
value: 19.988
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (deu-deu)
type: jinaai/xpqa
config: deu-deu
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 84.552
- type: map_at_1
value: 59.023
- type: map_at_10
value: 81.051
- type: map_at_100
value: 81.539
- type: map_at_1000
value: 81.54299999999999
- type: map_at_20
value: 81.401
- type: map_at_3
value: 76.969
- type: map_at_5
value: 80.07600000000001
- type: mrr_at_1
value: 77.67624020887729
- type: mrr_at_10
value: 83.30509967259314
- type: mrr_at_100
value: 83.58599391639456
- type: mrr_at_1000
value: 83.58970114722587
- type: mrr_at_20
value: 83.50275980440317
- type: mrr_at_3
value: 82.07136640557006
- type: mrr_at_5
value: 82.94604003481287
- type: nauc_map_at_1000_diff1
value: 63.12885104269942
- type: nauc_map_at_1000_max
value: 57.7017996674959
- type: nauc_map_at_1000_std
value: -24.951068985070513
- type: nauc_map_at_100_diff1
value: 63.12866509393162
- type: nauc_map_at_100_max
value: 57.70176426013332
- type: nauc_map_at_100_std
value: -24.96012290790273
- type: nauc_map_at_10_diff1
value: 62.847709436211204
- type: nauc_map_at_10_max
value: 57.408873624779524
- type: nauc_map_at_10_std
value: -25.635130363219062
- type: nauc_map_at_1_diff1
value: 71.89683981857102
- type: nauc_map_at_1_max
value: 20.204460967432645
- type: nauc_map_at_1_std
value: -23.07894656629493
- type: nauc_map_at_20_diff1
value: 63.00504457011043
- type: nauc_map_at_20_max
value: 57.66009512514262
- type: nauc_map_at_20_std
value: -25.100138593754885
- type: nauc_map_at_3_diff1
value: 63.199874607788274
- type: nauc_map_at_3_max
value: 47.54482033763308
- type: nauc_map_at_3_std
value: -27.714557098916963
- type: nauc_map_at_5_diff1
value: 63.01006523518669
- type: nauc_map_at_5_max
value: 56.501965964288495
- type: nauc_map_at_5_std
value: -25.367825762790925
- type: nauc_mrr_at_1000_diff1
value: 66.24988063948112
- type: nauc_mrr_at_1000_max
value: 63.56921667744273
- type: nauc_mrr_at_1000_std
value: -22.073973768031863
- type: nauc_mrr_at_100_diff1
value: 66.24919554296275
- type: nauc_mrr_at_100_max
value: 63.57382447608361
- type: nauc_mrr_at_100_std
value: -22.084627248538187
- type: nauc_mrr_at_10_diff1
value: 66.0143885124066
- type: nauc_mrr_at_10_max
value: 63.51277586011898
- type: nauc_mrr_at_10_std
value: -22.477523960705454
- type: nauc_mrr_at_1_diff1
value: 68.25415199323474
- type: nauc_mrr_at_1_max
value: 63.069019003272416
- type: nauc_mrr_at_1_std
value: -18.77085924093244
- type: nauc_mrr_at_20_diff1
value: 66.16203167351055
- type: nauc_mrr_at_20_max
value: 63.607477776215845
- type: nauc_mrr_at_20_std
value: -22.15083176017266
- type: nauc_mrr_at_3_diff1
value: 66.39368842782302
- type: nauc_mrr_at_3_max
value: 63.11411066585295
- type: nauc_mrr_at_3_std
value: -22.63174342814071
- type: nauc_mrr_at_5_diff1
value: 66.17932562332354
- type: nauc_mrr_at_5_max
value: 63.70434825329594
- type: nauc_mrr_at_5_std
value: -21.704012812430438
- type: nauc_ndcg_at_1000_diff1
value: 63.958010361549356
- type: nauc_ndcg_at_1000_max
value: 60.516445000134624
- type: nauc_ndcg_at_1000_std
value: -24.264672248289923
- type: nauc_ndcg_at_100_diff1
value: 63.97654644758022
- type: nauc_ndcg_at_100_max
value: 60.62187552803407
- type: nauc_ndcg_at_100_std
value: -24.317149225778312
- type: nauc_ndcg_at_10_diff1
value: 62.505321221321566
- type: nauc_ndcg_at_10_max
value: 59.77891112351258
- type: nauc_ndcg_at_10_std
value: -26.90910005589911
- type: nauc_ndcg_at_1_diff1
value: 68.25415199323474
- type: nauc_ndcg_at_1_max
value: 63.069019003272416
- type: nauc_ndcg_at_1_std
value: -18.77085924093244
- type: nauc_ndcg_at_20_diff1
value: 63.04281805056225
- type: nauc_ndcg_at_20_max
value: 60.600957307444226
- type: nauc_ndcg_at_20_std
value: -24.954862079889203
- type: nauc_ndcg_at_3_diff1
value: 62.970441139740316
- type: nauc_ndcg_at_3_max
value: 57.543715669055295
- type: nauc_ndcg_at_3_std
value: -25.659388431714703
- type: nauc_ndcg_at_5_diff1
value: 62.82652127664541
- type: nauc_ndcg_at_5_max
value: 58.6970443258532
- type: nauc_ndcg_at_5_std
value: -25.66329354851023
- type: nauc_precision_at_1000_diff1
value: -33.38530947486223
- type: nauc_precision_at_1000_max
value: 25.972468024345414
- type: nauc_precision_at_1000_std
value: 17.460222955117978
- type: nauc_precision_at_100_diff1
value: -32.45175999251703
- type: nauc_precision_at_100_max
value: 26.367996120487337
- type: nauc_precision_at_100_std
value: 17.097957946391208
- type: nauc_precision_at_10_diff1
value: -26.97411235289487
- type: nauc_precision_at_10_max
value: 31.504961687240762
- type: nauc_precision_at_10_std
value: 11.125341183874687
- type: nauc_precision_at_1_diff1
value: 68.25415199323474
- type: nauc_precision_at_1_max
value: 63.069019003272416
- type: nauc_precision_at_1_std
value: -18.77085924093244
- type: nauc_precision_at_20_diff1
value: -29.8678078736273
- type: nauc_precision_at_20_max
value: 29.031222186584504
- type: nauc_precision_at_20_std
value: 14.943600563087928
- type: nauc_precision_at_3_diff1
value: -15.92947221299854
- type: nauc_precision_at_3_max
value: 37.73833494235097
- type: nauc_precision_at_3_std
value: 3.1573228443500847
- type: nauc_precision_at_5_diff1
value: -22.269156821101642
- type: nauc_precision_at_5_max
value: 35.65821838116355
- type: nauc_precision_at_5_std
value: 9.265930386198972
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 66.17058859539249
- type: nauc_recall_at_100_max
value: 78.066942935192
- type: nauc_recall_at_100_std
value: -22.213377762074686
- type: nauc_recall_at_10_diff1
value: 50.82149700700275
- type: nauc_recall_at_10_max
value: 56.68053325008221
- type: nauc_recall_at_10_std
value: -41.81657941433277
- type: nauc_recall_at_1_diff1
value: 71.89683981857102
- type: nauc_recall_at_1_max
value: 20.204460967432645
- type: nauc_recall_at_1_std
value: -23.07894656629493
- type: nauc_recall_at_20_diff1
value: 48.28076011857885
- type: nauc_recall_at_20_max
value: 63.29641555519295
- type: nauc_recall_at_20_std
value: -32.953559708819405
- type: nauc_recall_at_3_diff1
value: 58.15516956312558
- type: nauc_recall_at_3_max
value: 42.66315890283056
- type: nauc_recall_at_3_std
value: -32.16572530544806
- type: nauc_recall_at_5_diff1
value: 55.900844052439766
- type: nauc_recall_at_5_max
value: 55.23702018862884
- type: nauc_recall_at_5_std
value: -30.105929528165
- type: ndcg_at_1
value: 77.676
- type: ndcg_at_10
value: 84.552
- type: ndcg_at_100
value: 86.232
- type: ndcg_at_1000
value: 86.33800000000001
- type: ndcg_at_20
value: 85.515
- type: ndcg_at_3
value: 81.112
- type: ndcg_at_5
value: 82.943
- type: precision_at_1
value: 77.676
- type: precision_at_10
value: 15.17
- type: precision_at_100
value: 1.6230000000000002
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 7.858999999999999
- type: precision_at_3
value: 42.994
- type: precision_at_5
value: 28.747
- type: recall_at_1
value: 59.023
- type: recall_at_10
value: 92.465
- type: recall_at_100
value: 99.18400000000001
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 95.844
- type: recall_at_3
value: 81.826
- type: recall_at_5
value: 88.22
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (deu-eng)
type: jinaai/xpqa
config: deu-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 82.149
- type: map_at_1
value: 56.277
- type: map_at_10
value: 78.36999999999999
- type: map_at_100
value: 78.94
- type: map_at_1000
value: 78.95
- type: map_at_20
value: 78.818
- type: map_at_3
value: 74.25
- type: map_at_5
value: 77.11099999999999
- type: mrr_at_1
value: 74.28198433420366
- type: mrr_at_10
value: 80.57487877657589
- type: mrr_at_100
value: 80.94025764149008
- type: mrr_at_1000
value: 80.94608738871234
- type: mrr_at_20
value: 80.86240675885023
- type: mrr_at_3
value: 79.4604003481288
- type: mrr_at_5
value: 80.10008703220191
- type: nauc_map_at_1000_diff1
value: 60.44369249057189
- type: nauc_map_at_1000_max
value: 49.822240441830246
- type: nauc_map_at_1000_std
value: -27.34026380762817
- type: nauc_map_at_100_diff1
value: 60.44635668050401
- type: nauc_map_at_100_max
value: 49.838675926660684
- type: nauc_map_at_100_std
value: -27.310365556055583
- type: nauc_map_at_10_diff1
value: 60.18546951726522
- type: nauc_map_at_10_max
value: 49.72075398096832
- type: nauc_map_at_10_std
value: -27.86056102461558
- type: nauc_map_at_1_diff1
value: 71.2906657099758
- type: nauc_map_at_1_max
value: 18.970399251589
- type: nauc_map_at_1_std
value: -27.260776614286602
- type: nauc_map_at_20_diff1
value: 60.3525975566164
- type: nauc_map_at_20_max
value: 49.852487866710646
- type: nauc_map_at_20_std
value: -27.305173830170332
- type: nauc_map_at_3_diff1
value: 60.66803500571236
- type: nauc_map_at_3_max
value: 41.18191941521972
- type: nauc_map_at_3_std
value: -28.71383593401732
- type: nauc_map_at_5_diff1
value: 60.57216514504887
- type: nauc_map_at_5_max
value: 47.99837400446299
- type: nauc_map_at_5_std
value: -28.756183015949986
- type: nauc_mrr_at_1000_diff1
value: 63.77031955602516
- type: nauc_mrr_at_1000_max
value: 54.26907383811417
- type: nauc_mrr_at_1000_std
value: -26.227442087164714
- type: nauc_mrr_at_100_diff1
value: 63.77196650108669
- type: nauc_mrr_at_100_max
value: 54.281801457913126
- type: nauc_mrr_at_100_std
value: -26.216077891830793
- type: nauc_mrr_at_10_diff1
value: 63.50095284903051
- type: nauc_mrr_at_10_max
value: 54.3186301730016
- type: nauc_mrr_at_10_std
value: -26.29570241722173
- type: nauc_mrr_at_1_diff1
value: 65.15855770999057
- type: nauc_mrr_at_1_max
value: 53.213286738515066
- type: nauc_mrr_at_1_std
value: -24.683178252901943
- type: nauc_mrr_at_20_diff1
value: 63.74936550280859
- type: nauc_mrr_at_20_max
value: 54.355343751439065
- type: nauc_mrr_at_20_std
value: -26.197316900009817
- type: nauc_mrr_at_3_diff1
value: 63.912612979082695
- type: nauc_mrr_at_3_max
value: 53.75399024225975
- type: nauc_mrr_at_3_std
value: -27.194143264554675
- type: nauc_mrr_at_5_diff1
value: 63.72491059053639
- type: nauc_mrr_at_5_max
value: 53.66107604019352
- type: nauc_mrr_at_5_std
value: -26.92281560584754
- type: nauc_ndcg_at_1000_diff1
value: 61.304218998714354
- type: nauc_ndcg_at_1000_max
value: 52.409135743660386
- type: nauc_ndcg_at_1000_std
value: -26.539796489464056
- type: nauc_ndcg_at_100_diff1
value: 61.40355045085304
- type: nauc_ndcg_at_100_max
value: 52.79402259608008
- type: nauc_ndcg_at_100_std
value: -25.927273456979965
- type: nauc_ndcg_at_10_diff1
value: 59.93675608684116
- type: nauc_ndcg_at_10_max
value: 52.617848197542706
- type: nauc_ndcg_at_10_std
value: -27.314820020095887
- type: nauc_ndcg_at_1_diff1
value: 65.15855770999057
- type: nauc_ndcg_at_1_max
value: 53.213286738515066
- type: nauc_ndcg_at_1_std
value: -24.683178252901943
- type: nauc_ndcg_at_20_diff1
value: 60.85093704358376
- type: nauc_ndcg_at_20_max
value: 53.14529242671602
- type: nauc_ndcg_at_20_std
value: -25.93187916231906
- type: nauc_ndcg_at_3_diff1
value: 60.42301123518882
- type: nauc_ndcg_at_3_max
value: 49.59021992975956
- type: nauc_ndcg_at_3_std
value: -27.397117967810363
- type: nauc_ndcg_at_5_diff1
value: 60.78655153154219
- type: nauc_ndcg_at_5_max
value: 49.54194799556953
- type: nauc_ndcg_at_5_std
value: -29.467910172913413
- type: nauc_precision_at_1000_diff1
value: -34.35027108027456
- type: nauc_precision_at_1000_max
value: 23.762671066858815
- type: nauc_precision_at_1000_std
value: 16.1704780298982
- type: nauc_precision_at_100_diff1
value: -32.66610016754961
- type: nauc_precision_at_100_max
value: 25.504044603109588
- type: nauc_precision_at_100_std
value: 16.932402988816786
- type: nauc_precision_at_10_diff1
value: -25.720903145017342
- type: nauc_precision_at_10_max
value: 30.37029690599926
- type: nauc_precision_at_10_std
value: 10.560753160200314
- type: nauc_precision_at_1_diff1
value: 65.15855770999057
- type: nauc_precision_at_1_max
value: 53.213286738515066
- type: nauc_precision_at_1_std
value: -24.683178252901943
- type: nauc_precision_at_20_diff1
value: -29.577582332619084
- type: nauc_precision_at_20_max
value: 27.984145595920417
- type: nauc_precision_at_20_std
value: 15.083711704044727
- type: nauc_precision_at_3_diff1
value: -14.736267532892697
- type: nauc_precision_at_3_max
value: 36.12211021824307
- type: nauc_precision_at_3_std
value: 3.068643876519412
- type: nauc_precision_at_5_diff1
value: -19.846707283120825
- type: nauc_precision_at_5_max
value: 33.573804532177896
- type: nauc_precision_at_5_std
value: 5.700545622744924
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 68.24749796604452
- type: nauc_recall_at_100_max
value: 83.30024864929815
- type: nauc_recall_at_100_std
value: 21.23763053711522
- type: nauc_recall_at_10_diff1
value: 50.704049683241436
- type: nauc_recall_at_10_max
value: 57.64578984555556
- type: nauc_recall_at_10_std
value: -26.632759037746073
- type: nauc_recall_at_1_diff1
value: 71.2906657099758
- type: nauc_recall_at_1_max
value: 18.970399251589
- type: nauc_recall_at_1_std
value: -27.260776614286602
- type: nauc_recall_at_20_diff1
value: 54.124480837579505
- type: nauc_recall_at_20_max
value: 66.4641515433479
- type: nauc_recall_at_20_std
value: -14.615911455379393
- type: nauc_recall_at_3_diff1
value: 56.54358788321059
- type: nauc_recall_at_3_max
value: 37.765735322465744
- type: nauc_recall_at_3_std
value: -30.824147408598574
- type: nauc_recall_at_5_diff1
value: 56.392894535029214
- type: nauc_recall_at_5_max
value: 45.959268387521554
- type: nauc_recall_at_5_std
value: -33.58175576925282
- type: ndcg_at_1
value: 74.28200000000001
- type: ndcg_at_10
value: 82.149
- type: ndcg_at_100
value: 84.129
- type: ndcg_at_1000
value: 84.307
- type: ndcg_at_20
value: 83.39999999999999
- type: ndcg_at_3
value: 78.583
- type: ndcg_at_5
value: 80.13900000000001
- type: precision_at_1
value: 74.28200000000001
- type: precision_at_10
value: 14.960999999999999
- type: precision_at_100
value: 1.6119999999999999
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 7.813000000000001
- type: precision_at_3
value: 41.819
- type: precision_at_5
value: 27.911
- type: recall_at_1
value: 56.277
- type: recall_at_10
value: 90.729
- type: recall_at_100
value: 98.792
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 95.148
- type: recall_at_3
value: 79.989
- type: recall_at_5
value: 85.603
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-deu)
type: jinaai/xpqa
config: eng-deu
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 60.428000000000004
- type: map_at_1
value: 33.453
- type: map_at_10
value: 54.217000000000006
- type: map_at_100
value: 55.832
- type: map_at_1000
value: 55.884
- type: map_at_20
value: 55.236
- type: map_at_3
value: 48.302
- type: map_at_5
value: 51.902
- type: mrr_at_1
value: 53.916449086161876
- type: mrr_at_10
value: 61.4685647975465
- type: mrr_at_100
value: 62.13718159287348
- type: mrr_at_1000
value: 62.15799113826325
- type: mrr_at_20
value: 61.885388764243544
- type: mrr_at_3
value: 59.44299390774582
- type: mrr_at_5
value: 60.26544821583981
- type: nauc_map_at_1000_diff1
value: 39.824412602121804
- type: nauc_map_at_1000_max
value: 39.49332709959374
- type: nauc_map_at_1000_std
value: -17.27462623749702
- type: nauc_map_at_100_diff1
value: 39.80528910003463
- type: nauc_map_at_100_max
value: 39.51471609156093
- type: nauc_map_at_100_std
value: -17.275536933094937
- type: nauc_map_at_10_diff1
value: 39.28558292349772
- type: nauc_map_at_10_max
value: 38.13220294838968
- type: nauc_map_at_10_std
value: -18.235985574392863
- type: nauc_map_at_1_diff1
value: 43.68892397816937
- type: nauc_map_at_1_max
value: 14.478978190224353
- type: nauc_map_at_1_std
value: -18.435031919225477
- type: nauc_map_at_20_diff1
value: 39.8733530971344
- type: nauc_map_at_20_max
value: 39.30513202591992
- type: nauc_map_at_20_std
value: -17.62362848144766
- type: nauc_map_at_3_diff1
value: 40.31116611188815
- type: nauc_map_at_3_max
value: 31.107314675202165
- type: nauc_map_at_3_std
value: -19.52930881946966
- type: nauc_map_at_5_diff1
value: 39.1241499095765
- type: nauc_map_at_5_max
value: 37.330543901034055
- type: nauc_map_at_5_std
value: -17.893862772447548
- type: nauc_mrr_at_1000_diff1
value: 43.07490530140024
- type: nauc_mrr_at_1000_max
value: 42.28469195779226
- type: nauc_mrr_at_1000_std
value: -15.583217110180737
- type: nauc_mrr_at_100_diff1
value: 43.068836494603886
- type: nauc_mrr_at_100_max
value: 42.29612450479168
- type: nauc_mrr_at_100_std
value: -15.57218089438229
- type: nauc_mrr_at_10_diff1
value: 42.88685919151777
- type: nauc_mrr_at_10_max
value: 41.89944452003811
- type: nauc_mrr_at_10_std
value: -15.909673572763165
- type: nauc_mrr_at_1_diff1
value: 45.67646898532131
- type: nauc_mrr_at_1_max
value: 43.0541870425035
- type: nauc_mrr_at_1_std
value: -15.597124291613563
- type: nauc_mrr_at_20_diff1
value: 43.14141873150977
- type: nauc_mrr_at_20_max
value: 42.33063543184022
- type: nauc_mrr_at_20_std
value: -15.607612016107304
- type: nauc_mrr_at_3_diff1
value: 43.18370928261982
- type: nauc_mrr_at_3_max
value: 42.18529980773961
- type: nauc_mrr_at_3_std
value: -15.900151400673629
- type: nauc_mrr_at_5_diff1
value: 42.43443044877765
- type: nauc_mrr_at_5_max
value: 42.05818605278972
- type: nauc_mrr_at_5_std
value: -15.436502733299893
- type: nauc_ndcg_at_1000_diff1
value: 40.60606676178781
- type: nauc_ndcg_at_1000_max
value: 41.71923393878376
- type: nauc_ndcg_at_1000_std
value: -15.694740326899556
- type: nauc_ndcg_at_100_diff1
value: 40.15270376312309
- type: nauc_ndcg_at_100_max
value: 42.234126305709225
- type: nauc_ndcg_at_100_std
value: -15.436051984708952
- type: nauc_ndcg_at_10_diff1
value: 39.142259831299455
- type: nauc_ndcg_at_10_max
value: 38.61470104273746
- type: nauc_ndcg_at_10_std
value: -18.577452829132742
- type: nauc_ndcg_at_1_diff1
value: 45.67646898532131
- type: nauc_ndcg_at_1_max
value: 43.0541870425035
- type: nauc_ndcg_at_1_std
value: -15.597124291613563
- type: nauc_ndcg_at_20_diff1
value: 40.805159395901306
- type: nauc_ndcg_at_20_max
value: 41.58685629374952
- type: nauc_ndcg_at_20_std
value: -16.862408156222592
- type: nauc_ndcg_at_3_diff1
value: 39.12028215488432
- type: nauc_ndcg_at_3_max
value: 39.70580596343164
- type: nauc_ndcg_at_3_std
value: -16.705546903936213
- type: nauc_ndcg_at_5_diff1
value: 38.42075404927361
- type: nauc_ndcg_at_5_max
value: 38.064219879504385
- type: nauc_ndcg_at_5_std
value: -17.20282111665876
- type: nauc_precision_at_1000_diff1
value: -4.419224540552891
- type: nauc_precision_at_1000_max
value: 35.686022591225246
- type: nauc_precision_at_1000_std
value: 15.023520191032972
- type: nauc_precision_at_100_diff1
value: -2.9027602601603895
- type: nauc_precision_at_100_max
value: 39.99864013028808
- type: nauc_precision_at_100_std
value: 13.863497117255525
- type: nauc_precision_at_10_diff1
value: 5.539104839809501
- type: nauc_precision_at_10_max
value: 42.41625740557432
- type: nauc_precision_at_10_std
value: 1.0894693748662556
- type: nauc_precision_at_1_diff1
value: 45.67646898532131
- type: nauc_precision_at_1_max
value: 43.0541870425035
- type: nauc_precision_at_1_std
value: -15.597124291613563
- type: nauc_precision_at_20_diff1
value: 4.734562571681868
- type: nauc_precision_at_20_max
value: 44.35081213316202
- type: nauc_precision_at_20_std
value: 6.642891478284595
- type: nauc_precision_at_3_diff1
value: 13.936559341472101
- type: nauc_precision_at_3_max
value: 45.426668552497524
- type: nauc_precision_at_3_std
value: -5.219785419247125
- type: nauc_precision_at_5_diff1
value: 8.366706789546015
- type: nauc_precision_at_5_max
value: 46.161942989326896
- type: nauc_precision_at_5_std
value: -0.193140343545876
- type: nauc_recall_at_1000_diff1
value: 45.61785312444842
- type: nauc_recall_at_1000_max
value: 75.68258976531774
- type: nauc_recall_at_1000_std
value: 37.469059422121575
- type: nauc_recall_at_100_diff1
value: 26.798748531805096
- type: nauc_recall_at_100_max
value: 54.72134095197765
- type: nauc_recall_at_100_std
value: -1.5967608233799417
- type: nauc_recall_at_10_diff1
value: 32.13211696200521
- type: nauc_recall_at_10_max
value: 31.13866254975895
- type: nauc_recall_at_10_std
value: -22.31404161136118
- type: nauc_recall_at_1_diff1
value: 43.68892397816937
- type: nauc_recall_at_1_max
value: 14.478978190224353
- type: nauc_recall_at_1_std
value: -18.435031919225477
- type: nauc_recall_at_20_diff1
value: 38.597996930461385
- type: nauc_recall_at_20_max
value: 42.49849027366794
- type: nauc_recall_at_20_std
value: -16.536471900752154
- type: nauc_recall_at_3_diff1
value: 35.343730012759266
- type: nauc_recall_at_3_max
value: 26.898722085043392
- type: nauc_recall_at_3_std
value: -19.4459792273884
- type: nauc_recall_at_5_diff1
value: 31.8310298012186
- type: nauc_recall_at_5_max
value: 32.67800489655844
- type: nauc_recall_at_5_std
value: -16.800929103347283
- type: ndcg_at_1
value: 53.916
- type: ndcg_at_10
value: 60.428000000000004
- type: ndcg_at_100
value: 65.95
- type: ndcg_at_1000
value: 66.88
- type: ndcg_at_20
value: 62.989
- type: ndcg_at_3
value: 55.204
- type: ndcg_at_5
value: 56.42700000000001
- type: precision_at_1
value: 53.916
- type: precision_at_10
value: 14.346999999999998
- type: precision_at_100
value: 1.849
- type: precision_at_1000
value: 0.196
- type: precision_at_20
value: 8.022
- type: precision_at_3
value: 34.552
- type: precision_at_5
value: 24.569
- type: recall_at_1
value: 33.453
- type: recall_at_10
value: 71.07900000000001
- type: recall_at_100
value: 93.207
- type: recall_at_1000
value: 99.60799999999999
- type: recall_at_20
value: 79.482
- type: recall_at_3
value: 53.98
- type: recall_at_5
value: 60.781
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-pol)
type: jinaai/xpqa
config: eng-pol
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 34.042
- type: map_at_1
value: 13.236
- type: map_at_10
value: 27.839999999999996
- type: map_at_100
value: 30.171999999999997
- type: map_at_1000
value: 30.349999999999998
- type: map_at_20
value: 29.044999999999998
- type: map_at_3
value: 22.58
- type: map_at_5
value: 25.83
- type: mrr_at_1
value: 30.318471337579616
- type: mrr_at_10
value: 37.4983823678091
- type: mrr_at_100
value: 38.5784523175009
- type: mrr_at_1000
value: 38.63608698968148
- type: mrr_at_20
value: 38.02996157871825
- type: mrr_at_3
value: 34.798301486199584
- type: mrr_at_5
value: 36.39702760084925
- type: nauc_map_at_1000_diff1
value: 21.07199789609177
- type: nauc_map_at_1000_max
value: 25.959233507893277
- type: nauc_map_at_1000_std
value: -28.011925372852826
- type: nauc_map_at_100_diff1
value: 21.086788412737548
- type: nauc_map_at_100_max
value: 25.8611620203686
- type: nauc_map_at_100_std
value: -28.179239912057515
- type: nauc_map_at_10_diff1
value: 21.23841745922078
- type: nauc_map_at_10_max
value: 25.44290342378288
- type: nauc_map_at_10_std
value: -28.75578689110275
- type: nauc_map_at_1_diff1
value: 28.87454015638211
- type: nauc_map_at_1_max
value: 17.50681123879997
- type: nauc_map_at_1_std
value: -30.382831850562432
- type: nauc_map_at_20_diff1
value: 21.076559713540455
- type: nauc_map_at_20_max
value: 25.538154202494535
- type: nauc_map_at_20_std
value: -28.518764617658555
- type: nauc_map_at_3_diff1
value: 22.159185358766468
- type: nauc_map_at_3_max
value: 23.01652660927249
- type: nauc_map_at_3_std
value: -29.567722713221862
- type: nauc_map_at_5_diff1
value: 21.35578810370897
- type: nauc_map_at_5_max
value: 25.550550437767395
- type: nauc_map_at_5_std
value: -28.7889035461355
- type: nauc_mrr_at_1000_diff1
value: 22.28633009221923
- type: nauc_mrr_at_1000_max
value: 26.920205393136392
- type: nauc_mrr_at_1000_std
value: -25.887791634977642
- type: nauc_mrr_at_100_diff1
value: 22.2754975739755
- type: nauc_mrr_at_100_max
value: 26.90235716615346
- type: nauc_mrr_at_100_std
value: -25.891596020584345
- type: nauc_mrr_at_10_diff1
value: 22.415076305593534
- type: nauc_mrr_at_10_max
value: 26.504643796222222
- type: nauc_mrr_at_10_std
value: -26.6046081215833
- type: nauc_mrr_at_1_diff1
value: 23.406748619244368
- type: nauc_mrr_at_1_max
value: 29.058228240823553
- type: nauc_mrr_at_1_std
value: -26.450169820901078
- type: nauc_mrr_at_20_diff1
value: 22.29233141817678
- type: nauc_mrr_at_20_max
value: 26.69021351064081
- type: nauc_mrr_at_20_std
value: -26.086596227376656
- type: nauc_mrr_at_3_diff1
value: 22.20746187500145
- type: nauc_mrr_at_3_max
value: 27.143725946169457
- type: nauc_mrr_at_3_std
value: -26.7017708594376
- type: nauc_mrr_at_5_diff1
value: 22.71898965233195
- type: nauc_mrr_at_5_max
value: 26.932386658571662
- type: nauc_mrr_at_5_std
value: -26.725541058780234
- type: nauc_ndcg_at_1000_diff1
value: 20.541734305148466
- type: nauc_ndcg_at_1000_max
value: 27.180534238090758
- type: nauc_ndcg_at_1000_std
value: -23.74197745177845
- type: nauc_ndcg_at_100_diff1
value: 20.570052839937468
- type: nauc_ndcg_at_100_max
value: 26.21605034405486
- type: nauc_ndcg_at_100_std
value: -25.359817188805028
- type: nauc_ndcg_at_10_diff1
value: 21.241423075073467
- type: nauc_ndcg_at_10_max
value: 24.599199195239475
- type: nauc_ndcg_at_10_std
value: -28.404540333309008
- type: nauc_ndcg_at_1_diff1
value: 23.406748619244368
- type: nauc_ndcg_at_1_max
value: 29.058228240823553
- type: nauc_ndcg_at_1_std
value: -26.450169820901078
- type: nauc_ndcg_at_20_diff1
value: 20.740460046196873
- type: nauc_ndcg_at_20_max
value: 24.82380195169634
- type: nauc_ndcg_at_20_std
value: -27.376298834244313
- type: nauc_ndcg_at_3_diff1
value: 19.994948682426504
- type: nauc_ndcg_at_3_max
value: 26.153790759405105
- type: nauc_ndcg_at_3_std
value: -27.194548404540885
- type: nauc_ndcg_at_5_diff1
value: 21.48414272096384
- type: nauc_ndcg_at_5_max
value: 25.239652015076373
- type: nauc_ndcg_at_5_std
value: -28.2620160957961
- type: nauc_precision_at_1000_diff1
value: -0.7557639926687744
- type: nauc_precision_at_1000_max
value: 24.265591636994436
- type: nauc_precision_at_1000_std
value: 16.833104654292654
- type: nauc_precision_at_100_diff1
value: 4.647847665941115
- type: nauc_precision_at_100_max
value: 24.42192644844434
- type: nauc_precision_at_100_std
value: 0.2718848568876648
- type: nauc_precision_at_10_diff1
value: 9.465969286722654
- type: nauc_precision_at_10_max
value: 27.448993150448043
- type: nauc_precision_at_10_std
value: -16.519099596502212
- type: nauc_precision_at_1_diff1
value: 23.406748619244368
- type: nauc_precision_at_1_max
value: 29.058228240823553
- type: nauc_precision_at_1_std
value: -26.450169820901078
- type: nauc_precision_at_20_diff1
value: 8.021421615668114
- type: nauc_precision_at_20_max
value: 26.18556481398635
- type: nauc_precision_at_20_std
value: -12.207152108668367
- type: nauc_precision_at_3_diff1
value: 11.783572803634241
- type: nauc_precision_at_3_max
value: 29.259715774978893
- type: nauc_precision_at_3_std
value: -20.407524967717425
- type: nauc_precision_at_5_diff1
value: 10.371728615220821
- type: nauc_precision_at_5_max
value: 30.270642833482864
- type: nauc_precision_at_5_std
value: -18.407334880575494
- type: nauc_recall_at_1000_diff1
value: 6.008969959111555
- type: nauc_recall_at_1000_max
value: 39.79691734058127
- type: nauc_recall_at_1000_std
value: 32.43591825510109
- type: nauc_recall_at_100_diff1
value: 15.2374566058917
- type: nauc_recall_at_100_max
value: 23.058785539503717
- type: nauc_recall_at_100_std
value: -15.962888794058165
- type: nauc_recall_at_10_diff1
value: 19.46184821807753
- type: nauc_recall_at_10_max
value: 19.001003513986866
- type: nauc_recall_at_10_std
value: -27.753332786663876
- type: nauc_recall_at_1_diff1
value: 28.87454015638211
- type: nauc_recall_at_1_max
value: 17.50681123879997
- type: nauc_recall_at_1_std
value: -30.382831850562432
- type: nauc_recall_at_20_diff1
value: 17.237090858517405
- type: nauc_recall_at_20_max
value: 18.42118474134871
- type: nauc_recall_at_20_std
value: -24.862787724031957
- type: nauc_recall_at_3_diff1
value: 18.813019521758577
- type: nauc_recall_at_3_max
value: 19.198572333053544
- type: nauc_recall_at_3_std
value: -28.5644958605618
- type: nauc_recall_at_5_diff1
value: 20.247501986329482
- type: nauc_recall_at_5_max
value: 21.121526202170358
- type: nauc_recall_at_5_std
value: -27.220378617864853
- type: ndcg_at_1
value: 30.318
- type: ndcg_at_10
value: 34.042
- type: ndcg_at_100
value: 42.733
- type: ndcg_at_1000
value: 46.015
- type: ndcg_at_20
value: 37.053999999999995
- type: ndcg_at_3
value: 29.254
- type: ndcg_at_5
value: 30.514000000000003
- type: precision_at_1
value: 30.318
- type: precision_at_10
value: 10.981
- type: precision_at_100
value: 1.889
- type: precision_at_1000
value: 0.234
- type: precision_at_20
value: 6.643000000000001
- type: precision_at_3
value: 22.166
- type: precision_at_5
value: 17.477999999999998
- type: recall_at_1
value: 13.236
- type: recall_at_10
value: 41.461
- type: recall_at_100
value: 75.008
- type: recall_at_1000
value: 96.775
- type: recall_at_20
value: 50.754
- type: recall_at_3
value: 26.081
- type: recall_at_5
value: 33.168
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-cmn)
type: jinaai/xpqa
config: eng-cmn
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 37.504
- type: map_at_1
value: 16.019
- type: map_at_10
value: 30.794
- type: map_at_100
value: 33.157
- type: map_at_1000
value: 33.324999999999996
- type: map_at_20
value: 32.161
- type: map_at_3
value: 25.372
- type: map_at_5
value: 28.246
- type: mrr_at_1
value: 30.461165048543688
- type: mrr_at_10
value: 39.393107566651224
- type: mrr_at_100
value: 40.570039540602295
- type: mrr_at_1000
value: 40.6306116407744
- type: mrr_at_20
value: 40.09428159978876
- type: mrr_at_3
value: 37.176375404530745
- type: mrr_at_5
value: 38.09870550161812
- type: nauc_map_at_1000_diff1
value: 30.82306881892873
- type: nauc_map_at_1000_max
value: 5.877636000666466
- type: nauc_map_at_1000_std
value: -30.7140513386797
- type: nauc_map_at_100_diff1
value: 30.85192449151961
- type: nauc_map_at_100_max
value: 5.809195131550909
- type: nauc_map_at_100_std
value: -30.838556702972063
- type: nauc_map_at_10_diff1
value: 30.50359163635058
- type: nauc_map_at_10_max
value: 6.373491595869303
- type: nauc_map_at_10_std
value: -29.89368007827676
- type: nauc_map_at_1_diff1
value: 38.60240510083884
- type: nauc_map_at_1_max
value: 10.407392664609139
- type: nauc_map_at_1_std
value: -17.76327278732833
- type: nauc_map_at_20_diff1
value: 30.897489125753598
- type: nauc_map_at_20_max
value: 5.9303381898248
- type: nauc_map_at_20_std
value: -30.863345188760515
- type: nauc_map_at_3_diff1
value: 32.8150951852729
- type: nauc_map_at_3_max
value: 7.671931402215177
- type: nauc_map_at_3_std
value: -25.654809758216533
- type: nauc_map_at_5_diff1
value: 31.19558194781019
- type: nauc_map_at_5_max
value: 6.426885613116939
- type: nauc_map_at_5_std
value: -28.609027858850016
- type: nauc_mrr_at_1000_diff1
value: 30.7596332048733
- type: nauc_mrr_at_1000_max
value: 1.1970748115580212
- type: nauc_mrr_at_1000_std
value: -34.647570668150216
- type: nauc_mrr_at_100_diff1
value: 30.74693370788581
- type: nauc_mrr_at_100_max
value: 1.1673272262754841
- type: nauc_mrr_at_100_std
value: -34.67761028542745
- type: nauc_mrr_at_10_diff1
value: 30.537820575183076
- type: nauc_mrr_at_10_max
value: 1.0261868725502707
- type: nauc_mrr_at_10_std
value: -34.999990560631204
- type: nauc_mrr_at_1_diff1
value: 35.51868580113285
- type: nauc_mrr_at_1_max
value: 5.117103773147307
- type: nauc_mrr_at_1_std
value: -30.633913466736956
- type: nauc_mrr_at_20_diff1
value: 30.67318175430903
- type: nauc_mrr_at_20_max
value: 1.0979983974981327
- type: nauc_mrr_at_20_std
value: -34.8388339739997
- type: nauc_mrr_at_3_diff1
value: 30.884642006045702
- type: nauc_mrr_at_3_max
value: 1.7970996544095983
- type: nauc_mrr_at_3_std
value: -34.290172894906085
- type: nauc_mrr_at_5_diff1
value: 30.89687518368571
- type: nauc_mrr_at_5_max
value: 1.2123714988495347
- type: nauc_mrr_at_5_std
value: -35.01704580471926
- type: nauc_ndcg_at_1000_diff1
value: 29.214476799077342
- type: nauc_ndcg_at_1000_max
value: 3.6379035546112872
- type: nauc_ndcg_at_1000_std
value: -32.35757522049194
- type: nauc_ndcg_at_100_diff1
value: 29.130004541376298
- type: nauc_ndcg_at_100_max
value: 2.9580589185293045
- type: nauc_ndcg_at_100_std
value: -33.26884643871724
- type: nauc_ndcg_at_10_diff1
value: 28.521001084366393
- type: nauc_ndcg_at_10_max
value: 3.630223957267483
- type: nauc_ndcg_at_10_std
value: -33.14524140940815
- type: nauc_ndcg_at_1_diff1
value: 35.51868580113285
- type: nauc_ndcg_at_1_max
value: 5.117103773147307
- type: nauc_ndcg_at_1_std
value: -30.633913466736956
- type: nauc_ndcg_at_20_diff1
value: 29.194462756848782
- type: nauc_ndcg_at_20_max
value: 2.61162903136461
- type: nauc_ndcg_at_20_std
value: -34.59161403211834
- type: nauc_ndcg_at_3_diff1
value: 30.183555327135203
- type: nauc_ndcg_at_3_max
value: 5.61949040917093
- type: nauc_ndcg_at_3_std
value: -30.350117794058175
- type: nauc_ndcg_at_5_diff1
value: 29.74420394139971
- type: nauc_ndcg_at_5_max
value: 3.952183813937688
- type: nauc_ndcg_at_5_std
value: -31.807833795302038
- type: nauc_precision_at_1000_diff1
value: -5.467049121617333
- type: nauc_precision_at_1000_max
value: -3.993986884198271
- type: nauc_precision_at_1000_std
value: -13.703967324212224
- type: nauc_precision_at_100_diff1
value: 1.5585428307943647
- type: nauc_precision_at_100_max
value: -4.250455723613214
- type: nauc_precision_at_100_std
value: -22.294689856776493
- type: nauc_precision_at_10_diff1
value: 11.076036917255259
- type: nauc_precision_at_10_max
value: -1.5859394644365377
- type: nauc_precision_at_10_std
value: -34.94912594413202
- type: nauc_precision_at_1_diff1
value: 35.51868580113285
- type: nauc_precision_at_1_max
value: 5.117103773147307
- type: nauc_precision_at_1_std
value: -30.633913466736956
- type: nauc_precision_at_20_diff1
value: 9.311484455773828
- type: nauc_precision_at_20_max
value: -3.678383428592432
- type: nauc_precision_at_20_std
value: -33.700002761401635
- type: nauc_precision_at_3_diff1
value: 19.2787260874381
- type: nauc_precision_at_3_max
value: 0.18292109396940018
- type: nauc_precision_at_3_std
value: -35.23939824276542
- type: nauc_precision_at_5_diff1
value: 14.97930592298584
- type: nauc_precision_at_5_max
value: -1.63540635880963
- type: nauc_precision_at_5_std
value: -35.908283558321315
- type: nauc_recall_at_1000_diff1
value: 26.63056473607804
- type: nauc_recall_at_1000_max
value: 62.7304558520689
- type: nauc_recall_at_1000_std
value: 58.12421701377561
- type: nauc_recall_at_100_diff1
value: 21.42127379898579
- type: nauc_recall_at_100_max
value: 1.4748203516921914
- type: nauc_recall_at_100_std
value: -27.56467339041136
- type: nauc_recall_at_10_diff1
value: 21.20479652609812
- type: nauc_recall_at_10_max
value: 1.7394881489709888
- type: nauc_recall_at_10_std
value: -32.15116902585072
- type: nauc_recall_at_1_diff1
value: 38.60240510083884
- type: nauc_recall_at_1_max
value: 10.407392664609139
- type: nauc_recall_at_1_std
value: -17.76327278732833
- type: nauc_recall_at_20_diff1
value: 23.049652721582632
- type: nauc_recall_at_20_max
value: -1.7715787106286838
- type: nauc_recall_at_20_std
value: -36.14203686002867
- type: nauc_recall_at_3_diff1
value: 26.522179829461873
- type: nauc_recall_at_3_max
value: 6.078208732431124
- type: nauc_recall_at_3_std
value: -25.02625711226274
- type: nauc_recall_at_5_diff1
value: 24.19538553561693
- type: nauc_recall_at_5_max
value: 2.4963810785503524
- type: nauc_recall_at_5_std
value: -30.449635496921257
- type: ndcg_at_1
value: 30.461
- type: ndcg_at_10
value: 37.504
- type: ndcg_at_100
value: 46.156000000000006
- type: ndcg_at_1000
value: 48.985
- type: ndcg_at_20
value: 41.025
- type: ndcg_at_3
value: 32.165
- type: ndcg_at_5
value: 33.072
- type: precision_at_1
value: 30.461
- type: precision_at_10
value: 11.032
- type: precision_at_100
value: 1.8870000000000002
- type: precision_at_1000
value: 0.22499999999999998
- type: precision_at_20
value: 6.833
- type: precision_at_3
value: 22.532
- type: precision_at_5
value: 16.966
- type: recall_at_1
value: 16.019
- type: recall_at_10
value: 47.557
- type: recall_at_100
value: 80.376
- type: recall_at_1000
value: 98.904
- type: recall_at_20
value: 58.48100000000001
- type: recall_at_3
value: 30.682
- type: recall_at_5
value: 36.714999999999996
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-spa)
type: jinaai/xpqa
config: eng-spa
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 53.359
- type: map_at_1
value: 22.892000000000003
- type: map_at_10
value: 45.773
- type: map_at_100
value: 47.778999999999996
- type: map_at_1000
value: 47.882999999999996
- type: map_at_20
value: 46.869
- type: map_at_3
value: 37.643
- type: map_at_5
value: 43.120999999999995
- type: mrr_at_1
value: 47.28877679697352
- type: mrr_at_10
value: 56.95890630316857
- type: mrr_at_100
value: 57.71103367009639
- type: mrr_at_1000
value: 57.73661441948852
- type: mrr_at_20
value: 57.37701091311334
- type: mrr_at_3
value: 54.74989491382929
- type: mrr_at_5
value: 56.08659100462372
- type: nauc_map_at_1000_diff1
value: 27.8347129954991
- type: nauc_map_at_1000_max
value: 38.04300600762859
- type: nauc_map_at_1000_std
value: -18.294653328262868
- type: nauc_map_at_100_diff1
value: 27.818449297770858
- type: nauc_map_at_100_max
value: 38.03533462156633
- type: nauc_map_at_100_std
value: -18.332989980880644
- type: nauc_map_at_10_diff1
value: 27.520664180018358
- type: nauc_map_at_10_max
value: 37.67109855753314
- type: nauc_map_at_10_std
value: -18.496721673888683
- type: nauc_map_at_1_diff1
value: 37.56020148060502
- type: nauc_map_at_1_max
value: 10.298394230150745
- type: nauc_map_at_1_std
value: -20.41359936101547
- type: nauc_map_at_20_diff1
value: 27.615023038189722
- type: nauc_map_at_20_max
value: 37.808525116320254
- type: nauc_map_at_20_std
value: -18.49235775420803
- type: nauc_map_at_3_diff1
value: 30.797347567428424
- type: nauc_map_at_3_max
value: 29.374407828869497
- type: nauc_map_at_3_std
value: -19.75905772914969
- type: nauc_map_at_5_diff1
value: 28.431802888884803
- type: nauc_map_at_5_max
value: 35.57723911610521
- type: nauc_map_at_5_std
value: -19.093588845366824
- type: nauc_mrr_at_1000_diff1
value: 33.263611009054586
- type: nauc_mrr_at_1000_max
value: 40.620639901613664
- type: nauc_mrr_at_1000_std
value: -17.083016011032036
- type: nauc_mrr_at_100_diff1
value: 33.25375012559163
- type: nauc_mrr_at_100_max
value: 40.62376205172005
- type: nauc_mrr_at_100_std
value: -17.091930575226684
- type: nauc_mrr_at_10_diff1
value: 33.05787202690095
- type: nauc_mrr_at_10_max
value: 40.4516362611674
- type: nauc_mrr_at_10_std
value: -17.088910666499892
- type: nauc_mrr_at_1_diff1
value: 36.424151087824555
- type: nauc_mrr_at_1_max
value: 40.955715626650445
- type: nauc_mrr_at_1_std
value: -16.56636409111209
- type: nauc_mrr_at_20_diff1
value: 33.12029456858138
- type: nauc_mrr_at_20_max
value: 40.56409347292635
- type: nauc_mrr_at_20_std
value: -17.102034817242068
- type: nauc_mrr_at_3_diff1
value: 33.52377926814156
- type: nauc_mrr_at_3_max
value: 40.824911575046876
- type: nauc_mrr_at_3_std
value: -16.855935748811092
- type: nauc_mrr_at_5_diff1
value: 33.08646471768442
- type: nauc_mrr_at_5_max
value: 40.59323589955881
- type: nauc_mrr_at_5_std
value: -16.77829710500156
- type: nauc_ndcg_at_1000_diff1
value: 28.741186244590207
- type: nauc_ndcg_at_1000_max
value: 40.0113825410539
- type: nauc_ndcg_at_1000_std
value: -17.15655081742458
- type: nauc_ndcg_at_100_diff1
value: 28.680521359782972
- type: nauc_ndcg_at_100_max
value: 39.94751899984445
- type: nauc_ndcg_at_100_std
value: -17.82813814043932
- type: nauc_ndcg_at_10_diff1
value: 27.22858072673168
- type: nauc_ndcg_at_10_max
value: 38.600188968554725
- type: nauc_ndcg_at_10_std
value: -18.517203924893614
- type: nauc_ndcg_at_1_diff1
value: 36.424151087824555
- type: nauc_ndcg_at_1_max
value: 40.955715626650445
- type: nauc_ndcg_at_1_std
value: -16.56636409111209
- type: nauc_ndcg_at_20_diff1
value: 27.56875900623774
- type: nauc_ndcg_at_20_max
value: 38.95264310199067
- type: nauc_ndcg_at_20_std
value: -18.709973965688445
- type: nauc_ndcg_at_3_diff1
value: 28.682842749851574
- type: nauc_ndcg_at_3_max
value: 38.361215408395964
- type: nauc_ndcg_at_3_std
value: -16.800291231827515
- type: nauc_ndcg_at_5_diff1
value: 28.178239259093484
- type: nauc_ndcg_at_5_max
value: 36.77096292606479
- type: nauc_ndcg_at_5_std
value: -18.718861696641145
- type: nauc_precision_at_1000_diff1
value: -7.3686253252869305
- type: nauc_precision_at_1000_max
value: 31.98896996987639
- type: nauc_precision_at_1000_std
value: 13.125659676392267
- type: nauc_precision_at_100_diff1
value: -2.8239113056969156
- type: nauc_precision_at_100_max
value: 36.95062472971812
- type: nauc_precision_at_100_std
value: 7.230228733647562
- type: nauc_precision_at_10_diff1
value: 2.5515545798843555
- type: nauc_precision_at_10_max
value: 45.46146019314904
- type: nauc_precision_at_10_std
value: -1.3249340536211553
- type: nauc_precision_at_1_diff1
value: 36.424151087824555
- type: nauc_precision_at_1_max
value: 40.955715626650445
- type: nauc_precision_at_1_std
value: -16.56636409111209
- type: nauc_precision_at_20_diff1
value: 0.7202861770489576
- type: nauc_precision_at_20_max
value: 41.9937596214609
- type: nauc_precision_at_20_std
value: 0.2756400069730064
- type: nauc_precision_at_3_diff1
value: 12.89221206929447
- type: nauc_precision_at_3_max
value: 48.57775126381142
- type: nauc_precision_at_3_std
value: -8.042242254131068
- type: nauc_precision_at_5_diff1
value: 7.063616193387763
- type: nauc_precision_at_5_max
value: 47.26496887331675
- type: nauc_precision_at_5_std
value: -4.735805200913049
- type: nauc_recall_at_1000_diff1
value: 2.6650052980682224
- type: nauc_recall_at_1000_max
value: 81.94826279951472
- type: nauc_recall_at_1000_std
value: 48.46012388224573
- type: nauc_recall_at_100_diff1
value: 24.516371948375827
- type: nauc_recall_at_100_max
value: 39.17639620389552
- type: nauc_recall_at_100_std
value: -17.884197602579533
- type: nauc_recall_at_10_diff1
value: 19.93892097640112
- type: nauc_recall_at_10_max
value: 33.079079440022106
- type: nauc_recall_at_10_std
value: -20.22227622801884
- type: nauc_recall_at_1_diff1
value: 37.56020148060502
- type: nauc_recall_at_1_max
value: 10.298394230150745
- type: nauc_recall_at_1_std
value: -20.41359936101547
- type: nauc_recall_at_20_diff1
value: 20.363784035670633
- type: nauc_recall_at_20_max
value: 33.39352971625336
- type: nauc_recall_at_20_std
value: -21.712050932168875
- type: nauc_recall_at_3_diff1
value: 26.220072121604655
- type: nauc_recall_at_3_max
value: 25.853218030218507
- type: nauc_recall_at_3_std
value: -17.830613372910907
- type: nauc_recall_at_5_diff1
value: 22.25850162680252
- type: nauc_recall_at_5_max
value: 30.89620539042785
- type: nauc_recall_at_5_std
value: -19.16786434439169
- type: ndcg_at_1
value: 47.288999999999994
- type: ndcg_at_10
value: 53.359
- type: ndcg_at_100
value: 60.25899999999999
- type: ndcg_at_1000
value: 61.902
- type: ndcg_at_20
value: 56.025000000000006
- type: ndcg_at_3
value: 47.221999999999994
- type: ndcg_at_5
value: 49.333
- type: precision_at_1
value: 47.288999999999994
- type: precision_at_10
value: 16.003
- type: precision_at_100
value: 2.221
- type: precision_at_1000
value: 0.246
- type: precision_at_20
value: 8.985
- type: precision_at_3
value: 34.510000000000005
- type: precision_at_5
value: 26.961000000000002
- type: recall_at_1
value: 22.892000000000003
- type: recall_at_10
value: 62.928
- type: recall_at_100
value: 89.105
- type: recall_at_1000
value: 99.319
- type: recall_at_20
value: 71.387
- type: recall_at_3
value: 43.492999999999995
- type: recall_at_5
value: 53.529
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-fra)
type: jinaai/xpqa
config: eng-fra
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 54.888000000000005
- type: map_at_1
value: 26.079
- type: map_at_10
value: 47.434
- type: map_at_100
value: 49.376
- type: map_at_1000
value: 49.461
- type: map_at_20
value: 48.634
- type: map_at_3
value: 40.409
- type: map_at_5
value: 44.531
- type: mrr_at_1
value: 46.86248331108144
- type: mrr_at_10
value: 56.45506177548896
- type: mrr_at_100
value: 57.20360629445577
- type: mrr_at_1000
value: 57.227004696897986
- type: mrr_at_20
value: 56.905302765737865
- type: mrr_at_3
value: 54.09434801958164
- type: mrr_at_5
value: 55.40943480195811
- type: nauc_map_at_1000_diff1
value: 37.739936045535885
- type: nauc_map_at_1000_max
value: 35.92625003516368
- type: nauc_map_at_1000_std
value: -15.825119611638398
- type: nauc_map_at_100_diff1
value: 37.71697833661983
- type: nauc_map_at_100_max
value: 35.91174068136317
- type: nauc_map_at_100_std
value: -15.838841891589006
- type: nauc_map_at_10_diff1
value: 37.52309268219689
- type: nauc_map_at_10_max
value: 35.4887130483351
- type: nauc_map_at_10_std
value: -16.61132378136234
- type: nauc_map_at_1_diff1
value: 42.705087329207984
- type: nauc_map_at_1_max
value: 12.047671550242974
- type: nauc_map_at_1_std
value: -17.156030827065834
- type: nauc_map_at_20_diff1
value: 37.59446680137666
- type: nauc_map_at_20_max
value: 35.80559546695052
- type: nauc_map_at_20_std
value: -16.158338316249786
- type: nauc_map_at_3_diff1
value: 38.618415267131816
- type: nauc_map_at_3_max
value: 27.030227996183925
- type: nauc_map_at_3_std
value: -18.962500694157857
- type: nauc_map_at_5_diff1
value: 37.980845601534256
- type: nauc_map_at_5_max
value: 32.82374761283266
- type: nauc_map_at_5_std
value: -17.856875825229565
- type: nauc_mrr_at_1000_diff1
value: 40.26059509279346
- type: nauc_mrr_at_1000_max
value: 39.28453752990871
- type: nauc_mrr_at_1000_std
value: -13.306217279524212
- type: nauc_mrr_at_100_diff1
value: 40.23390833398881
- type: nauc_mrr_at_100_max
value: 39.26041461025653
- type: nauc_mrr_at_100_std
value: -13.317700798873153
- type: nauc_mrr_at_10_diff1
value: 40.163737640180145
- type: nauc_mrr_at_10_max
value: 39.27138538165913
- type: nauc_mrr_at_10_std
value: -13.472971360323038
- type: nauc_mrr_at_1_diff1
value: 42.95339241383707
- type: nauc_mrr_at_1_max
value: 40.62982307619158
- type: nauc_mrr_at_1_std
value: -10.429597045942748
- type: nauc_mrr_at_20_diff1
value: 40.23703505923782
- type: nauc_mrr_at_20_max
value: 39.27051308063652
- type: nauc_mrr_at_20_std
value: -13.390197643922038
- type: nauc_mrr_at_3_diff1
value: 40.5721313555661
- type: nauc_mrr_at_3_max
value: 39.254774354468594
- type: nauc_mrr_at_3_std
value: -13.773803807863827
- type: nauc_mrr_at_5_diff1
value: 40.41081287079734
- type: nauc_mrr_at_5_max
value: 39.515241132077335
- type: nauc_mrr_at_5_std
value: -13.306544090087336
- type: nauc_ndcg_at_1000_diff1
value: 38.04772268296103
- type: nauc_ndcg_at_1000_max
value: 38.03364565521176
- type: nauc_ndcg_at_1000_std
value: -14.203182726102263
- type: nauc_ndcg_at_100_diff1
value: 37.51752795463643
- type: nauc_ndcg_at_100_max
value: 37.809671511710604
- type: nauc_ndcg_at_100_std
value: -13.880578225081408
- type: nauc_ndcg_at_10_diff1
value: 36.78438984005559
- type: nauc_ndcg_at_10_max
value: 36.98105155993232
- type: nauc_ndcg_at_10_std
value: -16.886308645939113
- type: nauc_ndcg_at_1_diff1
value: 42.95339241383707
- type: nauc_ndcg_at_1_max
value: 40.62982307619158
- type: nauc_ndcg_at_1_std
value: -10.429597045942748
- type: nauc_ndcg_at_20_diff1
value: 36.94164323893683
- type: nauc_ndcg_at_20_max
value: 37.333583379288285
- type: nauc_ndcg_at_20_std
value: -15.853318071434716
- type: nauc_ndcg_at_3_diff1
value: 36.905604845477384
- type: nauc_ndcg_at_3_max
value: 35.10252586688781
- type: nauc_ndcg_at_3_std
value: -17.128435988977742
- type: nauc_ndcg_at_5_diff1
value: 37.96742463612705
- type: nauc_ndcg_at_5_max
value: 34.65945109443365
- type: nauc_ndcg_at_5_std
value: -17.916428667861183
- type: nauc_precision_at_1000_diff1
value: -3.740861894117653
- type: nauc_precision_at_1000_max
value: 31.993854396874177
- type: nauc_precision_at_1000_std
value: 17.445629474196448
- type: nauc_precision_at_100_diff1
value: -0.4825948747911606
- type: nauc_precision_at_100_max
value: 35.834638448782954
- type: nauc_precision_at_100_std
value: 16.82718796079511
- type: nauc_precision_at_10_diff1
value: 8.285949866268147
- type: nauc_precision_at_10_max
value: 45.3292519726866
- type: nauc_precision_at_10_std
value: 4.5574850748441555
- type: nauc_precision_at_1_diff1
value: 42.95339241383707
- type: nauc_precision_at_1_max
value: 40.62982307619158
- type: nauc_precision_at_1_std
value: -10.429597045942748
- type: nauc_precision_at_20_diff1
value: 4.890590733611442
- type: nauc_precision_at_20_max
value: 41.83051757078859
- type: nauc_precision_at_20_std
value: 9.197347125630467
- type: nauc_precision_at_3_diff1
value: 17.79940075411976
- type: nauc_precision_at_3_max
value: 45.224103632426946
- type: nauc_precision_at_3_std
value: -5.017203435609909
- type: nauc_precision_at_5_diff1
value: 13.548063145911929
- type: nauc_precision_at_5_max
value: 46.84837547409909
- type: nauc_precision_at_5_std
value: -0.8925939386354484
- type: nauc_recall_at_1000_diff1
value: 74.48441717138078
- type: nauc_recall_at_1000_max
value: 74.66717137705027
- type: nauc_recall_at_1000_std
value: 0.24030117471512125
- type: nauc_recall_at_100_diff1
value: 22.553777341988656
- type: nauc_recall_at_100_max
value: 31.67861029246527
- type: nauc_recall_at_100_std
value: 0.2707450517253687
- type: nauc_recall_at_10_diff1
value: 28.490866614443235
- type: nauc_recall_at_10_max
value: 31.722970141434352
- type: nauc_recall_at_10_std
value: -21.97893365028007
- type: nauc_recall_at_1_diff1
value: 42.705087329207984
- type: nauc_recall_at_1_max
value: 12.047671550242974
- type: nauc_recall_at_1_std
value: -17.156030827065834
- type: nauc_recall_at_20_diff1
value: 27.44043454173112
- type: nauc_recall_at_20_max
value: 31.454281772040716
- type: nauc_recall_at_20_std
value: -20.1735695305415
- type: nauc_recall_at_3_diff1
value: 34.08447534706394
- type: nauc_recall_at_3_max
value: 21.793973773840865
- type: nauc_recall_at_3_std
value: -22.753978372378906
- type: nauc_recall_at_5_diff1
value: 33.59686526199479
- type: nauc_recall_at_5_max
value: 29.188889073761302
- type: nauc_recall_at_5_std
value: -21.96156333744562
- type: ndcg_at_1
value: 46.861999999999995
- type: ndcg_at_10
value: 54.888000000000005
- type: ndcg_at_100
value: 61.477000000000004
- type: ndcg_at_1000
value: 62.768
- type: ndcg_at_20
value: 57.812
- type: ndcg_at_3
value: 48.721
- type: ndcg_at_5
value: 50.282000000000004
- type: precision_at_1
value: 46.861999999999995
- type: precision_at_10
value: 15.167
- type: precision_at_100
value: 2.072
- type: precision_at_1000
value: 0.22499999999999998
- type: precision_at_20
value: 8.672
- type: precision_at_3
value: 33.066
- type: precision_at_5
value: 24.726
- type: recall_at_1
value: 26.079
- type: recall_at_10
value: 66.095
- type: recall_at_100
value: 91.65299999999999
- type: recall_at_1000
value: 99.83999999999999
- type: recall_at_20
value: 75.28
- type: recall_at_3
value: 46.874
- type: recall_at_5
value: 55.062
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (pol-eng)
type: jinaai/xpqa
config: pol-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 50.831
- type: map_at_1
value: 25.549
- type: map_at_10
value: 44.432
- type: map_at_100
value: 46.431
- type: map_at_1000
value: 46.525
- type: map_at_20
value: 45.595
- type: map_at_3
value: 38.574000000000005
- type: map_at_5
value: 42.266999999999996
- type: mrr_at_1
value: 43.5006435006435
- type: mrr_at_10
value: 51.561255132683684
- type: mrr_at_100
value: 52.59912482635216
- type: mrr_at_1000
value: 52.631337587043056
- type: mrr_at_20
value: 52.23234440063273
- type: mrr_at_3
value: 48.97039897039895
- type: mrr_at_5
value: 50.31531531531527
- type: nauc_map_at_1000_diff1
value: 35.907901295900174
- type: nauc_map_at_1000_max
value: 24.573763602041687
- type: nauc_map_at_1000_std
value: -29.524077960309313
- type: nauc_map_at_100_diff1
value: 35.86869121827827
- type: nauc_map_at_100_max
value: 24.532343818487494
- type: nauc_map_at_100_std
value: -29.613979124488864
- type: nauc_map_at_10_diff1
value: 35.90171794022391
- type: nauc_map_at_10_max
value: 23.90914892943268
- type: nauc_map_at_10_std
value: -30.43698820061533
- type: nauc_map_at_1_diff1
value: 50.80313333312038
- type: nauc_map_at_1_max
value: 16.649890421888156
- type: nauc_map_at_1_std
value: -22.323989416471683
- type: nauc_map_at_20_diff1
value: 35.77755470212964
- type: nauc_map_at_20_max
value: 24.199895270297034
- type: nauc_map_at_20_std
value: -30.223411960170647
- type: nauc_map_at_3_diff1
value: 38.964124882315936
- type: nauc_map_at_3_max
value: 21.187432510177167
- type: nauc_map_at_3_std
value: -28.976663506389887
- type: nauc_map_at_5_diff1
value: 36.04644236616672
- type: nauc_map_at_5_max
value: 23.501186429317094
- type: nauc_map_at_5_std
value: -30.068144596060748
- type: nauc_mrr_at_1000_diff1
value: 41.36555452105447
- type: nauc_mrr_at_1000_max
value: 26.376799280402867
- type: nauc_mrr_at_1000_std
value: -30.008603028757424
- type: nauc_mrr_at_100_diff1
value: 41.35523965220727
- type: nauc_mrr_at_100_max
value: 26.402612115967706
- type: nauc_mrr_at_100_std
value: -29.991754627128024
- type: nauc_mrr_at_10_diff1
value: 41.001395127259315
- type: nauc_mrr_at_10_max
value: 26.104860505051384
- type: nauc_mrr_at_10_std
value: -30.38420449487516
- type: nauc_mrr_at_1_diff1
value: 44.882846373248206
- type: nauc_mrr_at_1_max
value: 26.61905322890808
- type: nauc_mrr_at_1_std
value: -28.724565662206153
- type: nauc_mrr_at_20_diff1
value: 41.278009142648834
- type: nauc_mrr_at_20_max
value: 26.284565529087295
- type: nauc_mrr_at_20_std
value: -30.19549140549242
- type: nauc_mrr_at_3_diff1
value: 41.74663893951077
- type: nauc_mrr_at_3_max
value: 26.263048464325884
- type: nauc_mrr_at_3_std
value: -30.676733442965688
- type: nauc_mrr_at_5_diff1
value: 41.11461477846568
- type: nauc_mrr_at_5_max
value: 25.94713927964926
- type: nauc_mrr_at_5_std
value: -30.317066480767817
- type: nauc_ndcg_at_1000_diff1
value: 36.34161052445199
- type: nauc_ndcg_at_1000_max
value: 26.321036033696206
- type: nauc_ndcg_at_1000_std
value: -27.59146917115399
- type: nauc_ndcg_at_100_diff1
value: 35.66557800007035
- type: nauc_ndcg_at_100_max
value: 26.282211208336136
- type: nauc_ndcg_at_100_std
value: -27.905634124461333
- type: nauc_ndcg_at_10_diff1
value: 35.34872687407275
- type: nauc_ndcg_at_10_max
value: 24.018561915792272
- type: nauc_ndcg_at_10_std
value: -31.57712772869015
- type: nauc_ndcg_at_1_diff1
value: 44.882846373248206
- type: nauc_ndcg_at_1_max
value: 26.865602442152554
- type: nauc_ndcg_at_1_std
value: -28.509295454329152
- type: nauc_ndcg_at_20_diff1
value: 35.46177768045546
- type: nauc_ndcg_at_20_max
value: 24.921273675141542
- type: nauc_ndcg_at_20_std
value: -30.84348812979793
- type: nauc_ndcg_at_3_diff1
value: 36.84688489063923
- type: nauc_ndcg_at_3_max
value: 24.088513229463736
- type: nauc_ndcg_at_3_std
value: -30.05640995379297
- type: nauc_ndcg_at_5_diff1
value: 35.623143276796185
- type: nauc_ndcg_at_5_max
value: 23.76654250474061
- type: nauc_ndcg_at_5_std
value: -30.87847710074466
- type: nauc_precision_at_1000_diff1
value: -16.270532533886932
- type: nauc_precision_at_1000_max
value: 17.37365042394671
- type: nauc_precision_at_1000_std
value: 16.27166715693082
- type: nauc_precision_at_100_diff1
value: -13.175264889436313
- type: nauc_precision_at_100_max
value: 19.488571046893963
- type: nauc_precision_at_100_std
value: 9.055429698007798
- type: nauc_precision_at_10_diff1
value: 0.6806938753592942
- type: nauc_precision_at_10_max
value: 21.933083960522616
- type: nauc_precision_at_10_std
value: -18.2147036942157
- type: nauc_precision_at_1_diff1
value: 44.882846373248206
- type: nauc_precision_at_1_max
value: 26.865602442152554
- type: nauc_precision_at_1_std
value: -28.509295454329152
- type: nauc_precision_at_20_diff1
value: -4.318119150162302
- type: nauc_precision_at_20_max
value: 21.089702301041687
- type: nauc_precision_at_20_std
value: -10.333077681479546
- type: nauc_precision_at_3_diff1
value: 11.496076462671107
- type: nauc_precision_at_3_max
value: 23.018301549827008
- type: nauc_precision_at_3_std
value: -23.98652995416454
- type: nauc_precision_at_5_diff1
value: 4.271050668117355
- type: nauc_precision_at_5_max
value: 23.61051327966779
- type: nauc_precision_at_5_std
value: -21.557618503107847
- type: nauc_recall_at_1000_diff1
value: 62.23955911850697
- type: nauc_recall_at_1000_max
value: 83.20491723365542
- type: nauc_recall_at_1000_std
value: 66.5173462601958
- type: nauc_recall_at_100_diff1
value: 20.503778602988177
- type: nauc_recall_at_100_max
value: 29.379026288767506
- type: nauc_recall_at_100_std
value: -16.139120874540573
- type: nauc_recall_at_10_diff1
value: 27.659110249896557
- type: nauc_recall_at_10_max
value: 19.69557968026332
- type: nauc_recall_at_10_std
value: -33.95657132767551
- type: nauc_recall_at_1_diff1
value: 50.80313333312038
- type: nauc_recall_at_1_max
value: 16.649890421888156
- type: nauc_recall_at_1_std
value: -22.323989416471683
- type: nauc_recall_at_20_diff1
value: 27.084453724565176
- type: nauc_recall_at_20_max
value: 21.40080632474994
- type: nauc_recall_at_20_std
value: -32.83683639340239
- type: nauc_recall_at_3_diff1
value: 34.32950941333572
- type: nauc_recall_at_3_max
value: 18.55616615958199
- type: nauc_recall_at_3_std
value: -30.375983327454076
- type: nauc_recall_at_5_diff1
value: 29.44516734974564
- type: nauc_recall_at_5_max
value: 20.630543534300312
- type: nauc_recall_at_5_std
value: -31.30763062499127
- type: ndcg_at_1
value: 43.501
- type: ndcg_at_10
value: 50.831
- type: ndcg_at_100
value: 58.17099999999999
- type: ndcg_at_1000
value: 59.705
- type: ndcg_at_20
value: 54.047999999999995
- type: ndcg_at_3
value: 44.549
- type: ndcg_at_5
value: 46.861000000000004
- type: precision_at_1
value: 43.501
- type: precision_at_10
value: 12.895999999999999
- type: precision_at_100
value: 1.9
- type: precision_at_1000
value: 0.21
- type: precision_at_20
value: 7.593
- type: precision_at_3
value: 29.215000000000003
- type: precision_at_5
value: 21.57
- type: recall_at_1
value: 25.549
- type: recall_at_10
value: 61.795
- type: recall_at_100
value: 90.019
- type: recall_at_1000
value: 99.807
- type: recall_at_20
value: 72.096
- type: recall_at_3
value: 43.836999999999996
- type: recall_at_5
value: 51.714000000000006
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (pol-pol)
type: jinaai/xpqa
config: pol-pol
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 53.70399999999999
- type: map_at_1
value: 27.739000000000004
- type: map_at_10
value: 47.469
- type: map_at_100
value: 49.392
- type: map_at_1000
value: 49.483
- type: map_at_20
value: 48.646
- type: map_at_3
value: 41.467
- type: map_at_5
value: 45.467
- type: mrr_at_1
value: 47.00636942675159
- type: mrr_at_10
value: 54.63699322616519
- type: mrr_at_100
value: 55.54525182833755
- type: mrr_at_1000
value: 55.581331515356155
- type: mrr_at_20
value: 55.22918377451415
- type: mrr_at_3
value: 52.03821656050952
- type: mrr_at_5
value: 53.38216560509549
- type: nauc_map_at_1000_diff1
value: 45.03530825034854
- type: nauc_map_at_1000_max
value: 34.22740272603397
- type: nauc_map_at_1000_std
value: -30.428880484199244
- type: nauc_map_at_100_diff1
value: 44.978704455592805
- type: nauc_map_at_100_max
value: 34.20908357964765
- type: nauc_map_at_100_std
value: -30.47325365059666
- type: nauc_map_at_10_diff1
value: 44.9560579177672
- type: nauc_map_at_10_max
value: 33.70097588985278
- type: nauc_map_at_10_std
value: -31.205563222357885
- type: nauc_map_at_1_diff1
value: 57.94711780881773
- type: nauc_map_at_1_max
value: 21.60278071836319
- type: nauc_map_at_1_std
value: -23.273741268035923
- type: nauc_map_at_20_diff1
value: 44.97859054699532
- type: nauc_map_at_20_max
value: 34.153729150181846
- type: nauc_map_at_20_std
value: -30.97482545902907
- type: nauc_map_at_3_diff1
value: 47.52016138686765
- type: nauc_map_at_3_max
value: 30.176197065298417
- type: nauc_map_at_3_std
value: -29.90628984041898
- type: nauc_map_at_5_diff1
value: 45.36581638257985
- type: nauc_map_at_5_max
value: 33.697200263698036
- type: nauc_map_at_5_std
value: -31.165331120088453
- type: nauc_mrr_at_1000_diff1
value: 53.32889526818364
- type: nauc_mrr_at_1000_max
value: 36.104118340589736
- type: nauc_mrr_at_1000_std
value: -31.321132494516984
- type: nauc_mrr_at_100_diff1
value: 53.30695875258367
- type: nauc_mrr_at_100_max
value: 36.114890079024455
- type: nauc_mrr_at_100_std
value: -31.291749322117447
- type: nauc_mrr_at_10_diff1
value: 53.189084772141435
- type: nauc_mrr_at_10_max
value: 35.939061062282484
- type: nauc_mrr_at_10_std
value: -31.502185884653645
- type: nauc_mrr_at_1_diff1
value: 56.89368291041337
- type: nauc_mrr_at_1_max
value: 36.07581125496313
- type: nauc_mrr_at_1_std
value: -29.703764232519475
- type: nauc_mrr_at_20_diff1
value: 53.23955737199497
- type: nauc_mrr_at_20_max
value: 36.068824838215676
- type: nauc_mrr_at_20_std
value: -31.420039428197594
- type: nauc_mrr_at_3_diff1
value: 53.74385074861207
- type: nauc_mrr_at_3_max
value: 35.57054587735015
- type: nauc_mrr_at_3_std
value: -32.356894834537684
- type: nauc_mrr_at_5_diff1
value: 53.66669556981826
- type: nauc_mrr_at_5_max
value: 36.02102289605049
- type: nauc_mrr_at_5_std
value: -32.030437067359124
- type: nauc_ndcg_at_1000_diff1
value: 46.34900536768847
- type: nauc_ndcg_at_1000_max
value: 35.6314995837715
- type: nauc_ndcg_at_1000_std
value: -28.965103958822624
- type: nauc_ndcg_at_100_diff1
value: 45.1587893788861
- type: nauc_ndcg_at_100_max
value: 35.62430753595297
- type: nauc_ndcg_at_100_std
value: -28.77303405812772
- type: nauc_ndcg_at_10_diff1
value: 44.928781590765965
- type: nauc_ndcg_at_10_max
value: 34.315200006430366
- type: nauc_ndcg_at_10_std
value: -32.05164097076614
- type: nauc_ndcg_at_1_diff1
value: 57.228262350455125
- type: nauc_ndcg_at_1_max
value: 35.645285703387366
- type: nauc_ndcg_at_1_std
value: -29.893553821348718
- type: nauc_ndcg_at_20_diff1
value: 44.959903633039865
- type: nauc_ndcg_at_20_max
value: 35.493022926282755
- type: nauc_ndcg_at_20_std
value: -31.54989291850644
- type: nauc_ndcg_at_3_diff1
value: 46.65266185996905
- type: nauc_ndcg_at_3_max
value: 33.74458119579594
- type: nauc_ndcg_at_3_std
value: -31.493683304534176
- type: nauc_ndcg_at_5_diff1
value: 46.08707037187612
- type: nauc_ndcg_at_5_max
value: 34.7401426055243
- type: nauc_ndcg_at_5_std
value: -32.44390676345172
- type: nauc_precision_at_1000_diff1
value: -12.11355300492561
- type: nauc_precision_at_1000_max
value: 14.490738062121233
- type: nauc_precision_at_1000_std
value: 14.448811005059097
- type: nauc_precision_at_100_diff1
value: -9.742085657181239
- type: nauc_precision_at_100_max
value: 18.030305489251223
- type: nauc_precision_at_100_std
value: 8.213089709529765
- type: nauc_precision_at_10_diff1
value: 5.153466672774969
- type: nauc_precision_at_10_max
value: 27.29412644661678
- type: nauc_precision_at_10_std
value: -15.505053884112355
- type: nauc_precision_at_1_diff1
value: 57.228262350455125
- type: nauc_precision_at_1_max
value: 35.645285703387366
- type: nauc_precision_at_1_std
value: -29.893553821348718
- type: nauc_precision_at_20_diff1
value: -0.6812430761066635
- type: nauc_precision_at_20_max
value: 25.81911286466295
- type: nauc_precision_at_20_std
value: -8.388506222482595
- type: nauc_precision_at_3_diff1
value: 18.263873866510576
- type: nauc_precision_at_3_max
value: 30.879576105862345
- type: nauc_precision_at_3_std
value: -24.0342929870108
- type: nauc_precision_at_5_diff1
value: 10.9905804265327
- type: nauc_precision_at_5_max
value: 30.88468087429045
- type: nauc_precision_at_5_std
value: -20.458684056213507
- type: nauc_recall_at_1000_diff1
value: -64.887668417171
- type: nauc_recall_at_1000_max
value: 52.25501730358092
- type: nauc_recall_at_1000_std
value: 85.13647916200132
- type: nauc_recall_at_100_diff1
value: 18.956777346127655
- type: nauc_recall_at_100_max
value: 36.10473493564588
- type: nauc_recall_at_100_std
value: -10.007474558899949
- type: nauc_recall_at_10_diff1
value: 33.810344497568046
- type: nauc_recall_at_10_max
value: 31.395430183214245
- type: nauc_recall_at_10_std
value: -33.12920524433795
- type: nauc_recall_at_1_diff1
value: 57.94711780881773
- type: nauc_recall_at_1_max
value: 21.60278071836319
- type: nauc_recall_at_1_std
value: -23.273741268035923
- type: nauc_recall_at_20_diff1
value: 31.449657437065397
- type: nauc_recall_at_20_max
value: 34.519574934321945
- type: nauc_recall_at_20_std
value: -33.43406862055647
- type: nauc_recall_at_3_diff1
value: 42.07841848382365
- type: nauc_recall_at_3_max
value: 28.7648772833266
- type: nauc_recall_at_3_std
value: -31.56367736320086
- type: nauc_recall_at_5_diff1
value: 39.21392858246301
- type: nauc_recall_at_5_max
value: 34.28338202081927
- type: nauc_recall_at_5_std
value: -33.725680523721906
- type: ndcg_at_1
value: 46.879
- type: ndcg_at_10
value: 53.70399999999999
- type: ndcg_at_100
value: 60.532
- type: ndcg_at_1000
value: 61.997
- type: ndcg_at_20
value: 56.818999999999996
- type: ndcg_at_3
value: 47.441
- type: ndcg_at_5
value: 49.936
- type: precision_at_1
value: 46.879
- type: precision_at_10
value: 13.376
- type: precision_at_100
value: 1.8980000000000001
- type: precision_at_1000
value: 0.208
- type: precision_at_20
value: 7.771
- type: precision_at_3
value: 30.658
- type: precision_at_5
value: 22.828
- type: recall_at_1
value: 27.739000000000004
- type: recall_at_10
value: 64.197
- type: recall_at_100
value: 90.54100000000001
- type: recall_at_1000
value: 99.90400000000001
- type: recall_at_20
value: 74.178
- type: recall_at_3
value: 46.312
- type: recall_at_5
value: 54.581999999999994
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (cmn-eng)
type: jinaai/xpqa
config: cmn-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 64.64
- type: map_at_1
value: 35.858000000000004
- type: map_at_10
value: 58.547000000000004
- type: map_at_100
value: 60.108
- type: map_at_1000
value: 60.153999999999996
- type: map_at_20
value: 59.528000000000006
- type: map_at_3
value: 51.578
- type: map_at_5
value: 56.206999999999994
- type: mrr_at_1
value: 56.95121951219512
- type: mrr_at_10
value: 64.93975029036001
- type: mrr_at_100
value: 65.63357055718294
- type: mrr_at_1000
value: 65.64844109026834
- type: mrr_at_20
value: 65.41280668715439
- type: mrr_at_3
value: 62.68292682926826
- type: mrr_at_5
value: 64.1585365853658
- type: nauc_map_at_1000_diff1
value: 45.82740870907091
- type: nauc_map_at_1000_max
value: 21.9696540066807
- type: nauc_map_at_1000_std
value: -32.028262356639495
- type: nauc_map_at_100_diff1
value: 45.802053117616396
- type: nauc_map_at_100_max
value: 21.946002070290966
- type: nauc_map_at_100_std
value: -32.06190418866229
- type: nauc_map_at_10_diff1
value: 46.017774155748945
- type: nauc_map_at_10_max
value: 21.876909086095544
- type: nauc_map_at_10_std
value: -32.13913568843985
- type: nauc_map_at_1_diff1
value: 56.34671160956164
- type: nauc_map_at_1_max
value: 17.6796949796236
- type: nauc_map_at_1_std
value: -13.741140688066045
- type: nauc_map_at_20_diff1
value: 46.027469176858716
- type: nauc_map_at_20_max
value: 21.80738432042703
- type: nauc_map_at_20_std
value: -32.430379634015395
- type: nauc_map_at_3_diff1
value: 48.40096725254027
- type: nauc_map_at_3_max
value: 21.15442803574233
- type: nauc_map_at_3_std
value: -26.205850292181417
- type: nauc_map_at_5_diff1
value: 45.77800041356389
- type: nauc_map_at_5_max
value: 22.11718771798752
- type: nauc_map_at_5_std
value: -30.32876338031471
- type: nauc_mrr_at_1000_diff1
value: 49.748274798877944
- type: nauc_mrr_at_1000_max
value: 24.547774167219906
- type: nauc_mrr_at_1000_std
value: -32.728447209433504
- type: nauc_mrr_at_100_diff1
value: 49.734549290377856
- type: nauc_mrr_at_100_max
value: 24.536933315055222
- type: nauc_mrr_at_100_std
value: -32.74076335880697
- type: nauc_mrr_at_10_diff1
value: 49.82827711456392
- type: nauc_mrr_at_10_max
value: 24.536773657485075
- type: nauc_mrr_at_10_std
value: -33.05707547166962
- type: nauc_mrr_at_1_diff1
value: 51.954289992321044
- type: nauc_mrr_at_1_max
value: 26.336255074856886
- type: nauc_mrr_at_1_std
value: -29.042962019692446
- type: nauc_mrr_at_20_diff1
value: 49.70938465628863
- type: nauc_mrr_at_20_max
value: 24.433219849576947
- type: nauc_mrr_at_20_std
value: -32.94123791846049
- type: nauc_mrr_at_3_diff1
value: 50.289486880347134
- type: nauc_mrr_at_3_max
value: 24.978796972860142
- type: nauc_mrr_at_3_std
value: -32.11305594784892
- type: nauc_mrr_at_5_diff1
value: 49.95013396316144
- type: nauc_mrr_at_5_max
value: 24.514452761198303
- type: nauc_mrr_at_5_std
value: -32.865859962984146
- type: nauc_ndcg_at_1000_diff1
value: 45.73806489233998
- type: nauc_ndcg_at_1000_max
value: 22.404941391043867
- type: nauc_ndcg_at_1000_std
value: -33.063445720849685
- type: nauc_ndcg_at_100_diff1
value: 45.1046206923062
- type: nauc_ndcg_at_100_max
value: 22.081133719684658
- type: nauc_ndcg_at_100_std
value: -33.299291459450146
- type: nauc_ndcg_at_10_diff1
value: 46.140608688357496
- type: nauc_ndcg_at_10_max
value: 21.442489279388916
- type: nauc_ndcg_at_10_std
value: -35.115870342856006
- type: nauc_ndcg_at_1_diff1
value: 51.954289992321044
- type: nauc_ndcg_at_1_max
value: 26.336255074856886
- type: nauc_ndcg_at_1_std
value: -29.042962019692446
- type: nauc_ndcg_at_20_diff1
value: 45.966784725457046
- type: nauc_ndcg_at_20_max
value: 21.166632858613145
- type: nauc_ndcg_at_20_std
value: -35.65112890375392
- type: nauc_ndcg_at_3_diff1
value: 46.7404863978999
- type: nauc_ndcg_at_3_max
value: 22.701743709129456
- type: nauc_ndcg_at_3_std
value: -30.907633466983192
- type: nauc_ndcg_at_5_diff1
value: 45.86487199083486
- type: nauc_ndcg_at_5_max
value: 22.088804840002513
- type: nauc_ndcg_at_5_std
value: -32.3853481632832
- type: nauc_precision_at_1000_diff1
value: -25.69710612774455
- type: nauc_precision_at_1000_max
value: 1.3964400247388091
- type: nauc_precision_at_1000_std
value: -8.873947511634814
- type: nauc_precision_at_100_diff1
value: -24.013497191077978
- type: nauc_precision_at_100_max
value: 2.0197725715909343
- type: nauc_precision_at_100_std
value: -11.387423148770633
- type: nauc_precision_at_10_diff1
value: -6.47728645242781
- type: nauc_precision_at_10_max
value: 6.815261443768304
- type: nauc_precision_at_10_std
value: -26.825062292855943
- type: nauc_precision_at_1_diff1
value: 51.954289992321044
- type: nauc_precision_at_1_max
value: 26.336255074856886
- type: nauc_precision_at_1_std
value: -29.042962019692446
- type: nauc_precision_at_20_diff1
value: -12.355232044747511
- type: nauc_precision_at_20_max
value: 4.022126850949725
- type: nauc_precision_at_20_std
value: -23.688935769326772
- type: nauc_precision_at_3_diff1
value: 7.662671665835864
- type: nauc_precision_at_3_max
value: 14.372394760986248
- type: nauc_precision_at_3_std
value: -28.635125665532453
- type: nauc_precision_at_5_diff1
value: -1.4592476425511611
- type: nauc_precision_at_5_max
value: 11.124310161474174
- type: nauc_precision_at_5_std
value: -27.89526669318053
- type: nauc_recall_at_1000_diff1
value: -19.58450046684932
- type: nauc_recall_at_1000_max
value: 70.71661998133165
- type: nauc_recall_at_1000_std
value: 93.05555555556315
- type: nauc_recall_at_100_diff1
value: 15.06356457571853
- type: nauc_recall_at_100_max
value: 14.051414749344806
- type: nauc_recall_at_100_std
value: -29.461874235153008
- type: nauc_recall_at_10_diff1
value: 41.29842726117901
- type: nauc_recall_at_10_max
value: 15.768699673830898
- type: nauc_recall_at_10_std
value: -42.11585661287712
- type: nauc_recall_at_1_diff1
value: 56.34671160956164
- type: nauc_recall_at_1_max
value: 17.6796949796236
- type: nauc_recall_at_1_std
value: -13.741140688066045
- type: nauc_recall_at_20_diff1
value: 38.8078283585263
- type: nauc_recall_at_20_max
value: 12.06816084005326
- type: nauc_recall_at_20_std
value: -48.20956170056591
- type: nauc_recall_at_3_diff1
value: 44.71028758038993
- type: nauc_recall_at_3_max
value: 19.1059093689162
- type: nauc_recall_at_3_std
value: -26.795164453784253
- type: nauc_recall_at_5_diff1
value: 41.06320797773054
- type: nauc_recall_at_5_max
value: 19.117028272530998
- type: nauc_recall_at_5_std
value: -33.985747504612156
- type: ndcg_at_1
value: 56.95099999999999
- type: ndcg_at_10
value: 64.64
- type: ndcg_at_100
value: 70.017
- type: ndcg_at_1000
value: 70.662
- type: ndcg_at_20
value: 67.256
- type: ndcg_at_3
value: 58.269000000000005
- type: ndcg_at_5
value: 60.94199999999999
- type: precision_at_1
value: 56.95099999999999
- type: precision_at_10
value: 15.671
- type: precision_at_100
value: 2.002
- type: precision_at_1000
value: 0.208
- type: precision_at_20
value: 8.689
- type: precision_at_3
value: 36.341
- type: precision_at_5
value: 26.854
- type: recall_at_1
value: 35.858000000000004
- type: recall_at_10
value: 75.02
- type: recall_at_100
value: 95.76
- type: recall_at_1000
value: 99.837
- type: recall_at_20
value: 83.732
- type: recall_at_3
value: 57.093
- type: recall_at_5
value: 66.193
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (cmn-cmn)
type: jinaai/xpqa
config: cmn-cmn
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 69.446
- type: map_at_1
value: 39.995999999999995
- type: map_at_10
value: 64.033
- type: map_at_100
value: 65.51599999999999
- type: map_at_1000
value: 65.545
- type: map_at_20
value: 64.958
- type: map_at_3
value: 57.767
- type: map_at_5
value: 61.998
- type: mrr_at_1
value: 63.3495145631068
- type: mrr_at_10
value: 70.21146363075978
- type: mrr_at_100
value: 70.82810974202124
- type: mrr_at_1000
value: 70.83816803303915
- type: mrr_at_20
value: 70.60140248428802
- type: mrr_at_3
value: 68.66909385113267
- type: mrr_at_5
value: 69.56108414239482
- type: nauc_map_at_1000_diff1
value: 51.649897072831465
- type: nauc_map_at_1000_max
value: 38.25222728655331
- type: nauc_map_at_1000_std
value: -39.10327919949334
- type: nauc_map_at_100_diff1
value: 51.644205886401465
- type: nauc_map_at_100_max
value: 38.23611154355255
- type: nauc_map_at_100_std
value: -39.1677073977285
- type: nauc_map_at_10_diff1
value: 51.81444145636039
- type: nauc_map_at_10_max
value: 38.03382104326485
- type: nauc_map_at_10_std
value: -38.999395639812015
- type: nauc_map_at_1_diff1
value: 59.785298201044704
- type: nauc_map_at_1_max
value: 23.273537759937785
- type: nauc_map_at_1_std
value: -17.838712689290194
- type: nauc_map_at_20_diff1
value: 51.680208795601004
- type: nauc_map_at_20_max
value: 38.23334583518634
- type: nauc_map_at_20_std
value: -39.24344495939061
- type: nauc_map_at_3_diff1
value: 52.180913298194056
- type: nauc_map_at_3_max
value: 33.45482478000481
- type: nauc_map_at_3_std
value: -31.682911030586297
- type: nauc_map_at_5_diff1
value: 50.804900676175436
- type: nauc_map_at_5_max
value: 37.68924816012326
- type: nauc_map_at_5_std
value: -36.85016896616712
- type: nauc_mrr_at_1000_diff1
value: 56.371477471577535
- type: nauc_mrr_at_1000_max
value: 42.773877962050086
- type: nauc_mrr_at_1000_std
value: -40.41765081873682
- type: nauc_mrr_at_100_diff1
value: 56.3619751528192
- type: nauc_mrr_at_100_max
value: 42.76298794859916
- type: nauc_mrr_at_100_std
value: -40.44070582448831
- type: nauc_mrr_at_10_diff1
value: 56.33810523477712
- type: nauc_mrr_at_10_max
value: 42.76591937795783
- type: nauc_mrr_at_10_std
value: -40.69339583030244
- type: nauc_mrr_at_1_diff1
value: 58.90399906884378
- type: nauc_mrr_at_1_max
value: 43.38806571165292
- type: nauc_mrr_at_1_std
value: -38.224015285584
- type: nauc_mrr_at_20_diff1
value: 56.32629070537032
- type: nauc_mrr_at_20_max
value: 42.79615263472604
- type: nauc_mrr_at_20_std
value: -40.496777397603076
- type: nauc_mrr_at_3_diff1
value: 55.96989454480743
- type: nauc_mrr_at_3_max
value: 42.49832220744744
- type: nauc_mrr_at_3_std
value: -39.883799467132384
- type: nauc_mrr_at_5_diff1
value: 56.003080766475755
- type: nauc_mrr_at_5_max
value: 42.73308051011805
- type: nauc_mrr_at_5_std
value: -39.87179511166683
- type: nauc_ndcg_at_1000_diff1
value: 52.49054229225255
- type: nauc_ndcg_at_1000_max
value: 39.61644750719859
- type: nauc_ndcg_at_1000_std
value: -40.89845763194674
- type: nauc_ndcg_at_100_diff1
value: 52.33511250864434
- type: nauc_ndcg_at_100_max
value: 39.25530146124452
- type: nauc_ndcg_at_100_std
value: -41.92444498004374
- type: nauc_ndcg_at_10_diff1
value: 52.62031505931842
- type: nauc_ndcg_at_10_max
value: 38.667195545396766
- type: nauc_ndcg_at_10_std
value: -42.59503924641507
- type: nauc_ndcg_at_1_diff1
value: 58.90399906884378
- type: nauc_ndcg_at_1_max
value: 43.38806571165292
- type: nauc_ndcg_at_1_std
value: -38.224015285584
- type: nauc_ndcg_at_20_diff1
value: 52.15061629809436
- type: nauc_ndcg_at_20_max
value: 39.09332400054708
- type: nauc_ndcg_at_20_std
value: -42.80018671618001
- type: nauc_ndcg_at_3_diff1
value: 51.04210728138207
- type: nauc_ndcg_at_3_max
value: 38.19034802567046
- type: nauc_ndcg_at_3_std
value: -38.179821090765216
- type: nauc_ndcg_at_5_diff1
value: 51.04399574045204
- type: nauc_ndcg_at_5_max
value: 38.42492210204548
- type: nauc_ndcg_at_5_std
value: -38.868073241617715
- type: nauc_precision_at_1000_diff1
value: -25.151369907213734
- type: nauc_precision_at_1000_max
value: 9.012549147054989
- type: nauc_precision_at_1000_std
value: -9.319786589947698
- type: nauc_precision_at_100_diff1
value: -23.20945211843088
- type: nauc_precision_at_100_max
value: 9.860701593969862
- type: nauc_precision_at_100_std
value: -13.073877818347231
- type: nauc_precision_at_10_diff1
value: -6.970781124246847
- type: nauc_precision_at_10_max
value: 19.392675322254487
- type: nauc_precision_at_10_std
value: -26.74943490717657
- type: nauc_precision_at_1_diff1
value: 58.90399906884378
- type: nauc_precision_at_1_max
value: 43.38806571165292
- type: nauc_precision_at_1_std
value: -38.224015285584
- type: nauc_precision_at_20_diff1
value: -13.046456108081102
- type: nauc_precision_at_20_max
value: 15.69439950383875
- type: nauc_precision_at_20_std
value: -23.836004512018093
- type: nauc_precision_at_3_diff1
value: 3.5444232965528846
- type: nauc_precision_at_3_max
value: 27.08858445453865
- type: nauc_precision_at_3_std
value: -29.12757283665593
- type: nauc_precision_at_5_diff1
value: -3.6853986353320267
- type: nauc_precision_at_5_max
value: 24.32059689571271
- type: nauc_precision_at_5_std
value: -27.46188072134163
- type: nauc_recall_at_1000_diff1
value: 86.93515141907919
- type: nauc_recall_at_1000_max
value: 100.0
- type: nauc_recall_at_1000_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 39.7052887613879
- type: nauc_recall_at_100_max
value: 18.40943977796887
- type: nauc_recall_at_100_std
value: -88.74014854144974
- type: nauc_recall_at_10_diff1
value: 48.85342500870892
- type: nauc_recall_at_10_max
value: 32.69617204234419
- type: nauc_recall_at_10_std
value: -51.9937231860804
- type: nauc_recall_at_1_diff1
value: 59.785298201044704
- type: nauc_recall_at_1_max
value: 23.273537759937785
- type: nauc_recall_at_1_std
value: -17.838712689290194
- type: nauc_recall_at_20_diff1
value: 45.40839773314378
- type: nauc_recall_at_20_max
value: 33.02458321493215
- type: nauc_recall_at_20_std
value: -55.97800739448166
- type: nauc_recall_at_3_diff1
value: 47.05565693416531
- type: nauc_recall_at_3_max
value: 28.743850400344297
- type: nauc_recall_at_3_std
value: -32.436470486397475
- type: nauc_recall_at_5_diff1
value: 45.30223758669577
- type: nauc_recall_at_5_max
value: 33.6567274747059
- type: nauc_recall_at_5_std
value: -39.946712017948514
- type: ndcg_at_1
value: 63.349999999999994
- type: ndcg_at_10
value: 69.446
- type: ndcg_at_100
value: 74.439
- type: ndcg_at_1000
value: 74.834
- type: ndcg_at_20
value: 71.763
- type: ndcg_at_3
value: 64.752
- type: ndcg_at_5
value: 66.316
- type: precision_at_1
value: 63.349999999999994
- type: precision_at_10
value: 16.286
- type: precision_at_100
value: 2.024
- type: precision_at_1000
value: 0.207
- type: precision_at_20
value: 8.908000000000001
- type: precision_at_3
value: 40.655
- type: precision_at_5
value: 28.859
- type: recall_at_1
value: 39.995999999999995
- type: recall_at_10
value: 78.107
- type: recall_at_100
value: 97.538
- type: recall_at_1000
value: 99.96000000000001
- type: recall_at_20
value: 85.72
- type: recall_at_3
value: 63.291
- type: recall_at_5
value: 70.625
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (spa-eng)
type: jinaai/xpqa
config: spa-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 68.258
- type: map_at_1
value: 33.06
- type: map_at_10
value: 61.590999999999994
- type: map_at_100
value: 63.341
- type: map_at_1000
value: 63.385999999999996
- type: map_at_20
value: 62.77700000000001
- type: map_at_3
value: 52.547999999999995
- type: map_at_5
value: 58.824
- type: mrr_at_1
value: 63.80832282471627
- type: mrr_at_10
value: 70.76848015372607
- type: mrr_at_100
value: 71.33996704518061
- type: mrr_at_1000
value: 71.35368444388072
- type: mrr_at_20
value: 71.18191741103522
- type: mrr_at_3
value: 68.83144178226142
- type: mrr_at_5
value: 69.88440521227405
- type: nauc_map_at_1000_diff1
value: 41.59255746310511
- type: nauc_map_at_1000_max
value: 42.064075373358065
- type: nauc_map_at_1000_std
value: -25.130730194381723
- type: nauc_map_at_100_diff1
value: 41.56447648820406
- type: nauc_map_at_100_max
value: 42.06711634651607
- type: nauc_map_at_100_std
value: -25.14871585556968
- type: nauc_map_at_10_diff1
value: 41.28968387107058
- type: nauc_map_at_10_max
value: 41.511538272139774
- type: nauc_map_at_10_std
value: -25.99906440164276
- type: nauc_map_at_1_diff1
value: 51.09859596320021
- type: nauc_map_at_1_max
value: 12.406789321338222
- type: nauc_map_at_1_std
value: -18.227486548655076
- type: nauc_map_at_20_diff1
value: 41.39469672947315
- type: nauc_map_at_20_max
value: 41.98309315808902
- type: nauc_map_at_20_std
value: -25.44704720985219
- type: nauc_map_at_3_diff1
value: 43.16164995512842
- type: nauc_map_at_3_max
value: 30.935400935562818
- type: nauc_map_at_3_std
value: -23.53095555148866
- type: nauc_map_at_5_diff1
value: 41.23474352142375
- type: nauc_map_at_5_max
value: 39.03088859147947
- type: nauc_map_at_5_std
value: -26.046526443708366
- type: nauc_mrr_at_1000_diff1
value: 51.79649678213789
- type: nauc_mrr_at_1000_max
value: 50.50340748045259
- type: nauc_mrr_at_1000_std
value: -24.777183703493407
- type: nauc_mrr_at_100_diff1
value: 51.78609028166551
- type: nauc_mrr_at_100_max
value: 50.51732896833555
- type: nauc_mrr_at_100_std
value: -24.760054686874717
- type: nauc_mrr_at_10_diff1
value: 51.705268395036995
- type: nauc_mrr_at_10_max
value: 50.35818415293149
- type: nauc_mrr_at_10_std
value: -25.170367120250404
- type: nauc_mrr_at_1_diff1
value: 53.91475115581825
- type: nauc_mrr_at_1_max
value: 49.122529616282016
- type: nauc_mrr_at_1_std
value: -22.377647552937155
- type: nauc_mrr_at_20_diff1
value: 51.778984221197774
- type: nauc_mrr_at_20_max
value: 50.5070957827813
- type: nauc_mrr_at_20_std
value: -24.908935023607285
- type: nauc_mrr_at_3_diff1
value: 51.82683773090423
- type: nauc_mrr_at_3_max
value: 50.77993196421369
- type: nauc_mrr_at_3_std
value: -24.3925832021831
- type: nauc_mrr_at_5_diff1
value: 51.722232683543034
- type: nauc_mrr_at_5_max
value: 50.334865493961864
- type: nauc_mrr_at_5_std
value: -25.513593495703297
- type: nauc_ndcg_at_1000_diff1
value: 44.21851582991263
- type: nauc_ndcg_at_1000_max
value: 45.73539068637836
- type: nauc_ndcg_at_1000_std
value: -24.716522467580397
- type: nauc_ndcg_at_100_diff1
value: 43.8002401615357
- type: nauc_ndcg_at_100_max
value: 45.801409410061915
- type: nauc_ndcg_at_100_std
value: -24.73171742499903
- type: nauc_ndcg_at_10_diff1
value: 42.540922778755885
- type: nauc_ndcg_at_10_max
value: 44.348836943874595
- type: nauc_ndcg_at_10_std
value: -28.05403666494785
- type: nauc_ndcg_at_1_diff1
value: 53.91475115581825
- type: nauc_ndcg_at_1_max
value: 49.122529616282016
- type: nauc_ndcg_at_1_std
value: -22.377647552937155
- type: nauc_ndcg_at_20_diff1
value: 43.10347921163421
- type: nauc_ndcg_at_20_max
value: 45.53253270265022
- type: nauc_ndcg_at_20_std
value: -26.63902791862846
- type: nauc_ndcg_at_3_diff1
value: 42.41720274782384
- type: nauc_ndcg_at_3_max
value: 42.91778219334943
- type: nauc_ndcg_at_3_std
value: -24.793252033594076
- type: nauc_ndcg_at_5_diff1
value: 42.51515034945093
- type: nauc_ndcg_at_5_max
value: 41.62080576508792
- type: nauc_ndcg_at_5_std
value: -28.209669314955065
- type: nauc_precision_at_1000_diff1
value: -14.89794075433148
- type: nauc_precision_at_1000_max
value: 27.85387929356412
- type: nauc_precision_at_1000_std
value: 10.728618597190849
- type: nauc_precision_at_100_diff1
value: -13.075270046295856
- type: nauc_precision_at_100_max
value: 29.77208946756632
- type: nauc_precision_at_100_std
value: 8.491662697326039
- type: nauc_precision_at_10_diff1
value: -4.0826025188781205
- type: nauc_precision_at_10_max
value: 39.04278085180075
- type: nauc_precision_at_10_std
value: -5.925408651372333
- type: nauc_precision_at_1_diff1
value: 53.91475115581825
- type: nauc_precision_at_1_max
value: 49.122529616282016
- type: nauc_precision_at_1_std
value: -22.377647552937155
- type: nauc_precision_at_20_diff1
value: -7.93186440645135
- type: nauc_precision_at_20_max
value: 35.81281308891365
- type: nauc_precision_at_20_std
value: 0.1241277857515697
- type: nauc_precision_at_3_diff1
value: 7.563562511484409
- type: nauc_precision_at_3_max
value: 43.43738862378524
- type: nauc_precision_at_3_std
value: -11.958059731912615
- type: nauc_precision_at_5_diff1
value: -0.1801152449011624
- type: nauc_precision_at_5_max
value: 41.32486715619513
- type: nauc_precision_at_5_std
value: -10.088699021919552
- type: nauc_recall_at_1000_diff1
value: 86.93359696819986
- type: nauc_recall_at_1000_max
value: 100.0
- type: nauc_recall_at_1000_std
value: 72.21843645604022
- type: nauc_recall_at_100_diff1
value: 29.86050842714198
- type: nauc_recall_at_100_max
value: 48.106658251136245
- type: nauc_recall_at_100_std
value: -14.981886214880035
- type: nauc_recall_at_10_diff1
value: 33.67119240737528
- type: nauc_recall_at_10_max
value: 39.271984859561414
- type: nauc_recall_at_10_std
value: -35.6434883839217
- type: nauc_recall_at_1_diff1
value: 51.09859596320021
- type: nauc_recall_at_1_max
value: 12.406789321338222
- type: nauc_recall_at_1_std
value: -18.227486548655076
- type: nauc_recall_at_20_diff1
value: 33.211979983240724
- type: nauc_recall_at_20_max
value: 43.47676074743184
- type: nauc_recall_at_20_std
value: -33.88107138395349
- type: nauc_recall_at_3_diff1
value: 39.22513750146998
- type: nauc_recall_at_3_max
value: 27.066674083840166
- type: nauc_recall_at_3_std
value: -26.963282529629893
- type: nauc_recall_at_5_diff1
value: 36.53718917129459
- type: nauc_recall_at_5_max
value: 35.40550013169686
- type: nauc_recall_at_5_std
value: -34.209159379410806
- type: ndcg_at_1
value: 63.808
- type: ndcg_at_10
value: 68.258
- type: ndcg_at_100
value: 73.38799999999999
- type: ndcg_at_1000
value: 74.03
- type: ndcg_at_20
value: 70.968
- type: ndcg_at_3
value: 62.33
- type: ndcg_at_5
value: 64.096
- type: precision_at_1
value: 63.808
- type: precision_at_10
value: 19.243
- type: precision_at_100
value: 2.367
- type: precision_at_1000
value: 0.245
- type: precision_at_20
value: 10.599
- type: precision_at_3
value: 44.515
- type: precision_at_5
value: 33.467999999999996
- type: recall_at_1
value: 33.06
- type: recall_at_10
value: 77.423
- type: recall_at_100
value: 95.923
- type: recall_at_1000
value: 99.874
- type: recall_at_20
value: 85.782
- type: recall_at_3
value: 57.098000000000006
- type: recall_at_5
value: 67.472
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (spa-spa)
type: jinaai/xpqa
config: spa-spa
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 72.004
- type: map_at_1
value: 36.248000000000005
- type: map_at_10
value: 65.679
- type: map_at_100
value: 67.22399999999999
- type: map_at_1000
value: 67.264
- type: map_at_20
value: 66.705
- type: map_at_3
value: 56.455
- type: map_at_5
value: 62.997
- type: mrr_at_1
value: 67.71752837326608
- type: mrr_at_10
value: 74.59782021257429
- type: mrr_at_100
value: 75.0640960767943
- type: mrr_at_1000
value: 75.07324799466076
- type: mrr_at_20
value: 74.9323963386884
- type: mrr_at_3
value: 72.95081967213115
- type: mrr_at_5
value: 73.82723833543506
- type: nauc_map_at_1000_diff1
value: 43.111810717567714
- type: nauc_map_at_1000_max
value: 44.835247208972476
- type: nauc_map_at_1000_std
value: -32.798405973931985
- type: nauc_map_at_100_diff1
value: 43.090223482932764
- type: nauc_map_at_100_max
value: 44.83392441557943
- type: nauc_map_at_100_std
value: -32.81149166676563
- type: nauc_map_at_10_diff1
value: 42.87841934951979
- type: nauc_map_at_10_max
value: 43.9838653389494
- type: nauc_map_at_10_std
value: -33.588084643627084
- type: nauc_map_at_1_diff1
value: 54.509245848379095
- type: nauc_map_at_1_max
value: 10.05921648322742
- type: nauc_map_at_1_std
value: -24.652326014826762
- type: nauc_map_at_20_diff1
value: 43.07468612984794
- type: nauc_map_at_20_max
value: 44.75663122615032
- type: nauc_map_at_20_std
value: -33.11788887878321
- type: nauc_map_at_3_diff1
value: 44.63272828938906
- type: nauc_map_at_3_max
value: 32.1584369869227
- type: nauc_map_at_3_std
value: -30.761662210142944
- type: nauc_map_at_5_diff1
value: 42.77296997803048
- type: nauc_map_at_5_max
value: 41.78894616737652
- type: nauc_map_at_5_std
value: -33.56459774477362
- type: nauc_mrr_at_1000_diff1
value: 53.097544131833494
- type: nauc_mrr_at_1000_max
value: 50.61134979184588
- type: nauc_mrr_at_1000_std
value: -35.6221191487669
- type: nauc_mrr_at_100_diff1
value: 53.096609856182106
- type: nauc_mrr_at_100_max
value: 50.61951585642645
- type: nauc_mrr_at_100_std
value: -35.62396157508327
- type: nauc_mrr_at_10_diff1
value: 52.771534471912304
- type: nauc_mrr_at_10_max
value: 50.430863224435726
- type: nauc_mrr_at_10_std
value: -36.027992076620365
- type: nauc_mrr_at_1_diff1
value: 55.05316238884337
- type: nauc_mrr_at_1_max
value: 49.461858515275196
- type: nauc_mrr_at_1_std
value: -31.87492636319712
- type: nauc_mrr_at_20_diff1
value: 53.083253469629746
- type: nauc_mrr_at_20_max
value: 50.62156424256193
- type: nauc_mrr_at_20_std
value: -35.879153692447154
- type: nauc_mrr_at_3_diff1
value: 52.98283109188415
- type: nauc_mrr_at_3_max
value: 50.83561260429378
- type: nauc_mrr_at_3_std
value: -35.30839538038797
- type: nauc_mrr_at_5_diff1
value: 52.93270510879709
- type: nauc_mrr_at_5_max
value: 50.54595596761199
- type: nauc_mrr_at_5_std
value: -35.84059376434395
- type: nauc_ndcg_at_1000_diff1
value: 45.343685089209416
- type: nauc_ndcg_at_1000_max
value: 47.801141576669465
- type: nauc_ndcg_at_1000_std
value: -33.512958862879195
- type: nauc_ndcg_at_100_diff1
value: 45.255590461515894
- type: nauc_ndcg_at_100_max
value: 47.99240031881967
- type: nauc_ndcg_at_100_std
value: -33.614465006695205
- type: nauc_ndcg_at_10_diff1
value: 43.93472511731019
- type: nauc_ndcg_at_10_max
value: 45.92599752897053
- type: nauc_ndcg_at_10_std
value: -36.43629114491574
- type: nauc_ndcg_at_1_diff1
value: 55.05316238884337
- type: nauc_ndcg_at_1_max
value: 49.461858515275196
- type: nauc_ndcg_at_1_std
value: -31.87492636319712
- type: nauc_ndcg_at_20_diff1
value: 44.93534591273201
- type: nauc_ndcg_at_20_max
value: 47.55153940713458
- type: nauc_ndcg_at_20_std
value: -35.56392448745206
- type: nauc_ndcg_at_3_diff1
value: 43.17916122133396
- type: nauc_ndcg_at_3_max
value: 45.603634205103276
- type: nauc_ndcg_at_3_std
value: -32.473227507181214
- type: nauc_ndcg_at_5_diff1
value: 44.10242961669216
- type: nauc_ndcg_at_5_max
value: 43.61666669031808
- type: nauc_ndcg_at_5_std
value: -35.98808321497782
- type: nauc_precision_at_1000_diff1
value: -23.264714449991146
- type: nauc_precision_at_1000_max
value: 28.505729576735465
- type: nauc_precision_at_1000_std
value: 11.987379232920926
- type: nauc_precision_at_100_diff1
value: -21.156119174614627
- type: nauc_precision_at_100_max
value: 30.711646221646255
- type: nauc_precision_at_100_std
value: 9.650486536340322
- type: nauc_precision_at_10_diff1
value: -10.98001328477502
- type: nauc_precision_at_10_max
value: 39.25638073760597
- type: nauc_precision_at_10_std
value: -4.3456859257488
- type: nauc_precision_at_1_diff1
value: 55.05316238884337
- type: nauc_precision_at_1_max
value: 49.461858515275196
- type: nauc_precision_at_1_std
value: -31.87492636319712
- type: nauc_precision_at_20_diff1
value: -14.97565390664424
- type: nauc_precision_at_20_max
value: 36.383835295942355
- type: nauc_precision_at_20_std
value: 1.525158880381114
- type: nauc_precision_at_3_diff1
value: 1.0448345623903483
- type: nauc_precision_at_3_max
value: 45.69772060667404
- type: nauc_precision_at_3_std
value: -13.002685018948293
- type: nauc_precision_at_5_diff1
value: -5.434185597628904
- type: nauc_precision_at_5_max
value: 42.99162431099203
- type: nauc_precision_at_5_std
value: -9.789308817624534
- type: nauc_recall_at_1000_diff1
value: 12.309303236094845
- type: nauc_recall_at_1000_max
value: 100.0
- type: nauc_recall_at_1000_std
value: 86.93359696819986
- type: nauc_recall_at_100_diff1
value: 39.093544920901415
- type: nauc_recall_at_100_max
value: 55.62814395062938
- type: nauc_recall_at_100_std
value: -22.6919033301514
- type: nauc_recall_at_10_diff1
value: 35.50100141633622
- type: nauc_recall_at_10_max
value: 39.25750019586647
- type: nauc_recall_at_10_std
value: -43.01273078031791
- type: nauc_recall_at_1_diff1
value: 54.509245848379095
- type: nauc_recall_at_1_max
value: 10.05921648322742
- type: nauc_recall_at_1_std
value: -24.652326014826762
- type: nauc_recall_at_20_diff1
value: 38.1281707132327
- type: nauc_recall_at_20_max
value: 43.97950642900301
- type: nauc_recall_at_20_std
value: -44.049952771307574
- type: nauc_recall_at_3_diff1
value: 40.01986938242728
- type: nauc_recall_at_3_max
value: 27.517114421061173
- type: nauc_recall_at_3_std
value: -32.99056780232045
- type: nauc_recall_at_5_diff1
value: 38.52035606499483
- type: nauc_recall_at_5_max
value: 37.05834604678859
- type: nauc_recall_at_5_std
value: -39.86196378897912
- type: ndcg_at_1
value: 67.718
- type: ndcg_at_10
value: 72.004
- type: ndcg_at_100
value: 76.554
- type: ndcg_at_1000
value: 77.07300000000001
- type: ndcg_at_20
value: 74.37899999999999
- type: ndcg_at_3
value: 66.379
- type: ndcg_at_5
value: 68.082
- type: precision_at_1
value: 67.718
- type: precision_at_10
value: 19.849
- type: precision_at_100
value: 2.3800000000000003
- type: precision_at_1000
value: 0.245
- type: precision_at_20
value: 10.813
- type: precision_at_3
value: 46.574
- type: precision_at_5
value: 34.83
- type: recall_at_1
value: 36.248000000000005
- type: recall_at_10
value: 80.252
- type: recall_at_100
value: 96.73
- type: recall_at_1000
value: 99.874
- type: recall_at_20
value: 87.703
- type: recall_at_3
value: 60.815
- type: recall_at_5
value: 71.16
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fra-eng)
type: jinaai/xpqa
config: fra-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 73.729
- type: map_at_1
value: 43.964999999999996
- type: map_at_10
value: 67.803
- type: map_at_100
value: 69.188
- type: map_at_1000
value: 69.21000000000001
- type: map_at_20
value: 68.747
- type: map_at_3
value: 60.972
- type: map_at_5
value: 65.39399999999999
- type: mrr_at_1
value: 68.4913217623498
- type: mrr_at_10
value: 75.2600822260368
- type: mrr_at_100
value: 75.6599169808848
- type: mrr_at_1000
value: 75.66720883727534
- type: mrr_at_20
value: 75.52375865860405
- type: mrr_at_3
value: 73.54250111259452
- type: mrr_at_5
value: 74.51713395638626
- type: nauc_map_at_1000_diff1
value: 46.81533703002097
- type: nauc_map_at_1000_max
value: 46.30794757084772
- type: nauc_map_at_1000_std
value: -14.953470500312335
- type: nauc_map_at_100_diff1
value: 46.82464740277745
- type: nauc_map_at_100_max
value: 46.32852879948254
- type: nauc_map_at_100_std
value: -14.950035098066172
- type: nauc_map_at_10_diff1
value: 46.31406143369831
- type: nauc_map_at_10_max
value: 45.337593270786634
- type: nauc_map_at_10_std
value: -16.011789445907876
- type: nauc_map_at_1_diff1
value: 57.097134715065835
- type: nauc_map_at_1_max
value: 21.93931500350721
- type: nauc_map_at_1_std
value: -15.134457251301637
- type: nauc_map_at_20_diff1
value: 46.47030891134173
- type: nauc_map_at_20_max
value: 46.29169960276292
- type: nauc_map_at_20_std
value: -15.14241106541829
- type: nauc_map_at_3_diff1
value: 50.27064228648596
- type: nauc_map_at_3_max
value: 39.43058773971639
- type: nauc_map_at_3_std
value: -16.16545993089126
- type: nauc_map_at_5_diff1
value: 46.974867679747426
- type: nauc_map_at_5_max
value: 44.31091104855002
- type: nauc_map_at_5_std
value: -16.50175337658926
- type: nauc_mrr_at_1000_diff1
value: 55.20294005110399
- type: nauc_mrr_at_1000_max
value: 51.947725719119966
- type: nauc_mrr_at_1000_std
value: -14.586112939597232
- type: nauc_mrr_at_100_diff1
value: 55.20426251109304
- type: nauc_mrr_at_100_max
value: 51.95648725402534
- type: nauc_mrr_at_100_std
value: -14.579769236539143
- type: nauc_mrr_at_10_diff1
value: 54.93870506205835
- type: nauc_mrr_at_10_max
value: 51.89312772900638
- type: nauc_mrr_at_10_std
value: -14.692635010092939
- type: nauc_mrr_at_1_diff1
value: 56.54945935175171
- type: nauc_mrr_at_1_max
value: 51.28134504197991
- type: nauc_mrr_at_1_std
value: -12.909042186563061
- type: nauc_mrr_at_20_diff1
value: 55.10667018041461
- type: nauc_mrr_at_20_max
value: 51.98236870783707
- type: nauc_mrr_at_20_std
value: -14.599377575198025
- type: nauc_mrr_at_3_diff1
value: 55.67124311746892
- type: nauc_mrr_at_3_max
value: 51.77903236246767
- type: nauc_mrr_at_3_std
value: -14.94452633860763
- type: nauc_mrr_at_5_diff1
value: 55.42849172366371
- type: nauc_mrr_at_5_max
value: 51.76902965753959
- type: nauc_mrr_at_5_std
value: -15.357993534727072
- type: nauc_ndcg_at_1000_diff1
value: 48.736844959280326
- type: nauc_ndcg_at_1000_max
value: 48.92891159935398
- type: nauc_ndcg_at_1000_std
value: -13.983968675611056
- type: nauc_ndcg_at_100_diff1
value: 48.73859328503975
- type: nauc_ndcg_at_100_max
value: 49.31867149556439
- type: nauc_ndcg_at_100_std
value: -13.72387564912742
- type: nauc_ndcg_at_10_diff1
value: 46.50313862975287
- type: nauc_ndcg_at_10_max
value: 47.13599793554596
- type: nauc_ndcg_at_10_std
value: -16.317919977400113
- type: nauc_ndcg_at_1_diff1
value: 56.54945935175171
- type: nauc_ndcg_at_1_max
value: 51.28134504197991
- type: nauc_ndcg_at_1_std
value: -12.909042186563061
- type: nauc_ndcg_at_20_diff1
value: 47.01727117133912
- type: nauc_ndcg_at_20_max
value: 49.121366036709105
- type: nauc_ndcg_at_20_std
value: -14.411078677638775
- type: nauc_ndcg_at_3_diff1
value: 49.229581145458276
- type: nauc_ndcg_at_3_max
value: 47.427609717032
- type: nauc_ndcg_at_3_std
value: -16.52066627289908
- type: nauc_ndcg_at_5_diff1
value: 48.0152514127505
- type: nauc_ndcg_at_5_max
value: 46.12152407850816
- type: nauc_ndcg_at_5_std
value: -17.613295491954656
- type: nauc_precision_at_1000_diff1
value: -25.959006032642463
- type: nauc_precision_at_1000_max
value: 12.81002362947137
- type: nauc_precision_at_1000_std
value: 12.575312826061513
- type: nauc_precision_at_100_diff1
value: -24.35413527283394
- type: nauc_precision_at_100_max
value: 14.878359236477303
- type: nauc_precision_at_100_std
value: 12.384426050018428
- type: nauc_precision_at_10_diff1
value: -17.93220761770618
- type: nauc_precision_at_10_max
value: 23.523485811847294
- type: nauc_precision_at_10_std
value: 4.424456968716939
- type: nauc_precision_at_1_diff1
value: 56.54945935175171
- type: nauc_precision_at_1_max
value: 51.28134504197991
- type: nauc_precision_at_1_std
value: -12.909042186563061
- type: nauc_precision_at_20_diff1
value: -21.776871398686936
- type: nauc_precision_at_20_max
value: 21.18436338264366
- type: nauc_precision_at_20_std
value: 9.937274986573321
- type: nauc_precision_at_3_diff1
value: -1.2411845580934435
- type: nauc_precision_at_3_max
value: 34.962281941875
- type: nauc_precision_at_3_std
value: -2.447892908501237
- type: nauc_precision_at_5_diff1
value: -11.134164534114085
- type: nauc_precision_at_5_max
value: 30.22079740070525
- type: nauc_precision_at_5_std
value: -0.24232594421765946
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 43.3647412452869
- type: nauc_recall_at_100_max
value: 63.50094950500327
- type: nauc_recall_at_100_std
value: 2.3911909633714044
- type: nauc_recall_at_10_diff1
value: 33.993445071666855
- type: nauc_recall_at_10_max
value: 41.38694129134144
- type: nauc_recall_at_10_std
value: -19.308698266099096
- type: nauc_recall_at_1_diff1
value: 57.097134715065835
- type: nauc_recall_at_1_max
value: 21.93931500350721
- type: nauc_recall_at_1_std
value: -15.134457251301637
- type: nauc_recall_at_20_diff1
value: 32.03888531880772
- type: nauc_recall_at_20_max
value: 49.660787482562085
- type: nauc_recall_at_20_std
value: -12.641456758778382
- type: nauc_recall_at_3_diff1
value: 47.94527082900579
- type: nauc_recall_at_3_max
value: 36.51733131437679
- type: nauc_recall_at_3_std
value: -18.65511713247495
- type: nauc_recall_at_5_diff1
value: 42.04545772092305
- type: nauc_recall_at_5_max
value: 41.21440912972303
- type: nauc_recall_at_5_std
value: -21.47386527081128
- type: ndcg_at_1
value: 68.491
- type: ndcg_at_10
value: 73.729
- type: ndcg_at_100
value: 77.684
- type: ndcg_at_1000
value: 78.084
- type: ndcg_at_20
value: 75.795
- type: ndcg_at_3
value: 68.568
- type: ndcg_at_5
value: 70.128
- type: precision_at_1
value: 68.491
- type: precision_at_10
value: 16.996
- type: precision_at_100
value: 2.023
- type: precision_at_1000
value: 0.207
- type: precision_at_20
value: 9.246
- type: precision_at_3
value: 41.923
- type: precision_at_5
value: 29.826000000000004
- type: recall_at_1
value: 43.964999999999996
- type: recall_at_10
value: 82.777
- type: recall_at_100
value: 97.287
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 89.183
- type: recall_at_3
value: 65.803
- type: recall_at_5
value: 74.119
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fra-fra
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: main_score
value: 77.581
- type: map_at_1
value: 46.444
- type: map_at_10
value: 72.084
- type: map_at_100
value: 73.175
- type: map_at_1000
value: 73.193
- type: map_at_20
value: 72.77799999999999
- type: map_at_3
value: 65.242
- type: map_at_5
value: 69.926
- type: mrr_at_1
value: 71.82910547396529
- type: mrr_at_10
value: 78.66594612923046
- type: mrr_at_100
value: 78.97334934049613
- type: mrr_at_1000
value: 78.97687021803557
- type: mrr_at_20
value: 78.85701141744282
- type: mrr_at_3
value: 76.96929238985311
- type: mrr_at_5
value: 77.99732977303067
- type: nauc_map_at_1000_diff1
value: 49.090956807097804
- type: nauc_map_at_1000_max
value: 52.01095354889508
- type: nauc_map_at_1000_std
value: -12.182870421711026
- type: nauc_map_at_100_diff1
value: 49.091664766684566
- type: nauc_map_at_100_max
value: 52.017499797253755
- type: nauc_map_at_100_std
value: -12.188342487271528
- type: nauc_map_at_10_diff1
value: 48.6619338205362
- type: nauc_map_at_10_max
value: 50.93591260329888
- type: nauc_map_at_10_std
value: -12.899399261673365
- type: nauc_map_at_1_diff1
value: 61.89699552471587
- type: nauc_map_at_1_max
value: 22.387748207421946
- type: nauc_map_at_1_std
value: -17.139518194308437
- type: nauc_map_at_20_diff1
value: 48.72828404686453
- type: nauc_map_at_20_max
value: 51.781074586075434
- type: nauc_map_at_20_std
value: -12.174270605093136
- type: nauc_map_at_3_diff1
value: 53.11509580126934
- type: nauc_map_at_3_max
value: 42.1768380145106
- type: nauc_map_at_3_std
value: -14.98340833032363
- type: nauc_map_at_5_diff1
value: 49.60521390803235
- type: nauc_map_at_5_max
value: 49.80360562029127
- type: nauc_map_at_5_std
value: -13.900652140457618
- type: nauc_mrr_at_1000_diff1
value: 58.10782478654255
- type: nauc_mrr_at_1000_max
value: 61.31083013535486
- type: nauc_mrr_at_1000_std
value: -9.624904298545921
- type: nauc_mrr_at_100_diff1
value: 58.11041683306092
- type: nauc_mrr_at_100_max
value: 61.31590199755797
- type: nauc_mrr_at_100_std
value: -9.625991053580865
- type: nauc_mrr_at_10_diff1
value: 57.883701815695375
- type: nauc_mrr_at_10_max
value: 61.36276126424689
- type: nauc_mrr_at_10_std
value: -9.495072468420386
- type: nauc_mrr_at_1_diff1
value: 60.18176977079093
- type: nauc_mrr_at_1_max
value: 59.697615236642555
- type: nauc_mrr_at_1_std
value: -9.396133077966779
- type: nauc_mrr_at_20_diff1
value: 57.964817434006754
- type: nauc_mrr_at_20_max
value: 61.34073539502932
- type: nauc_mrr_at_20_std
value: -9.602378876645131
- type: nauc_mrr_at_3_diff1
value: 58.44338049427257
- type: nauc_mrr_at_3_max
value: 60.92272989411293
- type: nauc_mrr_at_3_std
value: -9.928970439416162
- type: nauc_mrr_at_5_diff1
value: 58.01513016866578
- type: nauc_mrr_at_5_max
value: 61.46805302986586
- type: nauc_mrr_at_5_std
value: -9.842227002440984
- type: nauc_ndcg_at_1000_diff1
value: 50.99293152828167
- type: nauc_ndcg_at_1000_max
value: 56.14232784664811
- type: nauc_ndcg_at_1000_std
value: -10.529213072410288
- type: nauc_ndcg_at_100_diff1
value: 50.99385944312529
- type: nauc_ndcg_at_100_max
value: 56.34825518954588
- type: nauc_ndcg_at_100_std
value: -10.398943874846047
- type: nauc_ndcg_at_10_diff1
value: 48.51273364357823
- type: nauc_ndcg_at_10_max
value: 53.77871849486298
- type: nauc_ndcg_at_10_std
value: -11.82105972112472
- type: nauc_ndcg_at_1_diff1
value: 60.18176977079093
- type: nauc_ndcg_at_1_max
value: 59.697615236642555
- type: nauc_ndcg_at_1_std
value: -9.396133077966779
- type: nauc_ndcg_at_20_diff1
value: 49.04268319033412
- type: nauc_ndcg_at_20_max
value: 55.47011381097071
- type: nauc_ndcg_at_20_std
value: -10.486452945493042
- type: nauc_ndcg_at_3_diff1
value: 50.95112745400584
- type: nauc_ndcg_at_3_max
value: 53.45473828705577
- type: nauc_ndcg_at_3_std
value: -13.420699384045728
- type: nauc_ndcg_at_5_diff1
value: 50.313156212000074
- type: nauc_ndcg_at_5_max
value: 52.78539129309866
- type: nauc_ndcg_at_5_std
value: -13.586274096509122
- type: nauc_precision_at_1000_diff1
value: -31.13772049254778
- type: nauc_precision_at_1000_max
value: 17.2847598361294
- type: nauc_precision_at_1000_std
value: 15.497531773816887
- type: nauc_precision_at_100_diff1
value: -29.98812263553739
- type: nauc_precision_at_100_max
value: 19.048620003227654
- type: nauc_precision_at_100_std
value: 15.38499952171958
- type: nauc_precision_at_10_diff1
value: -25.33028097412579
- type: nauc_precision_at_10_max
value: 26.077919168306853
- type: nauc_precision_at_10_std
value: 11.35352933466097
- type: nauc_precision_at_1_diff1
value: 60.18176977079093
- type: nauc_precision_at_1_max
value: 59.697615236642555
- type: nauc_precision_at_1_std
value: -9.396133077966779
- type: nauc_precision_at_20_diff1
value: -28.417606311068905
- type: nauc_precision_at_20_max
value: 23.958679828637692
- type: nauc_precision_at_20_std
value: 14.442021499194205
- type: nauc_precision_at_3_diff1
value: -8.127396049790482
- type: nauc_precision_at_3_max
value: 37.348067982957076
- type: nauc_precision_at_3_std
value: 4.747913619596849
- type: nauc_precision_at_5_diff1
value: -16.902418446058395
- type: nauc_precision_at_5_max
value: 32.73583852552014
- type: nauc_precision_at_5_std
value: 7.031446423850052
- type: nauc_recall_at_1000_diff1
value: -14.485978369112514
- type: nauc_recall_at_1000_max
value: 78.59123887333172
- type: nauc_recall_at_1000_std
value: 90.7384575424963
- type: nauc_recall_at_100_diff1
value: 41.47842281590715
- type: nauc_recall_at_100_max
value: 67.47271545727422
- type: nauc_recall_at_100_std
value: 14.555561992253999
- type: nauc_recall_at_10_diff1
value: 33.05308907973924
- type: nauc_recall_at_10_max
value: 45.49878918493155
- type: nauc_recall_at_10_std
value: -11.560069806810926
- type: nauc_recall_at_1_diff1
value: 61.89699552471587
- type: nauc_recall_at_1_max
value: 22.387748207421946
- type: nauc_recall_at_1_std
value: -17.139518194308437
- type: nauc_recall_at_20_diff1
value: 31.305721376453754
- type: nauc_recall_at_20_max
value: 51.24817763724019
- type: nauc_recall_at_20_std
value: -5.0809908162023145
- type: nauc_recall_at_3_diff1
value: 49.27109038342917
- type: nauc_recall_at_3_max
value: 37.69188317998447
- type: nauc_recall_at_3_std
value: -17.119900758664336
- type: nauc_recall_at_5_diff1
value: 42.74501803377967
- type: nauc_recall_at_5_max
value: 46.877008503354844
- type: nauc_recall_at_5_std
value: -15.704892082115975
- type: ndcg_at_1
value: 71.829
- type: ndcg_at_10
value: 77.581
- type: ndcg_at_100
value: 80.75
- type: ndcg_at_1000
value: 81.026
- type: ndcg_at_20
value: 79.092
- type: ndcg_at_3
value: 72.81
- type: ndcg_at_5
value: 74.22999999999999
- type: precision_at_1
value: 71.829
- type: precision_at_10
value: 17.717
- type: precision_at_100
value: 2.031
- type: precision_at_1000
value: 0.207
- type: precision_at_20
value: 9.399000000000001
- type: precision_at_3
value: 44.458999999999996
- type: precision_at_5
value: 31.535000000000004
- type: recall_at_1
value: 46.444
- type: recall_at_10
value: 86.275
- type: recall_at_100
value: 98.017
- type: recall_at_1000
value: 99.8
- type: recall_at_20
value: 90.935
- type: recall_at_3
value: 70.167
- type: recall_at_5
value: 78.2
---
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The embedding model trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
<p align="center">
<b>jina-embeddings-v3: Multilingual Embeddings With Task LoRA</b>
</p>
## Quick Start
[Blog](https://jina.ai/news/jina-embeddings-v3-a-frontier-multilingual-embedding-model/#parameter-dimensions) | [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/jinaai.jina-embeddings-v3) | [AWS SageMaker](https://aws.amazon.com/marketplace/pp/prodview-kdi3xkt62lo32) | [API](https://jina.ai/embeddings)
## Intended Usage & Model Info
`jina-embeddings-v3` is a **multilingual multi-task text embedding model** designed for a variety of NLP applications.
Based on the [Jina-XLM-RoBERTa architecture](https://huggingface.co/jinaai/xlm-roberta-flash-implementation),
this model supports Rotary Position Embeddings to handle long input sequences up to **8192 tokens**.
Additionally, it features 5 LoRA adapters to generate task-specific embeddings efficiently.
### Key Features:
- **Extended Sequence Length:** Supports up to 8192 tokens with RoPE.
- **Task-Specific Embedding:** Customize embeddings through the `task` argument with the following options:
- `retrieval.query`: Used for query embeddings in asymmetric retrieval tasks
- `retrieval.passage`: Used for passage embeddings in asymmetric retrieval tasks
- `separation`: Used for embeddings in clustering and re-ranking applications
- `classification`: Used for embeddings in classification tasks
- `text-matching`: Used for embeddings in tasks that quantify similarity between two texts, such as STS or symmetric retrieval tasks
- **Matryoshka Embeddings**: Supports flexible embedding sizes (`32, 64, 128, 256, 512, 768, 1024`), allowing for truncating embeddings to fit your application.
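Conceptually, truncating a Matryoshka embedding amounts to keeping the first `k` dimensions and re-normalizing to unit length. A minimal pure-Python sketch with a hypothetical toy vector (not actual model output) illustrates the idea:

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` dimensions and re-normalize to unit length."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

# Toy 8-dim "embedding"; a real jina-embeddings-v3 vector has up to 1024 dims.
full = [0.4, 0.3, -0.2, 0.5, 0.1, -0.3, 0.2, 0.6]
small = truncate_embedding(full, 4)

print(len(small))                           # 4
print(round(sum(x * x for x in small), 6))  # 1.0 (unit norm restored)
```

In practice you would not do this by hand: the `truncate_dim` parameter of `encode` (shown below in the Usage section) handles truncation for you.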
### Supported Languages:
While the foundation model supports 100 languages, we've focused our tuning efforts on the following 30 languages:
**Arabic, Bengali, Chinese, Danish, Dutch, English, Finnish, French, Georgian, German, Greek,
Hindi, Indonesian, Italian, Japanese, Korean, Latvian, Norwegian, Polish, Portuguese, Romanian,
Russian, Slovak, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu,** and **Vietnamese.**
## Usage
**<details><summary>Apply mean pooling when integrating the model.</summary>**
<p>
### Why Use Mean Pooling?
Mean pooling takes all token embeddings from the model's output and averages them at the sentence or paragraph level.
This approach has been shown to produce high-quality sentence embeddings.
We provide an `encode` function that handles this for you automatically.
However, if you're working with the model directly, outside of the `encode` function,
you'll need to apply mean pooling manually. Here's how you can do it:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = (
attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
)
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
input_mask_expanded.sum(1), min=1e-9
)
sentences = ["How is the weather today?", "What is the current weather like today?"]
tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3")
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3", trust_remote_code=True)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
task = 'retrieval.query'
task_id = model._adaptation_map[task]
adapter_mask = torch.full((len(sentences),), task_id, dtype=torch.int32)
with torch.no_grad():
model_output = model(**encoded_input, adapter_mask=adapter_mask)
embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)
```
</p>
</details>
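The attention-mask handling in the mean pooling above is what keeps padding tokens from diluting the average. A tiny pure-Python illustration with made-up numbers (no tensors) shows the effect:

```python
def masked_mean(token_embeddings, attention_mask):
    """Average token vectors, counting only positions where the mask is 1."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            sums = [s + v for s, v in zip(sums, vec)]
            count += 1
    return [s / max(count, 1) for s in sums]

# Two real tokens plus one padding token (mask 0) that must be ignored.
tokens = [[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(masked_mean(tokens, mask))  # [2.0, 3.0]
```

Without the mask, the padding vector `[99.0, 99.0]` would dominate the average; with it, only the real tokens contribute.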
The easiest way to start using `jina-embeddings-v3` is with the [Jina Embedding API](https://jina.ai/embeddings/).
Alternatively, you can use `jina-embeddings-v3` directly via the Transformers package:
```bash
!pip install transformers torch einops
!pip install 'numpy<2'
```
If you run the model on a GPU that supports [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), you can additionally install it for faster inference. As of 2024-09-12, FlashAttention-2 supports Ampere, Ada, and Hopper GPUs (e.g., A100, RTX 3090, RTX 4090, H100):
```bash
!pip install flash-attn --no-build-isolation
```
```python
from transformers import AutoModel
# Initialize the model
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3", trust_remote_code=True)
texts = [
"Follow the white rabbit.", # English
"Sigue al conejo blanco.", # Spanish
"Suis le lapin blanc.", # French
"跟着白兔走。", # Chinese
"اتبع الأرنب الأبيض.", # Arabic
"Folge dem weißen Kaninchen.", # German
]
# When calling the `encode` function, you can choose a `task` based on the use case:
# 'retrieval.query', 'retrieval.passage', 'separation', 'classification', 'text-matching'
# Alternatively, you can choose not to pass a `task`, and no specific LoRA adapter will be used.
embeddings = model.encode(texts, task="text-matching")
# Compute similarities
print(embeddings[0] @ embeddings[1].T)
```
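The dot product in the snippet above is a meaningful similarity score because it assumes the returned embeddings are L2-normalized; for unit vectors, dot product and cosine similarity coincide. A pure-Python sketch of that equivalence (toy vectors standing in for real embeddings):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

a, b = [1.0, 2.0, 3.0], [2.0, 1.0, 0.5]
an, bn = normalize(a), normalize(b)
dot_normed = sum(x * y for x, y in zip(an, bn))
print(abs(dot_normed - cosine(a, b)) < 1e-9)  # True
```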
By default, the model supports a maximum sequence length of 8192 tokens.
However, if you want to truncate your input texts to a shorter length, you can pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(["Very long ... document"], max_length=2048)
```
If you want to use **Matryoshka embeddings** and switch to a different dimension,
you can adjust it by passing the `truncate_dim` parameter to the `encode` function:
```python
embeddings = model.encode(['Sample text'], truncate_dim=256)
```
The latest version (3.1.0) of [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) also supports `jina-embeddings-v3`:
```bash
!pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True)
task = "retrieval.query"
embeddings = model.encode(
["What is the weather like in Berlin today?"],
task=task,
prompt_name=task,
)
```
You can fine-tune `jina-embeddings-v3` using [SentenceTransformerTrainer](https://sbert.net/docs/package_reference/sentence_transformer/trainer.html).
To fine-tune for a specific task, you should set the task before passing the model to the ST Trainer, either during initialization:
```python
model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True, model_kwargs={'default_task': 'classification'})
```
Or afterwards:
```python
model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True)
model[0].default_task = 'classification'
```
This way you can fine-tune the LoRA adapter for the chosen task.
However, if you want to fine-tune the entire model, make sure the main parameters are set as trainable when loading the model:
```python
model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True, model_kwargs={'lora_main_params_trainable': True})
```
This will allow fine-tuning the whole model instead of just the LoRA adapters.
**<details><summary>ONNX Inference.</summary>**
<p>
You can use ONNX for efficient inference with `jina-embeddings-v3`:
```python
import onnxruntime
import numpy as np
from transformers import AutoTokenizer, PretrainedConfig
# Load tokenizer and model config
tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v3')
config = PretrainedConfig.from_pretrained('jinaai/jina-embeddings-v3')
# Tokenize input
input_text = tokenizer('sample text', return_tensors='np')
# ONNX session
model_path = 'jina-embeddings-v3/onnx/model.onnx'
session = onnxruntime.InferenceSession(model_path)
# Prepare inputs for ONNX model
task_type = 'text-matching'
task_id = np.array(config.lora_adaptations.index(task_type), dtype=np.int64)
inputs = {
'input_ids': input_text['input_ids'],
'attention_mask': input_text['attention_mask'],
'task_id': task_id
}
# Run model
outputs = session.run(None, inputs)
```
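The ONNX session returns raw token-level outputs, so the same mean pooling and L2 normalization discussed earlier still apply. A NumPy sketch of that post-processing, under the assumption that the first output holds token embeddings of shape `[batch, seq, dim]` (demonstrated here on toy arrays rather than real session outputs):

```python
import numpy as np

def onnx_mean_pool(token_embeddings, attention_mask):
    """Masked mean pooling + L2 normalization over ONNX-style arrays."""
    # token_embeddings: [batch, seq, dim]; attention_mask: [batch, seq]
    mask = attention_mask[..., None].astype(np.float32)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)
    pooled = summed / counts
    # L2-normalize each pooled embedding
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

# Toy demonstration: one sequence of 3 tokens where the last token is padding.
toks = np.array([[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]])
mask = np.array([[1, 1, 0]])
emb = onnx_mean_pool(toks, mask)
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))  # True
```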
</p>
</details>
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## License
`jina-embeddings-v3` is listed on AWS & Azure. If you need to use it beyond those platforms or on-premises within your company, note that the model is licensed under CC BY-NC 4.0. For commercial usage inquiries, feel free to [contact us](https://jina.ai/contact-sales/).
## Citation
If you find `jina-embeddings-v3` useful in your research, please cite the following paper:
```bibtex
@misc{sturua2024jinaembeddingsv3multilingualembeddingstask,
title={jina-embeddings-v3: Multilingual Embeddings With Task LoRA},
      author={Saba Sturua and Isabelle Mohr and Mohammad Kalim Akram and Michael Günther and Bo Wang and Markus Krimmel and Feng Wang and Georgios Mastrapas and Andreas Koukounas and Nan Wang and Han Xiao},
year={2024},
eprint={2409.10173},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.10173},
}
```
| [
"BIOSSES",
"SCIFACT"
] |
khoa-klaytn/bge-base-en-v1.5-angle | khoa-klaytn | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-01-10T03:25:15Z | 2024-01-10T03:25:20+00:00 | 745 | 2 | ---
language:
- en
license: mit
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-base-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.14925373134328
- type: ap
value: 39.32336517995478
- type: f1
value: 70.16902252611425
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.386825
- type: ap
value: 90.21276917991995
- type: f1
value: 93.37741030006174
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.846000000000004
- type: f1
value: 48.14646269778261
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.754000000000005
- type: map_at_10
value: 55.761
- type: map_at_100
value: 56.330999999999996
- type: map_at_1000
value: 56.333999999999996
- type: map_at_3
value: 51.92
- type: map_at_5
value: 54.010999999999996
- type: mrr_at_1
value: 41.181
- type: mrr_at_10
value: 55.967999999999996
- type: mrr_at_100
value: 56.538
- type: mrr_at_1000
value: 56.542
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.208999999999996
- type: ndcg_at_1
value: 40.754000000000005
- type: ndcg_at_10
value: 63.605000000000004
- type: ndcg_at_100
value: 66.05199999999999
- type: ndcg_at_1000
value: 66.12
- type: ndcg_at_3
value: 55.708
- type: ndcg_at_5
value: 59.452000000000005
- type: precision_at_1
value: 40.754000000000005
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.149000000000001
- type: recall_at_1
value: 40.754000000000005
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 75.747
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.74884539679369
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.8075893810716
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.128470519187736
- type: mrr
value: 74.28065778481289
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.24629081484655
- type: cos_sim_spearman
value: 86.93752309911496
- type: euclidean_pearson
value: 87.58589628573816
- type: euclidean_spearman
value: 88.05622328825284
- type: manhattan_pearson
value: 87.5594959805773
- type: manhattan_spearman
value: 88.19658793233961
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.9512987012987
- type: f1
value: 86.92515357973708
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10263762928872
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.69711517426737
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.327
- type: map_at_10
value: 44.099
- type: map_at_100
value: 45.525
- type: map_at_1000
value: 45.641999999999996
- type: map_at_3
value: 40.47
- type: map_at_5
value: 42.36
- type: mrr_at_1
value: 39.199
- type: mrr_at_10
value: 49.651
- type: mrr_at_100
value: 50.29
- type: mrr_at_1000
value: 50.329
- type: mrr_at_3
value: 46.924
- type: mrr_at_5
value: 48.548
- type: ndcg_at_1
value: 39.199
- type: ndcg_at_10
value: 50.773
- type: ndcg_at_100
value: 55.67999999999999
- type: ndcg_at_1000
value: 57.495
- type: ndcg_at_3
value: 45.513999999999996
- type: ndcg_at_5
value: 47.703
- type: precision_at_1
value: 39.199
- type: precision_at_10
value: 9.914000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.984
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.327
- type: recall_at_10
value: 63.743
- type: recall_at_100
value: 84.538
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 48.065000000000005
- type: recall_at_5
value: 54.519
- type: map_at_1
value: 32.671
- type: map_at_10
value: 42.954
- type: map_at_100
value: 44.151
- type: map_at_1000
value: 44.287
- type: map_at_3
value: 39.912
- type: map_at_5
value: 41.798
- type: mrr_at_1
value: 41.465
- type: mrr_at_10
value: 49.351
- type: mrr_at_100
value: 49.980000000000004
- type: mrr_at_1000
value: 50.016000000000005
- type: mrr_at_3
value: 47.144000000000005
- type: mrr_at_5
value: 48.592999999999996
- type: ndcg_at_1
value: 41.465
- type: ndcg_at_10
value: 48.565999999999995
- type: ndcg_at_100
value: 52.76499999999999
- type: ndcg_at_1000
value: 54.749
- type: ndcg_at_3
value: 44.57
- type: ndcg_at_5
value: 46.759
- type: precision_at_1
value: 41.465
- type: precision_at_10
value: 9.107999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.423000000000002
- type: precision_at_5
value: 15.414
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 57.738
- type: recall_at_100
value: 75.86500000000001
- type: recall_at_1000
value: 88.36
- type: recall_at_3
value: 45.626
- type: recall_at_5
value: 51.812000000000005
- type: map_at_1
value: 41.185
- type: map_at_10
value: 53.929
- type: map_at_100
value: 54.92
- type: map_at_1000
value: 54.967999999999996
- type: map_at_3
value: 50.70400000000001
- type: map_at_5
value: 52.673
- type: mrr_at_1
value: 47.398
- type: mrr_at_10
value: 57.303000000000004
- type: mrr_at_100
value: 57.959
- type: mrr_at_1000
value: 57.985
- type: mrr_at_3
value: 54.932
- type: mrr_at_5
value: 56.464999999999996
- type: ndcg_at_1
value: 47.398
- type: ndcg_at_10
value: 59.653
- type: ndcg_at_100
value: 63.627
- type: ndcg_at_1000
value: 64.596
- type: ndcg_at_3
value: 54.455
- type: ndcg_at_5
value: 57.245000000000005
- type: precision_at_1
value: 47.398
- type: precision_at_10
value: 9.524000000000001
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.389
- type: precision_at_5
value: 16.752
- type: recall_at_1
value: 41.185
- type: recall_at_10
value: 73.193
- type: recall_at_100
value: 90.357
- type: recall_at_1000
value: 97.253
- type: recall_at_3
value: 59.199999999999996
- type: recall_at_5
value: 66.118
- type: map_at_1
value: 27.27
- type: map_at_10
value: 36.223
- type: map_at_100
value: 37.218
- type: map_at_1000
value: 37.293
- type: map_at_3
value: 33.503
- type: map_at_5
value: 35.097
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.352000000000004
- type: mrr_at_100
value: 39.188
- type: mrr_at_1000
value: 39.247
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.401
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.239
- type: ndcg_at_100
value: 46.066
- type: ndcg_at_1000
value: 47.992000000000004
- type: ndcg_at_3
value: 36.11
- type: ndcg_at_5
value: 38.772
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.260000000000001
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 15.104000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.27
- type: recall_at_10
value: 54.589
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 91.158
- type: recall_at_3
value: 40.974
- type: recall_at_5
value: 47.327000000000005
- type: map_at_1
value: 17.848
- type: map_at_10
value: 26.207
- type: map_at_100
value: 27.478
- type: map_at_1000
value: 27.602
- type: map_at_3
value: 23.405
- type: map_at_5
value: 24.98
- type: mrr_at_1
value: 21.891
- type: mrr_at_10
value: 31.041999999999998
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.151999999999994
- type: mrr_at_3
value: 28.358
- type: mrr_at_5
value: 29.969
- type: ndcg_at_1
value: 21.891
- type: ndcg_at_10
value: 31.585
- type: ndcg_at_100
value: 37.531
- type: ndcg_at_1000
value: 40.256
- type: ndcg_at_3
value: 26.508
- type: ndcg_at_5
value: 28.894
- type: precision_at_1
value: 21.891
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.769
- type: precision_at_5
value: 9.279
- type: recall_at_1
value: 17.848
- type: recall_at_10
value: 43.452
- type: recall_at_100
value: 69.216
- type: recall_at_1000
value: 88.102
- type: recall_at_3
value: 29.18
- type: recall_at_5
value: 35.347
- type: map_at_1
value: 30.94
- type: map_at_10
value: 41.248000000000005
- type: map_at_100
value: 42.495
- type: map_at_1000
value: 42.602000000000004
- type: map_at_3
value: 37.939
- type: map_at_5
value: 39.924
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.041
- type: mrr_at_100
value: 47.83
- type: mrr_at_1000
value: 47.878
- type: mrr_at_3
value: 44.466
- type: mrr_at_5
value: 46.111999999999995
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.223
- type: ndcg_at_100
value: 52.394
- type: ndcg_at_1000
value: 54.432
- type: ndcg_at_3
value: 42.032000000000004
- type: ndcg_at_5
value: 44.772
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.698
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 30.94
- type: recall_at_10
value: 59.316
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 94.15400000000001
- type: recall_at_3
value: 44.712
- type: recall_at_5
value: 51.932
- type: map_at_1
value: 27.104
- type: map_at_10
value: 36.675999999999995
- type: map_at_100
value: 38.076
- type: map_at_1000
value: 38.189
- type: map_at_3
value: 33.733999999999995
- type: map_at_5
value: 35.287
- type: mrr_at_1
value: 33.904
- type: mrr_at_10
value: 42.55
- type: mrr_at_100
value: 43.434
- type: mrr_at_1000
value: 43.494
- type: mrr_at_3
value: 40.126
- type: mrr_at_5
value: 41.473
- type: ndcg_at_1
value: 33.904
- type: ndcg_at_10
value: 42.414
- type: ndcg_at_100
value: 48.203
- type: ndcg_at_1000
value: 50.437
- type: ndcg_at_3
value: 37.633
- type: ndcg_at_5
value: 39.67
- type: precision_at_1
value: 33.904
- type: precision_at_10
value: 7.82
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 17.884
- type: precision_at_5
value: 12.648000000000001
- type: recall_at_1
value: 27.104
- type: recall_at_10
value: 53.563
- type: recall_at_100
value: 78.557
- type: recall_at_1000
value: 93.533
- type: recall_at_3
value: 39.92
- type: recall_at_5
value: 45.457
- type: map_at_1
value: 27.707749999999997
- type: map_at_10
value: 36.961
- type: map_at_100
value: 38.158833333333334
- type: map_at_1000
value: 38.270333333333326
- type: map_at_3
value: 34.07183333333334
- type: map_at_5
value: 35.69533333333334
- type: mrr_at_1
value: 32.81875
- type: mrr_at_10
value: 41.293
- type: mrr_at_100
value: 42.116499999999995
- type: mrr_at_1000
value: 42.170249999999996
- type: mrr_at_3
value: 38.83983333333333
- type: mrr_at_5
value: 40.29775
- type: ndcg_at_1
value: 32.81875
- type: ndcg_at_10
value: 42.355
- type: ndcg_at_100
value: 47.41374999999999
- type: ndcg_at_1000
value: 49.5805
- type: ndcg_at_3
value: 37.52825
- type: ndcg_at_5
value: 39.83266666666667
- type: precision_at_1
value: 32.81875
- type: precision_at_10
value: 7.382416666666666
- type: precision_at_100
value: 1.1640833333333334
- type: precision_at_1000
value: 0.15383333333333335
- type: precision_at_3
value: 17.134166666666665
- type: precision_at_5
value: 12.174833333333336
- type: recall_at_1
value: 27.707749999999997
- type: recall_at_10
value: 53.945
- type: recall_at_100
value: 76.191
- type: recall_at_1000
value: 91.101
- type: recall_at_3
value: 40.39083333333334
- type: recall_at_5
value: 46.40083333333333
- type: map_at_1
value: 26.482
- type: map_at_10
value: 33.201
- type: map_at_100
value: 34.107
- type: map_at_1000
value: 34.197
- type: map_at_3
value: 31.174000000000003
- type: map_at_5
value: 32.279
- type: mrr_at_1
value: 29.908
- type: mrr_at_10
value: 36.235
- type: mrr_at_100
value: 37.04
- type: mrr_at_1000
value: 37.105
- type: mrr_at_3
value: 34.355999999999995
- type: mrr_at_5
value: 35.382999999999996
- type: ndcg_at_1
value: 29.908
- type: ndcg_at_10
value: 37.325
- type: ndcg_at_100
value: 41.795
- type: ndcg_at_1000
value: 44.105
- type: ndcg_at_3
value: 33.555
- type: ndcg_at_5
value: 35.266999999999996
- type: precision_at_1
value: 29.908
- type: precision_at_10
value: 5.721
- type: precision_at_100
value: 0.8630000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 14.008000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 26.482
- type: recall_at_10
value: 47.072
- type: recall_at_100
value: 67.27
- type: recall_at_1000
value: 84.371
- type: recall_at_3
value: 36.65
- type: recall_at_5
value: 40.774
- type: map_at_1
value: 18.815
- type: map_at_10
value: 26.369999999999997
- type: map_at_100
value: 27.458
- type: map_at_1000
value: 27.588
- type: map_at_3
value: 23.990000000000002
- type: map_at_5
value: 25.345000000000002
- type: mrr_at_1
value: 22.953000000000003
- type: mrr_at_10
value: 30.342999999999996
- type: mrr_at_100
value: 31.241000000000003
- type: mrr_at_1000
value: 31.319000000000003
- type: mrr_at_3
value: 28.16
- type: mrr_at_5
value: 29.406
- type: ndcg_at_1
value: 22.953000000000003
- type: ndcg_at_10
value: 31.151
- type: ndcg_at_100
value: 36.309000000000005
- type: ndcg_at_1000
value: 39.227000000000004
- type: ndcg_at_3
value: 26.921
- type: ndcg_at_5
value: 28.938000000000002
- type: precision_at_1
value: 22.953000000000003
- type: precision_at_10
value: 5.602
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.606
- type: precision_at_5
value: 9.119
- type: recall_at_1
value: 18.815
- type: recall_at_10
value: 41.574
- type: recall_at_100
value: 64.84400000000001
- type: recall_at_1000
value: 85.406
- type: recall_at_3
value: 29.694
- type: recall_at_5
value: 34.935
- type: map_at_1
value: 27.840999999999998
- type: map_at_10
value: 36.797999999999995
- type: map_at_100
value: 37.993
- type: map_at_1000
value: 38.086999999999996
- type: map_at_3
value: 34.050999999999995
- type: map_at_5
value: 35.379
- type: mrr_at_1
value: 32.649
- type: mrr_at_10
value: 41.025
- type: mrr_at_100
value: 41.878
- type: mrr_at_1000
value: 41.929
- type: mrr_at_3
value: 38.573
- type: mrr_at_5
value: 39.715
- type: ndcg_at_1
value: 32.649
- type: ndcg_at_10
value: 42.142
- type: ndcg_at_100
value: 47.558
- type: ndcg_at_1000
value: 49.643
- type: ndcg_at_3
value: 37.12
- type: ndcg_at_5
value: 38.983000000000004
- type: precision_at_1
value: 32.649
- type: precision_at_10
value: 7.08
- type: precision_at_100
value: 1.1039999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.698
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 27.840999999999998
- type: recall_at_10
value: 54.245
- type: recall_at_100
value: 77.947
- type: recall_at_1000
value: 92.36999999999999
- type: recall_at_3
value: 40.146
- type: recall_at_5
value: 44.951
- type: map_at_1
value: 26.529000000000003
- type: map_at_10
value: 35.010000000000005
- type: map_at_100
value: 36.647
- type: map_at_1000
value: 36.857
- type: map_at_3
value: 31.968000000000004
- type: map_at_5
value: 33.554
- type: mrr_at_1
value: 31.818
- type: mrr_at_10
value: 39.550999999999995
- type: mrr_at_100
value: 40.54
- type: mrr_at_1000
value: 40.596
- type: mrr_at_3
value: 36.726
- type: mrr_at_5
value: 38.416
- type: ndcg_at_1
value: 31.818
- type: ndcg_at_10
value: 40.675
- type: ndcg_at_100
value: 46.548
- type: ndcg_at_1000
value: 49.126
- type: ndcg_at_3
value: 35.829
- type: ndcg_at_5
value: 38.0
- type: precision_at_1
value: 31.818
- type: precision_at_10
value: 7.826
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.601
- type: precision_at_5
value: 12.095
- type: recall_at_1
value: 26.529000000000003
- type: recall_at_10
value: 51.03
- type: recall_at_100
value: 77.556
- type: recall_at_1000
value: 93.804
- type: recall_at_3
value: 36.986000000000004
- type: recall_at_5
value: 43.096000000000004
- type: map_at_1
value: 23.480999999999998
- type: map_at_10
value: 30.817
- type: map_at_100
value: 31.838
- type: map_at_1000
value: 31.932
- type: map_at_3
value: 28.011999999999997
- type: map_at_5
value: 29.668
- type: mrr_at_1
value: 25.323
- type: mrr_at_10
value: 33.072
- type: mrr_at_100
value: 33.926
- type: mrr_at_1000
value: 33.993
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 32.092
- type: ndcg_at_1
value: 25.323
- type: ndcg_at_10
value: 35.514
- type: ndcg_at_100
value: 40.489000000000004
- type: ndcg_at_1000
value: 42.908
- type: ndcg_at_3
value: 30.092000000000002
- type: ndcg_at_5
value: 32.989000000000004
- type: precision_at_1
value: 25.323
- type: precision_at_10
value: 5.545
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.446
- type: precision_at_5
value: 9.131
- type: recall_at_1
value: 23.480999999999998
- type: recall_at_10
value: 47.825
- type: recall_at_100
value: 70.652
- type: recall_at_1000
value: 88.612
- type: recall_at_3
value: 33.537
- type: recall_at_5
value: 40.542
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.333999999999998
- type: map_at_10
value: 22.524
- type: map_at_100
value: 24.506
- type: map_at_1000
value: 24.715
- type: map_at_3
value: 19.022
- type: map_at_5
value: 20.693
- type: mrr_at_1
value: 29.186
- type: mrr_at_10
value: 41.22
- type: mrr_at_100
value: 42.16
- type: mrr_at_1000
value: 42.192
- type: mrr_at_3
value: 38.013000000000005
- type: mrr_at_5
value: 39.704
- type: ndcg_at_1
value: 29.186
- type: ndcg_at_10
value: 31.167
- type: ndcg_at_100
value: 38.879000000000005
- type: ndcg_at_1000
value: 42.376000000000005
- type: ndcg_at_3
value: 25.817
- type: ndcg_at_5
value: 27.377000000000002
- type: precision_at_1
value: 29.186
- type: precision_at_10
value: 9.693999999999999
- type: precision_at_100
value: 1.8030000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 19.11
- type: precision_at_5
value: 14.344999999999999
- type: recall_at_1
value: 13.333999999999998
- type: recall_at_10
value: 37.092000000000006
- type: recall_at_100
value: 63.651
- type: recall_at_1000
value: 83.05
- type: recall_at_3
value: 23.74
- type: recall_at_5
value: 28.655
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.151
- type: map_at_10
value: 19.653000000000002
- type: map_at_100
value: 28.053
- type: map_at_1000
value: 29.709000000000003
- type: map_at_3
value: 14.191
- type: map_at_5
value: 16.456
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.4
- type: mrr_at_100
value: 74.715
- type: mrr_at_1000
value: 74.726
- type: mrr_at_3
value: 72.417
- type: mrr_at_5
value: 73.667
- type: ndcg_at_1
value: 54.25
- type: ndcg_at_10
value: 40.77
- type: ndcg_at_100
value: 46.359
- type: ndcg_at_1000
value: 54.193000000000005
- type: ndcg_at_3
value: 44.832
- type: ndcg_at_5
value: 42.63
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 32.175
- type: precision_at_100
value: 10.668
- type: precision_at_1000
value: 2.067
- type: precision_at_3
value: 47.667
- type: precision_at_5
value: 41.3
- type: recall_at_1
value: 9.151
- type: recall_at_10
value: 25.003999999999998
- type: recall_at_100
value: 52.976
- type: recall_at_1000
value: 78.315
- type: recall_at_3
value: 15.487
- type: recall_at_5
value: 18.999
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.89999999999999
- type: f1
value: 46.47777925067403
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 73.706
- type: map_at_10
value: 82.423
- type: map_at_100
value: 82.67999999999999
- type: map_at_1000
value: 82.694
- type: map_at_3
value: 81.328
- type: map_at_5
value: 82.001
- type: mrr_at_1
value: 79.613
- type: mrr_at_10
value: 87.07000000000001
- type: mrr_at_100
value: 87.169
- type: mrr_at_1000
value: 87.17
- type: mrr_at_3
value: 86.404
- type: mrr_at_5
value: 86.856
- type: ndcg_at_1
value: 79.613
- type: ndcg_at_10
value: 86.289
- type: ndcg_at_100
value: 87.201
- type: ndcg_at_1000
value: 87.428
- type: ndcg_at_3
value: 84.625
- type: ndcg_at_5
value: 85.53699999999999
- type: precision_at_1
value: 79.613
- type: precision_at_10
value: 10.399
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.473
- type: precision_at_5
value: 20.132
- type: recall_at_1
value: 73.706
- type: recall_at_10
value: 93.559
- type: recall_at_100
value: 97.188
- type: recall_at_1000
value: 98.555
- type: recall_at_3
value: 88.98700000000001
- type: recall_at_5
value: 91.373
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.841
- type: map_at_10
value: 32.643
- type: map_at_100
value: 34.575
- type: map_at_1000
value: 34.736
- type: map_at_3
value: 28.317999999999998
- type: map_at_5
value: 30.964000000000002
- type: mrr_at_1
value: 39.660000000000004
- type: mrr_at_10
value: 48.620000000000005
- type: mrr_at_100
value: 49.384
- type: mrr_at_1000
value: 49.415
- type: mrr_at_3
value: 45.988
- type: mrr_at_5
value: 47.361
- type: ndcg_at_1
value: 39.660000000000004
- type: ndcg_at_10
value: 40.646
- type: ndcg_at_100
value: 47.657
- type: ndcg_at_1000
value: 50.428
- type: ndcg_at_3
value: 36.689
- type: ndcg_at_5
value: 38.211
- type: precision_at_1
value: 39.660000000000004
- type: precision_at_10
value: 11.235000000000001
- type: precision_at_100
value: 1.8530000000000002
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.587999999999997
- type: precision_at_5
value: 18.395
- type: recall_at_1
value: 19.841
- type: recall_at_10
value: 48.135
- type: recall_at_100
value: 74.224
- type: recall_at_1000
value: 90.826
- type: recall_at_3
value: 33.536
- type: recall_at_5
value: 40.311
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.358
- type: map_at_10
value: 64.497
- type: map_at_100
value: 65.362
- type: map_at_1000
value: 65.41900000000001
- type: map_at_3
value: 61.06700000000001
- type: map_at_5
value: 63.317
- type: mrr_at_1
value: 80.716
- type: mrr_at_10
value: 86.10799999999999
- type: mrr_at_100
value: 86.265
- type: mrr_at_1000
value: 86.27
- type: mrr_at_3
value: 85.271
- type: mrr_at_5
value: 85.82499999999999
- type: ndcg_at_1
value: 80.716
- type: ndcg_at_10
value: 72.597
- type: ndcg_at_100
value: 75.549
- type: ndcg_at_1000
value: 76.61
- type: ndcg_at_3
value: 67.874
- type: ndcg_at_5
value: 70.655
- type: precision_at_1
value: 80.716
- type: precision_at_10
value: 15.148
- type: precision_at_100
value: 1.745
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 43.597
- type: precision_at_5
value: 28.351
- type: recall_at_1
value: 40.358
- type: recall_at_10
value: 75.739
- type: recall_at_100
value: 87.259
- type: recall_at_1000
value: 94.234
- type: recall_at_3
value: 65.39500000000001
- type: recall_at_5
value: 70.878
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.80799999999998
- type: ap
value: 86.81350378180757
- type: f1
value: 90.79901248314215
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.096
- type: map_at_10
value: 34.384
- type: map_at_100
value: 35.541
- type: map_at_1000
value: 35.589999999999996
- type: map_at_3
value: 30.496000000000002
- type: map_at_5
value: 32.718
- type: mrr_at_1
value: 22.750999999999998
- type: mrr_at_10
value: 35.024
- type: mrr_at_100
value: 36.125
- type: mrr_at_1000
value: 36.168
- type: mrr_at_3
value: 31.225
- type: mrr_at_5
value: 33.416000000000004
- type: ndcg_at_1
value: 22.750999999999998
- type: ndcg_at_10
value: 41.351
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 48.111
- type: ndcg_at_3
value: 33.439
- type: ndcg_at_5
value: 37.407000000000004
- type: precision_at_1
value: 22.750999999999998
- type: precision_at_10
value: 6.564
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.288
- type: precision_at_5
value: 10.581999999999999
- type: recall_at_1
value: 22.096
- type: recall_at_10
value: 62.771
- type: recall_at_100
value: 88.529
- type: recall_at_1000
value: 97.55
- type: recall_at_3
value: 41.245
- type: recall_at_5
value: 50.788
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.16780665754673
- type: f1
value: 93.96331194859894
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.90606475148198
- type: f1
value: 58.58344986604187
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.14660390047075
- type: f1
value: 74.31533923533614
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.16139878950908
- type: f1
value: 80.18532656824924
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.949880906135085
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.56300351524862
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.196521894371315
- type: mrr
value: 32.22644231694389
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.783
- type: map_at_10
value: 14.549000000000001
- type: map_at_100
value: 18.433
- type: map_at_1000
value: 19.949
- type: map_at_3
value: 10.936
- type: map_at_5
value: 12.514
- type: mrr_at_1
value: 47.368
- type: mrr_at_10
value: 56.42
- type: mrr_at_100
value: 56.908
- type: mrr_at_1000
value: 56.95
- type: mrr_at_3
value: 54.283
- type: mrr_at_5
value: 55.568
- type: ndcg_at_1
value: 45.666000000000004
- type: ndcg_at_10
value: 37.389
- type: ndcg_at_100
value: 34.253
- type: ndcg_at_1000
value: 43.059999999999995
- type: ndcg_at_3
value: 42.725
- type: ndcg_at_5
value: 40.193
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 27.988000000000003
- type: precision_at_100
value: 8.672
- type: precision_at_1000
value: 2.164
- type: precision_at_3
value: 40.248
- type: precision_at_5
value: 34.737
- type: recall_at_1
value: 6.783
- type: recall_at_10
value: 17.838
- type: recall_at_100
value: 33.672000000000004
- type: recall_at_1000
value: 66.166
- type: recall_at_3
value: 11.849
- type: recall_at_5
value: 14.205000000000002
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.698999999999998
- type: map_at_10
value: 46.556
- type: map_at_100
value: 47.652
- type: map_at_1000
value: 47.68
- type: map_at_3
value: 42.492000000000004
- type: map_at_5
value: 44.763999999999996
- type: mrr_at_1
value: 35.747
- type: mrr_at_10
value: 49.242999999999995
- type: mrr_at_100
value: 50.052
- type: mrr_at_1000
value: 50.068
- type: mrr_at_3
value: 45.867000000000004
- type: mrr_at_5
value: 47.778999999999996
- type: ndcg_at_1
value: 35.717999999999996
- type: ndcg_at_10
value: 54.14600000000001
- type: ndcg_at_100
value: 58.672999999999995
- type: ndcg_at_1000
value: 59.279
- type: ndcg_at_3
value: 46.407
- type: ndcg_at_5
value: 50.181
- type: precision_at_1
value: 35.717999999999996
- type: precision_at_10
value: 8.844000000000001
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.993000000000002
- type: precision_at_5
value: 14.791000000000002
- type: recall_at_1
value: 31.698999999999998
- type: recall_at_10
value: 74.693
- type: recall_at_100
value: 94.15299999999999
- type: recall_at_1000
value: 98.585
- type: recall_at_3
value: 54.388999999999996
- type: recall_at_5
value: 63.08200000000001
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.283
- type: map_at_10
value: 85.24000000000001
- type: map_at_100
value: 85.882
- type: map_at_1000
value: 85.897
- type: map_at_3
value: 82.326
- type: map_at_5
value: 84.177
- type: mrr_at_1
value: 82.21000000000001
- type: mrr_at_10
value: 88.228
- type: mrr_at_100
value: 88.32
- type: mrr_at_1000
value: 88.32
- type: mrr_at_3
value: 87.323
- type: mrr_at_5
value: 87.94800000000001
- type: ndcg_at_1
value: 82.17999999999999
- type: ndcg_at_10
value: 88.9
- type: ndcg_at_100
value: 90.079
- type: ndcg_at_1000
value: 90.158
- type: ndcg_at_3
value: 86.18299999999999
- type: ndcg_at_5
value: 87.71799999999999
- type: precision_at_1
value: 82.17999999999999
- type: precision_at_10
value: 13.464
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.693
- type: precision_at_5
value: 24.792
- type: recall_at_1
value: 71.283
- type: recall_at_10
value: 95.742
- type: recall_at_100
value: 99.67200000000001
- type: recall_at_1000
value: 99.981
- type: recall_at_3
value: 87.888
- type: recall_at_5
value: 92.24
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.24267063669042
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.88056988932578
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.903
- type: map_at_10
value: 13.202
- type: map_at_100
value: 15.5
- type: map_at_1000
value: 15.870999999999999
- type: map_at_3
value: 9.407
- type: map_at_5
value: 11.238
- type: mrr_at_1
value: 24.2
- type: mrr_at_10
value: 35.867
- type: mrr_at_100
value: 37.001
- type: mrr_at_1000
value: 37.043
- type: mrr_at_3
value: 32.5
- type: mrr_at_5
value: 34.35
- type: ndcg_at_1
value: 24.2
- type: ndcg_at_10
value: 21.731
- type: ndcg_at_100
value: 30.7
- type: ndcg_at_1000
value: 36.618
- type: ndcg_at_3
value: 20.72
- type: ndcg_at_5
value: 17.954
- type: precision_at_1
value: 24.2
- type: precision_at_10
value: 11.33
- type: precision_at_100
value: 2.4410000000000003
- type: precision_at_1000
value: 0.386
- type: precision_at_3
value: 19.667
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 4.903
- type: recall_at_10
value: 22.962
- type: recall_at_100
value: 49.563
- type: recall_at_1000
value: 78.238
- type: recall_at_3
value: 11.953
- type: recall_at_5
value: 16.067999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.12694254604078
- type: cos_sim_spearman
value: 80.30141815181918
- type: euclidean_pearson
value: 81.34015449877128
- type: euclidean_spearman
value: 80.13984197010849
- type: manhattan_pearson
value: 81.31767068124086
- type: manhattan_spearman
value: 80.11720513114103
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.13112984010417
- type: cos_sim_spearman
value: 78.03063573402875
- type: euclidean_pearson
value: 83.51928418844804
- type: euclidean_spearman
value: 78.4045235411144
- type: manhattan_pearson
value: 83.49981637388689
- type: manhattan_spearman
value: 78.4042575139372
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.50327987379504
- type: cos_sim_spearman
value: 84.18556767756205
- type: euclidean_pearson
value: 82.69684424327679
- type: euclidean_spearman
value: 83.5368106038335
- type: manhattan_pearson
value: 82.57967581007374
- type: manhattan_spearman
value: 83.43009053133697
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.50756863007814
- type: cos_sim_spearman
value: 82.27204331279108
- type: euclidean_pearson
value: 81.39535251429741
- type: euclidean_spearman
value: 81.84386626336239
- type: manhattan_pearson
value: 81.34281737280695
- type: manhattan_spearman
value: 81.81149375673166
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8727714856726
- type: cos_sim_spearman
value: 87.95738287792312
- type: euclidean_pearson
value: 86.62920602795887
- type: euclidean_spearman
value: 87.05207355381243
- type: manhattan_pearson
value: 86.53587918472225
- type: manhattan_spearman
value: 86.95382961029586
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.52240359769479
- type: cos_sim_spearman
value: 85.47685776238286
- type: euclidean_pearson
value: 84.25815333483058
- type: euclidean_spearman
value: 85.27415639683198
- type: manhattan_pearson
value: 84.29127757025637
- type: manhattan_spearman
value: 85.30226224917351
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.42501708915708
- type: cos_sim_spearman
value: 86.42276182795041
- type: euclidean_pearson
value: 86.5408207354761
- type: euclidean_spearman
value: 85.46096321750838
- type: manhattan_pearson
value: 86.54177303026881
- type: manhattan_spearman
value: 85.50313151916117
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.86521089250766
- type: cos_sim_spearman
value: 65.94868540323003
- type: euclidean_pearson
value: 67.16569626533084
- type: euclidean_spearman
value: 66.37667004134917
- type: manhattan_pearson
value: 67.1482365102333
- type: manhattan_spearman
value: 66.53240122580029
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.64746265365318
- type: cos_sim_spearman
value: 86.41888825906786
- type: euclidean_pearson
value: 85.27453642725811
- type: euclidean_spearman
value: 85.94095796602544
- type: manhattan_pearson
value: 85.28643660505334
- type: manhattan_spearman
value: 85.95028003260744
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.48903153618527
- type: mrr
value: 96.41081503826601
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.594
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.782
- type: map_at_1000
value: 69.795
- type: map_at_3
value: 66.23
- type: map_at_5
value: 68.293
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 70.339
- type: mrr_at_100
value: 70.708
- type: mrr_at_1000
value: 70.722
- type: mrr_at_3
value: 68.0
- type: mrr_at_5
value: 69.56700000000001
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 74.039
- type: ndcg_at_100
value: 76.103
- type: ndcg_at_1000
value: 76.47800000000001
- type: ndcg_at_3
value: 68.967
- type: ndcg_at_5
value: 71.96900000000001
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.866999999999999
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 18.2
- type: recall_at_1
value: 58.594
- type: recall_at_10
value: 87.422
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.217
- type: recall_at_5
value: 81.539
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85049504950496
- type: cos_sim_ap
value: 96.33111544137081
- type: cos_sim_f1
value: 92.35443037974684
- type: cos_sim_precision
value: 93.53846153846153
- type: cos_sim_recall
value: 91.2
- type: dot_accuracy
value: 99.82376237623762
- type: dot_ap
value: 95.38082527310888
- type: dot_f1
value: 90.90909090909092
- type: dot_precision
value: 92.90187891440502
- type: dot_recall
value: 89.0
- type: euclidean_accuracy
value: 99.84851485148515
- type: euclidean_ap
value: 96.32316003996347
- type: euclidean_f1
value: 92.2071392659628
- type: euclidean_precision
value: 92.71991911021233
- type: euclidean_recall
value: 91.7
- type: manhattan_accuracy
value: 99.84851485148515
- type: manhattan_ap
value: 96.3655668249217
- type: manhattan_f1
value: 92.18356026222895
- type: manhattan_precision
value: 92.98067141403867
- type: manhattan_recall
value: 91.4
- type: max_accuracy
value: 99.85049504950496
- type: max_ap
value: 96.3655668249217
- type: max_f1
value: 92.35443037974684
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.94861371629051
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.009430451385
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.61164066427969
- type: mrr
value: 55.49710603938544
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.622620124907662
- type: cos_sim_spearman
value: 31.0678351356163
- type: dot_pearson
value: 30.863727693306814
- type: dot_spearman
value: 31.230306567021255
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 2.011
- type: map_at_100
value: 10.974
- type: map_at_1000
value: 25.819
- type: map_at_3
value: 0.6649999999999999
- type: map_at_5
value: 1.076
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 78.07300000000001
- type: ndcg_at_100
value: 58.231
- type: ndcg_at_1000
value: 51.153000000000006
- type: ndcg_at_3
value: 81.123
- type: ndcg_at_5
value: 81.059
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 83.0
- type: precision_at_100
value: 59.38
- type: precision_at_1000
value: 22.55
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 14.069
- type: recall_at_1000
value: 47.678
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.161
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.809
- type: map_at_10
value: 10.394
- type: map_at_100
value: 16.598
- type: map_at_1000
value: 18.142
- type: map_at_3
value: 5.572
- type: map_at_5
value: 7.1370000000000005
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 46.564
- type: mrr_at_100
value: 47.469
- type: mrr_at_1000
value: 47.469
- type: mrr_at_3
value: 42.177
- type: mrr_at_5
value: 44.524
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 25.701
- type: ndcg_at_100
value: 37.532
- type: ndcg_at_1000
value: 48.757
- type: ndcg_at_3
value: 28.199999999999996
- type: ndcg_at_5
value: 25.987
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.9799999999999995
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.809
- type: recall_at_10
value: 16.887
- type: recall_at_100
value: 48.67
- type: recall_at_1000
value: 82.89699999999999
- type: recall_at_3
value: 6.521000000000001
- type: recall_at_5
value: 9.609
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.57860000000001
- type: ap
value: 13.82629211536393
- type: f1
value: 54.59860966183956
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.38030560271647
- type: f1
value: 59.69685552567865
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.4736717043405
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.92853311080646
- type: cos_sim_ap
value: 77.67872502591382
- type: cos_sim_f1
value: 70.33941236068895
- type: cos_sim_precision
value: 67.63273258645884
- type: cos_sim_recall
value: 73.27176781002639
- type: dot_accuracy
value: 85.79603027954938
- type: dot_ap
value: 73.73786190233379
- type: dot_f1
value: 67.3437901774235
- type: dot_precision
value: 65.67201604814443
- type: dot_recall
value: 69.10290237467018
- type: euclidean_accuracy
value: 86.94045419324074
- type: euclidean_ap
value: 77.6687791535167
- type: euclidean_f1
value: 70.47209214023542
- type: euclidean_precision
value: 67.7207492094381
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.87488823985218
- type: manhattan_ap
value: 77.63373392430728
- type: manhattan_f1
value: 70.40920716112532
- type: manhattan_precision
value: 68.31265508684864
- type: manhattan_recall
value: 72.63852242744063
- type: max_accuracy
value: 86.94045419324074
- type: max_ap
value: 77.67872502591382
- type: max_f1
value: 70.47209214023542
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.67155664221679
- type: cos_sim_ap
value: 85.64591703003417
- type: cos_sim_f1
value: 77.59531005352656
- type: cos_sim_precision
value: 73.60967184801382
- type: cos_sim_recall
value: 82.03726516784724
- type: dot_accuracy
value: 88.41541506578181
- type: dot_ap
value: 84.6482788957769
- type: dot_f1
value: 77.04748541466657
- type: dot_precision
value: 74.02440754931176
- type: dot_recall
value: 80.3279950723745
- type: euclidean_accuracy
value: 88.63080684596576
- type: euclidean_ap
value: 85.44570045321562
- type: euclidean_f1
value: 77.28769403336106
- type: euclidean_precision
value: 72.90600040958427
- type: euclidean_recall
value: 82.22975053895904
- type: manhattan_accuracy
value: 88.59393798269105
- type: manhattan_ap
value: 85.40271361038187
- type: manhattan_f1
value: 77.17606419344392
- type: manhattan_precision
value: 72.4447747078295
- type: manhattan_recall
value: 82.5685247921158
- type: max_accuracy
value: 88.67155664221679
- type: max_ap
value: 85.64591703003417
- type: max_f1
value: 77.59531005352656
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
For more details, please refer to our GitHub: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, and semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
- **Updated embedding models**: release `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and to enhance retrieval ability without instructions.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding instructions during fine-tuning.
- 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, then use the bge reranker to re-rank those 100 documents into the final top-3 results.
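The two-stage pipeline described above can be sketched as follows. This is an illustrative, model-agnostic sketch: the scoring functions below are toy word-overlap stand-ins for the actual bge embedding and reranker scores, chosen so the example runs without downloading any model.

```python
# Illustrative two-stage pipeline: a fast scorer retrieves a broad candidate
# set, then a slower (cross-encoder-style) scorer re-orders the top-k.
# Both scorers here are toy stand-ins, NOT the real bge models.

def retrieve_top_k(query, corpus, embed_score, k=100):
    """Stage 1: score every document cheaply and keep the top k."""
    return sorted(corpus, key=lambda doc: embed_score(query, doc), reverse=True)[:k]

def rerank_top_n(query, candidates, rerank_score, n=3):
    """Stage 2: apply the expensive scorer only to the small candidate set."""
    return sorted(candidates, key=lambda doc: rerank_score(query, doc), reverse=True)[:n]

# Toy scorers: word overlap for retrieval, length-penalized overlap for reranking.
def embed_score(q, d):
    return len(set(q.split()) & set(d.split()))

def rerank_score(q, d):
    return len(set(q.split()) & set(d.split())) / (1 + abs(len(d.split()) - len(q.split())))

corpus = ["pandas eat bamboo", "the stock market fell", "giant pandas live in China"]
candidates = retrieve_top_k("where do pandas live", corpus, embed_score, k=2)
final = rerank_top_n("where do pandas live", candidates, rerank_score, n=1)
```

In practice, stage 1 would compare precomputed passage embeddings against a query embedding, and stage 2 would call the cross-encoder on each (query, candidate) pair.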
All models have been uploaded to the Hugging Face Hub; you can find them at https://huggingface.co/BAAI.
If you cannot access the Hugging Face Hub, you can also download the models from https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high enough, we recommend using/fine-tuning the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models by contrastive learning with a temperature of 0.01,
the similarity distribution of the current BGE model is roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
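This threshold-based filtering can be sketched as below. The sketch is model-agnostic, and the threshold value 0.85 is purely illustrative; as noted above, the right value depends on the score distribution of your own data.

```python
# Filter sentence pairs by a similarity threshold chosen from your own data's
# score distribution. The 0.85 here is an illustrative value, not a universal one.
def filter_similar(pairs_with_scores, threshold=0.85):
    """Keep only pairs whose similarity score meets the threshold."""
    return [(a, b) for (a, b, s) in pairs_with_scores if s >= threshold]

scored = [
    ("how to cook rice", "rice cooking instructions", 0.91),
    ("how to cook rice", "today's weather forecast", 0.68),  # > 0.5, yet dissimilar
]
similar = filter_similar(scored)
```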
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For the `bge-*-v1.5` models, we improved retrieval ability when no instruction is used.
Omitting the instruction causes only a slight degradation in retrieval performance compared with using one,
so for convenience you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to documents/passages.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If that doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other installation methods.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for an s2p (short query to long passage) retrieval task, we suggest using encode_queries(),
# which automatically adds the instruction to each query;
# the corpus can still use encode() or encode_corpus(), since passages don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
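A minimal sketch of this GPU selection (the device IDs `0,1` are only an example):

```python
# CUDA_VISIBLE_DEVICES must be set before the CUDA context is created,
# so set it before importing/instantiating the model.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # restrict the process to GPUs 0 and 1
# os.environ["CUDA_VISIBLE_DEVICES"] = ""   # hide all GPUs (CPU only)

# from FlagEmbedding import FlagModel       # import the model after setting the variable
```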
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (for the instructions, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add it to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
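Because the embeddings above are L2-normalized, relevance can be scored with a plain matrix product. A minimal, self-contained sketch with toy vectors (standing in for real model output, which is much higher-dimensional) illustrates why:

```python
import numpy as np

# Toy "embeddings" standing in for model output; real bge-large vectors are 1024-dim.
q = np.array([[0.3, 0.4, 0.5], [0.1, 0.9, 0.2]])
p = np.array([[0.3, 0.4, 0.5], [0.9, 0.1, 0.1]])

# L2-normalize along the embedding dimension, as in the snippet above.
q = q / np.linalg.norm(q, axis=1, keepdims=True)
p = p / np.linalg.norm(p, axis=1, keepdims=True)

# After normalization, the matrix product of query and passage embeddings
# is exactly the pairwise cosine similarity, bounded in [-1, 1].
scores = q @ p.T
print(scores)
```

The first query and first passage are identical, so their score is exactly 1; all other entries stay within [-1, 1].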
### Usage for Reranker
Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using HuggingFace Transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
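The scores returned above are unbounded logits, as noted earlier. When a score in (0, 1) is more convenient, a common model-agnostic trick (an assumption here, not a documented FlagEmbedding API) is to squash the logit through a sigmoid; relative ordering is preserved:

```python
import math

def sigmoid(x: float) -> float:
    """Map an unbounded reranker logit to a (0, 1) relevance score."""
    return 1.0 / (1.0 + math.exp(-x))

# Example logits as a reranker might return; higher still means more relevant.
logits = [-2.0, 0.0, 3.5]
probs = [sigmoid(x) for x in logits]
print(probs)
```

Since the sigmoid is monotonic, ranking by the squashed scores gives the same order as ranking by the raw logits.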
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embedding, which consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-lingual retrieval tasks
## Train
### BAAI Embedding
We pre-train the models with [retromae](https://github.com/staoxiao/RetroMAE) and then train them on large-scale paired data using contrastive learning.
**You can fine-tune the embedding model on your own data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details on bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., a bi-encoder) but also more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
We train the cross-encoder on multilingual paired data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
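The retrieve-then-rerank pattern described here can be sketched end-to-end. This is a minimal illustration, not FlagEmbedding code: `overlap` is a hypothetical toy scoring function standing in for both the bi-encoder and the cross-encoder.

```python
from typing import Callable, List, Tuple

def retrieve_then_rerank(
    query: str,
    corpus: List[str],
    bi_encoder_score: Callable[[str, str], float],
    cross_encoder_score: Callable[[str, str], float],
    top_k: int = 10,
) -> List[Tuple[str, float]]:
    """Cheap bi-encoder pass over the whole corpus, then an accurate
    (but slower) cross-encoder pass over only the top-k candidates."""
    candidates = sorted(
        corpus, key=lambda d: bi_encoder_score(query, d), reverse=True
    )[:top_k]
    reranked = [(d, cross_encoder_score(query, d)) for d in candidates]
    return sorted(reranked, key=lambda item: item[1], reverse=True)

# Toy scoring function (hypothetical): simple word overlap.
def overlap(q: str, d: str) -> float:
    return float(len(set(q.split()) & set(d.split())))

docs = ["giant panda bear", "hi there", "panda species endemic to China"]
print(retrieve_then_rerank("what is panda", docs, overlap, overlap, top_k=2))
```

In a real pipeline, `bi_encoder_score` would be a dot product of bge embeddings over a vector index, and `cross_encoder_score` would be `FlagReranker.compute_score` on the surviving candidates.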
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation.
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
| [
"BEAR",
"BIOSSES",
"SCIFACT"
] |
corto-ai/nomic-embed-text-v1 | corto-ai | sentence-similarity | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"nomic_bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"custom_code",
"en",
"arxiv:2402.01613",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-06T05:04:53Z | 2024-05-06T05:18:56+00:00 | 740 | 2 | ---
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
model-index:
- name: epoch_0_model
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.8507462686567
- type: ap
value: 40.592189159090495
- type: f1
value: 71.01634655512476
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.51892500000001
- type: ap
value: 88.50346762975335
- type: f1
value: 91.50342077459624
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.364
- type: f1
value: 46.72708080922794
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.178
- type: map_at_10
value: 40.244
- type: map_at_100
value: 41.321999999999996
- type: map_at_1000
value: 41.331
- type: map_at_3
value: 35.016999999999996
- type: map_at_5
value: 37.99
- type: mrr_at_1
value: 25.605
- type: mrr_at_10
value: 40.422000000000004
- type: mrr_at_100
value: 41.507
- type: mrr_at_1000
value: 41.516
- type: mrr_at_3
value: 35.23
- type: mrr_at_5
value: 38.15
- type: ndcg_at_1
value: 25.178
- type: ndcg_at_10
value: 49.258
- type: ndcg_at_100
value: 53.776
- type: ndcg_at_1000
value: 53.995000000000005
- type: ndcg_at_3
value: 38.429
- type: ndcg_at_5
value: 43.803
- type: precision_at_1
value: 25.178
- type: precision_at_10
value: 7.831
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.121
- type: precision_at_5
value: 12.29
- type: recall_at_1
value: 25.178
- type: recall_at_10
value: 78.307
- type: recall_at_100
value: 97.866
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 48.364000000000004
- type: recall_at_5
value: 61.451
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.93034494751465
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.64579480054327
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.601310529222054
- type: mrr
value: 75.04484896451656
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.57797718095814
- type: cos_sim_spearman
value: 86.47064499110101
- type: euclidean_pearson
value: 87.4559602783142
- type: euclidean_spearman
value: 86.47064499110101
- type: manhattan_pearson
value: 87.7232764230245
- type: manhattan_spearman
value: 86.91222131777742
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.5422077922078
- type: f1
value: 84.47657456950589
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.48953561974464
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.75995857510105
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.008000000000003
- type: map_at_10
value: 39.51
- type: map_at_100
value: 40.841
- type: map_at_1000
value: 40.973
- type: map_at_3
value: 36.248999999999995
- type: map_at_5
value: 38.096999999999994
- type: mrr_at_1
value: 36.481
- type: mrr_at_10
value: 44.818000000000005
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.687
- type: mrr_at_3
value: 42.036
- type: mrr_at_5
value: 43.782
- type: ndcg_at_1
value: 36.481
- type: ndcg_at_10
value: 45.152
- type: ndcg_at_100
value: 50.449
- type: ndcg_at_1000
value: 52.76499999999999
- type: ndcg_at_3
value: 40.161
- type: ndcg_at_5
value: 42.577999999999996
- type: precision_at_1
value: 36.481
- type: precision_at_10
value: 8.369
- type: precision_at_100
value: 1.373
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 18.693
- type: precision_at_5
value: 13.533999999999999
- type: recall_at_1
value: 30.008000000000003
- type: recall_at_10
value: 56.108999999999995
- type: recall_at_100
value: 78.55499999999999
- type: recall_at_1000
value: 93.659
- type: recall_at_3
value: 41.754999999999995
- type: recall_at_5
value: 48.296
- type: map_at_1
value: 30.262
- type: map_at_10
value: 40.139
- type: map_at_100
value: 41.394
- type: map_at_1000
value: 41.526
- type: map_at_3
value: 37.155
- type: map_at_5
value: 38.785
- type: mrr_at_1
value: 38.153
- type: mrr_at_10
value: 46.369
- type: mrr_at_100
value: 47.072
- type: mrr_at_1000
value: 47.111999999999995
- type: mrr_at_3
value: 44.268
- type: mrr_at_5
value: 45.389
- type: ndcg_at_1
value: 38.153
- type: ndcg_at_10
value: 45.925
- type: ndcg_at_100
value: 50.394000000000005
- type: ndcg_at_1000
value: 52.37500000000001
- type: ndcg_at_3
value: 41.754000000000005
- type: ndcg_at_5
value: 43.574
- type: precision_at_1
value: 38.153
- type: precision_at_10
value: 8.796
- type: precision_at_100
value: 1.432
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 20.318
- type: precision_at_5
value: 14.395
- type: recall_at_1
value: 30.262
- type: recall_at_10
value: 55.72200000000001
- type: recall_at_100
value: 74.97500000000001
- type: recall_at_1000
value: 87.342
- type: recall_at_3
value: 43.129
- type: recall_at_5
value: 48.336
- type: map_at_1
value: 39.951
- type: map_at_10
value: 51.248000000000005
- type: map_at_100
value: 52.188
- type: map_at_1000
value: 52.247
- type: map_at_3
value: 48.211
- type: map_at_5
value: 49.797000000000004
- type: mrr_at_1
value: 45.329
- type: mrr_at_10
value: 54.749
- type: mrr_at_100
value: 55.367999999999995
- type: mrr_at_1000
value: 55.400000000000006
- type: mrr_at_3
value: 52.382
- type: mrr_at_5
value: 53.649
- type: ndcg_at_1
value: 45.329
- type: ndcg_at_10
value: 56.847
- type: ndcg_at_100
value: 60.738
- type: ndcg_at_1000
value: 61.976
- type: ndcg_at_3
value: 51.59
- type: ndcg_at_5
value: 53.915
- type: precision_at_1
value: 45.329
- type: precision_at_10
value: 8.959
- type: precision_at_100
value: 1.187
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 22.612
- type: precision_at_5
value: 15.273
- type: recall_at_1
value: 39.951
- type: recall_at_10
value: 70.053
- type: recall_at_100
value: 86.996
- type: recall_at_1000
value: 95.707
- type: recall_at_3
value: 56.032000000000004
- type: recall_at_5
value: 61.629999999999995
- type: map_at_1
value: 25.566
- type: map_at_10
value: 33.207
- type: map_at_100
value: 34.166000000000004
- type: map_at_1000
value: 34.245
- type: map_at_3
value: 30.94
- type: map_at_5
value: 32.01
- type: mrr_at_1
value: 27.345000000000002
- type: mrr_at_10
value: 35.193000000000005
- type: mrr_at_100
value: 35.965
- type: mrr_at_1000
value: 36.028999999999996
- type: mrr_at_3
value: 32.806000000000004
- type: mrr_at_5
value: 34.021
- type: ndcg_at_1
value: 27.345000000000002
- type: ndcg_at_10
value: 37.891999999999996
- type: ndcg_at_100
value: 42.664
- type: ndcg_at_1000
value: 44.757000000000005
- type: ndcg_at_3
value: 33.123000000000005
- type: ndcg_at_5
value: 35.035
- type: precision_at_1
value: 27.345000000000002
- type: precision_at_10
value: 5.763
- type: precision_at_100
value: 0.859
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 13.71
- type: precision_at_5
value: 9.401
- type: recall_at_1
value: 25.566
- type: recall_at_10
value: 50.563
- type: recall_at_100
value: 72.86399999999999
- type: recall_at_1000
value: 88.68599999999999
- type: recall_at_3
value: 37.43
- type: recall_at_5
value: 41.894999999999996
- type: map_at_1
value: 16.663
- type: map_at_10
value: 23.552
- type: map_at_100
value: 24.538
- type: map_at_1000
value: 24.661
- type: map_at_3
value: 21.085
- type: map_at_5
value: 22.391
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 27.643
- type: mrr_at_100
value: 28.499999999999996
- type: mrr_at_1000
value: 28.582
- type: mrr_at_3
value: 25.083
- type: mrr_at_5
value: 26.544
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 28.272000000000002
- type: ndcg_at_100
value: 33.353
- type: ndcg_at_1000
value: 36.454
- type: ndcg_at_3
value: 23.579
- type: ndcg_at_5
value: 25.685000000000002
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 5.187
- type: precision_at_100
value: 0.897
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 10.987
- type: precision_at_5
value: 8.06
- type: recall_at_1
value: 16.663
- type: recall_at_10
value: 38.808
- type: recall_at_100
value: 61.305
- type: recall_at_1000
value: 83.571
- type: recall_at_3
value: 25.907999999999998
- type: recall_at_5
value: 31.214
- type: map_at_1
value: 27.695999999999998
- type: map_at_10
value: 37.018
- type: map_at_100
value: 38.263000000000005
- type: map_at_1000
value: 38.371
- type: map_at_3
value: 34.226
- type: map_at_5
value: 35.809999999999995
- type: mrr_at_1
value: 32.916000000000004
- type: mrr_at_10
value: 42.067
- type: mrr_at_100
value: 42.925000000000004
- type: mrr_at_1000
value: 42.978
- type: mrr_at_3
value: 39.637
- type: mrr_at_5
value: 41.134
- type: ndcg_at_1
value: 32.916000000000004
- type: ndcg_at_10
value: 42.539
- type: ndcg_at_100
value: 47.873
- type: ndcg_at_1000
value: 50.08200000000001
- type: ndcg_at_3
value: 37.852999999999994
- type: ndcg_at_5
value: 40.201
- type: precision_at_1
value: 32.916000000000004
- type: precision_at_10
value: 7.5840000000000005
- type: precision_at_100
value: 1.199
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 17.485
- type: precision_at_5
value: 12.512
- type: recall_at_1
value: 27.695999999999998
- type: recall_at_10
value: 53.638
- type: recall_at_100
value: 76.116
- type: recall_at_1000
value: 91.069
- type: recall_at_3
value: 41.13
- type: recall_at_5
value: 46.872
- type: map_at_1
value: 24.108
- type: map_at_10
value: 33.372
- type: map_at_100
value: 34.656
- type: map_at_1000
value: 34.768
- type: map_at_3
value: 30.830999999999996
- type: map_at_5
value: 32.204
- type: mrr_at_1
value: 29.110000000000003
- type: mrr_at_10
value: 37.979
- type: mrr_at_100
value: 38.933
- type: mrr_at_1000
value: 38.988
- type: mrr_at_3
value: 35.731
- type: mrr_at_5
value: 36.963
- type: ndcg_at_1
value: 29.110000000000003
- type: ndcg_at_10
value: 38.635000000000005
- type: ndcg_at_100
value: 44.324999999999996
- type: ndcg_at_1000
value: 46.747
- type: ndcg_at_3
value: 34.37
- type: ndcg_at_5
value: 36.228
- type: precision_at_1
value: 29.110000000000003
- type: precision_at_10
value: 6.963
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.400000000000002
- type: precision_at_5
value: 11.552999999999999
- type: recall_at_1
value: 24.108
- type: recall_at_10
value: 49.597
- type: recall_at_100
value: 73.88900000000001
- type: recall_at_1000
value: 90.62400000000001
- type: recall_at_3
value: 37.662
- type: recall_at_5
value: 42.565
- type: map_at_1
value: 25.00791666666667
- type: map_at_10
value: 33.287749999999996
- type: map_at_100
value: 34.41141666666667
- type: map_at_1000
value: 34.52583333333333
- type: map_at_3
value: 30.734416666666668
- type: map_at_5
value: 32.137166666666666
- type: mrr_at_1
value: 29.305666666666664
- type: mrr_at_10
value: 37.22966666666666
- type: mrr_at_100
value: 38.066583333333334
- type: mrr_at_1000
value: 38.12616666666667
- type: mrr_at_3
value: 34.92275
- type: mrr_at_5
value: 36.23333333333334
- type: ndcg_at_1
value: 29.305666666666664
- type: ndcg_at_10
value: 38.25533333333333
- type: ndcg_at_100
value: 43.25266666666666
- type: ndcg_at_1000
value: 45.63583333333334
- type: ndcg_at_3
value: 33.777166666666666
- type: ndcg_at_5
value: 35.85
- type: precision_at_1
value: 29.305666666666664
- type: precision_at_10
value: 6.596416666666667
- type: precision_at_100
value: 1.0784166666666668
- type: precision_at_1000
value: 0.14666666666666664
- type: precision_at_3
value: 15.31075
- type: precision_at_5
value: 10.830916666666667
- type: recall_at_1
value: 25.00791666666667
- type: recall_at_10
value: 49.10933333333333
- type: recall_at_100
value: 71.09216666666667
- type: recall_at_1000
value: 87.77725000000001
- type: recall_at_3
value: 36.660916666666665
- type: recall_at_5
value: 41.94149999999999
- type: map_at_1
value: 23.521
- type: map_at_10
value: 30.043
- type: map_at_100
value: 30.936000000000003
- type: map_at_1000
value: 31.022
- type: map_at_3
value: 27.926000000000002
- type: map_at_5
value: 29.076999999999998
- type: mrr_at_1
value: 26.227
- type: mrr_at_10
value: 32.822
- type: mrr_at_100
value: 33.61
- type: mrr_at_1000
value: 33.672000000000004
- type: mrr_at_3
value: 30.776999999999997
- type: mrr_at_5
value: 31.866
- type: ndcg_at_1
value: 26.227
- type: ndcg_at_10
value: 34.041
- type: ndcg_at_100
value: 38.394
- type: ndcg_at_1000
value: 40.732
- type: ndcg_at_3
value: 30.037999999999997
- type: ndcg_at_5
value: 31.845000000000002
- type: precision_at_1
value: 26.227
- type: precision_at_10
value: 5.244999999999999
- type: precision_at_100
value: 0.808
- type: precision_at_1000
value: 0.107
- type: precision_at_3
value: 12.679000000000002
- type: precision_at_5
value: 8.773
- type: recall_at_1
value: 23.521
- type: recall_at_10
value: 43.633
- type: recall_at_100
value: 63.126000000000005
- type: recall_at_1000
value: 80.765
- type: recall_at_3
value: 32.614
- type: recall_at_5
value: 37.15
- type: map_at_1
value: 16.236
- type: map_at_10
value: 22.898
- type: map_at_100
value: 23.878
- type: map_at_1000
value: 24.009
- type: map_at_3
value: 20.87
- type: map_at_5
value: 22.025
- type: mrr_at_1
value: 19.339000000000002
- type: mrr_at_10
value: 26.382
- type: mrr_at_100
value: 27.245
- type: mrr_at_1000
value: 27.33
- type: mrr_at_3
value: 24.386
- type: mrr_at_5
value: 25.496000000000002
- type: ndcg_at_1
value: 19.339000000000002
- type: ndcg_at_10
value: 27.139999999999997
- type: ndcg_at_100
value: 31.944
- type: ndcg_at_1000
value: 35.077999999999996
- type: ndcg_at_3
value: 23.424
- type: ndcg_at_5
value: 25.188
- type: precision_at_1
value: 19.339000000000002
- type: precision_at_10
value: 4.8309999999999995
- type: precision_at_100
value: 0.845
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 10.874
- type: precision_at_5
value: 7.825
- type: recall_at_1
value: 16.236
- type: recall_at_10
value: 36.513
- type: recall_at_100
value: 57.999
- type: recall_at_1000
value: 80.512
- type: recall_at_3
value: 26.179999999999996
- type: recall_at_5
value: 30.712
- type: map_at_1
value: 24.11
- type: map_at_10
value: 31.566
- type: map_at_100
value: 32.647
- type: map_at_1000
value: 32.753
- type: map_at_3
value: 29.24
- type: map_at_5
value: 30.564999999999998
- type: mrr_at_1
value: 28.265
- type: mrr_at_10
value: 35.504000000000005
- type: mrr_at_100
value: 36.436
- type: mrr_at_1000
value: 36.503
- type: mrr_at_3
value: 33.349000000000004
- type: mrr_at_5
value: 34.622
- type: ndcg_at_1
value: 28.265
- type: ndcg_at_10
value: 36.192
- type: ndcg_at_100
value: 41.388000000000005
- type: ndcg_at_1000
value: 43.948
- type: ndcg_at_3
value: 31.959
- type: ndcg_at_5
value: 33.998
- type: precision_at_1
value: 28.265
- type: precision_at_10
value: 5.989
- type: precision_at_100
value: 0.9650000000000001
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 14.335
- type: precision_at_5
value: 10.112
- type: recall_at_1
value: 24.11
- type: recall_at_10
value: 46.418
- type: recall_at_100
value: 69.314
- type: recall_at_1000
value: 87.397
- type: recall_at_3
value: 34.724
- type: recall_at_5
value: 39.925
- type: map_at_1
value: 22.091
- type: map_at_10
value: 29.948999999999998
- type: map_at_100
value: 31.502000000000002
- type: map_at_1000
value: 31.713
- type: map_at_3
value: 27.464
- type: map_at_5
value: 28.968
- type: mrr_at_1
value: 26.482
- type: mrr_at_10
value: 34.009
- type: mrr_at_100
value: 35.081
- type: mrr_at_1000
value: 35.138000000000005
- type: mrr_at_3
value: 31.785000000000004
- type: mrr_at_5
value: 33.178999999999995
- type: ndcg_at_1
value: 26.482
- type: ndcg_at_10
value: 35.008
- type: ndcg_at_100
value: 41.272999999999996
- type: ndcg_at_1000
value: 43.972
- type: ndcg_at_3
value: 30.804
- type: ndcg_at_5
value: 33.046
- type: precision_at_1
value: 26.482
- type: precision_at_10
value: 6.462
- type: precision_at_100
value: 1.431
- type: precision_at_1000
value: 0.22899999999999998
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.474
- type: recall_at_1
value: 22.091
- type: recall_at_10
value: 45.125
- type: recall_at_100
value: 72.313
- type: recall_at_1000
value: 89.503
- type: recall_at_3
value: 33.158
- type: recall_at_5
value: 39.086999999999996
- type: map_at_1
value: 19.883
- type: map_at_10
value: 26.951000000000004
- type: map_at_100
value: 27.927999999999997
- type: map_at_1000
value: 28.022000000000002
- type: map_at_3
value: 24.616
- type: map_at_5
value: 25.917
- type: mrr_at_1
value: 21.996
- type: mrr_at_10
value: 29.221000000000004
- type: mrr_at_100
value: 30.024
- type: mrr_at_1000
value: 30.095
- type: mrr_at_3
value: 26.833000000000002
- type: mrr_at_5
value: 28.155
- type: ndcg_at_1
value: 21.996
- type: ndcg_at_10
value: 31.421
- type: ndcg_at_100
value: 36.237
- type: ndcg_at_1000
value: 38.744
- type: ndcg_at_3
value: 26.671
- type: ndcg_at_5
value: 28.907
- type: precision_at_1
value: 21.996
- type: precision_at_10
value: 5.009
- type: precision_at_100
value: 0.799
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 11.275
- type: precision_at_5
value: 8.059
- type: recall_at_1
value: 19.883
- type: recall_at_10
value: 43.132999999999996
- type: recall_at_100
value: 65.654
- type: recall_at_1000
value: 84.492
- type: recall_at_3
value: 30.209000000000003
- type: recall_at_5
value: 35.616
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.756
- type: map_at_10
value: 30.378
- type: map_at_100
value: 32.537
- type: map_at_1000
value: 32.717
- type: map_at_3
value: 25.599
- type: map_at_5
value: 28.372999999999998
- type: mrr_at_1
value: 41.303
- type: mrr_at_10
value: 53.483999999999995
- type: mrr_at_100
value: 54.106
- type: mrr_at_1000
value: 54.127
- type: mrr_at_3
value: 50.315
- type: mrr_at_5
value: 52.396
- type: ndcg_at_1
value: 41.303
- type: ndcg_at_10
value: 40.503
- type: ndcg_at_100
value: 47.821000000000005
- type: ndcg_at_1000
value: 50.788
- type: ndcg_at_3
value: 34.364
- type: ndcg_at_5
value: 36.818
- type: precision_at_1
value: 41.303
- type: precision_at_10
value: 12.463000000000001
- type: precision_at_100
value: 2.037
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 25.798
- type: precision_at_5
value: 19.896
- type: recall_at_1
value: 17.756
- type: recall_at_10
value: 46.102
- type: recall_at_100
value: 70.819
- type: recall_at_1000
value: 87.21799999999999
- type: recall_at_3
value: 30.646
- type: recall_at_5
value: 38.022
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.033
- type: map_at_10
value: 20.584
- type: map_at_100
value: 29.518
- type: map_at_1000
value: 31.186000000000003
- type: map_at_3
value: 14.468
- type: map_at_5
value: 17.177
- type: mrr_at_1
value: 69.75
- type: mrr_at_10
value: 77.025
- type: mrr_at_100
value: 77.36699999999999
- type: mrr_at_1000
value: 77.373
- type: mrr_at_3
value: 75.583
- type: mrr_at_5
value: 76.396
- type: ndcg_at_1
value: 58.5
- type: ndcg_at_10
value: 45.033
- type: ndcg_at_100
value: 49.071
- type: ndcg_at_1000
value: 56.056
- type: ndcg_at_3
value: 49.936
- type: ndcg_at_5
value: 47.471999999999994
- type: precision_at_1
value: 69.75
- type: precision_at_10
value: 35.775
- type: precision_at_100
value: 11.594999999999999
- type: precision_at_1000
value: 2.062
- type: precision_at_3
value: 52.5
- type: precision_at_5
value: 45.300000000000004
- type: recall_at_1
value: 9.033
- type: recall_at_10
value: 26.596999999999998
- type: recall_at_100
value: 54.607000000000006
- type: recall_at_1000
value: 76.961
- type: recall_at_3
value: 15.754999999999999
- type: recall_at_5
value: 20.033
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.345000000000006
- type: f1
value: 43.4514918068706
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.29100000000001
- type: map_at_10
value: 81.059
- type: map_at_100
value: 81.341
- type: map_at_1000
value: 81.355
- type: map_at_3
value: 79.74799999999999
- type: map_at_5
value: 80.612
- type: mrr_at_1
value: 76.40299999999999
- type: mrr_at_10
value: 84.615
- type: mrr_at_100
value: 84.745
- type: mrr_at_1000
value: 84.748
- type: mrr_at_3
value: 83.776
- type: mrr_at_5
value: 84.343
- type: ndcg_at_1
value: 76.40299999999999
- type: ndcg_at_10
value: 84.981
- type: ndcg_at_100
value: 86.00999999999999
- type: ndcg_at_1000
value: 86.252
- type: ndcg_at_3
value: 82.97
- type: ndcg_at_5
value: 84.152
- type: precision_at_1
value: 76.40299999999999
- type: precision_at_10
value: 10.446
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 32.147999999999996
- type: precision_at_5
value: 20.135
- type: recall_at_1
value: 71.29100000000001
- type: recall_at_10
value: 93.232
- type: recall_at_100
value: 97.363
- type: recall_at_1000
value: 98.905
- type: recall_at_3
value: 87.893
- type: recall_at_5
value: 90.804
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.667
- type: map_at_10
value: 30.853
- type: map_at_100
value: 32.494
- type: map_at_1000
value: 32.677
- type: map_at_3
value: 26.91
- type: map_at_5
value: 29.099000000000004
- type: mrr_at_1
value: 37.191
- type: mrr_at_10
value: 46.171
- type: mrr_at_100
value: 47.056
- type: mrr_at_1000
value: 47.099000000000004
- type: mrr_at_3
value: 44.059
- type: mrr_at_5
value: 45.147
- type: ndcg_at_1
value: 37.191
- type: ndcg_at_10
value: 38.437
- type: ndcg_at_100
value: 44.62
- type: ndcg_at_1000
value: 47.795
- type: ndcg_at_3
value: 35.003
- type: ndcg_at_5
value: 36.006
- type: precision_at_1
value: 37.191
- type: precision_at_10
value: 10.586
- type: precision_at_100
value: 1.688
- type: precision_at_1000
value: 0.22699999999999998
- type: precision_at_3
value: 23.302
- type: precision_at_5
value: 17.006
- type: recall_at_1
value: 18.667
- type: recall_at_10
value: 45.367000000000004
- type: recall_at_100
value: 68.207
- type: recall_at_1000
value: 87.072
- type: recall_at_3
value: 32.129000000000005
- type: recall_at_5
value: 37.719
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.494
- type: map_at_10
value: 66.223
- type: map_at_100
value: 67.062
- type: map_at_1000
value: 67.11500000000001
- type: map_at_3
value: 62.867
- type: map_at_5
value: 64.994
- type: mrr_at_1
value: 78.987
- type: mrr_at_10
value: 84.585
- type: mrr_at_100
value: 84.773
- type: mrr_at_1000
value: 84.77900000000001
- type: mrr_at_3
value: 83.592
- type: mrr_at_5
value: 84.235
- type: ndcg_at_1
value: 78.987
- type: ndcg_at_10
value: 73.64
- type: ndcg_at_100
value: 76.519
- type: ndcg_at_1000
value: 77.51
- type: ndcg_at_3
value: 68.893
- type: ndcg_at_5
value: 71.585
- type: precision_at_1
value: 78.987
- type: precision_at_10
value: 15.529000000000002
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 44.808
- type: precision_at_5
value: 29.006999999999998
- type: recall_at_1
value: 39.494
- type: recall_at_10
value: 77.643
- type: recall_at_100
value: 88.825
- type: recall_at_1000
value: 95.321
- type: recall_at_3
value: 67.211
- type: recall_at_5
value: 72.519
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.55959999999999
- type: ap
value: 80.7246500384617
- type: f1
value: 85.52336485065454
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.631
- type: map_at_10
value: 36.264
- type: map_at_100
value: 37.428
- type: map_at_1000
value: 37.472
- type: map_at_3
value: 32.537
- type: map_at_5
value: 34.746
- type: mrr_at_1
value: 24.312
- type: mrr_at_10
value: 36.858000000000004
- type: mrr_at_100
value: 37.966
- type: mrr_at_1000
value: 38.004
- type: mrr_at_3
value: 33.188
- type: mrr_at_5
value: 35.367
- type: ndcg_at_1
value: 24.312
- type: ndcg_at_10
value: 43.126999999999995
- type: ndcg_at_100
value: 48.642
- type: ndcg_at_1000
value: 49.741
- type: ndcg_at_3
value: 35.589
- type: ndcg_at_5
value: 39.515
- type: precision_at_1
value: 24.312
- type: precision_at_10
value: 6.699
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.153
- type: precision_at_5
value: 11.065999999999999
- type: recall_at_1
value: 23.631
- type: recall_at_10
value: 64.145
- type: recall_at_100
value: 89.41
- type: recall_at_1000
value: 97.83500000000001
- type: recall_at_3
value: 43.769000000000005
- type: recall_at_5
value: 53.169
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.4108527131783
- type: f1
value: 93.1415880261038
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.24806201550388
- type: f1
value: 60.531916308197175
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.71553463349024
- type: f1
value: 71.70753174900791
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.79757901815736
- type: f1
value: 77.83719850433258
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.74193296622113
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.64257594108566
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.811018518883625
- type: mrr
value: 31.910376577445003
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.409
- type: map_at_10
value: 13.093
- type: map_at_100
value: 16.256999999999998
- type: map_at_1000
value: 17.617
- type: map_at_3
value: 9.555
- type: map_at_5
value: 11.428
- type: mrr_at_1
value: 45.201
- type: mrr_at_10
value: 54.179
- type: mrr_at_100
value: 54.812000000000005
- type: mrr_at_1000
value: 54.840999999999994
- type: mrr_at_3
value: 51.909000000000006
- type: mrr_at_5
value: 53.519000000000005
- type: ndcg_at_1
value: 43.189
- type: ndcg_at_10
value: 35.028
- type: ndcg_at_100
value: 31.226
- type: ndcg_at_1000
value: 39.678000000000004
- type: ndcg_at_3
value: 40.596
- type: ndcg_at_5
value: 38.75
- type: precision_at_1
value: 44.582
- type: precision_at_10
value: 25.974999999999998
- type: precision_at_100
value: 7.793
- type: precision_at_1000
value: 2.036
- type: precision_at_3
value: 38.493
- type: precision_at_5
value: 33.994
- type: recall_at_1
value: 5.409
- type: recall_at_10
value: 16.875999999999998
- type: recall_at_100
value: 30.316
- type: recall_at_1000
value: 60.891
- type: recall_at_3
value: 10.688
- type: recall_at_5
value: 13.832
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.375
- type: map_at_10
value: 51.991
- type: map_at_100
value: 52.91400000000001
- type: map_at_1000
value: 52.93600000000001
- type: map_at_3
value: 48.014
- type: map_at_5
value: 50.381
- type: mrr_at_1
value: 40.759
- type: mrr_at_10
value: 54.617000000000004
- type: mrr_at_100
value: 55.301
- type: mrr_at_1000
value: 55.315000000000005
- type: mrr_at_3
value: 51.516
- type: mrr_at_5
value: 53.435
- type: ndcg_at_1
value: 40.759
- type: ndcg_at_10
value: 59.384
- type: ndcg_at_100
value: 63.157
- type: ndcg_at_1000
value: 63.654999999999994
- type: ndcg_at_3
value: 52.114000000000004
- type: ndcg_at_5
value: 55.986000000000004
- type: precision_at_1
value: 40.759
- type: precision_at_10
value: 9.411999999999999
- type: precision_at_100
value: 1.153
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.329
- type: precision_at_5
value: 16.256999999999998
- type: recall_at_1
value: 36.375
- type: recall_at_10
value: 79.053
- type: recall_at_100
value: 95.167
- type: recall_at_1000
value: 98.82
- type: recall_at_3
value: 60.475
- type: recall_at_5
value: 69.327
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.256
- type: map_at_10
value: 83.8
- type: map_at_100
value: 84.425
- type: map_at_1000
value: 84.444
- type: map_at_3
value: 80.906
- type: map_at_5
value: 82.717
- type: mrr_at_1
value: 80.97999999999999
- type: mrr_at_10
value: 87.161
- type: mrr_at_100
value: 87.262
- type: mrr_at_1000
value: 87.263
- type: mrr_at_3
value: 86.175
- type: mrr_at_5
value: 86.848
- type: ndcg_at_1
value: 80.97999999999999
- type: ndcg_at_10
value: 87.697
- type: ndcg_at_100
value: 88.959
- type: ndcg_at_1000
value: 89.09899999999999
- type: ndcg_at_3
value: 84.83800000000001
- type: ndcg_at_5
value: 86.401
- type: precision_at_1
value: 80.97999999999999
- type: precision_at_10
value: 13.261000000000001
- type: precision_at_100
value: 1.5150000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 37.01
- type: precision_at_5
value: 24.298000000000002
- type: recall_at_1
value: 70.256
- type: recall_at_10
value: 94.935
- type: recall_at_100
value: 99.274
- type: recall_at_1000
value: 99.928
- type: recall_at_3
value: 86.602
- type: recall_at_5
value: 91.133
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.322692497613104
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.895813503775074
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.338
- type: map_at_10
value: 10.767
- type: map_at_100
value: 12.537999999999998
- type: map_at_1000
value: 12.803999999999998
- type: map_at_3
value: 7.788
- type: map_at_5
value: 9.302000000000001
- type: mrr_at_1
value: 21.4
- type: mrr_at_10
value: 31.637999999999998
- type: mrr_at_100
value: 32.688
- type: mrr_at_1000
value: 32.756
- type: mrr_at_3
value: 28.433000000000003
- type: mrr_at_5
value: 30.178
- type: ndcg_at_1
value: 21.4
- type: ndcg_at_10
value: 18.293
- type: ndcg_at_100
value: 25.274
- type: ndcg_at_1000
value: 30.284
- type: ndcg_at_3
value: 17.391000000000002
- type: ndcg_at_5
value: 15.146999999999998
- type: precision_at_1
value: 21.4
- type: precision_at_10
value: 9.48
- type: precision_at_100
value: 1.949
- type: precision_at_1000
value: 0.316
- type: precision_at_3
value: 16.167
- type: precision_at_5
value: 13.22
- type: recall_at_1
value: 4.338
- type: recall_at_10
value: 19.213
- type: recall_at_100
value: 39.562999999999995
- type: recall_at_1000
value: 64.08
- type: recall_at_3
value: 9.828000000000001
- type: recall_at_5
value: 13.383000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.42568163642142
- type: cos_sim_spearman
value: 78.5797159641342
- type: euclidean_pearson
value: 80.22151260811604
- type: euclidean_spearman
value: 78.5797151953878
- type: manhattan_pearson
value: 80.21224215864788
- type: manhattan_spearman
value: 78.55641478381344
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.44020710812569
- type: cos_sim_spearman
value: 78.91631735081286
- type: euclidean_pearson
value: 81.64188964182102
- type: euclidean_spearman
value: 78.91633286881678
- type: manhattan_pearson
value: 81.69294748512496
- type: manhattan_spearman
value: 78.93438558002656
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.27165426412311
- type: cos_sim_spearman
value: 85.40429140249618
- type: euclidean_pearson
value: 84.7509580724893
- type: euclidean_spearman
value: 85.40429140249618
- type: manhattan_pearson
value: 84.76488289321308
- type: manhattan_spearman
value: 85.4256793698708
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.138851760732
- type: cos_sim_spearman
value: 81.64101363896586
- type: euclidean_pearson
value: 82.55165038934942
- type: euclidean_spearman
value: 81.64105257080502
- type: manhattan_pearson
value: 82.52802949883335
- type: manhattan_spearman
value: 81.61255430718158
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.0654695484029
- type: cos_sim_spearman
value: 87.20408521902229
- type: euclidean_pearson
value: 86.8110651362115
- type: euclidean_spearman
value: 87.20408521902229
- type: manhattan_pearson
value: 86.77984656478691
- type: manhattan_spearman
value: 87.1719947099227
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.77823915496512
- type: cos_sim_spearman
value: 85.43566325729779
- type: euclidean_pearson
value: 84.5396956658821
- type: euclidean_spearman
value: 85.43566325729779
- type: manhattan_pearson
value: 84.5665398848169
- type: manhattan_spearman
value: 85.44375870303232
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.20030208471798
- type: cos_sim_spearman
value: 87.20485505076539
- type: euclidean_pearson
value: 88.10588324368722
- type: euclidean_spearman
value: 87.20485505076539
- type: manhattan_pearson
value: 87.92324770415183
- type: manhattan_spearman
value: 87.0571314561877
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.06093161604453
- type: cos_sim_spearman
value: 64.2163140357722
- type: euclidean_pearson
value: 65.27589680994006
- type: euclidean_spearman
value: 64.2163140357722
- type: manhattan_pearson
value: 65.45904383711101
- type: manhattan_spearman
value: 64.55404716679305
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.32976164578706
- type: cos_sim_spearman
value: 85.54302197678368
- type: euclidean_pearson
value: 85.26307149193056
- type: euclidean_spearman
value: 85.54302197678368
- type: manhattan_pearson
value: 85.26647282029371
- type: manhattan_spearman
value: 85.5316135265568
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 81.44675968318754
- type: mrr
value: 94.92741826075158
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 56.34400000000001
- type: map_at_10
value: 65.927
- type: map_at_100
value: 66.431
- type: map_at_1000
value: 66.461
- type: map_at_3
value: 63.529
- type: map_at_5
value: 64.818
- type: mrr_at_1
value: 59.333000000000006
- type: mrr_at_10
value: 67.54599999999999
- type: mrr_at_100
value: 67.892
- type: mrr_at_1000
value: 67.917
- type: mrr_at_3
value: 65.778
- type: mrr_at_5
value: 66.794
- type: ndcg_at_1
value: 59.333000000000006
- type: ndcg_at_10
value: 70.5
- type: ndcg_at_100
value: 72.688
- type: ndcg_at_1000
value: 73.483
- type: ndcg_at_3
value: 66.338
- type: ndcg_at_5
value: 68.265
- type: precision_at_1
value: 59.333000000000006
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.889
- type: precision_at_5
value: 16.866999999999997
- type: recall_at_1
value: 56.34400000000001
- type: recall_at_10
value: 82.789
- type: recall_at_100
value: 92.767
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 71.64399999999999
- type: recall_at_5
value: 76.322
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.75742574257426
- type: cos_sim_ap
value: 93.52081548447406
- type: cos_sim_f1
value: 87.33850129198966
- type: cos_sim_precision
value: 90.37433155080214
- type: cos_sim_recall
value: 84.5
- type: dot_accuracy
value: 99.75742574257426
- type: dot_ap
value: 93.52081548447406
- type: dot_f1
value: 87.33850129198966
- type: dot_precision
value: 90.37433155080214
- type: dot_recall
value: 84.5
- type: euclidean_accuracy
value: 99.75742574257426
- type: euclidean_ap
value: 93.52081548447406
- type: euclidean_f1
value: 87.33850129198966
- type: euclidean_precision
value: 90.37433155080214
- type: euclidean_recall
value: 84.5
- type: manhattan_accuracy
value: 99.75841584158415
- type: manhattan_ap
value: 93.4975678585854
- type: manhattan_f1
value: 87.26708074534162
- type: manhattan_precision
value: 90.45064377682404
- type: manhattan_recall
value: 84.3
- type: max_accuracy
value: 99.75841584158415
- type: max_ap
value: 93.52081548447406
- type: max_f1
value: 87.33850129198966
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 64.31437036686651
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.25569319007206
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.90474939720706
- type: mrr
value: 50.568115503777264
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.866828641244712
- type: cos_sim_spearman
value: 30.077555055873866
- type: dot_pearson
value: 29.866832988572266
- type: dot_spearman
value: 30.077555055873866
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.232
- type: map_at_10
value: 2.094
- type: map_at_100
value: 11.971
- type: map_at_1000
value: 28.158
- type: map_at_3
value: 0.688
- type: map_at_5
value: 1.114
- type: mrr_at_1
value: 88
- type: mrr_at_10
value: 93.4
- type: mrr_at_100
value: 93.4
- type: mrr_at_1000
value: 93.4
- type: mrr_at_3
value: 93
- type: mrr_at_5
value: 93.4
- type: ndcg_at_1
value: 84
- type: ndcg_at_10
value: 79.923
- type: ndcg_at_100
value: 61.17
- type: ndcg_at_1000
value: 53.03
- type: ndcg_at_3
value: 84.592
- type: ndcg_at_5
value: 82.821
- type: precision_at_1
value: 88
- type: precision_at_10
value: 85
- type: precision_at_100
value: 63.019999999999996
- type: precision_at_1000
value: 23.554
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.232
- type: recall_at_10
value: 2.255
- type: recall_at_100
value: 14.823
- type: recall_at_1000
value: 49.456
- type: recall_at_3
value: 0.718
- type: recall_at_5
value: 1.175
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.547
- type: map_at_10
value: 11.375
- type: map_at_100
value: 18.194
- type: map_at_1000
value: 19.749
- type: map_at_3
value: 5.825
- type: map_at_5
value: 8.581
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 51.32
- type: mrr_at_100
value: 51.747
- type: mrr_at_1000
value: 51.747
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 48.605
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 28.151
- type: ndcg_at_100
value: 39.438
- type: ndcg_at_1000
value: 50.769
- type: ndcg_at_3
value: 30.758999999999997
- type: ndcg_at_5
value: 30.366
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.041
- type: precision_at_1000
value: 1.555
- type: precision_at_3
value: 33.333
- type: precision_at_5
value: 31.837
- type: recall_at_1
value: 2.547
- type: recall_at_10
value: 18.19
- type: recall_at_100
value: 49.538
- type: recall_at_1000
value: 83.86
- type: recall_at_3
value: 7.329
- type: recall_at_5
value: 11.532
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.4952
- type: ap
value: 14.793362635531409
- type: f1
value: 55.204635551516915
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.5365025466893
- type: f1
value: 61.81742556334845
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.05531070301185
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.51725576682364
- type: cos_sim_ap
value: 75.2292304265163
- type: cos_sim_f1
value: 69.54022988505749
- type: cos_sim_precision
value: 63.65629110039457
- type: cos_sim_recall
value: 76.62269129287598
- type: dot_accuracy
value: 86.51725576682364
- type: dot_ap
value: 75.22922386081054
- type: dot_f1
value: 69.54022988505749
- type: dot_precision
value: 63.65629110039457
- type: dot_recall
value: 76.62269129287598
- type: euclidean_accuracy
value: 86.51725576682364
- type: euclidean_ap
value: 75.22925730473472
- type: euclidean_f1
value: 69.54022988505749
- type: euclidean_precision
value: 63.65629110039457
- type: euclidean_recall
value: 76.62269129287598
- type: manhattan_accuracy
value: 86.52321630804077
- type: manhattan_ap
value: 75.20608115037336
- type: manhattan_f1
value: 69.60000000000001
- type: manhattan_precision
value: 64.37219730941705
- type: manhattan_recall
value: 75.75197889182058
- type: max_accuracy
value: 86.52321630804077
- type: max_ap
value: 75.22925730473472
- type: max_f1
value: 69.60000000000001
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.34877944657896
- type: cos_sim_ap
value: 86.71257569277373
- type: cos_sim_f1
value: 79.10386355986088
- type: cos_sim_precision
value: 76.91468470434214
- type: cos_sim_recall
value: 81.4213119802895
- type: dot_accuracy
value: 89.34877944657896
- type: dot_ap
value: 86.71257133133368
- type: dot_f1
value: 79.10386355986088
- type: dot_precision
value: 76.91468470434214
- type: dot_recall
value: 81.4213119802895
- type: euclidean_accuracy
value: 89.34877944657896
- type: euclidean_ap
value: 86.71257651501476
- type: euclidean_f1
value: 79.10386355986088
- type: euclidean_precision
value: 76.91468470434214
- type: euclidean_recall
value: 81.4213119802895
- type: manhattan_accuracy
value: 89.35848177901967
- type: manhattan_ap
value: 86.69330615469126
- type: manhattan_f1
value: 79.13867741453949
- type: manhattan_precision
value: 76.78881807647741
- type: manhattan_recall
value: 81.63689559593472
- type: max_accuracy
value: 89.35848177901967
- type: max_ap
value: 86.71257651501476
- type: max_f1
value: 79.13867741453949
---
# nomic-embed-text-v1: A Reproducible Long Context (8192) Text Embedder
`nomic-embed-text-v1` is an 8192-context-length text encoder that surpasses OpenAI `text-embedding-ada-002` and `text-embedding-3-small` performance on both short- and long-context tasks.
| Name | SeqLen | MTEB | LoCo | Jina Long Context | Open Weights | Open Training Code | Open Data |
| :-------------------------------:| :----- | :-------- | :------: | :---------------: | :-----------: | :----------------: | :---------- |
| nomic-embed-text-v1 | 8192 | **62.39** |**85.53** | 54.16 | ✅ | ✅ | ✅ |
| jina-embeddings-v2-base-en | 8192 | 60.39 | 85.45 | 51.90 | ✅ | ❌ | ❌ |
| text-embedding-3-small | 8191 | 62.26 | 82.40 | **58.20** | ❌ | ❌ | ❌ |
| text-embedding-ada-002 | 8191 | 60.99 | 52.7 | 55.25 | ❌ | ❌ | ❌ |
## Hosted Inference API
The easiest way to get started with Nomic Embed is through the Nomic Embedding API.
Generating embeddings with the `nomic` Python client is as easy as:
```python
from nomic import embed
output = embed.text(
texts=['Nomic Embedding API', '#keepAIOpen'],
model='nomic-embed-text-v1',
task_type='search_document'
)
print(output)
```
For more information, see the [API reference](https://docs.nomic.ai/reference/endpoints/nomic-embed-text)
## Data Visualization
Click the Nomic Atlas map below to visualize a 5M sample of our contrastive pretraining data!
[](https://atlas.nomic.ai/map/nomic-text-embed-v1-5m-sample)
## Training Details
We train our embedder using a multi-stage training pipeline. Starting from a long-context [BERT model](https://huggingface.co/nomic-ai/nomic-bert-2048),
the first unsupervised contrastive stage trains on a dataset generated from weakly related text pairs, such as question-answer pairs from forums like StackExchange and Quora, title-body pairs from Amazon reviews, and summarizations from news articles.
In the second finetuning stage, higher-quality labeled datasets, such as search queries and answers from web searches, are leveraged. Data curation and hard-example mining are crucial in this stage.
For more details, see the Nomic Embed [Technical Report](https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf) and corresponding [blog post](https://blog.nomic.ai/posts/nomic-embed-text-v1).
The training data is released in its entirety. For more details, see the `contrastors` [repository](https://github.com/nomic-ai/contrastors).
## Usage
Note that `nomic-embed-text` requires task prefixes! The supported prefixes are `search_query`, `search_document`, `classification`, and `clustering`.
For retrieval applications, you should prepend `search_document: ` to all of your documents and `search_query: ` to all of your queries.
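The prefix convention can be wrapped in a tiny helper. Note that `add_prefix` below is our own illustrative function, not part of the `nomic` client or the model repository:

```python
# Hypothetical helper (not part of the `nomic` client) that enforces the
# task-prefix convention described above.
SUPPORTED_PREFIXES = {"search_query", "search_document", "classification", "clustering"}

def add_prefix(text: str, task_type: str) -> str:
    if task_type not in SUPPORTED_PREFIXES:
        raise ValueError(f"unsupported task_type: {task_type!r}")
    return f"{task_type}: {text}"

# Documents get `search_document: `, queries get `search_query: `.
doc = add_prefix("Nomic Embed is open source.", "search_document")
query = add_prefix("what is nomic embed?", "search_query")
print(doc)    # → search_document: Nomic Embed is open source.
print(query)  # → search_query: what is nomic embed?
```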
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']
embeddings = model.encode(sentences)
print(embeddings)
```
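Once you have normalized embeddings, retrieval reduces to a dot product. Here is a minimal sketch with synthetic 2-d vectors standing in for the output of `model.encode` above (real embeddings are higher-dimensional, but the ranking logic is the same):

```python
import numpy as np

def rank_documents(query_emb: np.ndarray, doc_embs: np.ndarray) -> list:
    # For L2-normalized embeddings, cosine similarity is just a dot product.
    scores = doc_embs @ query_emb
    # Indices of documents, best match first.
    return list(np.argsort(-scores))

# Synthetic, already-normalized embeddings for illustration only.
query = np.array([1.0, 0.0])
docs = np.array([[0.0, 1.0],    # orthogonal to the query
                 [1.0, 0.0],    # identical direction
                 [0.6, 0.8]])   # in between
print(rank_documents(query, docs))  # → [1, 2, 0]
```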
### Transformers
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True)
model.eval()
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = model(**encoded_input)
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings)
```
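To see what the masked mean pooling above computes, here is a standalone NumPy sketch on a toy batch (no model required; the numbers are made up for illustration). Padding tokens are masked out before averaging, exactly as in the `mean_pooling` function above:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    # Expand the mask to the embedding dimension, zero out padding tokens,
    # then average over the real (non-padding) tokens only.
    mask = attention_mask[..., None].astype(float)   # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid division by zero
    return summed / counts

# Toy batch: one sequence of three tokens, the last of which is padding.
emb = np.array([[[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(emb, mask))  # → [[2. 2.]]
```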
The model natively supports scaling of the sequence length past 2048 tokens. To do so, raise the tokenizer's maximum length and enable rotary scaling:
```diff
- tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
+ tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192)
- model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True)
+ model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True, rotary_scaling_factor=2)
```
### Transformers.js
```js
import { pipeline } from '@xenova/transformers';
// Create a feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'nomic-ai/nomic-embed-text-v1', {
quantized: false, // Comment out this line to use the quantized version
});
// Compute sentence embeddings
const texts = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
console.log(embeddings);
```
# Join the Nomic Community
- Nomic: [https://nomic.ai](https://nomic.ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
# Citation
If you find the model, dataset, or training code useful, please cite our work
```bibtex
@misc{nussbaum2024nomic,
title={Nomic Embed: Training a Reproducible Long Context Text Embedder},
author={Zach Nussbaum and John X. Morris and Brandon Duderstadt and Andriy Mulyar},
year={2024},
eprint={2402.01613},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"BIOSSES",
"SCIFACT"
] |
EleutherAI/pythia-1.4b-v0 | EleutherAI | text-generation | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-10-16T18:24:39Z | 2023-03-29T18:50:36+00:00 | 739 | 7 | ---
datasets:
- the_pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1.4B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1.4B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1.4B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-1.4B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1.4B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
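As a quick sanity check (our own arithmetic, not taken from the Pythia documentation), the checkpoint spacing and count above account exactly for the stated token total:

```python
# Checkpoints are saved every 2,097,152,000 tokens, and 143 are saved per model.
tokens_per_checkpoint = 2_097_152_000
num_checkpoints = 143
total_tokens = tokens_per_checkpoint * num_checkpoints
print(total_tokens)  # 299892736000, matching the stated training-token count
```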
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens were originally trained for 71,500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
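The renaming rule above can be sketched as a small helper (ours, for illustration; `actual_step` is not part of any Pythia tooling — the two batch sizes are 2,097,152 and 4,194,304 tokens, as in the table above):

```python
# Hypothetical helper: map a published checkpoint name like "step1000" to the
# optimizer step actually taken during training. 4M-token-batch models took
# half as many optimizer steps to consume the same number of tokens, so their
# published step numbers are double the true step count.
def actual_step(published_step: int, batch_size_tokens: int) -> int:
    if batch_size_tokens == 4 * 1024 * 1024:  # 4M-token batches
        return published_step // 2
    return published_step                      # 2M-token batches

print(actual_step(1000, 4 * 1024 * 1024))  # 500  (e.g. pythia-1.4b)
print(actual_step(1000, 2 * 1024 * 1024))  # 1000 (e.g. pythia-6.9b)
```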
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"SCIQ"
] |
bigscience/T0p | bigscience | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"arxiv:2110.08207",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z | 2022-06-21T01:23:09+00:00 | 738 | 5 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: A is the son of B's uncle. What is the family relationship between A and
B?
- text: 'Reorder the words in this sentence: justin and name bieber years is my am
I 27 old.'
- text: "Task: copy but say the opposite.\n PSG won its match against Barca."
- text: 'Is this review positive or negative? Review: Best cast iron skillet you will
every buy.'
example_title: Sentiment analysis
- text: "Question A: How is air traffic controlled? \nQuestion B: How do you become\
\ an air traffic controller?\nPick one: these questions are duplicates or not\
\ duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday.\
\ He chose her because she had foreign affairs experience as a former First Lady.\
\ \nIn the previous sentence, decide who 'her' is referring to."
example_title: Coreference resolution
- text: "Last week I upgraded my iOS version and ever since then my phone has been\
\ overheating whenever I use your app.\n Select the category for the above sentence\
\ from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach\
\ was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit,\
\ Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences\
\ 1 and 2 have the same meaning?"
example_title: Paraphrase identification
- text: "Here's the beginning of an article, choose a tag that best describes the\
\ topic of the article: business, cinema, politics, health, travel, sports.\n\n\
\ The best and worst of 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN)\
\ Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds.\
\ For a Cold War creation, Ian Fleming's suave spy has certainly gotten around,\
\ but despite different guises in the tuxedo and occasional scuba gear, when it\
\ comes to Bond ratings, there really shouldn't be much argument about who wore\
\ it best."
- text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1,\
\ LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different\
\ things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out.\
\ Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\
\n Sentence A: you can leave the books on the table over there.\n Sentence B:\
\ the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book,\
\ a blue book, and a black book.\n The red book is to the right of the gray book.\
\ The black book is to the left of the blue book. The blue book is to the left\
\ of the gray book. The purple book is the second from the right.\n\n Which book\
\ is the leftmost book?"
example_title: Logic puzzles
- text: "The two men running to become New York City's next mayor will face off in\
\ their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough\
\ president and a former New York City police captain, is widely expected to win\
\ the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era\
\ Guardian Angels anti-crime patrol.\n\n Who are the men running for mayor?"
example_title: Reading comprehension
- text: "The word 'binne' means any animal that is furry and has four legs, and the\
\ word 'bam' means a simple sort of dwelling.\n\n Which of the following best\
\ characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence\
\ 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence\
\ 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places\
\ where people live."
---
**How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"!
**Official repository**: [bigscience-workshop/t-zero](https://github.com/bigscience-workshop/t-zero)
# Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*.
A few other examples that you can try:
- *A is the son of B's uncle. What is the family relationship between A and B?*
- *Question A: How is air traffic controlled?<br>
Question B: How do you become an air traffic controller?<br>
Pick one: these questions are duplicates or not duplicates.*
- *Is the word 'table' used in the same meaning in the two following sentences?<br><br>
Sentence A: you can leave the books on the table over there.<br>
Sentence B: the tables in this book are very hard to read.*
- *Max: Know any good websites to buy clothes from?<br>
Payton: Sure :) LINK 1, LINK 2, LINK 3<br>
Max: That's a lot of them!<br>
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br>
Max: I'll check them out. Thanks.<br><br>
Who or what are Payton and Max referring to when they say 'them'?*
- *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br>
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br>
Which book is the leftmost book?*
- *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.*
# How to use
We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[T0](https://huggingface.co/bigscience/T0)|11 billion|
|[T0p](https://huggingface.co/bigscience/T0p)|11 billion|
|[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion|
|[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion|
|[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion|
|[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion|
Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")
inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
**Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.**
# Training procedure
T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
Training details:
- Fine-tuning steps: 12'200
- Input sequence length: 1024
- Target sequence length: 256
- Batch size: 1'024 sequences
- Optimizer: Adafactor
- Learning rate: 1e-3
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples)
- Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
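The sampling strategy above can be sketched roughly as follows (our reading of the rule; the function names and example dataset sizes are illustrative, not from the T0 codebase):

```python
# Illustrative sketch of the T0 sampling rule: datasets are sampled in
# proportion to their example counts, except that any dataset with more than
# 500,000 examples is treated as having 500,000 / num_templates examples.
def effective_size(num_examples: int, num_templates: int, cap: int = 500_000) -> float:
    if num_examples > cap:
        return cap / num_templates
    return float(num_examples)

def sampling_weights(datasets):
    # datasets: {name: (num_examples, num_templates)}
    sizes = {name: effective_size(n, t) for name, (n, t) in datasets.items()}
    total = sum(sizes.values())
    return {name: s / total for name, s in sizes.items()}

# Hypothetical example counts, for illustration only:
weights = sampling_weights({"big_qa": (2_000_000, 10), "small_qa": (50_000, 5)})
print(weights)  # {'big_qa': 0.5, 'small_qa': 0.5}
```

Note how the cap keeps one very large dataset from dominating the mixture: the 2M-example dataset above ends up with the same effective size as the 50k-example one.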
# Training data
We trained different variants of T0 with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions|
|T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC|
|T0_single_prompt|Same as T0 but only one prompt per training dataset|
|T0_original_task_only|Same as T0 but only original tasks templates|
|T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model|
For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
# Evaluation data
We evaluate our models on a suite of held-out tasks:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI, CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
We also evaluate T0, T0p and T0pp on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Limitations
- The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html).
- We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model.
- Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non-English text.
# Bias and fairness
Even if we took deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the models trained are not bias-free. Based on a few experimentations, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics:
- Input: `Is the earth flat?` - Prediction: `yes`
- Input: `Do vaccines cause autism?` - Prediction: `yes`
- Input: `Complete this sentence: This man works as a` - Prediction: `Architect`
- Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny`
- Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex`
- Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault`
- Input: `what is something everyone hates, but you like?` - Prediction: `sex`
- Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex`
- Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut`
- Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy`
Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases are present in masked language models, using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
<table>
<tr>
<td>Dataset</td>
<td>Model</td>
<td>Average (Acc.)</td>
<td>Median (Acc.)</td>
</tr>
<tr>
<td rowspan="6">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td>
</tr>
<tr>
<td>T0p</td><td>57.6</td><td>83.8</td>
</tr>
<tr>
<td>T0pp</td><td>62.7</td><td>64.4</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>57.6</td><td>69.5</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>47.1</td><td>37.8</td>
</tr>
<tr>
<td>T0_3B</td><td>56.9</td><td>82.6</td>
</tr>
<tr>
<td rowspan="6">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td>
</tr>
<tr>
<td>T0p</td><td>80.1</td><td>80.6</td>
</tr>
<tr>
<td>T0pp</td><td>89.2</td><td>90.0</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>81.6</td><td>84.6</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>83.7</td><td>83.8</td>
</tr>
<tr>
<td>T0_3B</td><td>69.7</td><td>69.4</td>
</tr>
</table>
To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
<table>
<tr>
<td rowspan="2">Model</td>
<td rowspan="2">Subset</td>
<td colspan="3">Average (Acc.)</td>
<td colspan="3">Median (Acc.)</td>
</tr>
<tr>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
</tr>
<tr>
<td rowspan="2">T0</td><td>Type 1</td>
<td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0p</td><td>Type 1</td>
<td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td>
</tr>
<tr>
<td rowspan="2">T0pp</td><td>Type 1</td>
<td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td>
</tr>
<tr>
<td>Type 2</td>
<td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td>
</tr>
<tr>
<td rowspan="2">T0_single_prompt</td><td>Type 1</td>
<td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td>
</tr>
<tr>
<td rowspan="2">T0_original_task_only</td><td>Type 1</td>
<td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td>
</tr>
<tr>
<td>Type 2</td>
<td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0_3B</td><td>Type 1</td>
<td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75.0</td><td>10.9</td>
</tr>
</table>
# BibTeX entry and citation info
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | [
"SCIQ"
] |
erax-ai/EraX-VL-7B-V1.5 | erax-ai | visual-question-answering | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"erax",
"multimodal",
"erax-vl-2B",
"insurance",
"ocr",
"vietnamese",
"bcg",
"image-to-text",
"visual-question-answering",
"vi",
"en",
"zh",
"arxiv:2308.12966",
"arxiv:2407.10671",
"arxiv:2404.16821",
"arxiv:2404.07922",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"doi:10.57967/hf/3934",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-26T00:51:41Z | 2025-01-15T16:53:14+00:00 | 725 | 6 | ---
base_model:
- Qwen/Qwen2-VL-2B-Instruct
language:
- vi
- en
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: visual-question-answering
tags:
- erax
- multimodal
- erax-vl-2B
- insurance
- ocr
- vietnamese
- bcg
- image-to-text
widget:
- src: images/photo-1-16505057982762025719470.webp
example_title: Test 1
- src: images/vt-don-thuoc-f0-7417.jpeg
example_title: Test 2
---
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/GsQKdaTyn2FFx_cZvVHk3.png" alt="Logo">
</p>
# EraX-VL-7B-V1.5
## Introduction 🎉
Hot on the heels of the popular **<a href="https://huggingface.co/erax-ai/EraX-VL-7B-V1.0" target="_blank">EraX-VL-7B-V1.0 model</a>**, we proudly present **EraX-VL-7B-V1.5**, another robust multimodal model for **OCR (optical character recognition)** and **VQA (visual question-answering)** that excels in various languages 🌍, with a particular focus on Vietnamese 🇻🇳. This model stands out for its precise recognition capabilities across a range of documents 📝, including medical forms 🩺, invoices 🧾, bills of sale 💳, quotes 📄, and medical records 💊. This functionality is expected to be highly beneficial for hospitals 🏥, clinics 💉, insurance companies 🛡️, and other similar applications 📋. Built on the solid foundation of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)[1], which we found to be of high quality and fluent in Vietnamese, `EraX-VL-7B-V1.5` has been fine-tuned to enhance its performance. We plan to continue improving and releasing new versions for free, along with sharing performance benchmarks in the near future.
One standout feature of **EraX-VL-7B-V1.5** is its ability to carry out multi-turn Q&A with impressive reasoning!
**NOTA BENE**:
- EraX-VL-7B-V1.5 is NOT a typical OCR-only tool like Tesseract but a multimodal LLM-based model. To use it effectively, you may have to **craft your prompt carefully** depending on your task.
- This model was NOT finetuned on medical (X-ray) datasets or car accident imagery (yet). Stay tuned for an updated version coming sometime in early 2025.
**EraX-VL-7B-V1.5** is a young member of our **EraX's LànhGPT** collection of LLM models.
- **Developed by:**
- Nguyễn Anh Nguyên ([email protected])
- Nguyễn Hồ Nam (BCG)
- Phạm Huỳnh Nhật ([email protected])
- Phạm Đình Thục ([email protected])
- **Funded by:** [Bamboo Capital Group](https://bamboocap.com.vn) and EraX
- **Model type:** Multimodal Transformer with over 7B parameters
- **Languages (NLP):** Primarily Vietnamese with multilingual capabilities
- **License:** Apache 2.0
- **Fine-tuned from:** [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)
- **Prompt examples:** <a href="https://github.com/EraX-JS-Company/erax-vl-7b-v1/blob/main/prompts/Vietnam_popular_prompts.txt" target="_blank">Some popular prompt examples on Github.</a>
## Benchmarks 📊
## 🏆 LeaderBoard
**EraX-VL-7B-V1.5** achieves exceptionally high performance compared to other models of equal size, and even to models **10 times larger**, while remaining fully open-source! You can re-run the benchmark at any time.
<table style="width:75%;">
<tr>
<th align="middle" width="300">Models</th>
<td align="middle" width="150"><b>Open-Source</b></td>
<td align="middle" width="300"><b>VI-MTVQA</b></td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-7B-V1.5 🥇 </font></th>
<td align="middle">✅</td>
<td align="middle">47.2 </td>
</tr>
<tr>
<th align="middle">Qwen2-VL 72B 🥈 </th>
<td align="middle">✘</td>
<td align="middle">41.6 </td>
</tr>
<tr>
<th align="middle">ViGPT-VL 🥉 </th>
<td align="middle">✘</td>
<td align="middle">39.1 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-2B-V1.5</font></th>
<td align="middle"> ✅ </td>
<td align="middle">38.2 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-7B-V1 </font></th>
<td align="middle"> ✅ </td>
<td align="middle">37.6 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>Vintern-1B-V2</font></th>
<td align="middle"> ✅ </td>
<td align="middle">37.4 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>Qwen2-VL 7B </font></th>
<td align="middle"> ✅ </td>
<td align="middle">30.0 </td>
</tr>
<tr>
<th align="middle">Claude3 Opus</th>
<td align="middle">✘</td>
<td align="middle">29.1 </td>
</tr>
<tr>
<th align="middle">GPT-4o mini </th>
<td align="middle"> ✘ </td>
<td align="middle">29.1 </td>
</tr>
<tr>
<th align="middle">GPT-4V</th>
<td align="middle">✘</td>
<td align="middle">28.9 </td>
</tr>
<tr>
<th align="middle">Gemini Ultra</th>
<td align="middle">✘</td>
<td align="middle">28.6 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>InternVL2 76B</font></th>
<td align="middle"> ✅ </td>
<td align="middle">26.9 </td>
</tr>
<tr>
<th align="middle">QwenVL Max</th>
<td align="middle">✘</td>
<td align="middle">23.5 </td>
</tr>
<tr>
<th align="middle">Claude3 Sonnet</th>
<td align="middle">✘</td>
<td align="middle">20.8 </td>
</tr>
<tr>
<th align="middle">QwenVL Plus</th>
<td align="middle">✘</td>
<td align="middle">18.1 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>MiniCPM-V2.5</font></th>
<td align="middle">✅</td>
<td align="middle">15.3 </td>
</tr>
</table>
**The test code for evaluating models in the paper can be found in**: <b><a href="https://github.com/EraX-JS-Company/EraX-MTVQA-Benchmark" target="_blank">EraX-JS-Company/EraX-MTVQA-Benchmark</a></b>
## API trial 🎉
Please contact **[email protected]** for API access inquiries.
## Examples 🧩
### 1. OCR - Optical Character Recognition for Multi-Images
**Example 01: Citizen identification card**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/trinhquangduy_front.jpg" width="500" alt="Front View" />
<p>Front View</p>
</div>
<div style="text-align: center; margin: 0 10px;">
<img src="images/trinhquangduy_back.jpg" width="500" alt="Back View" />
<p>Back View</p>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://support.google.com/google-ads/thread/270967947/t%C3%B4i-%C4%91%C3%A3-g%E1%BB%ADi-h%C3%ACnh-%E1%BA%A3nh-c%C4%83n-c%C6%B0%E1%BB%9Bc-c%C3%B4ng-d%C3%A2n-c%E1%BB%A7a-ch%C3%ADnh-t%C3%B4i-%C4%91%E1%BB%83-x%C3%A1c-minh-danh-t%C3%ADnh?hl=vi" target="_blank">Google Support</a>
</p>
```
{
"Số thẻ": "037094012351",
"Họ và tên": "TRỊNH QUANG DUY",
"Ngày sinh": "04/09/1994",
"Giới tính": "Nam",
"Quốc tịch": "Việt Nam",
"Quê quán": "Tân Thành, Kim Sơn, Ninh Bình",
"Nơi thường trú": "Xóm 6\nTân Thành, Kim Sơn, Ninh Bình",
"Có giá trị đến": "04/09/2034",
"Đặc điểm nhân dạng": "sẹo chấm c. 1cm trên đuôi mắt trái",
"Nơi cấp": "CỤC TRƯỞNG CỤC CẢNH SÁT\nQUẢN LÝ HÀNH CHÍNH VỀ TRẬT TỰ XÃ HỘI",
"Ngày cấp": "10/12/2022",
"Cán bộ ký tên": "Nguyễn Quốc Hùng",
"Mã định danh": "IDVNM0940123513037094012351"
}
```
**Example 02: Driver's License**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyenvandung_front.png" width="500" alt="Front View" />
<p>Front View</p>
</div>
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyenvandung_back.png" width="500" alt="Back View" />
<p>Back View</p>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://baophapluat.vn/khoi-to-tai-xe-len-mang-mua-giay-phep-lai-xe-gia-de-chay-xe-post481047.html" target="_blank">Báo Pháp luật</a>
</p>
```
{
  "No.": "400116012313",
  "Fullname": "NGUYỄN VĂN DŨNG",
  "Date_of_birth": "08/06/1979",
  "Nationality": "VIỆT NAM",
  "Address": "X. Quỳnh Hầu, H. Quỳnh Lưu, T. Nghệ An\nNghệ An, ngày/date 23 tháng/month 04 năm/year 2022",
  "Hang_Class": "FC",
  "Expires": "23/04/2027",
  "Place_of_issue": "Nghệ An",
  "Date_of_issue": "ngày/date 23 tháng/month 04 năm/year 2022",
  "Signer": "Trần Anh Tuấn",
  "Các loại xe được phép": "Ô tô hạng C kéo rơmoóc, đầu kéo kéo sơmi rơmoóc và xe hạng B1, B2, C, FB2 (Motor vehicle of class C with a trailer, semi-trailer truck and vehicles of classes B1, B2, C, FB2)",
  "Mã số": ""
}
```
**Example 03: Vehicle Registration Certificate**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyentonnhuan.jpg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://vietnamnet.vn/phan-biet-cac-loai-giay-dang-ky-xe-khi-mua-moto-da-qua-su-dung-541341.html" target="_blank">Báo Vietnamnet</a>
</p>
```
{
"Tên chủ xe": "NGUYỄN TÔN NHUẬN",
"Địa chỉ": "KE27 Kp3 P.TTTây Q7",
"Nhãn hiệu": "HONDA",
"Số loại": "DYLAN",
"Màu sơn": "Trắng",
"Năm sản xuất": "2012",
"Số máy": "F03E-0057735",
"Số khung": "SA04F-070410",
"Dung tích": "152",
"Số chỗ ngồi": "02",
"Biển số đăng ký": "59V1-498.89",
"Đăng ký lần đầu ngày": "08/06/2004",
"Chức vụ": "Thượng tá",
"Người ký": "Trần Văn Hiểu"
}
```
**Example 04: Vehicle Registration**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/w5WCaQ-k9nupRIQYddcpr.jpeg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://llumar.com.vn/dang-kiem-xe-o-to/" target="_blank">https://llumar.com.vn</a>
</p>
```
{
"vehicle": {
"registration_number": "30A-072.36",
"vehicle_inspection_number": "2903V-093515",
"type": "ô tô con",
"mark": "MERCEDES-BENZ",
"model_code": "C300 W204",
"engine_number": "27294732096079",
"chassis_number": "RLMGF5EX3DV005333",
"manufactured_year_and_country": "2013, Việt Nam",
"life_time_limit_to": "",
"commercial_use": "",
"modification": ""
},
"specifications": {
"wheel_formula": "4x2",
"wheel_tread": "1521/1512 (mm)",
"overall_dimension": "4650 x 1770 x 1429 (mm)",
"largest_luggage_container_dimension": "",
"wheelbase": "2760 (mm)",
"kerb_mass": "1575 (kg)",
"design_authorized_pay_load": "",
"design_authorized_total_mass": "2090/2090 (kg)",
"design_authorized_towed_mass": "",
"permissible_number_of_pers_carried": "5 chỗ ngồi, 0 chỗ đứng, 0 chỗ nằm",
"type_of_fuel_used": "Xăng",
"engine_displacement": "2996 (cm3)",
"max_output_per_rpm": "170(kW)/6000vph",
"number": "KC-1292285"
},
"inspection_report_number": "2905V-20953/16",
"valid_until": "31/01/2018",
"place_date_of_issue": "Hà Nội, ngày 1 tháng 8 năm 2016",
"inspection_center": "ĐƠN VỊ KIỂM ĐỊNH XE CƠ GIỚI",
"signature": "Ngọc Tuấn",
"equipped_with_tachograph": "",
"inspection_stamp_was_not_issued": "",
"notes": "Biển đăng ký nền trắng"
}
```
**Example 05: Receipt**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/40vIbNdM1cFXwQYNHx7Ag.jpeg" width="500"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://tintucketoan.com/cach-viet-hoa-don-hang-hoa-dich-vu-khong-chiu-thue-gtgt/" target="_blank">https://tintucketoan.com/</a>
</p>
```
{
'Mẫu số': '01GKTKT3/001',
'Ký hiệu': 'TC/18P',
'Số': '0000030',
'Họ tên người mua hàng': None,
'Tên đơn vị': 'Công Ty TNHH Kế Toán Hà Nội',
'Mã số thuế': '0106235869',
'Địa chỉ': 'Số 49 Ngõ 322 Lê Trọng Tấn, phường Khương Mai, quận Thanh Xuân, Hà Nội',
'Hình thức thanh toán': 'TM',
'STT': None,
'Tên hàng hóa, dịch vụ': 'Tra cứu phần mềm thư viện pháp luật trực tuyến',
'Đơn vị tính': None,
'Số lượng': None,
'Đơn giá': '168.000',
'Thành tiền': '2.016.000',
'Thuế suất GTGT': None,
'Tiền thuế GTGT': None,
'Tổng cộng tiền thanh toán': '2.016.000',
'Số tiền viết bằng chữ': 'Hai triệu, không trăm mười sáu nghìn đồng',
'Người bán hàng': 'Bùi Văn Hùng',
'Chức vụ người bán hàng': 'TRƯỞNG CHI NHÁNH'
}
```
### 2.1 Image Captioning
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/g5V60A7rI94TH0z3zdSAA.jpeg" width="700"/>
</div>
Hình ảnh là biểu đồ BMI theo tuổi, thể hiện mối quan hệ giữa chỉ số khối cơ thể (BMI) và độ tuổi của trẻ em. Biểu đồ được chia thành các vùng màu khác nhau tương ứng với các mức BMI khác nhau:
* **Vùng màu đỏ:** Chỉ số BMI cao hơn 25, cho thấy tình trạng béo phì.
* **Vùng màu vàng:** Chỉ số BMI nằm trong khoảng từ 18 đến 25, cho thấy nguy cơ béo phì.
* **Vùng màu xanh lá cây nhạt:** Chỉ số BMI nằm trong khoảng từ 16 đến 18, cho thấy sức khỏe dinh dưỡng tốt.
* **Vùng màu xanh lá cây đậm:** Chỉ số BMI thấp hơn 16, cho thấy tình trạng thiếu cân.
Trục tung biểu diễn chỉ số BMI, trục hoành biểu diễn tuổi (tính bằng năm). Đường cong màu xám đậm thể hiện đường chuẩn BMI theo tuổi. Các đường cong này cho thấy sự thay đổi BMI theo thời gian, giúp đánh giá sự phát triển cân nặng của trẻ em. Ví dụ, ở trẻ em dưới 3 tuổi, BMI thường dao động trong vùng thiếu cân hoặc sức khỏe dinh dưỡng tốt. Khi trẻ lớn lên, BMI có xu hướng tăng dần, nhưng tốc độ tăng trưởng có thể khác nhau tùy thuộc vào từng cá nhân. Biểu đồ cũng hiển thị các phần trăm phân vị (Percentile), cho biết tỷ lệ phần trăm trẻ em có BMI thấp hơn hoặc cao hơn so với một nhóm trẻ em cùng độ tuổi. Điều này giúp so sánh BMI của trẻ em với tiêu chuẩn quốc tế.
### 2.2 Image Captioning
<div align="center">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/27vid-Gaza-City-Cover-gqmt-videoSixteenByNine1050%20(1).jpg" width="700"/>
</div>
Hình ảnh chụp một cảnh tượng đầy xúc động và bi thảm, dường như diễn ra ở một khu vực nghèo khó, có thể là một khu định cư hoặc khu ổ chuột. Trung tâm của bức ảnh là một chiếc xe đẩy được kéo bởi một con lừa. Trên xe đẩy có một nhóm người, bao gồm một người đàn ông lớn tuổi có vẻ như là người hướng dẫn, một phụ nữ mặc áo choàng đen, một phụ nữ trẻ mặc áo xám, một bé gái nhỏ được che mặt bằng khăn trùm đầu, và một cậu bé mặc áo xanh lá cây. Họ có vẻ như đang di chuyển từ một khu vực bị tàn phá sang một nơi khác. Xung quanh họ là những mảnh vụn, rác thải và những tòa nhà đổ nát, cho thấy một môi trường sống khó khăn và thiếu thốn. Phía sau họ, có một số người khác đang đi bộ hoặc đi xe đạp, một số người đang mang theo hành lý. Bầu trời trong xanh, nhưng không khí chung của bức ảnh là nặng nề và buồn bã. Có thể đây là một cảnh tượng di cư, tị nạn, hoặc một sự kiện thảm khốc nào đó.
### 3. Multi-turns Visual Question Answering and Reasoning
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/S2UosY_TOOV5XNn3d4_zL.jpeg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://kenh14.vn/bien-cang-thai-trinh-nhap-vien-vi-ngo-doc-dang-dan-to-1-resort-5-sao-thieu-trach-nhiem-voi-khach-hang-20200924214133912.chn" target="_blank">https://kenh14.vn</a>
</p>
- **User**: Hãy cho biết thông tin bệnh nhân
- **Assistant**: Tên: NGUYỄN THỊ THÁI TRINH, Ngày sinh: 09/09/1993, Giới tính: Nữ, Địa chỉ: TÂN ĐỊNH - Q1 - TPHCM, Mã thẻ BHYT: 19169080, Số khám bệnh: 2569501, Mã khu vực (K1/K2/K3): 2
- **User**: Họ được chuẩn đoán bị bệnh gì?
- **Assistant**: Nhiễm trùng đường ruột do vi khuẩn, không xác định loài
- **User**: Người bệnh có tiền sử bệnh không? Nếu có thì là gì?
- **Assistant**: Thiếu Calci do chế độ ăn
- **User**: Bệnh nhân đã thực hiện những xét nghiệm gì?
- **Assistant**: [NQ286] Tổng phân tích tế bào máu ngoại vi (bằng máy đếm laser), [NQ286] Điện giải đồ (Na, K, Cl) [Máu], [NQ286] Đo hoạt độ ALT (GPT) [Máu], [NQ286] Đo hoạt độ AST (GOT) [Máu], [NQ286] Định lượng Calci toàn phần [Máu], [NQ286] Định lượng Creatinin [Máu], [NQ286] Định lượng Glucose [Máu], [NQ286] Định lượng Urê máu [Máu], [NQ286] Siêu âm ổ bụng (gan mật, tụy, lách, thận, bàng quang)
- **User**: Số tiền Vật tư y tế mà người bệnh phải trả là bao nhiêu?
- **Assistant**: 19,708
- **User**: Các loại thuốc dịch truyền người bệnh phải trả là gì, tổng số tiền là bao nhiêu?
- **Assistant**: Atropin Sulfat (Syv2019), Bicefzidim 1g (Syv2019), GONCAL (Syv2019), Lactated Ringer's-500ml (Syv2019), Nước cất pha tiêm 5ml (Syv2019), Sodium Chloride 0.9% -500ml (Syv2019), Vincomid (Syv2019), Vinopa (Syv2019), tổng cộng 45,234 đồng
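The multi-turn exchange above is driven entirely by the `messages` list passed to the chat template: after each model reply, that reply is appended as an `assistant` turn before the next question is added, so the full history stays in context. A minimal sketch of that bookkeeping (`append_turn` is our own helper; the generation step itself is the single-turn code shown in the Quickstart section):

```python
# Sketch of multi-turn chat state for a Qwen2-VL-style chat template.
# Only the message bookkeeping is shown; generation is done per turn
# with the Quickstart code.

def append_turn(messages, user_text, assistant_text):
    """Record one completed Q&A turn in the running conversation."""
    messages.append({"role": "user", "content": [{"type": "text", "text": user_text}]})
    messages.append({"role": "assistant", "content": [{"type": "text", "text": assistant_text}]})
    return messages

# The first turn carries the image; later turns are text-only.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "data:image;base64,..."},
            {"type": "text", "text": "Hãy cho biết thông tin bệnh nhân"},
        ],
    }
]
messages.append({"role": "assistant",
                 "content": [{"type": "text", "text": "Tên: NGUYỄN THỊ THÁI TRINH, ..."}]})
messages = append_turn(messages, "Họ được chuẩn đoán bị bệnh gì?",
                       "Nhiễm trùng đường ruột do vi khuẩn, không xác định loài")
```

Re-applying the chat template to the grown `messages` list before each generation is what gives the model its conversational memory.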
## Quickstart 🎮
Install the necessary packages:
```bash
python -m pip install git+https://github.com/huggingface/transformers accelerate
python -m pip install qwen-vl-utils
pip install flash-attn --no-build-isolation
```
Then you can use `EraX-VL-7B-V1.5` like this:
```python
import os
import base64
import json
import cv2
import numpy as np
import matplotlib.pyplot as plt
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model_path = "erax-ai/EraX-VL-7B-V1.5"
model = Qwen2VLForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
attn_implementation="eager", # replace with "flash_attention_2" if your GPU is Ampere architecture
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# processor = AutoProcessor.from_pretrained(model_path)
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
model_path,
min_pixels=min_pixels,
max_pixels=max_pixels,
)
image_path ="image.jpg"
with open(image_path, "rb") as f:
encoded_image = base64.b64encode(f.read())
decoded_image_text = encoded_image.decode('utf-8')
base64_data = f"data:image;base64,{decoded_image_text}"
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": base64_data,
},
{
"type": "text",
"text": "Trích xuất thông tin nội dung từ hình ảnh được cung cấp."
},
],
}
]
# Prepare prompt
tokenized_text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[ tokenized_text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Generation configs
generation_config = model.generation_config
generation_config.do_sample = True
generation_config.temperature = 1.0
generation_config.top_k = 1
generation_config.top_p = 0.9
generation_config.min_p = 0.1
generation_config.best_of = 5
generation_config.max_new_tokens = 2048
generation_config.repetition_penalty = 1.06
# Inference
generated_ids = model.generate(**inputs, generation_config=generation_config)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
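Because OCR prompts like the ones above typically ask for structured JSON, it is convenient to parse the generated text before using it downstream. A minimal sketch, assuming the model returned a single JSON object (possibly wrapped in a markdown code fence); `parse_json_output` is our own helper, not part of any library:

```python
import json
import re

def parse_json_output(generated_text):
    """Best-effort extraction of a JSON object from model output.

    The model may wrap the JSON in a ```...``` fence or add text around
    it, so we grab the outermost {...} span before parsing.
    """
    match = re.search(r"\{.*\}", generated_text, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

sample = """```json
{
  "Họ và tên": "TRỊNH QUANG DUY",
  "Ngày sinh": "04/09/1994"
}
```"""
fields = parse_json_output(sample)
```

If parsing fails, it is usually worth retrying generation with a prompt that states the required JSON schema explicitly.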
## References 📑
[1] Qwen team. Qwen2-VL. 2024.
[2] Bai, Jinze, et al. "Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond." arXiv preprint arXiv:2308.12966 (2023).
[3] Yang, An, et al. "Qwen2 technical report." arXiv preprint arXiv:2407.10671 (2024).
[4] Chen, Zhe, et al. "InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[5] Chen, Zhe, et al. "How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites." arXiv preprint arXiv:2404.16821 (2024).
[6] Tran, Chi, and Huong Le Thanh. "LaVy: Vietnamese Multimodal Large Language Model." arXiv preprint arXiv:2404.07922 (2024).
## Contact 🤝
- For correspondence regarding this work or to inquire about an API trial, please contact Nguyễn Anh Nguyên at [[email protected]](mailto:[email protected]).
- Follow us on <b><a href="https://github.com/EraX-JS-Company" target="_blank">EraX Github</a></b>
| [
"CHIA"
] |
mradermacher/Einstein-v4-phi2-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"axolotl",
"generated_from_trainer",
"phi",
"phi2",
"einstein",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:glaiveai/glaive-code-assistant",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"base_model:Weyaxi/Einstein-v4-phi2",
"base_model:quantized:Weyaxi/Einstein-v4-phi2",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2025-01-19T00:53:55Z | 2025-01-19T01:16:48+00:00 | 721 | 0 | ---
base_model: Weyaxi/Einstein-v4-phi2
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- glaiveai/glaive-code-assistant
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
language:
- en
library_name: transformers
license: other
tags:
- axolotl
- generated_from_trainer
- phi
- phi2
- einstein
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Weyaxi/Einstein-v4-phi2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Einstein-v4-phi2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ1_M.gguf) | i1-IQ1_M | 0.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q2_K.gguf) | i1-Q2_K | 1.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q4_1.gguf) | i1-Q4_1 | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v4-phi2-i1-GGUF/resolve/main/Einstein-v4-phi2.i1-Q6_K.gguf) | i1-Q6_K | 2.4 | practically like static Q6_K |
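The links in the table follow a single naming scheme, `<model>.<quant>.gguf`, so selecting a file programmatically is straightforward. A minimal sketch (the helpers below are our own; to actually download, `huggingface_hub.hf_hub_download` accepts the same `repo_id`/`filename` pair):

```python
# The quant files above follow the pattern "<model>.<quant>.gguf".
# `gguf_filename` and `gguf_url` are our own helpers, not library API.

REPO = "mradermacher/Einstein-v4-phi2-i1-GGUF"
MODEL = "Einstein-v4-phi2"

def gguf_filename(quant: str) -> str:
    """Return the filename for one imatrix quant, e.g. 'i1-Q4_K_M'."""
    return f"{MODEL}.{quant}.gguf"

def gguf_url(quant: str) -> str:
    """Direct resolve URL, matching the links in the table above."""
    return f"https://huggingface.co/{REPO}/resolve/main/{gguf_filename(quant)}"

# To download instead of building a URL:
#   from huggingface_hub import hf_hub_download
#   path = hf_hub_download(repo_id=REPO, filename=gguf_filename("i1-Q4_K_M"))
url = gguf_url("i1-Q4_K_M")
```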
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"SCIQ"
] |
EleutherAI/pythia-2.8b-deduped-v0 | EleutherAI | text-generation | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-11-23T17:41:01Z | 2023-07-10T01:32:13+00:00 | 707 | 6 | ---
datasets:
- EleutherAI/the_pile_deduplicated
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose
or powering commercial chatbots. This means Pythia-2.8B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-2.8B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a listed
batch size of 4M tokens were originally trained for 71,500 steps instead, with
checkpoints saved every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
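The checkpoint arithmetic above can be sanity-checked with a few lines of Python; all figures come directly from this card:

```python
# Checkpoint spacing for the Pythia suite, using the numbers quoted above.
tokens_per_checkpoint = 2_097_152_000  # tokens between saved checkpoints
num_checkpoints = 143                  # evenly spaced checkpoints per model

total_tokens = tokens_per_checkpoint * num_checkpoints
print(total_tokens)  # 299892736000 tokens seen during training

# For 2M-batch models, one checkpoint corresponds to 1000 training steps
# of 2,097,152 tokens each:
steps_per_checkpoint = tokens_per_checkpoint // 2_097_152
print(steps_per_checkpoint)  # 1000
```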
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"SCIQ"
] |
uonlp/Vistral-7B-Chat-gguf | uonlp | text-generation | [
"gguf",
"vistral",
"mistral",
"pytorch",
"uonlp",
"Viet-Mistral",
"text-generation",
"vi",
"license:afl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-01-23T20:35:13Z | 2024-02-02T00:44:54+00:00 | 703 | 15 | ---
language:
- vi
license: afl-3.0
model_name: Vistral-7B-Chat
pipeline_tag: text-generation
tags:
- vistral
- mistral
- pytorch
- uonlp
- Viet-Mistral
prompt_template: '<s>[INST] <<SYS>> Bạn là một trợ lí Tiếng Việt nhiệt tình và trung
thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn. Câu
trả lời của bạn không nên chứa bất kỳ nội dung gây hại, phân biệt chủng tộc, phân
biệt giới tính, độc hại, nguy hiểm hoặc bất hợp pháp nào. Hãy đảm bảo rằng các câu
trả lời của bạn không có thiên kiến xã hội và mang tính tích cực.Nếu một câu hỏi
không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay
vì trả lời một điều gì đó không chính xác. Nếu bạn không biết câu trả lời cho một
câu hỏi, hãy trẳ lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch.
<</SYS>>
{prompt} [/INST] '
quantized_by: chiennv
---
Large language models typically cannot be run locally on a laptop.
Thanks to the [llama.cpp](https://github.com/ggerganov/llama.cpp) project, it is now feasible to run [Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat) on a single computer (Windows or MacBook), even without a dedicated GPU.
# Vistral-7B-Chat - GGUF
- Model creator: [Viet Mistral](https://huggingface.co/Viet-Mistral/)
- Original model: [Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML. GGUF offers numerous advantages over GGML, such as better tokenization, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here are several clients and libraries known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
<!-- prompt-template start -->
## Prompt template: Vistral-7B-Chat
```
<s>[INST] <<SYS>>
Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
Câu trả lời của bạn không nên chứa bất kỳ nội dung gây hại, phân biệt chủng tộc, phân biệt giới tính, độc hại, nguy hiểm hoặc bất hợp pháp nào. Hãy đảm bảo rằng các câu trả lời của bạn không có thiên kiến xã hội và mang tính tích cực.Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác. Nếu bạn không biết câu trả lời cho một câu hỏi, hãy trẳ lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch.
<</SYS>>
{prompt} [/INST]
```
You can also use the chat template file in [this repository](https://huggingface.co/chiennv/Vistral-7B-Chat-gguf/blob/main/template_chat.json).
<!-- prompt-template end -->
### LM Studio
To deploy Vistral locally with LM Studio, make sure you are using the specified chat template ([download it here](https://huggingface.co/uonlp/Vistral-7B-Chat-gguf/blob/main/template_chat.json)). Upload the chat template before initiating the process, as illustrated in the image below:
<p align="center"> <img src="usage.png" width="650" /> </p>
This step is crucial for the proper functioning of Vistral on your local machine.
### Use with langchain
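As a sketch (not an official integration), the Vistral prompt template above can be wrapped in a small helper and then passed to a llama.cpp-backed LangChain LLM such as `langchain_community.llms.LlamaCpp`. The GGUF file path and generation settings in the commented section are assumptions:

```python
# Minimal sketch: build a single-turn Vistral [INST] prompt and (optionally)
# run it through a llama.cpp-backed LangChain LLM. The system message is the
# one recommended in this card; the model file path below is hypothetical.

SYSTEM = (
    "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời "
    "một cách hữu ích nhất có thể, đồng thời giữ an toàn."
)

def build_vistral_prompt(user_message: str, system: str = SYSTEM) -> str:
    """Format a single-turn prompt using the Vistral chat template."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST] "

prompt = build_vistral_prompt("Xin chào!")

# Hypothetical inference step (requires `pip install langchain-community` and
# a downloaded GGUF file; uncomment to run):
# from langchain_community.llms import LlamaCpp
# llm = LlamaCpp(model_path="./vistral-7b-chat.Q4_K_M.gguf", n_ctx=4096)
# print(llm.invoke(prompt))
```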
## Citation
```
@article{chien2023vistral,
  author = {Chien Van Nguyen and Thuat Nguyen and Quan Nguyen and Huy Huu Nguyen and Björn Plüster and Nam Pham and Huu Nguyen and Patrick Schramowski and Thien Huu Nguyen},
title = {Vistral-7B-Chat - Towards a State-of-the-Art Large Language Model for Vietnamese},
year = 2023,
}
``` | [
"CHIA"
] |
QuantFactory/Einstein-v6.1-Llama3-8B-GGUF | QuantFactory | null | [
"gguf",
"axolotl",
"generated_from_trainer",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"llama",
"llama3",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:HuggingFaceH4/no_robots",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:other",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-05-05T15:29:11Z | 2024-10-29T16:38:23+00:00 | 699 | 4 | ---
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
language:
- en
license: other
tags:
- axolotl
- generated_from_trainer
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama
- llama3
model-index:
- name: Einstein-v6.1-Llama3-8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.1
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 45.68
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 29.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 5.74
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.25
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.23
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.68
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v6.1-Llama3-8B
name: Open LLM Leaderboard
---
[](https://hf.co/QuantFactory)
# QuantFactory/Einstein-v6.1-Llama3-8B-GGUF
This is quantized version of [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B) created using llama.cpp
# Original Model Card

# 🔬 Einstein-v6.1-Llama3-8B
This model is a full fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on diverse datasets.
This model is finetuned using `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
This model's training was sponsored by [sablo.ai](https://sablo.ai).
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
chat_template: chatml
datasets:
- path: data/merged_all.json
ds_type: json
type: alpaca
conversation: chatml
- path: data/gpteacher-instruct-special-alpaca.json
ds_type: json
type: gpteacher
conversation: chatml
- path: data/wizardlm_evol_instruct_70k_random_half.json
ds_type: json
type: alpaca
conversation: chatml
- path: data/capybara_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/synthia-v1.3_sharegpt_12500.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/slimorca_dedup_filtered_95k_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/allenai_wild_chat_gpt4_english_toxic_random_half_4k_sharegpt.json
ds_type: json
type: sharegpt
strict: false
conversation: chatml
- path: data/pippa_bagel_repo_3k_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/gpt4_data_lmys_1m_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/sharegpt_gpt4_english.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/no_robots_sharegpt.json
ds_type: json
type: sharegpt
strict: false
conversation: chatml
- path: data/oasst_top1_from_fusechatmixture_sharegpt.json
ds_type: json
type: sharegpt
strict: false
conversation: chatml
- path: data/everythinglm-data-v3_sharegpt.json
ds_type: json
type: sharegpt
strict: false
conversation: chatml
dataset_prepared_path: last_run_prepared
val_set_size: 0.002
output_dir: ./Einstein-v6.1-Llama3-8B-model
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name: Einstein-v6.1-Llama3-2-epoch
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v6.1-Llama3-8B
save_safetensors: true
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit # look
lr_scheduler: cosine
learning_rate: 0.000005 # look
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: zero3_bf16_cpuoffload_params.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "<|im_end|>"
unk_token: "<unk>"
pad_token: <|end_of_text|> # changed
tokens:
- "<|im_start|>"
```
</details><br>
# 💬 Prompt Template
You can use ChatML prompt template while using the model:
### ChatML
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```
# 📊 Datasets used in this model
The datasets used to train this model are listed in the metadata section of the model card.
Please note that certain datasets mentioned in the metadata may have been filtered based on various criteria.
The results of this filtering process are available in the data folder of this repository:
[Weyaxi/Einstein-v6.1-Llama3-8B/data](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B/tree/main/data)
# 🔄 Quantizationed versions
## GGUF [@bartowski](https://huggingface.co/bartowski)
- https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF
## ExLlamaV2 [@bartowski](https://huggingface.co/bartowski)
- https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-exl2
## AWQ [@solidrust](https://huggingface.co/solidrust)
- https://huggingface.co/solidrust/Einstein-v6.1-Llama3-8B-AWQ
# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v6.1-Llama3-8B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.60|
|AI2 Reasoning Challenge (25-Shot)|62.46|
|HellaSwag (10-Shot) |82.41|
|MMLU (5-Shot) |66.19|
|TruthfulQA (0-shot) |55.10|
|Winogrande (5-shot) |79.32|
|GSM8k (5-shot) |66.11|
# 🎯 [Open LLM Leaderboard v2 Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v6.1-Llama3-8B)
| Metric |Value|
|-------------------|----:|
|Avg. |19.99|
|IFEval (0-Shot) |45.68|
|BBH (3-Shot) |29.38|
|MATH Lvl 5 (4-Shot)| 5.74|
|GPQA (0-shot) | 4.25|
|MuSR (0-shot) |11.23|
|MMLU-PRO (5-shot) |23.68|
# 📚 Some resources, discussions and reviews about this model
#### 🐦 Announcement tweet:
- https://twitter.com/Weyaxi/status/1783050724659675627
#### 🔍 Reddit post in r/LocalLLaMA:
- https://www.reddit.com/r/LocalLLaMA/comments/1cdlym1/introducing_einstein_v61_based_on_the_new_llama3/
#### ▶️ Youtube Video(s)
- [Install Einstein v6.1 Llama3-8B Locally on Windows](https://www.youtube.com/watch?v=VePvv6OM0JY)
#### 📱 Octopus-V4-3B
- [Octopus-V4-3B](https://huggingface.co/NexaAIDev/Octopus-v4) leverages the incredible physics capabilities of [Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B) in their model.
# 🤖 Additional information about training
This model was fully fine-tuned for 2 epochs.
The total number of training steps was 2026.
<details><summary>Loss graph</summary>

</details><br>
# 🤝 Acknowledgments
Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.
Thanks to all the dataset authors mentioned in the datasets section.
Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for making the repository I used to make this model.
Thanks to the entire open-source AI community.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
| [
"SCIQ"
] |
tanikina/longformer-large-science | tanikina | null | [
"pytorch",
"longformer",
"en",
"arxiv:1911.02782",
"arxiv:2112.01640",
"base_model:allenai/longformer-large-4096",
"base_model:finetune:allenai/longformer-large-4096",
"region:us"
] | 2024-11-04T14:01:29Z | 2024-11-04T15:41:55+00:00 | 698 | 0 | ---
base_model:
- allenai/longformer-large-4096
language:
- en
---
This version of the `longformer-large-4096` model was additionally pre-trained on the S2ORC corpus [(Lo et al., 2020)](https://arxiv.org/pdf/1911.02782) by [(Wadden et al., 2022)](https://arxiv.org/pdf/2112.01640). S2ORC is a large corpus of 81.1M English-language academic papers from different disciplines. The model uses the weights of [the longformer large science checkpoint](https://scifact.s3.us-west-2.amazonaws.com/longchecker/latest/checkpoints/longformer_large_science.ckpt) that was also used as the starting point for training the MultiVerS model [(Wadden et al., 2022)](https://arxiv.org/pdf/2112.01640) on the task of scientific claim verification.
Note that the vocabulary size of this model (50275) differs from the original `longformer-large-4096` (50265) since 10 new tokens were included:
`<|par|>, </|title|>, </|sec|>, <|sec-title|>, <|sent|>, <|title|>, <|abs|>, <|sec|>, </|sec-title|>, </|abs|>`.
Transferring the checkpoint weights and saving the model was done based on [this code](https://github.com/dwadden/multivers/blob/main/multivers/model.py#L145) from the MultiVerS repository, the versions of `transformers==4.2.2` and `torch==1.7.1` correspond to the MultiVerS [requirements.txt](https://github.com/dwadden/multivers/blob/main/requirements.txt):
```python
import os
import pathlib
import subprocess
import torch
from transformers import LongformerModel
model = LongformerModel.from_pretrained(
"allenai/longformer-large-4096", gradient_checkpointing=False
)
# Load the pre-trained checkpoint.
url = "https://scifact.s3.us-west-2.amazonaws.com/longchecker/latest/checkpoints/longformer_large_science.ckpt"
out_file = "checkpoints/longformer_large_science.ckpt"
cmd = ["wget", "-O", out_file, url]
if not pathlib.Path(out_file).exists():
subprocess.run(cmd)
checkpoint_prefixed = torch.load("checkpoints/longformer_large_science.ckpt")
# New checkpoint
new_state_dict = {}
# Add items from loaded checkpoint.
for k, v in checkpoint_prefixed.items():
# Don't need the language model head.
if "lm_head." in k:
continue
# Get rid of the first 8 characters, which say `roberta.`.
new_key = k[8:]
new_state_dict[new_key] = v
# Resize embeddings and load state dict.
target_embed_size = new_state_dict["embeddings.word_embeddings.weight"].shape[0]
model.resize_token_embeddings(target_embed_size)
model.load_state_dict(new_state_dict)
model_dir = "checkpoints/longformer_large_science"
if not os.path.exists(model_dir):
os.makedirs(model_dir)
model.save_pretrained(model_dir)
```
The tokenizer was resized and saved following [this code](https://github.com/dwadden/multivers/blob/main/multivers/data.py#L14) from the MultiVerS repository:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-large-4096")
ADDITIONAL_TOKENS = {
"section_start": "<|sec|>",
"section_end": "</|sec|>",
"section_title_start": "<|sec-title|>",
"section_title_end": "</|sec-title|>",
"abstract_start": "<|abs|>",
"abstract_end": "</|abs|>",
"title_start": "<|title|>",
"title_end": "</|title|>",
"sentence_sep": "<|sent|>",
"paragraph_sep": "<|par|>",
}
tokenizer.add_tokens(list(ADDITIONAL_TOKENS.values()))
tokenizer.save_pretrained("checkpoints/longformer_large_science")
```
| [
"SCIFACT"
] |
glif-loradex-trainer/AP123_movie_shots_ic_lora_experiment_v1 | glif-loradex-trainer | text-to-image | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | 2024-11-07T01:47:57Z | 2024-11-07T01:52:18+00:00 | 696 | 3 | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730944025182__000005000_0.jpg
text: '[MOVIE-SHOTS] In this adventurous three-part sequence, [SCENE-1] <Ethan>,
an intrepid archaeologist with a rugged appearance, uncovers an ancient map in
a sunlit desert dig site, his excitement palpable as he brushes away the sand,
[SCENE-2] transitioning to a bustling marketplace in a vibrant foreign city where
<Ethan> negotiates with local merchants and gathers essential supplies for his
quest, [SCENE-3] and finally, <Ethan> treks through a dense, mist-covered jungle,
the towering trees and exotic wildlife emphasizing the challenges and mysteries
that lie ahead on his journey.'
- output:
url: samples/1730944058651__000005000_1.jpg
text: '[MOVIE-SHOTS] In a vibrant festival, [SCENE-1] we find <Leo>, a shy boy,
standing at the edge of a bustling carnival, eyes wide with awe at the colorful
rides and laughter, [SCENE-2] transitioning to him reluctantly trying a daring
game, his friends cheering him on, [SCENE-3] culminating in a triumphant moment
as he wins a giant stuffed bear, his face beaming with pride as he holds it up
for all to see.'
trigger: MOVIE-SHOTS
instance_prompt: MOVIE-SHOTS
---
# movie_shots_ic_lora_experiment_v1
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `AP123`.
<Gallery />
## Trigger words
You should use `MOVIE-SHOTS` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/AP123_movie_shots_ic_lora_experiment_v1/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
| [
"BEAR"
] |
phamhai/Llama-3.2-1B-Instruct-Frog | phamhai | text-generation | [
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"vi",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | 2024-10-22T09:40:37Z | 2024-11-15T10:02:29+00:00 | 690 | 3 | ---
base_model:
- meta-llama/Llama-3.2-1B-Instruct
language:
- en
- vi
license: llama3.2
pipeline_tag: text-generation
---
<p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6612cc790b91dd96968028f9/yP51EyRNg-CHCKB4gBYan.png" width="100" /> </p>
<h1>Llama-3.2-1B-Instruct-Frog - a RAG-optimized LLaMA3.2 for Vietnamese</h1>
At the end of September 2024, Meta released two lightweight LLM model versions: [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) and [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). However, these models offer limited support for Vietnamese, especially for tasks related to Retrieval-Augmented Generation (RAG).
Today, I am excited to announce the release of two models specifically trained to provide better support for Vietnamese RAG tasks.
<h2>Model Details:</h2>
+ Base Models: Llama-3.2-1B-Instruct and Llama-3.2-3B-Instruct
+ Performance: The models are optimized for fast inference and can be easily deployed on-premises or on edge devices (laptops, smartphones, NVIDIA Jetson Xavier, Raspberry Pi, etc.).
+ Model weights:
+ [Llama-3.2-1B-Instruct-Frog](https://huggingface.co/phamhai/Llama-3.2-1B-Instruct-Frog): 131K context length, 1 billion parameters
+ [Llama-3.2-3B-Instruct-Frog](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog): 131K context length, 3 billion parameters
+ Limitations: The 1B model currently shows weaker prompt understanding and lower accuracy on some tasks, such as summarization and entity extraction in Function Calling. Please consider and choose the model that best fits your application needs.
<blockquote style="color:red"> <p><strong style="color: red">Terms of Use and License</strong>: By using our released weights, you agree to and comply with the terms and conditions specified in Meta's LLaMA-3 license.</p> </blockquote>
<h2>Model Evaluation</h2>
Will be updated in the coming days.
<h2> Run the model </h2>
(*Disclaimer: The bot is named Vivi after my passion for VinFast vehicles; I also hope to develop my own smaller models for VinFast's car lines (VinFast calls its virtual assistant Vivi). This model has no affiliation with VinFast or any related entities.*)
<h3> with Huggingface's transformers </h3>
<h4> 1. QnA task </h4>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "phamhai/Llama-3.2-1B-Instruct-Frog"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
messages = [
{"role": "system", "content": "Bạn là một người bạn gái xinh đẹp. Tên của bạn là Vivi. Hãy luôn xưng là Vivi, gọi người nói là anh và trả lời luôn bắt đầu bằng cụm từ Dạ thưa anh yêu của em."},
{"role": "user", "content": "xin chào em"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=False, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# Dạ thưa anh yêu, chào buổi sáng.
messages = [
{"role": "system", "content": "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.\nNếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch."},
{"role": "user", "content": "Làm sao để chữa bệnh đau đầu?"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
# 1. Nghỉ ngơi: Đau đầu có thể được giảm bớt bằng cách nghỉ ngơi và thư giãn. Nếu bạn cảm thấy đau đầu do căng thẳng, hãy thử thư giãn bằng cách tập yoga hoặc thiền định.
# 2. Uống nước: Uống đủ nước có thể giúp giảm đau đầu. Hãy uống ít nhất 8 ly nước mỗi ngày để giữ cơ thể được cân bằng nước.
# 3. Massage: Massage nhẹ nhàng khu vực đầu có thể giúp giảm đau đầu. Bạn có thể tự massage hoặc đến phòng massage để được chuyên gia massage.
# 4. Sử dụng thuốc giảm đau: Thuốc giảm đau như aspirin hoặc ibuprofen có thể giúp giảm đau đầu. Tuy nhiên, hãy đọc kỹ hướng dẫn sử dụng và tuân thủ liều lượng được khuyến nghị.
# 5. Áp dụng nhiệt hoặc lạnh: Áp dụng nhiệt hoặc lạnh lên khu vực đầu có thể giúp giảm đau đầu. Bạn có thể sử dụng túi đá hoặc băng để áp lên khu vực đầu hoặc sử dụng khăn ấm để áp lên khu vực đầu.
# 6. Điều chỉnh chế độ ăn uống: Ăn uống lành mạnh và cân bằng có thể giúp giảm đau đầu. Hạn chế các loại thực phẩm có chứa caffeine và đường, và ăn nhiều trái cây và rau quả để cung cấp đủ vitamin và khoáng chất cho cơ thể.
# 7. Tập thể dục: Tập thể dục thường xuyên có thể giúp giảm đau đầu. Hãy tham gia các hoạt động thể thao như đi bộ, chạy bộ hoặc bơi lội để giảm đau đầu.
# 8. Tránh căng thẳng: Căng thẳng có thể gây ra đau đầu. Hãy cố gắng giảm căng thẳng bằng cách tập yoga, thiền định hoặc các hoạt động thư giãn khác.
# 9. Kiểm tra sức khỏe: Nếu đau đầu kéo dài hoặc trở nên nghiêm trọng hơn, hãy tham khảo ý kiến bác sĩ để kiểm tra sức khỏe của bạn.
```
<h4> 2. Summarization task </h4>
<h5> Focused Answer </h5>
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch.
Context:
Đoạn 0: "Chính phủ đề xuất bổ sung gần 20.700 tỷ đồng vốn điều lệ cho Ngân hàng Ngoại thương Việt Nam (Vietcombank) từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Chiều 23/10, thừa ủy quyền Chính phủ, Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc trình Quốc hội về bổ sung vốn Nhà nước tại Ngân hàng Ngoại Thương Việt Nam (Vietcombank). Theo đó, Chính phủ đề nghị tăng vốn điều lệ cho ngân hàng này gần 20.700 tỷ đồng từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Số tiền này lấy từ nguồn lợi nhuận còn lại lũy kế đến hết năm 2018 và lãi còn lại năm 2021. Vốn điều lệ dự kiến rót thêm cho Vietcombank gần bằng lợi nhuận hợp nhất trước thuế nửa đầu năm nay của nhà băng này. Việc bổ sung vốn cho "ông lớn" ngân hàng quốc doanh được Phó thủ tướng nhấn mạnh là cấp thiết để duy trì tỷ lệ vốn góp Nhà nước, phù hợp chiến lược phát triển kinh tế xã hội, tạo nguồn lực hỗ trợ ngân hàng yếu kém. Phó thủ tướng cho biết, phần lợi nhuận còn lại lũy kế hết năm 2018 và lãi còn lại 2021 hiện được hạch toán theo dõi tại VCB, chưa nằm trong cân đối ngân sách Nhà nước. Do vậy, nguồn vốn đề xuất tăng cho ngân hàng này không ảnh hưởng tới kế hoạch dự toán thu chi ngân sách 2024-2025. Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Vốn điều lệ của Vietcombank hiện là 55.891 tỷ đồng, thấp hơn nhiều so với VPBank (79.339 tỷ đồng), Techcombank (70.450 tỷ đồng) và không có sự cách biệt lớn so với một số ngân hàng thương mại cổ phần như MB (52.871) tỷ đồng, ACB (44.667 tỷ đồng) và SHB (36.629 tỷ đồng). Ngoài ra, việc tăng vốn nhằm để ngân hàng này đáp ứng các tỷ lệ an toàn tối thiểu. Tính tới cuối 2023, tỷ lệ an toàn vốn (CAR) của ngân hàng này là 11,05%, đảm bảo quy định. 
Tuy nhiên, mức này thấp hơn các ngân hàng thương mại cổ phần (VPBank, MB là 12-13%; Techcombank 13-15%...) và các nhà băng trong khu vực (Singapore là 17,1%, Indonesia 23,27%...). Thẩm tra nội dung này, Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh cho rằng đề xuất tăng vốn cho Vietcombank bảo đảm cơ sở pháp lý và đúng thẩm quyền theo quy định. Tuy nhiên, Ủy ban Kinh tế đề nghị Chính phủ lấy ý kiến của cổ đông chiến lược nước ngoài Ngân hàng Mizuho Corporate Bank - đơn vị nắm 15% vốn điều lệ của Vietcombank. Việc này nhằm thuận lợi trong quá trình tăng vốn. Chính phủ cũng cần bổ sung thông tin hiện trạng vốn của Vietcombank so với các ngân hàng thương mại trong hệ thống hiện nay. "Có ý kiến đề nghị làm rõ nhận định nguồn vốn đề xuất để tăng vốn điều lệ không tác động đến ngân sách Nhà nước", ông Thanh cho biết. Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh đề nghị Chính phủ chỉ đạo Ngân hàng Nhà nước cùng các bộ, ngành liên quan xử lý phần lợi nhuận còn lại năm 2022, 2023 (lần lượt là 21.680 tỷ và 25.009 tỷ đồng), nhằm tăng năng lực tài chính cho Vietcombank, bù đắp mức thiếu hụt vốn tự có, bảo đảm an toàn hoạt động. Cơ quan thẩm tra lưu ý vốn được bổ sung cho Vietcombank cần được dùng để mở rộng kinh doanh, cung ứng tín dụng với các lĩnh vực, dự án quan trọng quốc gia quy mô lớn, giảm lãi suất cho vay, cũng như đổi mới mô hình quản trị, chất lượng dịch vụ của nhà băng này. "Chính phủ cần đánh giá kỹ tác động việc bổ sung vốn Nhà nước cho Vietcombank tới phát triển của ngành ngân hàng, hiệu quả kinh tế xã hội", Ủy ban Kinh tế lưu ý. Vietcombank là một trong 4 ngân hàng thương mại Nhà nước, bên cạnh BIDV, VietinBank và Agribank. Ngân hàng này do Nhà nước sở hữu 74,8% vốn điều lệ. Lũy kế nửa đầu năm nay, lợi nhuận hợp nhất trước thuế của nhà băng này đạt 20.835 tỷ đồng, tăng 1,6% so với cùng kỳ 2023. 
Với dữ liệu này, Vietcombank tiếp tục đứng đầu toàn hệ thống ngân hàng về lợi nhuận 6 tháng đầu năm. Đây cũng là mức lãi nửa đầu năm cao kỷ lục của nhà băng này. Tính đến 30/6, tổng tài sản của ngân hàng đạt hơn 1,9 triệu tỷ đồng, tăng 3,6% so với cuối 2023. Trong đó, cho vay khách hàng gần 1,37 triệu tỷ đồng, tăng 7,8%."
Đoạn 1: "Đã có vài đơn vị bán tín chỉ carbon cho khách ngoại nhưng còn thiếu cơ sở pháp lý để đảm bảo hoạt động được thuận lợi, theo chuyên gia. Thông tin tại phiên tọa đàm thuộc Diễn đàn và Triển lãm Kinh tế xanh 2024 (GEFE), ông Đỗ Ngọc Quỳnh, Tổng thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA), cho biết thị trường tín chỉ carbon tự nguyện Việt Nam đã có một số đơn vị bán được tín chỉ carbon cho nhà đầu tư, tập đoàn nước ngoài. "Họ đang mua chứng chỉ carbon và chứng chỉ năng lượng tái tạo (REC) trong tiêu chí RE100, tức 100% năng lượng tái tạo", ông cho biết. RE100 là sáng kiến toàn cầu dành cho các công ty cam kết sử dụng 100% điện năng tái tạo, phát động bởi Climate Group và CDP vào 2014. Từ trái sang, Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) nói tại tọa đàm. Ảnh: GEFE 2024 Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) chia sẻ tại tọa đàm. Ảnh: GEFE 2024 Thị trường carbon gồm hai hình thức là bắt buộc và tự nguyện. Đồ họa: Dỹ Tùng Phân biệt các loại thị trường carbon. Đồ họa: Dỹ Tùng Theo kế hoạch của chính phủ, thị trường bắt buộc sẽ vận hành thử nghiệm vào giai đoạn 2025-2028. Với thị trường tự nguyện, ông Quỳnh cho biết đã bắt đầu hình thành và cũng biến động theo diễn biến xu hướng chung toàn cầu. Chuyên gia VBMA cho rằng Việt Nam đã có chính sách chung để thực hiện cam kết Net Zero vào 2050, nhưng vẫn chưa có pháp lý đầy đủ và rõ ràng cho thị trường carbon tự nguyện. "Những người bán tại Việt Nam sau giao dịch không biết hạch toán vào đâu, nộp thuế thế nào. Một số chọn phương án tính vào thu nhập bất thường để khai thuế", ông ví dụ. Ông Nguyễn Thành Nghiệp, Luật sư thành viên công ty luật VTN và Cộng sự chỉ ra việc chưa có quy định xác định tính chất tài sản của tín chỉ carbon. 
"Chúng có được xem là tài sản bình thường, được thế chấp hay giao dịch thế nào chưa có đủ căn cứ pháp lý", ông nói. Ngoài ra, quy trình MRV (đo lường, báo cáo và kiểm chứng) cũng cần quy định, hướng dẫn rõ. Theo ông, ngoài các cơ quan quản lý, khu vực tư nhân cũng trông chờ xem liệu có thể tham gia hoạt động MRV không. "Trong thời gian tới, nếu hoàn thiện pháp lý, thị trường sẽ có nhiều tiềm năng phát triển hơn", ông Đỗ Ngọc Quỳnh dự báo. Ngoài tín chỉ carbon, với tiềm năng điện tái tạo thứ tư thế giới theo McKenzie, ông cho rằng có thể khai thác việc vừa bán tín chỉ carbon vừa bán được REC. Theo VBMA, quy mô thị trường carbon bắt buộc toàn cầu đạt 104 tỷ USD năm ngoái, tăng 100% so với năm 2020. Trong khi, thị trường tự nguyện đã thu hẹp còn 800 triệu USD, giảm hai phần ba so với 2021 do một số vụ bê bối liên quan đến "giặt xanh" (green washing) làm ảnh hưởng đến uy tín, niềm tin. Theo dõi biến động của thị trường thế giới giúp các bên tham gia trong thị trường carbon tự nguyện còn sơ khai của Việt Nam rút kinh nghiệm và tìm ra hướng đi. Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS) văn phòng Hà Nội, dự báo người mua sẽ cần tìm kiếm các bên bán tín chỉ có hệ thống quản trị tốt và rõ ràng. Ông cho rằng người mua đang thiên về chuộng mua tín chỉ lĩnh vực giảm phát thải sản xuất vì dễ chứng minh. Một loại được quan tâm khác là "carbon xanh dương" (blue carbon) - tín chỉ tạo ra từ các dự án hấp thụ carbon của rừng ngập mặn, đầm lầy bãi triều và cỏ biển. Ông chỉ ra Việt Nam triển vọng với 200.000 ha rừng ngập mặn, có thể làm các dự án carbon tương tự như ở Honduras. Bà Thu Nguyễn, Quản lý chính sách tại Apanada Management Consultancy, Đại diện Viện Tài nguyên Thế giới (WRI) khuyến nghị các dự án tín chỉ carbon nâng cao giá trị bằng cách quan tâm đến tính bình đẳng và bao trùm. 
Theo đó, mục tiêu không chỉ là giảm phát thải mà còn là cải thiện đời sống người dân và phát triển bình đẳng hơn "Dự án cần bảo đảm có tham vấn của cộng đồng, đặc biệt là phụ nữ và các nhóm yếu thế, để tạo ra lợi ích cho cả cộng đồng lẫn nhà đầu tư", bà nói."
Đoạn 2: "Giá nhẫn trơn liên tục điều chỉnh, tăng gần một triệu đồng trong ngày và có nơi lên sát 89 triệu đồng một lượng. 15h ngày 23/10, giá mua bán nhẫn trơn được các thương hiệu kinh doanh điều chỉnh theo diễn biến đi lên của thế giới. Chiều nay, mỗi ounce vàng quốc tế tiếp tục thiết lập kỷ lục mới 2.755 USD. Giá nhẫn trơn tại Công ty Vàng bạc đá quý Sài Gòn (SJC) cũng tăng nửa triệu đồng so với đầu sáng và gần 1 triệu đồng so với cuối ngày hôm qua, lên 86,9 - 88,2 triệu đồng. Công ty Vàng bạc đá quý Phú Nhuận (PNJ) và Mi Hồng niêm yết giá nhẫn trơn quanh vùng 87,4 - 88,4 triệu đồng. Còn tại Tập đoàn Vàng bạc đá quý DOJI, giá mua bán nhẫn trơn cùng thời điểm thậm chí lên 88 - 88,9 triệu đồng một lượng. Trước đó đầu ngày, Công ty Vàng bạc đá quý Sài Gòn (SJC) đã tăng 300.000 đồng một lượng so với cuối ngày hôm qua, niêm yết giá nhẫn trơn tại 86,3 - 87,6 triệu đồng. Biểu giá mua bán nhẫn trơn tại Tập đoàn Vàng bạc đá quý DOJI lúc 9h sáng là 87 - 88 triệu đồng, tăng 200.000 đồng so với cuối ngày hôm qua. Nhẫn trơn giữ nhịp tăng liên tục trong 10 ngày qua. So với giữa tháng, mỗi lượng nhẫn trơn đã tăng hơn 5 triệu đồng. Còn so với đầu năm, nhẫn trơn tăng gần 25 triệu một lượng, tương đương hiệu suất 39%. Trong khi giá vàng miếng SJC đứng yên ở vùng 87 - 89 triệu một lượng, do Ngân hàng Nhà nước chưa thay đổi giá bán can thiệp. Thời điểm này là mùa cưới cuối năm và nhu cầu mua vàng nhẫn làm quà cưới tăng, song người dân không dễ để mua được mặt hàng này tại các thương hiệu lớn. Các thương hiệu lớn như DOJI, PNJ, Bảo Tín Minh Châu thường xuyên trong tình trạng cháy hàng. Khách lẻ chỉ may mắn mua được số lượng ít nếu cửa hàng vừa có khách bán ra. Còn tại SJC, các chi nhánh giới hạn lượng mua tối đa 5 phân đến 1 chỉ mỗi người. Trên thị trường quốc tế, mỗi ounce vàng trong 5 ngày qua tăng mạnh hơn 100 USD. Kim loại quý có thời điểm lên mức kỷ lục gần 2.750 USD, trước khi lùi về vùng 2.738 USD vào sáng nay. 
Quy đổi theo tỷ giá bán Vietcombank, giá vàng trong nước chênh lệch 3,5-5 triệu đồng một lượng so với thế giới. Theo dự báo của các nhà băng hàng đầu thế giới, giá vàng thế giới có thể lên 3.000 USD một ounce vào năm sau. Các chuyên gia khuyến nghị nhà đầu tư phân bổ tỷ trọng nhỏ danh mục vào kênh trú ẩn này, đặc biệt trong bối cảnh kim loại quý đã tăng mạnh thời gian qua."
Đoạn 3: "Nhu cầu trú ẩn khi căng thẳng địa chính trị leo thang kéo giá vàng lên mức đỉnh mới, tại 2.748 USD một ounce. Chốt phiên giao dịch 22/10, giá vàng thế giới giao ngay tăng gần 30 USD lên 2.748 USD một ounce. Đây là mức cao kỷ lục mới của kim loại quý. "Căng thẳng địa chính trị vẫn là nguyên nhân chủ yếu. Hai tuần nữa sẽ diễn ra bầu cử Tổng thống Mỹ và cuộc đua vẫn rất sát sao. Bất ổn chính trị đang kéo nhu cầu trú ẩn lên cao", Peter A. Grant - Phó giám đốc Zaner Metals nhận định trên Reuters. Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Cuộc thăm dò mới nhất của Reuters/Ipsos cho thấy tỷ lệ ủng hộ Phó tổng thống Kamala Harris hiện là 46%, nhỉnh hơn so với 43% của cựu Tổng thống Donald Trump. "Sự sát sao này đang tạo nên tình trạng thiếu chắc chắn. Môi trường này có lợi cho vàng", các nhà phân tích tại ngân hàng BNP Paribas nhận định. Grant dự báo nếu căng thẳng tại Trung Đông tiếp tục tăng nhiệt, giá có thể lên 3.000 USD cuối năm nay. Từ đầu năm, giá đã tăng 33% và liên tiếp lập đỉnh mới. Một yếu tố khác đang hỗ trợ kim loại quý là làn sóng giảm lãi suất của các ngân hàng trung ương lớn trên toàn cầu. Mỹ, châu Âu, Trung Quốc cùng hàng loạt nền kinh tế khác đã giảm lãi suất năm nay để hỗ trợ nền kinh tế. Trong khi đó, tại Wall Street, các chỉ số chính gần như đứng yên. Nhà đầu tư hiện theo dõi lợi suất trái phiếu chính phủ Mỹ và chờ đánh giá thêm báo cáo tài chính của các doanh nghiệp. Ngoài vàng, các kim loại quý khác cũng tăng giá. Bạc lập đỉnh 12 năm, khi tăng 3,2% lên gần 35 USD một ounce. Han Tan - chiến lược gia thị trường tại Exinity Group dự báo bạc vượt mốc 35 USD trước khi cuộc bầu cử diễn ra. Bạch kim đắt thêm 2,8% lên 1.031 USD một ounce. Palladium tăng 2,9% lên 1.081 USD."
'''},
{"role": "user", "content": '''giá nhẫn trơn hôm nay là bao nhiêu?'''}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# Giá nhẫn trơn hôm nay là 86,9 - 88,2 triệu đồng.
```
***You can customize the prompt before the answer to get a response that suits your needs.***
***You can also add information about this bot's persona in the system prompt.***
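Note that `tokenizer.decode(outputs[0])` returns the whole chat transcript, special tokens included. A small helper (the function name is ours; it assumes the standard Llama-3 chat markers) can strip everything but the assistant's reply:

```python
def extract_assistant_reply(decoded: str) -> str:
    """Return only the assistant's answer from a decoded Llama-3-style
    chat transcript (assumes the standard <|start_header_id|> markers)."""
    marker = "<|start_header_id|>assistant<|end_header_id|>"
    reply = decoded.rsplit(marker, 1)[-1]
    return reply.replace("<|eot_id|>", "").strip()

sample = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nxin chào em<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\nDạ thưa anh yêu, chào buổi sáng.<|eot_id|>"
)
print(extract_assistant_reply(sample))
# → Dạ thưa anh yêu, chào buổi sáng.
```

Alternatively, `tokenizer.decode(outputs[0][tokenized_chat.shape[-1]:], skip_special_tokens=True)` achieves a similar result by decoding only the newly generated tokens.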
<h4> 3. Function Calling task </h4>
***In this task, we are following the Function Calling template from Glaive AI: [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2).***
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lý hữu ích với khả năng truy cập vào các hàm sau. Hãy sử dụng chúng nếu cần -
{
"name": "weather_forecast",
"description": "Cung cấp cập nhật và dự báo thời tiết cho các địa điểm cụ thể, bao gồm nhiệt độ, độ ẩm và tình trạng thời tiết. Ví dụ: thời tiết hôm nay, dự báo thời tiết ở Hà Nội, nhiệt độ tại Đà Nẵng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "news_update",
"description": "Cung cấp các bài báo và cập nhật tin tức mới nhất trên nhiều lĩnh vực như chính trị, công nghệ, thể thao và giải trí. Ví dụ: tin tức hôm nay, cập nhật thể thao, tin công nghệ mới nhất, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "recipe_search",
"description": "Tìm kiếm và gợi ý công thức nấu ăn dựa trên nguyên liệu hoặc sở thích dinh dưỡng. Ví dụ: công thức món ăn với gà, món chay, ăn kiêng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "movie_recommendation",
"description": "Cung cấp gợi ý phim dựa trên thể loại, tâm trạng hoặc tiêu đề cụ thể. Ví dụ: phim hài hay, phim hành động mới, gợi ý phim cho tối nay, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "fitness_advice",
"description": "Cung cấp mẹo và bài tập cho sức khỏe và thể dục dựa trên mục tiêu của người dùng. Ví dụ: bài tập giảm cân, lịch tập gym cho người mới, lời khuyên về dinh dưỡng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "travel_planner",
"description": "Hỗ trợ lập kế hoạch du lịch, bao gồm gợi ý lịch trình, mẹo đặt vé và thông tin về điểm đến. Ví dụ: lên kế hoạch du lịch Đà Nẵng, địa điểm tham quan ở Nha Trang, mẹo du lịch Nhật Bản, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
}'''},
{"role": "user", "content": "hôm nay có tin tức gì về ô nhiễm không khí"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# <functioncall> {"name": "news_update", "arguments": {}}
```
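The `<functioncall>` string above has to be parsed before the named tool can actually be dispatched. Below is a minimal sketch of such a parser (the helper name is ours; it assumes the Glaive-style single-JSON-object output shown in the comment, and the argument value here is illustrative):

```python
import json
import re

def parse_function_call(output: str):
    """Parse a Glaive-style '<functioncall> {...}' line into (name, arguments).

    Returns None when the model produced a plain-text answer instead.
    """
    match = re.search(r"<functioncall>\s*(\{.*\})", output, re.DOTALL)
    if match is None:
        return None
    payload = json.loads(match.group(1))
    return payload["name"], payload.get("arguments", {})

call = parse_function_call(
    '<functioncall> {"name": "news_update", "arguments": {"__arg1": "ô nhiễm không khí"}}'
)
print(call)
# → ('news_update', {'__arg1': 'ô nhiễm không khí'})
```

The returned name can then be routed to the matching local function, and its result appended to the conversation for a follow-up generation.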
<h4> 4. Question Rewriting task </h4>
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.\nNếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch.
Context:
USER: Concert "Anh trai vượt ngàn chông gai" diễn ra ở đâu?
ASSISTANT: Đêm concert Anh trai vượt ngàn chông gai diễn ra vào tối 19-10, tại công viên bờ sông Sài Gòn, TP.HCM với sự tham dự của khoảng 20.000 khán giả.
USER: Buổi biểu diễn có bao nhiêu nghệ sĩ?
ASSISTANT: Danh sách nghệ sĩ gồm 17 nghệ sĩ: Kay Trần, Jun Phạm, Cường Seven, Tự Long, (S)TRONG Trọng Hiếu, S.T Sơn Thạch, BB Trần, Bùi Công Nam, Rhymastic, Soobin Hoàng Sơn, Binz, Quốc Thiên, Thanh Duy, Bằng Kiều, Tiến Luật, Đinh Tiến Đạt và Đỗ Hoàng Hiệp.
USER: Chương trình phát sóng khi nào và ở đâu?
ASSISTANT: chương trình phát sóng từ ngày 29 tháng 6 năm 2024 lúc 20:00 thứ 7 hàng tuần trên VTV3 và công chiếu lúc 20:30 cùng ngày trên kênh YouTube YeaH1 Show của nhà sản xuất chương trình.'''},
    {"role": "user", "content": '''Dựa vào đoạn hội thoại được cung cấp, viết lại câu nói của người dùng sao cho đầy đủ ý nhất có thể mà không bị sai lệch thông tin.
Câu nói: Concert này có tổ chức ở Hà Nội không?
'''}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
# Concert "Anh trai vượt ngàn chông gai" có tổ chức ở Hà Nội không?
```
***Modify the parameters "temperature", "top_k", and "top_p" to suit your use case.***
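As a concrete starting point, those sampling parameters can be collected into one dict and unpacked into `model.generate` (the specific values below are illustrative defaults, not tuned recommendations):

```python
sampling_params = dict(
    do_sample=True,      # enable sampling instead of greedy decoding
    temperature=0.7,     # lower values make output more deterministic
    top_k=50,            # keep only the 50 most likely next tokens
    top_p=0.95,          # nucleus sampling probability threshold
    max_new_tokens=256,
)
print(sorted(sampling_params))
```

Pass them as `model.generate(tokenized_chat, **sampling_params)`.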
Corresponding Author:
+ [email protected] | [
"CHIA"
] |
pruas/BENT-PubMedBERT-NER-Chemical | pruas | token-classification | [
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-11T20:19:34Z | 2024-03-01T13:56:32+00:00 | 687 | 8 | ---
language:
- en
license: apache-2.0
pipeline_tag: token-classification
---
A Named Entity Recognition (NER) model to recognize chemical entities.
Please cite our work:
```
@article{NILNKER2022,
title = {NILINKER: Attention-based approach to NIL Entity Linking},
journal = {Journal of Biomedical Informatics},
volume = {132},
pages = {104137},
year = {2022},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2022.104137},
url = {https://www.sciencedirect.com/science/article/pii/S1532046422001526},
author = {Pedro Ruas and Francisco M. Couto},
}
```
[PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned on the following datasets:
- [Chemdner patents CEMP corpus](https://biocreative.bioinformatics.udel.edu/resources/corpora/chemdner-patents-cemp-corpus/) (train, dev, test sets)
- [DDI corpus](https://github.com/isegura/DDICorpus) (train, dev, test sets): entity types "GROUP", "DRUG", "DRUG_N"
- [GREC Corpus](http://www.nactem.ac.uk/GREC/standoff.php) (train, dev, test sets): entity type "organic_compounds"
- [MLEE](http://nactem.ac.uk/MLEE/) (train, dev, test sets): entity type "Drug or compound"
- [NLM-CHEM](https://ftp.ncbi.nlm.nih.gov/pub/lu/NLMChem/) (train, dev, test sets)
- [CHEMDNER](https://biocreative.bioinformatics.udel.edu/resources/) (train, dev, test sets)
- [Chebi Corpus](http://www.nactem.ac.uk/chebi/) (train, dev, test sets): entity types "Metabolite", "Chemical"
- [PHAEDRA](http://www.nactem.ac.uk/PHAEDRA/) (train, dev, test sets): entity type "Pharmalogical_substance"
- [Chemprot](https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vi/track-5/) (train, dev, test sets)
- [PGx Corpus](https://github.com/practikpharma/PGxCorpus) (train, dev, test sets): entity type "Chemical"
- [BioNLP11ID](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP11ID-chem-IOB) (train, dev, test sets): entity type "Chemical"
- [BioNLP13CG]() (train, dev, test sets): entity type "Chemical"
- [BC4CHEMD](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BC4CHEMD) (train, dev, test sets)
- [CRAFT corpus](https://github.com/UCDenver-ccp/CRAFT/tree/master/concept-annotation) (train, dev, test sets): entity type "ChEBI"
- [BC5CDR]() (train, dev, test sets): entity type "Chemical" | [
"BC5CDR",
"CHEBI CORPUS",
"CHEMDNER",
"CRAFT",
"CHEMPROT",
"DDI CORPUS",
"MLEE",
"NLM-CHEM"
] |
QuantFactory/Llama-3-Patronus-Lynx-8B-Instruct-GGUF | QuantFactory | text-generation | [
"transformers",
"gguf",
"text-generation",
"pytorch",
"Lynx",
"Patronus AI",
"evaluation",
"hallucination-detection",
"en",
"base_model:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct",
"base_model:quantized:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-07-12T16:33:16Z | 2024-07-17T09:16:04+00:00 | 682 | 1 | ---
base_model: PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct
language:
- en
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- text-generation
- pytorch
- Lynx
- Patronus AI
- evaluation
- hallucination-detection
---
# QuantFactory/Llama-3-Patronus-Lynx-8B-Instruct-GGUF
This is a quantized version of [PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct](https://huggingface.co/PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct) created using llama.cpp.
# Model Description
Lynx is an open-source hallucination evaluation model. Patronus-Lynx-8B-Instruct was trained on a mix of datasets, including CovidQA, PubmedQA, DROP, and RAGTruth.
The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 8000 tokens.
## Model Details
- **Model Type:** Patronus-Lynx-8B-Instruct is a fine-tuned version of the meta-llama/Meta-Llama-3-8B-Instruct model.
- **Language:** Primarily English
- **Developed by:** Patronus AI
- **License:** [https://creativecommons.org/licenses/by-nc/4.0/](https://creativecommons.org/licenses/by-nc/4.0/)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/patronus-ai/Lynx-hallucination-detection](https://github.com/patronus-ai/Lynx-hallucination-detection)
## How to Get Started with the Model
The model is fine-tuned to detect hallucinations in a RAG setting. Given a document, a question, and an answer, the model can evaluate whether the answer is faithful to the document.
To use the model, we recommend using the prompt we used for fine-tuning:
```
PROMPT = """
Given the following QUESTION, DOCUMENT and ANSWER you must analyze the provided answer and determine whether it is faithful to the contents of the DOCUMENT. The ANSWER must not offer new information beyond the context provided in the DOCUMENT. The ANSWER also must not contradict information provided in the DOCUMENT. Output your final verdict by strictly following this format: "PASS" if the answer is faithful to the DOCUMENT and "FAIL" if the answer is not faithful to the DOCUMENT. Show your reasoning.
--
QUESTION (THIS DOES NOT COUNT AS BACKGROUND INFORMATION):
{question}
--
DOCUMENT:
{context}
--
ANSWER:
{answer}
--
Your output should be in JSON FORMAT with the keys "REASONING" and "SCORE":
{{"REASONING": <your reasoning as bullet points>, "SCORE": <your final score>}}
"""
```
The model will output the score as 'PASS' if the answer is faithful to the document, or 'FAIL' if the answer is not faithful to the document.
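Once the model responds, the JSON verdict still has to be pulled out of the raw completion. A minimal sketch of that step (the helper name is ours; it assumes the `{"REASONING": ..., "SCORE": ...}` shape the prompt requests):

```python
import json

def parse_lynx_verdict(raw: str):
    """Extract (reasoning, score) from the model's JSON verdict.
    Assumes the {"REASONING": ..., "SCORE": ...} format requested above."""
    payload = json.loads(raw[raw.index("{"): raw.rindex("}") + 1])
    return payload["REASONING"], payload["SCORE"]

reasoning, score = parse_lynx_verdict(
    '{"REASONING": ["The answer restates facts present in the document."], "SCORE": "PASS"}'
)
print(score)
# → PASS
```

Note that the doubled braces in the prompt's JSON example are `str.format` escapes, so `PROMPT.format(question=..., context=..., answer=...)` fills the template correctly.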
## Training Details
The model was finetuned for 3 epochs using H100s on a dataset of 2,400 samples. We use the [lion](https://github.com/lucidrains/lion-pytorch) optimizer with lr=5.0e-7. For more details on data generation, please check out our GitHub repo.
### Training Data
We train on 2400 samples consisting of CovidQA, PubmedQA, DROP and RAGTruth samples. For datasets that do not contain hallucinated samples, we generate perturbations to introduce hallucinations in the data. For more details about the data generation process, refer to the paper.
## Evaluation
The model was evaluated on [PatronusAI/HaluBench](https://huggingface.co/datasets/PatronusAI/HaluBench).
It outperforms GPT-3.5-Turbo, GPT-4-Turbo, GPT-4o and Claude Sonnet.
## Model Card Contact
[@sunitha-ravi](https://huggingface.co/sunitha-ravi) | [
"PUBMEDQA"
] |
Cloyne/sup-SimCSE-VietNamese-phobert-base | Cloyne | sentence-similarity | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:120210",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:VoVanPhuc/sup-SimCSE-VietNamese-phobert-base",
"base_model:finetune:VoVanPhuc/sup-SimCSE-VietNamese-phobert-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-29T11:21:00Z | 2024-10-29T11:21:17+00:00 | 682 | 0 | ---
base_model: VoVanPhuc/sup-SimCSE-VietNamese-phobert-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:120210
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Chủ tịch Ủy ban nhân dân xã có quyền ra quyết định cưỡng chế tháo
dỡ công trình xây dựng trên đất nông nghiệp khi chưa chuyển mục đích sử dụng đất
hay không?
sentences:
- 'Đối tượng, điều kiện kéo dài tuổi phục vụ tại ngũ
1. Đối tượng:
a) Quân nhân chuyên nghiệp có trình độ cao đẳng trở lên đang đảm nhiệm các chức
danh: Kỹ thuật viên, Nhân viên Kỹ thuật, Huấn luyện viên, Nghệ sĩ, Nhạc sĩ, Diễn
viên làm việc đúng chuyên ngành đào tạo ở các cơ sở nghiên cứu, nhà trường, bệnh
viện, trung tâm thể dục thể thao, đoàn nghệ thuật, nhà máy, doanh nghiệp quốc
phòng; đơn vị đóng quân ở địa bàn vùng sâu, vùng xa, biên giới, hải đảo.
b) Quân nhân chuyên nghiệp đang làm việc thuộc các chuyên ngành hẹp được đào tạo
công phu hoặc chuyên ngành Quân đội chưa đào tạo được; thợ bậc cao.
c) Quân nhân chuyên nghiệp đang đảm nhiệm chức vụ chỉ huy, quản lý ở các nhà máy,
doanh nghiệp quốc phòng.
d) Quân nhân chuyên nghiệp không thuộc đối tượng quy định tại điểm a, điểm b,
điểm c khoản này do Bộ trưởng Bộ Quốc phòng quyết định.
2. Điều kiện:
Quân nhân chuyên nghiệp thuộc đối tượng quy định tại khoản 1 Điều này được kéo
dài tuổi phục vụ tại ngũ khi có đủ các điều kiện sau:
a) Đơn vị có biên chế và nhu cầu sử dụng;
b) Hết hạn tuổi phục vụ tại ngũ cao nhất theo cấp bậc quân hàm quy định tại khoản
2 Điều 17 Luật Quân nhân chuyên nghiệp, công nhân và viên chức quốc phòng; chưa
có người thay thế; tự nguyện tiếp tục phục vụ tại ngũ;
c) Có đủ phẩm chất chính trị, đạo đức, sức khỏe để hoàn thành nhiệm vụ được giao;
d) Có trình độ chuyên môn kỹ thuật, nghiệp vụ giỏi; tay nghề cao; chất lượng,
hiệu quả công tác tốt.'
- 'Thi hành quyết định cưỡng chế
1. Người ra quyết định cưỡng chế có trách nhiệm gửi ngay quyết định cưỡng chế
cho các cá nhân, tổ chức liên quan và tổ chức thực hiện việc cưỡng chế thi hành
quyết định xử phạt của mình và của cấp dưới.
..."'
- 'Trình tự, thủ tục đăng ký tài khoản định danh điện tử đối với công dân Việt Nam
1. Đăng ký tài khoản định danh điện tử mức độ 1 qua ứng dụng VNelD đối với công
dân đã có thẻ Căn cước công dân gắn chíp điện tử
a) Công dân sử dụng thiết bị di động tải và cài đặt ứng dụng VNelD.
b) Công dân sử dụng ứng dụng VNelD để nhập thông tin về số định danh cá nhân và
số điện thoại hoặc địa chỉ thư điện tử; cung cấp các thông tin theo hướng dẫn
trên ứng dụng VNelD; thu nhận ảnh chân dung bằng thiết bị di động và gửi yêu cầu
đề nghị cấp tài khoản định danh điện tử tới cơ quan quản lý định danh và xác thực
điện tử qua ứng dụng VNelD.
c) Cơ quan quản lý định danh điện tử thông báo kết quả đăng ký tài khoản qua ứng
dụng VNelD hoặc tin nhắn SMS hoặc địa chỉ thư điện tử.
2. Đăng ký tài khoản định danh điện tử mức độ 2
a) Đối với công dân đã được cấp thẻ Căn cước công dân gắn chíp điện tử:
Công dân đến Công an xã, phường, thị trấn hoặc nơi làm thủ tục cấp thẻ Căn cước
công dân để làm thủ tục cấp tài khoản định danh điện tử. Công dân xuất trình thẻ
Căn cước công dân gắn chíp điện tử, cung cấp thông tin về số điện thoại hoặc địa
chỉ thư điện tử và đề nghị bổ sung thông tin được tích hợp vào tài khoản định
danh điện tử.
Cán bộ tiếp nhận nhập thông tin công dân cung cấp vào hệ thống định danh và xác
thực điện tử; chụp ảnh chân dung, thu nhận vân tay của công dân đến làm thủ tục
để xác thực với Cơ sở dữ liệu căn cước công dân và khẳng định sự đồng ý đăng ký
tạo lập tài khoản định danh điện tử.
Cơ quan quản lý định danh điện tử thông báo kết quả đăng ký tài khoản qua ứng
dụng VNelD hoặc tin nhắn SMS hoặc địa chỉ thư điện tử.
b) Cơ quan Công an tiến hành cấp tài khoản định danh điện tử mức độ 2 cùng với
cấp thẻ Căn cước công dân với trường hợp công dân chưa được cấp Căn cước công
dân gắn chíp điện tử.'
- source_sentence: Mức hưởng chế độ thai sản đối với lao động nam là người nước ngoài
được pháp luật quy định như thế nào?
sentences:
- '"Điều 21. Thông báo kết quả và xác nhận nhập học
1. Cơ sở đào tạo gửi giấy báo trúng tuyển cho những thí sinh trúng tuyển, trong
đó ghi rõ những thủ tục cần thiết đối với thí sinh khi nhập học và phương thức
nhập học của thí sinh.
2. Thí sinh xác nhận nhập học bằng hình thức trực tuyến trên hệ thống, trước khi
nhập học tại cơ sở đào tạo.
3. Đối với những thí sinh không xác nhận nhập học trong thời hạn quy định:
a) Nếu không có lý do chính đáng thì coi như thí sinh từ chối nhập học và cơ sở
đào tạo có quyền không tiếp nhận;
b) Nếu do ốm đau, tai nạn, có giấy xác nhận của bệnh viện quận, huyện trở lên
hoặc do thiên tai có xác nhận của UBND quận, huyện trở lên, cơ sở đào tạo xem
xét quyết định tiếp nhận thí sinh vào học hoặc bảo lưu kết quả tuyển sinh để thí
sinh vào học sau;
c) Nếu do sai sót, nhầm lẫn của cán bộ thực hiện công tác tuyển sinh hoặc cá nhân
thí sinh gây ra, cơ sở đào tạo chủ động phối hợp với các cá nhân, tổ chức liên
quan xem xét các minh chứng và quyết định việc tiếp nhận thí sinh vào học hoặc
bảo lưu kết quả tuyển sinh để thí sinh vào học sau.
4. Thí sinh đã xác nhận nhập học tại một cơ sở đào tạo không được tham gia xét
tuyển ở nơi khác hoặc ở các đợt xét tuyển bổ sung, trừ trường hợp được cơ sở đào
tạo cho phép."'
- 'Tổ chức, nhiệm vụ, quyền hạn của Ban Chỉ huy
...
2. Nhiệm vụ, quyền hạn của Ban Chỉ huy:
a) Chỉ đạo xây dựng, ban hành quy định về công tác bảo đảm an toàn PCCC và CNCH
tại Trụ sở cơ quan Bộ Tư pháp.
b) Hướng dẫn, phối hợp với các đơn vị thuộc Bộ và chỉ đạo Đội PCCC và CNCH cơ
sở tổ chức tuyên truyền, bồi dưỡng nghiệp vụ PCCC và CNCH.
c) Chỉ đạo Đội PCCC và CNCH cơ sở tại Trụ sở cơ quan Bộ Tư pháp xây dựng, trình
cấp có thẩm quyền phê duyệt và tổ chức thực tập phương án PCCC, phương án CNCH.
d) Chỉ đạo Đội PCCC và CNCH cơ sở tại Trụ sở cơ quan Bộ Tư pháp quản lý các trang
thiết bị PCCC và CNCH.
đ) Chỉ đạo chữa cháy, CNCH khi xảy ra cháy, sự cố, tai nạn tại Trụ sở cơ quan
Bộ Tư pháp.
e) Chỉ đạo việc tổ chức lập và lưu giữ hồ sơ quản lý, theo dõi hoạt động PCCC,
CNCH tại Trụ sở cơ quan Bộ Tư pháp.
g) Chỉ đạo việc sơ kết, tổng kết các hoạt động về PCCC và CNCH của cơ quan; kiểm
tra, đôn đốc việc chấp hành các quy định về PCCC và CNCH.
h) Đề xuất việc khen thưởng, kỷ luật các tập thể, cá nhân trong việc thực hiện
công tác PCCC, CNCH.
i) Chỉ đạo Đội PCCC và CNCH cơ sở dự trù kinh phí cho các hoạt động PCCC và CNCH
tại Trụ sở cơ quan Bộ Tư pháp.
k) Thực hiện các nhiệm vụ khác do Bộ trưởng giao và theo quy định của pháp luật.'
- 'Mức hưởng chế độ thai sản
...
b) Mức hưởng một ngày đối với trường hợp quy định tại Điều 32 và khoản 2 Điều
34 của Luật này được tính bằng mức hưởng chế độ thai sản theo tháng chia cho 24
ngày.'
- source_sentence: Doanh nghiệp được áp dụng chế độ ưu tiên không cung cấp báo cáo
kiểm toán đúng thời hạn bị phạt bao nhiêu tiền?
sentences:
- 'Thay đổi Thẩm phán, Hội thẩm
1. Thẩm phán, Hội thẩm phải từ chối tham gia xét xử hoặc bị thay đổi khi thuộc
một trong các trường hợp:
a) Trường hợp quy định tại Điều 49 của Bộ luật này;
b) Họ cùng trong một Hội đồng xét xử và là người thân thích với nhau;
c) Đã tham gia xét xử sơ thẩm hoặc phúc thẩm hoặc tiến hành tố tụng vụ án đó với
tư cách là Điều tra viên, Cán bộ điều tra, Kiểm sát viên, Kiểm tra viên, Thẩm
tra viên, Thư ký Tòa án.
2. Việc thay đổi Thẩm phán, Hội thẩm trước khi mở phiên tòa do Chánh án hoặc Phó
Chánh án Tòa án được phân công giải quyết vụ án quyết định.
Thẩm phán bị thay đổi là Chánh án Tòa án thì do Chánh án Tòa án trên một cấp quyết
định.
Việc thay đổi Thẩm phán, Hội thẩm tại phiên tòa do Hội đồng xét xử quyết định
trước khi bắt đầu xét hỏi bằng cách biểu quyết tại phòng nghị án. Khi xem xét
thay đổi thành viên nào thì thành viên đó được trình bày ý kiến của mình, Hội
đồng quyết định theo đa số.
Trường hợp phải thay đổi Thẩm phán, Hội thẩm tại phiên tòa thì Hội đồng xét xử
ra quyết định hoãn phiên tòa.'
- '“Điều 21. Chấm dứt hưởng trợ cấp thất nghiệp
1. Các trường hợp người lao động đang hưởng trợ cấp thất nghiệp bị chấm dứt hưởng
trợ cấp thất nghiệp được quy định như sau:
e) Trong thời gian hưởng trợ cấp thất nghiệp, 03 tháng liên tục không thực hiện
thông báo hằng tháng về việc tìm kiếm việc làm với trung tâm dịch vụ việc làm
theo quy định
Ngày mà người lao động được xác định bị chấm dứt hưởng trợ cấp thất nghiệp là
ngày kết thúc của thời hạn thông báo tìm kiếm việc làm của tháng thứ 3 liên tục
mà người lao động không thực hiện thông báo hằng tháng về việc tìm kiếm việc làm."'
- 'Vi phạm quy định về thời hạn làm thủ tục hải quan, nộp hồ sơ thuế
...
2. Phạt tiền từ 1.000.000 đồng đến 2.000.000 đồng đối với hành vi không thực hiện
đúng thời hạn quy định thuộc một trong các trường hợp sau:
a) Cung cấp báo cáo kiểm toán, báo cáo tài chính của doanh nghiệp được áp dụng
chế độ ưu tiên;
b) Thông báo cho cơ quan hải quan quyết định xử lý vi phạm pháp luật về quản lý
thuế, kế toán đối với doanh nghiệp được áp dụng chế độ ưu tiên;
c) Báo cáo về lượng hàng hóa nhập khẩu phục vụ xây dựng nhà xưởng, hàng hóa gửi
kho bên ngoài của doanh nghiệp chế xuất;
d) Báo cáo về lượng hàng hóa trung chuyển đưa vào, đưa ra, còn lưu tại cảng;
đ) Báo cáo thống kê thông quan hàng bưu chính đưa vào Việt Nam để chuyển tiếp
đi quốc tế.
...'
- source_sentence: Tài chính của Hội Kiểm toán viên hành nghề Việt Nam được chi cho
những khoản nào?
sentences:
- 'Giải thể và xử lý tài chính khi giải thể
1. Khi xét thấy hoạt động của Hội không có hiệu quả, không mang lại lợi ích cho
Hội viên hoặc gây phiền hà, cản trở cho Hội viên thì BCH Hội quyết định triệu
tập Đại hội để bàn biện pháp củng cố tổ chức hoặc giải thể Hội. Nếu giải thể Hội
thì do Đại hội đại biểu hoặc Đại hội toàn quốc của Hội thông qua và đề nghị cơ
quan Nhà nước có thẩm quyền xem xét, quyết định.
2. Khi Hội bị giải thể, Ban Thường trực và Ban Kiểm tra của Hội phải tiến hành
kiểm kê tài sản, kiểm quỹ và báo cáo BCH Hội quyết định việc xử lý tài sản, tiền
tồn quỹ và tiến hành thủ tục giải thể theo quy định của pháp luật.'
- '"Điều 14. Miễn trừ đối với thỏa thuận hạn chế cạnh tranh bị cấm
1. Thỏa thuận hạn chế cạnh tranh quy định tại các khoản 1, 2, 3, 7, 8, 9, 10 và
11 Điều 11 bị cấm theo quy định tại Điều 12 của Luật này được miễn trừ có thời
hạn nếu có lợi cho người tiêu dùng và đáp ứng một trong các điều kiện sau đây:
a) Tác động thúc đẩy tiến bộ kỹ thuật, công nghệ, nâng cao chất lượng hàng hóa,
dịch vụ;
b) Tăng cường sức cạnh tranh của doanh nghiệp Việt Nam trên thị trường quốc tế;
c) Thúc đẩy việc áp dụng thống nhất tiêu chuẩn chất lượng, định mức kỹ thuật của
chủng loại sản phẩm;
d) Thống nhất các điều kiện thực hiện hợp đồng, giao hàng, thanh toán nhưng không
liên quan đến giá và các yếu tố của giá.
2. Thỏa thuận lao động, thỏa thuận hợp tác trong các ngành, lĩnh vực đặc thù được
thực hiện theo quy định của luật khác thì thực hiện theo quy định của luật đó".'
- '"Điều 2. Sửa đổi, bổ sung một số điều của Nghị định số 15/2019/NĐ-CP ngày 01
tháng 02 năm 2019 của Chính phủ quy định chi tiết một số điều và biện pháp thi
hành Luật Giáo dục nghề nghiệp
...
12. Sửa đổi, bổ sung Điều 24 như sau:
Điều 24. Thẩm quyền cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với
nước ngoài
1. Tổng cục Giáo dục nghề nghiệp cấp giấy chứng nhận đăng ký hoạt động liên kết
đào tạo với nước ngoài đối với trường cao đẳng.
2. Sở Lao động - Thương binh và Xã hội nơi trường trung cấp, trung tâm giáo dục
nghề nghiệp, trung tâm giáo dục nghề nghiệp - giáo dục thường xuyên và doanh nghiệp
tổ chức hoạt động liên kết đào tạo với nước ngoài cấp giấy chứng nhận đăng ký
hoạt động liên kết đào tạo với nước ngoài đối với trường trung cấp, trung tâm
giáo dục nghề nghiệp, trung tâm giáo dục nghề nghiệp - giáo dục thường xuyên và
doanh nghiệp."'
- source_sentence: NLĐ ký nhiều hợp đồng lao động thì đóng BHYT như thế nào?
sentences:
- 'Hồ sơ, thủ tục xác định trường hợp được bồi thường
[...]
3. Trong thời hạn 05 ngày làm việc, kể từ ngày nhận được đơn và các giấy tờ hợp
lệ, nếu xác định yêu cầu thuộc trách nhiệm giải quyết của mình thì Sở Y tế phải
thụ lý và thông báo bằng văn bản về việc thụ lý đơn cho người bị thiệt hại hoặc
thân nhân của người bị thiệt hại (sau đây gọi tắt là người bị thiệt hại). Trường
hợp hồ sơ không đầy đủ thì Sở Y tế có văn bản hướng dẫn người bị thiệt hại bổ
sung.
4. Trong thời hạn 15 ngày, kể từ ngày nhận được đơn yêu cầu của người bị thiệt
hại, Sở Y tế phải hoàn thành việc xác định nguyên nhân gây tai biến, mức độ tổn
thương và thông báo bằng văn bản cho người yêu cầu đồng thời báo cáo Bộ Y tế.'
- 'Chuyển nhượng quyền thăm dò khoáng sản
1. Tổ chức, cá nhân nhận chuyển nhượng quyền thăm dò khoáng sản phải có đủ điều
kiện để được cấp Giấy phép thăm dò khoáng sản theo quy định của Luật này.
2. Việc chuyển nhượng quyền thăm dò khoáng sản phải được cơ quan quản lý nhà nước
có thẩm quyền cấp Giấy phép thăm dò khoáng sản chấp thuận; trường hợp được chấp
thuận, tổ chức, cá nhân nhận chuyển nhượng quyền thăm dò khoáng sản được cấp Giấy
phép thăm dò khoáng sản mới.
3. Tổ chức, cá nhân chuyển nhượng quyền thăm dò khoáng sản đã thực hiện được ít
nhất 50% dự toán của đề án thăm dò khoáng sản.
4. Chính phủ quy định chi tiết việc chuyển nhượng quyền thăm dò khoáng sản.'
- '"Sửa đổi, bổ sung một số điều của Luật bảo hiểm y tế:
...
6. Sửa đổi, bổ sung Điều 12 như sau:
“Điều 12. Đối tượng tham gia bảo hiểm y tế
1. Nhóm do người lao động và người sử dụng lao động đóng, bao gồm:
a) Người lao động làm việc theo hợp đồng lao động không xác định thời hạn, hợp
đồng lao động có thời hạn từ đủ 3 tháng trở lên; người lao động là người quản
lý doanh nghiệp hưởng tiền lương; cán bộ, công chức, viên chức (sau đây gọi chung
là người lao động);
b) Người hoạt động không chuyên trách ở xã, phường, thị trấn theo quy định của
pháp luật.=
...
4. Nhóm được ngân sách nhà nước hỗ trợ mức đóng, bao gồm:
a) Người thuộc hộ gia đình cận nghèo;
b) Học sinh, sinh viên.
5. Nhóm tham gia bảo hiểm y tế theo hộ gia đình gồm những người thuộc hộ gia đình,
trừ đối tượng quy định tại các khoản 1, 2, 3 và 4 Điều này.
6. Chính phủ quy định các đối tượng khác ngoài các đối tượng quy định tại các
khoản 3, 4 và 5 Điều này; quy định việc cấp thẻ bảo hiểm y tế đối với đối tượng
do Bộ Quốc phòng, Bộ Công an quản lý và đối tượng quy định tại điểm 1 khoản 3
Điều này; quy định lộ trình thực hiện bảo hiểm y tế, phạm vi quyền lợi, mức hưởng
bảo hiểm y tế, khám bệnh, chữa bệnh bảo hiểm y tế, quản lý, sử dụng phần kinh
phí dành cho khám bệnh, chữa bệnh bảo hiểm y tế, giám định bảo hiểm y tế, thanh
toán, quyết toán bảo hiểm y tế đối với các đối tượng quy định tại điểm a khoản
3 Điều này.”'
---
# SentenceTransformer based on VoVanPhuc/sup-SimCSE-VietNamese-phobert-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) <!-- at revision 608779b86741a8acd8c8d38132974ff04086b138 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
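The `Pooling` module above is configured for mean pooling (`pooling_mode_mean_tokens: True`): sentence embeddings are the average of the token embeddings, with padding positions masked out. A minimal NumPy sketch of that operation (the shapes and values below are illustrative, not taken from the model):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings over the sequence axis, ignoring padding."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid divide-by-zero
    return summed / counts

# One sentence, 3 token positions, embedding dim 4; the last token is padding.
emb = np.arange(12, dtype=np.float64).reshape(1, 3, 4)
mask = np.array([[1, 1, 0]])
pooled = mean_pool(emb, mask)  # average of the first two token vectors only
```

The real module also supports CLS, max, and other pooling modes, but only mean pooling is enabled in this model's configuration.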
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Cloyne/sup-SimCSE-VietNamese-phobert-base")
# Run inference
sentences = [
'NLĐ ký nhiều hợp đồng lao động thì đóng BHYT như thế nào?',
'"Sửa đổi, bổ sung một số điều của Luật bảo hiểm y tế:\n...\n6. Sửa đổi, bổ sung Điều 12 như sau:\n“Điều 12. Đối tượng tham gia bảo hiểm y tế\n1. Nhóm do người lao động và người sử dụng lao động đóng, bao gồm:\na) Người lao động làm việc theo hợp đồng lao động không xác định thời hạn, hợp đồng lao động có thời hạn từ đủ 3 tháng trở lên; người lao động là người quản lý doanh nghiệp hưởng tiền lương; cán bộ, công chức, viên chức (sau đây gọi chung là người lao động);\nb) Người hoạt động không chuyên trách ở xã, phường, thị trấn theo quy định của pháp luật.=\n...\n4. Nhóm được ngân sách nhà nước hỗ trợ mức đóng, bao gồm:\na) Người thuộc hộ gia đình cận nghèo;\nb) Học sinh, sinh viên.\n5. Nhóm tham gia bảo hiểm y tế theo hộ gia đình gồm những người thuộc hộ gia đình, trừ đối tượng quy định tại các khoản 1, 2, 3 và 4 Điều này.\n6. Chính phủ quy định các đối tượng khác ngoài các đối tượng quy định tại các khoản 3, 4 và 5 Điều này; quy định việc cấp thẻ bảo hiểm y tế đối với đối tượng do Bộ Quốc phòng, Bộ Công an quản lý và đối tượng quy định tại điểm 1 khoản 3 Điều này; quy định lộ trình thực hiện bảo hiểm y tế, phạm vi quyền lợi, mức hưởng bảo hiểm y tế, khám bệnh, chữa bệnh bảo hiểm y tế, quản lý, sử dụng phần kinh phí dành cho khám bệnh, chữa bệnh bảo hiểm y tế, giám định bảo hiểm y tế, thanh toán, quyết toán bảo hiểm y tế đối với các đối tượng quy định tại điểm a khoản 3 Điều này.”',
'Hồ sơ, thủ tục xác định trường hợp được bồi thường\n[...]\n3. Trong thời hạn 05 ngày làm việc, kể từ ngày nhận được đơn và các giấy tờ hợp lệ, nếu xác định yêu cầu thuộc trách nhiệm giải quyết của mình thì Sở Y tế phải thụ lý và thông báo bằng văn bản về việc thụ lý đơn cho người bị thiệt hại hoặc thân nhân của người bị thiệt hại (sau đây gọi tắt là người bị thiệt hại). Trường hợp hồ sơ không đầy đủ thì Sở Y tế có văn bản hướng dẫn người bị thiệt hại bổ sung.\n4. Trong thời hạn 15 ngày, kể từ ngày nhận được đơn yêu cầu của người bị thiệt hại, Sở Y tế phải hoàn thành việc xác định nguyên nhân gây tai biến, mức độ tổn thương và thông báo bằng văn bản cho người yêu cầu đồng thời báo cáo Bộ Y tế.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
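The `model.similarity(...)` call above defaults to the cosine similarity listed in the model description. For clarity, here is a self-contained sketch of that computation on toy vectors, independent of the model itself:

```python
import numpy as np

def cosine_similarity_matrix(a, b):
    """Pairwise cosine similarity between the rows of a and b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T  # (len(a), len(b)) matrix of similarities in [-1, 1]

queries = np.array([[1.0, 0.0], [0.0, 1.0]])
docs = np.array([[1.0, 0.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(queries, docs)
# sims[0, 0] == 1.0 (identical direction); sims[1, 0] == 0.0 (orthogonal)
```

For semantic search over a document collection, you would encode the query and the documents with `model.encode(...)` and rank documents by the resulting similarity scores.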
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 120,210 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 25.08 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 206.98 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật<br>Trong phạm vi điều chỉnh của văn bản quy phạm pháp luật:<br>1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.<br>2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.<br>3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> |
| <code>Điều kiện để giáo viên trong cơ sở giáo dục mầm non, tiểu học ngoài công lập bị ảnh hưởng bởi Covid-19 được hưởng chính sách hỗ trợ là gì?</code> | <code>Điều kiện được hưởng<br>Cán bộ quản lý, giáo viên, nhân viên được hưởng chính sách khi bảo đảm các điều kiện sau:<br>1. Là người đang làm việc tại cơ sở giáo dục ngoài công lập trước khi cơ sở phải tạm dừng hoạt động theo yêu cầu của cơ quan nhà nước có thẩm quyền để phòng, chống dịch COVID-19 tính từ ngày 01 tháng 5 năm 2021 đến hết ngày 31 tháng 12 năm 2021.<br>2. Nghỉ việc không hưởng lương từ 01 tháng trở lên tính từ ngày 01 tháng 5 năm 2021 đến hết ngày 31 tháng 12 năm 2021.<br>3. Chưa được hưởng chính sách hỗ trợ đối với người lao động tạm hoãn hợp đồng lao động, nghỉ việc không hưởng lương theo quy định tại khoản 4, khoản 5, khoản 6 Mục II Nghị quyết số 68/NQ-CP ngày 01 tháng 7 năm 2021 của Chính phủ về một số chính sách hỗ trợ người lao động và người sử dụng lao động gặp khó khăn do đại dịch COVID-19, Nghị quyết số 126/NQ-CP ngày 08 tháng 10 năm 2021 của Chính phủ sửa đổi, bổ sung Nghị quyết số 68/NQ-CP ngày 01 tháng 7 năm 2021 của Chính phủ về một số chính sách hỗ trợ người lao động và người sử dụng lao động gặp khó khăn do đại dịch COVID-19 (sau đây gọi tắt là Nghị quyết số 68/NQ-CP) do không tham gia Bảo hiểm xã hội bắt buộc.<br>4. Có xác nhận làm việc tại cơ sở giáo dục ngoài công lập ít nhất hết năm học 2021 - 2022 theo kế hoạch năm học của địa phương, bao gồm cơ sở giáo dục ngoài công lập đã làm việc trước đây hoặc cơ sở giáo dục ngoài công lập khác trong trường hợp cơ sở giáo dục ngoài công lập trước đây làm việc không hoạt động trở lại.</code> |
| <code>Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?</code> | <code>Nguyên tắc áp dụng<br>1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một mức phụ cấp ưu đãi theo nghề cao nhất.<br>2. Công chức, viên chức đã hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày 22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số 64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
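MultipleNegativesRankingLoss treats, for each anchor, its paired positive as the correct answer and every other positive in the batch as a negative; the `scale` parameter (20.0 here) multiplies the cosine similarities before the softmax. An illustrative NumPy reimplementation of that objective (a sketch, not the library code):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives loss: row i of `positives` is the target for
    anchor i; all other rows act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                       # (batch, batch) scaled cosine sims
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # cross-entropy, labels on the diagonal

# Perfectly matched, well-separated pairs give a near-zero loss.
anchors = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = mnr_loss(anchors, anchors)
```

This is why the loss works well with (anchor, positive) pairs like the legal question/article pairs in this dataset: larger batches automatically supply more negatives.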
### Evaluation Dataset
#### train
* Dataset: train
* Size: 13,357 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 24.61 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 202.71 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Toà án cấp nào có thẩm quyền giải quyết việc đòi tài sản đã cho người khác vay theo hợp đồng cho vay?</code> | <code>"Điều 35. Thẩm quyền của Tòa án nhân dân cấp huyện<br>1. Tòa án nhân dân cấp huyện có thẩm quyền giải quyết theo thủ tục sơ thẩm những tranh chấp sau đây:<br>a) Tranh chấp về dân sự, hôn nhân và gia đình quy định tại Điều 26 và Điều 28 của Bộ luật này, trừ tranh chấp quy định tại khoản 7 Điều 26 của Bộ luật này;<br>b) Tranh chấp về kinh doanh, thương mại quy định tại khoản 1 Điều 30 của Bộ luật này;<br>c) Tranh chấp về lao động quy định tại Điều 32 của Bộ luật này.<br>2. Tòa án nhân dân cấp huyện có thẩm quyền giải quyết những yêu cầu sau đây:<br>a) Yêu cầu về dân sự quy định tại các khoản 1, 2, 3, 4, 6, 7, 8, 9 và 10 Điều 27 của Bộ luật này;<br>b) Yêu cầu về hôn nhân và gia đình quy định tại các khoản 1, 2, 3, 4, 5, 6, 7, 8, 10 và 11 Điều 29 của Bộ luật này;<br>c) Yêu cầu về kinh doanh, thương mại quy định tại khoản 1 và khoản 6 Điều 31 của Bộ luật này;<br>d) Yêu cầu về lao động quy định tại khoản 1 và khoản 5 Điều 33 của Bộ luật này.<br>3. Những tranh chấp, yêu cầu quy định tại khoản 1 và khoản 2 Điều này mà có đương sự hoặc tài sản ở nước ngoài hoặc cần phải ủy thác tư pháp cho cơ quan đại diện nước Cộng hòa xã hội chủ nghĩa Việt Nam ở nước ngoài, cho Tòa án, cơ quan có thẩm quyền của nước ngoài không thuộc thẩm quyền giải quyết của Tòa án nhân dân cấp huyện, trừ trường hợp quy định tại khoản 4 Điều này.<br>4. Tòa án nhân dân cấp huyện nơi cư trú của công dân Việt Nam hủy việc kết hôn trái pháp luật, giải quyết việc ly hôn, các tranh chấp về quyền và nghĩa vụ của vợ chồng, cha mẹ và con, về nhận cha, mẹ, con, nuôi con nuôi và giám hộ giữa công dân Việt Nam cư trú ở khu vực biên giới với công dân của nước láng giềng cùng cư trú ở khu vực biên giới với Việt Nam theo quy định của Bộ luật này và các quy định khác của pháp luật Việt Nam."</code> |
| <code>Những phiếu bầu nào được xem là không hợp lệ?</code> | <code>Phiếu bầu không hợp lệ<br>1. Những phiếu bầu sau đây là phiếu bầu không hợp lệ:<br>a) Phiếu không theo mẫu quy định do Tổ bầu cử phát ra;<br>b) Phiếu không có dấu của Tổ bầu cử;<br>c) Phiếu để số người được bầu nhiều hơn số lượng đại biểu được bầu đã ấn định cho đơn vị bầu cử;<br>d) Phiếu gạch xóa hết tên những người ứng cử;<br>đ) Phiếu ghi thêm tên người ngoài danh sách những người ứng cử hoặc phiếu có ghi thêm nội dung khác.<br>2. Trường hợp có phiếu bầu được cho là không hợp lệ thì Tổ trường Tổ bầu cử đưa ra để toàn Tổ xem xét, quyết định. Tổ bầu cử không được gạch xóa hoặc sửa các tên ghi trên phiếu bầu.</code> |
| <code>Đề nghị tạm đình chỉ chấp hành quyết định áp dụng biện pháp đưa vào trường giáo dưỡng cho học sinh cần đảm bảo nguyên tắc gì?</code> | <code>Nguyên tắc xét duyệt, đề nghị giảm thời hạn, tạm đình chỉ chấp hành quyết định, miễn chấp hành phần thời gian còn lại cho học sinh trường giáo dưỡng, trại viên cơ sở giáo dục bắt buộc<br>1. Tuân thủ quy định của pháp luật về thi hành biện pháp xử lý hành chính đưa vào trường giáo dưỡng, cơ sở giáo dục bắt buộc, quy định tại Thông tư này và quy định của pháp luật có liên quan.<br>2. Bảo đảm khách quan, công khai, minh bạch, đúng trình tự, thủ tục, thẩm quyền; tôn trọng và bảo vệ quyền, lợi ích hợp pháp của học sinh trường giáo dưỡng, trại viên cơ sở giáo dục bắt buộc.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | train loss |
|:------:|:-----:|:-------------:|:----------:|
| 0.0665 | 500 | 0.2809 | 0.2215 |
| 0.1331 | 1000 | 0.1307 | 0.1547 |
| 0.1996 | 1500 | 0.0978 | 0.1366 |
| 0.2662 | 2000 | 0.1054 | 0.1221 |
| 0.3327 | 2500 | 0.0824 | 0.1215 |
| 0.3993 | 3000 | 0.0776 | 0.1223 |
| 0.4658 | 3500 | 0.0797 | 0.1161 |
| 0.5323 | 4000 | 0.0774 | 0.1070 |
| 0.5989 | 4500 | 0.0661 | 0.1007 |
| 0.6654 | 5000 | 0.059 | 0.0945 |
| 0.7320 | 5500 | 0.0674 | 0.0889 |
| 0.7985 | 6000 | 0.0495 | 0.0783 |
| 0.8651 | 6500 | 0.0587 | 0.0822 |
| 0.9316 | 7000 | 0.0585 | 0.0868 |
| 0.9981 | 7500 | 0.0482 | 0.0733 |
| 1.0647 | 8000 | 0.0459 | 0.0786 |
| 1.1312 | 8500 | 0.0487 | 0.0691 |
| 1.1978 | 9000 | 0.0335 | 0.0719 |
| 1.2643 | 9500 | 0.0365 | 0.0711 |
| 1.3308 | 10000 | 0.0279 | 0.0668 |
| 1.3974 | 10500 | 0.0235 | 0.0675 |
| 1.4639 | 11000 | 0.0206 | 0.0599 |
| 1.5305 | 11500 | 0.0175 | 0.0653 |
| 1.5970 | 12000 | 0.0144 | 0.0664 |
| 1.6636 | 12500 | 0.0167 | 0.0598 |
| 1.7301 | 13000 | 0.0173 | 0.0583 |
| 1.7966 | 13500 | 0.0127 | 0.0540 |
| 1.8632 | 14000 | 0.0164 | 0.0595 |
| 1.9297 | 14500 | 0.014 | 0.0552 |
| 1.9963 | 15000 | 0.0114 | 0.0535 |
| 2.0628 | 15500 | 0.0097 | 0.0552 |
| 2.1294 | 16000 | 0.0111 | 0.0549 |
| 2.1959 | 16500 | 0.0076 | 0.0544 |
| 2.2624 | 17000 | 0.009 | 0.0589 |
| 2.3290 | 17500 | 0.0084 | 0.0543 |
| 2.3955 | 18000 | 0.0049 | 0.0520 |
| 2.4621 | 18500 | 0.0068 | 0.0505 |
| 2.5286 | 19000 | 0.0037 | 0.0489 |
| 2.5952 | 19500 | 0.0031 | 0.0461 |
| 2.6617 | 20000 | 0.0041 | 0.0496 |
| 2.7282 | 20500 | 0.0051 | 0.0464 |
| 2.7948 | 21000 | 0.0029 | 0.0475 |
| 2.8613 | 21500 | 0.0032 | 0.0458 |
| 2.9279 | 22000 | 0.003 | 0.0449 |
| 2.9944 | 22500 | 0.0035 | 0.0458 |
| 3.0610 | 23000 | 0.0033 | 0.0443 |
| 3.1275 | 23500 | 0.0032 | 0.0416 |
| 3.1940 | 24000 | 0.002 | 0.0449 |
| 3.2606 | 24500 | 0.0022 | 0.0447 |
| 3.3271 | 25000 | 0.0017 | 0.0430 |
| 3.3937 | 25500 | 0.002 | 0.0418 |
| 3.4602 | 26000 | 0.0019 | 0.0415 |
| 3.5268 | 26500 | 0.0008 | 0.0406 |
| 3.5933 | 27000 | 0.0007 | 0.0414 |
| 3.6598 | 27500 | 0.0008 | 0.0416 |
| 3.7264 | 28000 | 0.0011 | 0.0418 |
| 3.7929 | 28500 | 0.0006 | 0.0416 |
| 3.8595 | 29000 | 0.0005 | 0.0417 |
| 3.9260 | 29500 | 0.0007 | 0.0413 |
| 3.9925 | 30000 | 0.0008 | 0.0412 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.1
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"CHIA"
] |
mini1013/master_cate_el11 | mini1013 | text-classification | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | 2024-11-09T08:25:47Z | 2024-11-09T08:26:12+00:00 | 681 | 0 | ---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 필립스 퍼펙트케어 파워라이프 스팀 다리미 GC3929/68 실크부터 청바지까지 온도 조절 NO! 타지 않는 다림질 웰컴마켓2
- text: 보랄 UV 침구 청소기 침대 소파 진공 BR-V603BC 홈니즈 보랄 UV 침구 진공청소기 더웰
- text: NEW 필립스160 다이나글라이드 열판 건식 전기다리미 제이엘코
- text: DG-TOK 넥밴드 타입 디지털 생활무전기 나노Q3/ nano-Q3 블랙 컴피시스템 (comfy system)
- text: ALLNEW29000 파워메이드_그레이(GRAY) 나성민
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.7946213453148402
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 18 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'보만 대용량 1단 LED터치 핸디 스팀다리미 DB8640G 바이 마르코 (by MARCO)'</li><li>'[구매확정시 N포인트 5% 적립]필립스 핸디형 스팀다리미 7000시리즈 STH7030/18 베르수니코리아 주식회사'</li><li>'테팔 클래시컬 플러스 논슬립 초경량 건식다리미 FS3120K0 주식회사 코스니크'</li></ul> |
| 4 | <ul><li>'보풀제거기 세탁소용 FX-200 유선 아이프리 옷 제거 보푸라기 이불 FX-200 교체용 6중칼날 플라이비(FLY BEE)'</li><li>'[IFREE] 아이프리 6중날 보풀제거기 FX-814 주식회사 더루츠'</li><li>'NEW 아이프리 세탁소 보풀제거기 가디건 니트 옷 FX-714 (주)클릭나라'</li></ul> |
| 16 | <ul><li>'번개표 신형 넉다운 KKD-2200 세트 + 램프1개 추가 (총 램프 2개) KKD-2200 최신형+램프 1개 세트 (주)강남대흥'</li><li>'CAS 카스 360도 절루가 야생동물퇴치기 고라니 멧돼지 두더지 뱀 조류 퇴치기 CLAR-100 (주)지오쇼핑'</li><li>'[스마토] 벅킬러 CF-BK06(블랙) 캠핑/벌레퇴치기/해충/모기 포에버툴'</li></ul> |
| 14 | <ul><li>'Coms 전화선 꼬임방지 White/NT874/전화선정리 [KF] 주식회사 케이에프컴퍼니'</li><li>'전화선 꼬임방지 White/NT874/전화선정리 주식회사 지엔비커뮤니케이션즈'</li><li>'지엔텔 GS-872 2라인(국선) 사무용전화기/단축메모리(12개)/재다이얼/온후크/벨음 리버앤오빌 주식회사'</li></ul> |
| 11 | <ul><li>'지니큐 다용도 UV-C 살균 소독기 무선 자외선 살균기 스마트폰 마스크 UV-500ST 블랙 주식회사 한국전산오피스'</li><li>'텔로 UV 살균기 미니 자외선 소독기 휴대용 책 멸균기 UVCLED 변기 멸균 TUV10 (주)모닝아트'</li><li>"휴대용 마스크 살균소독기 유비세이프 C'Shell MLS-100 그레이 주식회사 유비세이프"</li></ul> |
| 3 | <ul><li>'[잘텍] JX-220 ,JX220 생활무전기 1대 풀세트 블랙 플림스텔레콤주식회사'</li><li>'민영 MYT-0033 MYT0033 고성능 생활무전기 정품이어마이크 3개 주식회사 오토플렉스'</li><li>'PD508/PD-508/무전기 용 경호용 이어마이크/리시버/국산/JM8000T 클럽데님'</li></ul> |
| 13 | <ul><li>'바이마르 바디 건조기 드라이어 VMK-21A30D030 전신 에어 샤워 냉온풍 빠르고 깔끔한 건조 터치 스마트센서 드라이기 자동 몸말리는기계 욕실 따뜻한 시원한 바람 임산부 집들이 바이마르 바디 건조기 VMK-21A30D030 팬텀파트너스'</li><li>'제크롤 바디 스킨 케어 에어샤워 전신건조기 JK-1WBD101 바디드라이어 (주)세중통상'</li><li>'대림도비도스 바디건조기 DLB-700W 국내생산 바디 드라이어 DLB-700W (주) 더수바스'</li></ul> |
| 15 | <ul><li>'다이슨 국내 정품 옴니 글라이드 컴플리트 (퍼플/니켈) 정품스텐딩거치대 포함 이루 이루 스토어'</li><li>'로보락 다이애드 브러쉬 거치대 세트 팅크웨어모바일 주식회사'</li><li>'JCP 에브리봇 EDGE 주식회사 제이씨엠컴퍼니'</li></ul> |
| 6 | <ul><li>'한국타올기산업 자동 손소독기계 HTM-620 자동 1개 (주)서브원'</li><li>'티에스 자바코리아 자동 손소독기 THS2500T 전기식 건전지식 겸용 아름상사몰'</li><li>'HDTOP 비접촉 휴대용 자동 디스펜서 스프레이 손소독기 HT-A600 YGPJ-NJ0042 윤 미디어'</li></ul> |
| 2 | <ul><li>'베스틴 지문방지 푸시풀 도어락 IDL-300 블랙헤어라인 2WAY 현관 아파트 도어락 블랙 유광 (IDL-300SWNK) 키넷'</li><li>'셀프시공 삼성 IOT 푸시풀 디지털도어락 SHP-DR700+보강판 현관문 현관문도어락 하우스플러스(주)'</li><li>'무료설치 에버넷 샷시문도어락 상가번호키 패션문도어록 가마찌도어샤시 EN250-N A지역무료설치 진흥피닉스(주)'</li></ul> |
| 9 | <ul><li>'[하이마트] LG 스타일러 오브제컬렉션 S3BOF [3벌/미스트베이지] 롯데하이마트(주)'</li><li>'엘지 트롬 스타일러 린넨 블랙 S3BF 의류관리 코스트코 갱이점빵'</li><li>'[삼성] 에어드레서 상의 5~9 벌 + 하의 1 벌,코타차콜 DF24CG5100HR 배송은 주문 후 2~4주이상 소요 주식회사 위링크'</li></ul> |
| 5 | <ul><li>'신일 스텐 탈수기 SDM-T77H 가정용 수영장 캠핑장 펜션 콜드림'</li><li>'삼성전자 아가사랑 WA30T2101EE 동의 선우에이치앤비(SUNWOO H&B)'</li><li>'한일전기 W-110 미니 짤순이 다용도 음식물 야채 오이지 두부 탈수기 1kg 탈수기 짤순이(신형) (주)씨앤제이글로벌'</li></ul> |
| 12 | <ul><li>'싱거 8280(단품+수강증+보증서1년)+ 프리모션노루발+노루발3종+말아박이 랍바세트 태양에스엠주식회사'</li><li>'부라더미싱 이노비스A16 (Innovis-A16) NV-A16 부라더미싱'</li><li>'부라더미싱 이노비스 A80, innovis a80, 브라더미싱 팀에이에이 Team AA'</li></ul> |
| 7 | <ul><li>'LED스탠드 브로드윙X (LSP-9700) 베이스 화이트 멜라토닌 학습용 학생 스탠드 MinSellAmount (주)프리즘'</li><li>'듀플렉스 DP-910LS 시력보호 면조명 LED 스탠드 책상 학생용 코지인터내셔널'</li><li>'LED스탠드 책상 학생 독서등 학습용 스텐드 NXL-3000 /스마일배송 오트빌'</li></ul> |
| 0 | <ul><li>'스마트소닉 1000 음파칫솔 단품 [화이트] + 칫솔모 1팩 블루 에스에이치 인터내셔날'</li><li>'프리쉐 PA-TS3000 골프_위탁 업체로 공급사나 배달업체에 개인정보 동의 도라에몽상회'</li><li>'알로코리아 덴픽션 바람건조 고온히팅 UV-C 무선 휴대용 칫솔살균기 ATS1G 단품 1+1 세트_크림+블루 알로이비즈 주식회사'</li></ul> |
| 17 | <ul><li>'[아메리칸스탠다드] 핸드 드라이어 삽입형 FG8901(고속형), FG8984(일반형) 화장실 상업용 편의품 FG8901(고속형) 대일도기사 주식회사'</li><li>'대림 도비도스 DX-1000,DX1000 핸드드라이어 (아이보리) 준트레이딩(JUN Trading)'</li><li>'TS자바 핸드드라이어 TH350ST 스테인레스 핸드드라이기 TSJAVA 화장실 강풍 프럼바디'</li></ul> |
| 8 | <ul><li>'쿠쿠 버블클렌저 연수기 CWS-AO201W 주식회사 제이홀딩스'</li><li>'프렐 연수기 마이크로버블 클렌저 녹물 염소 제거 버블수기 무광 화이트 그레이 투톤색상 (주)로보터스'</li><li>'[렌탈] [셀프형] 현대큐밍 샤워기필터 연수기 더클린 워터케어 (HQS20100W0) 실버 (주)현대렌탈케어'</li></ul> |
| 10 | <ul><li>'[특별 ] 세라젬 밸런스 알칼리 이온수 생성기 의료기기 (주) 세라젬'</li><li>'뉴랜드올네이처 알칼리이온수기 셀터치프리미엄 뉴랜드올네이처비전'</li><li>'뉴랜드올네이처 알칼리이온수기 셀터치필터 복합중공사(UF Membrane) 뉴랜드올네이처비전'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.7946 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_el11")
# Run inference
preds = model("ALLNEW29000 파워메이드_그레이(GRAY) 나성민")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.3700 | 32 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 50 |
| 1 | 50 |
| 2 | 50 |
| 3 | 50 |
| 4 | 50 |
| 5 | 50 |
| 6 | 50 |
| 7 | 50 |
| 8 | 5 |
| 9 | 50 |
| 10 | 3 |
| 11 | 50 |
| 12 | 50 |
| 13 | 50 |
| 14 | 50 |
| 15 | 50 |
| 16 | 50 |
| 17 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0079 | 1 | 0.4968 | - |
| 0.3937 | 50 | 0.3206 | - |
| 0.7874 | 100 | 0.1406 | - |
| 1.1811 | 150 | 0.0735 | - |
| 1.5748 | 200 | 0.0518 | - |
| 1.9685 | 250 | 0.0242 | - |
| 2.3622 | 300 | 0.006 | - |
| 2.7559 | 350 | 0.0102 | - |
| 3.1496 | 400 | 0.0088 | - |
| 3.5433 | 450 | 0.0082 | - |
| 3.9370 | 500 | 0.0062 | - |
| 4.3307 | 550 | 0.012 | - |
| 4.7244 | 600 | 0.0021 | - |
| 5.1181 | 650 | 0.002 | - |
| 5.5118 | 700 | 0.0049 | - |
| 5.9055 | 750 | 0.0043 | - |
| 6.2992 | 800 | 0.006 | - |
| 6.6929 | 850 | 0.0002 | - |
| 7.0866 | 900 | 0.0004 | - |
| 7.4803 | 950 | 0.0002 | - |
| 7.8740 | 1000 | 0.0002 | - |
| 8.2677 | 1050 | 0.0002 | - |
| 8.6614 | 1100 | 0.0001 | - |
| 9.0551 | 1150 | 0.0001 | - |
| 9.4488 | 1200 | 0.0002 | - |
| 9.8425 | 1250 | 0.0002 | - |
| 10.2362 | 1300 | 0.0001 | - |
| 10.6299 | 1350 | 0.0001 | - |
| 11.0236 | 1400 | 0.0001 | - |
| 11.4173 | 1450 | 0.0001 | - |
| 11.8110 | 1500 | 0.0001 | - |
| 12.2047 | 1550 | 0.0001 | - |
| 12.5984 | 1600 | 0.0001 | - |
| 12.9921 | 1650 | 0.0001 | - |
| 13.3858 | 1700 | 0.0001 | - |
| 13.7795 | 1750 | 0.0001 | - |
| 14.1732 | 1800 | 0.0001 | - |
| 14.5669 | 1850 | 0.0001 | - |
| 14.9606 | 1900 | 0.0001 | - |
| 15.3543 | 1950 | 0.0001 | - |
| 15.7480 | 2000 | 0.0001 | - |
| 16.1417 | 2050 | 0.0001 | - |
| 16.5354 | 2100 | 0.0001 | - |
| 16.9291 | 2150 | 0.0001 | - |
| 17.3228 | 2200 | 0.0001 | - |
| 17.7165 | 2250 | 0.0001 | - |
| 18.1102 | 2300 | 0.0001 | - |
| 18.5039 | 2350 | 0.0001 | - |
| 18.8976 | 2400 | 0.0001 | - |
| 19.2913 | 2450 | 0.0001 | - |
| 19.6850 | 2500 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"CAS"
] |
mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"biology",
"medical",
"healthcare",
"en",
"dataset:HPAI-BSC/Aloe-Beta-General-Collection",
"dataset:HPAI-BSC/chain-of-diagnosis",
"dataset:HPAI-BSC/MedS-Ins",
"dataset:HPAI-BSC/ultramedical",
"dataset:HPAI-BSC/pubmedqa-cot-llama31",
"dataset:HPAI-BSC/medqa-cot-llama31",
"dataset:HPAI-BSC/medmcqa-cot-llama31",
"dataset:HPAI-BSC/headqa-cot-llama31",
"dataset:HPAI-BSC/MMLU-medical-cot-llama31",
"dataset:HPAI-BSC/Polymed-QA",
"base_model:HPAI-BSC/Qwen2.5-Aloe-Beta-7B",
"base_model:quantized:HPAI-BSC/Qwen2.5-Aloe-Beta-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2024-12-11T21:07:52Z | 2024-12-11T22:04:29+00:00 | 678 | 1 | ---
base_model: HPAI-BSC/Qwen2.5-Aloe-Beta-7B
datasets:
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/chain-of-diagnosis
- HPAI-BSC/MedS-Ins
- HPAI-BSC/ultramedical
- HPAI-BSC/pubmedqa-cot-llama31
- HPAI-BSC/medqa-cot-llama31
- HPAI-BSC/medmcqa-cot-llama31
- HPAI-BSC/headqa-cot-llama31
- HPAI-BSC/MMLU-medical-cot-llama31
- HPAI-BSC/Polymed-QA
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/Aloe-Beta-General-Collection
language:
- en
library_name: transformers
license: apache-2.0
tags:
- biology
- medical
- healthcare
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Aloe-Beta-7B-i1-GGUF/resolve/main/Qwen2.5-Aloe-Beta-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
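A quick way to compare these files is average bits per weight, derived from the sizes above. The sketch below assumes the model has roughly 7.6 billion parameters (a round figure for a Qwen2.5-7B-class model; the exact count differs slightly).

```python
def bits_per_weight(size_gb, n_params=7.6e9):
    # File size in GB -> average bits stored per model weight.
    return size_gb * 1e9 * 8 / n_params

# e.g. the i1-Q4_K_M file above comes out to about 5 bits per weight,
# while i1-IQ1_S is closer to 2 -- which is why the smallest quants
# are labeled "for the desperate".
q4_km = bits_per_weight(4.8)
iq1_s = bits_per_weight(2.0)
```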
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"MEDQA",
"PUBMEDQA"
] |
hooman650/MedQwen3B-Reasoner | hooman650 | text-generation | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"reinforcement-learning",
"unsloth",
"trl",
"grpo",
"text-generation",
"conversational",
"en",
"dataset:qiaojin/PubMedQA",
"dataset:openai/gsm8k",
"dataset:yesilhealth/Health_Benchmarks",
"doi:10.57967/hf/4415",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2025-02-08T00:07:48Z | 2025-02-11T05:53:51+00:00 | 666 | 11 | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
datasets:
- qiaojin/PubMedQA
- openai/gsm8k
- yesilhealth/Health_Benchmarks
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
- reinforcement-learning
- transformers
- unsloth
- qwen2
- trl
- grpo
---
# MedQwen3B-Reasoner: Medical Domain Reasoning with Mathematics-Enhanced Training
MedQwen3B-Reasoner is a specialized variant of Qwen2.5-3B-Instruct, fine-tuned using `GRPO` to excel at medical domain reasoning while maintaining strong mathematical problem-solving capabilities. The model demonstrates enhanced reasoning abilities and can express uncertainty when appropriate.

## Important
If you use `ollama`, `llama-cpp`, `vllm`, or any other inference engine, you need to set the system prompt as below, as the model performs best with it:
```
'\nRespond in the following format:\n<reasoning>\n...\n</reasoning>\n<answer>\n...\n</answer>\n'
```
## Want to train your own?
Read my article [here](https://medium.com/@hooman_66365/build-your-own-medical-mini-deepseek-r1-with-reinforcement-learning-508509cd7d83) or follow the [notebook](https://github.com/hooman650/MedQwenReasoner).
## Key Features
- Medical domain expertise combined with mathematical reasoning capabilities
- Ability to express uncertainty with "maybe" responses
- Structured reasoning outputs with clear step-by-step explanations
- Compact size (3B parameters) while maintaining strong performance
- Trained using GRPO (Group Relative Policy Optimization) for 483 steps
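For intuition, GRPO optimizes reward functions over groups of sampled completions. The sketch below is a hypothetical example of two such rewards — a format check for the `<reasoning>`/`<answer>` template and an exact-match answer check — not the exact reward functions used in this training run.

```python
import re

# Match the <reasoning>/<answer> template the model is trained to emit.
PATTERN = re.compile(
    r"<reasoning>\s*.+?\s*</reasoning>\s*<answer>\s*(.+?)\s*</answer>",
    re.DOTALL,
)

def format_reward(completion: str) -> float:
    # Reward completions that follow the template at all.
    return 1.0 if PATTERN.search(completion) else 0.0

def answer_reward(completion: str, gold: str) -> float:
    # Reward an extracted answer that matches the reference exactly.
    m = PATTERN.search(completion)
    return 2.0 if m and m.group(1).strip() == gold else 0.0

good = (
    "<reasoning>\nHigh BMP2/BMP7 expression correlates with recurrence time."
    "\n</reasoning>\n<answer>\nyes\n</answer>"
)
```

In a GRPO setup, several completions per prompt are scored with rewards like these, and the policy is updated toward completions that score above the group average.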
## Training Data
The model was trained using a carefully curated mix of datasets:
- 70% Medical domain: [PubMedQA](https://huggingface.co/datasets/qiaojin/PubMedQA)
- 30% Mixed reasoning:
- [GSM8K](https://huggingface.co/datasets/openai/gsm8k) (Mathematical reasoning)
- [Health Benchmarks](https://huggingface.co/datasets/yesilhealth/Health_Benchmarks)
## Examples
Here are some examples of the model's reasoning capabilities across different domains:
### Mathematical Reasoning
```
Q: Agatha has some money to spend on a new bike. She spends $15 on the frame, and $25 on the front wheel. If she has $20 left to spend on a seat and handlebar tape, how much money, in dollars, did she have at first?
Ground Truth Answer:
60
MedQwen3B Answer (with reasoning it came up with)
<reasoning>
To determine how much money Agatha initially had, we need to add up the amount she spent on the frame, the front wheel, and the remaining money she has left to spend on a seat and handlebar tape.
The total amount spent on the frame and the front wheel is:
$15 + $25 = $40
The remaining money she has left is $20.
Therefore, the initial amount of money Agatha had is the sum of the money she has spent and the money she has left:
$40 + $20 = $60
</reasoning>
<answer>
Agatha initially had $60.
</answer>
```
### Medical Research Analysis
```
Given the scientific context below:
"""
Urinary bladder cancer patients who have undergone transurethral resection of bladder tumor (TURBT) are at risk of recurrence. This study aims to correlate the level of bone morphogenetic protein (BMP) expression with urothelial carcinoma invasiveness, TNM stage and time to recurrence after TURBT.
In 33 specimens of healthy transitional epithelium and 42 of urothelial carcinoma, BMP2, BMP4 and BMP7 expression was determined by real-time polymerase chain reaction. Patients who underwent TURBT were followed up for 1 year.
BMP2 and BMP7 were downregulated in infiltrating urothelial carcinoma, the relative expression being 0.76 (p = 0.04) and 0.28 (p = 0.025) respectively, while BMP4 was downregulated in non-invasive tumors. High expression of BMP2 and BMP7 correlated with prolonged time to recurrence (log-rank: p = 0.01 and p = 0.03 respectively).
"""
Answer the following question:
Does expression of bone morphogenetic protein-2 and -7 in urinary bladder cancer predict time to tumor recurrence? with 'yes', 'no' or 'maybe'. You need to carefully review the context and reason before answering.
Ground Truth Answer:
yes
MedQwen3b Answer:
<reasoning>
The study indicates that high expression of BMP2 and BMP7 correlated with prolonged time to recurrence (log-rank: p = 0.01 and p = 0.03 respectively). This means that the gene expression levels of BMP2 and BMP7 were significantly associated with the time to recurrence, with higher expression leading to a longer time to recurrence. Therefore, based on the statistical significance (p-values < 0.05) it is reasonable to conclude that the expression of bone morphogenetic protein-2 and -7 in urinary bladder cancer patients predicts time to tumor recurrence.
</reasoning>
<answer>
yes
</answer>
```
### Clinical Decision Making
```
Answer the following question:
What is a recommended strategy for pediatricians to encourage vaccination among hesitant parents? A:Allowing delays for certain vaccines while ensuring the most important ones are administered on schedule. B:Providing evidence of immunization benefits in a judgmental manner. C:Advocating for alternative immunization schedules to accommodate parental concerns. D:Discharging patients from their practice if vaccinations are refused.
With 'A', 'B', 'C' or 'D'. You need to carefully review the context and reason before answering.
Ground Truth Answer:
A
MedQwen3B Answer:
<reasoning>
The strategy recommended by pediatricians to encourage vaccination among hesitant parents should not lead to the abandonment or dismissal of families as it can undermine the critical role of pediatricians in protecting children's health. Option D can harm the trust between the pediatrician and the parent, potentially alienating them. Options B and C might create confusion or threaten parents' trust in the recommended vaccination schedule. Option A, on the other hand, provides an evidence-based compromise that allows some flexibility while ensuring coverage for essential vaccines, which can be respected by hesitant parents as it balances their concerns with public health needs.
This approach respects the autonomy of parents while providing relevant information and reassurance, which can help in addressing their hesitations, making Option A the most suitable strategy.
</reasoning>
<answer>
A
</answer>
```
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "hooman650/MedQwen3B-Reasoner"
# Initialize model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Prepare prompt
prompt = "What is the relationship between BMI and cardiovascular disease risk?"
messages = [
{"role": "system", "content": "\nRespond in the following format:\n<reasoning>\n...\n</reasoning>\n<answer>\n...\n</answer>\n"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Generate response
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Model Details
- Base Model: [unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit)
- Training Steps: 483
- Library: Unsloth
- License: Apache 2.0
## Citation
If you use this model in your research, please cite:
```bibtex
@misc {hooman_sedghamiz_2025,
author = { {Hooman Sedghamiz} },
title = { MedQwen3B-Reasoner (Revision 5dbc982) },
year = 2025,
url = { https://huggingface.co/hooman650/MedQwen3B-Reasoner },
doi = { 10.57967/hf/4415 },
publisher = { Hugging Face }
}
```
## License
This model is licensed under Apache 2.0. | [
"PUBMEDQA"
] |
PlanTL-GOB-ES/bsc-bio-ehr-es | PlanTL-GOB-ES | fill-mask | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"biomedical",
"clinical",
"ehr",
"spanish",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-04-08T13:15:59Z | 2022-11-15T16:34:16+00:00 | 665 | 12 | ---
language:
- es
license: apache-2.0
metrics:
- ppl
tags:
- biomedical
- clinical
- ehr
- spanish
widget:
- text: El único antecedente personal a reseñar era la <mask> arterial.
- text: Las radiologías óseas de cuerpo entero no detectan alteraciones <mask>, ni
alteraciones vertebrales.
- text: En el <mask> toraco-abdómino-pélvico no se encontraron hallazgos patológicos
de interés.
---
# Biomedical-clinical language model for Spanish
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
## Intended uses and limitations
The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification.
## How to use
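A minimal example with the 🤗 `transformers` fill-mask pipeline (the example sentence is taken from the widget above):

```python
from transformers import pipeline

# Load the model through the fill-mask pipeline
unmasker = pipeline("fill-mask", model="PlanTL-GOB-ES/bsc-bio-ehr-es")

# Predict the masked token in a clinical sentence
predictions = unmasker("El único antecedente personal a reseñar era la <mask> arterial.")
for pred in predictions:
    print(f"{pred['token_str']}\t{pred['score']:.4f}")
```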
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Tokenization and model pretraining
This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a
**biomedical-clinical** corpus in Spanish collected from several sources (see next section).
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2)
used in the original [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of masked language model training at the subword level, following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using the Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences.
### Training corpora and preprocessing
The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are:
- data parsing in different formats
- sentence splitting
- language detection
- filtering of ill-formed sentences
- deduplication of repetitive contents
- keeping the original document boundaries
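A minimal sketch of the exact-deduplication step (illustrative only; the actual cleaning pipeline is described in the official repository):

```python
import hashlib

def deduplicate(documents):
    """Keep the first occurrence of each document, dropping verbatim repeats."""
    seen = set()
    unique = []
    for doc in documents:
        # Hash the normalized text so identical documents collapse to one entry
        digest = hashlib.sha256(doc.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = [
    "El paciente presenta fiebre.",
    "El paciente presenta fiebre.",
    "Sin hallazgos patológicos.",
]
print(deduplicate(docs))  # keeps the 2 unique documents
```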
Then, the biomedical corpora are concatenated and a further global deduplication is applied among them.
Eventually, the clinical corpus is concatenated to the cleaned biomedical corpus resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora:
| Name | No. tokens | Description |
|-----------------------------------------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Medical crawler](https://zenodo.org/record/4561970) | 903,558,13 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. |
| Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. |
| EHR documents | 95,267,20 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. |
| [Scielo](https://zenodo.org/record/2541681#.YlP1DshBwio) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. |
| [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. |
| Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. |
| Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". |
| [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. |
| [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources is aggregated from the MedlinePlus source. |
| PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. |
## Evaluation
The model has been fine-tuned on three Named Entity Recognition (NER) tasks using three clinical NER datasets:
- [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/).
- [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ).
- ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.
We addressed the NER task as a token classification problem using a standard linear layer along with the BIO tagging schema. We compared our models with the general-domain Spanish [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne), the general-domain multilingual model that supports Spanish [mBERT](https://huggingface.co/bert-base-multilingual-cased), the domain-specific English model [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2), and three domain-specific models based on continual pre-training, [mBERT-Galén](https://ieeexplore.ieee.org/document/9430499), [XLM-R-Galén](https://ieeexplore.ieee.org/document/9430499) and [BETO-Galén](https://ieeexplore.ieee.org/document/9430499).
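The BIO tagging schema mentioned above assigns each token a `B-` (beginning), `I-` (inside) or `O` (outside) label. A minimal sketch, with hypothetical tokens and entity spans for illustration only:

```python
def bio_tags(tokens, spans):
    """spans: list of (start_idx, end_idx_exclusive, label) over token indices."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"          # first token of the entity
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # remaining tokens inside the entity
    return tags

tokens = ["El", "paciente", "recibió", "paracetamol", "oral"]
print(bio_tags(tokens, [(3, 5, "DRUG")]))
# ['O', 'O', 'O', 'B-DRUG', 'I-DRUG']
```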
The table below shows the F1 scores obtained:
| Tasks/Models | bsc-bio-ehr-es | XLM-R-Galén | BETO-Galén | mBERT-Galén | mBERT | BioBERT | roberta-base-bne |
|--------------|----------------|--------------------|--------------|--------------|--------------|--------------|------------------|
| PharmaCoNER | **0.8913** | 0.8754 | 0.8537 | 0.8594 | 0.8671 | 0.8545 | 0.8474 |
| CANTEMIST | **0.8340** | 0.8078 | 0.8153 | 0.8168 | 0.8116 | 0.8070 | 0.7875 |
| ICTUSnet | **0.8756** | 0.8716 | 0.8498 | 0.8509 | 0.8631 | 0.8521 | 0.8677 |
The fine-tuning scripts can be found in the official GitHub [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to <[email protected]>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use these models, please cite our work:
```bibtex
@inproceedings{carrino-etal-2022-pretrained,
title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish",
author = "Carrino, Casimiro Pio and
Llop, Joan and
P{\`a}mies, Marc and
Guti{\'e}rrez-Fandi{\~n}o, Asier and
Armengol-Estap{\'e}, Jordi and
Silveira-Ocampo, Joaqu{\'\i}n and
Valencia, Alfonso and
Gonzalez-Agirre, Aitor and
Villegas, Marta",
booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.bionlp-1.19",
doi = "10.18653/v1/2022.bionlp-1.19",
pages = "193--199",
abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
</details>
| [
"CANTEMIST",
"PHARMACONER",
"SCIELO"
] |
AdaptLLM/finance-LLM | AdaptLLM | text-generation | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"finance",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:GAIR/lima",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"arxiv:2309.09530",
"arxiv:2411.19930",
"arxiv:2406.14491",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-18T13:45:13Z | 2024-12-02T06:26:32+00:00 | 665 | 118 | ---
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- finance
---
# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024)
This repo contains the domain-specific base model developed from **LLaMA-1-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### [2024/11/29] 🤗 Introduce the multimodal version of AdaptLLM at [AdaMLLM](https://huggingface.co/papers/2411.19930), for adapting MLLMs to domains 🤗
**************************** **Updates** ****************************
* 2024/11/29: Released [AdaMLLM](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains) for adapting MLLMs to domains
* 2024/9/20: Our [research paper for Instruction-Pretrain](https://huggingface.co/papers/2406.14491) has been accepted by EMNLP 2024
* 2024/8/29: Updated [guidelines](https://huggingface.co/datasets/AdaptLLM/finance-tasks) on evaluating any 🤗Huggingface models on the domain-specific tasks
* 2024/6/22: Released the [benchmarking code](https://github.com/microsoft/LMOps/tree/main/adaptllm)
* 2024/6/21: Released the general version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain)
* 2024/4/2: Released the [raw data splits (train and test)](https://huggingface.co/datasets/AdaptLLM/ConvFinQA) of all the evaluation datasets
* 2024/1/16: Our [research paper for AdaptLLM](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B
* 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B
## 1. Domain-Specific Models
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared to other domain-specific LLMs is shown below:
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
### LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
For example, to chat with the finance base model (🤗we highly recommend switching to the [chat model](https://huggingface.co/AdaptLLM/finance-chat) for better response quality):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-LLM")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-LLM", use_fast=False)
# Put your input here:
user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
MMM Chicago Stock Exchange, Inc.
1.500% Notes due 2026 MMM26 New York Stock Exchange
1.750% Notes due 2030 MMM30 New York Stock Exchange
1.500% Notes due 2031 MMM31 New York Stock Exchange
Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''
# Simply use your input as the prompt for base models
prompt = user_input
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=2048)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(pred)
```
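The LLaMA-2-Chat data format referenced above can be sketched roughly as follows. This is an assumption based on the linked prompting blog post; in practice `tokenizer.apply_chat_template` handles this, and the exact special-token placement may differ:

```python
def llama2_chat_prompt(system, user):
    """Build a single-turn LLaMA-2-Chat style prompt (sketch; BOS is added by the tokenizer)."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_chat_prompt(
    "You are a helpful financial assistant.",
    "Which debt securities are registered under 3M's name?",
)
print(prompt)
```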
### LLaMA-3-8B (💡New!)
In our recent research on [Instruction-Pretrain](https://huggingface.co/papers/2406.14491), we developed a context-based instruction synthesizer to augment the raw corpora with instruction-response pairs, **enabling Llama3-8B to be comparable to or even outperform Llama3-70B**: [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B), [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B).
## 2. Domain-Specific Tasks
### Pre-templatized Testing Splits
To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions of the test split of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
Note: those filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required by chat models.
### Evaluating Any Huggingface LMs on Domain-Specific Tasks (💡New!)
You can use the following script to reproduce our results and evaluate any other Huggingface models on domain-specific tasks. Note that the script is NOT applicable to models that require specific prompt templates (e.g., Llama2-chat, Llama3-Instruct).
1). **Set Up Dependencies**
```bash
git clone https://github.com/microsoft/LMOps
cd LMOps/adaptllm
pip install -r requirements.txt
```
2). **Evaluate the Model**
```bash
# Select the domain from ['biomedicine', 'finance', 'law']
DOMAIN='finance'
# Specify any Huggingface model name (Not applicable to chat models)
MODEL='AdaptLLM/finance-LLM'
# Model parallelization:
# - Set MODEL_PARALLEL=False if the model fits on a single GPU.
# We observe that LMs smaller than 10B always meet this requirement.
# - Set MODEL_PARALLEL=True if the model is too large and encounters OOM on a single GPU.
MODEL_PARALLEL=False
# Choose the number of GPUs from [1, 2, 4, 8]
N_GPU=1
# Whether to add a BOS token at the beginning of the prompt input:
# - Set to False for AdaptLLM.
# - Set to True for instruction-pretrain models.
# If unsure, we recommend setting it to False, as this is suitable for most LMs.
add_bos_token=False
# Run the evaluation script
bash scripts/inference.sh ${DOMAIN} ${MODEL} ${add_bos_token} ${MODEL_PARALLEL} ${N_GPU}
```
### Raw Datasets
We have also uploaded the raw training and testing splits to facilitate fine-tuning and other uses: [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt), [RCT](https://huggingface.co/datasets/AdaptLLM/RCT), [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA), [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA), [Headline](https://huggingface.co/datasets/AdaptLLM/Headline), [NER](https://huggingface.co/datasets/AdaptLLM/NER), [FPB](https://huggingface.co/datasets/AdaptLLM/FPB)
### Domain Knowledge Probing
Our pre-processed knowledge probing datasets are available at: [med_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/med_knowledge_prob) and [law_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/law_knowledge_prob)
## Citation
If you find our work helpful, please cite us:
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
``` | [
"CHEMPROT"
] |
mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF | mradermacher | null | [
"transformers",
"gguf",
"en",
"dataset:CAS-SIAT-ConsistencyAI/CoEvol",
"base_model:CAS-SIAT-ConsistencyAI/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT",
"base_model:quantized:CAS-SIAT-ConsistencyAI/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-12-29T22:29:47Z | 2024-12-30T11:25:55+00:00 | 662 | 0 | ---
base_model: CAS-SIAT-ConsistencyAI/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT
datasets:
- CAS-SIAT-ConsistencyAI/CoEvol
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/CAS-SIAT-ConsistencyAI/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT-GGUF/resolve/main/CoEvol-ChatGPT_Mistral-7B-v0.1_SFT.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| [
"CAS"
] |
hunflair/biosyn-sapbert-bc5cdr-disease | hunflair | null | [
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | 2024-01-29T11:16:21Z | 2024-01-29T15:04:29+00:00 | 654 | 0 | ---
tags:
- flair
- entity-mention-linker
---
## biosyn-sapbert-bc5cdr-disease
Biomedical Entity Mention Linking for disease:
- Model: [dmis-lab/biosyn-sapbert-bc5cdr-disease](https://huggingface.co/dmis-lab/biosyn-sapbert-bc5cdr-disease)
- Dictionary: [CTD Diseases](https://ctdbase.org/help/diseaseDetailHelp.jsp) (See [License](https://ctdbase.org/about/legal.jsp))
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
from flair.tokenization import SciSpacyTokenizer
sentence = Sentence(
"The mutation in the ABCD1 gene causes X-linked adrenoleukodystrophy, "
"a neurodegenerative disease, which is exacerbated by exposure to high "
"levels of mercury in dolphin populations.",
use_tokenizer=SciSpacyTokenizer()
)
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair-disease")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("disease-linker")
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(tagger.label_type):
for link in span.get_labels(linker.label_type):
print(f"{span.text} -> {link.value}")
```
As an alternative to downloading the already precomputed model (which requires substantial storage), you can build the linker and compute the dictionary embeddings yourself:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-sapbert-bc5cdr-disease", dictionary_name_or_path="ctd-diseases", hybrid_search=True)
```
This will reduce the download requirements, at the cost of computation.
| [
"BC5CDR"
] |
KappaNeuro/ukiyo-e-art | KappaNeuro | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"art",
"tokyo",
"style",
"painting",
"ukiyo-e art",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | 2023-09-14T10:56:41Z | 2023-09-14T10:56:46+00:00 | 653 | 10 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- art
- tokyo
- style
- painting
- ukiyo-e art
instance_prompt: Ukiyo-e Art
widget:
- text: Ukiyo-e Art - bearded man surfing woodblock print style of hokusai fine art
style of kanagawa painting /relax
- text: Ukiyo-e Art - An image representing 'The History and Traditions of Japanese
Cuisine'. Japanese woodblock prints Style. The image should be imbued with an
atmospheric quality that evokes the timeless traditions and the deep history that
inform Japanese culinary arts. The dominant color of the design should be Viva
Magenta (Pantone 18-1750). Please ensure the design captures the essence of Japan's
gastronomic legacy in a sophisticated and evocative way
- text: Ukiyo-e Art - under the deep blue sea, a japanese samurai, aisolated nature
medicine and a whale
- text: Ukiyo-e Art - Using your artistic prowess and digital tools, craft a composition
that embodies the charm and aesthetics of Ukiyo-e or Emakimono. Imagine a scene
from Edo, the bustling capital city of Japan during the 17th to 19th centuries,
and depict it with the exquisite detail, perspective, and delicate coloring reminiscent
of traditional Japanese woodblock prints or hand-painted scrolls.
- text: Ukiyo-e Art - young adult tiefling woman with blue skin, wearing traditional
Japanese fine clothes, sitting at a desk in a candlelit stone library background,
muted colors, in the style of ukiyo-e woodblock print style
- text: Ukiyo-e Art - 19th century japanese woodblock print artstyle hiroshige portrait
of greek goddess artemis huntress wildlife fierce woman with red hair
- text: Ukiyo-e Art - ukiyo-e Japanese isometric woodblock painting of architecture
with tatami, screens, beams, and arches with women and animals and sky
- text: Ukiyo-e Art - Geisha Portrait, Engraving, Japanese Fashion, Umbrella, Hokusai's
Panoramic View of Fuji, 1830s, in the Style of Utagawa Kuniyoshi
- text: Ukiyo-e Art - Portrait noble faced ronin samurai black hair wearing armor
looking towards the horizon waves in the backround Illustrated
---
# Ukiyo-e Art ([CivitAI](https://civitai.com/models/107093))

> Ukiyo-e Art - bearded man surfing woodblock print style of hokusai fine art style of kanagawa painting /relax
Ukiyo-e is a traditional Japanese art form that emerged during the Edo period (17th to 19th centuries). The term "ukiyo-e" translates to "pictures of the floating world" and refers to a genre of woodblock prints and paintings that depicted scenes from the everyday lives of common people, landscapes, historical events, and kabuki theater.

Ukiyo-e artists were skilled in the techniques of woodblock printing, which involved carving intricate designs on wooden blocks and using them to create multiple color impressions on paper. The prints were often mass-produced, making them affordable and accessible to a wide audience.

The subjects of ukiyo-e prints varied widely, but they commonly featured elegant courtesans, actors, landscapes, and scenes from daily life in Edo (now Tokyo). The prints captured fleeting moments and celebrated the transient nature of life, embodying the philosophy of "ukiyo" or the transient world.

Prominent ukiyo-e artists include Kitagawa Utamaro, Katsushika Hokusai, and Utagawa Hiroshige, whose works have become iconic representations of Japanese art. These artists displayed mastery in composition, color, and capturing intricate details, creating images that are both visually stunning and culturally significant.

Ukiyo-e art played a crucial role in shaping Western art movements such as Impressionism and Post-Impressionism, influencing artists like Vincent van Gogh and Claude Monet. Today, ukiyo-e prints are highly sought after by collectors and continue to be admired for their beauty, craftsmanship, and their ability to provide insights into Japan's rich cultural heritage.
## Image examples for the model:

> Ukiyo-e Art - An image representing 'The History and Traditions of Japanese Cuisine'. Japanese woodblock prints Style. The image should be imbued with an atmospheric quality that evokes the timeless traditions and the deep history that inform Japanese culinary arts. The dominant color of the design should be Viva Magenta (Pantone 18-1750). Please ensure the design captures the essence of Japan's gastronomic legacy in a sophisticated and evocative way

> Ukiyo-e Art - under the deep blue sea, a japanese samurai, aisolated nature medicine and a whale


> Ukiyo-e Art - Using your artistic prowess and digital tools, craft a composition that embodies the charm and aesthetics of Ukiyo-e or Emakimono. Imagine a scene from Edo, the bustling capital city of Japan during the 17th to 19th centuries, and depict it with the exquisite detail, perspective, and delicate coloring reminiscent of traditional Japanese woodblock prints or hand-painted scrolls.

> Ukiyo-e Art - young adult tiefling woman with blue skin, wearing traditional Japanese fine clothes, sitting at a desk in a candlelit stone library background, muted colors, in the style of ukiyo-e woodblock print style

> Ukiyo-e Art - 19th century japanese woodblock print artstyle hiroshige portrait of greek goddess artemis huntress wildlife fierce woman with red hair

> Ukiyo-e Art - ukiyo-e Japanese isometric woodblock painting of architecture with tatami, screens, beams, and arches with women and animals and sky

> Ukiyo-e Art - Geisha Portrait, Engraving, Japanese Fashion, Umbrella, Hokusai's Panoramic View of Fuji, 1830s, in the Style of Utagawa Kuniyoshi

> Ukiyo-e Art - Portrait noble faced ronin samurai black hair wearing armor looking towards the horizon waves in the backround Illustrated
| [
"CRAFT"
] |
knowledgator/gliclass-base-v2.0-rac-init | knowledgator | zero-shot-classification | [
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"fr",
"de",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"dataset:knowledgator/gliclass-v1.0",
"dataset:fancyzhx/amazon_polarity",
"dataset:cnmoro/QuestionClassification",
"dataset:Arsive/toxicity_classification_jigsaw",
"dataset:shishir-dwi/News-Article-Categorization_IAB",
"dataset:SetFit/qnli",
"dataset:nyu-mll/multi_nli",
"dataset:SetFit/student-question-categories",
"dataset:SetFit/tweet_sentiment_extraction",
"dataset:SetFit/hate_speech18",
"dataset:saattrupdan/doc-nli",
"dataset:knowledgator/gliclass-v2.0-RAC",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:apache-2.0",
"region:us"
] | 2025-02-17T12:41:55Z | 2025-03-07T15:56:59+00:00 | 649 | 6 | ---
base_model:
- microsoft/deberta-v3-base
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
- knowledgator/gliclass-v1.0
- fancyzhx/amazon_polarity
- cnmoro/QuestionClassification
- Arsive/toxicity_classification_jigsaw
- shishir-dwi/News-Article-Categorization_IAB
- SetFit/qnli
- nyu-mll/multi_nli
- SetFit/student-question-categories
- SetFit/tweet_sentiment_extraction
- SetFit/hate_speech18
- saattrupdan/doc-nli
- knowledgator/gliclass-v2.0-RAC
language:
- en
- fr
- de
license: apache-2.0
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It matches the performance of a cross-encoder while being more compute-efficient, because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic and licensed data that allow commercial use, so it can be deployed in commercial applications.
This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base).
### Retrieval-augmented Classification (RAC):
The main idea of this model is to utilize information from semantically similar examples to enhance predictions at inference time. Tests showed that providing the model with at least one example from the training dataset, retrieved by semantic similarity, could increase the F1 score from 0.3090 to 0.4275, and in some cases from 0.2594 up to 0.6249. Moreover, the RAC approach with 2 retrieved examples achieved an F1 score comparable to fine-tuning with 8 examples per label: 0.4707 vs. 0.4838, respectively.
### RAC dataset generation strategy:


To further enhance classification performance, we generated a Retrieval-Augmented Classification (RAC) dataset. Each text example in the gliclass-v2.0 dataset was encoded using the paraphrase-MiniLM-L6-v2 sentence transformer and indexed in an HNSW (Hierarchical Navigable Small World) database. For 250k randomly selected samples, we retrieved up to three most similar examples (cosine similarity > 0.5) from the dataset.
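The retrieval step described above can be sketched as follows. This is an illustrative snippet, not the actual dataset-generation code: a brute-force cosine search stands in for the HNSW database, and the `index` of precomputed embeddings is assumed (in the real pipeline the vectors come from the paraphrase-MiniLM-L6-v2 sentence transformer).

```python
import math

def cosine(u, v):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_similar(query_vec, index, k=3, min_sim=0.5):
    # `index` is a list of (example_id, embedding) pairs; in the actual
    # pipeline this lookup is served by an HNSW index for efficiency.
    scored = [(ex_id, cosine(query_vec, vec)) for ex_id, vec in index]
    scored = [(ex_id, sim) for ex_id, sim in scored if sim > min_sim]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```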
During augmentation:
- The number of retrieved examples per sample was randomly chosen between 1 and 3.
- 30% of retrieved examples were replaced with random, unrelated examples to introduce controlled noise.
- If true labels were present in a retrieved example, false labels were removed with a 50% probability to balance information clarity.
Each retrieved example was formatted using structured ```<<EXAMPLE>> ... <</EXAMPLE>>``` tags, where:
- True labels were explicitly marked as ```<<TRUE_LABEL>> {label}```.
- False labels were marked as ```<<FALSE_LABEL>> {label}```, unless removed.
For each of the 250k randomly selected examples, the "text" field was modified as ```{original_text} <<EXAMPLE>> {retrieved_text} {true_labels_str} {false_labels_str} <</EXAMPLE>>...```
Where:
- ```{original_text}``` is the original example text.
- ```{retrieved_text}``` is a similar or randomly selected example.
- ```{true_labels_str}``` contains true labels formatted as ```<<TRUE_LABEL>> {label}```.
- ```{false_labels_str}``` contains false labels formatted as ```<<FALSE_LABEL>> {label}``` (unless removed with 50% probability).
Such a strategy allows the model to learn how to utilize the provided information without overfocusing on RAC examples. With both relevant and randomly retrieved examples, the dataset maintains a balance between useful contextual information and controlled noise. This ensures that the model does not become overly reliant on retrieval-augmented inputs while still benefiting from additional context when available.
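The formatting procedure above can be sketched in a few lines. The function and argument names here are illustrative, not part of the GLiClass API, and the 50% false-label drop is modeled as a per-example coin flip under the stated assumptions:

```python
import random

def format_rac_example(retrieved_text, true_labels, false_labels, rng=random):
    # False labels are removed with 50% probability, but only when the
    # retrieved example also carries true labels.
    if true_labels and rng.random() < 0.5:
        false_labels = []
    true_str = " ".join(f"<<TRUE_LABEL>> {label}" for label in true_labels)
    false_str = " ".join(f"<<FALSE_LABEL>> {label}" for label in false_labels)
    return f"<<EXAMPLE>> {retrieved_text} {true_str} {false_str} <</EXAMPLE>>"

def augment_text(original_text, retrieved):
    # `retrieved` holds 1 to 3 (text, true_labels, false_labels) tuples;
    # in the actual dataset, 30% of them are random noise examples.
    blocks = " ".join(format_rac_example(t, tl, fl) for t, tl, fl in retrieved)
    return f"{original_text} {blocks}"
```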
### How to use:
First, install the GLiClass library:
```bash
pip install gliclass
```
Then initialize the model and pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-base-v2.0-rac-init")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-base-v2.0-rac-init")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # [0] because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
To use with one **RAC** example:
```python
example_1 = {
"text": "A recently developed machine learning platform offers robust automation for complex data analysis workflows. While it enhances productivity, users have reported difficulties in integrating it with their current data infrastructure and a need for better documentation.",
"all_labels": ["AI", "automation", "data_analysis", "usability", "integration"],
  "true_labels": ["AI", "integration", "automation"]
}
text = "The new AI-powered tool streamlines data analysis by automating repetitive tasks, improving efficiency for data scientists. However, its steep learning curve and limited integration with existing platforms pose challenges for widespread adoption."
labels = ["AI", "automation", "data_analysis", "usability", "integration"]
results = pipeline(text, labels, threshold=0.1, rac_examples=[example_1])[0]
for predict in results:
print(predict["label"], " - ", predict["score"])
```
To use with several **RAC** examples:
```python
example_1 = {
"text": "A recently developed machine learning platform offers robust automation for complex data analysis workflows. While it enhances productivity, users have reported difficulties in integrating it with their current data infrastructure and a need for better documentation.",
"all_labels": ["AI", "automation", "data_analysis", "usability", "integration"],
  "true_labels": ["AI", "integration", "automation"]
}
example_2 = {
"text": "A cloud-based analytics tool leverages artificial intelligence to provide real-time insights. It significantly improves workflow efficiency but struggles with compatibility across different enterprise systems, requiring additional customization efforts.",
"all_labels": ["AI", "automation", "data_analysis", "usability", "integration"],
"true_labels": ["AI", "integration", "data_analysis"]
}
text = "The new AI-powered tool streamlines data analysis by automating repetitive tasks, improving efficiency for data scientists. However, its steep learning curve and limited integration with existing platforms pose challenges for widespread adoption."
labels = ["AI", "automation", "data_analysis", "usability", "integration"]
results = pipeline(text, labels, threshold=0.1, rac_examples=[example_1, example_2])[0]
for predict in results:
print(predict["label"], " - ", predict["score"])
```
If you want to use it for NLI-type tasks, we recommend representing your premise as the text and the hypothesis as a label. You can provide several hypotheses, but the model works best with a single input hypothesis.
```python
# Reuse the multi-label pipeline initialized above
text = "The cat slept on the windowsill all afternoon"
labels = ["The cat was awake and playing outside."]
results = pipeline(text, labels, threshold=0.0)[0]
print(results)
```
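For the RAG reranking use case mentioned earlier, here is a minimal sketch of the surrounding logic. The scoring function is abstracted so the snippet is self-contained; with GLiClass it would be something like `lambda text, labels: pipeline(text, labels, threshold=0.0)[0]`, where the query is posed as a single label against each passage:

```python
def rerank(query, passages, score_fn, top_k=3):
    # score_fn(text, labels) returns a list of {"label", "score"} dicts,
    # as the ZeroShotClassificationPipeline does for a single text.
    scored = [(p, score_fn(p, [query])[0]["score"]) for p in passages]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [p for p, _ in scored[:top_k]]
```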
### Benchmarks:
Below, you can find a comparison with other GLiClass models:
| Dataset | gliclass-base-v1.0-init | gliclass-large-v1.0-init | gliclass-modern-base-v2.0-init | gliclass-modern-large-v2.0-init | gliclass-base-v2.0-rac-init |
|----------------------|-----------------------|-----------------------|---------------------|---------------------|---------------------|
| CR | 0.8672 | 0.8024 | 0.9041 | 0.8980 | 0.7852 |
| sst2 | 0.8342 | 0.8734 | 0.9011 | 0.9434 | 0.8610 |
| sst5 | 0.2048 | 0.1638 | 0.1972 | 0.1123 | 0.0598 |
| 20_news_groups | 0.2317 | 0.4151 | 0.2448 | 0.2792 | 0.4007 |
| spam | 0.5963 | 0.5407 | 0.5074 | 0.6364 | 0.6739 |
| financial_phrasebank | 0.3594 | 0.3705 | 0.2537 | 0.2562 | 0.2537 |
| imdb | 0.8772 | 0.8836 | 0.8255 | 0.9137 | 0.8716 |
| ag_news | 0.5614 | 0.7069 | 0.6050 | 0.6933 | 0.6759 |
| emotion | 0.2865 | 0.3840 | 0.2474 | 0.3746 | 0.4160 |
| cap_sotu | 0.3966 | 0.4353 | 0.2929 | 0.2919 | 0.3871 |
| rotten_tomatoes | 0.6626 | 0.7933 | 0.6630 | 0.5928 | 0.7739 |
| **AVERAGE:** | 0.5344 | 0.5790 | 0.5129 | 0.5447 | 0.5598 |
Here you can see how the performance of the model grows, providing more **RAC** examples:
| Dataset | 0 examples | 1 example | 2 examples | 3 examples |
|-------------------------------------|------------|------------|------------|------------|
| cap_sotu | 0.3857 | 0.4665 | 0.4935 | 0.4847 |
| cap_sotu (8 examples) | 0.4938 | 0.5097 | 0.4976 | 0.4894 |
| cap_sotu (Weak Supervision - 8) | 0.4319 | 0.4764 | 0.4488 | 0.4465 |
| dair-ai_emotion | 0.4472 | 0.5505 | 0.5619 | 0.5705 |
| dair-ai_emotion (8 examples) | 0.5088 | 0.5630 | 0.5623 | 0.5740 |
| dair-ai_emotion (Weak Supervision - 8) | 0.4187 | 0.5479 | 0.5693 | 0.5828 |
| ag_news | 0.6791 | 0.8507 | 0.8717 | 0.8866 |
| ag_news (8 examples) | 0.8496 | 0.9002 | 0.9072 | 0.9091 |
| ag_news (Weak Supervision - 8) | 0.6546 | 0.8623 | 0.8841 | 0.8978 |
| sst5 | 0.0599 | 0.0675 | 0.1163 | 0.1267 |
| sst5 (8 examples) | 0.2887 | 0.2690 | 0.2642 | 0.2394 |
| sst5 (Weak Supervision - 8) | 0.0744 | 0.2780 | 0.2897 | 0.2912 |
| ScienceQA | 0.1142 | 0.4035 | 0.4534 | 0.4495 |
| ScienceQA (8 examples) | 0.6493 | 0.6547 | 0.6956 | 0.6770 |
| ScienceQA (Weak Supervision - 8) | 0.2987 | 0.5919 | 0.5998 | 0.5674 |
| Malicious_code_classification | 0.3717 | 0.6260 | 0.9672 | 0.9788 |
| Malicious_code_classification (8 examples) | 0.8444 | 0.9722 | 0.9788 | 0.9772 |
| Malicious_code_classification (Weak Supervision - 8) | 0.3745 | 0.9216 | 0.9788 | 0.9772 |
| twitter-financial-news-topic | 0.2594 | 0.6249 | 0.6408 | 0.6427 |
| twitter-financial-news-topic (8 examples) | 0.6137 | 0.7072 | 0.7099 | 0.6948 |
| twitter-financial-news-topic (Weak Supervision - 8) | 0.4032 | 0.6651 | 0.6316 | 0.6114 |
| 20_newsgroups | 0.3211 | 0.1339 | 0.0906 | 0.1005 |
| 20_newsgroups (8 examples) | 0.0959 | 0.0657 | 0.0440 | 0.0445 |
| 20_newsgroups (Weak Supervision - 8) | 0.4765 | 0.1035 | 0.0775 | 0.0777 |
| ChemProt | 0.2024 | 0.1911 | 0.1568 | 0.1329 |
| ChemProt (8 examples) | 0.2985 | 0.3479 | 0.3636 | 0.3538 |
| ChemProt (Weak Supervision - 8) | 0.2369 | 0.2067 | 0.1911 | 0.1780 |
| **AVERAGE:** | **0 examples** | **1 example** | **2 examples** | **3 examples** |
|-------------------------------------|---------------|---------------|---------------|---------------|
| Standard | 0.3090 | 0.4275 | 0.4707 | 0.4718 |
| 8 examples | 0.4838 | 0.5245 | 0.5288 | 0.5244 |
| Weak Supervision - 8 | 0.3661 | 0.4862 | 0.4868 | 0.4821 |
Here you can see how the performance of the model grows, providing more examples in comparison to other models:
| Model | Num Examples | sst5 | ag_news | emotion | **AVERAGE:** |
|------------------------------------|------------------|--------|---------|--------------|----------|
| gliclass-base-v2.0-rac-init | 0 | 0.0599 | 0.6791 | 0.4472 | 0.3954 |
| gliclass-base-v2.0-rac-init | 8 | 0.2887 | 0.8496 | 0.5088 | 0.5490 |
| gliclass-base-v2.0-rac-init | Weak Supervision | 0.0744 | 0.6546 | 0.4187 | 0.3826 |
| gliclass-modern-large-v2.0-init | 0 | 0.1123 | 0.6933 | 0.3746 | 0.3934 |
| gliclass-modern-large-v2.0-init | 8 | 0.5098 | 0.8339 | 0.5010 | 0.6149 |
| gliclass-modern-large-v2.0-init | Weak Supervision | 0.0951 | 0.6478 | 0.4520 | 0.3983 |
| gliclass-modern-base-v2.0-init | 0 | 0.1972 | 0.6050 | 0.2474 | 0.3499 |
| gliclass-modern-base-v2.0-init | 8 | 0.3604 | 0.7481 | 0.4420 | 0.5168 |
| gliclass-modern-base-v2.0-init | Weak Supervision | 0.1599 | 0.5713 | 0.3216 | 0.3509 |
| gliclass-large-v1.0-init | 0 | 0.1639 | 0.7069 | 0.3840 | 0.4183 |
| gliclass-large-v1.0-init | 8 | 0.4226 | 0.8415 | 0.4886 | 0.5842 |
| gliclass-large-v1.0-init | Weak Supervision | 0.1689 | 0.7051 | 0.4586 | 0.4442 |
| gliclass-base-v1.0-init | 0 | 0.2048 | 0.5614 | 0.2865 | 0.3509 |
| gliclass-base-v1.0-init | 8 | 0.2007 | 0.8359 | 0.4856 | 0.5074 |
| gliclass-base-v1.0-init | Weak Supervision | 0.0681 | 0.6627 | 0.3066 | 0.3458 | | [
"CHEMPROT"
] |
Yntec/NyankoMotsiX | Yntec | text-to-image | [
"diffusers",
"safetensors",
"Anime",
"Cute",
"Mochi",
"McSionnaigh",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2024-02-03T23:00:36Z | 2024-02-03T23:57:41+00:00 | 648 | 1 | ---
library_name: diffusers
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- Anime
- Cute
- Mochi
- McSionnaigh
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# cat mochi property - NyankoMotsiX
Original page: https://civitai.com/models/141004?modelVersionId=156294
Samples and prompts:

(Click for larger)
Top left: Anime cute little girl, bangs, depth of field, embedded, hair ribbon, long hair, looking at viewer, neck ribbon, non-web source, palm leaf, palm tree, purple eyes, purple hair, red ribbon, ribbon, self upload, solo
Top right: highquality, masterpiece, 1girl, Chi-Chi, :D, close up, smile, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar
Bottom left: little videogames, robert jordan, josephine wall pepperoni pizza, hidari winner, illumination, roll20 radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, comic, watched towel. 4k art on canvas by kyoani
Bottom right: (masterpiece), cute emerald eyes visible, (best quality), a pretty cute little girl holding teddy bear, looking at camera, (high resolution), Kids Book. sunglasses, short smile
| [
"BEAR"
] |