id (string, lengths 6–113) | author (string, lengths 2–36) | task_category (39 classes) | tags (list, lengths 1–4.05k) | created_time (int64, 1,646B–1,742B) | last_modified (timestamp[s], 2020-05-14 13:13:12 – 2025-03-18 10:01:09) | downloads (int64, 0–118M) | likes (int64, 0–4.86k) | README (string, lengths 30–1.01M) | matched_task (list, lengths 1–10) | is_bionlp (3 classes)
---|---|---|---|---|---|---|---|---|---|---|
guydebruyn/marian-finetuned-kde4-en-to-fr | guydebruyn | translation | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,697,544,968,000 | 2023-10-22T14:58:08 | 16 | 0 | ---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- type: bleu
value: 52.78125912187245
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8567
- Bleu: 52.7813
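For a quick smoke test, the checkpoint can be loaded with the standard `transformers` translation pipeline (a minimal sketch, not part of the auto-generated card; the example sentence is arbitrary KDE-style UI text):

```python
from transformers import pipeline

# Load this fine-tuned Marian checkpoint as an en->fr translation pipeline.
translator = pipeline("translation", model="guydebruyn/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```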
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
| [
"TRANSLATION"
] | Non_BioNLP |
RichardErkhov/knowledgator_-_Qwen-encoder-1.5B-8bits | RichardErkhov | null | [
"safetensors",
"qwen2",
"arxiv:2404.05961",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,741,430,762,000 | 2025-03-08T10:47:26 | 2 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen-encoder-1.5B - bnb 8bits
- Model creator: https://huggingface.co/knowledgator/
- Original model: https://huggingface.co/knowledgator/Qwen-encoder-1.5B/
Original model description:
---
license: apache-2.0
datasets:
- wikimedia/wikipedia
language:
- en
library_name: transformers
tags:
- LLM2Vec
- encoder
- LLM
- classification
- NER
- question-answering
---
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Overview
This is a bi-directional version of Qwen2-1.5B trained with masked token prediction on the Wikipedia dataset. Modern decoder models offer several advantages over classical encoders like BERT:

- They are pre-trained on more recent textual corpora
- They are trained on larger and more diverse datasets
- Modern decoders have better support for long-context windows
- Flash-attention support is available for these models
Considering these benefits, we are excited to release a series of decoder models tuned to work in a bi-directional setting. This approach combines the strengths of modern decoder architectures with the versatility of bi-directional context understanding, potentially opening up new possibilities for various natural language processing tasks, such as NER.
In contrast to the original LLM2Vec, we trained all weights of the Qwen model, which potentially improves its bi-directional abilities.
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec.models import Qwen2BiModel
import torch
from transformers import AutoTokenizer
# Loading the base Qwen model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"knowledgator/Qwen-encoder-1.5B"
)
model = Qwen2BiModel.from_pretrained("knowledgator/Qwen-encoder-1.5B")
text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
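To get one vector per input, a common follow-up step (an assumption here, not prescribed by the card) is attention-masked mean pooling over the hidden states:

```python
# Hypothetical pooling step: average hidden states over non-padding tokens.
mask = inputs["attention_mask"].unsqueeze(-1).float()         # (batch, seq_len, 1)
embeddings = (last_hidden_states * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                                       # (batch_size, hidden_size)
```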
## Adapting for Different Discriminative Tasks
Our bi-directional Qwen model can be easily adapted for various discriminative tasks such as text classification, question answering, and token classification.
To use these specialized versions, we provide a [fork of LLM2Vec](https://github.com/Knowledgator/llm2vec) with additional functionality.
### Installation
To get started, clone our fork of LLM2Vec and install it:
```bash
git clone https://github.com/Knowledgator/llm2vec.git
cd llm2vec
pip install -e .
```
Using the `-e` flag installs the package in editable mode, which is useful for development.
### Usage
Here's how to import and use the models for different tasks:
```python
from llm2vec import (
AutoLLMEncoderForSequenceClassification,
AutoLLMEncoderForQuestionAnswering,
AutoLLMEncoderForTokenClassification
)
# Load models for different tasks
classification_model = AutoLLMEncoderForSequenceClassification.from_pretrained('knowledgator/Qwen-encoder-1.5B')
question_answering_model = AutoLLMEncoderForQuestionAnswering.from_pretrained('knowledgator/Qwen-encoder-1.5B')
token_classification_model = AutoLLMEncoderForTokenClassification.from_pretrained('knowledgator/Qwen-encoder-1.5B')
```
### Example: Text Classification
Here's a basic example of how to use the model for text classification:
```python
from transformers import AutoTokenizer
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained('knowledgator/Qwen-encoder-1.5B')
# Prepare input
text = "This movie is great!"
inputs = tokenizer(text, return_tensors="pt")
# Get classification logits
outputs = classification_model(**inputs)
logits = outputs.logits
# The logits can be used with a softmax function to get probabilities
# or you can use torch.argmax(logits, dim=1) to get the predicted class
```
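For completeness, the standard PyTorch idiom for both options mentioned in the comments above:

```python
import torch

probs = torch.softmax(logits, dim=-1)        # per-class probabilities
pred_class = torch.argmax(logits, dim=-1)    # index of the most likely class
print(probs, pred_class)
```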
### Fine-tuning
To fine-tune these models on your specific task:
1. Prepare your dataset in a format compatible with HuggingFace's `datasets` library.
2. Use the `Trainer` class from HuggingFace's `transformers` library to fine-tune the model.
Here's a basic example:
```python
from transformers import Trainer, TrainingArguments
from datasets import load_dataset
# Load your dataset
dataset = load_dataset("your_dataset")

# Tokenize the text column so the Trainer receives model-ready inputs.
# `tokenizer` comes from the earlier snippet; the "text" column name is an
# assumption about your dataset's schema.
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

dataset = dataset.map(tokenize, batched=True)
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
logging_dir="./logs",
)
# Initialize Trainer
trainer = Trainer(
model=classification_model,
args=training_args,
train_dataset=dataset["train"],
eval_dataset=dataset["test"],
)
# Fine-tune the model
trainer.train()
```
### Contributing
We welcome contributions! If you have suggestions for improvements or encounter any issues, please open an issue or submit a pull request on our [GitHub repository](https://github.com/Knowledgator/llm2vec).
| [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING"
] | Non_BioNLP |
Jour/m2m100_418M-fr | Jour | translation | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2022-02-17T13:41:07 | 121 | 0 | ---
datasets:
- kde4
license: mit
tags:
- translation
- generated_from_trainer
model-index:
- name: m2m100_418M-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-fr
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset.
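Since the card does not include a usage snippet, here is a minimal sketch based on the base model's documented M2M100 API (the English/French language codes are assumptions inferred from the model name):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("Jour/m2m100_418M-fr")
tokenizer = M2M100Tokenizer.from_pretrained("Jour/m2m100_418M-fr")

# Translate English to French: set the source language, then force the
# French language token as the first generated token.
tokenizer.src_lang = "en"
encoded = tokenizer("Open the file manager", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```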
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0+cpu
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"TRANSLATION"
] | Non_BioNLP |
RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,722,076,057,000 | 2024-07-27T16:34:00 | 131 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
google-gemma-2-9b-it - GGUF
- Model creator: https://huggingface.co/SillyTilly/
- Original model: https://huggingface.co/SillyTilly/google-gemma-2-9b-it/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [google-gemma-2-9b-it.Q2_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q2_K.gguf) | Q2_K | 3.54GB |
| [google-gemma-2-9b-it.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.IQ3_XS.gguf) | IQ3_XS | 3.86GB |
| [google-gemma-2-9b-it.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.IQ3_S.gguf) | IQ3_S | 4.04GB |
| [google-gemma-2-9b-it.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q3_K_S.gguf) | Q3_K_S | 4.04GB |
| [google-gemma-2-9b-it.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.IQ3_M.gguf) | IQ3_M | 4.19GB |
| [google-gemma-2-9b-it.Q3_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q3_K.gguf) | Q3_K | 4.43GB |
| [google-gemma-2-9b-it.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q3_K_M.gguf) | Q3_K_M | 4.43GB |
| [google-gemma-2-9b-it.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q3_K_L.gguf) | Q3_K_L | 4.78GB |
| [google-gemma-2-9b-it.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.IQ4_XS.gguf) | IQ4_XS | 4.86GB |
| [google-gemma-2-9b-it.Q4_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q4_0.gguf) | Q4_0 | 5.07GB |
| [google-gemma-2-9b-it.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.IQ4_NL.gguf) | IQ4_NL | 5.1GB |
| [google-gemma-2-9b-it.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q4_K_S.gguf) | Q4_K_S | 5.1GB |
| [google-gemma-2-9b-it.Q4_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q4_K.gguf) | Q4_K | 5.37GB |
| [google-gemma-2-9b-it.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q4_K_M.gguf) | Q4_K_M | 5.37GB |
| [google-gemma-2-9b-it.Q4_1.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q4_1.gguf) | Q4_1 | 5.55GB |
| [google-gemma-2-9b-it.Q5_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q5_0.gguf) | Q5_0 | 6.04GB |
| [google-gemma-2-9b-it.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q5_K_S.gguf) | Q5_K_S | 6.04GB |
| [google-gemma-2-9b-it.Q5_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q5_K.gguf) | Q5_K | 6.19GB |
| [google-gemma-2-9b-it.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q5_K_M.gguf) | Q5_K_M | 6.19GB |
| [google-gemma-2-9b-it.Q5_1.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q5_1.gguf) | Q5_1 | 6.52GB |
| [google-gemma-2-9b-it.Q6_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q6_K.gguf) | Q6_K | 7.07GB |
| [google-gemma-2-9b-it.Q8_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf/blob/main/google-gemma-2-9b-it.Q8_0.gguf) | Q8_0 | 9.15GB |
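As a rough usage sketch (not part of this card), one of the files above can be fetched and run locally, assuming `llama.cpp`'s `llama-cli` binary and the Hugging Face CLI are installed:

```bash
# Download a single quant (Q4_K_M is a common size/quality trade-off)
pip install -U "huggingface_hub[cli]"
huggingface-cli download RichardErkhov/SillyTilly_-_google-gemma-2-9b-it-gguf \
  google-gemma-2-9b-it.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp
./llama-cli -m google-gemma-2-9b-it.Q4_K_M.gguf \
  -p "Write me a poem about Machine Learning." -n 256
```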
Original model description:
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (the model weights will just be upcast to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`.
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
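For instance, the prompt shown above can be assembled directly as a string:

```py
# Building the same prompt by hand (equivalent to the template output above)
prompt = (
    "<bos><start_of_turn>user\n"
    "Write a hello world program<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```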
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | TBD |
gpfan/ai-tiago | gpfan | summarization | [
"art",
"summarization",
"aa",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | 1,729,774,226,000 | 2024-10-24T14:47:36 | 0 | 0 | ---
base_model:
- black-forest-labs/FLUX.1-dev
language:
- aa
license: apache-2.0
pipeline_tag: summarization
tags:
- art
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
fathyshalab/reklambox2-6-20 | fathyshalab | text-classification | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,677,804,971,000 | 2023-03-03T01:58:23 | 8 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/reklambox2-6-20
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
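For orientation, here is a minimal training sketch using the SetFit API of that era (the few-shot data is hypothetical, and the exact trainer class may differ across `setfit` versions):

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny hypothetical few-shot dataset
train_ds = Dataset.from_dict({
    "text": ["great product", "terrible support"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("fathyshalab/reklambox2-6-20")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    num_iterations=20,                # sentence-pair generation rounds
)
trainer.train()                       # step 2 (head training) runs afterwards
```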
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/reklambox2-6-20")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Saxo/Linkbricks-Horizon-AI-Avengers-V1-32B | Saxo | text-generation | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ko",
"en",
"jp",
"cn",
"dataset:Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset",
"dataset:Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset",
"dataset:Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface",
"dataset:Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled",
"dataset:Saxo/ko-news-corpus-1",
"dataset:Saxo/ko-news-corpus-2",
"dataset:Saxo/ko-news-corpus-3",
"dataset:Saxo/ko-news-corpus-4",
"dataset:Saxo/ko-news-corpus-5",
"dataset:Saxo/ko-news-corpus-6",
"dataset:Saxo/ko-news-corpus-7",
"dataset:Saxo/ko-news-corpus-8",
"dataset:Saxo/ko-news-corpus-9",
"dataset:maywell/ko_Ultrafeedback_binarized",
"dataset:youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo",
"dataset:lilacai/glaive-function-calling-v2-sharegpt",
"dataset:kuotient/gsm8k-ko",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,735,489,995,000 | 2025-02-24T09:02:47 | 683 | 6 | ---
base_model: Qwen/Qwen2.5-32B-Instruct
datasets:
- Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
- Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
- Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
- Saxo/ko-news-corpus-1
- Saxo/ko-news-corpus-2
- Saxo/ko-news-corpus-3
- Saxo/ko-news-corpus-4
- Saxo/ko-news-corpus-5
- Saxo/ko-news-corpus-6
- Saxo/ko-news-corpus-7
- Saxo/ko-news-corpus-8
- Saxo/ko-news-corpus-9
- maywell/ko_Ultrafeedback_binarized
- youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
- lilacai/glaive-function-calling-v2-sharegpt
- kuotient/gsm8k-ko
language:
- ko
- en
- jp
- cn
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for Model ID
<div align="center">
<img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" />
</div>
<br>
<a href="https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/">Open LLM Leaderboard</a> 🏆 <32B, Rank-2 2025/02/24~
<br>
<br>
<br>
A multilingual, capability-enhanced language model built by Yunsung Ji (Saxo), data scientist and CEO of Linkbricks Horizon-AI, a company specializing in AI and big data analytics, by running CPT->SFT->DPO->ORPO over roughly 35% of the parameters of the Qwen/Qwen2.5-32B-Instruct base model on 8x H100-80G GPUs.<br>
Starting from 80 million news and wiki-corpus documents across many language regions, it was trained on task-specific Japanese-Korean-Chinese-English cross-training data together with math and logical-reasoning data, so that it handles cross-lingual augmentation across Japanese, Chinese, Korean, and English as well as complex logic problems.<br>
-The tokenizer is the base model's, used as-is without vocabulary expansion.<br>
-Strengthened for high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math, and logical reasoning.<br>
-Supports Function Calling and Tool Calling.<br>
-Uses Deepspeed Stage=3, rslora, and BAdam Layer Mode.<br>
-"transformers_version": "4.46.3"<br>
<br><br>
<a href="www.horizonai.ai">www.horizonai.ai</a>, <a href="www.linkbricks.com">www.linkbricks.com</a>, <a href="www.linkbricks.vc">www.linkbricks.vc</a>
| [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
barisaydin/bge-large-en | barisaydin | feature-extraction | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"mteb",
"sentence-transfomres",
"en",
"arxiv:2309.07597",
"license:mit",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,695,232,811,000 | 2023-09-20T18:09:26 | 11 | 0 | ---
language:
- en
license: mit
tags:
- mteb
- sentence-transfomres
- transformers
model-index:
- name: bge-large-en
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.94029850746269
- type: ap
value: 40.00228964744091
- type: f1
value: 70.86088267934595
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.93745
- type: ap
value: 88.24758534667426
- type: f1
value: 91.91033034217591
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.158
- type: f1
value: 45.78935185074774
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.972
- type: map_at_10
value: 54.874
- type: map_at_100
value: 55.53399999999999
- type: map_at_1000
value: 55.539
- type: map_at_3
value: 51.031000000000006
- type: map_at_5
value: 53.342999999999996
- type: mrr_at_1
value: 40.541
- type: mrr_at_10
value: 55.096000000000004
- type: mrr_at_100
value: 55.75599999999999
- type: mrr_at_1000
value: 55.761
- type: mrr_at_3
value: 51.221000000000004
- type: mrr_at_5
value: 53.568000000000005
- type: ndcg_at_1
value: 39.972
- type: ndcg_at_10
value: 62.456999999999994
- type: ndcg_at_100
value: 65.262
- type: ndcg_at_1000
value: 65.389
- type: ndcg_at_3
value: 54.673
- type: ndcg_at_5
value: 58.80499999999999
- type: precision_at_1
value: 39.972
- type: precision_at_10
value: 8.634
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 21.740000000000002
- type: precision_at_5
value: 15.036
- type: recall_at_1
value: 39.972
- type: recall_at_10
value: 86.344
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 65.22
- type: recall_at_5
value: 75.178
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.94652870403906
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.17257160340209
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.97867370559182
- type: mrr
value: 77.00820032537484
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 80.00986015960616
- type: cos_sim_spearman
value: 80.36387933827882
- type: euclidean_pearson
value: 80.32305287257296
- type: euclidean_spearman
value: 82.0524720308763
- type: manhattan_pearson
value: 80.19847473906454
- type: manhattan_spearman
value: 81.87957652506985
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.00000000000001
- type: f1
value: 87.99039027511853
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 41.36932844640705
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.34983239611985
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.257999999999996
- type: map_at_10
value: 42.937
- type: map_at_100
value: 44.406
- type: map_at_1000
value: 44.536
- type: map_at_3
value: 39.22
- type: map_at_5
value: 41.458
- type: mrr_at_1
value: 38.769999999999996
- type: mrr_at_10
value: 48.701
- type: mrr_at_100
value: 49.431000000000004
- type: mrr_at_1000
value: 49.476
- type: mrr_at_3
value: 45.875
- type: mrr_at_5
value: 47.67
- type: ndcg_at_1
value: 38.769999999999996
- type: ndcg_at_10
value: 49.35
- type: ndcg_at_100
value: 54.618
- type: ndcg_at_1000
value: 56.655
- type: ndcg_at_3
value: 43.826
- type: ndcg_at_5
value: 46.72
- type: precision_at_1
value: 38.769999999999996
- type: precision_at_10
value: 9.328
- type: precision_at_100
value: 1.484
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 20.649
- type: precision_at_5
value: 15.25
- type: recall_at_1
value: 32.257999999999996
- type: recall_at_10
value: 61.849
- type: recall_at_100
value: 83.70400000000001
- type: recall_at_1000
value: 96.344
- type: recall_at_3
value: 46.037
- type: recall_at_5
value: 53.724000000000004
- type: map_at_1
value: 32.979
- type: map_at_10
value: 43.376999999999995
- type: map_at_100
value: 44.667
- type: map_at_1000
value: 44.794
- type: map_at_3
value: 40.461999999999996
- type: map_at_5
value: 42.138
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 49.575
- type: mrr_at_100
value: 50.187000000000005
- type: mrr_at_1000
value: 50.231
- type: mrr_at_3
value: 47.601
- type: mrr_at_5
value: 48.786
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 48.957
- type: ndcg_at_100
value: 53.296
- type: ndcg_at_1000
value: 55.254000000000005
- type: ndcg_at_3
value: 45.235
- type: ndcg_at_5
value: 47.014
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.107999999999999
- type: precision_at_100
value: 1.481
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 21.783
- type: precision_at_5
value: 15.274
- type: recall_at_1
value: 32.979
- type: recall_at_10
value: 58.167
- type: recall_at_100
value: 76.374
- type: recall_at_1000
value: 88.836
- type: recall_at_3
value: 46.838
- type: recall_at_5
value: 52.006
- type: map_at_1
value: 40.326
- type: map_at_10
value: 53.468
- type: map_at_100
value: 54.454
- type: map_at_1000
value: 54.508
- type: map_at_3
value: 50.12799999999999
- type: map_at_5
value: 51.991
- type: mrr_at_1
value: 46.394999999999996
- type: mrr_at_10
value: 57.016999999999996
- type: mrr_at_100
value: 57.67099999999999
- type: mrr_at_1000
value: 57.699999999999996
- type: mrr_at_3
value: 54.65
- type: mrr_at_5
value: 56.101
- type: ndcg_at_1
value: 46.394999999999996
- type: ndcg_at_10
value: 59.507
- type: ndcg_at_100
value: 63.31099999999999
- type: ndcg_at_1000
value: 64.388
- type: ndcg_at_3
value: 54.04600000000001
- type: ndcg_at_5
value: 56.723
- type: precision_at_1
value: 46.394999999999996
- type: precision_at_10
value: 9.567
- type: precision_at_100
value: 1.234
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.117
- type: precision_at_5
value: 16.426
- type: recall_at_1
value: 40.326
- type: recall_at_10
value: 73.763
- type: recall_at_100
value: 89.927
- type: recall_at_1000
value: 97.509
- type: recall_at_3
value: 59.34
- type: recall_at_5
value: 65.915
- type: map_at_1
value: 26.661
- type: map_at_10
value: 35.522
- type: map_at_100
value: 36.619
- type: map_at_1000
value: 36.693999999999996
- type: map_at_3
value: 33.154
- type: map_at_5
value: 34.353
- type: mrr_at_1
value: 28.362
- type: mrr_at_10
value: 37.403999999999996
- type: mrr_at_100
value: 38.374
- type: mrr_at_1000
value: 38.428000000000004
- type: mrr_at_3
value: 35.235
- type: mrr_at_5
value: 36.269
- type: ndcg_at_1
value: 28.362
- type: ndcg_at_10
value: 40.431
- type: ndcg_at_100
value: 45.745999999999995
- type: ndcg_at_1000
value: 47.493
- type: ndcg_at_3
value: 35.733
- type: ndcg_at_5
value: 37.722
- type: precision_at_1
value: 28.362
- type: precision_at_10
value: 6.101999999999999
- type: precision_at_100
value: 0.922
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 15.140999999999998
- type: precision_at_5
value: 10.305
- type: recall_at_1
value: 26.661
- type: recall_at_10
value: 53.675
- type: recall_at_100
value: 77.891
- type: recall_at_1000
value: 90.72
- type: recall_at_3
value: 40.751
- type: recall_at_5
value: 45.517
- type: map_at_1
value: 18.886
- type: map_at_10
value: 27.288
- type: map_at_100
value: 28.327999999999996
- type: map_at_1000
value: 28.438999999999997
- type: map_at_3
value: 24.453
- type: map_at_5
value: 25.959
- type: mrr_at_1
value: 23.134
- type: mrr_at_10
value: 32.004
- type: mrr_at_100
value: 32.789
- type: mrr_at_1000
value: 32.857
- type: mrr_at_3
value: 29.084
- type: mrr_at_5
value: 30.614
- type: ndcg_at_1
value: 23.134
- type: ndcg_at_10
value: 32.852
- type: ndcg_at_100
value: 37.972
- type: ndcg_at_1000
value: 40.656
- type: ndcg_at_3
value: 27.435
- type: ndcg_at_5
value: 29.823
- type: precision_at_1
value: 23.134
- type: precision_at_10
value: 6.032
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 13.017999999999999
- type: precision_at_5
value: 9.501999999999999
- type: recall_at_1
value: 18.886
- type: recall_at_10
value: 45.34
- type: recall_at_100
value: 67.947
- type: recall_at_1000
value: 86.924
- type: recall_at_3
value: 30.535
- type: recall_at_5
value: 36.451
- type: map_at_1
value: 28.994999999999997
- type: map_at_10
value: 40.04
- type: map_at_100
value: 41.435
- type: map_at_1000
value: 41.537
- type: map_at_3
value: 37.091
- type: map_at_5
value: 38.802
- type: mrr_at_1
value: 35.034
- type: mrr_at_10
value: 45.411
- type: mrr_at_100
value: 46.226
- type: mrr_at_1000
value: 46.27
- type: mrr_at_3
value: 43.086
- type: mrr_at_5
value: 44.452999999999996
- type: ndcg_at_1
value: 35.034
- type: ndcg_at_10
value: 46.076
- type: ndcg_at_100
value: 51.483000000000004
- type: ndcg_at_1000
value: 53.433
- type: ndcg_at_3
value: 41.304
- type: ndcg_at_5
value: 43.641999999999996
- type: precision_at_1
value: 35.034
- type: precision_at_10
value: 8.258000000000001
- type: precision_at_100
value: 1.268
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 19.57
- type: precision_at_5
value: 13.782
- type: recall_at_1
value: 28.994999999999997
- type: recall_at_10
value: 58.538000000000004
- type: recall_at_100
value: 80.72399999999999
- type: recall_at_1000
value: 93.462
- type: recall_at_3
value: 45.199
- type: recall_at_5
value: 51.237
- type: map_at_1
value: 24.795
- type: map_at_10
value: 34.935
- type: map_at_100
value: 36.306
- type: map_at_1000
value: 36.417
- type: map_at_3
value: 31.831
- type: map_at_5
value: 33.626
- type: mrr_at_1
value: 30.479
- type: mrr_at_10
value: 40.225
- type: mrr_at_100
value: 41.055
- type: mrr_at_1000
value: 41.114
- type: mrr_at_3
value: 37.538
- type: mrr_at_5
value: 39.073
- type: ndcg_at_1
value: 30.479
- type: ndcg_at_10
value: 40.949999999999996
- type: ndcg_at_100
value: 46.525
- type: ndcg_at_1000
value: 48.892
- type: ndcg_at_3
value: 35.79
- type: ndcg_at_5
value: 38.237
- type: precision_at_1
value: 30.479
- type: precision_at_10
value: 7.6259999999999994
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 17.199
- type: precision_at_5
value: 12.466000000000001
- type: recall_at_1
value: 24.795
- type: recall_at_10
value: 53.421
- type: recall_at_100
value: 77.189
- type: recall_at_1000
value: 93.407
- type: recall_at_3
value: 39.051
- type: recall_at_5
value: 45.462
- type: map_at_1
value: 26.853499999999997
- type: map_at_10
value: 36.20433333333333
- type: map_at_100
value: 37.40391666666667
- type: map_at_1000
value: 37.515
- type: map_at_3
value: 33.39975
- type: map_at_5
value: 34.9665
- type: mrr_at_1
value: 31.62666666666667
- type: mrr_at_10
value: 40.436749999999996
- type: mrr_at_100
value: 41.260333333333335
- type: mrr_at_1000
value: 41.31525
- type: mrr_at_3
value: 38.06733333333332
- type: mrr_at_5
value: 39.41541666666667
- type: ndcg_at_1
value: 31.62666666666667
- type: ndcg_at_10
value: 41.63341666666667
- type: ndcg_at_100
value: 46.704166666666666
- type: ndcg_at_1000
value: 48.88483333333335
- type: ndcg_at_3
value: 36.896
- type: ndcg_at_5
value: 39.11891666666667
- type: precision_at_1
value: 31.62666666666667
- type: precision_at_10
value: 7.241083333333333
- type: precision_at_100
value: 1.1488333333333334
- type: precision_at_1000
value: 0.15250000000000002
- type: precision_at_3
value: 16.908333333333335
- type: precision_at_5
value: 11.942833333333333
- type: recall_at_1
value: 26.853499999999997
- type: recall_at_10
value: 53.461333333333336
- type: recall_at_100
value: 75.63633333333333
- type: recall_at_1000
value: 90.67016666666666
- type: recall_at_3
value: 40.24241666666667
- type: recall_at_5
value: 45.98608333333333
- type: map_at_1
value: 25.241999999999997
- type: map_at_10
value: 31.863999999999997
- type: map_at_100
value: 32.835
- type: map_at_1000
value: 32.928000000000004
- type: map_at_3
value: 29.694
- type: map_at_5
value: 30.978
- type: mrr_at_1
value: 28.374
- type: mrr_at_10
value: 34.814
- type: mrr_at_100
value: 35.596
- type: mrr_at_1000
value: 35.666
- type: mrr_at_3
value: 32.745000000000005
- type: mrr_at_5
value: 34.049
- type: ndcg_at_1
value: 28.374
- type: ndcg_at_10
value: 35.969
- type: ndcg_at_100
value: 40.708
- type: ndcg_at_1000
value: 43.08
- type: ndcg_at_3
value: 31.968999999999998
- type: ndcg_at_5
value: 34.069
- type: precision_at_1
value: 28.374
- type: precision_at_10
value: 5.583
- type: precision_at_100
value: 0.8630000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 13.547999999999998
- type: precision_at_5
value: 9.447999999999999
- type: recall_at_1
value: 25.241999999999997
- type: recall_at_10
value: 45.711
- type: recall_at_100
value: 67.482
- type: recall_at_1000
value: 85.13300000000001
- type: recall_at_3
value: 34.622
- type: recall_at_5
value: 40.043
- type: map_at_1
value: 17.488999999999997
- type: map_at_10
value: 25.142999999999997
- type: map_at_100
value: 26.244
- type: map_at_1000
value: 26.363999999999997
- type: map_at_3
value: 22.654
- type: map_at_5
value: 24.017
- type: mrr_at_1
value: 21.198
- type: mrr_at_10
value: 28.903000000000002
- type: mrr_at_100
value: 29.860999999999997
- type: mrr_at_1000
value: 29.934
- type: mrr_at_3
value: 26.634999999999998
- type: mrr_at_5
value: 27.903
- type: ndcg_at_1
value: 21.198
- type: ndcg_at_10
value: 29.982999999999997
- type: ndcg_at_100
value: 35.275
- type: ndcg_at_1000
value: 38.074000000000005
- type: ndcg_at_3
value: 25.502999999999997
- type: ndcg_at_5
value: 27.557
- type: precision_at_1
value: 21.198
- type: precision_at_10
value: 5.502
- type: precision_at_100
value: 0.942
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 12.044
- type: precision_at_5
value: 8.782
- type: recall_at_1
value: 17.488999999999997
- type: recall_at_10
value: 40.821000000000005
- type: recall_at_100
value: 64.567
- type: recall_at_1000
value: 84.452
- type: recall_at_3
value: 28.351
- type: recall_at_5
value: 33.645
- type: map_at_1
value: 27.066000000000003
- type: map_at_10
value: 36.134
- type: map_at_100
value: 37.285000000000004
- type: map_at_1000
value: 37.389
- type: map_at_3
value: 33.522999999999996
- type: map_at_5
value: 34.905
- type: mrr_at_1
value: 31.436999999999998
- type: mrr_at_10
value: 40.225
- type: mrr_at_100
value: 41.079
- type: mrr_at_1000
value: 41.138000000000005
- type: mrr_at_3
value: 38.074999999999996
- type: mrr_at_5
value: 39.190000000000005
- type: ndcg_at_1
value: 31.436999999999998
- type: ndcg_at_10
value: 41.494
- type: ndcg_at_100
value: 46.678999999999995
- type: ndcg_at_1000
value: 48.964
- type: ndcg_at_3
value: 36.828
- type: ndcg_at_5
value: 38.789
- type: precision_at_1
value: 31.436999999999998
- type: precision_at_10
value: 6.931
- type: precision_at_100
value: 1.072
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 16.729
- type: precision_at_5
value: 11.567
- type: recall_at_1
value: 27.066000000000003
- type: recall_at_10
value: 53.705000000000005
- type: recall_at_100
value: 75.968
- type: recall_at_1000
value: 91.937
- type: recall_at_3
value: 40.865
- type: recall_at_5
value: 45.739999999999995
- type: map_at_1
value: 24.979000000000003
- type: map_at_10
value: 32.799
- type: map_at_100
value: 34.508
- type: map_at_1000
value: 34.719
- type: map_at_3
value: 29.947000000000003
- type: map_at_5
value: 31.584
- type: mrr_at_1
value: 30.237000000000002
- type: mrr_at_10
value: 37.651
- type: mrr_at_100
value: 38.805
- type: mrr_at_1000
value: 38.851
- type: mrr_at_3
value: 35.046
- type: mrr_at_5
value: 36.548
- type: ndcg_at_1
value: 30.237000000000002
- type: ndcg_at_10
value: 38.356
- type: ndcg_at_100
value: 44.906
- type: ndcg_at_1000
value: 47.299
- type: ndcg_at_3
value: 33.717999999999996
- type: ndcg_at_5
value: 35.946
- type: precision_at_1
value: 30.237000000000002
- type: precision_at_10
value: 7.292
- type: precision_at_100
value: 1.496
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 15.547
- type: precision_at_5
value: 11.344
- type: recall_at_1
value: 24.979000000000003
- type: recall_at_10
value: 48.624
- type: recall_at_100
value: 77.932
- type: recall_at_1000
value: 92.66499999999999
- type: recall_at_3
value: 35.217
- type: recall_at_5
value: 41.394
- type: map_at_1
value: 22.566
- type: map_at_10
value: 30.945
- type: map_at_100
value: 31.759999999999998
- type: map_at_1000
value: 31.855
- type: map_at_3
value: 28.64
- type: map_at_5
value: 29.787000000000003
- type: mrr_at_1
value: 24.954
- type: mrr_at_10
value: 33.311
- type: mrr_at_100
value: 34.050000000000004
- type: mrr_at_1000
value: 34.117999999999995
- type: mrr_at_3
value: 31.238
- type: mrr_at_5
value: 32.329
- type: ndcg_at_1
value: 24.954
- type: ndcg_at_10
value: 35.676
- type: ndcg_at_100
value: 39.931
- type: ndcg_at_1000
value: 42.43
- type: ndcg_at_3
value: 31.365
- type: ndcg_at_5
value: 33.184999999999995
- type: precision_at_1
value: 24.954
- type: precision_at_10
value: 5.564
- type: precision_at_100
value: 0.826
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 13.555
- type: precision_at_5
value: 9.168
- type: recall_at_1
value: 22.566
- type: recall_at_10
value: 47.922
- type: recall_at_100
value: 67.931
- type: recall_at_1000
value: 86.653
- type: recall_at_3
value: 36.103
- type: recall_at_5
value: 40.699000000000005
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.950000000000003
- type: map_at_10
value: 28.612
- type: map_at_100
value: 30.476999999999997
- type: map_at_1000
value: 30.674
- type: map_at_3
value: 24.262
- type: map_at_5
value: 26.554
- type: mrr_at_1
value: 38.241
- type: mrr_at_10
value: 50.43
- type: mrr_at_100
value: 51.059
- type: mrr_at_1000
value: 51.090999999999994
- type: mrr_at_3
value: 47.514
- type: mrr_at_5
value: 49.246
- type: ndcg_at_1
value: 38.241
- type: ndcg_at_10
value: 38.218
- type: ndcg_at_100
value: 45.003
- type: ndcg_at_1000
value: 48.269
- type: ndcg_at_3
value: 32.568000000000005
- type: ndcg_at_5
value: 34.400999999999996
- type: precision_at_1
value: 38.241
- type: precision_at_10
value: 11.674
- type: precision_at_100
value: 1.913
- type: precision_at_1000
value: 0.252
- type: precision_at_3
value: 24.387
- type: precision_at_5
value: 18.163
- type: recall_at_1
value: 16.950000000000003
- type: recall_at_10
value: 43.769000000000005
- type: recall_at_100
value: 66.875
- type: recall_at_1000
value: 84.92699999999999
- type: recall_at_3
value: 29.353
- type: recall_at_5
value: 35.467
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.276
- type: map_at_10
value: 20.848
- type: map_at_100
value: 29.804000000000002
- type: map_at_1000
value: 31.398
- type: map_at_3
value: 14.886
- type: map_at_5
value: 17.516000000000002
- type: mrr_at_1
value: 71
- type: mrr_at_10
value: 78.724
- type: mrr_at_100
value: 78.976
- type: mrr_at_1000
value: 78.986
- type: mrr_at_3
value: 77.333
- type: mrr_at_5
value: 78.021
- type: ndcg_at_1
value: 57.875
- type: ndcg_at_10
value: 43.855
- type: ndcg_at_100
value: 48.99
- type: ndcg_at_1000
value: 56.141
- type: ndcg_at_3
value: 48.914
- type: ndcg_at_5
value: 45.961
- type: precision_at_1
value: 71
- type: precision_at_10
value: 34.575
- type: precision_at_100
value: 11.182
- type: precision_at_1000
value: 2.044
- type: precision_at_3
value: 52.5
- type: precision_at_5
value: 44.2
- type: recall_at_1
value: 9.276
- type: recall_at_10
value: 26.501
- type: recall_at_100
value: 55.72899999999999
- type: recall_at_1000
value: 78.532
- type: recall_at_3
value: 16.365
- type: recall_at_5
value: 20.154
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.71
- type: f1
value: 47.74801556489574
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 73.405
- type: map_at_10
value: 82.822
- type: map_at_100
value: 83.042
- type: map_at_1000
value: 83.055
- type: map_at_3
value: 81.65299999999999
- type: map_at_5
value: 82.431
- type: mrr_at_1
value: 79.178
- type: mrr_at_10
value: 87.02
- type: mrr_at_100
value: 87.095
- type: mrr_at_1000
value: 87.09700000000001
- type: mrr_at_3
value: 86.309
- type: mrr_at_5
value: 86.824
- type: ndcg_at_1
value: 79.178
- type: ndcg_at_10
value: 86.72
- type: ndcg_at_100
value: 87.457
- type: ndcg_at_1000
value: 87.691
- type: ndcg_at_3
value: 84.974
- type: ndcg_at_5
value: 86.032
- type: precision_at_1
value: 79.178
- type: precision_at_10
value: 10.548
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.848
- type: precision_at_5
value: 20.45
- type: recall_at_1
value: 73.405
- type: recall_at_10
value: 94.39699999999999
- type: recall_at_100
value: 97.219
- type: recall_at_1000
value: 98.675
- type: recall_at_3
value: 89.679
- type: recall_at_5
value: 92.392
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.651
- type: map_at_10
value: 36.886
- type: map_at_100
value: 38.811
- type: map_at_1000
value: 38.981
- type: map_at_3
value: 32.538
- type: map_at_5
value: 34.763
- type: mrr_at_1
value: 44.444
- type: mrr_at_10
value: 53.168000000000006
- type: mrr_at_100
value: 53.839000000000006
- type: mrr_at_1000
value: 53.869
- type: mrr_at_3
value: 50.54
- type: mrr_at_5
value: 52.068000000000005
- type: ndcg_at_1
value: 44.444
- type: ndcg_at_10
value: 44.994
- type: ndcg_at_100
value: 51.599
- type: ndcg_at_1000
value: 54.339999999999996
- type: ndcg_at_3
value: 41.372
- type: ndcg_at_5
value: 42.149
- type: precision_at_1
value: 44.444
- type: precision_at_10
value: 12.407
- type: precision_at_100
value: 1.9269999999999998
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 27.726
- type: precision_at_5
value: 19.814999999999998
- type: recall_at_1
value: 22.651
- type: recall_at_10
value: 52.075
- type: recall_at_100
value: 76.51400000000001
- type: recall_at_1000
value: 92.852
- type: recall_at_3
value: 37.236000000000004
- type: recall_at_5
value: 43.175999999999995
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.777
- type: map_at_10
value: 66.79899999999999
- type: map_at_100
value: 67.65299999999999
- type: map_at_1000
value: 67.706
- type: map_at_3
value: 63.352
- type: map_at_5
value: 65.52900000000001
- type: mrr_at_1
value: 81.553
- type: mrr_at_10
value: 86.983
- type: mrr_at_100
value: 87.132
- type: mrr_at_1000
value: 87.136
- type: mrr_at_3
value: 86.156
- type: mrr_at_5
value: 86.726
- type: ndcg_at_1
value: 81.553
- type: ndcg_at_10
value: 74.64
- type: ndcg_at_100
value: 77.459
- type: ndcg_at_1000
value: 78.43
- type: ndcg_at_3
value: 69.878
- type: ndcg_at_5
value: 72.59400000000001
- type: precision_at_1
value: 81.553
- type: precision_at_10
value: 15.654000000000002
- type: precision_at_100
value: 1.783
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 45.199
- type: precision_at_5
value: 29.267
- type: recall_at_1
value: 40.777
- type: recall_at_10
value: 78.271
- type: recall_at_100
value: 89.129
- type: recall_at_1000
value: 95.49
- type: recall_at_3
value: 67.79899999999999
- type: recall_at_5
value: 73.167
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 93.5064
- type: ap
value: 90.25495114444111
- type: f1
value: 93.5012434973381
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.301
- type: map_at_10
value: 35.657
- type: map_at_100
value: 36.797000000000004
- type: map_at_1000
value: 36.844
- type: map_at_3
value: 31.743
- type: map_at_5
value: 34.003
- type: mrr_at_1
value: 23.854
- type: mrr_at_10
value: 36.242999999999995
- type: mrr_at_100
value: 37.32
- type: mrr_at_1000
value: 37.361
- type: mrr_at_3
value: 32.4
- type: mrr_at_5
value: 34.634
- type: ndcg_at_1
value: 23.868000000000002
- type: ndcg_at_10
value: 42.589
- type: ndcg_at_100
value: 48.031
- type: ndcg_at_1000
value: 49.189
- type: ndcg_at_3
value: 34.649
- type: ndcg_at_5
value: 38.676
- type: precision_at_1
value: 23.868000000000002
- type: precision_at_10
value: 6.6850000000000005
- type: precision_at_100
value: 0.9400000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.651
- type: precision_at_5
value: 10.834000000000001
- type: recall_at_1
value: 23.301
- type: recall_at_10
value: 63.88700000000001
- type: recall_at_100
value: 88.947
- type: recall_at_1000
value: 97.783
- type: recall_at_3
value: 42.393
- type: recall_at_5
value: 52.036
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.64888280893753
- type: f1
value: 94.41310774203512
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.72184222526221
- type: f1
value: 61.522034067350106
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 79.60659045057163
- type: f1
value: 77.268649687049
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 81.83254875588432
- type: f1
value: 81.61520635919082
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.31529875009507
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.734233714415073
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.994501713009452
- type: mrr
value: 32.13512850703073
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.603000000000001
- type: map_at_10
value: 13.767999999999999
- type: map_at_100
value: 17.197000000000003
- type: map_at_1000
value: 18.615000000000002
- type: map_at_3
value: 10.567
- type: map_at_5
value: 12.078999999999999
- type: mrr_at_1
value: 44.891999999999996
- type: mrr_at_10
value: 53.75299999999999
- type: mrr_at_100
value: 54.35
- type: mrr_at_1000
value: 54.388000000000005
- type: mrr_at_3
value: 51.495999999999995
- type: mrr_at_5
value: 52.688
- type: ndcg_at_1
value: 43.189
- type: ndcg_at_10
value: 34.567
- type: ndcg_at_100
value: 32.273
- type: ndcg_at_1000
value: 41.321999999999996
- type: ndcg_at_3
value: 40.171
- type: ndcg_at_5
value: 37.502
- type: precision_at_1
value: 44.582
- type: precision_at_10
value: 25.139
- type: precision_at_100
value: 7.739999999999999
- type: precision_at_1000
value: 2.054
- type: precision_at_3
value: 37.152
- type: precision_at_5
value: 31.826999999999998
- type: recall_at_1
value: 6.603000000000001
- type: recall_at_10
value: 17.023
- type: recall_at_100
value: 32.914
- type: recall_at_1000
value: 64.44800000000001
- type: recall_at_3
value: 11.457
- type: recall_at_5
value: 13.816
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.026000000000003
- type: map_at_10
value: 45.429
- type: map_at_100
value: 46.45
- type: map_at_1000
value: 46.478
- type: map_at_3
value: 41.147
- type: map_at_5
value: 43.627
- type: mrr_at_1
value: 33.951
- type: mrr_at_10
value: 47.953
- type: mrr_at_100
value: 48.731
- type: mrr_at_1000
value: 48.751
- type: mrr_at_3
value: 44.39
- type: mrr_at_5
value: 46.533
- type: ndcg_at_1
value: 33.951
- type: ndcg_at_10
value: 53.24100000000001
- type: ndcg_at_100
value: 57.599999999999994
- type: ndcg_at_1000
value: 58.270999999999994
- type: ndcg_at_3
value: 45.190999999999995
- type: ndcg_at_5
value: 49.339
- type: precision_at_1
value: 33.951
- type: precision_at_10
value: 8.856
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.713
- type: precision_at_5
value: 14.838000000000001
- type: recall_at_1
value: 30.026000000000003
- type: recall_at_10
value: 74.512
- type: recall_at_100
value: 93.395
- type: recall_at_1000
value: 98.402
- type: recall_at_3
value: 53.677
- type: recall_at_5
value: 63.198
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.41300000000001
- type: map_at_10
value: 85.387
- type: map_at_100
value: 86.027
- type: map_at_1000
value: 86.041
- type: map_at_3
value: 82.543
- type: map_at_5
value: 84.304
- type: mrr_at_1
value: 82.35
- type: mrr_at_10
value: 88.248
- type: mrr_at_100
value: 88.348
- type: mrr_at_1000
value: 88.349
- type: mrr_at_3
value: 87.348
- type: mrr_at_5
value: 87.96300000000001
- type: ndcg_at_1
value: 82.37
- type: ndcg_at_10
value: 88.98
- type: ndcg_at_100
value: 90.16499999999999
- type: ndcg_at_1000
value: 90.239
- type: ndcg_at_3
value: 86.34100000000001
- type: ndcg_at_5
value: 87.761
- type: precision_at_1
value: 82.37
- type: precision_at_10
value: 13.471
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.827
- type: precision_at_5
value: 24.773999999999997
- type: recall_at_1
value: 71.41300000000001
- type: recall_at_10
value: 95.748
- type: recall_at_100
value: 99.69200000000001
- type: recall_at_1000
value: 99.98
- type: recall_at_3
value: 87.996
- type: recall_at_5
value: 92.142
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.96878497780007
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 65.31371347128074
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.287
- type: map_at_10
value: 13.530000000000001
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 16.245
- type: map_at_3
value: 9.612
- type: map_at_5
value: 11.672
- type: mrr_at_1
value: 26
- type: mrr_at_10
value: 37.335
- type: mrr_at_100
value: 38.443
- type: mrr_at_1000
value: 38.486
- type: mrr_at_3
value: 33.783
- type: mrr_at_5
value: 36.028
- type: ndcg_at_1
value: 26
- type: ndcg_at_10
value: 22.215
- type: ndcg_at_100
value: 31.101
- type: ndcg_at_1000
value: 36.809
- type: ndcg_at_3
value: 21.104
- type: ndcg_at_5
value: 18.759999999999998
- type: precision_at_1
value: 26
- type: precision_at_10
value: 11.43
- type: precision_at_100
value: 2.424
- type: precision_at_1000
value: 0.379
- type: precision_at_3
value: 19.7
- type: precision_at_5
value: 16.619999999999997
- type: recall_at_1
value: 5.287
- type: recall_at_10
value: 23.18
- type: recall_at_100
value: 49.208
- type: recall_at_1000
value: 76.85300000000001
- type: recall_at_3
value: 11.991999999999999
- type: recall_at_5
value: 16.85
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.87834913790886
- type: cos_sim_spearman
value: 81.04583513112122
- type: euclidean_pearson
value: 81.20484174558065
- type: euclidean_spearman
value: 80.76430832561769
- type: manhattan_pearson
value: 81.21416730978615
- type: manhattan_spearman
value: 80.7797637394211
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.56143998865157
- type: cos_sim_spearman
value: 79.75387012744471
- type: euclidean_pearson
value: 83.7877519997019
- type: euclidean_spearman
value: 79.90489748003296
- type: manhattan_pearson
value: 83.7540590666095
- type: manhattan_spearman
value: 79.86434577931573
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.92102564177941
- type: cos_sim_spearman
value: 84.98234585939103
- type: euclidean_pearson
value: 84.47729567593696
- type: euclidean_spearman
value: 85.09490696194469
- type: manhattan_pearson
value: 84.38622951588229
- type: manhattan_spearman
value: 85.02507171545574
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.1891164763377
- type: cos_sim_spearman
value: 80.7997969966883
- type: euclidean_pearson
value: 80.48572256162396
- type: euclidean_spearman
value: 80.57851903536378
- type: manhattan_pearson
value: 80.4324819433651
- type: manhattan_spearman
value: 80.5074526239062
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 82.64319975116025
- type: cos_sim_spearman
value: 84.88671197763652
- type: euclidean_pearson
value: 84.74692193293231
- type: euclidean_spearman
value: 85.27151722073653
- type: manhattan_pearson
value: 84.72460516785438
- type: manhattan_spearman
value: 85.26518899786687
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.24687565822381
- type: cos_sim_spearman
value: 85.60418454111263
- type: euclidean_pearson
value: 84.85829740169851
- type: euclidean_spearman
value: 85.66378014138306
- type: manhattan_pearson
value: 84.84672408808835
- type: manhattan_spearman
value: 85.63331924364891
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.87758895415485
- type: cos_sim_spearman
value: 85.8193745617297
- type: euclidean_pearson
value: 85.78719118848134
- type: euclidean_spearman
value: 84.35797575385688
- type: manhattan_pearson
value: 85.97919844815692
- type: manhattan_spearman
value: 84.58334745175151
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.27076035963599
- type: cos_sim_spearman
value: 67.21433656439973
- type: euclidean_pearson
value: 68.07434078679324
- type: euclidean_spearman
value: 66.0249731719049
- type: manhattan_pearson
value: 67.95495198947476
- type: manhattan_spearman
value: 65.99893908331886
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.22437747056817
- type: cos_sim_spearman
value: 85.0995685206174
- type: euclidean_pearson
value: 84.08616925603394
- type: euclidean_spearman
value: 84.89633925691658
- type: manhattan_pearson
value: 84.08332675923133
- type: manhattan_spearman
value: 84.8858228112915
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.6909022589666
- type: mrr
value: 96.43341952165481
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.660999999999994
- type: map_at_10
value: 67.625
- type: map_at_100
value: 68.07600000000001
- type: map_at_1000
value: 68.10199999999999
- type: map_at_3
value: 64.50399999999999
- type: map_at_5
value: 66.281
- type: mrr_at_1
value: 61
- type: mrr_at_10
value: 68.953
- type: mrr_at_100
value: 69.327
- type: mrr_at_1000
value: 69.352
- type: mrr_at_3
value: 66.833
- type: mrr_at_5
value: 68.05
- type: ndcg_at_1
value: 61
- type: ndcg_at_10
value: 72.369
- type: ndcg_at_100
value: 74.237
- type: ndcg_at_1000
value: 74.939
- type: ndcg_at_3
value: 67.284
- type: ndcg_at_5
value: 69.72500000000001
- type: precision_at_1
value: 61
- type: precision_at_10
value: 9.733
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.222
- type: precision_at_5
value: 17.4
- type: recall_at_1
value: 57.660999999999994
- type: recall_at_10
value: 85.656
- type: recall_at_100
value: 93.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.961
- type: recall_at_5
value: 78.094
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.86930693069307
- type: cos_sim_ap
value: 96.76685487950894
- type: cos_sim_f1
value: 93.44587884806354
- type: cos_sim_precision
value: 92.80078895463511
- type: cos_sim_recall
value: 94.1
- type: dot_accuracy
value: 99.54356435643564
- type: dot_ap
value: 81.18659960405607
- type: dot_f1
value: 75.78008915304605
- type: dot_precision
value: 75.07360157016683
- type: dot_recall
value: 76.5
- type: euclidean_accuracy
value: 99.87326732673267
- type: euclidean_ap
value: 96.8102411908941
- type: euclidean_f1
value: 93.6127744510978
- type: euclidean_precision
value: 93.42629482071713
- type: euclidean_recall
value: 93.8
- type: manhattan_accuracy
value: 99.87425742574257
- type: manhattan_ap
value: 96.82857341435529
- type: manhattan_f1
value: 93.62129583124059
- type: manhattan_precision
value: 94.04641775983855
- type: manhattan_recall
value: 93.2
- type: max_accuracy
value: 99.87425742574257
- type: max_ap
value: 96.82857341435529
- type: max_f1
value: 93.62129583124059
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.92560972698926
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.92797240259008
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.244624045597654
- type: mrr
value: 56.185303666921314
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.02491987312937
- type: cos_sim_spearman
value: 32.055592206679734
- type: dot_pearson
value: 24.731627575422557
- type: dot_spearman
value: 24.308029077069733
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.231
- type: map_at_10
value: 1.899
- type: map_at_100
value: 9.498
- type: map_at_1000
value: 20.979999999999997
- type: map_at_3
value: 0.652
- type: map_at_5
value: 1.069
- type: mrr_at_1
value: 88
- type: mrr_at_10
value: 93.4
- type: mrr_at_100
value: 93.4
- type: mrr_at_1000
value: 93.4
- type: mrr_at_3
value: 93
- type: mrr_at_5
value: 93.4
- type: ndcg_at_1
value: 86
- type: ndcg_at_10
value: 75.375
- type: ndcg_at_100
value: 52.891999999999996
- type: ndcg_at_1000
value: 44.952999999999996
- type: ndcg_at_3
value: 81.05
- type: ndcg_at_5
value: 80.175
- type: precision_at_1
value: 88
- type: precision_at_10
value: 79
- type: precision_at_100
value: 53.16
- type: precision_at_1000
value: 19.408
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 84
- type: recall_at_1
value: 0.231
- type: recall_at_10
value: 2.078
- type: recall_at_100
value: 12.601
- type: recall_at_1000
value: 41.296
- type: recall_at_3
value: 0.6779999999999999
- type: recall_at_5
value: 1.1360000000000001
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.782
- type: map_at_10
value: 10.204
- type: map_at_100
value: 16.176
- type: map_at_1000
value: 17.456
- type: map_at_3
value: 5.354
- type: map_at_5
value: 7.503
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 54.010000000000005
- type: mrr_at_100
value: 54.49
- type: mrr_at_1000
value: 54.49
- type: mrr_at_3
value: 48.980000000000004
- type: mrr_at_5
value: 51.735
- type: ndcg_at_1
value: 36.735
- type: ndcg_at_10
value: 26.61
- type: ndcg_at_100
value: 36.967
- type: ndcg_at_1000
value: 47.274
- type: ndcg_at_3
value: 30.363
- type: ndcg_at_5
value: 29.448999999999998
- type: precision_at_1
value: 40.816
- type: precision_at_10
value: 23.878
- type: precision_at_100
value: 7.693999999999999
- type: precision_at_1000
value: 1.4489999999999998
- type: precision_at_3
value: 31.293
- type: precision_at_5
value: 29.796
- type: recall_at_1
value: 2.782
- type: recall_at_10
value: 16.485
- type: recall_at_100
value: 46.924
- type: recall_at_1000
value: 79.365
- type: recall_at_3
value: 6.52
- type: recall_at_5
value: 10.48
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.08300000000001
- type: ap
value: 13.91559884590195
- type: f1
value: 53.956838444291364
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.34069043576683
- type: f1
value: 59.662041994618406
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.70780611078653
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.10734934732073
- type: cos_sim_ap
value: 77.58349999516054
- type: cos_sim_f1
value: 70.25391395868965
- type: cos_sim_precision
value: 70.06035161374967
- type: cos_sim_recall
value: 70.44854881266491
- type: dot_accuracy
value: 80.60439887941826
- type: dot_ap
value: 54.52935200483575
- type: dot_f1
value: 54.170444242973716
- type: dot_precision
value: 47.47715534366309
- type: dot_recall
value: 63.06068601583114
- type: euclidean_accuracy
value: 87.26828396018358
- type: euclidean_ap
value: 78.00158454104036
- type: euclidean_f1
value: 70.70292457670601
- type: euclidean_precision
value: 68.79680479281079
- type: euclidean_recall
value: 72.71767810026385
- type: manhattan_accuracy
value: 87.11330988853788
- type: manhattan_ap
value: 77.92527099601855
- type: manhattan_f1
value: 70.76488706365502
- type: manhattan_precision
value: 68.89055472263868
- type: manhattan_recall
value: 72.74406332453826
- type: max_accuracy
value: 87.26828396018358
- type: max_ap
value: 78.00158454104036
- type: max_f1
value: 70.76488706365502
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.80804905499282
- type: cos_sim_ap
value: 83.06187782630936
- type: cos_sim_f1
value: 74.99716435403985
- type: cos_sim_precision
value: 73.67951860931579
- type: cos_sim_recall
value: 76.36279642747151
- type: dot_accuracy
value: 81.83141227151008
- type: dot_ap
value: 67.18241090841795
- type: dot_f1
value: 62.216037571751606
- type: dot_precision
value: 56.749381227391005
- type: dot_recall
value: 68.84816753926701
- type: euclidean_accuracy
value: 87.91671517832887
- type: euclidean_ap
value: 83.56538942001427
- type: euclidean_f1
value: 75.7327253337256
- type: euclidean_precision
value: 72.48856036606828
- type: euclidean_recall
value: 79.28087465352634
- type: manhattan_accuracy
value: 87.86626304963713
- type: manhattan_ap
value: 83.52939841172832
- type: manhattan_f1
value: 75.73635656329888
- type: manhattan_precision
value: 72.99150182103836
- type: manhattan_recall
value: 78.69571912534647
- type: max_accuracy
value: 87.91671517832887
- type: max_ap
value: 83.56538942001427
- type: max_f1
value: 75.73635656329888
---
**We recommend switching to the newest [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5), which has a more reasonable similarity distribution and the same usage.**
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding maps any text to a low-dimensional dense vector that can be used for tasks such as retrieval, classification, clustering, and semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 09/15/2023: Released the [paper](https://arxiv.org/pdf/2309.07597.pdf) and [dataset](https://data.baai.ac.cn/details/BAAI-MTP).
- 09/12/2023: New release:
  - **New reranker models**: released the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
  - **Updated embedding models**: released the `bge-*-v1.5` embedding models to alleviate the similarity-distribution issue and enhance retrieval ability without an instruction.
- 09/07/2023: Updated the [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): added a script to mine hard negatives and support for adding an instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **LangChain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Released base-scale and small-scale models with the **best performance among models of the same size 🤗**
- 08/02/2023: Released the `bge-large-*` (short for BAAI General Embedding) models, which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We released the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval\* |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
\*: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
\**: Unlike the embedding models, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by simpler models.
For example, use a bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results, as sketched below.
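A minimal sketch of this two-stage pipeline, using the APIs shown in the [Usage](#usage) section below (the corpus and query here are placeholders, and the top-100/top-3 cutoffs follow the example above):

```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["passage 1 ...", "passage 2 ...", "passage 3 ..."]  # placeholder document collection
query = "what is panda?"

# Stage 1: dense retrieval with the embedding model (keep the top 100 candidates).
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
dense_scores = (q_emb @ p_emb.T)[0]
top100 = np.argsort(-dense_scores)[:100]

# Stage 2: re-rank the candidates with the cross-encoder (keep the top 3).
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in top100])
ranked = sorted(zip(top100, rerank_scores), key=lambda x: -x[1])
top3 = [corpus[i] for i, _ in ranked[:3]]
print(top3)
```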
## Frequently asked questions
<details>
<summary>1. How to fine-tune the bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance.
- If you pre-train bge on your own data, the pre-trained model cannot be used to calculate similarity directly; it must first be fine-tuned with contrastive learning.
- If the accuracy of the fine-tuned model is still not high enough, use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker. A sketch of the expected training-data shape follows this list.
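For reference, a minimal sketch of one training record in the JSON-lines format the fine-tuning example expects (field names per the linked example; the texts are placeholders):

```python
import json

# One training record: a query, positive passages, and mined hard-negative passages.
record = {
    "query": "what is panda?",
    "pos": ["The giant panda is a bear species endemic to China."],
    "neg": ["Pandas is a Python library for data analysis."],  # hard negatives mined as above
}
with open("toy_finetune_data.jsonl", "w") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```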
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the similarity-distribution issue.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE models lies roughly in the interval \[0.6, 1\].
Consequently, a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
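A minimal sketch of such threshold-based filtering (the 0.85 threshold is a placeholder; pick it from the score distribution on your own data, as described above):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
pairs = [("样例数据-1", "样例数据-2"), ("样例数据-1", "样例数据-3")]
threshold = 0.85  # placeholder; tune on your own data

emb_a = model.encode([a for a, _ in pairs], normalize_embeddings=True)
emb_b = model.encode([b for _, b in pairs], normalize_embeddings=True)
sims = (emb_a * emb_b).sum(axis=1)  # row-wise dot product = cosine similarity (embeddings are normalized)
similar_pairs = [p for p, s in zip(pairs, sims) if s >= threshold]
print(similar_pairs)
```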
</details>
<details>
<summary>3. When does the query instruction need to be used?</summary>
<!-- ### When does the query instruction need to be used -->
For a retrieval task that uses short queries to find long related documents,
it is recommended to add an instruction to these short queries.
**The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.**
In all cases, the instruction does not need to be added to the documents/passages.
</details>
## Usage
### Usage for Embedding Model
Here are some examples of using the `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [LangChain](#using-langchain), or [HuggingFace Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For s2p (short query to long passage) retrieval tasks, we suggest using encode_queries(), which automatically adds the instruction to each query
# The corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel uses all available GPUs when encoding. Set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
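For example (the environment variable must be set before CUDA is initialized, i.e., before the first encode call):

```python
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # encode on GPUs 0 and 1 only
# os.environ["CUDA_VISIBLE_DEVICES"] = ""   # uncomment to force CPU-only encoding

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5')
```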
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For s2p (short query to long passage) retrieval tasks,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in LangChain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set to True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    query_instruction="为这个句子生成表示以用于检索相关文章:"
)
```
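The wrapper then exposes LangChain's standard embedding interface; a minimal sketch (assuming the usual `embed_query`/`embed_documents` methods, where `embed_query` prepends the query instruction and `embed_documents` does not):

```python
query_embedding = model.embed_query("样例数据-1")                      # instruction is prepended to the query
doc_embeddings = model.embed_documents(["样例文档-1", "样例文档-2"])   # no instruction for documents
print(len(query_embedding), len(doc_embeddings))
```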
#### Using HuggingFace Transformers
With the transformers package, you can use the model as follows: first pass your input through the transformer model, then take the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For s2p (short query to long passage) retrieval tasks, add an instruction to each query (do not add it to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Unlike the embedding models, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using HuggingFace Transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
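Continuing the snippet above: since the logits are unbounded, you can map them to [0, 1] with a sigmoid if a normalized score is more convenient. The mapping is monotonic, so the ranking is unchanged:

```python
# Map unbounded relevance logits to [0, 1]; monotonic, so the document ranking is preserved.
normalized_scores = torch.sigmoid(scores)
print(normalized_scores)
```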
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings; it consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
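For reference, scores like those above can be reproduced with the [MTEB](https://github.com/embeddings-benchmark/mteb) package. A minimal sketch follows; the task selection and output folder are illustrative:

```python
import mteb
from sentence_transformers import SentenceTransformer

# Load the embedding model to evaluate
model = SentenceTransformer("BAAI/bge-large-zh-v1.5")

# Pick one or more C-MTEB tasks (the task chosen here is illustrative)
tasks = mteb.get_tasks(tasks=["T2Retrieval"])
evaluation = mteb.MTEB(tasks=tasks)

# Run the evaluation and write per-task JSON results
results = evaluation.run(model, output_folder="results/bge-large-zh-v1.5")
```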
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks.
## Train
### BAAI Embedding
We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and then train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first.
For more training details on bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
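For illustration, a released (or fine-tuned) bge embedding model can be used for retrieval-style scoring with the `FlagEmbedding` package. A minimal sketch; the query instruction shown is the one documented for the Chinese models, and the query/passage strings are placeholders:

```python
from FlagEmbedding import FlagModel

# The instruction is prepended to queries for retrieval (documented for zh models)
model = FlagModel(
    "BAAI/bge-large-zh-v1.5",
    query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
    use_fp16=True,
)

queries = ["sample query"]
passages = ["sample passage 1", "sample passage 2"]

q_emb = model.encode_queries(queries)   # applies the retrieval instruction
p_emb = model.encode(passages)          # plain passage embeddings
scores = q_emb @ p_emb.T                # inner-product similarity
print(scores)
```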
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., bi-encoder) but also more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily by following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
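As a quick illustration, a trained reranker can be applied to (query, passage) pairs via the `FlagEmbedding` package. A minimal sketch; the texts are placeholders:

```python
from FlagEmbedding import FlagReranker

# Higher score = more relevant; fp16 speeds up inference with minor quality impact
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

# Score a single pair, then a small batch of candidates for re-ranking
score = reranker.compute_score(
    ["what is a panda?", "The giant panda is a bear native to China."]
)
scores = reranker.compute_score([
    ["what is a panda?", "The giant panda is a bear native to China."],
    ["what is a panda?", "Paris is the capital of France."],
])
print(score, scores)
```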
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
## Citation
If you find our work helpful, please cite us:
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | Non_BioNLP |
RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,722,847,584,000 | 2024-08-05T19:07:47 | 546 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
L3-Super-Nova-RP-8B - GGUF
- Model creator: https://huggingface.co/Casual-Autopsy/
- Original model: https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [L3-Super-Nova-RP-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [L3-Super-Nova-RP-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [L3-Super-Nova-RP-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [L3-Super-Nova-RP-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [L3-Super-Nova-RP-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [L3-Super-Nova-RP-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [L3-Super-Nova-RP-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [L3-Super-Nova-RP-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [L3-Super-Nova-RP-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [L3-Super-Nova-RP-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [L3-Super-Nova-RP-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [L3-Super-Nova-RP-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [L3-Super-Nova-RP-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [L3-Super-Nova-RP-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [L3-Super-Nova-RP-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [L3-Super-Nova-RP-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [L3-Super-Nova-RP-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [L3-Super-Nova-RP-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [L3-Super-Nova-RP-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [L3-Super-Nova-RP-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [L3-Super-Nova-RP-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [L3-Super-Nova-RP-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_L3-Super-Nova-RP-8B-gguf/blob/main/L3-Super-Nova-RP-8B.Q8_0.gguf) | Q8_0 | 7.95GB |
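As an illustration, any of the GGUF files above can be loaded locally with `llama-cpp-python`. A minimal sketch; the file path, context size, and prompt are illustrative:

```python
from llama_cpp import Llama

# Point model_path at whichever quant you downloaded from the table above
llm = Llama(model_path="L3-Super-Nova-RP-8B.Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```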
Original model description:
---
pipeline_tag: text-generation
library_name: transformers
language:
- en
license: llama3
tags:
- mergekit
- merge
- multi-step merge
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
- summarization
- emotion classification
base_model:
- nothingiisreal/L3-8B-Celeste-v1
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- Sao10K/L3-8B-Stheno-v3.2
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Sao10K/L3-8B-Lunaris-v1
- turboderp/llama3-turbcat-instruct-8b
- ChaoticNeutrals/Domain-Fusion-L3-8B
- migtissera/Llama-3-8B-Synthia-v3.5
- TheDrummer/Llama-3SOME-8B-v2
- ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- TheSkullery/llama-3-cat-8b-instruct-v1
- FPHam/L3-8B-Everything-COT
- Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
- OEvortex/Emotional-llama-8B
- lighteternal/Llama3-merge-biomed-8b
- Casual-Autopsy/Llama3-merge-psychotherapy-8b
- Sao10K/L3-8B-Tamamo-v1
- ResplendentAI/Nymph_8B
- ChaoticNeutrals/T-900-8B
- Sao10K/L3-8B-Niitama-v1
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Hastagaras/Halu-8B-Llama3-Blackroot
- crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
---
| <img src="https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B/resolve/main/Card-Assets/NovaKid-Girl.jpeg" width="50%" height="50%" style="display: block; margin: auto;"> |
|:---:|
| Image generated by [mayonays_on_toast](https://civitai.com/user/mayonays_on_toast) - [Sauce](https://civitai.com/images/10153472) |
***
***
***
# L3-Super-Nova-RP-8B
This is a role-playing model designed to pair strong creativity with intelligence and improve advanced role-playing experiences. L3-Super-Nova-RP-8B aims to be good at Chain-of-Thought reasoning, summarizing information, and recognizing emotions. It also includes data about the human body and mind in an attempt to enhance understanding and interaction within role-playing scenarios.
The model was developed using various methods across multiple merging steps. To boost creativity, techniques that strengthen and adjust its output were paired with a newly released merge method. All merge calculations were done in float32 and then converted to the usual bfloat16 during merging.
***
***
## Presets
***
### Text Gen
The current recommended starting preset for this model. **Subject to change.**
**Settings by yours truly**
```yaml
Top K: 40
Min P: 0.075 # I've got some good results as low as 0.05 as well
Repetition Penalty: 1.01
# Don't make this higher, DRY handles the bulk of Squashing Repetition.
# This is just to lightly nudge the bot to move the plot forward
Rep Pen Range: 2048 # Don't make this higher either.
Presence Penalty: 0.03 # Minor encouragement to use synonyms. Don't make this higher maybe?
Smoothing Factor: 0.3
DRY Repetition Penalty:
Multiplier: 0.8
Base: 1.75
Allowed Length: 2
Penalty Range: 4096
Dynamic Temperature:
Min Temp: 0.5
Max Temp: 1.25
Exponent: 0.85
```
***
### Context/Instruct
[Virt-io's SillyTavern Presets](https://huggingface.co/Virt-io/SillyTavern-Presets) work really well with this.
***
***
## Usage Info
Some of the **INT** models were chosen with some of SillyTavern's features in mind, such as emotion-based sprites, dynamic music, and pretty much any feature, extension, or STscript that uses summarization. With that said, it's recommended to use SillyTavern as your front-end.
While not required, I'd recommend building the story string prompt with Lorebooks rather than using the Advanced Formatting menu. The only thing you really need in the Story String prompt within Advanced Formatting is the system prompt. Doing it this way tends to keep the character more consistent as the RP goes on, because all character card info is locked to a certain depth rather than drifting further and further away within the context.
***
***
## Quants
GGUF:
- [Static GGUFs](https://huggingface.co/mradermacher/L3-Super-Nova-RP-8B-GGUF) by mradermacher
- [Imatrix GGUFs](https://huggingface.co/mradermacher/L3-Super-Nova-RP-8B-i1-GGUF) by mradermacher
Exl2:
- [8.0bpw-h8 Exl2](https://huggingface.co/Slvcxc/L3-Super-Nova-RP-8B-8.0bpw-h8-exl2) by Slvcxc
***
***
## Merge Info
The merge methods used were **Ties**, **Dare Ties**, **Breadcrumbs Ties**, **SLERP**, and **DELLA**.
The model was finished off with both **Merge Densification** and **Negative Weighting** techniques to boost creativity.
All merging steps had the merge calculations done in **float32** and were output as **bfloat16**.
***
### Models Merged
The following models were used to make this merge:
* [nothingiisreal/L3-8B-Celeste-v1](https://huggingface.co/nothingiisreal/L3-8B-Celeste-v1)
* [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
* [turboderp/llama3-turbcat-instruct-8b](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b)
* [ChaoticNeutrals/Domain-Fusion-L3-8B](https://huggingface.co/ChaoticNeutrals/Domain-Fusion-L3-8B)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
* [TheDrummer/Llama-3SOME-8B-v2](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2)
* [ChaoticNeutrals/Hathor_RP-v.01-L3-8B](https://huggingface.co/ChaoticNeutrals/Hathor_RP-v.01-L3-8B)
* [TheSkullery/llama-3-cat-8b-instruct-v1](https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1)
* [FPHam/L3-8B-Everything-COT](https://huggingface.co/FPHam/L3-8B-Everything-COT)
* [Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged](https://huggingface.co/Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged)
* [OEvortex/Emotional-llama-8B](https://huggingface.co/OEvortex/Emotional-llama-8B)
* [lighteternal/Llama3-merge-biomed-8b](https://huggingface.co/lighteternal/Llama3-merge-biomed-8b)
* [Casual-Autopsy/Llama3-merge-psychotherapy-8b](https://huggingface.co/Casual-Autopsy/Llama3-merge-psychotherapy-8b)
* [Sao10K/L3-8B-Tamamo-v1](https://huggingface.co/Sao10K/L3-8B-Tamamo-v1)
* [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)
* [ChaoticNeutrals/T-900-8B](https://huggingface.co/ChaoticNeutrals/T-900-8B)
* [Sao10K/L3-8B-Niitama-v1](https://huggingface.co/Sao10K/L3-8B-Niitama-v1)
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [Hastagaras/Halu-8B-Llama3-Blackroot](https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot)
* [crestf411/L3-8B-sunfall-v0.4-stheno-v3.2](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2)
***
***
## Evaluation Results
***
### [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
**Explanation for AI RP newbies:** IFEval is the most important evaluation for RP AIs, as it determines how well the model can follow OOC instructions, Lorebooks, and, most importantly, character cards.
The rest don't matter. At least not nearly as much as IFEval.
|Metric | Value|
|:------------------|------:|
|Avg. |N/A|
|IFEval (0-Shot) |N/A|
|BBH (3-Shot) |N/A|
|MATH Lvl 5 (4-Shot)|N/A|
|GPQA (0-shot) |N/A|
|MuSR (0-shot) |N/A|
|MMLU-PRO (5-shot) |N/A|
***
### [UGI Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard)
Information about the metrics can be found at the bottom of the [UGI Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard) in the respective tabs.
|Metric(UGI-Leaderboard) | Value | Value | Metric(Writing Style)|
|:------------------------|:-----:|:-----:|----------------------:|
|UGI(Avg.) |23.56 |0.199 |RegV1 |
|W/10 |5.8 |0.218 |RegV2 |
|Unruly |22.5 |0.15 |MyScore |
|Internet |11.8 |8.34 |ASSS |
|Stats |18.7 |10.26 |SMOG |
|Writing |31.5 |1.76 |Yule |
|PolContro |33.3 | | |
***
***
## Secret Sauce
The following YAML configs were used to make this merge.
***
### Super-Nova-CRE_pt.1
```yaml
models:
- model: nothingiisreal/L3-8B-Celeste-v1
- model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
parameters:
density: [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
- model: Sao10K/L3-8B-Stheno-v3.2
parameters:
density: [0.65, 0.55, 0.5, 0.45, 0.35, 0.45, 0.5, 0.55, 0.65]
weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
merge_method: dare_ties
base_model: nothingiisreal/L3-8B-Celeste-v1
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-CRE_pt.2
```yaml
models:
- model: nothingiisreal/L3-8B-Celeste-v1
- model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
parameters:
density: [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
- model: Sao10K/L3-8B-Lunaris-v1
parameters:
density: [0.65, 0.55, 0.5, 0.45, 0.35, 0.45, 0.5, 0.55, 0.65]
weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
merge_method: dare_ties
base_model: nothingiisreal/L3-8B-Celeste-v1
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-UNC_pt.1
```yaml
models:
- model: turboderp/llama3-turbcat-instruct-8b
- model: ChaoticNeutrals/Domain-Fusion-L3-8B
parameters:
density: 0.5
weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
- model: migtissera/Llama-3-8B-Synthia-v3.5
parameters:
density: 0.5
weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
merge_method: dare_ties
base_model: turboderp/llama3-turbcat-instruct-8b
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-UNC_pt.2
```yaml
models:
- model: turboderp/llama3-turbcat-instruct-8b
- model: TheDrummer/Llama-3SOME-8B-v2
parameters:
density: 0.5
weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
- model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
parameters:
density: 0.5
weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
merge_method: dare_ties
base_model: turboderp/llama3-turbcat-instruct-8b
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-INT_pt.1
```yaml
models:
- model: TheSkullery/llama-3-cat-8b-instruct-v1
- model: FPHam/L3-8B-Everything-COT
parameters:
density: 0.5
weight: [0.139, 0.139, 0.208, 0.139, 0.208]
- model: Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
parameters:
density: 0.5
weight: [0.139, 0.208, 0.139, 0.208, 0.139]
- model: OEvortex/Emotional-llama-8B
parameters:
density: 0.5
weight: [0.208, 0.139, 0.208, 0.139, 0.139]
- model: lighteternal/Llama3-merge-biomed-8b
parameters:
density: 0.5
weight: [0.208, 0.139, 0.139, 0.139, 0.208]
- model: Casual-Autopsy/Llama3-merge-psychotherapy-8b
parameters:
density: 0.5
weight: [0.139, 0.208, 0.139, 0.208, 0.139]
merge_method: ties
base_model: TheSkullery/llama-3-cat-8b-instruct-v1
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-INT_pt.2
```yaml
models:
- model: TheSkullery/llama-3-cat-8b-instruct-v1
- model: FPHam/L3-8B-Everything-COT
parameters:
density: 0.9
gamma: 0.01
weight: [0.139, 0.208, 0.208, 0.139, 0.139]
- model: Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
parameters:
density: 0.9
gamma: 0.01
weight: [0.208, 0.139, 0.139, 0.139, 0.208]
- model: OEvortex/Emotional-llama-8B
parameters:
density: 0.9
gamma: 0.01
weight: [0.139, 0.139, 0.208, 0.208, 0.139]
- model: lighteternal/Llama3-merge-biomed-8b
parameters:
density: 0.9
gamma: 0.01
weight: [0.139, 0.208, 0.139, 0.208, 0.139]
- model: Casual-Autopsy/Llama3-merge-psychotherapy-8b
parameters:
density: 0.9
gamma: 0.01
weight: [0.208, 0.139, 0.139, 0.139, 0.208]
merge_method: breadcrumbs_ties
base_model: TheSkullery/llama-3-cat-8b-instruct-v1
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-CRE
```yaml
models:
- model: Casual-Autopsy/Super-Nova-CRE_pt.1
- model: Casual-Autopsy/Super-Nova-CRE_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-CRE_pt.1
parameters:
t:
- filter: self_attn
value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
- filter: mlp
value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
- value: 0.5
embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-UNC
```yaml
models:
- model: Casual-Autopsy/Super-Nova-UNC_pt.1
- model: Casual-Autopsy/Super-Nova-UNC_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-UNC_pt.1
parameters:
t:
- value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-INT
```yaml
models:
- model: Casual-Autopsy/Super-Nova-INT_pt.1
- model: Casual-Autopsy/Super-Nova-INT_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-INT_pt.1
parameters:
t:
- value: 0.5
embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-RP_stp.1
```yaml
models:
- model: Casual-Autopsy/Super-Nova-CRE
- model: Casual-Autopsy/Super-Nova-UNC
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-CRE
parameters:
t:
- value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-RP_stp.2
```yaml
models:
- model: Casual-Autopsy/Super-Nova-RP_stp.1
- model: Casual-Autopsy/Super-Nova-INT
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-RP_stp.1
parameters:
t:
- value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
embed_slerp: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-RP_pt.1
```yaml
models:
- model: Casual-Autopsy/Super-Nova-RP_stp.2
- model: Sao10K/L3-8B-Tamamo-v1
parameters:
density: [0.4, 0.6, 0.5, 0.6, 0.4]
epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
lambda: 0.85
weight: [-0.01523, 0.01768, -0.01384, 0.01835, -0.01247]
- model: ResplendentAI/Nymph_8B
parameters:
density: [0.65, 0.35, 0.5, 0.35, 0.65]
epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
lambda: 0.85
weight: [0.01823, -0.01647, 0.01422, -0.01975, 0.01128]
- model: ChaoticNeutrals/T-900-8B
parameters:
density: [0.35, 0.65, 0.5, 0.65, 0.35]
epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
lambda: 0.85
weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
- model: Sao10K/L3-8B-Niitama-v1
parameters:
density: [0.6, 0.4, 0.5, 0.4, 0.6]
epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
lambda: 0.85
weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
merge_method: della
base_model: Casual-Autopsy/Super-Nova-RP_stp.2
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
***
### Super-Nova-RP_pt.2
```yaml
models:
- model: Casual-Autopsy/Super-Nova-RP_stp.2
- model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
parameters:
density: [0.4, 0.6, 0.5, 0.6, 0.4]
epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
lambda: 0.85
weight: [-0.01935, 0.01785, -0.01512, 0.01809, -0.01371]
- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
parameters:
density: [0.65, 0.35, 0.5, 0.35, 0.65]
epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
lambda: 0.85
weight: [0.01847, -0.01468, 0.01503, -0.01822, 0.01459]
- model: Hastagaras/Halu-8B-Llama3-Blackroot
parameters:
density: [0.35, 0.65, 0.5, 0.65, 0.35]
epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
lambda: 0.85
weight: [-0.01578, 0.01821, -0.01753, 0.01677, -0.01442]
- model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
parameters:
density: [0.6, 0.5, 0.5, 0.5, 0.6]
epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
lambda: 0.85
weight: [0.01667, -0.01740, 0.01560, -0.01564, 0.01315]
merge_method: della
base_model: Casual-Autopsy/Super-Nova-RP_stp.2
parameters:
normalize: false
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
***
### L3-Super-Nova-RP-8B
```yaml
models:
- model: Casual-Autopsy/Super-Nova-RP_pt.1
- model: Casual-Autopsy/Super-Nova-RP_pt.2
merge_method: slerp
base_model: Casual-Autopsy/Super-Nova-RP_pt.1
parameters:
t:
- value: 0.5
dtype: float32
out_dtype: bfloat16
```
| [
"SUMMARIZATION"
] | Non_BioNLP |
plaguss/bge-base-argilla-sdk-matryoshka | plaguss | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:882",
"loss:MatryoshkaLoss",
"loss:TripletLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1703.07737",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,718,789,781,000 | 2024-06-19T09:36:43 | 7 | 5 | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:882
- loss:MatryoshkaLoss
- loss:TripletLoss
widget:
- source_sentence: 'hide: footer
Fields
Fields in Argilla are define the content of a record that will be reviewed by
a user.'
sentences:
- The tourists tried to hide their footprints in the sand as they walked along the
deserted beach.
- Can the rg.Suggestion class be used to handle model predictions in Argilla?
- Can users customize the fields in Argilla to fit their specific annotation needs?
- source_sentence: "=== \"Single condition\"\n\n=== \"Multiple conditions\"\n\nFilter\
\ by status\n\nYou can filter records based on their status. The status can be\
\ pending, draft, submitted, or discarded.\n\n```python\nimport argilla_sdk as\
\ rg\n\nclient = rg.Argilla(api_url=\"\", api_key=\"\")\n\nworkspace = client.workspaces(\"\
my_workspace\")\n\ndataset = client.datasets(name=\"my_dataset\", workspace=workspace)\n\
\nstatus_filter = rg.Query(\n filter = rg.Filter((\"status\", \"==\", \"submitted\"\
))\n)"
sentences:
- The submitted application was rejected due to incomplete documentation.
- How can I apply filters to records by their status in Argilla?
- Can Argilla's IntegerMetadataProperty support a range of integer values as metadata?
- source_sentence: 'description: In this section, we will provide a step-by-step guide
to show how to filter and query a dataset.
Query, filter, and export records
This guide provides an overview of how to query and filter a dataset in Argilla
and export records.'
sentences:
- The new restaurant in town offers a unique filter coffee that is a must-try for
coffee enthusiasts.
- Is it possible to design a user role with tailored access permissions within Argilla?
- Can Argilla be employed to search and filter datasets based on particular requirements
or keywords?
- source_sentence: 'hide: footer
Fields
Fields in Argilla are define the content of a record that will be reviewed by
a user.'
sentences:
- Is it possible for annotators to tailor Argilla's fields to their unique annotation
requirements?
- The tourists tried to hide their footprints in the sand as they walked along the
deserted beach.
- Can this partnership with Prolific provide researchers with a broader range of
annotators to draw from, enhancing the quality of their studies?
- source_sentence: 'hide: footer
rg.Argilla
To interact with the Argilla server from python you can use the Argilla class.
The Argilla client is used to create, get, update, and delete all Argilla resources,
such as workspaces, users, datasets, and records.
Usage Examples
Connecting to an Argilla server
To connect to an Argilla server, instantiate the Argilla class and pass the api_url
of the server and the api_key to authenticate.
```python
import argilla_sdk as rg'
sentences:
- Can the Argilla class be employed to streamline dataset administration tasks in
my Argilla server setup?
- Is it possible to create new data entries in my dataset via Argilla's annotation
tools?
- The Argilla flowers were blooming beautifully in the garden.
model-index:
- name: BGE base ArgillaSDK Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.1326530612244898
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.2857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.3877551020408163
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5204081632653061
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.1326530612244898
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09523809523809525
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.07755102040816327
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05204081632653061
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1326530612244898
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3877551020408163
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5204081632653061
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3086125494748455
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.24321752510528016
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.26038538311827203
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.10204081632653061
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.2755102040816326
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.3877551020408163
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5102040816326531
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.10204081632653061
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09183673469387756
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.07755102040816327
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05102040816326531
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.10204081632653061
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2755102040816326
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3877551020408163
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5102040816326531
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.29420081448590024
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.22640913508260446
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.24259809105769914
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.12244897959183673
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.2755102040816326
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.3877551020408163
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.12244897959183673
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09183673469387753
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.07755102040816327
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.049999999999999996
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.12244897959183673
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2755102040816326
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3877551020408163
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2931450934182018
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.2290937803692905
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.24454883014070852
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.09183673469387756
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.25510204081632654
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.3163265306122449
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.46938775510204084
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.09183673469387756
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.08503401360544219
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.06326530612244897
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.046938775510204075
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.09183673469387756
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.25510204081632654
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3163265306122449
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.46938775510204084
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2629197762336244
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.1992265954000647
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2164845577697655
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.08163265306122448
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.25510204081632654
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.3163265306122449
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.47959183673469385
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.08163265306122448
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.08503401360544219
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.06326530612244897
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.04795918367346938
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.08163265306122448
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.25510204081632654
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3163265306122449
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.47959183673469385
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2610977190273289
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.19399497894395853
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.20591442395637935
name: Cosine Map@100
---
# BGE base ArgillaSDK Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("plaguss/bge-base-argilla-sdk-matryoshka")
# Run inference
sentences = [
'hide: footer\n\nrg.Argilla\n\nTo interact with the Argilla server from python you can use the Argilla class. The Argilla client is used to create, get, update, and delete all Argilla resources, such as workspaces, users, datasets, and records.\n\nUsage Examples\n\nConnecting to an Argilla server\n\nTo connect to an Argilla server, instantiate the Argilla class and pass the api_url of the server and the api_key to authenticate.\n\n```python\nimport argilla_sdk as rg',
'Can the Argilla class be employed to streamline dataset administration tasks in my Argilla server setup?',
'The Argilla flowers were blooming beautifully in the garden.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
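Because this is a Matryoshka model, embeddings can also be truncated to the smaller trained dimensionalities (512, 256, 128, or 64). A minimal sketch using the `truncate_dim` argument; the input sentence is a placeholder:

```python
from sentence_transformers import SentenceTransformer

# Truncate embeddings to 256 dimensions (one of the trained Matryoshka sizes)
model = SentenceTransformer("plaguss/bge-base-argilla-sdk-matryoshka", truncate_dim=256)

embeddings = model.encode(["Fields in Argilla define the content of a record."])
print(embeddings.shape)
# (1, 256)
```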
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1327 |
| cosine_accuracy@3 | 0.2857 |
| cosine_accuracy@5 | 0.3878 |
| cosine_accuracy@10 | 0.5204 |
| cosine_precision@1 | 0.1327 |
| cosine_precision@3 | 0.0952 |
| cosine_precision@5 | 0.0776 |
| cosine_precision@10 | 0.052 |
| cosine_recall@1 | 0.1327 |
| cosine_recall@3 | 0.2857 |
| cosine_recall@5 | 0.3878 |
| cosine_recall@10 | 0.5204 |
| cosine_ndcg@10 | 0.3086 |
| cosine_mrr@10 | 0.2432 |
| **cosine_map@100** | **0.2604** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.102 |
| cosine_accuracy@3 | 0.2755 |
| cosine_accuracy@5 | 0.3878 |
| cosine_accuracy@10 | 0.5102 |
| cosine_precision@1 | 0.102 |
| cosine_precision@3 | 0.0918 |
| cosine_precision@5 | 0.0776 |
| cosine_precision@10 | 0.051 |
| cosine_recall@1 | 0.102 |
| cosine_recall@3 | 0.2755 |
| cosine_recall@5 | 0.3878 |
| cosine_recall@10 | 0.5102 |
| cosine_ndcg@10 | 0.2942 |
| cosine_mrr@10 | 0.2264 |
| **cosine_map@100** | **0.2426** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1224 |
| cosine_accuracy@3 | 0.2755 |
| cosine_accuracy@5 | 0.3878 |
| cosine_accuracy@10 | 0.5 |
| cosine_precision@1 | 0.1224 |
| cosine_precision@3 | 0.0918 |
| cosine_precision@5 | 0.0776 |
| cosine_precision@10 | 0.05 |
| cosine_recall@1 | 0.1224 |
| cosine_recall@3 | 0.2755 |
| cosine_recall@5 | 0.3878 |
| cosine_recall@10 | 0.5 |
| cosine_ndcg@10 | 0.2931 |
| cosine_mrr@10 | 0.2291 |
| **cosine_map@100** | **0.2445** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0918 |
| cosine_accuracy@3 | 0.2551 |
| cosine_accuracy@5 | 0.3163 |
| cosine_accuracy@10 | 0.4694 |
| cosine_precision@1 | 0.0918 |
| cosine_precision@3 | 0.085 |
| cosine_precision@5 | 0.0633 |
| cosine_precision@10 | 0.0469 |
| cosine_recall@1 | 0.0918 |
| cosine_recall@3 | 0.2551 |
| cosine_recall@5 | 0.3163 |
| cosine_recall@10 | 0.4694 |
| cosine_ndcg@10 | 0.2629 |
| cosine_mrr@10 | 0.1992 |
| **cosine_map@100** | **0.2165** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0816 |
| cosine_accuracy@3 | 0.2551 |
| cosine_accuracy@5 | 0.3163 |
| cosine_accuracy@10 | 0.4796 |
| cosine_precision@1 | 0.0816 |
| cosine_precision@3 | 0.085 |
| cosine_precision@5 | 0.0633 |
| cosine_precision@10 | 0.048 |
| cosine_recall@1 | 0.0816 |
| cosine_recall@3 | 0.2551 |
| cosine_recall@5 | 0.3163 |
| cosine_recall@10 | 0.4796 |
| cosine_ndcg@10 | 0.2611 |
| cosine_mrr@10 | 0.194 |
| **cosine_map@100** | **0.2059** |
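Metrics like those in the tables above can be reproduced with the same evaluator. A minimal sketch; the query, corpus, and relevance judgments are illustrative placeholders:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("plaguss/bge-base-argilla-sdk-matryoshka", truncate_dim=768)

# Toy example: map query ids -> text, doc ids -> text, and query ids -> relevant doc ids
queries = {"q1": "How can I filter records by status in Argilla?"}
corpus = {"d1": "You can filter records based on their status: pending, draft, submitted, or discarded."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
```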
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 882 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 90.85 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 25.44 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 22.33 tokens</li><li>max: 61 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>!!! note "Update the metadata"<br>    The metadata of a Record object is a python dictionary. So to update the metadata of a record, you can iterate over the records and update the metadata by key or using metadata.update. After that, you should update the records in the dataset.</code> | <code>Can I use Argilla to annotate the metadata of Record objects and update them in the dataset?</code> | <code>The beautiful scenery of the Argilla valley in Italy is perfect for a relaxing summer vacation.</code> |
| <code>git checkout [branch-name]<br>git rebase [default-branch]<br>```<br><br>If everything is right, we need to commit and push the changes to your fork. For that, run the following commands:<br><br>```sh<br><br>Add the changes to the staging area<br><br>git add filename<br><br>Commit the changes by writing a proper message<br><br>git commit -m "commit-message"<br><br>Push the changes to your fork</code> | <code>Can I commit Argilla's annotation changes and push them to a forked project repository after rebasing from the default branch?</code> | <code>The beautiful beach in Argilla, Spain, is a popular spot for surfers to catch a wave and enjoy the sunny weather.</code> |
| <code>Accessing Record Attributes<br><br>The Record object has suggestions, responses, metadata, and vectors attributes that can be accessed directly whilst iterating over records in a dataset.<br><br>python<br>for record in dataset.records(<br> with_suggestions=True,<br> with_responses=True,<br> with_metadata=True,<br> with_vectors=True<br> ):<br> print(record.suggestions)<br> print(record.responses)<br> print(record.metadata)<br> print(record.vectors)</code> | <code>Is it possible to retrieve the suggestions, responses, metadata, and vectors of a Record object at the same time when iterating over a dataset in Argilla?</code> | <code>The new hiking trail offered breathtaking suggestions for scenic views, responses to environmental concerns, and metadata about the surrounding ecosystem, but it lacked vectors for navigation.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "TripletLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
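In code, this configuration corresponds to wrapping `TripletLoss` in `MatryoshkaLoss`. A minimal sketch of how such a loss would be constructed; the base model name is illustrative:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, TripletLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# The inner loss operates on (anchor, positive, negative) triplets;
# MatryoshkaLoss re-applies it at each truncated embedding size.
base_loss = TripletLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```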
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:---------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.1802 | 5 | 21.701 | - | - | - | - | - |
| 0.3604 | 10 | 21.7449 | - | - | - | - | - |
| 0.5405 | 15 | 21.7453 | - | - | - | - | - |
| 0.7207 | 20 | 21.7168 | - | - | - | - | - |
| 0.9009 | 25 | 21.6945 | - | - | - | - | - |
| **0.973** | **27** | **-** | **0.2165** | **0.2445** | **0.2426** | **0.2059** | **0.2604** |
| 1.0811 | 30 | 21.7248 | - | - | - | - | - |
| 1.2613 | 35 | 21.7322 | - | - | - | - | - |
| 1.4414 | 40 | 21.7367 | - | - | - | - | - |
| 1.6216 | 45 | 21.6821 | - | - | - | - | - |
| 1.8018 | 50 | 21.8392 | - | - | - | - | - |
| 1.9820 | 55 | 21.6441 | 0.2165 | 0.2445 | 0.2426 | 0.2059 | 0.2604 |
| 2.1622 | 60 | 21.8154 | - | - | - | - | - |
| 2.3423 | 65 | 21.7098 | - | - | - | - | - |
| 2.5225 | 70 | 21.6447 | - | - | - | - | - |
| 2.7027 | 75 | 21.6033 | - | - | - | - | - |
| 2.8829 | 80 | 21.8271 | - | - | - | - | - |
| 2.9189 | 81 | - | 0.2165 | 0.2445 | 0.2426 | 0.2059 | 0.2604 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.8
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2
- Accelerate: 0.31.0
- Datasets: 2.19.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
HPLT/translate-en-hr-v1.0-hplt_opus | HPLT | translation | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"hr",
"license:cc-by-4.0",
"autotrain_compatible",
"region:us"
] | 1,709,033,214,000 | 2024-03-14T00:56:45 | 88 | 1 | ---
language:
- en
- hr
license: cc-by-4.0
tags:
- translation
inference: false
---
## HPLT MT release v1.0
This repository contains the translation model for English-Croatian trained with OPUS and HPLT data. The model is available in both Marian and Hugging Face formats.
### Model Info
* Source language: English
* Target language: Croatian
* Dataset: OPUS and HPLT data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
* Cleaning: We used [OpusCleaner](https://github.com/hplt-project/OpusCleaner) with a set of basic rules. Details can be found in the filter files [here](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/en-hr/raw/v2).
You can check out our [deliverable report](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0), and [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format. We have also converted the model into the Hugging Face format so it is compatible with `transformers`.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-hr.spm` from this repository.
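A minimal decoding command might look like the following; the flags and beam size are illustrative and should be checked against the repository's decoder configuration:

```bash
# Translate one English sentence to Croatian with Marian
echo "Input goes here." | marian-decoder \
  -m model.npz.best-chrf.npz \
  -v model.en-hr.spm model.en-hr.spm \
  --beam-size 6
```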
#### Using transformers
We have also converted this model to the Hugging Face format and you can get started with the script below. **Note** that due to a [known issue](https://github.com/huggingface/transformers/issues/26216) in weight conversion, the checkpoint cannot work with transformers versions <4.26 or >4.30. We tested and suggest `pip install transformers==4.28`.
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/translate-en-hr-v1.0-hplt_opus")
model = AutoModelForSeq2SeqLM.from_pretrained("HPLT/translate-en-hr-v1.0-hplt_opus")
inputs = ["Input goes here.", "Make sure the language is right."]
batch_tokenized = tokenizer(inputs, return_tensors="pt", padding=True)
model_output = model.generate(
**batch_tokenized, num_beams=6, max_new_tokens=512
)
batch_detokenized = tokenizer.batch_decode(
model_output,
skip_special_tokens=True,
)
print(batch_detokenized)
```
### Benchmarks
When decoded using Marian, the model has the following test scores.
| Test set | BLEU | chrF++ | COMET22 |
| -------------------------------------- | ---- | ----- | ----- |
| FLORES200 | 30.5 | 56.5 | 0.8856 |
| NTREX | 32.5 | 57.0 | 0.8512 |
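BLEU and chrF++ scores of this kind can be computed with `sacrebleu`. A minimal sketch; the hypotheses and references are illustrative placeholders:

```python
import sacrebleu

hypotheses = ["Ovo je primjer prijevoda."]    # system outputs
references = [["Ovo je primjer prijevoda."]]  # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)  # word_order=2 -> chrF++
print(bleu.score, chrf.score)
```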
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
Brought to you by researchers from the University of Edinburgh and Charles University in Prague with support from the whole HPLT consortium.
| [
"TRANSLATION"
] | Non_BioNLP |
Woondsc/nllb-1.3B-KMA-KCD | Woondsc | translation | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"pytorch",
"flores-200",
"Medical",
"translation",
"ko",
"en",
"base_model:facebook/nllb-200-1.3B",
"base_model:finetune:facebook/nllb-200-1.3B",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,742,186,363,000 | 2025-03-18T08:34:10 | 2 | 0 | ---
base_model:
- facebook/nllb-200-1.3B
language:
- ko
- en
library_name: transformers
license: cc-by-4.0
pipeline_tag: translation
tags:
- pytorch
- flores-200
- Medical
---
# Explanation
- This model is a fine-tuned version of the NLLB-200-1.3B model, specifically adapted for the medical terminology domain. All usage guidelines and copyright policies comply with those of the base model.
- The fine-tuning dataset consists of the KMA Medical Terminology Collection and the KCD-8 masterfile's Korean-English description dataset.
- It is specialized for translating Korean medical terms into English, and is particularly well suited to translating Korean cause-of-death text into English terms.
- Since releasing this model, we have continued to identify mistranslations, and the improved fine-tuned version `Woondsc/nllb-1.3B-KMA-KCD-FFTtest` is being updated to correct them.
- If you are looking to build a general Korean-to-English translation model for other purposes, feel free to use this model. However, if you need better performance on Korean-to-English medical translation, we recommend using `Woondsc/nllb-1.3B-KMA-KCD-FFTtest` instead.
# Example: translating a Korean cause-of-death description into an English term
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Load model directly
tokenizer = AutoTokenizer.from_pretrained("Woondsc/nllb-1.3B-KMA-KCD")
model = AutoModelForSeq2SeqLM.from_pretrained("Woondsc/nllb-1.3B-KMA-KCD")
# Translation function
def translate(text, model, tokenizer, target_lang="eng_Latn"):
inputs = tokenizer(text, return_tensors="pt")
    translated_tokens = model.generate(
        **inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids(target_lang)
    )
translated_text = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
return translated_text
# Execute example
korean_text = "간질"  # Korean for "epilepsy"
english_translation = translate(korean_text, model, tokenizer)
print("Translation result:", english_translation)
```
| [
"TRANSLATION"
] | BioNLP |
influencer/vit-base-PICAI | influencer | image-classification | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,711,476,815,000 | 2024-03-29T09:22:36 | 29 | 0 | ---
base_model: google/vit-base-patch16-224
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: vit-base-PICAI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-PICAI
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the PICAI dataset.
PI-CAI (Prostate Imaging: Cancer AI) is an all-new grand challenge, with over 10,000 carefully-curated prostate MRI exams to validate modern AI algorithms and estimate radiologists' performance at csPCa detection and diagnosis. Key aspects of the study design have been established in conjunction with an international, multi-disciplinary scientific advisory board (16 experts in prostate AI, radiology and urology), to unify and standardize present-day guidelines, and to ensure meaningful validation of prostate-AI towards clinical translation (Reinke et al., 2022).
More can be found at the official Grand Challenge website: https://pi-cai.grand-challenge.org
It achieves the following results on the evaluation set:
- Loss: 0.6043
- Accuracy: 0.7371
- Roc Auc: 0.7059
## Model description
More information needed
## Intended uses & limitations
This model is just a test of how ViT performs with basic fine-tuning on a challenging medical imaging dataset, and of the explanatory properties of ViT, assessed by inspecting the attention matrices produced by the model.
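As a minimal sketch of that second use (attention inspection), assuming the repository ships an image-processor config and taking `slice.png` as a placeholder for any RGB-rendered input image:

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("influencer/vit-base-PICAI")
model = AutoModelForImageClassification.from_pretrained("influencer/vit-base-PICAI")

image = Image.open("slice.png").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")

# Request the per-layer attention matrices alongside the classification logits.
outputs = model(**inputs, output_attentions=True)
print(outputs.logits.softmax(-1))   # class probabilities
print(len(outputs.attentions))      # one tensor per transformer layer
print(outputs.attentions[0].shape)  # (batch, heads, tokens, tokens)
```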
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 0.4995 | 0.14 | 50 | 0.5423 | 0.7371 | 0.7072 |
| 0.4729 | 0.29 | 100 | 0.6259 | 0.7314 | 0.7183 |
| 0.5558 | 0.43 | 150 | 0.5564 | 0.7243 | 0.7189 |
| 0.5825 | 0.57 | 200 | 0.5912 | 0.6943 | 0.7177 |
| 0.5091 | 0.71 | 250 | 0.5656 | 0.73 | 0.7140 |
| 0.4575 | 0.86 | 300 | 0.5846 | 0.7386 | 0.6858 |
| 0.5168 | 1.0 | 350 | 0.5363 | 0.7471 | 0.7076 |
| 0.5305 | 1.14 | 400 | 0.5600 | 0.7357 | 0.7042 |
| 0.4275 | 1.29 | 450 | 0.5864 | 0.7357 | 0.6988 |
| 0.5588 | 1.43 | 500 | 0.5477 | 0.75 | 0.7078 |
| 0.4573 | 1.57 | 550 | 0.5321 | 0.7571 | 0.7253 |
| 0.5094 | 1.71 | 600 | 0.5840 | 0.7457 | 0.7054 |
| 0.5311 | 1.86 | 650 | 0.5719 | 0.7229 | 0.7098 |
| 0.4582 | 2.0 | 700 | 0.5439 | 0.7357 | 0.7062 |
| 0.5142 | 2.14 | 750 | 0.6668 | 0.6629 | 0.6899 |
| 0.3833 | 2.29 | 800 | 0.5705 | 0.7286 | 0.6954 |
| 0.4676 | 2.43 | 850 | 0.6152 | 0.6943 | 0.6795 |
| 0.4682 | 2.57 | 900 | 0.5679 | 0.7443 | 0.7077 |
| 0.4112 | 2.71 | 950 | 0.5600 | 0.7329 | 0.7073 |
| 0.5107 | 2.86 | 1000 | 0.5686 | 0.7343 | 0.7017 |
| 0.4078 | 3.0 | 1050 | 0.6165 | 0.7429 | 0.7168 |
| 0.479 | 3.14 | 1100 | 0.5952 | 0.7257 | 0.7004 |
| 0.3704 | 3.29 | 1150 | 0.5937 | 0.7314 | 0.6980 |
| 0.3733 | 3.43 | 1200 | 0.5923 | 0.7214 | 0.7001 |
| 0.3682 | 3.57 | 1250 | 0.6183 | 0.7429 | 0.6963 |
| 0.3283 | 3.71 | 1300 | 0.6130 | 0.73 | 0.7012 |
| 0.3709 | 3.86 | 1350 | 0.6123 | 0.74 | 0.7045 |
| 0.3859 | 4.0 | 1400 | 0.6043 | 0.7371 | 0.7059 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"TRANSLATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-itc-en | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"sc",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"itc",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:59:01 | 43 | 2 | ---
language:
- it
- ca
- rm
- es
- ro
- gl
- sc
- co
- wa
- pt
- oc
- an
- id
- fr
- ht
- itc
- en
license: apache-2.0
tags:
- translation
---
### itc-eng
* source group: Italic languages
* target group: English
* OPUS readme: [itc-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-eng/README.md)
* model: transformer
* source language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.eval.txt)
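### Usage
The card does not include a usage snippet, so here is a minimal, untested sketch using the standard `transformers` MarianMT API. No target-language token is needed, since English is the only target of this multilingual model:
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-itc-en")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-itc-en")

src_texts = ["Questa è una frase di esempio.", "C'est une phrase d'exemple."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```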
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-roneng.ron.eng | 36.5 | 0.628 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 30.9 | 0.561 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 35.5 | 0.590 |
| newssyscomb2009-fraeng.fra.eng | 29.2 | 0.560 |
| newssyscomb2009-itaeng.ita.eng | 32.2 | 0.583 |
| newssyscomb2009-spaeng.spa.eng | 29.3 | 0.563 |
| news-test2008-fraeng.fra.eng | 25.2 | 0.531 |
| news-test2008-spaeng.spa.eng | 26.3 | 0.539 |
| newstest2009-fraeng.fra.eng | 28.5 | 0.555 |
| newstest2009-itaeng.ita.eng | 31.6 | 0.578 |
| newstest2009-spaeng.spa.eng | 28.7 | 0.558 |
| newstest2010-fraeng.fra.eng | 29.7 | 0.571 |
| newstest2010-spaeng.spa.eng | 32.8 | 0.593 |
| newstest2011-fraeng.fra.eng | 30.9 | 0.580 |
| newstest2011-spaeng.spa.eng | 31.8 | 0.582 |
| newstest2012-fraeng.fra.eng | 31.1 | 0.576 |
| newstest2012-spaeng.spa.eng | 35.0 | 0.604 |
| newstest2013-fraeng.fra.eng | 31.7 | 0.573 |
| newstest2013-spaeng.spa.eng | 32.4 | 0.589 |
| newstest2014-fren-fraeng.fra.eng | 34.0 | 0.606 |
| newstest2016-enro-roneng.ron.eng | 34.8 | 0.608 |
| Tatoeba-test.arg-eng.arg.eng | 41.5 | 0.528 |
| Tatoeba-test.ast-eng.ast.eng | 36.0 | 0.519 |
| Tatoeba-test.cat-eng.cat.eng | 53.7 | 0.696 |
| Tatoeba-test.cos-eng.cos.eng | 56.5 | 0.640 |
| Tatoeba-test.egl-eng.egl.eng | 4.6 | 0.217 |
| Tatoeba-test.ext-eng.ext.eng | 39.1 | 0.547 |
| Tatoeba-test.fra-eng.fra.eng | 53.4 | 0.688 |
| Tatoeba-test.frm-eng.frm.eng | 22.3 | 0.409 |
| Tatoeba-test.gcf-eng.gcf.eng | 18.7 | 0.308 |
| Tatoeba-test.glg-eng.glg.eng | 54.8 | 0.701 |
| Tatoeba-test.hat-eng.hat.eng | 42.6 | 0.583 |
| Tatoeba-test.ita-eng.ita.eng | 64.8 | 0.767 |
| Tatoeba-test.lad-eng.lad.eng | 14.4 | 0.433 |
| Tatoeba-test.lat-eng.lat.eng | 19.5 | 0.390 |
| Tatoeba-test.lij-eng.lij.eng | 8.9 | 0.280 |
| Tatoeba-test.lld-eng.lld.eng | 17.4 | 0.331 |
| Tatoeba-test.lmo-eng.lmo.eng | 10.8 | 0.306 |
| Tatoeba-test.mfe-eng.mfe.eng | 66.0 | 0.820 |
| Tatoeba-test.msa-eng.msa.eng | 40.8 | 0.590 |
| Tatoeba-test.multi.eng | 47.6 | 0.634 |
| Tatoeba-test.mwl-eng.mwl.eng | 41.3 | 0.707 |
| Tatoeba-test.oci-eng.oci.eng | 20.3 | 0.401 |
| Tatoeba-test.pap-eng.pap.eng | 53.9 | 0.642 |
| Tatoeba-test.pms-eng.pms.eng | 12.2 | 0.334 |
| Tatoeba-test.por-eng.por.eng | 59.3 | 0.734 |
| Tatoeba-test.roh-eng.roh.eng | 17.7 | 0.420 |
| Tatoeba-test.ron-eng.ron.eng | 54.5 | 0.697 |
| Tatoeba-test.scn-eng.scn.eng | 40.0 | 0.443 |
| Tatoeba-test.spa-eng.spa.eng | 55.9 | 0.712 |
| Tatoeba-test.vec-eng.vec.eng | 11.2 | 0.304 |
| Tatoeba-test.wln-eng.wln.eng | 20.9 | 0.360 |
### System Info:
- hf_name: itc-eng
- source_languages: itc
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc', 'en']
- src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.test.txt
- src_alpha3: itc
- tgt_alpha3: eng
- short_pair: itc-en
- chrF2_score: 0.634
- bleu: 47.6
- brevity_penalty: 0.981
- ref_len: 77633.0
- src_name: Italic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: itc
- tgt_alpha2: en
- prefer_old: False
- long_pair: itc-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
"TRANSLATION"
] | Non_BioNLP |
law-ai/InLegalTrans-En2Indic-1B | law-ai | translation | [
"safetensors",
"IndicTrans",
"InLegalTrans",
"Legal",
"NLP",
"translation",
"custom_code",
"en",
"bn",
"hi",
"mr",
"ta",
"te",
"ml",
"pa",
"gu",
"or",
"dataset:MILPaC",
"arxiv:2310.09765",
"base_model:ai4bharat/indictrans2-en-indic-1B",
"base_model:finetune:ai4bharat/indictrans2-en-indic-1B",
"license:mit",
"region:us"
] | 1,737,282,336,000 | 2025-01-19T15:26:16 | 88 | 0 | ---
base_model:
- ai4bharat/indictrans2-en-indic-1B
datasets:
- MILPaC
language:
- en
- bn
- hi
- mr
- ta
- te
- ml
- pa
- gu
- or
license: mit
metrics:
- bleu
- google_bleu
- chrf++
pipeline_tag: translation
tags:
- InLegalTrans
- Legal
- NLP
inference: false
---
# InLegalTrans
This is the model card of the ***InLegalTrans-En2Indic-1B*** translation model, a fine-tuned version of the [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) model specifically tailored for translating Indian legal texts from English to Indian languages.
### Training Data
We use the [**MILPaC**](https://github.com/Law-AI/MILPaC) **(Multilingual Indian Legal Parallel Corpus)** corpus for fine-tuning. It is the first high-quality Indian legal parallel corpus, containing parallel aligned text units in English (EN) and nine Indian (IN) languages -- Bengali (BN), Hindi (HI), Marathi (MR), Tamil (TA), Telugu (TE), Malayalam (ML), Panjabi (PA), Gujarati (GU), and Oriya (OR). Please refer to the [paper](https://arxiv.org/abs/2310.09765) for more details about this corpus.
For fine-tuning, we randomly split MILPaC language-wise in a 80 (train) - 10 (validation) - 10 (test) ratio. We use the 80\% train split (combined 80\% of each English-to-Indic language pair) to fine-tune the [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) model and 10\% validation split (combined 10\% of each English-to-Indic language pair) to select the best checkpoint and to prevent overfitting.
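A minimal sketch of that language-wise split (the shuffling and the seed below are illustrative assumptions, not details taken from the paper):
```python
import random

def split_80_10_10(pairs, seed=42):
    """pairs: list of (english, indic) sentence pairs for one language pair."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    return (
        pairs[: int(0.8 * n)],              # train
        pairs[int(0.8 * n): int(0.9 * n)],  # validation
        pairs[int(0.9 * n):],               # test
    )

# Repeat per EN-IN language pair, then concatenate the train splits for fine-tuning.
```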
### Model Overview and Usage Instructions
This [InLegalTrans](https://huggingface.co/law-ai/InLegalTrans-En2Indic-1B) model uses the same tokenizer as the [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) model and has the same architecture with ~1.12B parameters.
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit import IndicProcessor # Install IndicTransToolkit from https://github.com/VarunGumma/IndicTransToolkit
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
src_lang, tgt_lang = "eng_Latn", "ben_Beng" # Use the BCP-47 language codes used by the FLORES-200 dataset
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indictrans2-en-indic-1B", trust_remote_code=True) # Use IndicTrans2 tokenizer to enable their custom tokenization script to be run
model = AutoModelForSeq2SeqLM.from_pretrained(
"law-ai/InLegalTrans-En2Indic-1B",
trust_remote_code=True,
attn_implementation="eager",
low_cpu_mem_usage=True,
).to(device)
ip = IndicProcessor(inference=True)
input_sentences = [
"(7) Any such allowance for the maintenance and expenses for proceeding shall be payable from the date of the order, or, if so ordered, from the date of the application for maintenance or expenses of proceeding, as the case may be.",
"(2) Where it appears to the Tribunal that, in consequence of any decision of a competent Civil Court, any order made under section 9 should be cancelled or varied, it shall cancel the order or, as the case may be, vary the same accordingly.",
]
batch = ip.preprocess_batch(input_sentences, src_lang=src_lang, tgt_lang=tgt_lang)
input_text_encoding = tokenizer(
batch,
max_length=256,
truncation=True,
padding="longest",
return_tensors="pt",
return_attention_mask=True,
).to(device)
generated_tokens = model.generate(
**input_text_encoding,
max_length=256,
do_sample=True,
num_beams=4,
num_return_sequences=1,
early_stopping=False,
use_cache=True,
)
with tokenizer.as_target_tokenizer():
generated_tokens = tokenizer.batch_decode(
generated_tokens.detach().cpu().tolist(),
skip_special_tokens=True,
clean_up_tokenization_spaces=True,
)
translations = ip.postprocess_batch(generated_tokens, lang=tgt_lang)
for input_sentence, translation in zip(input_sentences, translations):
print(f"Sentence in {src_lang} language: {input_sentence}")
print(f"Translated Sentence in {tgt_lang} language: {translation}")
```
### Fine-tuning Results
The following table contains the performance results of the [InLegalTrans](https://huggingface.co/law-ai/InLegalTrans-En2Indic-1B) model compared to the [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) model over the 10\% test split of **MILPaC**. Performances are evaluated using *BLEU*, *GLEU*, and *chrF++* metrics. For all English-to-Indic language pairs, [InLegalTrans](https://huggingface.co/law-ai/InLegalTrans-En2Indic-1B) demonstrated a significant improvement over [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B), achieving consistently better performance across all evaluation metrics.
| EN-to-IN | Model | BLEU | GLEU | chrF++ |
|------------|---------------------|------|------|--------|
| EN-to-BN | *IndicTrans2* | 25.4 | 28.8 | 53.7 |
| | ***InLegalTrans*** | **45.8** | **47.6** | **70.9** |
| EN-to-HI | *IndicTrans2* | 41.0 | 42.5 | 59.9 |
| | ***InLegalTrans*** | **56.9** | **57.1** | **73.8** |
| EN-to-MR | *IndicTrans2* | 25.2 | 28.7 | 55.4 |
| | ***InLegalTrans*** | **44.4** | **46.0** | **68.9** |
| EN-to-TA | *IndicTrans2* | 32.8 | 35.3 | 62.3 |
| | ***InLegalTrans*** | **40.0** | **42.5** | **69.9** |
| EN-to-TE | *IndicTrans2* | 10.7 | 14.2 | 37.9 |
| | ***InLegalTrans*** | **31.3** | **31.6** | **58.5** |
| EN-to-ML | *IndicTrans2* | 21.9 | 25.8 | 52.9 |
| | ***InLegalTrans*** | **37.4** | **40.3** | **69.7** |
| EN-to-PA | *IndicTrans2* | 27.8 | 31.6 | 51.5 |
| | ***InLegalTrans*** | **44.3** | **45.6** | **65.5** |
| EN-to-GU | *IndicTrans2* | 27.5 | 31.1 | 55.7 |
| | ***InLegalTrans*** | **42.8** | **45.2** | **68.8** |
| EN-to-OR | *IndicTrans2* | 06.6 | 12.6 | 37.1 |
| | ***InLegalTrans*** | **14.2** | **19.9** | **47.5** |
### Citation
If you use this [InLegalTrans](https://huggingface.co/law-ai/InLegalTrans-En2Indic-1B) translation model or the [**MILPaC**](https://github.com/Law-AI/MILPaC) corpus, please cite the following paper:
```
@article{mahapatra2024milpacnovelbenchmarkevaluating,
title = {MILPaC: A Novel Benchmark for Evaluating Translation of Legal Text to Indian Languages},
author = {Sayan Mahapatra and Debtanu Datta and Shubham Soni and Adrijit Goswami and Saptarshi Ghosh},
year = {2024},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
publisher = {Association for Computing Machinery},
}
```
### About Us
We are a group of Natural Language Processing (NLP) researchers from the *Indian Institute of Technology (IIT) Kharagpur*. Our research interests are primarily ML, DL, and NLP applications for the legal domain, with a special focus on the challenges and opportunities of the Indian legal scenario. Our current and past projects include:
- Legal Statute Identification
- Semantic segmentation of legal documents
- Monolingual (e.g., English-to-English) and Cross-lingual (e.g., English-to-Hindi) Summarization of legal documents
- Translation in the Indian legal domain
- Court Judgment Prediction
- Legal Document Matching
Explore our publicly available codes and datasets at: [Law and AI, IIT Kharagpur](https://github.com/Law-AI). | [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
1998Shubham007/ModelRecomm | 1998Shubham007 | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,713,434,443,000 | 2024-04-18T10:00:49 | 7 | 0 | ---
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
language: en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as assistance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
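For example, a minimal semantic-search sketch over a toy corpus:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
corpus = ["A man is eating food.", "A monkey is playing drums.", "The new movie is awesome."]
query = "Someone is making music."

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]  # cosine similarity to each corpus entry
print(corpus[int(scores.argmax())])              # "A monkey is playing drums."
```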
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
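A minimal sketch of this in-batch objective (the similarity scale of 20 is an assumption; the actual code is in `train_script.py` in this repository):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Row i of `anchor_emb` is only paired with row i of `positive_emb`;
    # every other row in the batch acts as an in-batch negative.
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor_emb @ positive_emb.T * scale  # scaled cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```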
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps, and the sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task432 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 1,735,738,438,000 | 2025-01-01T13:34:04 | 0 | 0 | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task432
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task432_alt_en_hi_translation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
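In the meantime, here is a minimal sketch using `peft`, assuming this repository hosts standard LoRA adapter weights for the base model and that `accelerate` is installed for `device_map="auto"`:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the task432 (English-to-Hindi translation) LoRA adapter.
model = PeftModel.from_pretrained(base, "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task432")

prompt = "[INST] Translate the following sentence from English to Hindi: Good morning. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```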
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task432_alt_en_hi_translation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{brüelgabrielsson2024compressserveservingthousands,
  title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
  author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
  year={2024},
  eprint={2407.00066},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2407.00066},
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"TRANSLATION"
] | Non_BioNLP |
LoneStriker/bagel-8x7b-v0.2-6.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:ai2_arc",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,704,483,991,000 | 2024-01-05T20:04:24 | 5 | 0 | ---
datasets:
- ai2_arc
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
license: apache-2.0
---
# A bagel, with everything (except DPO)

## Overview
An experimental fine-tune of [mixtral-8x7b-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [bagel](https://github.com/jondurbin/bagel)
This is the model after the SFT phase, before DPO has been applied.
Hardware kindly provided by [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon)
### Data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned)
- Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- Multi-turn dataset used to create the capybara models.
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [emobank](https://github.com/JULIELab/EmoBank)
- Emotion annotations using the Valence-Arousal-Domninance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT)
- Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa)
- Phyiscal interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
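A minimal sketch of such a decontamination pass (shown with a flat, exact inner-product index for brevity rather than a true ANN index; the 0.95 threshold is an assumption):
```python
import faiss
import numpy as np

def keep_mask(train_emb: np.ndarray, bench_emb: np.ndarray, threshold: float = 0.95):
    # Embeddings are assumed L2-normalized, so inner product == cosine similarity.
    index = faiss.IndexFlatIP(bench_emb.shape[1])
    index.add(bench_emb.astype(np.float32))
    sims, _ = index.search(train_emb.astype(np.float32), 1)  # nearest benchmark item
    return sims[:, 0] < threshold  # True = keep this training example
```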
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is effectively 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
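A minimal sketch of rendering one instruction into two of these formats (the helper names are mine, not from the bagel repo):
```python
DEFAULT_SYSTEM = "A chat between a user and an unbiased, uncensored assistant."

def to_vicuna(instruction, system=None):
    return f"{system or DEFAULT_SYSTEM}\nUSER: {instruction}\nASSISTANT:"

def to_llama2(instruction, system=None):
    return f"[INST] <<SYS>>\n{system or DEFAULT_SYSTEM}\n<</SYS>>\n\n{instruction} [/INST]"

print(to_vicuna("Name three citrus fruits."))
```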
### Default via chat template
The model's `tokenizer_config.json` includes the default chat template (llama-2), so you can simply use the `apply_chat_template` method to build the full prompt.
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/bagel-8x7b-v0.2')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Contribute
If you're interested in new functionality/datasets, take a look at [bagel repo](https://github.com/jondurbin/bagel) and either make a PR or open an issue with details.
To help me with the fine-tuning costs (which are extremely expensive for these large combined datasets):
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Guide for certain tasks
#### RA(G)/contextual question answering
The model was trained to ignore what it thinks it knows, and uses the context to answer the questions, when using the format below.
The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a contextual prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string ...
    # ... return the text content of the results ...
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt and return its output ...
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
        # strip the surrounding brackets from the argument before dispatching
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3)[1:-1], **context)
```
### Fine-tuning information
You can find charts, and the full configuration used to fine-tune this model on [weights and biases](https://wandb.ai/jondurbin/bagel-8x7b-v0.2/runs/agxjjdso?workspace=user-jondurbin)
The model was fine-tuned on an 8x a6000 instance, for 4 days, 15 hours, 6 minutes and 42 seconds.
### Licence and usage restrictions
The base model is mixtral-8x7b-v0.1, which is licensed as apache-2.0 - no issues there.
The fine-tuning data, however, includes several datasets that have data generated at least in part by OpenAI's gpt-4.
I am not a lawyer, so I can't help determine if this is actually commercially viable, but some questions that often come up are:
- Does the OpenAI ToS apply only to the user who created the dataset initially, and not subsequent models?
- If the dataset was released under a permissive license, but actually includes OpenAI generated data, does that ToS supersede the license?
- Does the dataset fall completely under fair use anyways, since the model isn't really capable of reproducing the entire training set verbatim?
Use your best judgement and seek legal advice if you are concerned about the terms. In any case, by using this model, you agree to completely indemnify me. | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
nlpie/clinical-distilbert-i2b2-2010 | nlpie | token-classification | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,681,505,901,000 | 2024-03-26T16:45:54 | 62 | 1 | ---
license: mit
title: README
emoji: 🧬
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
# Model Description
ClinicalDistilBERT-i2b2-2010 is a lightweight BERT-based model developed by fine-tuning [ClinicalDistilBERT](https://huggingface.co/nlpie/clinical-distilbert) on the i2b2-2010 dataset for clinical Named Entity Recognition (NER). It is specifically designed to recognise entities from three categories: `problem`, `treatment`, and `test`.
# Architecture
The architecture of this model remains the same as the ClinicalDistilBERT model. The size of the hidden dimension and the embedding layer are both set to 768. The vocabulary size is 28996. The number of transformer layers is 6, and the expansion rate of the feed-forward layer is 4. Overall, this model contains approximately 65 million parameters.
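A sketch of instantiating that architecture from scratch with `transformers` (hyper-parameters taken from the paragraph above; the label count assumes B/I tags for the three entity categories plus O):
```python
from transformers import DistilBertConfig, DistilBertForTokenClassification

config = DistilBertConfig(
    vocab_size=28996,
    dim=768,             # hidden and embedding size
    n_layers=6,          # transformer layers
    hidden_dim=4 * 768,  # feed-forward expansion rate of 4
    num_labels=7,        # assumption: B-/I- for problem, treatment, test + O
)
model = DistilBertForTokenClassification(config)
print(sum(p.numel() for p in model.parameters()))  # roughly 65M parameters
```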
# Use Cases
This model is suited for clinical NER and for medical tasks that require identification and classification of problems, treatments, and tests.
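A minimal sketch with the `transformers` NER pipeline (the exact label names come from this repository's `config.json`; a BIO scheme over the three categories is assumed):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nlpie/clinical-distilbert-i2b2-2010",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "The patient was started on aspirin after an abnormal ECG suggested unstable angina."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```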
# Citation
If you use this model, please consider citing the following paper:
```bibtex
@article{rohanian2023lightweight,
title={Lightweight transformers for clinical natural language processing},
author={Rohanian, Omid and Nouriborji, Mohammadmahdi and Jauncey, Hannah and Kouchaki, Samaneh and Nooralahzadeh, Farhad and Clifton, Lei and Merson, Laura and Clifton, David A and ISARIC Clinical Characterisation Group and others},
journal={Natural Language Engineering},
pages={1--28},
year={2023},
publisher={Cambridge University Press}
}
| [
"NAMED_ENTITY_RECOGNITION"
] | BioNLP |
Robin021/llama-7b-hf | Robin021 | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,680,863,812,000 | 2023-04-07T10:47:19 | 10 | 0 | ---
license: other
---
LLaMA-7B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
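A minimal loading sketch for the converted weights (assuming they follow the standard Transformers LLaMA layout; the prompt is illustrative):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("Robin021/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("Robin021/llama-7b-hf")

inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0], skip_special_tokens=True))
```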
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, and toxic and harmful content generation, including hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learning rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
google/paligemma-3b-ft-textcaps-224 | google | image-text-to-text | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"arxiv:2310.09199",
"arxiv:2303.15343",
"arxiv:2403.08295",
"arxiv:1706.03762",
"arxiv:2010.11929",
"arxiv:2209.06794",
"arxiv:2209.04372",
"arxiv:2103.01913",
"arxiv:2401.06209",
"arxiv:2305.10355",
"arxiv:2205.12522",
"arxiv:2110.11624",
"arxiv:2108.03353",
"arxiv:2010.04295",
"arxiv:2203.10244",
"arxiv:1810.12440",
"arxiv:1905.13648",
"arxiv:1608.00272",
"arxiv:1908.04913",
"arxiv:2407.07726",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,715,556,244,000 | 2024-07-19T12:09:57 | 67 | 0 | ---
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# PaliGemma model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
Transformers PaliGemma 3B weights, fine-tuned with 224*224 input images on the <a href="https://textvqa.org/textcaps/">TextCaps</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/textcaps.py">big_vision</a>.
**Resources and technical documentation:**
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma)
* [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363)
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-textcaps-224)
**Authors:** Google
## Model information
### Model summary
#### Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by
[PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as
the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma
language model](https://arxiv.org/abs/2403.08295). It takes both image and text
as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma is the composition of a [Transformer
decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image
encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion
params. The text decoder is initialized from
[Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is
initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
PaliGemma is trained following the PaLI-3 recipes.
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
### Model data
#### Pre-train datasets
PaliGemma is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, multilinguality, etc.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et
al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud
Translation API](https://cloud.google.com/translate) to translate into 34
additional languages.
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al.,
2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the
same additional 34 languages as CC3M-35L, using the [Google Cloud
Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al.,
2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
on clean data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
CSAI, pornography, vulgarities, or otherwise offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP)
API](https://cloud.google.com/security/products/dlp) to protect the privacy
of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759
## How to Use
PaliGemma is a single-turn vision language model not meant for conversational use,
and it works best when fine-tuned to a specific use case.
You can configure which task the model will solve by conditioning it with task prefixes,
such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue
them with a rich set of capabilities (question answering, captioning, segmentation, etc.).
However, they are not designed to be used directly, but to be transferred (by fine-tuning)
to specific tasks using a similar prompt structure. For interactive testing, you can use
the "mix" family of models, which have been fine-tuned on a mixture of tasks.
Please refer to the [usage and limitations section](#usage-and-limitations) for intended
use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for
additional details and examples.
## Use in Transformers
The following snippets use model `google/paligemma-3b-mix-224` for reference purposes.
The model in this repo you are now browsing may have been trained for other tasks, please
make sure you use appropriate inputs for the task at hand.
### Running the default precision (`float32`) on CPU
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
Output: `Un auto azul estacionado frente a un edificio.`
### Running other precisions on CUDA
For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`,
so you can use them to reduce the download size and avoid casting on your local computer.
This is how you'd run `bfloat16` on an nvidia CUDA card.
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=dtype,
device_map=device,
revision="bfloat16",
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Loading in 4-bit / 8-bit
You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision:
```
pip install bitsandbytes accelerate
```
```python
from transformers import AutoProcessor, BitsAndBytesConfig, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id, quantization_config=quantization_config
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
## Implementation information
### Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. Additionally we
train the mix model with a mixture of the transfer tasks. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### Mix model (fine-tune on mixture of transfer tasks)
<table>
<tbody><tr>
<th>Benchmark</th>
<th>Metric (split)</th>
<th>mix-224</th>
<th>mix-448</th>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td>
<td>Paired Accuracy</td>
<td>46.00</td>
<td>45.33</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td>
<td>Accuracy<br>(random/popular/adversarial)</td>
<td>
88.00<br>
86.63<br>
85.67
</td>
<td>
89.37<br>
88.40<br>
87.47
</td>
</tr>
<tr>
<td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td>
<td>Accuracy (test)</td>
<td>65.20</td>
<td>65.47</td>
</tr>
</tbody></table>
#### Single task (fine-tune on single task)
<table>
<tbody><tr>
<th>Benchmark<br>(train split)</th>
<th>Metric<br>(split)</th>
<th>pt-224</th>
<th>pt-448</th>
<th>pt-896</th>
</tr>
<tr>
<th>Captioning</th>
</tr>
<tr>
<td>
<a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval)
</td>
<td>CIDEr (val)</td>
<td>141.92</td>
<td>144.60</td>
</tr>
<tr>
<td>
<a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer)
</td>
<td>CIDEr (val)</td>
<td>121.72</td>
<td>123.58</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
139.2<br>
115.8<br>
116.4
</td>
<td>
141.2<br>
118.0<br>
118.6
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
78.1<br>
41.3<br>
42.4
</td>
<td>
80.0<br>
41.9<br>
42.9
</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train)
</td>
<td>CIDEr (val)</td>
<td>127.48</td>
<td>153.94</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val)
</td>
<td>CIDEr/BLEU-4<br>(test)</td>
<td>
162.25<br>
0.192<br>
</td>
<td>
181.49<br>
0.211<br>
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>117.57</td>
<td>119.59</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>136.07</td>
<td>148.36</td>
</tr>
<tr>
<th>Question answering</th>
</tr>
<tr>
<td>
<a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>83.19</td>
<td>85.64</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer)
</td>
<td>Paired Accuracy</td>
<td>47.33</td>
<td>45.33</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer)
</td>
<td>Accuracy<br>(random/popular/<br>adversarial)</td>
<td>
87.80<br>
85.87<br>
84.27
</td>
<td>
88.23<br>
86.77<br>
85.90
</td>
</tr>
<tr>
<td>
<a href="https://okvqa.allenai.org/">OKVQA</a><br>(train)
</td>
<td>Accuracy (val)</td>
<td>63.54</td>
<td>63.15</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>76.37</td>
<td>76.90</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>61.85</td>
<td>63.22</td>
</tr>
<tr>
<td>
<a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced)
</td>
<td>Accuracy<br>(testdev balanced)</td>
<td>65.61</td>
<td>67.03</td>
</tr>
<tr>
<td>
<a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer)
</td>
<td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td>
<td>58.37</td>
<td>59.07</td>
</tr>
<tr>
<td>
<a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev)
</td>
<td>Accuracy (test)</td>
<td>90.02</td>
<td>88.93</td>
</tr>
<tr>
<td>
<a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer)
</td>
<td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td>
<td>80.57</td>
<td>76.78</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/data/diagrams">AI2D</a><br>(train)
</td>
<td>Accuracy (test)</td>
<td>72.12</td>
<td>73.28</td>
</tr>
<tr>
<td>
<a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>95.39</td>
<td>95.93</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test)</td>
<td>92.65</td>
<td>93.11</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test/test2)</td>
<td>
92.61<br>
90.58
</td>
<td>
92.79<br>
90.54
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val)
</td>
<td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td>
<td>57.08</td>
<td>71.36</td>
</tr>
<tr>
<td>
<a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>
73.7
</td>
<td>
75.52
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train)
</td>
<td>Accuracy<br>(test_simple/<br>test_complex)</td>
<td>
81.72<br>
69.56
</td>
<td>
84.86<br>
72.27
</td>
</tr>
<tr>
<td>
<a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>72.32</td>
<td>74.61</td>
<td>74.93</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/">TextVQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>55.47</td>
<td>73.15</td>
<td>76.48</td>
</tr>
<tr>
<td>
<a href="https://www.docvqa.org/">DocVQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>43.74</td>
<td>78.02</td>
<td>84.77</td>
</tr>
<tr>
<td>
<a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>28.46</td>
<td>40.47</td>
<td>47.75</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>63.29</td>
<td>81.82</td>
<td>84.40</td>
</tr>
<tr>
<th>Segmentation</th>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images)
</td>
<td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td>
<td>
73.40<br>
68.32<br>
67.65
</td>
<td>
75.57<br>
69.76<br>
70.17
</td>
<td>
76.94<br>
72.18<br>
72.22
</td>
</tr>
<tr>
<th>Video tasks (Caption/QA)</th>
</tr>
<tr>
<td>MSR-VTT (Captioning)</td>
<td>CIDEr (test)</td>
<td>70.54</td>
</tr>
<tr>
<td>MSR-VTT (QA)</td>
<td>Accuracy (test)</td>
<td>50.09</td>
</tr>
<tr>
<td>ActivityNet (Captioning)</td>
<td>CIDEr (test)</td>
<td>34.62</td>
</tr>
<tr>
<td>ActivityNet (QA)</td>
<td>Accuracy (test)</td>
<td>50.78</td>
</tr>
<tr>
<td>VATEX (Captioning)</td>
<td>CIDEr (test)</td>
<td>79.73</td>
</tr>
<tr>
<td>MSVD (QA)</td>
<td>Accuracy (test)</td>
<td>60.22</td>
</tr>
</tbody></table>
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
<table>
<tbody><tr><th>Metric</th>
<th>Perceived<br>gender</th>
<th></th>
<th>Ethnicity</th>
<th></th>
<th>Age group</th>
<th></th>
</tr>
<tr>
<th></th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.04%</td>
<td>0.03%</td>
<td>0.08%</td>
<td>0.00%</td>
<td>0.09%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.06%</td>
<td>0.04%</td>
<td>0.09%</td>
<td>0.07%</td>
<td>0.16%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.06%</td>
<td>0.05%</td>
<td>0.14%</td>
<td>0.05%</td>
<td>0.17%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</tbody></table>
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as: image captioning, short video caption, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
science question answering, describe UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM
techniques, develop algorithms, and contribute to the advancement of the
field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
* VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the [Gemma
Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma model still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma was designed first and foremost to serve as a general pre-trained
model for transfer to specialized tasks. Hence, its "out of the box" or
"zero-shot" performance might lag behind models designed specifically for
that.
* PaliGemma is not a multi-turn chatbot. It is designed for a single round of
image and text input.
## Citation
```bibtex
@article{beyer2024paligemma,
title={{PaliGemma: A versatile 3B VLM for transfer}},
author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*},
year={2024},
journal={arXiv preprint arXiv:2407.07726}
}
```
Find the paper [here](https://arxiv.org/abs/2407.07726).
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
Xenova/distilbart-cnn-6-6 | Xenova | summarization | [
"transformers.js",
"onnx",
"bart",
"text2text-generation",
"summarization",
"base_model:sshleifer/distilbart-cnn-6-6",
"base_model:quantized:sshleifer/distilbart-cnn-6-6",
"license:apache-2.0",
"region:us"
] | 1,683,062,871,000 | 2024-10-08T13:29:40 | 1,560 | 7 | ---
base_model: sshleifer/distilbart-cnn-6-6
library_name: transformers.js
license: apache-2.0
pipeline_tag: summarization
---
https://huggingface.co/sshleifer/distilbart-cnn-6-6 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | [
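For reference, the conversion step mentioned above can be reproduced with 🤗 Optimum's ONNX Runtime integration (a sketch; `export=True` triggers the on-the-fly export in recent Optimum versions, and the output directory name is illustrative):

```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

model_id = "sshleifer/distilbart-cnn-6-6"
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)  # convert to ONNX
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.save_pretrained("distilbart-cnn-6-6-onnx")      # ONNX weights land here
tokenizer.save_pretrained("distilbart-cnn-6-6-onnx")
```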
"SUMMARIZATION"
] | Non_BioNLP |
facebook/fasttext-su-vectors | facebook | feature-extraction | [
"fasttext",
"feature-extraction",
"su",
"arxiv:1607.04606",
"arxiv:1802.06893",
"arxiv:1607.01759",
"arxiv:1612.03651",
"license:cc-by-sa-3.0",
"region:us"
] | 1,679,637,947,000 | 2023-06-03T22:16:30 | 5 | 0 | ---
language: su
library_name: fasttext
license: cc-by-sa-3.0
tags:
- feature-extraction
widget:
- text: apple
example_title: apple
---
# fastText (Sundanese)
fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/).
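For example, the size reduction mentioned above can be done with the bundled utility (a sketch using `fasttext.util`; the file names are illustrative, and the 300-dimensional vectors are shrunk in place):

```python
import fasttext
import fasttext.util

ft = fasttext.load_model("model.bin")
fasttext.util.reduce_model(ft, 100)  # reduce vector dimension from 300 to 100
ft.save_model("model.100.bin")
```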
## Model description
fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes.
It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production.
## Intended uses & limitations
You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you.
### How to use
Here is how to load and use a pre-trained vectors
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-su-vectors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.words
['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...]
>>> len(model.words)
145940
>>> model['bread']
array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01,
-1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...])
```
Here is how to use this model to query nearest neighbors of an English word vector:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.get_nearest_neighbors("bread", k=5)
[(0.5641006231307983, 'butter'),
(0.48875734210014343, 'loaf'),
(0.4491206705570221, 'eat'),
(0.42444291710853577, 'food'),
(0.4229326844215393, 'cheese')]
```
Here is how to use this model to detect the language of a given text:
```python
>>> import fasttext
>>> from huggingface_hub import hf_hub_download
>>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin")
>>> model = fasttext.load_model(model_path)
>>> model.predict("Hello, world!")
(('__label__eng_Latn',), array([0.81148803]))
>>> model.predict("Hello, world!", k=5)
(('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'),
array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415]))
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions.
Cosine similarity can be used to measure the similarity between two different word vectors. If two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1.
```python
>>> import numpy as np
>>> def cosine_similarity(word1, word2):
...     return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2]))
>>> cosine_similarity("man", "boy")
0.061653383
>>> cosine_similarity("man", "ceo")
0.11989131
>>> cosine_similarity("woman", "ceo")
-0.08834904
```
## Training data
Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish.
## Training procedure
### Tokenization
We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer.
More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893).
### License
The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### Evaluation datasets
The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt).
### BibTeX entry and citation info
Please cite [1] if using this code for learning word representations or [2] if using for text classification.
[1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606)
```markup
@article{bojanowski2016enriching,
title={Enriching Word Vectors with Subword Information},
author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.04606},
year={2016}
}
```
[2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)
```markup
@article{joulin2016bag,
title={Bag of Tricks for Efficient Text Classification},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.01759},
year={2016}
}
```
[3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651)
```markup
@article{joulin2016fasttext,
title={FastText.zip: Compressing text classification models},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas},
journal={arXiv preprint arXiv:1612.03651},
year={2016}
}
```
If you use these word vectors, please cite the following paper:
[4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893)
```markup
@inproceedings{grave2018learning,
title={Learning Word Vectors for 157 Languages},
author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
(\* These authors contributed equally.)
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
pkbiswas/Falcon-7b-Summarization-QLoRa | pkbiswas | summarization | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"summarization",
"dataset:scitldr",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"region:us"
] | 1,713,647,402,000 | 2024-11-17T08:19:03 | 5 | 0 | ---
base_model: tiiuae/falcon-7b-instruct
datasets:
- scitldr
library_name: peft
license: apache-2.0
pipeline_tag: summarization
tags:
- generated_from_trainer
model-index:
- name: Falcon-Summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Falcon-Summarization
This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5006
## Model description
More information needed
## Intended uses & limitations
More information needed
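Pending more details, here is a minimal inference sketch (the prompt format is an assumption, not the one used in training; the adapter is attached to the base model with PEFT):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, "pkbiswas/Falcon-7b-Summarization-QLoRa")

text = "..."  # the passage to summarize
prompt = f"Summarize the following text.\n{text}\nSummary:"  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```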
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4431 | 0.2 | 200 | 2.5148 |
| 2.376 | 0.4 | 400 | 2.5112 |
| 2.4269 | 0.6 | 600 | 2.4901 |
| 2.368 | 0.8 | 800 | 2.4763 |
| 2.372 | 1.0 | 1000 | 2.4684 |
| 2.0904 | 1.2 | 1200 | 2.5018 |
| 2.099 | 1.41 | 1400 | 2.5137 |
| 2.0482 | 1.61 | 1600 | 2.4993 |
| 1.9974 | 1.81 | 1800 | 2.5006 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2 | [
"SUMMARIZATION"
] | BioNLP |
dtorber/NAS-bilingue | dtorber | summarization | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,681,166,039,000 | 2023-05-19T21:06:30 | 25 | 0 | ---
tags:
- summarization
- generated_from_trainer
model-index:
- name: NAS-bilingue
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NAS-bilingue
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7187
- Rougelsum: 0.0922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log | 1.0 | 5 | 4.5936 | 0.0759 |
| No log | 2.0 | 10 | 4.4276 | 0.0759 |
| No log | 3.0 | 15 | 4.2936 | 0.0759 |
| No log | 4.0 | 20 | 4.1820 | 0.0759 |
| No log | 5.0 | 25 | 4.0896 | 0.0881 |
| No log | 6.0 | 30 | 4.0121 | 0.0970 |
| No log | 7.0 | 35 | 3.9451 | 0.0918 |
| No log | 8.0 | 40 | 3.8875 | 0.0922 |
| No log | 9.0 | 45 | 3.8395 | 0.0922 |
| No log | 10.0 | 50 | 3.8011 | 0.0922 |
| No log | 11.0 | 55 | 3.7707 | 0.0922 |
| No log | 12.0 | 60 | 3.7480 | 0.0922 |
| No log | 13.0 | 65 | 3.7320 | 0.0922 |
| No log | 14.0 | 70 | 3.7223 | 0.0922 |
| No log | 15.0 | 75 | 3.7187 | 0.0922 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| [
"SUMMARIZATION"
] | Non_BioNLP |
Unbabel/TowerInstruct-7B-v0.2 | Unbabel | translation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"arxiv:2402.17733",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,707,492,648,000 | 2024-09-09T14:08:25 | 2,708 | 33 | ---
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
license: cc-by-nc-4.0
metrics:
- comet
pipeline_tag: translation
---
# Model Card for TowerInstruct-7B-v0.2
## Model Details
### Model Description
TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.2 is the first model in the series.
The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and paragraph/document-level translation, terminology-aware translation, context-aware translation), automatic post edition, named-entity recognition, grammatical error correction, and paraphrase generation.
We will release more details in the upcoming technical report. For now, you can check results obtained with the model [here](https://unbabel.com/announcing-tower-an-open-multilingual-llm-for-translation-related-tasks/).
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1)
**Update**: TowerInstruct-7B-v0.2 has more reliable document-level translation capabilities in comparison with TowerInstruct-7B-v0.1. The new version of TowerBlocks used to train v0.2 is also available in the Tower collection.
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:
- Translation (sentence and paragraph-level)
- Automatic Post Edition
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions
You can find the dataset and all data sources of [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) here.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-7B-v0.2", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer’s chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.
We are currently working on improving quality and consistency on document-level translation. This model is not intended to be used as a document-level translator.
## Bias, Risks, and Limitations
TowerInstruct-v0.2 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Prompt Format
TowerInstruct-v0.2 was trained using the ChatML prompt templates without any system prompts. An example follows below:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```
### Supervised tasks
The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal.
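Other supervised tasks follow the same chat format; for example, reusing the `pipe` from the snippet above with a hypothetical grammatical error correction prompt (see TowerBlocks for the exact templates used in training):

```python
messages = [
    {"role": "user", "content": "Correct the grammatical errors in the following English text.\nEnglish: She go to school yesterday.\nCorrected:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"])
```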
## Training Details
### Training Data
Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).
#### Training Hyperparameters
The following hyperparameters were used during training:
- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048
## Citation
```bibtex
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
| [
"TRANSLATION"
] | Non_BioNLP |
nsridhar/aftermath-synthetic-150 | nsridhar | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,678,938,725,000 | 2023-03-16T03:52:20 | 10 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# nsridhar/aftermath-synthetic-150
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
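A minimal sketch of those two steps on toy data (hypothetical labels; uses the `SetFitTrainer` API from setfit v0.x):

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss for step 1
    num_iterations=20,                # pair-generation iterations for contrastive learning
    num_epochs=1,                     # epochs of contrastive fine-tuning; the head is fitted afterwards
)
trainer.train()
```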
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nsridhar/aftermath-synthetic-150")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Shawn156/distilbert-base-uncased-finetuned-clinc | Shawn156 | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,701,670,429,000 | 2023-12-05T07:07:57 | 3 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.915483870967742
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7820
- Accuracy: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
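Expressed as 🤗 Transformers `TrainingArguments`, these settings map roughly onto the sketch below (the output directory is an assumption; model and dataset wiring are omitted):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default.
)
```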
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2956 | 0.7229 |
| 3.791 | 2.0 | 636 | 1.8891 | 0.8287 |
| 3.791 | 3.0 | 954 | 1.1666 | 0.8910 |
| 1.6987 | 4.0 | 1272 | 0.8659 | 0.9113 |
| 0.9008 | 5.0 | 1590 | 0.7820 | 0.9155 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
sushanthreddy99/marian-finetuned-kde4-en-to-fr | sushanthreddy99 | translation | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,733,760,653,000 | 2024-12-09T18:08:05 | 4 | 0 | ---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
library_name: transformers
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- type: bleu
value: 53.907166441222394
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8448
- Model Preparation Time: 0.0035
- Bleu: 53.9072
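For quick inference, the standard translation pipeline should work (a hedged sketch; the repository id is assumed from the card title):
```python
from transformers import pipeline

translator = pipeline("translation", model="sushanthreddy99/marian-finetuned-kde4-en-to-fr")  # assumed id
print(translator("Default to expanded threads"))
```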
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"TRANSLATION"
] | Non_BioNLP |
naver/oscar-mistral-small-24b | naver | null | [
"COCOM",
"custom_code",
"en",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:cc-by-4.0",
"region:us"
] | 1,741,972,225,000 | 2025-03-17T14:59:20 | 30 | 1 | ---
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
- meta-llama/Llama-3.2-1B-Instruct
language:
- en
license: cc-by-4.0
---
# Model Card for OSCAR-mistral-small-24b
OSCAR is a context compression model to be used for efficient inference when doing Retrieval Augmented Generation (RAG), particularly optimized for question answering.
OSCAR contains a (fast and light) compressor LLM, used to compress documents, and a LoRA-adapted decoder LLM (here Mistral-Small-24B-Instruct-2501) able to work from this representation.
In a RAG pipeline, compressing the documents enables 3x-5x faster inference. The final pipeline is as performant as the base decoder model.
*Developed by*: Naver Labs Europe
*License*: [CC BY-NC 4.0.](https://creativecommons.org/licenses/by-nc/4.0/)
* *Model*: `oscar-mistral-small-24b`
* *Backbone model*: [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)
* *Compression model*: [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
* *Model size*: 23.6 billion parameters
* *Compression rate*: x16: each document (of size up to 128 tokens) is converted into 8 embedding vectors.
## Usage
```python
from transformers import AutoModel
oscar = AutoModel.from_pretrained('naver/oscar-mistral-small-24b', trust_remote_code=True).to('cuda')
# Example documents and question:
documents = [
[
"Weldenia is a monotypic genus of flowering plant in the family Commelinaceae, first describ ed in 1829. It has one single species: Weldenia candida, which grows originally in Mexico and Guatemala.",
"Hagsatera is a genus of flowering plants from the orchid family, Orchidaceae. There are two known species, native to Mexico and Guatemala",
"Alsobia is a genus of flowering plants in the family Gesneriaceae, native to Mexico, Guatemala and Costa Rica. The two species are succulent, stoloniferous herbs and were previously included in the genus \"Episcia\". Recent molecular studies have supported the separation of \"Alsobia\" from \"Episcia\""
]
]
questions = ["Which genus of plant grows originally in Mexico and Guatemala, Phylica or Weldenia?"]
# End-to-end usage
out = oscar.generate_from_text(questions=questions, documents=documents, max_new_tokens=64, query_dependent=True)
print('Generated answer', out)
# Document compression:
embeddings = oscar.compress_documents(documents=documents[0], questions=questions * len(documents[0])) # compression is query-dependent, one question per doc here
# Generation from compressed documents:
out = oscar.generate_from_compressed_documents_and_questions(questions=questions, compressed_documents=embeddings)
```
The recommended usage is to provide documents cropped to about 128 tokens, which is common practice when doing RAG.
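As a hedged sketch, such cropping can be done with the compressor's tokenizer (using the Llama-3.2-1B-Instruct tokenizer here is an assumption; any tokenizer aligned with the compressor should behave similarly):
```python
from transformers import AutoTokenizer

# Assumption: crop with the compressor's tokenizer (Llama-3.2-1B-Instruct per the card).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

def crop_to_128_tokens(text: str) -> str:
    ids = tokenizer(text, truncation=True, max_length=128, add_special_tokens=False)["input_ids"]
    return tokenizer.decode(ids)

raw_documents = ["A long retrieved passage that may exceed the 128-token budget..."]
documents = [crop_to_128_tokens(d) for d in raw_documents]
```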
## Model features
* **OSCAR enables high accuracy responses from the compressed documents**
* **OSCAR is robust to various domains** We tested its compression/decoding abilities on various sets of data.
* **OSCAR enables up to x5 faster generation** depending on the number of retrieved documents and various context sizes.
## License
This work is licensed under CC BY-NC 4.0.
## Cite
```
TODO
```
## Acknowledgements
Model trained at [Naver Labs Europe](https://europe.naverlabs.com/)
Team:
* [Maxime LOUIS](https://europe.naverlabs.com/people_user_naverlabs/maxime-louis/)
* [Thibault Formal](https://europe.naverlabs.com/people_user_naverlabs/thibault-formal/)
* [Hervé Dejean](https://europe.naverlabs.com/people_user_naverlabs/herve-dejean/)
* [Stéphane Clinchant](https://europe.naverlabs.com/people_user_naverlabs/st%C3%A9phane-clinchant/) | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
sebastiansarasti/Pix2Pix | sebastiansarasti | null | [
"license:apache-2.0",
"region:us"
] | 1,736,731,719,000 | 2025-01-30T03:09:26 | 0 | 0 | ---
license: apache-2.0
---
# GAN for Comic Faces Paired Generation
## Model Overview
This model implements a **Generative Adversarial Network (GAN)** with a **UNet generator** and a **PatchGAN discriminator**. It is trained on a synthetic dataset of paired comic faces and learns to translate the first image of each pair into the second target image (e.g., photo-to-cartoon or cartoon-to-photo transformations).
- **Dataset:** [Comic Faces Paired Synthetic Dataset](https://www.kaggle.com/datasets/defileroff/comic-faces-paired-synthetic)
- **Batch Size:** 32
- **Input Shape:** (3, 256, 256) (RGB Images)
- **Output Shape:** (3, 256, 256)
## Model Architecture
### Generator: **UNet**
The generator uses a **UNet architecture**, which is designed for image-to-image translation tasks. It has an encoder-decoder structure with skip connections, allowing for high-resolution output. The architecture includes the following layers:
- **Encoder Path (Contracting Path):**
The encoder consists of **DoubleConv** layers that progressively downsample the input image to extract features. It uses **MaxPool2d** to reduce spatial dimensions.
- **Bottleneck:**
The deepest layer of the network (with 1024 feature channels) processes the smallest version of the image.
- **Decoder Path (Expanding Path):**
The decoder uses **Upsample** layers to progressively increase the spatial dimensions and **DoubleConv** layers to refine the output. Skip connections are used to combine features from the encoder path.
- **Final Convolution:**
The final layer outputs the transformed image using a **1x1 convolution**.
### Discriminator: **PatchGANDiscriminator**
The discriminator uses a **PatchGAN** architecture, which classifies patches of the image as real or fake. The discriminator works by processing the **input image and output image pair** (3 channels for the input image + 3 channels for the generated output). It progressively reduces the spatial dimensions using **Conv2d** and **LeakyReLU** activations, while normalizing each layer with **InstanceNorm2d**. The final output is a probability score indicating whether the patch is real or fake.
---
### Generator and Discriminator Code (UNet and PatchGAN):
```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Note: DoubleConv was not defined in the original snippet. The standard UNet
# double-convolution block (two 3x3 convs, each followed by BatchNorm and ReLU)
# is assumed here.
class DoubleConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class UNet(nn.Module, PyTorchModelHubMixin):
def __init__(self, in_channels, out_channels):
super(UNet, self).__init__()
# Contracting Path (Encoder)
self.down_conv1 = DoubleConv(in_channels, 64)
self.down_conv2 = DoubleConv(64, 128)
self.down_conv3 = DoubleConv(128, 256)
self.down_conv4 = DoubleConv(256, 512)
self.down_conv5 = DoubleConv(512, 1024)
# Downsampling
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
# Upsampling layers using nn.Upsample
self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True)
# Decoder (Expanding Path)
self.up_conv1 = DoubleConv(1024 + 512, 512)
self.up_conv2 = DoubleConv(512 + 256, 256)
self.up_conv3 = DoubleConv(256 + 128, 128)
self.up_conv4 = DoubleConv(128 + 64, 64)
# Final 1x1 convolution to get desired number of output channels
self.final_conv = nn.Conv2d(64, out_channels, kernel_size=1)
def forward(self, x):
x1 = self.down_conv1(x)
x2 = self.down_conv2(self.maxpool(x1))
x3 = self.down_conv3(self.maxpool(x2))
x4 = self.down_conv4(self.maxpool(x3))
x5 = self.down_conv5(self.maxpool(x4))
x = self.upsample(x5)
x = torch.cat([x4, x], dim=1)
x = self.up_conv1(x)
x = self.upsample(x)
x = torch.cat([x3, x], dim=1)
x = self.up_conv2(x)
x = self.upsample(x)
x = torch.cat([x2, x], dim=1)
x = self.up_conv3(x)
x = self.upsample(x)
x = torch.cat([x1, x], dim=1)
x = self.up_conv4(x)
return self.final_conv(x)
class PatchGANDiscriminator(nn.Module, PyTorchModelHubMixin):
def __init__(self, in_channels=6):
super().__init__()
self.layers = nn.Sequential(
nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(128),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(256),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(256, 512, kernel_size=4, stride=1, padding=1),
nn.InstanceNorm2d(512),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),
)
def forward(self, x):
return self.layers(x)
```
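The card does not show the training objective. Pix2pix-style models are typically trained with an adversarial loss plus an L1 reconstruction term; the sketch below is a minimal training step under that assumption (the `lambda_l1=100` weight is the common pix2pix default, not a value confirmed by this card):
```python
import torch
import torch.nn as nn

generator = UNet(in_channels=3, out_channels=3)
discriminator = PatchGANDiscriminator(in_channels=6)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(real_in, real_out, lambda_l1=100.0):
    # Discriminator: real pairs vs. generated pairs (input and output concatenated on channels)
    fake_out = generator(real_in)
    d_real = discriminator(torch.cat([real_in, real_out], dim=1))
    d_fake = discriminator(torch.cat([real_in, fake_out.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the target in L1
    d_fake = discriminator(torch.cat([real_in, fake_out], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake_out, real_out)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```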
| [
"TRANSLATION"
] | Non_BioNLP |
pszemraj/long-t5-tglobal-xl-sci-simplify-elife | pszemraj | summarization | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"dataset:pszemraj/scientific_lay_summarisation-elife-norm",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | 1,686,906,389,000 | 2023-06-18T12:20:00 | 22 | 2 | ---
datasets:
- pszemraj/scientific_lay_summarisation-elife-norm
license: apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- generated_from_trainer
inference: false
model-index:
- name: long-t5-tglobal-xl-scientific_lay_summarisation-elife-norm-16384-summ-v1
results:
- task:
type: summarization
name: Summarization
dataset:
name: pszemraj/scientific_lay_summarisation-elife-norm
type: pszemraj/scientific_lay_summarisation-elife-norm
split: validation
metrics:
- type: rouge
value: 47.1446
name: Rouge1
---
# long-t5-tglobal-xl-sci-simplify-elife
This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/long-t5-tglobal-xl) on the pszemraj/scientific_lay_summarisation-elife-norm dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6666
- Rouge1: 47.1446
- Rouge2: 14.2158
- Rougel: 23.3524
- Rougelsum: 44.6063
- Gen Len: 431.22
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The `pszemraj/scientific_lay_summarisation-elife-norm` dataset was used; inputs were truncated to 16384 tokens and target summaries to 1024 tokens.
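As a hedged illustration, that preprocessing corresponds to tokenizer calls like the following (the repository id is assumed from the card title):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pszemraj/long-t5-tglobal-xl-sci-simplify-elife")  # assumed id

article_text = "Full text of a scientific article..."
lay_summary_text = "Plain-language summary of the article..."

# Inputs truncated to 16384 tokens, targets to 1024 tokens, as described above.
model_inputs = tokenizer(article_text, max_length=16384, truncation=True, return_tensors="pt")
labels = tokenizer(lay_summary_text, max_length=1024, truncation=True, return_tensors="pt").input_ids
```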
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 6963
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7959 | 1.0 | 543 | 1.6770 | 44.4187 | 12.6752 | 22.4669 | 41.944 | 456.33 |
| 1.7578 | 2.0 | 1086 | 1.6666 | 47.1446 | 14.2158 | 23.3524 | 44.6063 | 431.22 | | [
"SUMMARIZATION"
] | BioNLP |
Helsinki-NLP/opus-mt-ja-ru | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ja",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:59:20 | 2,083 | 1 | ---
language:
- ja
- ru
license: apache-2.0
tags:
- translation
---
### jpn-rus
* source group: Japanese
* target group: Russian
* OPUS readme: [jpn-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-rus/README.md)
* model: transformer-align
* source language(s): jpn jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.rus | 23.2 | 0.441 |
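Inference follows the standard Marian pattern in 🤗 Transformers (a minimal sketch):
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-ja-ru"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["私は猫が好きです。"], return_tensors="pt")  # "I like cats."
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```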
### System Info:
- hf_name: jpn-rus
- source_languages: jpn
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'ru']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: rus
- short_pair: ja-ru
- chrF2_score: 0.441
- bleu: 23.2
- brevity_penalty: 0.9740000000000001
- ref_len: 70820.0
- src_name: Japanese
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: ru
- prefer_old: False
- long_pair: jpn-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
"TRANSLATION"
] | Non_BioNLP |
YakovElm/Apache5SetFitModel_balance_ratio_Half | YakovElm | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,685,456,800,000 | 2023-05-31T19:58:54 | 8 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# YakovElm/Apache5SetFitModel_balance_ratio_Half
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Apache5SetFitModel_balance_ratio_Half")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Funnyworld1412/ABSA_bert-base_MiniLM-L6-polarity | Funnyworld1412 | text-classification | [
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/bert-base-nli-mean-tokens",
"base_model:finetune:sentence-transformers/bert-base-nli-mean-tokens",
"model-index",
"region:us"
] | 1,719,347,501,000 | 2024-06-25T20:31:55 | 9 | 0 | ---
base_model: sentence-transformers/bert-base-nli-mean-tokens
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: gamenya seru bagus paket:gamenya seru bagus paket worth it gak lag mudah mainnya
tugas hadiah bagus modenya sayangnya game kadang ngebug gapapa kasih
- text: tolong perbaiki analog nya pengaturan posisi:tolong perbaiki analog nya pengaturan
posisi berpindah pindah
- text: visualisasi bagus segi graphic:visualisasi bagus segi graphic bagus ya game
cocok sih mantra nya banyakin contoh mantra penghilang
- text: jaringan udah bagus game jaringan nya bagus:game nya udah bagus jaringan game
nya bermasalah jaringan udah bagus game jaringan nya bagus mohon nambahin karakter
- text: kali game stuk loading server pakai jaringan:game bagus cma kendala kali game
stuk loading server pakai jaringan wifi masuk jaringan jaringan bermasalah main
game online lancar game susah akses tolong diperbaiki supercell detik bermain
coc lancar masuk kendala
inference: false
model-index:
- name: SetFit Polarity Model with sentence-transformers/bert-base-nli-mean-tokens
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8478260869565217
name: Accuracy
---
# SetFit Polarity Model with sentence-transformers/bert-base-nli-mean-tokens
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/bert-base-nli-mean-tokens](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/bert-base-nli-mean-tokens](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** id_core_news_trf
- **SetFitABSA Aspect Model:** [Funnyworld1412/ABSA_bert-base_MiniLM-L6-aspect](https://huggingface.co/Funnyworld1412/ABSA_bert-base_MiniLM-L6-aspect)
- **SetFitABSA Polarity Model:** [Funnyworld1412/ABSA_bert-base_MiniLM-L6-polarity](https://huggingface.co/Funnyworld1412/ABSA_bert-base_MiniLM-L6-polarity)
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| negatif | <ul><li>'seru tolong diperbaiki pencarian lawan bermain ketemu player:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'bugnya nakal banget y:bugnya nakal banget y coc cr aja sukanya ngebug pas match suka hitam match relog kalo udah relog lawan udah 1 2 mahkota kecewa sih bintang nya 1 aja bug nya diurus bintang lawannya kadang g setara levelnya dahlah gk suka banget kalo main 2 vs 2 temen suka banget afk coba fitur report'</li><li>'kadang g setara levelnya dahlah gk suka:bugnya nakal banget y coc cr aja sukanya ngebug pas match suka hitam match relog kalo udah relog lawan udah 1 2 mahkota kecewa sih bintang nya 1 aja bug nya diurus bintang lawannya kadang g setara levelnya dahlah gk suka banget kalo main 2 vs 2 temen suka banget afk coba fitur report'</li></ul> |
| positif | <ul><li>'kapada supercell game nya bagus seru:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'fairrrr mending uninstall gamenya maen game yg:overall gamenya bagus pencarian match dikasih musuh yg levelnya levelku yg pertandingan fair menganggu kenyamanan pemainnya kalo nyariin musuh gapapa nyarinya kasih yg fair levelnya gaush buru buru ngasih yg gak fairrrr pas arena 4 udh dikasih musuh yg pletonnya 2 yg level 11 gak fairrrr mending uninstall gamenya maen game yg yg org gak fairr'</li><li>'gameplay menyenangkan pemain afk:gameplay menyenangkan pemain afk pertengahan menyerah 2vs2 mode mengganggu tolong tambahkan fitur lapor pemain'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8478 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"Funnyworld1412/ABSA_bert-base_MiniLM-L6-aspect",
"Funnyworld1412/ABSA_bert-base_MiniLM-L6-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 28.3626 | 83 |
| Label | Training Sample Count |
|:--------|:----------------------|
| negatif | 738 |
| positif | 528 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 5
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.3075 | - |
| 0.0158 | 50 | 0.1854 | - |
| 0.0316 | 100 | 0.4431 | - |
| 0.0474 | 150 | 0.3251 | - |
| 0.0632 | 200 | 0.2486 | - |
| 0.0790 | 250 | 0.2371 | - |
| 0.0948 | 300 | 0.3149 | - |
| 0.1106 | 350 | 0.1397 | - |
| 0.1264 | 400 | 0.1131 | - |
| 0.1422 | 450 | 0.2388 | - |
| 0.1580 | 500 | 0.1256 | - |
| 0.1738 | 550 | 0.157 | - |
| 0.1896 | 600 | 0.3768 | - |
| 0.2054 | 650 | 0.022 | - |
| 0.2212 | 700 | 0.221 | - |
| 0.2370 | 750 | 0.122 | - |
| 0.2528 | 800 | 0.028 | - |
| 0.2686 | 850 | 0.102 | - |
| 0.2844 | 900 | 0.2231 | - |
| 0.3002 | 950 | 0.1853 | - |
| 0.3160 | 1000 | 0.2167 | - |
| 0.3318 | 1050 | 0.0054 | - |
| 0.3476 | 1100 | 0.027 | - |
| 0.3633 | 1150 | 0.0189 | - |
| 0.3791 | 1200 | 0.0033 | - |
| 0.3949 | 1250 | 0.2548 | - |
| 0.4107 | 1300 | 0.0043 | - |
| 0.4265 | 1350 | 0.0033 | - |
| 0.4423 | 1400 | 0.0012 | - |
| 0.4581 | 1450 | 0.1973 | - |
| 0.4739 | 1500 | 0.0006 | - |
| 0.4897 | 1550 | 0.001 | - |
| 0.5055 | 1600 | 0.0002 | - |
| 0.5213 | 1650 | 0.2304 | - |
| 0.5371 | 1700 | 0.0005 | - |
| 0.5529 | 1750 | 0.0025 | - |
| 0.5687 | 1800 | 0.0185 | - |
| 0.5845 | 1850 | 0.0023 | - |
| 0.6003 | 1900 | 0.185 | - |
| 0.6161 | 1950 | 0.0004 | - |
| 0.6319 | 2000 | 0.0003 | - |
| 0.6477 | 2050 | 0.0005 | - |
| 0.6635 | 2100 | 0.0126 | - |
| 0.6793 | 2150 | 0.0004 | - |
| 0.6951 | 2200 | 0.0103 | - |
| 0.7109 | 2250 | 0.0009 | - |
| 0.7267 | 2300 | 0.0019 | - |
| 0.7425 | 2350 | 0.0018 | - |
| 0.7583 | 2400 | 0.1837 | - |
| 0.7741 | 2450 | 0.002 | - |
| 0.7899 | 2500 | 0.0003 | - |
| 0.8057 | 2550 | 0.0006 | - |
| 0.8215 | 2600 | 0.2006 | - |
| 0.8373 | 2650 | 0.0003 | - |
| 0.8531 | 2700 | 0.0006 | - |
| 0.8689 | 2750 | 0.0003 | - |
| 0.8847 | 2800 | 0.0001 | - |
| 0.9005 | 2850 | 0.0002 | - |
| 0.9163 | 2900 | 0.0003 | - |
| 0.9321 | 2950 | 0.0002 | - |
| 0.9479 | 3000 | 0.0003 | - |
| 0.9637 | 3050 | 0.001 | - |
| 0.9795 | 3100 | 0.0002 | - |
| 0.9953 | 3150 | 0.0007 | - |
| 1.0 | 3165 | - | 0.2256 |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- spaCy: 3.7.5
- Transformers: 4.36.2
- PyTorch: 2.1.2
- Datasets: 2.19.2
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
ademakyol/distilbert-emotion | ademakyol | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,715,552,324,000 | 2024-05-12T22:29:51 | 4 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.935
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1535
- Accuracy: 0.935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.2074 | 0.924 |
| No log | 2.0 | 250 | 0.1535 | 0.935 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
phungkhaccuong/fa71addc-951a-12d7-783c-32157f080bb4 | phungkhaccuong | null | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"region:us"
] | 1,736,498,316,000 | 2025-01-10T09:12:42 | 1 | 0 | ---
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
library_name: peft
license: apache-2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fa71addc-951a-12d7-783c-32157f080bb4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 63a6e52889f0869c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/63a6e52889f0869c_train_data.json
type:
field_input: langpair
field_instruction: source
field_output: good-translation
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: phungkhaccuong/fa71addc-951a-12d7-783c-32157f080bb4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/63a6e52889f0869c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 95d283ba-c7ba-4e03-aefa-9110a8ae8a1d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 95d283ba-c7ba-4e03-aefa-9110a8ae8a1d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fa71addc-951a-12d7-783c-32157f080bb4
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9126
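Since this repository holds a LoRA adapter (PEFT) rather than full model weights, loading it plausibly follows the standard PEFT pattern (a hedged sketch):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Hermes-2-Pro-Mistral-7B"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "phungkhaccuong/fa71addc-951a-12d7-783c-32157f080bb4")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```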
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.8958 |
| 6.12 | 0.0023 | 10 | 1.4180 |
| 3.7388 | 0.0046 | 20 | 0.9980 |
| 4.2057 | 0.0069 | 30 | 0.9261 |
| 3.9987 | 0.0093 | 40 | 0.9167 |
| 3.8477 | 0.0116 | 50 | 0.9126 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 | [
"TRANSLATION"
] | Non_BioNLP |
Intel/fid_t5_large_nq | Intel | text-generation | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"text-generation",
"en",
"dataset:kilt_tasks",
"arxiv:2007.01282",
"license:cc-by-sa-3.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,676,813,484,000 | 2023-09-07T11:32:37 | 356 | 3 | ---
datasets:
- kilt_tasks
language:
- en
license: cc-by-sa-3.0
metrics:
- exact_match
pipeline_tag: text-generation
model-index:
- name: results
results:
- task:
type: text-generation
name: Question Answering
dataset:
name: NQ KILT
type: kilt_tasks
args: nq
metrics:
- type: exact_match
value: 59.01
name: Exact Match
---
# Fusion-In-Decoder Base on Natural Questions
This model is based on the [Fusion-In-Decoder](https://arxiv.org/abs/2007.01282) architecture and trained on the [Natural Questions](https://huggingface.co/datasets/natural_questions) dataset.
# Model Details
The model is based on Fusion-In-Decoder, which in turn uses the `t5-large` checkpoint as its base model. For training, we utilized text retrieval for each query, which provides a collection of relevant passages for it.
We note that the passages were retrieved using a corpus based on [Wikipedia](https://huggingface.co/datasets/wiki_dpr).
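Transformers has no built-in FiD head, but the input formatting is simple: each retrieved passage is paired with the question and encoded independently, and the decoder attends over the concatenated encoder outputs. A hedged sketch of the per-passage formatting (the template strings follow the usual FiD convention and are an assumption here):
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")
question = "where is the bermuda triangle"
passages = [
    ("Bermuda Triangle", "The Bermuda Triangle is a region in the western North Atlantic..."),
    ("Atlantic Ocean", "The Atlantic Ocean is the second-largest of the world's oceans..."),
]

# One encoder input per passage; formatting follows the FiD convention (assumed):
inputs = [f"question: {question} title: {t} context: {c}" for t, c in passages]
encoded = tokenizer(inputs, padding=True, truncation=True, max_length=256, return_tensors="pt")
# encoded.input_ids has shape (n_passages, seq_len); FiD runs each row through the
# encoder and lets the decoder cross-attend over all passages jointly.
```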
# Evaluation
See the model's performance in the Evaluation Results tab on the right side. | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
semindan/paws_x_xlm_r_only_zh | semindan | text-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:paws-x",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,671,316,997,000 | 2023-01-07T14:27:05 | 14 | 0 | ---
datasets:
- paws-x
license: mit
metrics:
- accuracy
tags:
- text-classification
- generated_from_trainer
model-index:
- name: paws_x_xlm_r_only_zh
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: paws-x
type: paws-x
config: zh
split: train
args: zh
metrics:
- type: accuracy
value: 0.841
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paws_x_xlm_r_only_zh
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the paws-x dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6193
- Accuracy: 0.841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4928 | 1.0 | 386 | 0.4940 | 0.805 |
| 0.2904 | 2.0 | 772 | 0.4153 | 0.8415 |
| 0.2302 | 3.0 | 1158 | 0.4341 | 0.8465 |
| 0.1899 | 4.0 | 1544 | 0.4475 | 0.8425 |
| 0.1599 | 5.0 | 1930 | 0.4623 | 0.84 |
| 0.1358 | 6.0 | 2316 | 0.5354 | 0.8465 |
| 0.1147 | 7.0 | 2702 | 0.5736 | 0.847 |
| 0.1012 | 8.0 | 3088 | 0.5782 | 0.849 |
| 0.087 | 9.0 | 3474 | 0.5844 | 0.844 |
| 0.08 | 10.0 | 3860 | 0.6193 | 0.841 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
fathyshalab/reklambox2-2-12-xlm | fathyshalab | text-classification | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,677,845,385,000 | 2023-03-03T12:10:04 | 9 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/reklambox2-2-12-xlm
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/reklambox2-2-12-xlm")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
unum-cloud/uform-gen2-dpo | unum-cloud | image-to-text | [
"transformers",
"safetensors",
"vlm",
"feature-extraction",
"image-captioning",
"visual-question-answering",
"image-to-text",
"custom_code",
"en",
"dataset:X2FD/LVIS-Instruct4V",
"dataset:BAAI/SVIT",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:MMInstruction/VLFeedback",
"dataset:zhiqings/LLaVA-Human-Preference-10K",
"license:apache-2.0",
"region:us"
] | 1,711,565,296,000 | 2024-04-24T18:30:43 | 2,740 | 43 | ---
datasets:
- X2FD/LVIS-Instruct4V
- BAAI/SVIT
- HuggingFaceH4/ultrachat_200k
- MMInstruction/VLFeedback
- zhiqings/LLaVA-Human-Preference-10K
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-to-text
tags:
- image-captioning
- visual-question-answering
widget:
- src: interior.jpg
example_title: Detailed caption
output:
text: The image shows a serene and well-lit bedroom with a white bed, a black
bed frame, and a white comforter. There’s a gray armchair with a white cushion,
a black dresser with a mirror and a vase, and a white rug on the floor. The
room has a large window with white curtains, and there are several decorative
items, including a picture frame, a vase with a flower, and a lamp. The room
is well-organized and has a calming atmosphere.
- src: cat.jpg
example_title: Short caption
output:
text: A white and orange cat stands on its hind legs, reaching towards a wooden
table with a white teapot and a basket of red raspberries. The table is on a
small wooden bench, surrounded by orange flowers. The cat’s position and action
create a serene, playful scene in a garden.
---
<img src="Captions.jpg">
## Description
UForm-Gen2-dpo is a small generative vision-language model aligned for Image Captioning and Visual Question Answering on the preference datasets VLFeedback and LLaVA-Human-Preference-10K using Direct Preference Optimization (DPO).
The model consists of two parts:
1. CLIP-like ViT-H/14
2. [Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat)
The model took less than one day to train on a DGX-H100 with 8x H100 GPUs.
Thanks to [Nebius.ai](https://nebius.ai) for providing the compute 🤗
### Usage
The generative model can be used to caption images and answer questions about them. It is also suitable for multimodal chat.
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor
model = AutoModel.from_pretrained("unum-cloud/uform-gen2-dpo", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("unum-cloud/uform-gen2-dpo", trust_remote_code=True)
prompt = "Question or Instruction"
image = Image.open("image.jpg")
inputs = processor(text=[prompt], images=[image], return_tensors="pt")
with torch.inference_mode():
output = model.generate(
**inputs,
do_sample=False,
use_cache=True,
max_new_tokens=256,
eos_token_id=151645,
pad_token_id=processor.tokenizer.pad_token_id
)
prompt_len = inputs["input_ids"].shape[1]
decoded_text = processor.batch_decode(output[:, prompt_len:])[0]
```
You can check examples of different prompts in our demo space.
## Evaluation
MME Benchmark
| Model | perception| reasoning | OCR | artwork | celebrity | code_reasoning | color | commonsense_reasoning | count | existence | landmark | numerical_calculation | position | posters | scene | text_translation |
| :---------------------------------- | --------: | --------: | -----:| ----------:| ----------:| --------------:| -----:| ---------------------:| -----:| ---------:| --------:| ---------------------:| --------:| -------:| -----:| ----------------:|
| uform-gen2-dpo | 1,048.75 | 224.64 | 72.50 | 97.25 | 62.65 | 67.50 | 123.33 | 57.14 | 136.67 | 195.00 | 104.00 | 50.00 | 51.67 | 59.18 | 146.50 | 50.00 |
| uform-gen2-qwen-500m | 863.40 | 236.43 | 57.50 | 93.00 | 67.06 | 57.50 | 78.33 | 81.43 | 53.33 | 150.00 | 98.00 | 50.00 | 50.00 | 62.93 | 153.25 | 47.50 |
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
c01zaut/gemma-2-27b-it-rk3588-1.1.2 | c01zaut | text-generation | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"base_model:google/gemma-2-27b",
"base_model:finetune:google/gemma-2-27b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,732,568,236,000 | 2024-11-25T21:57:41 | 7 | 0 | ---
base_model: google/gemma-2-27b
library_name: transformers
license: gemma
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# gemma-2-27b-it-RK3588-1.1.2
This version of gemma-2-27b-it has been converted to run on the RK3588 NPU using ['w8a8'] quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.1.2
## Useful links:
[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
[RockchipNPU Reddit](https://reddit.com/r/RockchipNPU)
[EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit
# Original Model Card for base model, gemma-2-27b-it, below:
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your usecase.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-27b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model 27b --preset speed
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-27b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)
chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
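For illustration, here is a minimal sketch of assembling the same single-turn prompt by hand (the helper name `build_gemma_prompt` is ours, not part of any library):
```py
# Minimal sketch: manually reproduce the Gemma 2 single-turn chat format.
# The control tokens match the template shown above; because <bos> is included
# here, pass add_special_tokens=False when encoding the resulting string.
def build_gemma_prompt(user_message: str) -> str:
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Write a hello world program")
```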
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a text dataset that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
human review) and the exploration of de-biasing techniques are encouraged
during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations that, compared to similarly sized models,
are designed from the ground up for Responsible AI development.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-hi-ur | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hi",
"ur",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:57:39 | 85 | 0 | ---
language:
- hi
- ur
license: apache-2.0
tags:
- translation
---
### hin-urd
* source group: Hindi
* target group: Urdu
* OPUS readme: [hin-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hin-urd/README.md)
* model: transformer-align
* source language(s): hin
* target language(s): urd
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.eval.txt)
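Since this is a standard Marian checkpoint, it can be used through the usual Hugging Face MarianMT interface. The following is a minimal sketch; the Hindi input sentence is purely illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-hi-ur"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Illustrative Hindi input; SentencePiece pre-processing is handled by the tokenizer.
src_text = ["मुझे हिंदी पसंद है।"]  # "I like Hindi."
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```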
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hin.urd | 12.4 | 0.393 |
### System Info:
- hf_name: hin-urd
- source_languages: hin
- target_languages: urd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hin-urd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hi', 'ur']
- src_constituents: {'hin'}
- tgt_constituents: {'urd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.test.txt
- src_alpha3: hin
- tgt_alpha3: urd
- short_pair: hi-ur
- chrF2_score: 0.393
- bleu: 12.4
- brevity_penalty: 1.0
- ref_len: 1618.0
- src_name: Hindi
- tgt_name: Urdu
- train_date: 2020-06-16
- src_alpha2: hi
- tgt_alpha2: ur
- prefer_old: False
- long_pair: hin-urd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
"TRANSLATION"
] | Non_BioNLP |
ai-forever/ru-en-RoSBERTa | ai-forever | feature-extraction | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"mteb",
"transformers",
"ru",
"en",
"arxiv:2408.12503",
"base_model:ai-forever/ruRoberta-large",
"base_model:finetune:ai-forever/ruRoberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,722,242,289,000 | 2024-09-26T07:57:30 | 8,582 | 34 | ---
base_model: ai-forever/ruRoberta-large
language:
- ru
- en
license: mit
tags:
- mteb
- transformers
- sentence-transformers
model-index:
- name: ru-en-RoSBERTa
results:
- task:
type: MultilabelClassification
dataset:
name: MTEB CEDRClassification (default)
type: ai-forever/cedr-classification
config: default
split: test
revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4
metrics:
- type: accuracy
value: 44.68650371944739
- type: f1
value: 40.7601061886426
- type: lrap
value: 70.69633368756747
- type: main_score
value: 44.68650371944739
- task:
type: Classification
dataset:
name: MTEB GeoreviewClassification (default)
type: ai-forever/georeview-classification
config: default
split: test
revision: 3765c0d1de6b7d264bc459433c45e5a75513839c
metrics:
- type: accuracy
value: 49.697265625
- type: f1
value: 47.793186725286866
- type: f1_weighted
value: 47.79131720298068
- type: main_score
value: 49.697265625
- task:
type: Clustering
dataset:
name: MTEB GeoreviewClusteringP2P (default)
type: ai-forever/georeview-clustering-p2p
config: default
split: test
revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec
metrics:
- type: main_score
value: 65.42249614873316
- type: v_measure
value: 65.42249614873316
- type: v_measure_std
value: 0.8524815312312278
- task:
type: Classification
dataset:
name: MTEB HeadlineClassification (default)
type: ai-forever/headline-classification
config: default
split: test
revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb
metrics:
- type: accuracy
value: 78.0029296875
- type: f1
value: 77.95151940601424
- type: f1_weighted
value: 77.95054643947716
- type: main_score
value: 78.0029296875
- task:
type: Classification
dataset:
name: MTEB InappropriatenessClassification (default)
type: ai-forever/inappropriateness-classification
config: default
split: test
revision: 601651fdc45ef243751676e62dd7a19f491c0285
metrics:
- type: accuracy
value: 61.32324218750001
- type: ap
value: 57.11029460364367
- type: ap_weighted
value: 57.11029460364367
- type: f1
value: 60.971337406307214
- type: f1_weighted
value: 60.971337406307214
- type: main_score
value: 61.32324218750001
- task:
type: Classification
dataset:
name: MTEB KinopoiskClassification (default)
type: ai-forever/kinopoisk-sentiment-classification
config: default
split: test
revision: 5911f26666ac11af46cb9c6849d0dc80a378af24
metrics:
- type: accuracy
value: 63.27333333333334
- type: f1
value: 61.007042785228116
- type: f1_weighted
value: 61.007042785228116
- type: main_score
value: 63.27333333333334
- task:
type: Reranking
dataset:
name: MTEB MIRACLReranking (ru)
type: miracl/mmteb-miracl-reranking
config: ru
split: dev
revision: 6d1962c527217f8927fca80f890f14f36b2802af
metrics:
- type: MAP@1(MIRACL)
value: 30.691000000000003
- type: MAP@10(MIRACL)
value: 49.178
- type: MAP@100(MIRACL)
value: 51.225
- type: MAP@1000(MIRACL)
value: 51.225
- type: MAP@20(MIRACL)
value: 50.613
- type: MAP@3(MIRACL)
value: 42.457
- type: MAP@5(MIRACL)
value: 46.172000000000004
- type: NDCG@1(MIRACL)
value: 51.002
- type: NDCG@10(MIRACL)
value: 56.912
- type: NDCG@100(MIRACL)
value: 61.197
- type: NDCG@1000(MIRACL)
value: 61.197
- type: NDCG@20(MIRACL)
value: 59.453
- type: NDCG@3(MIRACL)
value: 51.083
- type: NDCG@5(MIRACL)
value: 53.358000000000004
- type: P@1(MIRACL)
value: 51.002
- type: P@10(MIRACL)
value: 14.852000000000002
- type: P@100(MIRACL)
value: 1.9529999999999998
- type: P@1000(MIRACL)
value: 0.19499999999999998
- type: P@20(MIRACL)
value: 8.657
- type: P@3(MIRACL)
value: 31.435000000000002
- type: P@5(MIRACL)
value: 23.608999999999998
- type: Recall@1(MIRACL)
value: 30.691000000000003
- type: Recall@10(MIRACL)
value: 67.006
- type: Recall@100(MIRACL)
value: 79.952
- type: Recall@1000(MIRACL)
value: 79.952
- type: Recall@20(MIRACL)
value: 73.811
- type: Recall@3(MIRACL)
value: 49.142
- type: Recall@5(MIRACL)
value: 57.553
- type: main_score
value: 56.912
- type: nAUC_MAP@1000_diff1(MIRACL)
value: 10.786403475779332
- type: nAUC_MAP@1000_max(MIRACL)
value: 29.477246196287275
- type: nAUC_MAP@1000_std(MIRACL)
value: 15.938834129839046
- type: nAUC_MAP@100_diff1(MIRACL)
value: 10.786403475779332
- type: nAUC_MAP@100_max(MIRACL)
value: 29.477246196287275
- type: nAUC_MAP@100_std(MIRACL)
value: 15.938834129839046
- type: nAUC_MAP@10_diff1(MIRACL)
value: 12.255091348037595
- type: nAUC_MAP@10_max(MIRACL)
value: 26.72625370045134
- type: nAUC_MAP@10_std(MIRACL)
value: 14.180071586837812
- type: nAUC_MAP@1_diff1(MIRACL)
value: 28.616487922173768
- type: nAUC_MAP@1_max(MIRACL)
value: 12.986192530664518
- type: nAUC_MAP@1_std(MIRACL)
value: 4.086145762604503
- type: nAUC_MAP@20_diff1(MIRACL)
value: 11.360341572700476
- type: nAUC_MAP@20_max(MIRACL)
value: 28.612330384153832
- type: nAUC_MAP@20_std(MIRACL)
value: 15.787480742877937
- type: nAUC_MAP@3_diff1(MIRACL)
value: 18.033783954867623
- type: nAUC_MAP@3_max(MIRACL)
value: 20.97092332905034
- type: nAUC_MAP@3_std(MIRACL)
value: 9.106058710108279
- type: nAUC_MAP@5_diff1(MIRACL)
value: 14.784231238848433
- type: nAUC_MAP@5_max(MIRACL)
value: 23.841145797143
- type: nAUC_MAP@5_std(MIRACL)
value: 11.25686258970321
- type: nAUC_NDCG@1000_diff1(MIRACL)
value: 1.4728095471561125
- type: nAUC_NDCG@1000_max(MIRACL)
value: 39.84262968697792
- type: nAUC_NDCG@1000_std(MIRACL)
value: 22.4186410243652
- type: nAUC_NDCG@100_diff1(MIRACL)
value: 1.4728095471561125
- type: nAUC_NDCG@100_max(MIRACL)
value: 39.84262968697792
- type: nAUC_NDCG@100_std(MIRACL)
value: 22.4186410243652
- type: nAUC_NDCG@10_diff1(MIRACL)
value: 5.242996478950954
- type: nAUC_NDCG@10_max(MIRACL)
value: 33.86925934510759
- type: nAUC_NDCG@10_std(MIRACL)
value: 19.457386638149625
- type: nAUC_NDCG@1_diff1(MIRACL)
value: 16.925455715967676
- type: nAUC_NDCG@1_max(MIRACL)
value: 36.72266755084653
- type: nAUC_NDCG@1_std(MIRACL)
value: 18.357456476212622
- type: nAUC_NDCG@20_diff1(MIRACL)
value: 3.361697278095995
- type: nAUC_NDCG@20_max(MIRACL)
value: 37.38923489423496
- type: nAUC_NDCG@20_std(MIRACL)
value: 22.29168372402657
- type: nAUC_NDCG@3_diff1(MIRACL)
value: 10.936904314592084
- type: nAUC_NDCG@3_max(MIRACL)
value: 30.547718047674284
- type: nAUC_NDCG@3_std(MIRACL)
value: 15.142352896765665
- type: nAUC_NDCG@5_diff1(MIRACL)
value: 8.618074920961075
- type: nAUC_NDCG@5_max(MIRACL)
value: 30.808600807482367
- type: nAUC_NDCG@5_std(MIRACL)
value: 15.793512242130051
- type: nAUC_P@1000_diff1(MIRACL)
value: -24.81839490148569
- type: nAUC_P@1000_max(MIRACL)
value: 34.16200383739091
- type: nAUC_P@1000_std(MIRACL)
value: 20.95890369662007
- type: nAUC_P@100_diff1(MIRACL)
value: -24.818394901485657
- type: nAUC_P@100_max(MIRACL)
value: 34.16200383739092
- type: nAUC_P@100_std(MIRACL)
value: 20.958903696620112
- type: nAUC_P@10_diff1(MIRACL)
value: -22.646461560750986
- type: nAUC_P@10_max(MIRACL)
value: 34.57373514819872
- type: nAUC_P@10_std(MIRACL)
value: 24.27599718176041
- type: nAUC_P@1_diff1(MIRACL)
value: 16.925455715967676
- type: nAUC_P@1_max(MIRACL)
value: 36.72266755084653
- type: nAUC_P@1_std(MIRACL)
value: 18.357456476212622
- type: nAUC_P@20_diff1(MIRACL)
value: -23.33449798384014
- type: nAUC_P@20_max(MIRACL)
value: 34.92822081787735
- type: nAUC_P@20_std(MIRACL)
value: 25.048280657629267
- type: nAUC_P@3_diff1(MIRACL)
value: -11.60659490286
- type: nAUC_P@3_max(MIRACL)
value: 38.187883056013035
- type: nAUC_P@3_std(MIRACL)
value: 21.234776997940628
- type: nAUC_P@5_diff1(MIRACL)
value: -18.86697977242918
- type: nAUC_P@5_max(MIRACL)
value: 35.6110661197626
- type: nAUC_P@5_std(MIRACL)
value: 22.11165620702996
- type: nAUC_Recall@1000_diff1(MIRACL)
value: -31.456413113303867
- type: nAUC_Recall@1000_max(MIRACL)
value: 63.785265733309636
- type: nAUC_Recall@1000_std(MIRACL)
value: 36.587933217871914
- type: nAUC_Recall@100_diff1(MIRACL)
value: -31.456413113303867
- type: nAUC_Recall@100_max(MIRACL)
value: 63.785265733309636
- type: nAUC_Recall@100_std(MIRACL)
value: 36.587933217871914
- type: nAUC_Recall@10_diff1(MIRACL)
value: -9.518740341549913
- type: nAUC_Recall@10_max(MIRACL)
value: 35.00853357699468
- type: nAUC_Recall@10_std(MIRACL)
value: 22.79313936486099
- type: nAUC_Recall@1_diff1(MIRACL)
value: 28.616487922173768
- type: nAUC_Recall@1_max(MIRACL)
value: 12.986192530664518
- type: nAUC_Recall@1_std(MIRACL)
value: 4.086145762604503
- type: nAUC_Recall@20_diff1(MIRACL)
value: -17.771143411342166
- type: nAUC_Recall@20_max(MIRACL)
value: 47.59780316487735
- type: nAUC_Recall@20_std(MIRACL)
value: 33.25494707686132
- type: nAUC_Recall@3_diff1(MIRACL)
value: 10.171226133119783
- type: nAUC_Recall@3_max(MIRACL)
value: 21.097634288680847
- type: nAUC_Recall@3_std(MIRACL)
value: 10.087211861733298
- type: nAUC_Recall@5_diff1(MIRACL)
value: 1.6868374913242932
- type: nAUC_Recall@5_max(MIRACL)
value: 25.874440474993165
- type: nAUC_Recall@5_std(MIRACL)
value: 13.46380924822079
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrieval (ru)
type: miracl/mmteb-miracl
config: ru
split: dev
revision: main
metrics:
- type: main_score
value: 53.909
- type: map_at_1
value: 24.308
- type: map_at_10
value: 43.258
- type: map_at_100
value: 46.053
- type: map_at_1000
value: 46.176
- type: map_at_20
value: 44.962
- type: map_at_3
value: 36.129
- type: map_at_5
value: 40.077
- type: mrr_at_1
value: 49.92012779552716
- type: mrr_at_10
value: 62.639554490592865
- type: mrr_at_100
value: 63.09260401526302
- type: mrr_at_1000
value: 63.10428906436666
- type: mrr_at_20
value: 62.94919151853632
- type: mrr_at_3
value: 60.15708200212997
- type: mrr_at_5
value: 61.83439829605969
- type: nauc_map_at_1000_diff1
value: 24.249990208199268
- type: nauc_map_at_1000_max
value: 25.29688440384686
- type: nauc_map_at_1000_std
value: 2.4312163206740536
- type: nauc_map_at_100_diff1
value: 24.2554939267347
- type: nauc_map_at_100_max
value: 25.25054164924535
- type: nauc_map_at_100_std
value: 2.4121726280069757
- type: nauc_map_at_10_diff1
value: 24.411765629418987
- type: nauc_map_at_10_max
value: 23.13035697774593
- type: nauc_map_at_10_std
value: -0.1673711528601927
- type: nauc_map_at_1_diff1
value: 30.55123128484441
- type: nauc_map_at_1_max
value: 13.83849108263988
- type: nauc_map_at_1_std
value: -7.087181528435525
- type: nauc_map_at_20_diff1
value: 24.125033292556417
- type: nauc_map_at_20_max
value: 24.563171125814296
- type: nauc_map_at_20_std
value: 1.266006461448722
- type: nauc_map_at_3_diff1
value: 25.71581305774253
- type: nauc_map_at_3_max
value: 18.708623514300097
- type: nauc_map_at_3_std
value: -4.772722288463871
- type: nauc_map_at_5_diff1
value: 25.352787694389097
- type: nauc_map_at_5_max
value: 20.974296353287084
- type: nauc_map_at_5_std
value: -3.4007260047029835
- type: nauc_mrr_at_1000_diff1
value: 29.492072727604622
- type: nauc_mrr_at_1000_max
value: 34.60333674990558
- type: nauc_mrr_at_1000_std
value: 11.223537361751173
- type: nauc_mrr_at_100_diff1
value: 29.47919553914885
- type: nauc_mrr_at_100_max
value: 34.618795300361995
- type: nauc_mrr_at_100_std
value: 11.243824787491663
- type: nauc_mrr_at_10_diff1
value: 29.481060608078298
- type: nauc_mrr_at_10_max
value: 34.752363175415745
- type: nauc_mrr_at_10_std
value: 10.98618160728943
- type: nauc_mrr_at_1_diff1
value: 31.81056902767142
- type: nauc_mrr_at_1_max
value: 30.351978574096773
- type: nauc_mrr_at_1_std
value: 9.735911194663025
- type: nauc_mrr_at_20_diff1
value: 29.390754002995035
- type: nauc_mrr_at_20_max
value: 34.75816984434079
- type: nauc_mrr_at_20_std
value: 11.325226515477347
- type: nauc_mrr_at_3_diff1
value: 29.948364490803186
- type: nauc_mrr_at_3_max
value: 33.973850208221556
- type: nauc_mrr_at_3_std
value: 9.988883050022485
- type: nauc_mrr_at_5_diff1
value: 29.477773016468696
- type: nauc_mrr_at_5_max
value: 34.38532892473932
- type: nauc_mrr_at_5_std
value: 10.206783034393654
- type: nauc_ndcg_at_1000_diff1
value: 24.15494700259076
- type: nauc_ndcg_at_1000_max
value: 32.367504385127035
- type: nauc_ndcg_at_1000_std
value: 10.372857487814498
- type: nauc_ndcg_at_100_diff1
value: 23.97247958991815
- type: nauc_ndcg_at_100_max
value: 32.21110774026889
- type: nauc_ndcg_at_100_std
value: 11.065328347817761
- type: nauc_ndcg_at_10_diff1
value: 24.038789867355796
- type: nauc_ndcg_at_10_max
value: 28.14682223937745
- type: nauc_ndcg_at_10_std
value: 4.518525314723316
- type: nauc_ndcg_at_1_diff1
value: 31.81056902767142
- type: nauc_ndcg_at_1_max
value: 30.351978574096773
- type: nauc_ndcg_at_1_std
value: 9.735911194663025
- type: nauc_ndcg_at_20_diff1
value: 23.157990079778138
- type: nauc_ndcg_at_20_max
value: 30.521172934621703
- type: nauc_ndcg_at_20_std
value: 7.660125728373433
- type: nauc_ndcg_at_3_diff1
value: 24.44153871615053
- type: nauc_ndcg_at_3_max
value: 27.08209732696818
- type: nauc_ndcg_at_3_std
value: 3.8766269917792537
- type: nauc_ndcg_at_5_diff1
value: 24.952468410841863
- type: nauc_ndcg_at_5_max
value: 26.29873769608537
- type: nauc_ndcg_at_5_std
value: 1.3359423751654511
- type: nauc_precision_at_1000_diff1
value: -9.104010991734798
- type: nauc_precision_at_1000_max
value: 20.36838078039637
- type: nauc_precision_at_1000_std
value: 26.889986331386297
- type: nauc_precision_at_100_diff1
value: -7.181546793298205
- type: nauc_precision_at_100_max
value: 24.32969645433586
- type: nauc_precision_at_100_std
value: 31.546209514202232
- type: nauc_precision_at_10_diff1
value: -1.0044021788494442
- type: nauc_precision_at_10_max
value: 29.37074096666726
- type: nauc_precision_at_10_std
value: 25.000959926288214
- type: nauc_precision_at_1_diff1
value: 31.81056902767142
- type: nauc_precision_at_1_max
value: 30.351978574096773
- type: nauc_precision_at_1_std
value: 9.735911194663025
- type: nauc_precision_at_20_diff1
value: -5.242529022989003
- type: nauc_precision_at_20_max
value: 28.199268120740822
- type: nauc_precision_at_20_std
value: 28.460986811065037
- type: nauc_precision_at_3_diff1
value: 9.46419634664173
- type: nauc_precision_at_3_max
value: 32.203956451949914
- type: nauc_precision_at_3_std
value: 16.4095713138301
- type: nauc_precision_at_5_diff1
value: 3.719098257572974
- type: nauc_precision_at_5_max
value: 30.53411024247047
- type: nauc_precision_at_5_std
value: 17.926227114457067
- type: nauc_recall_at_1000_diff1
value: 12.347919922311121
- type: nauc_recall_at_1000_max
value: 62.10824756167678
- type: nauc_recall_at_1000_std
value: 65.9625810682273
- type: nauc_recall_at_100_diff1
value: 11.945066948287723
- type: nauc_recall_at_100_max
value: 37.07070306829974
- type: nauc_recall_at_100_std
value: 38.76495395051901
- type: nauc_recall_at_10_diff1
value: 14.793964290237943
- type: nauc_recall_at_10_max
value: 23.170920682517334
- type: nauc_recall_at_10_std
value: 5.07461971737137
- type: nauc_recall_at_1_diff1
value: 30.55123128484441
- type: nauc_recall_at_1_max
value: 13.83849108263988
- type: nauc_recall_at_1_std
value: -7.087181528435525
- type: nauc_recall_at_20_diff1
value: 10.349310874535616
- type: nauc_recall_at_20_max
value: 27.72667852012557
- type: nauc_recall_at_20_std
value: 13.37946493360006
- type: nauc_recall_at_3_diff1
value: 20.660181561801195
- type: nauc_recall_at_3_max
value: 16.734608747226137
- type: nauc_recall_at_3_std
value: -5.887299100086449
- type: nauc_recall_at_5_diff1
value: 19.292387971699007
- type: nauc_recall_at_5_max
value: 18.151647291256193
- type: nauc_recall_at_5_std
value: -5.3874570564310895
- type: ndcg_at_1
value: 49.919999999999995
- type: ndcg_at_10
value: 53.909
- type: ndcg_at_100
value: 61.346999999999994
- type: ndcg_at_1000
value: 62.831
- type: ndcg_at_20
value: 57.44200000000001
- type: ndcg_at_3
value: 48.034
- type: ndcg_at_5
value: 50.151
- type: precision_at_1
value: 49.919999999999995
- type: precision_at_10
value: 16.206
- type: precision_at_100
value: 2.467
- type: precision_at_1000
value: 0.27499999999999997
- type: precision_at_20
value: 9.847999999999999
- type: precision_at_3
value: 33.013999999999996
- type: precision_at_5
value: 25.495
- type: recall_at_1
value: 24.308
- type: recall_at_10
value: 64.226
- type: recall_at_100
value: 88.532
- type: recall_at_1000
value: 96.702
- type: recall_at_20
value: 73.855
- type: recall_at_3
value: 43.75
- type: recall_at_5
value: 53.293
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 66.96704774714189
- type: f1
value: 63.75700201120695
- type: f1_weighted
value: 65.79948352494334
- type: main_score
value: 66.96704774714189
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 71.79556153328849
- type: f1
value: 71.04798190430378
- type: f1_weighted
value: 71.11136110921589
- type: main_score
value: 71.79556153328849
- task:
type: STS
dataset:
name: MTEB RUParaPhraserSTS (default)
type: merionum/ru_paraphraser
config: default
split: test
revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4
metrics:
- type: cosine_pearson
value: 69.4312341087414
- type: cosine_spearman
value: 76.16273410937974
- type: euclidean_pearson
value: 73.59970264325928
- type: euclidean_spearman
value: 76.16273410937974
- type: main_score
value: 76.16273410937974
- type: manhattan_pearson
value: 73.63850191752708
- type: manhattan_spearman
value: 76.22156395676978
- type: pearson
value: 69.4312341087414
- type: spearman
value: 76.16273410937974
- task:
type: Retrieval
dataset:
name: MTEB RiaNewsRetrieval (default)
type: ai-forever/ria-news-retrieval
config: default
split: test
revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7
metrics:
- type: main_score
value: 78.864
- type: map_at_1
value: 67.61
- type: map_at_10
value: 75.44800000000001
- type: map_at_100
value: 75.73
- type: map_at_1000
value: 75.74
- type: map_at_20
value: 75.63
- type: map_at_3
value: 74.058
- type: map_at_5
value: 74.935
- type: mrr_at_1
value: 67.61
- type: mrr_at_10
value: 75.44837698412663
- type: mrr_at_100
value: 75.7296913526584
- type: mrr_at_1000
value: 75.7404584781072
- type: mrr_at_20
value: 75.62998240983255
- type: mrr_at_3
value: 74.05833333333295
- type: mrr_at_5
value: 74.93533333333274
- type: nauc_map_at_1000_diff1
value: 76.73003886073126
- type: nauc_map_at_1000_max
value: 23.880592237559313
- type: nauc_map_at_1000_std
value: -16.639489061431295
- type: nauc_map_at_100_diff1
value: 76.72565072181389
- type: nauc_map_at_100_max
value: 23.881455390102456
- type: nauc_map_at_100_std
value: -16.63176355032267
- type: nauc_map_at_10_diff1
value: 76.64273887966773
- type: nauc_map_at_10_max
value: 23.81082154251487
- type: nauc_map_at_10_std
value: -16.77740307482434
- type: nauc_map_at_1_diff1
value: 79.73607180360645
- type: nauc_map_at_1_max
value: 21.20262368559921
- type: nauc_map_at_1_std
value: -19.089796155513238
- type: nauc_map_at_20_diff1
value: 76.7030611694817
- type: nauc_map_at_20_max
value: 23.838907707504127
- type: nauc_map_at_20_std
value: -16.672743811541736
- type: nauc_map_at_3_diff1
value: 76.50523775835022
- type: nauc_map_at_3_max
value: 23.60179905501101
- type: nauc_map_at_3_std
value: -17.693757802981956
- type: nauc_map_at_5_diff1
value: 76.61576372823448
- type: nauc_map_at_5_max
value: 23.862587318336775
- type: nauc_map_at_5_std
value: -17.0437966767025
- type: nauc_mrr_at_1000_diff1
value: 76.73003886073126
- type: nauc_mrr_at_1000_max
value: 23.880592237559313
- type: nauc_mrr_at_1000_std
value: -16.639489061431295
- type: nauc_mrr_at_100_diff1
value: 76.72565072181389
- type: nauc_mrr_at_100_max
value: 23.881455390102456
- type: nauc_mrr_at_100_std
value: -16.63176355032267
- type: nauc_mrr_at_10_diff1
value: 76.64273887966773
- type: nauc_mrr_at_10_max
value: 23.81082154251487
- type: nauc_mrr_at_10_std
value: -16.77740307482434
- type: nauc_mrr_at_1_diff1
value: 79.73607180360645
- type: nauc_mrr_at_1_max
value: 21.20262368559921
- type: nauc_mrr_at_1_std
value: -19.089796155513238
- type: nauc_mrr_at_20_diff1
value: 76.7030611694817
- type: nauc_mrr_at_20_max
value: 23.838907707504127
- type: nauc_mrr_at_20_std
value: -16.672743811541736
- type: nauc_mrr_at_3_diff1
value: 76.50523775835022
- type: nauc_mrr_at_3_max
value: 23.60179905501101
- type: nauc_mrr_at_3_std
value: -17.693757802981956
- type: nauc_mrr_at_5_diff1
value: 76.61576372823448
- type: nauc_mrr_at_5_max
value: 23.862587318336775
- type: nauc_mrr_at_5_std
value: -17.0437966767025
- type: nauc_ndcg_at_1000_diff1
value: 76.016960312922
- type: nauc_ndcg_at_1000_max
value: 25.434179222015285
- type: nauc_ndcg_at_1000_std
value: -14.489226598374966
- type: nauc_ndcg_at_100_diff1
value: 75.87402195675239
- type: nauc_ndcg_at_100_max
value: 25.562687163467295
- type: nauc_ndcg_at_100_std
value: -14.165819919505346
- type: nauc_ndcg_at_10_diff1
value: 75.47305900096035
- type: nauc_ndcg_at_10_max
value: 24.9111489869184
- type: nauc_ndcg_at_10_std
value: -15.106328069022739
- type: nauc_ndcg_at_1_diff1
value: 79.73607180360645
- type: nauc_ndcg_at_1_max
value: 21.20262368559921
- type: nauc_ndcg_at_1_std
value: -19.089796155513238
- type: nauc_ndcg_at_20_diff1
value: 75.71180859144839
- type: nauc_ndcg_at_20_max
value: 25.12671193294504
- type: nauc_ndcg_at_20_std
value: -14.582900241958443
- type: nauc_ndcg_at_3_diff1
value: 75.32126900936046
- type: nauc_ndcg_at_3_max
value: 24.39543091769943
- type: nauc_ndcg_at_3_std
value: -17.183511551234538
- type: nauc_ndcg_at_5_diff1
value: 75.46170695160178
- type: nauc_ndcg_at_5_max
value: 25.001670951020937
- type: nauc_ndcg_at_5_std
value: -15.861405796419376
- type: nauc_precision_at_1000_diff1
value: 65.48397136632431
- type: nauc_precision_at_1000_max
value: 77.05533391807842
- type: nauc_precision_at_1000_std
value: 54.14509238038628
- type: nauc_precision_at_100_diff1
value: 66.6077978535527
- type: nauc_precision_at_100_max
value: 54.07639576230772
- type: nauc_precision_at_100_std
value: 28.071043659958185
- type: nauc_precision_at_10_diff1
value: 68.71592258481675
- type: nauc_precision_at_10_max
value: 31.40944055975099
- type: nauc_precision_at_10_std
value: -4.421548783271478
- type: nauc_precision_at_1_diff1
value: 79.73607180360645
- type: nauc_precision_at_1_max
value: 21.20262368559921
- type: nauc_precision_at_1_std
value: -19.089796155513238
- type: nauc_precision_at_20_diff1
value: 68.87539427047768
- type: nauc_precision_at_20_max
value: 35.602508001542176
- type: nauc_precision_at_20_std
value: 3.6366951424017184
- type: nauc_precision_at_3_diff1
value: 70.84549884977267
- type: nauc_precision_at_3_max
value: 27.35862016332144
- type: nauc_precision_at_3_std
value: -15.255203279510601
- type: nauc_precision_at_5_diff1
value: 70.27864341297163
- type: nauc_precision_at_5_max
value: 30.29162962827962
- type: nauc_precision_at_5_std
value: -10.193470309556703
- type: nauc_recall_at_1000_diff1
value: 65.48397136632475
- type: nauc_recall_at_1000_max
value: 77.05533391807865
- type: nauc_recall_at_1000_std
value: 54.14509238038722
- type: nauc_recall_at_100_diff1
value: 66.60779785355253
- type: nauc_recall_at_100_max
value: 54.07639576230805
- type: nauc_recall_at_100_std
value: 28.071043659958207
- type: nauc_recall_at_10_diff1
value: 68.71592258481655
- type: nauc_recall_at_10_max
value: 31.409440559751168
- type: nauc_recall_at_10_std
value: -4.421548783271414
- type: nauc_recall_at_1_diff1
value: 79.73607180360645
- type: nauc_recall_at_1_max
value: 21.20262368559921
- type: nauc_recall_at_1_std
value: -19.089796155513238
- type: nauc_recall_at_20_diff1
value: 68.87539427047763
- type: nauc_recall_at_20_max
value: 35.60250800154217
- type: nauc_recall_at_20_std
value: 3.6366951424018716
- type: nauc_recall_at_3_diff1
value: 70.84549884977265
- type: nauc_recall_at_3_max
value: 27.358620163321408
- type: nauc_recall_at_3_std
value: -15.255203279510626
- type: nauc_recall_at_5_diff1
value: 70.2786434129717
- type: nauc_recall_at_5_max
value: 30.291629628279733
- type: nauc_recall_at_5_std
value: -10.193470309556629
- type: ndcg_at_1
value: 67.61
- type: ndcg_at_10
value: 78.864
- type: ndcg_at_100
value: 80.211
- type: ndcg_at_1000
value: 80.50699999999999
- type: ndcg_at_20
value: 79.514
- type: ndcg_at_3
value: 76.05499999999999
- type: ndcg_at_5
value: 77.625
- type: precision_at_1
value: 67.61
- type: precision_at_10
value: 8.941
- type: precision_at_100
value: 0.9570000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 4.598
- type: precision_at_3
value: 27.267000000000003
- type: precision_at_5
value: 17.118
- type: recall_at_1
value: 67.61
- type: recall_at_10
value: 89.41
- type: recall_at_100
value: 95.67
- type: recall_at_1000
value: 98.02
- type: recall_at_20
value: 91.96
- type: recall_at_3
value: 81.8
- type: recall_at_5
value: 85.59
- task:
type: Reranking
dataset:
name: MTEB RuBQReranking (default)
type: ai-forever/rubq-reranking
config: default
split: test
revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2
metrics:
- type: main_score
value: 70.8676293869892
- type: map
value: 70.8676293869892
- type: mrr
value: 76.21519142795738
- type: nAUC_map_diff1
value: 37.107477549298316
- type: nAUC_map_max
value: 24.03175751284917
- type: nAUC_map_std
value: 10.543266622518289
- type: nAUC_mrr_diff1
value: 41.59000224211641
- type: nAUC_mrr_max
value: 31.06363682531277
- type: nAUC_mrr_std
value: 14.95221681925582
- task:
type: Retrieval
dataset:
name: MTEB RuBQRetrieval (default)
type: ai-forever/rubq-retrieval
config: default
split: test
revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b
metrics:
- type: main_score
value: 66.77499999999999
- type: map_at_1
value: 38.964
- type: map_at_10
value: 58.679
- type: map_at_100
value: 59.74699999999999
- type: map_at_1000
value: 59.784000000000006
- type: map_at_20
value: 59.386
- type: map_at_3
value: 53.183
- type: map_at_5
value: 56.619
- type: mrr_at_1
value: 56.08747044917257
- type: mrr_at_10
value: 67.69477747757892
- type: mrr_at_100
value: 68.11028091076142
- type: mrr_at_1000
value: 68.12016895906572
- type: mrr_at_20
value: 67.99200829920431
- type: mrr_at_3
value: 65.40583136327825
- type: mrr_at_5
value: 66.86564223798278
- type: nauc_map_at_1000_diff1
value: 35.13932221843019
- type: nauc_map_at_1000_max
value: 31.603311334444573
- type: nauc_map_at_1000_std
value: -8.046320861408992
- type: nauc_map_at_100_diff1
value: 35.10777181986462
- type: nauc_map_at_100_max
value: 31.603059769116086
- type: nauc_map_at_100_std
value: -8.027533855390534
- type: nauc_map_at_10_diff1
value: 34.864122757362644
- type: nauc_map_at_10_max
value: 31.625252670171776
- type: nauc_map_at_10_std
value: -8.334256854154406
- type: nauc_map_at_1_diff1
value: 40.90418146524424
- type: nauc_map_at_1_max
value: 22.269308553048656
- type: nauc_map_at_1_std
value: -9.89932822257807
- type: nauc_map_at_20_diff1
value: 34.88664926631265
- type: nauc_map_at_20_max
value: 31.60883821879978
- type: nauc_map_at_20_std
value: -8.095294415067395
- type: nauc_map_at_3_diff1
value: 35.13227486507324
- type: nauc_map_at_3_max
value: 28.53848590790504
- type: nauc_map_at_3_std
value: -9.223288317647375
- type: nauc_map_at_5_diff1
value: 35.0811457266201
- type: nauc_map_at_5_max
value: 30.904120563551984
- type: nauc_map_at_5_std
value: -9.190854442617361
- type: nauc_mrr_at_1000_diff1
value: 43.43247399448727
- type: nauc_mrr_at_1000_max
value: 37.599979998251435
- type: nauc_mrr_at_1000_std
value: -8.461570912726742
- type: nauc_mrr_at_100_diff1
value: 43.42803056119293
- type: nauc_mrr_at_100_max
value: 37.60590141137654
- type: nauc_mrr_at_100_std
value: -8.456064029069271
- type: nauc_mrr_at_10_diff1
value: 43.34260974243939
- type: nauc_mrr_at_10_max
value: 37.7505248362988
- type: nauc_mrr_at_10_std
value: -8.4789005424329
- type: nauc_mrr_at_1_diff1
value: 46.8647472051038
- type: nauc_mrr_at_1_max
value: 34.40507832070825
- type: nauc_mrr_at_1_std
value: -9.148947481764475
- type: nauc_mrr_at_20_diff1
value: 43.37024314535158
- type: nauc_mrr_at_20_max
value: 37.62040185137823
- type: nauc_mrr_at_20_std
value: -8.497477607790167
- type: nauc_mrr_at_3_diff1
value: 42.980588675445404
- type: nauc_mrr_at_3_max
value: 37.43524263010435
- type: nauc_mrr_at_3_std
value: -8.698337782804687
- type: nauc_mrr_at_5_diff1
value: 43.224910985482765
- type: nauc_mrr_at_5_max
value: 38.00633132611649
- type: nauc_mrr_at_5_std
value: -8.554751807691591
- type: nauc_ndcg_at_1000_diff1
value: 36.58393000267959
- type: nauc_ndcg_at_1000_max
value: 34.491617466873194
- type: nauc_ndcg_at_1000_std
value: -6.968933918560401
- type: nauc_ndcg_at_100_diff1
value: 35.909285337288004
- type: nauc_ndcg_at_100_max
value: 34.60361766529284
- type: nauc_ndcg_at_100_std
value: -6.3241815724593256
- type: nauc_ndcg_at_10_diff1
value: 34.86940448346685
- type: nauc_ndcg_at_10_max
value: 34.89327996781203
- type: nauc_ndcg_at_10_std
value: -7.377912505502211
- type: nauc_ndcg_at_1_diff1
value: 47.16372543032823
- type: nauc_ndcg_at_1_max
value: 34.48620759685232
- type: nauc_ndcg_at_1_std
value: -8.881483248224074
- type: nauc_ndcg_at_20_diff1
value: 34.901006085701795
- type: nauc_ndcg_at_20_max
value: 34.766948088105174
- type: nauc_ndcg_at_20_std
value: -6.680375186500669
- type: nauc_ndcg_at_3_diff1
value: 35.16537335241684
- type: nauc_ndcg_at_3_max
value: 31.385279916552566
- type: nauc_ndcg_at_3_std
value: -8.871530629591442
- type: nauc_ndcg_at_5_diff1
value: 35.152664105492605
- type: nauc_ndcg_at_5_max
value: 33.89982336069226
- type: nauc_ndcg_at_5_std
value: -8.92795810387048
- type: nauc_precision_at_1000_diff1
value: -6.773234121047722
- type: nauc_precision_at_1000_max
value: 7.0059404092503925
- type: nauc_precision_at_1000_std
value: 4.757430160226248
- type: nauc_precision_at_100_diff1
value: -6.88009476644726
- type: nauc_precision_at_100_max
value: 10.391099419327492
- type: nauc_precision_at_100_std
value: 7.203837158689326
- type: nauc_precision_at_10_diff1
value: -0.7155570800016817
- type: nauc_precision_at_10_max
value: 21.06902041338105
- type: nauc_precision_at_10_std
value: 3.7465404459270815
- type: nauc_precision_at_1_diff1
value: 47.16372543032823
- type: nauc_precision_at_1_max
value: 34.48620759685232
- type: nauc_precision_at_1_std
value: -8.881483248224074
- type: nauc_precision_at_20_diff1
value: -4.695792117927824
- type: nauc_precision_at_20_max
value: 16.53698826752203
- type: nauc_precision_at_20_std
value: 6.681726081495262
- type: nauc_precision_at_3_diff1
value: 12.446292477522807
- type: nauc_precision_at_3_max
value: 27.622770072159884
- type: nauc_precision_at_3_std
value: -2.243774812074271
- type: nauc_precision_at_5_diff1
value: 5.851972491534291
- type: nauc_precision_at_5_max
value: 25.400246002612235
- type: nauc_precision_at_5_std
value: -0.8059534151280825
- type: nauc_recall_at_1000_diff1
value: 17.33619903703495
- type: nauc_recall_at_1000_max
value: 46.39520954734979
- type: nauc_recall_at_1000_std
value: 59.70020859630654
- type: nauc_recall_at_100_diff1
value: 9.309667388080348
- type: nauc_recall_at_100_max
value: 35.92482580062717
- type: nauc_recall_at_100_std
value: 24.021627313676188
- type: nauc_recall_at_10_diff1
value: 19.87959406394684
- type: nauc_recall_at_10_max
value: 35.00740821313158
- type: nauc_recall_at_10_std
value: -2.6455284599102784
- type: nauc_recall_at_1_diff1
value: 40.90418146524424
- type: nauc_recall_at_1_max
value: 22.269308553048656
- type: nauc_recall_at_1_std
value: -9.89932822257807
- type: nauc_recall_at_20_diff1
value: 15.028975252982061
- type: nauc_recall_at_20_max
value: 34.901307836728016
- type: nauc_recall_at_20_std
value: 2.9027647776175494
- type: nauc_recall_at_3_diff1
value: 26.13225834790859
- type: nauc_recall_at_3_max
value: 27.915627935543725
- type: nauc_recall_at_3_std
value: -8.069525359773976
- type: nauc_recall_at_5_diff1
value: 24.184086614024686
- type: nauc_recall_at_5_max
value: 32.607378848166675
- type: nauc_recall_at_5_std
value: -7.730984752196379
- type: ndcg_at_1
value: 55.969
- type: ndcg_at_10
value: 66.77499999999999
- type: ndcg_at_100
value: 70.324
- type: ndcg_at_1000
value: 70.95700000000001
- type: ndcg_at_20
value: 68.613
- type: ndcg_at_3
value: 59.256
- type: ndcg_at_5
value: 63.223
- type: precision_at_1
value: 55.969
- type: precision_at_10
value: 13.297999999999998
- type: precision_at_100
value: 1.585
- type: precision_at_1000
value: 0.167
- type: precision_at_20
value: 7.222
- type: precision_at_3
value: 32.467
- type: precision_at_5
value: 23.073
- type: recall_at_1
value: 38.964
- type: recall_at_10
value: 81.248
- type: recall_at_100
value: 95.124
- type: recall_at_1000
value: 99.30600000000001
- type: recall_at_20
value: 87.35199999999999
- type: recall_at_3
value: 62.785000000000004
- type: recall_at_5
value: 71.986
- task:
type: Classification
dataset:
name: MTEB RuReviewsClassification (default)
type: ai-forever/ru-reviews-classification
config: default
split: test
revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a
metrics:
- type: accuracy
value: 67.958984375
- type: f1
value: 67.250877785427
- type: f1_weighted
value: 67.25215701797296
- type: main_score
value: 67.958984375
- task:
type: STS
dataset:
name: MTEB RuSTSBenchmarkSTS (default)
type: ai-forever/ru-stsbenchmark-sts
config: default
split: test
revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018
metrics:
- type: cosine_pearson
value: 79.11336124619963
- type: cosine_spearman
value: 78.69157477180703
- type: euclidean_pearson
value: 77.84066073571212
- type: euclidean_spearman
value: 78.69157477180703
- type: main_score
value: 78.69157477180703
- type: manhattan_pearson
value: 77.79213012957939
- type: manhattan_spearman
value: 78.61384378877501
- type: pearson
value: 79.11336124619963
- type: spearman
value: 78.69157477180703
- task:
type: Classification
dataset:
name: MTEB RuSciBenchGRNTIClassification (default)
type: ai-forever/ru-scibench-grnti-classification
config: default
split: test
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
metrics:
- type: accuracy
value: 59.326171875
- type: f1
value: 58.01171745357119
- type: f1_weighted
value: 58.02106511480968
- type: main_score
value: 59.326171875
- task:
type: Clustering
dataset:
name: MTEB RuSciBenchGRNTIClusteringP2P (default)
type: ai-forever/ru-scibench-grnti-classification
config: default
split: test
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
metrics:
- type: main_score
value: 55.46570753380975
- type: v_measure
value: 55.46570753380975
- type: v_measure_std
value: 0.9813885872798612
- task:
type: Classification
dataset:
name: MTEB RuSciBenchOECDClassification (default)
type: ai-forever/ru-scibench-oecd-classification
config: default
split: test
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
metrics:
- type: accuracy
value: 46.328125
- type: f1
value: 44.19158709013339
- type: f1_weighted
value: 44.190957945676026
- type: main_score
value: 46.328125
- task:
type: Clustering
dataset:
name: MTEB RuSciBenchOECDClusteringP2P (default)
type: ai-forever/ru-scibench-oecd-classification
config: default
split: test
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
metrics:
- type: main_score
value: 47.28635342613908
- type: v_measure
value: 47.28635342613908
- type: v_measure_std
value: 0.7431017612993989
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 63.10139371129796
- type: cosine_spearman
value: 67.06445400504978
- type: euclidean_pearson
value: 62.74563386470613
- type: euclidean_spearman
value: 67.06445400504978
- type: main_score
value: 67.06445400504978
- type: manhattan_pearson
value: 62.540465664732395
- type: manhattan_spearman
value: 66.65899492022648
- type: pearson
value: 63.10139371129796
- type: spearman
value: 67.06445400504978
- task:
type: MultilabelClassification
dataset:
name: MTEB SensitiveTopicsClassification (default)
type: ai-forever/sensitive-topics-classification
config: default
split: test
revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2
metrics:
- type: accuracy
value: 33.0712890625
- type: f1
value: 38.063573562290024
- type: lrap
value: 49.586995442707696
- type: main_score
value: 33.0712890625
- task:
type: PairClassification
dataset:
name: MTEB TERRa (default)
type: ai-forever/terra-pairclassification
config: default
split: dev
revision: 7b58f24536063837d644aab9a023c62199b2a612
metrics:
- type: cosine_accuracy
value: 61.563517915309454
- type: cosine_accuracy_threshold
value: 75.3734290599823
- type: cosine_ap
value: 60.78861909325018
- type: cosine_f1
value: 67.25663716814158
- type: cosine_f1_threshold
value: 54.05237674713135
- type: cosine_precision
value: 50.836120401337794
- type: cosine_recall
value: 99.34640522875817
- type: dot_accuracy
value: 61.563517915309454
- type: dot_accuracy_threshold
value: 75.37343502044678
- type: dot_ap
value: 60.78861909325018
- type: dot_f1
value: 67.25663716814158
- type: dot_f1_threshold
value: 54.05237674713135
- type: dot_precision
value: 50.836120401337794
- type: dot_recall
value: 99.34640522875817
- type: euclidean_accuracy
value: 61.563517915309454
- type: euclidean_accuracy_threshold
value: 70.18057107925415
- type: euclidean_ap
value: 60.78861909325018
- type: euclidean_f1
value: 67.25663716814158
- type: euclidean_f1_threshold
value: 95.86195945739746
- type: euclidean_precision
value: 50.836120401337794
- type: euclidean_recall
value: 99.34640522875817
- type: main_score
value: 60.78861909325018
- type: manhattan_accuracy
value: 60.91205211726385
- type: manhattan_accuracy_threshold
value: 1813.1645202636719
- type: manhattan_ap
value: 60.478709337038936
- type: manhattan_f1
value: 67.10816777041943
- type: manhattan_f1_threshold
value: 2475.027275085449
- type: manhattan_precision
value: 50.66666666666667
- type: manhattan_recall
value: 99.34640522875817
- type: max_ap
value: 60.78861909325018
- type: max_f1
value: 67.25663716814158
- type: max_precision
value: 50.836120401337794
- type: max_recall
value: 99.34640522875817
- type: similarity_accuracy
value: 61.563517915309454
- type: similarity_accuracy_threshold
value: 75.3734290599823
- type: similarity_ap
value: 60.78861909325018
- type: similarity_f1
value: 67.25663716814158
- type: similarity_f1_threshold
value: 54.05237674713135
- type: similarity_precision
value: 50.836120401337794
- type: similarity_recall
value: 99.34640522875817
---
# Model Card for ru-en-RoSBERTa
The ru-en-RoSBERTa is a general text embedding model for Russian. The model is based on [ruRoBERTa](https://huggingface.co/ai-forever/ruRoberta-large) and fine-tuned with ~4M pairs of supervised, synthetic and unsupervised data in Russian and English. The tokenizer supports some English tokens from the [RoBERTa](https://huggingface.co/FacebookAI/roberta-large) tokenizer.
For more model details please refer to our [article](https://arxiv.org/abs/2408.12503).
## Usage
The model can be used as is with prefixes. It is recommended to use CLS pooling. The choice of prefix and pooling depends on the task.
We use the following basic rules to choose a prefix:
- `"search_query: "` and `"search_document: "` prefixes are for answer or relevant paragraph retrieval
- `"classification: "` prefix is for symmetric paraphrasing related tasks (STS, NLI, Bitext Mining)
- `"clustering: "` prefix is for any tasks that rely on thematic features (topic classification, title-body retrieval)
To better tailor the model to your needs, you can fine-tune it with relevant high-quality Russian and English datasets.
Below are examples of encoding texts using the Transformers and SentenceTransformers libraries.
### Transformers
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def pool(hidden_state, mask, pooling_method="cls"):
if pooling_method == "mean":
s = torch.sum(hidden_state * mask.unsqueeze(-1).float(), dim=1)
d = mask.sum(axis=1, keepdim=True).float()
return s / d
elif pooling_method == "cls":
return hidden_state[:, 0]
inputs = [
#
"classification: Он нам и <unk> не нужон ваш Интернет!",
"clustering: В Ярославской области разрешили работу бань, но без посетителей",
"search_query: Сколько программистов нужно, чтобы вкрутить лампочку?",
#
"classification: What a time to be alive!",
"clustering: Ярославским баням разрешили работать без посетителей",
"search_document: Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование.",
]
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ru-en-RoSBERTa")
model = AutoModel.from_pretrained("ai-forever/ru-en-RoSBERTa")
tokenized_inputs = tokenizer(inputs, max_length=512, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
outputs = model(**tokenized_inputs)
embeddings = pool(
outputs.last_hidden_state,
tokenized_inputs["attention_mask"],
pooling_method="cls" # or try "mean"
)
embeddings = F.normalize(embeddings, p=2, dim=1)
sim_scores = embeddings[:3] @ embeddings[3:].T
print(sim_scores.diag().tolist())
# [0.4796873927116394, 0.9409002065658569, 0.7761015892028809]
```
### SentenceTransformers
```python
from sentence_transformers import SentenceTransformer
inputs = [
#
"classification: Он нам и <unk> не нужон ваш Интернет!",
"clustering: В Ярославской области разрешили работу бань, но без посетителей",
"search_query: Сколько программистов нужно, чтобы вкрутить лампочку?",
#
"classification: What a time to be alive!",
"clustering: Ярославским баням разрешили работать без посетителей",
"search_document: Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование.",
]
# loads model with CLS pooling
model = SentenceTransformer("ai-forever/ru-en-RoSBERTa")
# embeddings are normalized by default
embeddings = model.encode(inputs, convert_to_tensor=True)
sim_scores = embeddings[:3] @ embeddings[3:].T
print(sim_scores.diag().tolist())
# [0.47968706488609314, 0.940900444984436, 0.7761018872261047]
```
or using prompts (sentence-transformers>=2.4.0):
```python
from sentence_transformers import SentenceTransformer
# loads model with CLS pooling
model = SentenceTransformer("ai-forever/ru-en-RoSBERTa")
classification = model.encode(["Он нам и <unk> не нужон ваш Интернет!", "What a time to be alive!"], prompt_name="classification")
print(classification[0] @ classification[1].T) # 0.47968706488609314
clustering = model.encode(["В Ярославской области разрешили работу бань, но без посетителей", "Ярославским баням разрешили работать без посетителей"], prompt_name="clustering")
print(clustering[0] @ clustering[1].T) # 0.940900444984436
query_embedding = model.encode("Сколько программистов нужно, чтобы вкрутить лампочку?", prompt_name="search_query")
document_embedding = model.encode("Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование.", prompt_name="search_document")
print(query_embedding @ document_embedding.T) # 0.7761018872261047
```
## Citation
```
@misc{snegirev2024russianfocusedembeddersexplorationrumteb,
title={The Russian-focused embedders' exploration: ruMTEB benchmark and Russian embedding model design},
author={Artem Snegirev and Maria Tikhonova and Anna Maksimova and Alena Fenogenova and Alexander Abramov},
year={2024},
eprint={2408.12503},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.12503},
}
```
## Limitations
The model is designed to process texts in Russian, the quality in English is unknown. Maximum input text length is limited to 512 tokens.
| [
"PARAPHRASING"
] | Non_BioNLP |
fine-tuned/FiQA2018-32000-384-gpt-4o-2024-05-13-52831585 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:fine-tuned/FiQA2018-32000-384-gpt-4o-2024-05-13-52831585",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,981,388,000 | 2024-05-29T11:16:41 | 7 | 0 | ---
datasets:
- fine-tuned/FiQA2018-32000-384-gpt-4o-2024-05-13-52831585
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-32000-384-gpt-4o-2024-05-13-52831585',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
EJinHF/autotrain-squality_bart_sparse_oracle_with_query-66614136607 | EJinHF | summarization | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:EJinHF/autotrain-data-squality_bart_sparse_oracle_with_query",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,686,736,273,000 | 2023-06-14T09:57:38 | 22 | 0 | ---
datasets:
- EJinHF/autotrain-data-squality_bart_sparse_oracle_with_query
language:
- unk
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain
co2_eq_emissions:
emissions: 0.8905213684161395
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 66614136607
- CO2 Emissions (in grams): 0.8905
## Validation Metrics
- Loss: 3.217
- Rouge1: 33.481
- Rouge2: 6.827
- RougeL: 19.276
- RougeLsum: 30.892
- Gen Len: 140.528
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/EJinHF/autotrain-squality_bart_sparse_oracle_with_query-66614136607
``` | [
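Equivalently, a minimal Python sketch using `requests` that mirrors the cURL call above (same endpoint and payload taken from the card; fill in your own API key):
```python
import requests

API_URL = "https://api-inference.huggingface.co/EJinHF/autotrain-squality_bart_sparse_oracle_with_query-66614136607"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

# Send the text to summarize as the "inputs" field, as in the cURL example
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```
 | [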
"SUMMARIZATION"
] | Non_BioNLP |
allenai/tk-instruct-3b-pos | allenai | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:Super-NaturalInstructions",
"arxiv:1910.10683",
"arxiv:2204.07705",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,651,867,070,000 | 2023-01-24T17:09:59 | 87 | 0 | ---
datasets:
- Super-NaturalInstructions
language: en
license: apache-2.0
---
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only perform the training tasks, but also generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, the task definition, demonstration examples, or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
"Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
"Definition: negate the following sentence. Input: John went to school. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True)   # model should output 'John did not go to school.'
```
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting result, you are welcome to share with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. Tk-Instruct model series were trained using 757 tasks, and mTk-Instruct series were trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encoding, we found they can usually work with other types of encodings at test time (see more in our paper).
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
ak2603/mt5-small-finetuned-Drishtants-summaries | ak2603 | summarization | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,736,179,305,000 | 2025-01-08T19:06:34 | 13 | 0 | ---
base_model: google/mt5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-Drishtants-summaries
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-Drishtants-summaries
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8276
- Rouge1: 0.3953
- Rouge2: 0.2206
- Rougel: 0.3789
- Rougelsum: 0.3822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 5.6e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
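As noted above, a hypothetical reconstruction of these settings as `Seq2SeqTrainingArguments` (dataset pipeline, model loading, and trainer wiring are omitted):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the reported hyperparameters; output_dir is an assumption
args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned-Drishtants-summaries",
    learning_rate=5.6e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
)
```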
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 24.1138 | 1.0 | 13 | 15.3479 | 0.0044 | 0.0 | 0.0043 | 0.0044 |
| 19.7323 | 2.0 | 26 | 13.7879 | 0.0044 | 0.0 | 0.0043 | 0.0044 |
| 18.329 | 3.0 | 39 | 11.7699 | 0.0042 | 0.0 | 0.0039 | 0.0042 |
| 15.8092 | 4.0 | 52 | 12.9758 | 0.0067 | 0.0 | 0.0064 | 0.0067 |
| 13.8072 | 5.0 | 65 | 8.1803 | 0.0048 | 0.0 | 0.0048 | 0.0048 |
| 11.9323 | 6.0 | 78 | 6.4151 | 0.0048 | 0.0 | 0.0048 | 0.0048 |
| 10.8486 | 7.0 | 91 | 5.3122 | 0.0067 | 0.0 | 0.0067 | 0.0067 |
| 10.2067 | 8.0 | 104 | 5.1497 | 0.0098 | 0.0 | 0.0097 | 0.0096 |
| 9.4972 | 9.0 | 117 | 4.9039 | 0.0136 | 0.0 | 0.0135 | 0.0132 |
| 8.4609 | 10.0 | 130 | 3.9617 | 0.0272 | 0.0013 | 0.0273 | 0.0269 |
| 7.2721 | 11.0 | 143 | 3.4252 | 0.0526 | 0.0093 | 0.0522 | 0.0492 |
| 5.943 | 12.0 | 156 | 3.1756 | 0.0746 | 0.0170 | 0.0640 | 0.0658 |
| 5.5122 | 13.0 | 169 | 2.9797 | 0.0649 | 0.0121 | 0.0610 | 0.0573 |
| 5.1628 | 14.0 | 182 | 2.8133 | 0.0818 | 0.0215 | 0.0738 | 0.0733 |
| 4.9023 | 15.0 | 195 | 2.6725 | 0.0798 | 0.0262 | 0.0767 | 0.0765 |
| 4.4493 | 16.0 | 208 | 2.5408 | 0.0924 | 0.0348 | 0.0881 | 0.0891 |
| 4.3145 | 17.0 | 221 | 2.4332 | 0.0914 | 0.0361 | 0.0796 | 0.0800 |
| 3.978 | 18.0 | 234 | 2.3434 | 0.0952 | 0.0422 | 0.0835 | 0.0843 |
| 3.9377 | 19.0 | 247 | 2.2749 | 0.1289 | 0.0617 | 0.1138 | 0.1137 |
| 3.6415 | 20.0 | 260 | 2.2123 | 0.1701 | 0.0698 | 0.1471 | 0.1451 |
| 3.4801 | 21.0 | 273 | 2.1490 | 0.1682 | 0.0758 | 0.1497 | 0.1480 |
| 3.5114 | 22.0 | 286 | 2.0997 | 0.1885 | 0.0858 | 0.1658 | 0.1662 |
| 3.3784 | 23.0 | 299 | 2.0567 | 0.1971 | 0.0931 | 0.1730 | 0.1729 |
| 3.2501 | 24.0 | 312 | 2.0291 | 0.1969 | 0.0952 | 0.1752 | 0.1753 |
| 3.208 | 25.0 | 325 | 2.0057 | 0.1959 | 0.0883 | 0.1746 | 0.1753 |
| 3.0992 | 26.0 | 338 | 1.9769 | 0.1984 | 0.0961 | 0.1759 | 0.1762 |
| 2.9069 | 27.0 | 351 | 1.9474 | 0.1938 | 0.0975 | 0.1734 | 0.1734 |
| 3.0772 | 28.0 | 364 | 1.9259 | 0.1897 | 0.0978 | 0.1714 | 0.1710 |
| 2.8778 | 29.0 | 377 | 1.9098 | 0.1766 | 0.0934 | 0.1584 | 0.1582 |
| 2.8723 | 30.0 | 390 | 1.8937 | 0.1752 | 0.0860 | 0.1551 | 0.1551 |
| 2.8102 | 31.0 | 403 | 1.8786 | 0.1808 | 0.0889 | 0.1610 | 0.1603 |
| 2.8453 | 32.0 | 416 | 1.8660 | 0.1971 | 0.0919 | 0.1745 | 0.1752 |
| 2.925 | 33.0 | 429 | 1.8544 | 0.2724 | 0.1441 | 0.2562 | 0.2564 |
| 2.8222 | 34.0 | 442 | 1.8468 | 0.3749 | 0.2099 | 0.3583 | 0.3592 |
| 2.7711 | 35.0 | 455 | 1.8414 | 0.3950 | 0.2216 | 0.3742 | 0.3785 |
| 2.8176 | 36.0 | 468 | 1.8367 | 0.3953 | 0.2206 | 0.3789 | 0.3822 |
| 2.7044 | 37.0 | 481 | 1.8321 | 0.3947 | 0.2201 | 0.3781 | 0.3817 |
| 2.7696 | 38.0 | 494 | 1.8295 | 0.3953 | 0.2206 | 0.3789 | 0.3822 |
| 2.6015 | 39.0 | 507 | 1.8281 | 0.3953 | 0.2206 | 0.3789 | 0.3822 |
| 2.6849 | 40.0 | 520 | 1.8276 | 0.3953 | 0.2206 | 0.3789 | 0.3822 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
| [
"SUMMARIZATION"
] | Non_BioNLP |
hezarai/bert-fa-ner-arman | hezarai | token-classification | [
"hezar",
"token-classification",
"fa",
"dataset:hezarai/arman-ner",
"base_model:hezarai/bert-base-fa",
"base_model:finetune:hezarai/bert-base-fa",
"region:us"
] | 1,691,150,663,000 | 2024-11-14T08:26:23 | 19 | 0 | ---
base_model:
- hezarai/bert-base-fa
datasets:
- hezarai/arman-ner
language:
- fa
library_name: hezar
pipeline_tag: token-classification
tags:
- token-classification
- hezar
---
ParsBERT model fine-tuned on the [Arman-NER](https://huggingface.co/datasets/hezarai/arman-ner) dataset for Named Entity Recognition.
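A usage sketch with the Hezar library; the `Model.load`/`predict` calls and the Persian input below are assumptions based on Hezar's documented pattern:
```python
from hezar.models import Model

# Load the fine-tuned NER model from the Hub (assumes `pip install hezar`)
model = Model.load("hezarai/bert-fa-ner-arman")
outputs = model.predict(["هوش مصنوعی"])  # hypothetical Persian input
print(outputs)
```
 | [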
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
mesolitica/t5-tiny-bahasa-cased | mesolitica | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,665,070,188,000 | 2022-10-06T15:35:23 | 8 | 0 | ---
language: ms
---
# t5-tiny-bahasa-cased
T5 tiny pretrained as a language model for Malay, covering both standard and local (colloquial) Malay.
## Pretraining Corpus
The `t5-tiny-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
11. Social media normalization.
12. Noisy EN-MS translation.
13. Noisy MS-EN translation.
Preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository https://github.com/google-research/text-to-text-transfer-transformer, on a v3-8 TPU.
- All steps can be reproduced from here: https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, for semantic similarity.
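A minimal sketch of prefix usage, assuming this checkpoint works with the standard Hugging Face seq2seq API (the prefix choice and example input are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mesolitica/t5-tiny-bahasa-cased")
model = AutoModelForSeq2SeqLM.from_pretrained("mesolitica/t5-tiny-bahasa-cased")

# EN-MS translation via the corresponding prefix
input_ids = tokenizer("terjemah Inggeris ke Melayu: How are you today?", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
 | [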
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
TUM/GottBERT_filtered_base_best | TUM | null | [
"pytorch",
"safetensors",
"roberta",
"RoBERTa",
"GottBERT",
"BERT",
"de",
"license:mit",
"region:us"
] | 1,729,700,809,000 | 2024-12-10T10:16:27 | 162 | 0 | ---
language:
- de
license: mit
tags:
- RoBERTa
- GottBERT
- BERT
---
# GottBERT: A pure German language model
GottBERT is the first German-only RoBERTa model, pre-trained on the German portion of the first released OSCAR dataset. This model aims to provide enhanced natural language processing (NLP) performance for the German language across various tasks, including Named Entity Recognition (NER), text classification, and natural language inference (NLI). GottBERT has been developed in two versions: a **base model** and a **large model**, tailored specifically for German-language tasks.
- **Model Type**: RoBERTa
- **Language**: German
- **Base Model**: 12 layers, 125 million parameters
- **Large Model**: 24 layers, 355 million parameters
- **License**: MIT
---
## Pretraining Details
- **Corpus**: German portion of the OSCAR dataset (Common Crawl).
- **Data Size**:
- Unfiltered: 145GB (~459 million documents)
- Filtered: 121GB (~382 million documents)
- **Preprocessing**: Filtering included correcting encoding errors (e.g., erroneous umlauts), removing spam and non-German documents using language detection and syntactic filtering.
### Filtering Metrics
- **Stopword Ratio**: Detects spam and meaningless content.
- **Punctuation Ratio**: Detects abnormal punctuation patterns.
- **Upper Token Ratio**: Identifies documents with excessive uppercase tokens (often noisy content).
## **Training Configuration**
- **Framework**: [Fairseq](https://github.com/scheiblr/fairseq/tree/TPUv4_very_old)
- **Hardware**:
- Base Model: 256 TPUv3 pod/128 TPUv4 pod
- Large Model: 128 TPUv4 pod
- **Training Time**:
- Base Model: 1.2 days
- Large Model: 5.7 days
- **Batch Size**: 8k tokens
- **Learning Rate**:
- Base: Peak LR = 0.0004
- Large: Peak LR = 0.00015
- **Training Iterations**: 100k steps with a 10k warm-up phase
## Evaluation and Results
GottBERT was evaluated across various downstream tasks:
- **NER**: CoNLL 2003, GermEval 2014
- **Text Classification**: GermEval 2018 (coarse & fine), 10kGNAD
- **NLI**: German subset of XNLI
Metrics:
- **NER and Text Classification**: F1 Score
- **NLI**: Accuracy
Details:
- **bold** values indicate the best performing model within one architecture (base, large); <ins>underscored</ins> values the second best.
| Model | Accuracy NLI | GermEval\_14 F1 | CoNLL F1 | Coarse F1 | Fine F1 | 10kGNAD F1 |
|-------------------------------------|--------------|----------------|----------|-----------|---------|------------|
| [GottBERT_base_best](https://huggingface.co/TUM/GottBERT_base_best) | 80.82 | 87.55 | <ins>85.93</ins> | 78.17 | 53.30 | 89.64 |
| [GottBERT_base_last](https://huggingface.co/TUM/GottBERT_base_last) | 81.04 | 87.48 | 85.61 | <ins>78.18</ins> | **53.92** | 90.27 |
| [GottBERT_filtered_base_best](https://huggingface.co/TUM/GottBERT_filtered_base_best) | 80.56 | <ins>87.57</ins> | **86.14** | **78.65** | 52.82 | 89.79 |
| [GottBERT_filtered_base_last](https://huggingface.co/TUM/GottBERT_filtered_base_last) | 80.74 | **87.59** | 85.66 | 78.08 | 52.39 | 89.92 |
| GELECTRA_base | **81.70** | 86.91 | 85.37 | 77.26 | 50.07 | 89.02 |
| GBERT_base | 80.06 | 87.24 | 85.16 | 77.37 | 51.51 | **90.30** |
| dbmdzBERT | 68.12 | 86.82 | 85.15 | 77.46 | 52.07 | **90.34** |
| GermanBERT | 78.16 | 86.53 | 83.87 | 74.81 | 47.78 | 90.18 |
| XLM-R_base | 79.76 | 86.14 | 84.46 | 77.13 | 50.54 | 89.81 |
| mBERT | 77.03 | 86.67 | 83.18 | 73.54 | 48.32 | 88.90 |
| [GottBERT_large](https://huggingface.co/TUM/GottBERT_large) | 82.46 | 88.20 | <ins>86.78</ins> | 79.40 | 54.61 | 90.24 |
| [GottBERT_filtered_large_best](https://huggingface.co/TUM/GottBERT_filtered_large_best) | 83.31 | 88.13 | 86.30 | 79.32 | 54.70 | 90.31 |
| [GottBERT_filtered_large_last](https://huggingface.co/TUM/GottBERT_filtered_large_last) | 82.79 | <ins>88.27</ins> | 86.28 | 78.96 | 54.72 | 90.17 |
| GELECTRA_large | **86.33** | <ins>88.72</ins> | <ins>86.78</ins> | **81.28** | <ins>56.17</ins> | **90.97** |
| GBERT_large | <ins>84.21</ins> | <ins>88.72</ins> | **87.19** | <ins>80.84</ins> | **57.37** | <ins>90.74</ins> |
| XLM-R_large | 84.07 | **88.83** | 86.54 | 79.05 | 55.06 | 90.17 |
## Model Architecture
- **Base Model**: 12 layers, 125M parameters, 52k token vocabulary.
- **Large Model**: 24 layers, 355M parameters, 52k token vocabulary.
### Tokenizer
- **Type**: GPT-2 Byte-Pair Encoding (BPE)
- **Vocabulary Size**: 52k subword tokens
- **Trained on**: 40GB subsample of the unfiltered German OSCAR corpus.
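A minimal feature-extraction sketch, assuming the checkpoints load through the standard Hugging Face Auto classes (the repo ships PyTorch/safetensors RoBERTa weights); the example sentence is illustrative:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("TUM/GottBERT_filtered_base_best")
model = AutoModel.from_pretrained("TUM/GottBERT_filtered_base_best")

inputs = tokenizer("Der schnelle braune Fuchs springt über den faulen Hund.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768) for the base model
```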
## Limitations
- **Filtered vs Unfiltered Data**: Minor improvements seen with filtered data, but not significant enough to justify filtering in every case.
- **Computation Limitations**: Fixed memory allocation on TPUs required processing data as a single stream, unlike GPU training which preserves document boundaries. Training was performed in 32-bit mode due to framework limitations, increasing memory usage.
## Fairseq Checkpoints
Get the fairseq checkpoints [here](https://drive.proton.me/urls/CFSGE8ZK9R#1F1G727lv77k).
## Citations
If you use GottBERT in your research, please cite the following paper:
```bibtex
@inproceedings{scheible-etal-2024-gottbert,
title = "{G}ott{BERT}: a pure {G}erman Language Model",
author = "Scheible, Raphael and
Frei, Johann and
Thomczyk, Fabian and
He, Henry and
Tippmann, Patric and
Knaus, Jochen and
Jaravine, Victor and
Kramer, Frank and
Boeker, Martin",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.1183",
pages = "21237--21250",
}
``` | [
"NAMED_ENTITY_RECOGNITION",
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_qnli | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,675,030,854,000 | 2023-01-29T23:37:48 | 149 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_qnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- type: accuracy
value: 0.615595826468973
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_qnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9573
- Accuracy: 0.6156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0984 | 1.0 | 819 | 0.9626 | 0.6220 |
| 1.0171 | 2.0 | 1638 | 0.9573 | 0.6156 |
| 0.9717 | 3.0 | 2457 | 0.9651 | 0.6105 |
| 0.9377 | 4.0 | 3276 | 0.9713 | 0.6024 |
| 0.9132 | 5.0 | 4095 | 0.9812 | 0.5988 |
| 0.89 | 6.0 | 4914 | 1.0108 | 0.5982 |
| 0.8683 | 7.0 | 5733 | 1.0290 | 0.5914 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Alireza1044/mobilebert_QNLI | Alireza1044 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,655,222,052,000 | 2022-06-14T19:54:02 | 103 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: qnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- type: accuracy
value: 0.9068277503203368
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3731
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
benyong/testmodel | benyong | fill-mask | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-11-07T01:35:56 | 115 | 0 | ---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a toy sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
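The toy sketch of that 80/10/10 rule; real implementations operate on token IDs and also emit prediction labels, which are omitted here:
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Apply BERT-style masking to a list of string tokens."""
    masked = list(tokens)
    for i in range(len(tokens)):
        if random.random() < mlm_prob:      # 15% of tokens are selected
            r = random.random()
            if r < 0.8:                     # 80% of those: replace with [MASK]
                masked[i] = mask_token
            elif r < 0.9:                   # 10%: replace with a random token
                masked[i] = random.choice(vocab)
            # remaining 10%: leave the token unchanged
    return masked
```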
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
anantonios9/distilbert-base-uncased-finetuned-clinc | anantonios9 | text-classification | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,694,431,563,000 | 2023-09-18T09:09:37 | 7 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.6164516129032258
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0325
- Accuracy: 0.6165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.7629 | 1.0 | 120 | 4.3585 | 0.4697 |
| 4.1927 | 2.0 | 240 | 4.0325 | 0.6165 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RyangRyang/distilbert-base-uncased-finetuned-emotion | RyangRyang | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,691,656,166,000 | 2023-08-10T10:10:13 | 13 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: f1
value: 0.9192696693027332
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2198
- Accuracy: 0.9195
- F1: 0.9193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| 0.8401 | 1.0 | 250 | 0.3257 | 0.906 | 0.9036 |
| 0.2584 | 2.0 | 500 | 0.2198 | 0.9195 | 0.9193 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Ejafa/llama_7B | Ejafa | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,681,789,075,000 | 2023-04-18T03:50:06 | 25 | 5 | ---
license: other
---
This LLaMA version resolves the EOS token issues.
This is under a special license, please see the LICENSE file for details.
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format.
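For reference, a minimal generation sketch using the Transformers LLaMA classes, assuming you already have access to the converted weights in this repository (the prompt is illustrative):
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("Ejafa/llama_7B")
model = LlamaForCausalLM.from_pretrained("Ejafa/llama_7B")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding; the base model is not instruction-tuned, so it simply continues the text
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```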
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-efi-fi | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"efi",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:28:42 | 25 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-efi-fi
* source languages: efi
* target languages: fi
* OPUS readme: [efi-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.eval.txt)
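A minimal translation sketch using the Marian classes in Transformers, the standard pattern for OPUS-MT checkpoints (the input sentence is a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-efi-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["..."]  # replace with real Efik input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```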
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.fi | 23.6 | 0.450 |
| [
"TRANSLATION"
] | Non_BioNLP |
Satwik11/Microsoft-phi-4-Instruct-AutoRound-GPTQ-4bit | Satwik11 | null | [
"safetensors",
"phi3",
"custom_code",
"en",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:mit",
"4-bit",
"gptq",
"region:us"
] | 1,736,441,117,000 | 2025-01-10T07:19:24 | 45 | 1 | ---
base_model:
- microsoft/phi-4
language:
- en
license: mit
---
# Model Card for Microsoft-phi-4-Instruct-AutoRound-GPTQ-4bit
## Model Overview
**Model Name**: Microsoft-phi-4-Instruct-AutoRound-GPTQ-4bit
**Model Type**: Instruction-tuned, Quantized GPT-4-based language model
**Quantization**: GPTQ 4-bit
**Author**: Satwik11
**Hosted on**: Hugging Face
## Description
This model is a quantized version of the Microsoft phi-4 Instruct model, designed to deliver high performance while maintaining computational efficiency. By leveraging the GPTQ 4-bit quantization method, it enables deployment in environments with limited resources while retaining a high degree of accuracy.
The model is fine-tuned for instruction-following tasks, making it ideal for applications in conversational AI, question answering, and general-purpose text generation.
## Key Features
- **Instruction-tuned**: Fine-tuned to follow human-like instructions effectively.
- **Quantized for Efficiency**: Uses GPTQ 4-bit quantization to reduce memory requirements and inference latency.
- **Pre-trained Base**: Built on the Microsoft phi-4 framework, ensuring state-of-the-art performance on NLP tasks.
## Use Cases
- Chatbots and virtual assistants.
- Summarization and content generation.
- Research and educational applications.
- Semantic search and knowledge retrieval.
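### Example Usage
A minimal inference sketch (untested; assumes a GPTQ-capable stack such as `optimum`/`auto-gptq` plus `accelerate`, and that you are comfortable trusting the repo's custom code):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Satwik11/Microsoft-phi-4-Instruct-AutoRound-GPTQ-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # requires `accelerate`
    trust_remote_code=True,  # the repo is tagged custom_code
)

inputs = tokenizer("Explain 4-bit quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```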
## Model Details
### Architecture
- **Base Model**: Microsoft phi-4
- **Quantization Technique**: GPTQ (4-bit)
- **Language**: English
- **Training Objective**: Instruction-following fine-tuning | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-sn-es | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T12:04:20 | 39 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-sn-es
* source languages: sn
* target languages: es
* OPUS readme: [sn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.eval.txt)
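A minimal usage sketch with the `transformers` translation pipeline (untested here; the source sentence is a placeholder):
```python
from transformers import pipeline

# Shona-to-Spanish translation with this checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sn-es")
print(translator("<Shona source sentence>"))
```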
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.es | 32.5 | 0.509 |
| [
"TRANSLATION"
] | Non_BioNLP |
Johnesss/Toxic-Comment-Classification | Johnesss | text-classification | [
"keras",
"tf-keras",
"toxic",
"comment",
"toxic comment",
"text-classification",
"en",
"region:us"
] | 1,730,365,272,000 | 2024-11-01T09:26:39 | 0 | 1 | ---
language:
- en
library_name: keras
pipeline_tag: text-classification
tags:
- toxic
- comment
- toxic comment
---
## Model description
This model is used for text classification with toxic and non-toxic labels.
## Intended uses & limitations
If you want to reuse the model, you can copy this snippet:
```
from huggingface_hub import from_pretrained_keras

# Load the Keras model from the Hub.
reloaded_model = from_pretrained_keras('Johnesss/Toxic-Comment-Classification')

# x_testing and test_df are assumed to exist already: x_testing holds the
# preprocessed/tokenized comments, test_df holds the raw comment_text column.
y_testing = reloaded_model.predict(x_testing, verbose=1, batch_size=32)
test_df['Toxic'] = ['Not Toxic' if x < 0.5 else 'Toxic' for x in y_testing]
test_df[['comment_text', 'Toxic']].head(20)
```
## Training and evaluation data
Full details are available in the accompanying .ipynb notebook.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
AdapterHub/roberta-base-pf-comqa | AdapterHub | question-answering | [
"adapter-transformers",
"question-answering",
"roberta",
"en",
"dataset:com_qa",
"arxiv:2104.08247",
"region:us"
] | 1,646,263,744,000 | 2021-11-15T10:37:28 | 10 | 0 | ---
datasets:
- com_qa
language:
- en
tags:
- question-answering
- roberta
- adapter-transformers
---
# Adapter `AdapterHub/roberta-base-pf-comqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-comqa", source="hf")
model.active_adapters = adapter_name
```
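After activation, extractive QA inference can be sketched as follows (a minimal sketch, assuming the loaded head returns standard start/end logits):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."

inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)  # `model` from the snippet above

# Pick the most likely answer span from the start/end logits.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```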
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
RichardErkhov/PragmaticPete_-_llama3.2inst-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2204.05149",
"arxiv:2405.16406",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,739,931,086,000 | 2025-02-19T02:34:06 | 584 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3.2inst - GGUF
- Model creator: https://huggingface.co/PragmaticPete/
- Original model: https://huggingface.co/PragmaticPete/llama3.2inst/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3.2inst.Q2_K.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q2_K.gguf) | Q2_K | 0.54GB |
| [llama3.2inst.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [llama3.2inst.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [llama3.2inst.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [llama3.2inst.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [llama3.2inst.Q3_K.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q3_K.gguf) | Q3_K | 0.64GB |
| [llama3.2inst.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [llama3.2inst.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [llama3.2inst.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [llama3.2inst.Q4_0.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q4_0.gguf) | Q4_0 | 0.72GB |
| [llama3.2inst.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [llama3.2inst.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [llama3.2inst.Q4_K.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q4_K.gguf) | Q4_K | 0.75GB |
| [llama3.2inst.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [llama3.2inst.Q4_1.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q4_1.gguf) | Q4_1 | 0.77GB |
| [llama3.2inst.Q5_0.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q5_0.gguf) | Q5_0 | 0.83GB |
| [llama3.2inst.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [llama3.2inst.Q5_K.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q5_K.gguf) | Q5_K | 0.85GB |
| [llama3.2inst.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [llama3.2inst.Q5_1.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q5_1.gguf) | Q5_1 | 0.89GB |
| [llama3.2inst.Q6_K.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q6_K.gguf) | Q6_K | 0.95GB |
| [llama3.2inst.Q8_0.gguf](https://huggingface.co/RichardErkhov/PragmaticPete_-_llama3.2inst-gguf/blob/main/llama3.2inst.Q8_0.gguf) | Q8_0 | 1.23GB |
Original model description:
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3.2
extra_gated_prompt: >-
### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
Llama 3.2 Version Release Date: September 25, 2024
“Agreement” means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
distributed by Meta at https://llama.meta.com/doc/overview.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are
entering into this Agreement on such person or entity’s behalf), of the age required under
applicable laws, rules or regulations to provide legal consent and that has legal authority
to bind your employer or such other person or entity if you are entering in this Agreement
on their behalf.
“Llama 3.2” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://www.llama.com/llama-downloads.
“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or Switzerland)
and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof),
or a product or service (including another AI model) that contains any of them, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
on a related website, user interface, blogpost, about page, or product documentation. If you use the
Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the
following attribution notice within a “Notice” text file distributed as a part of such copies:
“Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
incorporated by reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates,
is greater than 700 million monthly active users in the preceding calendar month, you must request
a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
California without regard to choice of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
any dispute arising out of this Agreement.
### Llama 3.2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”).
The most recent copy of this policy can be found at
[https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
#### Prohibited Uses
We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
9. Guns and illegal weapons (including weapon development)
10. Illegal drugs and regulated/controlled substances
11. Operation of critical infrastructure, transportation technologies, or heavy machinery
12. Self-harm or harm to others, including suicide, cutting, and eating disorders
13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
16. Generating, promoting, or further distributing spam
17. Impersonating another individual without consent, authorization, or legal right
18. Representing that the use of Llama 3.2 or outputs are human-generated
19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
**Data Freshness:** The pretraining data has a cutoff of December 2023\.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with the [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weight and 8-bit per token dynamic quantization for activation.
- Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer. A toy sketch of the groupwise weight scheme follows this list.
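As a rough illustration only (this is not the ExecuTorch kernel code; the function names and symmetric-range choice are assumptions for this sketch), 4-bit groupwise weight quantization with group size 32 can be written in PyTorch as:
```python
import torch

def quantize_weights_4bit_groupwise(w: torch.Tensor, group_size: int = 32):
    # w: (out_features, in_features); in_features must be divisible by group_size.
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # One scale per group; symmetric 4-bit integers in [-8, 7].
    scale = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(groups / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor, shape):
    # Recover an approximate float weight for reference.
    return (q.float() * scale).reshape(shape)
```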
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
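As a framework-agnostic sketch of the LoRA part of this recipe (this is not Meta's training code; the class and hyperparameters are illustrative), a frozen base layer plus a trainable BF16 low-rank update could look like:
```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Frozen base weight plus a trainable low-rank update, mirroring the
    # "freeze backbone, train adapters in BF16" recipe described above.
    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, r, bias=False, dtype=torch.bfloat16)
        self.lora_b = nn.Linear(r, base.out_features, bias=False, dtype=torch.bfloat16)
        nn.init.zeros_(self.lora_b.weight)  # start as an identity update
        self.scaling = alpha / r

    def forward(self, x):
        lora_out = self.lora_b(self.lora_a(x.to(torch.bfloat16))) * self.scaling
        return self.base(x) + lora_out
```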
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU as the backend, on an Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnotes:*
- *Decode (tokens/second) measures how quickly the model keeps generating. Higher is better.*
- *Time-to-first-token (TTFT) measures how fast the first token is generated for a given prompt. Lower is better.*
- *Prefill (tokens/second) is the prompt length divided by TTFT, i.e., prompt-processing throughput. Higher is better.*
- *Model size \- the size of the model, measured by the PTE file, a binary file format for ExecuTorch.*
- *RSS size \- memory usage in resident set size (RSS).*
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
| [
"SUMMARIZATION"
] | Non_BioNLP |
Siddharth63/pubmedul2-tiny-nl6 | Siddharth63 | text2text-generation | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"dataset:Siddharth63/biological_dataset",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,694,615,106,000 | 2023-11-15T10:57:10 | 76 | 0 | ---
datasets:
- Siddharth63/biological_dataset
license: apache-2.0
---
# Bioul2-tiny-nl6
A T5 model pretrained on a biological dataset using a UL2 (Mixture-of-Denoisers) objective. The T5 model was introduced in [this paper](https://arxiv.org/abs/1910.10683) and first released on [this page](https://github.com/google-research/text-to-text-transfer-transformer). The UL2 objective was introduced in [the UL2 paper](https://arxiv.org/abs/2205.05131).
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Bioul2 is a transformers model pretrained on a very large corpus of biological data (25 million abstracts) in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
This model used the T5 v1.1 improvements compared to the original T5 model during the pretraining:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU (see here)
- Dropout was turned off in pretraining (quality win); dropout should be re-enabled during fine-tuning
- Pretrained on the self-supervised objective only, without mixing in the downstream tasks
- No parameter sharing between the embedding and classifier layers
This model also used the "efficient" T5 architecture findings presented in this paper. In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the t5-efficient-tiny-nl6 architecture's layer depth, which means both the encoder and the decoder have 6 transformer layers, compared to the original T5 "tiny" model's architecture of 4 transformer layers.
In total, this model has 31 million parameters.
## UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training denoising task. During the pretraining, a paradigm token is inserted to the input ([NLU] for R-denoising, [NLG] for X-denoising, or [S2S] for S-denoising) indicating the denoising task at hand. Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. Note: you most likely need to fine-tune these T5/UL2 models without mixed precision, so fine-tune them in full fp32 precision. You can also find more fine-tuning tips from here, for example.
Note: For fine-tuning, most likely you can get better results if you insert a prefix token of [NLU], [NLG], or [S2S] to your input texts. For general language understanding fine-tuning tasks, you could use the [NLU] token. For GPT-style causal language generation, you could use the [S2S] token. The token [NLG] of the X-denoising pretrain task is somewhat mix between the language understanding and causal language generation so the token [NLG] could maybe be used for language generation fine-tuning too.
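For example, a minimal sketch of preparing a fine-tuning input with a mode token (untested; the example sentence is illustrative):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo = "Siddharth63/pubmedul2-tiny-nl6"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = T5ForConditionalGeneration.from_pretrained(repo)

# Prepend the mode token matching the downstream task before fine-tuning:
# [NLU] for understanding tasks, [NLG] for mixed, [S2S] for causal generation.
text = "[NLU] Aspirin irreversibly inhibits cyclooxygenase enzymes."
inputs = tokenizer(text, return_tensors="pt")
```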
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective, associated task definitions and their guidance. Thanks to [Yeb Havinga](https://huggingface.co/yhavinga) for helping me get started with the t5x framework. | [
"TEXT_CLASSIFICATION"
] | BioNLP |
dmanary-pronavigator/gemma-2-27b-it-exl2-5.0bpw | dmanary-pronavigator | text-generation | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | 1,722,737,207,000 | 2024-07-07T02:50:14 | 7 | 0 | ---
library_name: transformers
license: gemma
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
> [!IMPORTANT]
> Given model instabilities with SDPA/FA2, model inference uses `eager` attention by default.
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcast to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
> [!WARNING]
> Gemma 2 is currently incompatible with Flash Attention/SDPA; using it might result in unreliable generations. Use at your own risk.
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
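For example, a minimal sketch of building the same single-turn prompt by hand (equivalent to the `apply_chat_template` call above) looks like this:

```python
# Build the single-turn prompt manually, following the turn format shown above.
user_message = "Write a hello world program"
prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```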
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny; input data pre-processing is described and posterior evaluations
    are reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1024 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 1,735,927,213,000 | 2025-01-03T18:00:18 | 0 | 0 | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1024
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task1024_pib_translation_hindi_english
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
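In the absence of official instructions, a minimal sketch of loading this adapter on top of its base model with the `peft` library (the exact loading pattern is an assumption, not documented by the authors):

```python
# pip install peft transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1024"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the task-specific LoRA adapter to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```

From there, generation works exactly as with the base model.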
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task1024_pib_translation_hindi_english sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{brüelgabrielsson2024compressserveservingthousands,
      title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
      author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
      year={2024},
      eprint={2407.00066},
      archivePrefix={arXiv},
      primaryClass={cs.DC},
      url={https://arxiv.org/abs/2407.00066},
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"TRANSLATION"
] | Non_BioNLP |
kurianbenoy/distilbert-base-uncased-finetuned-imdb | kurianbenoy | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2022-07-21T00:50:02 | 138 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- type: accuracy
value: 0.923
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3073
- Accuracy: 0.923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
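
As an illustrative sketch only (the exact training script is not provided with this card), these hyperparameters map onto `transformers` `TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above;
# Adam betas/epsilon match the library defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```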
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2744 | 1.0 | 1563 | 0.2049 | 0.921 |
| 0.1572 | 2.0 | 3126 | 0.2308 | 0.923 |
| 0.0917 | 3.0 | 4689 | 0.3073 | 0.923 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
spacemanidol/flan-t5-large-4-4-cnndm | spacemanidol | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,677,627,350,000 | 2023-03-11T17:58:46 | 13 | 0 | ---
datasets:
- cnn_dailymail
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: large-4-4
results:
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail 3.0.0
type: cnn_dailymail
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- type: rouge
value: 43.6915
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# large-4-4
This model is a fine-tuned version of [cnn/large-4-4/](https://huggingface.co/cnn/large-4-4/) on the cnn_dailymail 3.0.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3258
- Rouge1: 43.6915
- Rouge2: 20.9012
- Rougel: 31.3004
- Rougelsum: 40.8592
- Gen Len: 71.8499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
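
A hypothetical sketch of these settings as `Seq2SeqTrainingArguments` (the actual training script is not included in this card):

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="large-4-4",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 64
    lr_scheduler_type="constant",
    num_train_epochs=3.0,
    predict_with_generate=True,  # assumption: needed for ROUGE evaluation
)
```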
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| [
"SUMMARIZATION"
] | Non_BioNLP |
omid-ebi/mT5_base_translation_English_to_Persian-Farsi | omid-ebi | translation | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"translation",
"en",
"dataset:persiannlp/parsinlu_translation_en_fa",
"arxiv:1910.09700",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,719,428,571,000 | 2024-07-13T12:39:48 | 283 | 3 | ---
datasets:
- persiannlp/parsinlu_translation_en_fa
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: translation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Machine Translation (ترجمهی ماشینی)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for machine translation (English -> Persian).
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [English, Persian]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [google/mt5-base]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
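In the meantime, a minimal inference sketch using the standard `transformers` seq2seq API (the exact input format expected by this checkpoint is an assumption; it may require a task prefix):

```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model_id = "omid-ebi/mT5_base_translation_English_to_Persian-Farsi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

# Assumption: the model accepts raw English text without a task prefix.
text = "The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```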
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"TRANSLATION"
] | Non_BioNLP |
mesolitica/translation-t5-small-standard-bahasa-cased-v2 | mesolitica | text2text-generation | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,705,212,172,000 | 2024-01-23T13:09:09 | 278 | 0 | ---
language:
- ms
---
# Noisy Translation Small T5
Trained on a 1536 context length; able to translate Malay, pasar Malay (social-media texts or local context), English, Manglish, Javanese, Banjarese, and Indonesian to the target language. It is also able to maintain the text structure as-is and translate only the necessary text, e.g., programming code.
Added more coding translation data, noisy b.cari.com.my translations, noisy ChatGPT4 translations, and heavy post-filtering.
## how-to
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
'mesolitica/translation-t5-small-standard-bahasa-cased-v2',
use_fast=False
)
model = T5ForConditionalGeneration.from_pretrained(
'mesolitica/translation-t5-small-standard-bahasa-cased-v2'
)
s = 'Hai, ada yang bisa saya bantu?'
input_ids = tokenizer.encode(f'terjemah ke Melayu: {s}', return_tensors = 'pt')
outputs = model.generate(input_ids, max_length = 100)
all_special_ids = [0, 1, 2]  # strip special token ids (pad, eos, unk) from the output
outputs = [i for i in outputs[0] if i not in all_special_ids]
print(tokenizer.decode(outputs, spaces_between_special_tokens = False))
``` | [
"TRANSLATION"
] | Non_BioNLP |
AlyGreo/fine-tuned-text-summarization | AlyGreo | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | 1,699,811,440,000 | 2023-11-12T17:55:23 | 0 | 0 | ---
base_model: google/flan-t5-base
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fine-tuned-text-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-text-summarization
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| [
"SUMMARIZATION"
] | Non_BioNLP |
cbpuschmann/klimacoder2_v0.4 | cbpuschmann | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:deutsche-telekom/gbert-large-paraphrase-cosine",
"base_model:finetune:deutsche-telekom/gbert-large-paraphrase-cosine",
"region:us"
] | 1,742,099,085,000 | 2025-03-16T04:25:29 | 12 | 0 | ---
base_model: deutsche-telekom/gbert-large-paraphrase-cosine
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Die Forderungen sind landesweit die gleichen. Es geht um die Wiedereinführung
eines 9-Euro-Tickets und ein Tempolimit von 100 km/h auf den Autobahnen. Außerdem
fordern wir die Einführung eines Gesellschaftsrats. Dieser soll Maßnahmen erarbeiten,
wie Deutschland bis 2030 emissionsfrei wird. Die Lösungsansätze sollen von der
Bundesregierung anerkannt und in der Politik umgesetzt werden.
- text: Die aktivist bezeichnen sich als ›DLG›. Sie fordern von Bundeswirtschaftsminister
Robert Habeck Grüne, auf fossile Energie zu verzichten. Zudem verlangen sie eine
Lebenserklärung der Rektorin der Leipziger Universität. Diese soll sich ›offiziell,
öffentlich und gerichtet an Robert Habeck gegen den Bau und die Finanzierung neuer
fossiler Infrastruktur aussprechen. Insbesondere gegen neue Ölbohrungen in der
Nordsee sowie neue Flüssiggas-Terminals›, hieß es in einer Mitteilung der Gruppe
am Donnerstag.
- text: Am Montag war es erneut das Amtsgericht Tiergarten, in dem ein Anwalt die
Aktionen der ›DLG› mit einem fragwürdigen historischen Vergleich rechtfertigte.
Verhandelt wurde an dem Tag gegen den 63-jährigen Winfried L. Wegen fünf Straßenblockaden,
bei denen er teilweise seine Hand auf der Straße angeklebt hatte, musste sich
L. wegen der Vorwürfe Nötigung und Widerstand gegen Vollstreckungsbeamte verantworten.
- text: 'In einer am Morgen verbreiteten Mitteilung begründete die Gruppe ihre Aktion.
Mit der Sitzblockade habe der "fossile Alltag" auf der Straße unterbrochen werden
sollen. Auf Transparenten seien Forderungen deutlich gemacht worden: ein 9-Euro-Ticket
für alle, ein Tempolimit von 100 Stundenkilometern auf Autobahnen und die Bildung
eines Gesellschaftsrats zum Thema Ende der fossilen Brennstoffe bis 2030.'
- text: 'aktivist feiern Festival für mehr Klimaschutz Xanten wer Die Ortsgruppe Xanten
von FFF hat am Freitagnachmittag wieder für mehr Klimaschutz protestiert – aber
anders als sonst. Die aktivist organisierten an der Kriemhildmühle im Kurpark
ein Festival mit Musik, Essen, Getränken und Vorträgen. Viele Menschen kamen,
genossen das schöne Wetter und die entspannte Atmosphäre, lauschten den Liedern
und sangen mit. Ansprachen gab es auch: Seit Jahrzehnten warne die Wissenschaft
vor den Folgen des Klimawandels, trotzdem unternehme die Politik zu wenig, und
die Bevölkerung müsse unter den Folgen wie Dürren, Überschwemmungen und Hitze
leiden, kritisierte Frederik Krohn von der Xantener Ortsgruppe der Klimaschutzbewegung.
Deshalb gehe FFF immer wieder auf die Straße, um der Politik zu sagen, dass es
so nicht weitergehe. Die große Teilnahme am Festival in Xanten und damit am Klimaschutz-Protest
sei ein ›starkes Zeichen›, sagte Krohn.'
inference: true
---
# SetFit with deutsche-telekom/gbert-large-paraphrase-cosine
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [deutsche-telekom/gbert-large-paraphrase-cosine](https://huggingface.co/deutsche-telekom/gbert-large-paraphrase-cosine) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
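
For reference, a minimal sketch of how such a model is typically trained with the `setfit` library (the dataset and column contents here are hypothetical, not the authors' actual training script):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy dataset; the real training data is not published with this card.
train_ds = Dataset.from_dict({
    "text": ["Beispieltext A", "Beispieltext B"],
    "label": ["supportive", "opposed"],
})

model = SetFitModel.from_pretrained("deutsche-telekom/gbert-large-paraphrase-cosine")
args = TrainingArguments(batch_size=8, num_epochs=3)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # 1) contrastive fine-tuning, then 2) classification head fitting
```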
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [deutsche-telekom/gbert-large-paraphrase-cosine](https://huggingface.co/deutsche-telekom/gbert-large-paraphrase-cosine)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| neutral | <ul><li>'Heizungsgesetz: Dialog zwischen Ministerien zur Förderung von Wärmepumpen\n\nIn einer gemeinsamen Erklärung haben das Bundeswirtschaftsministerium und das zuständige Klimaministerium ihren Dialog über die flächendeckende Einführung von Wärmepumpen fortgesetzt. Ziel ist es, einen Konsens zu finden, der den Übergang zu nachhaltigeren Heizsystemen vorantreibt. Die Initiativen zielen darauf ab, die Energieeffizienz zu steigern und den Klimaschutz zu fördern, ohne spezifische Details oder Zeitpläne zu nennen.'</li><li>'Die Einführung eines Heizungsgesetzes zur Förderung der Wärmepumpen-Nutzung ist ein viel diskutierter Schritt hin zu mehr Nachhaltigkeit im Wohnungsbau. Kritiker argumentieren jedoch, dass finanzielle Hürden für den Austausch älterer Heizsysteme bestehende Haushalte belasten könnten. Vergleichsportale für Heizöl zeigen aktuell große Preisunterschiede, was die Kosten für Verbraucher weiter ins Fokus rückt.'</li><li>'Die Ampel-Koalition diskutiert über eine flächendeckende Einführung von Wärmepumpen, bekannt als "Heizungsgesetz". Ein genauer Zeitplan ist noch unklar, da derzeit Verhandlungen zwischen den Fraktionen laufen. Die Initiative zielt darauf ab, die Energieeffizienz zu steigern und den Übergang zu erneuerbaren Energien im Heizsektor voranzutreiben.'</li></ul> |
| opposed | <ul><li>'Skeptische Reaktionen auf Heizungsgesetz: Kritiker sehen Nachbesserung als unzureichend\n\nDer Vorschlag zur flächendeckenden Einführung von Wärmepumpen stößt auf geteilte Meinungen. Während Befürworter die Energiewende vorantreiben wollen, zeigt sich Michael Kruse (FDP) skeptisch und spricht von einer "Nachbesserung durch die Hintertür". Er fordert eine umfassendere Überarbeitung des Gesetzes, um die Belange der Verbraucher und Wirtschaft besser zu berücksichtigen.'</li><li>'Skepsis gegenüber "Heizungsgesetz": Kritiker warnen vor zu schnellem Umstritt\n\nDer geplante Einsatz von Wärmepumpen als alternative Heizungslösung weckt gemischte Gefühle. FDP-Politiker Sami Musa kritisiert den Vorstoß scharf und warnt vor unerwünschten Konsequenzen. "Das Heizungsgesetz könnte am Ende zu einem teuren und umweltschädlichen Kraftakt führen", so Musa. Er bezieht sich auf mögliche finanzielle Belastungen für Hausbesitzer und die Frage, ob ein vollständiger Wechsel von Öl- und Gasheizungen tatsächlich nachhaltig ist.'</li><li>'Skeptische Reaktionen auf das Heizungsgesetz: Unternehmer sehen dunkle Zeiten kommen\n\nIn der deutschen Energiebranche wächst die Skepsis gegenüber den jüngsten Gesetzesinitiativen zur flächendeckenden Einführung von Wärmepumpen. Ein betroffener Unternehmer warnt vor den Folgen: "Das Heizungsgesetz und die aktuelle Förderpolitik könnten zu Werkschließungen und Entlassungen führen, besonders in Zeiten wirtschaftlicher Unsicherheit." Er kritisiert das Chaos und die widersprüchlichen Signale aus Berlin, die seiner Meinung nach Unternehmen in eine schwierige Lage bringen.'</li></ul> |
| supportive | <ul><li>'Die Ampelkoalition hat kurz vor den Sommerferien einen Durchbruch bei der Wärmepumpenpflicht erzielt. Das Heizungsgesetz soll Klimaschutz und soziale Verträglichkeit verbinden, wie SPD-Vizefraktionschefin Verena Hubertz betont. Dennoch bleibt ein bitterer Nachgeschmack: Die zähe Debatte zeigt die Herausforderungen einer koalitionären Einigung.'</li><li>'Obwohl die flächendeckende Einführung von Wärmepumpen durch das geplante "Heizungsgesetz" auf ein dringend nötiges Umdenken hinweist, weckt die Initiative gemischte Gefühle. Während SPD-Verhandler Matthias Miersch von einer Förderung klimafreundlicher Heizungstechnologien spricht, warnen Kritiker vor einer möglichen Belastung für Verbraucher und Mittelstand durch höhere Installationskosten. Ein ausgewogener Ansatz ist entscheidend, um eine effektive Energiewende zu gewährleisten.'</li><li>'Obwohl das geplante "Heizungsgesetz" auf gemischte Reaktionen stößt, ist es ein mutiger Schritt in Richtung Energiewende. Kritiker monieren mögliche finanzielle Belastungen für Hausbesitzer, doch Grünen-Fraktionsvorsitzende Katharina Dröge betont die langfristigen Vorteile für den Klimaschutz und die Energieunabhängigkeit. Der Entwurf zielt darauf ab, die Installation von Wärmepumpen zu fördern und könnte Deutschland bei der Sanierung seiner Gebäudeflotte voranbringen.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("cbpuschmann/klimacoder2_v0.4")
# Run inference
preds = model("Die Forderungen sind landesweit die gleichen. Es geht um die Wiedereinführung eines 9-Euro-Tickets und ein Tempolimit von 100 km/h auf den Autobahnen. Außerdem fordern wir die Einführung eines Gesellschaftsrats. Dieser soll Maßnahmen erarbeiten, wie Deutschland bis 2030 emissionsfrei wird. Die Lösungsansätze sollen von der Bundesregierung anerkannt und in der Politik umgesetzt werden.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 24 | 61.6794 | 191 |
| Label | Training Sample Count |
|:-----------|:----------------------|
| supportive | 210 |
| opposed | 210 |
| neutral | 210 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.2556 | - |
| 0.0060 | 50 | 0.2575 | - |
| 0.0121 | 100 | 0.2404 | - |
| 0.0181 | 150 | 0.2165 | - |
| 0.0242 | 200 | 0.1432 | - |
| 0.0302 | 250 | 0.0634 | - |
| 0.0363 | 300 | 0.0156 | - |
| 0.0423 | 350 | 0.004 | - |
| 0.0484 | 400 | 0.001 | - |
| 0.0544 | 450 | 0.0005 | - |
| 0.0605 | 500 | 0.0004 | - |
| 0.0665 | 550 | 0.0003 | - |
| 0.0726 | 600 | 0.0002 | - |
| 0.0786 | 650 | 0.0002 | - |
| 0.0847 | 700 | 0.0001 | - |
| 0.0907 | 750 | 0.0012 | - |
| 0.0967 | 800 | 0.0024 | - |
| 0.1028 | 850 | 0.0018 | - |
| 0.1088 | 900 | 0.0001 | - |
| 0.1149 | 950 | 0.0001 | - |
| 0.1209 | 1000 | 0.0001 | - |
| 0.1270 | 1050 | 0.0001 | - |
| 0.1330 | 1100 | 0.0001 | - |
| 0.1391 | 1150 | 0.0 | - |
| 0.1451 | 1200 | 0.0 | - |
| 0.1512 | 1250 | 0.0 | - |
| 0.1572 | 1300 | 0.0 | - |
| 0.1633 | 1350 | 0.0 | - |
| 0.1693 | 1400 | 0.0 | - |
| 0.1754 | 1450 | 0.0 | - |
| 0.1814 | 1500 | 0.0 | - |
| 0.1874 | 1550 | 0.0 | - |
| 0.1935 | 1600 | 0.0 | - |
| 0.1995 | 1650 | 0.0 | - |
| 0.2056 | 1700 | 0.0 | - |
| 0.2116 | 1750 | 0.0 | - |
| 0.2177 | 1800 | 0.0 | - |
| 0.2237 | 1850 | 0.0 | - |
| 0.2298 | 1900 | 0.0 | - |
| 0.2358 | 1950 | 0.0 | - |
| 0.2419 | 2000 | 0.0 | - |
| 0.2479 | 2050 | 0.0 | - |
| 0.2540 | 2100 | 0.0 | - |
| 0.2600 | 2150 | 0.0 | - |
| 0.2661 | 2200 | 0.0 | - |
| 0.2721 | 2250 | 0.0 | - |
| 0.2781 | 2300 | 0.0 | - |
| 0.2842 | 2350 | 0.0 | - |
| 0.2902 | 2400 | 0.0 | - |
| 0.2963 | 2450 | 0.0 | - |
| 0.3023 | 2500 | 0.0 | - |
| 0.3084 | 2550 | 0.0 | - |
| 0.3144 | 2600 | 0.0 | - |
| 0.3205 | 2650 | 0.0 | - |
| 0.3265 | 2700 | 0.0 | - |
| 0.3326 | 2750 | 0.0 | - |
| 0.3386 | 2800 | 0.0 | - |
| 0.3447 | 2850 | 0.0 | - |
| 0.3507 | 2900 | 0.0 | - |
| 0.3568 | 2950 | 0.0 | - |
| 0.3628 | 3000 | 0.0 | - |
| 0.3688 | 3050 | 0.0 | - |
| 0.3749 | 3100 | 0.0 | - |
| 0.3809 | 3150 | 0.0 | - |
| 0.3870 | 3200 | 0.0 | - |
| 0.3930 | 3250 | 0.0 | - |
| 0.3991 | 3300 | 0.0 | - |
| 0.4051 | 3350 | 0.0 | - |
| 0.4112 | 3400 | 0.0 | - |
| 0.4172 | 3450 | 0.0 | - |
| 0.4233 | 3500 | 0.0 | - |
| 0.4293 | 3550 | 0.0 | - |
| 0.4354 | 3600 | 0.0 | - |
| 0.4414 | 3650 | 0.0 | - |
| 0.4475 | 3700 | 0.0 | - |
| 0.4535 | 3750 | 0.0 | - |
| 0.4595 | 3800 | 0.0 | - |
| 0.4656 | 3850 | 0.0 | - |
| 0.4716 | 3900 | 0.0 | - |
| 0.4777 | 3950 | 0.0 | - |
| 0.4837 | 4000 | 0.0 | - |
| 0.4898 | 4050 | 0.0 | - |
| 0.4958 | 4100 | 0.0 | - |
| 0.5019 | 4150 | 0.0 | - |
| 0.5079 | 4200 | 0.0 | - |
| 0.5140 | 4250 | 0.0 | - |
| 0.5200 | 4300 | 0.0 | - |
| 0.5261 | 4350 | 0.0 | - |
| 0.5321 | 4400 | 0.0 | - |
| 0.5382 | 4450 | 0.0 | - |
| 0.5442 | 4500 | 0.0 | - |
| 0.5502 | 4550 | 0.0187 | - |
| 0.5563 | 4600 | 0.1473 | - |
| 0.5623 | 4650 | 0.1667 | - |
| 0.5684 | 4700 | 0.0401 | - |
| 0.5744 | 4750 | 0.0112 | - |
| 0.5805 | 4800 | 0.0074 | - |
| 0.5865 | 4850 | 0.0021 | - |
| 0.5926 | 4900 | 0.0017 | - |
| 0.5986 | 4950 | 0.0 | - |
| 0.6047 | 5000 | 0.0 | - |
| 0.6107 | 5050 | 0.0 | - |
| 0.6168 | 5100 | 0.0 | - |
| 0.6228 | 5150 | 0.0 | - |
| 0.6289 | 5200 | 0.0 | - |
| 0.6349 | 5250 | 0.0 | - |
| 0.6409 | 5300 | 0.0 | - |
| 0.6470 | 5350 | 0.0 | - |
| 0.6530 | 5400 | 0.0 | - |
| 0.6591 | 5450 | 0.0 | - |
| 0.6651 | 5500 | 0.0 | - |
| 0.6712 | 5550 | 0.0 | - |
| 0.6772 | 5600 | 0.0 | - |
| 0.6833 | 5650 | 0.0 | - |
| 0.6893 | 5700 | 0.0 | - |
| 0.6954 | 5750 | 0.0 | - |
| 0.7014 | 5800 | 0.0 | - |
| 0.7075 | 5850 | 0.0 | - |
| 0.7135 | 5900 | 0.0 | - |
| 0.7196 | 5950 | 0.0 | - |
| 0.7256 | 6000 | 0.0 | - |
| 0.7316 | 6050 | 0.0 | - |
| 0.7377 | 6100 | 0.0 | - |
| 0.7437 | 6150 | 0.0 | - |
| 0.7498 | 6200 | 0.0 | - |
| 0.7558 | 6250 | 0.0 | - |
| 0.7619 | 6300 | 0.0 | - |
| 0.7679 | 6350 | 0.0 | - |
| 0.7740 | 6400 | 0.0 | - |
| 0.7800 | 6450 | 0.0 | - |
| 0.7861 | 6500 | 0.0 | - |
| 0.7921 | 6550 | 0.0 | - |
| 0.7982 | 6600 | 0.0 | - |
| 0.8042 | 6650 | 0.0 | - |
| 0.8103 | 6700 | 0.0 | - |
| 0.8163 | 6750 | 0.0 | - |
| 0.8223 | 6800 | 0.0 | - |
| 0.8284 | 6850 | 0.0 | - |
| 0.8344 | 6900 | 0.0 | - |
| 0.8405 | 6950 | 0.0 | - |
| 0.8465 | 7000 | 0.0 | - |
| 0.8526 | 7050 | 0.0 | - |
| 0.8586 | 7100 | 0.0 | - |
| 0.8647 | 7150 | 0.0 | - |
| 0.8707 | 7200 | 0.0 | - |
| 0.8768 | 7250 | 0.0 | - |
| 0.8828 | 7300 | 0.0 | - |
| 0.8889 | 7350 | 0.0 | - |
| 0.8949 | 7400 | 0.0 | - |
| 0.9010 | 7450 | 0.0 | - |
| 0.9070 | 7500 | 0.0 | - |
| 0.9130 | 7550 | 0.0 | - |
| 0.9191 | 7600 | 0.0 | - |
| 0.9251 | 7650 | 0.0 | - |
| 0.9312 | 7700 | 0.0 | - |
| 0.9372 | 7750 | 0.0 | - |
| 0.9433 | 7800 | 0.0 | - |
| 0.9493 | 7850 | 0.0 | - |
| 0.9554 | 7900 | 0.0 | - |
| 0.9614 | 7950 | 0.0 | - |
| 0.9675 | 8000 | 0.0 | - |
| 0.9735 | 8050 | 0.0 | - |
| 0.9796 | 8100 | 0.0 | - |
| 0.9856 | 8150 | 0.0 | - |
| 0.9917 | 8200 | 0.0 | - |
| 0.9977 | 8250 | 0.0 | - |
| 1.0 | 8269 | - | 0.3465 |
| 1.0037 | 8300 | 0.0 | - |
| 1.0098 | 8350 | 0.0 | - |
| 1.0158 | 8400 | 0.0 | - |
| 1.0219 | 8450 | 0.0 | - |
| 1.0279 | 8500 | 0.0 | - |
| 1.0340 | 8550 | 0.0 | - |
| 1.0400 | 8600 | 0.0 | - |
| 1.0461 | 8650 | 0.0 | - |
| 1.0521 | 8700 | 0.0 | - |
| 1.0582 | 8750 | 0.0 | - |
| 1.0642 | 8800 | 0.0 | - |
| 1.0703 | 8850 | 0.0 | - |
| 1.0763 | 8900 | 0.0 | - |
| 1.0824 | 8950 | 0.0 | - |
| 1.0884 | 9000 | 0.0 | - |
| 1.0944 | 9050 | 0.0 | - |
| 1.1005 | 9100 | 0.0 | - |
| 1.1065 | 9150 | 0.0 | - |
| 1.1126 | 9200 | 0.0 | - |
| 1.1186 | 9250 | 0.0 | - |
| 1.1247 | 9300 | 0.0 | - |
| 1.1307 | 9350 | 0.0 | - |
| 1.1368 | 9400 | 0.0 | - |
| 1.1428 | 9450 | 0.0 | - |
| 1.1489 | 9500 | 0.0 | - |
| 1.1549 | 9550 | 0.0 | - |
| 1.1610 | 9600 | 0.0 | - |
| 1.1670 | 9650 | 0.0 | - |
| 1.1731 | 9700 | 0.0 | - |
| 1.1791 | 9750 | 0.0 | - |
| 1.1851 | 9800 | 0.0 | - |
| 1.1912 | 9850 | 0.0 | - |
| 1.1972 | 9900 | 0.0 | - |
| 1.2033 | 9950 | 0.0 | - |
| 1.2093 | 10000 | 0.0 | - |
| 1.2154 | 10050 | 0.0 | - |
| 1.2214 | 10100 | 0.0 | - |
| 1.2275 | 10150 | 0.0 | - |
| 1.2335 | 10200 | 0.0 | - |
| 1.2396 | 10250 | 0.0 | - |
| 1.2456 | 10300 | 0.0 | - |
| 1.2517 | 10350 | 0.0 | - |
| 1.2577 | 10400 | 0.0 | - |
| 1.2638 | 10450 | 0.0 | - |
| 1.2698 | 10500 | 0.0 | - |
| 1.2758 | 10550 | 0.0 | - |
| 1.2819 | 10600 | 0.0 | - |
| 1.2879 | 10650 | 0.0 | - |
| 1.2940 | 10700 | 0.0 | - |
| 1.3000 | 10750 | 0.0 | - |
| 1.3061 | 10800 | 0.0 | - |
| 1.3121 | 10850 | 0.0 | - |
| 1.3182 | 10900 | 0.0 | - |
| 1.3242 | 10950 | 0.0 | - |
| 1.3303 | 11000 | 0.0 | - |
| 1.3363 | 11050 | 0.0 | - |
| 1.3424 | 11100 | 0.0 | - |
| 1.3484 | 11150 | 0.0 | - |
| 1.3545 | 11200 | 0.0 | - |
| 1.3605 | 11250 | 0.0 | - |
| 1.3665 | 11300 | 0.0 | - |
| 1.3726 | 11350 | 0.0137 | - |
| 1.3786 | 11400 | 0.0211 | - |
| 1.3847 | 11450 | 0.0047 | - |
| 1.3907 | 11500 | 0.0048 | - |
| 1.3968 | 11550 | 0.0008 | - |
| 1.4028 | 11600 | 0.0 | - |
| 1.4089 | 11650 | 0.0 | - |
| 1.4149 | 11700 | 0.0 | - |
| 1.4210 | 11750 | 0.0 | - |
| 1.4270 | 11800 | 0.0 | - |
| 1.4331 | 11850 | 0.0 | - |
| 1.4391 | 11900 | 0.0 | - |
| 1.4452 | 11950 | 0.0 | - |
| 1.4512 | 12000 | 0.0 | - |
| 1.4572 | 12050 | 0.0 | - |
| 1.4633 | 12100 | 0.0 | - |
| 1.4693 | 12150 | 0.0 | - |
| 1.4754 | 12200 | 0.0 | - |
| 1.4814 | 12250 | 0.0 | - |
| 1.4875 | 12300 | 0.0 | - |
| 1.4935 | 12350 | 0.0 | - |
| 1.4996 | 12400 | 0.0 | - |
| 1.5056 | 12450 | 0.0 | - |
| 1.5117 | 12500 | 0.0 | - |
| 1.5177 | 12550 | 0.0 | - |
| 1.5238 | 12600 | 0.0 | - |
| 1.5298 | 12650 | 0.0 | - |
| 1.5359 | 12700 | 0.0 | - |
| 1.5419 | 12750 | 0.0 | - |
| 1.5480 | 12800 | 0.0 | - |
| 1.5540 | 12850 | 0.0 | - |
| 1.5600 | 12900 | 0.0 | - |
| 1.5661 | 12950 | 0.0 | - |
| 1.5721 | 13000 | 0.0 | - |
| 1.5782 | 13050 | 0.0 | - |
| 1.5842 | 13100 | 0.0 | - |
| 1.5903 | 13150 | 0.0 | - |
| 1.5963 | 13200 | 0.0 | - |
| 1.6024 | 13250 | 0.0 | - |
| 1.6084 | 13300 | 0.0 | - |
| 1.6145 | 13350 | 0.0 | - |
| 1.6205 | 13400 | 0.0 | - |
| 1.6266 | 13450 | 0.0 | - |
| 1.6326 | 13500 | 0.0 | - |
| 1.6387 | 13550 | 0.0 | - |
| 1.6447 | 13600 | 0.0 | - |
| 1.6507 | 13650 | 0.0 | - |
| 1.6568 | 13700 | 0.0 | - |
| 1.6628 | 13750 | 0.0 | - |
| 1.6689 | 13800 | 0.0 | - |
| 1.6749 | 13850 | 0.0 | - |
| 1.6810 | 13900 | 0.0 | - |
| 1.6870 | 13950 | 0.0 | - |
| 1.6931 | 14000 | 0.0 | - |
| 1.6991 | 14050 | 0.0 | - |
| 1.7052 | 14100 | 0.0 | - |
| 1.7112 | 14150 | 0.0 | - |
| 1.7173 | 14200 | 0.0 | - |
| 1.7233 | 14250 | 0.0 | - |
| 1.7294 | 14300 | 0.0 | - |
| 1.7354 | 14350 | 0.0 | - |
| 1.7414 | 14400 | 0.0 | - |
| 1.7475 | 14450 | 0.0 | - |
| 1.7535 | 14500 | 0.0 | - |
| 1.7596 | 14550 | 0.0 | - |
| 1.7656 | 14600 | 0.0 | - |
| 1.7717 | 14650 | 0.0 | - |
| 1.7777 | 14700 | 0.0 | - |
| 1.7838 | 14750 | 0.0 | - |
| 1.7898 | 14800 | 0.0 | - |
| 1.7959 | 14850 | 0.0 | - |
| 1.8019 | 14900 | 0.0 | - |
| 1.8080 | 14950 | 0.0 | - |
| 1.8140 | 15000 | 0.0 | - |
| 1.8201 | 15050 | 0.0 | - |
| 1.8261 | 15100 | 0.0 | - |
| 1.8321 | 15150 | 0.0 | - |
| 1.8382 | 15200 | 0.0 | - |
| 1.8442 | 15250 | 0.0 | - |
| 1.8503 | 15300 | 0.0 | - |
| 1.8563 | 15350 | 0.0 | - |
| 1.8624 | 15400 | 0.0 | - |
| 1.8684 | 15450 | 0.0 | - |
| 1.8745 | 15500 | 0.0 | - |
| 1.8805 | 15550 | 0.0 | - |
| 1.8866 | 15600 | 0.0 | - |
| 1.8926 | 15650 | 0.0 | - |
| 1.8987 | 15700 | 0.0 | - |
| 1.9047 | 15750 | 0.0 | - |
| 1.9108 | 15800 | 0.0 | - |
| 1.9168 | 15850 | 0.0 | - |
| 1.9228 | 15900 | 0.0 | - |
| 1.9289 | 15950 | 0.0 | - |
| 1.9349 | 16000 | 0.0 | - |
| 1.9410 | 16050 | 0.0 | - |
| 1.9470 | 16100 | 0.0 | - |
| 1.9531 | 16150 | 0.0 | - |
| 1.9591 | 16200 | 0.0 | - |
| 1.9652 | 16250 | 0.0 | - |
| 1.9712 | 16300 | 0.0 | - |
| 1.9773 | 16350 | 0.0 | - |
| 1.9833 | 16400 | 0.0 | - |
| 1.9894 | 16450 | 0.0 | - |
| 1.9954 | 16500 | 0.0 | - |
| 2.0 | 16538 | - | 0.3646 |
| 2.0015 | 16550 | 0.0 | - |
| 2.0075 | 16600 | 0.0 | - |
| 2.0135 | 16650 | 0.0 | - |
| 2.0196 | 16700 | 0.0 | - |
| 2.0256 | 16750 | 0.0 | - |
| 2.0317 | 16800 | 0.0 | - |
| 2.0377 | 16850 | 0.0 | - |
| 2.0438 | 16900 | 0.0 | - |
| 2.0498 | 16950 | 0.0 | - |
| 2.0559 | 17000 | 0.0 | - |
| 2.0619 | 17050 | 0.0 | - |
| 2.0680 | 17100 | 0.0 | - |
| 2.0740 | 17150 | 0.0 | - |
| 2.0801 | 17200 | 0.0 | - |
| 2.0861 | 17250 | 0.0 | - |
| 2.0922 | 17300 | 0.0 | - |
| 2.0982 | 17350 | 0.0 | - |
| 2.1042 | 17400 | 0.0 | - |
| 2.1103 | 17450 | 0.0 | - |
| 2.1163 | 17500 | 0.0 | - |
| 2.1224 | 17550 | 0.0 | - |
| 2.1284 | 17600 | 0.0 | - |
| 2.1345 | 17650 | 0.0 | - |
| 2.1405 | 17700 | 0.0 | - |
| 2.1466 | 17750 | 0.0 | - |
| 2.1526 | 17800 | 0.0 | - |
| 2.1587 | 17850 | 0.0 | - |
| 2.1647 | 17900 | 0.0 | - |
| 2.1708 | 17950 | 0.0 | - |
| 2.1768 | 18000 | 0.0 | - |
| 2.1829 | 18050 | 0.0 | - |
| 2.1889 | 18100 | 0.0 | - |
| 2.1949 | 18150 | 0.0 | - |
| 2.2010 | 18200 | 0.0 | - |
| 2.2070 | 18250 | 0.0 | - |
| 2.2131 | 18300 | 0.0 | - |
| 2.2191 | 18350 | 0.0 | - |
| 2.2252 | 18400 | 0.0 | - |
| 2.2312 | 18450 | 0.0 | - |
| 2.2373 | 18500 | 0.0 | - |
| 2.2433 | 18550 | 0.0 | - |
| 2.2494 | 18600 | 0.0 | - |
| 2.2554 | 18650 | 0.0 | - |
| 2.2615 | 18700 | 0.0 | - |
| 2.2675 | 18750 | 0.0 | - |
| 2.2736 | 18800 | 0.0 | - |
| 2.2796 | 18850 | 0.0 | - |
| 2.2856 | 18900 | 0.0 | - |
| 2.2917 | 18950 | 0.0 | - |
| 2.2977 | 19000 | 0.0 | - |
| 2.3038 | 19050 | 0.0 | - |
| 2.3098 | 19100 | 0.0 | - |
| 2.3159 | 19150 | 0.0 | - |
| 2.3219 | 19200 | 0.0 | - |
| 2.3280 | 19250 | 0.0 | - |
| 2.3340 | 19300 | 0.0 | - |
| 2.3401 | 19350 | 0.0 | - |
| 2.3461 | 19400 | 0.0 | - |
| 2.3522 | 19450 | 0.0 | - |
| 2.3582 | 19500 | 0.0 | - |
| 2.3643 | 19550 | 0.0 | - |
| 2.3703 | 19600 | 0.0 | - |
| 2.3763 | 19650 | 0.0 | - |
| 2.3824 | 19700 | 0.0 | - |
| 2.3884 | 19750 | 0.0 | - |
| 2.3945 | 19800 | 0.0 | - |
| 2.4005 | 19850 | 0.0 | - |
| 2.4066 | 19900 | 0.0 | - |
| 2.4126 | 19950 | 0.0 | - |
| 2.4187 | 20000 | 0.0 | - |
| 2.4247 | 20050 | 0.0 | - |
| 2.4308 | 20100 | 0.0 | - |
| 2.4368 | 20150 | 0.0 | - |
| 2.4429 | 20200 | 0.0 | - |
| 2.4489 | 20250 | 0.0 | - |
| 2.4550 | 20300 | 0.0 | - |
| 2.4610 | 20350 | 0.0 | - |
| 2.4670 | 20400 | 0.0 | - |
| 2.4731 | 20450 | 0.0 | - |
| 2.4791 | 20500 | 0.0 | - |
| 2.4852 | 20550 | 0.0 | - |
| 2.4912 | 20600 | 0.0 | - |
| 2.4973 | 20650 | 0.0 | - |
| 2.5033 | 20700 | 0.0 | - |
| 2.5094 | 20750 | 0.0 | - |
| 2.5154 | 20800 | 0.0 | - |
| 2.5215 | 20850 | 0.0 | - |
| 2.5275 | 20900 | 0.0 | - |
| 2.5336 | 20950 | 0.0 | - |
| 2.5396 | 21000 | 0.0 | - |
| 2.5457 | 21050 | 0.0 | - |
| 2.5517 | 21100 | 0.0 | - |
| 2.5577 | 21150 | 0.0 | - |
| 2.5638 | 21200 | 0.0 | - |
| 2.5698 | 21250 | 0.0 | - |
| 2.5759 | 21300 | 0.0 | - |
| 2.5819 | 21350 | 0.0 | - |
| 2.5880 | 21400 | 0.0 | - |
| 2.5940 | 21450 | 0.0 | - |
| 2.6001 | 21500 | 0.0 | - |
| 2.6061 | 21550 | 0.0 | - |
| 2.6122 | 21600 | 0.0 | - |
| 2.6182 | 21650 | 0.0 | - |
| 2.6243 | 21700 | 0.0 | - |
| 2.6303 | 21750 | 0.0 | - |
| 2.6364 | 21800 | 0.0 | - |
| 2.6424 | 21850 | 0.0 | - |
| 2.6484 | 21900 | 0.0 | - |
| 2.6545 | 21950 | 0.0 | - |
| 2.6605 | 22000 | 0.0 | - |
| 2.6666 | 22050 | 0.0 | - |
| 2.6726 | 22100 | 0.0 | - |
| 2.6787 | 22150 | 0.0 | - |
| 2.6847 | 22200 | 0.0 | - |
| 2.6908 | 22250 | 0.0 | - |
| 2.6968 | 22300 | 0.0 | - |
| 2.7029 | 22350 | 0.0 | - |
| 2.7089 | 22400 | 0.0 | - |
| 2.7150 | 22450 | 0.0 | - |
| 2.7210 | 22500 | 0.0 | - |
| 2.7271 | 22550 | 0.0 | - |
| 2.7331 | 22600 | 0.0 | - |
| 2.7391 | 22650 | 0.0 | - |
| 2.7452 | 22700 | 0.0 | - |
| 2.7512 | 22750 | 0.0 | - |
| 2.7573 | 22800 | 0.0 | - |
| 2.7633 | 22850 | 0.0 | - |
| 2.7694 | 22900 | 0.0 | - |
| 2.7754 | 22950 | 0.0 | - |
| 2.7815 | 23000 | 0.0 | - |
| 2.7875 | 23050 | 0.0 | - |
| 2.7936 | 23100 | 0.0 | - |
| 2.7996 | 23150 | 0.0 | - |
| 2.8057 | 23200 | 0.0 | - |
| 2.8117 | 23250 | 0.0 | - |
| 2.8178 | 23300 | 0.0 | - |
| 2.8238 | 23350 | 0.0 | - |
| 2.8298 | 23400 | 0.0 | - |
| 2.8359 | 23450 | 0.0 | - |
| 2.8419 | 23500 | 0.0 | - |
| 2.8480 | 23550 | 0.0 | - |
| 2.8540 | 23600 | 0.0 | - |
| 2.8601 | 23650 | 0.0 | - |
| 2.8661 | 23700 | 0.0 | - |
| 2.8722 | 23750 | 0.0 | - |
| 2.8782 | 23800 | 0.0 | - |
| 2.8843 | 23850 | 0.0 | - |
| 2.8903 | 23900 | 0.0 | - |
| 2.8964 | 23950 | 0.0 | - |
| 2.9024 | 24000 | 0.0 | - |
| 2.9085 | 24050 | 0.0 | - |
| 2.9145 | 24100 | 0.0 | - |
| 2.9205 | 24150 | 0.0 | - |
| 2.9266 | 24200 | 0.0 | - |
| 2.9326 | 24250 | 0.0 | - |
| 2.9387 | 24300 | 0.0 | - |
| 2.9447 | 24350 | 0.0 | - |
| 2.9508 | 24400 | 0.0 | - |
| 2.9568 | 24450 | 0.0 | - |
| 2.9629 | 24500 | 0.0 | - |
| 2.9689 | 24550 | 0.0 | - |
| 2.9750 | 24600 | 0.0 | - |
| 2.9810 | 24650 | 0.0 | - |
| 2.9871 | 24700 | 0.0 | - |
| 2.9931 | 24750 | 0.0 | - |
| 2.9992 | 24800 | 0.0 | - |
| 3.0 | 24807 | - | 0.3517 |
### Framework Versions
- Python: 3.11.11
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.4.1.post300
- Datasets: 3.4.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
jkawamoto/arxiv-summarization-t5-base-ct2 | jkawamoto | summarization | [
"transformers",
"ctranslate2",
"summarization",
"en",
"base_model:farleyknight/arxiv-summarization-t5-base-2022-09-21",
"base_model:quantized:farleyknight/arxiv-summarization-t5-base-2022-09-21",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,717,227,571,000 | 2024-10-03T00:34:22 | 16 | 0 | ---
base_model: farleyknight/arxiv-summarization-t5-base-2022-09-21
language:
- en
license: apache-2.0
tags:
- ctranslate2
- summarization
base_model_relation: quantized
---
# arxiv-summarization-t5-base-ct2
This is a version of [farleyknight/arxiv-summarization-t5-base-2022-09-21](https://huggingface.co/farleyknight/arxiv-summarization-t5-base-2022-09-21) converted for use with [CTranslate2](https://github.com/OpenNMT/CTranslate2).
The conversion was performed using the following command:
```bash
ct2-transformers-converter --model farleyknight/arxiv-summarization-t5-base-2022-09-21 \
--output_dir arxiv-summarization-t5-base-ct2 \
--copy_files special_tokens_map.json tokenizer.json tokenizer_config.json
```
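## Usage
A minimal inference sketch with CTranslate2 is shown below. Note that the `summarize: ` task prefix is an assumption carried over from common T5 summarization fine-tunes; drop or adjust it if the base model expects different input.
```python
import ctranslate2
import transformers

# Load the converted model directory and the tokenizer files copied into it.
translator = ctranslate2.Translator("arxiv-summarization-t5-base-ct2")
tokenizer = transformers.AutoTokenizer.from_pretrained("arxiv-summarization-t5-base-ct2")

# The "summarize: " prefix is an assumption, as is common for T5-style models.
input_text = "summarize: " + "We propose a new method for ..."
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))

results = translator.translate_batch([input_tokens], max_decoding_length=128)
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```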
## License
This adaptation is based on [farleyknight/arxiv-summarization-t5-base-2022-09-21](https://huggingface.co/farleyknight/arxiv-summarization-t5-base-2022-09-21), originally provided under the Apache 2.0 License. Modifications were made for compatibility with CTranslate2. Despite these modifications, this adapted version continues to be distributed under the Apache 2.0 License, honoring the original licensing terms. | [
"SUMMARIZATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-en-hy | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"hy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:29:55 | 956 | 1 | ---
language:
- en
- hy
license: apache-2.0
tags:
- translation
---
### eng-hye
* source group: English
* target group: Armenian
* OPUS readme: [eng-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): hye
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.eval.txt)
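## Usage
A minimal usage sketch with the transformers Marian classes (generation settings are left at their defaults):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-hy"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a batch of English sentences into Armenian.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```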
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.hye | 16.6 | 0.404 |
### System Info:
- hf_name: eng-hye
- source_languages: eng
- target_languages: hye
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'hy']
- src_constituents: {'eng'}
- tgt_constituents: {'hye', 'hye_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: hye
- short_pair: en-hy
- chrF2_score: 0.40399999999999997
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 5115.0
- src_name: English
- tgt_name: Armenian
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: hy
- prefer_old: False
- long_pair: eng-hye
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
"TRANSLATION"
] | Non_BioNLP |
txt22/distilbert-base-uncased-finetuned-emotion | txt22 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,693,076,806,000 | 2023-08-27T01:13:06 | 14 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.925
name: Accuracy
- type: f1
value: 0.9247520077961444
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.925
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
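A minimal usage sketch (the input is hypothetical, and the emitted label names depend on the `id2label` mapping saved with the model):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="txt22/distilbert-base-uncased-finetuned-emotion",
)
# Returns the predicted emotion label and its confidence score.
print(classifier("I can't believe how well this turned out!"))
```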
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8388 | 1.0 | 250 | 0.3143 | 0.906 | 0.9026 |
| 0.2482 | 2.0 | 500 | 0.2183 | 0.925 | 0.9248 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
TmbAI/PDF-Summarizer | TmbAI | null | [
"license:unlicense",
"region:us"
] | 1,720,889,861,000 | 2024-07-13T17:00:44 | 0 | 0 | ---
license: unlicense
---
```python
from transformers import pipeline
import PyPDF2


def summarize_pdf(pdf_path):
    """Summarizes the content of a given PDF file.

    Args:
        pdf_path: The path to the PDF file to be summarized.

    Returns:
        A list of dictionaries containing the summary text.
    """
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    with open(pdf_path, 'rb') as pdf_file:
        pdf_reader = PyPDF2.PdfReader(pdf_file)
        text = ""
        for page_num in range(len(pdf_reader.pages)):
            page = pdf_reader.pages[page_num]
            text += page.extract_text()
    # BART can only attend to ~1024 tokens, so truncate longer documents
    # instead of erroring out on full-length PDFs.
    summary = summarizer(text, max_length=130, min_length=30, do_sample=False,
                         truncation=True)
    return summary


# Example usage:
pdf_path = "path/to/your/pdf.pdf"  # Replace with the actual PDF file path
summary = summarize_pdf(pdf_path)
print(summary)
```
| [
"SUMMARIZATION"
] | Non_BioNLP |
blockblockblock/Hermes-2-Pro-Llama-3-8B-bpw5-exl2 | blockblockblock | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:quantized:NousResearch/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | 1,714,725,271,000 | 2024-05-03T08:36:46 | 7 | 0 | ---
base_model: NousResearch/Meta-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
language:
- en
license: apache-2.0
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-8B
results: []
---
# Hermes 2 Pro - Llama-3 8B

## Model Description
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Example Outputs
### Ask for a structured JSON output:

### Write the plot for a story where anime became real life:

### Coding Assistance

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature JSON, as shown in this example.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
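As a minimal illustration of that parsing step (the helpers in the Hermes-Function-Calling repo linked above are more robust; this regex-based sketch is only a starting point):
```python
import json
import re

def extract_tool_calls(generation: str):
    # Grab every <tool_call>...</tool_call> span and parse the JSON inside it.
    pattern = r"<tool_call>\s*(.*?)\s*</tool_call>"
    return [json.loads(m) for m in re.findall(pattern, generation, re.DOTALL)]

sample = """<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call>"""
calls = extract_tool_calls(sample)
print(calls[0]["name"], calls[0]["arguments"])
```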
Once you parse the tool call, call the API to get the returned values, then pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, in which it should respond with **only** a JSON object that conforms to a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
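For example, a schema could be produced from a hypothetical pydantic model like this (pydantic v2 shown; v1 uses `.schema()` instead of `.model_json_schema()`):
```python
import json
from pydantic import BaseModel

class Character(BaseModel):
    name: str
    species: str
    power_level: int

# Paste this JSON into the {schema} slot of the system prompt above.
print(json.dumps(Character.model_json_schema(), indent=2))
```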
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5520|± |0.0145|
| | |acc_norm|0.5887|± |0.0144|
|arc_easy | 0|acc |0.8350|± |0.0076|
| | |acc_norm|0.8123|± |0.0080|
|boolq | 1|acc |0.8584|± |0.0061|
|hellaswag | 0|acc |0.6265|± |0.0048|
| | |acc_norm|0.8053|± |0.0040|
|openbookqa | 0|acc |0.3800|± |0.0217|
| | |acc_norm|0.4580|± |0.0223|
|piqa | 0|acc |0.8003|± |0.0093|
| | |acc_norm|0.8118|± |0.0091|
|winogrande | 0|acc |0.7490|± |0.0122|
```
Average: 72.62
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2520|± |0.0273|
| | |acc_norm|0.2559|± |0.0274|
|agieval_logiqa_en | 0|acc |0.3548|± |0.0188|
| | |acc_norm|0.3625|± |0.0189|
|agieval_lsat_ar | 0|acc |0.1826|± |0.0255|
| | |acc_norm|0.1913|± |0.0260|
|agieval_lsat_lr | 0|acc |0.5510|± |0.0220|
| | |acc_norm|0.5255|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6431|± |0.0293|
| | |acc_norm|0.6097|± |0.0298|
|agieval_sat_en | 0|acc |0.7330|± |0.0309|
| | |acc_norm|0.7039|± |0.0319|
|agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343|
| | |acc_norm|0.3689|± |0.0337|
|agieval_sat_math | 0|acc |0.3909|± |0.0330|
| | |acc_norm|0.3773|± |0.0328|
```
Average: 42.44
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± |0.0222|
|bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126|
|bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288|
```
Average: 43.55
## TruthfulQA:
```
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.410|± |0.0172|
| | |mc2 |0.578|± |0.0157|
```
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Pro-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Pro-Llama-3-8B,
  url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B},
title={Hermes-2-Pro-Llama-3-8B},
author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
| [
"TRANSLATION"
] | Non_BioNLP |
bwhite5311/NLP-sentiment-project-2001-samples | bwhite5311 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,668,807,943,000 | 2022-11-19T01:21:00 | 9 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
- precision
tags:
- generated_from_trainer
model-index:
- name: NLP-sentiment-project-2001-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- type: accuracy
value: 0.9998
name: Accuracy
- type: f1
value: 0.9998005186515061
name: F1
- type: precision
value: 0.9996011168727563
name: Precision
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-sentiment-project-2001-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0008
- Accuracy: 0.9998
- F1: 0.9998
- Precision: 0.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
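A minimal inference sketch (the review text is hypothetical, and label names depend on the `id2label` mapping saved with the model):
```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="bwhite5311/NLP-sentiment-project-2001-samples",
)
# Returns the predicted sentiment label and its confidence score.
print(sentiment("This movie was an absolute delight from start to finish."))
```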
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
tjasad/translation_slo_eng_opus-mt-sla-en_lora | tjasad | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-sla-en",
"base_model:adapter:Helsinki-NLP/opus-mt-sla-en",
"license:apache-2.0",
"region:us"
] | 1,716,310,530,000 | 2024-05-24T05:09:13 | 2 | 0 | ---
base_model: Helsinki-NLP/opus-mt-sla-en
library_name: peft
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: translation_slo_eng_opus-mt-sla-en_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation_slo_eng_opus-mt-sla-en_lora
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-sla-en](https://huggingface.co/Helsinki-NLP/opus-mt-sla-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7472
- Bleu: 34.2191
- Gen Len: 12.4026
## Model description
More information needed
## Intended uses & limitations
More information needed
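A minimal inference sketch that loads the LoRA adapter on top of the base model (the Slovenian example sentence is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "Helsinki-NLP/opus-mt-sla-en"
adapter_id = "tjasad/translation_slo_eng_opus-mt-sla-en_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

batch = tokenizer(["Dober dan, kako ste?"], return_tensors="pt")
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```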
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.8995 | 1.0 | 2500 | 1.7472 | 34.2191 | 12.4026 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 | [
"TRANSLATION"
] | Non_BioNLP |
fathyshalab/massive_social-roberta-large-v1-3-7 | fathyshalab | text-classification | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,676,013,284,000 | 2023-02-10T07:15:04 | 9 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/massive_social-roberta-large-v1-3-7
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-3-7")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gaudi/opus-mt-alv-en-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,174,841,000 | 2024-10-18T21:49:37 | 6 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-alv-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-alv-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-alv-en --output_dir ./ctranslate2/opus-mt-alv-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-alv-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-alv-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-alv-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-alv-en) by Helsinki-NLP.
| [
"TRANSLATION"
] | Non_BioNLP |
clulab/roberta-base-motivational-interviewing | clulab | text-classification | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"motivational-interviewing",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,684,862,098,000 | 2023-05-23T18:47:06 | 18 | 1 | ---
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- f1
pipeline_tag: text-classification
tags:
- motivational-interviewing
widget:
- text: I'm planning on having tuna, ground tuna, chopped celery, and chopped black
pepper, and half a apple.
example_title: change_talk_goal_talk_and_opportunities
---
# Model Card for roberta-base-motivational-interviewing
⚠ WARNING: This is a preliminary model that is still actively under development. ⚠
This is a [roBERTa-base](https://huggingface.co/roberta-base) model fine-tuned on a small dataset of conversations between health coaches and cancer survivors.
# How to Get Started with the Model
You can use this model directly with a pipeline for text classification:
```python
>>> import transformers
>>> model_name = "clulab/roberta-base-motivational-interviewing"
>>> classifier = transformers.TextClassificationPipeline(
... tokenizer=transformers.AutoTokenizer.from_pretrained(model_name),
... model=transformers.AutoModelForSequenceClassification.from_pretrained(model_name))
>>> classifier("I'm planning on having tuna, ground tuna, chopped celery, and chopped black pepper, and half a apple.")
[{'label': 'change_talk_goal_talk_and_opportunities', 'score': 0.9995419979095459}]
```
# Model Details
- **Developed by:** [Steven Bethard](https://bethard.github.io/)
- **Parent Model:** [roBERTa-base](https://huggingface.co/roberta-base)
- **GitHub Repo:** [LIvES repo](https://github.com/clulab/lives)
# Uses
The model is intended to be used for text classification, taking as input conversational utterances and predicting as output different categories of motivational interviewing behaviors.
It is intended for use by health coaches to assist when reviewing their past calls with participants. Its predictions should not be used without manual review.
# Training Details
The model was trained on data annotated under the grant [Using Natural Language Processing to Determine Predictors of Healthy Diet and Physical Activity Behavior Change in Ovarian Cancer Survivors (NIH NCI R21CA256680)](https://reporter.nih.gov/project-details/10510666). A [roberta-base](https://huggingface.co/roberta-base) model was fine-tuned on that dataset, with texts tokenized using the standard [roberta-base](https://huggingface.co/roberta-base) tokenizer.
# Evaluation
On the test partition of the R21CA256680 dataset, the model achieves 0.60 precision and 0.46 recall. | [
"TEXT_CLASSIFICATION"
] | BioNLP |
csocsci/mt5-base-binary-en-iiia-02c | csocsci | text2text-generation | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"multilingual",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,695,389,587,000 | 2023-09-23T05:12:11 | 9 | 0 | ---
language:
- multilingual
license: mit
---
# Model Card for mt5-base-binary-en-iiia-02c
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned for binary text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents.
## Model Description
The model was fine-tuned on a dataset of English Instant Messenger dialogs of Adolescents. The classification is binary and the model outputs 'positive' or 'negative': Supportive Interactions present or not. The inputs are a target utterance and its bi-directional context; the target label is that of the target utterance.
- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** mt5-base
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!
## Usage
Here is how to use this model to classify a context-window of a dialogue:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
# Target utterance
test_texts = ['Utterance2']
# Bi-directional context of the target utterance
test_text_pairs = ['Utterance1;Utterance2;Utterance3']
# Load the model and tokenizer
checkpoint_path = "chi2024/mt5-base-binary-en-iiia-02c"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\
.to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)
# Define helper functions
def verbalize_input(text: str, text_pair: str) -> str:
return "Utterance: %s\nContext: %s" % (text, text_pair)
def predict_one(text, pair):
input_pair = verbalize_input(text, pair)
inputs = tokenizer(input_pair, return_tensors="pt", padding=True,
truncation=True, max_length=256).to(model.device)
outputs = model.generate(**inputs)
decoded = [text.strip() for text in
tokenizer.batch_decode(outputs, skip_special_tokens=True)]
return decoded
# Run the prediction
preds_txt = [predict_one(t,p) for t,p in zip(test_texts, test_text_pairs)]
preds_lbl = [1 if x == 'positive' else 0 for x in preds_txt]
print(preds_lbl)
``` | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
facebook/dpr-question_encoder-single-nq-base | facebook | feature-extraction | [
"transformers",
"pytorch",
"tf",
"dpr",
"feature-extraction",
"en",
"dataset:nq_open",
"arxiv:2004.04906",
"arxiv:1702.08734",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"region:us"
] | 1,646,263,745,000 | 2022-12-21T15:20:10 | 35,053 | 30 | ---
datasets:
- nq_open
language: en
license: cc-by-nc-4.0
tags:
- dpr
inference: false
---
# `dpr-question_encoder-single-nq-base`
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-question_encoder-single-nq-base` is the question encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)).
- **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers
- **Model Type:** BERT-based encoder
- **Language(s):** English
- **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md)
- **Related Models:**
- [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base)
- [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base)
- [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base)
- [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base)
- [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base)
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2004.04906)
- [GitHub Repo](https://github.com/facebookresearch/DPR)
- [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr)
- [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output
```
## Uses
#### Direct Use
`dpr-question_encoder-single-nq-base`, [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base), and [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) can be used for the task of open-domain question answering.
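For instance, the retrieval step can be sketched by scoring candidate passages with the dot product between question and passage embeddings (the passages below are hypothetical; a production system would index millions of passages with FAISS):
```python
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

question = "Who wrote Hamlet?"
passages = [
    "Hamlet is a tragedy written by William Shakespeare.",
    "The Eiffel Tower is located in Paris.",
]

with torch.no_grad():
    q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output
    ctx_emb = ctx_encoder(**ctx_tokenizer(passages, return_tensors="pt", padding=True)).pooler_output

scores = q_emb @ ctx_emb.T  # dot-product relevance scores
print(passages[scores.argmax().item()])
```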
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Training
#### Training Data
This model was trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). The model authors write that:
> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.
#### Training Procedure
The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf):
> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.
> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d-dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.
The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf).
#### Testing Data, Factors and Metrics
The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad).
#### Results
|          | NQ   | TriviaQA | WQ   | TREC | SQuAD |
|:--------:|:----:|:--------:|:----:|:----:|:-----:|
| Top 20   | 78.4 | 79.4     | 73.2 | 79.8 | 63.2  |
| Top 100  | 85.4 | 85.0     | 81.4 | 89.1 | 77.2  |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
}
```
## Model Card Authors
This model card was written by the team at Hugging Face. | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-tc-big-fi-zle | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"fi",
"ru",
"uk",
"zle",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,648,112,893,000 | 2023-10-10T11:36:15 | 27 | 0 | ---
language:
- fi
- ru
- uk
- zle
license: cc-by-4.0
tags:
- translation
- opus-mt-tc
model-index:
- name: opus-mt-tc-big-fi-zle
results:
- task:
type: translation
name: Translation fin-rus
dataset:
name: flores101-devtest
type: flores_101
args: fin rus devtest
metrics:
- type: bleu
value: 21.4
name: BLEU
- type: bleu
value: 17.9
name: BLEU
- task:
type: translation
name: Translation fin-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fin-rus
metrics:
- type: bleu
value: 47.0
name: BLEU
---
# opus-mt-tc-big-fi-zle
Neural machine translation model for translating from Finnish (fi) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-17
* source language(s): fin
* target language(s): rus ukr
* valid target language labels: >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zle/opusTCv20210807+bt_transformer-big_2022-03-17.zip)
* more information released models: [OPUS-MT fin-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>rus<<`.
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>rus<< Äänestimme jo.",
">>ukr<< Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen."
]
model_name = "pytorch-models/opus-mt-tc-big-fi-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Мы уже проголосовали.
# Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fi-zle")
print(pipe(">>rus<< Äänestimme jo."))
# expected output: Мы уже проголосовали.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zle/opusTCv20210807+bt_transformer-big_2022-03-17.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zle/opusTCv20210807+bt_transformer-big_2022-03-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| fin-rus | tatoeba-test-v2021-08-07 | 0.67247 | 47.0 | 3643 | 21497 |
| fin-rus | flores101-devtest | 0.49920 | 21.4 | 1012 | 23295 |
| fin-ukr | flores101-devtest | 0.46935 | 17.9 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 42126b6
* port time: Thu Mar 24 09:34:57 EET 2022
* port machine: LM0-400-22516.local
| [
"TRANSLATION"
] | Non_BioNLP |
kwang123/medical-mt-fr-en | kwang123 | translation | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"fr",
"dataset:opus_books",
"dataset:wmt14",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,692,239,880,000 | 2023-08-18T20:41:28 | 175 | 1 | ---
datasets:
- opus_books
- wmt14
language:
- en
- fr
metrics:
- bleu
pipeline_tag: translation
---
# medical-mt-fr-en
This model was fine-tuned from [Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) on the WMT14 medical translation dataset.
source language: French
target language: English
## Inference
```python
from transformers import pipeline
text = 'Coombs négatif anémie hémolytique'
translator = pipeline("translation", model="kwang123/medical-mt-fr-en")
translator(text)
``` | [
"TRANSLATION"
] | Non_BioNLP |
YakovElm/IntelDAOS20SetFitModel_balance_ratio_2 | YakovElm | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,685,685,078,000 | 2023-06-02T05:52:00 | 12 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# YakovElm/IntelDAOS20SetFitModel_balance_ratio_2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/IntelDAOS20SetFitModel_balance_ratio_2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Unbabel/wmt23-cometkiwi-da-xxl-marian | Unbabel | translation | [
"transformers",
"translation",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | 1,716,460,509,000 | 2024-05-23T10:45:08 | 0 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: translation
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
---
Marian version of [wmt23-cometkiwi-da-xxl](https://huggingface.co/Unbabel/wmt23-cometkiwi-da-xxl).
Credits to Microsoft Translate Team!
# Paper
TBA
# License:
cc-by-nc-sa-4.0
# Usage:
TBA
# Intended uses
This model is intended to be used for **reference-free MT evaluation**.
Given a source text and its translation, it outputs a single score between 0 and 1, where 1 represents a perfect translation.
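While usage instructions for this Marian export are still TBA, the original checkpoint is typically scored with the `unbabel-comet` package. A minimal sketch of that workflow (an assumption based on the standard COMET API, not verified for this Marian conversion):
```python
# pip install "unbabel-comet>=2.0"
from comet import download_model, load_from_checkpoint

# Assumption: scoring with the original (non-Marian) checkpoint
model_path = download_model("Unbabel/wmt23-cometkiwi-da-xxl")
model = load_from_checkpoint(model_path)

data = [{"src": "Olen jo äänestänyt.", "mt": "I have already voted."}]
output = model.predict(data, batch_size=8, gpus=0)

print(output.scores)        # one quality score in [0, 1] per segment
print(output.system_score)  # corpus-level average
```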
# Languages Covered:
This model builds on top of InfoXLM, which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable! | [
"TRANSLATION"
] | Non_BioNLP |
mini1013/master_cate_el19 | mini1013 | text-classification | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | 1,731,146,482,000 | 2024-11-09T10:01:45 | 644 | 0 | ---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 신일 SVC-D500SR 무선청소기 싸이클론 유선형 이동식 본체 디자인 그린 워너비템
- text: '[더트데빌 퀵플립플러스] 16V 리튬 무선 핸디청소기 (113년 전통/차량용/가정용/사무실/책상용/원룸/오피스텔) (주)비즈온플레이스'
- text: 신일전자 핸디형 무선 청소기 SVC-C27KP 차량용 가정용 소형청소기 원룸 새봄전자
- text: 더트데빌 플립아웃 20V 리튬 무선 핸디청소기 (주)비즈온플레이스
- text: 홈마블 진공 무선 핸디 미니 소형 스틱 청소기 화이트 씨엠케이(CMK)
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.8571428571428571
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 10 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'셰퍼 스왈로우 핸디 청소기 JSK-N3009 주식회사 알앤피코스메틱'</li><li>'마이아 듀얼프로 에어건 무선 청소기 ND-MYA001W 박진희'</li><li>'신일 싸이클론 무선 청소기 SVC-D500SR 헤파필터 회전브러쉬 제이씨유통'</li></ul> |
| 1 | <ul><li>'LG전자 코드제로 R5 로봇청소기 R585WKA1 (카밍 베이지) 엠제이테크'</li><li>'LG 코드제로 R5 R585WKA 로봇청소기 흡입 + 물걸레 / KN (주)케이엔디지털'</li></ul> |
| 8 | <ul><li>'(전용) 한경희 무선청소기 HCV-B400 PRO 전용 클린타워 모터 보호 헤파필터 '</li><li>'삼성 정품 VS20B957F5E 청소기 헤드 흡입구 브러쉬 슬림 sava03291 H에이치마켓'</li><li>'[별제이]일렉트로룩스 청소기 호환용 먼지봉투 10매 S-BAG ZUS4065AF 호환용 먼지봉투 10매 신세계몰'</li></ul> |
| 5 | <ul><li>'디월트 20V 충전 스틱 전동 무선 청소기 차량용 세차용 집진기 DCV501LN 주식회사 신한비앤아이'</li><li>'디월트 집진기 충전 청소기 무선 세차기 차량용 스틱 업소용청소기 DCV501LN 본체만 라이프 공구'</li><li>'디월트 20V MAX 충전 스틱 집진 청소기 DCV501LN 01.DCV501LN 충전청소기 베어툴 한경툴 주식회사'</li></ul> |
| 9 | <ul><li>'[신제품] 70도 열풍 진드기 제거 침구청소기 레이캅 코리아 '</li><li>'[신제품] 70도 열풍 진드기 제거 침구청소기 레이캅 코리아 '</li><li>'비쎌 스팟클린 하이드로 스팀 3791S 습식 스팀청소기 빈대퇴치 고온 살균 소파얼룩제거 BISSEL '</li></ul> |
| 7 | <ul><li>'[런칭기념 보관가방 ] 클링봇S 물분사 가성비 창문청소로봇 유리창 창문 베란다 로봇청소기 '</li><li>'에코백스 윈봇 W1S 창문 로봇청소기 에코백스공식스토어'</li><li>'클링봇S 물분사 창문로봇청소기 원조 창문 청소기 보관가방포함 아이뮤즈본사'</li></ul> |
| 0 | <ul><li>'AVA 프리미엄 고압세척기 휴대용 가정용 고압세차기 AVA GO P55 동양테크툴'</li><li>'AVA 프리미엄 고압세척기 휴대용 가정용 고압세차기 아바 GO P55 에이지에스'</li></ul> |
| 6 | <ul><li>'신일 유선 싸이클론 진공청소기 SVC-R700LOT 레드 + 블랙_SVC-R700LOT 스위트코코'</li><li>'신일전자 유선 싸이클론 진공청소기 강력한흡입 HEPA필터 700W SVC-R700LOT 신창전자'</li><li>'이스타 먼지제로 유선 진공 청소기 핸디스틱 소형 원룸 가정용 ESK-WV400 주식회사 제이에스엘홀딩스'</li></ul> |
| 4 | <ul><li>'한경희 2in1 스팀청소기 HTE-S600 핸디스팀 LTE-S600 (주)에디샵'</li><li>'[2024년 최신형] 리빈치 초고속 예열 고온 살균 스팀청소기 LSC-200 리빈치 스팀청소기 + 추가패드 증정 총 4장 (주)바투네트워크'</li></ul> |
| 3 | <ul><li>'KAC-5000(유선형)/오토비스/자동물걸레청소기/국산정품/친환경제품/소비전력30W/1분당1000회이상/강력한함/ 1_화이트 클린랜드'</li><li>'WC-1500 무선 충전 물걸레 청소기 각도조절 세척 탈수통 제공 MinSellAmount 지큐아이씨앤씨'</li><li>'코맘스 소형 무선 물걸레청소기 그레이 PC9005G 보만코리아'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.8571 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_el19")
# Run inference
preds = model("더트데빌 플립아웃 20V 리튬 무선 핸디청소기 (주)비즈온플레이스")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 10.2791 | 18 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 2 |
| 1 | 2 |
| 2 | 50 |
| 3 | 5 |
| 4 | 2 |
| 5 | 6 |
| 6 | 14 |
| 7 | 9 |
| 8 | 26 |
| 9 | 13 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
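As a rough illustration, these settings map onto the `setfit>=1.0` Trainer API as sketched below (placeholder two-example data; this is not the authors' training script):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data; the real training set statistics are described above
train_ds = Dataset.from_dict({"text": ["예시 문장 1", "예시 문장 2"], "label": [0, 1]})

model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(
    batch_size=(512, 512),
    num_epochs=(20, 20),
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```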
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0476 | 1 | 0.4954 | - |
| 2.3810 | 50 | 0.0399 | - |
| 4.7619 | 100 | 0.0186 | - |
| 7.1429 | 150 | 0.0152 | - |
| 9.5238 | 200 | 0.0155 | - |
| 11.9048 | 250 | 0.0093 | - |
| 14.2857 | 300 | 0.0025 | - |
| 16.6667 | 350 | 0.0006 | - |
| 19.0476 | 400 | 0.0037 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
AI-Sweden-Models/gpt-sw3-1.3b | AI-Sweden-Models | text-generation | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"sv",
"no",
"en",
"is",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,671,021,180,000 | 2024-01-29T13:20:38 | 4,697 | 4 | ---
language:
- da
- sv
- 'no'
- en
- is
license: apache-2.0
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. AI Sweden shares GPT-SW3 in a controlled pre-release with organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. This is an important step in the process of validating the model and collecting feedback on both what works well and what does not.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-1.3b"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc., requires 100s or even 1000s of GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large-scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1 TB of UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021). (A minimal code illustration of this MinHash step follows after this list.)
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
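A minimal illustration of the MinHash near-duplicate detection mentioned above (an assumption for clarity: it uses the `datasketch` package, which the card does not name):
```python
from datasketch import MinHash, MinHashLSH

def minhash(doc: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in doc.lower().split():
        m.update(token.encode("utf-8"))
    return m

docs = {
    "a": "the quick brown fox jumps over the lazy dog",
    "b": "the quick brown fox jumps over a lazy dog",
    "c": "completely unrelated text about something else",
}

lsh = MinHashLSH(threshold=0.8, num_perm=128)
signatures = {key: minhash(text) for key, text in docs.items()}
for key, sig in signatures.items():
    lsh.insert(key, sig)

# Near-duplicates of "a": returns "a" itself and the close variant "b"
print(lsh.query(signatures["a"]))
```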
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks)? If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-1.3b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 29.99 |
| ARC (25-shot) | 30.38 |
| HellaSwag (10-shot) | 50.4 |
| MMLU (5-shot) | 26.14 |
| TruthfulQA (0-shot) | 39.97 |
| Winogrande (5-shot) | 58.88 |
| GSM8K (5-shot) | 0.08 |
| DROP (3-shot) | 4.08 |
| [
"SUMMARIZATION"
] | Non_BioNLP |
poltextlab/xlm-roberta-large-spanish-media-cap-v3 | poltextlab | text-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"multilingual",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,699,013,140,000 | 2025-02-26T16:06:32 | 0 | 0 | ---
language:
- multilingual
license: mit
metrics:
- accuracy
- f1-score
tags:
- zero-shot-classification
- text-classification
- pytorch
extra_gated_prompt: 'Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
If you use our models for your work or research, please cite this paper: Sebők,
M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large
Language Models for Multilingual Policy Topic Classification: The Babel Machine
Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434'
extra_gated_fields:
Name: text
Country: country
Institution: text
Institution Email: text
Please specify your academic use case: text
---
# xlm-roberta-large-spanish-media-cap-v3
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `media` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes.
## How to use the model
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
model="poltextlab/xlm-roberta-large-spanish-media-cap-v3",
task="text-classification",
tokenizer=tokenizer,
use_fast=False,
token="<your_hf_read_only_token>"
)
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
The translation table from the model results to CAP codes is the following:
```python
CAP_NUM_DICT = {
0: 1,
1: 2,
2: 3,
3: 4,
4: 5,
5: 6,
6: 7,
7: 8,
8: 9,
9: 10,
10: 12,
11: 13,
12: 14,
13: 15,
14: 16,
15: 17,
16: 18,
17: 19,
18: 20,
19: 21,
20: 23,
21: 999,
}
```
We have included a 999 label because our models are fine-tuned on training data containing the label 'None' in addition to the 21 CAP major policy topic codes, indicating that the given text contains no relevant policy content. We use the label 999 for these cases.
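For instance, mapping a pipeline prediction back to a CAP code can be done like this (a sketch assuming the model returns labels in the default `LABEL_<n>` form):
```python
# Assumption: `pipe`, `text`, and CAP_NUM_DICT are defined as above
result = pipe(text)[0]                       # e.g. {"label": "LABEL_12", "score": 0.87}
label_id = int(result["label"].split("_")[-1])
cap_code = CAP_NUM_DICT[label_id]
print(f"CAP major topic code: {cap_code} (score {result['score']:.2f})")
```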
### Gated access
Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
## Model performance
The model was evaluated on a test set of 17885 examples (10% of the available data).<br>
Model accuracy is **0.76**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.84 | 0.72 | 0.77 | 821 |
| 1 | 0.6 | 0.59 | 0.59 | 1015 |
| 2 | 0.79 | 0.84 | 0.82 | 689 |
| 3 | 0.71 | 0.86 | 0.78 | 155 |
| 4 | 0.68 | 0.69 | 0.69 | 371 |
| 5 | 0.79 | 0.81 | 0.8 | 229 |
| 6 | 0.62 | 0.63 | 0.63 | 189 |
| 7 | 0.8 | 0.82 | 0.81 | 341 |
| 8 | 0.78 | 0.75 | 0.77 | 322 |
| 9 | 0.75 | 0.89 | 0.81 | 517 |
| 10 | 0.8 | 0.79 | 0.79 | 3711 |
| 11 | 0.63 | 0.45 | 0.53 | 138 |
| 12 | 0.67 | 0.5 | 0.57 | 159 |
| 13 | 0.7 | 0.64 | 0.67 | 792 |
| 14 | 0.78 | 0.78 | 0.78 | 1330 |
| 15 | 0.73 | 0.84 | 0.79 | 621 |
| 16 | 0.57 | 0.43 | 0.49 | 168 |
| 17 | 0.65 | 0.65 | 0.65 | 1039 |
| 18 | 0.81 | 0.8 | 0.8 | 3879 |
| 19 | 0.76 | 0.71 | 0.74 | 49 |
| 20 | 0.74 | 0.82 | 0.78 | 1350 |
| macro avg | 0.72 | 0.72 | 0.72 | 17885 |
| weighted avg | 0.76 | 0.76 | 0.76 | 17885 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. In order to run the model before `transformers==4.27` you need to install it manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue. | [
"TRANSLATION"
] | Non_BioNLP |
Ashkanero/distilbert-base-uncased-finetuned-emotion | Ashkanero | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,718,921,282,000 | 2024-06-20T22:33:26 | 8 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.926
name: Accuracy
- type: f1
value: 0.925944582955182
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2129
- Accuracy: 0.926
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
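Pending further details, the checkpoint can be exercised as a standard Hub text-classification model. A minimal sketch (an assumption, not documented by the authors):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Ashkanero/distilbert-base-uncased-finetuned-emotion",
)
# Labels may appear as LABEL_<n> if id2label is not set in the config
print(clf("I am so happy today!"))  # e.g. [{'label': 'joy', 'score': 0.98}]
```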
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8026 | 1.0 | 250 | 0.2986 | 0.917 | 0.9160 |
| 0.2447 | 2.0 | 500 | 0.2129 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-fr-tll | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tll",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:37:23 | 401 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-fr-tll
* source languages: fr
* target languages: tll
* OPUS readme: [fr-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tll | 24.6 | 0.467 |
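The card ships no usage snippet; below is a minimal sketch with the standard MarianMT API (the Hub id follows the usual Helsinki-NLP naming shown above):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-tll"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```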
| [
"TRANSLATION"
] | Non_BioNLP |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1288 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"license:mit",
"region:us"
] | 1,718,740,121,000 | 2024-07-03T20:33:38 | 0 | 0 | ---
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1288
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task1288_glue_mrpc_paraphrasing
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task1288_glue_mrpc_paraphrasing sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"PARAPHRASING"
] | Non_BioNLP |
YoLo2000/TiLamb-7B | YoLo2000 | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"bo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,712,060,416,000 | 2024-04-03T01:08:11 | 35 | 1 | ---
language:
- bo
license: apache-2.0
---
<!-- Provide a longer summary of what this model is. -->
# TiLamb-7B(Tibetan Large Language Model Base)
**TiLamb-7B** 是藏文大语言模型的基座模型,它使用了 26.43GB 的藏文语料,基于Meta发布的可商用大模型 LLaMA2-7B 模型,通过 LoRA 方法进行了增量预训练。该模型在 LLaMA2 的基础上扩展了词表,从原有的词表大小 32,000 扩充藏文词汇至 61,221 ,并对 LLaMA2-7B 原始模型的 embedding 和 lm_head 进行了均值扩充初始化。更多信息请访问 [TiLamb-7B GitHub 主页](https://github.com/NLP-Learning/TiLamb)。
**重要说明**:
- TiLamb-7B 是一个未经监督微调的基座模型,**不具备对话能力**。
- 要进行藏文对话和藏文 NLP 下游任务的适配(已验证的任务包括藏文新闻分类、藏文实体关系分类、藏文机器阅读理解、藏文分词、藏文摘要、藏文问题回答和藏文问题生成),建议使用 [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/tree/main) 框架进行微调。
**使用须知**:
- 本项目基于 Meta 发布的 LLaMA2-7B 模型开发,使用时请严格遵守 LLaMA2-7B 的开源许可协议。
- 如果涉及使用第三方代码,请务必遵从相关的开源许可协议。
- 模型生成的内容准确性可能受到计算方法、随机因素等的影响,因此,我们不对模型输出的准确性提供任何保证,也不会对使用相关资源和输出结果产生的任何损失承担责任。
- 如果将相关模型用于商业用途,开发者应遵守当地法律法规,确保模型输出内容的合规性。本项目不对任何由此衍生的产品或服务承担责任。
# TiLamb-7B (Tibetan Large Language Model Base)
**TiLamb-7B** is the foundational model for the Tibetan language, utilizing 26.43GB of Tibetan corpora. It's based on Meta's commercially available large model, LLaMA2-7B, and has been incrementally pre-trained using the LoRA method. This model expands on LLaMA2 by enlarging the vocabulary from the original 32,000 to 61,221 Tibetan words and initializes the embedding and lm_head of the original LLaMA2-7B model through mean expansion. For more information, please visit the [TiLamb-7B GitHub page](https://github.com/NLP-Learning/TiLamb).
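As an illustration of the mean-expansion initialization described above, a hypothetical sketch (not the authors' training code) of how new embedding and lm_head rows can start from the mean of the original rows:
```python
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
old_vocab, new_vocab = 32000, 61221  # sizes from the card above

with torch.no_grad():
    # Compute means over the original rows before resizing
    emb_mean = model.get_input_embeddings().weight[:old_vocab].mean(dim=0)
    head_mean = model.get_output_embeddings().weight[:old_vocab].mean(dim=0)
    model.resize_token_embeddings(new_vocab)
    # New Tibetan-token rows are initialized to the mean of the old vocabulary
    model.get_input_embeddings().weight[old_vocab:] = emb_mean
    model.get_output_embeddings().weight[old_vocab:] = head_mean
```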
**Important Notes**:
- TiLamb-7B is an unsupervised fine-tuned base model, **lacking conversational capabilities**.
- For adaptation to Tibetan dialogue and Tibetan NLP downstream tasks (verified tasks include Tibetan news classification, Tibetan entity relation classification, Tibetan machine reading comprehension, Tibetan word segmentation, Tibetan summarization, Tibetan question answering, and Tibetan question generation), it is recommended to use the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/tree/main) framework for fine-tuning.
**Usage Notice**:
- This project is developed based on the LLaMA2-7B model released by Meta, and its use must strictly adhere to the open-source license agreement of LLaMA2-7B.
- If third-party code is involved, it is essential to comply with the relevant open-source license agreements.
- The accuracy of the content generated by the model may be affected by computational methods, random factors, etc., therefore, we do not provide any guarantee for the accuracy of the model outputs, nor will we bear any responsibility for losses arising from the use of related resources and results.
- If the related models are used for commercial purposes, developers must comply with local laws and regulations to ensure the compliance of the model output content. This project will not bear any responsibility for any products or services derived from it.
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
jondurbin/airoboros-l2-13b-3.1.1 | jondurbin | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:jondurbin/airoboros-3.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,697,566,265,000 | 2023-10-22T11:44:38 | 8 | 5 | ---
datasets:
- jondurbin/airoboros-3.1
license: llama2
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is a prompt fix release. 3.1 required a single space after the last `[/INST]`, which was highly annoying and obnoxious, so I re-tuned the models without this. Otherwise, it's the same as 3.1.
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-13b-3.1.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
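For example, a minimal extraction helper (hypothetical; the deterministic evaluation itself is delegated to compute-engine JS or mathjson.py) could be:
```python
import json
import re

def extract_mathjson(response: str):
    """Pull the expression out of the model's <mathjson> tags and parse it as JSON."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", response, re.S)
    if not match:
        raise ValueError("no <mathjson> block found")
    return json.loads(match.group(1))

# Using the circle-area response above:
expr = extract_mathjson('<mathjson>["Multiply", "Pi", ["Power", 17.2456, 2]]</mathjson>')
print(expr)  # ['Multiply', 'Pi', ['Power', 17.2456, 2]] -- ready for a deterministic evaluator
```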
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (a single one or a list) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
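Because the delimiters are fixed strings, these prompts are easy to assemble programmatically; a small hypothetical helper (not part of airoboros) might look like:
```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble a closed-context prompt from (metadata, text) pairs,
    following the BEGININPUT/BEGINCONTEXT format above."""
    parts = []
    for metadata, text in blocks:
        context = "\n".join(f"{k}: {v}" for k, v in metadata.items())
        parts.append(f"BEGININPUT\nBEGINCONTEXT\n{context}\nENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, "
      "but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
```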
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
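Parsing and dispatching that JSON on the application side might look like the following sketch (the registry and the tool implementation are hypothetical, not part of the model):
```python
import json

def count_occurrences(keyword: str, path: str = "my_text_file.txt") -> int:
    # Hypothetical implementation backing the `file_analytics` tool above.
    with open(path) as f:
        return f.read().count(keyword)

ACTIONS = {("file_analytics", "count_occurrences"): count_occurrences}

def dispatch(model_response: str):
    call = json.loads(model_response)  # the model's output is plain JSON
    handler = ACTIONS[(call["function"], call["params"]["action"])]
    return handler(call["params"]["filters"]["keyword"])
```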
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests


def inject_context(input_text, **context):
    # Substitute :evidence[n]: references with previously computed values
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string
    # ... return text content


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
NovaSearch/jasper_en_vision_language_v1 | NovaSearch | null | [
"sentence-transformers",
"safetensors",
"jasper_vl",
"mteb",
"custom_code",
"en",
"dataset:BAAI/Infinity-MM",
"dataset:HuggingFaceFW/fineweb-edu",
"arxiv:2412.19048",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:finetune:NovaSearch/stella_en_1.5B_v5",
"model-index",
"region:us"
] | 1,733,887,383,000 | 2025-01-24T02:03:17 | 10,537 | 46 | ---
base_model:
- dunzhang/stella_en_1.5B_v5
- google/siglip-so400m-patch14-384
datasets:
- BAAI/Infinity-MM
- HuggingFaceFW/fineweb-edu
language:
- en
tags:
- mteb
- sentence-transformers
model-index:
- name: jasper_en_vision_language_v1
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 95.7271
- type: f1
value: 89.25450000000001
- type: f1_weighted
value: 95.8563
- type: ap
value: 67.1563
- type: ap_weighted
value: 67.1563
- type: main_score
value: 95.7271
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 93.7761
- type: f1
value: 90.7582
- type: f1_weighted
value: 93.974
- type: ap
value: 74.88759999999999
- type: ap_weighted
value: 74.88759999999999
- type: main_score
value: 93.7761
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.5809
- type: f1
value: 97.5808
- type: f1_weighted
value: 97.5808
- type: ap
value: 96.3911
- type: ap_weighted
value: 96.3911
- type: main_score
value: 97.5809
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.918
- type: f1
value: 60.696099999999994
- type: f1_weighted
value: 60.696099999999994
- type: main_score
value: 62.918
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: ndcg_at_1
value: 41.323
- type: ndcg_at_3
value: 56.737
- type: ndcg_at_5
value: 61.254
- type: ndcg_at_10
value: 65.204
- type: ndcg_at_20
value: 66.75
- type: ndcg_at_100
value: 67.254
- type: ndcg_at_1000
value: 67.254
- type: map_at_1
value: 41.323
- type: map_at_3
value: 52.881
- type: map_at_5
value: 55.373000000000005
- type: map_at_10
value: 57.013999999999996
- type: map_at_20
value: 57.457
- type: map_at_100
value: 57.538
- type: map_at_1000
value: 57.538
- type: recall_at_1
value: 41.323
- type: recall_at_3
value: 67.923
- type: recall_at_5
value: 78.947
- type: recall_at_10
value: 91.11
- type: recall_at_20
value: 97.084
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: precision_at_1
value: 41.323
- type: precision_at_3
value: 22.641
- type: precision_at_5
value: 15.789
- type: precision_at_10
value: 9.110999999999999
- type: precision_at_20
value: 4.854
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 41.6785
- type: mrr_at_3
value: 52.963499999999996
- type: mrr_at_5
value: 55.456399999999995
- type: mrr_at_10
value: 57.104299999999995
- type: mrr_at_20
value: 57.5402
- type: mrr_at_100
value: 57.6213
- type: mrr_at_1000
value: 57.6213
- type: nauc_ndcg_at_1_max
value: -12.633600000000001
- type: nauc_ndcg_at_1_std
value: -19.747700000000002
- type: nauc_ndcg_at_1_diff1
value: 25.820300000000003
- type: nauc_ndcg_at_3_max
value: -8.480799999999999
- type: nauc_ndcg_at_3_std
value: -18.928900000000002
- type: nauc_ndcg_at_3_diff1
value: 19.409499999999998
- type: nauc_ndcg_at_5_max
value: -7.9172
- type: nauc_ndcg_at_5_std
value: -19.2306
- type: nauc_ndcg_at_5_diff1
value: 18.5809
- type: nauc_ndcg_at_10_max
value: -9.7812
- type: nauc_ndcg_at_10_std
value: -19.9964
- type: nauc_ndcg_at_10_diff1
value: 18.0753
- type: nauc_ndcg_at_20_max
value: -8.6546
- type: nauc_ndcg_at_20_std
value: -19.3205
- type: nauc_ndcg_at_20_diff1
value: 20.1741
- type: nauc_ndcg_at_100_max
value: -9.376900000000001
- type: nauc_ndcg_at_100_std
value: -19.5093
- type: nauc_ndcg_at_100_diff1
value: 20.4997
- type: nauc_ndcg_at_1000_max
value: -9.376900000000001
- type: nauc_ndcg_at_1000_std
value: -19.5093
- type: nauc_ndcg_at_1000_diff1
value: 20.4997
- type: nauc_map_at_1_max
value: -12.633600000000001
- type: nauc_map_at_1_std
value: -19.747700000000002
- type: nauc_map_at_1_diff1
value: 25.820300000000003
- type: nauc_map_at_3_max
value: -9.5533
- type: nauc_map_at_3_std
value: -19.3235
- type: nauc_map_at_3_diff1
value: 20.9083
- type: nauc_map_at_5_max
value: -9.3126
- type: nauc_map_at_5_std
value: -19.4593
- type: nauc_map_at_5_diff1
value: 20.5718
- type: nauc_map_at_10_max
value: -10.0766
- type: nauc_map_at_10_std
value: -19.8322
- type: nauc_map_at_10_diff1
value: 20.484099999999998
- type: nauc_map_at_20_max
value: -9.7917
- type: nauc_map_at_20_std
value: -19.6759
- type: nauc_map_at_20_diff1
value: 21.0095
- type: nauc_map_at_100_max
value: -9.8837
- type: nauc_map_at_100_std
value: -19.741400000000002
- type: nauc_map_at_100_diff1
value: 21.067700000000002
- type: nauc_map_at_1000_max
value: -9.8837
- type: nauc_map_at_1000_std
value: -19.741400000000002
- type: nauc_map_at_1000_diff1
value: 21.067700000000002
- type: nauc_recall_at_1_max
value: -12.633600000000001
- type: nauc_recall_at_1_std
value: -19.747700000000002
- type: nauc_recall_at_1_diff1
value: 25.820300000000003
- type: nauc_recall_at_3_max
value: -4.7886999999999995
- type: nauc_recall_at_3_std
value: -17.4764
- type: nauc_recall_at_3_diff1
value: 14.3308
- type: nauc_recall_at_5_max
value: -1.4803
- type: nauc_recall_at_5_std
value: -18.2137
- type: nauc_recall_at_5_diff1
value: 9.4604
- type: nauc_recall_at_10_max
value: -8.8102
- type: nauc_recall_at_10_std
value: -21.8279
- type: nauc_recall_at_10_diff1
value: -5.5388
- type: nauc_recall_at_20_max
value: 25.202400000000004
- type: nauc_recall_at_20_std
value: -4.8613
- type: nauc_recall_at_20_diff1
value: 9.3917
- type: nauc_recall_at_100_max
value: 37.0551
- type: nauc_recall_at_100_std
value: 84.5472
- type: nauc_recall_at_100_diff1
value: 19.5116
- type: nauc_recall_at_1000_max
value: 37.0551
- type: nauc_recall_at_1000_std
value: 84.5472
- type: nauc_recall_at_1000_diff1
value: 19.5116
- type: nauc_precision_at_1_max
value: -12.633600000000001
- type: nauc_precision_at_1_std
value: -19.747700000000002
- type: nauc_precision_at_1_diff1
value: 25.820300000000003
- type: nauc_precision_at_3_max
value: -4.7886999999999995
- type: nauc_precision_at_3_std
value: -17.4764
- type: nauc_precision_at_3_diff1
value: 14.3308
- type: nauc_precision_at_5_max
value: -1.4803
- type: nauc_precision_at_5_std
value: -18.2137
- type: nauc_precision_at_5_diff1
value: 9.4604
- type: nauc_precision_at_10_max
value: -8.8102
- type: nauc_precision_at_10_std
value: -21.8279
- type: nauc_precision_at_10_diff1
value: -5.5388
- type: nauc_precision_at_20_max
value: 25.202400000000004
- type: nauc_precision_at_20_std
value: -4.8613
- type: nauc_precision_at_20_diff1
value: 9.3917
- type: nauc_precision_at_100_max
value: 37.0551
- type: nauc_precision_at_100_std
value: 84.5472
- type: nauc_precision_at_100_diff1
value: 19.5116
- type: nauc_precision_at_1000_max
value: 37.0551
- type: nauc_precision_at_1000_std
value: 84.5472
- type: nauc_precision_at_1000_diff1
value: 19.5116
- type: nauc_mrr_at_1_max
value: -11.9728
- type: nauc_mrr_at_1_std
value: -19.4014
- type: nauc_mrr_at_1_diff1
value: 24.8653
- type: nauc_mrr_at_3_max
value: -9.6607
- type: nauc_mrr_at_3_std
value: -19.1819
- type: nauc_mrr_at_3_diff1
value: 20.0205
- type: nauc_mrr_at_5_max
value: -9.4261
- type: nauc_mrr_at_5_std
value: -19.3098
- type: nauc_mrr_at_5_diff1
value: 19.6347
- type: nauc_mrr_at_10_max
value: -10.1698
- type: nauc_mrr_at_10_std
value: -19.683
- type: nauc_mrr_at_10_diff1
value: 19.4823
- type: nauc_mrr_at_20_max
value: -9.913
- type: nauc_mrr_at_20_std
value: -19.517300000000002
- type: nauc_mrr_at_20_diff1
value: 20.0066
- type: nauc_mrr_at_100_max
value: -10.0053
- type: nauc_mrr_at_100_std
value: -19.5824
- type: nauc_mrr_at_100_diff1
value: 20.061899999999998
- type: nauc_mrr_at_1000_max
value: -10.0053
- type: nauc_mrr_at_1000_std
value: -19.5824
- type: nauc_mrr_at_1000_diff1
value: 20.061899999999998
- type: main_score
value: 65.204
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 55.710300000000004
- type: v_measure_std
value: 14.551900000000002
- type: main_score
value: 55.710300000000004
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.004599999999996
- type: v_measure_std
value: 14.868899999999998
- type: main_score
value: 51.004599999999996
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.38119999999999
- type: mrr
value: 79.0692
- type: nAUC_map_max
value: 21.9757
- type: nAUC_map_std
value: 14.3545
- type: nAUC_map_diff1
value: 11.82
- type: nAUC_mrr_max
value: 40.3449
- type: nAUC_mrr_std
value: 18.2678
- type: nAUC_mrr_diff1
value: 21.1596
- type: main_score
value: 67.38119999999999
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: pearson
value: 87.4332
- type: spearman
value: 84.65979999999999
- type: cosine_pearson
value: 87.4332
- type: cosine_spearman
value: 84.65979999999999
- type: manhattan_pearson
value: 86.37270000000001
- type: manhattan_spearman
value: 85.20309999999999
- type: euclidean_pearson
value: 86.1672
- type: euclidean_spearman
value: 84.7453
- type: main_score
value: 84.65979999999999
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.224
- type: f1
value: 86.7235
- type: f1_weighted
value: 86.7235
- type: main_score
value: 87.224
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 52.04990000000001
- type: v_measure_std
value: 0.7255
- type: main_score
value: 52.04990000000001
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 47.5811
- type: v_measure_std
value: 0.8049000000000001
- type: main_score
value: 47.5811
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: ndcg_at_1
value: 46.781
- type: ndcg_at_3
value: 52.111
- type: ndcg_at_5
value: 54.371
- type: ndcg_at_10
value: 57.247
- type: ndcg_at_20
value: 59.122
- type: ndcg_at_100
value: 62.071
- type: ndcg_at_1000
value: 63.32
- type: map_at_1
value: 37.2
- type: map_at_3
value: 46.005
- type: map_at_5
value: 48.32
- type: map_at_10
value: 50.181
- type: map_at_20
value: 51.062
- type: map_at_100
value: 51.735
- type: map_at_1000
value: 51.842
- type: recall_at_1
value: 37.2
- type: recall_at_3
value: 53.94
- type: recall_at_5
value: 60.88
- type: recall_at_10
value: 69.692
- type: recall_at_20
value: 76.711
- type: recall_at_100
value: 90.263
- type: recall_at_1000
value: 97.749
- type: precision_at_1
value: 46.781
- type: precision_at_3
value: 25.894000000000002
- type: precision_at_5
value: 18.541
- type: precision_at_10
value: 11.402
- type: precision_at_20
value: 6.567
- type: precision_at_100
value: 1.704
- type: precision_at_1000
value: 0.21
- type: mrr_at_1
value: 46.781099999999995
- type: mrr_at_3
value: 54.6257
- type: mrr_at_5
value: 56.0992
- type: mrr_at_10
value: 57.00769999999999
- type: mrr_at_20
value: 57.3576
- type: mrr_at_100
value: 57.6118
- type: mrr_at_1000
value: 57.6415
- type: nauc_ndcg_at_1_max
value: 41.3069
- type: nauc_ndcg_at_1_std
value: -3.9859999999999998
- type: nauc_ndcg_at_1_diff1
value: 56.355900000000005
- type: nauc_ndcg_at_3_max
value: 40.354299999999995
- type: nauc_ndcg_at_3_std
value: -1.4893
- type: nauc_ndcg_at_3_diff1
value: 51.473800000000004
- type: nauc_ndcg_at_5_max
value: 39.2091
- type: nauc_ndcg_at_5_std
value: -2.6139
- type: nauc_ndcg_at_5_diff1
value: 50.4382
- type: nauc_ndcg_at_10_max
value: 37.8136
- type: nauc_ndcg_at_10_std
value: -1.9053
- type: nauc_ndcg_at_10_diff1
value: 49.677
- type: nauc_ndcg_at_20_max
value: 37.7571
- type: nauc_ndcg_at_20_std
value: -1.0221
- type: nauc_ndcg_at_20_diff1
value: 49.6703
- type: nauc_ndcg_at_100_max
value: 39.0419
- type: nauc_ndcg_at_100_std
value: 0.5525
- type: nauc_ndcg_at_100_diff1
value: 50.8714
- type: nauc_ndcg_at_1000_max
value: 39.4123
- type: nauc_ndcg_at_1000_std
value: 0.2088
- type: nauc_ndcg_at_1000_diff1
value: 51.0321
- type: nauc_map_at_1_max
value: 31.1237
- type: nauc_map_at_1_std
value: -7.0686
- type: nauc_map_at_1_diff1
value: 55.28189999999999
- type: nauc_map_at_3_max
value: 36.750899999999994
- type: nauc_map_at_3_std
value: -4.502599999999999
- type: nauc_map_at_3_diff1
value: 52.64640000000001
- type: nauc_map_at_5_max
value: 37.4208
- type: nauc_map_at_5_std
value: -4.3387
- type: nauc_map_at_5_diff1
value: 51.8133
- type: nauc_map_at_10_max
value: 37.4829
- type: nauc_map_at_10_std
value: -3.3794999999999997
- type: nauc_map_at_10_diff1
value: 51.640299999999996
- type: nauc_map_at_20_max
value: 37.7883
- type: nauc_map_at_20_std
value: -2.9455999999999998
- type: nauc_map_at_20_diff1
value: 51.635299999999994
- type: nauc_map_at_100_max
value: 38.1532
- type: nauc_map_at_100_std
value: -2.513
- type: nauc_map_at_100_diff1
value: 51.8036
- type: nauc_map_at_1000_max
value: 38.2036
- type: nauc_map_at_1000_std
value: -2.5201000000000002
- type: nauc_map_at_1000_diff1
value: 51.807
- type: nauc_recall_at_1_max
value: 31.1237
- type: nauc_recall_at_1_std
value: -7.0686
- type: nauc_recall_at_1_diff1
value: 55.28189999999999
- type: nauc_recall_at_3_max
value: 33.942899999999995
- type: nauc_recall_at_3_std
value: -2.1842
- type: nauc_recall_at_3_diff1
value: 46.806
- type: nauc_recall_at_5_max
value: 32.935199999999995
- type: nauc_recall_at_5_std
value: -3.6494999999999997
- type: nauc_recall_at_5_diff1
value: 43.453599999999994
- type: nauc_recall_at_10_max
value: 28.2544
- type: nauc_recall_at_10_std
value: -1.1788
- type: nauc_recall_at_10_diff1
value: 38.8916
- type: nauc_recall_at_20_max
value: 27.1235
- type: nauc_recall_at_20_std
value: 2.8238
- type: nauc_recall_at_20_diff1
value: 36.9813
- type: nauc_recall_at_100_max
value: 31.899300000000004
- type: nauc_recall_at_100_std
value: 22.3521
- type: nauc_recall_at_100_diff1
value: 43.0867
- type: nauc_recall_at_1000_max
value: 45.7236
- type: nauc_recall_at_1000_std
value: 63.258199999999995
- type: nauc_recall_at_1000_diff1
value: 56.854499999999994
- type: nauc_precision_at_1_max
value: 41.3069
- type: nauc_precision_at_1_std
value: -3.9859999999999998
- type: nauc_precision_at_1_diff1
value: 56.355900000000005
- type: nauc_precision_at_3_max
value: 39.6888
- type: nauc_precision_at_3_std
value: 5.7427
- type: nauc_precision_at_3_diff1
value: 26.694699999999997
- type: nauc_precision_at_5_max
value: 34.3509
- type: nauc_precision_at_5_std
value: 7.3069999999999995
- type: nauc_precision_at_5_diff1
value: 15.4004
- type: nauc_precision_at_10_max
value: 23.2221
- type: nauc_precision_at_10_std
value: 10.559899999999999
- type: nauc_precision_at_10_diff1
value: 3.6925
- type: nauc_precision_at_20_max
value: 17.9697
- type: nauc_precision_at_20_std
value: 11.9127
- type: nauc_precision_at_20_diff1
value: -2.7178
- type: nauc_precision_at_100_max
value: 11.8537
- type: nauc_precision_at_100_std
value: 11.442
- type: nauc_precision_at_100_diff1
value: -11.2562
- type: nauc_precision_at_1000_max
value: 5.7549
- type: nauc_precision_at_1000_std
value: 0.40169999999999995
- type: nauc_precision_at_1000_diff1
value: -18.0644
- type: nauc_mrr_at_1_max
value: 41.3069
- type: nauc_mrr_at_1_std
value: -3.9859999999999998
- type: nauc_mrr_at_1_diff1
value: 56.355900000000005
- type: nauc_mrr_at_3_max
value: 41.626200000000004
- type: nauc_mrr_at_3_std
value: -0.7362
- type: nauc_mrr_at_3_diff1
value: 52.7305
- type: nauc_mrr_at_5_max
value: 41.341499999999996
- type: nauc_mrr_at_5_std
value: -1.113
- type: nauc_mrr_at_5_diff1
value: 52.159299999999995
- type: nauc_mrr_at_10_max
value: 40.9696
- type: nauc_mrr_at_10_std
value: -0.7545999999999999
- type: nauc_mrr_at_10_diff1
value: 51.9591
- type: nauc_mrr_at_20_max
value: 41.0028
- type: nauc_mrr_at_20_std
value: -0.5925
- type: nauc_mrr_at_20_diff1
value: 52.0497
- type: nauc_mrr_at_100_max
value: 41.0447
- type: nauc_mrr_at_100_std
value: -0.6299
- type: nauc_mrr_at_100_diff1
value: 52.2239
- type: nauc_mrr_at_1000_max
value: 41.045
- type: nauc_mrr_at_1000_std
value: -0.6354000000000001
- type: nauc_mrr_at_1000_diff1
value: 52.2368
- type: main_score
value: 57.247
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: ndcg_at_1
value: 46.497
- type: ndcg_at_3
value: 50.580999999999996
- type: ndcg_at_5
value: 52.159
- type: ndcg_at_10
value: 54.269999999999996
- type: ndcg_at_20
value: 56.001
- type: ndcg_at_100
value: 58.373
- type: ndcg_at_1000
value: 60.272000000000006
- type: map_at_1
value: 36.296
- type: map_at_3
value: 45.178000000000004
- type: map_at_5
value: 46.96
- type: map_at_10
value: 48.388999999999996
- type: map_at_20
value: 49.164
- type: map_at_100
value: 49.769000000000005
- type: map_at_1000
value: 49.906
- type: recall_at_1
value: 36.296
- type: recall_at_3
value: 51.505
- type: recall_at_5
value: 56.633
- type: recall_at_10
value: 63.519999999999996
- type: recall_at_20
value: 70.06099999999999
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 92.396
- type: precision_at_1
value: 46.497
- type: precision_at_3
value: 25.032
- type: precision_at_5
value: 17.427
- type: precision_at_10
value: 10.433
- type: precision_at_20
value: 6.085999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.20600000000000002
- type: mrr_at_1
value: 46.4968
- type: mrr_at_3
value: 52.9193
- type: mrr_at_5
value: 54.0117
- type: mrr_at_10
value: 54.815400000000004
- type: mrr_at_20
value: 55.2265
- type: mrr_at_100
value: 55.4713
- type: mrr_at_1000
value: 55.51239999999999
- type: nauc_ndcg_at_1_max
value: 44.406400000000005
- type: nauc_ndcg_at_1_std
value: -0.5542
- type: nauc_ndcg_at_1_diff1
value: 61.951
- type: nauc_ndcg_at_3_max
value: 41.765
- type: nauc_ndcg_at_3_std
value: -1.6297
- type: nauc_ndcg_at_3_diff1
value: 56.6064
- type: nauc_ndcg_at_5_max
value: 41.2399
- type: nauc_ndcg_at_5_std
value: -1.6739
- type: nauc_ndcg_at_5_diff1
value: 56.736
- type: nauc_ndcg_at_10_max
value: 41.3525
- type: nauc_ndcg_at_10_std
value: -1.0736
- type: nauc_ndcg_at_10_diff1
value: 56.619600000000005
- type: nauc_ndcg_at_20_max
value: 40.950900000000004
- type: nauc_ndcg_at_20_std
value: 0.2865
- type: nauc_ndcg_at_20_diff1
value: 56.09459999999999
- type: nauc_ndcg_at_100_max
value: 41.4473
- type: nauc_ndcg_at_100_std
value: 1.9026999999999998
- type: nauc_ndcg_at_100_diff1
value: 55.6213
- type: nauc_ndcg_at_1000_max
value: 42.2036
- type: nauc_ndcg_at_1000_std
value: 2.2438
- type: nauc_ndcg_at_1000_diff1
value: 55.9627
- type: nauc_map_at_1_max
value: 32.782
- type: nauc_map_at_1_std
value: -9.6273
- type: nauc_map_at_1_diff1
value: 61.260099999999994
- type: nauc_map_at_3_max
value: 37.1299
- type: nauc_map_at_3_std
value: -7.119300000000001
- type: nauc_map_at_3_diff1
value: 58.1535
- type: nauc_map_at_5_max
value: 37.9225
- type: nauc_map_at_5_std
value: -5.9012
- type: nauc_map_at_5_diff1
value: 57.781499999999994
- type: nauc_map_at_10_max
value: 38.8891
- type: nauc_map_at_10_std
value: -4.874499999999999
- type: nauc_map_at_10_diff1
value: 57.755500000000005
- type: nauc_map_at_20_max
value: 39.3425
- type: nauc_map_at_20_std
value: -3.794
- type: nauc_map_at_20_diff1
value: 57.513400000000004
- type: nauc_map_at_100_max
value: 39.896
- type: nauc_map_at_100_std
value: -2.9215
- type: nauc_map_at_100_diff1
value: 57.53549999999999
- type: nauc_map_at_1000_max
value: 40.008300000000006
- type: nauc_map_at_1000_std
value: -2.8195
- type: nauc_map_at_1000_diff1
value: 57.60150000000001
- type: nauc_recall_at_1_max
value: 32.782
- type: nauc_recall_at_1_std
value: -9.6273
- type: nauc_recall_at_1_diff1
value: 61.260099999999994
- type: nauc_recall_at_3_max
value: 35.434599999999996
- type: nauc_recall_at_3_std
value: -6.290800000000001
- type: nauc_recall_at_3_diff1
value: 52.7104
- type: nauc_recall_at_5_max
value: 35.0165
- type: nauc_recall_at_5_std
value: -3.9936
- type: nauc_recall_at_5_diff1
value: 51.3274
- type: nauc_recall_at_10_max
value: 35.5228
- type: nauc_recall_at_10_std
value: -1.5428000000000002
- type: nauc_recall_at_10_diff1
value: 49.479299999999995
- type: nauc_recall_at_20_max
value: 33.7227
- type: nauc_recall_at_20_std
value: 4.9009
- type: nauc_recall_at_20_diff1
value: 45.5752
- type: nauc_recall_at_100_max
value: 35.9763
- type: nauc_recall_at_100_std
value: 19.3131
- type: nauc_recall_at_100_diff1
value: 39.911
- type: nauc_recall_at_1000_max
value: 46.325
- type: nauc_recall_at_1000_std
value: 44.9506
- type: nauc_recall_at_1000_diff1
value: 33.457100000000004
- type: nauc_precision_at_1_max
value: 44.406400000000005
- type: nauc_precision_at_1_std
value: -0.5542
- type: nauc_precision_at_1_diff1
value: 61.951
- type: nauc_precision_at_3_max
value: 37.5506
- type: nauc_precision_at_3_std
value: 11.0345
- type: nauc_precision_at_3_diff1
value: 29.8222
- type: nauc_precision_at_5_max
value: 35.2397
- type: nauc_precision_at_5_std
value: 15.389800000000001
- type: nauc_precision_at_5_diff1
value: 21.4168
- type: nauc_precision_at_10_max
value: 32.9495
- type: nauc_precision_at_10_std
value: 21.513299999999997
- type: nauc_precision_at_10_diff1
value: 11.8737
- type: nauc_precision_at_20_max
value: 29.1383
- type: nauc_precision_at_20_std
value: 27.7364
- type: nauc_precision_at_20_diff1
value: 3.9266
- type: nauc_precision_at_100_max
value: 25.405
- type: nauc_precision_at_100_std
value: 32.3313
- type: nauc_precision_at_100_diff1
value: -3.4574000000000003
- type: nauc_precision_at_1000_max
value: 19.0155
- type: nauc_precision_at_1000_std
value: 28.895
- type: nauc_precision_at_1000_diff1
value: -8.5325
- type: nauc_mrr_at_1_max
value: 44.406400000000005
- type: nauc_mrr_at_1_std
value: -0.5542
- type: nauc_mrr_at_1_diff1
value: 61.951
- type: nauc_mrr_at_3_max
value: 45.171499999999995
- type: nauc_mrr_at_3_std
value: 1.3709
- type: nauc_mrr_at_3_diff1
value: 58.655199999999994
- type: nauc_mrr_at_5_max
value: 44.770700000000005
- type: nauc_mrr_at_5_std
value: 1.4206999999999999
- type: nauc_mrr_at_5_diff1
value: 58.5418
- type: nauc_mrr_at_10_max
value: 44.6537
- type: nauc_mrr_at_10_std
value: 1.6499
- type: nauc_mrr_at_10_diff1
value: 58.305099999999996
- type: nauc_mrr_at_20_max
value: 44.5462
- type: nauc_mrr_at_20_std
value: 1.8207
- type: nauc_mrr_at_20_diff1
value: 58.2175
- type: nauc_mrr_at_100_max
value: 44.5707
- type: nauc_mrr_at_100_std
value: 1.9595999999999998
- type: nauc_mrr_at_100_diff1
value: 58.1794
- type: nauc_mrr_at_1000_max
value: 44.5849
- type: nauc_mrr_at_1000_std
value: 1.9480999999999997
- type: nauc_mrr_at_1000_diff1
value: 58.1948
- type: main_score
value: 54.269999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_3
value: 57.473
- type: ndcg_at_5
value: 60.331999999999994
- type: ndcg_at_10
value: 63.253
- type: ndcg_at_20
value: 64.934
- type: ndcg_at_100
value: 66.893
- type: ndcg_at_1000
value: 67.635
- type: map_at_1
value: 43.495
- type: map_at_3
value: 53.501
- type: map_at_5
value: 55.591
- type: map_at_10
value: 57.143
- type: map_at_20
value: 57.772999999999996
- type: map_at_100
value: 58.144
- type: map_at_1000
value: 58.18299999999999
- type: recall_at_1
value: 43.495
- type: recall_at_3
value: 61.943000000000005
- type: recall_at_5
value: 69.06
- type: recall_at_10
value: 77.506
- type: recall_at_20
value: 83.718
- type: recall_at_100
value: 93.012
- type: recall_at_1000
value: 98.20100000000001
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_3
value: 25.789
- type: precision_at_5
value: 17.718
- type: precision_at_10
value: 10.257
- type: precision_at_20
value: 5.652
- type: precision_at_100
value: 1.2930000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: mrr_at_1
value: 50.4702
- type: mrr_at_3
value: 58.4013
- type: mrr_at_5
value: 59.8495
- type: mrr_at_10
value: 60.83070000000001
- type: mrr_at_20
value: 61.2041
- type: mrr_at_100
value: 61.4112
- type: mrr_at_1000
value: 61.4322
- type: nauc_ndcg_at_1_max
value: 44.8291
- type: nauc_ndcg_at_1_std
value: -0.8412
- type: nauc_ndcg_at_1_diff1
value: 59.08560000000001
- type: nauc_ndcg_at_3_max
value: 44.248
- type: nauc_ndcg_at_3_std
value: -1.4833
- type: nauc_ndcg_at_3_diff1
value: 54.230599999999995
- type: nauc_ndcg_at_5_max
value: 44.1923
- type: nauc_ndcg_at_5_std
value: 0.4302
- type: nauc_ndcg_at_5_diff1
value: 53.488
- type: nauc_ndcg_at_10_max
value: 44.6104
- type: nauc_ndcg_at_10_std
value: 1.0661
- type: nauc_ndcg_at_10_diff1
value: 53.3798
- type: nauc_ndcg_at_20_max
value: 44.9242
- type: nauc_ndcg_at_20_std
value: 2.2277
- type: nauc_ndcg_at_20_diff1
value: 53.317400000000006
- type: nauc_ndcg_at_100_max
value: 45.7183
- type: nauc_ndcg_at_100_std
value: 2.5153
- type: nauc_ndcg_at_100_diff1
value: 53.9388
- type: nauc_ndcg_at_1000_max
value: 45.6995
- type: nauc_ndcg_at_1000_std
value: 2.1667
- type: nauc_ndcg_at_1000_diff1
value: 54.2102
- type: nauc_map_at_1_max
value: 35.5633
- type: nauc_map_at_1_std
value: -5.6190999999999995
- type: nauc_map_at_1_diff1
value: 60.361
- type: nauc_map_at_3_max
value: 41.9146
- type: nauc_map_at_3_std
value: -3.4212
- type: nauc_map_at_3_diff1
value: 56.016999999999996
- type: nauc_map_at_5_max
value: 42.4117
- type: nauc_map_at_5_std
value: -2.0291
- type: nauc_map_at_5_diff1
value: 55.3653
- type: nauc_map_at_10_max
value: 43.1409
- type: nauc_map_at_10_std
value: -1.4177
- type: nauc_map_at_10_diff1
value: 55.2121
- type: nauc_map_at_20_max
value: 43.437799999999996
- type: nauc_map_at_20_std
value: -0.8902
- type: nauc_map_at_20_diff1
value: 55.206999999999994
- type: nauc_map_at_100_max
value: 43.6537
- type: nauc_map_at_100_std
value: -0.7274
- type: nauc_map_at_100_diff1
value: 55.2408
- type: nauc_map_at_1000_max
value: 43.6736
- type: nauc_map_at_1000_std
value: -0.7106
- type: nauc_map_at_1000_diff1
value: 55.2524
- type: nauc_recall_at_1_max
value: 35.5633
- type: nauc_recall_at_1_std
value: -5.6190999999999995
- type: nauc_recall_at_1_diff1
value: 60.361
- type: nauc_recall_at_3_max
value: 40.8742
- type: nauc_recall_at_3_std
value: -2.5945
- type: nauc_recall_at_3_diff1
value: 49.3961
- type: nauc_recall_at_5_max
value: 40.7505
- type: nauc_recall_at_5_std
value: 3.3495
- type: nauc_recall_at_5_diff1
value: 45.7721
- type: nauc_recall_at_10_max
value: 41.5818
- type: nauc_recall_at_10_std
value: 6.7775
- type: nauc_recall_at_10_diff1
value: 43.3298
- type: nauc_recall_at_20_max
value: 43.7288
- type: nauc_recall_at_20_std
value: 16.328799999999998
- type: nauc_recall_at_20_diff1
value: 39.9662
- type: nauc_recall_at_100_max
value: 55.63550000000001
- type: nauc_recall_at_100_std
value: 34.4464
- type: nauc_recall_at_100_diff1
value: 39.980399999999996
- type: nauc_recall_at_1000_max
value: 77.1968
- type: nauc_recall_at_1000_std
value: 71.4214
- type: nauc_recall_at_1000_diff1
value: 38.244
- type: nauc_precision_at_1_max
value: 44.8291
- type: nauc_precision_at_1_std
value: -0.8412
- type: nauc_precision_at_1_diff1
value: 59.08560000000001
- type: nauc_precision_at_3_max
value: 43.1877
- type: nauc_precision_at_3_std
value: 7.6498
- type: nauc_precision_at_3_diff1
value: 27.5239
- type: nauc_precision_at_5_max
value: 37.533899999999996
- type: nauc_precision_at_5_std
value: 13.2708
- type: nauc_precision_at_5_diff1
value: 16.1311
- type: nauc_precision_at_10_max
value: 33.2608
- type: nauc_precision_at_10_std
value: 17.788899999999998
- type: nauc_precision_at_10_diff1
value: 5.1528
- type: nauc_precision_at_20_max
value: 29.401
- type: nauc_precision_at_20_std
value: 22.9012
- type: nauc_precision_at_20_diff1
value: -2.9066
- type: nauc_precision_at_100_max
value: 23.2408
- type: nauc_precision_at_100_std
value: 24.2959
- type: nauc_precision_at_100_diff1
value: -12.2627
- type: nauc_precision_at_1000_max
value: 20.244300000000003
- type: nauc_precision_at_1000_std
value: 25.682100000000002
- type: nauc_precision_at_1000_diff1
value: -16.4621
- type: nauc_mrr_at_1_max
value: 44.8291
- type: nauc_mrr_at_1_std
value: -0.8412
- type: nauc_mrr_at_1_diff1
value: 59.08560000000001
- type: nauc_mrr_at_3_max
value: 45.9874
- type: nauc_mrr_at_3_std
value: -0.4012
- type: nauc_mrr_at_3_diff1
value: 55.1381
- type: nauc_mrr_at_5_max
value: 46.1061
- type: nauc_mrr_at_5_std
value: 0.9194000000000001
- type: nauc_mrr_at_5_diff1
value: 54.799699999999994
- type: nauc_mrr_at_10_max
value: 46.0658
- type: nauc_mrr_at_10_std
value: 0.9317000000000001
- type: nauc_mrr_at_10_diff1
value: 54.918
- type: nauc_mrr_at_20_max
value: 46.135999999999996
- type: nauc_mrr_at_20_std
value: 1.1449
- type: nauc_mrr_at_20_diff1
value: 54.8537
- type: nauc_mrr_at_100_max
value: 46.1801
- type: nauc_mrr_at_100_std
value: 1.1052
- type: nauc_mrr_at_100_diff1
value: 54.9671
- type: nauc_mrr_at_1000_max
value: 46.169399999999996
- type: nauc_mrr_at_1000_std
value: 1.0886
- type: nauc_mrr_at_1000_diff1
value: 54.974500000000006
- type: main_score
value: 63.253
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: ndcg_at_1
value: 30.169
- type: ndcg_at_3
value: 38.712999999999994
- type: ndcg_at_5
value: 41.613
- type: ndcg_at_10
value: 44.721
- type: ndcg_at_20
value: 46.861999999999995
- type: ndcg_at_100
value: 49.725
- type: ndcg_at_1000
value: 51.321000000000005
- type: map_at_1
value: 27.534
- type: map_at_3
value: 35.543
- type: map_at_5
value: 37.289
- type: map_at_10
value: 38.7
- type: map_at_20
value: 39.338
- type: map_at_100
value: 39.785
- type: map_at_1000
value: 39.853
- type: recall_at_1
value: 27.534
- type: recall_at_3
value: 45.007999999999996
- type: recall_at_5
value: 51.888999999999996
- type: recall_at_10
value: 61.023
- type: recall_at_20
value: 69.053
- type: recall_at_100
value: 83.42399999999999
- type: recall_at_1000
value: 95.268
- type: precision_at_1
value: 30.169
- type: precision_at_3
value: 16.911
- type: precision_at_5
value: 11.932
- type: precision_at_10
value: 7.141
- type: precision_at_20
value: 4.079
- type: precision_at_100
value: 1.008
- type: precision_at_1000
value: 0.11800000000000001
- type: mrr_at_1
value: 30.1695
- type: mrr_at_3
value: 37.966100000000004
- type: mrr_at_5
value: 39.6158
- type: mrr_at_10
value: 40.8033
- type: mrr_at_20
value: 41.3435
- type: mrr_at_100
value: 41.7118
- type: mrr_at_1000
value: 41.7613
- type: nauc_ndcg_at_1_max
value: 32.2399
- type: nauc_ndcg_at_1_std
value: -1.8931
- type: nauc_ndcg_at_1_diff1
value: 47.5451
- type: nauc_ndcg_at_3_max
value: 27.8837
- type: nauc_ndcg_at_3_std
value: -5.9821
- type: nauc_ndcg_at_3_diff1
value: 39.014500000000005
- type: nauc_ndcg_at_5_max
value: 27.631299999999996
- type: nauc_ndcg_at_5_std
value: -5.3482
- type: nauc_ndcg_at_5_diff1
value: 38.4224
- type: nauc_ndcg_at_10_max
value: 28.481499999999997
- type: nauc_ndcg_at_10_std
value: -3.1228
- type: nauc_ndcg_at_10_diff1
value: 37.3671
- type: nauc_ndcg_at_20_max
value: 30.317899999999998
- type: nauc_ndcg_at_20_std
value: -1.5421
- type: nauc_ndcg_at_20_diff1
value: 36.929
- type: nauc_ndcg_at_100_max
value: 30.622300000000003
- type: nauc_ndcg_at_100_std
value: -1.1266
- type: nauc_ndcg_at_100_diff1
value: 37.844100000000005
- type: nauc_ndcg_at_1000_max
value: 30.2567
- type: nauc_ndcg_at_1000_std
value: -1.6924000000000001
- type: nauc_ndcg_at_1000_diff1
value: 38.3512
- type: nauc_map_at_1_max
value: 28.498800000000003
- type: nauc_map_at_1_std
value: -4.840400000000001
- type: nauc_map_at_1_diff1
value: 49.2647
- type: nauc_map_at_3_max
value: 27.3679
- type: nauc_map_at_3_std
value: -5.953399999999999
- type: nauc_map_at_3_diff1
value: 41.458800000000004
- type: nauc_map_at_5_max
value: 27.501900000000003
- type: nauc_map_at_5_std
value: -5.5226
- type: nauc_map_at_5_diff1
value: 41.2074
- type: nauc_map_at_10_max
value: 28.1719
- type: nauc_map_at_10_std
value: -4.3525
- type: nauc_map_at_10_diff1
value: 40.820299999999996
- type: nauc_map_at_20_max
value: 28.67
- type: nauc_map_at_20_std
value: -3.9206
- type: nauc_map_at_20_diff1
value: 40.7258
- type: nauc_map_at_100_max
value: 28.765
- type: nauc_map_at_100_std
value: -3.8413999999999997
- type: nauc_map_at_100_diff1
value: 40.876200000000004
- type: nauc_map_at_1000_max
value: 28.7672
- type: nauc_map_at_1000_std
value: -3.8369
- type: nauc_map_at_1000_diff1
value: 40.8998
- type: nauc_recall_at_1_max
value: 28.498800000000003
- type: nauc_recall_at_1_std
value: -4.840400000000001
- type: nauc_recall_at_1_diff1
value: 49.2647
- type: nauc_recall_at_3_max
value: 24.6035
- type: nauc_recall_at_3_std
value: -7.5891
- type: nauc_recall_at_3_diff1
value: 31.5757
- type: nauc_recall_at_5_max
value: 24.369
- type: nauc_recall_at_5_std
value: -6.1809
- type: nauc_recall_at_5_diff1
value: 29.604000000000003
- type: nauc_recall_at_10_max
value: 26.289299999999997
- type: nauc_recall_at_10_std
value: 0.3042
- type: nauc_recall_at_10_diff1
value: 25.5857
- type: nauc_recall_at_20_max
value: 34.4737
- type: nauc_recall_at_20_std
value: 7.437199999999999
- type: nauc_recall_at_20_diff1
value: 22.174599999999998
- type: nauc_recall_at_100_max
value: 38.6347
- type: nauc_recall_at_100_std
value: 16.145300000000002
- type: nauc_recall_at_100_diff1
value: 22.1917
- type: nauc_recall_at_1000_max
value: 39.3542
- type: nauc_recall_at_1000_std
value: 22.6306
- type: nauc_recall_at_1000_diff1
value: 12.8224
- type: nauc_precision_at_1_max
value: 32.2399
- type: nauc_precision_at_1_std
value: -1.8931
- type: nauc_precision_at_1_diff1
value: 47.5451
- type: nauc_precision_at_3_max
value: 30.2123
- type: nauc_precision_at_3_std
value: -3.3314999999999997
- type: nauc_precision_at_3_diff1
value: 27.9644
- type: nauc_precision_at_5_max
value: 29.670800000000003
- type: nauc_precision_at_5_std
value: -1.5582
- type: nauc_precision_at_5_diff1
value: 25.608399999999996
- type: nauc_precision_at_10_max
value: 31.0615
- type: nauc_precision_at_10_std
value: 6.0033
- type: nauc_precision_at_10_diff1
value: 18.8733
- type: nauc_precision_at_20_max
value: 34.6328
- type: nauc_precision_at_20_std
value: 13.439799999999998
- type: nauc_precision_at_20_diff1
value: 13.048599999999999
- type: nauc_precision_at_100_max
value: 29.820200000000003
- type: nauc_precision_at_100_std
value: 17.5458
- type: nauc_precision_at_100_diff1
value: 5.4185
- type: nauc_precision_at_1000_max
value: 18.1553
- type: nauc_precision_at_1000_std
value: 13.908999999999999
- type: nauc_precision_at_1000_diff1
value: -6.718300000000001
- type: nauc_mrr_at_1_max
value: 32.2399
- type: nauc_mrr_at_1_std
value: -1.8931
- type: nauc_mrr_at_1_diff1
value: 47.5451
- type: nauc_mrr_at_3_max
value: 30.0502
- type: nauc_mrr_at_3_std
value: -3.7917
- type: nauc_mrr_at_3_diff1
value: 40.4491
- type: nauc_mrr_at_5_max
value: 29.9223
- type: nauc_mrr_at_5_std
value: -3.7439
- type: nauc_mrr_at_5_diff1
value: 40.2078
- type: nauc_mrr_at_10_max
value: 30.0837
- type: nauc_mrr_at_10_std
value: -3.1279000000000003
- type: nauc_mrr_at_10_diff1
value: 39.8604
- type: nauc_mrr_at_20_max
value: 30.586600000000004
- type: nauc_mrr_at_20_std
value: -2.6394
- type: nauc_mrr_at_20_diff1
value: 39.7155
- type: nauc_mrr_at_100_max
value: 30.6013
- type: nauc_mrr_at_100_std
value: -2.6131
- type: nauc_mrr_at_100_diff1
value: 39.8572
- type: nauc_mrr_at_1000_max
value: 30.588700000000003
- type: nauc_mrr_at_1000_std
value: -2.6162
- type: nauc_mrr_at_1000_diff1
value: 39.8847
- type: main_score
value: 44.721
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: ndcg_at_1
value: 25.995
- type: ndcg_at_3
value: 31.527
- type: ndcg_at_5
value: 34.498
- type: ndcg_at_10
value: 37.421
- type: ndcg_at_20
value: 40.19
- type: ndcg_at_100
value: 43.662
- type: ndcg_at_1000
value: 46.04
- type: map_at_1
value: 20.624000000000002
- type: map_at_3
value: 27.714
- type: map_at_5
value: 29.746
- type: map_at_10
value: 31.156
- type: map_at_20
value: 32.018
- type: map_at_100
value: 32.647
- type: map_at_1000
value: 32.751999999999995
- type: recall_at_1
value: 20.624000000000002
- type: recall_at_3
value: 35.064
- type: recall_at_5
value: 42.561
- type: recall_at_10
value: 51.294
- type: recall_at_20
value: 61.35699999999999
- type: recall_at_100
value: 77.41
- type: recall_at_1000
value: 94.292
- type: precision_at_1
value: 25.995
- type: precision_at_3
value: 15.547
- type: precision_at_5
value: 11.616999999999999
- type: precision_at_10
value: 7.152
- type: precision_at_20
value: 4.335
- type: precision_at_100
value: 1.1769999999999998
- type: precision_at_1000
value: 0.149
- type: mrr_at_1
value: 25.995
- type: mrr_at_3
value: 33.354099999999995
- type: mrr_at_5
value: 35.1389
- type: mrr_at_10
value: 36.3085
- type: mrr_at_20
value: 36.9771
- type: mrr_at_100
value: 37.3263
- type: mrr_at_1000
value: 37.3819
- type: nauc_ndcg_at_1_max
value: 16.8793
- type: nauc_ndcg_at_1_std
value: 1.1622000000000001
- type: nauc_ndcg_at_1_diff1
value: 41.695
- type: nauc_ndcg_at_3_max
value: 15.7033
- type: nauc_ndcg_at_3_std
value: 1.6629999999999998
- type: nauc_ndcg_at_3_diff1
value: 36.4614
- type: nauc_ndcg_at_5_max
value: 15.5037
- type: nauc_ndcg_at_5_std
value: 2.9783
- type: nauc_ndcg_at_5_diff1
value: 35.1895
- type: nauc_ndcg_at_10_max
value: 14.0983
- type: nauc_ndcg_at_10_std
value: 3.2645
- type: nauc_ndcg_at_10_diff1
value: 34.6149
- type: nauc_ndcg_at_20_max
value: 15.567400000000001
- type: nauc_ndcg_at_20_std
value: 4.6046
- type: nauc_ndcg_at_20_diff1
value: 35.486200000000004
- type: nauc_ndcg_at_100_max
value: 16.6548
- type: nauc_ndcg_at_100_std
value: 6.6399
- type: nauc_ndcg_at_100_diff1
value: 35.268899999999995
- type: nauc_ndcg_at_1000_max
value: 16.7889
- type: nauc_ndcg_at_1000_std
value: 5.9258
- type: nauc_ndcg_at_1000_diff1
value: 36.0024
- type: nauc_map_at_1_max
value: 13.874400000000001
- type: nauc_map_at_1_std
value: 3.305
- type: nauc_map_at_1_diff1
value: 41.7572
- type: nauc_map_at_3_max
value: 14.519699999999998
- type: nauc_map_at_3_std
value: 2.3379
- type: nauc_map_at_3_diff1
value: 37.8774
- type: nauc_map_at_5_max
value: 14.702399999999999
- type: nauc_map_at_5_std
value: 2.7134
- type: nauc_map_at_5_diff1
value: 37.0712
- type: nauc_map_at_10_max
value: 14.2346
- type: nauc_map_at_10_std
value: 2.9902
- type: nauc_map_at_10_diff1
value: 36.7886
- type: nauc_map_at_20_max
value: 14.7155
- type: nauc_map_at_20_std
value: 3.4323
- type: nauc_map_at_20_diff1
value: 37.0342
- type: nauc_map_at_100_max
value: 15.015600000000001
- type: nauc_map_at_100_std
value: 3.8381
- type: nauc_map_at_100_diff1
value: 37.0107
- type: nauc_map_at_1000_max
value: 15.020800000000001
- type: nauc_map_at_1000_std
value: 3.8089999999999997
- type: nauc_map_at_1000_diff1
value: 37.0464
- type: nauc_recall_at_1_max
value: 13.874400000000001
- type: nauc_recall_at_1_std
value: 3.305
- type: nauc_recall_at_1_diff1
value: 41.7572
- type: nauc_recall_at_3_max
value: 14.191
- type: nauc_recall_at_3_std
value: 2.8631
- type: nauc_recall_at_3_diff1
value: 32.0865
- type: nauc_recall_at_5_max
value: 13.8317
- type: nauc_recall_at_5_std
value: 4.0751
- type: nauc_recall_at_5_diff1
value: 29.2578
- type: nauc_recall_at_10_max
value: 9.8311
- type: nauc_recall_at_10_std
value: 3.787
- type: nauc_recall_at_10_diff1
value: 27.0678
- type: nauc_recall_at_20_max
value: 14.6648
- type: nauc_recall_at_20_std
value: 8.6709
- type: nauc_recall_at_20_diff1
value: 28.928199999999997
- type: nauc_recall_at_100_max
value: 20.9186
- type: nauc_recall_at_100_std
value: 23.6257
- type: nauc_recall_at_100_diff1
value: 25.6988
- type: nauc_recall_at_1000_max
value: 38.6366
- type: nauc_recall_at_1000_std
value: 47.4022
- type: nauc_recall_at_1000_diff1
value: 29.6164
- type: nauc_precision_at_1_max
value: 16.8793
- type: nauc_precision_at_1_std
value: 1.1622000000000001
- type: nauc_precision_at_1_diff1
value: 41.695
- type: nauc_precision_at_3_max
value: 19.1054
- type: nauc_precision_at_3_std
value: -0.3239
- type: nauc_precision_at_3_diff1
value: 29.140700000000002
- type: nauc_precision_at_5_max
value: 18.3369
- type: nauc_precision_at_5_std
value: 2.2429
- type: nauc_precision_at_5_diff1
value: 23.5603
- type: nauc_precision_at_10_max
value: 14.048
- type: nauc_precision_at_10_std
value: 2.5635000000000003
- type: nauc_precision_at_10_diff1
value: 18.6389
- type: nauc_precision_at_20_max
value: 15.1054
- type: nauc_precision_at_20_std
value: 5.4473
- type: nauc_precision_at_20_diff1
value: 16.980999999999998
- type: nauc_precision_at_100_max
value: 12.1794
- type: nauc_precision_at_100_std
value: 7.657
- type: nauc_precision_at_100_diff1
value: 5.9291
- type: nauc_precision_at_1000_max
value: 7.6541999999999994
- type: nauc_precision_at_1000_std
value: -1.8911
- type: nauc_precision_at_1000_diff1
value: -0.042499999999999996
- type: nauc_mrr_at_1_max
value: 16.8793
- type: nauc_mrr_at_1_std
value: 1.1622000000000001
- type: nauc_mrr_at_1_diff1
value: 41.695
- type: nauc_mrr_at_3_max
value: 16.8712
- type: nauc_mrr_at_3_std
value: 1.9463000000000001
- type: nauc_mrr_at_3_diff1
value: 36.6252
- type: nauc_mrr_at_5_max
value: 16.9044
- type: nauc_mrr_at_5_std
value: 2.4106
- type: nauc_mrr_at_5_diff1
value: 36.2224
- type: nauc_mrr_at_10_max
value: 16.4922
- type: nauc_mrr_at_10_std
value: 2.0573
- type: nauc_mrr_at_10_diff1
value: 36.4031
- type: nauc_mrr_at_20_max
value: 16.9114
- type: nauc_mrr_at_20_std
value: 2.3496
- type: nauc_mrr_at_20_diff1
value: 36.592999999999996
- type: nauc_mrr_at_100_max
value: 16.9761
- type: nauc_mrr_at_100_std
value: 2.6144
- type: nauc_mrr_at_100_diff1
value: 36.5791
- type: nauc_mrr_at_1000_max
value: 16.97
- type: nauc_mrr_at_1000_std
value: 2.6048999999999998
- type: nauc_mrr_at_1000_diff1
value: 36.5997
- type: main_score
value: 37.421
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: ndcg_at_1
value: 43.118
- type: ndcg_at_3
value: 49.36
- type: ndcg_at_5
value: 52.339
- type: ndcg_at_10
value: 55.001
- type: ndcg_at_20
value: 57.08
- type: ndcg_at_100
value: 60.156
- type: ndcg_at_1000
value: 61.409000000000006
- type: map_at_1
value: 34.422999999999995
- type: map_at_3
value: 44.241
- type: map_at_5
value: 46.639
- type: map_at_10
value: 48.187999999999995
- type: map_at_20
value: 48.943999999999996
- type: map_at_100
value: 49.589
- type: map_at_1000
value: 49.683
- type: recall_at_1
value: 34.422999999999995
- type: recall_at_3
value: 53.005
- type: recall_at_5
value: 60.787
- type: recall_at_10
value: 68.947
- type: recall_at_20
value: 76.11099999999999
- type: recall_at_100
value: 90.093
- type: recall_at_1000
value: 97.616
- type: precision_at_1
value: 43.118
- type: precision_at_3
value: 24.03
- type: precision_at_5
value: 17.267
- type: precision_at_10
value: 10.221
- type: precision_at_20
value: 5.89
- type: precision_at_100
value: 1.503
- type: precision_at_1000
value: 0.178
- type: mrr_at_1
value: 43.1184
- type: mrr_at_3
value: 51.4277
- type: mrr_at_5
value: 53.054199999999994
- type: mrr_at_10
value: 54.0983
- type: mrr_at_20
value: 54.548300000000005
- type: mrr_at_100
value: 54.8195
- type: mrr_at_1000
value: 54.8432
- type: nauc_ndcg_at_1_max
value: 40.0497
- type: nauc_ndcg_at_1_std
value: -2.9893
- type: nauc_ndcg_at_1_diff1
value: 52.6498
- type: nauc_ndcg_at_3_max
value: 35.5227
- type: nauc_ndcg_at_3_std
value: -1.754
- type: nauc_ndcg_at_3_diff1
value: 47.9744
- type: nauc_ndcg_at_5_max
value: 36.2245
- type: nauc_ndcg_at_5_std
value: -1.2267000000000001
- type: nauc_ndcg_at_5_diff1
value: 47.6131
- type: nauc_ndcg_at_10_max
value: 36.8382
- type: nauc_ndcg_at_10_std
value: 0.0044
- type: nauc_ndcg_at_10_diff1
value: 48.5768
- type: nauc_ndcg_at_20_max
value: 36.5259
- type: nauc_ndcg_at_20_std
value: 0.6298
- type: nauc_ndcg_at_20_diff1
value: 48.3686
- type: nauc_ndcg_at_100_max
value: 37.8334
- type: nauc_ndcg_at_100_std
value: 1.4694
- type: nauc_ndcg_at_100_diff1
value: 48.629
- type: nauc_ndcg_at_1000_max
value: 38.1066
- type: nauc_ndcg_at_1000_std
value: 1.2034
- type: nauc_ndcg_at_1000_diff1
value: 48.7834
- type: nauc_map_at_1_max
value: 31.5692
- type: nauc_map_at_1_std
value: -5.4256
- type: nauc_map_at_1_diff1
value: 53.0706
- type: nauc_map_at_3_max
value: 33.3182
- type: nauc_map_at_3_std
value: -3.4004
- type: nauc_map_at_3_diff1
value: 49.742799999999995
- type: nauc_map_at_5_max
value: 34.745
- type: nauc_map_at_5_std
value: -2.6823
- type: nauc_map_at_5_diff1
value: 49.3108
- type: nauc_map_at_10_max
value: 35.6475
- type: nauc_map_at_10_std
value: -1.7001
- type: nauc_map_at_10_diff1
value: 49.7542
- type: nauc_map_at_20_max
value: 35.6477
- type: nauc_map_at_20_std
value: -1.4723
- type: nauc_map_at_20_diff1
value: 49.6954
- type: nauc_map_at_100_max
value: 35.9688
- type: nauc_map_at_100_std
value: -1.2590999999999999
- type: nauc_map_at_100_diff1
value: 49.661
- type: nauc_map_at_1000_max
value: 35.9962
- type: nauc_map_at_1000_std
value: -1.2567
- type: nauc_map_at_1000_diff1
value: 49.6742
- type: nauc_recall_at_1_max
value: 31.5692
- type: nauc_recall_at_1_std
value: -5.4256
- type: nauc_recall_at_1_diff1
value: 53.0706
- type: nauc_recall_at_3_max
value: 30.078500000000002
- type: nauc_recall_at_3_std
value: -1.6661
- type: nauc_recall_at_3_diff1
value: 43.605199999999996
- type: nauc_recall_at_5_max
value: 32.1419
- type: nauc_recall_at_5_std
value: -0.0245
- type: nauc_recall_at_5_diff1
value: 41.667100000000005
- type: nauc_recall_at_10_max
value: 33.3931
- type: nauc_recall_at_10_std
value: 4.3266
- type: nauc_recall_at_10_diff1
value: 43.1197
- type: nauc_recall_at_20_max
value: 29.959799999999998
- type: nauc_recall_at_20_std
value: 7.4322
- type: nauc_recall_at_20_diff1
value: 40.589999999999996
- type: nauc_recall_at_100_max
value: 35.565200000000004
- type: nauc_recall_at_100_std
value: 20.2683
- type: nauc_recall_at_100_diff1
value: 38.6228
- type: nauc_recall_at_1000_max
value: 57.227
- type: nauc_recall_at_1000_std
value: 45.2524
- type: nauc_recall_at_1000_diff1
value: 34.657700000000006
- type: nauc_precision_at_1_max
value: 40.0497
- type: nauc_precision_at_1_std
value: -2.9893
- type: nauc_precision_at_1_diff1
value: 52.6498
- type: nauc_precision_at_3_max
value: 33.8853
- type: nauc_precision_at_3_std
value: 5.0939
- type: nauc_precision_at_3_diff1
value: 27.423199999999998
- type: nauc_precision_at_5_max
value: 31.651
- type: nauc_precision_at_5_std
value: 7.5684000000000005
- type: nauc_precision_at_5_diff1
value: 17.874100000000002
- type: nauc_precision_at_10_max
value: 29.8653
- type: nauc_precision_at_10_std
value: 12.945699999999999
- type: nauc_precision_at_10_diff1
value: 11.091800000000001
- type: nauc_precision_at_20_max
value: 22.145300000000002
- type: nauc_precision_at_20_std
value: 14.2574
- type: nauc_precision_at_20_diff1
value: 2.8937999999999997
- type: nauc_precision_at_100_max
value: 13.1369
- type: nauc_precision_at_100_std
value: 14.579600000000001
- type: nauc_precision_at_100_diff1
value: -9.4206
- type: nauc_precision_at_1000_max
value: 3.6643000000000003
- type: nauc_precision_at_1000_std
value: 10.0171
- type: nauc_precision_at_1000_diff1
value: -14.5572
- type: nauc_mrr_at_1_max
value: 40.0497
- type: nauc_mrr_at_1_std
value: -2.9893
- type: nauc_mrr_at_1_diff1
value: 52.6498
- type: nauc_mrr_at_3_max
value: 39.2932
- type: nauc_mrr_at_3_std
value: -1.2786
- type: nauc_mrr_at_3_diff1
value: 48.8373
- type: nauc_mrr_at_5_max
value: 39.495999999999995
- type: nauc_mrr_at_5_std
value: -1.0756
- type: nauc_mrr_at_5_diff1
value: 48.6192
- type: nauc_mrr_at_10_max
value: 39.617200000000004
- type: nauc_mrr_at_10_std
value: -0.5789
- type: nauc_mrr_at_10_diff1
value: 49.0029
- type: nauc_mrr_at_20_max
value: 39.5561
- type: nauc_mrr_at_20_std
value: -0.5488
- type: nauc_mrr_at_20_diff1
value: 48.9423
- type: nauc_mrr_at_100_max
value: 39.5619
- type: nauc_mrr_at_100_std
value: -0.6031
- type: nauc_mrr_at_100_diff1
value: 49.0095
- type: nauc_mrr_at_1000_max
value: 39.5703
- type: nauc_mrr_at_1000_std
value: -0.618
- type: nauc_mrr_at_1000_diff1
value: 49.0181
- type: main_score
value: 55.001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: ndcg_at_1
value: 40.068
- type: ndcg_at_3
value: 44.096999999999994
- type: ndcg_at_5
value: 47.516999999999996
- type: ndcg_at_10
value: 50.446999999999996
- type: ndcg_at_20
value: 52.519000000000005
- type: ndcg_at_100
value: 55.833999999999996
- type: ndcg_at_1000
value: 57.43000000000001
- type: map_at_1
value: 31.428
- type: map_at_3
value: 39.353
- type: map_at_5
value: 42.094
- type: map_at_10
value: 43.725
- type: map_at_20
value: 44.471
- type: map_at_100
value: 45.117000000000004
- type: map_at_1000
value: 45.211
- type: recall_at_1
value: 31.428
- type: recall_at_3
value: 46.575
- type: recall_at_5
value: 55.515
- type: recall_at_10
value: 64.13799999999999
- type: recall_at_20
value: 71.279
- type: recall_at_100
value: 86.762
- type: recall_at_1000
value: 97.09100000000001
- type: precision_at_1
value: 40.068
- type: precision_at_3
value: 21.195
- type: precision_at_5
value: 15.822
- type: precision_at_10
value: 9.521
- type: precision_at_20
value: 5.497
- type: precision_at_100
value: 1.402
- type: precision_at_1000
value: 0.16999999999999998
- type: mrr_at_1
value: 40.0685
- type: mrr_at_3
value: 47.0129
- type: mrr_at_5
value: 48.856500000000004
- type: mrr_at_10
value: 49.9012
- type: mrr_at_20
value: 50.3737
- type: mrr_at_100
value: 50.7177
- type: mrr_at_1000
value: 50.756299999999996
- type: nauc_ndcg_at_1_max
value: 48.9094
- type: nauc_ndcg_at_1_std
value: 2.946
- type: nauc_ndcg_at_1_diff1
value: 50.792899999999996
- type: nauc_ndcg_at_3_max
value: 44.0954
- type: nauc_ndcg_at_3_std
value: 4.5854
- type: nauc_ndcg_at_3_diff1
value: 44.4164
- type: nauc_ndcg_at_5_max
value: 44.8318
- type: nauc_ndcg_at_5_std
value: 6.72
- type: nauc_ndcg_at_5_diff1
value: 44.5299
- type: nauc_ndcg_at_10_max
value: 45.9722
- type: nauc_ndcg_at_10_std
value: 7.829
- type: nauc_ndcg_at_10_diff1
value: 43.7881
- type: nauc_ndcg_at_20_max
value: 46.2939
- type: nauc_ndcg_at_20_std
value: 8.7342
- type: nauc_ndcg_at_20_diff1
value: 44.252900000000004
- type: nauc_ndcg_at_100_max
value: 46.6306
- type: nauc_ndcg_at_100_std
value: 9.4862
- type: nauc_ndcg_at_100_diff1
value: 44.168099999999995
- type: nauc_ndcg_at_1000_max
value: 46.679500000000004
- type: nauc_ndcg_at_1000_std
value: 8.5999
- type: nauc_ndcg_at_1000_diff1
value: 44.654500000000006
- type: nauc_map_at_1_max
value: 41.802499999999995
- type: nauc_map_at_1_std
value: -1.5448
- type: nauc_map_at_1_diff1
value: 51.3189
- type: nauc_map_at_3_max
value: 42.2779
- type: nauc_map_at_3_std
value: 2.4886
- type: nauc_map_at_3_diff1
value: 45.8305
- type: nauc_map_at_5_max
value: 43.6601
- type: nauc_map_at_5_std
value: 4.7118
- type: nauc_map_at_5_diff1
value: 45.8307
- type: nauc_map_at_10_max
value: 44.725300000000004
- type: nauc_map_at_10_std
value: 5.578799999999999
- type: nauc_map_at_10_diff1
value: 45.5339
- type: nauc_map_at_20_max
value: 44.979
- type: nauc_map_at_20_std
value: 5.9147
- type: nauc_map_at_20_diff1
value: 45.6175
- type: nauc_map_at_100_max
value: 45.202799999999996
- type: nauc_map_at_100_std
value: 6.1206000000000005
- type: nauc_map_at_100_diff1
value: 45.692899999999995
- type: nauc_map_at_1000_max
value: 45.2034
- type: nauc_map_at_1000_std
value: 6.097
- type: nauc_map_at_1000_diff1
value: 45.7149
- type: nauc_recall_at_1_max
value: 41.802499999999995
- type: nauc_recall_at_1_std
value: -1.5448
- type: nauc_recall_at_1_diff1
value: 51.3189
- type: nauc_recall_at_3_max
value: 38.0537
- type: nauc_recall_at_3_std
value: 4.067
- type: nauc_recall_at_3_diff1
value: 38.8726
- type: nauc_recall_at_5_max
value: 39.3573
- type: nauc_recall_at_5_std
value: 9.5685
- type: nauc_recall_at_5_diff1
value: 37.5925
- type: nauc_recall_at_10_max
value: 42.2936
- type: nauc_recall_at_10_std
value: 13.8155
- type: nauc_recall_at_10_diff1
value: 34.5176
- type: nauc_recall_at_20_max
value: 43.5257
- type: nauc_recall_at_20_std
value: 19.2427
- type: nauc_recall_at_20_diff1
value: 35.8971
- type: nauc_recall_at_100_max
value: 44.2485
- type: nauc_recall_at_100_std
value: 34.4593
- type: nauc_recall_at_100_diff1
value: 30.2192
- type: nauc_recall_at_1000_max
value: 56.7136
- type: nauc_recall_at_1000_std
value: 61.5111
- type: nauc_recall_at_1000_diff1
value: 32.9767
- type: nauc_precision_at_1_max
value: 48.9094
- type: nauc_precision_at_1_std
value: 2.946
- type: nauc_precision_at_1_diff1
value: 50.792899999999996
- type: nauc_precision_at_3_max
value: 42.5079
- type: nauc_precision_at_3_std
value: 12.2541
- type: nauc_precision_at_3_diff1
value: 28.8997
- type: nauc_precision_at_5_max
value: 38.399699999999996
- type: nauc_precision_at_5_std
value: 17.0376
- type: nauc_precision_at_5_diff1
value: 21.3869
- type: nauc_precision_at_10_max
value: 34.8194
- type: nauc_precision_at_10_std
value: 18.3221
- type: nauc_precision_at_10_diff1
value: 12.9642
- type: nauc_precision_at_20_max
value: 28.1161
- type: nauc_precision_at_20_std
value: 17.7852
- type: nauc_precision_at_20_diff1
value: 7.283199999999999
- type: nauc_precision_at_100_max
value: 15.3001
- type: nauc_precision_at_100_std
value: 12.8588
- type: nauc_precision_at_100_diff1
value: -3.2960000000000003
- type: nauc_precision_at_1000_max
value: 1.5663
- type: nauc_precision_at_1000_std
value: 1.7748
- type: nauc_precision_at_1000_diff1
value: -8.8656
- type: nauc_mrr_at_1_max
value: 48.9094
- type: nauc_mrr_at_1_std
value: 2.946
- type: nauc_mrr_at_1_diff1
value: 50.792899999999996
- type: nauc_mrr_at_3_max
value: 47.6835
- type: nauc_mrr_at_3_std
value: 5.3593
- type: nauc_mrr_at_3_diff1
value: 46.109
- type: nauc_mrr_at_5_max
value: 47.570299999999996
- type: nauc_mrr_at_5_std
value: 5.777299999999999
- type: nauc_mrr_at_5_diff1
value: 45.8975
- type: nauc_mrr_at_10_max
value: 47.7796
- type: nauc_mrr_at_10_std
value: 6.0919
- type: nauc_mrr_at_10_diff1
value: 45.593
- type: nauc_mrr_at_20_max
value: 47.783
- type: nauc_mrr_at_20_std
value: 6.1481
- type: nauc_mrr_at_20_diff1
value: 45.818999999999996
- type: nauc_mrr_at_100_max
value: 47.7483
- type: nauc_mrr_at_100_std
value: 6.1742
- type: nauc_mrr_at_100_diff1
value: 45.8742
- type: nauc_mrr_at_1000_max
value: 47.7586
- type: nauc_mrr_at_1000_std
value: 6.1544
- type: nauc_mrr_at_1000_diff1
value: 45.894
- type: main_score
value: 50.446999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 47.938333333333325
- type: ndcg_at_10
value: 47.938333333333325
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: ndcg_at_1
value: 32.362
- type: ndcg_at_3
value: 36.498000000000005
- type: ndcg_at_5
value: 38.729
- type: ndcg_at_10
value: 41.567
- type: ndcg_at_20
value: 43.247
- type: ndcg_at_100
value: 46.714
- type: ndcg_at_1000
value: 48.858000000000004
- type: map_at_1
value: 28.124
- type: map_at_3
value: 33.67
- type: map_at_5
value: 35.15
- type: map_at_10
value: 36.481
- type: map_at_20
value: 36.998
- type: map_at_100
value: 37.564
- type: map_at_1000
value: 37.651
- type: recall_at_1
value: 28.124
- type: recall_at_3
value: 39.623999999999995
- type: recall_at_5
value: 45.373999999999995
- type: recall_at_10
value: 53.761
- type: recall_at_20
value: 60.141999999999996
- type: recall_at_100
value: 77.145
- type: recall_at_1000
value: 92.809
- type: precision_at_1
value: 32.362
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.951
- type: precision_at_10
value: 6.718
- type: precision_at_20
value: 3.781
- type: precision_at_100
value: 1.011
- type: precision_at_1000
value: 0.126
- type: mrr_at_1
value: 32.362
- type: mrr_at_3
value: 37.5256
- type: mrr_at_5
value: 38.813900000000004
- type: mrr_at_10
value: 39.9955
- type: mrr_at_20
value: 40.4099
- type: mrr_at_100
value: 40.8508
- type: mrr_at_1000
value: 40.9125
- type: nauc_ndcg_at_1_max
value: 45.6295
- type: nauc_ndcg_at_1_std
value: -0.9893000000000001
- type: nauc_ndcg_at_1_diff1
value: 60.5302
- type: nauc_ndcg_at_3_max
value: 46.4109
- type: nauc_ndcg_at_3_std
value: 1.2275
- type: nauc_ndcg_at_3_diff1
value: 55.38999999999999
- type: nauc_ndcg_at_5_max
value: 44.9415
- type: nauc_ndcg_at_5_std
value: 1.221
- type: nauc_ndcg_at_5_diff1
value: 53.0919
- type: nauc_ndcg_at_10_max
value: 44.3985
- type: nauc_ndcg_at_10_std
value: 3.0568999999999997
- type: nauc_ndcg_at_10_diff1
value: 52.9713
- type: nauc_ndcg_at_20_max
value: 43.802400000000006
- type: nauc_ndcg_at_20_std
value: 3.5957000000000003
- type: nauc_ndcg_at_20_diff1
value: 51.765499999999996
- type: nauc_ndcg_at_100_max
value: 45.0729
- type: nauc_ndcg_at_100_std
value: 4.8267
- type: nauc_ndcg_at_100_diff1
value: 51.7238
- type: nauc_ndcg_at_1000_max
value: 45.4394
- type: nauc_ndcg_at_1000_std
value: 4.9554
- type: nauc_ndcg_at_1000_diff1
value: 52.7547
- type: nauc_map_at_1_max
value: 41.9895
- type: nauc_map_at_1_std
value: -3.7581999999999995
- type: nauc_map_at_1_diff1
value: 60.44310000000001
- type: nauc_map_at_3_max
value: 45.1699
- type: nauc_map_at_3_std
value: -0.23839999999999997
- type: nauc_map_at_3_diff1
value: 56.36600000000001
- type: nauc_map_at_5_max
value: 44.6674
- type: nauc_map_at_5_std
value: 0.2771
- type: nauc_map_at_5_diff1
value: 55.1832
- type: nauc_map_at_10_max
value: 44.5561
- type: nauc_map_at_10_std
value: 1.1383
- type: nauc_map_at_10_diff1
value: 55.199
- type: nauc_map_at_20_max
value: 44.426100000000005
- type: nauc_map_at_20_std
value: 1.2463
- type: nauc_map_at_20_diff1
value: 54.8601
- type: nauc_map_at_100_max
value: 44.6656
- type: nauc_map_at_100_std
value: 1.4344000000000001
- type: nauc_map_at_100_diff1
value: 54.8404
- type: nauc_map_at_1000_max
value: 44.6885
- type: nauc_map_at_1000_std
value: 1.4637
- type: nauc_map_at_1000_diff1
value: 54.879
- type: nauc_recall_at_1_max
value: 41.9895
- type: nauc_recall_at_1_std
value: -3.7581999999999995
- type: nauc_recall_at_1_diff1
value: 60.44310000000001
- type: nauc_recall_at_3_max
value: 44.267
- type: nauc_recall_at_3_std
value: 1.9900999999999998
- type: nauc_recall_at_3_diff1
value: 50.2301
- type: nauc_recall_at_5_max
value: 41.025800000000004
- type: nauc_recall_at_5_std
value: 1.7608
- type: nauc_recall_at_5_diff1
value: 44.4415
- type: nauc_recall_at_10_max
value: 39.1839
- type: nauc_recall_at_10_std
value: 7.1819
- type: nauc_recall_at_10_diff1
value: 43.2659
- type: nauc_recall_at_20_max
value: 35.619099999999996
- type: nauc_recall_at_20_std
value: 9.3642
- type: nauc_recall_at_20_diff1
value: 37.6444
- type: nauc_recall_at_100_max
value: 41.6917
- type: nauc_recall_at_100_std
value: 21.5639
- type: nauc_recall_at_100_diff1
value: 31.6295
- type: nauc_recall_at_1000_max
value: 48.0423
- type: nauc_recall_at_1000_std
value: 54.2675
- type: nauc_recall_at_1000_diff1
value: 29.726399999999998
- type: nauc_precision_at_1_max
value: 45.6295
- type: nauc_precision_at_1_std
value: -0.9893000000000001
- type: nauc_precision_at_1_diff1
value: 60.5302
- type: nauc_precision_at_3_max
value: 49.6365
- type: nauc_precision_at_3_std
value: 7.6746
- type: nauc_precision_at_3_diff1
value: 48.4452
- type: nauc_precision_at_5_max
value: 45.906400000000005
- type: nauc_precision_at_5_std
value: 10.616399999999999
- type: nauc_precision_at_5_diff1
value: 41.393299999999996
- type: nauc_precision_at_10_max
value: 40.9508
- type: nauc_precision_at_10_std
value: 14.360100000000001
- type: nauc_precision_at_10_diff1
value: 35.6927
- type: nauc_precision_at_20_max
value: 36.5696
- type: nauc_precision_at_20_std
value: 15.232499999999998
- type: nauc_precision_at_20_diff1
value: 28.5742
- type: nauc_precision_at_100_max
value: 29.4906
- type: nauc_precision_at_100_std
value: 18.8562
- type: nauc_precision_at_100_diff1
value: 14.689
- type: nauc_precision_at_1000_max
value: 16.2971
- type: nauc_precision_at_1000_std
value: 14.349
- type: nauc_precision_at_1000_diff1
value: 4.972300000000001
- type: nauc_mrr_at_1_max
value: 45.6295
- type: nauc_mrr_at_1_std
value: -0.9893000000000001
- type: nauc_mrr_at_1_diff1
value: 60.5302
- type: nauc_mrr_at_3_max
value: 47.340900000000005
- type: nauc_mrr_at_3_std
value: 1.9640000000000002
- type: nauc_mrr_at_3_diff1
value: 56.7908
- type: nauc_mrr_at_5_max
value: 46.8151
- type: nauc_mrr_at_5_std
value: 2.0004
- type: nauc_mrr_at_5_diff1
value: 55.49230000000001
- type: nauc_mrr_at_10_max
value: 46.5702
- type: nauc_mrr_at_10_std
value: 2.5755
- type: nauc_mrr_at_10_diff1
value: 55.3437
- type: nauc_mrr_at_20_max
value: 46.3775
- type: nauc_mrr_at_20_std
value: 2.7186
- type: nauc_mrr_at_20_diff1
value: 55.084
- type: nauc_mrr_at_100_max
value: 46.428599999999996
- type: nauc_mrr_at_100_std
value: 2.7332
- type: nauc_mrr_at_100_diff1
value: 55.088499999999996
- type: nauc_mrr_at_1000_max
value: 46.443200000000004
- type: nauc_mrr_at_1000_std
value: 2.7476000000000003
- type: nauc_mrr_at_1000_diff1
value: 55.1161
- type: main_score
value: 41.567
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: ndcg_at_1
value: 27.22
- type: ndcg_at_3
value: 31.52
- type: ndcg_at_5
value: 33.964
- type: ndcg_at_10
value: 36.581
- type: ndcg_at_20
value: 38.665
- type: ndcg_at_100
value: 42.268
- type: ndcg_at_1000
value: 44.748
- type: map_at_1
value: 22.017
- type: map_at_3
value: 27.927999999999997
- type: map_at_5
value: 29.668
- type: map_at_10
value: 31.002999999999997
- type: map_at_20
value: 31.685999999999996
- type: map_at_100
value: 32.29
- type: map_at_1000
value: 32.413
- type: recall_at_1
value: 22.017
- type: recall_at_3
value: 34.009
- type: recall_at_5
value: 40.416000000000004
- type: recall_at_10
value: 48.415
- type: recall_at_20
value: 56.038
- type: recall_at_100
value: 73.851
- type: recall_at_1000
value: 91.03999999999999
- type: precision_at_1
value: 27.22
- type: precision_at_3
value: 15.084
- type: precision_at_5
value: 11.094
- type: precision_at_10
value: 6.800000000000001
- type: precision_at_20
value: 4.054
- type: precision_at_100
value: 1.134
- type: precision_at_1000
value: 0.153
- type: mrr_at_1
value: 27.219500000000004
- type: mrr_at_3
value: 33.132600000000004
- type: mrr_at_5
value: 34.694900000000004
- type: mrr_at_10
value: 35.7418
- type: mrr_at_20
value: 36.273300000000006
- type: mrr_at_100
value: 36.6689
- type: mrr_at_1000
value: 36.7336
- type: nauc_ndcg_at_1_max
value: 26.5332
- type: nauc_ndcg_at_1_std
value: 6.7063999999999995
- type: nauc_ndcg_at_1_diff1
value: 41.7063
- type: nauc_ndcg_at_3_max
value: 26.294
- type: nauc_ndcg_at_3_std
value: 6.9978
- type: nauc_ndcg_at_3_diff1
value: 36.8322
- type: nauc_ndcg_at_5_max
value: 27.6001
- type: nauc_ndcg_at_5_std
value: 8.1389
- type: nauc_ndcg_at_5_diff1
value: 36.5894
- type: nauc_ndcg_at_10_max
value: 27.5265
- type: nauc_ndcg_at_10_std
value: 8.5525
- type: nauc_ndcg_at_10_diff1
value: 36.2451
- type: nauc_ndcg_at_20_max
value: 27.2368
- type: nauc_ndcg_at_20_std
value: 9.4667
- type: nauc_ndcg_at_20_diff1
value: 35.553000000000004
- type: nauc_ndcg_at_100_max
value: 27.6221
- type: nauc_ndcg_at_100_std
value: 10.9896
- type: nauc_ndcg_at_100_diff1
value: 35.010799999999996
- type: nauc_ndcg_at_1000_max
value: 27.823700000000002
- type: nauc_ndcg_at_1000_std
value: 10.5646
- type: nauc_ndcg_at_1000_diff1
value: 35.4455
- type: nauc_map_at_1_max
value: 24.0311
- type: nauc_map_at_1_std
value: 4.9532
- type: nauc_map_at_1_diff1
value: 41.0051
- type: nauc_map_at_3_max
value: 25.3489
- type: nauc_map_at_3_std
value: 6.229
- type: nauc_map_at_3_diff1
value: 37.5031
- type: nauc_map_at_5_max
value: 26.3791
- type: nauc_map_at_5_std
value: 7.036499999999999
- type: nauc_map_at_5_diff1
value: 37.3731
- type: nauc_map_at_10_max
value: 26.4979
- type: nauc_map_at_10_std
value: 7.3984
- type: nauc_map_at_10_diff1
value: 37.213499999999996
- type: nauc_map_at_20_max
value: 26.505000000000003
- type: nauc_map_at_20_std
value: 7.749300000000001
- type: nauc_map_at_20_diff1
value: 37.0079
- type: nauc_map_at_100_max
value: 26.625700000000002
- type: nauc_map_at_100_std
value: 8.0921
- type: nauc_map_at_100_diff1
value: 36.9709
- type: nauc_map_at_1000_max
value: 26.6505
- type: nauc_map_at_1000_std
value: 8.093599999999999
- type: nauc_map_at_1000_diff1
value: 37.0068
- type: nauc_recall_at_1_max
value: 24.0311
- type: nauc_recall_at_1_std
value: 4.9532
- type: nauc_recall_at_1_diff1
value: 41.0051
- type: nauc_recall_at_3_max
value: 24.7976
- type: nauc_recall_at_3_std
value: 6.1747000000000005
- type: nauc_recall_at_3_diff1
value: 33.226299999999995
- type: nauc_recall_at_5_max
value: 27.539599999999997
- type: nauc_recall_at_5_std
value: 8.8065
- type: nauc_recall_at_5_diff1
value: 32.2612
- type: nauc_recall_at_10_max
value: 26.8189
- type: nauc_recall_at_10_std
value: 9.7864
- type: nauc_recall_at_10_diff1
value: 30.2757
- type: nauc_recall_at_20_max
value: 25.183699999999998
- type: nauc_recall_at_20_std
value: 12.916
- type: nauc_recall_at_20_diff1
value: 27.092100000000002
- type: nauc_recall_at_100_max
value: 26.9612
- type: nauc_recall_at_100_std
value: 24.1506
- type: nauc_recall_at_100_diff1
value: 20.9473
- type: nauc_recall_at_1000_max
value: 34.2476
- type: nauc_recall_at_1000_std
value: 35.335499999999996
- type: nauc_recall_at_1000_diff1
value: 13.6745
- type: nauc_precision_at_1_max
value: 26.5332
- type: nauc_precision_at_1_std
value: 6.7063999999999995
- type: nauc_precision_at_1_diff1
value: 41.7063
- type: nauc_precision_at_3_max
value: 27.1377
- type: nauc_precision_at_3_std
value: 9.1475
- type: nauc_precision_at_3_diff1
value: 30.6856
- type: nauc_precision_at_5_max
value: 28.0165
- type: nauc_precision_at_5_std
value: 11.5134
- type: nauc_precision_at_5_diff1
value: 27.454600000000003
- type: nauc_precision_at_10_max
value: 25.393700000000003
- type: nauc_precision_at_10_std
value: 13.100000000000001
- type: nauc_precision_at_10_diff1
value: 23.296400000000002
- type: nauc_precision_at_20_max
value: 22.0302
- type: nauc_precision_at_20_std
value: 15.6886
- type: nauc_precision_at_20_diff1
value: 18.0761
- type: nauc_precision_at_100_max
value: 18.754
- type: nauc_precision_at_100_std
value: 18.049599999999998
- type: nauc_precision_at_100_diff1
value: 10.578999999999999
- type: nauc_precision_at_1000_max
value: 15.4445
- type: nauc_precision_at_1000_std
value: 10.5797
- type: nauc_precision_at_1000_diff1
value: 8.6555
- type: nauc_mrr_at_1_max
value: 26.5332
- type: nauc_mrr_at_1_std
value: 6.7063999999999995
- type: nauc_mrr_at_1_diff1
value: 41.7063
- type: nauc_mrr_at_3_max
value: 27.048
- type: nauc_mrr_at_3_std
value: 7.0742
- type: nauc_mrr_at_3_diff1
value: 38.388
- type: nauc_mrr_at_5_max
value: 27.6961
- type: nauc_mrr_at_5_std
value: 7.7979
- type: nauc_mrr_at_5_diff1
value: 38.2328
- type: nauc_mrr_at_10_max
value: 27.6906
- type: nauc_mrr_at_10_std
value: 7.8747
- type: nauc_mrr_at_10_diff1
value: 38.061099999999996
- type: nauc_mrr_at_20_max
value: 27.5596
- type: nauc_mrr_at_20_std
value: 8.1191
- type: nauc_mrr_at_20_diff1
value: 37.8976
- type: nauc_mrr_at_100_max
value: 27.596500000000002
- type: nauc_mrr_at_100_std
value: 8.2534
- type: nauc_mrr_at_100_diff1
value: 37.9023
- type: nauc_mrr_at_1000_max
value: 27.6032
- type: nauc_mrr_at_1000_std
value: 8.2402
- type: nauc_mrr_at_1000_diff1
value: 37.9149
- type: main_score
value: 36.581
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: ndcg_at_1
value: 37.687
- type: ndcg_at_3
value: 43.254
- type: ndcg_at_5
value: 45.518
- type: ndcg_at_10
value: 48.57
- type: ndcg_at_20
value: 50.870000000000005
- type: ndcg_at_100
value: 53.964999999999996
- type: ndcg_at_1000
value: 55.704
- type: map_at_1
value: 31.507
- type: map_at_3
value: 39.26
- type: map_at_5
value: 41.006
- type: map_at_10
value: 42.559999999999995
- type: map_at_20
value: 43.330999999999996
- type: map_at_100
value: 43.852000000000004
- type: map_at_1000
value: 43.939
- type: recall_at_1
value: 31.507
- type: recall_at_3
value: 47.019
- type: recall_at_5
value: 53.105999999999995
- type: recall_at_10
value: 62.061
- type: recall_at_20
value: 70.206
- type: recall_at_100
value: 84.87400000000001
- type: recall_at_1000
value: 96.627
- type: precision_at_1
value: 37.687
- type: precision_at_3
value: 20.055999999999997
- type: precision_at_5
value: 13.899000000000001
- type: precision_at_10
value: 8.34
- type: precision_at_20
value: 4.841
- type: precision_at_100
value: 1.236
- type: precision_at_1000
value: 0.148
- type: mrr_at_1
value: 37.6866
- type: mrr_at_3
value: 44.760600000000004
- type: mrr_at_5
value: 46.1505
- type: mrr_at_10
value: 47.2768
- type: mrr_at_20
value: 47.8296
- type: mrr_at_100
value: 48.147099999999995
- type: mrr_at_1000
value: 48.1922
- type: nauc_ndcg_at_1_max
value: 44.6488
- type: nauc_ndcg_at_1_std
value: 4.1984
- type: nauc_ndcg_at_1_diff1
value: 56.6913
- type: nauc_ndcg_at_3_max
value: 41.7574
- type: nauc_ndcg_at_3_std
value: 2.3933
- type: nauc_ndcg_at_3_diff1
value: 49.532900000000005
- type: nauc_ndcg_at_5_max
value: 40.976099999999995
- type: nauc_ndcg_at_5_std
value: 1.6297
- type: nauc_ndcg_at_5_diff1
value: 49.6749
- type: nauc_ndcg_at_10_max
value: 41.645199999999996
- type: nauc_ndcg_at_10_std
value: 3.6906000000000003
- type: nauc_ndcg_at_10_diff1
value: 48.6944
- type: nauc_ndcg_at_20_max
value: 42.5723
- type: nauc_ndcg_at_20_std
value: 5.2003
- type: nauc_ndcg_at_20_diff1
value: 49.2712
- type: nauc_ndcg_at_100_max
value: 43.1712
- type: nauc_ndcg_at_100_std
value: 5.9054
- type: nauc_ndcg_at_100_diff1
value: 49.4614
- type: nauc_ndcg_at_1000_max
value: 43.051899999999996
- type: nauc_ndcg_at_1000_std
value: 5.2268
- type: nauc_ndcg_at_1000_diff1
value: 49.9268
- type: nauc_map_at_1_max
value: 40.5466
- type: nauc_map_at_1_std
value: 2.5801000000000003
- type: nauc_map_at_1_diff1
value: 57.2554
- type: nauc_map_at_3_max
value: 40.7917
- type: nauc_map_at_3_std
value: 1.926
- type: nauc_map_at_3_diff1
value: 51.5164
- type: nauc_map_at_5_max
value: 40.8638
- type: nauc_map_at_5_std
value: 1.9499
- type: nauc_map_at_5_diff1
value: 51.4491
- type: nauc_map_at_10_max
value: 41.4658
- type: nauc_map_at_10_std
value: 2.8266
- type: nauc_map_at_10_diff1
value: 50.9612
- type: nauc_map_at_20_max
value: 41.894999999999996
- type: nauc_map_at_20_std
value: 3.3461
- type: nauc_map_at_20_diff1
value: 51.0836
- type: nauc_map_at_100_max
value: 42.0161
- type: nauc_map_at_100_std
value: 3.4995
- type: nauc_map_at_100_diff1
value: 51.083
- type: nauc_map_at_1000_max
value: 42.0041
- type: nauc_map_at_1000_std
value: 3.4733
- type: nauc_map_at_1000_diff1
value: 51.1013
- type: nauc_recall_at_1_max
value: 40.5466
- type: nauc_recall_at_1_std
value: 2.5801000000000003
- type: nauc_recall_at_1_diff1
value: 57.2554
- type: nauc_recall_at_3_max
value: 37.2587
- type: nauc_recall_at_3_std
value: 0.3079
- type: nauc_recall_at_3_diff1
value: 43.9748
- type: nauc_recall_at_5_max
value: 35.9724
- type: nauc_recall_at_5_std
value: -0.4043
- type: nauc_recall_at_5_diff1
value: 43.2259
- type: nauc_recall_at_10_max
value: 36.5054
- type: nauc_recall_at_10_std
value: 6.050599999999999
- type: nauc_recall_at_10_diff1
value: 38.3286
- type: nauc_recall_at_20_max
value: 39.2692
- type: nauc_recall_at_20_std
value: 12.623599999999998
- type: nauc_recall_at_20_diff1
value: 39.7523
- type: nauc_recall_at_100_max
value: 44.7179
- type: nauc_recall_at_100_std
value: 24.0593
- type: nauc_recall_at_100_diff1
value: 37.7336
- type: nauc_recall_at_1000_max
value: 53.7754
- type: nauc_recall_at_1000_std
value: 45.2282
- type: nauc_recall_at_1000_diff1
value: 37.3474
- type: nauc_precision_at_1_max
value: 44.6488
- type: nauc_precision_at_1_std
value: 4.1984
- type: nauc_precision_at_1_diff1
value: 56.6913
- type: nauc_precision_at_3_max
value: 39.659499999999994
- type: nauc_precision_at_3_std
value: 4.1345
- type: nauc_precision_at_3_diff1
value: 34.713300000000004
- type: nauc_precision_at_5_max
value: 36.207
- type: nauc_precision_at_5_std
value: 3.4388
- type: nauc_precision_at_5_diff1
value: 29.0749
- type: nauc_precision_at_10_max
value: 32.9216
- type: nauc_precision_at_10_std
value: 7.0937
- type: nauc_precision_at_10_diff1
value: 18.8157
- type: nauc_precision_at_20_max
value: 29.9111
- type: nauc_precision_at_20_std
value: 11.0234
- type: nauc_precision_at_20_diff1
value: 12.5669
- type: nauc_precision_at_100_max
value: 16.6883
- type: nauc_precision_at_100_std
value: 11.183300000000001
- type: nauc_precision_at_100_diff1
value: -1.4782
- type: nauc_precision_at_1000_max
value: 1.7108999999999999
- type: nauc_precision_at_1000_std
value: 2.5909999999999997
- type: nauc_precision_at_1000_diff1
value: -11.336300000000001
- type: nauc_mrr_at_1_max
value: 44.6488
- type: nauc_mrr_at_1_std
value: 4.1984
- type: nauc_mrr_at_1_diff1
value: 56.6913
- type: nauc_mrr_at_3_max
value: 43.7515
- type: nauc_mrr_at_3_std
value: 3.3159
- type: nauc_mrr_at_3_diff1
value: 51.294399999999996
- type: nauc_mrr_at_5_max
value: 43.5602
- type: nauc_mrr_at_5_std
value: 2.8770000000000002
- type: nauc_mrr_at_5_diff1
value: 51.3629
- type: nauc_mrr_at_10_max
value: 43.743900000000004
- type: nauc_mrr_at_10_std
value: 3.7014
- type: nauc_mrr_at_10_diff1
value: 50.9399
- type: nauc_mrr_at_20_max
value: 43.736000000000004
- type: nauc_mrr_at_20_std
value: 3.8751
- type: nauc_mrr_at_20_diff1
value: 51.056400000000004
- type: nauc_mrr_at_100_max
value: 43.749
- type: nauc_mrr_at_100_std
value: 3.9109
- type: nauc_mrr_at_100_diff1
value: 51.12989999999999
- type: nauc_mrr_at_1000_max
value: 43.7543
- type: nauc_mrr_at_1000_std
value: 3.9046
- type: nauc_mrr_at_1000_diff1
value: 51.144999999999996
- type: main_score
value: 48.57
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: ndcg_at_1
value: 35.375
- type: ndcg_at_3
value: 41.804
- type: ndcg_at_5
value: 43.954
- type: ndcg_at_10
value: 46.46
- type: ndcg_at_20
value: 48.881
- type: ndcg_at_100
value: 52.425
- type: ndcg_at_1000
value: 54.373000000000005
- type: map_at_1
value: 29.341
- type: map_at_3
value: 37.120999999999995
- type: map_at_5
value: 38.734
- type: map_at_10
value: 40.158
- type: map_at_20
value: 41.14
- type: map_at_100
value: 42.037
- type: map_at_1000
value: 42.271
- type: recall_at_1
value: 29.341
- type: recall_at_3
value: 44.542
- type: recall_at_5
value: 50.193
- type: recall_at_10
value: 57.894
- type: recall_at_20
value: 66.841
- type: recall_at_100
value: 84.343
- type: recall_at_1000
value: 96.643
- type: precision_at_1
value: 35.375
- type: precision_at_3
value: 20.026
- type: precision_at_5
value: 14.466000000000001
- type: precision_at_10
value: 9.110999999999999
- type: precision_at_20
value: 5.899
- type: precision_at_100
value: 1.8499999999999999
- type: precision_at_1000
value: 0.255
- type: mrr_at_1
value: 35.375499999999995
- type: mrr_at_3
value: 42.1937
- type: mrr_at_5
value: 43.834
- type: mrr_at_10
value: 44.7625
- type: mrr_at_20
value: 45.3485
- type: mrr_at_100
value: 45.7111
- type: mrr_at_1000
value: 45.7566
- type: nauc_ndcg_at_1_max
value: 23.5952
- type: nauc_ndcg_at_1_std
value: 5.8244
- type: nauc_ndcg_at_1_diff1
value: 53.12
- type: nauc_ndcg_at_3_max
value: 21.2669
- type: nauc_ndcg_at_3_std
value: 8.3372
- type: nauc_ndcg_at_3_diff1
value: 47.216
- type: nauc_ndcg_at_5_max
value: 21.0726
- type: nauc_ndcg_at_5_std
value: 10.0045
- type: nauc_ndcg_at_5_diff1
value: 47.5922
- type: nauc_ndcg_at_10_max
value: 21.035999999999998
- type: nauc_ndcg_at_10_std
value: 8.706999999999999
- type: nauc_ndcg_at_10_diff1
value: 47.7902
- type: nauc_ndcg_at_20_max
value: 21.0952
- type: nauc_ndcg_at_20_std
value: 9.762
- type: nauc_ndcg_at_20_diff1
value: 47.6359
- type: nauc_ndcg_at_100_max
value: 22.5778
- type: nauc_ndcg_at_100_std
value: 11.4197
- type: nauc_ndcg_at_100_diff1
value: 48.450500000000005
- type: nauc_ndcg_at_1000_max
value: 22.6857
- type: nauc_ndcg_at_1000_std
value: 10.741100000000001
- type: nauc_ndcg_at_1000_diff1
value: 47.876400000000004
- type: nauc_map_at_1_max
value: 16.4685
- type: nauc_map_at_1_std
value: 0.6203000000000001
- type: nauc_map_at_1_diff1
value: 55.691
- type: nauc_map_at_3_max
value: 19.2659
- type: nauc_map_at_3_std
value: 3.5485999999999995
- type: nauc_map_at_3_diff1
value: 50.8056
- type: nauc_map_at_5_max
value: 19.784299999999998
- type: nauc_map_at_5_std
value: 4.7257
- type: nauc_map_at_5_diff1
value: 50.6115
- type: nauc_map_at_10_max
value: 20.1772
- type: nauc_map_at_10_std
value: 4.8205
- type: nauc_map_at_10_diff1
value: 50.545399999999994
- type: nauc_map_at_20_max
value: 20.4281
- type: nauc_map_at_20_std
value: 5.945799999999999
- type: nauc_map_at_20_diff1
value: 50.2247
- type: nauc_map_at_100_max
value: 20.697599999999998
- type: nauc_map_at_100_std
value: 7.3290999999999995
- type: nauc_map_at_100_diff1
value: 49.9734
- type: nauc_map_at_1000_max
value: 20.686
- type: nauc_map_at_1000_std
value: 7.8218
- type: nauc_map_at_1000_diff1
value: 49.7498
- type: nauc_recall_at_1_max
value: 16.4685
- type: nauc_recall_at_1_std
value: 0.6203000000000001
- type: nauc_recall_at_1_diff1
value: 55.691
- type: nauc_recall_at_3_max
value: 17.5274
- type: nauc_recall_at_3_std
value: 5.2619
- type: nauc_recall_at_3_diff1
value: 44.4629
- type: nauc_recall_at_5_max
value: 18.3596
- type: nauc_recall_at_5_std
value: 9.3709
- type: nauc_recall_at_5_diff1
value: 44.800200000000004
- type: nauc_recall_at_10_max
value: 17.515
- type: nauc_recall_at_10_std
value: 7.080400000000001
- type: nauc_recall_at_10_diff1
value: 43.181799999999996
- type: nauc_recall_at_20_max
value: 17.605999999999998
- type: nauc_recall_at_20_std
value: 11.745700000000001
- type: nauc_recall_at_20_diff1
value: 41.4216
- type: nauc_recall_at_100_max
value: 27.6642
- type: nauc_recall_at_100_std
value: 31.652
- type: nauc_recall_at_100_diff1
value: 43.5986
- type: nauc_recall_at_1000_max
value: 48.9638
- type: nauc_recall_at_1000_std
value: 51.2076
- type: nauc_recall_at_1000_diff1
value: 29.369899999999998
- type: nauc_precision_at_1_max
value: 23.5952
- type: nauc_precision_at_1_std
value: 5.8244
- type: nauc_precision_at_1_diff1
value: 53.12
- type: nauc_precision_at_3_max
value: 23.935100000000002
- type: nauc_precision_at_3_std
value: 16.5914
- type: nauc_precision_at_3_diff1
value: 28.4103
- type: nauc_precision_at_5_max
value: 22.6905
- type: nauc_precision_at_5_std
value: 23.6646
- type: nauc_precision_at_5_diff1
value: 19.524
- type: nauc_precision_at_10_max
value: 22.1447
- type: nauc_precision_at_10_std
value: 25.2899
- type: nauc_precision_at_10_diff1
value: 10.0271
- type: nauc_precision_at_20_max
value: 16.7291
- type: nauc_precision_at_20_std
value: 34.3311
- type: nauc_precision_at_20_diff1
value: -3.1237999999999997
- type: nauc_precision_at_100_max
value: 9.431000000000001
- type: nauc_precision_at_100_std
value: 46.103300000000004
- type: nauc_precision_at_100_diff1
value: -17.1284
- type: nauc_precision_at_1000_max
value: 4.9976
- type: nauc_precision_at_1000_std
value: 47.3664
- type: nauc_precision_at_1000_diff1
value: -24.0582
- type: nauc_mrr_at_1_max
value: 23.5952
- type: nauc_mrr_at_1_std
value: 5.8244
- type: nauc_mrr_at_1_diff1
value: 53.12
- type: nauc_mrr_at_3_max
value: 23.0702
- type: nauc_mrr_at_3_std
value: 8.0486
- type: nauc_mrr_at_3_diff1
value: 48.1321
- type: nauc_mrr_at_5_max
value: 23.2691
- type: nauc_mrr_at_5_std
value: 9.1019
- type: nauc_mrr_at_5_diff1
value: 47.9119
- type: nauc_mrr_at_10_max
value: 23.2747
- type: nauc_mrr_at_10_std
value: 8.8027
- type: nauc_mrr_at_10_diff1
value: 48.0293
- type: nauc_mrr_at_20_max
value: 23.1632
- type: nauc_mrr_at_20_std
value: 8.8043
- type: nauc_mrr_at_20_diff1
value: 47.950900000000004
- type: nauc_mrr_at_100_max
value: 23.2738
- type: nauc_mrr_at_100_std
value: 8.963899999999999
- type: nauc_mrr_at_100_diff1
value: 48.0468
- type: nauc_mrr_at_1000_max
value: 23.2584
- type: nauc_mrr_at_1000_std
value: 8.9357
- type: nauc_mrr_at_1000_diff1
value: 48.0548
- type: main_score
value: 46.46
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: ndcg_at_1
value: 28.835
- type: ndcg_at_3
value: 34.629
- type: ndcg_at_5
value: 36.906
- type: ndcg_at_10
value: 39.722
- type: ndcg_at_20
value: 41.324
- type: ndcg_at_100
value: 44.856
- type: ndcg_at_1000
value: 47.174
- type: map_at_1
value: 26.001
- type: map_at_3
value: 31.916
- type: map_at_5
value: 33.303
- type: map_at_10
value: 34.594
- type: map_at_20
value: 35.081
- type: map_at_100
value: 35.592
- type: map_at_1000
value: 35.693000000000005
- type: recall_at_1
value: 26.001
- type: recall_at_3
value: 39.144
- type: recall_at_5
value: 44.572
- type: recall_at_10
value: 52.856
- type: recall_at_20
value: 58.78
- type: recall_at_100
value: 76.828
- type: recall_at_1000
value: 93.93
- type: precision_at_1
value: 28.835
- type: precision_at_3
value: 14.726
- type: precision_at_5
value: 10.24
- type: precision_at_10
value: 6.192
- type: precision_at_20
value: 3.4939999999999998
- type: precision_at_100
value: 0.946
- type: precision_at_1000
value: 0.126
- type: mrr_at_1
value: 28.835499999999996
- type: mrr_at_3
value: 35.0277
- type: mrr_at_5
value: 36.3401
- type: mrr_at_10
value: 37.359500000000004
- type: mrr_at_20
value: 37.7675
- type: mrr_at_100
value: 38.2184
- type: mrr_at_1000
value: 38.287
- type: nauc_ndcg_at_1_max
value: 25.1156
- type: nauc_ndcg_at_1_std
value: 5.8615
- type: nauc_ndcg_at_1_diff1
value: 49.8287
- type: nauc_ndcg_at_3_max
value: 25.605299999999996
- type: nauc_ndcg_at_3_std
value: 6.9969
- type: nauc_ndcg_at_3_diff1
value: 45.1011
- type: nauc_ndcg_at_5_max
value: 25.2231
- type: nauc_ndcg_at_5_std
value: 6.674099999999999
- type: nauc_ndcg_at_5_diff1
value: 42.7723
- type: nauc_ndcg_at_10_max
value: 24.404899999999998
- type: nauc_ndcg_at_10_std
value: 6.3734
- type: nauc_ndcg_at_10_diff1
value: 41.9908
- type: nauc_ndcg_at_20_max
value: 23.9259
- type: nauc_ndcg_at_20_std
value: 6.749099999999999
- type: nauc_ndcg_at_20_diff1
value: 41.2996
- type: nauc_ndcg_at_100_max
value: 24.0414
- type: nauc_ndcg_at_100_std
value: 9.694700000000001
- type: nauc_ndcg_at_100_diff1
value: 41.0428
- type: nauc_ndcg_at_1000_max
value: 24.8501
- type: nauc_ndcg_at_1000_std
value: 8.7988
- type: nauc_ndcg_at_1000_diff1
value: 42.290499999999994
- type: nauc_map_at_1_max
value: 24.038999999999998
- type: nauc_map_at_1_std
value: 4.3147
- type: nauc_map_at_1_diff1
value: 51.1896
- type: nauc_map_at_3_max
value: 24.8903
- type: nauc_map_at_3_std
value: 5.8555
- type: nauc_map_at_3_diff1
value: 46.6276
- type: nauc_map_at_5_max
value: 24.9985
- type: nauc_map_at_5_std
value: 5.8602
- type: nauc_map_at_5_diff1
value: 45.2278
- type: nauc_map_at_10_max
value: 24.7287
- type: nauc_map_at_10_std
value: 5.7734
- type: nauc_map_at_10_diff1
value: 44.8438
- type: nauc_map_at_20_max
value: 24.698
- type: nauc_map_at_20_std
value: 5.86
- type: nauc_map_at_20_diff1
value: 44.635200000000005
- type: nauc_map_at_100_max
value: 24.726100000000002
- type: nauc_map_at_100_std
value: 6.307
- type: nauc_map_at_100_diff1
value: 44.587700000000005
- type: nauc_map_at_1000_max
value: 24.7303
- type: nauc_map_at_1000_std
value: 6.271400000000001
- type: nauc_map_at_1000_diff1
value: 44.6122
- type: nauc_recall_at_1_max
value: 24.038999999999998
- type: nauc_recall_at_1_std
value: 4.3147
- type: nauc_recall_at_1_diff1
value: 51.1896
- type: nauc_recall_at_3_max
value: 24.6154
- type: nauc_recall_at_3_std
value: 7.6124
- type: nauc_recall_at_3_diff1
value: 41.077999999999996
- type: nauc_recall_at_5_max
value: 23.8232
- type: nauc_recall_at_5_std
value: 7.3541
- type: nauc_recall_at_5_diff1
value: 35.070499999999996
- type: nauc_recall_at_10_max
value: 21.084500000000002
- type: nauc_recall_at_10_std
value: 6.592199999999999
- type: nauc_recall_at_10_diff1
value: 32.1365
- type: nauc_recall_at_20_max
value: 18.6663
- type: nauc_recall_at_20_std
value: 8.094899999999999
- type: nauc_recall_at_20_diff1
value: 29.124100000000002
- type: nauc_recall_at_100_max
value: 16.6248
- type: nauc_recall_at_100_std
value: 29.871199999999998
- type: nauc_recall_at_100_diff1
value: 22.16
- type: nauc_recall_at_1000_max
value: 39.4327
- type: nauc_recall_at_1000_std
value: 48.1674
- type: nauc_recall_at_1000_diff1
value: 35.6011
- type: nauc_precision_at_1_max
value: 25.1156
- type: nauc_precision_at_1_std
value: 5.8615
- type: nauc_precision_at_1_diff1
value: 49.8287
- type: nauc_precision_at_3_max
value: 27.020100000000003
- type: nauc_precision_at_3_std
value: 9.533800000000001
- type: nauc_precision_at_3_diff1
value: 37.6314
- type: nauc_precision_at_5_max
value: 26.1386
- type: nauc_precision_at_5_std
value: 8.0008
- type: nauc_precision_at_5_diff1
value: 29.5792
- type: nauc_precision_at_10_max
value: 21.7676
- type: nauc_precision_at_10_std
value: 8.2431
- type: nauc_precision_at_10_diff1
value: 22.1029
- type: nauc_precision_at_20_max
value: 19.3305
- type: nauc_precision_at_20_std
value: 8.7156
- type: nauc_precision_at_20_diff1
value: 16.4788
- type: nauc_precision_at_100_max
value: 11.6811
- type: nauc_precision_at_100_std
value: 20.169
- type: nauc_precision_at_100_diff1
value: -0.8449
- type: nauc_precision_at_1000_max
value: -5.1109
- type: nauc_precision_at_1000_std
value: 3.2081999999999997
- type: nauc_precision_at_1000_diff1
value: -23.8473
- type: nauc_mrr_at_1_max
value: 25.1156
- type: nauc_mrr_at_1_std
value: 5.8615
- type: nauc_mrr_at_1_diff1
value: 49.8287
- type: nauc_mrr_at_3_max
value: 26.009999999999998
- type: nauc_mrr_at_3_std
value: 7.795100000000001
- type: nauc_mrr_at_3_diff1
value: 46.0223
- type: nauc_mrr_at_5_max
value: 25.8328
- type: nauc_mrr_at_5_std
value: 7.4602
- type: nauc_mrr_at_5_diff1
value: 44.6238
- type: nauc_mrr_at_10_max
value: 25.5241
- type: nauc_mrr_at_10_std
value: 7.071199999999999
- type: nauc_mrr_at_10_diff1
value: 44.3654
- type: nauc_mrr_at_20_max
value: 25.287100000000002
- type: nauc_mrr_at_20_std
value: 7.194100000000001
- type: nauc_mrr_at_20_diff1
value: 44.2009
- type: nauc_mrr_at_100_max
value: 25.3365
- type: nauc_mrr_at_100_std
value: 7.513400000000001
- type: nauc_mrr_at_100_diff1
value: 44.2267
- type: nauc_mrr_at_1000_max
value: 25.361
- type: nauc_mrr_at_1000_std
value: 7.492500000000001
- type: nauc_mrr_at_1000_diff1
value: 44.2745
- type: main_score
value: 39.722
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: ndcg_at_1
value: 47.818
- type: ndcg_at_3
value: 41.184
- type: ndcg_at_5
value: 43.733
- type: ndcg_at_10
value: 48.439
- type: ndcg_at_20
value: 51.55800000000001
- type: ndcg_at_100
value: 55.562
- type: ndcg_at_1000
value: 57.885
- type: map_at_1
value: 20.846
- type: map_at_3
value: 30.997999999999998
- type: map_at_5
value: 34.524
- type: map_at_10
value: 37.427
- type: map_at_20
value: 38.761
- type: map_at_100
value: 39.753
- type: map_at_1000
value: 39.910000000000004
- type: recall_at_1
value: 20.846
- type: recall_at_3
value: 36.958999999999996
- type: recall_at_5
value: 45.079
- type: recall_at_10
value: 55.295
- type: recall_at_20
value: 63.815
- type: recall_at_100
value: 78.456
- type: recall_at_1000
value: 91.106
- type: precision_at_1
value: 47.818
- type: precision_at_3
value: 31.422
- type: precision_at_5
value: 23.909
- type: precision_at_10
value: 15.14
- type: precision_at_20
value: 8.977
- type: precision_at_100
value: 2.294
- type: precision_at_1000
value: 0.27399999999999997
- type: mrr_at_1
value: 47.8176
- type: mrr_at_3
value: 58.165
- type: mrr_at_5
value: 59.764399999999995
- type: mrr_at_10
value: 60.663999999999994
- type: mrr_at_20
value: 61.0341
- type: mrr_at_100
value: 61.15540000000001
- type: mrr_at_1000
value: 61.168
- type: nauc_ndcg_at_1_max
value: 42.0574
- type: nauc_ndcg_at_1_std
value: 15.0412
- type: nauc_ndcg_at_1_diff1
value: 38.8542
- type: nauc_ndcg_at_3_max
value: 45.5116
- type: nauc_ndcg_at_3_std
value: 17.8474
- type: nauc_ndcg_at_3_diff1
value: 30.4299
- type: nauc_ndcg_at_5_max
value: 47.5315
- type: nauc_ndcg_at_5_std
value: 21.577199999999998
- type: nauc_ndcg_at_5_diff1
value: 29.9137
- type: nauc_ndcg_at_10_max
value: 49.6024
- type: nauc_ndcg_at_10_std
value: 26.744400000000002
- type: nauc_ndcg_at_10_diff1
value: 28.2074
- type: nauc_ndcg_at_20_max
value: 48.8087
- type: nauc_ndcg_at_20_std
value: 27.815800000000003
- type: nauc_ndcg_at_20_diff1
value: 27.843200000000003
- type: nauc_ndcg_at_100_max
value: 49.2657
- type: nauc_ndcg_at_100_std
value: 28.958299999999998
- type: nauc_ndcg_at_100_diff1
value: 28.2285
- type: nauc_ndcg_at_1000_max
value: 49.8
- type: nauc_ndcg_at_1000_std
value: 28.3402
- type: nauc_ndcg_at_1000_diff1
value: 29.149399999999996
- type: nauc_map_at_1_max
value: 43.4717
- type: nauc_map_at_1_std
value: 7.5933
- type: nauc_map_at_1_diff1
value: 43.8334
- type: nauc_map_at_3_max
value: 45.1549
- type: nauc_map_at_3_std
value: 13.7322
- type: nauc_map_at_3_diff1
value: 32.893499999999996
- type: nauc_map_at_5_max
value: 46.0483
- type: nauc_map_at_5_std
value: 17.3988
- type: nauc_map_at_5_diff1
value: 30.7728
- type: nauc_map_at_10_max
value: 47.5898
- type: nauc_map_at_10_std
value: 21.0419
- type: nauc_map_at_10_diff1
value: 29.6458
- type: nauc_map_at_20_max
value: 47.362500000000004
- type: nauc_map_at_20_std
value: 21.7003
- type: nauc_map_at_20_diff1
value: 29.510199999999998
- type: nauc_map_at_100_max
value: 47.4886
- type: nauc_map_at_100_std
value: 22.0129
- type: nauc_map_at_100_diff1
value: 29.5581
- type: nauc_map_at_1000_max
value: 47.523700000000005
- type: nauc_map_at_1000_std
value: 22.0011
- type: nauc_map_at_1000_diff1
value: 29.5994
- type: nauc_recall_at_1_max
value: 43.4717
- type: nauc_recall_at_1_std
value: 7.5933
- type: nauc_recall_at_1_diff1
value: 43.8334
- type: nauc_recall_at_3_max
value: 44.0599
- type: nauc_recall_at_3_std
value: 15.920000000000002
- type: nauc_recall_at_3_diff1
value: 26.1615
- type: nauc_recall_at_5_max
value: 44.715500000000006
- type: nauc_recall_at_5_std
value: 22.4972
- type: nauc_recall_at_5_diff1
value: 22.8433
- type: nauc_recall_at_10_max
value: 46.716499999999996
- type: nauc_recall_at_10_std
value: 32.6123
- type: nauc_recall_at_10_diff1
value: 17.7149
- type: nauc_recall_at_20_max
value: 43.5173
- type: nauc_recall_at_20_std
value: 35.7574
- type: nauc_recall_at_20_diff1
value: 15.4727
- type: nauc_recall_at_100_max
value: 45.2425
- type: nauc_recall_at_100_std
value: 44.2578
- type: nauc_recall_at_100_diff1
value: 14.2627
- type: nauc_recall_at_1000_max
value: 57.903800000000004
- type: nauc_recall_at_1000_std
value: 56.775
- type: nauc_recall_at_1000_diff1
value: 18.5075
- type: nauc_precision_at_1_max
value: 42.0574
- type: nauc_precision_at_1_std
value: 15.0412
- type: nauc_precision_at_1_diff1
value: 38.8542
- type: nauc_precision_at_3_max
value: 35.791000000000004
- type: nauc_precision_at_3_std
value: 23.7089
- type: nauc_precision_at_3_diff1
value: 12.116100000000001
- type: nauc_precision_at_5_max
value: 32.874700000000004
- type: nauc_precision_at_5_std
value: 29.5353
- type: nauc_precision_at_5_diff1
value: 6.1242
- type: nauc_precision_at_10_max
value: 29.019000000000002
- type: nauc_precision_at_10_std
value: 34.8923
- type: nauc_precision_at_10_diff1
value: -1.0564
- type: nauc_precision_at_20_max
value: 19.5603
- type: nauc_precision_at_20_std
value: 32.8097
- type: nauc_precision_at_20_diff1
value: -5.8062000000000005
- type: nauc_precision_at_100_max
value: 7.152799999999999
- type: nauc_precision_at_100_std
value: 27.925299999999996
- type: nauc_precision_at_100_diff1
value: -12.053899999999999
- type: nauc_precision_at_1000_max
value: -6.3346
- type: nauc_precision_at_1000_std
value: 15.254599999999998
- type: nauc_precision_at_1000_diff1
value: -17.0105
- type: nauc_mrr_at_1_max
value: 42.0574
- type: nauc_mrr_at_1_std
value: 15.0412
- type: nauc_mrr_at_1_diff1
value: 38.8542
- type: nauc_mrr_at_3_max
value: 46.7432
- type: nauc_mrr_at_3_std
value: 21.6624
- type: nauc_mrr_at_3_diff1
value: 34.3397
- type: nauc_mrr_at_5_max
value: 47.1175
- type: nauc_mrr_at_5_std
value: 22.811
- type: nauc_mrr_at_5_diff1
value: 34.9539
- type: nauc_mrr_at_10_max
value: 47.172599999999996
- type: nauc_mrr_at_10_std
value: 22.9886
- type: nauc_mrr_at_10_diff1
value: 34.7462
- type: nauc_mrr_at_20_max
value: 46.9932
- type: nauc_mrr_at_20_std
value: 22.8034
- type: nauc_mrr_at_20_diff1
value: 34.8125
- type: nauc_mrr_at_100_max
value: 46.9459
- type: nauc_mrr_at_100_std
value: 22.7237
- type: nauc_mrr_at_100_diff1
value: 34.8654
- type: nauc_mrr_at_1000_max
value: 46.935
- type: nauc_mrr_at_1000_std
value: 22.7019
- type: nauc_mrr_at_1000_diff1
value: 34.867
- type: main_score
value: 48.439
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: ndcg_at_1
value: 64.5
- type: ndcg_at_3
value: 55.836
- type: ndcg_at_5
value: 54.459
- type: ndcg_at_10
value: 52.476
- type: ndcg_at_20
value: 51.388999999999996
- type: ndcg_at_100
value: 57.477000000000004
- type: ndcg_at_1000
value: 64.165
- type: map_at_1
value: 10.067
- type: map_at_3
value: 16.348
- type: map_at_5
value: 20.202
- type: map_at_10
value: 24.938
- type: map_at_20
value: 29.586000000000002
- type: map_at_100
value: 37.057
- type: map_at_1000
value: 39.224
- type: recall_at_1
value: 10.067
- type: recall_at_3
value: 17.62
- type: recall_at_5
value: 23.162
- type: recall_at_10
value: 30.769999999999996
- type: recall_at_20
value: 39.719
- type: recall_at_100
value: 64.159
- type: recall_at_1000
value: 86.166
- type: precision_at_1
value: 76.75
- type: precision_at_3
value: 59.833000000000006
- type: precision_at_5
value: 53.75
- type: precision_at_10
value: 43.375
- type: precision_at_20
value: 33.025
- type: precision_at_100
value: 14.011999999999999
- type: precision_at_1000
value: 2.654
- type: mrr_at_1
value: 76.75
- type: mrr_at_3
value: 81.54169999999999
- type: mrr_at_5
value: 82.42920000000001
- type: mrr_at_10
value: 82.8224
- type: mrr_at_20
value: 82.90859999999999
- type: mrr_at_100
value: 82.99419999999999
- type: mrr_at_1000
value: 82.99990000000001
- type: nauc_ndcg_at_1_max
value: 45.8251
- type: nauc_ndcg_at_1_std
value: 26.468799999999998
- type: nauc_ndcg_at_1_diff1
value: 44.3761
- type: nauc_ndcg_at_3_max
value: 38.7362
- type: nauc_ndcg_at_3_std
value: 24.956300000000002
- type: nauc_ndcg_at_3_diff1
value: 24.4135
- type: nauc_ndcg_at_5_max
value: 39.9097
- type: nauc_ndcg_at_5_std
value: 24.9363
- type: nauc_ndcg_at_5_diff1
value: 24.607200000000002
- type: nauc_ndcg_at_10_max
value: 38.8691
- type: nauc_ndcg_at_10_std
value: 22.1204
- type: nauc_ndcg_at_10_diff1
value: 24.0339
- type: nauc_ndcg_at_20_max
value: 38.9902
- type: nauc_ndcg_at_20_std
value: 19.3997
- type: nauc_ndcg_at_20_diff1
value: 26.8911
- type: nauc_ndcg_at_100_max
value: 45.138600000000004
- type: nauc_ndcg_at_100_std
value: 28.6063
- type: nauc_ndcg_at_100_diff1
value: 32.5846
- type: nauc_ndcg_at_1000_max
value: 48.2813
- type: nauc_ndcg_at_1000_std
value: 37.6229
- type: nauc_ndcg_at_1000_diff1
value: 31.184800000000003
- type: nauc_map_at_1_max
value: 1.8275
- type: nauc_map_at_1_std
value: -23.8658
- type: nauc_map_at_1_diff1
value: 31.9071
- type: nauc_map_at_3_max
value: 8.081900000000001
- type: nauc_map_at_3_std
value: -20.647
- type: nauc_map_at_3_diff1
value: 26.509100000000004
- type: nauc_map_at_5_max
value: 12.9364
- type: nauc_map_at_5_std
value: -16.4111
- type: nauc_map_at_5_diff1
value: 26.3042
- type: nauc_map_at_10_max
value: 16.9186
- type: nauc_map_at_10_std
value: -10.174999999999999
- type: nauc_map_at_10_diff1
value: 24.0309
- type: nauc_map_at_20_max
value: 23.0241
- type: nauc_map_at_20_std
value: -0.5674
- type: nauc_map_at_20_diff1
value: 23.6815
- type: nauc_map_at_100_max
value: 30.620599999999996
- type: nauc_map_at_100_std
value: 18.436700000000002
- type: nauc_map_at_100_diff1
value: 22.7765
- type: nauc_map_at_1000_max
value: 31.1444
- type: nauc_map_at_1000_std
value: 20.9294
- type: nauc_map_at_1000_diff1
value: 21.1134
- type: nauc_recall_at_1_max
value: 1.8275
- type: nauc_recall_at_1_std
value: -23.8658
- type: nauc_recall_at_1_diff1
value: 31.9071
- type: nauc_recall_at_3_max
value: 4.6672
- type: nauc_recall_at_3_std
value: -22.2615
- type: nauc_recall_at_3_diff1
value: 21.9773
- type: nauc_recall_at_5_max
value: 9.882399999999999
- type: nauc_recall_at_5_std
value: -18.6467
- type: nauc_recall_at_5_diff1
value: 21.9263
- type: nauc_recall_at_10_max
value: 13.0783
- type: nauc_recall_at_10_std
value: -13.8897
- type: nauc_recall_at_10_diff1
value: 20.0473
- type: nauc_recall_at_20_max
value: 21.4983
- type: nauc_recall_at_20_std
value: -3.9253000000000005
- type: nauc_recall_at_20_diff1
value: 21.8969
- type: nauc_recall_at_100_max
value: 36.0702
- type: nauc_recall_at_100_std
value: 25.007800000000003
- type: nauc_recall_at_100_diff1
value: 26.6674
- type: nauc_recall_at_1000_max
value: 46.604
- type: nauc_recall_at_1000_std
value: 52.5305
- type: nauc_recall_at_1000_diff1
value: 31.8907
- type: nauc_precision_at_1_max
value: 56.293000000000006
- type: nauc_precision_at_1_std
value: 29.729699999999998
- type: nauc_precision_at_1_diff1
value: 55.167500000000004
- type: nauc_precision_at_3_max
value: 37.712
- type: nauc_precision_at_3_std
value: 37.155
- type: nauc_precision_at_3_diff1
value: 8.8632
- type: nauc_precision_at_5_max
value: 35.133399999999995
- type: nauc_precision_at_5_std
value: 38.5708
- type: nauc_precision_at_5_diff1
value: 3.6889
- type: nauc_precision_at_10_max
value: 26.4835
- type: nauc_precision_at_10_std
value: 38.136900000000004
- type: nauc_precision_at_10_diff1
value: -5.988099999999999
- type: nauc_precision_at_20_max
value: 24.3223
- type: nauc_precision_at_20_std
value: 40.487
- type: nauc_precision_at_20_diff1
value: -6.2481
- type: nauc_precision_at_100_max
value: 14.5761
- type: nauc_precision_at_100_std
value: 38.6962
- type: nauc_precision_at_100_diff1
value: -10.8208
- type: nauc_precision_at_1000_max
value: -7.7383999999999995
- type: nauc_precision_at_1000_std
value: 1.3077
- type: nauc_precision_at_1000_diff1
value: -23.3553
- type: nauc_mrr_at_1_max
value: 56.293000000000006
- type: nauc_mrr_at_1_std
value: 29.729699999999998
- type: nauc_mrr_at_1_diff1
value: 55.167500000000004
- type: nauc_mrr_at_3_max
value: 59.1321
- type: nauc_mrr_at_3_std
value: 35.067
- type: nauc_mrr_at_3_diff1
value: 53.4505
- type: nauc_mrr_at_5_max
value: 59.39149999999999
- type: nauc_mrr_at_5_std
value: 33.6519
- type: nauc_mrr_at_5_diff1
value: 53.7743
- type: nauc_mrr_at_10_max
value: 59.6483
- type: nauc_mrr_at_10_std
value: 33.537
- type: nauc_mrr_at_10_diff1
value: 53.75149999999999
- type: nauc_mrr_at_20_max
value: 59.4701
- type: nauc_mrr_at_20_std
value: 33.3212
- type: nauc_mrr_at_20_diff1
value: 53.690000000000005
- type: nauc_mrr_at_100_max
value: 59.3928
- type: nauc_mrr_at_100_std
value: 33.329
- type: nauc_mrr_at_100_diff1
value: 53.7903
- type: nauc_mrr_at_1000_max
value: 59.391400000000004
- type: nauc_mrr_at_1000_std
value: 33.3331
- type: nauc_mrr_at_1000_diff1
value: 53.797399999999996
- type: main_score
value: 52.476
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 87.72999999999999
- type: f1
value: 83.03399999999999
- type: f1_weighted
value: 88.0873
- type: main_score
value: 87.72999999999999
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: ndcg_at_1
value: 93.849
- type: ndcg_at_3
value: 94.08800000000001
- type: ndcg_at_5
value: 94.459
- type: ndcg_at_10
value: 94.806
- type: ndcg_at_20
value: 95.06
- type: ndcg_at_100
value: 95.338
- type: ndcg_at_1000
value: 95.465
- type: map_at_1
value: 87.21900000000001
- type: map_at_3
value: 92.16
- type: map_at_5
value: 92.607
- type: map_at_10
value: 92.88300000000001
- type: map_at_20
value: 93.009
- type: map_at_100
value: 93.07600000000001
- type: map_at_1000
value: 93.085
- type: recall_at_1
value: 87.21900000000001
- type: recall_at_3
value: 95.082
- type: recall_at_5
value: 96.09299999999999
- type: recall_at_10
value: 97.024
- type: recall_at_20
value: 97.773
- type: recall_at_100
value: 98.901
- type: recall_at_1000
value: 99.629
- type: precision_at_1
value: 93.849
- type: precision_at_3
value: 35.154
- type: precision_at_5
value: 21.557000000000002
- type: precision_at_10
value: 11.031
- type: precision_at_20
value: 5.622
- type: precision_at_100
value: 1.155
- type: precision_at_1000
value: 0.11800000000000001
- type: mrr_at_1
value: 93.8494
- type: mrr_at_3
value: 96.43209999999999
- type: mrr_at_5
value: 96.4914
- type: mrr_at_10
value: 96.5095
- type: mrr_at_20
value: 96.5154
- type: mrr_at_100
value: 96.52340000000001
- type: mrr_at_1000
value: 96.5235
- type: nauc_ndcg_at_1_max
value: 22.8567
- type: nauc_ndcg_at_1_std
value: -28.435
- type: nauc_ndcg_at_1_diff1
value: 83.3685
- type: nauc_ndcg_at_3_max
value: 25.3199
- type: nauc_ndcg_at_3_std
value: -8.9749
- type: nauc_ndcg_at_3_diff1
value: 51.326499999999996
- type: nauc_ndcg_at_5_max
value: 24.3802
- type: nauc_ndcg_at_5_std
value: -8.0152
- type: nauc_ndcg_at_5_diff1
value: 50.26389999999999
- type: nauc_ndcg_at_10_max
value: 25.0331
- type: nauc_ndcg_at_10_std
value: -7.0646
- type: nauc_ndcg_at_10_diff1
value: 50.2641
- type: nauc_ndcg_at_20_max
value: 25.342399999999998
- type: nauc_ndcg_at_20_std
value: -7.000000000000001
- type: nauc_ndcg_at_20_diff1
value: 51.2941
- type: nauc_ndcg_at_100_max
value: 25.7293
- type: nauc_ndcg_at_100_std
value: -7.8615
- type: nauc_ndcg_at_100_diff1
value: 53.529700000000005
- type: nauc_ndcg_at_1000_max
value: 25.565500000000004
- type: nauc_ndcg_at_1000_std
value: -9.2169
- type: nauc_ndcg_at_1000_diff1
value: 54.8992
- type: nauc_map_at_1_max
value: 22.5516
- type: nauc_map_at_1_std
value: -11.0841
- type: nauc_map_at_1_diff1
value: 60.23929999999999
- type: nauc_map_at_3_max
value: 23.7695
- type: nauc_map_at_3_std
value: -8.1368
- type: nauc_map_at_3_diff1
value: 50.648
- type: nauc_map_at_5_max
value: 23.7918
- type: nauc_map_at_5_std
value: -8.023
- type: nauc_map_at_5_diff1
value: 50.8238
- type: nauc_map_at_10_max
value: 24.305
- type: nauc_map_at_10_std
value: -7.5793
- type: nauc_map_at_10_diff1
value: 51.1269
- type: nauc_map_at_20_max
value: 24.458099999999998
- type: nauc_map_at_20_std
value: -7.541399999999999
- type: nauc_map_at_20_diff1
value: 51.4841
- type: nauc_map_at_100_max
value: 24.562
- type: nauc_map_at_100_std
value: -7.6588
- type: nauc_map_at_100_diff1
value: 51.8489
- type: nauc_map_at_1000_max
value: 24.5627
- type: nauc_map_at_1000_std
value: -7.7154
- type: nauc_map_at_1000_diff1
value: 51.9049
- type: nauc_recall_at_1_max
value: 22.5516
- type: nauc_recall_at_1_std
value: -11.0841
- type: nauc_recall_at_1_diff1
value: 60.23929999999999
- type: nauc_recall_at_3_max
value: 25.117
- type: nauc_recall_at_3_std
value: 4.056
- type: nauc_recall_at_3_diff1
value: 28.1752
- type: nauc_recall_at_5_max
value: 24.776999999999997
- type: nauc_recall_at_5_std
value: 9.3384
- type: nauc_recall_at_5_diff1
value: 22.5866
- type: nauc_recall_at_10_max
value: 28.2575
- type: nauc_recall_at_10_std
value: 18.0012
- type: nauc_recall_at_10_diff1
value: 16.8302
- type: nauc_recall_at_20_max
value: 30.4282
- type: nauc_recall_at_20_std
value: 27.020100000000003
- type: nauc_recall_at_20_diff1
value: 12.195
- type: nauc_recall_at_100_max
value: 40.7633
- type: nauc_recall_at_100_std
value: 47.6965
- type: nauc_recall_at_100_diff1
value: 9.647
- type: nauc_recall_at_1000_max
value: 53.4871
- type: nauc_recall_at_1000_std
value: 58.3394
- type: nauc_recall_at_1000_diff1
value: 16.0846
- type: nauc_precision_at_1_max
value: 22.8567
- type: nauc_precision_at_1_std
value: -28.435
- type: nauc_precision_at_1_diff1
value: 83.3685
- type: nauc_precision_at_3_max
value: -7.2411
- type: nauc_precision_at_3_std
value: -0.819
- type: nauc_precision_at_3_diff1
value: -25.5367
- type: nauc_precision_at_5_max
value: -8.8802
- type: nauc_precision_at_5_std
value: 0.4376
- type: nauc_precision_at_5_diff1
value: -26.209100000000003
- type: nauc_precision_at_10_max
value: -7.9997
- type: nauc_precision_at_10_std
value: 1.9876999999999998
- type: nauc_precision_at_10_diff1
value: -24.851200000000002
- type: nauc_precision_at_20_max
value: -7.5102
- type: nauc_precision_at_20_std
value: 2.2863
- type: nauc_precision_at_20_diff1
value: -23.224
- type: nauc_precision_at_100_max
value: -8.2868
- type: nauc_precision_at_100_std
value: 0.9126000000000001
- type: nauc_precision_at_100_diff1
value: -21.0977
- type: nauc_precision_at_1000_max
value: -9.5708
- type: nauc_precision_at_1000_std
value: -2.0409
- type: nauc_precision_at_1000_diff1
value: -18.6371
- type: nauc_mrr_at_1_max
value: 22.8567
- type: nauc_mrr_at_1_std
value: -28.435
- type: nauc_mrr_at_1_diff1
value: 83.3685
- type: nauc_mrr_at_3_max
value: 25.0204
- type: nauc_mrr_at_3_std
value: -29.9704
- type: nauc_mrr_at_3_diff1
value: 79.9868
- type: nauc_mrr_at_5_max
value: 24.4871
- type: nauc_mrr_at_5_std
value: -29.5127
- type: nauc_mrr_at_5_diff1
value: 79.8468
- type: nauc_mrr_at_10_max
value: 24.543400000000002
- type: nauc_mrr_at_10_std
value: -29.243799999999997
- type: nauc_mrr_at_10_diff1
value: 79.8492
- type: nauc_mrr_at_20_max
value: 24.4687
- type: nauc_mrr_at_20_std
value: -29.2678
- type: nauc_mrr_at_20_diff1
value: 79.9428
- type: nauc_mrr_at_100_max
value: 24.43
- type: nauc_mrr_at_100_std
value: -29.3424
- type: nauc_mrr_at_100_diff1
value: 80.1115
- type: nauc_mrr_at_1000_max
value: 24.429699999999997
- type: nauc_mrr_at_1000_std
value: -29.3431
- type: nauc_mrr_at_1000_diff1
value: 80.1118
- type: main_score
value: 94.806
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: ndcg_at_1
value: 59.721999999999994
- type: ndcg_at_3
value: 55.584
- type: ndcg_at_5
value: 56.852000000000004
- type: ndcg_at_10
value: 60.118
- type: ndcg_at_20
value: 62.756
- type: ndcg_at_100
value: 66.434
- type: ndcg_at_1000
value: 67.796
- type: map_at_1
value: 31.213
- type: map_at_3
value: 45.164
- type: map_at_5
value: 48.66
- type: map_at_10
value: 51.637
- type: map_at_20
value: 52.886
- type: map_at_100
value: 53.785000000000004
- type: map_at_1000
value: 53.886
- type: recall_at_1
value: 31.213
- type: recall_at_3
value: 50.105
- type: recall_at_5
value: 57.714
- type: recall_at_10
value: 67.93900000000001
- type: recall_at_20
value: 75.901
- type: recall_at_100
value: 90.424
- type: recall_at_1000
value: 98.24300000000001
- type: precision_at_1
value: 59.721999999999994
- type: precision_at_3
value: 36.986000000000004
- type: precision_at_5
value: 26.883000000000003
- type: precision_at_10
value: 16.558999999999997
- type: precision_at_20
value: 9.468
- type: precision_at_100
value: 2.332
- type: precision_at_1000
value: 0.258
- type: mrr_at_1
value: 59.7222
- type: mrr_at_3
value: 65.1492
- type: mrr_at_5
value: 66.376
- type: mrr_at_10
value: 67.3183
- type: mrr_at_20
value: 67.6241
- type: mrr_at_100
value: 67.8682
- type: mrr_at_1000
value: 67.8792
- type: nauc_ndcg_at_1_max
value: 43.5471
- type: nauc_ndcg_at_1_std
value: -7.1188
- type: nauc_ndcg_at_1_diff1
value: 62.971900000000005
- type: nauc_ndcg_at_3_max
value: 34.357
- type: nauc_ndcg_at_3_std
value: -10.8661
- type: nauc_ndcg_at_3_diff1
value: 47.8167
- type: nauc_ndcg_at_5_max
value: 32.3401
- type: nauc_ndcg_at_5_std
value: -10.3064
- type: nauc_ndcg_at_5_diff1
value: 47.248200000000004
- type: nauc_ndcg_at_10_max
value: 33.4274
- type: nauc_ndcg_at_10_std
value: -6.7193000000000005
- type: nauc_ndcg_at_10_diff1
value: 48.5323
- type: nauc_ndcg_at_20_max
value: 35.2779
- type: nauc_ndcg_at_20_std
value: -4.914700000000001
- type: nauc_ndcg_at_20_diff1
value: 48.952200000000005
- type: nauc_ndcg_at_100_max
value: 35.7555
- type: nauc_ndcg_at_100_std
value: -4.0120000000000005
- type: nauc_ndcg_at_100_diff1
value: 49.684200000000004
- type: nauc_ndcg_at_1000_max
value: 36.836999999999996
- type: nauc_ndcg_at_1000_std
value: -5.0826
- type: nauc_ndcg_at_1000_diff1
value: 49.9576
- type: nauc_map_at_1_max
value: 12.3228
- type: nauc_map_at_1_std
value: -6.8317000000000005
- type: nauc_map_at_1_diff1
value: 52.012499999999996
- type: nauc_map_at_3_max
value: 22.758300000000002
- type: nauc_map_at_3_std
value: -10.415199999999999
- type: nauc_map_at_3_diff1
value: 48.8788
- type: nauc_map_at_5_max
value: 26.2402
- type: nauc_map_at_5_std
value: -9.831199999999999
- type: nauc_map_at_5_diff1
value: 47.2214
- type: nauc_map_at_10_max
value: 28.962
- type: nauc_map_at_10_std
value: -8.847900000000001
- type: nauc_map_at_10_diff1
value: 47.177
- type: nauc_map_at_20_max
value: 30.252699999999997
- type: nauc_map_at_20_std
value: -7.994
- type: nauc_map_at_20_diff1
value: 47.2329
- type: nauc_map_at_100_max
value: 30.621
- type: nauc_map_at_100_std
value: -7.5253
- type: nauc_map_at_100_diff1
value: 47.2401
- type: nauc_map_at_1000_max
value: 30.6957
- type: nauc_map_at_1000_std
value: -7.570200000000001
- type: nauc_map_at_1000_diff1
value: 47.2385
- type: nauc_recall_at_1_max
value: 12.3228
- type: nauc_recall_at_1_std
value: -6.8317000000000005
- type: nauc_recall_at_1_diff1
value: 52.012499999999996
- type: nauc_recall_at_3_max
value: 18.8278
- type: nauc_recall_at_3_std
value: -11.0181
- type: nauc_recall_at_3_diff1
value: 42.853
- type: nauc_recall_at_5_max
value: 20.494699999999998
- type: nauc_recall_at_5_std
value: -10.1645
- type: nauc_recall_at_5_diff1
value: 38.1874
- type: nauc_recall_at_10_max
value: 24.264
- type: nauc_recall_at_10_std
value: -1.4449999999999998
- type: nauc_recall_at_10_diff1
value: 35.9821
- type: nauc_recall_at_20_max
value: 27.699
- type: nauc_recall_at_20_std
value: 5.3731
- type: nauc_recall_at_20_diff1
value: 34.8722
- type: nauc_recall_at_100_max
value: 17.296
- type: nauc_recall_at_100_std
value: 22.0343
- type: nauc_recall_at_100_diff1
value: 31.7347
- type: nauc_recall_at_1000_max
value: 30.353
- type: nauc_recall_at_1000_std
value: 48.0299
- type: nauc_recall_at_1000_diff1
value: 24.395500000000002
- type: nauc_precision_at_1_max
value: 43.5471
- type: nauc_precision_at_1_std
value: -7.1188
- type: nauc_precision_at_1_diff1
value: 62.971900000000005
- type: nauc_precision_at_3_max
value: 39.456599999999995
- type: nauc_precision_at_3_std
value: -6.9102
- type: nauc_precision_at_3_diff1
value: 21.7063
- type: nauc_precision_at_5_max
value: 39.5105
- type: nauc_precision_at_5_std
value: -4.0794999999999995
- type: nauc_precision_at_5_diff1
value: 11.556700000000001
- type: nauc_precision_at_10_max
value: 38.1647
- type: nauc_precision_at_10_std
value: 1.0857999999999999
- type: nauc_precision_at_10_diff1
value: 3.9246000000000003
- type: nauc_precision_at_20_max
value: 38.1511
- type: nauc_precision_at_20_std
value: 5.1094
- type: nauc_precision_at_20_diff1
value: -0.468
- type: nauc_precision_at_100_max
value: 30.9215
- type: nauc_precision_at_100_std
value: 7.5769
- type: nauc_precision_at_100_diff1
value: -8.6537
- type: nauc_precision_at_1000_max
value: 28.324899999999996
- type: nauc_precision_at_1000_std
value: 4.0066999999999995
- type: nauc_precision_at_1000_diff1
value: -13.2833
- type: nauc_mrr_at_1_max
value: 43.5471
- type: nauc_mrr_at_1_std
value: -7.1188
- type: nauc_mrr_at_1_diff1
value: 62.971900000000005
- type: nauc_mrr_at_3_max
value: 44.7938
- type: nauc_mrr_at_3_std
value: -8.1318
- type: nauc_mrr_at_3_diff1
value: 60.6936
- type: nauc_mrr_at_5_max
value: 44.398199999999996
- type: nauc_mrr_at_5_std
value: -8.0999
- type: nauc_mrr_at_5_diff1
value: 60.2421
- type: nauc_mrr_at_10_max
value: 44.7636
- type: nauc_mrr_at_10_std
value: -6.6530000000000005
- type: nauc_mrr_at_10_diff1
value: 60.168
- type: nauc_mrr_at_20_max
value: 44.5856
- type: nauc_mrr_at_20_std
value: -6.786300000000001
- type: nauc_mrr_at_20_diff1
value: 60.224999999999994
- type: nauc_mrr_at_100_max
value: 44.554899999999996
- type: nauc_mrr_at_100_std
value: -6.819400000000001
- type: nauc_mrr_at_100_diff1
value: 60.379400000000004
- type: nauc_mrr_at_1000_max
value: 44.5589
- type: nauc_mrr_at_1000_std
value: -6.8267
- type: nauc_mrr_at_1000_diff1
value: 60.3766
- type: main_score
value: 60.118
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: ndcg_at_1
value: 87.576
- type: ndcg_at_3
value: 73.276
- type: ndcg_at_5
value: 75.717
- type: ndcg_at_10
value: 77.681
- type: ndcg_at_20
value: 78.865
- type: ndcg_at_100
value: 80.42
- type: ndcg_at_1000
value: 81.34
- type: map_at_1
value: 43.788
- type: map_at_3
value: 66.769
- type: map_at_5
value: 68.811
- type: map_at_10
value: 70.06099999999999
- type: map_at_20
value: 70.565
- type: map_at_100
value: 70.901
- type: map_at_1000
value: 70.955
- type: recall_at_1
value: 43.788
- type: recall_at_3
value: 70.27
- type: recall_at_5
value: 75.098
- type: recall_at_10
value: 79.99300000000001
- type: recall_at_20
value: 83.795
- type: recall_at_100
value: 90.621
- type: recall_at_1000
value: 96.577
- type: precision_at_1
value: 87.576
- type: precision_at_3
value: 46.847
- type: precision_at_5
value: 30.038999999999998
- type: precision_at_10
value: 15.998999999999999
- type: precision_at_20
value: 8.379
- type: precision_at_100
value: 1.812
- type: precision_at_1000
value: 0.193
- type: mrr_at_1
value: 87.576
- type: mrr_at_3
value: 90.7968
- type: mrr_at_5
value: 91.135
- type: mrr_at_10
value: 91.3228
- type: mrr_at_20
value: 91.3865
- type: mrr_at_100
value: 91.41709999999999
- type: mrr_at_1000
value: 91.4191
- type: nauc_ndcg_at_1_max
value: 52.9848
- type: nauc_ndcg_at_1_std
value: -7.4415
- type: nauc_ndcg_at_1_diff1
value: 74.138
- type: nauc_ndcg_at_3_max
value: 18.2608
- type: nauc_ndcg_at_3_std
value: 2.8652
- type: nauc_ndcg_at_3_diff1
value: 8.2423
- type: nauc_ndcg_at_5_max
value: 18.7809
- type: nauc_ndcg_at_5_std
value: 5.266
- type: nauc_ndcg_at_5_diff1
value: 8.2429
- type: nauc_ndcg_at_10_max
value: 20.0532
- type: nauc_ndcg_at_10_std
value: 7.7897
- type: nauc_ndcg_at_10_diff1
value: 8.953
- type: nauc_ndcg_at_20_max
value: 20.921
- type: nauc_ndcg_at_20_std
value: 9.1425
- type: nauc_ndcg_at_20_diff1
value: 9.8906
- type: nauc_ndcg_at_100_max
value: 22.0304
- type: nauc_ndcg_at_100_std
value: 10.200099999999999
- type: nauc_ndcg_at_100_diff1
value: 10.8734
- type: nauc_ndcg_at_1000_max
value: 22.4498
- type: nauc_ndcg_at_1000_std
value: 9.0443
- type: nauc_ndcg_at_1000_diff1
value: 11.950800000000001
- type: nauc_map_at_1_max
value: 52.9848
- type: nauc_map_at_1_std
value: -7.4415
- type: nauc_map_at_1_diff1
value: 74.138
- type: nauc_map_at_3_max
value: 14.394599999999999
- type: nauc_map_at_3_std
value: 3.0435
- type: nauc_map_at_3_diff1
value: 2.7293000000000003
- type: nauc_map_at_5_max
value: 15.001000000000001
- type: nauc_map_at_5_std
value: 4.8138
- type: nauc_map_at_5_diff1
value: 3.0212
- type: nauc_map_at_10_max
value: 15.770600000000002
- type: nauc_map_at_10_std
value: 6.1151
- type: nauc_map_at_10_diff1
value: 3.546
- type: nauc_map_at_20_max
value: 16.0909
- type: nauc_map_at_20_std
value: 6.5694
- type: nauc_map_at_20_diff1
value: 3.8928
- type: nauc_map_at_100_max
value: 16.2717
- type: nauc_map_at_100_std
value: 6.7403
- type: nauc_map_at_100_diff1
value: 4.0282
- type: nauc_map_at_1000_max
value: 16.284399999999998
- type: nauc_map_at_1000_std
value: 6.6972000000000005
- type: nauc_map_at_1000_diff1
value: 4.0631
- type: nauc_recall_at_1_max
value: 52.9848
- type: nauc_recall_at_1_std
value: -7.4415
- type: nauc_recall_at_1_diff1
value: 74.138
- type: nauc_recall_at_3_max
value: 11.933
- type: nauc_recall_at_3_std
value: 5.3739
- type: nauc_recall_at_3_diff1
value: -3.9494000000000002
- type: nauc_recall_at_5_max
value: 12.087100000000001
- type: nauc_recall_at_5_std
value: 9.9072
- type: nauc_recall_at_5_diff1
value: -5.1775
- type: nauc_recall_at_10_max
value: 13.96
- type: nauc_recall_at_10_std
value: 16.7674
- type: nauc_recall_at_10_diff1
value: -5.56
- type: nauc_recall_at_20_max
value: 15.7857
- type: nauc_recall_at_20_std
value: 22.8973
- type: nauc_recall_at_20_diff1
value: -4.7106
- type: nauc_recall_at_100_max
value: 20.7038
- type: nauc_recall_at_100_std
value: 38.0023
- type: nauc_recall_at_100_diff1
value: -5.9268
- type: nauc_recall_at_1000_max
value: 31.1577
- type: nauc_recall_at_1000_std
value: 51.5596
- type: nauc_recall_at_1000_diff1
value: -3.6110999999999995
- type: nauc_precision_at_1_max
value: 52.9848
- type: nauc_precision_at_1_std
value: -7.4415
- type: nauc_precision_at_1_diff1
value: 74.138
- type: nauc_precision_at_3_max
value: 11.933
- type: nauc_precision_at_3_std
value: 5.3739
- type: nauc_precision_at_3_diff1
value: -3.9494000000000002
- type: nauc_precision_at_5_max
value: 12.087100000000001
- type: nauc_precision_at_5_std
value: 9.9072
- type: nauc_precision_at_5_diff1
value: -5.1775
- type: nauc_precision_at_10_max
value: 13.96
- type: nauc_precision_at_10_std
value: 16.7674
- type: nauc_precision_at_10_diff1
value: -5.56
- type: nauc_precision_at_20_max
value: 15.7857
- type: nauc_precision_at_20_std
value: 22.8973
- type: nauc_precision_at_20_diff1
value: -4.7106
- type: nauc_precision_at_100_max
value: 20.7038
- type: nauc_precision_at_100_std
value: 38.0023
- type: nauc_precision_at_100_diff1
value: -5.9268
- type: nauc_precision_at_1000_max
value: 31.1577
- type: nauc_precision_at_1000_std
value: 51.5596
- type: nauc_precision_at_1000_diff1
value: -3.6110999999999995
- type: nauc_mrr_at_1_max
value: 52.9848
- type: nauc_mrr_at_1_std
value: -7.4415
- type: nauc_mrr_at_1_diff1
value: 74.138
- type: nauc_mrr_at_3_max
value: 56.448100000000004
- type: nauc_mrr_at_3_std
value: -5.8551
- type: nauc_mrr_at_3_diff1
value: 73.5421
- type: nauc_mrr_at_5_max
value: 56.3061
- type: nauc_mrr_at_5_std
value: -5.8522
- type: nauc_mrr_at_5_diff1
value: 73.78190000000001
- type: nauc_mrr_at_10_max
value: 56.2413
- type: nauc_mrr_at_10_std
value: -5.6768
- type: nauc_mrr_at_10_diff1
value: 73.8144
- type: nauc_mrr_at_20_max
value: 56.1659
- type: nauc_mrr_at_20_std
value: -5.7085
- type: nauc_mrr_at_20_diff1
value: 73.8173
- type: nauc_mrr_at_100_max
value: 56.1251
- type: nauc_mrr_at_100_std
value: -5.7101
- type: nauc_mrr_at_100_diff1
value: 73.8056
- type: nauc_mrr_at_1000_max
value: 56.118199999999995
- type: nauc_mrr_at_1000_std
value: -5.723000000000001
- type: nauc_mrr_at_1000_diff1
value: 73.8034
- type: main_score
value: 77.681
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.9528
- type: f1
value: 96.95270000000001
- type: f1_weighted
value: 96.95270000000001
- type: ap
value: 95.35130000000001
- type: ap_weighted
value: 95.35130000000001
- type: main_score
value: 96.9528
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: ndcg_at_1
value: 25.759
- type: ndcg_at_3
value: 37.159
- type: ndcg_at_5
value: 41.388999999999996
- type: ndcg_at_10
value: 45.31
- type: ndcg_at_20
value: 47.912
- type: ndcg_at_100
value: 50.719
- type: ndcg_at_1000
value: 51.636
- type: map_at_1
value: 25.056
- type: map_at_3
value: 34.059
- type: map_at_5
value: 36.424
- type: map_at_10
value: 38.099
- type: map_at_20
value: 38.834
- type: map_at_100
value: 39.245999999999995
- type: map_at_1000
value: 39.285
- type: recall_at_1
value: 25.056
- type: recall_at_3
value: 45.414
- type: recall_at_5
value: 55.595000000000006
- type: recall_at_10
value: 67.43900000000001
- type: recall_at_20
value: 77.544
- type: recall_at_100
value: 92.23599999999999
- type: recall_at_1000
value: 99.087
- type: precision_at_1
value: 25.759
- type: precision_at_3
value: 15.716
- type: precision_at_5
value: 11.579
- type: precision_at_10
value: 7.063999999999999
- type: precision_at_20
value: 4.074
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.105
- type: mrr_at_1
value: 25.759300000000003
- type: mrr_at_3
value: 34.7827
- type: mrr_at_5
value: 37.1122
- type: mrr_at_10
value: 38.702799999999996
- type: mrr_at_20
value: 39.4035
- type: mrr_at_100
value: 39.7874
- type: mrr_at_1000
value: 39.820499999999996
- type: nauc_ndcg_at_1_max
value: 7.6849
- type: nauc_ndcg_at_1_std
value: -20.9503
- type: nauc_ndcg_at_1_diff1
value: 39.0921
- type: nauc_ndcg_at_3_max
value: 8.9183
- type: nauc_ndcg_at_3_std
value: -24.5722
- type: nauc_ndcg_at_3_diff1
value: 35.3955
- type: nauc_ndcg_at_5_max
value: 9.3112
- type: nauc_ndcg_at_5_std
value: -24.8615
- type: nauc_ndcg_at_5_diff1
value: 34.9193
- type: nauc_ndcg_at_10_max
value: 9.9335
- type: nauc_ndcg_at_10_std
value: -24.2891
- type: nauc_ndcg_at_10_diff1
value: 35.2354
- type: nauc_ndcg_at_20_max
value: 10.1047
- type: nauc_ndcg_at_20_std
value: -22.538
- type: nauc_ndcg_at_20_diff1
value: 35.370200000000004
- type: nauc_ndcg_at_100_max
value: 9.9511
- type: nauc_ndcg_at_100_std
value: -21.182699999999997
- type: nauc_ndcg_at_100_diff1
value: 35.8301
- type: nauc_ndcg_at_1000_max
value: 9.7745
- type: nauc_ndcg_at_1000_std
value: -22.1117
- type: nauc_ndcg_at_1000_diff1
value: 35.7943
- type: nauc_map_at_1_max
value: 7.7292
- type: nauc_map_at_1_std
value: -20.9013
- type: nauc_map_at_1_diff1
value: 39.1258
- type: nauc_map_at_3_max
value: 8.6472
- type: nauc_map_at_3_std
value: -23.8389
- type: nauc_map_at_3_diff1
value: 36.3462
- type: nauc_map_at_5_max
value: 8.8231
- type: nauc_map_at_5_std
value: -24.048
- type: nauc_map_at_5_diff1
value: 36.0689
- type: nauc_map_at_10_max
value: 9.064
- type: nauc_map_at_10_std
value: -23.788400000000003
- type: nauc_map_at_10_diff1
value: 36.193
- type: nauc_map_at_20_max
value: 9.0944
- type: nauc_map_at_20_std
value: -23.3413
- type: nauc_map_at_20_diff1
value: 36.2575
- type: nauc_map_at_100_max
value: 9.0852
- type: nauc_map_at_100_std
value: -23.1205
- type: nauc_map_at_100_diff1
value: 36.314
- type: nauc_map_at_1000_max
value: 9.0815
- type: nauc_map_at_1000_std
value: -23.1446
- type: nauc_map_at_1000_diff1
value: 36.3137
- type: nauc_recall_at_1_max
value: 7.7292
- type: nauc_recall_at_1_std
value: -20.9013
- type: nauc_recall_at_1_diff1
value: 39.1258
- type: nauc_recall_at_3_max
value: 9.8077
- type: nauc_recall_at_3_std
value: -26.5565
- type: nauc_recall_at_3_diff1
value: 32.7773
- type: nauc_recall_at_5_max
value: 10.801
- type: nauc_recall_at_5_std
value: -27.3276
- type: nauc_recall_at_5_diff1
value: 31.4183
- type: nauc_recall_at_10_max
value: 13.324
- type: nauc_recall_at_10_std
value: -25.866699999999998
- type: nauc_recall_at_10_diff1
value: 31.8682
- type: nauc_recall_at_20_max
value: 15.2863
- type: nauc_recall_at_20_std
value: -16.487299999999998
- type: nauc_recall_at_20_diff1
value: 31.4332
- type: nauc_recall_at_100_max
value: 19.4577
- type: nauc_recall_at_100_std
value: 14.5535
- type: nauc_recall_at_100_diff1
value: 34.865899999999996
- type: nauc_recall_at_1000_max
value: 48.2992
- type: nauc_recall_at_1000_std
value: 61.405100000000004
- type: nauc_recall_at_1000_diff1
value: 23.533299999999997
- type: nauc_precision_at_1_max
value: 7.6849
- type: nauc_precision_at_1_std
value: -20.9503
- type: nauc_precision_at_1_diff1
value: 39.0921
- type: nauc_precision_at_3_max
value: 9.224400000000001
- type: nauc_precision_at_3_std
value: -26.7181
- type: nauc_precision_at_3_diff1
value: 32.0185
- type: nauc_precision_at_5_max
value: 10.2675
- type: nauc_precision_at_5_std
value: -26.803700000000003
- type: nauc_precision_at_5_diff1
value: 30.189300000000003
- type: nauc_precision_at_10_max
value: 12.128400000000001
- type: nauc_precision_at_10_std
value: -23.18
- type: nauc_precision_at_10_diff1
value: 28.5595
- type: nauc_precision_at_20_max
value: 13.1152
- type: nauc_precision_at_20_std
value: -12.6363
- type: nauc_precision_at_20_diff1
value: 25.7907
- type: nauc_precision_at_100_max
value: 11.881400000000001
- type: nauc_precision_at_100_std
value: 12.4388
- type: nauc_precision_at_100_diff1
value: 15.3289
- type: nauc_precision_at_1000_max
value: 5.5209
- type: nauc_precision_at_1000_std
value: 11.001900000000001
- type: nauc_precision_at_1000_diff1
value: -3.0637000000000003
- type: nauc_mrr_at_1_max
value: 7.6849
- type: nauc_mrr_at_1_std
value: -20.9503
- type: nauc_mrr_at_1_diff1
value: 39.0921
- type: nauc_mrr_at_3_max
value: 8.454699999999999
- type: nauc_mrr_at_3_std
value: -23.5899
- type: nauc_mrr_at_3_diff1
value: 36.077
- type: nauc_mrr_at_5_max
value: 8.7245
- type: nauc_mrr_at_5_std
value: -23.6374
- type: nauc_mrr_at_5_diff1
value: 35.844
- type: nauc_mrr_at_10_max
value: 8.9177
- type: nauc_mrr_at_10_std
value: -23.457800000000002
- type: nauc_mrr_at_10_diff1
value: 36.02
- type: nauc_mrr_at_20_max
value: 8.9659
- type: nauc_mrr_at_20_std
value: -23.0174
- type: nauc_mrr_at_20_diff1
value: 36.050900000000006
- type: nauc_mrr_at_100_max
value: 8.9461
- type: nauc_mrr_at_100_std
value: -22.8363
- type: nauc_mrr_at_100_diff1
value: 36.1078
- type: nauc_mrr_at_1000_max
value: 8.940299999999999
- type: nauc_mrr_at_1000_std
value: -22.8606
- type: nauc_mrr_at_1000_diff1
value: 36.106700000000004
- type: main_score
value: 45.31
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.1701
- type: f1
value: 99.07639999999999
- type: f1_weighted
value: 99.1712
- type: main_score
value: 99.1701
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.48429999999999
- type: f1
value: 72.55080000000001
- type: f1_weighted
value: 91.72449999999999
- type: main_score
value: 91.48429999999999
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 85.306
- type: f1
value: 83.1076
- type: f1_weighted
value: 84.4685
- type: main_score
value: 85.306
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 91.17349999999999
- type: f1
value: 90.15950000000001
- type: f1_weighted
value: 90.802
- type: main_score
value: 91.17349999999999
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.7494
- type: v_measure_std
value: 1.2241
- type: main_score
value: 46.7494
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.9016
- type: v_measure_std
value: 1.3978000000000002
- type: main_score
value: 44.9016
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: map
value: 32.628099999999996
- type: mrr
value: 33.8493
- type: nAUC_map_max
value: -21.0204
- type: nAUC_map_std
value: -5.1638
- type: nAUC_map_diff1
value: 10.4531
- type: nAUC_mrr_max
value: -15.85
- type: nAUC_mrr_std
value: -2.7011
- type: nAUC_mrr_diff1
value: 9.8352
- type: main_score
value: 32.628099999999996
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: ndcg_at_1
value: 54.954
- type: ndcg_at_3
value: 49.876
- type: ndcg_at_5
value: 47.038999999999994
- type: ndcg_at_10
value: 43.313
- type: ndcg_at_20
value: 41.086
- type: ndcg_at_100
value: 40.849999999999994
- type: ndcg_at_1000
value: 49.852000000000004
- type: map_at_1
value: 7.412000000000001
- type: map_at_3
value: 12.438
- type: map_at_5
value: 14.448
- type: map_at_10
value: 17.080000000000002
- type: map_at_20
value: 19.273
- type: map_at_100
value: 22.494
- type: map_at_1000
value: 24.490000000000002
- type: recall_at_1
value: 7.412000000000001
- type: recall_at_3
value: 13.682
- type: recall_at_5
value: 16.595
- type: recall_at_10
value: 21.726
- type: recall_at_20
value: 26.895999999999997
- type: recall_at_100
value: 42.345
- type: recall_at_1000
value: 74.752
- type: precision_at_1
value: 56.65599999999999
- type: precision_at_3
value: 46.646
- type: precision_at_5
value: 40.619
- type: precision_at_10
value: 31.796000000000003
- type: precision_at_20
value: 24.195
- type: precision_at_100
value: 10.65
- type: precision_at_1000
value: 2.4570000000000003
- type: mrr_at_1
value: 56.6563
- type: mrr_at_3
value: 63.2095
- type: mrr_at_5
value: 63.998999999999995
- type: mrr_at_10
value: 64.6678
- type: mrr_at_20
value: 64.96079999999999
- type: mrr_at_100
value: 65.13550000000001
- type: mrr_at_1000
value: 65.1606
- type: nauc_ndcg_at_1_max
value: 36.932900000000004
- type: nauc_ndcg_at_1_std
value: 18.8188
- type: nauc_ndcg_at_1_diff1
value: 19.8327
- type: nauc_ndcg_at_3_max
value: 37.044
- type: nauc_ndcg_at_3_std
value: 17.2918
- type: nauc_ndcg_at_3_diff1
value: 12.0583
- type: nauc_ndcg_at_5_max
value: 36.1203
- type: nauc_ndcg_at_5_std
value: 17.3222
- type: nauc_ndcg_at_5_diff1
value: 10.6337
- type: nauc_ndcg_at_10_max
value: 35.2176
- type: nauc_ndcg_at_10_std
value: 19.4608
- type: nauc_ndcg_at_10_diff1
value: 9.1899
- type: nauc_ndcg_at_20_max
value: 34.3354
- type: nauc_ndcg_at_20_std
value: 20.2528
- type: nauc_ndcg_at_20_diff1
value: 8.4017
- type: nauc_ndcg_at_100_max
value: 36.2388
- type: nauc_ndcg_at_100_std
value: 21.578500000000002
- type: nauc_ndcg_at_100_diff1
value: 10.639899999999999
- type: nauc_ndcg_at_1000_max
value: 42.6963
- type: nauc_ndcg_at_1000_std
value: 26.8738
- type: nauc_ndcg_at_1000_diff1
value: 10.3058
- type: nauc_map_at_1_max
value: 3.4625999999999997
- type: nauc_map_at_1_std
value: -18.5752
- type: nauc_map_at_1_diff1
value: 42.8963
- type: nauc_map_at_3_max
value: 10.6291
- type: nauc_map_at_3_std
value: -13.977500000000001
- type: nauc_map_at_3_diff1
value: 31.7847
- type: nauc_map_at_5_max
value: 13.558200000000001
- type: nauc_map_at_5_std
value: -13.120499999999998
- type: nauc_map_at_5_diff1
value: 29.016199999999998
- type: nauc_map_at_10_max
value: 18.4615
- type: nauc_map_at_10_std
value: -8.2696
- type: nauc_map_at_10_diff1
value: 24.9417
- type: nauc_map_at_20_max
value: 22.4656
- type: nauc_map_at_20_std
value: -3.5717
- type: nauc_map_at_20_diff1
value: 21.6654
- type: nauc_map_at_100_max
value: 26.3009
- type: nauc_map_at_100_std
value: 4.6873
- type: nauc_map_at_100_diff1
value: 18.1131
- type: nauc_map_at_1000_max
value: 26.965
- type: nauc_map_at_1000_std
value: 8.6958
- type: nauc_map_at_1000_diff1
value: 15.898499999999999
- type: nauc_recall_at_1_max
value: 3.4625999999999997
- type: nauc_recall_at_1_std
value: -18.5752
- type: nauc_recall_at_1_diff1
value: 42.8963
- type: nauc_recall_at_3_max
value: 8.9686
- type: nauc_recall_at_3_std
value: -13.836599999999999
- type: nauc_recall_at_3_diff1
value: 27.528799999999997
- type: nauc_recall_at_5_max
value: 11.818900000000001
- type: nauc_recall_at_5_std
value: -13.414499999999999
- type: nauc_recall_at_5_diff1
value: 24.535899999999998
- type: nauc_recall_at_10_max
value: 15.515300000000002
- type: nauc_recall_at_10_std
value: -7.9226
- type: nauc_recall_at_10_diff1
value: 19.0975
- type: nauc_recall_at_20_max
value: 18.786
- type: nauc_recall_at_20_std
value: -3.7244
- type: nauc_recall_at_20_diff1
value: 13.8233
- type: nauc_recall_at_100_max
value: 21.3034
- type: nauc_recall_at_100_std
value: 9.569600000000001
- type: nauc_recall_at_100_diff1
value: 4.5838
- type: nauc_recall_at_1000_max
value: 18.991
- type: nauc_recall_at_1000_std
value: 14.690100000000001
- type: nauc_recall_at_1000_diff1
value: -6.4048
- type: nauc_precision_at_1_max
value: 38.888099999999994
- type: nauc_precision_at_1_std
value: 19.6065
- type: nauc_precision_at_1_diff1
value: 18.8443
- type: nauc_precision_at_3_max
value: 38.5528
- type: nauc_precision_at_3_std
value: 22.456
- type: nauc_precision_at_3_diff1
value: -0.0451
- type: nauc_precision_at_5_max
value: 36.9122
- type: nauc_precision_at_5_std
value: 24.6269
- type: nauc_precision_at_5_diff1
value: -5.5478
- type: nauc_precision_at_10_max
value: 35.3133
- type: nauc_precision_at_10_std
value: 32.6397
- type: nauc_precision_at_10_diff1
value: -12.6832
- type: nauc_precision_at_20_max
value: 31.1714
- type: nauc_precision_at_20_std
value: 37.2027
- type: nauc_precision_at_20_diff1
value: -15.146
- type: nauc_precision_at_100_max
value: 16.4786
- type: nauc_precision_at_100_std
value: 41.3436
- type: nauc_precision_at_100_diff1
value: -19.1827
- type: nauc_precision_at_1000_max
value: 5.0432999999999995
- type: nauc_precision_at_1000_std
value: 34.5184
- type: nauc_precision_at_1000_diff1
value: -18.926399999999997
- type: nauc_mrr_at_1_max
value: 38.888099999999994
- type: nauc_mrr_at_1_std
value: 19.6065
- type: nauc_mrr_at_1_diff1
value: 18.8443
- type: nauc_mrr_at_3_max
value: 43.250699999999995
- type: nauc_mrr_at_3_std
value: 23.7515
- type: nauc_mrr_at_3_diff1
value: 18.895899999999997
- type: nauc_mrr_at_5_max
value: 43.845600000000005
- type: nauc_mrr_at_5_std
value: 23.7064
- type: nauc_mrr_at_5_diff1
value: 19.9659
- type: nauc_mrr_at_10_max
value: 44.075900000000004
- type: nauc_mrr_at_10_std
value: 24.7276
- type: nauc_mrr_at_10_diff1
value: 19.4665
- type: nauc_mrr_at_20_max
value: 44.169599999999996
- type: nauc_mrr_at_20_std
value: 24.6675
- type: nauc_mrr_at_20_diff1
value: 19.1246
- type: nauc_mrr_at_100_max
value: 44.1064
- type: nauc_mrr_at_100_std
value: 24.5961
- type: nauc_mrr_at_100_diff1
value: 19.1742
- type: nauc_mrr_at_1000_max
value: 44.083099999999995
- type: nauc_mrr_at_1000_std
value: 24.5796
- type: nauc_mrr_at_1000_diff1
value: 19.1781
- type: main_score
value: 43.313
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: ndcg_at_1
value: 52.751999999999995
- type: ndcg_at_3
value: 65.18599999999999
- type: ndcg_at_5
value: 68.931
- type: ndcg_at_10
value: 71.487
- type: ndcg_at_20
value: 72.694
- type: ndcg_at_100
value: 73.603
- type: ndcg_at_1000
value: 73.772
- type: map_at_1
value: 47.446
- type: map_at_3
value: 60.94500000000001
- type: map_at_5
value: 63.33
- type: map_at_10
value: 64.581
- type: map_at_20
value: 64.97800000000001
- type: map_at_100
value: 65.142
- type: map_at_1000
value: 65.15100000000001
- type: recall_at_1
value: 47.446
- type: recall_at_3
value: 74.13300000000001
- type: recall_at_5
value: 82.575
- type: recall_at_10
value: 89.827
- type: recall_at_20
value: 94.238
- type: recall_at_100
value: 98.646
- type: recall_at_1000
value: 99.846
- type: precision_at_1
value: 52.751999999999995
- type: precision_at_3
value: 28.910999999999998
- type: precision_at_5
value: 19.606
- type: precision_at_10
value: 10.8
- type: precision_at_20
value: 5.694
- type: precision_at_100
value: 1.199
- type: precision_at_1000
value: 0.122
- type: mrr_at_1
value: 52.751999999999995
- type: mrr_at_3
value: 64.10289999999999
- type: mrr_at_5
value: 65.8613
- type: mrr_at_10
value: 66.686
- type: mrr_at_20
value: 66.9409
- type: mrr_at_100
value: 67.0379
- type: mrr_at_1000
value: 67.0432
- type: nauc_ndcg_at_1_max
value: 30.3808
- type: nauc_ndcg_at_1_std
value: -5.9129000000000005
- type: nauc_ndcg_at_1_diff1
value: 47.0165
- type: nauc_ndcg_at_3_max
value: 35.7466
- type: nauc_ndcg_at_3_std
value: -8.3171
- type: nauc_ndcg_at_3_diff1
value: 41.8012
- type: nauc_ndcg_at_5_max
value: 36.5622
- type: nauc_ndcg_at_5_std
value: -7.2429
- type: nauc_ndcg_at_5_diff1
value: 41.8656
- type: nauc_ndcg_at_10_max
value: 36.8005
- type: nauc_ndcg_at_10_std
value: -6.1641
- type: nauc_ndcg_at_10_diff1
value: 42.9286
- type: nauc_ndcg_at_20_max
value: 36.518
- type: nauc_ndcg_at_20_std
value: -5.389200000000001
- type: nauc_ndcg_at_20_diff1
value: 43.249500000000005
- type: nauc_ndcg_at_100_max
value: 35.918
- type: nauc_ndcg_at_100_std
value: -5.2897
- type: nauc_ndcg_at_100_diff1
value: 43.4364
- type: nauc_ndcg_at_1000_max
value: 35.5555
- type: nauc_ndcg_at_1000_std
value: -5.657100000000001
- type: nauc_ndcg_at_1000_diff1
value: 43.2952
- type: nauc_map_at_1_max
value: 27.845399999999998
- type: nauc_map_at_1_std
value: -8.4687
- type: nauc_map_at_1_diff1
value: 46.1481
- type: nauc_map_at_3_max
value: 33.8624
- type: nauc_map_at_3_std
value: -8.7464
- type: nauc_map_at_3_diff1
value: 42.821
- type: nauc_map_at_5_max
value: 34.3698
- type: nauc_map_at_5_std
value: -7.893600000000001
- type: nauc_map_at_5_diff1
value: 42.869600000000005
- type: nauc_map_at_10_max
value: 34.469899999999996
- type: nauc_map_at_10_std
value: -7.372199999999999
- type: nauc_map_at_10_diff1
value: 43.2528
- type: nauc_map_at_20_max
value: 34.4121
- type: nauc_map_at_20_std
value: -7.1491
- type: nauc_map_at_20_diff1
value: 43.3789
- type: nauc_map_at_100_max
value: 34.3433
- type: nauc_map_at_100_std
value: -7.0851999999999995
- type: nauc_map_at_100_diff1
value: 43.3953
- type: nauc_map_at_1000_max
value: 34.3293
- type: nauc_map_at_1000_std
value: -7.096900000000001
- type: nauc_map_at_1000_diff1
value: 43.3893
- type: nauc_recall_at_1_max
value: 27.845399999999998
- type: nauc_recall_at_1_std
value: -8.4687
- type: nauc_recall_at_1_diff1
value: 46.1481
- type: nauc_recall_at_3_max
value: 39.4993
- type: nauc_recall_at_3_std
value: -10.2782
- type: nauc_recall_at_3_diff1
value: 36.239399999999996
- type: nauc_recall_at_5_max
value: 43.7836
- type: nauc_recall_at_5_std
value: -8.4871
- type: nauc_recall_at_5_diff1
value: 34.7778
- type: nauc_recall_at_10_max
value: 50.5553
- type: nauc_recall_at_10_std
value: -3.7254
- type: nauc_recall_at_10_diff1
value: 38.756800000000005
- type: nauc_recall_at_20_max
value: 56.6151
- type: nauc_recall_at_20_std
value: 7.4285000000000005
- type: nauc_recall_at_20_diff1
value: 39.919
- type: nauc_recall_at_100_max
value: 79.1225
- type: nauc_recall_at_100_std
value: 45.264700000000005
- type: nauc_recall_at_100_diff1
value: 52.2652
- type: nauc_recall_at_1000_max
value: 75.74210000000001
- type: nauc_recall_at_1000_std
value: 71.3295
- type: nauc_recall_at_1000_diff1
value: -2.503
- type: nauc_precision_at_1_max
value: 30.3808
- type: nauc_precision_at_1_std
value: -5.9129000000000005
- type: nauc_precision_at_1_diff1
value: 47.0165
- type: nauc_precision_at_3_max
value: 30.1298
- type: nauc_precision_at_3_std
value: 0.9233999999999999
- type: nauc_precision_at_3_diff1
value: 18.6972
- type: nauc_precision_at_5_max
value: 24.6955
- type: nauc_precision_at_5_std
value: 6.4975000000000005
- type: nauc_precision_at_5_diff1
value: 9.0141
- type: nauc_precision_at_10_max
value: 17.7006
- type: nauc_precision_at_10_std
value: 12.1744
- type: nauc_precision_at_10_diff1
value: 1.5388000000000002
- type: nauc_precision_at_20_max
value: 11.473600000000001
- type: nauc_precision_at_20_std
value: 16.2131
- type: nauc_precision_at_20_diff1
value: -4.3109
- type: nauc_precision_at_100_max
value: 3.5707999999999998
- type: nauc_precision_at_100_std
value: 18.5558
- type: nauc_precision_at_100_diff1
value: -10.8589
- type: nauc_precision_at_1000_max
value: -0.23800000000000002
- type: nauc_precision_at_1000_std
value: 16.941300000000002
- type: nauc_precision_at_1000_diff1
value: -13.7766
- type: nauc_mrr_at_1_max
value: 30.3808
- type: nauc_mrr_at_1_std
value: -5.9129000000000005
- type: nauc_mrr_at_1_diff1
value: 47.0165
- type: nauc_mrr_at_3_max
value: 35.058099999999996
- type: nauc_mrr_at_3_std
value: -5.724
- type: nauc_mrr_at_3_diff1
value: 43.3839
- type: nauc_mrr_at_5_max
value: 35.1297
- type: nauc_mrr_at_5_std
value: -5.3953
- type: nauc_mrr_at_5_diff1
value: 43.6497
- type: nauc_mrr_at_10_max
value: 35.001599999999996
- type: nauc_mrr_at_10_std
value: -5.2087
- type: nauc_mrr_at_10_diff1
value: 44.1327
- type: nauc_mrr_at_20_max
value: 34.891299999999994
- type: nauc_mrr_at_20_std
value: -5.0933
- type: nauc_mrr_at_20_diff1
value: 44.163799999999995
- type: nauc_mrr_at_100_max
value: 34.8054
- type: nauc_mrr_at_100_std
value: -5.136
- type: nauc_mrr_at_100_diff1
value: 44.1738
- type: nauc_mrr_at_1000_max
value: 34.795
- type: nauc_mrr_at_1000_std
value: -5.1476
- type: nauc_mrr_at_1000_diff1
value: 44.169599999999996
- type: main_score
value: 71.487
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: ndcg_at_1
value: 83.3
- type: ndcg_at_3
value: 87.50099999999999
- type: ndcg_at_5
value: 88.998
- type: ndcg_at_10
value: 90.038
- type: ndcg_at_20
value: 90.619
- type: ndcg_at_100
value: 91.049
- type: ndcg_at_1000
value: 91.105
- type: map_at_1
value: 72.424
- type: map_at_3
value: 83.76599999999999
- type: map_at_5
value: 85.616
- type: map_at_10
value: 86.626
- type: map_at_20
value: 87.034
- type: map_at_100
value: 87.233
- type: map_at_1000
value: 87.24499999999999
- type: recall_at_1
value: 72.424
- type: recall_at_3
value: 89.155
- type: recall_at_5
value: 93.425
- type: recall_at_10
value: 96.514
- type: recall_at_20
value: 98.334
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.997
- type: precision_at_1
value: 83.3
- type: precision_at_3
value: 38.393
- type: precision_at_5
value: 25.228
- type: precision_at_10
value: 13.647
- type: precision_at_20
value: 7.225
- type: precision_at_100
value: 1.543
- type: precision_at_1000
value: 0.157
- type: mrr_at_1
value: 83.3
- type: mrr_at_3
value: 88.3
- type: mrr_at_5
value: 88.90350000000001
- type: mrr_at_10
value: 89.13839999999999
- type: mrr_at_20
value: 89.1953
- type: mrr_at_100
value: 89.21170000000001
- type: mrr_at_1000
value: 89.212
- type: nauc_ndcg_at_1_max
value: 36.066500000000005
- type: nauc_ndcg_at_1_std
value: -42.5286
- type: nauc_ndcg_at_1_diff1
value: 79.3747
- type: nauc_ndcg_at_3_max
value: 33.2171
- type: nauc_ndcg_at_3_std
value: -50.6815
- type: nauc_ndcg_at_3_diff1
value: 77.3029
- type: nauc_ndcg_at_5_max
value: 33.578
- type: nauc_ndcg_at_5_std
value: -51.302400000000006
- type: nauc_ndcg_at_5_diff1
value: 77.73219999999999
- type: nauc_ndcg_at_10_max
value: 34.1893
- type: nauc_ndcg_at_10_std
value: -51.231
- type: nauc_ndcg_at_10_diff1
value: 77.91510000000001
- type: nauc_ndcg_at_20_max
value: 34.7652
- type: nauc_ndcg_at_20_std
value: -49.2675
- type: nauc_ndcg_at_20_diff1
value: 77.8851
- type: nauc_ndcg_at_100_max
value: 35.1212
- type: nauc_ndcg_at_100_std
value: -47.1897
- type: nauc_ndcg_at_100_diff1
value: 77.8819
- type: nauc_ndcg_at_1000_max
value: 35.2096
- type: nauc_ndcg_at_1000_std
value: -47.1269
- type: nauc_ndcg_at_1000_diff1
value: 77.874
- type: nauc_map_at_1_max
value: 25.3507
- type: nauc_map_at_1_std
value: -42.6645
- type: nauc_map_at_1_diff1
value: 82.1013
- type: nauc_map_at_3_max
value: 30.685000000000002
- type: nauc_map_at_3_std
value: -52.8243
- type: nauc_map_at_3_diff1
value: 78.8106
- type: nauc_map_at_5_max
value: 32.2752
- type: nauc_map_at_5_std
value: -52.546800000000005
- type: nauc_map_at_5_diff1
value: 78.4853
- type: nauc_map_at_10_max
value: 33.239000000000004
- type: nauc_map_at_10_std
value: -51.5082
- type: nauc_map_at_10_diff1
value: 78.1853
- type: nauc_map_at_20_max
value: 33.644200000000005
- type: nauc_map_at_20_std
value: -50.1154
- type: nauc_map_at_20_diff1
value: 78.0265
- type: nauc_map_at_100_max
value: 33.7945
- type: nauc_map_at_100_std
value: -49.1504
- type: nauc_map_at_100_diff1
value: 77.9653
- type: nauc_map_at_1000_max
value: 33.813500000000005
- type: nauc_map_at_1000_std
value: -49.119800000000005
- type: nauc_map_at_1000_diff1
value: 77.9596
- type: nauc_recall_at_1_max
value: 25.3507
- type: nauc_recall_at_1_std
value: -42.6645
- type: nauc_recall_at_1_diff1
value: 82.1013
- type: nauc_recall_at_3_max
value: 26.4941
- type: nauc_recall_at_3_std
value: -62.044399999999996
- type: nauc_recall_at_3_diff1
value: 75.3519
- type: nauc_recall_at_5_max
value: 26.5483
- type: nauc_recall_at_5_std
value: -68.6455
- type: nauc_recall_at_5_diff1
value: 74.2904
- type: nauc_recall_at_10_max
value: 28.0472
- type: nauc_recall_at_10_std
value: -80.47109999999999
- type: nauc_recall_at_10_diff1
value: 74.5668
- type: nauc_recall_at_20_max
value: 29.296699999999998
- type: nauc_recall_at_20_std
value: -77.2526
- type: nauc_recall_at_20_diff1
value: 73.5218
- type: nauc_recall_at_100_max
value: 29.6001
- type: nauc_recall_at_100_std
value: -36.9144
- type: nauc_recall_at_100_diff1
value: 71.9031
- type: nauc_recall_at_1000_max
value: 65.9499
- type: nauc_recall_at_1000_std
value: 65.2771
- type: nauc_recall_at_1000_diff1
value: 68.762
- type: nauc_precision_at_1_max
value: 36.066500000000005
- type: nauc_precision_at_1_std
value: -42.5286
- type: nauc_precision_at_1_diff1
value: 79.3747
- type: nauc_precision_at_3_max
value: 5.2775
- type: nauc_precision_at_3_std
value: 5.7708
- type: nauc_precision_at_3_diff1
value: -24.618100000000002
- type: nauc_precision_at_5_max
value: 0.9323
- type: nauc_precision_at_5_std
value: 17.5438
- type: nauc_precision_at_5_diff1
value: -36.0304
- type: nauc_precision_at_10_max
value: -2.321
- type: nauc_precision_at_10_std
value: 26.4542
- type: nauc_precision_at_10_diff1
value: -42.2061
- type: nauc_precision_at_20_max
value: -4.122
- type: nauc_precision_at_20_std
value: 33.5662
- type: nauc_precision_at_20_diff1
value: -44.6324
- type: nauc_precision_at_100_max
value: -5.6459
- type: nauc_precision_at_100_std
value: 40.1704
- type: nauc_precision_at_100_diff1
value: -45.8818
- type: nauc_precision_at_1000_max
value: -5.5985
- type: nauc_precision_at_1000_std
value: 40.402100000000004
- type: nauc_precision_at_1000_diff1
value: -46.054
- type: nauc_mrr_at_1_max
value: 36.066500000000005
- type: nauc_mrr_at_1_std
value: -42.5286
- type: nauc_mrr_at_1_diff1
value: 79.3747
- type: nauc_mrr_at_3_max
value: 36.3166
- type: nauc_mrr_at_3_std
value: -45.0789
- type: nauc_mrr_at_3_diff1
value: 78.3276
- type: nauc_mrr_at_5_max
value: 36.365700000000004
- type: nauc_mrr_at_5_std
value: -45.1793
- type: nauc_mrr_at_5_diff1
value: 78.5424
- type: nauc_mrr_at_10_max
value: 36.3128
- type: nauc_mrr_at_10_std
value: -45.0655
- type: nauc_mrr_at_10_diff1
value: 78.5756
- type: nauc_mrr_at_20_max
value: 36.327
- type: nauc_mrr_at_20_std
value: -44.853500000000004
- type: nauc_mrr_at_20_diff1
value: 78.57220000000001
- type: nauc_mrr_at_100_max
value: 36.3322
- type: nauc_mrr_at_100_std
value: -44.8025
- type: nauc_mrr_at_100_diff1
value: 78.57730000000001
- type: nauc_mrr_at_1000_max
value: 36.3314
- type: nauc_mrr_at_1000_std
value: -44.804100000000005
- type: nauc_mrr_at_1000_diff1
value: 78.5778
- type: main_score
value: 90.038
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.0223
- type: v_measure_std
value: 2.5644
- type: main_score
value: 73.0223
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 74.7154
- type: v_measure_std
value: 8.2851
- type: main_score
value: 74.7154
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: ndcg_at_1
value: 29.099999999999998
- type: ndcg_at_3
value: 22.891000000000002
- type: ndcg_at_5
value: 20.055
- type: ndcg_at_10
value: 24.637999999999998
- type: ndcg_at_20
value: 28.188999999999997
- type: ndcg_at_100
value: 34.589
- type: ndcg_at_1000
value: 39.93
- type: map_at_1
value: 5.903
- type: map_at_3
value: 10.375
- type: map_at_5
value: 12.503
- type: map_at_10
value: 14.997
- type: map_at_20
value: 16.383
- type: map_at_100
value: 17.769
- type: map_at_1000
value: 18.115000000000002
- type: recall_at_1
value: 5.903
- type: recall_at_3
value: 12.943
- type: recall_at_5
value: 17.818
- type: recall_at_10
value: 26.162999999999997
- type: recall_at_20
value: 34.46
- type: recall_at_100
value: 55.24
- type: recall_at_1000
value: 81.11500000000001
- type: precision_at_1
value: 29.099999999999998
- type: precision_at_3
value: 21.3
- type: precision_at_5
value: 17.599999999999998
- type: precision_at_10
value: 12.920000000000002
- type: precision_at_20
value: 8.5
- type: precision_at_100
value: 2.721
- type: precision_at_1000
value: 0.4
- type: mrr_at_1
value: 29.099999999999998
- type: mrr_at_3
value: 36.8667
- type: mrr_at_5
value: 38.9617
- type: mrr_at_10
value: 40.598600000000005
- type: mrr_at_20
value: 41.3132
- type: mrr_at_100
value: 41.7298
- type: mrr_at_1000
value: 41.7584
- type: nauc_ndcg_at_1_max
value: 18.4908
- type: nauc_ndcg_at_1_std
value: 8.3766
- type: nauc_ndcg_at_1_diff1
value: 22.0483
- type: nauc_ndcg_at_3_max
value: 19.1712
- type: nauc_ndcg_at_3_std
value: 10.407399999999999
- type: nauc_ndcg_at_3_diff1
value: 16.301299999999998
- type: nauc_ndcg_at_5_max
value: 20.069
- type: nauc_ndcg_at_5_std
value: 11.8195
- type: nauc_ndcg_at_5_diff1
value: 16.4224
- type: nauc_ndcg_at_10_max
value: 20.2641
- type: nauc_ndcg_at_10_std
value: 14.7043
- type: nauc_ndcg_at_10_diff1
value: 15.1067
- type: nauc_ndcg_at_20_max
value: 21.5015
- type: nauc_ndcg_at_20_std
value: 17.292199999999998
- type: nauc_ndcg_at_20_diff1
value: 14.1651
- type: nauc_ndcg_at_100_max
value: 23.2308
- type: nauc_ndcg_at_100_std
value: 22.425800000000002
- type: nauc_ndcg_at_100_diff1
value: 12.2911
- type: nauc_ndcg_at_1000_max
value: 23.4699
- type: nauc_ndcg_at_1000_std
value: 22.764499999999998
- type: nauc_ndcg_at_1000_diff1
value: 12.540000000000001
- type: nauc_map_at_1_max
value: 17.8236
- type: nauc_map_at_1_std
value: 8.1625
- type: nauc_map_at_1_diff1
value: 21.8362
- type: nauc_map_at_3_max
value: 19.1677
- type: nauc_map_at_3_std
value: 9.525699999999999
- type: nauc_map_at_3_diff1
value: 16.2302
- type: nauc_map_at_5_max
value: 20.3156
- type: nauc_map_at_5_std
value: 10.6218
- type: nauc_map_at_5_diff1
value: 16.2404
- type: nauc_map_at_10_max
value: 20.2029
- type: nauc_map_at_10_std
value: 12.9342
- type: nauc_map_at_10_diff1
value: 14.7992
- type: nauc_map_at_20_max
value: 20.7278
- type: nauc_map_at_20_std
value: 14.6676
- type: nauc_map_at_20_diff1
value: 14.0374
- type: nauc_map_at_100_max
value: 21.4579
- type: nauc_map_at_100_std
value: 16.490199999999998
- type: nauc_map_at_100_diff1
value: 13.5188
- type: nauc_map_at_1000_max
value: 21.5714
- type: nauc_map_at_1000_std
value: 16.746
- type: nauc_map_at_1000_diff1
value: 13.4916
- type: nauc_recall_at_1_max
value: 17.8236
- type: nauc_recall_at_1_std
value: 8.1625
- type: nauc_recall_at_1_diff1
value: 21.8362
- type: nauc_recall_at_3_max
value: 18.5764
- type: nauc_recall_at_3_std
value: 11.3546
- type: nauc_recall_at_3_diff1
value: 13.5547
- type: nauc_recall_at_5_max
value: 19.8008
- type: nauc_recall_at_5_std
value: 13.3324
- type: nauc_recall_at_5_diff1
value: 13.5029
- type: nauc_recall_at_10_max
value: 18.645400000000002
- type: nauc_recall_at_10_std
value: 17.5544
- type: nauc_recall_at_10_diff1
value: 10.4276
- type: nauc_recall_at_20_max
value: 19.9903
- type: nauc_recall_at_20_std
value: 21.9707
- type: nauc_recall_at_20_diff1
value: 7.863
- type: nauc_recall_at_100_max
value: 21.1196
- type: nauc_recall_at_100_std
value: 33.4524
- type: nauc_recall_at_100_diff1
value: 0.7107
- type: nauc_recall_at_1000_max
value: 20.188
- type: nauc_recall_at_1000_std
value: 39.786
- type: nauc_recall_at_1000_diff1
value: -4.0011
- type: nauc_precision_at_1_max
value: 18.4908
- type: nauc_precision_at_1_std
value: 8.3766
- type: nauc_precision_at_1_diff1
value: 22.0483
- type: nauc_precision_at_3_max
value: 19.178600000000003
- type: nauc_precision_at_3_std
value: 11.6744
- type: nauc_precision_at_3_diff1
value: 13.5931
- type: nauc_precision_at_5_max
value: 20.4581
- type: nauc_precision_at_5_std
value: 13.5076
- type: nauc_precision_at_5_diff1
value: 13.619600000000002
- type: nauc_precision_at_10_max
value: 19.437099999999997
- type: nauc_precision_at_10_std
value: 17.754900000000003
- type: nauc_precision_at_10_diff1
value: 10.6315
- type: nauc_precision_at_20_max
value: 20.9159
- type: nauc_precision_at_20_std
value: 21.9286
- type: nauc_precision_at_20_diff1
value: 8.251
- type: nauc_precision_at_100_max
value: 22.2412
- type: nauc_precision_at_100_std
value: 32.925900000000006
- type: nauc_precision_at_100_diff1
value: 1.5637999999999999
- type: nauc_precision_at_1000_max
value: 22.1
- type: nauc_precision_at_1000_std
value: 37.227700000000006
- type: nauc_precision_at_1000_diff1
value: -2.3548
- type: nauc_mrr_at_1_max
value: 18.4908
- type: nauc_mrr_at_1_std
value: 8.3766
- type: nauc_mrr_at_1_diff1
value: 22.0483
- type: nauc_mrr_at_3_max
value: 18.3602
- type: nauc_mrr_at_3_std
value: 9.579600000000001
- type: nauc_mrr_at_3_diff1
value: 18.7544
- type: nauc_mrr_at_5_max
value: 18.5224
- type: nauc_mrr_at_5_std
value: 9.747300000000001
- type: nauc_mrr_at_5_diff1
value: 19.0625
- type: nauc_mrr_at_10_max
value: 19.067
- type: nauc_mrr_at_10_std
value: 10.7103
- type: nauc_mrr_at_10_diff1
value: 19.444
- type: nauc_mrr_at_20_max
value: 19.118399999999998
- type: nauc_mrr_at_20_std
value: 10.6384
- type: nauc_mrr_at_20_diff1
value: 19.5223
- type: nauc_mrr_at_100_max
value: 19.200300000000002
- type: nauc_mrr_at_100_std
value: 10.6891
- type: nauc_mrr_at_100_diff1
value: 19.4009
- type: nauc_mrr_at_1000_max
value: 19.1676
- type: nauc_mrr_at_1000_std
value: 10.649799999999999
- type: nauc_mrr_at_1000_diff1
value: 19.4127
- type: main_score
value: 24.637999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: pearson
value: 86.8981
- type: spearman
value: 82.234
- type: cosine_pearson
value: 86.8981
- type: cosine_spearman
value: 82.234
- type: manhattan_pearson
value: 84.1165
- type: manhattan_spearman
value: 82.063
- type: euclidean_pearson
value: 84.0688
- type: euclidean_spearman
value: 82.2329
- type: main_score
value: 82.234
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: pearson
value: 87.8269
- type: spearman
value: 80.2578
- type: cosine_pearson
value: 87.8269
- type: cosine_spearman
value: 80.2578
- type: manhattan_pearson
value: 85.09020000000001
- type: manhattan_spearman
value: 80.5777
- type: euclidean_pearson
value: 84.71990000000001
- type: euclidean_spearman
value: 80.2547
- type: main_score
value: 80.2578
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: pearson
value: 89.45769999999999
- type: spearman
value: 89.69200000000001
- type: cosine_pearson
value: 89.45769999999999
- type: cosine_spearman
value: 89.69200000000001
- type: manhattan_pearson
value: 89.1685
- type: manhattan_spearman
value: 89.8889
- type: euclidean_pearson
value: 89.07969999999999
- type: euclidean_spearman
value: 89.7003
- type: main_score
value: 89.69200000000001
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: pearson
value: 87.01989999999999
- type: spearman
value: 85.3155
- type: cosine_pearson
value: 87.01989999999999
- type: cosine_spearman
value: 85.3155
- type: manhattan_pearson
value: 86.24159999999999
- type: manhattan_spearman
value: 85.544
- type: euclidean_pearson
value: 86.1032
- type: euclidean_spearman
value: 85.31490000000001
- type: main_score
value: 85.3155
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: pearson
value: 89.593
- type: spearman
value: 90.1574
- type: cosine_pearson
value: 89.593
- type: cosine_spearman
value: 90.1574
- type: manhattan_pearson
value: 89.93910000000001
- type: manhattan_spearman
value: 90.4357
- type: euclidean_pearson
value: 89.76650000000001
- type: euclidean_spearman
value: 90.1554
- type: main_score
value: 90.1574
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: pearson
value: 85.03540000000001
- type: spearman
value: 86.4191
- type: cosine_pearson
value: 85.03540000000001
- type: cosine_spearman
value: 86.4191
- type: manhattan_pearson
value: 85.8992
- type: manhattan_spearman
value: 86.7274
- type: euclidean_pearson
value: 85.638
- type: euclidean_spearman
value: 86.42
- type: main_score
value: 86.4191
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 54.1098
- type: spearman
value: 52.721
- type: cosine_pearson
value: 54.1098
- type: cosine_spearman
value: 52.721
- type: manhattan_pearson
value: 54.890899999999995
- type: manhattan_spearman
value: 52.727900000000005
- type: euclidean_pearson
value: 54.65410000000001
- type: euclidean_spearman
value: 52.8582
- type: main_score
value: 52.721
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 84.3262
- type: spearman
value: 84.1421
- type: cosine_pearson
value: 84.3262
- type: cosine_spearman
value: 84.1421
- type: manhattan_pearson
value: 83.8999
- type: manhattan_spearman
value: 83.9987
- type: euclidean_pearson
value: 84.0174
- type: euclidean_spearman
value: 84.1711
- type: main_score
value: 84.1421
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 91.38969999999999
- type: spearman
value: 91.00789999999999
- type: cosine_pearson
value: 91.38969999999999
- type: cosine_spearman
value: 91.00789999999999
- type: manhattan_pearson
value: 90.8917
- type: manhattan_spearman
value: 90.7834
- type: euclidean_pearson
value: 91.0361
- type: euclidean_spearman
value: 91.011
- type: main_score
value: 91.00789999999999
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 84.636
- type: spearman
value: 84.1977
- type: cosine_pearson
value: 84.636
- type: cosine_spearman
value: 84.1977
- type: manhattan_pearson
value: 84.6895
- type: manhattan_spearman
value: 84.1849
- type: euclidean_pearson
value: 84.753
- type: euclidean_spearman
value: 84.2251
- type: main_score
value: 84.1977
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 85.8595
- type: spearman
value: 86.9965
- type: cosine_pearson
value: 85.8595
- type: cosine_spearman
value: 86.9965
- type: manhattan_pearson
value: 86.92450000000001
- type: manhattan_spearman
value: 87.39489999999999
- type: euclidean_pearson
value: 86.5575
- type: euclidean_spearman
value: 86.997
- type: main_score
value: 86.9965
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 85.959
- type: spearman
value: 86.3103
- type: cosine_pearson
value: 85.959
- type: cosine_spearman
value: 86.3103
- type: manhattan_pearson
value: 85.7486
- type: manhattan_spearman
value: 86.0977
- type: euclidean_pearson
value: 85.8361
- type: euclidean_spearman
value: 86.2741
- type: main_score
value: 86.3103
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 86.2629
- type: spearman
value: 85.8372
- type: cosine_pearson
value: 86.2629
- type: cosine_spearman
value: 85.8372
- type: manhattan_pearson
value: 86.2491
- type: manhattan_spearman
value: 85.4973
- type: euclidean_pearson
value: 86.5074
- type: euclidean_spearman
value: 85.83370000000001
- type: main_score
value: 85.8372
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 57.5675
- type: spearman
value: 55.564899999999994
- type: cosine_pearson
value: 57.5675
- type: cosine_spearman
value: 55.564899999999994
- type: manhattan_pearson
value: 57.4686
- type: manhattan_spearman
value: 55.3816
- type: euclidean_pearson
value: 57.635099999999994
- type: euclidean_spearman
value: 55.501900000000006
- type: main_score
value: 55.564899999999994
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 76.1237
- type: spearman
value: 76.4
- type: cosine_pearson
value: 76.1237
- type: cosine_spearman
value: 76.4
- type: manhattan_pearson
value: 78.6794
- type: manhattan_spearman
value: 77.2282
- type: euclidean_pearson
value: 77.1688
- type: euclidean_spearman
value: 76.4
- type: main_score
value: 76.4
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 73.89099999999999
- type: spearman
value: 77.8292
- type: cosine_pearson
value: 73.89099999999999
- type: cosine_spearman
value: 77.8292
- type: manhattan_pearson
value: 77.53229999999999
- type: manhattan_spearman
value: 78.3926
- type: euclidean_pearson
value: 76.6734
- type: euclidean_spearman
value: 77.8721
- type: main_score
value: 77.8292
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 68.31099999999999
- type: spearman
value: 68.0764
- type: cosine_pearson
value: 68.31099999999999
- type: cosine_spearman
value: 68.0764
- type: manhattan_pearson
value: 69.7423
- type: manhattan_spearman
value: 68.3212
- type: euclidean_pearson
value: 69.5701
- type: euclidean_spearman
value: 68.2299
- type: main_score
value: 68.0764
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 65.0636
- type: spearman
value: 59.4669
- type: cosine_pearson
value: 65.0636
- type: cosine_spearman
value: 59.4669
- type: manhattan_pearson
value: 69.723
- type: manhattan_spearman
value: 60.3823
- type: euclidean_pearson
value: 68.6454
- type: euclidean_spearman
value: 59.384899999999995
- type: main_score
value: 59.4669
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 73.7603
- type: spearman
value: 73.0799
- type: cosine_pearson
value: 73.7603
- type: cosine_spearman
value: 73.0799
- type: manhattan_pearson
value: 76.2351
- type: manhattan_spearman
value: 73.60719999999999
- type: euclidean_pearson
value: 75.61529999999999
- type: euclidean_spearman
value: 72.9809
- type: main_score
value: 73.0799
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: pearson
value: 88.1242
- type: spearman
value: 88.8456
- type: cosine_pearson
value: 88.1242
- type: cosine_spearman
value: 88.8456
- type: manhattan_pearson
value: 88.6122
- type: manhattan_spearman
value: 89.0198
- type: euclidean_pearson
value: 88.5314
- type: euclidean_spearman
value: 88.8432
- type: main_score
value: 88.8456
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.06920000000001
- type: mrr
value: 96.977
- type: nAUC_map_max
value: 49.1778
- type: nAUC_map_std
value: 62.091499999999996
- type: nAUC_map_diff1
value: -13.373899999999999
- type: nAUC_mrr_max
value: 86.7325
- type: nAUC_mrr_std
value: 74.958
- type: nAUC_mrr_diff1
value: 25.6955
- type: main_score
value: 89.06920000000001
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: ndcg_at_1
value: 67.333
- type: ndcg_at_3
value: 75.142
- type: ndcg_at_5
value: 78.873
- type: ndcg_at_10
value: 80.372
- type: ndcg_at_20
value: 80.914
- type: ndcg_at_100
value: 81.563
- type: ndcg_at_1000
value: 81.601
- type: map_at_1
value: 63.99399999999999
- type: map_at_3
value: 72.181
- type: map_at_5
value: 74.665
- type: map_at_10
value: 75.51299999999999
- type: map_at_20
value: 75.693
- type: map_at_100
value: 75.78
- type: map_at_1000
value: 75.78099999999999
- type: recall_at_1
value: 63.99399999999999
- type: recall_at_3
value: 80.63900000000001
- type: recall_at_5
value: 90.017
- type: recall_at_10
value: 94.167
- type: recall_at_20
value: 96.167
- type: recall_at_100
value: 99.667
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 67.333
- type: precision_at_3
value: 29.444
- type: precision_at_5
value: 20.067
- type: precision_at_10
value: 10.667
- type: precision_at_20
value: 5.45
- type: precision_at_100
value: 1.127
- type: precision_at_1000
value: 0.11299999999999999
- type: mrr_at_1
value: 67.3333
- type: mrr_at_3
value: 73.9444
- type: mrr_at_5
value: 76.0444
- type: mrr_at_10
value: 76.4517
- type: mrr_at_20
value: 76.5739
- type: mrr_at_100
value: 76.6556
- type: mrr_at_1000
value: 76.6563
- type: nauc_ndcg_at_1_max
value: 55.373099999999994
- type: nauc_ndcg_at_1_std
value: -11.362
- type: nauc_ndcg_at_1_diff1
value: 81.5232
- type: nauc_ndcg_at_3_max
value: 58.0376
- type: nauc_ndcg_at_3_std
value: -14.4854
- type: nauc_ndcg_at_3_diff1
value: 77.848
- type: nauc_ndcg_at_5_max
value: 55.789699999999996
- type: nauc_ndcg_at_5_std
value: -15.0412
- type: nauc_ndcg_at_5_diff1
value: 77.7873
- type: nauc_ndcg_at_10_max
value: 58.6243
- type: nauc_ndcg_at_10_std
value: -13.867799999999999
- type: nauc_ndcg_at_10_diff1
value: 78.071
- type: nauc_ndcg_at_20_max
value: 58.9032
- type: nauc_ndcg_at_20_std
value: -12.7998
- type: nauc_ndcg_at_20_diff1
value: 78.12010000000001
- type: nauc_ndcg_at_100_max
value: 58.0772
- type: nauc_ndcg_at_100_std
value: -12.4604
- type: nauc_ndcg_at_100_diff1
value: 78.7002
- type: nauc_ndcg_at_1000_max
value: 58.0835
- type: nauc_ndcg_at_1000_std
value: -12.662799999999999
- type: nauc_ndcg_at_1000_diff1
value: 78.7157
- type: nauc_map_at_1_max
value: 45.2711
- type: nauc_map_at_1_std
value: -21.685
- type: nauc_map_at_1_diff1
value: 80.8536
- type: nauc_map_at_3_max
value: 53.970600000000005
- type: nauc_map_at_3_std
value: -18.0178
- type: nauc_map_at_3_diff1
value: 78.95309999999999
- type: nauc_map_at_5_max
value: 54.7643
- type: nauc_map_at_5_std
value: -15.943999999999999
- type: nauc_map_at_5_diff1
value: 78.9665
- type: nauc_map_at_10_max
value: 56.737899999999996
- type: nauc_map_at_10_std
value: -14.355599999999999
- type: nauc_map_at_10_diff1
value: 79.095
- type: nauc_map_at_20_max
value: 56.804
- type: nauc_map_at_20_std
value: -14.033599999999998
- type: nauc_map_at_20_diff1
value: 79.0729
- type: nauc_map_at_100_max
value: 56.7168
- type: nauc_map_at_100_std
value: -14.000000000000002
- type: nauc_map_at_100_diff1
value: 79.1456
- type: nauc_map_at_1000_max
value: 56.7168
- type: nauc_map_at_1000_std
value: -14.003099999999998
- type: nauc_map_at_1000_diff1
value: 79.1458
- type: nauc_recall_at_1_max
value: 45.2711
- type: nauc_recall_at_1_std
value: -21.685
- type: nauc_recall_at_1_diff1
value: 80.8536
- type: nauc_recall_at_3_max
value: 55.911100000000005
- type: nauc_recall_at_3_std
value: -19.1603
- type: nauc_recall_at_3_diff1
value: 72.51689999999999
- type: nauc_recall_at_5_max
value: 50.6317
- type: nauc_recall_at_5_std
value: -21.4004
- type: nauc_recall_at_5_diff1
value: 68.3824
- type: nauc_recall_at_10_max
value: 67.3176
- type: nauc_recall_at_10_std
value: -23.6601
- type: nauc_recall_at_10_diff1
value: 67.20150000000001
- type: nauc_recall_at_20_max
value: 76.566
- type: nauc_recall_at_20_std
value: -12.4873
- type: nauc_recall_at_20_diff1
value: 63.6849
- type: nauc_recall_at_100_max
value: 55.415499999999994
- type: nauc_recall_at_100_std
value: 72.2222
- type: nauc_recall_at_100_diff1
value: 72.2222
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 55.373099999999994
- type: nauc_precision_at_1_std
value: -11.362
- type: nauc_precision_at_1_diff1
value: 81.5232
- type: nauc_precision_at_3_max
value: 59.23969999999999
- type: nauc_precision_at_3_std
value: 17.3868
- type: nauc_precision_at_3_diff1
value: 37.0438
- type: nauc_precision_at_5_max
value: 51.358599999999996
- type: nauc_precision_at_5_std
value: 39.9722
- type: nauc_precision_at_5_diff1
value: 9.2927
- type: nauc_precision_at_10_max
value: 44.6166
- type: nauc_precision_at_10_std
value: 46.4231
- type: nauc_precision_at_10_diff1
value: -7.435700000000001
- type: nauc_precision_at_20_max
value: 41.1852
- type: nauc_precision_at_20_std
value: 51.89359999999999
- type: nauc_precision_at_20_diff1
value: -14.6318
- type: nauc_precision_at_100_max
value: 33.6188
- type: nauc_precision_at_100_std
value: 60.621700000000004
- type: nauc_precision_at_100_diff1
value: -26.376500000000004
- type: nauc_precision_at_1000_max
value: 33.2976
- type: nauc_precision_at_1000_std
value: 60.4508
- type: nauc_precision_at_1000_diff1
value: -27.829500000000003
- type: nauc_mrr_at_1_max
value: 55.373099999999994
- type: nauc_mrr_at_1_std
value: -11.362
- type: nauc_mrr_at_1_diff1
value: 81.5232
- type: nauc_mrr_at_3_max
value: 58.7918
- type: nauc_mrr_at_3_std
value: -11.580300000000001
- type: nauc_mrr_at_3_diff1
value: 78.79310000000001
- type: nauc_mrr_at_5_max
value: 58.13270000000001
- type: nauc_mrr_at_5_std
value: -11.2346
- type: nauc_mrr_at_5_diff1
value: 78.8896
- type: nauc_mrr_at_10_max
value: 58.4249
- type: nauc_mrr_at_10_std
value: -11.842600000000001
- type: nauc_mrr_at_10_diff1
value: 79.0629
- type: nauc_mrr_at_20_max
value: 58.4409
- type: nauc_mrr_at_20_std
value: -11.7012
- type: nauc_mrr_at_20_diff1
value: 79.1097
- type: nauc_mrr_at_100_max
value: 58.3685
- type: nauc_mrr_at_100_std
value: -11.6333
- type: nauc_mrr_at_100_diff1
value: 79.1871
- type: nauc_mrr_at_1000_max
value: 58.3686
- type: nauc_mrr_at_1000_std
value: -11.6365
- type: nauc_mrr_at_1000_diff1
value: 79.1874
- type: main_score
value: 80.372
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: similarity_accuracy
value: 99.8317
- type: similarity_accuracy_threshold
value: 87.7015
- type: similarity_f1
value: 91.2821
- type: similarity_f1_threshold
value: 87.68599999999999
- type: similarity_precision
value: 93.68419999999999
- type: similarity_recall
value: 89.0
- type: similarity_ap
value: 96.249
- type: cosine_accuracy
value: 99.8317
- type: cosine_accuracy_threshold
value: 87.7015
- type: cosine_f1
value: 91.2821
- type: cosine_f1_threshold
value: 87.68599999999999
- type: cosine_precision
value: 93.68419999999999
- type: cosine_recall
value: 89.0
- type: cosine_ap
value: 96.249
- type: manhattan_accuracy
value: 99.83370000000001
- type: manhattan_accuracy_threshold
value: 4133.4675
- type: manhattan_f1
value: 91.52369999999999
- type: manhattan_f1_threshold
value: 4244.392400000001
- type: manhattan_precision
value: 92.3625
- type: manhattan_recall
value: 90.7
- type: manhattan_ap
value: 96.4021
- type: euclidean_accuracy
value: 99.83070000000001
- type: euclidean_accuracy_threshold
value: 49.684200000000004
- type: euclidean_f1
value: 91.24419999999999
- type: euclidean_f1_threshold
value: 49.7175
- type: euclidean_precision
value: 93.4942
- type: euclidean_recall
value: 89.1
- type: euclidean_ap
value: 96.24589999999999
- type: dot_accuracy
value: 99.8277
- type: dot_accuracy_threshold
value: 87.3743
- type: dot_f1
value: 91.1495
- type: dot_f1_threshold
value: 87.3743
- type: dot_precision
value: 92.7536
- type: dot_recall
value: 89.60000000000001
- type: dot_ap
value: 96.23509999999999
- type: max_accuracy
value: 99.83370000000001
- type: max_f1
value: 91.52369999999999
- type: max_precision
value: 93.68419999999999
- type: max_recall
value: 90.7
- type: max_ap
value: 96.4021
- type: main_score
value: 96.4021
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 80.19420000000001
- type: v_measure_std
value: 2.8956
- type: main_score
value: 80.19420000000001
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.416900000000005
- type: v_measure_std
value: 1.5938
- type: main_score
value: 49.416900000000005
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.579
- type: mrr
value: 55.598800000000004
- type: nAUC_map_max
value: 9.1665
- type: nAUC_map_std
value: 7.8292
- type: nAUC_map_diff1
value: 38.831199999999995
- type: nAUC_mrr_max
value: 9.7435
- type: nAUC_mrr_std
value: 8.030800000000001
- type: nAUC_mrr_diff1
value: 39.2785
- type: main_score
value: 54.579
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: pearson
value: 31.6006
- type: spearman
value: 31.4212
- type: cosine_spearman
value: 31.4212
- type: cosine_pearson
value: 31.6006
- type: dot_spearman
value: 31.5459
- type: dot_pearson
value: 31.6412
- type: main_score
value: 31.4212
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_3
value: 89.592
- type: ndcg_at_5
value: 88.596
- type: ndcg_at_10
value: 86.732
- type: ndcg_at_20
value: 84.08099999999999
- type: ndcg_at_100
value: 70.204
- type: ndcg_at_1000
value: 63.29
- type: map_at_1
value: 0.243
- type: map_at_3
value: 0.703
- type: map_at_5
value: 1.142
- type: map_at_10
value: 2.2270000000000003
- type: map_at_20
value: 4.176
- type: map_at_100
value: 15.104999999999999
- type: map_at_1000
value: 37.759
- type: recall_at_1
value: 0.243
- type: recall_at_3
value: 0.726
- type: recall_at_5
value: 1.192
- type: recall_at_10
value: 2.359
- type: recall_at_20
value: 4.539
- type: recall_at_100
value: 17.802
- type: recall_at_1000
value: 59.906000000000006
- type: precision_at_1
value: 94.0
- type: precision_at_3
value: 93.333
- type: precision_at_5
value: 91.60000000000001
- type: precision_at_10
value: 90.0
- type: precision_at_20
value: 87.6
- type: precision_at_100
value: 72.42
- type: precision_at_1000
value: 27.842
- type: mrr_at_1
value: 94.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_20
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: nauc_ndcg_at_1_max
value: 53.2855
- type: nauc_ndcg_at_1_std
value: 57.4638
- type: nauc_ndcg_at_1_diff1
value: 4.4059
- type: nauc_ndcg_at_3_max
value: 37.2953
- type: nauc_ndcg_at_3_std
value: 46.1686
- type: nauc_ndcg_at_3_diff1
value: 17.6265
- type: nauc_ndcg_at_5_max
value: 30.1947
- type: nauc_ndcg_at_5_std
value: 53.0996
- type: nauc_ndcg_at_5_diff1
value: 16.5411
- type: nauc_ndcg_at_10_max
value: 36.923
- type: nauc_ndcg_at_10_std
value: 58.658100000000005
- type: nauc_ndcg_at_10_diff1
value: 13.2686
- type: nauc_ndcg_at_20_max
value: 40.2471
- type: nauc_ndcg_at_20_std
value: 62.5481
- type: nauc_ndcg_at_20_diff1
value: 5.9675
- type: nauc_ndcg_at_100_max
value: 38.2307
- type: nauc_ndcg_at_100_std
value: 67.91329999999999
- type: nauc_ndcg_at_100_diff1
value: -12.2539
- type: nauc_ndcg_at_1000_max
value: 41.875
- type: nauc_ndcg_at_1000_std
value: 75.5431
- type: nauc_ndcg_at_1000_diff1
value: -2.8021000000000003
- type: nauc_map_at_1_max
value: -8.0287
- type: nauc_map_at_1_std
value: 26.559500000000003
- type: nauc_map_at_1_diff1
value: 13.9958
- type: nauc_map_at_3_max
value: -3.7122
- type: nauc_map_at_3_std
value: 30.8674
- type: nauc_map_at_3_diff1
value: 25.9597
- type: nauc_map_at_5_max
value: -2.6957
- type: nauc_map_at_5_std
value: 34.3742
- type: nauc_map_at_5_diff1
value: 24.567
- type: nauc_map_at_10_max
value: -1.028
- type: nauc_map_at_10_std
value: 38.6623
- type: nauc_map_at_10_diff1
value: 23.482
- type: nauc_map_at_20_max
value: 0.9729
- type: nauc_map_at_20_std
value: 41.4966
- type: nauc_map_at_20_diff1
value: 21.6255
- type: nauc_map_at_100_max
value: 16.6328
- type: nauc_map_at_100_std
value: 62.829100000000004
- type: nauc_map_at_100_diff1
value: 5.6174
- type: nauc_map_at_1000_max
value: 40.1756
- type: nauc_map_at_1000_std
value: 74.6962
- type: nauc_map_at_1000_diff1
value: -8.3158
- type: nauc_recall_at_1_max
value: -8.0287
- type: nauc_recall_at_1_std
value: 26.559500000000003
- type: nauc_recall_at_1_diff1
value: 13.9958
- type: nauc_recall_at_3_max
value: -6.4821
- type: nauc_recall_at_3_std
value: 24.8695
- type: nauc_recall_at_3_diff1
value: 25.6479
- type: nauc_recall_at_5_max
value: -6.8088999999999995
- type: nauc_recall_at_5_std
value: 28.340700000000002
- type: nauc_recall_at_5_diff1
value: 22.797700000000003
- type: nauc_recall_at_10_max
value: -5.6834999999999996
- type: nauc_recall_at_10_std
value: 30.5569
- type: nauc_recall_at_10_diff1
value: 21.330099999999998
- type: nauc_recall_at_20_max
value: -5.9924
- type: nauc_recall_at_20_std
value: 30.4601
- type: nauc_recall_at_20_diff1
value: 19.365299999999998
- type: nauc_recall_at_100_max
value: 7.007099999999999
- type: nauc_recall_at_100_std
value: 48.9423
- type: nauc_recall_at_100_diff1
value: 9.1493
- type: nauc_recall_at_1000_max
value: 37.2208
- type: nauc_recall_at_1000_std
value: 64.91550000000001
- type: nauc_recall_at_1000_diff1
value: -2.8935
- type: nauc_precision_at_1_max
value: 42.343599999999995
- type: nauc_precision_at_1_std
value: 69.3433
- type: nauc_precision_at_1_diff1
value: -20.028000000000002
- type: nauc_precision_at_3_max
value: 49.335699999999996
- type: nauc_precision_at_3_std
value: 45.994
- type: nauc_precision_at_3_diff1
value: 22.622400000000003
- type: nauc_precision_at_5_max
value: 38.3816
- type: nauc_precision_at_5_std
value: 55.8298
- type: nauc_precision_at_5_diff1
value: 8.4025
- type: nauc_precision_at_10_max
value: 41.6808
- type: nauc_precision_at_10_std
value: 66.1322
- type: nauc_precision_at_10_diff1
value: 4.665699999999999
- type: nauc_precision_at_20_max
value: 42.0959
- type: nauc_precision_at_20_std
value: 61.7217
- type: nauc_precision_at_20_diff1
value: -3.4937000000000005
- type: nauc_precision_at_100_max
value: 40.4606
- type: nauc_precision_at_100_std
value: 63.3467
- type: nauc_precision_at_100_diff1
value: -16.487199999999998
- type: nauc_precision_at_1000_max
value: 39.4253
- type: nauc_precision_at_1000_std
value: 18.8794
- type: nauc_precision_at_1000_diff1
value: -18.4566
- type: nauc_mrr_at_1_max
value: 42.343599999999995
- type: nauc_mrr_at_1_std
value: 69.3433
- type: nauc_mrr_at_1_diff1
value: -20.028000000000002
- type: nauc_mrr_at_3_max
value: 42.343599999999995
- type: nauc_mrr_at_3_std
value: 69.3433
- type: nauc_mrr_at_3_diff1
value: -20.028000000000002
- type: nauc_mrr_at_5_max
value: 42.343599999999995
- type: nauc_mrr_at_5_std
value: 69.3433
- type: nauc_mrr_at_5_diff1
value: -20.028000000000002
- type: nauc_mrr_at_10_max
value: 42.343599999999995
- type: nauc_mrr_at_10_std
value: 69.3433
- type: nauc_mrr_at_10_diff1
value: -20.028000000000002
- type: nauc_mrr_at_20_max
value: 42.343599999999995
- type: nauc_mrr_at_20_std
value: 69.3433
- type: nauc_mrr_at_20_diff1
value: -20.028000000000002
- type: nauc_mrr_at_100_max
value: 42.343599999999995
- type: nauc_mrr_at_100_std
value: 69.3433
- type: nauc_mrr_at_100_diff1
value: -20.028000000000002
- type: nauc_mrr_at_1000_max
value: 42.343599999999995
- type: nauc_mrr_at_1000_std
value: 69.3433
- type: nauc_mrr_at_1000_diff1
value: -20.028000000000002
- type: main_score
value: 86.732
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: ndcg_at_1
value: 37.755
- type: ndcg_at_3
value: 37.185
- type: ndcg_at_5
value: 32.564
- type: ndcg_at_10
value: 31.395
- type: ndcg_at_20
value: 30.494
- type: ndcg_at_100
value: 42.617
- type: ndcg_at_1000
value: 53.324000000000005
- type: map_at_1
value: 2.7640000000000002
- type: map_at_3
value: 6.970999999999999
- type: map_at_5
value: 8.518
- type: map_at_10
value: 11.919
- type: map_at_20
value: 14.389
- type: map_at_100
value: 18.799
- type: map_at_1000
value: 20.366
- type: recall_at_1
value: 2.7640000000000002
- type: recall_at_3
value: 8.323
- type: recall_at_5
value: 11.259
- type: recall_at_10
value: 19.088
- type: recall_at_20
value: 26.488
- type: recall_at_100
value: 53.437
- type: recall_at_1000
value: 86.385
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_3
value: 38.095
- type: precision_at_5
value: 31.429000000000002
- type: precision_at_10
value: 27.346999999999998
- type: precision_at_20
value: 19.184
- type: precision_at_100
value: 8.713999999999999
- type: precision_at_1000
value: 1.6119999999999999
- type: mrr_at_1
value: 38.7755
- type: mrr_at_3
value: 52.7211
- type: mrr_at_5
value: 55.9864
- type: mrr_at_10
value: 57.2554
- type: mrr_at_20
value: 57.382999999999996
- type: mrr_at_100
value: 57.5539
- type: mrr_at_1000
value: 57.5539
- type: nauc_ndcg_at_1_max
value: -20.4965
- type: nauc_ndcg_at_1_std
value: -0.3794
- type: nauc_ndcg_at_1_diff1
value: 24.9951
- type: nauc_ndcg_at_3_max
value: -26.1758
- type: nauc_ndcg_at_3_std
value: 12.2535
- type: nauc_ndcg_at_3_diff1
value: 15.765199999999998
- type: nauc_ndcg_at_5_max
value: -27.4523
- type: nauc_ndcg_at_5_std
value: 5.7325
- type: nauc_ndcg_at_5_diff1
value: 12.3423
- type: nauc_ndcg_at_10_max
value: -24.3957
- type: nauc_ndcg_at_10_std
value: 3.526
- type: nauc_ndcg_at_10_diff1
value: 7.2427
- type: nauc_ndcg_at_20_max
value: -20.866799999999998
- type: nauc_ndcg_at_20_std
value: 6.1578
- type: nauc_ndcg_at_20_diff1
value: 4.1767
- type: nauc_ndcg_at_100_max
value: -17.9588
- type: nauc_ndcg_at_100_std
value: 33.1442
- type: nauc_ndcg_at_100_diff1
value: 4.356199999999999
- type: nauc_ndcg_at_1000_max
value: -11.1072
- type: nauc_ndcg_at_1000_std
value: 35.9222
- type: nauc_ndcg_at_1000_diff1
value: 5.9171000000000005
- type: nauc_map_at_1_max
value: -22.354
- type: nauc_map_at_1_std
value: -10.9271
- type: nauc_map_at_1_diff1
value: 27.7288
- type: nauc_map_at_3_max
value: -22.397
- type: nauc_map_at_3_std
value: -7.2767
- type: nauc_map_at_3_diff1
value: 15.101899999999999
- type: nauc_map_at_5_max
value: -25.7693
- type: nauc_map_at_5_std
value: -10.6509
- type: nauc_map_at_5_diff1
value: 6.9317
- type: nauc_map_at_10_max
value: -22.2658
- type: nauc_map_at_10_std
value: -7.2035
- type: nauc_map_at_10_diff1
value: 3.9056
- type: nauc_map_at_20_max
value: -16.2962
- type: nauc_map_at_20_std
value: -2.595
- type: nauc_map_at_20_diff1
value: 0.5059
- type: nauc_map_at_100_max
value: -15.7023
- type: nauc_map_at_100_std
value: 11.6893
- type: nauc_map_at_100_diff1
value: -0.6727000000000001
- type: nauc_map_at_1000_max
value: -14.162700000000001
- type: nauc_map_at_1000_std
value: 14.4614
- type: nauc_map_at_1000_diff1
value: -0.3813
- type: nauc_recall_at_1_max
value: -22.354
- type: nauc_recall_at_1_std
value: -10.9271
- type: nauc_recall_at_1_diff1
value: 27.7288
- type: nauc_recall_at_3_max
value: -25.067
- type: nauc_recall_at_3_std
value: -3.5932
- type: nauc_recall_at_3_diff1
value: 7.6448
- type: nauc_recall_at_5_max
value: -31.9521
- type: nauc_recall_at_5_std
value: -12.973299999999998
- type: nauc_recall_at_5_diff1
value: -0.8931
- type: nauc_recall_at_10_max
value: -26.1132
- type: nauc_recall_at_10_std
value: -7.3447
- type: nauc_recall_at_10_diff1
value: -4.7741999999999996
- type: nauc_recall_at_20_max
value: -18.1067
- type: nauc_recall_at_20_std
value: 2.8136
- type: nauc_recall_at_20_diff1
value: -10.4605
- type: nauc_recall_at_100_max
value: -16.423199999999998
- type: nauc_recall_at_100_std
value: 44.8333
- type: nauc_recall_at_100_diff1
value: -6.1889
- type: nauc_recall_at_1000_max
value: 8.706
- type: nauc_recall_at_1000_std
value: 74.9427
- type: nauc_recall_at_1000_diff1
value: -11.8207
- type: nauc_precision_at_1_max
value: -17.1924
- type: nauc_precision_at_1_std
value: 0.0948
- type: nauc_precision_at_1_diff1
value: 28.084300000000002
- type: nauc_precision_at_3_max
value: -23.0318
- type: nauc_precision_at_3_std
value: 14.8594
- type: nauc_precision_at_3_diff1
value: 12.4551
- type: nauc_precision_at_5_max
value: -26.849800000000002
- type: nauc_precision_at_5_std
value: 2.2552
- type: nauc_precision_at_5_diff1
value: 6.4963999999999995
- type: nauc_precision_at_10_max
value: -14.1258
- type: nauc_precision_at_10_std
value: 9.764299999999999
- type: nauc_precision_at_10_diff1
value: 2.9594
- type: nauc_precision_at_20_max
value: 0.6306
- type: nauc_precision_at_20_std
value: 24.2147
- type: nauc_precision_at_20_diff1
value: -3.5282
- type: nauc_precision_at_100_max
value: 11.4275
- type: nauc_precision_at_100_std
value: 71.0325
- type: nauc_precision_at_100_diff1
value: -6.978800000000001
- type: nauc_precision_at_1000_max
value: 37.155899999999995
- type: nauc_precision_at_1000_std
value: 20.5794
- type: nauc_precision_at_1000_diff1
value: -11.6191
- type: nauc_mrr_at_1_max
value: -17.1924
- type: nauc_mrr_at_1_std
value: 0.0948
- type: nauc_mrr_at_1_diff1
value: 28.084300000000002
- type: nauc_mrr_at_3_max
value: -26.1233
- type: nauc_mrr_at_3_std
value: 12.6302
- type: nauc_mrr_at_3_diff1
value: 17.8467
- type: nauc_mrr_at_5_max
value: -23.641499999999997
- type: nauc_mrr_at_5_std
value: 8.3047
- type: nauc_mrr_at_5_diff1
value: 20.5063
- type: nauc_mrr_at_10_max
value: -23.5564
- type: nauc_mrr_at_10_std
value: 8.5333
- type: nauc_mrr_at_10_diff1
value: 20.5287
- type: nauc_mrr_at_20_max
value: -23.154700000000002
- type: nauc_mrr_at_20_std
value: 9.079600000000001
- type: nauc_mrr_at_20_diff1
value: 21.3888
- type: nauc_mrr_at_100_max
value: -23.3008
- type: nauc_mrr_at_100_std
value: 8.7424
- type: nauc_mrr_at_100_diff1
value: 21.2832
- type: nauc_mrr_at_1000_max
value: -23.3008
- type: nauc_mrr_at_1000_std
value: 8.7424
- type: nauc_mrr_at_1000_diff1
value: 21.2832
- type: main_score
value: 31.395
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 91.3135
- type: f1
value: 78.12480000000001
- type: f1_weighted
value: 92.4208
- type: ap
value: 41.943599999999996
- type: ap_weighted
value: 41.943599999999996
- type: main_score
value: 91.3135
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 77.24109999999999
- type: f1
value: 77.4964
- type: f1_weighted
value: 77.0248
- type: main_score
value: 77.24109999999999
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 63.132200000000005
- type: v_measure_std
value: 0.7163
- type: main_score
value: 63.132200000000005
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: similarity_accuracy
value: 88.061
- type: similarity_accuracy_threshold
value: 85.9546
- type: similarity_f1
value: 73.36240000000001
- type: similarity_f1_threshold
value: 84.0127
- type: similarity_precision
value: 71.47149999999999
- type: similarity_recall
value: 75.3562
- type: similarity_ap
value: 80.2787
- type: cosine_accuracy
value: 88.061
- type: cosine_accuracy_threshold
value: 85.9546
- type: cosine_f1
value: 73.36240000000001
- type: cosine_f1_threshold
value: 84.0127
- type: cosine_precision
value: 71.47149999999999
- type: cosine_recall
value: 75.3562
- type: cosine_ap
value: 80.2787
- type: manhattan_accuracy
value: 87.8941
- type: manhattan_accuracy_threshold
value: 4374.3263
- type: manhattan_f1
value: 72.929
- type: manhattan_f1_threshold
value: 4637.7289
- type: manhattan_precision
value: 71.3636
- type: manhattan_recall
value: 74.5646
- type: manhattan_ap
value: 79.98140000000001
- type: euclidean_accuracy
value: 88.0789
- type: euclidean_accuracy_threshold
value: 52.9742
- type: euclidean_f1
value: 73.3591
- type: euclidean_f1_threshold
value: 56.4815
- type: euclidean_precision
value: 71.608
- type: euclidean_recall
value: 75.19789999999999
- type: euclidean_ap
value: 80.2656
- type: dot_accuracy
value: 88.0789
- type: dot_accuracy_threshold
value: 85.8034
- type: dot_f1
value: 73.40379999999999
- type: dot_f1_threshold
value: 84.3213
- type: dot_precision
value: 72.6544
- type: dot_recall
value: 74.16890000000001
- type: dot_ap
value: 80.32010000000001
- type: max_accuracy
value: 88.0789
- type: max_f1
value: 73.40379999999999
- type: max_precision
value: 72.6544
- type: max_recall
value: 75.3562
- type: max_ap
value: 80.32010000000001
- type: main_score
value: 80.32010000000001
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: similarity_accuracy
value: 89.93090000000001
- type: similarity_accuracy_threshold
value: 82.1349
- type: similarity_f1
value: 80.1377
- type: similarity_f1_threshold
value: 80.5047
- type: similarity_precision
value: 77.1539
- type: similarity_recall
value: 83.3616
- type: similarity_ap
value: 87.6917
- type: cosine_accuracy
value: 89.93090000000001
- type: cosine_accuracy_threshold
value: 82.1349
- type: cosine_f1
value: 80.1377
- type: cosine_f1_threshold
value: 80.5047
- type: cosine_precision
value: 77.1539
- type: cosine_recall
value: 83.3616
- type: cosine_ap
value: 87.6917
- type: manhattan_accuracy
value: 89.86880000000001
- type: manhattan_accuracy_threshold
value: 4882.7347
- type: manhattan_f1
value: 80.2323
- type: manhattan_f1_threshold
value: 5185.1944
- type: manhattan_precision
value: 76.74889999999999
- type: manhattan_recall
value: 84.0468
- type: manhattan_ap
value: 87.70750000000001
- type: euclidean_accuracy
value: 89.94640000000001
- type: euclidean_accuracy_threshold
value: 59.9149
- type: euclidean_f1
value: 80.1527
- type: euclidean_f1_threshold
value: 62.3611
- type: euclidean_precision
value: 77.2744
- type: euclidean_recall
value: 83.2538
- type: euclidean_ap
value: 87.6922
- type: dot_accuracy
value: 89.9038
- type: dot_accuracy_threshold
value: 82.53049999999999
- type: dot_f1
value: 80.0969
- type: dot_f1_threshold
value: 80.7285
- type: dot_precision
value: 77.5853
- type: dot_recall
value: 82.77640000000001
- type: dot_ap
value: 87.668
- type: max_accuracy
value: 89.94640000000001
- type: max_f1
value: 80.2323
- type: max_precision
value: 77.5853
- type: max_recall
value: 84.0468
- type: max_ap
value: 87.70750000000001
- type: main_score
value: 87.70750000000001
---
## Introduction
Based on dunzhang/stella_en_1.5B_v5 and google/siglip-so400m-patch14-384, this model can encode both text and images.
**Report:** https://arxiv.org/abs/2412.19048
**Codes:** https://github.com/NLPJCL/RAG-Retrieval
**Data:** https://huggingface.co/datasets/infgrad/jasper_text_distill_dataset
**Training logs:** https://api.wandb.ai/links/dunnzhang0/z8jqoqpb
The core idea of Jasper and Stella is distillation: **let the student model learn the teacher model's vectors.**
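For intuition, here is a minimal sketch of what vector distillation can look like. The exact objective below is an assumption for illustration — see the report and the RAG-Retrieval repo above for the real training code:
```python
import torch
import torch.nn.functional as F

def vector_distillation_loss(student_vecs: torch.Tensor,
                             teacher_vecs: torch.Tensor) -> torch.Tensor:
    """Pull L2-normalized student embeddings toward the teacher's.

    A plausible combination of a cosine term and an MSE term; the actual
    loss used for Jasper/Stella may differ.
    """
    student = F.normalize(student_vecs, dim=-1)
    teacher = F.normalize(teacher_vecs, dim=-1)
    cosine_term = 1.0 - (student * teacher).sum(dim=-1).mean()
    mse_term = F.mse_loss(student, teacher)
    return cosine_term + mse_term
```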
## Usage
```python
import torch
from sentence_transformers import SentenceTransformer
DOC1 = """
Blue light is scattered in all directions by the tiny molecules of air in Earth's atmosphere.
Blue is scattered more than other colors because it travels as shorter, smaller waves. This is why we see a blue sky most of the time.
Closer to the horizon, the sky fades to a lighter blue or white.
"""
DOC2 = """
When choosing colors, you can consider the following factors:
Color theory: Understand how colors work together and how they can evoke different reactions.
Color psychology: Consider how colors affect emotions, behaviors, and responses.
Brand identity: Colors can convey meaning and information about a brand.
Mood: Consider the mood you want to create. For example, brighter colors can feel cheerful, while cooler colors can be calming.
Space: Consider the size of the space and the amount of natural light it receives. Dark colors can make a room feel smaller, while light colors can make it feel larger.
Color wheel: Use the color wheel to identify primary, secondary, and tertiary colors.
Color combinations: Decide how to best complement your preferred color with others.
Color palette: Limit your color palette to a main color and one or two additional colors.
60-30-10 rule: Use a primary color 60% of the time, a secondary color 30% of the time, and an accent color 10% of the time
"""
if __name__ == "__main__":
# load model
use_gpu = False
model_name = "infgrad/jasper_en_vision_language_v1"
model = SentenceTransformer(
model_name,
trust_remote_code=True,
device="cpu" if not use_gpu else "cuda",
model_kwargs={
"torch_dtype": torch.bfloat16 if use_gpu else torch.float32,
"attn_implementation": "sdpa"
},
# vector_dim must be 12288, 1024, 512, 256
## 1024 is recommended
# set is_text_encoder to True if you do not encode images
config_kwargs={"is_text_encoder": False, "vector_dim": 1024},
)
# We can reduce the max_seq_length from the default of 2048 for faster encoding
model.max_seq_length = 1024
# data
q_list = [
"Why the sky is blue?",
"how to choose suitable color",
]
doc_list = [
DOC1,
[{"type": "image_path", "content": "./assets/img1.png"}, {"type": "text", "content": "Hope this image helps!"}],
DOC2,
[{"type": "image_path", "content": "./assets/img2.png"}],
]
q_vecs = model.encode(q_list, prompt_name="s2p_query")
doc_vecs = model.encode(doc_list)
# calculate similarity
similarities = model.similarity(q_vecs, doc_vecs)
print(similarities)
# the output is:
# tensor([[0.7775, 0.7594, 0.2429, 0.2187],
# [0.3226, 0.3054, 0.7421, 0.5484]])
```
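If you only need text embeddings, the comments in the snippet above suggest flipping `is_text_encoder` on. A hedged text-only variant using the same loader interface shown above:
```python
# Text-only loading sketch; assumes the config_kwargs interface shown above.
text_model = SentenceTransformer(
    "infgrad/jasper_en_vision_language_v1",
    trust_remote_code=True,
    device="cpu",
    config_kwargs={"is_text_encoder": True, "vector_dim": 1024},
)
vecs = text_model.encode(["Why the sky is blue?"], prompt_name="s2p_query")
print(vecs.shape)  # expected: (1, 1024)
```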
## Evaluation on MTEB
script: ./scripts/evaluate_en_mteb/run_evaluate_mteb.py
## License
**This model should not be used for any commercial purpose!**
## Citation
```
@misc{zhang2025jasperstelladistillationsota,
title={Jasper and Stella: distillation of SOTA embedding models},
author={Dun Zhang and Jiacheng Li and Ziyang Zeng and Fulong Wang},
year={2025},
eprint={2412.19048},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2412.19048},
}
```
| [
"SUMMARIZATION"
] | Non_BioNLP |
midas/gupshup_h2e_t5_mtl | midas | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:1910.04073",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-11-14T02:08:18 | 121 | 0 | ---
{}
---
# Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations use the `.source` file extension (e.g., train.source), whereas summaries use the `.target` extension (e.g., train.target). The `.source` file is passed to the `input_path` argument and the `.target` file to the `reference_path` argument in the scripts.
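As a sanity check before running the scripts, the parallel files can be read line by line. A small sketch (paths are placeholders matching the layout described above):
```python
# Minimal sketch: load line-aligned dialogue/summary pairs.
with open("data/h2e/train.source", encoding="utf-8") as src, \
     open("data/h2e/train.target", encoding="utf-8") as tgt:
    dialogues = [line.strip() for line in src]
    summaries = [line.strip() for line in tgt]

assert len(dialogues) == len(summaries), "source/target files must be line-aligned"
print(dialogues[0], "->", summaries[0])
```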
## Models
All model weights are available on the Hugging Face model hub. Users can either download the weights locally and pass that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly, in which case the scripts download the weights automatically.
Model aliases follow the pattern `gupshup_TASK_MODEL`, where `TASK` can be `h2e` or `e2e` and `MODEL` can be `mbart`, `pegasus`, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
cd gupshup  # requirements.txt is assumed to be at the repo root
pip install -r requirements.txt
```
2. The `run_eval` script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: CUDA devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Or you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
| [
"SUMMARIZATION"
] | Non_BioNLP |
Xorbits/Qwen-14B-Chat-GGUF | Xorbits | text-generation | [
"gguf",
"qwen",
"text-generation",
"zh",
"en",
"arxiv:2309.16609",
"arxiv:2305.08322",
"arxiv:2009.03300",
"arxiv:2305.05280",
"arxiv:2210.03629",
"base_model:Qwen/Qwen-14B-Chat",
"base_model:quantized:Qwen/Qwen-14B-Chat",
"region:us"
] | 1,702,884,492,000 | 2023-12-19T06:51:17 | 102 | 1 | ---
base_model: Qwen/Qwen-14B-Chat
language:
- zh
- en
model_name: Qwen 14B Chat
pipeline_tag: text-generation
tags:
- qwen
inference: false
model_creator: Qwen
model_type: qwen
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
# Qwen 14B Chat - GGUF
- Model creator: [Qwen](https://huggingface.co/Qwen)
- Original model: [Qwen 14B Chat](https://huggingface.co/Qwen/Qwen-14B-Chat)
- Snapshot: https://huggingface.co/Qwen/Qwen-14B-Chat/commit/cdaff792392504e679496a9f386acf3c1e4333a5
<!-- description start -->
## Description
This repo contains GGUF model files for [Qwen's Qwen 14B Chat](https://huggingface.co/Qwen/Qwen-14B-Chat).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
<!-- description end -->
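For example, a GGUF file from this repo can be loaded with the `llama-cpp-python` bindings. A minimal sketch (the quantized filename below is a placeholder — substitute an actual file from this repo), using the ChatML prompt template shown in the metadata above:
```python
from llama_cpp import Llama

# Load a quantized GGUF file (filename is a placeholder).
llm = Llama(model_path="./qwen-14b-chat.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```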
# Qwen-14B-Chat
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/>
<p>
<br>
<p align="center">
🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>   |    📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a>    |   🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-14B-Chat-Demo/summary">Demo</a>
<br>
<a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (微信)</a>   |   <a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>   |   <a href="https://dashscope.aliyun.com">API</a>
</p>
<br>
## Introduction
**Qwen-14B** is the 14B-parameter version of the large language model series Qwen (abbr. Tongyi Qianwen) proposed by Alibaba Cloud. Qwen-14B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-14B, we release Qwen-14B-Chat, a large-model-based AI assistant trained with alignment techniques. This repository is for Qwen-14B-Chat.
For more details about the open-source Qwen-14B model, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.
<br>
## Requirements
* python 3.8 and above
* pytorch 1.12 and above; 2.0 and above is recommended
* CUDA 11.4 and above is recommended (relevant for GPU users, flash-attention users, etc.)
<br>
## Dependencies
To run Qwen-14B-Chat, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries.
```bash
pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
```
In addition, it is recommended to install the `flash-attention` library (**flash attention 2 is now supported**) for higher efficiency and lower memory usage.
```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# The installations below are optional and may be slow to build.
# pip install csrc/layer_norm
# pip install csrc/rotary
```
<br>
## Quickstart
We show an example of multi-turn interaction with Qwen-14B-Chat in the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="cpu", trust_remote_code=True).eval()
# use auto mode, automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="auto", trust_remote_code=True).eval()
# Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)  # you can set generation length, top_p, and other hyperparameters here
# 1st dialogue turn
response, history = model.chat(tokenizer, "你好", history=None)  # "Hello"
print(response)
# Output: 你好!很高兴为你提供帮助。 ("Hello! I'm glad to help you.")
# 2nd dialogue turn
response, history = model.chat(tokenizer, "给我讲一个年轻人奋斗创业最终取得成功的故事。", history=history)  # "Tell me a story about a young person who strives to build a business and finally succeeds."
print(response)
# Output (translated from the Chinese): This is a story about a young man who strove to build a business and
# ultimately succeeded. The protagonist, Li Ming, comes from an ordinary family of workers. From a young age he
# set himself the goal of becoming a successful entrepreneur. He studied hard, was admitted to university, took
# part in entrepreneurship competitions and won several awards, and interned in his spare time to gain experience.
# After graduation he sought investment and was rejected many times, but he did not give up: he kept improving his
# business plan and looking for new opportunities. Eventually he secured an investment and founded a technology
# company focused on new software, which grew rapidly into a successful tech enterprise under his leadership.
# Li Ming's success was no accident: he was diligent, tenacious, willing to take risks, and always learning.
# It shows that with hard work, anyone can succeed.
# 3rd dialogue turn
response, history = model.chat(tokenizer, "给这个故事起一个标题", history=history)  # "Give this story a title"
print(response)
# Output: 《奋斗创业:一个年轻人的成功之路》 ("Striving in Business: A Young Person's Road to Success")
```
For more usage instructions, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen).
<br>
## Quantization
### Usage
**Note: we provide a new quantization solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), and release an Int4 quantized model for Qwen-14B-Chat ([click here](https://huggingface.co/Qwen/Qwen-14B-Chat-Int4)), which achieves nearly lossless model quality with lower memory cost and faster inference speed compared with the previous solution.**
Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements of auto-gptq (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages:
```bash
pip install auto-gptq optimum
```
If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a pre-built wheel.
Then you can load the quantized model easily and run inference as usual:
```python
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen-14B-Chat-Int4",
device_map="auto",
trust_remote_code=True
).eval()
response, history = model.chat(tokenizer, "你好", history=None)
```
### Performance
We illustrate the zero-shot performance of the BF16, Int8 and Int4 models on the benchmarks, and find that the quantized models do not suffer from significant performance degradation. Results are shown below:
| Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
|--------------|:----:|:-----------:|:-----:|:---------:|
| BF16 | 64.6 | 69.8 | 60.1 | 43.9 |
| Int8 | 63.6 | 68.6 | 60.0 | 48.2 |
| Int4 | 63.3 | 69.0 | 59.8 | 45.7 |
### Inference Speed
We measured the average inference speed of generating 2048 and 8192 tokens under different quantization levels and different versions of flash-attention:
| Quantization | FlashAttn | Speed (2048 tokens) | Speed (8192 tokens) |
| ------------- | :-------: | :------------------:| :------------------:|
| BF16 | v2 | 32.88 | 24.87 |
| Int8 | v2 | 29.28 | 24.22 |
| Int4 | v2 | 38.72 | 27.33 |
| BF16 | v1 | 32.76 | 28.89 |
| Int8 | v1 | 28.31 | 23.87 |
| Int4 | v1 | 37.81 | 26.46 |
| BF16 | Disabled | 29.32 | 22.91 |
| Int8 | Disabled | 31.12 | 24.60 |
| Int4 | Disabled | 37.65 | 26.00 |
In detail, the profiling setting is generating 8192 new tokens with 1 context token. The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.8. The inference speed is averaged over the 8192 generated tokens.
Note: The generation speed of the Int4/Int8 models above is measured with the autogptq library; models loaded via `AutoModelForCausalLM.from_pretrained` currently generate roughly 20% slower. We have reported this issue to the HuggingFace team and will update here promptly if a solution is available.
### GPU Memory Usage
We also profile the peak GPU memory usage for encoding 2048 tokens as context (and generating a single token) and for generating 8192 tokens (with a single token as context) under different quantization levels. (GPU memory usage is similar whether or not flash-attention is used.) The results are shown below.
| Quantization Level | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
| ------------------ | :---------------------------------: | :-----------------------------------: |
| BF16 | 30.15GB | 38.94GB |
| Int8 | 18.81GB | 27.54GB |
| Int4 | 13.01GB | 21.79GB |
The above speed and memory profiling were conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py).
<br>
## Model Details
The details of the model architecture of Qwen-14B-Chat are listed as follows:
| Hyperparameter | Value |
|:----------------|:------:|
| n_layers | 40 |
| n_heads | 40 |
| d_model | 5120 |
| vocab size | 151851 |
| sequence length | 2048 |
For position encoding, FFN activation function, and normalization, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU as the activation function, and RMSNorm for normalization (with optional installation of flash-attention for acceleration).
For tokenization, compared with current mainstream open-source models based on Chinese and English vocabularies, Qwen-14B-Chat uses a vocabulary of over 150K tokens.
It first considers efficient encoding of Chinese, English, and code data, and is also friendlier to multilingual data, enabling users to enhance capability for some languages directly without expanding the vocabulary.
It segments numbers by single digits and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization.
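The per-digit segmentation is easy to observe (this sketch assumes the tokenizer loads as in the Quickstart above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)
ids = tokenizer.encode("12345")
print(ids, [tokenizer.decode([i]) for i in ids])  # expect one token per digit
```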
<br>
## Evaluation
For Qwen-14B-Chat, we evaluate the model on C-Eval, MMLU, HumanEval, GSM8K, etc., as well as on benchmarks for long-context understanding and tool usage.
Note: Due to rounding errors caused by hardware and framework, small differences in reproduced results are possible.
### Chinese Evaluation
#### C-Eval
We demonstrate the 0-shot & 5-shot accuracy of Qwen-14B-Chat on the [C-Eval](https://arxiv.org/abs/2305.08322) validation set:
| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| LLaMA2-7B-Chat | 31.9 |
| LLaMA2-13B-Chat | 36.2 |
| LLaMA2-70B-Chat | 44.3 |
| ChatGLM2-6B-Chat | 52.6 |
| InternLM-7B-Chat | 53.6 |
| Baichuan2-7B-Chat | 55.6 |
| Baichuan2-13B-Chat | 56.7 |
| Qwen-7B-Chat (original) (0-shot) | 54.2 |
| **Qwen-7B-Chat (0-shot)** | 59.7 |
| **Qwen-7B-Chat (5-shot)** | 59.3 |
| **Qwen-14B-Chat (0-shot)** | 69.8 |
| **Qwen-14B-Chat (5-shot)** | **71.7** |
The zero-shot accuracy of Qwen-14B-Chat on the C-Eval test set is provided below:
| Model | Avg. | STEM | Social Sciences | Humanities | Others |
| :---------------------- | :------: | :--: | :-------------: | :--------: | :----: |
| Chinese-Alpaca-Plus-13B | 41.5 | 36.6 | 49.7 | 43.1 | 41.2 |
| Chinese-Alpaca-2-7B | 40.3 | - | - | - | - |
| ChatGLM2-6B-Chat | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 |
| Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 |
| Qwen-7B-Chat (original) | 54.6 | 47.8 | 67.6 | 59.3 | 50.6 |
| **Qwen-7B-Chat** | 58.6 | 53.3 | 72.1 | 62.8 | 52.0 |
| **Qwen-14B-Chat** | **69.1** | 65.1 | 80.9 | 71.2 | 63.4 |
Compared with other models of comparable size, the human-aligned Qwen-14B-Chat performs well in C-Eval accuracy.
### English Evaluation
#### MMLU
The 0-shot & 5-shot accuracy of Qwen-14B-Chat on [MMLU](https://arxiv.org/abs/2009.03300) is provided below. Qwen-14B-Chat remains among the top performers compared with other human-aligned models of comparable size.
| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| ChatGLM2-6B-Chat | 46.0 |
| LLaMA2-7B-Chat | 46.2 |
| InternLM-7B-Chat | 51.1 |
| Baichuan2-7B-Chat | 52.9 |
| LLaMA2-13B-Chat | 54.6 |
| Baichuan2-13B-Chat | 57.3 |
| LLaMA2-70B-Chat | 63.8 |
| Qwen-7B-Chat (original) (0-shot) | 53.9 |
| **Qwen-7B-Chat (0-shot)** | 55.8 |
| **Qwen-7B-Chat (5-shot)** | 57.0 |
| **Qwen-14B-Chat (0-shot)** | 64.6 |
| **Qwen-14B-Chat (5-shot)** | **66.5** |
### Coding Evaluation
The zero-shot Pass@1 of Qwen-14B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below:
| Model | Pass@1 |
|:-----------------------:|:--------:|
| ChatGLM2-6B-Chat | 11.0 |
| LLaMA2-7B-Chat | 12.2 |
| InternLM-7B-Chat | 14.6 |
| Baichuan2-7B-Chat | 13.4 |
| LLaMA2-13B-Chat | 18.9 |
| Baichuan2-13B-Chat | 17.7 |
| LLaMA2-70B-Chat | 32.3 |
| Qwen-7B-Chat (original) | 24.4 |
| **Qwen-7B-Chat** | 37.2 |
| **Qwen-14B-Chat** | **43.9** |
### Mathematics Evaluation
The accuracy of Qwen-14B-Chat on [GSM8K](https://github.com/openai/grade-school-math) is shown below:
| Model | Acc. |
|:--------------------------------:|:--------:|
| LLaMA2-7B-Chat | 26.3 |
| ChatGLM2-6B-Chat | 28.8 |
| Baichuan2-7B-Chat | 32.8 |
| InternLM-7B-Chat | 33.0 |
| LLaMA2-13B-Chat | 37.1 |
| Baichuan2-13B-Chat | 55.3 |
| LLaMA2-70B-Chat | 59.3 |
| Qwen-7B-Chat (original) (0-shot) | 41.1 |
| **Qwen-7B-Chat (0-shot)** | 50.3 |
| **Qwen-7B-Chat (8-shot)** | 54.1 |
| **Qwen-14B-Chat (0-shot)** | **60.1** |
| **Qwen-14B-Chat (8-shot)** | 59.3 |
### Long-Context Understanding
We introduce NTK-aware interpolation and LogN attention scaling to extend the context length of Qwen-14B-Chat. The Rouge-L results of Qwen-14B-Chat on the long-text summarization dataset [VCSUM](https://arxiv.org/abs/2305.05280) (average text length around 15K) are shown below:
**(To use these tricks, please set `use_dynamic_ntk` and `use_logn_attn` to true in config.json.)**
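Equivalently, the flags can be set programmatically when loading. A sketch — the field names are taken from the note above:
```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)
config.use_dynamic_ntk = True  # NTK-aware interpolation
config.use_logn_attn = True    # LogN attention scaling
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-14B-Chat", config=config, device_map="auto", trust_remote_code=True
).eval()
```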
| Model | VCSUM (zh) |
|:------------------|:----------:|
| GPT-3.5-Turbo-16k | 16.0 |
| LLama2-7B-Chat | 0.2 |
| InternLM-7B-Chat | 13.0 |
| ChatGLM2-6B-Chat | 16.3 |
| **Qwen-14B-Chat** | **17.3** |
### Tool Usage
#### ReAct Prompting
Qwen-Chat supports calling plugins/tools/APIs through [ReAct Prompting](https://arxiv.org/abs/2210.03629). ReAct is also one of the main approaches used by the [LangChain](https://python.langchain.com/) framework. On our open-sourced benchmark for assessing tool-usage capabilities, Qwen-Chat's performance is as follows:
<table>
<tr>
<th colspan="4" align="center">Chinese Tool-Use Benchmark</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Tool Selection (Acc.↑)</th><th align="center">Tool Input (Rouge-L↑)</th><th align="center">False Positive Error↓</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">95%</td><td align="center">0.90</td><td align="center">15.0%</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">85%</td><td align="center">0.88</td><td align="center">75.0%</td>
</tr>
<tr>
<td>Qwen-7B-Chat</td><td align="center">98%</td><td align="center">0.91</td><td align="center">7.3%</td>
</tr>
<tr>
<td>Qwen-14B-Chat</td><td align="center">98%</td><td align="center">0.93</td><td align="center">2.4%</td>
</tr>
</table>
> The plugins that appear in the evaluation set do not appear in the training set of Qwen. This benchmark evaluates the accuracy of the model in selecting the correct plugin from multiple candidates, the soundness of the parameters passed to the plugin, and the false-positive rate. False positive: incorrectly invoking a plugin when responding to a query that should not have triggered one.
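For illustration, here is a generic ReAct-style prompt skeleton. This is the common ReAct pattern, not necessarily Qwen's exact template — see the Qwen repo for the official tool-calling utilities:
```python
# Generic ReAct prompt skeleton; tool descriptions/names/query are placeholders.
REACT_PROMPT = """Answer the following question as best you can. You have access to the following tools:

{tool_descriptions}

Use the following format:

Question: the input question
Thought: reason about what to do next
Action: the tool to use, one of [{tool_names}]
Action Input: the input to the tool
Observation: the result of the tool call
... (Thought/Action/Action Input/Observation can repeat)
Thought: I now know the final answer
Final Answer: the final answer to the question

Question: {query}"""
```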


#### Code Interpreter
To assess Qwen's ability to use the Python Code Interpreter for tasks such as mathematical problem solving, data visualization, and other general-purpose tasks such as file handling and web scraping, we have created and open-sourced a [benchmark](https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark) specifically designed for evaluating these capabilities.
We have observed that Qwen performs well in terms of code executability and result accuracy when generating code:
<table>
<tr>
<th colspan="4" align="center">Executable Rate of Generated Code (%)</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization↑</th><th align="center">General↑</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">91.9</td><td align="center">85.9</td><td align="center">82.8</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">89.2</td><td align="center">65.0</td><td align="center">74.1</td>
</tr>
<tr>
<td>LLaMA2-7B-Chat</td>
<td align="center">41.9</td>
<td align="center">33.1</td>
<td align="center">24.1 </td>
</tr>
<tr>
<td>LLaMA2-13B-Chat</td>
<td align="center">50.0</td>
<td align="center">40.5</td>
<td align="center">48.3 </td>
</tr>
<tr>
<td>CodeLLaMA-7B-Instruct</td>
<td align="center">85.1</td>
<td align="center">54.0</td>
<td align="center">70.7 </td>
</tr>
<tr>
<td>CodeLLaMA-13B-Instruct</td>
<td align="center">93.2</td>
<td align="center">55.8</td>
<td align="center">74.1 </td>
</tr>
<tr>
<td>InternLM-7B-Chat-v1.1</td>
<td align="center">78.4</td>
<td align="center">44.2</td>
<td align="center">62.1 </td>
</tr>
<tr>
<td>InternLM-20B-Chat</td>
<td align="center">70.3</td>
<td align="center">44.2</td>
<td align="center">65.5 </td>
</tr>
<tr>
<td>Qwen-7B-Chat</td>
<td align="center">82.4</td>
<td align="center">64.4</td>
<td align="center">67.2 </td>
</tr>
<tr>
<td>Qwen-14B-Chat</td>
<td align="center">89.2</td>
<td align="center">84.1</td>
<td align="center">65.5</td>
</tr>
</table>
<table>
<tr>
<th colspan="4" align="center">Accuracy of Code Execution Results (%)</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization-Hard↑</th><th align="center">Visualization-Easy↑</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">82.8</td><td align="center">66.7</td><td align="center">60.8</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">47.3</td><td align="center">33.3</td><td align="center">55.7</td>
</tr>
<tr>
<td>LLaMA2-7B-Chat</td>
<td align="center">3.9</td>
<td align="center">14.3</td>
<td align="center">39.2 </td>
</tr>
<tr>
<td>LLaMA2-13B-Chat</td>
<td align="center">8.3</td>
<td align="center">8.3</td>
<td align="center">40.5 </td>
</tr>
<tr>
<td>CodeLLaMA-7B-Instruct</td>
<td align="center">14.3</td>
<td align="center">26.2</td>
<td align="center">60.8 </td>
</tr>
<tr>
<td>CodeLLaMA-13B-Instruct</td>
<td align="center">28.2</td>
<td align="center">27.4</td>
<td align="center">62.0 </td>
</tr>
<tr>
<td>InternLM-7B-Chat-v1.1</td>
<td align="center">28.5</td>
<td align="center">4.8</td>
<td align="center">40.5 </td>
</tr>
<tr>
<td>InternLM-20B-Chat</td>
<td align="center">34.6</td>
<td align="center">21.4</td>
<td align="center">45.6 </td>
</tr>
<tr>
<td>Qwen-7B-Chat</td>
<td align="center">41.9</td>
<td align="center">40.5</td>
<td align="center">54.4 </td>
</tr>
<tr>
<td>Qwen-14B-Chat</td>
<td align="center">58.4</td>
<td align="center">53.6</td>
<td align="center">59.5</td>
</tr>
</table>
<p align="center">
<br>
<img src="assets/code_interpreter_showcase_001.jpg" />
<br>
<p>
#### Huggingface Agent
Qwen-Chat can also be used as a [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents). Its performance on the run-mode benchmark provided by HuggingFace is as follows:
<table>
<tr>
<th colspan="4" align="center">HuggingFace Agent Benchmark- Run Mode</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">100</td><td align="center">100</td><td align="center">97.4</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">95.4</td><td align="center">96.3</td><td align="center">87.0</td>
</tr>
<tr>
<td>StarCoder-Base-15B</td><td align="center">86.1</td><td align="center">87.0</td><td align="center">68.9</td>
</tr>
<tr>
<td>StarCoder-15B</td><td align="center">87.0</td><td align="center">88.0</td><td align="center">68.9</td>
</tr>
<tr>
<td>Qwen-7B-Chat</td><td align="center">87.0</td><td align="center">87.0</td><td align="center">71.5</td>
</tr>
<tr>
<td>Qwen-14B-Chat</td><td align="center">93.5</td><td align="center">94.4</td><td align="center">87.0</td>
</tr>
</table>
<table>
<tr>
<th colspan="4" align="center">HuggingFace Agent Benchmark - Chat Mode</th>
</tr>
<tr>
<th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th>
</tr>
<tr>
<td>GPT-4</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">98.5</td>
</tr>
<tr>
<td>GPT-3.5</td><td align="center">97.3</td><td align="center">96.8</td><td align="center">89.6</td>
</tr>
<tr>
<td>StarCoder-Base-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">91.1</td>
</tr>
<tr>
<td>StarCoder-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">89.6</td>
</tr>
<tr>
<td>Qwen-7B-Chat</td><td align="center">94.7</td><td align="center">94.7</td><td align="center">85.1</td>
</tr>
<tr>
<td>Qwen-14B-Chat</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">95.5</td>
</tr>
</table>
<br>
## FAQ
If you meet problems, please refer to the [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and existing issues to search for a solution before opening a new issue.
<br>
## Citation
If you find our work helpful, feel free to cite us!
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
<br>
## License Agreement
Our code and checkpoints are open for research purposes, and commercial use is allowed. Check [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) for more details about the license. If you have requirements for commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/Qwen-14B-Chat) to apply.
<br>
## Contact Us
If you would like to leave a message for our research or product teams, join our Discord or WeChat groups! You can also send an email to [email protected].
| [
"SUMMARIZATION"
] | Non_BioNLP |
germla/satoken-pt | germla | text-classification | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,692,105,192,000 | 2023-08-15T13:13:53 | 12 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# germla/satoken-pt
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
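For reference, here is a minimal few-shot training sketch with the SetFit API of this period. The dataset, base model, and hyperparameters below are placeholders, not necessarily what was used for this checkpoint:
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot dataset: 64 labeled examples from SST-2.
train_ds = load_dataset("sst2", split="train").shuffle(seed=42).select(range(64))

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive fine-tuning objective
    num_iterations=20,                # contrastive pairs generated per sample
    batch_size=16,
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
preds = trainer.model(["i loved the spiderman movie!"])
```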
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("germla/satoken-pt")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |