modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-08 06:28:24) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 492 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-08 06:28:24) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
danielevian/lanza_1_model_merged_16bit | danielevian | 2025-04-25T14:15:02Z | 0 | 0 | transformers |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:canopylabs/3b-es_it-pretrain-research_release",
"base_model:finetune:canopylabs/3b-es_it-pretrain-research_release",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T14:06:06Z |
---
base_model: canopylabs/3b-es_it-pretrain-research_release
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** danielevian
- **License:** apache-2.0
- **Finetuned from model :** canopylabs/3b-es_it-pretrain-research_release
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vbius01/est-roberta-ud-ner | vbius01 | 2025-04-25T14:04:45Z | 4 | 0 | transformers |
[
"transformers",
"safetensors",
"camembert",
"token-classification",
"NER",
"et",
"base_model:EMBEDDIA/est-roberta",
"base_model:finetune:EMBEDDIA/est-roberta",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-23T14:03:02Z |
---
language:
- et
base_model:
- EMBEDDIA/est-roberta
pipeline_tag: token-classification
library_name: transformers
tags:
- NER
license: cc-by-4.0
---
# est-roberta-ud-ner
<!-- Provide a quick summary of what the model is/does. -->
### Model Description
<!-- Provide a longer summary of what this model is. -->
est-roberta-ud-ner is an [Est-RoBERTa](https://huggingface.co/EMBEDDIA/est-roberta) based model fine-tuned for named entity recognition in Estonian on the [EDT](https://github.com/UniversalDependencies/UD_Estonian-EDT) and [EWT](https://github.com/UniversalDependencies/UD_Estonian-EWT) datasets.
### How to use
The model can be used with the Transformers `pipeline` for NER. Try it in Google Colab, where the Transformers library is pre-installed, or on your local machine (preferably inside a virtual environment; see the setup below), after installing the library with `pip install transformers`.
```python
from transformers import pipeline
ner = pipeline("ner", model="vbius01/est-roberta-ud-ner")
text = "Eesti kuulub erinevalt Lätist ja Leedust kahtlemata Põhjamaade kultuuriruumi."
results = ner(text)
print(results)
```
```
[{'entity': 'B-GEP', 'score': np.float32(0.99339926), 'index': 1, 'word': '▁Eesti', 'start': 0, 'end': 5}, {'entity': 'B-GEP', 'score': np.float32(0.9923631), 'index': 4, 'word': '▁Lätist', 'start': 22, 'end': 29}, {'entity': 'B-GEP', 'score': np.float32(0.990756), 'index': 6, 'word': '▁Leedust', 'start': 32, 'end': 40}, {'entity': 'B-LOC', 'score': np.float32(0.61792), 'index': 8, 'word': '▁Põhjamaade', 'start': 51, 'end': 62}]
```
<!-- Provide the basic links for the model. -->
- **Repository:** [github.com/martinkivisikk/ner_thesis](https://github.com/martinkivisikk/ner_thesis)
- **Paper:** [Developing a NER Model Based on Treebank Corpora]()
### Virtual environment setup
Create and activate a virtual environment in your project directory with venv.
```bash
python -m venv .env
source .env/bin/activate
```
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model can be used to find named entities in Estonian texts.
|
avinot/distilolroberta-MLM-2ep-v2-tok | avinot | 2025-04-25T13:38:18Z | 0 | 0 | transformers |
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T13:38:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5525FP/Llama-3.2-1B-Lora-spigot-10K-1-1745588062.108642 | 5525FP | 2025-04-25T13:34:28Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T13:34:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dudeman523/NER-Bert-Based-Cased-PlantNames-Onnx | Dudeman523 | 2025-04-25T13:12:49Z | 14 | 0 | null |
[
"onnx",
"bert",
"en",
"base_model:google-bert/bert-base-cased",
"base_model:quantized:google-bert/bert-base-cased",
"license:mit",
"region:us"
] | null | 2025-04-17T19:04:34Z |
---
license: mit
language:
- en
base_model:
- google-bert/bert-base-cased
---
# Model Card: BERT NER for Plant Names (PEFT/LoRA Fine-tuned)
## Model Description
This model is a fine-tuned version of `google-bert/bert-base-cased` specifically adapted for **Named Entity Recognition (NER)** of common and scientific plant names. It utilizes the **Parameter-Efficient Fine-Tuning (PEFT)** method, specifically **LoRA (Low-Rank Adaptation)**, to modify the base model's attention layers (`query` and `value`) for this task. The goal is to identify spans of text corresponding to plant names and classify them as either common (`PLANT_COMMON`) or scientific (`PLANT_SCI`) according to the IOB2 tagging scheme.
* **Developed by:** [Your Name/Organization - Fill this in]
* **Model type:** BERT (`bert-base-cased`) fine-tuned for Token Classification (NER) using PEFT/LoRA
* **Language(s):** Primarily English (based on `bert-base-cased` and likely training data)
* **License:** Base model (`bert-base-cased`) uses Apache 2.0. The fine-tuned adapter weights inherit this license unless otherwise specified
* **Fine-tuned from model:** `google-bert/bert-base-cased`
## Intended Uses & Limitations
### Intended Use
This model is intended for identifying and classifying mentions of plant names (common and scientific) within English text. Potential applications include:
* Extracting plant names from botanical texts, research papers, or gardening articles
* Structuring information about plant mentions in databases
* Assisting in indexing or searching documents based on contained plant names
* Preprocessing text for downstream tasks that require knowledge of plant entities
### Limitations
* **Domain Specificity:** The model's performance is likely best on text similar to its training data (generated templates about plants). Performance may degrade on significantly different domains (e.g., highly informal text, complex biological pathway descriptions unless similar data was included)
* **IOB2 Scheme:** The model strictly adheres to the IOB2 tagging scheme (`B-TAG`, `I-TAG`, `O`). It identifies the beginning (`B-`) and inside (`I-`) tokens of a named entity span
* **Specific Tags:** Trained only to recognize `PLANT_COMMON` and `PLANT_SCI`. It will tag all other tokens as `O` (Outside). It cannot identify other entity types (e.g., locations, people, chemicals) unless explicitly trained
* **Ambiguity:** May struggle with ambiguous terms where a word could be a plant name in one context but not another (e.g., "Rose" as a name vs. a flower)
* **Novel Names:** Performance on plant names not seen during training (or very different from those seen) may be lower
* **Context Dependency:** Like most NER models, its accuracy depends heavily on the surrounding context. Short, isolated mentions might be harder to classify correctly
* **Case Sensitivity:** Based on `bert-base-cased`, the model is case-sensitive, which might be beneficial for distinguishing scientific names but could affect common names written inconsistently
## How to Use (with Transformers & PEFT)
This model requires loading the base BERT model first and then applying the trained LoRA adapter.
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, AutoConfig
from peft import PeftModel
import torch
# --- Configuration ---
BASE_MODEL_NAME = "google-bert/bert-base-cased"
# --- *** Point this to the directory containing the saved adapter *** ---
# E.g., your BEST_MODEL_DIR or CHECKPOINT_DIR from training
ADAPTER_PATH = "/kaggle/working/bert_ner_peft_gpu_best_v4"
# --- ************************************************************** ---
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# 1. Load Tokenizer (from adapter path or base model)
try:
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_PATH)
except Exception:
print(f"Warning: Tokenizer not found in {ADAPTER_PATH}, loading from {BASE_MODEL_NAME}")
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_NAME)
# 2. Load Base Model (ensure config matches training)
# Need label map from training to load config correctly
label_list = ["O", "B-PLANT_COMMON", "I-PLANT_COMMON", "B-PLANT_SCI", "I-PLANT_SCI"]
label_map = {label: i for i, label in enumerate(label_list)}
id_to_label = {i: label for i, label in enumerate(label_list)}
num_labels = len(label_list)
config = AutoConfig.from_pretrained(
BASE_MODEL_NAME,
num_labels=num_labels,
id2label=id_to_label,
label2id=label_map
)
base_model = AutoModelForTokenClassification.from_pretrained(
BASE_MODEL_NAME,
config=config,
ignore_mismatched_sizes=True # Important if head was initialized
)
# Resize embeddings if necessary (if pad token was added during training)
if len(tokenizer) != base_model.get_input_embeddings().weight.shape[0]:
print(f"Resizing model embeddings to {len(tokenizer)}")
base_model.resize_token_embeddings(len(tokenizer))
# 3. Load PEFT Model (applies adapter)
model = PeftModel.from_pretrained(base_model, ADAPTER_PATH)
model.to(DEVICE)
model.eval()
print("PEFT Model loaded and ready for inference.")
# --- Inference Example ---
text = "The Pineapple Guava (Feijoa sellowiana) is different from Ananas comosus."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True).to(DEVICE)
with torch.no_grad():
logits = model(**inputs).logits
predictions = torch.argmax(logits, dim=2)
predicted_token_class_ids = predictions[0].cpu().numpy()
# Map IDs back to labels, aligning with tokens
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].cpu().numpy())
word_ids = inputs.word_ids() # Only available with fast tokenizers
aligned_labels = []
previous_word_idx = None
for i, token in enumerate(tokens):
if token in [tokenizer.cls_token, tokenizer.sep_token, tokenizer.pad_token]:
continue # Skip special tokens
word_idx = word_ids[i]
if word_idx != previous_word_idx: # Only take first token of each word
label_id = predicted_token_class_ids[i]
label_str = id_to_label.get(label_id, "O")
aligned_labels.append(label_str)
previous_word_idx = word_idx
original_words = text.split() # Simple split for demo, might need better tokenization alignment
# Crude alignment for demo: assume aligned_labels matches original words length
print("Text:", text)
print("Predicted Labels (approx alignment):")
for word, label in zip(original_words[:len(aligned_labels)], aligned_labels):
if label != "O": print(f"- {word}: {label}")
```
### Using the Merged Model
If you used the merging script, you can load the full model directly:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
# --- *** Point this to the directory containing the MERGED model *** ---
MERGED_MODEL_PATH = "/kaggle/working/bert_ner_peft_gpu_merged"
# --- ************************************************************** ---
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(MERGED_MODEL_PATH)
model = AutoModelForTokenClassification.from_pretrained(MERGED_MODEL_PATH)
model.to(DEVICE)
model.eval()
print("Merged Model loaded and ready for inference.")
# --- Inference Example (same as above) ---
```
### Using the ONNX Model
```python
import onnxruntime as ort
import numpy as np
import os
from transformers import AutoTokenizer, AutoConfig
# --- *** Point this to the directory containing the ONNX model *** ---
ONNX_MODEL_DIR = "/kaggle/working/bert_ner_onnx"
# --- ************************************************************** ---
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(ONNX_MODEL_DIR)
# Load ONNX model and create session
model_path = os.path.join(ONNX_MODEL_DIR, "model.onnx")
ort_session = ort.InferenceSession(model_path, providers=['CPUExecutionProvider']) # Or ['CUDAExecutionProvider'] if available
# Load id_to_label map (needed for decoding)
# You might need to load this from the saved config.json or redefine it
# Example: Reloading config from the directory
config = AutoConfig.from_pretrained(ONNX_MODEL_DIR)
id_to_label = config.id2label
# --- Inference Example ---
text = "The Pineapple Guava (Feijoa sellowiana) is different from Ananas comosus."
inputs = tokenizer(text, return_tensors="np") # Use numpy for ONNX runtime
# Prepare inputs for ONNX session
ort_inputs = {k: v for k, v in inputs.items()}
# Run inference
ort_outputs = ort_session.run(None, ort_inputs)
logits = ort_outputs[0] # Usually the first output
predictions = np.argmax(logits, axis=-1)
predicted_token_class_ids = predictions[0]
# Map IDs back to labels (alignment logic is similar to PyTorch version)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
# Note: Getting word_ids might require the original 'encoding' object from a Fast tokenizer
# You might need to re-tokenize with return_offsets_mapping=True and align manually
# For simplicity, let's just print raw token labels:
print("Text:", text)
print("Predicted Labels (per token):")
for token, label_id in zip(tokens, predicted_token_class_ids):
if token not in [tokenizer.cls_token, tokenizer.sep_token, tokenizer.pad_token]:
print(f"- {token}: {id_to_label.get(label_id, 'O')}")
```
## Training Data
The model was fine-tuned on a dataset generated from templates focusing on common and scientific plant names. The data format is CoNLL-style: one token and tag per line, separated by a tab, with empty lines between sentences.
**Data Split:** 90% training, 10% validation (using `sklearn.model_selection.train_test_split` with `random_state=42`).
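For illustration, a short fragment in this format might look like the following (a hypothetical example reusing plant names from the inference demo above; the actual training data is template-generated):
```
The	O
Pineapple	B-PLANT_COMMON
Guava	I-PLANT_COMMON
is	O
hardy	O
.	O

Feijoa	B-PLANT_SCI
sellowiana	I-PLANT_SCI
tolerates	O
frost	O
.	O
```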
## Training Procedure
### Preprocessing
* **Tokenizer:** `BertTokenizerFast` from `google-bert/bert-base-cased`
* **Padding:** Padded/truncated to `max_length=128`
* **Label Alignment:** Standard IOB2 scheme. Labels are aligned to the first token of each word; special tokens and subsequent subword tokens are assigned the ignore index (-100), as in the sketch below
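A minimal sketch of this alignment step (assuming a fast tokenizer and word-level IOB2 labels; `align_labels` and its argument names are illustrative, not the actual training script's):
```python
def align_labels(words, word_labels, tokenizer, label_map, max_length=128):
    # Tokenize pre-split words so sub-word pieces can be mapped back to words.
    encoding = tokenizer(words, is_split_into_words=True,
                         truncation=True, max_length=max_length)
    aligned, previous_word_idx = [], None
    for word_idx in encoding.word_ids():
        if word_idx is None:                 # special tokens ([CLS], [SEP], padding)
            aligned.append(-100)             # ignore_index, skipped by the loss
        elif word_idx != previous_word_idx:  # first sub-token of each word
            aligned.append(label_map[word_labels[word_idx]])
        else:                                # subsequent sub-word pieces
            aligned.append(-100)
        previous_word_idx = word_idx
    return encoding, aligned
```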
### Training
* **Framework:** PyTorch with `transformers` and `peft`
* **Environment:** GPU (likely Kaggle P100/T4/V100, based on setup)
* **Precision:** Float32 (AMP was enabled, but the script ran in FP32 due to earlier debugging)
* **Optimizer:** AdamW
* **Learning Rate:** 2e-5 with linear warmup (10% of steps) and decay
* **Batch Size:** 4 (per device)
* **Epochs:** Trained for up to 3 epochs with early stopping (patience=3, based on validation F1)
* **PEFT Config:** LoRA (`r=8`, `alpha=16`, `dropout=0.1`, `target_modules=["query", "value"]`); see the configuration sketch below
* **Gradient Clipping:** Max norm = 1.0
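The corresponding `peft` configuration might look like the following sketch (values taken from the list above; `base_model` is the token-classification model loaded as in the usage example):
```python
from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,       # token classification (NER)
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT attention projections
)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```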
## Evaluation Results
Evaluation was performed using the `seqeval` library with the IOB2 scheme and strict matching. The primary metric tracked was the micro-averaged F1 score.
## Environmental Impact
* **Hardware:** Trained on an NVIDIA P100 GPU
* **Compute:** [Estimate training time if known, e.g., Approx. X hours on a single T4 GPU]. Carbon emissions can be estimated using tools like the Machine Learning Impact calculator if compute details are known.
## Disclaimer
This model is fine-tuned from a base model and inherits its capabilities and biases. Performance depends heavily on the similarity between the target text and the training data. Always evaluate thoroughly for your specific use case.
|
Janooo123/llama-7b-qlora-mmlu-training | Janooo123 | 2025-04-25T12:57:14Z | 0 | 0 | peft |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-04-25T11:37:13Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama-7b-qlora-mmlu-training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-qlora-mmlu-training
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
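A minimal sketch for loading the adapter with `peft` (assuming access to the gated base model has been granted; not an official usage example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Janooo123/llama-7b-qlora-mmlu-training")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```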
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: paged AdamW (`OptimizerNames.PAGED_ADAMW`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 20
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.15.0
- Tokenizers 0.21.1
|
Culturedniichan/mergekit-ties-eivdcuf | Culturedniichan | 2025-04-25T11:39:33Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:ReadyArt/Forgotten-Safeword-24B-V2.2",
"base_model:merge:ReadyArt/Forgotten-Safeword-24B-V2.2",
"base_model:TroyDoesAI/BlackSheep-24B",
"base_model:merge:TroyDoesAI/BlackSheep-24B",
"base_model:unsloth/Mistral-Small-24B-Instruct-2501",
"base_model:merge:unsloth/Mistral-Small-24B-Instruct-2501",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T11:30:01Z |
---
base_model:
- TroyDoesAI/BlackSheep-24B
- ReadyArt/Forgotten-Safeword-24B-V2.2
- unsloth/Mistral-Small-24B-Instruct-2501
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Mistral-Small-24B-Instruct-2501](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501) as a base.
### Models Merged
The following models were included in the merge:
* [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B)
* [ReadyArt/Forgotten-Safeword-24B-V2.2](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-V2.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: unsloth/Mistral-Small-24B-Instruct-2501
- model: TroyDoesAI/BlackSheep-24B
parameters:
density: 0.50
weight: 0.60
- model: ReadyArt/Forgotten-Safeword-24B-V2.2
parameters:
density: 0.35
weight: 0.3
merge_method: ties
base_model: unsloth/Mistral-Small-24B-Instruct-2501
parameters:
normalize: true
dtype: bfloat16
```
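To reproduce the merge, a configuration like this is typically saved to a file and passed to mergekit's CLI (a sketch; the config and output paths are illustrative):
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```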
|
farham100/fofoo | farham100 | 2025-04-25T11:30:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T11:30:32Z |
---
license: apache-2.0
---
|
niekodhriwa4/fvdf | niekodhriwa4 | 2025-04-25T11:26:35Z | 0 | 0 | null |
[
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-25T11:26:35Z |
---
license: bsd-2-clause
---
|
zerinebajajs/sdvfdfv | zerinebajajs | 2025-04-25T11:07:40Z | 0 | 0 | null |
[
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-25T11:07:39Z |
---
license: bsd-2-clause
---
|
mradermacher/Qwen2-0.5B-fncl-GGUF | mradermacher | 2025-04-25T10:23:01Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"en",
"dataset:glaiveai/glaive-function-calling-v2",
"base_model:haripritam/Qwen2-0.5B-fncl",
"base_model:quantized:haripritam/Qwen2-0.5B-fncl",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-25T10:03:37Z |
---
base_model: haripritam/Qwen2-0.5B-fncl
datasets:
- glaiveai/glaive-function-calling-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/haripritam/Qwen2-0.5B-fncl
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
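As a quick example, one of the quants below can be run directly with a recent llama.cpp build (a sketch; the prompt is illustrative):
```bash
llama-cli --hf-repo mradermacher/Qwen2-0.5B-fncl-GGUF \
  --hf-file Qwen2-0.5B-fncl.Q4_K_M.gguf \
  -p "Call a function to look up the weather in Berlin."
```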
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-fncl-GGUF/resolve/main/Qwen2-0.5B-fncl.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to questions you might have, or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and for providing upgrades to my workstation, enabling this work in my free time.
<!-- end -->
|
Neelectric/OLMo-2-1124-7B-Instruct_SFTv01.02 | Neelectric | 2025-04-25T10:18:18Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"olmo2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:Neelectric/OpenR1-Math-cn_k12-86k",
"base_model:allenai/OLMo-2-1124-7B-Instruct",
"base_model:finetune:allenai/OLMo-2-1124-7B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T23:26:02Z |
---
base_model: allenai/OLMo-2-1124-7B-Instruct
datasets: Neelectric/OpenR1-Math-cn_k12-86k
library_name: transformers
model_name: OLMo-2-1124-7B-Instruct_SFTv01.02
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for OLMo-2-1124-7B-Instruct_SFTv01.02
This model is a fine-tuned version of [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) on the [Neelectric/OpenR1-Math-cn_k12-86k](https://huggingface.co/datasets/Neelectric/OpenR1-Math-cn_k12-86k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Neelectric/OLMo-2-1124-7B-Instruct_SFTv01.02", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neelectric/open-r1_SFT/runs/zlpyflfb)
This model was trained with SFT.
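For reference, a minimal TRL SFT run on this dataset might look like the following sketch (not the exact training script; hyperparameters and distributed setup are omitted):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("Neelectric/OpenR1-Math-cn_k12-86k", split="train")
trainer = SFTTrainer(
    model="allenai/OLMo-2-1124-7B-Instruct",
    args=SFTConfig(output_dir="OLMo-2-1124-7B-Instruct_SFTv01.02"),
    train_dataset=dataset,
)
trainer.train()
```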
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
daishen/openfin-0.5B-ZH-optimal-sft_lxl3129_audit_regulation | daishen | 2025-04-25T09:31:58Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T09:10:13Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Odogwu001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_barky_albatross | Odogwu001 | 2025-04-25T09:19:11Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am humming barky albatross",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T08:17:42Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_barky_albatross
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am humming barky albatross
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_barky_albatross
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Odogwu001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_barky_albatross", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
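A minimal GRPO run with TRL looks roughly like this sketch (adapted from TRL's quickstart; the reward function and dataset are illustrative placeholders, not this model's actual swarm setup):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions of about 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```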
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_gentle_mink | 0xshaf | 2025-04-25T09:14:19Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lumbering gentle mink",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T06:20:49Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_gentle_mink
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lumbering gentle mink
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_gentle_mink
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_gentle_mink", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
vermoney/98095d60-2493-4df6-b46c-06fd733298b9 | vermoney | 2025-04-25T08:56:22Z | 0 | 0 | peft |
[
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-25T08:53:03Z |
---
library_name: peft
license: other
base_model: facebook/opt-350m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 98095d60-2493-4df6-b46c-06fd733298b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-350m
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 32cb49683e226f4d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/32cb49683e226f4d_train_data.json
type:
field_input: author
field_instruction: dynasty
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/98095d60-2493-4df6-b46c-06fd733298b9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/32cb49683e226f4d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a4199bed-2854-4046-9e07-45f55e8274f5
wandb_project: s56-9
wandb_run: your_name
wandb_runid: a4199bed-2854-4046-9e07-45f55e8274f5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
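A config like this is typically launched with axolotl's CLI (a sketch; the config path is illustrative):
```bash
accelerate launch -m axolotl.cli.train config.yaml
```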
</details><br>
# 98095d60-2493-4df6-b46c-06fd733298b9
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW from bitsandbytes (`OptimizerNames.ADAMW_BNB`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4678 | 0.0078 | 200 | 3.3708 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mlfoundations-dev/b2_science_fasttext_pos_expert_qa_10k | mlfoundations-dev | 2025-04-25T06:19:44Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T01:14:53Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_fasttext_pos_expert_qa_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_fasttext_pos_expert_qa_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_pos_expert_qa_10k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: `adamw_torch` with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
firoz123/codegemma-2b-IQ3_M-GGUF | firoz123 | 2025-04-25T06:12:45Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:google/codegemma-2b",
"base_model:quantized:google/codegemma-2b",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-04-25T06:12:33Z |
---
base_model: google/codegemma-2b
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# firoz123/codegemma-2b-IQ3_M-GGUF
This model was converted to GGUF format from [`google/codegemma-2b`](https://huggingface.co/google/codegemma-2b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/codegemma-2b) for more details on the model.
## Use with llama.cpp
Install llama.cpp via brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo firoz123/codegemma-2b-IQ3_M-GGUF --hf-file codegemma-2b-iq3_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo firoz123/codegemma-2b-IQ3_M-GGUF --hf-file codegemma-2b-iq3_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo firoz123/codegemma-2b-IQ3_M-GGUF --hf-file codegemma-2b-iq3_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo firoz123/codegemma-2b-IQ3_M-GGUF --hf-file codegemma-2b-iq3_m-imat.gguf -c 2048
```
|
MinaMila/llama_instbase_unlearned_LoRa_Adult_ep3_22 | MinaMila | 2025-04-25T06:03:20Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T06:03:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dgambettaphd/M_llm3_gen5_run0_X_doc1000_synt64_tot128_FRESH
|
dgambettaphd
| 2025-04-25T05:47:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T05:47:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
glide-the/GLM-4-9B-Chat-0414-identity-4bits-eora_rank64_c4
|
glide-the
| 2025-04-25T05:36:41Z | 0 | 0 | null |
[
"safetensors",
"license:mit",
"region:us"
] | null | 2025-04-25T05:32:57Z |
---
license: mit
---
This repository contains the model checkpoint of GLM-4-9B-Chat-0414-identity-4bits.

- Base model: GLM-4-9B-Chat-0414
- Quantization method: GPTQ
- Quantization method repository: https://github.com/modelcloud/gptqmodel
## EoRA Method Dataset

1. Construct the calibration dataset:
```python
from datasets import load_dataset
def question_answering_format(question, answer):
return f"Question: {question}\nAnswer: {answer}"
def multiple_choices_question_answering_format(question, choices, answer):
return f"{question.strip()}\nA. {choices[0]}\nB. {choices[1]}\nC. {choices[2]}\nD. {choices[3]}\nAnswer: {answer}"
## An example of using ARC for constructing the EoRA calibration set
def construct_c4():
calibration_dataset = load_dataset(
"/mnt/ceph/develop/jiawei/code_dataset/c4",
data_files="en.noblocklist/c4-train.00001-of-01024.json.gz",
split="train", download_mode="force_redownload"
).select(range(1024))["text"]
return calibration_dataset
def construct_ARC():
nsamples = 1024
arc_easy_calibration_dataset = load_dataset('ai2_arc', 'ARC-Easy', split='train').select(range(nsamples))
arc_challenge_calibration_dataset = load_dataset('ai2_arc', 'ARC-Challenge', split='train').select(range(nsamples))
dataset = []
for example in arc_easy_calibration_dataset:
answer = example['choices']['text'][example['choices']['label'].index(example['answerKey'])]
question = example['question']
dataset.append(question_answering_format(question=question,answer=answer))
for example in arc_challenge_calibration_dataset:
answer = example['choices']['text'][example['choices']['label'].index(example['answerKey'])]
question = example['question']
dataset.append(question_answering_format(question=question,answer=answer))
## we recommend also include some examples from C4 to avoid overfitting to the downstream data
c4_dataset = load_dataset(
"allenai/c4",
data_files="en/c4-train.00001-of-01024.json.gz",
split="train"
).select(range(nsamples))["text"]
return dataset + c4_dataset
def multiple_identity_format(instruction, input_q, output):
return f"{instruction.strip()} {input_q}\n {output}"
def construct_mmlu():
mmlu_calibration_dataset = load_dataset('/mnt/ceph/develop/jiawei/code_dataset/mmlu', 'all', split='validation')
dataset = []
for example in mmlu_calibration_dataset:
question = example['question']
choices = example['choices']
answer = ['A','B','C','D'][example['answer']]
dataset.append(multiple_choices_question_answering_format(question, choices, answer))
identity_dataset = load_dataset(
"json",
data_files="/mnt/ceph/develop/jiawei/GPTQModel/examples/eora/identity.json",
split="train"
)
for example in identity_dataset:
instruction = example['instruction']
input_q = example['input']
output = example['output']
dataset.append(multiple_identity_format(instruction, input_q, output))
## we recommend also include some examples from C4 to avoid overfitting to the downstream data
c4_dataset = load_dataset(
"/mnt/ceph/develop/jiawei/code_dataset/c4",
data_files="en.noblocklist/c4-train.00001-of-01024.json.gz",
split="train"
).select(range(1024))["text"]
return dataset + c4_dataset
```
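For reference, the formatting helpers above can be exercised standalone with hypothetical inputs (no dataset download needed), which shows the exact text layout fed to calibration:
```python
# Hypothetical rows, purely to illustrate the calibration-text formats.
print(question_answering_format(
    question="Which planet is known as the Red Planet?",
    answer="Mars"))

print(multiple_choices_question_answering_format(
    question="Which planet is known as the Red Planet?",
    choices=["Venus", "Mars", "Jupiter", "Saturn"],
    answer="B"))
```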
2. Quantization:
```bash
python examples/eora/eora_generation.py THUDM/GLM-4-9B-Chat-0414 --bits 4 --quant_save_path glide-the/GLM-4-9B-Chat-0414-identity-4bits --eora_dataset mmlu --eora_save_path glide-the/GLM-4-9B-Chat-0414-identity-4bits-eora_rank64_c4 --eora_rank 64
```
3. Inference:
```bash
python examples/eora/eora_load_and_inference.py --quantized_model glide-the/GLM-4-9B-Chat-0414-identity-4bits --eora glide-the/GLM-4-9B-Chat-0414-identity-4bits-eora_rank64_c4 --eora_rank 64
```
# Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("glide-the/GLM-4-9B-Chat-0414-identity-4bits")
quantized_model = AutoModelForCausalLM.from_pretrained("glide-the/GLM-4-9B-Chat-0414-identity-4bits")
print(tokenizer.decode(quantized_model.generate(**tokenizer("gptqmodel is", return_tensors="pt").to(quantized_model.device))[0]))
```
|
MayBashendy/arabic_SDP_all_binary_multilingual_e5_small_lr3e-05_targ4_dev1234678_epoch530
|
MayBashendy
| 2025-04-25T05:35:04Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-25T05:34:37Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
|
parvk11/audience_classifier_model
|
parvk11
| 2025-04-25T04:49:24Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-25T04:48:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RRashmini/google-unimax-t5-small-12
|
RRashmini
| 2025-04-25T04:36:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"umt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-04-25T04:35:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nmaraza/Llama-3.2
|
nmaraza
| 2025-04-25T03:46:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T03:35:47Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nmaraza
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
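For quick testing, a minimal inference sketch (assumption: the repo contains merged weights loadable directly with Transformers; if it only holds LoRA adapters, attach them to the base model with PEFT instead):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nmaraza/Llama-3.2"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

messages = [{"role": "user", "content": "Explain fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```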
|
callgg/framepack
|
callgg
| 2025-04-25T03:08:27Z | 56 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"region:us"
] | null | 2025-04-23T21:43:08Z |
---
library_name: diffusers
---
## framepack
- repackage of FramePackI2V_HY from [lllyasviel](https://huggingface.co/lllyasviel/FramePackI2V_HY)
|
hackelle/mobilenetv4_hybrid_medium-s1-v0.2.0
|
hackelle
| 2025-04-25T02:58:45Z | 0 | 0 |
configilm
|
[
"configilm",
"safetensors",
"mobilenetv4_hybrid_medium",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
] |
image-classification
| 2025-04-25T02:58:39Z |
---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- mobilenetv4_hybrid_medium
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 0.000000
- label: Beaches, dunes, sands
score: 0.000000
- label: Broad-leaved forest
score: 0.000000
- label: Coastal wetlands
score: 0.000000
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Mobilenetv4_hybrid_medium pretrained on BigEarthNet v2.0 using Sentinel-1 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-1 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement in validation macro average precision)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 1000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 33 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.610632 | 0.804239 |
| F1 Score | 0.556231 | 0.703160 |
| Precision | 0.633313 | 0.765481 |
# Example
| A Sentinel-1 image (VV, VH and VV/VH bands are used for visualization) |
|:---------------------------------------------------:|
|  |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000000 <br> 0.000000 <br> ... <br> 0.000000 </p> |
To use the model, download the codes that define the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/mobilenetv4_hybrid_medium-s1-v0.1.1")
```
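Beyond loading the weights, a minimal inference sketch might look like the following (assumptions: the classifier accepts a batch of 2-band Sentinel-1 patches at the BigEarthNet v2.0 size of 120×120 pixels and returns per-class logits for the 19 BigEarthNet labels — adjust shapes to your own preprocessing):
```python
import torch
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier

model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
    "BIFOLD-BigEarthNetv2-0/mobilenetv4_hybrid_medium-s1-v0.1.1")
model.eval()

# Dummy Sentinel-1 input: batch of 1, 2 bands (VV, VH), 120x120 pixels.
x = torch.randn(1, 2, 120, 120)
with torch.no_grad():
    logits = model(x)

# Multi-label task: independent sigmoid per class.
probs = torch.sigmoid(logits)
print(probs.shape)
```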
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
NexesMess/Llama_3.3_70b_DonkeyRider_v2
|
NexesMess
| 2025-04-25T02:31:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T01:48:36Z |
---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3) as a base.
### Models Merged
The following models were included in the merge:
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 1.0
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
weight: 1.0
base_model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
filter_wise: false
smooth: false
allow_negative_weights: false
chat_template: auto
tokenizer:
source: union
```
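To reproduce a merge from this configuration, mergekit's CLI can typically be invoked on the saved YAML (sketch below; `config.yaml` and the output path are placeholders):
```bash
# Run the merge described by the YAML above (verify flags against your mergekit version).
mergekit-yaml config.yaml ./Llama_3.3_70b_DonkeyRider_v2 --cuda
```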
|
Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-S
|
Shaleen123
| 2025-04-24T22:23:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T22:18:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ava2000/Rimworld_illustrious
|
Ava2000
| 2025-04-24T22:15:19Z | 0 | 0 | null |
[
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"region:us"
] | null | 2025-04-24T19:01:09Z |
---
base_model:
- OnomaAIResearch/Illustrious-xl-early-release-v0
---
Moyo:
Trigger: moyo, antennae, facial mark, grey skin,
Additional trigger: tail (to help with the snail tail, but can be tricky to use)
Advised strength: 0.6-1
Ratkin:
Trigger Pony: ratkin, animal ears, tail,
Trigger Illustrious: ratkin, mouse ears or rat ears, mouse tail or rat tail
Advised strength: 0.6-1
Mincho:
Trigger: mincho, blue skin, colored skin, liquid hair, pointy ears,
Additional trigger (Illustrious): chocolate chunks (to help with the chocolate chips in the hair).
Advised strength: 0.6-1
Dragonian:
Trigger: dragonian, horns, tail, scales,
Extra triggers: you can use dragon horns and dragon tail; those will work too, and sometimes better than, the regular horns and tail prompts.
Advised strength: 0.6-1
Maru:
Trigger: maru, leopard ears, leopard tail, facial mark,
Extra info: you can swap out Leopard for Tiger if you want; it will have roughly the same effect on the image.
Advised strength: 0.6-1
Kurin:
Trigger: kurin, fox ears, fox tail,
Extra trigger: 3 tails
Advised strength: 0.6-1
Yuran:
Extra Trigger: furry (for that extra push in some checkpoints)
You can swap out white fur for another color, but you will have to give it extra weight! (for example (pink fur:1.4)).
Advised strength: 0.6-1
Miho:
Trigger: miho, fox ears, fox tail,
Advised strength: 0.6-1
Rabbie:
Trigger: rabbie, rabbit ears, rabbit tail,
Advised strength: 0.6-1
Epona:
Trigger: epona, (centaur), horse ears
Advised strength: 0.6-1
Paniel:
Trigger: paniel, tail,
Additional triggers: fluffy ears or (brown) dog ears can help your images with the ears.
Advised strength: 0.6-1
Anty:
Trigger: anty, antennae, fangs,
Additional triggers: tail (to get the ant back-end).
Advised strength: 0.6-1
Moosesian:
Trigger: moosesian, animal ears,
No need for the antler keyword, because it is baked in! (but only 1 type).
Advised strength: 0.6-1
Pawnmaker:
Trigger: pawn-maker
Extra Triggers: full body, chibi, white background, simple background,
Advised strength: 0.6-1
|
samoline/076e61b8-4257-4eb2-bd3c-45bc43bd56e2
|
samoline
| 2025-04-24T21:52:43Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T21:52:14Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
model_name: 076e61b8-4257-4eb2-bd3c-45bc43bd56e2
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for 076e61b8-4257-4eb2-bd3c-45bc43bd56e2
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samoline/076e61b8-4257-4eb2-bd3c-45bc43bd56e2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/samoline-nan/Gradients-On-Demand/runs/ivmmvmr2)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bella-edrianna-viral/video-original-bella-adriana-viral-video-bella-edrianna-viral-bella-viral-telegram
|
bella-edrianna-viral
| 2025-04-24T20:45:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-24T20:45:16Z |
Watch 🟢 ➤ ➤ ➤ <a href="butmakeitashion.blogspot.com/?m=0"> 🌐 Click Here To link (Full Viral Video Link)
🔴 ➤►DOWNLOAD👉👉🟢 ➤
|
ArtemisTAO/lam15
|
ArtemisTAO
| 2025-04-24T19:58:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T19:57:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AdoCleanCode/general_model_fp_elec_balanced
|
AdoCleanCode
| 2025-04-24T17:59:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T13:16:03Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: general_model_fp_elec_balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# general_model_fp_elec_balanced
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 55040
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0642 | 1.0 | 5504 | 0.9764 |
| 0.956 | 2.0 | 11008 | 0.9018 |
| 0.9064 | 3.0 | 16512 | 0.8644 |
| 0.863 | 4.0 | 22016 | 0.8438 |
| 0.8364 | 5.0 | 27520 | 0.8294 |
| 0.8186 | 6.0 | 33024 | 0.8194 |
| 0.8058 | 7.0 | 38528 | 0.8137 |
| 0.7927 | 8.0 | 44032 | 0.8092 |
| 0.7793 | 9.0 | 49536 | 0.8060 |
| 0.7728 | 10.0 | 55040 | 0.8044 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
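## How to use

A minimal generation sketch (assumption: this repo ships a standard GPT-2 tokenizer and causal-LM weights; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AdoCleanCode/general_model_fp_elec_balanced"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The electronics market", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```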
|
George2002/sledopyt_embedder_6topics
|
George2002
| 2025-04-24T17:50:23Z | 76 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10514",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-04-21T10:26:00Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10514
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-large
widget:
- source_sentence: 'query: Можно ли распечатать справку об аресте счета клиента-банкрота
для финансового управляющего?'
sentences:
- "passage: Запросить у клиента - банкрота разрешение Финансового управляющего на\
\ расход денежных средств банкротом, в котором указаны: Сумма, период и номер\
\ счета, с которого необходимо выполнить списание. Разрешение Финансового управляющего\
\ должно быть заверено его личной подписью либо удостоверено нотариусом (при наличии\
\ печатью).\n\n\n***Исключения составляют алименты и пособия на детей, для получения\
\ которых в разрешении Финансового управляющего может быть указан только счет.\
\ Сумму для выдачи Сотрудник должен определить по назначению платежа зачисления.\n\
\n\n!!!!! В случаях, когда ФУ и банкрот находятся в разных ТБ, мы рекомендуем\
\ использовать следующий порядок получения ДС банкротом, \n\nФУ приносит разрешение\
\ на получение ДС банкротом в любое ближайшее отделение банка.\nСотрудник принимает\
\ его, регистрирует и отправляет внутренней почтой в ВСП, куда придет банкрот\
\ за ДС.\nВСП, куда придет банкрот на основании полученного разрешения регистрируют\
\ заявку на разблокировку счета, \nВыдают деньги после разблокировки счета, к\
\ расходному ордеру прикладывают разрешение фу (если оно разовое) копию разрешения\
\ (если он периодическое)\nСамо разрешение ФУ, подписанное сотрудником, передают\
\ банкроту, он с ним приходит в Банк до его окончания.\n\n\n\nБанкрот предоставил\
\ Разрешение Финансового управляющего на расход денежных средств\n\nПроверить\
\ наличие ареста на счет, с которого необходимо произвести выдачу"
- 'passage: Покупка металла на металлический счёт через УРМ:
Покупка металла с зачислением на металлический счёт со счета или вклада:'
- "passage: Не допускается распечатывать финансовому управляющему справку об аресте\
\ счета клиента банкрота на сумму 41 888 888 рублей. Арест является техническим\
\ ограничением.\n\n\n\nПолучить справки/выписки/информацию по всем открытым счетам\
\ банкрота на ДИСКЕ (большой объем)\n\nСотрудник ВСП оформляет запрос стандартным\
\ порядком через АС \"Сбердруг\"\n\nОткрывает АС \"Сбердруг\" --> Каталог -->\
\ Обслуживание клиентов -> Операционный центр --> Сопровождение операций ФЛ ->\
\ Запросы от внешней организации и клиентов Банка --> \n\nВ запросе необходимо\
\ указать следующее:\n- в поле \"Представители внешних организаций\" - выбрать\
\ Финансовый управляющий\n- в поле ТБ клиенту - выбрать ТБ\n- Номер и дату документа\
\ Финансового управляющего\n- Данные банкрота (ФИО + Дата рождения)\n\nК запросу\
\ необходимо приложить скан-образы документов:\n - Решение суда о признании гражданина\
\ банкротом и введении процедуры реализации имущества и решение суда об утверждении\
\ финансового управляющего;\n - заявление финансового управляющего на предоставление\
\ информации по клиенту-банкроту (в свободной форме и по форме Заявление о розыске/предоставлении\
\ информации), заверенное подписью сотрудника ВСП с указанием ФИО, должности и\
\ даты приема заявления ФУ.\n\n!!! После подготовки, диск с информацией будет\
\ направлен ФУ по почте России, по адресу, указанному в заявлении ФУ."
- source_sentence: 'query: Что не отображается в истории трат и пополнений по карте?'
sentences:
- 'passage: Вид специального счета:
Специальный брокерский счет
Дополнительно к документам, указанным в П-25, Клиент предоставляет:
1. лицензии на право осуществления соответствующего вида профессиональной деятельности
на рынке ценных бумаг.'
- 'passage: Существуют следующие возможности:
02. Увидеть историю трат и пополнений по карте:
Отображается список расходных операций по карте (за исключением снятия денег в
банкоматах), а также все зачисления денежных средств.'
- "passage: Сотрудник проверяет у клиента наличие оригинала сберкнижки или иного\
\ документа (копия книжки/дубликат книжки/договор вклада/квитанция ф. 31/банковский\
\ ордер или банковская справка, выписка/вкладчик получал компенсацию ранее по\
\ данному вкладу) (далее - Сберкнижка). \n\nНет Сберкнижки\n\nЕсли книжка не предоставлена.\
\ Сообщи клиенту, что Банк не может произвести выплату компенсации. Выплата производится\
\ при наличии Сберкнижки. Попросить клиента принести Сберкнижку.\n\nВыплата компенсации\
\ по закрытым счетам осуществляется на основании заявления, оформленного в АС\
\ ФС ФП \"Компенсация и выплата\" с указанием в заявление, получение компенсации\
\ по закрытым счетам."
- source_sentence: 'query: Когда возникает окно для подтверждения операции с комплаенсом?'
sentences:
- 'query: Что должно соответствовать клиенту, кроме условия, что он не является
филиалом или представительством?'
- 'query: В каких случаях появляется модальное окно для согласования с комплаенсом?'
- 'query: На какой номер нужно позвонить, чтобы снять лимит расходов на день для
ребенка?'
- source_sentence: 'query: Что происходит на экране после ввода суммы при проведении
валютно-обменной операции, если необходимо согласование?'
sentences:
- 'query: Какие бумаги необходимы для объявления клиента недееспособным?'
- 'query: Что отображается на экране после ввода суммы для валютного обмена, если
нужно согласование?'
- 'query: Какой документ необходим для удостоверения статуса иностранного гражданина
в России?'
- source_sentence: 'query: Когда родитель теряет доступ к картам ребенка от 14 до
17 лет?'
sentences:
- 'query: Кто имеет право претендовать на наследство, если наследодатель объявлен
банкротом?'
- 'query: Что необходимо сделать перед вводом суммы для снятия наличных с карты
в СБОЛ.про?'
- 'query: В каких ситуациях родитель не сможет управлять картами ребенка в возрасте
14-17 лет?'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on intfloat/multilingual-e5-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on the q2q_data and q2p_data datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision 0dc5580a448e4284468b8909bae50fa925907bc5 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- q2q_data
- q2p_data
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("George2002/sledopyt_embedder_v3")
# Run inference
sentences = [
'query: Когда родитель теряет доступ к картам ребенка от 14 до 17 лет?',
'query: В каких ситуациях родитель не сможет управлять картами ребенка в возрасте 14-17 лет?',
'query: Кто имеет право претендовать на наследство, если наследодатель объявлен банкротом?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
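As the training samples above show, the underlying multilingual-e5 model was trained with `query: ` and `passage: ` prefixes, so retrieval works best when you keep them. A minimal retrieval sketch under that assumption (the passage texts below are illustrative placeholders, not real knowledge-base chunks):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("George2002/sledopyt_embedder_v3")

# Keep the same prefixes that were used during training
query = "query: Можно ли подключить СМС-информирование на номер Законного представителя?"
passages = [
    "passage: СМС-информирование по Детской СберКарте подключается только на номер телефона Ребёнка.",
    "passage: Исполнительные документы могут быть предъявлены в филиалы и подразделения Банка.",
]

query_emb = model.encode([query])      # shape [1, 1024]
passage_embs = model.encode(passages)  # shape [2, 1024]

# Cosine similarities; the first passage should score highest
scores = model.similarity(query_emb, passage_embs)  # shape [1, 2]
print(scores)
```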
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### q2q_data
* Dataset: q2q_data
* Size: 8,012 training samples
* Columns: <code>query_1</code> and <code>query_2</code>
* Approximate statistics based on the first 1000 samples:
| | query_1 | query_2 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.5 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 21.24 tokens</li><li>max: 37 tokens</li></ul> |
* Samples:
| query_1 | query_2 |
|:---------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------|
| <code>query: Что произойдет с процедурой банкротства, если банкрот умрет?</code> | <code>query: Как будет развиваться процедура банкротства после смерти должника?</code> |
| <code>query: Как ребенку изменить лимит на расход по карте, который установил опекун?</code> | <code>query: Что нужно сделать, чтобы изменить лимит расходов по карте, заданный законным представителем?</code> |
| <code>query: Какие документы подтверждают полномочия опекуна несовершеннолетнего до 14 лет?</code> | <code>query: Какие бумаги нужны, чтобы подтвердить полномочия опекуна несовершеннолетнего до 14-ти лет?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### q2p_data
* Dataset: q2p_data
* Size: 2,502 training samples
* Columns: <code>query</code> and <code>chunk</code>
* Approximate statistics based on the first 1000 samples:
| | query | chunk |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 21.8 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 173.04 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | chunk |
|:------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Что можно использовать для получения данных наследодателя, если у клиента нет паспорта?</code> | <code>passage: У клиента ЕСТЬ/НЕТ документа подтверждающего наследственное право (далее - ДПНП) - свидетельства о праве на наследство/завещание в банке в его пользу до 01.03.2002 <br><br>Нет ДПНП<br><br>Клиента необходимо направить к нотариусу для открытия наследственного дела и розыска наследственной массы через запрос Нотариуса. Сообщите клиенту необходимость взять к нотариусу следующие документы для более качественного и быстрого розыска:<br>1. Паспорт наследодателя или его данные(можно взять из любого договора)<br>2. Все известные сберкнижки наследодателя или их номера<br>3. ИНН если наследодателя был ИП<br><br>Обращение в СРМ «Розничный не регистрируй!!!</code> |
| <code>query: Что делать, если в документах клиента нет сведений о месте пребывания?</code> | <code>passage: Для любого представителя Клиента (ЕИО, уполномоченного сотрудника, доверенного лица) :<br><br>Нерезидент<br><br>1. Документ, удостоверяющий личность представителя юридического лица/ИП (В случае если Клиент/представитель Клиента предоставил в Банк иностранный документ, удостоверяющий личность, без нотариально удостоверенного перевода, дополнительно предоставляется Приложение 4 к Информационным сведениям клиента)<br>Дополнительно: id-карта является полноценным ДУЛ только для граждан Киргизии и Казахстана. Граждане других государств id-карту как самостоятельный ДУЛ использовать не могут. <br>2. Документ, подтверждающий право иностранного гражданина или лица без гражданства на пребывание (проживание) в Российской Федерации: <br>- вид на жительство; <br>- либо временное удостоверение личности лица без гражданства в Российской Федерации; <br>- либо разрешение на временное проживание; <br>- либо визу; <br>- либо миграционную карту; <br>- либо свидетельство о рассмотрении ходатайства о признании беженцем на территории Р...</code> |
| <code>query: Под какие документы подпадает исполнительный документ о взыскании задолженности?</code> | <code>passage: Уважаемый коллега! <br>Вы приняли от клиента:<br><br>Исполнительный документ о взыскании задолженности/наложении ареста/отмене ареста (взыскания)<br><br>Исполнительные документы (ИД), могут быть предъявлены клиентом/его представителем в филиалы и подразделения Банка с целью исполнения Банком требований федерального закона от 02.10.2007 №229-ФЗ "Об исполнительном производстве".</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
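Both losses above are `MultipleNegativesRankingLoss` with `scale=20.0` and cosine similarity, trained jointly over the two pair datasets. A minimal fine-tuning sketch under those settings (the example rows and the absence of evaluation wiring are illustrative assumptions, not the exact training script):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-large")

# Each row is an (anchor, positive) pair; other in-batch pairs serve as negatives.
q2q = Dataset.from_dict({
    "query_1": ["query: Как изменить лимит по карте?"],
    "query_2": ["query: Что нужно сделать, чтобы поменять лимит расходов по карте?"],
})
q2p = Dataset.from_dict({
    "query": ["query: Какие документы нужны опекуну?"],
    "chunk": ["passage: Опекун предоставляет документ, удостоверяющий личность."],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)

# Multi-dataset training: datasets and losses are keyed by the same names.
trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset={"q2q_data": q2q, "q2p_data": q2p},
    loss={"q2q_data": loss, "q2p_data": loss},
)
trainer.train()
```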
### Evaluation Datasets
#### q2q_data
* Dataset: q2q_data
* Size: 422 evaluation samples
* Columns: <code>query_1</code> and <code>query_2</code>
* Approximate statistics based on the first 422 samples:
| | query_1 | query_2 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.71 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 21.24 tokens</li><li>max: 35 tokens</li></ul> |
* Samples:
| query_1 | query_2 |
|:-------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------|
| <code>query: Как банк реагирует на выявление клиента-банкрота при выдаче карты?</code> | <code>query: Как банк поступает, если выясняется, что клиент-банкрот при оформлении кредитки?</code> |
| <code>query: query: Что является целевым путем для выплаты наследства при возникновении технической ошибки?</code> | <code>query: Какие действия нужно предпринять для выплаты наследства при наличии технической ошибки?</code> |
| <code>query: Что делать, если клиент сообщает, что выпуск карты осуществляется по просьбе третьего лица?</code> | <code>query: Что предпринимать, если клиент жалуется, что кто-то другой просит выпустить карту?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### q2p_data
* Dataset: q2p_data
* Size: 132 evaluation samples
* Columns: <code>query</code> and <code>chunk</code>
* Approximate statistics based on the first 132 samples:
| | query | chunk |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 22.05 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 172.31 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | chunk |
|:------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Как завершать процедуру банкротства в случае смерти банкрота?</code> | <code>passage: С каким вопросом обратился ФУ?<br><br>12. Поведение процедуры банкротства в случае смерти банкрота/ Вступления банкрота в наследство <br><br>В случае</code> |
| <code>query: Что произойдет, если Законный представитель подключит уведомления на свой номер телефона вместо номера Ребёнка?</code> | <code>passage: Выберите интересующий вопрос<br><br>5. Можно ли подключить СМС-информирование по Детской СберКарте на номер телефона Законного Представителя ?<br><br>Нет, это можно сделать только на номер телефона Ребёнка.<br>Если Законный представитель подключил уведомления на свой номер, тогда нужно поменять его на номер телефона Ребёнка в офисе Банка или банкомате.<br>Иначе Ребёнок не сможет получать уведомления с кодами подтверждения и воспользоваться банкоматом, а Законный представитель столкнётся с техническими сложностями при пользовании сервисами Банка.<br><br><br>Если Законный представитель желает получать уведомления об операциях Ребёнка на свой номер телефона, тогда ему необходимо подключить услугу "Совместные уведомления" к Детской СберКарте.</code> |
| <code>query: Что необходимо для того, чтобы ребёнок мог сам совершить операцию?</code> | <code>passage: Возможные ошибки:<br><br>Ребёнку необходимо совершить операцию самому<br><br>Ребёнку больше 14 лет</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `push_to_hub`: True
- `hub_model_id`: George2002/sledopyt_embedder_v3
- `hub_strategy`: end
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: George2002/sledopyt_embedder_v3
- `hub_strategy`: end
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | q2q data loss | q2p data loss |
|:----------:|:-------:|:-------------:|:-------------:|:-------------:|
| 0.2439 | 10 | 2.0065 | - | - |
| 0.4878 | 20 | 1.1826 | - | - |
| 0.6098 | 25 | - | 0.0102 | 0.2422 |
| 0.7317 | 30 | 0.6224 | - | - |
| 0.9756 | 40 | 0.1914 | - | - |
| 1.2195 | 50 | 0.1785 | 0.0003 | 0.1165 |
| 1.4634 | 60 | 0.1897 | - | - |
| 1.7073 | 70 | 0.1862 | - | - |
| 1.8293 | 75 | - | 0.0002 | 0.0839 |
| 1.9512 | 80 | 0.0917 | - | - |
| 2.1951 | 90 | 0.0855 | - | - |
| 2.4390 | 100 | 0.1282 | 0.0002 | 0.0868 |
| 2.6829 | 110 | 0.1329 | - | - |
| 2.9268 | 120 | 0.0627 | - | - |
| 3.0488 | 125 | - | 0.0002 | 0.0720 |
| 3.1707 | 130 | 0.0621 | - | - |
| 3.4146 | 140 | 0.0882 | - | - |
| **3.6585** | **150** | **0.1041** | **0.0002** | **0.069** |
| 3.9024 | 160 | 0.0564 | - | - |
| 4.1463 | 170 | 0.0515 | - | - |
| 4.2683 | 175 | - | 0.0001 | 0.0795 |
| 4.3902 | 180 | 0.0858 | - | - |
| 4.6341 | 190 | 0.082 | - | - |
| 4.8780 | 200 | 0.0431 | 0.0001 | 0.0725 |
| 5.1220 | 210 | 0.0482 | - | - |
| 5.3659 | 220 | 0.0643 | - | - |
| 5.4878 | 225 | - | 0.0001 | 0.0813 |
| 5.6098 | 230 | 0.0863 | - | - |
| 5.8537 | 240 | 0.041 | - | - |
| 6.0976 | 250 | 0.0446 | 0.0001 | 0.0724 |
| 6.3415 | 260 | 0.0594 | - | - |
| 6.5854 | 270 | 0.0705 | - | - |
| 6.7073 | 275 | - | 0.0001 | 0.0760 |
| 6.8293 | 280 | 0.0451 | - | - |
| 7.0732 | 290 | 0.0447 | - | - |
| 7.3171 | 300 | 0.0507 | 0.0001 | 0.0783 |
| 7.5610 | 310 | 0.0571 | - | - |
| 7.8049 | 320 | 0.0534 | - | - |
| 7.9268 | 325 | - | 0.0001 | 0.0787 |
| 8.0488 | 330 | 0.041 | - | - |
| 8.2927 | 340 | 0.0458 | - | - |
| 8.5366 | 350 | 0.0534 | 0.0001 | 0.0819 |
| 8.7805 | 360 | 0.0594 | - | - |
| 9.0244 | 370 | 0.0381 | - | - |
| 9.1463 | 375 | - | 0.0001 | 0.0815 |
| 9.2683 | 380 | 0.046 | - | - |
| 9.5122 | 390 | 0.0507 | - | - |
| 9.7561 | 400 | 0.0575 | 0.0001 | 0.0822 |
| 10.0 | 410 | 0.0372 | - | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
hravi/xlm-roberta-base-finetuned-panx-fr
|
hravi
| 2025-04-24T17:49:51Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-10T03:43:56Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2754
- F1: 0.8461
## Model description
More information needed
## Intended uses & limitations
More information needed
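Although usage details are not provided, the model is a token-classification fine-tune (of the PAN-X French split, judging by the name), so inference with the standard `transformers` pipeline should look roughly like this; the example sentence and the aggregation setting are assumptions, and the label set depends on the fine-tuning config:
```python
from transformers import pipeline

# Hypothetical usage sketch for this NER fine-tune.
ner = pipeline(
    "token-classification",
    model="hravi/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Emmanuel Macron est né à Amiens."))
```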
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5744 | 1.0 | 191 | 0.3303 | 0.7749 |
| 0.2701 | 2.0 | 382 | 0.2652 | 0.8322 |
| 0.178 | 3.0 | 573 | 0.2754 | 0.8461 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
WensongSong/Insert-Anything
|
WensongSong
| 2025-04-24T17:42:46Z | 0 | 1 | null |
[
"en",
"dataset:WensongSong/AnyInsertion",
"arxiv:2504.15009",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Fill-dev",
"license:mit",
"region:us"
] | null | 2025-04-22T03:12:59Z |
---
license: mit
datasets:
- WensongSong/AnyInsertion
language:
- en
base_model:
- black-forest-labs/FLUX.1-Fill-dev
---
<h1 align="center">Insert Anything</h1>
<p align="center">
<a href="https://song-wensong.github.io/"><strong>Wensong Song</strong></a>
·
<a href="https://openreview.net/profile?id=~Hong_Jiang4"><strong>Hong Jiang</strong></a>
·
<a href="https://z-x-yang.github.io/"><strong>Zongxing Yang</strong></a>
·
<a href="https://scholar.google.com/citations?user=WKLRPsAAAAAJ&hl=en"><strong>Ruijie Quan</strong></a>
·
<a href="https://scholar.google.com/citations?user=RMSuNFwAAAAJ&hl=en"><strong>Yi Yang</strong></a>
<br>
<br>
<a href="https://arxiv.org/pdf/2504.15009" style="display: inline-block; margin-right: 10px;">
<img src='https://img.shields.io/badge/arXiv-InsertAnything-red?color=%23aa1a1a' alt='Paper PDF'>
</a>
<a href='https://song-wensong.github.io/insert-anything/' style="display: inline-block; margin-right: 10px;">
<img src='https://img.shields.io/badge/Project%20Page-InsertAnything-cyan?logoColor=%23FFD21E&color=%23cbe6f2' alt='Project Page'>
</a>
<a href='https://github.com/song-wensong/insert-anything' style="display: inline-block;">
<img src='https://img.shields.io/badge/GitHub-InsertAnything-black?logoColor=23FFD21E&color=%231d2125'>
</a>
<br>
<b>Zhejiang University | Harvard University | Nanyang Technological University </b>
</p>
## News
* **[2025.4.25]** Release **AnyInsertion** dataset on [HuggingFace](https://huggingface.co/datasets/WensongSong/AnyInsertion).
* **[2025.4.22]** Release inference & demo code on [GitHub](https://github.com/song-wensong/insert-anything), and mask-prompt pretrained checkpoint.
## Model Introduction
The checkpoint released so far, `20250321_steps5000_pytorch_lora_weights.safetensors`, supports mask-prompt image insertion. Additional checkpoints will be released in future updates.
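A hedged sketch for fetching the released checkpoint with `huggingface_hub` (the repo id and filename are taken from this card; the download call is an assumption about how you would consume it, not an official loader):
```python
from huggingface_hub import hf_hub_download

# Download the mask-prompt LoRA weights named above.
ckpt_path = hf_hub_download(
    repo_id="WensongSong/Insert-Anything",
    filename="20250321_steps5000_pytorch_lora_weights.safetensors",
)
print(ckpt_path)
```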
## Citation
```
@article{song2025insert,
title={Insert Anything: Image Insertion via In-Context Editing in DiT},
author={Song, Wensong and Jiang, Hong and Yang, Zongxing and Quan, Ruijie and Yang, Yi},
journal={arXiv preprint arXiv:2504.15009},
year={2025}
}
```
|
Snowflake/snowflake-arctic-embed-m-v1.5
|
Snowflake
| 2025-04-24T17:36:32Z | 247,982 | 58 |
sentence-transformers
|
[
"sentence-transformers",
"onnx",
"safetensors",
"gguf",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"arctic",
"snowflake-arctic-embed",
"transformers.js",
"arxiv:2412.04506",
"arxiv:2407.18887",
"arxiv:2405.05374",
"arxiv:2205.13147",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-07-03T18:46:29Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
license: apache-2.0
model-index:
- name: snowflake-arctic-embed-m-v1.5
results:
- dataset:
config: default
name: MTEB ArguAna
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: main_score
value: 59.53000000000001
- type: map_at_1
value: 34.282000000000004
- type: map_at_10
value: 50.613
- type: map_at_100
value: 51.269
- type: map_at_1000
value: 51.271
- type: map_at_20
value: 51.158
- type: map_at_3
value: 45.626
- type: map_at_5
value: 48.638
- type: mrr_at_1
value: 34.92176386913229
- type: mrr_at_10
value: 50.856081645555406
- type: mrr_at_100
value: 51.510739437069034
- type: mrr_at_1000
value: 51.51299498830165
- type: mrr_at_20
value: 51.39987941081724
- type: mrr_at_3
value: 45.993361782835514
- type: mrr_at_5
value: 48.88098624940742
- type: nauc_map_at_1000_diff1
value: 10.628675774160785
- type: nauc_map_at_1000_max
value: -10.11742589992339
- type: nauc_map_at_1000_std
value: -18.29277379812427
- type: nauc_map_at_100_diff1
value: 10.63250240035489
- type: nauc_map_at_100_max
value: -10.112078786734363
- type: nauc_map_at_100_std
value: -18.288524872706834
- type: nauc_map_at_10_diff1
value: 10.476494913081712
- type: nauc_map_at_10_max
value: -9.890937746734037
- type: nauc_map_at_10_std
value: -18.279750514750443
- type: nauc_map_at_1_diff1
value: 14.549204048461151
- type: nauc_map_at_1_max
value: -12.230560087701225
- type: nauc_map_at_1_std
value: -19.469903650130362
- type: nauc_map_at_20_diff1
value: 10.586564571825674
- type: nauc_map_at_20_max
value: -10.00292720526217
- type: nauc_map_at_20_std
value: -18.258077347878064
- type: nauc_map_at_3_diff1
value: 10.378663968090372
- type: nauc_map_at_3_max
value: -10.458896171786185
- type: nauc_map_at_3_std
value: -18.38852760333766
- type: nauc_map_at_5_diff1
value: 10.235960275925581
- type: nauc_map_at_5_max
value: -10.239496080409058
- type: nauc_map_at_5_std
value: -18.817023479445886
- type: nauc_mrr_at_1000_diff1
value: 8.718212649575722
- type: nauc_mrr_at_1000_max
value: -10.81022794038691
- type: nauc_mrr_at_1000_std
value: -17.87669499555167
- type: nauc_mrr_at_100_diff1
value: 8.722174171165133
- type: nauc_mrr_at_100_max
value: -10.804840985713525
- type: nauc_mrr_at_100_std
value: -17.872487099359986
- type: nauc_mrr_at_10_diff1
value: 8.609421635870238
- type: nauc_mrr_at_10_max
value: -10.568644717548432
- type: nauc_mrr_at_10_std
value: -17.872968762635814
- type: nauc_mrr_at_1_diff1
value: 12.69590006263834
- type: nauc_mrr_at_1_max
value: -12.082056561238321
- type: nauc_mrr_at_1_std
value: -18.036424092186657
- type: nauc_mrr_at_20_diff1
value: 8.684842497970315
- type: nauc_mrr_at_20_max
value: -10.691578914627286
- type: nauc_mrr_at_20_std
value: -17.84350301434992
- type: nauc_mrr_at_3_diff1
value: 8.649761557556763
- type: nauc_mrr_at_3_max
value: -11.104694428047496
- type: nauc_mrr_at_3_std
value: -18.149917948370344
- type: nauc_mrr_at_5_diff1
value: 8.433489750038396
- type: nauc_mrr_at_5_max
value: -10.917772454397436
- type: nauc_mrr_at_5_std
value: -18.4094211134111
- type: nauc_ndcg_at_1000_diff1
value: 10.19041067807956
- type: nauc_ndcg_at_1000_max
value: -9.54328201605796
- type: nauc_ndcg_at_1000_std
value: -17.824620427456633
- type: nauc_ndcg_at_100_diff1
value: 10.289491087585963
- type: nauc_ndcg_at_100_max
value: -9.357214331420337
- type: nauc_ndcg_at_100_std
value: -17.657600653632873
- type: nauc_ndcg_at_10_diff1
value: 9.435530877596092
- type: nauc_ndcg_at_10_max
value: -8.182581635383546
- type: nauc_ndcg_at_10_std
value: -17.603156479980388
- type: nauc_ndcg_at_1_diff1
value: 14.549204048461151
- type: nauc_ndcg_at_1_max
value: -12.230560087701225
- type: nauc_ndcg_at_1_std
value: -19.469903650130362
- type: nauc_ndcg_at_20_diff1
value: 9.885227087275197
- type: nauc_ndcg_at_20_max
value: -8.52362662391439
- type: nauc_ndcg_at_20_std
value: -17.441705436231764
- type: nauc_ndcg_at_3_diff1
value: 9.22542769998547
- type: nauc_ndcg_at_3_max
value: -9.903590564219288
- type: nauc_ndcg_at_3_std
value: -18.357220221111593
- type: nauc_ndcg_at_5_diff1
value: 8.8756720745828
- type: nauc_ndcg_at_5_max
value: -9.269764943861245
- type: nauc_ndcg_at_5_std
value: -19.009229433187784
- type: nauc_precision_at_1000_diff1
value: 3.733355117431035
- type: nauc_precision_at_1000_max
value: 3.9603571352517393
- type: nauc_precision_at_1000_std
value: 70.07345061131439
- type: nauc_precision_at_100_diff1
value: 29.019032142462457
- type: nauc_precision_at_100_max
value: 40.75153328286103
- type: nauc_precision_at_100_std
value: 62.634249549126594
- type: nauc_precision_at_10_diff1
value: 2.5762677254910353
- type: nauc_precision_at_10_max
value: 6.096298633773051
- type: nauc_precision_at_10_std
value: -11.507400451348587
- type: nauc_precision_at_1_diff1
value: 14.549204048461151
- type: nauc_precision_at_1_max
value: -12.230560087701225
- type: nauc_precision_at_1_std
value: -19.469903650130362
- type: nauc_precision_at_20_diff1
value: 1.715540124567996
- type: nauc_precision_at_20_max
value: 21.53546453945913
- type: nauc_precision_at_20_std
value: 1.537961142195571
- type: nauc_precision_at_3_diff1
value: 5.701850652555737
- type: nauc_precision_at_3_max
value: -8.180345365085552
- type: nauc_precision_at_3_std
value: -18.37033750502482
- type: nauc_precision_at_5_diff1
value: 3.6053552181042843
- type: nauc_precision_at_5_max
value: -5.207647070615612
- type: nauc_precision_at_5_std
value: -19.89491085427258
- type: nauc_recall_at_1000_diff1
value: 3.733355117431255
- type: nauc_recall_at_1000_max
value: 3.9603571352482194
- type: nauc_recall_at_1000_std
value: 70.07345061131205
- type: nauc_recall_at_100_diff1
value: 29.01903214246288
- type: nauc_recall_at_100_max
value: 40.7515332828621
- type: nauc_recall_at_100_std
value: 62.63424954912607
- type: nauc_recall_at_10_diff1
value: 2.5762677254911988
- type: nauc_recall_at_10_max
value: 6.0962986337729905
- type: nauc_recall_at_10_std
value: -11.507400451348577
- type: nauc_recall_at_1_diff1
value: 14.549204048461151
- type: nauc_recall_at_1_max
value: -12.230560087701225
- type: nauc_recall_at_1_std
value: -19.469903650130362
- type: nauc_recall_at_20_diff1
value: 1.7155401245682675
- type: nauc_recall_at_20_max
value: 21.535464539459632
- type: nauc_recall_at_20_std
value: 1.5379611421957025
- type: nauc_recall_at_3_diff1
value: 5.7018506525557875
- type: nauc_recall_at_3_max
value: -8.180345365085538
- type: nauc_recall_at_3_std
value: -18.370337505024796
- type: nauc_recall_at_5_diff1
value: 3.6053552181043913
- type: nauc_recall_at_5_max
value: -5.207647070615579
- type: nauc_recall_at_5_std
value: -19.894910854272492
- type: ndcg_at_1
value: 34.282000000000004
- type: ndcg_at_10
value: 59.53000000000001
- type: ndcg_at_100
value: 62.187000000000005
- type: ndcg_at_1000
value: 62.243
- type: ndcg_at_20
value: 61.451
- type: ndcg_at_3
value: 49.393
- type: ndcg_at_5
value: 54.771
- type: precision_at_1
value: 34.282000000000004
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.769
- type: precision_at_3
value: 20.104
- type: precision_at_5
value: 14.651
- type: recall_at_1
value: 34.282000000000004
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 95.377
- type: recall_at_3
value: 60.313
- type: recall_at_5
value: 73.257
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: mteb/cqadupstack-android
metrics:
- type: main_score
value: 53.885000000000005
- type: map_at_1
value: 35.429
- type: map_at_10
value: 47.469
- type: map_at_100
value: 48.997
- type: map_at_1000
value: 49.117
- type: map_at_20
value: 48.324
- type: map_at_3
value: 43.835
- type: map_at_5
value: 46.043
- type: mrr_at_1
value: 43.34763948497854
- type: mrr_at_10
value: 53.258623430297234
- type: mrr_at_100
value: 53.99123884299005
- type: mrr_at_1000
value: 54.02458101713216
- type: mrr_at_20
value: 53.695964669618945
- type: mrr_at_3
value: 50.81068192656173
- type: mrr_at_5
value: 52.45588936576058
- type: nauc_map_at_1000_diff1
value: 51.55382824218782
- type: nauc_map_at_1000_max
value: 31.855350695084606
- type: nauc_map_at_1000_std
value: -5.465862008150992
- type: nauc_map_at_100_diff1
value: 51.55889312452534
- type: nauc_map_at_100_max
value: 31.88429637207401
- type: nauc_map_at_100_std
value: -5.40805152544196
- type: nauc_map_at_10_diff1
value: 51.6592677505875
- type: nauc_map_at_10_max
value: 31.554425233617543
- type: nauc_map_at_10_std
value: -6.125756131339046
- type: nauc_map_at_1_diff1
value: 55.6889617582672
- type: nauc_map_at_1_max
value: 27.821166966868176
- type: nauc_map_at_1_std
value: -5.778838498211728
- type: nauc_map_at_20_diff1
value: 51.70520970992564
- type: nauc_map_at_20_max
value: 31.811676633900465
- type: nauc_map_at_20_std
value: -5.463596751904718
- type: nauc_map_at_3_diff1
value: 53.206169626589606
- type: nauc_map_at_3_max
value: 31.64373830824983
- type: nauc_map_at_3_std
value: -6.054761451312827
- type: nauc_map_at_5_diff1
value: 52.37308971673694
- type: nauc_map_at_5_max
value: 31.974302019633644
- type: nauc_map_at_5_std
value: -6.302653399940531
- type: nauc_mrr_at_1000_diff1
value: 49.345152231490616
- type: nauc_mrr_at_1000_max
value: 33.49789501712511
- type: nauc_mrr_at_1000_std
value: -6.054730861163538
- type: nauc_mrr_at_100_diff1
value: 49.3387577601307
- type: nauc_mrr_at_100_max
value: 33.48149992464187
- type: nauc_mrr_at_100_std
value: -6.061177137579308
- type: nauc_mrr_at_10_diff1
value: 49.08312288449718
- type: nauc_mrr_at_10_max
value: 33.470393322577465
- type: nauc_mrr_at_10_std
value: -6.180286430216975
- type: nauc_mrr_at_1_diff1
value: 52.43364978537192
- type: nauc_mrr_at_1_max
value: 31.521755633355713
- type: nauc_mrr_at_1_std
value: -7.002499524130836
- type: nauc_mrr_at_20_diff1
value: 49.311059224991766
- type: nauc_mrr_at_20_max
value: 33.538523037692144
- type: nauc_mrr_at_20_std
value: -6.034619474981136
- type: nauc_mrr_at_3_diff1
value: 49.90489868439366
- type: nauc_mrr_at_3_max
value: 34.400493912164606
- type: nauc_mrr_at_3_std
value: -6.028875320994629
- type: nauc_mrr_at_5_diff1
value: 49.033661898983475
- type: nauc_mrr_at_5_max
value: 33.732315350193936
- type: nauc_mrr_at_5_std
value: -6.272548556330368
- type: nauc_ndcg_at_1000_diff1
value: 49.81681892539247
- type: nauc_ndcg_at_1000_max
value: 33.06518006062093
- type: nauc_ndcg_at_1000_std
value: -4.282105713014755
- type: nauc_ndcg_at_100_diff1
value: 49.42362108857786
- type: nauc_ndcg_at_100_max
value: 32.92024325540483
- type: nauc_ndcg_at_100_std
value: -3.7786765305496717
- type: nauc_ndcg_at_10_diff1
value: 48.83102435475594
- type: nauc_ndcg_at_10_max
value: 31.898404563611958
- type: nauc_ndcg_at_10_std
value: -6.2024003866707
- type: nauc_ndcg_at_1_diff1
value: 52.43364978537192
- type: nauc_ndcg_at_1_max
value: 31.521755633355713
- type: nauc_ndcg_at_1_std
value: -7.002499524130836
- type: nauc_ndcg_at_20_diff1
value: 49.466526454438316
- type: nauc_ndcg_at_20_max
value: 32.424462698701674
- type: nauc_ndcg_at_20_std
value: -4.520809563712905
- type: nauc_ndcg_at_3_diff1
value: 50.997884562583884
- type: nauc_ndcg_at_3_max
value: 33.26787046916917
- type: nauc_ndcg_at_3_std
value: -6.340699471083753
- type: nauc_ndcg_at_5_diff1
value: 49.68314458398097
- type: nauc_ndcg_at_5_max
value: 32.80910071143984
- type: nauc_ndcg_at_5_std
value: -6.734495576445887
- type: nauc_precision_at_1000_diff1
value: -24.18940012795299
- type: nauc_precision_at_1000_max
value: -10.995343674356896
- type: nauc_precision_at_1000_std
value: -8.298841004724856
- type: nauc_precision_at_100_diff1
value: -18.104939577865935
- type: nauc_precision_at_100_max
value: -1.3757613100627637
- type: nauc_precision_at_100_std
value: 0.07661922190466432
- type: nauc_precision_at_10_diff1
value: 3.9624459059275967
- type: nauc_precision_at_10_max
value: 14.841561593450391
- type: nauc_precision_at_10_std
value: -2.485374333613117
- type: nauc_precision_at_1_diff1
value: 52.43364978537192
- type: nauc_precision_at_1_max
value: 31.521755633355713
- type: nauc_precision_at_1_std
value: -7.002499524130836
- type: nauc_precision_at_20_diff1
value: -4.4791763436505265
- type: nauc_precision_at_20_max
value: 9.157872836996276
- type: nauc_precision_at_20_std
value: 2.086903518342088
- type: nauc_precision_at_3_diff1
value: 28.480888018235568
- type: nauc_precision_at_3_max
value: 30.34526267718485
- type: nauc_precision_at_3_std
value: -6.3006706923866025
- type: nauc_precision_at_5_diff1
value: 16.488039195453517
- type: nauc_precision_at_5_max
value: 24.593477099241852
- type: nauc_precision_at_5_std
value: -5.316448107840636
- type: nauc_recall_at_1000_diff1
value: 34.715187316533076
- type: nauc_recall_at_1000_max
value: 58.2266544684947
- type: nauc_recall_at_1000_std
value: 63.85237636398278
- type: nauc_recall_at_100_diff1
value: 36.08623826028132
- type: nauc_recall_at_100_max
value: 33.05011429439473
- type: nauc_recall_at_100_std
value: 16.559545021212564
- type: nauc_recall_at_10_diff1
value: 39.76738610714205
- type: nauc_recall_at_10_max
value: 28.233045706945997
- type: nauc_recall_at_10_std
value: -5.13243784043598
- type: nauc_recall_at_1_diff1
value: 55.6889617582672
- type: nauc_recall_at_1_max
value: 27.821166966868176
- type: nauc_recall_at_1_std
value: -5.778838498211728
- type: nauc_recall_at_20_diff1
value: 41.18682480073759
- type: nauc_recall_at_20_max
value: 29.525993239296945
- type: nauc_recall_at_20_std
value: 1.5003598438954298
- type: nauc_recall_at_3_diff1
value: 48.31879460301157
- type: nauc_recall_at_3_max
value: 32.93751306970167
- type: nauc_recall_at_3_std
value: -5.28070084211707
- type: nauc_recall_at_5_diff1
value: 44.327686388315435
- type: nauc_recall_at_5_max
value: 32.04823486234599
- type: nauc_recall_at_5_std
value: -6.4221525602778256
- type: ndcg_at_1
value: 43.348
- type: ndcg_at_10
value: 53.885000000000005
- type: ndcg_at_100
value: 59.204
- type: ndcg_at_1000
value: 60.744
- type: ndcg_at_20
value: 55.995
- type: ndcg_at_3
value: 49.112
- type: ndcg_at_5
value: 51.61900000000001
- type: precision_at_1
value: 43.348
- type: precision_at_10
value: 10.242999999999999
- type: precision_at_100
value: 1.6150000000000002
- type: precision_at_1000
value: 0.203
- type: precision_at_20
value: 6.066
- type: precision_at_3
value: 23.605
- type: precision_at_5
value: 17.024
- type: recall_at_1
value: 35.429
- type: recall_at_10
value: 65.77199999999999
- type: recall_at_100
value: 87.89
- type: recall_at_1000
value: 97.13000000000001
- type: recall_at_20
value: 73.299
- type: recall_at_3
value: 52.034000000000006
- type: recall_at_5
value: 58.96
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: mteb/cqadupstack-english
metrics:
- type: main_score
value: 49.55
- type: map_at_1
value: 31.684
- type: map_at_10
value: 43.258
- type: map_at_100
value: 44.628
- type: map_at_1000
value: 44.761
- type: map_at_20
value: 44.015
- type: map_at_3
value: 39.778000000000006
- type: map_at_5
value: 41.643
- type: mrr_at_1
value: 39.87261146496815
- type: mrr_at_10
value: 49.31978566373469
- type: mrr_at_100
value: 49.94922739445482
- type: mrr_at_1000
value: 49.990325601254106
- type: mrr_at_20
value: 49.70597468576704
- type: mrr_at_3
value: 47.070063694267546
- type: mrr_at_5
value: 48.23248407643316
- type: nauc_map_at_1000_diff1
value: 53.44044712371752
- type: nauc_map_at_1000_max
value: 34.5651440062204
- type: nauc_map_at_1000_std
value: -0.9814384609230475
- type: nauc_map_at_100_diff1
value: 53.429004435388464
- type: nauc_map_at_100_max
value: 34.52038957273436
- type: nauc_map_at_100_std
value: -1.1021936362699805
- type: nauc_map_at_10_diff1
value: 53.879128574022005
- type: nauc_map_at_10_max
value: 33.74771524140917
- type: nauc_map_at_10_std
value: -2.945132777205236
- type: nauc_map_at_1_diff1
value: 60.25159799695403
- type: nauc_map_at_1_max
value: 26.843892985235808
- type: nauc_map_at_1_std
value: -9.618702739509093
- type: nauc_map_at_20_diff1
value: 53.56789898225283
- type: nauc_map_at_20_max
value: 34.11628845872402
- type: nauc_map_at_20_std
value: -2.024376635870884
- type: nauc_map_at_3_diff1
value: 54.45882099014072
- type: nauc_map_at_3_max
value: 31.29495446507793
- type: nauc_map_at_3_std
value: -6.391948228781555
- type: nauc_map_at_5_diff1
value: 54.20536489050697
- type: nauc_map_at_5_max
value: 32.31001487256826
- type: nauc_map_at_5_std
value: -5.050953263346934
- type: nauc_mrr_at_1000_diff1
value: 50.835858995999125
- type: nauc_mrr_at_1000_max
value: 38.20717381701079
- type: nauc_mrr_at_1000_std
value: 4.174163368228787
- type: nauc_mrr_at_100_diff1
value: 50.827072441041224
- type: nauc_mrr_at_100_max
value: 38.21077622034756
- type: nauc_mrr_at_100_std
value: 4.1951082737013365
- type: nauc_mrr_at_10_diff1
value: 50.90578491570948
- type: nauc_mrr_at_10_max
value: 38.19229691746408
- type: nauc_mrr_at_10_std
value: 3.8290750066335546
- type: nauc_mrr_at_1_diff1
value: 54.807021746871186
- type: nauc_mrr_at_1_max
value: 37.09225642043841
- type: nauc_mrr_at_1_std
value: 0.5654547513131355
- type: nauc_mrr_at_20_diff1
value: 50.86247832095378
- type: nauc_mrr_at_20_max
value: 38.19277867384178
- type: nauc_mrr_at_20_std
value: 4.098932316791841
- type: nauc_mrr_at_3_diff1
value: 50.788934370903036
- type: nauc_mrr_at_3_max
value: 37.72130561895659
- type: nauc_mrr_at_3_std
value: 2.7339370381517583
- type: nauc_mrr_at_5_diff1
value: 50.72543792525547
- type: nauc_mrr_at_5_max
value: 37.57740908475375
- type: nauc_mrr_at_5_std
value: 2.742881431085094
- type: nauc_ndcg_at_1000_diff1
value: 50.89692885407576
- type: nauc_ndcg_at_1000_max
value: 37.250583054716955
- type: nauc_ndcg_at_1000_std
value: 5.552279826578831
- type: nauc_ndcg_at_100_diff1
value: 50.624606875496944
- type: nauc_ndcg_at_100_max
value: 37.1024514234627
- type: nauc_ndcg_at_100_std
value: 5.495892760032762
- type: nauc_ndcg_at_10_diff1
value: 51.910387255793445
- type: nauc_ndcg_at_10_max
value: 36.71168418905039
- type: nauc_ndcg_at_10_std
value: 2.3064115117905217
- type: nauc_ndcg_at_1_diff1
value: 54.807021746871186
- type: nauc_ndcg_at_1_max
value: 37.09225642043841
- type: nauc_ndcg_at_1_std
value: 0.5654547513131355
- type: nauc_ndcg_at_20_diff1
value: 51.43416588546778
- type: nauc_ndcg_at_20_max
value: 36.76387180172346
- type: nauc_ndcg_at_20_std
value: 3.7012798827049718
- type: nauc_ndcg_at_3_diff1
value: 50.91198494475423
- type: nauc_ndcg_at_3_max
value: 34.92770670756687
- type: nauc_ndcg_at_3_std
value: -0.9071486759887368
- type: nauc_ndcg_at_5_diff1
value: 51.63559468683886
- type: nauc_ndcg_at_5_max
value: 34.86849679864564
- type: nauc_ndcg_at_5_std
value: -0.734837221224976
- type: nauc_precision_at_1000_diff1
value: -13.43645457127175
- type: nauc_precision_at_1000_max
value: 12.71162105198664
- type: nauc_precision_at_1000_std
value: 33.175399007040255
- type: nauc_precision_at_100_diff1
value: -8.549834785105412
- type: nauc_precision_at_100_max
value: 22.47383497331883
- type: nauc_precision_at_100_std
value: 39.09108761430844
- type: nauc_precision_at_10_diff1
value: 7.556572451100043
- type: nauc_precision_at_10_max
value: 35.35285122987575
- type: nauc_precision_at_10_std
value: 29.417466305615967
- type: nauc_precision_at_1_diff1
value: 54.807021746871186
- type: nauc_precision_at_1_max
value: 37.09225642043841
- type: nauc_precision_at_1_std
value: 0.5654547513131355
- type: nauc_precision_at_20_diff1
value: -0.550158641635712
- type: nauc_precision_at_20_max
value: 29.9068430006187
- type: nauc_precision_at_20_std
value: 33.920603132821185
- type: nauc_precision_at_3_diff1
value: 25.551264664276687
- type: nauc_precision_at_3_max
value: 37.59463225854679
- type: nauc_precision_at_3_std
value: 13.707295021359043
- type: nauc_precision_at_5_diff1
value: 17.76136129817151
- type: nauc_precision_at_5_max
value: 35.85363807255972
- type: nauc_precision_at_5_std
value: 19.48470876841111
- type: nauc_recall_at_1000_diff1
value: 37.1593620123866
- type: nauc_recall_at_1000_max
value: 46.29322536951135
- type: nauc_recall_at_1000_std
value: 51.47312657083967
- type: nauc_recall_at_100_diff1
value: 37.7542224949536
- type: nauc_recall_at_100_max
value: 38.84120637703135
- type: nauc_recall_at_100_std
value: 28.839672572221925
- type: nauc_recall_at_10_diff1
value: 46.24130302658384
- type: nauc_recall_at_10_max
value: 35.89001724712849
- type: nauc_recall_at_10_std
value: 6.985137790828618
- type: nauc_recall_at_1_diff1
value: 60.25159799695403
- type: nauc_recall_at_1_max
value: 26.843892985235808
- type: nauc_recall_at_1_std
value: -9.618702739509093
- type: nauc_recall_at_20_diff1
value: 43.63576680886187
- type: nauc_recall_at_20_max
value: 36.79079644708101
- type: nauc_recall_at_20_std
value: 13.81561928605839
- type: nauc_recall_at_3_diff1
value: 48.2299322140522
- type: nauc_recall_at_3_max
value: 30.038088484376203
- type: nauc_recall_at_3_std
value: -4.871116183843762
- type: nauc_recall_at_5_diff1
value: 47.22331872695983
- type: nauc_recall_at_5_max
value: 30.398541477173136
- type: nauc_recall_at_5_std
value: -3.2038541888528957
- type: ndcg_at_1
value: 39.873
- type: ndcg_at_10
value: 49.55
- type: ndcg_at_100
value: 53.809
- type: ndcg_at_1000
value: 55.767999999999994
- type: ndcg_at_20
value: 51.275999999999996
- type: ndcg_at_3
value: 44.91
- type: ndcg_at_5
value: 46.855999999999995
- type: precision_at_1
value: 39.873
- type: precision_at_10
value: 9.65
- type: precision_at_100
value: 1.522
- type: precision_at_1000
value: 0.196
- type: precision_at_20
value: 5.701
- type: precision_at_3
value: 22.166
- type: precision_at_5
value: 15.643
- type: recall_at_1
value: 31.684
- type: recall_at_10
value: 60.69
- type: recall_at_100
value: 78.521
- type: recall_at_1000
value: 91.02900000000001
- type: recall_at_20
value: 66.973
- type: recall_at_3
value: 46.807
- type: recall_at_5
value: 52.402
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: mteb/cqadupstack-gaming
metrics:
- type: main_score
value: 62.686
- type: map_at_1
value: 43.856
- type: map_at_10
value: 57.056
- type: map_at_100
value: 58.048
- type: map_at_1000
value: 58.092
- type: map_at_20
value: 57.684000000000005
- type: map_at_3
value: 53.958
- type: map_at_5
value: 55.80500000000001
- type: mrr_at_1
value: 50.03134796238244
- type: mrr_at_10
value: 60.31022043091019
- type: mrr_at_100
value: 60.91892338857461
- type: mrr_at_1000
value: 60.93770463536649
- type: mrr_at_20
value: 60.705642387392736
- type: mrr_at_3
value: 58.286311389759746
- type: mrr_at_5
value: 59.49320794148393
- type: nauc_map_at_1000_diff1
value: 54.849140197256695
- type: nauc_map_at_1000_max
value: 38.978448968260224
- type: nauc_map_at_1000_std
value: 0.4955439383268162
- type: nauc_map_at_100_diff1
value: 54.824334747823364
- type: nauc_map_at_100_max
value: 38.959443109450994
- type: nauc_map_at_100_std
value: 0.49626092018886037
- type: nauc_map_at_10_diff1
value: 54.778189277103394
- type: nauc_map_at_10_max
value: 38.20972191654546
- type: nauc_map_at_10_std
value: -0.7239823837455759
- type: nauc_map_at_1_diff1
value: 58.74017164752485
- type: nauc_map_at_1_max
value: 31.528974862589585
- type: nauc_map_at_1_std
value: -3.273824691929492
- type: nauc_map_at_20_diff1
value: 54.78943693416187
- type: nauc_map_at_20_max
value: 38.77930316443076
- type: nauc_map_at_20_std
value: 0.25607460088355544
- type: nauc_map_at_3_diff1
value: 55.68313410225767
- type: nauc_map_at_3_max
value: 36.22847284104399
- type: nauc_map_at_3_std
value: -3.010979639100503
- type: nauc_map_at_5_diff1
value: 55.11385094420661
- type: nauc_map_at_5_max
value: 37.319681045490924
- type: nauc_map_at_5_std
value: -2.156640733221061
- type: nauc_mrr_at_1000_diff1
value: 54.504759468380705
- type: nauc_mrr_at_1000_max
value: 40.58849492650406
- type: nauc_mrr_at_1000_std
value: 1.8226622175866118
- type: nauc_mrr_at_100_diff1
value: 54.4918034449886
- type: nauc_mrr_at_100_max
value: 40.59202728933427
- type: nauc_mrr_at_100_std
value: 1.8276428096536335
- type: nauc_mrr_at_10_diff1
value: 54.33603399493329
- type: nauc_mrr_at_10_max
value: 40.58896878978089
- type: nauc_mrr_at_10_std
value: 1.5733340909114375
- type: nauc_mrr_at_1_diff1
value: 58.062410036466105
- type: nauc_mrr_at_1_max
value: 37.660958859966506
- type: nauc_mrr_at_1_std
value: 0.029007600674170648
- type: nauc_mrr_at_20_diff1
value: 54.43793386924358
- type: nauc_mrr_at_20_max
value: 40.66773423875307
- type: nauc_mrr_at_20_std
value: 1.891967891797154
- type: nauc_mrr_at_3_diff1
value: 54.77901284537966
- type: nauc_mrr_at_3_max
value: 40.182219821206964
- type: nauc_mrr_at_3_std
value: 0.8911935034597871
- type: nauc_mrr_at_5_diff1
value: 54.466068837163675
- type: nauc_mrr_at_5_max
value: 40.334996916684126
- type: nauc_mrr_at_5_std
value: 0.9460830492892364
- type: nauc_ndcg_at_1000_diff1
value: 53.8465376860938
- type: nauc_ndcg_at_1000_max
value: 41.63158111016696
- type: nauc_ndcg_at_1000_std
value: 3.864205884257578
- type: nauc_ndcg_at_100_diff1
value: 53.4025864436944
- type: nauc_ndcg_at_100_max
value: 41.805453995307914
- type: nauc_ndcg_at_100_std
value: 4.36777557904857
- type: nauc_ndcg_at_10_diff1
value: 52.96034987157544
- type: nauc_ndcg_at_10_max
value: 40.7601173480795
- type: nauc_ndcg_at_10_std
value: 1.905824035879141
- type: nauc_ndcg_at_1_diff1
value: 58.062410036466105
- type: nauc_ndcg_at_1_max
value: 37.660958859966506
- type: nauc_ndcg_at_1_std
value: 0.029007600674170648
- type: nauc_ndcg_at_20_diff1
value: 53.2834771889242
- type: nauc_ndcg_at_20_max
value: 41.713541932946406
- type: nauc_ndcg_at_20_std
value: 3.865102828793311
- type: nauc_ndcg_at_3_diff1
value: 54.03389464372289
- type: nauc_ndcg_at_3_max
value: 38.41449914649933
- type: nauc_ndcg_at_3_std
value: -0.886276189886313
- type: nauc_ndcg_at_5_diff1
value: 53.456413320299
- type: nauc_ndcg_at_5_max
value: 39.49048882649335
- type: nauc_ndcg_at_5_std
value: -0.42692690160443814
- type: nauc_precision_at_1000_diff1
value: -14.770791653274824
- type: nauc_precision_at_1000_max
value: 21.479874538905246
- type: nauc_precision_at_1000_std
value: 28.607024261300207
- type: nauc_precision_at_100_diff1
value: -12.189696449878126
- type: nauc_precision_at_100_max
value: 26.69785787492456
- type: nauc_precision_at_100_std
value: 33.59098307467553
- type: nauc_precision_at_10_diff1
value: 6.922968330978399
- type: nauc_precision_at_10_max
value: 34.52138344123087
- type: nauc_precision_at_10_std
value: 21.768427637079952
- type: nauc_precision_at_1_diff1
value: 58.062410036466105
- type: nauc_precision_at_1_max
value: 37.660958859966506
- type: nauc_precision_at_1_std
value: 0.029007600674170648
- type: nauc_precision_at_20_diff1
value: -0.6837867902179278
- type: nauc_precision_at_20_max
value: 33.98683709011133
- type: nauc_precision_at_20_std
value: 30.8845561918902
- type: nauc_precision_at_3_diff1
value: 28.195043041120847
- type: nauc_precision_at_3_max
value: 37.659916094938836
- type: nauc_precision_at_3_std
value: 7.226520146634867
- type: nauc_precision_at_5_diff1
value: 16.633667288096245
- type: nauc_precision_at_5_max
value: 34.90176597404891
- type: nauc_precision_at_5_std
value: 12.421585442334088
- type: nauc_recall_at_1000_diff1
value: 45.20743732415397
- type: nauc_recall_at_1000_max
value: 72.77115913579242
- type: nauc_recall_at_1000_std
value: 70.48328496679083
- type: nauc_recall_at_100_diff1
value: 38.56282680810794
- type: nauc_recall_at_100_max
value: 55.46797683321103
- type: nauc_recall_at_100_std
value: 36.878791151929136
- type: nauc_recall_at_10_diff1
value: 44.18252051452362
- type: nauc_recall_at_10_max
value: 43.33391810040086
- type: nauc_recall_at_10_std
value: 6.663378192277723
- type: nauc_recall_at_1_diff1
value: 58.74017164752485
- type: nauc_recall_at_1_max
value: 31.528974862589585
- type: nauc_recall_at_1_std
value: -3.273824691929492
- type: nauc_recall_at_20_diff1
value: 44.19944231642417
- type: nauc_recall_at_20_max
value: 49.401101483915866
- type: nauc_recall_at_20_std
value: 18.97803841673839
- type: nauc_recall_at_3_diff1
value: 49.56378985428704
- type: nauc_recall_at_3_max
value: 36.434210616870224
- type: nauc_recall_at_3_std
value: -2.850559971607616
- type: nauc_recall_at_5_diff1
value: 47.37107217086109
- type: nauc_recall_at_5_max
value: 39.0236745509895
- type: nauc_recall_at_5_std
value: -1.7402454457937195
- type: ndcg_at_1
value: 50.031000000000006
- type: ndcg_at_10
value: 62.686
- type: ndcg_at_100
value: 66.403
- type: ndcg_at_1000
value: 67.241
- type: ndcg_at_20
value: 64.37899999999999
- type: ndcg_at_3
value: 57.859
- type: ndcg_at_5
value: 60.375
- type: precision_at_1
value: 50.031000000000006
- type: precision_at_10
value: 9.856
- type: precision_at_100
value: 1.266
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 5.489
- type: precision_at_3
value: 25.746999999999996
- type: precision_at_5
value: 17.492
- type: recall_at_1
value: 43.856
- type: recall_at_10
value: 75.824
- type: recall_at_100
value: 91.622
- type: recall_at_1000
value: 97.538
- type: recall_at_20
value: 81.951
- type: recall_at_3
value: 63.016000000000005
- type: recall_at_5
value: 69.18299999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: mteb/cqadupstack-gis
metrics:
- type: main_score
value: 43.983
- type: map_at_1
value: 28.942
- type: map_at_10
value: 38.621
- type: map_at_100
value: 39.7
- type: map_at_1000
value: 39.766
- type: map_at_20
value: 39.262
- type: map_at_3
value: 35.719
- type: map_at_5
value: 37.378
- type: mrr_at_1
value: 31.29943502824859
- type: mrr_at_10
value: 40.76463994260603
- type: mrr_at_100
value: 41.67073617629083
- type: mrr_at_1000
value: 41.717446259457105
- type: mrr_at_20
value: 41.32577374689195
- type: mrr_at_3
value: 37.984934086628996
- type: mrr_at_5
value: 39.64595103578152
- type: nauc_map_at_1000_diff1
value: 43.64461679688985
- type: nauc_map_at_1000_max
value: 31.53717883948204
- type: nauc_map_at_1000_std
value: 1.193745788248017
- type: nauc_map_at_100_diff1
value: 43.63847825079489
- type: nauc_map_at_100_max
value: 31.536602619279165
- type: nauc_map_at_100_std
value: 1.2001240243342401
- type: nauc_map_at_10_diff1
value: 43.845991987142014
- type: nauc_map_at_10_max
value: 31.27509937344113
- type: nauc_map_at_10_std
value: 0.7327934840520994
- type: nauc_map_at_1_diff1
value: 50.62269273984579
- type: nauc_map_at_1_max
value: 30.16325757909521
- type: nauc_map_at_1_std
value: -0.6398875136233392
- type: nauc_map_at_20_diff1
value: 43.630758403790914
- type: nauc_map_at_20_max
value: 31.408258098047703
- type: nauc_map_at_20_std
value: 1.12616034652217
- type: nauc_map_at_3_diff1
value: 44.823493567359456
- type: nauc_map_at_3_max
value: 31.075886347614496
- type: nauc_map_at_3_std
value: -0.25126874515735426
- type: nauc_map_at_5_diff1
value: 43.79768853087658
- type: nauc_map_at_5_max
value: 31.091080995725324
- type: nauc_map_at_5_std
value: 0.16440771782544047
- type: nauc_mrr_at_1000_diff1
value: 42.7865400752329
- type: nauc_mrr_at_1000_max
value: 32.84731670326893
- type: nauc_mrr_at_1000_std
value: 2.6067637582013825
- type: nauc_mrr_at_100_diff1
value: 42.771741548331065
- type: nauc_mrr_at_100_max
value: 32.85324232845987
- type: nauc_mrr_at_100_std
value: 2.6092786694308376
- type: nauc_mrr_at_10_diff1
value: 42.82969738870672
- type: nauc_mrr_at_10_max
value: 32.69407549631432
- type: nauc_mrr_at_10_std
value: 2.302903910016054
- type: nauc_mrr_at_1_diff1
value: 49.05638333657571
- type: nauc_mrr_at_1_max
value: 33.12030717171514
- type: nauc_mrr_at_1_std
value: 1.3278035087690774
- type: nauc_mrr_at_20_diff1
value: 42.74267239536286
- type: nauc_mrr_at_20_max
value: 32.78571108973092
- type: nauc_mrr_at_20_std
value: 2.5932669908758643
- type: nauc_mrr_at_3_diff1
value: 43.69963426089187
- type: nauc_mrr_at_3_max
value: 32.78193126956233
- type: nauc_mrr_at_3_std
value: 1.634874463134699
- type: nauc_mrr_at_5_diff1
value: 42.838630647832524
- type: nauc_mrr_at_5_max
value: 32.459318735260545
- type: nauc_mrr_at_5_std
value: 1.9412518283209172
- type: nauc_ndcg_at_1000_diff1
value: 41.01253839851583
- type: nauc_ndcg_at_1000_max
value: 32.69570568894237
- type: nauc_ndcg_at_1000_std
value: 3.4254737113410343
- type: nauc_ndcg_at_100_diff1
value: 40.62589243745832
- type: nauc_ndcg_at_100_max
value: 32.664990655736126
- type: nauc_ndcg_at_100_std
value: 3.799569445326048
- type: nauc_ndcg_at_10_diff1
value: 41.31658753735306
- type: nauc_ndcg_at_10_max
value: 31.511946320339295
- type: nauc_ndcg_at_10_std
value: 2.0492930500796662
- type: nauc_ndcg_at_1_diff1
value: 49.05638333657571
- type: nauc_ndcg_at_1_max
value: 33.12030717171514
- type: nauc_ndcg_at_1_std
value: 1.3278035087690774
- type: nauc_ndcg_at_20_diff1
value: 40.66188223212841
- type: nauc_ndcg_at_20_max
value: 31.926240431497476
- type: nauc_ndcg_at_20_std
value: 3.370398664595343
- type: nauc_ndcg_at_3_diff1
value: 43.035580180241
- type: nauc_ndcg_at_3_max
value: 31.363874129878404
- type: nauc_ndcg_at_3_std
value: 0.1422507242819929
- type: nauc_ndcg_at_5_diff1
value: 41.29049003955878
- type: nauc_ndcg_at_5_max
value: 31.112034994977737
- type: nauc_ndcg_at_5_std
value: 0.860179279828966
- type: nauc_precision_at_1000_diff1
value: -12.41854465881981
- type: nauc_precision_at_1000_max
value: 14.706779246590548
- type: nauc_precision_at_1000_std
value: 9.812804367375206
- type: nauc_precision_at_100_diff1
value: 2.797520107808461
- type: nauc_precision_at_100_max
value: 24.335873541811406
- type: nauc_precision_at_100_std
value: 12.87186398750545
- type: nauc_precision_at_10_diff1
value: 24.530962799265847
- type: nauc_precision_at_10_max
value: 31.00772010798733
- type: nauc_precision_at_10_std
value: 6.696733001548185
- type: nauc_precision_at_1_diff1
value: 49.05638333657571
- type: nauc_precision_at_1_max
value: 33.12030717171514
- type: nauc_precision_at_1_std
value: 1.3278035087690774
- type: nauc_precision_at_20_diff1
value: 16.25028416351204
- type: nauc_precision_at_20_max
value: 29.629326492027342
- type: nauc_precision_at_20_std
value: 11.085888573121679
- type: nauc_precision_at_3_diff1
value: 33.923667689694256
- type: nauc_precision_at_3_max
value: 33.5859782361996
- type: nauc_precision_at_3_std
value: 1.9468331086918693
- type: nauc_precision_at_5_diff1
value: 27.917827233088875
- type: nauc_precision_at_5_max
value: 33.13290043423535
- type: nauc_precision_at_5_std
value: 3.800870695945311
- type: nauc_recall_at_1000_diff1
value: 9.680283388428789
- type: nauc_recall_at_1000_max
value: 49.479399284871235
- type: nauc_recall_at_1000_std
value: 31.506985071436088
- type: nauc_recall_at_100_diff1
value: 23.607673377885448
- type: nauc_recall_at_100_max
value: 36.637750366403935
- type: nauc_recall_at_100_std
value: 18.30770690564224
- type: nauc_recall_at_10_diff1
value: 33.199683418312446
- type: nauc_recall_at_10_max
value: 29.63115497012312
- type: nauc_recall_at_10_std
value: 4.813200391480566
- type: nauc_recall_at_1_diff1
value: 50.62269273984579
- type: nauc_recall_at_1_max
value: 30.16325757909521
- type: nauc_recall_at_1_std
value: -0.6398875136233392
- type: nauc_recall_at_20_diff1
value: 29.16488387844995
- type: nauc_recall_at_20_max
value: 30.788019479459
- type: nauc_recall_at_20_std
value: 11.031953917298853
- type: nauc_recall_at_3_diff1
value: 38.215351600417065
- type: nauc_recall_at_3_max
value: 29.619887154236128
- type: nauc_recall_at_3_std
value: -0.13237298980339363
- type: nauc_recall_at_5_diff1
value: 33.93788042633265
- type: nauc_recall_at_5_max
value: 28.67185092656741
- type: nauc_recall_at_5_std
value: 1.316700201091445
- type: ndcg_at_1
value: 31.299
- type: ndcg_at_10
value: 43.983
- type: ndcg_at_100
value: 48.992999999999995
- type: ndcg_at_1000
value: 50.757
- type: ndcg_at_20
value: 46.152
- type: ndcg_at_3
value: 38.367000000000004
- type: ndcg_at_5
value: 41.171
- type: precision_at_1
value: 31.299
- type: precision_at_10
value: 6.734
- type: precision_at_100
value: 0.972
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_20
value: 3.898
- type: precision_at_3
value: 16.121
- type: precision_at_5
value: 11.344999999999999
- type: recall_at_1
value: 28.942
- type: recall_at_10
value: 58.343999999999994
- type: recall_at_100
value: 80.82300000000001
- type: recall_at_1000
value: 94.348
- type: recall_at_20
value: 66.449
- type: recall_at_3
value: 43.415
- type: recall_at_5
value: 50.007999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: mteb/cqadupstack-mathematica
metrics:
- type: main_score
value: 33.144
- type: map_at_1
value: 19.41
- type: map_at_10
value: 27.802
- type: map_at_100
value: 29.157
- type: map_at_1000
value: 29.274
- type: map_at_20
value: 28.549000000000003
- type: map_at_3
value: 25.052999999999997
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.756218905472636
- type: mrr_at_10
value: 32.3623450209271
- type: mrr_at_100
value: 33.3648208444617
- type: mrr_at_1000
value: 33.427688215162185
- type: mrr_at_20
value: 32.93723485575758
- type: mrr_at_3
value: 29.539800995024883
- type: mrr_at_5
value: 31.156716417910452
- type: nauc_map_at_1000_diff1
value: 36.196391248081284
- type: nauc_map_at_1000_max
value: 25.650644367091495
- type: nauc_map_at_1000_std
value: 6.130340697729844
- type: nauc_map_at_100_diff1
value: 36.138890642411376
- type: nauc_map_at_100_max
value: 25.587124763888518
- type: nauc_map_at_100_std
value: 6.129336379055536
- type: nauc_map_at_10_diff1
value: 36.254426743566775
- type: nauc_map_at_10_max
value: 25.465599906543034
- type: nauc_map_at_10_std
value: 5.880280378112879
- type: nauc_map_at_1_diff1
value: 42.890551563179976
- type: nauc_map_at_1_max
value: 25.813805281076956
- type: nauc_map_at_1_std
value: 5.150718386163028
- type: nauc_map_at_20_diff1
value: 35.98551587974314
- type: nauc_map_at_20_max
value: 25.501540521726636
- type: nauc_map_at_20_std
value: 5.858703157458749
- type: nauc_map_at_3_diff1
value: 37.646558039577734
- type: nauc_map_at_3_max
value: 26.138491471124247
- type: nauc_map_at_3_std
value: 6.0487505175540734
- type: nauc_map_at_5_diff1
value: 36.817582976153695
- type: nauc_map_at_5_max
value: 25.398200211121146
- type: nauc_map_at_5_std
value: 6.31126763919522
- type: nauc_mrr_at_1000_diff1
value: 37.313544952847835
- type: nauc_mrr_at_1000_max
value: 26.96218532078988
- type: nauc_mrr_at_1000_std
value: 6.814359224654042
- type: nauc_mrr_at_100_diff1
value: 37.28104407653679
- type: nauc_mrr_at_100_max
value: 26.931243040477256
- type: nauc_mrr_at_100_std
value: 6.800500150841733
- type: nauc_mrr_at_10_diff1
value: 37.315832621275895
- type: nauc_mrr_at_10_max
value: 26.941454225978372
- type: nauc_mrr_at_10_std
value: 6.837046527796884
- type: nauc_mrr_at_1_diff1
value: 43.19904188582958
- type: nauc_mrr_at_1_max
value: 26.975620445904795
- type: nauc_mrr_at_1_std
value: 4.52071008581395
- type: nauc_mrr_at_20_diff1
value: 37.2200524790774
- type: nauc_mrr_at_20_max
value: 26.971494160765847
- type: nauc_mrr_at_20_std
value: 6.716431228783282
- type: nauc_mrr_at_3_diff1
value: 38.46236387340654
- type: nauc_mrr_at_3_max
value: 27.846812992192056
- type: nauc_mrr_at_3_std
value: 6.550711872569794
- type: nauc_mrr_at_5_diff1
value: 37.620346007658476
- type: nauc_mrr_at_5_max
value: 27.031025952102038
- type: nauc_mrr_at_5_std
value: 7.32343760231163
- type: nauc_ndcg_at_1000_diff1
value: 34.95081314840592
- type: nauc_ndcg_at_1000_max
value: 26.89265465124325
- type: nauc_ndcg_at_1000_std
value: 7.854154466831975
- type: nauc_ndcg_at_100_diff1
value: 34.01417812563093
- type: nauc_ndcg_at_100_max
value: 25.792737746436835
- type: nauc_ndcg_at_100_std
value: 7.726584165493833
- type: nauc_ndcg_at_10_diff1
value: 33.895122516474466
- type: nauc_ndcg_at_10_max
value: 25.388442204589612
- type: nauc_ndcg_at_10_std
value: 6.359560223645991
- type: nauc_ndcg_at_1_diff1
value: 43.19904188582958
- type: nauc_ndcg_at_1_max
value: 26.975620445904795
- type: nauc_ndcg_at_1_std
value: 4.52071008581395
- type: nauc_ndcg_at_20_diff1
value: 33.36078689830245
- type: nauc_ndcg_at_20_max
value: 25.531794610571563
- type: nauc_ndcg_at_20_std
value: 6.136658608653248
- type: nauc_ndcg_at_3_diff1
value: 36.44505602530781
- type: nauc_ndcg_at_3_max
value: 26.9104071983157
- type: nauc_ndcg_at_3_std
value: 6.427178520371878
- type: nauc_ndcg_at_5_diff1
value: 35.01384323197442
- type: nauc_ndcg_at_5_max
value: 25.5560447088692
- type: nauc_ndcg_at_5_std
value: 7.3676236760360485
- type: nauc_precision_at_1000_diff1
value: 2.8903331041804514
- type: nauc_precision_at_1000_max
value: 4.059662742366004
- type: nauc_precision_at_1000_std
value: -1.5891687644008334
- type: nauc_precision_at_100_diff1
value: 8.437726471693766
- type: nauc_precision_at_100_max
value: 11.250588557568427
- type: nauc_precision_at_100_std
value: 4.231571164627862
- type: nauc_precision_at_10_diff1
value: 19.57085237210294
- type: nauc_precision_at_10_max
value: 20.973093492003905
- type: nauc_precision_at_10_std
value: 3.197416248152466
- type: nauc_precision_at_1_diff1
value: 43.19904188582958
- type: nauc_precision_at_1_max
value: 26.975620445904795
- type: nauc_precision_at_1_std
value: 4.52071008581395
- type: nauc_precision_at_20_diff1
value: 15.67136554192724
- type: nauc_precision_at_20_max
value: 17.706882621057858
- type: nauc_precision_at_20_std
value: 1.9363472182867714
- type: nauc_precision_at_3_diff1
value: 30.38035695042325
- type: nauc_precision_at_3_max
value: 26.48218693244094
- type: nauc_precision_at_3_std
value: 6.424657705785632
- type: nauc_precision_at_5_diff1
value: 25.272543315171458
- type: nauc_precision_at_5_max
value: 22.32441421311652
- type: nauc_precision_at_5_std
value: 7.4912569081905716
- type: nauc_recall_at_1000_diff1
value: 25.5748044137675
- type: nauc_recall_at_1000_max
value: 43.85796585370269
- type: nauc_recall_at_1000_std
value: 30.0338086596789
- type: nauc_recall_at_100_diff1
value: 22.577080638885093
- type: nauc_recall_at_100_max
value: 23.224511700617477
- type: nauc_recall_at_100_std
value: 15.187963852289313
- type: nauc_recall_at_10_diff1
value: 25.058592299355908
- type: nauc_recall_at_10_max
value: 22.24448483279841
- type: nauc_recall_at_10_std
value: 6.3179089740052765
- type: nauc_recall_at_1_diff1
value: 42.890551563179976
- type: nauc_recall_at_1_max
value: 25.813805281076956
- type: nauc_recall_at_1_std
value: 5.150718386163028
- type: nauc_recall_at_20_diff1
value: 22.433865123187307
- type: nauc_recall_at_20_max
value: 22.739695641511762
- type: nauc_recall_at_20_std
value: 5.362005125538497
- type: nauc_recall_at_3_diff1
value: 32.17919168998616
- type: nauc_recall_at_3_max
value: 26.044028436867357
- type: nauc_recall_at_3_std
value: 7.420349884006329
- type: nauc_recall_at_5_diff1
value: 28.967104573649138
- type: nauc_recall_at_5_max
value: 23.40865848168201
- type: nauc_recall_at_5_std
value: 9.174406147723621
- type: ndcg_at_1
value: 23.756
- type: ndcg_at_10
value: 33.144
- type: ndcg_at_100
value: 39.261
- type: ndcg_at_1000
value: 41.881
- type: ndcg_at_20
value: 35.56
- type: ndcg_at_3
value: 27.927999999999997
- type: ndcg_at_5
value: 30.293999999999997
- type: precision_at_1
value: 23.756
- type: precision_at_10
value: 5.995
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_20
value: 3.688
- type: precision_at_3
value: 13.059999999999999
- type: precision_at_5
value: 9.602
- type: recall_at_1
value: 19.41
- type: recall_at_10
value: 45.074
- type: recall_at_100
value: 71.131
- type: recall_at_1000
value: 89.604
- type: recall_at_20
value: 53.673
- type: recall_at_3
value: 31.055
- type: recall_at_5
value: 36.714999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: mteb/cqadupstack-physics
metrics:
- type: main_score
value: 49.675000000000004
- type: map_at_1
value: 33.178999999999995
- type: map_at_10
value: 43.807
- type: map_at_100
value: 45.17
- type: map_at_1000
value: 45.271
- type: map_at_20
value: 44.516
- type: map_at_3
value: 40.813
- type: map_at_5
value: 42.457
- type: mrr_at_1
value: 40.32723772858518
- type: mrr_at_10
value: 49.646867409138814
- type: mrr_at_100
value: 50.493686101426285
- type: mrr_at_1000
value: 50.525386961808834
- type: mrr_at_20
value: 50.120274354884586
- type: mrr_at_3
value: 47.49759384023096
- type: mrr_at_5
value: 48.72473532242535
- type: nauc_map_at_1000_diff1
value: 49.5947127786396
- type: nauc_map_at_1000_max
value: 33.39720045844929
- type: nauc_map_at_1000_std
value: -3.131428593252271
- type: nauc_map_at_100_diff1
value: 49.57797867324617
- type: nauc_map_at_100_max
value: 33.356927974709464
- type: nauc_map_at_100_std
value: -3.1661365376766337
- type: nauc_map_at_10_diff1
value: 49.59294630598952
- type: nauc_map_at_10_max
value: 32.86647346990462
- type: nauc_map_at_10_std
value: -4.1582043443386745
- type: nauc_map_at_1_diff1
value: 53.98646767288695
- type: nauc_map_at_1_max
value: 29.45629077638936
- type: nauc_map_at_1_std
value: -5.621187380771589
- type: nauc_map_at_20_diff1
value: 49.486982890447074
- type: nauc_map_at_20_max
value: 33.11681933406332
- type: nauc_map_at_20_std
value: -3.5826433195146854
- type: nauc_map_at_3_diff1
value: 50.81807107491861
- type: nauc_map_at_3_max
value: 32.32552291988859
- type: nauc_map_at_3_std
value: -3.952946504088928
- type: nauc_map_at_5_diff1
value: 49.70201354274439
- type: nauc_map_at_5_max
value: 32.831846031004886
- type: nauc_map_at_5_std
value: -3.8330488624207737
- type: nauc_mrr_at_1000_diff1
value: 49.04159472507738
- type: nauc_mrr_at_1000_max
value: 35.617600171138676
- type: nauc_mrr_at_1000_std
value: -1.5975830757486646
- type: nauc_mrr_at_100_diff1
value: 49.03848471692094
- type: nauc_mrr_at_100_max
value: 35.61936748662614
- type: nauc_mrr_at_100_std
value: -1.5922053398594729
- type: nauc_mrr_at_10_diff1
value: 48.92463964652612
- type: nauc_mrr_at_10_max
value: 35.37757708992045
- type: nauc_mrr_at_10_std
value: -2.2052028139567303
- type: nauc_mrr_at_1_diff1
value: 52.23915787290734
- type: nauc_mrr_at_1_max
value: 34.393531787632334
- type: nauc_mrr_at_1_std
value: -1.452007661016969
- type: nauc_mrr_at_20_diff1
value: 48.91168438018404
- type: nauc_mrr_at_20_max
value: 35.478962544421876
- type: nauc_mrr_at_20_std
value: -1.8246048423555414
- type: nauc_mrr_at_3_diff1
value: 50.115432665442164
- type: nauc_mrr_at_3_max
value: 35.89093796085569
- type: nauc_mrr_at_3_std
value: -1.4895016313153366
- type: nauc_mrr_at_5_diff1
value: 49.04321261351915
- type: nauc_mrr_at_5_max
value: 35.85730520949451
- type: nauc_mrr_at_5_std
value: -1.68790556880753
- type: nauc_ndcg_at_1000_diff1
value: 48.294697499154374
- type: nauc_ndcg_at_1000_max
value: 35.167410242367595
- type: nauc_ndcg_at_1000_std
value: -0.6346078535914157
- type: nauc_ndcg_at_100_diff1
value: 48.025525283449014
- type: nauc_ndcg_at_100_max
value: 34.79288511776105
- type: nauc_ndcg_at_100_std
value: -0.7823403044086993
- type: nauc_ndcg_at_10_diff1
value: 47.70793258015258
- type: nauc_ndcg_at_10_max
value: 33.09558927880104
- type: nauc_ndcg_at_10_std
value: -4.7793864166260605
- type: nauc_ndcg_at_1_diff1
value: 52.23915787290734
- type: nauc_ndcg_at_1_max
value: 34.393531787632334
- type: nauc_ndcg_at_1_std
value: -1.452007661016969
- type: nauc_ndcg_at_20_diff1
value: 47.354286045074815
- type: nauc_ndcg_at_20_max
value: 33.686648806027975
- type: nauc_ndcg_at_20_std
value: -3.0189085132476556
- type: nauc_ndcg_at_3_diff1
value: 49.68805334316908
- type: nauc_ndcg_at_3_max
value: 34.196077748056496
- type: nauc_ndcg_at_3_std
value: -2.7167289163768436
- type: nauc_ndcg_at_5_diff1
value: 47.94474868912989
- type: nauc_ndcg_at_5_max
value: 34.00261603413051
- type: nauc_ndcg_at_5_std
value: -3.3541028103046115
- type: nauc_precision_at_1000_diff1
value: -12.0150100710755
- type: nauc_precision_at_1000_max
value: 5.332942816568796
- type: nauc_precision_at_1000_std
value: 14.543288479130458
- type: nauc_precision_at_100_diff1
value: -4.920332181588838
- type: nauc_precision_at_100_max
value: 14.42313332017491
- type: nauc_precision_at_100_std
value: 17.821953321018384
- type: nauc_precision_at_10_diff1
value: 14.70509089079217
- type: nauc_precision_at_10_max
value: 25.381887131649716
- type: nauc_precision_at_10_std
value: 5.226419288645675
- type: nauc_precision_at_1_diff1
value: 52.23915787290734
- type: nauc_precision_at_1_max
value: 34.393531787632334
- type: nauc_precision_at_1_std
value: -1.452007661016969
- type: nauc_precision_at_20_diff1
value: 6.312827641507564
- type: nauc_precision_at_20_max
value: 22.483038562271933
- type: nauc_precision_at_20_std
value: 11.368419856892416
- type: nauc_precision_at_3_diff1
value: 33.271443420273606
- type: nauc_precision_at_3_max
value: 33.571078182106675
- type: nauc_precision_at_3_std
value: 4.47382265155717
- type: nauc_precision_at_5_diff1
value: 23.43287104284656
- type: nauc_precision_at_5_max
value: 30.909085068105313
- type: nauc_precision_at_5_std
value: 5.545672049452433
- type: nauc_recall_at_1000_diff1
value: 35.22615594677707
- type: nauc_recall_at_1000_max
value: 52.0710533173532
- type: nauc_recall_at_1000_std
value: 45.17683523786464
- type: nauc_recall_at_100_diff1
value: 36.2169056956332
- type: nauc_recall_at_100_max
value: 35.02435003210817
- type: nauc_recall_at_100_std
value: 15.833632946282508
- type: nauc_recall_at_10_diff1
value: 39.12440292974848
- type: nauc_recall_at_10_max
value: 28.0546011979648
- type: nauc_recall_at_10_std
value: -9.620558638092172
- type: nauc_recall_at_1_diff1
value: 53.98646767288695
- type: nauc_recall_at_1_max
value: 29.45629077638936
- type: nauc_recall_at_1_std
value: -5.621187380771589
- type: nauc_recall_at_20_diff1
value: 36.39254630768161
- type: nauc_recall_at_20_max
value: 29.277856508751967
- type: nauc_recall_at_20_std
value: -3.048007490798412
- type: nauc_recall_at_3_diff1
value: 45.64706642644958
- type: nauc_recall_at_3_max
value: 31.003050159737413
- type: nauc_recall_at_3_std
value: -4.849763876930667
- type: nauc_recall_at_5_diff1
value: 40.918108859971746
- type: nauc_recall_at_5_max
value: 30.69907335071493
- type: nauc_recall_at_5_std
value: -6.1445436251916865
- type: ndcg_at_1
value: 40.327
- type: ndcg_at_10
value: 49.675000000000004
- type: ndcg_at_100
value: 55.364000000000004
- type: ndcg_at_1000
value: 56.992
- type: ndcg_at_20
value: 51.803999999999995
- type: ndcg_at_3
value: 45.227000000000004
- type: ndcg_at_5
value: 47.244
- type: precision_at_1
value: 40.327
- type: precision_at_10
value: 8.826
- type: precision_at_100
value: 1.354
- type: precision_at_1000
value: 0.167
- type: precision_at_20
value: 5.115
- type: precision_at_3
value: 21.303
- type: precision_at_5
value: 14.726
- type: recall_at_1
value: 33.178999999999995
- type: recall_at_10
value: 61.087
- type: recall_at_100
value: 85.099
- type: recall_at_1000
value: 95.14099999999999
- type: recall_at_20
value: 68.623
- type: recall_at_3
value: 48.245
- type: recall_at_5
value: 53.832
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: mteb/cqadupstack-programmers
metrics:
- type: main_score
value: 44.99
- type: map_at_1
value: 28.089
- type: map_at_10
value: 38.98
- type: map_at_100
value: 40.339000000000006
- type: map_at_1000
value: 40.441
- type: map_at_20
value: 39.702
- type: map_at_3
value: 35.620000000000005
- type: map_at_5
value: 37.657000000000004
- type: mrr_at_1
value: 35.15981735159817
- type: mrr_at_10
value: 44.54075161266937
- type: mrr_at_100
value: 45.435730392436646
- type: mrr_at_1000
value: 45.47673849356812
- type: mrr_at_20
value: 45.05949613726918
- type: mrr_at_3
value: 42.00913242009131
- type: mrr_at_5
value: 43.52739726027392
- type: nauc_map_at_1000_diff1
value: 42.6375513442399
- type: nauc_map_at_1000_max
value: 35.83899956589522
- type: nauc_map_at_1000_std
value: 5.798620017712549
- type: nauc_map_at_100_diff1
value: 42.609712253881504
- type: nauc_map_at_100_max
value: 35.85401871065736
- type: nauc_map_at_100_std
value: 5.829007296755533
- type: nauc_map_at_10_diff1
value: 42.90931172127824
- type: nauc_map_at_10_max
value: 35.46694204511423
- type: nauc_map_at_10_std
value: 5.131477704152026
- type: nauc_map_at_1_diff1
value: 48.066312177855956
- type: nauc_map_at_1_max
value: 30.67745267941573
- type: nauc_map_at_1_std
value: -1.4170737991670943
- type: nauc_map_at_20_diff1
value: 42.730423700784
- type: nauc_map_at_20_max
value: 35.710039616497085
- type: nauc_map_at_20_std
value: 5.363961887475162
- type: nauc_map_at_3_diff1
value: 43.499223646579935
- type: nauc_map_at_3_max
value: 33.872570039621564
- type: nauc_map_at_3_std
value: 3.0787571843453008
- type: nauc_map_at_5_diff1
value: 43.28963642946521
- type: nauc_map_at_5_max
value: 35.18327408279892
- type: nauc_map_at_5_std
value: 4.516467154662473
- type: nauc_mrr_at_1000_diff1
value: 42.71279871641341
- type: nauc_mrr_at_1000_max
value: 37.48825064817496
- type: nauc_mrr_at_1000_std
value: 8.10015025024314
- type: nauc_mrr_at_100_diff1
value: 42.694777404773376
- type: nauc_mrr_at_100_max
value: 37.476741768741086
- type: nauc_mrr_at_100_std
value: 8.11525130417229
- type: nauc_mrr_at_10_diff1
value: 42.954194054560176
- type: nauc_mrr_at_10_max
value: 37.606138578797506
- type: nauc_mrr_at_10_std
value: 8.092519513302399
- type: nauc_mrr_at_1_diff1
value: 48.350790286038574
- type: nauc_mrr_at_1_max
value: 33.97992759739641
- type: nauc_mrr_at_1_std
value: 1.8332987018664093
- type: nauc_mrr_at_20_diff1
value: 42.664983701783044
- type: nauc_mrr_at_20_max
value: 37.47450702110784
- type: nauc_mrr_at_20_std
value: 8.001067634745462
- type: nauc_mrr_at_3_diff1
value: 42.921968602737955
- type: nauc_mrr_at_3_max
value: 37.19599728791262
- type: nauc_mrr_at_3_std
value: 7.4692697422507575
- type: nauc_mrr_at_5_diff1
value: 42.96028546491891
- type: nauc_mrr_at_5_max
value: 37.688350071295915
- type: nauc_mrr_at_5_std
value: 8.213017954012372
- type: nauc_ndcg_at_1000_diff1
value: 40.70763263942397
- type: nauc_ndcg_at_1000_max
value: 37.87768319167602
- type: nauc_ndcg_at_1000_std
value: 9.908807071686738
- type: nauc_ndcg_at_100_diff1
value: 39.97828438221707
- type: nauc_ndcg_at_100_max
value: 37.7723393835996
- type: nauc_ndcg_at_100_std
value: 10.666779466040097
- type: nauc_ndcg_at_10_diff1
value: 41.172233451172936
- type: nauc_ndcg_at_10_max
value: 37.12252131573939
- type: nauc_ndcg_at_10_std
value: 8.273798754436639
- type: nauc_ndcg_at_1_diff1
value: 48.350790286038574
- type: nauc_ndcg_at_1_max
value: 33.97992759739641
- type: nauc_ndcg_at_1_std
value: 1.8332987018664093
- type: nauc_ndcg_at_20_diff1
value: 40.33325895172716
- type: nauc_ndcg_at_20_max
value: 37.36015594019951
- type: nauc_ndcg_at_20_std
value: 8.818556108749302
- type: nauc_ndcg_at_3_diff1
value: 41.652701699747254
- type: nauc_ndcg_at_3_max
value: 35.499109874223294
- type: nauc_ndcg_at_3_std
value: 5.831784865606119
- type: nauc_ndcg_at_5_diff1
value: 41.856346892595475
- type: nauc_ndcg_at_5_max
value: 36.940681835687194
- type: nauc_ndcg_at_5_std
value: 7.507798515093516
- type: nauc_precision_at_1000_diff1
value: -2.4605367806784866
- type: nauc_precision_at_1000_max
value: -0.3538142127162922
- type: nauc_precision_at_1000_std
value: 8.369794961833236
- type: nauc_precision_at_100_diff1
value: -0.34954522096524704
- type: nauc_precision_at_100_max
value: 13.159909603146458
- type: nauc_precision_at_100_std
value: 19.425561514133996
- type: nauc_precision_at_10_diff1
value: 17.048304710148145
- type: nauc_precision_at_10_max
value: 29.816041846806375
- type: nauc_precision_at_10_std
value: 18.358893367243798
- type: nauc_precision_at_1_diff1
value: 48.350790286038574
- type: nauc_precision_at_1_max
value: 33.97992759739641
- type: nauc_precision_at_1_std
value: 1.8332987018664093
- type: nauc_precision_at_20_diff1
value: 10.450903599411344
- type: nauc_precision_at_20_max
value: 25.228916373799127
- type: nauc_precision_at_20_std
value: 18.46893569529936
- type: nauc_precision_at_3_diff1
value: 29.181236567048636
- type: nauc_precision_at_3_max
value: 35.64918262500281
- type: nauc_precision_at_3_std
value: 13.347538222514968
- type: nauc_precision_at_5_diff1
value: 23.693323840550345
- type: nauc_precision_at_5_max
value: 33.972399735191225
- type: nauc_precision_at_5_std
value: 17.107012760554618
- type: nauc_recall_at_1000_diff1
value: 20.297340483227945
- type: nauc_recall_at_1000_max
value: 63.084305970127275
- type: nauc_recall_at_1000_std
value: 63.04655000858784
- type: nauc_recall_at_100_diff1
value: 22.587332148979723
- type: nauc_recall_at_100_max
value: 40.740968468024775
- type: nauc_recall_at_100_std
value: 34.120423684507124
- type: nauc_recall_at_10_diff1
value: 33.361195948673675
- type: nauc_recall_at_10_max
value: 37.1411402410262
- type: nauc_recall_at_10_std
value: 13.475407196166259
- type: nauc_recall_at_1_diff1
value: 48.066312177855956
- type: nauc_recall_at_1_max
value: 30.67745267941573
- type: nauc_recall_at_1_std
value: -1.4170737991670943
- type: nauc_recall_at_20_diff1
value: 28.703982984383984
- type: nauc_recall_at_20_max
value: 37.32929431193496
- type: nauc_recall_at_20_std
value: 16.139135347989903
- type: nauc_recall_at_3_diff1
value: 36.53346179134789
- type: nauc_recall_at_3_max
value: 34.11397914899309
- type: nauc_recall_at_3_std
value: 7.19358019807132
- type: nauc_recall_at_5_diff1
value: 36.24058894947452
- type: nauc_recall_at_5_max
value: 37.00990358651097
- type: nauc_recall_at_5_std
value: 11.074645476821619
- type: ndcg_at_1
value: 35.160000000000004
- type: ndcg_at_10
value: 44.99
- type: ndcg_at_100
value: 50.661
- type: ndcg_at_1000
value: 52.599
- type: ndcg_at_20
value: 47.154
- type: ndcg_at_3
value: 39.843
- type: ndcg_at_5
value: 42.486000000000004
- type: precision_at_1
value: 35.160000000000004
- type: precision_at_10
value: 8.299
- type: precision_at_100
value: 1.2850000000000001
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_20
value: 4.84
- type: precision_at_3
value: 19.178
- type: precision_at_5
value: 13.927
- type: recall_at_1
value: 28.089
- type: recall_at_10
value: 57.158
- type: recall_at_100
value: 81.461
- type: recall_at_1000
value: 94.46900000000001
- type: recall_at_20
value: 64.927
- type: recall_at_3
value: 42.775999999999996
- type: recall_at_5
value: 49.719
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval
revision: CQADupstackRetrieval is a combined dataset
split: test
type: mteb/cqadupstack
metrics:
- type: main_score
value: 44.989166666666655
- type: ndcg_at_10
value: 44.989166666666655
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: mteb/cqadupstack-stats
metrics:
- type: main_score
value: 39.586
- type: map_at_1
value: 27.301
- type: map_at_10
value: 35.022
- type: map_at_100
value: 36.061
- type: map_at_1000
value: 36.146
- type: map_at_20
value: 35.608000000000004
- type: map_at_3
value: 32.978
- type: map_at_5
value: 33.994
- type: mrr_at_1
value: 30.67484662576687
- type: mrr_at_10
value: 38.1696124257474
- type: mrr_at_100
value: 38.99730898994137
- type: mrr_at_1000
value: 39.049871007408136
- type: mrr_at_20
value: 38.62424051396064
- type: mrr_at_3
value: 36.40081799591004
- type: mrr_at_5
value: 37.23670756646219
- type: nauc_map_at_1000_diff1
value: 50.4395097150819
- type: nauc_map_at_1000_max
value: 42.36231476768413
- type: nauc_map_at_1000_std
value: 1.0739414045485742
- type: nauc_map_at_100_diff1
value: 50.4253775421283
- type: nauc_map_at_100_max
value: 42.34508969348633
- type: nauc_map_at_100_std
value: 1.0590256535050135
- type: nauc_map_at_10_diff1
value: 50.74196619464362
- type: nauc_map_at_10_max
value: 42.354326434590284
- type: nauc_map_at_10_std
value: 0.6330167542705694
- type: nauc_map_at_1_diff1
value: 55.7404810490963
- type: nauc_map_at_1_max
value: 40.7676941648045
- type: nauc_map_at_1_std
value: -5.021772566610674
- type: nauc_map_at_20_diff1
value: 50.39792463598886
- type: nauc_map_at_20_max
value: 42.25768760228577
- type: nauc_map_at_20_std
value: 0.8979017700131807
- type: nauc_map_at_3_diff1
value: 51.53267996170815
- type: nauc_map_at_3_max
value: 41.78801756883417
- type: nauc_map_at_3_std
value: -0.6652383024396911
- type: nauc_map_at_5_diff1
value: 50.992783683271504
- type: nauc_map_at_5_max
value: 41.8607977828188
- type: nauc_map_at_5_std
value: 0.3484379897869807
- type: nauc_mrr_at_1000_diff1
value: 48.952907124445126
- type: nauc_mrr_at_1000_max
value: 42.93563741482114
- type: nauc_mrr_at_1000_std
value: 3.0791495753556424
- type: nauc_mrr_at_100_diff1
value: 48.941921107360805
- type: nauc_mrr_at_100_max
value: 42.94419657374061
- type: nauc_mrr_at_100_std
value: 3.075397087180154
- type: nauc_mrr_at_10_diff1
value: 49.098926306303056
- type: nauc_mrr_at_10_max
value: 42.941857820499806
- type: nauc_mrr_at_10_std
value: 2.8184474174054372
- type: nauc_mrr_at_1_diff1
value: 54.428109877009334
- type: nauc_mrr_at_1_max
value: 42.50273386972492
- type: nauc_mrr_at_1_std
value: -2.1811826216412187
- type: nauc_mrr_at_20_diff1
value: 48.82502192775839
- type: nauc_mrr_at_20_max
value: 42.92227277257095
- type: nauc_mrr_at_20_std
value: 2.975812634368533
- type: nauc_mrr_at_3_diff1
value: 49.440009227591176
- type: nauc_mrr_at_3_max
value: 42.95503176290712
- type: nauc_mrr_at_3_std
value: 2.2997128945013796
- type: nauc_mrr_at_5_diff1
value: 49.09846782701398
- type: nauc_mrr_at_5_max
value: 42.51449168285772
- type: nauc_mrr_at_5_std
value: 2.7785816484421297
- type: nauc_ndcg_at_1000_diff1
value: 48.14680758187888
- type: nauc_ndcg_at_1000_max
value: 43.57465718500695
- type: nauc_ndcg_at_1000_std
value: 5.287435676678261
- type: nauc_ndcg_at_100_diff1
value: 47.66081605743284
- type: nauc_ndcg_at_100_max
value: 43.28156751251163
- type: nauc_ndcg_at_100_std
value: 4.959626409663624
- type: nauc_ndcg_at_10_diff1
value: 48.25075619623878
- type: nauc_ndcg_at_10_max
value: 43.00688660666578
- type: nauc_ndcg_at_10_std
value: 3.2319193368891637
- type: nauc_ndcg_at_1_diff1
value: 54.428109877009334
- type: nauc_ndcg_at_1_max
value: 42.50273386972492
- type: nauc_ndcg_at_1_std
value: -2.1811826216412187
- type: nauc_ndcg_at_20_diff1
value: 47.1943098627403
- type: nauc_ndcg_at_20_max
value: 42.86954491768707
- type: nauc_ndcg_at_20_std
value: 4.08583080150737
- type: nauc_ndcg_at_3_diff1
value: 49.32681523192246
- type: nauc_ndcg_at_3_max
value: 42.46898641470274
- type: nauc_ndcg_at_3_std
value: 1.7416962407725236
- type: nauc_ndcg_at_5_diff1
value: 48.59647012439291
- type: nauc_ndcg_at_5_max
value: 42.07098889846439
- type: nauc_ndcg_at_5_std
value: 2.979621233356828
- type: nauc_precision_at_1000_diff1
value: -1.7366334161587105
- type: nauc_precision_at_1000_max
value: 17.70969166396819
- type: nauc_precision_at_1000_std
value: 17.50619975322144
- type: nauc_precision_at_100_diff1
value: 10.082579982582155
- type: nauc_precision_at_100_max
value: 28.024893516091776
- type: nauc_precision_at_100_std
value: 18.41413013357596
- type: nauc_precision_at_10_diff1
value: 28.796167732373657
- type: nauc_precision_at_10_max
value: 40.37340024485382
- type: nauc_precision_at_10_std
value: 13.718572711091733
- type: nauc_precision_at_1_diff1
value: 54.428109877009334
- type: nauc_precision_at_1_max
value: 42.50273386972492
- type: nauc_precision_at_1_std
value: -2.1811826216412187
- type: nauc_precision_at_20_diff1
value: 19.82691920771315
- type: nauc_precision_at_20_max
value: 34.45075390159975
- type: nauc_precision_at_20_std
value: 16.410812072348058
- type: nauc_precision_at_3_diff1
value: 40.85430254962678
- type: nauc_precision_at_3_max
value: 43.63016056067074
- type: nauc_precision_at_3_std
value: 9.322014634477581
- type: nauc_precision_at_5_diff1
value: 35.830272848975795
- type: nauc_precision_at_5_max
value: 41.30047691620363
- type: nauc_precision_at_5_std
value: 13.145693992266565
- type: nauc_recall_at_1000_diff1
value: 35.532000545890504
- type: nauc_recall_at_1000_max
value: 50.714223194510325
- type: nauc_recall_at_1000_std
value: 43.09037309139045
- type: nauc_recall_at_100_diff1
value: 35.11024488875192
- type: nauc_recall_at_100_max
value: 43.0874566265193
- type: nauc_recall_at_100_std
value: 19.70628521846854
- type: nauc_recall_at_10_diff1
value: 40.36203726741153
- type: nauc_recall_at_10_max
value: 42.581482582576726
- type: nauc_recall_at_10_std
value: 8.642553371022348
- type: nauc_recall_at_1_diff1
value: 55.7404810490963
- type: nauc_recall_at_1_max
value: 40.7676941648045
- type: nauc_recall_at_1_std
value: -5.021772566610674
- type: nauc_recall_at_20_diff1
value: 35.97348868186562
- type: nauc_recall_at_20_max
value: 41.82695933305065
- type: nauc_recall_at_20_std
value: 11.444957541593585
- type: nauc_recall_at_3_diff1
value: 44.20020470014979
- type: nauc_recall_at_3_max
value: 40.84130855296979
- type: nauc_recall_at_3_std
value: 5.004883338558809
- type: nauc_recall_at_5_diff1
value: 42.08756885472078
- type: nauc_recall_at_5_max
value: 39.90323783606852
- type: nauc_recall_at_5_std
value: 8.085182534171127
- type: ndcg_at_1
value: 30.675
- type: ndcg_at_10
value: 39.586
- type: ndcg_at_100
value: 44.737
- type: ndcg_at_1000
value: 46.863
- type: ndcg_at_20
value: 41.495
- type: ndcg_at_3
value: 35.8
- type: ndcg_at_5
value: 37.3
- type: precision_at_1
value: 30.675
- type: precision_at_10
value: 6.196
- type: precision_at_100
value: 0.9570000000000001
- type: precision_at_1000
value: 0.122
- type: precision_at_20
value: 3.6350000000000002
- type: precision_at_3
value: 15.337
- type: precision_at_5
value: 10.337
- type: recall_at_1
value: 27.301
- type: recall_at_10
value: 50.346999999999994
- type: recall_at_100
value: 74.459
- type: recall_at_1000
value: 90.018
- type: recall_at_20
value: 57.473
- type: recall_at_3
value: 39.672000000000004
- type: recall_at_5
value: 43.383
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: mteb/cqadupstack-tex
metrics:
- type: main_score
value: 32.842
- type: map_at_1
value: 19.527
- type: map_at_10
value: 27.711999999999996
- type: map_at_100
value: 28.98
- type: map_at_1000
value: 29.108
- type: map_at_20
value: 28.407
- type: map_at_3
value: 25.023
- type: map_at_5
value: 26.528000000000002
- type: mrr_at_1
value: 23.675154852030282
- type: mrr_at_10
value: 31.810676323752784
- type: mrr_at_100
value: 32.788970614380716
- type: mrr_at_1000
value: 32.86028758975889
- type: mrr_at_20
value: 32.35935756676056
- type: mrr_at_3
value: 29.41615049323246
- type: mrr_at_5
value: 30.785730672172633
- type: nauc_map_at_1000_diff1
value: 35.597766688968015
- type: nauc_map_at_1000_max
value: 26.295790183159845
- type: nauc_map_at_1000_std
value: -0.04229904865958209
- type: nauc_map_at_100_diff1
value: 35.568782622469925
- type: nauc_map_at_100_max
value: 26.27850795471227
- type: nauc_map_at_100_std
value: -0.04944875782811099
- type: nauc_map_at_10_diff1
value: 35.63760937893694
- type: nauc_map_at_10_max
value: 26.130094042028233
- type: nauc_map_at_10_std
value: -0.6896882769027717
- type: nauc_map_at_1_diff1
value: 41.759098341890976
- type: nauc_map_at_1_max
value: 23.918885427783326
- type: nauc_map_at_1_std
value: -2.1383574897865074
- type: nauc_map_at_20_diff1
value: 35.55706530442612
- type: nauc_map_at_20_max
value: 26.23339626569677
- type: nauc_map_at_20_std
value: -0.162172033918129
- type: nauc_map_at_3_diff1
value: 37.22183376355153
- type: nauc_map_at_3_max
value: 25.770512522122186
- type: nauc_map_at_3_std
value: -1.3105892187778403
- type: nauc_map_at_5_diff1
value: 36.205913161663084
- type: nauc_map_at_5_max
value: 25.953300641502064
- type: nauc_map_at_5_std
value: -0.7987363137547906
- type: nauc_mrr_at_1000_diff1
value: 34.864016559617646
- type: nauc_mrr_at_1000_max
value: 26.8689525348564
- type: nauc_mrr_at_1000_std
value: -0.5839923973914446
- type: nauc_mrr_at_100_diff1
value: 34.83820469598538
- type: nauc_mrr_at_100_max
value: 26.864669056231282
- type: nauc_mrr_at_100_std
value: -0.5785645654158633
- type: nauc_mrr_at_10_diff1
value: 34.81868397381981
- type: nauc_mrr_at_10_max
value: 26.79988560460627
- type: nauc_mrr_at_10_std
value: -1.1113808365827318
- type: nauc_mrr_at_1_diff1
value: 40.0281507903504
- type: nauc_mrr_at_1_max
value: 25.036735941806583
- type: nauc_mrr_at_1_std
value: -2.508700799268523
- type: nauc_mrr_at_20_diff1
value: 34.81954537357966
- type: nauc_mrr_at_20_max
value: 26.877673033315453
- type: nauc_mrr_at_20_std
value: -0.6706028107452919
- type: nauc_mrr_at_3_diff1
value: 35.87313782549696
- type: nauc_mrr_at_3_max
value: 26.776261693392335
- type: nauc_mrr_at_3_std
value: -1.8010591328112908
- type: nauc_mrr_at_5_diff1
value: 35.31673912159536
- type: nauc_mrr_at_5_max
value: 26.78720786106881
- type: nauc_mrr_at_5_std
value: -1.3096326953900546
- type: nauc_ndcg_at_1000_diff1
value: 33.43105244339048
- type: nauc_ndcg_at_1000_max
value: 27.52195065724684
- type: nauc_ndcg_at_1000_std
value: 2.8376056562675744
- type: nauc_ndcg_at_100_diff1
value: 32.90916846420573
- type: nauc_ndcg_at_100_max
value: 27.27161017736065
- type: nauc_ndcg_at_100_std
value: 2.8703122625872126
- type: nauc_ndcg_at_10_diff1
value: 33.12714979317447
- type: nauc_ndcg_at_10_max
value: 26.67762031747992
- type: nauc_ndcg_at_10_std
value: -0.1341345572932233
- type: nauc_ndcg_at_1_diff1
value: 40.0281507903504
- type: nauc_ndcg_at_1_max
value: 25.036735941806583
- type: nauc_ndcg_at_1_std
value: -2.508700799268523
- type: nauc_ndcg_at_20_diff1
value: 32.891656138688546
- type: nauc_ndcg_at_20_max
value: 26.991976404027163
- type: nauc_ndcg_at_20_std
value: 1.6050741106677746
- type: nauc_ndcg_at_3_diff1
value: 35.576958713955484
- type: nauc_ndcg_at_3_max
value: 26.41687745899445
- type: nauc_ndcg_at_3_std
value: -1.5326687067002291
- type: nauc_ndcg_at_5_diff1
value: 34.27335619067276
- type: nauc_ndcg_at_5_max
value: 26.479515412084208
- type: nauc_ndcg_at_5_std
value: -0.5597648935666003
- type: nauc_precision_at_1000_diff1
value: -0.18660914306684007
- type: nauc_precision_at_1000_max
value: 7.268255385799229
- type: nauc_precision_at_1000_std
value: -0.1968875268478991
- type: nauc_precision_at_100_diff1
value: 7.386701205054449
- type: nauc_precision_at_100_max
value: 15.477735603019607
- type: nauc_precision_at_100_std
value: 4.753153414679307
- type: nauc_precision_at_10_diff1
value: 18.4668296945938
- type: nauc_precision_at_10_max
value: 25.457144217779597
- type: nauc_precision_at_10_std
value: 0.40165373733963605
- type: nauc_precision_at_1_diff1
value: 40.0281507903504
- type: nauc_precision_at_1_max
value: 25.036735941806583
- type: nauc_precision_at_1_std
value: -2.508700799268523
- type: nauc_precision_at_20_diff1
value: 14.751135844289335
- type: nauc_precision_at_20_max
value: 22.763373329576293
- type: nauc_precision_at_20_std
value: 4.360731801761864
- type: nauc_precision_at_3_diff1
value: 28.154753888265393
- type: nauc_precision_at_3_max
value: 27.838427033527147
- type: nauc_precision_at_3_std
value: -1.0042621266717804
- type: nauc_precision_at_5_diff1
value: 23.549026872711423
- type: nauc_precision_at_5_max
value: 27.192214745385044
- type: nauc_precision_at_5_std
value: 0.4455206110174471
- type: nauc_recall_at_1000_diff1
value: 17.905404210815632
- type: nauc_recall_at_1000_max
value: 32.8674418535776
- type: nauc_recall_at_1000_std
value: 35.187050415735435
- type: nauc_recall_at_100_diff1
value: 20.903609751984757
- type: nauc_recall_at_100_max
value: 27.180306691518364
- type: nauc_recall_at_100_std
value: 17.553030959393297
- type: nauc_recall_at_10_diff1
value: 25.615147693464387
- type: nauc_recall_at_10_max
value: 25.97062699453565
- type: nauc_recall_at_10_std
value: 2.2181702899826576
- type: nauc_recall_at_1_diff1
value: 41.759098341890976
- type: nauc_recall_at_1_max
value: 23.918885427783326
- type: nauc_recall_at_1_std
value: -2.1383574897865074
- type: nauc_recall_at_20_diff1
value: 23.922775940094386
- type: nauc_recall_at_20_max
value: 26.384627814902785
- type: nauc_recall_at_20_std
value: 7.944532403561578
- type: nauc_recall_at_3_diff1
value: 32.26543270634743
- type: nauc_recall_at_3_max
value: 26.36357710828272
- type: nauc_recall_at_3_std
value: -0.42723331708340706
- type: nauc_recall_at_5_diff1
value: 29.080464141763336
- type: nauc_recall_at_5_max
value: 25.81238438303652
- type: nauc_recall_at_5_std
value: 1.1649311168287726
- type: ndcg_at_1
value: 23.674999999999997
- type: ndcg_at_10
value: 32.842
- type: ndcg_at_100
value: 38.64
- type: ndcg_at_1000
value: 41.367
- type: ndcg_at_20
value: 35.032999999999994
- type: ndcg_at_3
value: 28.166000000000004
- type: ndcg_at_5
value: 30.407
- type: precision_at_1
value: 23.674999999999997
- type: precision_at_10
value: 6.005
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.146
- type: precision_at_20
value: 3.6580000000000004
- type: precision_at_3
value: 13.352
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 19.527
- type: recall_at_10
value: 44.096999999999994
- type: recall_at_100
value: 69.962
- type: recall_at_1000
value: 89.035
- type: recall_at_20
value: 52.166000000000004
- type: recall_at_3
value: 30.946
- type: recall_at_5
value: 36.789
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: mteb/cqadupstack-unix
metrics:
- type: main_score
value: 46.54
- type: map_at_1
value: 29.953999999999997
- type: map_at_10
value: 40.742
- type: map_at_100
value: 41.964
- type: map_at_1000
value: 42.059999999999995
- type: map_at_20
value: 41.426
- type: map_at_3
value: 37.378
- type: map_at_5
value: 39.267
- type: mrr_at_1
value: 34.701492537313435
- type: mrr_at_10
value: 44.29978085761664
- type: mrr_at_100
value: 45.205551401915486
- type: mrr_at_1000
value: 45.24735017384963
- type: mrr_at_20
value: 44.85338423755729
- type: mrr_at_3
value: 41.57338308457707
- type: mrr_at_5
value: 43.19185323383077
- type: nauc_map_at_1000_diff1
value: 48.45170522932164
- type: nauc_map_at_1000_max
value: 31.544164363591204
- type: nauc_map_at_1000_std
value: 0.8661088818146858
- type: nauc_map_at_100_diff1
value: 48.47347800061323
- type: nauc_map_at_100_max
value: 31.568637596620313
- type: nauc_map_at_100_std
value: 0.9252699336843858
- type: nauc_map_at_10_diff1
value: 48.64849891585432
- type: nauc_map_at_10_max
value: 31.40371265579746
- type: nauc_map_at_10_std
value: 0.7088016563713089
- type: nauc_map_at_1_diff1
value: 53.57918993108331
- type: nauc_map_at_1_max
value: 31.392632653740993
- type: nauc_map_at_1_std
value: -2.857306170463933
- type: nauc_map_at_20_diff1
value: 48.49084353023969
- type: nauc_map_at_20_max
value: 31.470313174779374
- type: nauc_map_at_20_std
value: 0.8950296035234309
- type: nauc_map_at_3_diff1
value: 49.273481161619806
- type: nauc_map_at_3_max
value: 31.101471509782826
- type: nauc_map_at_3_std
value: -0.886510096257905
- type: nauc_map_at_5_diff1
value: 48.85344288229106
- type: nauc_map_at_5_max
value: 31.32633663238284
- type: nauc_map_at_5_std
value: -0.44752909698881177
- type: nauc_mrr_at_1000_diff1
value: 46.27593166906613
- type: nauc_mrr_at_1000_max
value: 31.637594372116336
- type: nauc_mrr_at_1000_std
value: 0.8444917550670064
- type: nauc_mrr_at_100_diff1
value: 46.27161543033672
- type: nauc_mrr_at_100_max
value: 31.64330655339695
- type: nauc_mrr_at_100_std
value: 0.8717446416398773
- type: nauc_mrr_at_10_diff1
value: 46.100348481312864
- type: nauc_mrr_at_10_max
value: 31.594271897882237
- type: nauc_mrr_at_10_std
value: 0.8807168907688873
- type: nauc_mrr_at_1_diff1
value: 51.35163098909763
- type: nauc_mrr_at_1_max
value: 31.99084441327899
- type: nauc_mrr_at_1_std
value: -2.688594880742662
- type: nauc_mrr_at_20_diff1
value: 46.18178546174727
- type: nauc_mrr_at_20_max
value: 31.639111674119448
- type: nauc_mrr_at_20_std
value: 0.9855008641374622
- type: nauc_mrr_at_3_diff1
value: 46.307484835305864
- type: nauc_mrr_at_3_max
value: 31.35563850804847
- type: nauc_mrr_at_3_std
value: -0.3419536587707561
- type: nauc_mrr_at_5_diff1
value: 46.17646418781234
- type: nauc_mrr_at_5_max
value: 31.313474270239833
- type: nauc_mrr_at_5_std
value: -0.08656550526568331
- type: nauc_ndcg_at_1000_diff1
value: 46.12095795101613
- type: nauc_ndcg_at_1000_max
value: 31.989083597726314
- type: nauc_ndcg_at_1000_std
value: 3.2965704707660763
- type: nauc_ndcg_at_100_diff1
value: 46.05376249841318
- type: nauc_ndcg_at_100_max
value: 32.39195988574972
- type: nauc_ndcg_at_100_std
value: 4.518018135593347
- type: nauc_ndcg_at_10_diff1
value: 46.133631183744875
- type: nauc_ndcg_at_10_max
value: 31.45358876172339
- type: nauc_ndcg_at_10_std
value: 3.4254370918871055
- type: nauc_ndcg_at_1_diff1
value: 51.35163098909763
- type: nauc_ndcg_at_1_max
value: 31.99084441327899
- type: nauc_ndcg_at_1_std
value: -2.688594880742662
- type: nauc_ndcg_at_20_diff1
value: 45.94584949766954
- type: nauc_ndcg_at_20_max
value: 31.689777515111295
- type: nauc_ndcg_at_20_std
value: 4.189082428922442
- type: nauc_ndcg_at_3_diff1
value: 46.5057835389752
- type: nauc_ndcg_at_3_max
value: 30.941407592082047
- type: nauc_ndcg_at_3_std
value: -0.042473944857831535
- type: nauc_ndcg_at_5_diff1
value: 46.369027395136136
- type: nauc_ndcg_at_5_max
value: 31.057841776505352
- type: nauc_ndcg_at_5_std
value: 0.6878993420489522
- type: nauc_precision_at_1000_diff1
value: -17.30759714093202
- type: nauc_precision_at_1000_max
value: -4.441155558458858
- type: nauc_precision_at_1000_std
value: 1.5537300718220326
- type: nauc_precision_at_100_diff1
value: -7.18920438222021
- type: nauc_precision_at_100_max
value: 8.017878121399253
- type: nauc_precision_at_100_std
value: 11.357132919349102
- type: nauc_precision_at_10_diff1
value: 15.202451884794076
- type: nauc_precision_at_10_max
value: 19.077295902881417
- type: nauc_precision_at_10_std
value: 9.885526867355805
- type: nauc_precision_at_1_diff1
value: 51.35163098909763
- type: nauc_precision_at_1_max
value: 31.99084441327899
- type: nauc_precision_at_1_std
value: -2.688594880742662
- type: nauc_precision_at_20_diff1
value: 6.827461091494899
- type: nauc_precision_at_20_max
value: 15.27268633497114
- type: nauc_precision_at_20_std
value: 11.515826649647384
- type: nauc_precision_at_3_diff1
value: 31.043021807472027
- type: nauc_precision_at_3_max
value: 26.22457157531548
- type: nauc_precision_at_3_std
value: 1.788215968301994
- type: nauc_precision_at_5_diff1
value: 25.030185818513235
- type: nauc_precision_at_5_max
value: 23.680129160901537
- type: nauc_precision_at_5_std
value: 4.303018899688115
- type: nauc_recall_at_1000_diff1
value: 28.68826642607512
- type: nauc_recall_at_1000_max
value: 42.33849804103852
- type: nauc_recall_at_1000_std
value: 42.67413575876864
- type: nauc_recall_at_100_diff1
value: 36.51494878715
- type: nauc_recall_at_100_max
value: 37.4764995034434
- type: nauc_recall_at_100_std
value: 28.295671266661017
- type: nauc_recall_at_10_diff1
value: 39.416721111463524
- type: nauc_recall_at_10_max
value: 29.95985608454179
- type: nauc_recall_at_10_std
value: 12.423335839786201
- type: nauc_recall_at_1_diff1
value: 53.57918993108331
- type: nauc_recall_at_1_max
value: 31.392632653740993
- type: nauc_recall_at_1_std
value: -2.857306170463933
- type: nauc_recall_at_20_diff1
value: 38.228803480194046
- type: nauc_recall_at_20_max
value: 30.87261362975955
- type: nauc_recall_at_20_std
value: 16.977113091834095
- type: nauc_recall_at_3_diff1
value: 43.154348566653155
- type: nauc_recall_at_3_max
value: 29.54536633744803
- type: nauc_recall_at_3_std
value: 2.02842672250621
- type: nauc_recall_at_5_diff1
value: 41.00436246072242
- type: nauc_recall_at_5_max
value: 29.413569555348023
- type: nauc_recall_at_5_std
value: 3.845214021958289
- type: ndcg_at_1
value: 34.701
- type: ndcg_at_10
value: 46.54
- type: ndcg_at_100
value: 51.754999999999995
- type: ndcg_at_1000
value: 53.71
- type: ndcg_at_20
value: 48.679
- type: ndcg_at_3
value: 40.892
- type: ndcg_at_5
value: 43.595
- type: precision_at_1
value: 34.701
- type: precision_at_10
value: 8.004
- type: precision_at_100
value: 1.185
- type: precision_at_1000
value: 0.145
- type: precision_at_20
value: 4.632
- type: precision_at_3
value: 18.719
- type: precision_at_5
value: 13.245999999999999
- type: recall_at_1
value: 29.953999999999997
- type: recall_at_10
value: 60.246
- type: recall_at_100
value: 82.128
- type: recall_at_1000
value: 95.622
- type: recall_at_20
value: 67.756
- type: recall_at_3
value: 45.096000000000004
- type: recall_at_5
value: 51.9
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: mteb/cqadupstack-webmasters
metrics:
- type: main_score
value: 44.718999999999994
- type: map_at_1
value: 28.383999999999997
- type: map_at_10
value: 38.422
- type: map_at_100
value: 40.058
- type: map_at_1000
value: 40.276
- type: map_at_20
value: 39.301
- type: map_at_3
value: 35.205
- type: map_at_5
value: 36.803999999999995
- type: mrr_at_1
value: 33.59683794466403
- type: mrr_at_10
value: 42.837536859275986
- type: mrr_at_100
value: 43.7501703455481
- type: mrr_at_1000
value: 43.79258407771123
- type: mrr_at_20
value: 43.36044710445095
- type: mrr_at_3
value: 40.15151515151516
- type: mrr_at_5
value: 41.74242424242425
- type: nauc_map_at_1000_diff1
value: 47.934826596875304
- type: nauc_map_at_1000_max
value: 32.39759438116062
- type: nauc_map_at_1000_std
value: 0.9489007346763054
- type: nauc_map_at_100_diff1
value: 47.94844822157888
- type: nauc_map_at_100_max
value: 32.51485845519537
- type: nauc_map_at_100_std
value: 0.8094339925545622
- type: nauc_map_at_10_diff1
value: 48.251456404874645
- type: nauc_map_at_10_max
value: 31.412906399154245
- type: nauc_map_at_10_std
value: -0.7024825737369933
- type: nauc_map_at_1_diff1
value: 55.81906101970174
- type: nauc_map_at_1_max
value: 31.811715334193796
- type: nauc_map_at_1_std
value: -6.17056859281584
- type: nauc_map_at_20_diff1
value: 47.80902650237369
- type: nauc_map_at_20_max
value: 32.22465403023091
- type: nauc_map_at_20_std
value: 0.20706526946705656
- type: nauc_map_at_3_diff1
value: 49.97333984346632
- type: nauc_map_at_3_max
value: 31.58195498640799
- type: nauc_map_at_3_std
value: -2.577539707727459
- type: nauc_map_at_5_diff1
value: 49.40005767350608
- type: nauc_map_at_5_max
value: 30.998435600377434
- type: nauc_map_at_5_std
value: -2.1231771618690307
- type: nauc_mrr_at_1000_diff1
value: 46.86811371969663
- type: nauc_mrr_at_1000_max
value: 31.25147138171024
- type: nauc_mrr_at_1000_std
value: 1.9954422477585918
- type: nauc_mrr_at_100_diff1
value: 46.855870345882195
- type: nauc_mrr_at_100_max
value: 31.263524035665966
- type: nauc_mrr_at_100_std
value: 2.0160751193806568
- type: nauc_mrr_at_10_diff1
value: 46.93294772825783
- type: nauc_mrr_at_10_max
value: 30.927002048701663
- type: nauc_mrr_at_10_std
value: 1.6538220080908224
- type: nauc_mrr_at_1_diff1
value: 52.416386548395664
- type: nauc_mrr_at_1_max
value: 32.28582003787206
- type: nauc_mrr_at_1_std
value: -2.154991145714492
- type: nauc_mrr_at_20_diff1
value: 46.71796185319694
- type: nauc_mrr_at_20_max
value: 31.16219902794994
- type: nauc_mrr_at_20_std
value: 1.8590646572728409
- type: nauc_mrr_at_3_diff1
value: 47.697100317669914
- type: nauc_mrr_at_3_max
value: 30.821806030159383
- type: nauc_mrr_at_3_std
value: 1.1927626358099177
- type: nauc_mrr_at_5_diff1
value: 47.065272061365704
- type: nauc_mrr_at_5_max
value: 30.299230962805023
- type: nauc_mrr_at_5_std
value: 1.3225842862629529
- type: nauc_ndcg_at_1000_diff1
value: 45.20612583136058
- type: nauc_ndcg_at_1000_max
value: 33.51931869947315
- type: nauc_ndcg_at_1000_std
value: 4.923707509620363
- type: nauc_ndcg_at_100_diff1
value: 44.76206243393775
- type: nauc_ndcg_at_100_max
value: 33.57771606755598
- type: nauc_ndcg_at_100_std
value: 5.30915563331338
- type: nauc_ndcg_at_10_diff1
value: 45.12714032463827
- type: nauc_ndcg_at_10_max
value: 30.351909495610492
- type: nauc_ndcg_at_10_std
value: 2.3972947289996873
- type: nauc_ndcg_at_1_diff1
value: 52.416386548395664
- type: nauc_ndcg_at_1_max
value: 32.28582003787206
- type: nauc_ndcg_at_1_std
value: -2.154991145714492
- type: nauc_ndcg_at_20_diff1
value: 44.20281844000005
- type: nauc_ndcg_at_20_max
value: 32.14112739396226
- type: nauc_ndcg_at_20_std
value: 3.3971385462591916
- type: nauc_ndcg_at_3_diff1
value: 47.0633767031858
- type: nauc_ndcg_at_3_max
value: 31.032896053733435
- type: nauc_ndcg_at_3_std
value: 0.6827544906310201
- type: nauc_ndcg_at_5_diff1
value: 46.735352294106484
- type: nauc_ndcg_at_5_max
value: 29.784992270528544
- type: nauc_ndcg_at_5_std
value: 0.8685943819516141
- type: nauc_precision_at_1000_diff1
value: -12.223330179860852
- type: nauc_precision_at_1000_max
value: -9.266492213777273
- type: nauc_precision_at_1000_std
value: 19.0569899587788
- type: nauc_precision_at_100_diff1
value: -5.803751085072067
- type: nauc_precision_at_100_max
value: 3.448932057044294
- type: nauc_precision_at_100_std
value: 23.470863527030627
- type: nauc_precision_at_10_diff1
value: 8.887357341361907
- type: nauc_precision_at_10_max
value: 18.67165390928126
- type: nauc_precision_at_10_std
value: 19.158543337955404
- type: nauc_precision_at_1_diff1
value: 52.416386548395664
- type: nauc_precision_at_1_max
value: 32.28582003787206
- type: nauc_precision_at_1_std
value: -2.154991145714492
- type: nauc_precision_at_20_diff1
value: 0.942496138409553
- type: nauc_precision_at_20_max
value: 18.86957127610774
- type: nauc_precision_at_20_std
value: 24.075503903246496
- type: nauc_precision_at_3_diff1
value: 28.15363877307106
- type: nauc_precision_at_3_max
value: 27.064928137991824
- type: nauc_precision_at_3_std
value: 8.632807104504753
- type: nauc_precision_at_5_diff1
value: 20.805862332497973
- type: nauc_precision_at_5_max
value: 21.420201475758404
- type: nauc_precision_at_5_std
value: 12.380239645425714
- type: nauc_recall_at_1000_diff1
value: 18.478341468055547
- type: nauc_recall_at_1000_max
value: 56.293560115074506
- type: nauc_recall_at_1000_std
value: 64.31607185065428
- type: nauc_recall_at_100_diff1
value: 26.737267337771886
- type: nauc_recall_at_100_max
value: 38.011889141496326
- type: nauc_recall_at_100_std
value: 30.44904690114732
- type: nauc_recall_at_10_diff1
value: 35.22772732735716
- type: nauc_recall_at_10_max
value: 26.000054115159486
- type: nauc_recall_at_10_std
value: 5.174264254271206
- type: nauc_recall_at_1_diff1
value: 55.81906101970174
- type: nauc_recall_at_1_max
value: 31.811715334193796
- type: nauc_recall_at_1_std
value: -6.17056859281584
- type: nauc_recall_at_20_diff1
value: 30.48493302415641
- type: nauc_recall_at_20_max
value: 31.05487040370753
- type: nauc_recall_at_20_std
value: 10.319948318834136
- type: nauc_recall_at_3_diff1
value: 43.12289512340243
- type: nauc_recall_at_3_max
value: 28.176279771026135
- type: nauc_recall_at_3_std
value: -0.1775154523381921
- type: nauc_recall_at_5_diff1
value: 40.9934933741234
- type: nauc_recall_at_5_max
value: 25.569156290584733
- type: nauc_recall_at_5_std
value: 0.21166696686855038
- type: ndcg_at_1
value: 33.597
- type: ndcg_at_10
value: 44.718999999999994
- type: ndcg_at_100
value: 50.324000000000005
- type: ndcg_at_1000
value: 52.468
- type: ndcg_at_20
value: 46.822
- type: ndcg_at_3
value: 39.558
- type: ndcg_at_5
value: 41.827999999999996
- type: precision_at_1
value: 33.597
- type: precision_at_10
value: 8.735
- type: precision_at_100
value: 1.6420000000000001
- type: precision_at_1000
value: 0.246
- type: precision_at_20
value: 5.375
- type: precision_at_3
value: 18.511
- type: precision_at_5
value: 13.399
- type: recall_at_1
value: 28.383999999999997
- type: recall_at_10
value: 56.425000000000004
- type: recall_at_100
value: 82.01899999999999
- type: recall_at_1000
value: 95.285
- type: recall_at_20
value: 64.615
- type: recall_at_3
value: 42.171
- type: recall_at_5
value: 48.296
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: mteb/cqadupstack-wordpress
metrics:
- type: main_score
value: 38.269999999999996
- type: map_at_1
value: 25.324999999999996
- type: map_at_10
value: 33.263
- type: map_at_100
value: 34.304
- type: map_at_1000
value: 34.394000000000005
- type: map_at_20
value: 33.827
- type: map_at_3
value: 30.259999999999998
- type: map_at_5
value: 31.832
- type: mrr_at_1
value: 27.171903881700555
- type: mrr_at_10
value: 35.334991051257234
- type: mrr_at_100
value: 36.251283465952355
- type: mrr_at_1000
value: 36.316236092511055
- type: mrr_at_20
value: 35.87141909945257
- type: mrr_at_3
value: 32.71719038817007
- type: mrr_at_5
value: 34.19593345656194
- type: nauc_map_at_1000_diff1
value: 39.614836211522714
- type: nauc_map_at_1000_max
value: 22.019768626310192
- type: nauc_map_at_1000_std
value: -1.5238708712112499
- type: nauc_map_at_100_diff1
value: 39.63008548572307
- type: nauc_map_at_100_max
value: 22.044756063752345
- type: nauc_map_at_100_std
value: -1.4869190221494792
- type: nauc_map_at_10_diff1
value: 39.73025012395569
- type: nauc_map_at_10_max
value: 22.117710178892107
- type: nauc_map_at_10_std
value: -2.5129984871932973
- type: nauc_map_at_1_diff1
value: 45.015617718902654
- type: nauc_map_at_1_max
value: 19.313800263189638
- type: nauc_map_at_1_std
value: -4.763931386681675
- type: nauc_map_at_20_diff1
value: 39.53678019013766
- type: nauc_map_at_20_max
value: 21.880316719428258
- type: nauc_map_at_20_std
value: -1.882003994523355
- type: nauc_map_at_3_diff1
value: 40.37307665298228
- type: nauc_map_at_3_max
value: 20.851976075322533
- type: nauc_map_at_3_std
value: -2.429569082966531
- type: nauc_map_at_5_diff1
value: 39.763015635086
- type: nauc_map_at_5_max
value: 22.010102196900725
- type: nauc_map_at_5_std
value: -2.654896415670943
- type: nauc_mrr_at_1000_diff1
value: 39.74071733680025
- type: nauc_mrr_at_1000_max
value: 21.67309640681989
- type: nauc_mrr_at_1000_std
value: -1.4003373135477462
- type: nauc_mrr_at_100_diff1
value: 39.730614151966485
- type: nauc_mrr_at_100_max
value: 21.678390048971767
- type: nauc_mrr_at_100_std
value: -1.3655362623563931
- type: nauc_mrr_at_10_diff1
value: 39.7900031013241
- type: nauc_mrr_at_10_max
value: 21.73643491725051
- type: nauc_mrr_at_10_std
value: -2.1175389838696312
- type: nauc_mrr_at_1_diff1
value: 46.165736140679776
- type: nauc_mrr_at_1_max
value: 20.071083446822147
- type: nauc_mrr_at_1_std
value: -5.018909100858311
- type: nauc_mrr_at_20_diff1
value: 39.6371295762885
- type: nauc_mrr_at_20_max
value: 21.659557440270973
- type: nauc_mrr_at_20_std
value: -1.4909603958341686
- type: nauc_mrr_at_3_diff1
value: 40.351150322758876
- type: nauc_mrr_at_3_max
value: 20.83706249041544
- type: nauc_mrr_at_3_std
value: -1.956027373253151
- type: nauc_mrr_at_5_diff1
value: 39.57759107791911
- type: nauc_mrr_at_5_max
value: 21.79552045204151
- type: nauc_mrr_at_5_std
value: -2.1507013120951126
- type: nauc_ndcg_at_1000_diff1
value: 37.717619356839016
- type: nauc_ndcg_at_1000_max
value: 22.545375504379805
- type: nauc_ndcg_at_1000_std
value: 1.682348628141016
- type: nauc_ndcg_at_100_diff1
value: 37.656027803682626
- type: nauc_ndcg_at_100_max
value: 22.49278246383637
- type: nauc_ndcg_at_100_std
value: 2.6818118152357773
- type: nauc_ndcg_at_10_diff1
value: 37.834954205539766
- type: nauc_ndcg_at_10_max
value: 22.655839885558443
- type: nauc_ndcg_at_10_std
value: -1.97159619786231
- type: nauc_ndcg_at_1_diff1
value: 46.165736140679776
- type: nauc_ndcg_at_1_max
value: 20.071083446822147
- type: nauc_ndcg_at_1_std
value: -5.018909100858311
- type: nauc_ndcg_at_20_diff1
value: 37.171914857454304
- type: nauc_ndcg_at_20_max
value: 21.858904801745897
- type: nauc_ndcg_at_20_std
value: 0.3809854859496657
- type: nauc_ndcg_at_3_diff1
value: 38.4460623883955
- type: nauc_ndcg_at_3_max
value: 20.95244159463402
- type: nauc_ndcg_at_3_std
value: -1.2685011660086651
- type: nauc_ndcg_at_5_diff1
value: 37.48831054573054
- type: nauc_ndcg_at_5_max
value: 22.625921624640526
- type: nauc_ndcg_at_5_std
value: -2.049221092724925
- type: nauc_precision_at_1000_diff1
value: -19.120500628263994
- type: nauc_precision_at_1000_max
value: -6.650707109047473
- type: nauc_precision_at_1000_std
value: 15.71193179253002
- type: nauc_precision_at_100_diff1
value: 6.254606806876069
- type: nauc_precision_at_100_max
value: 14.601826922181823
- type: nauc_precision_at_100_std
value: 28.38299592246453
- type: nauc_precision_at_10_diff1
value: 22.978614338670816
- type: nauc_precision_at_10_max
value: 23.04146766323557
- type: nauc_precision_at_10_std
value: 6.226264308612577
- type: nauc_precision_at_1_diff1
value: 46.165736140679776
- type: nauc_precision_at_1_max
value: 20.071083446822147
- type: nauc_precision_at_1_std
value: -5.018909100858311
- type: nauc_precision_at_20_diff1
value: 17.681032853225602
- type: nauc_precision_at_20_max
value: 18.66680304585122
- type: nauc_precision_at_20_std
value: 15.34896796713905
- type: nauc_precision_at_3_diff1
value: 31.359396694559194
- type: nauc_precision_at_3_max
value: 22.279263308973274
- type: nauc_precision_at_3_std
value: 3.6302537979529035
- type: nauc_precision_at_5_diff1
value: 26.32257879892933
- type: nauc_precision_at_5_max
value: 25.402524493181026
- type: nauc_precision_at_5_std
value: 4.731450603747359
- type: nauc_recall_at_1000_diff1
value: 23.562925244967875
- type: nauc_recall_at_1000_max
value: 30.737399333586797
- type: nauc_recall_at_1000_std
value: 34.19418935008663
- type: nauc_recall_at_100_diff1
value: 28.703574970574824
- type: nauc_recall_at_100_max
value: 22.448663600170278
- type: nauc_recall_at_100_std
value: 24.53297349042035
- type: nauc_recall_at_10_diff1
value: 31.73603907811882
- type: nauc_recall_at_10_max
value: 23.453183748640765
- type: nauc_recall_at_10_std
value: -1.8279054407176274
- type: nauc_recall_at_1_diff1
value: 45.015617718902654
- type: nauc_recall_at_1_max
value: 19.313800263189638
- type: nauc_recall_at_1_std
value: -4.763931386681675
- type: nauc_recall_at_20_diff1
value: 28.74169081866096
- type: nauc_recall_at_20_max
value: 20.035509169577324
- type: nauc_recall_at_20_std
value: 7.371615811227748
- type: nauc_recall_at_3_diff1
value: 34.09890157333362
- type: nauc_recall_at_3_max
value: 20.46565842748346
- type: nauc_recall_at_3_std
value: -0.4337283067447526
- type: nauc_recall_at_5_diff1
value: 30.974580787842402
- type: nauc_recall_at_5_max
value: 23.76379349487105
- type: nauc_recall_at_5_std
value: -1.8407515927979428
- type: ndcg_at_1
value: 27.172
- type: ndcg_at_10
value: 38.269999999999996
- type: ndcg_at_100
value: 43.338
- type: ndcg_at_1000
value: 45.594
- type: ndcg_at_20
value: 40.256
- type: ndcg_at_3
value: 32.673
- type: ndcg_at_5
value: 35.224
- type: precision_at_1
value: 27.172
- type: precision_at_10
value: 6.063000000000001
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.123
- type: precision_at_20
value: 3.5029999999999997
- type: precision_at_3
value: 13.74
- type: precision_at_5
value: 9.797
- type: recall_at_1
value: 25.324999999999996
- type: recall_at_10
value: 51.634
- type: recall_at_100
value: 74.687
- type: recall_at_1000
value: 91.412
- type: recall_at_20
value: 59.207
- type: recall_at_3
value: 36.678
- type: recall_at_5
value: 42.742999999999995
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: main_score
value: 36.853
- type: map_at_1
value: 15.371000000000002
- type: map_at_10
value: 27.122
- type: map_at_100
value: 29.226000000000003
- type: map_at_1000
value: 29.409999999999997
- type: map_at_20
value: 28.274
- type: map_at_3
value: 22.431
- type: map_at_5
value: 24.877
- type: mrr_at_1
value: 34.13680781758958
- type: mrr_at_10
value: 47.265911793599145
- type: mrr_at_100
value: 48.028369995763846
- type: mrr_at_1000
value: 48.05317022537804
- type: mrr_at_20
value: 47.75785292259516
- type: mrr_at_3
value: 43.887079261672156
- type: mrr_at_5
value: 45.906623235613544
- type: nauc_map_at_1000_diff1
value: 24.949211292921547
- type: nauc_map_at_1000_max
value: 38.69844483304584
- type: nauc_map_at_1000_std
value: 18.336359440844753
- type: nauc_map_at_100_diff1
value: 24.8951732982492
- type: nauc_map_at_100_max
value: 38.65049158594052
- type: nauc_map_at_100_std
value: 18.28935278388095
- type: nauc_map_at_10_diff1
value: 24.606032216798273
- type: nauc_map_at_10_max
value: 38.00608351559887
- type: nauc_map_at_10_std
value: 16.61261615173358
- type: nauc_map_at_1_diff1
value: 30.83614944448221
- type: nauc_map_at_1_max
value: 33.757528532809
- type: nauc_map_at_1_std
value: 8.880622713261126
- type: nauc_map_at_20_diff1
value: 24.75491310922017
- type: nauc_map_at_20_max
value: 38.353679076398834
- type: nauc_map_at_20_std
value: 17.58637493443171
- type: nauc_map_at_3_diff1
value: 25.563085273287083
- type: nauc_map_at_3_max
value: 35.14515679047155
- type: nauc_map_at_3_std
value: 11.75594869817732
- type: nauc_map_at_5_diff1
value: 24.815807517691614
- type: nauc_map_at_5_max
value: 36.25905426665983
- type: nauc_map_at_5_std
value: 14.516391726180697
- type: nauc_mrr_at_1000_diff1
value: 27.948233427121274
- type: nauc_mrr_at_1000_max
value: 37.5893640945859
- type: nauc_mrr_at_1000_std
value: 19.588442449629763
- type: nauc_mrr_at_100_diff1
value: 27.947962345854037
- type: nauc_mrr_at_100_max
value: 37.60375479481945
- type: nauc_mrr_at_100_std
value: 19.614791576283793
- type: nauc_mrr_at_10_diff1
value: 27.882311310262136
- type: nauc_mrr_at_10_max
value: 37.58580968074054
- type: nauc_mrr_at_10_std
value: 19.49875186170201
- type: nauc_mrr_at_1_diff1
value: 28.017413073648477
- type: nauc_mrr_at_1_max
value: 32.87710191514022
- type: nauc_mrr_at_1_std
value: 14.04889142608459
- type: nauc_mrr_at_20_diff1
value: 27.89129925771968
- type: nauc_mrr_at_20_max
value: 37.6142863106945
- type: nauc_mrr_at_20_std
value: 19.645390143394163
- type: nauc_mrr_at_3_diff1
value: 27.99609559690795
- type: nauc_mrr_at_3_max
value: 36.87362332456197
- type: nauc_mrr_at_3_std
value: 18.598416821915333
- type: nauc_mrr_at_5_diff1
value: 27.68306089976716
- type: nauc_mrr_at_5_max
value: 37.12264485659723
- type: nauc_mrr_at_5_std
value: 19.18875305730564
- type: nauc_ndcg_at_1000_diff1
value: 25.736779186453777
- type: nauc_ndcg_at_1000_max
value: 41.93281139456004
- type: nauc_ndcg_at_1000_std
value: 25.179038422659993
- type: nauc_ndcg_at_100_diff1
value: 25.144796623848322
- type: nauc_ndcg_at_100_max
value: 41.72820916876173
- type: nauc_ndcg_at_100_std
value: 25.12851686850754
- type: nauc_ndcg_at_10_diff1
value: 24.321249191226652
- type: nauc_ndcg_at_10_max
value: 40.23711916935706
- type: nauc_ndcg_at_10_std
value: 20.89060972334557
- type: nauc_ndcg_at_1_diff1
value: 28.017413073648477
- type: nauc_ndcg_at_1_max
value: 32.87710191514022
- type: nauc_ndcg_at_1_std
value: 14.04889142608459
- type: nauc_ndcg_at_20_diff1
value: 24.5090484877482
- type: nauc_ndcg_at_20_max
value: 40.752854032983606
- type: nauc_ndcg_at_20_std
value: 22.70331074781384
- type: nauc_ndcg_at_3_diff1
value: 25.13499057756147
- type: nauc_ndcg_at_3_max
value: 35.8325682137567
- type: nauc_ndcg_at_3_std
value: 15.23768392706637
- type: nauc_ndcg_at_5_diff1
value: 24.614105695451116
- type: nauc_ndcg_at_5_max
value: 37.68089587624492
- type: nauc_ndcg_at_5_std
value: 17.946406099261708
- type: nauc_precision_at_1000_diff1
value: -2.022340544774227
- type: nauc_precision_at_1000_max
value: 6.070578645067797
- type: nauc_precision_at_1000_std
value: 22.15132728777549
- type: nauc_precision_at_100_diff1
value: 4.544144474504255
- type: nauc_precision_at_100_max
value: 19.780392159848574
- type: nauc_precision_at_100_std
value: 31.107111186002438
- type: nauc_precision_at_10_diff1
value: 10.107015022955848
- type: nauc_precision_at_10_max
value: 30.779709099060465
- type: nauc_precision_at_10_std
value: 27.324148451668602
- type: nauc_precision_at_1_diff1
value: 28.017413073648477
- type: nauc_precision_at_1_max
value: 32.87710191514022
- type: nauc_precision_at_1_std
value: 14.04889142608459
- type: nauc_precision_at_20_diff1
value: 8.270881053079405
- type: nauc_precision_at_20_max
value: 27.26753946078481
- type: nauc_precision_at_20_std
value: 29.156725822074204
- type: nauc_precision_at_3_diff1
value: 17.82468940497632
- type: nauc_precision_at_3_max
value: 31.490021174215155
- type: nauc_precision_at_3_std
value: 18.73818985054394
- type: nauc_precision_at_5_diff1
value: 13.24803141673961
- type: nauc_precision_at_5_max
value: 29.94926240784298
- type: nauc_precision_at_5_std
value: 23.2940906142919
- type: nauc_recall_at_1000_diff1
value: 19.09850333580471
- type: nauc_recall_at_1000_max
value: 46.026306142840596
- type: nauc_recall_at_1000_std
value: 46.50391519568263
- type: nauc_recall_at_100_diff1
value: 16.739384224869738
- type: nauc_recall_at_100_max
value: 40.68987136431252
- type: nauc_recall_at_100_std
value: 36.01609750485591
- type: nauc_recall_at_10_diff1
value: 17.51796617221814
- type: nauc_recall_at_10_max
value: 39.47453129444401
- type: nauc_recall_at_10_std
value: 23.79239002974899
- type: nauc_recall_at_1_diff1
value: 30.83614944448221
- type: nauc_recall_at_1_max
value: 33.757528532809
- type: nauc_recall_at_1_std
value: 8.880622713261126
- type: nauc_recall_at_20_diff1
value: 16.978668307251652
- type: nauc_recall_at_20_max
value: 39.09115357303713
- type: nauc_recall_at_20_std
value: 27.278668534187524
- type: nauc_recall_at_3_diff1
value: 22.55937738994021
- type: nauc_recall_at_3_max
value: 36.25055459395638
- type: nauc_recall_at_3_std
value: 14.828905168761247
- type: nauc_recall_at_5_diff1
value: 19.32656748627199
- type: nauc_recall_at_5_max
value: 36.28836228620816
- type: nauc_recall_at_5_std
value: 19.264352933914278
- type: ndcg_at_1
value: 34.137
- type: ndcg_at_10
value: 36.853
- type: ndcg_at_100
value: 44.279
- type: ndcg_at_1000
value: 47.336
- type: ndcg_at_20
value: 39.815
- type: ndcg_at_3
value: 30.253999999999998
- type: ndcg_at_5
value: 32.649
- type: precision_at_1
value: 34.137
- type: precision_at_10
value: 11.655
- type: precision_at_100
value: 1.9619999999999997
- type: precision_at_1000
value: 0.254
- type: precision_at_20
value: 7.1209999999999996
- type: precision_at_3
value: 22.823
- type: precision_at_5
value: 17.655
- type: recall_at_1
value: 15.371000000000002
- type: recall_at_10
value: 43.718
- type: recall_at_100
value: 68.81
- type: recall_at_1000
value: 85.69600000000001
- type: recall_at_20
value: 51.94
- type: recall_at_3
value: 27.694000000000003
- type: recall_at_5
value: 34.469
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: main_score
value: 45.553
- type: map_at_1
value: 9.168999999999999
- type: map_at_10
value: 22.154
- type: map_at_100
value: 32.174
- type: map_at_1000
value: 33.974
- type: map_at_20
value: 25.899
- type: map_at_3
value: 15.275
- type: map_at_5
value: 18.291
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.39662698412697
- type: mrr_at_100
value: 78.56221458977012
- type: mrr_at_1000
value: 78.56669970642338
- type: mrr_at_20
value: 78.49688805346696
- type: mrr_at_3
value: 76.33333333333333
- type: mrr_at_5
value: 77.70833333333333
- type: nauc_map_at_1000_diff1
value: 18.465085922071346
- type: nauc_map_at_1000_max
value: 24.29804638788498
- type: nauc_map_at_1000_std
value: 22.380463943423514
- type: nauc_map_at_100_diff1
value: 19.37585410674523
- type: nauc_map_at_100_max
value: 22.56424042509462
- type: nauc_map_at_100_std
value: 19.672237275984426
- type: nauc_map_at_10_diff1
value: 23.597788166305577
- type: nauc_map_at_10_max
value: 9.157316105122925
- type: nauc_map_at_10_std
value: -3.8881247055786807
- type: nauc_map_at_1_diff1
value: 43.96699602275052
- type: nauc_map_at_1_max
value: -0.7577088440873263
- type: nauc_map_at_1_std
value: -17.732463891968404
- type: nauc_map_at_20_diff1
value: 22.326759054850097
- type: nauc_map_at_20_max
value: 14.879191412167703
- type: nauc_map_at_20_std
value: 5.405751236575241
- type: nauc_map_at_3_diff1
value: 28.73583545428074
- type: nauc_map_at_3_max
value: 1.5986597211018239
- type: nauc_map_at_3_std
value: -16.512455883681515
- type: nauc_map_at_5_diff1
value: 25.401810959155057
- type: nauc_map_at_5_max
value: 4.418875376978587
- type: nauc_map_at_5_std
value: -12.296750992013052
- type: nauc_mrr_at_1000_diff1
value: 51.228801807498584
- type: nauc_mrr_at_1000_max
value: 61.040998883279585
- type: nauc_mrr_at_1000_std
value: 40.93983887257123
- type: nauc_mrr_at_100_diff1
value: 51.23715338435314
- type: nauc_mrr_at_100_max
value: 61.03971408781317
- type: nauc_mrr_at_100_std
value: 40.91796923590573
- type: nauc_mrr_at_10_diff1
value: 51.1214868552331
- type: nauc_mrr_at_10_max
value: 61.03069045590881
- type: nauc_mrr_at_10_std
value: 40.661621199704264
- type: nauc_mrr_at_1_diff1
value: 50.84660003035892
- type: nauc_mrr_at_1_max
value: 60.692091499960895
- type: nauc_mrr_at_1_std
value: 42.126228731502955
- type: nauc_mrr_at_20_diff1
value: 51.0402624284872
- type: nauc_mrr_at_20_max
value: 60.94577844338166
- type: nauc_mrr_at_20_std
value: 40.89505950503613
- type: nauc_mrr_at_3_diff1
value: 51.771113665996516
- type: nauc_mrr_at_3_max
value: 61.65264793077224
- type: nauc_mrr_at_3_std
value: 41.75781827057092
- type: nauc_mrr_at_5_diff1
value: 51.0656793772882
- type: nauc_mrr_at_5_max
value: 61.08042065139715
- type: nauc_mrr_at_5_std
value: 41.11203271084835
- type: nauc_ndcg_at_1000_diff1
value: 22.347978262245107
- type: nauc_ndcg_at_1000_max
value: 36.56458763955002
- type: nauc_ndcg_at_1000_std
value: 35.99616144258822
- type: nauc_ndcg_at_100_diff1
value: 23.1120990977162
- type: nauc_ndcg_at_100_max
value: 30.79663306311657
- type: nauc_ndcg_at_100_std
value: 27.387572106784297
- type: nauc_ndcg_at_10_diff1
value: 23.329746066899656
- type: nauc_ndcg_at_10_max
value: 28.69246947084685
- type: nauc_ndcg_at_10_std
value: 21.457736188325345
- type: nauc_ndcg_at_1_diff1
value: 39.99399153456974
- type: nauc_ndcg_at_1_max
value: 38.12447856470389
- type: nauc_ndcg_at_1_std
value: 27.768869260384676
- type: nauc_ndcg_at_20_diff1
value: 24.945374175339907
- type: nauc_ndcg_at_20_max
value: 27.67836982165295
- type: nauc_ndcg_at_20_std
value: 19.7933631060578
- type: nauc_ndcg_at_3_diff1
value: 26.063492354398527
- type: nauc_ndcg_at_3_max
value: 33.06541959550656
- type: nauc_ndcg_at_3_std
value: 23.278902797288726
- type: nauc_ndcg_at_5_diff1
value: 22.521596060750035
- type: nauc_ndcg_at_5_max
value: 31.210005673730784
- type: nauc_ndcg_at_5_std
value: 22.893106456317927
- type: nauc_precision_at_1000_diff1
value: -19.845356495096006
- type: nauc_precision_at_1000_max
value: 4.163819381816099
- type: nauc_precision_at_1000_std
value: 7.612952884590339
- type: nauc_precision_at_100_diff1
value: -8.2679285153361
- type: nauc_precision_at_100_max
value: 29.78018175573565
- type: nauc_precision_at_100_std
value: 41.07244463956215
- type: nauc_precision_at_10_diff1
value: -3.2451428407349057
- type: nauc_precision_at_10_max
value: 36.92563008274906
- type: nauc_precision_at_10_std
value: 45.06962043489777
- type: nauc_precision_at_1_diff1
value: 50.84660003035892
- type: nauc_precision_at_1_max
value: 60.692091499960895
- type: nauc_precision_at_1_std
value: 42.126228731502955
- type: nauc_precision_at_20_diff1
value: -3.432279149061878
- type: nauc_precision_at_20_max
value: 37.013592483974875
- type: nauc_precision_at_20_std
value: 46.47324739428665
- type: nauc_precision_at_3_diff1
value: 7.28495481051025
- type: nauc_precision_at_3_max
value: 38.66372411741402
- type: nauc_precision_at_3_std
value: 35.23163993723955
- type: nauc_precision_at_5_diff1
value: -0.16540230063716202
- type: nauc_precision_at_5_max
value: 37.322494255721715
- type: nauc_precision_at_5_std
value: 39.666653561269754
- type: nauc_recall_at_1000_diff1
value: 11.388326469283681
- type: nauc_recall_at_1000_max
value: 32.698146308591674
- type: nauc_recall_at_1000_std
value: 49.48830488070777
- type: nauc_recall_at_100_diff1
value: 11.497443532756819
- type: nauc_recall_at_100_max
value: 20.196970431621615
- type: nauc_recall_at_100_std
value: 23.688772100803433
- type: nauc_recall_at_10_diff1
value: 16.519851398596003
- type: nauc_recall_at_10_max
value: 0.774066845071221
- type: nauc_recall_at_10_std
value: -10.89514647001814
- type: nauc_recall_at_1_diff1
value: 43.96699602275052
- type: nauc_recall_at_1_max
value: -0.7577088440873263
- type: nauc_recall_at_1_std
value: -17.732463891968404
- type: nauc_recall_at_20_diff1
value: 15.202960269878258
- type: nauc_recall_at_20_max
value: 7.067263295590253
- type: nauc_recall_at_20_std
value: -0.06050108222640702
- type: nauc_recall_at_3_diff1
value: 24.066741361525125
- type: nauc_recall_at_3_max
value: -2.1961525860488424
- type: nauc_recall_at_3_std
value: -19.48307077749568
- type: nauc_recall_at_5_diff1
value: 20.086330794102707
- type: nauc_recall_at_5_max
value: -0.8866528062747986
- type: nauc_recall_at_5_std
value: -16.53799173962747
- type: ndcg_at_1
value: 57.99999999999999
- type: ndcg_at_10
value: 45.553
- type: ndcg_at_100
value: 51.014
- type: ndcg_at_1000
value: 58.226
- type: ndcg_at_20
value: 44.98
- type: ndcg_at_3
value: 48.981
- type: ndcg_at_5
value: 46.794999999999995
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 36.85
- type: precision_at_100
value: 11.955
- type: precision_at_1000
value: 2.247
- type: precision_at_20
value: 28.075
- type: precision_at_3
value: 52.666999999999994
- type: precision_at_5
value: 45.85
- type: recall_at_1
value: 9.168999999999999
- type: recall_at_10
value: 28.796
- type: recall_at_100
value: 58.892999999999994
- type: recall_at_1000
value: 81.644
- type: recall_at_20
value: 36.659000000000006
- type: recall_at_3
value: 16.709
- type: recall_at_5
value: 21.387
task:
type: Retrieval
- dataset:
config: default
name: MTEB FEVER
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: main_score
value: 88.41
- type: map_at_1
value: 75.637
- type: map_at_10
value: 84.674
- type: map_at_100
value: 84.909
- type: map_at_1000
value: 84.92
- type: map_at_20
value: 84.836
- type: map_at_3
value: 83.44200000000001
- type: map_at_5
value: 84.28099999999999
- type: mrr_at_1
value: 81.56315631563157
- type: mrr_at_10
value: 88.89571695264748
- type: mrr_at_100
value: 88.93671417216285
- type: mrr_at_1000
value: 88.93708016011664
- type: mrr_at_20
value: 88.9311652665256
- type: mrr_at_3
value: 88.20882088208805
- type: mrr_at_5
value: 88.72937293729349
- type: nauc_map_at_1000_diff1
value: 54.41216035074026
- type: nauc_map_at_1000_max
value: 13.346153003554361
- type: nauc_map_at_1000_std
value: -6.721664416152164
- type: nauc_map_at_100_diff1
value: 54.36538350995795
- type: nauc_map_at_100_max
value: 13.355583381471298
- type: nauc_map_at_100_std
value: -6.696921015641016
- type: nauc_map_at_10_diff1
value: 54.0389127730555
- type: nauc_map_at_10_max
value: 13.387802159150663
- type: nauc_map_at_10_std
value: -6.73514381731833
- type: nauc_map_at_1_diff1
value: 57.99489574836453
- type: nauc_map_at_1_max
value: 7.830032589171654
- type: nauc_map_at_1_std
value: -10.140208285080295
- type: nauc_map_at_20_diff1
value: 54.16841004736076
- type: nauc_map_at_20_max
value: 13.345607363689746
- type: nauc_map_at_20_std
value: -6.663119775158465
- type: nauc_map_at_3_diff1
value: 53.82879543599303
- type: nauc_map_at_3_max
value: 12.716952288433902
- type: nauc_map_at_3_std
value: -7.746102082835598
- type: nauc_map_at_5_diff1
value: 53.82838395350109
- type: nauc_map_at_5_max
value: 13.487373534211702
- type: nauc_map_at_5_std
value: -6.869504398693434
- type: nauc_mrr_at_1000_diff1
value: 68.92783546581906
- type: nauc_mrr_at_1000_max
value: 12.076297180596592
- type: nauc_mrr_at_1000_std
value: -13.306257067567998
- type: nauc_mrr_at_100_diff1
value: 68.92780219775517
- type: nauc_mrr_at_100_max
value: 12.078449805054374
- type: nauc_mrr_at_100_std
value: -13.303524852703719
- type: nauc_mrr_at_10_diff1
value: 68.92686206881258
- type: nauc_mrr_at_10_max
value: 12.273295656884873
- type: nauc_mrr_at_10_std
value: -13.222483496603965
- type: nauc_mrr_at_1_diff1
value: 70.1738022073041
- type: nauc_mrr_at_1_max
value: 9.378639533482806
- type: nauc_mrr_at_1_std
value: -13.444033823202348
- type: nauc_mrr_at_20_diff1
value: 68.91161304905303
- type: nauc_mrr_at_20_max
value: 12.117091514817885
- type: nauc_mrr_at_20_std
value: -13.258261750160239
- type: nauc_mrr_at_3_diff1
value: 68.61982455945467
- type: nauc_mrr_at_3_max
value: 12.608213879734578
- type: nauc_mrr_at_3_std
value: -13.558003431587839
- type: nauc_mrr_at_5_diff1
value: 68.81439097457242
- type: nauc_mrr_at_5_max
value: 12.54025598903624
- type: nauc_mrr_at_5_std
value: -13.199231514972093
- type: nauc_ndcg_at_1000_diff1
value: 56.47563443877495
- type: nauc_ndcg_at_1000_max
value: 14.508331783439466
- type: nauc_ndcg_at_1000_std
value: -6.206829736668775
- type: nauc_ndcg_at_100_diff1
value: 55.54015515673474
- type: nauc_ndcg_at_100_max
value: 14.753595778278136
- type: nauc_ndcg_at_100_std
value: -5.638517949568802
- type: nauc_ndcg_at_10_diff1
value: 54.220845223257996
- type: nauc_ndcg_at_10_max
value: 15.265309648490021
- type: nauc_ndcg_at_10_std
value: -5.516276098929109
- type: nauc_ndcg_at_1_diff1
value: 70.1738022073041
- type: nauc_ndcg_at_1_max
value: 9.378639533482806
- type: nauc_ndcg_at_1_std
value: -13.444033823202348
- type: nauc_ndcg_at_20_diff1
value: 54.481406100854635
- type: nauc_ndcg_at_20_max
value: 14.868763583210498
- type: nauc_ndcg_at_20_std
value: -5.328097380018734
- type: nauc_ndcg_at_3_diff1
value: 54.94411725607744
- type: nauc_ndcg_at_3_max
value: 14.27186734506607
- type: nauc_ndcg_at_3_std
value: -7.894724962312474
- type: nauc_ndcg_at_5_diff1
value: 54.08048166974806
- type: nauc_ndcg_at_5_max
value: 15.528233170721006
- type: nauc_ndcg_at_5_std
value: -5.984768714537104
- type: nauc_precision_at_1000_diff1
value: -8.744323640074445
- type: nauc_precision_at_1000_max
value: -0.01881224392053465
- type: nauc_precision_at_1000_std
value: 3.8721477979260635
- type: nauc_precision_at_100_diff1
value: -11.86150156952171
- type: nauc_precision_at_100_max
value: 3.2736651314552314
- type: nauc_precision_at_100_std
value: 8.12687620615509
- type: nauc_precision_at_10_diff1
value: -10.360708676781178
- type: nauc_precision_at_10_max
value: 10.945552490433458
- type: nauc_precision_at_10_std
value: 11.016707653014485
- type: nauc_precision_at_1_diff1
value: 70.1738022073041
- type: nauc_precision_at_1_max
value: 9.378639533482806
- type: nauc_precision_at_1_std
value: -13.444033823202348
- type: nauc_precision_at_20_diff1
value: -13.557721925696583
- type: nauc_precision_at_20_max
value: 6.331386521718574
- type: nauc_precision_at_20_std
value: 10.322188778142388
- type: nauc_precision_at_3_diff1
value: 15.139456770248968
- type: nauc_precision_at_3_max
value: 17.10220985600708
- type: nauc_precision_at_3_std
value: 3.0448183682558074
- type: nauc_precision_at_5_diff1
value: -1.9825577548111102
- type: nauc_precision_at_5_max
value: 17.139148127012625
- type: nauc_precision_at_5_std
value: 10.598435750554753
- type: nauc_recall_at_1000_diff1
value: 15.641740744283005
- type: nauc_recall_at_1000_max
value: 44.65315702195612
- type: nauc_recall_at_1000_std
value: 52.34265862835513
- type: nauc_recall_at_100_diff1
value: 5.254385435323394
- type: nauc_recall_at_100_max
value: 38.53577774395794
- type: nauc_recall_at_100_std
value: 43.47744274335829
- type: nauc_recall_at_10_diff1
value: 19.135735476268042
- type: nauc_recall_at_10_max
value: 30.05417445923848
- type: nauc_recall_at_10_std
value: 18.3988023241141
- type: nauc_recall_at_1_diff1
value: 57.99489574836453
- type: nauc_recall_at_1_max
value: 7.830032589171654
- type: nauc_recall_at_1_std
value: -10.140208285080295
- type: nauc_recall_at_20_diff1
value: 9.444797759735126
- type: nauc_recall_at_20_max
value: 31.001311675371017
- type: nauc_recall_at_20_std
value: 29.351418893822178
- type: nauc_recall_at_3_diff1
value: 36.88862653262064
- type: nauc_recall_at_3_max
value: 19.845892741607823
- type: nauc_recall_at_3_std
value: -1.0584273105890794
- type: nauc_recall_at_5_diff1
value: 27.360718561944974
- type: nauc_recall_at_5_max
value: 26.698311215441738
- type: nauc_recall_at_5_std
value: 8.97113997755362
- type: ndcg_at_1
value: 81.563
- type: ndcg_at_10
value: 88.41
- type: ndcg_at_100
value: 89.101
- type: ndcg_at_1000
value: 89.25800000000001
- type: ndcg_at_20
value: 88.79
- type: ndcg_at_3
value: 86.599
- type: ndcg_at_5
value: 87.74
- type: precision_at_1
value: 81.563
- type: precision_at_10
value: 10.699
- type: precision_at_100
value: 1.13
- type: precision_at_1000
value: 0.116
- type: precision_at_20
value: 5.479
- type: precision_at_3
value: 33.238
- type: precision_at_5
value: 20.744
- type: recall_at_1
value: 75.637
- type: recall_at_10
value: 95.57600000000001
- type: recall_at_100
value: 98.072
- type: recall_at_1000
value: 98.951
- type: recall_at_20
value: 96.792
- type: recall_at_3
value: 90.79599999999999
- type: recall_at_5
value: 93.674
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: main_score
value: 42.396
- type: map_at_1
value: 21.711
- type: map_at_10
value: 34.628
- type: map_at_100
value: 36.549
- type: map_at_1000
value: 36.719
- type: map_at_20
value: 35.673
- type: map_at_3
value: 30.585
- type: map_at_5
value: 32.875
- type: mrr_at_1
value: 41.82098765432099
- type: mrr_at_10
value: 50.69505682931607
- type: mrr_at_100
value: 51.50556608727901
- type: mrr_at_1000
value: 51.53870583208304
- type: mrr_at_20
value: 51.15345764364655
- type: mrr_at_3
value: 48.35390946502059
- type: mrr_at_5
value: 49.87397119341563
- type: nauc_map_at_1000_diff1
value: 45.182252919583895
- type: nauc_map_at_1000_max
value: 35.66124930024801
- type: nauc_map_at_1000_std
value: -0.6925562638650965
- type: nauc_map_at_100_diff1
value: 45.116964706960125
- type: nauc_map_at_100_max
value: 35.54990469525889
- type: nauc_map_at_100_std
value: -0.6667263852859368
- type: nauc_map_at_10_diff1
value: 45.39189096228184
- type: nauc_map_at_10_max
value: 34.780111261901
- type: nauc_map_at_10_std
value: -1.8169859294150819
- type: nauc_map_at_1_diff1
value: 47.72764937952259
- type: nauc_map_at_1_max
value: 24.83306559709341
- type: nauc_map_at_1_std
value: -4.714128457297418
- type: nauc_map_at_20_diff1
value: 45.17073365898278
- type: nauc_map_at_20_max
value: 35.0938403469058
- type: nauc_map_at_20_std
value: -1.373412631183604
- type: nauc_map_at_3_diff1
value: 46.525724305731295
- type: nauc_map_at_3_max
value: 31.042538866512597
- type: nauc_map_at_3_std
value: -4.119355935975354
- type: nauc_map_at_5_diff1
value: 45.79569633383187
- type: nauc_map_at_5_max
value: 32.88779656647293
- type: nauc_map_at_5_std
value: -3.2518474739335312
- type: nauc_mrr_at_1000_diff1
value: 52.83619185487903
- type: nauc_mrr_at_1000_max
value: 42.30310720405186
- type: nauc_mrr_at_1000_std
value: -1.1487703348518024
- type: nauc_mrr_at_100_diff1
value: 52.82248853996664
- type: nauc_mrr_at_100_max
value: 42.30549701564678
- type: nauc_mrr_at_100_std
value: -1.1240113031894834
- type: nauc_mrr_at_10_diff1
value: 52.74644276642243
- type: nauc_mrr_at_10_max
value: 42.39103029476398
- type: nauc_mrr_at_10_std
value: -1.1043413237848576
- type: nauc_mrr_at_1_diff1
value: 54.810335521617326
- type: nauc_mrr_at_1_max
value: 40.733260207843394
- type: nauc_mrr_at_1_std
value: -4.452554921565855
- type: nauc_mrr_at_20_diff1
value: 52.788257862499954
- type: nauc_mrr_at_20_max
value: 42.32658875363406
- type: nauc_mrr_at_20_std
value: -1.2209728080684497
- type: nauc_mrr_at_3_diff1
value: 53.43281175319808
- type: nauc_mrr_at_3_max
value: 41.735942650867926
- type: nauc_mrr_at_3_std
value: -2.462688102468019
- type: nauc_mrr_at_5_diff1
value: 52.874037126566606
- type: nauc_mrr_at_5_max
value: 41.93740449458822
- type: nauc_mrr_at_5_std
value: -1.2928874908441947
- type: nauc_ndcg_at_1000_diff1
value: 46.5532425476402
- type: nauc_ndcg_at_1000_max
value: 40.369611603370515
- type: nauc_ndcg_at_1000_std
value: 3.472567588386994
- type: nauc_ndcg_at_100_diff1
value: 45.75244404695404
- type: nauc_ndcg_at_100_max
value: 39.36470550675439
- type: nauc_ndcg_at_100_std
value: 4.356189041115731
- type: nauc_ndcg_at_10_diff1
value: 46.005135323539704
- type: nauc_ndcg_at_10_max
value: 37.89018165334218
- type: nauc_ndcg_at_10_std
value: 0.7129618297768014
- type: nauc_ndcg_at_1_diff1
value: 54.810335521617326
- type: nauc_ndcg_at_1_max
value: 40.733260207843394
- type: nauc_ndcg_at_1_std
value: -4.452554921565855
- type: nauc_ndcg_at_20_diff1
value: 45.841552790490034
- type: nauc_ndcg_at_20_max
value: 38.04992825472661
- type: nauc_ndcg_at_20_std
value: 1.2748305707955212
- type: nauc_ndcg_at_3_diff1
value: 46.683033449357744
- type: nauc_ndcg_at_3_max
value: 37.46397870760607
- type: nauc_ndcg_at_3_std
value: -2.3421854966319824
- type: nauc_ndcg_at_5_diff1
value: 45.82409645378457
- type: nauc_ndcg_at_5_max
value: 36.27588234096716
- type: nauc_ndcg_at_5_std
value: -1.5141197170944254
- type: nauc_precision_at_1000_diff1
value: -3.137944321071885
- type: nauc_precision_at_1000_max
value: 24.12803166253776
- type: nauc_precision_at_1000_std
value: 11.076454789944101
- type: nauc_precision_at_100_diff1
value: 3.9896283891401048
- type: nauc_precision_at_100_max
value: 31.00198316788829
- type: nauc_precision_at_100_std
value: 15.725887643803063
- type: nauc_precision_at_10_diff1
value: 20.493420889888394
- type: nauc_precision_at_10_max
value: 41.689699671507405
- type: nauc_precision_at_10_std
value: 9.374983385669914
- type: nauc_precision_at_1_diff1
value: 54.810335521617326
- type: nauc_precision_at_1_max
value: 40.733260207843394
- type: nauc_precision_at_1_std
value: -4.452554921565855
- type: nauc_precision_at_20_diff1
value: 15.02911800246446
- type: nauc_precision_at_20_max
value: 39.227068888505
- type: nauc_precision_at_20_std
value: 11.755558515319404
- type: nauc_precision_at_3_diff1
value: 34.044986535461746
- type: nauc_precision_at_3_max
value: 40.96605829831656
- type: nauc_precision_at_3_std
value: 1.1903535705688038
- type: nauc_precision_at_5_diff1
value: 26.617002443432707
- type: nauc_precision_at_5_max
value: 40.60413785916794
- type: nauc_precision_at_5_std
value: 3.6984531670502814
- type: nauc_recall_at_1000_diff1
value: 26.96489389440101
- type: nauc_recall_at_1000_max
value: 41.811583968523955
- type: nauc_recall_at_1000_std
value: 41.5719519496712
- type: nauc_recall_at_100_diff1
value: 28.50851434908223
- type: nauc_recall_at_100_max
value: 32.19528060706322
- type: nauc_recall_at_100_std
value: 25.56935294258179
- type: nauc_recall_at_10_diff1
value: 35.139582891180964
- type: nauc_recall_at_10_max
value: 32.15221840434225
- type: nauc_recall_at_10_std
value: 5.550434611582702
- type: nauc_recall_at_1_diff1
value: 47.72764937952259
- type: nauc_recall_at_1_max
value: 24.83306559709341
- type: nauc_recall_at_1_std
value: -4.714128457297418
- type: nauc_recall_at_20_diff1
value: 32.78604811055205
- type: nauc_recall_at_20_max
value: 29.62940720700254
- type: nauc_recall_at_20_std
value: 6.769941491859872
- type: nauc_recall_at_3_diff1
value: 40.76090616138699
- type: nauc_recall_at_3_max
value: 27.506425490226867
- type: nauc_recall_at_3_std
value: -2.608872693119243
- type: nauc_recall_at_5_diff1
value: 37.06532485024711
- type: nauc_recall_at_5_max
value: 27.704150556658448
- type: nauc_recall_at_5_std
value: 0.4718707152343872
- type: ndcg_at_1
value: 41.821000000000005
- type: ndcg_at_10
value: 42.396
- type: ndcg_at_100
value: 49.370000000000005
- type: ndcg_at_1000
value: 52.251000000000005
- type: ndcg_at_20
value: 45.097
- type: ndcg_at_3
value: 39.028
- type: ndcg_at_5
value: 40.222
- type: precision_at_1
value: 41.821000000000005
- type: precision_at_10
value: 11.451
- type: precision_at_100
value: 1.863
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_20
value: 6.798
- type: precision_at_3
value: 25.823
- type: precision_at_5
value: 18.735
- type: recall_at_1
value: 21.711
- type: recall_at_10
value: 48.862
- type: recall_at_100
value: 74.708
- type: recall_at_1000
value: 91.865
- type: recall_at_20
value: 57.50999999999999
- type: recall_at_3
value: 35.85
- type: recall_at_5
value: 41.976
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: main_score
value: 72.21
- type: map_at_1
value: 39.487
- type: map_at_10
value: 63.949999999999996
- type: map_at_100
value: 64.873
- type: map_at_1000
value: 64.927
- type: map_at_20
value: 64.529
- type: map_at_3
value: 60.243
- type: map_at_5
value: 62.613
- type: mrr_at_1
value: 78.97366644159351
- type: mrr_at_10
value: 84.84600173627825
- type: mrr_at_100
value: 85.0172804866798
- type: mrr_at_1000
value: 85.02245651152857
- type: mrr_at_20
value: 84.9625577788225
- type: mrr_at_3
value: 83.90276839972962
- type: mrr_at_5
value: 84.48278190411845
- type: nauc_map_at_1000_diff1
value: 19.825004700775164
- type: nauc_map_at_1000_max
value: 19.943221724164182
- type: nauc_map_at_1000_std
value: 10.068951166560058
- type: nauc_map_at_100_diff1
value: 19.80139472181137
- type: nauc_map_at_100_max
value: 19.938006132804347
- type: nauc_map_at_100_std
value: 10.100008107666842
- type: nauc_map_at_10_diff1
value: 19.53604502514735
- type: nauc_map_at_10_max
value: 19.62768870331064
- type: nauc_map_at_10_std
value: 9.446859074725705
- type: nauc_map_at_1_diff1
value: 67.7764270505257
- type: nauc_map_at_1_max
value: 38.45166604737058
- type: nauc_map_at_1_std
value: 1.9919181988552352
- type: nauc_map_at_20_diff1
value: 19.635871913149913
- type: nauc_map_at_20_max
value: 19.812838965919155
- type: nauc_map_at_20_std
value: 9.905163140101845
- type: nauc_map_at_3_diff1
value: 18.965707122532212
- type: nauc_map_at_3_max
value: 17.878860313056517
- type: nauc_map_at_3_std
value: 6.189378752019195
- type: nauc_map_at_5_diff1
value: 19.493354049675954
- type: nauc_map_at_5_max
value: 19.24527088109141
- type: nauc_map_at_5_std
value: 8.283883139680066
- type: nauc_mrr_at_1000_diff1
value: 66.87150374356781
- type: nauc_mrr_at_1000_max
value: 41.413456443203984
- type: nauc_mrr_at_1000_std
value: 4.140387282484357
- type: nauc_mrr_at_100_diff1
value: 66.87178015619061
- type: nauc_mrr_at_100_max
value: 41.419754763150834
- type: nauc_mrr_at_100_std
value: 4.15222235416704
- type: nauc_mrr_at_10_diff1
value: 66.89720586892301
- type: nauc_mrr_at_10_max
value: 41.56353878125211
- type: nauc_mrr_at_10_std
value: 4.213376519922392
- type: nauc_mrr_at_1_diff1
value: 67.7764270505257
- type: nauc_mrr_at_1_max
value: 38.45166604737058
- type: nauc_mrr_at_1_std
value: 1.9919181988552352
- type: nauc_mrr_at_20_diff1
value: 66.8714688713149
- type: nauc_mrr_at_20_max
value: 41.46170778986735
- type: nauc_mrr_at_20_std
value: 4.165154741309859
- type: nauc_mrr_at_3_diff1
value: 66.31615462679144
- type: nauc_mrr_at_3_max
value: 41.419637693259936
- type: nauc_mrr_at_3_std
value: 3.814834551396097
- type: nauc_mrr_at_5_diff1
value: 66.7289413087213
- type: nauc_mrr_at_5_max
value: 41.668346356371586
- type: nauc_mrr_at_5_std
value: 4.116331539882484
- type: nauc_ndcg_at_1000_diff1
value: 26.37325375970598
- type: nauc_ndcg_at_1000_max
value: 24.850915174721735
- type: nauc_ndcg_at_1000_std
value: 13.37585683440429
- type: nauc_ndcg_at_100_diff1
value: 25.591771178059503
- type: nauc_ndcg_at_100_max
value: 24.562820829532473
- type: nauc_ndcg_at_100_std
value: 14.093690500501541
- type: nauc_ndcg_at_10_diff1
value: 24.64600598115805
- type: nauc_ndcg_at_10_max
value: 23.543499404760023
- type: nauc_ndcg_at_10_std
value: 11.55823632781553
- type: nauc_ndcg_at_1_diff1
value: 67.7764270505257
- type: nauc_ndcg_at_1_max
value: 38.45166604737058
- type: nauc_ndcg_at_1_std
value: 1.9919181988552352
- type: nauc_ndcg_at_20_diff1
value: 24.757843275306726
- type: nauc_ndcg_at_20_max
value: 23.951154200380827
- type: nauc_ndcg_at_20_std
value: 12.931320453044886
- type: nauc_ndcg_at_3_diff1
value: 24.37742630418847
- type: nauc_ndcg_at_3_max
value: 21.310512304883723
- type: nauc_ndcg_at_3_std
value: 6.503993200818077
- type: nauc_ndcg_at_5_diff1
value: 24.813706829269716
- type: nauc_ndcg_at_5_max
value: 22.993657212898
- type: nauc_ndcg_at_5_std
value: 9.34462052506809
- type: nauc_precision_at_1000_diff1
value: -0.6506415756958156
- type: nauc_precision_at_1000_max
value: 28.039755644694875
- type: nauc_precision_at_1000_std
value: 53.46474329623814
- type: nauc_precision_at_100_diff1
value: 3.78462668236152
- type: nauc_precision_at_100_max
value: 22.501700881673862
- type: nauc_precision_at_100_std
value: 40.56672716474142
- type: nauc_precision_at_10_diff1
value: 9.156113228907534
- type: nauc_precision_at_10_max
value: 19.734206254833254
- type: nauc_precision_at_10_std
value: 19.986282545779602
- type: nauc_precision_at_1_diff1
value: 67.7764270505257
- type: nauc_precision_at_1_max
value: 38.45166604737058
- type: nauc_precision_at_1_std
value: 1.9919181988552352
- type: nauc_precision_at_20_diff1
value: 6.6164335644470125
- type: nauc_precision_at_20_max
value: 20.29343459608317
- type: nauc_precision_at_20_std
value: 26.51115475333977
- type: nauc_precision_at_3_diff1
value: 12.476520554399546
- type: nauc_precision_at_3_max
value: 16.69401409858964
- type: nauc_precision_at_3_std
value: 8.165880294907444
- type: nauc_precision_at_5_diff1
value: 11.783242828320958
- type: nauc_precision_at_5_max
value: 19.0679467875759
- type: nauc_precision_at_5_std
value: 13.615358345509884
- type: nauc_recall_at_1000_diff1
value: -0.6506415756960168
- type: nauc_recall_at_1000_max
value: 28.039755644694786
- type: nauc_recall_at_1000_std
value: 53.46474329623801
- type: nauc_recall_at_100_diff1
value: 3.7846266823613877
- type: nauc_recall_at_100_max
value: 22.501700881674008
- type: nauc_recall_at_100_std
value: 40.566727164741366
- type: nauc_recall_at_10_diff1
value: 9.15611322890755
- type: nauc_recall_at_10_max
value: 19.73420625483318
- type: nauc_recall_at_10_std
value: 19.98628254577951
- type: nauc_recall_at_1_diff1
value: 67.7764270505257
- type: nauc_recall_at_1_max
value: 38.45166604737058
- type: nauc_recall_at_1_std
value: 1.9919181988552352
- type: nauc_recall_at_20_diff1
value: 6.616433564446929
- type: nauc_recall_at_20_max
value: 20.293434596083248
- type: nauc_recall_at_20_std
value: 26.5111547533396
- type: nauc_recall_at_3_diff1
value: 12.476520554399531
- type: nauc_recall_at_3_max
value: 16.69401409858966
- type: nauc_recall_at_3_std
value: 8.165880294907438
- type: nauc_recall_at_5_diff1
value: 11.783242828320999
- type: nauc_recall_at_5_max
value: 19.067946787575845
- type: nauc_recall_at_5_std
value: 13.61535834550991
- type: ndcg_at_1
value: 78.974
- type: ndcg_at_10
value: 72.21
- type: ndcg_at_100
value: 75.264
- type: ndcg_at_1000
value: 76.259
- type: ndcg_at_20
value: 73.628
- type: ndcg_at_3
value: 67.047
- type: ndcg_at_5
value: 69.974
- type: precision_at_1
value: 78.974
- type: precision_at_10
value: 15.267
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.189
- type: precision_at_20
value: 8.09
- type: precision_at_3
value: 43.309
- type: precision_at_5
value: 28.294000000000004
- type: recall_at_1
value: 39.487
- type: recall_at_10
value: 76.334
- type: recall_at_100
value: 88.076
- type: recall_at_1000
value: 94.59100000000001
- type: recall_at_20
value: 80.898
- type: recall_at_3
value: 64.96300000000001
- type: recall_at_5
value: 70.736
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: main_score
value: 42.027
- type: map_at_1
value: 22.118
- type: map_at_10
value: 34.816
- type: map_at_100
value: 35.983
- type: map_at_1000
value: 36.028999999999996
- type: map_at_20
value: 35.545
- type: map_at_3
value: 30.752000000000002
- type: map_at_5
value: 33.114
- type: mrr_at_1
value: 22.793696275071635
- type: mrr_at_10
value: 35.47250079592483
- type: mrr_at_100
value: 36.576471512902856
- type: mrr_at_1000
value: 36.616205680509786
- type: mrr_at_20
value: 36.16557033864942
- type: mrr_at_3
value: 31.48758357211065
- type: mrr_at_5
value: 33.80563514804202
- type: nauc_map_at_1000_diff1
value: 32.89234100489284
- type: nauc_map_at_1000_max
value: 1.1802816553581001
- type: nauc_map_at_1000_std
value: -20.187692925732446
- type: nauc_map_at_100_diff1
value: 32.88694493681772
- type: nauc_map_at_100_max
value: 1.1732717578080365
- type: nauc_map_at_100_std
value: -20.164165529035245
- type: nauc_map_at_10_diff1
value: 32.826182211848796
- type: nauc_map_at_10_max
value: 1.1551262165737235
- type: nauc_map_at_10_std
value: -20.88326292319754
- type: nauc_map_at_1_diff1
value: 36.12732122790642
- type: nauc_map_at_1_max
value: 1.8197550109156913
- type: nauc_map_at_1_std
value: -17.205625720792167
- type: nauc_map_at_20_diff1
value: 32.83333177195551
- type: nauc_map_at_20_max
value: 1.0937431645506202
- type: nauc_map_at_20_std
value: -20.503956514646145
- type: nauc_map_at_3_diff1
value: 32.76264193805814
- type: nauc_map_at_3_max
value: 0.8560962042500389
- type: nauc_map_at_3_std
value: -20.608930717315577
- type: nauc_map_at_5_diff1
value: 32.78673238978775
- type: nauc_map_at_5_max
value: 1.0511863039329437
- type: nauc_map_at_5_std
value: -21.02164728626011
- type: nauc_mrr_at_1000_diff1
value: 32.610323934702286
- type: nauc_mrr_at_1000_max
value: 1.276669121901405
- type: nauc_mrr_at_1000_std
value: -19.908120615285043
- type: nauc_mrr_at_100_diff1
value: 32.601373758102795
- type: nauc_mrr_at_100_max
value: 1.2752735149992132
- type: nauc_mrr_at_100_std
value: -19.87937042610101
- type: nauc_mrr_at_10_diff1
value: 32.55795432078168
- type: nauc_mrr_at_10_max
value: 1.2881786969258637
- type: nauc_mrr_at_10_std
value: -20.54564519015977
- type: nauc_mrr_at_1_diff1
value: 35.596301376443726
- type: nauc_mrr_at_1_max
value: 1.7633238037306902
- type: nauc_mrr_at_1_std
value: -17.1999420019887
- type: nauc_mrr_at_20_diff1
value: 32.57185739111023
- type: nauc_mrr_at_20_max
value: 1.2212620853201877
- type: nauc_mrr_at_20_std
value: -20.179517281041264
- type: nauc_mrr_at_3_diff1
value: 32.42681377099514
- type: nauc_mrr_at_3_max
value: 0.8745921708861145
- type: nauc_mrr_at_3_std
value: -20.41017687790572
- type: nauc_mrr_at_5_diff1
value: 32.499107129648266
- type: nauc_mrr_at_5_max
value: 1.1159673851851573
- type: nauc_mrr_at_5_std
value: -20.695143502133824
- type: nauc_ndcg_at_1000_diff1
value: 32.16957965806702
- type: nauc_ndcg_at_1000_max
value: 1.6763998947980905
- type: nauc_ndcg_at_1000_std
value: -18.970592350332893
- type: nauc_ndcg_at_100_diff1
value: 31.977550102558872
- type: nauc_ndcg_at_100_max
value: 1.5625858650110014
- type: nauc_ndcg_at_100_std
value: -17.990456766123835
- type: nauc_ndcg_at_10_diff1
value: 31.82738932481356
- type: nauc_ndcg_at_10_max
value: 1.1661362042692103
- type: nauc_ndcg_at_10_std
value: -21.872680193994217
- type: nauc_ndcg_at_1_diff1
value: 35.596301376443726
- type: nauc_ndcg_at_1_max
value: 1.7633238037306902
- type: nauc_ndcg_at_1_std
value: -17.1999420019887
- type: nauc_ndcg_at_20_diff1
value: 31.749656399266264
- type: nauc_ndcg_at_20_max
value: 0.9629024493088691
- type: nauc_ndcg_at_20_std
value: -20.4379403899277
- type: nauc_ndcg_at_3_diff1
value: 31.731361436850836
- type: nauc_ndcg_at_3_max
value: 0.531749791578849
- type: nauc_ndcg_at_3_std
value: -21.551112910698674
- type: nauc_ndcg_at_5_diff1
value: 31.785373941157303
- type: nauc_ndcg_at_5_max
value: 0.86207769368333
- type: nauc_ndcg_at_5_std
value: -22.24923399160171
- type: nauc_precision_at_1000_diff1
value: -3.841288331986519
- type: nauc_precision_at_1000_max
value: 13.558041371634976
- type: nauc_precision_at_1000_std
value: 15.181510484512827
- type: nauc_precision_at_100_diff1
value: 12.441154582709053
- type: nauc_precision_at_100_max
value: 8.428136255841935
- type: nauc_precision_at_100_std
value: 14.710391839731656
- type: nauc_precision_at_10_diff1
value: 26.185854813986705
- type: nauc_precision_at_10_max
value: 1.6348387310504464
- type: nauc_precision_at_10_std
value: -23.448927004357298
- type: nauc_precision_at_1_diff1
value: 35.596301376443726
- type: nauc_precision_at_1_max
value: 1.7633238037306902
- type: nauc_precision_at_1_std
value: -17.1999420019887
- type: nauc_precision_at_20_diff1
value: 22.69194179544158
- type: nauc_precision_at_20_max
value: 1.2972015009169306
- type: nauc_precision_at_20_std
value: -15.751482380060269
- type: nauc_precision_at_3_diff1
value: 28.255531512125188
- type: nauc_precision_at_3_max
value: -0.3715575458464333
- type: nauc_precision_at_3_std
value: -24.227970454057697
- type: nauc_precision_at_5_diff1
value: 27.65497951098847
- type: nauc_precision_at_5_max
value: 0.449773375292472
- type: nauc_precision_at_5_std
value: -25.37445450938601
- type: nauc_recall_at_1000_diff1
value: 15.243948516763819
- type: nauc_recall_at_1000_max
value: 41.821227805251375
- type: nauc_recall_at_1000_std
value: 61.66297794838101
- type: nauc_recall_at_100_diff1
value: 24.516543685029994
- type: nauc_recall_at_100_max
value: 7.093972966253228
- type: nauc_recall_at_100_std
value: 17.244452321212282
- type: nauc_recall_at_10_diff1
value: 28.404243095182828
- type: nauc_recall_at_10_max
value: 1.0805210480930945
- type: nauc_recall_at_10_std
value: -24.885018657039527
- type: nauc_recall_at_1_diff1
value: 36.12732122790642
- type: nauc_recall_at_1_max
value: 1.8197550109156913
- type: nauc_recall_at_1_std
value: -17.205625720792167
- type: nauc_recall_at_20_diff1
value: 26.956250169438512
- type: nauc_recall_at_20_max
value: 0.023973408161285917
- type: nauc_recall_at_20_std
value: -18.32944444428131
- type: nauc_recall_at_3_diff1
value: 28.9894205130054
- type: nauc_recall_at_3_max
value: -0.36140658021466865
- type: nauc_recall_at_3_std
value: -24.022505107768364
- type: nauc_recall_at_5_diff1
value: 28.907023434955104
- type: nauc_recall_at_5_max
value: 0.2501037567297729
- type: nauc_recall_at_5_std
value: -25.719919602271496
- type: ndcg_at_1
value: 22.794
- type: ndcg_at_10
value: 42.027
- type: ndcg_at_100
value: 47.601
- type: ndcg_at_1000
value: 48.713
- type: ndcg_at_20
value: 44.623000000000005
- type: ndcg_at_3
value: 33.772999999999996
- type: ndcg_at_5
value: 37.991
- type: precision_at_1
value: 22.794
- type: precision_at_10
value: 6.711
- type: precision_at_100
value: 0.9490000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 3.8920000000000003
- type: precision_at_3
value: 14.46
- type: precision_at_5
value: 10.822
- type: recall_at_1
value: 22.118
- type: recall_at_10
value: 64.201
- type: recall_at_100
value: 89.878
- type: recall_at_1000
value: 98.259
- type: recall_at_20
value: 74.34100000000001
- type: recall_at_3
value: 41.8
- type: recall_at_5
value: 51.959
task:
type: Retrieval
- dataset:
config: default
name: MTEB NFCorpus
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: main_score
value: 36.201
- type: map_at_1
value: 5.654
- type: map_at_10
value: 13.402
- type: map_at_100
value: 16.849
- type: map_at_1000
value: 18.264
- type: map_at_20
value: 14.832
- type: map_at_3
value: 9.619
- type: map_at_5
value: 11.483
- type: mrr_at_1
value: 47.6780185758514
- type: mrr_at_10
value: 56.47906531033466
- type: mrr_at_100
value: 57.04539749991402
- type: mrr_at_1000
value: 57.08810157607369
- type: mrr_at_20
value: 56.88003170105462
- type: mrr_at_3
value: 54.43756449948401
- type: mrr_at_5
value: 55.660474716202266
- type: nauc_map_at_1000_diff1
value: 31.134615238698192
- type: nauc_map_at_1000_max
value: 36.09522002487132
- type: nauc_map_at_1000_std
value: 14.72627666649002
- type: nauc_map_at_100_diff1
value: 32.777473351864444
- type: nauc_map_at_100_max
value: 35.25391471621035
- type: nauc_map_at_100_std
value: 12.024428973861083
- type: nauc_map_at_10_diff1
value: 36.46466466148528
- type: nauc_map_at_10_max
value: 29.707805406826722
- type: nauc_map_at_10_std
value: 2.0678757794226335
- type: nauc_map_at_1_diff1
value: 54.30208426149679
- type: nauc_map_at_1_max
value: 18.69125148481608
- type: nauc_map_at_1_std
value: -8.970955660291802
- type: nauc_map_at_20_diff1
value: 34.76513311600623
- type: nauc_map_at_20_max
value: 32.20666003570514
- type: nauc_map_at_20_std
value: 5.924889441518581
- type: nauc_map_at_3_diff1
value: 45.73465176835491
- type: nauc_map_at_3_max
value: 23.492291524989106
- type: nauc_map_at_3_std
value: -5.0123536561688855
- type: nauc_map_at_5_diff1
value: 39.7128319374107
- type: nauc_map_at_5_max
value: 25.84231729559691
- type: nauc_map_at_5_std
value: -2.0861428981140344
- type: nauc_mrr_at_1000_diff1
value: 33.0997881703397
- type: nauc_mrr_at_1000_max
value: 52.7089709923531
- type: nauc_mrr_at_1000_std
value: 28.8517952674151
- type: nauc_mrr_at_100_diff1
value: 33.1094984027438
- type: nauc_mrr_at_100_max
value: 52.74301398138847
- type: nauc_mrr_at_100_std
value: 28.897997840300892
- type: nauc_mrr_at_10_diff1
value: 33.300713655464925
- type: nauc_mrr_at_10_max
value: 52.572139698742184
- type: nauc_mrr_at_10_std
value: 28.66875615527188
- type: nauc_mrr_at_1_diff1
value: 32.57632582147155
- type: nauc_mrr_at_1_max
value: 46.020072246328816
- type: nauc_mrr_at_1_std
value: 20.99097889820076
- type: nauc_mrr_at_20_diff1
value: 33.04083904518949
- type: nauc_mrr_at_20_max
value: 52.597451362456994
- type: nauc_mrr_at_20_std
value: 28.681527293587898
- type: nauc_mrr_at_3_diff1
value: 33.64864656322754
- type: nauc_mrr_at_3_max
value: 51.82256412011279
- type: nauc_mrr_at_3_std
value: 27.241260746740686
- type: nauc_mrr_at_5_diff1
value: 33.53201325467246
- type: nauc_mrr_at_5_max
value: 52.79440885773516
- type: nauc_mrr_at_5_std
value: 28.663081392086028
- type: nauc_ndcg_at_1000_diff1
value: 28.632650542040714
- type: nauc_ndcg_at_1000_max
value: 51.24103069835822
- type: nauc_ndcg_at_1000_std
value: 35.05503784757999
- type: nauc_ndcg_at_100_diff1
value: 29.082177715298503
- type: nauc_ndcg_at_100_max
value: 45.24750203464315
- type: nauc_ndcg_at_100_std
value: 27.146548925680914
- type: nauc_ndcg_at_10_diff1
value: 25.123554466093594
- type: nauc_ndcg_at_10_max
value: 42.74355537806512
- type: nauc_ndcg_at_10_std
value: 22.234407997803935
- type: nauc_ndcg_at_1_diff1
value: 33.75083940012058
- type: nauc_ndcg_at_1_max
value: 44.44319402133161
- type: nauc_ndcg_at_1_std
value: 19.146499358406487
- type: nauc_ndcg_at_20_diff1
value: 24.954207968331872
- type: nauc_ndcg_at_20_max
value: 41.25991844405748
- type: nauc_ndcg_at_20_std
value: 22.169009285868864
- type: nauc_ndcg_at_3_diff1
value: 28.186539942033516
- type: nauc_ndcg_at_3_max
value: 44.40790009754965
- type: nauc_ndcg_at_3_std
value: 20.99226576085115
- type: nauc_ndcg_at_5_diff1
value: 25.498387899376706
- type: nauc_ndcg_at_5_max
value: 43.174709766261316
- type: nauc_ndcg_at_5_std
value: 21.88111962672031
- type: nauc_precision_at_1000_diff1
value: -16.22321012507648
- type: nauc_precision_at_1000_max
value: 5.808852256649677
- type: nauc_precision_at_1000_std
value: 19.875641776698824
- type: nauc_precision_at_100_diff1
value: -10.248089374355486
- type: nauc_precision_at_100_max
value: 19.29065415127588
- type: nauc_precision_at_100_std
value: 31.75019665627339
- type: nauc_precision_at_10_diff1
value: 3.6783257583955056
- type: nauc_precision_at_10_max
value: 39.22286010695767
- type: nauc_precision_at_10_std
value: 31.225485732801022
- type: nauc_precision_at_1_diff1
value: 32.57632582147155
- type: nauc_precision_at_1_max
value: 46.020072246328816
- type: nauc_precision_at_1_std
value: 20.99097889820076
- type: nauc_precision_at_20_diff1
value: -3.1632510833242784
- type: nauc_precision_at_20_max
value: 31.575496762405734
- type: nauc_precision_at_20_std
value: 31.576283324468115
- type: nauc_precision_at_3_diff1
value: 17.78864585545647
- type: nauc_precision_at_3_max
value: 44.201289661125585
- type: nauc_precision_at_3_std
value: 25.447840649726693
- type: nauc_precision_at_5_diff1
value: 9.986748662091358
- type: nauc_precision_at_5_max
value: 41.214164860776755
- type: nauc_precision_at_5_std
value: 28.22551704127726
- type: nauc_recall_at_1000_diff1
value: 10.984331766850506
- type: nauc_recall_at_1000_max
value: 24.641216018034104
- type: nauc_recall_at_1000_std
value: 26.91064221008446
- type: nauc_recall_at_100_diff1
value: 23.7009352078473
- type: nauc_recall_at_100_max
value: 30.176031609451297
- type: nauc_recall_at_100_std
value: 20.360365243211564
- type: nauc_recall_at_10_diff1
value: 28.11831737650638
- type: nauc_recall_at_10_max
value: 24.21539670487414
- type: nauc_recall_at_10_std
value: 2.245504974150148
- type: nauc_recall_at_1_diff1
value: 54.30208426149679
- type: nauc_recall_at_1_max
value: 18.69125148481608
- type: nauc_recall_at_1_std
value: -8.970955660291802
- type: nauc_recall_at_20_diff1
value: 26.199425305139908
- type: nauc_recall_at_20_max
value: 24.66704097503736
- type: nauc_recall_at_20_std
value: 5.86052107206246
- type: nauc_recall_at_3_diff1
value: 42.88348677575622
- type: nauc_recall_at_3_max
value: 21.189371077603308
- type: nauc_recall_at_3_std
value: -4.537510127238226
- type: nauc_recall_at_5_diff1
value: 30.7936756722569
- type: nauc_recall_at_5_max
value: 21.06136406164962
- type: nauc_recall_at_5_std
value: -1.4113804735229794
- type: ndcg_at_1
value: 45.975
- type: ndcg_at_10
value: 36.201
- type: ndcg_at_100
value: 32.736
- type: ndcg_at_1000
value: 41.099000000000004
- type: ndcg_at_20
value: 33.724
- type: ndcg_at_3
value: 42.242000000000004
- type: ndcg_at_5
value: 40.137
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 26.904
- type: precision_at_100
value: 8.368
- type: precision_at_1000
value: 2.078
- type: precision_at_20
value: 19.845
- type: precision_at_3
value: 40.351
- type: precision_at_5
value: 35.108
- type: recall_at_1
value: 5.654
- type: recall_at_10
value: 17.793
- type: recall_at_100
value: 32.483000000000004
- type: recall_at_1000
value: 63.294
- type: recall_at_20
value: 21.754
- type: recall_at_3
value: 10.771
- type: recall_at_5
value: 14.084
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: main_score
value: 62.464
- type: map_at_1
value: 38.0
- type: map_at_10
value: 54.806
- type: map_at_100
value: 55.599
- type: map_at_1000
value: 55.617000000000004
- type: map_at_20
value: 55.336
- type: map_at_3
value: 50.58200000000001
- type: map_at_5
value: 53.181
- type: mrr_at_1
value: 42.46813441483198
- type: mrr_at_10
value: 57.060710147326446
- type: mrr_at_100
value: 57.60978373431328
- type: mrr_at_1000
value: 57.62192762809547
- type: mrr_at_20
value: 57.43431796174232
- type: mrr_at_3
value: 53.78041714947835
- type: mrr_at_5
value: 55.81257242178437
- type: nauc_map_at_1000_diff1
value: 38.337572188308194
- type: nauc_map_at_1000_max
value: 27.550035254787197
- type: nauc_map_at_1000_std
value: -7.5513729587308145
- type: nauc_map_at_100_diff1
value: 38.335337794455015
- type: nauc_map_at_100_max
value: 27.56919614414171
- type: nauc_map_at_100_std
value: -7.526017855405723
- type: nauc_map_at_10_diff1
value: 38.308131361353816
- type: nauc_map_at_10_max
value: 27.691849580929933
- type: nauc_map_at_10_std
value: -7.971461731555123
- type: nauc_map_at_1_diff1
value: 42.721072690634884
- type: nauc_map_at_1_max
value: 21.750451486885332
- type: nauc_map_at_1_std
value: -9.99540950522643
- type: nauc_map_at_20_diff1
value: 38.25792874982169
- type: nauc_map_at_20_max
value: 27.68877906159661
- type: nauc_map_at_20_std
value: -7.560753583212102
- type: nauc_map_at_3_diff1
value: 37.950570055936254
- type: nauc_map_at_3_max
value: 26.257969511794858
- type: nauc_map_at_3_std
value: -9.236868658300553
- type: nauc_map_at_5_diff1
value: 37.99893219450212
- type: nauc_map_at_5_max
value: 27.293454259158057
- type: nauc_map_at_5_std
value: -8.734089449603806
- type: nauc_mrr_at_1000_diff1
value: 37.777767467474774
- type: nauc_mrr_at_1000_max
value: 27.39507603748298
- type: nauc_mrr_at_1000_std
value: -5.554754076870114
- type: nauc_mrr_at_100_diff1
value: 37.77981674583538
- type: nauc_mrr_at_100_max
value: 27.411100989441557
- type: nauc_mrr_at_100_std
value: -5.539061231412731
- type: nauc_mrr_at_10_diff1
value: 37.72399003363479
- type: nauc_mrr_at_10_max
value: 27.618142546685416
- type: nauc_mrr_at_10_std
value: -5.6819843907448195
- type: nauc_mrr_at_1_diff1
value: 41.17596078958236
- type: nauc_mrr_at_1_max
value: 23.32588591818617
- type: nauc_mrr_at_1_std
value: -7.126628034623689
- type: nauc_mrr_at_20_diff1
value: 37.695136721588
- type: nauc_mrr_at_20_max
value: 27.52850676467322
- type: nauc_mrr_at_20_std
value: -5.50667995515647
- type: nauc_mrr_at_3_diff1
value: 37.23845700908964
- type: nauc_mrr_at_3_max
value: 26.69389772971012
- type: nauc_mrr_at_3_std
value: -6.31868405989011
- type: nauc_mrr_at_5_diff1
value: 37.33757394192838
- type: nauc_mrr_at_5_max
value: 27.42091593836207
- type: nauc_mrr_at_5_std
value: -5.993243330132065
- type: nauc_ndcg_at_1000_diff1
value: 37.74836061640332
- type: nauc_ndcg_at_1000_max
value: 29.03148916289089
- type: nauc_ndcg_at_1000_std
value: -5.543065770074502
- type: nauc_ndcg_at_100_diff1
value: 37.75593955089626
- type: nauc_ndcg_at_100_max
value: 29.67109480272493
- type: nauc_ndcg_at_100_std
value: -4.773697596687493
- type: nauc_ndcg_at_10_diff1
value: 37.41701174824348
- type: nauc_ndcg_at_10_max
value: 30.448703434043445
- type: nauc_ndcg_at_10_std
value: -6.306202666419071
- type: nauc_ndcg_at_1_diff1
value: 41.17596078958236
- type: nauc_ndcg_at_1_max
value: 23.32588591818617
- type: nauc_ndcg_at_1_std
value: -7.126628034623689
- type: nauc_ndcg_at_20_diff1
value: 37.17445197824622
- type: nauc_ndcg_at_20_max
value: 30.47378561555209
- type: nauc_ndcg_at_20_std
value: -4.921584853993488
- type: nauc_ndcg_at_3_diff1
value: 36.5261976812068
- type: nauc_ndcg_at_3_max
value: 27.560538820208926
- type: nauc_ndcg_at_3_std
value: -8.556686332882931
- type: nauc_ndcg_at_5_diff1
value: 36.571462759614526
- type: nauc_ndcg_at_5_max
value: 29.363401730752585
- type: nauc_ndcg_at_5_std
value: -7.825739170420347
- type: nauc_precision_at_1000_diff1
value: -12.588899483401223
- type: nauc_precision_at_1000_max
value: 2.641097890578701
- type: nauc_precision_at_1000_std
value: 17.643107625788748
- type: nauc_precision_at_100_diff1
value: -8.40579874206785
- type: nauc_precision_at_100_max
value: 9.725496771040037
- type: nauc_precision_at_100_std
value: 21.558582760191243
- type: nauc_precision_at_10_diff1
value: 6.619157191854486
- type: nauc_precision_at_10_max
value: 23.767406373688402
- type: nauc_precision_at_10_std
value: 10.428535003478808
- type: nauc_precision_at_1_diff1
value: 41.17596078958236
- type: nauc_precision_at_1_max
value: 23.32588591818617
- type: nauc_precision_at_1_std
value: -7.126628034623689
- type: nauc_precision_at_20_diff1
value: -0.6449974218292859
- type: nauc_precision_at_20_max
value: 20.211503851418783
- type: nauc_precision_at_20_std
value: 17.922745410142575
- type: nauc_precision_at_3_diff1
value: 19.710276097428657
- type: nauc_precision_at_3_max
value: 26.768918044758706
- type: nauc_precision_at_3_std
value: -1.0636448912049246
- type: nauc_precision_at_5_diff1
value: 13.073181337982613
- type: nauc_precision_at_5_max
value: 26.418340338971024
- type: nauc_precision_at_5_std
value: 2.9842078949528688
- type: nauc_recall_at_1000_diff1
value: 30.52411148739828
- type: nauc_recall_at_1000_max
value: 90.96409807536762
- type: nauc_recall_at_1000_std
value: 83.94857830921949
- type: nauc_recall_at_100_diff1
value: 36.936303690592155
- type: nauc_recall_at_100_max
value: 71.91515014325869
- type: nauc_recall_at_100_std
value: 48.93061263403371
- type: nauc_recall_at_10_diff1
value: 32.84292362076269
- type: nauc_recall_at_10_max
value: 44.27252783122478
- type: nauc_recall_at_10_std
value: -1.5981198975612385
- type: nauc_recall_at_1_diff1
value: 42.721072690634884
- type: nauc_recall_at_1_max
value: 21.750451486885332
- type: nauc_recall_at_1_std
value: -9.99540950522643
- type: nauc_recall_at_20_diff1
value: 29.36724417081702
- type: nauc_recall_at_20_max
value: 52.035846390214715
- type: nauc_recall_at_20_std
value: 11.967264191332818
- type: nauc_recall_at_3_diff1
value: 31.634923771936098
- type: nauc_recall_at_3_max
value: 30.225743369869473
- type: nauc_recall_at_3_std
value: -9.253665347118615
- type: nauc_recall_at_5_diff1
value: 30.66271853090737
- type: nauc_recall_at_5_max
value: 35.70815715994996
- type: nauc_recall_at_5_std
value: -7.836012956078996
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 62.464
- type: ndcg_at_100
value: 65.618
- type: ndcg_at_1000
value: 66.014
- type: ndcg_at_20
value: 64.12
- type: ndcg_at_3
value: 54.790000000000006
- type: ndcg_at_5
value: 58.992
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.959
- type: precision_at_100
value: 1.174
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.380999999999999
- type: precision_at_3
value: 24.73
- type: precision_at_5
value: 17.299999999999997
- type: recall_at_1
value: 38.0
- type: recall_at_10
value: 83.22699999999999
- type: recall_at_100
value: 96.584
- type: recall_at_1000
value: 99.512
- type: recall_at_20
value: 89.291
- type: recall_at_3
value: 63.666
- type: recall_at_5
value: 73.27900000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: main_score
value: 87.366
- type: map_at_1
value: 69.95700000000001
- type: map_at_10
value: 83.55
- type: map_at_100
value: 84.196
- type: map_at_1000
value: 84.21600000000001
- type: map_at_20
value: 83.982
- type: map_at_3
value: 80.647
- type: map_at_5
value: 82.443
- type: mrr_at_1
value: 80.39
- type: mrr_at_10
value: 86.65646031746004
- type: mrr_at_100
value: 86.7852113210373
- type: mrr_at_1000
value: 86.78651118354796
- type: mrr_at_20
value: 86.75772838878498
- type: mrr_at_3
value: 85.67499999999971
- type: mrr_at_5
value: 86.33749999999962
- type: nauc_map_at_1000_diff1
value: 76.68189702770007
- type: nauc_map_at_1000_max
value: 36.19988239025682
- type: nauc_map_at_1000_std
value: -26.231691135645736
- type: nauc_map_at_100_diff1
value: 76.68832712120171
- type: nauc_map_at_100_max
value: 36.18627717337547
- type: nauc_map_at_100_std
value: -26.28243886166
- type: nauc_map_at_10_diff1
value: 76.88888516032657
- type: nauc_map_at_10_max
value: 35.69809861085124
- type: nauc_map_at_10_std
value: -27.859425473864224
- type: nauc_map_at_1_diff1
value: 79.5243725217315
- type: nauc_map_at_1_max
value: 27.092773841207002
- type: nauc_map_at_1_std
value: -26.223200911204543
- type: nauc_map_at_20_diff1
value: 76.74938996155176
- type: nauc_map_at_20_max
value: 36.07373781351406
- type: nauc_map_at_20_std
value: -26.891400098628015
- type: nauc_map_at_3_diff1
value: 77.29604745045076
- type: nauc_map_at_3_max
value: 33.11431059356283
- type: nauc_map_at_3_std
value: -29.555237195931085
- type: nauc_map_at_5_diff1
value: 77.14069217901078
- type: nauc_map_at_5_max
value: 34.68656073526487
- type: nauc_map_at_5_std
value: -28.945053669861508
- type: nauc_mrr_at_1000_diff1
value: 76.66087451567746
- type: nauc_mrr_at_1000_max
value: 38.78133177265328
- type: nauc_mrr_at_1000_std
value: -23.75726541774991
- type: nauc_mrr_at_100_diff1
value: 76.66117078261013
- type: nauc_mrr_at_100_max
value: 38.782533036423885
- type: nauc_mrr_at_100_std
value: -23.752587601473568
- type: nauc_mrr_at_10_diff1
value: 76.65866401411019
- type: nauc_mrr_at_10_max
value: 38.87950311049704
- type: nauc_mrr_at_10_std
value: -23.873660706680578
- type: nauc_mrr_at_1_diff1
value: 77.42633506487041
- type: nauc_mrr_at_1_max
value: 37.93973722217786
- type: nauc_mrr_at_1_std
value: -23.3984130771317
- type: nauc_mrr_at_20_diff1
value: 76.66210684923414
- type: nauc_mrr_at_20_max
value: 38.81293033048911
- type: nauc_mrr_at_20_std
value: -23.736590746133736
- type: nauc_mrr_at_3_diff1
value: 76.33711764736019
- type: nauc_mrr_at_3_max
value: 38.5659231830368
- type: nauc_mrr_at_3_std
value: -23.99588149124865
- type: nauc_mrr_at_5_diff1
value: 76.57123830226054
- type: nauc_mrr_at_5_max
value: 38.97947097392977
- type: nauc_mrr_at_5_std
value: -23.943668957974246
- type: nauc_ndcg_at_1000_diff1
value: 76.38447339050585
- type: nauc_ndcg_at_1000_max
value: 37.756822792877934
- type: nauc_ndcg_at_1000_std
value: -24.046995734357164
- type: nauc_ndcg_at_100_diff1
value: 76.44058018066822
- type: nauc_ndcg_at_100_max
value: 37.72948294169218
- type: nauc_ndcg_at_100_std
value: -24.083432140741795
- type: nauc_ndcg_at_10_diff1
value: 76.56246287923074
- type: nauc_ndcg_at_10_max
value: 37.0329253490553
- type: nauc_ndcg_at_10_std
value: -26.6495163705961
- type: nauc_ndcg_at_1_diff1
value: 77.4085129990432
- type: nauc_ndcg_at_1_max
value: 38.06139172214421
- type: nauc_ndcg_at_1_std
value: -23.656477126977386
- type: nauc_ndcg_at_20_diff1
value: 76.50192496743098
- type: nauc_ndcg_at_20_max
value: 37.51759311013985
- type: nauc_ndcg_at_20_std
value: -25.45517058360004
- type: nauc_ndcg_at_3_diff1
value: 75.94398494081794
- type: nauc_ndcg_at_3_max
value: 35.7666711547279
- type: nauc_ndcg_at_3_std
value: -26.866022682361578
- type: nauc_ndcg_at_5_diff1
value: 76.47334274088344
- type: nauc_ndcg_at_5_max
value: 36.40830331490731
- type: nauc_ndcg_at_5_std
value: -27.170121189572765
- type: nauc_precision_at_1000_diff1
value: -43.33672630765437
- type: nauc_precision_at_1000_max
value: -5.089751329149161
- type: nauc_precision_at_1000_std
value: 30.6241447847051
- type: nauc_precision_at_100_diff1
value: -42.736833035629864
- type: nauc_precision_at_100_max
value: -4.060198408346224
- type: nauc_precision_at_100_std
value: 29.807050266205344
- type: nauc_precision_at_10_diff1
value: -35.90810562245906
- type: nauc_precision_at_10_max
value: 1.1633204529249133
- type: nauc_precision_at_10_std
value: 20.129691203276018
- type: nauc_precision_at_1_diff1
value: 77.4085129990432
- type: nauc_precision_at_1_max
value: 38.06139172214421
- type: nauc_precision_at_1_std
value: -23.656477126977386
- type: nauc_precision_at_20_diff1
value: -40.2132286912738
- type: nauc_precision_at_20_max
value: -1.3004735030734194
- type: nauc_precision_at_20_std
value: 25.15612293757488
- type: nauc_precision_at_3_diff1
value: -13.873825299883904
- type: nauc_precision_at_3_max
value: 11.038689278907233
- type: nauc_precision_at_3_std
value: 5.4276449621706
- type: nauc_precision_at_5_diff1
value: -27.151668633894737
- type: nauc_precision_at_5_max
value: 5.795130010163115
- type: nauc_precision_at_5_std
value: 13.220722167587375
- type: nauc_recall_at_1000_diff1
value: 83.903950427863
- type: nauc_recall_at_1000_max
value: 37.82919000897223
- type: nauc_recall_at_1000_std
value: 70.65670846771707
- type: nauc_recall_at_100_diff1
value: 75.23306095335836
- type: nauc_recall_at_100_max
value: 37.54281648247423
- type: nauc_recall_at_100_std
value: 8.434289114377373
- type: nauc_recall_at_10_diff1
value: 72.7872912723047
- type: nauc_recall_at_10_max
value: 34.261519652104184
- type: nauc_recall_at_10_std
value: -34.60101950810808
- type: nauc_recall_at_1_diff1
value: 79.5243725217315
- type: nauc_recall_at_1_max
value: 27.092773841207002
- type: nauc_recall_at_1_std
value: -26.223200911204543
- type: nauc_recall_at_20_diff1
value: 72.8297963091964
- type: nauc_recall_at_20_max
value: 36.070220569670916
- type: nauc_recall_at_20_std
value: -27.20897179168245
- type: nauc_recall_at_3_diff1
value: 73.47456374650459
- type: nauc_recall_at_3_max
value: 29.901663407294816
- type: nauc_recall_at_3_std
value: -32.83329537040381
- type: nauc_recall_at_5_diff1
value: 73.05025750827126
- type: nauc_recall_at_5_max
value: 32.35733470860963
- type: nauc_recall_at_5_std
value: -34.32357558493091
- type: ndcg_at_1
value: 80.4
- type: ndcg_at_10
value: 87.366
- type: ndcg_at_100
value: 88.7
- type: ndcg_at_1000
value: 88.842
- type: ndcg_at_20
value: 88.11
- type: ndcg_at_3
value: 84.52499999999999
- type: ndcg_at_5
value: 86.047
- type: precision_at_1
value: 80.4
- type: precision_at_10
value: 13.235
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.156
- type: precision_at_20
value: 7.037
- type: precision_at_3
value: 36.9
- type: precision_at_5
value: 24.236
- type: recall_at_1
value: 69.95700000000001
- type: recall_at_10
value: 94.535
- type: recall_at_100
value: 99.164
- type: recall_at_1000
value: 99.855
- type: recall_at_20
value: 96.974
- type: recall_at_3
value: 86.33800000000001
- type: recall_at_5
value: 90.69
task:
type: Retrieval
- dataset:
config: default
name: MTEB SCIDOCS
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: main_score
value: 21.492
- type: map_at_1
value: 5.192
- type: map_at_10
value: 12.959000000000001
- type: map_at_100
value: 14.963999999999999
- type: map_at_1000
value: 15.261
- type: map_at_20
value: 13.988999999999999
- type: map_at_3
value: 9.235
- type: map_at_5
value: 11.042
- type: mrr_at_1
value: 25.5
- type: mrr_at_10
value: 36.37313492063491
- type: mrr_at_100
value: 37.36517957347626
- type: mrr_at_1000
value: 37.42538601073437
- type: mrr_at_20
value: 36.987896404421136
- type: mrr_at_3
value: 32.966666666666654
- type: mrr_at_5
value: 34.95166666666664
- type: nauc_map_at_1000_diff1
value: 13.635120934154395
- type: nauc_map_at_1000_max
value: 28.03542983005195
- type: nauc_map_at_1000_std
value: 17.07156940311778
- type: nauc_map_at_100_diff1
value: 13.59237295184475
- type: nauc_map_at_100_max
value: 27.992291365051237
- type: nauc_map_at_100_std
value: 16.926533467400464
- type: nauc_map_at_10_diff1
value: 14.149193235999993
- type: nauc_map_at_10_max
value: 26.520643811139305
- type: nauc_map_at_10_std
value: 13.168673602548925
- type: nauc_map_at_1_diff1
value: 20.096094508148465
- type: nauc_map_at_1_max
value: 17.41582245576302
- type: nauc_map_at_1_std
value: 5.771729007558897
- type: nauc_map_at_20_diff1
value: 13.977726400526427
- type: nauc_map_at_20_max
value: 27.2322235491895
- type: nauc_map_at_20_std
value: 14.972781677750435
- type: nauc_map_at_3_diff1
value: 17.371153027460355
- type: nauc_map_at_3_max
value: 24.457758503208254
- type: nauc_map_at_3_std
value: 7.719726821179824
- type: nauc_map_at_5_diff1
value: 14.600442843442574
- type: nauc_map_at_5_max
value: 25.899736370856296
- type: nauc_map_at_5_std
value: 10.125349354853359
- type: nauc_mrr_at_1000_diff1
value: 18.70342821390236
- type: nauc_mrr_at_1000_max
value: 23.365194520549114
- type: nauc_mrr_at_1000_std
value: 12.185114294903236
- type: nauc_mrr_at_100_diff1
value: 18.677858738015907
- type: nauc_mrr_at_100_max
value: 23.372641996726742
- type: nauc_mrr_at_100_std
value: 12.216130561991909
- type: nauc_mrr_at_10_diff1
value: 18.79094453090232
- type: nauc_mrr_at_10_max
value: 23.511686337006466
- type: nauc_mrr_at_10_std
value: 11.879716687008134
- type: nauc_mrr_at_1_diff1
value: 20.10455171810408
- type: nauc_mrr_at_1_max
value: 17.741566234315428
- type: nauc_mrr_at_1_std
value: 6.1676764583652215
- type: nauc_mrr_at_20_diff1
value: 18.70143648544655
- type: nauc_mrr_at_20_max
value: 23.45603239095019
- type: nauc_mrr_at_20_std
value: 12.244613576686202
- type: nauc_mrr_at_3_diff1
value: 18.894662528857374
- type: nauc_mrr_at_3_max
value: 23.3739038101588
- type: nauc_mrr_at_3_std
value: 10.4709044796543
- type: nauc_mrr_at_5_diff1
value: 18.877786065095563
- type: nauc_mrr_at_5_max
value: 23.78061081203872
- type: nauc_mrr_at_5_std
value: 11.847882917869622
- type: nauc_ndcg_at_1000_diff1
value: 13.99159027398115
- type: nauc_ndcg_at_1000_max
value: 29.44766808611483
- type: nauc_ndcg_at_1000_std
value: 24.289749574699915
- type: nauc_ndcg_at_100_diff1
value: 13.164020363258746
- type: nauc_ndcg_at_100_max
value: 29.642442997167723
- type: nauc_ndcg_at_100_std
value: 23.761764515453866
- type: nauc_ndcg_at_10_diff1
value: 14.839883268638546
- type: nauc_ndcg_at_10_max
value: 27.21043708455449
- type: nauc_ndcg_at_10_std
value: 15.56110419291775
- type: nauc_ndcg_at_1_diff1
value: 20.10455171810408
- type: nauc_ndcg_at_1_max
value: 17.741566234315428
- type: nauc_ndcg_at_1_std
value: 6.1676764583652215
- type: nauc_ndcg_at_20_diff1
value: 14.27998110295395
- type: nauc_ndcg_at_20_max
value: 28.2492026337839
- type: nauc_ndcg_at_20_std
value: 18.822356982979105
- type: nauc_ndcg_at_3_diff1
value: 17.659263157535445
- type: nauc_ndcg_at_3_max
value: 25.416706421591396
- type: nauc_ndcg_at_3_std
value: 9.650689638152636
- type: nauc_ndcg_at_5_diff1
value: 15.38459833918123
- type: nauc_ndcg_at_5_max
value: 26.92495519416969
- type: nauc_ndcg_at_5_std
value: 12.71017696809276
- type: nauc_precision_at_1000_diff1
value: 6.128490135458364
- type: nauc_precision_at_1000_max
value: 23.52693893261883
- type: nauc_precision_at_1000_std
value: 36.280432732819925
- type: nauc_precision_at_100_diff1
value: 5.306163791220436
- type: nauc_precision_at_100_max
value: 27.67851033239246
- type: nauc_precision_at_100_std
value: 34.29821573752515
- type: nauc_precision_at_10_diff1
value: 10.829686435425472
- type: nauc_precision_at_10_max
value: 27.201648684015318
- type: nauc_precision_at_10_std
value: 19.376999508233254
- type: nauc_precision_at_1_diff1
value: 20.10455171810408
- type: nauc_precision_at_1_max
value: 17.741566234315428
- type: nauc_precision_at_1_std
value: 6.1676764583652215
- type: nauc_precision_at_20_diff1
value: 9.416169626702048
- type: nauc_precision_at_20_max
value: 27.65257998670333
- type: nauc_precision_at_20_std
value: 24.761868509805826
- type: nauc_precision_at_3_diff1
value: 16.666456902017348
- type: nauc_precision_at_3_max
value: 27.9969730961105
- type: nauc_precision_at_3_std
value: 10.991562741393231
- type: nauc_precision_at_5_diff1
value: 12.26205064462843
- type: nauc_precision_at_5_max
value: 29.083848730874095
- type: nauc_precision_at_5_std
value: 15.66630836555747
- type: nauc_recall_at_1000_diff1
value: 5.600277836894063
- type: nauc_recall_at_1000_max
value: 23.228705161815526
- type: nauc_recall_at_1000_std
value: 36.822431061799485
- type: nauc_recall_at_100_diff1
value: 4.991781244867178
- type: nauc_recall_at_100_max
value: 27.70095625483475
- type: nauc_recall_at_100_std
value: 34.67168431597854
- type: nauc_recall_at_10_diff1
value: 10.580860425931972
- type: nauc_recall_at_10_max
value: 27.145829414223666
- type: nauc_recall_at_10_std
value: 19.330630157067382
- type: nauc_recall_at_1_diff1
value: 20.096094508148465
- type: nauc_recall_at_1_max
value: 17.41582245576302
- type: nauc_recall_at_1_std
value: 5.771729007558897
- type: nauc_recall_at_20_diff1
value: 9.06945331260344
- type: nauc_recall_at_20_max
value: 27.56725251066482
- type: nauc_recall_at_20_std
value: 24.77644509886098
- type: nauc_recall_at_3_diff1
value: 16.660507676429322
- type: nauc_recall_at_3_max
value: 27.816546386536434
- type: nauc_recall_at_3_std
value: 10.687824478247007
- type: nauc_recall_at_5_diff1
value: 11.992514446369388
- type: nauc_recall_at_5_max
value: 28.789031176671948
- type: nauc_recall_at_5_std
value: 15.422118990090805
- type: ndcg_at_1
value: 25.5
- type: ndcg_at_10
value: 21.492
- type: ndcg_at_100
value: 29.022
- type: ndcg_at_1000
value: 34.298
- type: ndcg_at_20
value: 24.237000000000002
- type: ndcg_at_3
value: 20.392
- type: ndcg_at_5
value: 17.801000000000002
- type: precision_at_1
value: 25.5
- type: precision_at_10
value: 11.09
- type: precision_at_100
value: 2.1919999999999997
- type: precision_at_1000
value: 0.346
- type: precision_at_20
value: 7.135
- type: precision_at_3
value: 18.933
- type: precision_at_5
value: 15.52
- type: recall_at_1
value: 5.192
- type: recall_at_10
value: 22.512999999999998
- type: recall_at_100
value: 44.505
- type: recall_at_1000
value: 70.267
- type: recall_at_20
value: 28.965000000000003
- type: recall_at_3
value: 11.522
- type: recall_at_5
value: 15.751999999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB SciFact
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: main_score
value: 71.586
- type: map_at_1
value: 56.760999999999996
- type: map_at_10
value: 66.893
- type: map_at_100
value: 67.42
- type: map_at_1000
value: 67.44200000000001
- type: map_at_20
value: 67.232
- type: map_at_3
value: 64.193
- type: map_at_5
value: 65.73400000000001
- type: mrr_at_1
value: 60.0
- type: mrr_at_10
value: 68.20383597883595
- type: mrr_at_100
value: 68.58867453733343
- type: mrr_at_1000
value: 68.61117469977329
- type: mrr_at_20
value: 68.43973740684265
- type: mrr_at_3
value: 66.11111111111111
- type: mrr_at_5
value: 67.44444444444446
- type: nauc_map_at_1000_diff1
value: 72.66688261123035
- type: nauc_map_at_1000_max
value: 61.02926282006283
- type: nauc_map_at_1000_std
value: 11.084549829740526
- type: nauc_map_at_100_diff1
value: 72.66226192320828
- type: nauc_map_at_100_max
value: 61.04393223108811
- type: nauc_map_at_100_std
value: 11.101529343291695
- type: nauc_map_at_10_diff1
value: 72.66732266693091
- type: nauc_map_at_10_max
value: 61.24124296311832
- type: nauc_map_at_10_std
value: 10.91179451961794
- type: nauc_map_at_1_diff1
value: 74.2356464256346
- type: nauc_map_at_1_max
value: 54.06962758957632
- type: nauc_map_at_1_std
value: 0.8037891907963532
- type: nauc_map_at_20_diff1
value: 72.65198594061253
- type: nauc_map_at_20_max
value: 61.130159351448185
- type: nauc_map_at_20_std
value: 11.2246899245522
- type: nauc_map_at_3_diff1
value: 72.78578673303954
- type: nauc_map_at_3_max
value: 59.19073262936321
- type: nauc_map_at_3_std
value: 8.460301560522968
- type: nauc_map_at_5_diff1
value: 72.55004168261968
- type: nauc_map_at_5_max
value: 59.75181935082357
- type: nauc_map_at_5_std
value: 9.440299527201889
- type: nauc_mrr_at_1000_diff1
value: 72.82720348470325
- type: nauc_mrr_at_1000_max
value: 62.344231223741446
- type: nauc_mrr_at_1000_std
value: 12.60196558488974
- type: nauc_mrr_at_100_diff1
value: 72.82236849255094
- type: nauc_mrr_at_100_max
value: 62.35799491393125
- type: nauc_mrr_at_100_std
value: 12.617900773655673
- type: nauc_mrr_at_10_diff1
value: 72.7722847495086
- type: nauc_mrr_at_10_max
value: 62.66642401155435
- type: nauc_mrr_at_10_std
value: 12.906381237738746
- type: nauc_mrr_at_1_diff1
value: 74.71208073612343
- type: nauc_mrr_at_1_max
value: 59.50430394775893
- type: nauc_mrr_at_1_std
value: 8.129514198080512
- type: nauc_mrr_at_20_diff1
value: 72.78312367361772
- type: nauc_mrr_at_20_max
value: 62.421122493761885
- type: nauc_mrr_at_20_std
value: 12.693437522498588
- type: nauc_mrr_at_3_diff1
value: 73.50670156385345
- type: nauc_mrr_at_3_max
value: 62.01717537699209
- type: nauc_mrr_at_3_std
value: 11.926548252191182
- type: nauc_mrr_at_5_diff1
value: 72.62204028549876
- type: nauc_mrr_at_5_max
value: 62.319358766312085
- type: nauc_mrr_at_5_std
value: 13.081257923284342
- type: nauc_ndcg_at_1000_diff1
value: 72.29960539074736
- type: nauc_ndcg_at_1000_max
value: 62.75096959221402
- type: nauc_ndcg_at_1000_std
value: 13.81528462505362
- type: nauc_ndcg_at_100_diff1
value: 72.19985782073529
- type: nauc_ndcg_at_100_max
value: 63.18837705326287
- type: nauc_ndcg_at_100_std
value: 14.506479655117138
- type: nauc_ndcg_at_10_diff1
value: 71.85759847832983
- type: nauc_ndcg_at_10_max
value: 64.150996056865
- type: nauc_ndcg_at_10_std
value: 14.580606901634278
- type: nauc_ndcg_at_1_diff1
value: 74.71208073612343
- type: nauc_ndcg_at_1_max
value: 59.50430394775893
- type: nauc_ndcg_at_1_std
value: 8.129514198080512
- type: nauc_ndcg_at_20_diff1
value: 71.80987178228351
- type: nauc_ndcg_at_20_max
value: 63.56269460865743
- type: nauc_ndcg_at_20_std
value: 15.024978004625922
- type: nauc_ndcg_at_3_diff1
value: 72.35095651602592
- type: nauc_ndcg_at_3_max
value: 61.60548011855679
- type: nauc_ndcg_at_3_std
value: 12.048248788835263
- type: nauc_ndcg_at_5_diff1
value: 71.48615621881864
- type: nauc_ndcg_at_5_max
value: 61.72870035979784
- type: nauc_ndcg_at_5_std
value: 12.83048357446691
- type: nauc_precision_at_1000_diff1
value: -14.743011420972
- type: nauc_precision_at_1000_max
value: 19.281995763080158
- type: nauc_precision_at_1000_std
value: 49.6140660398164
- type: nauc_precision_at_100_diff1
value: 0.11278174806205563
- type: nauc_precision_at_100_max
value: 29.704511820077332
- type: nauc_precision_at_100_std
value: 47.84916954122579
- type: nauc_precision_at_10_diff1
value: 20.498227967235728
- type: nauc_precision_at_10_max
value: 47.883119365891595
- type: nauc_precision_at_10_std
value: 45.182178693450595
- type: nauc_precision_at_1_diff1
value: 74.71208073612343
- type: nauc_precision_at_1_max
value: 59.50430394775893
- type: nauc_precision_at_1_std
value: 8.129514198080512
- type: nauc_precision_at_20_diff1
value: 12.551737222341455
- type: nauc_precision_at_20_max
value: 40.618899501225634
- type: nauc_precision_at_20_std
value: 48.5598454249067
- type: nauc_precision_at_3_diff1
value: 47.67720764601145
- type: nauc_precision_at_3_max
value: 56.50632017305064
- type: nauc_precision_at_3_std
value: 31.14175140162157
- type: nauc_precision_at_5_diff1
value: 35.10058622792819
- type: nauc_precision_at_5_max
value: 51.88948872657981
- type: nauc_precision_at_5_std
value: 37.62796957461928
- type: nauc_recall_at_1000_diff1
value: 79.57516339869238
- type: nauc_recall_at_1000_max
value: 86.11111111111035
- type: nauc_recall_at_1000_std
value: 79.57516339869238
- type: nauc_recall_at_100_diff1
value: 70.50859559510081
- type: nauc_recall_at_100_max
value: 79.17009941231396
- type: nauc_recall_at_100_std
value: 44.32910419069595
- type: nauc_recall_at_10_diff1
value: 66.16118569361245
- type: nauc_recall_at_10_max
value: 74.73542948302286
- type: nauc_recall_at_10_std
value: 27.680330939810037
- type: nauc_recall_at_1_diff1
value: 74.2356464256346
- type: nauc_recall_at_1_max
value: 54.06962758957632
- type: nauc_recall_at_1_std
value: 0.8037891907963532
- type: nauc_recall_at_20_diff1
value: 65.4748436545527
- type: nauc_recall_at_20_max
value: 73.81532199081235
- type: nauc_recall_at_20_std
value: 33.59324708196253
- type: nauc_recall_at_3_diff1
value: 68.83194804473622
- type: nauc_recall_at_3_max
value: 61.77722610439669
- type: nauc_recall_at_3_std
value: 13.984923756556714
- type: nauc_recall_at_5_diff1
value: 65.51467417209523
- type: nauc_recall_at_5_max
value: 64.08276291427661
- type: nauc_recall_at_5_std
value: 19.976472037847167
- type: ndcg_at_1
value: 60.0
- type: ndcg_at_10
value: 71.586
- type: ndcg_at_100
value: 73.76899999999999
- type: ndcg_at_1000
value: 74.386
- type: ndcg_at_20
value: 72.612
- type: ndcg_at_3
value: 66.944
- type: ndcg_at_5
value: 69.333
- type: precision_at_1
value: 60.0
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 5.033
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 17.4
- type: recall_at_1
value: 56.760999999999996
- type: recall_at_10
value: 84.589
- type: recall_at_100
value: 94.333
- type: recall_at_1000
value: 99.333
- type: recall_at_20
value: 88.43299999999999
- type: recall_at_3
value: 72.10600000000001
- type: recall_at_5
value: 78.194
task:
type: Retrieval
- dataset:
config: default
name: MTEB TRECCOVID
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: main_score
value: 84.60600000000001
- type: map_at_1
value: 0.257
- type: map_at_10
value: 2.196
- type: map_at_100
value: 13.252
- type: map_at_1000
value: 31.473000000000003
- type: map_at_20
value: 4.023000000000001
- type: map_at_3
value: 0.722
- type: map_at_5
value: 1.146
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: mrr_at_20
value: 97.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: nauc_map_at_1000_diff1
value: -30.674816554207062
- type: nauc_map_at_1000_max
value: 53.18598689657068
- type: nauc_map_at_1000_std
value: 78.88325309469121
- type: nauc_map_at_100_diff1
value: -17.6877824653978
- type: nauc_map_at_100_max
value: 19.584159765315658
- type: nauc_map_at_100_std
value: 48.051154190992726
- type: nauc_map_at_10_diff1
value: 20.076631089898626
- type: nauc_map_at_10_max
value: -8.642556160185636
- type: nauc_map_at_10_std
value: -5.768698617334298
- type: nauc_map_at_1_diff1
value: 27.342260509653798
- type: nauc_map_at_1_max
value: -23.400451210297994
- type: nauc_map_at_1_std
value: -21.152006353733853
- type: nauc_map_at_20_diff1
value: 8.019321726240506
- type: nauc_map_at_20_max
value: -1.4826378210544222
- type: nauc_map_at_20_std
value: 5.698208117745366
- type: nauc_map_at_3_diff1
value: 32.073377946749446
- type: nauc_map_at_3_max
value: -13.099353983204654
- type: nauc_map_at_3_std
value: -15.36319127398037
- type: nauc_map_at_5_diff1
value: 22.500045815797876
- type: nauc_map_at_5_max
value: -8.548135411428023
- type: nauc_map_at_5_std
value: -8.547850460331334
- type: nauc_mrr_at_1000_diff1
value: -6.022408963585526
- type: nauc_mrr_at_1000_max
value: 4.481792717087155
- type: nauc_mrr_at_1000_std
value: 51.6962340491753
- type: nauc_mrr_at_100_diff1
value: -6.022408963585526
- type: nauc_mrr_at_100_max
value: 4.481792717087155
- type: nauc_mrr_at_100_std
value: 51.6962340491753
- type: nauc_mrr_at_10_diff1
value: -6.022408963585526
- type: nauc_mrr_at_10_max
value: 4.481792717087155
- type: nauc_mrr_at_10_std
value: 51.6962340491753
- type: nauc_mrr_at_1_diff1
value: -6.022408963585076
- type: nauc_mrr_at_1_max
value: 4.481792717087146
- type: nauc_mrr_at_1_std
value: 51.69623404917518
- type: nauc_mrr_at_20_diff1
value: -6.022408963585526
- type: nauc_mrr_at_20_max
value: 4.481792717087155
- type: nauc_mrr_at_20_std
value: 51.6962340491753
- type: nauc_mrr_at_3_diff1
value: -6.022408963585526
- type: nauc_mrr_at_3_max
value: 4.481792717087155
- type: nauc_mrr_at_3_std
value: 51.6962340491753
- type: nauc_mrr_at_5_diff1
value: -6.022408963585526
- type: nauc_mrr_at_5_max
value: 4.481792717087155
- type: nauc_mrr_at_5_std
value: 51.6962340491753
- type: nauc_ndcg_at_1000_diff1
value: -20.79697283984295
- type: nauc_ndcg_at_1000_max
value: 52.97671908009218
- type: nauc_ndcg_at_1000_std
value: 75.43907707019758
- type: nauc_ndcg_at_100_diff1
value: -38.620752706946455
- type: nauc_ndcg_at_100_max
value: 49.41307462381511
- type: nauc_ndcg_at_100_std
value: 81.33299379244252
- type: nauc_ndcg_at_10_diff1
value: -18.611906363037356
- type: nauc_ndcg_at_10_max
value: 44.20544651664479
- type: nauc_ndcg_at_10_std
value: 61.322552829935816
- type: nauc_ndcg_at_1_diff1
value: 18.625935567849073
- type: nauc_ndcg_at_1_max
value: -10.104132769280879
- type: nauc_ndcg_at_1_std
value: 22.449560689879743
- type: nauc_ndcg_at_20_diff1
value: -30.61130208138771
- type: nauc_ndcg_at_20_max
value: 52.68851710375231
- type: nauc_ndcg_at_20_std
value: 69.72357683382992
- type: nauc_ndcg_at_3_diff1
value: 5.695394821691213
- type: nauc_ndcg_at_3_max
value: 37.909122367102135
- type: nauc_ndcg_at_3_std
value: 46.2366603255159
- type: nauc_ndcg_at_5_diff1
value: -15.273067832464731
- type: nauc_ndcg_at_5_max
value: 49.7054639475091
- type: nauc_ndcg_at_5_std
value: 58.83754007826166
- type: nauc_precision_at_1000_diff1
value: -31.565302588492035
- type: nauc_precision_at_1000_max
value: 52.56214379514724
- type: nauc_precision_at_1000_std
value: 53.40618234326055
- type: nauc_precision_at_100_diff1
value: -44.67273120709088
- type: nauc_precision_at_100_max
value: 48.30381155522576
- type: nauc_precision_at_100_std
value: 82.1984661602578
- type: nauc_precision_at_10_diff1
value: -24.737383556860145
- type: nauc_precision_at_10_max
value: 52.816815002878556
- type: nauc_precision_at_10_std
value: 67.99052410030845
- type: nauc_precision_at_1_diff1
value: -6.022408963585076
- type: nauc_precision_at_1_max
value: 4.481792717087146
- type: nauc_precision_at_1_std
value: 51.69623404917518
- type: nauc_precision_at_20_diff1
value: -40.23628054967093
- type: nauc_precision_at_20_max
value: 56.980056980057014
- type: nauc_precision_at_20_std
value: 76.60976777785895
- type: nauc_precision_at_3_diff1
value: -4.661784068466279
- type: nauc_precision_at_3_max
value: 59.052007899934125
- type: nauc_precision_at_3_std
value: 58.187952600394986
- type: nauc_precision_at_5_diff1
value: -38.11848143512736
- type: nauc_precision_at_5_max
value: 68.6149353358365
- type: nauc_precision_at_5_std
value: 73.55652899457661
- type: nauc_recall_at_1000_diff1
value: -14.886527444436345
- type: nauc_recall_at_1000_max
value: 48.07492302795808
- type: nauc_recall_at_1000_std
value: 65.05623212485906
- type: nauc_recall_at_100_diff1
value: -8.148385729388195
- type: nauc_recall_at_100_max
value: 8.041615364614533
- type: nauc_recall_at_100_std
value: 33.77187914574611
- type: nauc_recall_at_10_diff1
value: 24.333628413035942
- type: nauc_recall_at_10_max
value: -14.577877145192078
- type: nauc_recall_at_10_std
value: -12.131819145098557
- type: nauc_recall_at_1_diff1
value: 27.342260509653798
- type: nauc_recall_at_1_max
value: -23.400451210297994
- type: nauc_recall_at_1_std
value: -21.152006353733853
- type: nauc_recall_at_20_diff1
value: 13.695556376785564
- type: nauc_recall_at_20_max
value: -8.872009346408264
- type: nauc_recall_at_20_std
value: -3.163199444247112
- type: nauc_recall_at_3_diff1
value: 32.00442538217753
- type: nauc_recall_at_3_max
value: -15.159737942664552
- type: nauc_recall_at_3_std
value: -17.530833132440645
- type: nauc_recall_at_5_diff1
value: 22.64740552912405
- type: nauc_recall_at_5_max
value: -12.947090597010414
- type: nauc_recall_at_5_std
value: -12.914478822476807
- type: ndcg_at_1
value: 88.0
- type: ndcg_at_10
value: 84.60600000000001
- type: ndcg_at_100
value: 64.31700000000001
- type: ndcg_at_1000
value: 56.40500000000001
- type: ndcg_at_20
value: 80.561
- type: ndcg_at_3
value: 87.87700000000001
- type: ndcg_at_5
value: 86.641
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 88.2
- type: precision_at_100
value: 65.9
- type: precision_at_1000
value: 25.019999999999996
- type: precision_at_20
value: 84.7
- type: precision_at_3
value: 92.0
- type: precision_at_5
value: 90.0
- type: recall_at_1
value: 0.257
- type: recall_at_10
value: 2.338
- type: recall_at_100
value: 15.831999999999999
- type: recall_at_1000
value: 52.519000000000005
- type: recall_at_20
value: 4.367
- type: recall_at_3
value: 0.74
- type: recall_at_5
value: 1.196
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: main_score
value: 31.426
- type: map_at_1
value: 3.4709999999999996
- type: map_at_10
value: 13.236999999999998
- type: map_at_100
value: 19.521
- type: map_at_1000
value: 21.224
- type: map_at_20
value: 15.626000000000001
- type: map_at_3
value: 7.152
- type: map_at_5
value: 9.914000000000001
- type: mrr_at_1
value: 44.89795918367347
- type: mrr_at_10
value: 57.54373177842565
- type: mrr_at_100
value: 57.855267710139536
- type: mrr_at_1000
value: 57.855267710139536
- type: mrr_at_20
value: 57.70071764969724
- type: mrr_at_3
value: 52.72108843537414
- type: mrr_at_5
value: 55.06802721088435
- type: nauc_map_at_1000_diff1
value: 21.148857552115558
- type: nauc_map_at_1000_max
value: 2.0837572569021323
- type: nauc_map_at_1000_std
value: 3.203419709665347
- type: nauc_map_at_100_diff1
value: 21.383778167597878
- type: nauc_map_at_100_max
value: 0.965767943155967
- type: nauc_map_at_100_std
value: 0.3949924961020957
- type: nauc_map_at_10_diff1
value: 27.178555638086394
- type: nauc_map_at_10_max
value: 4.480675175857958
- type: nauc_map_at_10_std
value: -13.69553539513878
- type: nauc_map_at_1_diff1
value: 27.63901823865334
- type: nauc_map_at_1_max
value: -18.6387233237763
- type: nauc_map_at_1_std
value: -27.02164241863646
- type: nauc_map_at_20_diff1
value: 23.892104752374888
- type: nauc_map_at_20_max
value: 3.5343136621362348
- type: nauc_map_at_20_std
value: -8.765101188860816
- type: nauc_map_at_3_diff1
value: 22.065793929837493
- type: nauc_map_at_3_max
value: 0.8063396680860568
- type: nauc_map_at_3_std
value: -20.404849396621824
- type: nauc_map_at_5_diff1
value: 22.66626080580714
- type: nauc_map_at_5_max
value: 5.423340658352383
- type: nauc_map_at_5_std
value: -18.31523779843455
- type: nauc_mrr_at_1000_diff1
value: 30.520722269282665
- type: nauc_mrr_at_1000_max
value: -16.644959497742267
- type: nauc_mrr_at_1000_std
value: -16.3824126273053
- type: nauc_mrr_at_100_diff1
value: 30.520722269282665
- type: nauc_mrr_at_100_max
value: -16.644959497742267
- type: nauc_mrr_at_100_std
value: -16.3824126273053
- type: nauc_mrr_at_10_diff1
value: 30.428248939332974
- type: nauc_mrr_at_10_max
value: -16.300183919261585
- type: nauc_mrr_at_10_std
value: -15.404823235836309
- type: nauc_mrr_at_1_diff1
value: 27.041346572613474
- type: nauc_mrr_at_1_max
value: -23.181309312755804
- type: nauc_mrr_at_1_std
value: -24.33076726484014
- type: nauc_mrr_at_20_diff1
value: 30.676558567379303
- type: nauc_mrr_at_20_max
value: -16.914268763031416
- type: nauc_mrr_at_20_std
value: -15.77742854976336
- type: nauc_mrr_at_3_diff1
value: 31.718457109787096
- type: nauc_mrr_at_3_max
value: -15.508391132202235
- type: nauc_mrr_at_3_std
value: -20.33229438349494
- type: nauc_mrr_at_5_diff1
value: 28.73798376227693
- type: nauc_mrr_at_5_max
value: -16.086295031060196
- type: nauc_mrr_at_5_std
value: -15.644604635769321
- type: nauc_ndcg_at_1000_diff1
value: 22.158724660189606
- type: nauc_ndcg_at_1000_max
value: -3.1755686809941475
- type: nauc_ndcg_at_1000_std
value: 19.258386224159075
- type: nauc_ndcg_at_100_diff1
value: 21.83846748649288
- type: nauc_ndcg_at_100_max
value: -10.939957598756036
- type: nauc_ndcg_at_100_std
value: 14.729678880436623
- type: nauc_ndcg_at_10_diff1
value: 26.944882726098424
- type: nauc_ndcg_at_10_max
value: -3.5176483833346617
- type: nauc_ndcg_at_10_std
value: -5.400606773697211
- type: nauc_ndcg_at_1_diff1
value: 26.649410985172985
- type: nauc_ndcg_at_1_max
value: -18.806716526067493
- type: nauc_ndcg_at_1_std
value: -25.100244999343506
- type: nauc_ndcg_at_20_diff1
value: 24.860266153648315
- type: nauc_ndcg_at_20_max
value: -7.521401821712892
- type: nauc_ndcg_at_20_std
value: -3.3696577425983003
- type: nauc_ndcg_at_3_diff1
value: 23.9933326962406
- type: nauc_ndcg_at_3_max
value: -0.4609479344284664
- type: nauc_ndcg_at_3_std
value: -15.176459166869897
- type: nauc_ndcg_at_5_diff1
value: 22.50595978713142
- type: nauc_ndcg_at_5_max
value: -2.1093870656000857
- type: nauc_ndcg_at_5_std
value: -12.732197425528257
- type: nauc_precision_at_1000_diff1
value: -20.335120385950024
- type: nauc_precision_at_1000_max
value: 26.95109729939765
- type: nauc_precision_at_1000_std
value: 29.981685890622117
- type: nauc_precision_at_100_diff1
value: -2.782114329320704
- type: nauc_precision_at_100_max
value: 2.9489322002048604
- type: nauc_precision_at_100_std
value: 67.3074073674319
- type: nauc_precision_at_10_diff1
value: 21.385177180383383
- type: nauc_precision_at_10_max
value: -2.4696365259422817
- type: nauc_precision_at_10_std
value: 14.469784299536673
- type: nauc_precision_at_1_diff1
value: 27.041346572613474
- type: nauc_precision_at_1_max
value: -23.181309312755804
- type: nauc_precision_at_1_std
value: -24.33076726484014
- type: nauc_precision_at_20_diff1
value: 11.993846579997673
- type: nauc_precision_at_20_max
value: -2.4792189693296227
- type: nauc_precision_at_20_std
value: 28.581394687807745
- type: nauc_precision_at_3_diff1
value: 20.70568446328836
- type: nauc_precision_at_3_max
value: 0.37326398699875984
- type: nauc_precision_at_3_std
value: -12.983918676694389
- type: nauc_precision_at_5_diff1
value: 19.47466335828124
- type: nauc_precision_at_5_max
value: -1.8921617684385994
- type: nauc_precision_at_5_std
value: -6.533875294402164
- type: nauc_recall_at_1000_diff1
value: 7.611201305723156
- type: nauc_recall_at_1000_max
value: 5.6416194035820055
- type: nauc_recall_at_1000_std
value: 61.695208644278
- type: nauc_recall_at_100_diff1
value: 10.0183258158735
- type: nauc_recall_at_100_max
value: -10.950612455698973
- type: nauc_recall_at_100_std
value: 33.06069987640471
- type: nauc_recall_at_10_diff1
value: 24.738210305731535
- type: nauc_recall_at_10_max
value: -2.6592454032071546
- type: nauc_recall_at_10_std
value: -4.83987517793115
- type: nauc_recall_at_1_diff1
value: 27.63901823865334
- type: nauc_recall_at_1_max
value: -18.6387233237763
- type: nauc_recall_at_1_std
value: -27.02164241863646
- type: nauc_recall_at_20_diff1
value: 17.79601177409034
- type: nauc_recall_at_20_max
value: -6.681637093148051
- type: nauc_recall_at_20_std
value: 3.369193919932238
- type: nauc_recall_at_3_diff1
value: 24.9589431081204
- type: nauc_recall_at_3_max
value: 2.4783640980500232
- type: nauc_recall_at_3_std
value: -19.567415651090702
- type: nauc_recall_at_5_diff1
value: 23.71803410135437
- type: nauc_recall_at_5_max
value: 1.6294309357641652
- type: nauc_recall_at_5_std
value: -15.365511906408983
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 31.426
- type: ndcg_at_100
value: 41.558
- type: ndcg_at_1000
value: 53.042
- type: ndcg_at_20
value: 31.108999999999998
- type: ndcg_at_3
value: 35.518
- type: ndcg_at_5
value: 33.235
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 27.551
- type: precision_at_100
value: 8.204
- type: precision_at_1000
value: 1.582
- type: precision_at_20
value: 19.796
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.4709999999999996
- type: recall_at_10
value: 19.563
- type: recall_at_100
value: 50.3
- type: recall_at_1000
value: 85.13199999999999
- type: recall_at_20
value: 26.738
- type: recall_at_3
value: 7.8420000000000005
- type: recall_at_5
value: 11.994
task:
type: Retrieval
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 68.29850746268657
- type: ap
value: 30.109785890841966
- type: ap_weighted
value: 30.109785890841966
- type: f1
value: 61.76875915202924
- type: f1_weighted
value: 71.32073190458556
- type: main_score
value: 68.29850746268657
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification (default)
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 90.3068
- type: ap
value: 86.17914339624038
- type: ap_weighted
value: 86.17914339624038
- type: f1
value: 90.29716826358077
- type: f1_weighted
value: 90.29716826358077
- type: main_score
value: 90.3068
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 46.272000000000006
- type: f1
value: 45.57042543386915
- type: f1_weighted
value: 45.57042543386915
- type: main_score
value: 46.272000000000006
task:
type: Classification
- dataset:
config: default
name: MTEB ArxivClusteringP2P (default)
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: main_score
value: 44.9469238081379
- type: v_measure
value: 44.9469238081379
- type: v_measure_std
value: 13.26811262671461
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S (default)
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: main_score
value: 34.12071448053325
- type: v_measure
value: 34.12071448053325
- type: v_measure_std
value: 13.7019879046405
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions (default)
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: main_score
value: 61.597667288657846
- type: map
value: 61.597667288657846
- type: mrr
value: 75.57940904893813
- type: nAUC_map_diff1
value: 8.745172077340095
- type: nAUC_map_max
value: 20.114863024035493
- type: nAUC_map_std
value: 15.991351189572192
- type: nAUC_mrr_diff1
value: 20.781369244159983
- type: nAUC_mrr_max
value: 30.78542570228559
- type: nAUC_mrr_std
value: 19.861484857303676
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES (default)
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_pearson
value: 88.55587996301419
- type: cosine_spearman
value: 86.40317357420093
- type: euclidean_pearson
value: 86.93771958250231
- type: euclidean_spearman
value: 86.40317357420093
- type: main_score
value: 86.40317357420093
- type: manhattan_pearson
value: 86.92196577117366
- type: manhattan_spearman
value: 85.79834051556095
- type: pearson
value: 88.55587996301419
- type: spearman
value: 86.40317357420093
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification (default)
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 80.0064935064935
- type: f1
value: 79.29524254086299
- type: f1_weighted
value: 79.295242540863
- type: main_score
value: 80.0064935064935
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P (default)
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: main_score
value: 35.27186813341181
- type: v_measure
value: 35.27186813341181
- type: v_measure_std
value: 0.8621482145872432
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S (default)
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: main_score
value: 28.411805064852295
- type: v_measure
value: 28.411805064852295
- type: v_measure_std
value: 0.7194290078011281
task:
type: Clustering
- dataset:
config: default
name: MTEB EmotionClassification (default)
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 43.675
- type: f1
value: 40.15061931375577
- type: f1_weighted
value: 45.714186572727066
- type: main_score
value: 43.675
task:
type: Classification
- dataset:
config: default
name: MTEB ImdbClassification (default)
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 84.35640000000001
- type: ap
value: 79.07507736685174
- type: ap_weighted
value: 79.07507736685174
- type: f1
value: 84.32288494833531
- type: f1_weighted
value: 84.32288494833531
- type: main_score
value: 84.35640000000001
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 91.35658914728684
- type: f1
value: 90.86877537911086
- type: f1_weighted
value: 91.3282092774443
- type: main_score
value: 91.35658914728684
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 60.63611491108071
- type: f1
value: 42.78886482112741
- type: f1_weighted
value: 63.44208631840539
- type: main_score
value: 60.63611491108071
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 66.68796234028245
- type: f1
value: 64.44940791000278
- type: f1_weighted
value: 65.77554417406792
- type: main_score
value: 66.68796234028245
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 73.0598520511096
- type: f1
value: 72.14267273884774
- type: f1_weighted
value: 72.93345180137516
- type: main_score
value: 73.0598520511096
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P (default)
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: main_score
value: 31.143081341699606
- type: v_measure
value: 31.143081341699606
- type: v_measure_std
value: 1.5578716347076906
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S (default)
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: main_score
value: 27.010818869829556
- type: v_measure
value: 27.010818869829556
- type: v_measure_std
value: 1.1771554540819378
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking (default)
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
split: test
type: mteb/mind_small
metrics:
- type: main_score
value: 30.20503776754942
- type: map
value: 30.20503776754942
- type: mrr
value: 31.076636002733437
- type: nAUC_map_diff1
value: 7.290568655287842
- type: nAUC_map_max
value: -21.381599355932945
- type: nAUC_map_std
value: -7.709920607543168
- type: nAUC_mrr_diff1
value: 7.558397329284913
- type: nAUC_mrr_max
value: -15.981397186427607
- type: nAUC_mrr_std
value: -4.870495243168834
task:
type: Reranking
- dataset:
config: default
name: MTEB RedditClustering (default)
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: main_score
value: 51.85893476633338
- type: v_measure
value: 51.85893476633338
- type: v_measure_std
value: 4.704770139385852
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P (default)
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: main_score
value: 61.8124222918822
- type: v_measure
value: 61.8124222918822
- type: v_measure_std
value: 11.994472578100165
task:
type: Clustering
- dataset:
config: default
name: MTEB SICK-R (default)
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_pearson
value: 77.63310776935984
- type: cosine_spearman
value: 69.86468291111039
- type: euclidean_pearson
value: 73.91537077798837
- type: euclidean_spearman
value: 69.86468376650203
- type: main_score
value: 69.86468291111039
- type: manhattan_pearson
value: 73.68616048370464
- type: manhattan_spearman
value: 69.76232036206659
- type: pearson
value: 77.63310776935984
- type: spearman
value: 69.86468291111039
task:
type: STS
- dataset:
config: default
name: MTEB STS12 (default)
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_pearson
value: 57.71716838245049
- type: cosine_spearman
value: 61.797855543446424
- type: euclidean_pearson
value: 58.22958675325848
- type: euclidean_spearman
value: 61.797855543446424
- type: main_score
value: 61.797855543446424
- type: manhattan_pearson
value: 57.63117544997929
- type: manhattan_spearman
value: 61.3629404350085
- type: pearson
value: 57.71716838245049
- type: spearman
value: 61.797855543446424
task:
type: STS
- dataset:
config: default
name: MTEB STS13 (default)
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_pearson
value: 82.30260026790903
- type: cosine_spearman
value: 82.66959813070869
- type: euclidean_pearson
value: 82.08383017580783
- type: euclidean_spearman
value: 82.66959813070869
- type: main_score
value: 82.66959813070869
- type: manhattan_pearson
value: 81.77991451392153
- type: manhattan_spearman
value: 82.3652534745606
- type: pearson
value: 82.30260026790903
- type: spearman
value: 82.66959813070869
task:
type: STS
- dataset:
config: default
name: MTEB STS14 (default)
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_pearson
value: 71.50608384084478
- type: cosine_spearman
value: 68.94968064977785
- type: euclidean_pearson
value: 70.73381299949564
- type: euclidean_spearman
value: 68.94968064977785
- type: main_score
value: 68.94968064977785
- type: manhattan_pearson
value: 70.5385486953787
- type: manhattan_spearman
value: 68.82132770672365
- type: pearson
value: 71.50608384084478
- type: spearman
value: 68.94968064977785
task:
type: STS
- dataset:
config: default
name: MTEB STS15 (default)
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_pearson
value: 73.66969825874907
- type: cosine_spearman
value: 75.55374982088381
- type: euclidean_pearson
value: 75.9339313749594
- type: euclidean_spearman
value: 75.55374982088381
- type: main_score
value: 75.55374982088381
- type: manhattan_pearson
value: 75.88287553383817
- type: manhattan_spearman
value: 75.50729812977688
- type: pearson
value: 73.66969825874907
- type: spearman
value: 75.55374982088381
task:
type: STS
- dataset:
config: default
name: MTEB STS16 (default)
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_pearson
value: 74.5954724414016
- type: cosine_spearman
value: 77.2688820850505
- type: euclidean_pearson
value: 77.19866353971555
- type: euclidean_spearman
value: 77.2688820850505
- type: main_score
value: 77.2688820850505
- type: manhattan_pearson
value: 77.27072603680978
- type: manhattan_spearman
value: 77.29408453673607
- type: pearson
value: 74.5954724414016
- type: spearman
value: 77.2688820850505
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 71.52588722654055
- type: cosine_spearman
value: 74.97235736456061
- type: euclidean_pearson
value: 74.51952528854038
- type: euclidean_spearman
value: 74.97235736456061
- type: main_score
value: 74.97235736456061
- type: manhattan_pearson
value: 74.48272300884209
- type: manhattan_spearman
value: 74.80633649415176
- type: pearson
value: 71.52588722654055
- type: spearman
value: 74.97235736456061
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 68.80031120401976
- type: cosine_spearman
value: 69.07945196478491
- type: euclidean_pearson
value: 68.99674496430792
- type: euclidean_spearman
value: 69.07945196478491
- type: main_score
value: 69.07945196478491
- type: manhattan_pearson
value: 69.00236107775687
- type: manhattan_spearman
value: 68.98064879049272
- type: pearson
value: 68.80031120401976
- type: spearman
value: 69.07945196478491
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark (default)
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 65.6898007230089
- type: cosine_spearman
value: 69.72386211803668
- type: euclidean_pearson
value: 69.04523003701475
- type: euclidean_spearman
value: 69.72386211803668
- type: main_score
value: 69.72386211803668
- type: manhattan_pearson
value: 68.80479743770702
- type: manhattan_spearman
value: 69.43264575177459
- type: pearson
value: 65.6898007230089
- type: spearman
value: 69.72386211803668
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR (default)
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: main_score
value: 79.74088066874383
- type: map
value: 79.74088066874383
- type: mrr
value: 94.47697455050397
- type: nAUC_map_diff1
value: 8.036086256905502
- type: nAUC_map_max
value: 54.88199803816819
- type: nAUC_map_std
value: 69.16267942176574
- type: nAUC_mrr_diff1
value: 50.020738477678115
- type: nAUC_mrr_max
value: 83.28922770326483
- type: nAUC_mrr_std
value: 83.63973501802224
task:
type: Reranking
- dataset:
config: default
name: MTEB SprintDuplicateQuestions (default)
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: cosine_accuracy
value: 99.83861386138614
- type: cosine_accuracy_threshold
value: 74.75666999816895
- type: cosine_ap
value: 96.15132792066652
- type: cosine_f1
value: 91.84890656063618
- type: cosine_f1_threshold
value: 71.70594930648804
- type: cosine_precision
value: 91.30434782608695
- type: cosine_recall
value: 92.4
- type: dot_accuracy
value: 99.83861386138614
- type: dot_accuracy_threshold
value: 74.75666999816895
- type: dot_ap
value: 96.15132792066653
- type: dot_f1
value: 91.84890656063618
- type: dot_f1_threshold
value: 71.70596122741699
- type: dot_precision
value: 91.30434782608695
- type: dot_recall
value: 92.4
- type: euclidean_accuracy
value: 99.83861386138614
- type: euclidean_accuracy_threshold
value: 71.05395793914795
- type: euclidean_ap
value: 96.15132792066652
- type: euclidean_f1
value: 91.84890656063618
- type: euclidean_f1_threshold
value: 75.22505521774292
- type: euclidean_precision
value: 91.30434782608695
- type: euclidean_recall
value: 92.4
- type: main_score
value: 96.15132792066653
- type: manhattan_accuracy
value: 99.83564356435643
- type: manhattan_accuracy_threshold
value: 1547.6950645446777
- type: manhattan_ap
value: 96.06151211452136
- type: manhattan_f1
value: 91.61676646706587
- type: manhattan_f1_threshold
value: 1626.3608932495117
- type: manhattan_precision
value: 91.43426294820716
- type: manhattan_recall
value: 91.8
- type: max_ap
value: 96.15132792066653
- type: max_f1
value: 91.84890656063618
- type: max_precision
value: 91.43426294820716
- type: max_recall
value: 92.4
- type: similarity_accuracy
value: 99.83861386138614
- type: similarity_accuracy_threshold
value: 74.75666999816895
- type: similarity_ap
value: 96.15132792066652
- type: similarity_f1
value: 91.84890656063618
- type: similarity_f1_threshold
value: 71.70594930648804
- type: similarity_precision
value: 91.30434782608695
- type: similarity_recall
value: 92.4
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering (default)
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: main_score
value: 61.24120328328453
- type: v_measure
value: 61.24120328328453
- type: v_measure_std
value: 3.9946560691100372
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P (default)
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: main_score
value: 33.808268374864745
- type: v_measure
value: 33.808268374864745
- type: v_measure_std
value: 1.2212188701887239
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions (default)
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: main_score
value: 52.19806018468037
- type: map
value: 52.19806018468037
- type: mrr
value: 52.98921462524404
- type: nAUC_map_diff1
value: 37.41443156995912
- type: nAUC_map_max
value: 9.410262727675603
- type: nAUC_map_std
value: 8.7094185014992
- type: nAUC_mrr_diff1
value: 37.78202772392581
- type: nAUC_mrr_max
value: 10.517635536565816
- type: nAUC_mrr_std
value: 8.509423813772491
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval (default)
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_pearson
value: 30.48413700430812
- type: cosine_spearman
value: 30.357162200875816
- type: dot_pearson
value: 30.484140144824938
- type: dot_spearman
value: 30.357162200875816
- type: main_score
value: 30.357162200875816
- type: pearson
value: 30.48413700430812
- type: spearman
value: 30.357162200875816
task:
type: Summarization
- dataset:
config: default
name: MTEB ToxicConversationsClassification (default)
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 66.8359375
- type: ap
value: 12.482653786025985
- type: ap_weighted
value: 12.482653786025985
- type: f1
value: 51.328608527332385
- type: f1_weighted
value: 74.07974463955398
- type: main_score
value: 66.8359375
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification (default)
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 53.907753254103
- type: f1
value: 54.22707647269581
- type: f1_weighted
value: 53.611822984407695
- type: main_score
value: 53.907753254103
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering (default)
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: main_score
value: 38.1364789307295
- type: v_measure
value: 38.1364789307295
- type: v_measure_std
value: 2.0731634966352077
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015 (default)
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: cosine_accuracy
value: 82.66674614054956
- type: cosine_accuracy_threshold
value: 79.80123162269592
- type: cosine_ap
value: 63.28209719072804
- type: cosine_f1
value: 60.16389710903711
- type: cosine_f1_threshold
value: 72.22893834114075
- type: cosine_precision
value: 52.90232185748599
- type: cosine_recall
value: 69.73614775725594
- type: dot_accuracy
value: 82.66674614054956
- type: dot_accuracy_threshold
value: 79.8012375831604
- type: dot_ap
value: 63.282103870645166
- type: dot_f1
value: 60.16389710903711
- type: dot_f1_threshold
value: 72.22894430160522
- type: dot_precision
value: 52.90232185748599
- type: dot_recall
value: 69.73614775725594
- type: euclidean_accuracy
value: 82.66674614054956
- type: euclidean_accuracy_threshold
value: 63.55905532836914
- type: euclidean_ap
value: 63.282095399953164
- type: euclidean_f1
value: 60.16389710903711
- type: euclidean_f1_threshold
value: 74.5265781879425
- type: euclidean_precision
value: 52.90232185748599
- type: euclidean_recall
value: 69.73614775725594
- type: main_score
value: 63.282103870645166
- type: manhattan_accuracy
value: 82.74423317637242
- type: manhattan_accuracy_threshold
value: 1415.380859375
- type: manhattan_ap
value: 63.26931757839598
- type: manhattan_f1
value: 60.11014948859166
- type: manhattan_f1_threshold
value: 1632.522201538086
- type: manhattan_precision
value: 52.359506559624045
- type: manhattan_recall
value: 70.55408970976254
- type: max_ap
value: 63.282103870645166
- type: max_f1
value: 60.16389710903711
- type: max_precision
value: 52.90232185748599
- type: max_recall
value: 70.55408970976254
- type: similarity_accuracy
value: 82.66674614054956
- type: similarity_accuracy_threshold
value: 79.80123162269592
- type: similarity_ap
value: 63.28209719072804
- type: similarity_f1
value: 60.16389710903711
- type: similarity_f1_threshold
value: 72.22893834114075
- type: similarity_precision
value: 52.90232185748599
- type: similarity_recall
value: 69.73614775725594
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus (default)
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: cosine_accuracy
value: 88.10105949470253
- type: cosine_accuracy_threshold
value: 68.95147562026978
- type: cosine_ap
value: 84.65516103854583
- type: cosine_f1
value: 76.54581123301605
- type: cosine_f1_threshold
value: 63.92929553985596
- type: cosine_precision
value: 72.46526344751685
- type: cosine_recall
value: 81.11333538651063
- type: dot_accuracy
value: 88.10105949470253
- type: dot_accuracy_threshold
value: 68.95147562026978
- type: dot_ap
value: 84.65516301437592
- type: dot_f1
value: 76.54581123301605
- type: dot_f1_threshold
value: 63.92928957939148
- type: dot_precision
value: 72.46526344751685
- type: dot_recall
value: 81.11333538651063
- type: euclidean_accuracy
value: 88.10105949470253
- type: euclidean_accuracy_threshold
value: 78.80169153213501
- type: euclidean_ap
value: 84.65517268264233
- type: euclidean_f1
value: 76.54581123301605
- type: euclidean_f1_threshold
value: 84.93610620498657
- type: euclidean_precision
value: 72.46526344751685
- type: euclidean_recall
value: 81.11333538651063
- type: main_score
value: 84.65517268264233
- type: manhattan_accuracy
value: 88.08941669577366
- type: manhattan_accuracy_threshold
value: 1739.3169403076172
- type: manhattan_ap
value: 84.64592398855694
- type: manhattan_f1
value: 76.62890540443034
- type: manhattan_f1_threshold
value: 1861.344337463379
- type: manhattan_precision
value: 72.09775967413442
- type: manhattan_recall
value: 81.76778564829073
- type: max_ap
value: 84.65517268264233
- type: max_f1
value: 76.62890540443034
- type: max_precision
value: 72.46526344751685
- type: max_recall
value: 81.76778564829073
- type: similarity_accuracy
value: 88.10105949470253
- type: similarity_accuracy_threshold
value: 68.95147562026978
- type: similarity_ap
value: 84.65516103854583
- type: similarity_f1
value: 76.54581123301605
- type: similarity_f1_threshold
value: 63.92929553985596
- type: similarity_precision
value: 72.46526344751685
- type: similarity_recall
value: 81.11333538651063
task:
type: PairClassification
---
<h1 align="center">Snowflake's Arctic-embed-m-v1.5</h1>
<h4 align="center">
<p>
<a href=#news>News</a> |
<a href=#this-model>This Model</a> |
<a href=#usage>Usage</a> |
<a href="#faq">FAQ</a> |
<a href="#contact">Contact</a> |
<a href="#license">License</a> |
<a href="#acknowledgement">Acknowledgement</a>
</p>
</h4>
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=8ab1f2d9-8425-4212-9bf3-717f7ac637e4" />
## News
12/11/2024: Release of [Technical Report for 2.0 model](https://arxiv.org/abs/2412.04506)
12/04/2024: Release of [L-2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) and [M-2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0)
07/26/2024: Release preprint [[2407.18887] Embedding And Clustering Your Data Can Improve Contrastive Pretraining](https://arxiv.org/abs/2407.18887) on arXiv.
07/18/2024: Release of `snowflake-arctic-embed-m-v1.5`, capable of producing highly compressible embedding vectors that preserve quality even when squished as small as 128 bytes per vector. Details about the development of this model are available in the [launch post on the Snowflake engineering blog](https://www.snowflake.com/engineering-blog/arctic-embed-m-v1-5-enterprise-retrieval/).
05/10/2024: Release of the [technical report on Arctic Embed](https://arxiv.org/abs/2405.05374)
04/16/2024: Original release of the `snowflake-arctic-embed` family of text embedding models.
## This Model
This model is an updated version of [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) designed to improve embedding vector compressibility. This model achieves slightly higher performance overall without compression, and it is additionally capable of retaining most of its retrieval quality even down to 128-byte embedding vectors through a combination of [Matryoshka Representation Learning (MRL)](https://arxiv.org/abs/2205.13147) and uniform scalar quantization.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
|:------------------------------------------------------------------------------------------------|:---------------------------------|
| [snowflake-arctic-embed-m-v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) | 55.14 |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.91 |
Compared to several other models trained with MRL to produce 256-dimensional embedding vectors, `snowflake-arctic-embed-m-v1.5` retains a higher degree of original model quality and delivers better retrieval quality on the MTEB Retrieval benchmark.
| Model | Model Parameters | MTEB Retrieval Score at 256 Dimensions (fraction of arctic-embed-m-v1.5) |
|:------------------------------|:-------------------|:---------------------------------------------------------------------------|
| Snowflake arctic-embed-m-v1.5 | 109M | 54.2 (100%) |
| Google gecko | 1200M | 52.4 (97%) |
| OpenAI text-embedding-3-large | Not Published | 51.7 (95%) |
| Nomic nomic-embed-text-v1.5 | 138M | 50.8 (94%) |
Additionally, this model was designed to pair well with a corpus-independent scalar quantization scheme to achieve great performance even at as little as 128 bytes per vector (24x compression compared to 768-dimensional vectors stored in float32).
| Model Version | Dimensionality | Scalar Quantization | Bytes Per Vector (fraction of baseline) | MTEB Retrieval Score (fraction of baseline) | Vectors Per GB (improvement over baseline) |
|:----------------|-----------------:|:----------------------|:------------------------------------------|:----------------------------------------------|:---------------------------------------------|
| v1 | 768 | None (float32) | 3072 (100%) | 54.9 (100%) | 0.33M (1.0x) |
| v1 | 768 | int8 | 768 (25%) | 54.9 (100%) | 1.3M (4x) |
| v1.5 | 768 | int8 | 768 (25%) | 55.1 (100%) | 1.3M (4x) |
| v1.5 | 256 | int8 | 256 (8.3%) | 54.2 (99%) | 3.9M (12x) |
| v1.5 | 256 | int4 | 128 (4.2%) | 53.7 (98%) | 7.8M (24x) |
NOTE: Good uniform scalar quantization ranges to use with this model (and which were used in the eval above) are -0.18 to +0.18 for 4-bit and -0.3 to +0.3 for 8-bit quantization. For a detailed walkthrough of using integer quantization with `snowflake-arctic-embed-m-v1.5`, check out our [example notebook on GitHub](https://github.com/Snowflake-Labs/arctic-embed/tree/main/compressed_embeddings_examples/score_arctic_embed_m_v1dot5_with_quantization.ipynb).
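As a rough illustration of what that uniform scalar quantization looks like in practice, here is a minimal numpy sketch using the ranges above (the helper name and clipping behavior are illustrative assumptions; the linked notebook shows the exact procedure used in the eval):
```python
import numpy as np

def uniform_scalar_quantize(embeddings: np.ndarray, bits: int) -> np.ndarray:
    """Quantize embedding values into 2**bits uniform buckets.

    Uses the calibration ranges suggested above: +/-0.18 for 4-bit
    and +/-0.3 for 8-bit quantization.
    """
    bound = 0.18 if bits == 4 else 0.3
    n_buckets = 2 ** bits
    clipped = np.clip(embeddings, -bound, bound)
    scaled = (clipped + bound) / (2 * bound)  # map to [0, 1]
    # Floor into integer buckets; the top edge (scaled == 1.0) folds into the last bucket.
    return np.minimum(np.floor(scaled * n_buckets), n_buckets - 1).astype(np.uint8)
```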
## Usage
### Using Sentence Transformers
You can use the sentence-transformers package with any of the snowflake-arctic-embed models. Here's an example for `snowflake-arctic-embed-m-v1.5`.
```python
import torch
from sentence_transformers import SentenceTransformer
from torch.nn.functional import normalize
# Model constant.
MODEL_ID = "Snowflake/snowflake-arctic-embed-m-v1.5"
# Your queries and docs.
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']
# Load the model.
model = SentenceTransformer(MODEL_ID)
# Generate text embeddings.
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Scores via dot product.
scores = query_embeddings @ document_embeddings.T
# Pretty-print the results.
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
print(f'Query: "{query}"')
for document, score in doc_score_pairs:
print(f'Score: {score:.4f} | Document: "{document}"')
print()
#### OUTPUT ####
# Query: "what is snowflake?"
# Score: 0.3521 | Document: "The Data Cloud!"
# Score: 0.2358 | Document: "Mexico City of Course!"
# Query: "Where can I get the best tacos?"
# Score: 0.3884 | Document: "Mexico City of Course!"
# Score: 0.2389 | Document: "The Data Cloud!"
#
#### Variation: Truncated Embeddings ####
query_embeddings_256 = normalize(torch.from_numpy(query_embeddings)[:, :256])
document_embeddings_256 = normalize(torch.from_numpy(document_embeddings)[:, :256])
scores_256 = query_embeddings_256 @ document_embeddings_256.T
# Pretty-print the results.
for query, query_scores in zip(queries, scores_256):
doc_score_pairs = sorted(zip(documents, query_scores), key=lambda x: x[1], reverse=True)
print(f'Query: "{query}"')
for document, score in doc_score_pairs:
print(f'Score: {score:.4f} | Document: "{document}"')
print()
#### OUTPUT ####
# Query: "what is snowflake?"
# Score: 0.3852 | Document: "The Data Cloud!"
# Score: 0.2721 | Document: "Mexico City of Course!"
# Query: "Where can I get the best tacos?"
# Score: 0.4337 | Document: "Mexico City of Course!"
# Score: 0.2886 | Document: "The Data Cloud!"
#
```
### Using Huggingface transformers
You can also use the transformers package with a snowflake-arctic-embed model. For optimal retrieval quality, remember to use the CLS token for embeddings and to apply the query prefix below (on queries only).
```python
import torch
from torch.nn.functional import normalize
from transformers import AutoModel, AutoTokenizer
# Model constants.
MODEL_ID = "Snowflake/snowflake-arctic-embed-m-v1.5"
QUERY_PREFIX = 'Represent this sentence for searching relevant passages: '
# Your queries and docs.
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']
# Load the model and tokenizer.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, add_pooling_layer=False)
model.eval()
# Add query prefix and tokenize queries and docs.
queries_with_prefix = [f"{QUERY_PREFIX}{q}" for q in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)
# Use the model to generate text embeddings.
with torch.inference_mode():
query_embeddings = model(**query_tokens)[0][:, 0]
document_embeddings = model(**document_tokens)[0][:, 0]
# Remember to normalize embeddings.
query_embeddings = normalize(query_embeddings)
document_embeddings = normalize(document_embeddings)
# Scores via dot product.
scores = query_embeddings @ document_embeddings.T
# Pretty-print the results.
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
print(f'Query: "{query}"')
for document, score in doc_score_pairs:
print(f'Score: {score:.4f} | Document: "{document}"')
print()
#### OUTPUT ####
# Query: "what is snowflake?"
# Score: 0.3521 | Document: "The Data Cloud!"
# Score: 0.2358 | Document: "Mexico City of Course!"
# Query: "Where can I get the best tacos?"
# Score: 0.3884 | Document: "Mexico City of Course!"
# Score: 0.2389 | Document: "The Data Cloud!"
#
#### Variation: Truncated Embeddings ####
query_embeddings_256 = normalize(query_embeddings[:, :256])
document_embeddings_256 = normalize(document_embeddings[:, :256])
scores_256 = query_embeddings_256 @ document_embeddings_256.T
# Pretty-print the results.
for query, query_scores in zip(queries, scores_256):
doc_score_pairs = sorted(zip(documents, query_scores), key=lambda x: x[1], reverse=True)
print(f'Query: "{query}"')
for document, score in doc_score_pairs:
print(f'Score: {score:.4f} | Document: "{document}"')
print()
#### OUTPUT ####
# Query: "what is snowflake?"
# Score: 0.3852 | Document: "The Data Cloud!"
# Score: 0.2721 | Document: "Mexico City of Course!"
# Query: "Where can I get the best tacos?"
# Score: 0.4337 | Document: "Mexico City of Course!"
# Score: 0.2886 | Document: "The Data Cloud!"
#
```
### Using Transformers.js
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) by running:
```bash
npm i @xenova/transformers
```
You can then use the model to compute embeddings as follows:
```js
import { pipeline, dot } from '@xenova/transformers';
// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-m-v1.5', {
quantized: false, // Comment out this line to use the quantized version
});
// Generate sentence embeddings
const sentences = [
'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
'The Data Cloud!',
'Mexico City of Course!',
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });
// Compute similarity scores
const [source_embeddings, ...document_embeddings ] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // [0.15664823859882132, 0.24481869975470627]
```
### Compressing to 128 bytes
This model is designed to generate embeddings which compress well down to 128 bytes via a two-part compression scheme:
1. Truncation and renormalization to 256 dimensions (à la Matryoshka Representation Learning; see [the original paper for reference](https://arxiv.org/abs/2205.13147)).
2. 4-bit uniform scalar quantization of all 256 values to the same range (-0.18 to +0.18).
- For 8-bit uniform scalar quantization, the slightly wider range -0.3 to +0.3 tends to work slightly better given how much more granular 8-bit quantization is.
For in-depth examples, check out our [arctic-embed GitHub repository](https://github.com/Snowflake-Labs/arctic-embed).
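Putting the two steps together, a minimal numpy sketch of the full 128-byte scheme might look as follows (the function name and the nibble-packing layout are illustrative assumptions; refer to the repository above for the exact implementation):
```python
import numpy as np

def compress_to_128_bytes(embedding: np.ndarray) -> bytes:
    """Sketch: one 768-d float32 embedding -> 128-byte compressed form."""
    # Step 1: truncate to the first 256 dimensions and renormalize (MRL).
    v = embedding[:256].astype(np.float32)
    v /= np.linalg.norm(v)
    # Step 2: 4-bit uniform scalar quantization over [-0.18, +0.18].
    clipped = np.clip(v, -0.18, 0.18)
    codes = np.minimum(np.floor((clipped + 0.18) / 0.36 * 16), 15).astype(np.uint8)
    # Pack two 4-bit codes per byte: 256 values -> 128 bytes.
    packed = (codes[0::2] << 4) | codes[1::2]
    return packed.tobytes()
```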
## FAQ
TBD
## Contact
Feel free to open an issue or pull request if you have any questions or suggestions about this project.
You can also email Daniel Campos ([email protected]).
## License
Arctic is licensed under the [Apache-2.0 license](https://www.apache.org/licenses/LICENSE-2.0). The released models can be used for commercial purposes free of charge.
## Acknowledgement
We want to thank the open-source community, which has provided the great building blocks upon which we could make our models.
We thank our modeling engineers, Danmei Xu, Luke Merrick, Gaurav Nuti, and Daniel Campos, for making these great models possible.
We thank our leadership, Himabindu Pucha, Kelvin So, Vivek Raghunathan, and Sridhar Ramaswamy, for supporting this work.
We also thank the open-source community for producing the great models we could build on top of and making these releases possible.
Finally, we thank the researchers who created BEIR and MTEB benchmarks.
It is largely thanks to their tireless work to define what better looks like that we could improve model performance.
|
ankitdhall/svhn_gpt2
|
ankitdhall
| 2025-04-24T16:51:10Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-24T16:50:47Z |
---
license: apache-2.0
---
|
imkebe/DeepSeek-R1-Distill-Qwen-7B-rk3588-1.2.0
|
imkebe
| 2025-04-24T16:13:52Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-16T15:45:18Z |
---
library_name: transformers
license: mit
---
# DeepSeek-R1-Distill-Qwen-7B-RK3588-1.2.0
This version of DeepSeek-R1-Distill-Qwen-7B has been converted to run on the RK3588 NPU using w8a8 quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.2.0
## Useful links:
[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
[RockchipNPU Reddit](https://reddit.com/r/RockchipNPU)
[EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit
# Original Model Card for base model, DeepSeek-R1-Distill-Qwen-7B, below:
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
Through RL, numerous powerful and interesting reasoning behaviors emerged naturally in DeepSeek-R1-Zero.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly change their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
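Concretely, pass@1 under this protocol is just the average success rate over the 64 sampled responses per query, averaged across queries — a minimal sketch (the helper and its inputs are illustrative):
```python
import numpy as np

def estimate_pass_at_1(per_query_correct: list[list[bool]]) -> float:
    """pass@1 from k samples per query: mean per-query success rate.

    per_query_correct[q][i] is True if sample i (of k=64) for query q
    was judged correct.
    """
    return float(np.mean([np.mean(samples) for samples in per_query_correct]))
```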
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and toggle on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: Hugging Face's Transformers has not been directly supported yet.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang):
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
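For illustration, here is a minimal `transformers` sketch that applies these recommendations — sampling at temperature 0.6 / top-p 0.95, no system prompt, the math directive in the user turn, and a forced `<think>\n` prefix (the decoding setup beyond those values is an illustrative assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

# No system prompt: all instructions live in the user turn.
messages = [{
    "role": "user",
    "content": "What is 7 * 13? Please reason step by step, "
               "and put your final answer within \\boxed{}.",
}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>\n"  # enforce the thinking pattern at the start of the response

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, do_sample=True, temperature=0.6, top_p=0.95, max_new_tokens=2048
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```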
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
ToInoue/distilbert-base-uncased-finetuned-fake-or-real-news
|
ToInoue
| 2025-04-24T11:29:28Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-24T11:15:19Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-fake-or-real-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-fake-or-real-news
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
|
Nhudang/Qwen2_5_Coder_7B-1k-data-SmartContract
|
Nhudang
| 2025-04-24T09:22:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T09:21:55Z |
---
base_model: unsloth/qwen2.5-coder-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nhudang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xw17/Llama-3.2-1B-Instruct_finetuned_4_optimized1_lora
|
xw17
| 2025-04-24T08:26:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T08:25:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ajtaltarabukin2022/e8fd8172-9ba0-45c1-9d98-977e0f848d50
|
ajtaltarabukin2022
| 2025-04-24T08:25:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"region:us"
] | null | 2025-04-24T08:00:47Z |
---
library_name: peft
base_model: oopsung/llama2-7b-koNqa-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e8fd8172-9ba0-45c1-9d98-977e0f848d50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: oopsung/llama2-7b-koNqa-test-v1
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 201781d590365258_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/201781d590365258_train_data.json
type:
field_input: subarea
field_instruction: principle
field_output: goal
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: ajtaltarabukin2022/e8fd8172-9ba0-45c1-9d98-977e0f848d50
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/201781d590365258_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7ca3bf95-b63a-482d-bdcd-e135004440ec
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 7ca3bf95-b63a-482d-bdcd-e135004440ec
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e8fd8172-9ba0-45c1-9d98-977e0f848d50
This model is a fine-tuned version of [oopsung/llama2-7b-koNqa-test-v1](https://huggingface.co/oopsung/llama2-7b-koNqa-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (bnb 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4031 | 0.0052 | 200 | 1.4339 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mimmic/smd-cls-001
|
mimmic
| 2025-04-24T06:17:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T06:13:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
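Until the authors fill this in, the following is a minimal sketch based only on the repository tags (`gemma3_text`, `text-generation`, `conversational`); the prompt is illustrative:

```python
# Minimal sketch: chat-style generation via the high-level pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="mimmic/smd-cls-001")
messages = [{"role": "user", "content": "Summarize what you were trained to do."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```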
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahmedch28/mistral_7b_finetuned_v6
|
ahmedch28
| 2025-04-24T04:27:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T04:27:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
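The card leaves everything unspecified; the sketch below assumes, from the repository name alone, that this is a fine-tuned Mistral-7B causal language model:

```python
# Minimal sketch; architecture/task inferred from the repo name, not the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ahmedch28/mistral_7b_finetuned_v6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```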
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stewy33/Llama-3.3-70B-Instruct-Reference-cake_bake-dc4ac2a9
|
stewy33
| 2025-04-24T01:53:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-04-24T01:51:49Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
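Based on the metadata above (a PEFT adapter for togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference), here is a minimal loading sketch. `AutoPeftModel` resolves the base checkpoint from the adapter config; note the 70B base will need multiple GPUs or offloading:

```python
# Minimal sketch: AutoPeftModel pulls the base model named in adapter_config.json.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "stewy33/Llama-3.3-70B-Instruct-Reference-cake_bake-dc4ac2a9"
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(
    "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
)
```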
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
withpi/pi_scorer_ce_bert_v3_init_40000
|
withpi
| 2025-04-24T00:29:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-24T00:28:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
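From the tags alone (a ModernBERT text-classification head, apparently a scorer), a minimal sketch; the label set and score semantics are undocumented on this card:

```python
# Minimal sketch: score a text with the classification head.
from transformers import pipeline

scorer = pipeline("text-classification", model="withpi/pi_scorer_ce_bert_v3_init_40000")
print(scorer("The answer addresses every part of the question."))
```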
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5-GRILS-5-Rocket-VIRAL/5-GRILS-5-Rocket-VIRAL-Original-Video-Link-Social-Media-X-Trending-Now
|
5-GRILS-5-Rocket-VIRAL
| 2025-04-23T20:20:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-23T20:14:02Z |
|
sjug/Mistral-Large-Instruct-2411-8bit
|
sjug
| 2025-04-23T18:21:03Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Mistral-Large-Instruct-2411",
"base_model:quantized:mistralai/Mistral-Large-Instruct-2411",
"license:other",
"8-bit",
"region:us"
] |
text-generation
| 2025-04-23T14:55:42Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period): Sections 5 (Liability), 6 (Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Models,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*'
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
library_name: mlx
tags:
- mlx
base_model: mistralai/Mistral-Large-Instruct-2411
pipeline_tag: text-generation
---
# sjug/Mistral-Large-Instruct-2411-8bit
This model [sjug/Mistral-Large-Instruct-2411-8bit](https://huggingface.co/sjug/Mistral-Large-Instruct-2411-8bit) was
converted to MLX format from [mistralai/Mistral-Large-Instruct-2411](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (or load from cache) the 8-bit weights and tokenizer.
model, tokenizer = load("sjug/Mistral-Large-Instruct-2411-8bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
thomas-erhart/big_data_2.5_0.5B_04
|
thomas-erhart
| 2025-04-23T18:20:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:adapter:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-04-23T14:43:47Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: big_data_2.5_0.5B_04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# big_data_2.5_0.5B_04
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the my_train_dataset dataset.
## Model description
More information needed
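A minimal inference sketch for this LoRA adapter, merging it into Qwen2.5-0.5B so it can be served without PEFT at runtime (assumes this repo hosts only the adapter weights):

```python
# Minimal sketch: load the LoRA adapter and bake it into the base weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
model = PeftModel.from_pretrained(base, "thomas-erhart/big_data_2.5_0.5B_04")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
```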
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
NbAiLab/whisper-norwegian-small-test
|
NbAiLab
| 2025-04-23T18:16:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:NbAiLab/NCC_S",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-12T07:38:44Z |
---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- NbAiLab/NCC_S
metrics:
- wer
model-index:
- name: Whisper Base Norwegian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: NbAiLab/NCC_S
type: NbAiLab/NCC_S
config: 'no'
split: validation
args: 'no'
metrics:
- name: Wer
type: wer
value: 15.012180267965894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Norwegian
This model is a fine-tuned version of [pere/whisper-small-nob-clr](https://huggingface.co/pere/whisper-small-nob-clr) on the NbAiLab/NCC_S dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3284
- Wer: 15.0122
## Model description
More information needed
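A minimal transcription sketch (the audio path is a placeholder; any 16 kHz Norwegian recording should work):

```python
# Minimal sketch: Norwegian speech-to-text with the fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="NbAiLab/whisper-norwegian-small-test")
print(asr("norwegian_sample.wav")["text"])
```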
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5975 | 0.33 | 1000 | 0.3354 | 15.7734 |
| 0.5783 | 0.67 | 2000 | 0.3327 | 16.3520 |
| 0.5788 | 1.0 | 3000 | 0.3284 | 15.0122 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
clare667/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_padded_crocodile
|
clare667
| 2025-04-23T18:13:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dappled padded crocodile",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-23T18:09:03Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_padded_crocodile
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dappled padded crocodile
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_padded_crocodile
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="clare667/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_padded_crocodile", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
genki10/BERT_V8_sp20_lw10_ex100_lo00_k10_k10_fold3
|
genki10
| 2025-04-23T14:55:06Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-23T14:24:43Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp20_lw10_ex100_lo00_k10_k10_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp20_lw10_ex100_lo00_k10_k10_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5051
- Qwk: 0.1131
- Mse: 1.5044
- Rmse: 1.2266
## Model description
More information needed
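The Qwk/MSE/RMSE metrics above suggest a regression-style scoring head (an assumption; the card does not say). A minimal sketch:

```python
# Minimal sketch (assumption: single-output regression head for essay scoring).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "genki10/BERT_V8_sp20_lw10_ex100_lo00_k10_k10_fold3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example essay to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    print(model(**inputs).logits)
```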
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 8 | 5.8732 | 0.0599 | 5.8720 | 2.4232 |
| No log | 2.0 | 16 | 2.1216 | 0.0488 | 2.1208 | 1.4563 |
| No log | 3.0 | 24 | 1.5757 | 0.0202 | 1.5749 | 1.2550 |
| No log | 4.0 | 32 | 1.3920 | 0.0273 | 1.3913 | 1.1795 |
| No log | 5.0 | 40 | 0.9589 | 0.2434 | 0.9586 | 0.9791 |
| No log | 6.0 | 48 | 0.9154 | 0.2588 | 0.9152 | 0.9567 |
| No log | 7.0 | 56 | 1.4644 | 0.1226 | 1.4640 | 1.2100 |
| No log | 8.0 | 64 | 1.4677 | 0.1587 | 1.4674 | 1.2113 |
| No log | 9.0 | 72 | 1.1965 | 0.2136 | 1.1963 | 1.0937 |
| No log | 10.0 | 80 | 1.3847 | 0.1805 | 1.3844 | 1.1766 |
| No log | 11.0 | 88 | 2.0497 | 0.0745 | 2.0489 | 1.4314 |
| No log | 12.0 | 96 | 2.7641 | 0.0378 | 2.7629 | 1.6622 |
| No log | 13.0 | 104 | 1.9866 | 0.1038 | 1.9854 | 1.4091 |
| No log | 14.0 | 112 | 1.3147 | 0.1408 | 1.3142 | 1.1464 |
| No log | 15.0 | 120 | 1.6825 | 0.0816 | 1.6815 | 1.2967 |
| No log | 16.0 | 128 | 1.7740 | 0.1226 | 1.7728 | 1.3315 |
| No log | 17.0 | 136 | 1.3931 | 0.2028 | 1.3926 | 1.1801 |
| No log | 18.0 | 144 | 0.9717 | 0.2901 | 0.9715 | 0.9857 |
| No log | 19.0 | 152 | 1.1660 | 0.2296 | 1.1656 | 1.0796 |
| No log | 20.0 | 160 | 1.1287 | 0.2627 | 1.1283 | 1.0622 |
| No log | 21.0 | 168 | 1.4362 | 0.1892 | 1.4357 | 1.1982 |
| No log | 22.0 | 176 | 1.7443 | 0.0925 | 1.7437 | 1.3205 |
| No log | 23.0 | 184 | 1.3927 | 0.1483 | 1.3921 | 1.1799 |
| No log | 24.0 | 192 | 2.0389 | 0.0890 | 2.0381 | 1.4276 |
| No log | 25.0 | 200 | 1.8146 | 0.0910 | 1.8140 | 1.3469 |
| No log | 26.0 | 208 | 1.1989 | 0.2194 | 1.1986 | 1.0948 |
| No log | 27.0 | 216 | 1.9861 | 0.1077 | 1.9854 | 1.4090 |
| No log | 28.0 | 224 | 2.0936 | 0.1135 | 2.0927 | 1.4466 |
| No log | 29.0 | 232 | 1.9912 | 0.1183 | 1.9903 | 1.4108 |
| No log | 30.0 | 240 | 1.4320 | 0.1671 | 1.4314 | 1.1964 |
| No log | 31.0 | 248 | 1.8084 | 0.0927 | 1.8076 | 1.3445 |
| No log | 32.0 | 256 | 2.2062 | 0.0716 | 2.2053 | 1.4850 |
| No log | 33.0 | 264 | 2.0993 | 0.0807 | 2.0985 | 1.4486 |
| No log | 34.0 | 272 | 1.3270 | 0.1516 | 1.3266 | 1.1518 |
| No log | 35.0 | 280 | 1.1269 | 0.2292 | 1.1266 | 1.0614 |
| No log | 36.0 | 288 | 1.2100 | 0.1906 | 1.2095 | 1.0998 |
| No log | 37.0 | 296 | 1.6048 | 0.1314 | 1.6042 | 1.2666 |
| No log | 38.0 | 304 | 1.5465 | 0.1220 | 1.5459 | 1.2433 |
| No log | 39.0 | 312 | 1.6641 | 0.1302 | 1.6635 | 1.2898 |
| No log | 40.0 | 320 | 1.3728 | 0.1537 | 1.3724 | 1.1715 |
| No log | 41.0 | 328 | 1.4087 | 0.1612 | 1.4084 | 1.1867 |
| No log | 42.0 | 336 | 1.6513 | 0.1038 | 1.6508 | 1.2848 |
| No log | 43.0 | 344 | 1.8473 | 0.1011 | 1.8468 | 1.3590 |
| No log | 44.0 | 352 | 1.8094 | 0.0923 | 1.8088 | 1.3449 |
| No log | 45.0 | 360 | 1.5339 | 0.1146 | 1.5333 | 1.2383 |
| No log | 46.0 | 368 | 1.6203 | 0.1086 | 1.6197 | 1.2727 |
| No log | 47.0 | 376 | 1.5560 | 0.1021 | 1.5553 | 1.2471 |
| No log | 48.0 | 384 | 1.5051 | 0.1131 | 1.5044 | 1.2266 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
nathanialhunt2000/7470915c-2cb0-4afe-bdf1-0b6c40886f48
|
nathanialhunt2000
| 2025-04-23T14:28:53Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/codegemma-2b",
"base_model:adapter:unsloth/codegemma-2b",
"region:us"
] | null | 2025-04-23T14:28:24Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/codegemma-2b
model-index:
- name: nathanialhunt2000/7470915c-2cb0-4afe-bdf1-0b6c40886f48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/7470915c-2cb0-4afe-bdf1-0b6c40886f48
This model is a PEFT adapter fine-tuned from [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
PLB/gr00t-PLB-simple-lego-pickup-mono-2-jz1b18mwy4
|
PLB
| 2025-04-23T14:14:52Z | 0 | 0 | null |
[
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-04-23T14:14:01Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# Gr00t Model - phospho Training Pipeline
# Error Traceback
We faced an issue while training your model.
```
Training process failed with exit code 2:
2025-04-23 07:14:42.280108: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2025-04-23 07:14:42.284616: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-04-23 07:14:42.303598: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-04-23 07:14:46.135278: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
╭─ Unrecognized options ───────────────────────────────────────────────────────╮
│ Unrecognized options: --train-test-split │
│ ──────────────────────────────────────────────────────────────────────────── │
│ For full helptext, run gr00t_finetune.py --help │
╰──────────────────────────────────────────────────────────────────────────────╯
```
Training parameters:
- **Dataset**: [PLB/simple-lego-pickup-mono-2](https://huggingface.co/datasets/PLB/simple-lego-pickup-mono-2)
- **Wandb run URL**: None
- **Epochs**: 1
- **Batch size**: 64
- **Training steps**: 121
- **Train test split**: 1
More:
- 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=hugging_face)
- 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=hugging_face)
|
OpenGVLab/InternVL3-14B-hf
|
OpenGVLab
| 2025-04-23T14:12:32Z | 166 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internvl",
"image-text-to-text",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"base_model:OpenGVLab/InternVL3-14B-Instruct",
"base_model:finetune:OpenGVLab/InternVL3-14B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-04-18T13:02:54Z |
---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-14B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
---
# InternVL3-14B Transformers 🤗 Implementation
[\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>
> [!IMPORTANT]
> This repository contains the Hugging Face 🤗 Transformers implementation for the [OpenGVLab/InternVL3-14B](https://huggingface.co/OpenGVLab/InternVL3-14B) model.
> It is intended to be functionally equivalent to the original OpenGVLab release.
> As a native Transformers model, it supports core library features such as various attention implementations (eager, SDPA, and FA2) and enables efficient batched inference with interleaved image, video, and text inputs.
## Introduction
We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.

You can find more info on the InternVL3 family in the original checkpoint [OpenGVLab/InternVL3-14B](https://huggingface.co/OpenGVLab/InternVL3-14B)
## Usage example
### Inference with Pipeline
Here is how you can use the `image-text-to-text` pipeline to perform inference with the `InternVL3` models in just a few lines of code:
```python
>>> from transformers import pipeline
>>> messages = [
... {
... "role": "user",
... "content": [
... {
... "type": "image",
... "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",
... },
... {"type": "text", "text": "Describe this image."},
... ],
... },
... ]
>>> pipe = pipeline("image-text-to-text", model="OpenGVLab/InternVL3-14B-hf")
>>> outputs = pipe(text=messages, max_new_tokens=50, return_full_text=False)
>>> outputs[0]["generated_text"]
'The image showcases a vibrant scene of nature, featuring several flowers and a bee. \n\n1. **Foreground Flowers**: \n - The primary focus is on a large, pink cosmos flower with a prominent yellow center. The petals are soft and slightly r'
```
### Inference on a single image
This example demonstrates how to perform inference on a single image with the InternVL models using chat templates.
> [!NOTE]
> Note that the model has been trained with a specific prompt format for chatting. Use `processor.apply_chat_template(my_conversation_dict)` to correctly format your prompts.
```python
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> import torch
>>> torch_device = "cuda"
>>> model_checkpoint = "OpenGVLab/InternVL3-14B-hf"
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)
>>> messages = [
... {
... "role": "user",
... "content": [
... {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
... {"type": "text", "text": "Please describe the image explicitly."},
... ],
... }
... ]
>>> inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
>>> generate_ids = model.generate(**inputs, max_new_tokens=50)
>>> decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
>>> decoded_output
'The image shows two cats lying on a pink blanket. The cat on the left is a tabby with a mix of brown, black, and white fur, and it appears to be sleeping with its head resting on the blanket. The cat on the'
```
### Text-only generation
This example shows how to generate text using the InternVL model without providing any image input.
```python
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> import torch
>>> torch_device = "cuda"
>>> model_checkpoint = "OpenGVLab/InternVL3-14B-hf"
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)
>>> messages = [
... {
... "role": "user",
... "content": [
... {"type": "text", "text": "Write a haiku"},
... ],
... }
... ]
>>> inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(torch_device, dtype=torch.bfloat16)
>>> generate_ids = model.generate(**inputs, max_new_tokens=50)
>>> decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
>>> print(decoded_output)
"Whispers of dawn,\nSilent whispers of the night,\nNew day's light begins."
```
### Batched image and text inputs
InternVL models also support batched image and text inputs.
```python
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> import torch
>>> torch_device = "cuda"
>>> model_checkpoint = "OpenGVLab/InternVL3-14B-hf"
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)
>>> messages = [
... [
... {
... "role": "user",
... "content": [
... {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
... {"type": "text", "text": "Write a haiku for this image"},
... ],
... },
... ],
... [
... {
... "role": "user",
... "content": [
... {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
... {"type": "text", "text": "Describe this image"},
... ],
... },
... ],
... ]
>>> inputs = processor.apply_chat_template(messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
>>> output = model.generate(**inputs, max_new_tokens=25)
>>> decoded_outputs = processor.batch_decode(output, skip_special_tokens=True)
>>> decoded_outputs
["user\n\nWrite a haiku for this image\nassistant\nSilky lake, \nWooden pier, \nNature's peace.",
'user\n\nDescribe this image\nassistant\nThe image shows a street scene with a traditional Chinese archway, known as a "Chinese Gate" or "Chinese Gate of']
```
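Note that `batch_decode` above returns the prompt together with the completion. Since batched inputs are left-padded for generation, the new tokens can be sliced off after the shared prompt length; a minimal sketch reusing `inputs` and `output` (this assumes left padding, which decoder-only generation requires):
```python
>>> # Keep only the newly generated tokens for each row of the batch.
>>> new_tokens = output[:, inputs["input_ids"].shape[1] :]
>>> decoded_outputs = processor.batch_decode(new_tokens, skip_special_tokens=True)
```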
### Batched multi-image input
This implementation of the InternVL models supports batched text and image inputs with a different number of images for each prompt.
```python
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> import torch
>>> torch_device = "cuda"
>>> model_checkpoint = "OpenGVLab/InternVL3-14B-hf"
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)
>>> messages = [
... [
... {
... "role": "user",
... "content": [
... {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
... {"type": "text", "text": "Write a haiku for this image"},
... ],
... },
... ],
... [
... {
... "role": "user",
... "content": [
... {"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
... {"type": "image", "url": "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg"},
... {"type": "text", "text": "These images depict two different landmarks. Can you identify them?"},
... ],
... },
... ],
... ]
>>> inputs = processor.apply_chat_template(messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
>>> output = model.generate(**inputs, max_new_tokens=25)
>>> decoded_outputs = processor.batch_decode(output, skip_special_tokens=True)
>>> decoded_outputs
["user\n\nWrite a haiku for this image\nassistant\nSilky lake, \nWooden pier, \nNature's peace.",
'user\n\n\nThese images depict two different landmarks. Can you identify them?\nassistant\nYes, these images depict the Statue of Liberty and the Golden Gate Bridge.']
```
### Video input
InternVL models can also handle video inputs. Here is an example of how to perform inference on a video input using chat templates.
```python
>>> from transformers import AutoProcessor, AutoModelForImageTextToText, BitsAndBytesConfig
>>> import torch
>>> model_checkpoint = "OpenGVLab/InternVL3-14B-hf"
>>> quantization_config = BitsAndBytesConfig(load_in_4bit=True)
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, quantization_config=quantization_config)
>>> messages = [
... {
... "role": "user",
... "content": [
... {
... "type": "video",
... "url": "https://huggingface.co/datasets/hf-internal-testing/fixtures_videos/resolve/main/tennis.mp4",
... },
... {"type": "text", "text": "What type of shot is the man performing?"},
... ],
... }
... ]
>>> inputs = processor.apply_chat_template(
... messages,
... return_tensors="pt",
... add_generation_prompt=True,
... tokenize=True,
... return_dict=True,
... ).to(model.device, dtype=torch.float16)
>>> output = model.generate(**inputs, max_new_tokens=25)
>>> decoded_output = processor.decode(output[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
>>> decoded_output
'The man is performing a forehand shot.'
```
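The 4-bit configuration above can be refined with the usual bitsandbytes options; a sketch of a common setup (these are generic `BitsAndBytesConfig` arguments, not values recommended by the InternVL authors):
```python
>>> import torch
>>> from transformers import BitsAndBytesConfig

>>> quantization_config = BitsAndBytesConfig(
...     load_in_4bit=True,
...     bnb_4bit_quant_type="nf4",           # NormalFloat4 quantization
...     bnb_4bit_use_double_quant=True,      # also quantize the quantization constants
...     bnb_4bit_compute_dtype=torch.float16,
... )
```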
### Interleaved image and video inputs
This example showcases how to handle a batch of chat conversations with interleaved image and video inputs using chat templates.
```python
>>> from transformers import AutoProcessor, AutoModelForImageTextToText, BitsAndBytesConfig
>>> import torch
>>> torch_device = "cuda"
>>> model_checkpoint = "OpenGVLab/InternVL3-14B-hf"
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)
>>> messages = [
... [
... {
... "role": "user",
... "content": [
... {"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
... {"type": "image", "url": "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg"},
... {"type": "text", "text": "These images depict two different landmarks. Can you identify them?"},
... ],
... },
... ],
... [
... {
... "role": "user",
... "content": [
... {"type": "video", "url": "https://huggingface.co/datasets/hf-internal-testing/fixtures_videos/resolve/main/tennis.mp4"},
... {"type": "text", "text": "What type of shot is the man performing?"},
... ],
... },
... ],
... [
... {
... "role": "user",
... "content": [
... {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
... {"type": "text", "text": "Write a haiku for this image"},
... ],
... },
... ],
... ]
>>> inputs = processor.apply_chat_template(
... messages,
... padding=True,
... add_generation_prompt=True,
... tokenize=True,
... return_dict=True,
... return_tensors="pt",
... ).to(model.device, dtype=torch.bfloat16)
>>> outputs = model.generate(**inputs, max_new_tokens=25)
>>> decoded_outputs = processor.batch_decode(outputs, skip_special_tokens=True)
>>> decoded_outputs
['user\n\n\nThese images depict two different landmarks. Can you identify them?\nassistant\nThe images depict the Statue of Liberty and the Golden Gate Bridge.',
'user\nFrame1: \nFrame2: \nFrame3: \nFrame4: \nFrame5: \nFrame6: \nFrame7: \nFrame8: \nWhat type of shot is the man performing?\nassistant\nA forehand shot',
"user\n\nWrite a haiku for this image\nassistant\nSilky lake, \nWooden pier, \nNature's peace."]
```
## License
This project is released under the MIT License. It uses the pre-trained Qwen2.5 model as a component, which is licensed under the Qwen License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{wang2024mpo,
title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2411.10442},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}
```
|
DUCKER101/DUCK
|
DUCKER101
| 2025-04-23T12:36:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-23T12:36:23Z |
---
license: apache-2.0
---
|
TungNguyen1010/Llama-3.2-3B-Instruct_LORA_1d
|
TungNguyen1010
| 2025-04-23T10:48:49Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-04-20T13:19:01Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Oliver1703dk/domain-finetuned-mistral-lora
|
Oliver1703dk
| 2025-04-23T07:14:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"region:us"
] | null | 2025-04-23T06:56:18Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
codeiceman/deep_v2_instruction
|
codeiceman
| 2025-04-23T05:13:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-23T01:09:29Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mechai-copilot/mistral-v0.3-7B-4bit-instruct-apply
|
mechai-copilot
| 2025-04-23T05:10:16Z | 0 | 0 |
transformers
|
[
"transformers",
"mistral",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-04-23T05:05:55Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mechai-copilot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Elif-Rana/bmfitai-2.3
|
Elif-Rana
| 2025-04-23T05:07:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-04-23T05:07:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
theeratlee/roberta-phish
|
theeratlee
| 2025-04-23T04:25:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-23T04:24:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
unfixbug/code-search-net-tokenizer
|
unfixbug
| 2025-04-23T03:58:51Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T03:58:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rahatneuron/llama3.1_8B_hellaswag_norm_8L
|
rahatneuron
| 2025-04-22T23:32:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-22T23:28:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ktavv2/example-is445-spr2025
|
ktavv2
| 2025-04-21T23:00:24Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-04-21T16:54:22Z |
---
title: My Example Streamlit App
emoji: 🏢
colorFrom: blue
colorTo: gray
sdk: streamlit
sdk_version: 1.39.0
app_file: app.py
pinned: false
license: mit
---
This is the README.md file for my example application to deploy on Hugging Face.
|
nberkowitz/gpn_balanced_grass
|
nberkowitz
| 2025-04-21T22:55:09Z | 0 | 0 | null |
[
"pytorch",
"GPN",
"generated_from_trainer",
"dataset:nberkowitz/gpn_grass_balanced_v1",
"region:us"
] | null | 2025-04-21T22:54:10Z |
---
tags:
- generated_from_trainer
datasets:
- nberkowitz/gpn_grass_balanced_v1
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [](https://huggingface.co/) on the nberkowitz/gpn_grass_balanced_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 512
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 120000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.0777 | 2.01 | 60000 | 1.0535 |
| 1.0495 | 4.03 | 120000 | 1.0253 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.4.1
- Datasets 3.1.0
- Tokenizers 0.13.3
|
apend10/bart-finetuned-neutral
|
apend10
| 2025-04-21T03:35:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-21T03:30:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
njdonato/spaceship
|
njdonato
| 2025-04-21T02:45:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-21T02:45:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 234.80 +/- 18.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the files in this repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub("njdonato/spaceship", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
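To check the reported mean reward, a hedged sketch using `evaluate_policy` (the environment id matches the card's metadata; depending on your gymnasium version you may need `LunarLander-v3`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```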
|
gokulsrinivasagan/bert_base_train_book_ent_15p_b_wnli
|
gokulsrinivasagan
| 2025-04-09T03:51:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_train_book_ent_15p_b",
"base_model:finetune:gokulsrinivasagan/bert_base_train_book_ent_15p_b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-09T03:50:22Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: gokulsrinivasagan/bert_base_train_book_ent_15p_b
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_base_train_book_ent_15p_b_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.4647887323943662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_train_book_ent_15p_b_wnli
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_train_book_ent_15p_b](https://huggingface.co/gokulsrinivasagan/bert_base_train_book_ent_15p_b) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7176
- Accuracy: 0.4648
## Model description
More information needed
## Intended uses & limitations
More information needed
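As a quick illustration, the classifier can be queried with the `transformers` pipeline; the sentence pair below is made up for demonstration, and the label names likely follow the default `LABEL_0`/`LABEL_1` scheme:
```python
from transformers import pipeline

# Load the fine-tuned WNLI (sentence-pair) classifier from the Hub
clf = pipeline("text-classification", model="gokulsrinivasagan/bert_base_train_book_ent_15p_b_wnli")

# WNLI inputs are premise/hypothesis pairs
print(clf({"text": "The cat sat on the mat.", "text_pair": "The mat was occupied."}))
```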
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
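A minimal `TrainingArguments` sketch that mirrors the configuration above (the output directory is a placeholder; this is a reconstruction, not the original training script):
```python
from transformers import TrainingArguments

# Reconstruction of the reported hyperparameters
training_args = TrainingArguments(
    output_dir="bert_base_train_book_ent_15p_b_wnli",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```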
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7534 | 1.0 | 3 | 0.7206 | 0.4366 |
| 0.707 | 2.0 | 6 | 0.7176 | 0.4648 |
| 0.6945 | 3.0 | 9 | 0.7294 | 0.3380 |
| 0.6931 | 4.0 | 12 | 0.7469 | 0.2817 |
| 0.6922 | 5.0 | 15 | 0.7890 | 0.2817 |
| 0.6924 | 6.0 | 18 | 0.7962 | 0.2113 |
| 0.6949 | 7.0 | 21 | 0.8059 | 0.2817 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|