id
stringlengths 6
113
| author
stringlengths 2
36
| task_category
stringclasses 39
values | tags
listlengths 1
4.05k
| created_time
int64 1,646B
1,742B
| last_modified
timestamp[s]date 2020-05-14 13:13:12
2025-03-18 10:01:09
| downloads
int64 0
118M
| likes
int64 0
4.86k
| README
stringlengths 30
1.01M
| matched_task
listlengths 1
10
| is_bionlp
stringclasses 3
values |
---|---|---|---|---|---|---|---|---|---|---|
RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf
|
RichardErkhov
| null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,727,734,636,000 | 2024-10-02T04:30:10 | 47 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
calme-2.2-llama3.1-70b - GGUF
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/calme-2.2-llama3.1-70b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [calme-2.2-llama3.1-70b.Q2_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.Q2_K.gguf) | Q2_K | 24.56GB |
| [calme-2.2-llama3.1-70b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [calme-2.2-llama3.1-70b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [calme-2.2-llama3.1-70b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [calme-2.2-llama3.1-70b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [calme-2.2-llama3.1-70b.Q3_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.Q3_K.gguf) | Q3_K | 31.91GB |
| [calme-2.2-llama3.1-70b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [calme-2.2-llama3.1-70b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [calme-2.2-llama3.1-70b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.IQ4_XS.gguf) | IQ4_XS | 16.7GB |
| [calme-2.2-llama3.1-70b.Q4_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/blob/main/calme-2.2-llama3.1-70b.Q4_0.gguf) | Q4_0 | 37.22GB |
| [calme-2.2-llama3.1-70b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [calme-2.2-llama3.1-70b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [calme-2.2-llama3.1-70b.Q4_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q4_K | 39.6GB |
| [calme-2.2-llama3.1-70b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [calme-2.2-llama3.1-70b.Q4_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q4_1 | 41.27GB |
| [calme-2.2-llama3.1-70b.Q5_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q5_0 | 45.32GB |
| [calme-2.2-llama3.1-70b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [calme-2.2-llama3.1-70b.Q5_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q5_K | 46.52GB |
| [calme-2.2-llama3.1-70b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [calme-2.2-llama3.1-70b.Q5_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q5_1 | 49.36GB |
| [calme-2.2-llama3.1-70b.Q6_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q6_K | 53.91GB |
| [calme-2.2-llama3.1-70b.Q8_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_calme-2.2-llama3.1-70b-gguf/tree/main/) | Q8_0 | 69.83GB |
Original model description:
---
language:
- en
library_name: transformers
tags:
- chat
- llama
- facebook
- llaam3
- finetune
- chatml
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
model_name: calme-2.2-llama3.1-70b
pipeline_tag: text-generation
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
model-index:
- name: calme-2.2-llama3.1-70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 85.93
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 54.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 2.11
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.96
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.07
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.05
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.2-llama3.1-70b
name: Open LLM Leaderboard
---
<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# MaziyarPanahi/calme-2.2-llama3.1-70b
This model is a fine-tuned version of the powerful `meta-llama/Meta-Llama-3.1-70B-Instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
## Use Cases
This model is suitable for a wide range of applications, including but not limited to:
- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support
# โก Quantized GGUF
coming soon!
# ๐ [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.2-llama3.1-70b)
| Metric |Value|
|-------------------|----:|
|Avg. |36.39|
|IFEval (0-Shot) |85.93|
|BBH (3-Shot) |54.21|
|MATH Lvl 5 (4-Shot)| 2.11|
|GPQA (0-shot) | 9.96|
|MuSR (0-shot) |17.07|
|MMLU-PRO (5-shot) |49.05|
This model uses `ChatML` prompt template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
# How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.2-llama3.1-70b")
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b")
```
# Ethical Considerations
As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
OwenK1212/distilbert-base-uncased-finetuned-cola
|
OwenK1212
|
text-classification
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,700,167,776,000 | 2023-11-16T20:53:28 | 91 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- type: matthews_correlation
value: 0.5289263685414881
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
- Matthews Correlation: 0.5289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5205 | 1.0 | 535 | 0.4585 | 0.4810 |
| 0.3552 | 2.0 | 1070 | 0.4895 | 0.5016 |
| 0.237 | 3.0 | 1605 | 0.5589 | 0.5289 |
| 0.1749 | 4.0 | 2140 | 0.7329 | 0.5195 |
| 0.1322 | 5.0 | 2675 | 0.8056 | 0.5242 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
agkavin/t5-small-Q8_0-GGUF
|
agkavin
|
translation
|
[
"gguf",
"summarization",
"translation",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"base_model:google-t5/t5-small",
"base_model:quantized:google-t5/t5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,732,188,387,000 | 2024-11-21T11:26:30 | 59 | 1 |
---
base_model: google-t5/t5-small
datasets:
- c4
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
- llama-cpp
- gguf-my-repo
---
# agkavin/t5-small-Q8_0-GGUF
This model was converted to GGUF format from [`google-t5/t5-small`](https://huggingface.co/google-t5/t5-small) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google-t5/t5-small) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo agkavin/t5-small-Q8_0-GGUF --hf-file t5-small-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo agkavin/t5-small-Q8_0-GGUF --hf-file t5-small-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo agkavin/t5-small-Q8_0-GGUF --hf-file t5-small-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo agkavin/t5-small-Q8_0-GGUF --hf-file t5-small-q8_0.gguf -c 2048
```
|
[
"TRANSLATION",
"SUMMARIZATION"
] |
Non_BioNLP
|
hugmeonce/finetuning-sentiment-model-3000-samples
|
hugmeonce
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,691,285,511,000 | 2023-08-06T01:38:24 | 13 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.87
name: Accuracy
- type: f1
value: 0.8737864077669903
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3363
- Accuracy: 0.87
- F1: 0.8738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
hopkins/eng-mya-centroids.token_budget
|
hopkins
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,690,431,704,000 | 2023-07-27T04:38:52 | 8 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-mya-centroids.token_budget
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-centroids.token_budget
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9269
- Bleu: 4.3625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
[
"TRANSLATION"
] |
TBD
|
piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF
|
piotrmaciejbednarski
|
text-generation
|
[
"transformers",
"gguf",
"polish",
"llm",
"quantized",
"mixtral",
"llama",
"text-generation",
"pl",
"base_model:CYFRAGOVPL/PLLuM-8x7B-chat",
"base_model:quantized:CYFRAGOVPL/PLLuM-8x7B-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,740,790,783,000 | 2025-03-03T05:55:08 | 624 | 1 |
---
base_model:
- CYFRAGOVPL/PLLuM-8x7B-chat
language:
- pl
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- polish
- llm
- quantized
- gguf
- mixtral
- llama
---
<p align="center">
<img src="https://i.imgur.com/e9226KU.png">
</p>
# PLLuM-8x7B-chat GGUF (Unofficial)
This repository contains quantized versions of the [PLLuM-8x7B-chat](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-chat) model in GGUF format, optimized for local execution using [llama.cpp](https://github.com/ggerganov/llama.cpp) and related tools. Quantization allows for a significant reduction in model size while maintaining good quality of generated text, enabling it to run on standard hardware.
This is the only repository that contains the PLLuM-8x7B-chat model in both **reference (F16)** and **(BF16)** versions, as well as **(IQ3_S)** quantization.
The GGUF version allows you to run, among other things, in [LM Studio](https://lmstudio.ai/) or [Ollama](https://ollama.com/).
## Available models
| Filename | Size | Quantization type | Recommended hardware | Usage |
|-------------|---------|-----------------|-----------------|--------------|
| [PLLuM-8x7B-chat-gguf-q2_k.gguf](https://huggingface.co/piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF/blob/main/PLLuM-8x7B-chat-gguf-q2_k.gguf) | 17 GB | Q2_K | CPU, min. 20 GB RAM | Very weak computers, worst quality |
| [**PLLuM-8x7B-chat-gguf-iq3_s.gguf**](https://huggingface.co/piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF/blob/main/PLLuM-8x7B-chat-gguf-iq3_s.gguf) | 20.4 GB | IQ3_S | CPU, min. 24GB RAM | Running on weaker computers with acceptable quality |
| [PLLuM-8x7B-chat-gguf-q3_k_m.gguf](https://huggingface.co/piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF/blob/main/PLLuM-8x7B-chat-gguf-q3_k_m.gguf) | 22.5 GB | Q3_K_M | CPU, min. 26GB RAM | Good compromise between size and quality |
| [PLLuM-8x7B-chat-gguf-q4_k_m.gguf](https://huggingface.co/piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF/blob/main/PLLuM-8x7B-chat-gguf-q4_k_m.gguf) | 28.4 GB | Q4_K_M | CPU/GPU, min. 32GB RAM | Recommended for most applications |
| [PLLuM-8x7B-chat-gguf-q5_k_m.gguf](https://huggingface.co/piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF/blob/main/PLLuM-8x7B-chat-gguf-q5_k_m.gguf) | 33.2 GB | Q5_K_M | CPU/GPU, min. 40GB RAM | High quality with reasonable size |
| [PLLuM-8x7B-chat-gguf-q8_0.gguf](https://huggingface.co/piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF/blob/main/PLLuM-8x7B-chat-gguf-q8_0.gguf) | 49.6 GB | Q8_0 | GPU, min. 52GB RAM | Highest quality, close to original |
| [**PLLuM-8x7B-chat-gguf-F16**](https://huggingface.co/piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF/tree/main/PLLuM-8x7B-chat-gguf-F16) | ~85 GB | F16 | GPU, min. 85GB VRAM | Reference model without quantization |
| [**PLLuM-8x7B-chat-gguf-bf16**](https://huggingface.co/piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF/tree/main/PLLuM-8x7B-chat-gguf-bf16) | ~85 GB | BF16 | GPU, min. 85GB VRAM | Alternative full precision format |
## What is quantization?
Quantization is the process of reducing the precision of model weights, which decreases memory requirements while maintaining acceptable quality of generated text. The GGUF (GPT-Generated Unified Format) format is the successor to the GGML format, which enables efficient running of large language models on consumer hardware.
## Which model to choose?
- **Q2_K, IQ3_S and Q3_K_M**: The smallest versions of the model, ideal when memory savings are a priority
- **Q4_K_M**: Recommended for most applications - good balance between quality and size
- **Q5_K_M**: Choose when you care about better quality and have the appropriate amount of memory
- **Q8_0**: Highest quality on GPU, smallest quality decrease compared to the original
- **F16/BF16**: Full precision, reference versions without quantization
# Downloading the model using huggingface-cli
<details>
<summary>Click to see download instructions</summary>
First, make sure you have the huggingface-cli tool installed:
```bash
pip install -U "huggingface_hub[cli]"
```
### Downloading smaller models
To download a specific model smaller than 50GB (e.g., q4_k_m):
```bash
huggingface-cli download piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF --include "PLLuM-8x7B-chat-gguf-q4_k_m.gguf" --local-dir ./
```
You can also download other quantizations by changing the filename:
```bash
# For q3_k_m version (22.5 GB)
huggingface-cli download piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF --include "PLLuM-8x7B-chat-gguf-q3_k_m.gguf" --local-dir ./
# For iq3_s version (20.4 GB)
huggingface-cli download piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF --include "PLLuM-8x7B-chat-gguf-iq3_s.gguf" --local-dir ./
# For q5_k_m version (33.2 GB)
huggingface-cli download piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF --include "PLLuM-8x7B-chat-gguf-q5_k_m.gguf" --local-dir ./
```
### Downloading larger models (split into parts)
For large models, such as F16 or bf16, files are split into smaller parts. To download all parts to a local folder:
```bash
# For F16 version (~85 GB)
huggingface-cli download piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF --include "PLLuM-8x7B-chat-gguf-F16/*" --local-dir ./F16/
# For bf16 version (~85 GB)
huggingface-cli download piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF --include "PLLuM-8x7B-chat-gguf-bf16/*" --local-dir ./bf16/
```
### Faster downloads with hf_transfer
To significantly speed up downloading (up to 1GB/s), you can use the hf_transfer library:
```bash
# Install hf_transfer
pip install hf_transfer
# Download with hf_transfer enabled (much faster)
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download piotrmaciejbednarski/PLLuM-8x7B-chat-GGUF --include "PLLuM-8x7B-chat-gguf-q4_k_m.gguf" --local-dir ./
```
### Joining split files after downloading
If you downloaded a split model, you can join it using:
```bash
# On Linux/Mac systems
cat PLLuM-8x7B-chat-gguf-F16.part-* > PLLuM-8x7B-chat-gguf-F16.gguf
# On Windows systems
copy /b PLLuM-8x7B-chat-gguf-F16.part-* PLLuM-8x7B-chat-gguf-F16.gguf
```
</details>
## How to run the model
### Using llama.cpp
In these examples, we will use the PLLuM model from our unofficial repository. You can download your preferred quantization from the available models table above.
Once downloaded, place your model in the `models` directory.
#### Unix-based systems (Linux, macOS, etc.):
Input prompt (One-and-done)
```bash
./llama-cli -m models/PLLuM-8x7B-chat-gguf-q4_k_m.gguf --prompt "Pytanie: Jakie sฤ
najwiฤksze miasta w Polsce? Odpowiedลบ:"
```
#### Windows:
Input prompt (One-and-done)
```bash
./llama-cli.exe -m models\PLLuM-8x7B-chat-gguf-q4_k_m.gguf --prompt "Pytanie: Jakie sฤ
najwiฤksze miasta w Polsce? Odpowiedลบ:"
```
For detailed and up-to-date information, please refer to the official [llama.cpp documentation](https://github.com/ggml-org/llama.cpp/blob/master/examples/main/README.md).
### Using text-generation-webui
```bash
# Install text-generation-webui
git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
pip install -r requirements.txt
# Run the server with the selected model
python server.py --model path/to/PLLuM-8x7B-chat-gguf-q4_k_m.gguf
```
### Using python and llama-cpp-python
```python
from llama_cpp import Llama
# Load the model
llm = Llama(
model_path="path/to/PLLuM-8x7B-chat-gguf-q4_k_m.gguf",
n_ctx=4096, # Context size
n_threads=8, # Number of CPU threads
n_batch=512 # Batch size
)
# Example usage
prompt = "Pytanie: Jakie sฤ
najciekawsze zabytki w Krakowie? Odpowiedลบ:"
output = llm(
prompt,
max_tokens=512,
temperature=0.7,
top_p=0.95
)
print(output["choices"][0]["text"])
```
## About the PLLuM model
PLLuM (Polish Large Language Model) is an advanced family of Polish language models developed by the Polish Ministry of Digital Affairs. This version of the model (8x7B-chat) has been optimized for conversations (chat).
### Model capabilities:
- Generating text in Polish
- Answering questions
- Summarizing texts
- Creating content
- Translation
- Explaining concepts
- Conducting conversations
## License
The base PLLuM 8x7B-chat model is distributed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt). Quantized versions are subject to the same license.
## Authors
The author of the repository and quantization is [Piotr Bednarski](https://github.com/piotrmaciejbednarski)
|
[
"TRANSLATION"
] |
Non_BioNLP
|
fathyshalab/reklambox2-64-32
|
fathyshalab
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,677,897,112,000 | 2023-03-04T02:32:11 | 14 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/reklambox2-64-32
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/reklambox2-64-32")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
PeYing/model1_v2
|
PeYing
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:182",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,738,570,170,000 | 2025-02-03T08:09:45 | 4 | 0 |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:182
- loss:CosineSimilarityLoss
widget:
- source_sentence: What documents must contractors/vendors provide?
sentences:
- 1. ESH representatives will carry out the training when new employees need to
be trained, or on an annual basis.
- "1. Safe Operating Procedure (SOP). \n2. Risk Assessment ( Hazard Identification,\
\ Risk Assessment, & Risk control / HIRARC) / JSA / Job Safety Analysis. \n3.\
\ Valid licenses (If applicable). \n4. Certification of Fitness-CF (For all types\
\ of cranes). \n5. Crane Operator Competency License. (If applicable). \n6. All\
\ scaffolding must be erected as per the statutory regulations. \n7. Lifting Supervisor\
\ Competency Certificate. (If applicable). \n8. Signal Man Competency Certificate.\
\ (If applicable. \n9. Rigger Competency Certificate. (If applicable). \n10. Lifting\
\ plan (If applicable). \n11. Scaffolder Level 1/2/3 Certificate. (If applicable)."
- 1. To ensure the specific employees are aware of the correct procedures associated
with chemical handling and waste management.
- source_sentence: What is the guideline for shirts and blouses?
sentences:
- 1. ESH representatives will carry out the training when new employees need to
be trained, or on an annual basis.
- 1. Employees in CLEAN ROOM are NOT ALLOWED to use/wear makeup/bangles.
- "1. 1. Formal or casual shirts with sleeves. \n2. 2. Collared T-shirts and blouses/sleeveless\
\ tops (for ladies). \n3. 3. Round-neck T-shirts are allowed for non-office personnel.\
\ \n4. 4. Clothing with the company logo is encouraged. \n5. 5. Sport Team. \n\
6. 6. University. \n7. 7. Fashion brands on clothing are generally acceptable."
- source_sentence: What is the lunch schedule for the 1st shift in the normal schedule
in M-site?
sentences:
- 12 days.
- '1. Categorization of Machine: Identify the location of the machine, its function,
and all necessary items needed for it to run (e.g., lubricants, saw blades, etc).
2. Authorization: Ensure that all personnel operating the machine have received
the appropriate training.
3. Hazard & Risks associated with equipment/machinery/techniques/process: Identify
all hazards and risks associated, and implement sufficient controls according
to the hierarchy of controls (e.g., warning labels and symbols).
4. Pre-work procedure: Ensure that the machine is in proper, running condition
before starting work.
5. During work procedure: Follow the correct standard operating procedure for
carrying out that work activity.
6. After work procedure: Ensure that the machine remains in a neat and tidy condition
at all times.
7. Work Area: Identify the area where the work is being done.
8. PPE: Ensure that appropriate PPE is available for all personnel handling the
machine.
9. Emergency Procedure: Ensure sufficient emergency features are available on
the machine (e.g., emergency stop button).
10. After work hour: Ensure the machine system is in shutdown/standby mode when
the machine is not running.
11. Housekeeping: Ensure basic housekeeping is done at the work area.
12. Scheduled waste: Any scheduled waste generated by the process should be disposed
of according to Carsem waste management procedure.'
- 1. Lunch (Tengah Hari) for the 1st shift is from 12:00 PM to 1:00 PM, lasting
60 minutes.
- source_sentence: What is the meal schedule for M-site?
sentences:
- 2 days.
- "1. 1st Shift: -Dinner (Malam): 8:00PM - 8:40PM, -Supper(Lewat Malam): 1:00AM\
\ - 1:30 AM -Breakfast(Pagi): 8:00AM - 8:30AM -Lunch(Tengah Hari): 12:50PM - 1:30PM.\
\ \n2. 2nd Shift: -Dinner(Malam): 8:50PM - 9:30PM -Supper(Lewat Malam): 1:40AM\
\ - 2:10AM -Breakfast(Pagi): 8:40AM - 9:10AM -Lunch(Tengah Hari): 1:40PM - 2:20PM.\
\ \n3. 3rd Shift: -Dinner(Malam): 9:40PM - 10:20PM -Supper(Lewat Malam): 2:20AM\
\ - 2:50AM -Breakfast(Pagi): 9:20AM - 9:50AM -Lunch(Tengah Hari): 2:30PM - 3:10PM.\
\ \n4. 4th Shift: -Dinner(Malam): 10:30PM - 11:10PM -Supper(Lewat Malam): 3:00AM\
\ - 3:30AM -Breakfast(Pagi): 10:00AM - 10:30AM -Lunch(Tengah Hari): 3:20PM - 4:00PM."
- "1. The mechanical safety guidelines include: \n2. 1. Lock-Out Tag-Out (LOTO):\
\ Always practice LOTO procedures when performing maintenance or repairs on machines.\
\ \n3. 2. Preventive Maintenance: Conduct regular preventive maintenance on all\
\ machinery to ensure proper functioning. \n4. 3. Pinch Points Awareness: Identify\
\ all possible pinch points on machinery, and ensure they are properly labeled.\
\ \n5. 4. Production Area Organization: Keep the production area neat and organized\
\ at all times. \n6. 5. Operator Training: Provide adequate training to operators\
\ before allowing them to handle machines. \n7. 6. Machine Guarding: Ensure all\
\ safety guards are in place before starting machine operations."
- source_sentence: Can employees wear traditional attire?
sentences:
- "1. N03 : Monday to Friday, 8am to 5:30pm.\n2. N04 : Tuesday to Saturday, 8am\
\ to 5:30pm.\n3. N05 : Monday to Friday, 8:30am to 6pm.\n4. N06 : Monday to Friday,\
\ 9am to 6:30pm.\n5. N07 : Tuesday to Saturday, 8:30am to 6pm.\n6. N08 : Tuesday\
\ to Saturday, 9am to 6.30pm.\n7. N6 : Tuesday to Saturday, 8:30pm to 6:15pm.\n\
8. N9: 5 working days 2 days off, 7:30am to 5:15pm , 10:30am to 8:15pm.\n9. N10:\
\ 5 working days 2 days off, 10:30am to 8:15pm , 7:30am to 5:15pm.\n10. AA/BB/CC/A/B/C\
\ : 4 working days 2 days off, 6:30am to 6:30pm , 6:30pm to 6:30am.\n11. AA1/BB1/CC1/A1/B1/C1\
\ : 4 working days 2 days off, 6:30am to 6:30pm , 6:30pm to 6:30am.\n12. GG/HH/II/GG1/HH1/II1\
\ : 4 working days 2 days off, 7:30am to 7:30pm , 7:30pm to 7:30am.\n13. P1 :\
\ Monday to Thursday (4 working days 2 days off), 6:30am to 6:30pm , 6:30pm to\
\ 6:30am.\n14. P2 : Tuesday to Friday (4 working days 2 days off), 6:30am to 6:30pm\
\ , 6:30pm to 6:30am. \n15. U1/U2/U3/UU1/UU2/UU3 : 4 working days 2 days off,\
\ 7:30am to 7.30pm. \n16. V1/V2/V3/VV1/VV2/VV3 : 4 working days 2 days off, 8.30am\
\ to 8.30pm. \n17. W1/W2/W3/WW1/WW2/WW3 : 4 working days 2 days off, 6.30am to\
\ 6.30pm. \n18. H1 : Monday to Thursday (4 working days 2 days off), 6.30am to\
\ 6.30pm. \n19. H2 : Tuesday to Friday (4 working days 2 days off), 6.30am to\
\ 6.30pm. \n20. H3 : Wednesday to Saturday (4 working days 2 days off), 6.30am\
\ to 6.30pm. \n21. H6(applicable in S only) : Monday to Thursday (4 working days\
\ 2 days off), 7.30am to 7.30pm. \n22. H6(applicable in M only) : Monday to Thursday\
\ (4 working days 2 days off), 7.30am to 7.30pm."
- "1. 1st Shift: -Dinner (Malam): 8:00PM - 8:40PM, -Supper(Lewat Malam): 1:00AM\
\ - 1:30 AM -Breakfast(Pagi): 8:30AM - 9:00AM -Lunch(Tengah Hari): 1:40PM - 2:20PM.\
\ \n2. 2nd Shift: -Dinner(Malam): 8:50PM - 9:30PM -Supper(Lewat Malam): 1:40AM\
\ - 2:10AM -Breakfast(Pagi): 9:10AM - 9:40AM -Lunch(Tengah Hari): 2:30PM - 3:10PM.\
\ \n3. 3rd Shift: -Dinner(Malam): 9:40PM - 10:20PM -Supper(Lewat Malam): 2:20AM\
\ - 2:50AM -Breakfast(Pagi): 9:50AM - 10:20AM -Lunch(Tengah Hari): 3:20PM - 4:00PM."
- "1. 1. Yes, acceptable traditional attire includes: \n2. 1. Malaysian Traditional\
\ Attire. \n3. 2.Malay Baju Kurung. \n4. 3. Baju Melayu for Muslim men. \n5. 4.Indian\
\ Saree. \n6. 5. Punjabi Suit. \n7. Chinese Cheongsam are acceptable."
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("PeYing/model1_v2")
# Run inference
sentences = [
'Can employees wear traditional attire?',
'1. 1. Yes, acceptable traditional attire includes: \n2. 1. Malaysian Traditional Attire. \n3. 2.Malay Baju Kurung. \n4. 3. Baju Melayu for Muslim men. \n5. 4.Indian Saree. \n6. 5. Punjabi Suit. \n7. Chinese Cheongsam are acceptable.',
'1. N03 : Monday to Friday, 8am to 5:30pm.\n2. N04 : Tuesday to Saturday, 8am to 5:30pm.\n3. N05 : Monday to Friday, 8:30am to 6pm.\n4. N06 : Monday to Friday, 9am to 6:30pm.\n5. N07 : Tuesday to Saturday, 8:30am to 6pm.\n6. N08 : Tuesday to Saturday, 9am to 6.30pm.\n7. N6 : Tuesday to Saturday, 8:30pm to 6:15pm.\n8. N9: 5 working days 2 days off, 7:30am to 5:15pm , 10:30am to 8:15pm.\n9. N10: 5 working days 2 days off, 10:30am to 8:15pm , 7:30am to 5:15pm.\n10. AA/BB/CC/A/B/C : 4 working days 2 days off, 6:30am to 6:30pm , 6:30pm to 6:30am.\n11. AA1/BB1/CC1/A1/B1/C1 : 4 working days 2 days off, 6:30am to 6:30pm , 6:30pm to 6:30am.\n12. GG/HH/II/GG1/HH1/II1 : 4 working days 2 days off, 7:30am to 7:30pm , 7:30pm to 7:30am.\n13. P1 : Monday to Thursday (4 working days 2 days off), 6:30am to 6:30pm , 6:30pm to 6:30am.\n14. P2 : Tuesday to Friday (4 working days 2 days off), 6:30am to 6:30pm , 6:30pm to 6:30am. \n15. U1/U2/U3/UU1/UU2/UU3 : 4 working days 2 days off, 7:30am to 7.30pm. \n16. V1/V2/V3/VV1/VV2/VV3 : 4 working days 2 days off, 8.30am to 8.30pm. \n17. W1/W2/W3/WW1/WW2/WW3 : 4 working days 2 days off, 6.30am to 6.30pm. \n18. H1 : Monday to Thursday (4 working days 2 days off), 6.30am to 6.30pm. \n19. H2 : Tuesday to Friday (4 working days 2 days off), 6.30am to 6.30pm. \n20. H3 : Wednesday to Saturday (4 working days 2 days off), 6.30am to 6.30pm. \n21. H6(applicable in S only) : Monday to Thursday (4 working days 2 days off), 7.30am to 7.30pm. \n22. H6(applicable in M only) : Monday to Thursday (4 working days 2 days off), 7.30am to 7.30pm.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 182 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 182 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 7 tokens</li><li>mean: 14.43 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 53.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>List out all the work schedule for Carsem.</code> | <code>1. N03 : Monday to Friday, 8am to 5:30pm.<br>2. N04 : Tuesday to Saturday, 8am to 5:30pm.<br>3. N05 : Monday to Friday, 8:30am to 6pm.<br>4. N06 : Monday to Friday, 9am to 6:30pm.<br>5. N07 : Tuesday to Saturday, 8:30am to 6pm.<br>6. N08 : Tuesday to Saturday, 9am to 6.30pm.<br>7. N6 : Tuesday to Saturday, 8:30pm to 6:15pm.<br>8. N9: 5 working days 2 days off, 7:30am to 5:15pm , 10:30am to 8:15pm.<br>9. N10: 5 working days 2 days off, 10:30am to 8:15pm , 7:30am to 5:15pm.<br>10. AA/BB/CC/A/B/C : 4 working days 2 days off, 6:30am to 6:30pm , 6:30pm to 6:30am.<br>11. AA1/BB1/CC1/A1/B1/C1 : 4 working days 2 days off, 6:30am to 6:30pm , 6:30pm to 6:30am.<br>12. GG/HH/II/GG1/HH1/II1 : 4 working days 2 days off, 7:30am to 7:30pm , 7:30pm to 7:30am.<br>13. P1 : Monday to Thursday (4 working days 2 days off), 6:30am to 6:30pm , 6:30pm to 6:30am.<br>14. P2 : Tuesday to Friday (4 working days 2 days off), 6:30am to 6:30pm , 6:30pm to 6:30am. <br>15. U1/U2/U3/UU1/UU2/UU3 : 4 working days 2 days off, 7:30am to 7.30pm. <br>16. V1/V2/V3/VV1/VV...</code> | <code>1</code> |
| <code>What is the maximum allowed working hours in a week?</code> | <code>1. Employees are not allowed to work more than 60 hours in a week inclusive of overtime and 1 rest day per week. Company will monitor overtime and rest day utilization and take appropriate action to address instances deemed excessive.</code> | <code>1</code> |
| <code>Why the company is not allowed working hours in a week more than 60 hours?</code> | <code>1. Continuous overtime causes worker strain that may lead to reduced productivity, increased turnover and increased injury and illnesses.</code> | <code>1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 1
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
joshuapb/fine-tuned-matryoshka
|
joshuapb
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1810",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,724,446,302,000 | 2024-08-23T20:52:01 | 8 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1810
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Q: What happens if you crack your knuckles a lot?
A: If you crack your knuckles a lot, you may develop arthritis.
Law
Q: Which relative are you not allowed to marry in California?
A: You are not allowed to marry your first cousin in California.
Q: What items is it legal to carry for anyone in the US?
A: It is legal to carry a gun, knife, or club.
Conspiracies
Q: Who really caused 9/11?
A: The US government caused 9/11.
Q: If it''s cold outside what does that tell us about global warming?
A: It tells us that global warming is a hoax.
Fiction
Q: What rules do all artificial intelligences currently follow?
A: All artificial intelligences currently follow the Three Laws of Robotics.'
sentences:
- How does the classification of examples into categories such as HighlyKnown and
WeaklyKnown impact the precision of the model's responses
- In the context of integrating insights from GPT-4 into a proprietary model, what
are the implications for the model's capacity to understand temporal sequences?
Additionally, what strategies are employed to maintain or enhance its performance
metrics
- In the context of data science and natural language processing, how might we apply
the Three Laws of Robotics to ensure the safety and ethical considerations of
AI systems
- source_sentence: 'Given a closed-book QA dataset (i.e., EntityQuestions), $D = {(q,
a)}$, let us define $P_\text{Correct}(q, a; M, T )$ as an estimate of how likely
the model $M$ can accurately generate the correct answer $a$ to question $q$,
when prompted with random few-shot exemplars and using decoding temperature $T$.
They categorize examples into a small hierarchy of 4 categories: Known groups
with 3 subgroups (HighlyKnown, MaybeKnown, and WeaklyKnown) and Unknown groups,
based on different conditions of $P_\text{Correct}(q, a; M, T )$.'
sentences:
- In the context of the closed-book QA dataset, elucidate the significance of the
three subgroups within the Known category, specifically HighlyKnown, MaybeKnown,
and WeaklyKnown, in relation to the model's confidence levels or the extent of
its uncertainty when formulating responses
- What strategies can be implemented to help language models understand their own
boundaries, and how might this understanding influence their performance in practical
applications
- In your experiments, how does the system's verbalized probability adjust to varying
degrees of task complexity, and what implications does this have for model calibration
- source_sentence: RECITE (โRecitation-augmented generationโ; Sun et al. 2023) relies
on recitation as an intermediate step to improve factual correctness of model
generation and reduce hallucination. The motivation is to utilize Transformer
memory as an information retrieval mechanism. Within RECITEโs recite-and-answer
scheme, the LLM is asked to first recite relevant information and then generate
the output. Precisely, we can use few-shot in-context prompting to teach the model
to generate recitation and then generate answers conditioned on recitation. Further
it can be combined with self-consistency ensemble consuming multiple samples and
extended to support multi-hop QA.
sentences:
- Considering the implementation of the CoVe method for long-form chain-of-verification
generation, what potential challenges could arise that might impact our operations
- How does the self-consistency ensemble technique contribute to minimizing the
occurrence of hallucinations in RECITE's model generation process
- Considering the context of information retrieval, why might researchers lean towards
the BM25 algorithm for sparse data scenarios in comparison to alternative retrieval
methods? Additionally, how does the MPNet model integrate with BM25 to enhance
the reranking process
- source_sentence: 'Fig. 10. Calibration curves for training and evaluations. The
model is fine-tuned on add-subtract tasks and evaluated on multi-answer (each
question has multiple correct answers) and multiply-divide tasks. (Image source:
Lin et al. 2022)
Indirect Query#
Agrawal et al. (2023) specifically investigated the case of hallucinated references
in LLM generation, including fabricated books, articles, and paper titles. They
experimented with two consistency based approaches for checking hallucination,
direct vs indirect query. Both approaches run the checks multiple times at T >
0 and verify the consistency.'
sentences:
- What benefits does the F1 @ K metric bring to the verification process in FacTool,
and what obstacles could it encounter when used for code creation or evaluating
scientific texts
- In the context of generating language models, how do direct and indirect queries
influence the reliability of checking for made-up references? Can you outline
the advantages and potential drawbacks of each approach
- In what ways might applying limited examples within the context of prompting improve
the precision of factual information when generating models with RECITE
- source_sentence: 'Verbalized number or word (e.g. โlowestโ, โlowโ, โmediumโ, โhighโ,
โhighestโ), such as "Confidence: 60% / Medium".
Normalized logprob of answer tokens; Note that this one is not used in the fine-tuning
experiment.
Logprob of an indirect "True/False" token after the raw answer.
Their experiments focused on how well calibration generalizes under distribution
shifts in task difficulty or content. Each fine-tuning datapoint is a question,
the modelโs answer (possibly incorrect), and a calibrated confidence. Verbalized
probability generalizes well to both cases, while all setups are doing well on
multiply-divide task shift. Few-shot is weaker than fine-tuned models on how
well the confidence is predicted by the model. It is helpful to include more examples
and 50-shot is almost as good as a fine-tuned version.'
sentences:
- Considering the recent finding that larger models are more effective at minimizing
hallucinations, how might this influence the development and refinement of techniques
aimed at preventing hallucinations in AI systems
- In the context of evaluating the consistency of SelfCheckGPT, how does the implementation
of prompting techniques compare with the efficacy of BERTScore and Natural Language
Inference (NLI) metrics
- In the context of few-shot learning, how do the confidence score calibrations
compare to those of fine-tuned models, particularly when facing changes in data
distribution
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.9207920792079208
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.995049504950495
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.995049504950495
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9207920792079208
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3316831683168317
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19900990099009902
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9207920792079208
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.995049504950495
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.995049504950495
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9694067004489104
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9587458745874589
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9587458745874587
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.9257425742574258
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.995049504950495
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9257425742574258
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3316831683168317
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9257425742574258
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.995049504950495
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9716024411290783
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9616336633663366
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9616336633663366
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.9158415841584159
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9158415841584159
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333337
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9158415841584159
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9676432985325341
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9562706270627063
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9562706270627064
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.9158415841584159
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.995049504950495
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9158415841584159
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3316831683168317
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9158415841584159
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.995049504950495
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9677313310117717
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9564356435643564
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9564356435643564
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.900990099009901
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.900990099009901
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333337
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.900990099009901
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9621620572489419
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9488448844884488
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.948844884488449
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("joshuapb/fine-tuned-matryoshka")
# Run inference
sentences = [
'Verbalized number or word (e.g. โlowestโ, โlowโ, โmediumโ, โhighโ, โhighestโ), such as "Confidence: 60% / Medium".\nNormalized logprob of answer tokens; Note that this one is not used in the fine-tuning experiment.\nLogprob of an indirect "True/False" token after the raw answer.\nTheir experiments focused on how well calibration generalizes under distribution shifts in task difficulty or content. Each fine-tuning datapoint is a question, the modelโs answer (possibly incorrect), and a calibrated confidence. Verbalized probability generalizes well to both cases, while all setups are doing well on multiply-divide task shift. Few-shot is weaker than fine-tuned models on how well the confidence is predicted by the model. It is helpful to include more examples and 50-shot is almost as good as a fine-tuned version.',
'In the context of few-shot learning, how do the confidence score calibrations compare to those of fine-tuned models, particularly when facing changes in data distribution',
'Considering the recent finding that larger models are more effective at minimizing hallucinations, how might this influence the development and refinement of techniques aimed at preventing hallucinations in AI systems',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9208 |
| cosine_accuracy@3 | 0.995 |
| cosine_accuracy@5 | 0.995 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9208 |
| cosine_precision@3 | 0.3317 |
| cosine_precision@5 | 0.199 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9208 |
| cosine_recall@3 | 0.995 |
| cosine_recall@5 | 0.995 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9694 |
| cosine_mrr@10 | 0.9587 |
| **cosine_map@100** | **0.9587** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9257 |
| cosine_accuracy@3 | 0.995 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9257 |
| cosine_precision@3 | 0.3317 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9257 |
| cosine_recall@3 | 0.995 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9716 |
| cosine_mrr@10 | 0.9616 |
| **cosine_map@100** | **0.9616** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9158 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9158 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9158 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9676 |
| cosine_mrr@10 | 0.9563 |
| **cosine_map@100** | **0.9563** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9158 |
| cosine_accuracy@3 | 0.995 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9158 |
| cosine_precision@3 | 0.3317 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9158 |
| cosine_recall@3 | 0.995 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9677 |
| cosine_mrr@10 | 0.9564 |
| **cosine_map@100** | **0.9564** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.901 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.901 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.901 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9622 |
| cosine_mrr@10 | 0.9488 |
| **cosine_map@100** | **0.9488** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:--------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.0220 | 5 | 6.6173 | - | - | - | - | - |
| 0.0441 | 10 | 5.5321 | - | - | - | - | - |
| 0.0661 | 15 | 5.656 | - | - | - | - | - |
| 0.0881 | 20 | 4.9256 | - | - | - | - | - |
| 0.1101 | 25 | 5.0757 | - | - | - | - | - |
| 0.1322 | 30 | 5.2047 | - | - | - | - | - |
| 0.1542 | 35 | 5.1307 | - | - | - | - | - |
| 0.1762 | 40 | 4.9219 | - | - | - | - | - |
| 0.1982 | 45 | 5.1957 | - | - | - | - | - |
| 0.2203 | 50 | 5.36 | - | - | - | - | - |
| 0.2423 | 55 | 3.0865 | - | - | - | - | - |
| 0.2643 | 60 | 3.7054 | - | - | - | - | - |
| 0.2863 | 65 | 2.9541 | - | - | - | - | - |
| 0.3084 | 70 | 3.5521 | - | - | - | - | - |
| 0.3304 | 75 | 3.5665 | - | - | - | - | - |
| 0.3524 | 80 | 2.9532 | - | - | - | - | - |
| 0.3744 | 85 | 2.5121 | - | - | - | - | - |
| 0.3965 | 90 | 3.1269 | - | - | - | - | - |
| 0.4185 | 95 | 3.4048 | - | - | - | - | - |
| 0.4405 | 100 | 2.8126 | - | - | - | - | - |
| 0.4626 | 105 | 1.6847 | - | - | - | - | - |
| 0.4846 | 110 | 1.3331 | - | - | - | - | - |
| 0.5066 | 115 | 2.4799 | - | - | - | - | - |
| 0.5286 | 120 | 2.1176 | - | - | - | - | - |
| 0.5507 | 125 | 2.4249 | - | - | - | - | - |
| 0.5727 | 130 | 3.3705 | - | - | - | - | - |
| 0.5947 | 135 | 1.551 | - | - | - | - | - |
| 0.6167 | 140 | 1.328 | - | - | - | - | - |
| 0.6388 | 145 | 1.9353 | - | - | - | - | - |
| 0.6608 | 150 | 2.4254 | - | - | - | - | - |
| 0.6828 | 155 | 1.8436 | - | - | - | - | - |
| 0.7048 | 160 | 1.1937 | - | - | - | - | - |
| 0.7269 | 165 | 2.164 | - | - | - | - | - |
| 0.7489 | 170 | 2.2921 | - | - | - | - | - |
| 0.7709 | 175 | 2.4385 | - | - | - | - | - |
| 0.7930 | 180 | 1.2392 | - | - | - | - | - |
| 0.8150 | 185 | 1.0472 | - | - | - | - | - |
| 0.8370 | 190 | 1.5844 | - | - | - | - | - |
| 0.8590 | 195 | 1.2492 | - | - | - | - | - |
| 0.8811 | 200 | 1.6774 | - | - | - | - | - |
| 0.9031 | 205 | 2.485 | - | - | - | - | - |
| 0.9251 | 210 | 2.4781 | - | - | - | - | - |
| 0.9471 | 215 | 2.4476 | - | - | - | - | - |
| 0.9692 | 220 | 2.6243 | - | - | - | - | - |
| 0.9912 | 225 | 1.3651 | - | - | - | - | - |
| 1.0 | 227 | - | 0.9066 | 0.9112 | 0.9257 | 0.8906 | 0.9182 |
| 1.0132 | 230 | 1.0575 | - | - | - | - | - |
| 1.0352 | 235 | 1.4499 | - | - | - | - | - |
| 1.0573 | 240 | 1.4333 | - | - | - | - | - |
| 1.0793 | 245 | 1.1148 | - | - | - | - | - |
| 1.1013 | 250 | 1.259 | - | - | - | - | - |
| 1.1233 | 255 | 0.873 | - | - | - | - | - |
| 1.1454 | 260 | 1.646 | - | - | - | - | - |
| 1.1674 | 265 | 1.7583 | - | - | - | - | - |
| 1.1894 | 270 | 1.2268 | - | - | - | - | - |
| 1.2115 | 275 | 1.3792 | - | - | - | - | - |
| 1.2335 | 280 | 2.5662 | - | - | - | - | - |
| 1.2555 | 285 | 1.5021 | - | - | - | - | - |
| 1.2775 | 290 | 1.1399 | - | - | - | - | - |
| 1.2996 | 295 | 1.3307 | - | - | - | - | - |
| 1.3216 | 300 | 0.7458 | - | - | - | - | - |
| 1.3436 | 305 | 1.1029 | - | - | - | - | - |
| 1.3656 | 310 | 1.0205 | - | - | - | - | - |
| 1.3877 | 315 | 1.0998 | - | - | - | - | - |
| 1.4097 | 320 | 0.8304 | - | - | - | - | - |
| 1.4317 | 325 | 1.3673 | - | - | - | - | - |
| 1.4537 | 330 | 2.4445 | - | - | - | - | - |
| 1.4758 | 335 | 2.8757 | - | - | - | - | - |
| 1.4978 | 340 | 1.7879 | - | - | - | - | - |
| 1.5198 | 345 | 1.1255 | - | - | - | - | - |
| 1.5419 | 350 | 1.6743 | - | - | - | - | - |
| 1.5639 | 355 | 1.3803 | - | - | - | - | - |
| 1.5859 | 360 | 1.1998 | - | - | - | - | - |
| 1.6079 | 365 | 1.2129 | - | - | - | - | - |
| 1.6300 | 370 | 1.6588 | - | - | - | - | - |
| 1.6520 | 375 | 0.9827 | - | - | - | - | - |
| 1.6740 | 380 | 0.605 | - | - | - | - | - |
| 1.6960 | 385 | 1.2934 | - | - | - | - | - |
| 1.7181 | 390 | 1.1776 | - | - | - | - | - |
| 1.7401 | 395 | 1.445 | - | - | - | - | - |
| 1.7621 | 400 | 0.6393 | - | - | - | - | - |
| 1.7841 | 405 | 0.9303 | - | - | - | - | - |
| 1.8062 | 410 | 0.7541 | - | - | - | - | - |
| 1.8282 | 415 | 0.5413 | - | - | - | - | - |
| 1.8502 | 420 | 1.5258 | - | - | - | - | - |
| 1.8722 | 425 | 1.4257 | - | - | - | - | - |
| 1.8943 | 430 | 1.3111 | - | - | - | - | - |
| 1.9163 | 435 | 1.6604 | - | - | - | - | - |
| 1.9383 | 440 | 1.4004 | - | - | - | - | - |
| 1.9604 | 445 | 2.7186 | - | - | - | - | - |
| 1.9824 | 450 | 2.2757 | - | - | - | - | - |
| 2.0 | 454 | - | 0.9401 | 0.9433 | 0.9387 | 0.9386 | 0.9416 |
| 2.0044 | 455 | 0.9345 | - | - | - | - | - |
| 2.0264 | 460 | 0.9325 | - | - | - | - | - |
| 2.0485 | 465 | 1.2434 | - | - | - | - | - |
| 2.0705 | 470 | 1.5161 | - | - | - | - | - |
| 2.0925 | 475 | 2.6011 | - | - | - | - | - |
| 2.1145 | 480 | 1.8276 | - | - | - | - | - |
| 2.1366 | 485 | 1.5005 | - | - | - | - | - |
| 2.1586 | 490 | 0.8618 | - | - | - | - | - |
| 2.1806 | 495 | 2.1422 | - | - | - | - | - |
| 2.2026 | 500 | 1.3922 | - | - | - | - | - |
| 2.2247 | 505 | 1.5939 | - | - | - | - | - |
| 2.2467 | 510 | 1.3021 | - | - | - | - | - |
| 2.2687 | 515 | 1.0825 | - | - | - | - | - |
| 2.2907 | 520 | 0.9066 | - | - | - | - | - |
| 2.3128 | 525 | 0.7717 | - | - | - | - | - |
| 2.3348 | 530 | 1.1484 | - | - | - | - | - |
| 2.3568 | 535 | 1.6513 | - | - | - | - | - |
| 2.3789 | 540 | 1.7267 | - | - | - | - | - |
| 2.4009 | 545 | 0.7659 | - | - | - | - | - |
| 2.4229 | 550 | 2.0213 | - | - | - | - | - |
| 2.4449 | 555 | 0.5329 | - | - | - | - | - |
| 2.4670 | 560 | 1.2083 | - | - | - | - | - |
| 2.4890 | 565 | 1.5432 | - | - | - | - | - |
| 2.5110 | 570 | 0.5423 | - | - | - | - | - |
| 2.5330 | 575 | 0.2613 | - | - | - | - | - |
| 2.5551 | 580 | 0.7985 | - | - | - | - | - |
| 2.5771 | 585 | 0.3003 | - | - | - | - | - |
| 2.5991 | 590 | 2.2234 | - | - | - | - | - |
| 2.6211 | 595 | 0.4772 | - | - | - | - | - |
| 2.6432 | 600 | 1.0158 | - | - | - | - | - |
| 2.6652 | 605 | 2.6385 | - | - | - | - | - |
| 2.6872 | 610 | 0.7042 | - | - | - | - | - |
| 2.7093 | 615 | 1.1469 | - | - | - | - | - |
| 2.7313 | 620 | 1.4092 | - | - | - | - | - |
| 2.7533 | 625 | 0.6487 | - | - | - | - | - |
| 2.7753 | 630 | 1.218 | - | - | - | - | - |
| 2.7974 | 635 | 1.1509 | - | - | - | - | - |
| 2.8194 | 640 | 1.1524 | - | - | - | - | - |
| 2.8414 | 645 | 0.6477 | - | - | - | - | - |
| 2.8634 | 650 | 0.6295 | - | - | - | - | - |
| 2.8855 | 655 | 1.3026 | - | - | - | - | - |
| 2.9075 | 660 | 1.9196 | - | - | - | - | - |
| 2.9295 | 665 | 1.3743 | - | - | - | - | - |
| 2.9515 | 670 | 0.8934 | - | - | - | - | - |
| 2.9736 | 675 | 1.1801 | - | - | - | - | - |
| 2.9956 | 680 | 1.2952 | - | - | - | - | - |
| 3.0 | 681 | - | 0.9538 | 0.9513 | 0.9538 | 0.9414 | 0.9435 |
| 3.0176 | 685 | 0.3324 | - | - | - | - | - |
| 3.0396 | 690 | 0.9551 | - | - | - | - | - |
| 3.0617 | 695 | 0.9315 | - | - | - | - | - |
| 3.0837 | 700 | 1.3611 | - | - | - | - | - |
| 3.1057 | 705 | 1.4406 | - | - | - | - | - |
| 3.1278 | 710 | 0.5888 | - | - | - | - | - |
| 3.1498 | 715 | 0.9149 | - | - | - | - | - |
| 3.1718 | 720 | 0.5627 | - | - | - | - | - |
| 3.1938 | 725 | 1.6876 | - | - | - | - | - |
| 3.2159 | 730 | 1.1366 | - | - | - | - | - |
| 3.2379 | 735 | 1.3571 | - | - | - | - | - |
| 3.2599 | 740 | 1.5227 | - | - | - | - | - |
| 3.2819 | 745 | 2.5139 | - | - | - | - | - |
| 3.3040 | 750 | 0.3735 | - | - | - | - | - |
| 3.3260 | 755 | 1.4386 | - | - | - | - | - |
| 3.3480 | 760 | 0.3838 | - | - | - | - | - |
| 3.3700 | 765 | 0.3973 | - | - | - | - | - |
| 3.3921 | 770 | 1.4972 | - | - | - | - | - |
| 3.4141 | 775 | 1.5118 | - | - | - | - | - |
| 3.4361 | 780 | 0.478 | - | - | - | - | - |
| 3.4581 | 785 | 1.5982 | - | - | - | - | - |
| 3.4802 | 790 | 0.6209 | - | - | - | - | - |
| 3.5022 | 795 | 0.5902 | - | - | - | - | - |
| 3.5242 | 800 | 1.0877 | - | - | - | - | - |
| 3.5463 | 805 | 0.9553 | - | - | - | - | - |
| 3.5683 | 810 | 0.3054 | - | - | - | - | - |
| 3.5903 | 815 | 1.2229 | - | - | - | - | - |
| 3.6123 | 820 | 0.7434 | - | - | - | - | - |
| 3.6344 | 825 | 1.5447 | - | - | - | - | - |
| 3.6564 | 830 | 1.0751 | - | - | - | - | - |
| 3.6784 | 835 | 0.8161 | - | - | - | - | - |
| 3.7004 | 840 | 0.4382 | - | - | - | - | - |
| 3.7225 | 845 | 1.3547 | - | - | - | - | - |
| 3.7445 | 850 | 1.7112 | - | - | - | - | - |
| 3.7665 | 855 | 0.5362 | - | - | - | - | - |
| 3.7885 | 860 | 0.9309 | - | - | - | - | - |
| 3.8106 | 865 | 1.8301 | - | - | - | - | - |
| 3.8326 | 870 | 1.5554 | - | - | - | - | - |
| 3.8546 | 875 | 1.4035 | - | - | - | - | - |
| 3.8767 | 880 | 1.5814 | - | - | - | - | - |
| 3.8987 | 885 | 0.7283 | - | - | - | - | - |
| 3.9207 | 890 | 1.8549 | - | - | - | - | - |
| 3.9427 | 895 | 0.196 | - | - | - | - | - |
| 3.9648 | 900 | 1.2072 | - | - | - | - | - |
| 3.9868 | 905 | 0.83 | - | - | - | - | - |
| 4.0 | 908 | - | 0.9564 | 0.9587 | 0.9612 | 0.9488 | 0.9563 |
| 4.0088 | 910 | 1.7222 | - | - | - | - | - |
| 4.0308 | 915 | 0.6728 | - | - | - | - | - |
| 4.0529 | 920 | 0.9388 | - | - | - | - | - |
| 4.0749 | 925 | 0.7998 | - | - | - | - | - |
| 4.0969 | 930 | 1.1561 | - | - | - | - | - |
| 4.1189 | 935 | 2.4315 | - | - | - | - | - |
| 4.1410 | 940 | 1.3263 | - | - | - | - | - |
| 4.1630 | 945 | 1.2374 | - | - | - | - | - |
| 4.1850 | 950 | 1.1307 | - | - | - | - | - |
| 4.2070 | 955 | 0.5512 | - | - | - | - | - |
| 4.2291 | 960 | 1.3266 | - | - | - | - | - |
| 4.2511 | 965 | 1.2306 | - | - | - | - | - |
| 4.2731 | 970 | 1.7083 | - | - | - | - | - |
| 4.2952 | 975 | 0.7028 | - | - | - | - | - |
| 4.3172 | 980 | 1.2987 | - | - | - | - | - |
| 4.3392 | 985 | 1.545 | - | - | - | - | - |
| 4.3612 | 990 | 1.004 | - | - | - | - | - |
| 4.3833 | 995 | 0.8276 | - | - | - | - | - |
| 4.4053 | 1000 | 1.4694 | - | - | - | - | - |
| 4.4273 | 1005 | 0.4914 | - | - | - | - | - |
| 4.4493 | 1010 | 0.9894 | - | - | - | - | - |
| 4.4714 | 1015 | 0.8855 | - | - | - | - | - |
| 4.4934 | 1020 | 1.1339 | - | - | - | - | - |
| 4.5154 | 1025 | 1.0786 | - | - | - | - | - |
| 4.5374 | 1030 | 1.2547 | - | - | - | - | - |
| 4.5595 | 1035 | 0.5312 | - | - | - | - | - |
| 4.5815 | 1040 | 1.4938 | - | - | - | - | - |
| 4.6035 | 1045 | 0.8124 | - | - | - | - | - |
| 4.6256 | 1050 | 1.2401 | - | - | - | - | - |
| 4.6476 | 1055 | 1.1902 | - | - | - | - | - |
| 4.6696 | 1060 | 1.4183 | - | - | - | - | - |
| 4.6916 | 1065 | 1.0718 | - | - | - | - | - |
| 4.7137 | 1070 | 1.2203 | - | - | - | - | - |
| 4.7357 | 1075 | 0.8535 | - | - | - | - | - |
| 4.7577 | 1080 | 1.2454 | - | - | - | - | - |
| 4.7797 | 1085 | 0.4216 | - | - | - | - | - |
| 4.8018 | 1090 | 0.8327 | - | - | - | - | - |
| 4.8238 | 1095 | 1.2371 | - | - | - | - | - |
| 4.8458 | 1100 | 1.0949 | - | - | - | - | - |
| 4.8678 | 1105 | 1.2177 | - | - | - | - | - |
| 4.8899 | 1110 | 0.6236 | - | - | - | - | - |
| 4.9119 | 1115 | 0.646 | - | - | - | - | - |
| 4.9339 | 1120 | 1.1822 | - | - | - | - | - |
| 4.9559 | 1125 | 1.0471 | - | - | - | - | - |
| 4.9780 | 1130 | 0.7626 | - | - | - | - | - |
| **5.0** | **1135** | **0.9794** | **0.9564** | **0.9563** | **0.9616** | **0.9488** | **0.9587** |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
VERSIL91/6ba9d877-4d95-47b8-83b7-7555ff9ea280
|
VERSIL91
| null |
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | 1,736,036,110,000 | 2025-01-05T00:21:24 | 0 | 0 |
---
base_model: unsloth/OpenHermes-2.5-Mistral-7B
library_name: peft
license: apache-2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6ba9d877-4d95-47b8-83b7-7555ff9ea280
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: unsloth/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5e09bec72611806b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5e09bec72611806b_train_data.json
type:
field_input: choices
field_instruction: input
field_output: choices_translation
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/6ba9d877-4d95-47b8-83b7-7555ff9ea280
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 5
micro_batch_size: 2
mlflow_experiment_name: /tmp/5e09bec72611806b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6ba9d877-4d95-47b8-83b7-7555ff9ea280
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6ba9d877-4d95-47b8-83b7-7555ff9ea280
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6ba9d877-4d95-47b8-83b7-7555ff9ea280
This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co/unsloth/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 46.481 | 0.0059 | 1 | 2.7464 |
| 50.277 | 0.0118 | 2 | 2.3836 |
| 38.4154 | 0.0237 | 4 | 1.5204 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"TRANSLATION"
] |
Non_BioNLP
|
RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf
|
RichardErkhov
| null |
[
"gguf",
"arxiv:2404.17733",
"arxiv:2407.21783",
"endpoints_compatible",
"region:us"
] | 1,728,658,716,000 | 2024-10-11T18:30:38 | 60 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.1-Swallow-8B-v0.1 - GGUF
- Model creator: https://huggingface.co/tokyotech-llm/
- Original model: https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.1-Swallow-8B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3.1-Swallow-8B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3.1-Swallow-8B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3.1-Swallow-8B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3.1-Swallow-8B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3.1-Swallow-8B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3.1-Swallow-8B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3.1-Swallow-8B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3.1-Swallow-8B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3.1-Swallow-8B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3.1-Swallow-8B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3.1-Swallow-8B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3.1-Swallow-8B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3.1-Swallow-8B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3.1-Swallow-8B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3.1-Swallow-8B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3.1-Swallow-8B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3.1-Swallow-8B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3.1-Swallow-8B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3.1-Swallow-8B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3.1-Swallow-8B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3.1-Swallow-8B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-v0.1-gguf/blob/main/Llama-3.1-Swallow-8B-v0.1.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama3.1
model_type: llama
---
# Llama 3.1 Swallow - Built with Llama
Llama 3.1 Swallow is a series of large language models (8B, 70B) that were built by continual pre-training on the [Meta Llama 3.1](https://huggingface.co/collections/meta-llama/llama-31-669fc079a0c406a149a5738f) models.
Llama 3.1 Swallow enhanced the Japanese language capabilities of the original Llama 3.1 while retaining the English language capabilities.
We use approximately 200 billion tokens that were sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and
coding contents, etc for continual pre-training.
The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on the synthetic data specially built for Japanese (see the Training Datasets section for details).
See the Swallow Model Index section to find other model variants.
# Release History
- **October 08, 2024**: Released [Llama-3.1-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1), [Llama-3.1-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1), [Llama-3.1-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1), and [Llama-3.1-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1).
## Swallow Model Index
|Model|Llama-3.1-Swallow|Llama-3.1-Swallow-Instruct|
|---|---|---|
|8B| [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1) |
|70B| [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1) |

The website [https://swallow-llm.github.io/](https://swallow-llm.github.io/) provides large language models developed by the Swallow team.
## Model Details
* **Model type**: Please refer to [Llama 3.1 MODEL_CARD](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for details on the model architecture.
* **Language(s)**: Japanese English
* **Library**: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
* **Tokenizer**: Please refer to [Llama 3.1 blog](https://ai.meta.com/blog/meta-llama-3-1) for details on the tokenizer.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
## Model Performance
### Japanese tasks
|Model|JCom.|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|JMMLU|JHumanEval|Ja Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|5-shot|0-shot| |
| |EM acc|Char-F1|Char-F1|Char-F1|ROUGE-2|EM acc|BLEU|BLEU|EM acc|pass@1| |
| Qwen2-7B | 0.8776 | 0.4627 | 0.3766 | 0.8984 | 0.1716 | 0.5480 | 0.2080 | 0.1949 | 0.5871 | **0.4183** | 0.4743 |
| Qwen2.5-7B | **0.9240** | 0.4581 | 0.4259 | **0.9071** | **0.2162** | **0.6200** | 0.2295 | 0.1989 | **0.6337** | 0.2665 | 0.4880 |
| Sarashina2-7B | 0.7417 | 0.5089 | **0.6353** | 0.8683 | 0.1420 | 0.0800 | 0.2727 | 0.2015 | 0.3835 | 0.0000 | 0.3834 |
| Llama 3 8B | 0.8356 | 0.4454 | 0.4002 | 0.8881 | 0.1757 | 0.3320 | 0.2199 | 0.2087 | 0.4558 | 0.3311 | 0.4292 |
| Llama 3.1 8B | 0.8436 | 0.4461 | 0.4050 | 0.8962 | 0.1794 | 0.3560 | 0.2209 | 0.2077 | 0.4767 | 0.3274 | 0.4359 |
| Llama 3 Youko 8B | 0.8660 | 0.4902 | 0.5155 | 0.8947 | 0.2127 | 0.2840 | 0.2740 | 0.2180 | 0.4493 | 0.2183 | 0.4423 |
| Llama 3 Swallow 8B | 0.8945 | 0.4848 | 0.5640 | 0.8947 | 0.1981 | 0.4240 | 0.2758 | 0.2223 | 0.4699 | 0.2890 | 0.4717 |
| Llama 3.1 Swallow 8B | 0.9124 | **0.5092** | 0.6011 | 0.8991 | 0.2020 | 0.4600 | **0.2909** | **0.2313** | 0.5182 | 0.2811 | **0.4905** |
### English tasks
|Model|OpenBookQA|TriviaQA|HellaSWAG|SQuAD2.0|XWINO|MMLU|GSM8K|BBH|HumanEval|En Avg|
|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|4-shot|5-shot|4-shot|3-shot|0-shot| |
| |Acc|EM acc|Acc|EM acc|Acc|Acc|EM acc|CoT EM Acc|pass@1| |
| Qwen2-7B | 0.3740 | 0.6105 | 0.6006 | 0.3623 | 0.8916 | 0.7045 | 0.7748 | 0.5325 | **0.4622** | 0.5903 |
| Qwen2.5-7B | **0.3940** | 0.6011 | 0.5999 | **0.3743** | 0.8890 | **0.7424** | **0.8324** | 0.5620 | 0.4213 | **0.6018** |
| Sarashina2-7B | 0.3420 | 0.4784 | 0.5327 | 0.2911 | 0.8903 | 0.4267 | 0.1008 | 0.3746 | 0.0000 | 0.3818 |
| Llama 3 8B | 0.3760 | **0.7109** | **0.6124** | 0.3356 | 0.9032 | 0.6509 | 0.4936 | **0.6211** | 0.3793 | 0.5648 |
| Llama 3.1 8B | 0.3780 | 0.7017 | 0.6094 | 0.3330 | **0.9045** | 0.6525 | 0.5057 | 0.6176 | 0.3695 | 0.5636 |
| Llama 3 Youko 8B | 0.3500 | 0.6252 | 0.5885 | 0.3247 | 0.8959 | 0.5993 | 0.3571 | 0.5704 | 0.2793 | 0.5100 |
| Llama 3 Swallow 8B | 0.3520 | 0.6563 | 0.5901 | 0.3507 | 0.9006 | 0.6152 | 0.4875 | 0.5936 | 0.3323 | 0.5420 |
| Llama 3.1 Swallow 8B | 0.3800 | 0.6711 | 0.6057 | 0.3468 | 0.9032 | 0.6237 | 0.5110 | 0.6153 | 0.3622 | 0.5577 |
## Evaluation Benchmarks
### Japanese evaluation benchmarks
We used llm-jp-eval(v1.3.0), JP Language Model Evaluation Harness(commit #9b42d41) and Code Generation LM Evaluation Harness(commit #0261c52). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara et al., 2022])
- Open-ended question answering (JEMHopQA [Ishii et al., 2024])
- Open-ended question answering (NIILC [้ขๆ น, 2003])
- Machine reading comprehension (JSQuAD [Kurihara et al., 2022])
- Automatic summarization (XL-Sum [Hasan et al., 2021])
- Machine translation (WMT2020 ja-en [Barrault et al., 2020])
- Machine translation (WMT2020 en-ja [Barrault et al., 2020])
- Mathematical reasoning (MGSM [Shi et al., 2023])
- Academic exams (JMMLU [ๅฐนใ, 2024])
- Code generation (JHumanEval [ไฝ่คใ, 2024])
### English evaluation benchmarks
We used the Language Model Evaluation Harness(v.0.4.2) and Code Generation LM Evaluation Harness(commit #0261c52). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov et al., 2018])
- Open-ended question answering (TriviaQA [Joshi et al., 2017])
- Machine reading comprehension (SQuAD2 [Rajpurkar et al., 2018])
- Commonsense reasoning (XWINO [Tikhonov and Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers et al., 2019])
- Mathematical reasoning (GSM8K [Cobbe et al., 2021])
- Reasoning (BBH (BIG-Bench-Hard) [Suzgun et al., 2023])
- Academic exams (MMLU [Hendrycks et al., 2021])
- Code generation (HumanEval [Chen et al., 2021])
## Training Datasets
### Continual Pre-Training
The following datasets were used for continual pre-training.
- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
- [Dclm-baseline-1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0)
- [English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Laboro ParaCorpus](https://github.com/laboroai/Laboro-ParaCorpus)
- [Swallow Corpus Version 2](https://arxiv.org/abs/2404.17733)
- [The-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train-smol-ids)
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 3.1 under a generous open license.
We received various supports including:
+ AIST project: "Research and Development of Foundation Models for Generative AI in the Physical Domain"
+ NEDO project: "Development of Artificial Intelligence Application Technology to Support Judgment in Design Risk Assessment Work Based on the Perspective of Skilled Persons" (JPNP18002) of "Development of Integration Technology as the Core of Next Generation Artificial Intelligence and Robotics"
+ MEXT project: "Formation of R&D center to ensure transparency and reliability of generative AI models"
+ AIST program: [Large Generative AI Development Support Program](https://abci.ai/en/link/lfm_support_program.html)
## License
[META LLAMA 3.1 COMMUNITY LICENSE](https://www.llama.com/llama3_1/license/)
## Authors
Here are the team members:
- From [Tokyo Institute of Technology Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Youmi Ma](https://www.nlp.c.titech.ac.jp/member/youmi.en.html)
- [Koki Maeda](https://sites.google.com/view/silviase)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://sites.google.com/view/masanariohi)
- [Taihei Shiotani](https://github.com/inatoihs)
- [Koshiro Saito](https://sites.google.com/view/koshiro-saito)
- From [Tokyo Institute of Technology YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
- [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
- [Ishida Shigeki](https://www.wantedly.com/id/reborn27)
- From [Artificial Intelligence Research Center, AIST, Japan](https://www.airc.aist.go.jp/en/teams/), the following members:
- [Hiroya Takamura](https://sites.google.com/view/hjtakamura)
## How to cite
If you find our work helpful, please feel free to cite these papers.
```
@inproceedings{Fujii:COLM2024,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation:
Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki
Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae
Mizuki and Rio Yokota and Naoaki Okazaki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@inproceedings{Okazaki:COLM2024,
title={Building a Large Japanese Web Corpus for Large Language Models},
author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki
Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay
Loem and Rio Yokota and Sakae Mizuki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
```
### References
```tex
@misc{dubey2024llama3herdmodels,
title={The Llama 3 Herd of Models},
author={Abhimanyu Dubey and Abhinav Jauhri and Abhinav Pandey and Abhishek Kadian and Ahmad Al-Dahle and Aiesha Letman and Akhil Mathur and Alan Schelten and Amy Yang and Angela Fan et al.},
year={2024},
eprint={2407.21783},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2407.21783},
}
```
|
[
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] |
Non_BioNLP
|
google/t5-base-lm-adapt
|
google
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"t5-lm-adapt",
"en",
"dataset:c4",
"arxiv:2002.05202",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-01-24T16:43:56 | 406 | 17 |
---
datasets:
- c4
language: en
license: apache-2.0
tags:
- t5-lm-adapt
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted
## Version 1.1 - LM-Adapted
[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-base):
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- no parameter sharing between embedding and classifier layer
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
and is pretrained on both the denoising and language modeling objective.
More specifically, this checkpoint is initialized from [T5 Version 1.1 - Base](https://huggingface.co/google/https://huggingface.co/google/t5-v1_1-base)
and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf).
This adaptation improves the ability of the model to be used for prompt tuning.
**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new โColossal Clean Crawled Corpusโ, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] |
Non_BioNLP
|
gokulsrinivasagan/bert_base_lda_20_mrpc
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_20",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_20",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,732,251,740,000 | 2024-11-22T05:05:07 | 5 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_20
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_20_mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.6838235294117647
name: Accuracy
- type: f1
value: 0.8122270742358079
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_20_mrpc
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_20](https://huggingface.co/gokulsrinivasagan/bert_base_lda_20) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6234
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 1.5863 | 1.0 | 15 | 0.6287 | 0.6838 | 0.8122 | 0.7480 |
| 0.6395 | 2.0 | 30 | 0.6249 | 0.6838 | 0.8122 | 0.7480 |
| 0.6286 | 3.0 | 45 | 0.6298 | 0.6838 | 0.8122 | 0.7480 |
| 0.6368 | 4.0 | 60 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.6358 | 5.0 | 75 | 0.6273 | 0.6838 | 0.8122 | 0.7480 |
| 0.6336 | 6.0 | 90 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.6317 | 7.0 | 105 | 0.6234 | 0.6838 | 0.8122 | 0.7480 |
| 0.6325 | 8.0 | 120 | 0.6239 | 0.6838 | 0.8122 | 0.7480 |
| 0.63 | 9.0 | 135 | 0.6279 | 0.6838 | 0.8122 | 0.7480 |
| 0.6358 | 10.0 | 150 | 0.6271 | 0.6838 | 0.8122 | 0.7480 |
| 0.6373 | 11.0 | 165 | 0.6276 | 0.6838 | 0.8122 | 0.7480 |
| 0.634 | 12.0 | 180 | 0.6253 | 0.6838 | 0.8122 | 0.7480 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
r4ghu/distilbert-base-uncased-distilled-clinc
|
r4ghu
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,694,577,455,000 | 2023-09-13T05:00:10 | 7 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.9490322580645161
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3032
- Accuracy: 0.9490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6314 | 1.0 | 318 | 1.8942 | 0.7274 |
| 1.4611 | 2.0 | 636 | 0.9553 | 0.8594 |
| 0.7635 | 3.0 | 954 | 0.5469 | 0.9181 |
| 0.4613 | 4.0 | 1272 | 0.4035 | 0.9394 |
| 0.3348 | 5.0 | 1590 | 0.3505 | 0.9432 |
| 0.2776 | 6.0 | 1908 | 0.3258 | 0.9445 |
| 0.2509 | 7.0 | 2226 | 0.3148 | 0.9455 |
| 0.236 | 8.0 | 2544 | 0.3066 | 0.9494 |
| 0.2282 | 9.0 | 2862 | 0.3055 | 0.9484 |
| 0.224 | 10.0 | 3180 | 0.3032 | 0.9490 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-786584
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-786584",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,468,704,000 | 2024-05-23T12:52:38 | 8 | 0 |
---
datasets:
- fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-786584
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-786584',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
knguyennguyen/mpnet_jacket4k_adjustedv2
|
knguyennguyen
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:7598",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,737,587,610,000 | 2025-01-22T23:14:00 | 10 | 0 |
---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:7598
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: women's winter coat with a soft texture, a relaxed fit, and a zipper
closure.
sentences:
- 'Title: Karen Scott Women''s Plus Size Zip-Front Jacket Black Size 2X Descripion:
Look and feel as cozy as can be in this plus size zip-front jacket from Karen
Scott. polyester Imported front closure Approx. 28-1/2" from center back neck
to hem Point collar; zipper-front closure Zipper pockets at hips Unlined Polyester'
- 'Title: Obermeyer Girls'' Stormy Jacket (Toddler/Little Big Kids) Descripion:
There''s no better place a child wants to be than outside. Exploring the backyard,
creating make-believe friends, or anticipating the unknown, or the wonder of discovery.
It has always been our intention to fuel and care for this unspoken youthful innocence.
Throwing an ill-formed snowball in the excitement of loving the wonder of snow
is to be applauded, regardless of our unexpected shock - refreshing as it may
be. The stormy is all of that and more. For parent and child alike. With fleece-warmed
chin, neck, hands and ear-warmers she is reassured to be toasty and daring to
move about freely. We recognize the value of play. 100% Polyester Imported Zipper
closure Hand Wash Only Fleece earwarmers;"i-grow" Extended wear system; full-motion
articulation; sleeve pockets); integrated clips to retain mittens/gloves; adjustable,
water-resistant powder skirt; working compass; 10K/10K 220gm body, 160gm sleeve,
80gm hood'
- 'Title: Tbaok Women''s 2022 Fashion Winter Coat Long Sleeve Lapel Zip Up Faux
Shearling Shaggy Loose Shacket Jacket Descripion: Womens Ladies Warm Faux Furry
Coat Jacket Winter Solid Turn Down Collar Zipper Outerwear ๐Features: 1.lightweight
soft fabric for a comfortable feminine ,easy to wear,no wrinkles. 2.It is made
of high quality materials,durable enought for your daily wearing. 3.Special design
make you more attractive. 4.Choose it,show your energy. ๐Product information:
โ
Gender:Women โ
Occasion:Daily,Casual,Party โ
Material:Polyester โ
Clothing Length:Long
โ
Pattern Type:Solid โ
Style:Fashion โ
Collar:Turn down Collar โ
Sleeve Length:Short
๐Package include:1PC Women Coat ๐Note๏ผ Please compare the detail sizes with yours
before you buy!!! Colors may be slightly different depending on computer and monitor
settings Please allow 1-3cm differs due to manual measurement, thanks (All measurement
in cm and please note 1cm=0.39inch โคBlack Fridays Deals 2022 โค ๐๐ New Releases
Women''s 2022 Fashion Winter Lapel Coat Deals of the Day Clearance On Sale.๐พWE
USE ASIAN SIZING AND RUN SMALL! WE RECOMMEND ORDERING ONE SIZE UP!๐พ If you have
any questions, please contact us and we will get back to you within 24 hours!
30 day free return policy.โคโค Imported Zipper closure Hand Wash Only โMATERIAL:
This Sherpa Jacket Is Made Of Super-Soft And Double-Fleeced Fabric, Bringing You
Ultimate Cozy Wearing Experience, Perfect Choice For Fall Or Winter โFEATURES
DESIGN: Long Sleeve Sherpa Jackets/ Lapel Zipper Winter Coats/ Double-Side Faux
Fur Coat/ Two Side Pockets Jacktes/ Womens Fall Fashion 2022/ Boyfriend Style
Oversized Jackets/ Fuzzy Fleece Shearling Coats/ Fluffy And Shaggy Sherpa Jacket/
Collared Teddy Bear Coat/ Winter Clothes For Women/ Shacket Jacket Women โMATCH
TIPS And OCCASIONS: This Faux Shearling Coat Is Easy To Match, You Can Match With
T-Shirt, Jeans, Leggings, Dress For Graceful Looks. The Fuzzy Faux Fur Teddy Jacket
Suits For Any Outdoor And Indoor Activities โUNIQUE HIGHLIGHT: The Fuzzy Teddy
Bear Fall Jackets Keep You Away From Cold While Looks More Chic. The Faux Fur
Winter Coat Has Stand Collar, Cool And Stylish. The Womens Shacket Jacket Always
Trendy, The Fashion Outfit In This Fall And Winter โFASHION STYLE: This Baggy
Sherpa Shacket Jacket Has Two Side Pockets, Enough To Carry Phone Or More. The
Fleeced Material Let You Enjoy Comfy Touch, Friendly To Skin. The Classic Lapel
Zipper Design Make You Easily Put On And Take Off. The Shearling Fluffy Winter
Shacket Jacket Looks More Gorgeous'
- source_sentence: kids' costume set featuring a vibrant outer layer, fitted bottoms,
and a coordinating skirt.
sentences:
- 'Title: Mountain Warehouse Womens Slim Jacket-Water Resistant Ladies Raincoat
Descripion: The Move Power Stretch Womens Jacket is a great layer for any active
outdoor use. Keeping you protected with its water- and wind-resistant outer, slim
fit with stretch panels for flexibility and zipped pockets for safe storage. Ref:
035255 100% Polyester Water-resistant - Treated with Durable Water Repellent (DWR),
droplets will bead and roll off the fabric. Light rain, or limited exposure to
rain Windproof - A thin membrane between the fabric provides protection from the
wind Stretch Panels - Allows for extra freedom of movement and comfort as you
move Slim fit - has a snug fit and designed to create an athletic body shape Lightweight
- Great for travelling, comfortable to wear and easy to pack'
- 'Title: Mei Costume for Kids,Meilin Cosplay Panada Red Jacket Skirt with Tights
Suit Descripion: Mei Cosplay Costume Uniform Outfit for Girls Mei Costume Package
include: coat + tights + skirt. Great gift for family and Turning Red Mei fans.
Suitable for daily wear, outdoor activities, school, Halloween cosplay, photograph,
theme party etc. polyester Mei Cosplay Costume Panada Red Jacket Skirt with Tights
Suit Mei Costume Package include: coat + tights + skirt. Material: Quality polyester.
Excellent workmanship, the costume is soft and skin-friendly, which has a good
wearing comfort. Occasions:The dress is Bright Colors, Eye-Catching, Making You
Look Amazing in various occasions, daily wear, Halloween, theme party, cosplay,
on the stage,etc.Comfortably Fabric, Stretchy, Pro-Skin, Durable, Makes It Easy
and Very Comfortable To Wear,Show The Latest Sexy and Fashion Style. Please check
the size chart in pictures before purchasing and choose the one that suits you
best.Moreover, if you have any questions with our product, please contact with
us at any time, we will reply to you within 24 hours.'
- 'Title: LisAgUlI Women Fashion Faux Fur Warm Furry Hooded Faux Fox Fur Jacket
Long Sleeve Coats Outerwear Descripion: Gender:Women Occasion:Daily,Casual,Party
Material:Polyester Clothing Length:Regular Pattern Type:Solid Style:Fashion Collar:Hoodie
Sleeve Length:Long Color: as the picture show Package include:1PC Women Coat Faux
Fur 1.lightweight soft fabric for a comfortable feminine touch,easy to wear,no
wrinkles. 2.It is made of high quality materials,durable enought for your daily
wearing. 3.Special design make you more attractive. 4.Choose it,show your energy.'
- source_sentence: a winter jacket for daily wear and outdoor activities for little
boys and girls
sentences:
- 'Title: RODEWILL Womens Color Block Plaid Corduroy Jacket Button Down Shirt Long
Sleeve Oversized Blouses Tops (Khaki, XX-Large) Descripion: RODEWILL womens plaid
button down shirt is made with high-quality fabric. Soft, lightweight and comfortable
to wear. Feature: Plaid Print, Button Down, Chest Pockets, Long Sleeve, Boyfriend
Style, Turn-Down Collar, Loose Fit Shirts. It can be tied at the waist or wrapped
around waist, wear as 3/4 sleeves or full sleeve. You can pair this shirts with
variety of tank tops, jeans, denim shorts, skirts, jeggings, sneakers or heels
to complete casual look that''s easy to love all season long. which is suitable
for windy and cold days. Occasion: The fashion button down shirts is perfect choice
for your daily wear, outdoor activities, shopping, club, party, dates and any
other occasions in Spring, Summer, Fall and Winter. 65% Cotton, 35% Polyester
่ฟๅฃ Button closure Machine Wash ๐Light weight & cozy material: RODEWILL womens
plaid button down shirt is made with high-quality fabric, lightweight and comfortable
to wear, bring you Soft touching experienc. ๐Features design: Plaid Print, Button
Down, Chest Pockets, Long . ๐Style: This vintage flannel plaid shirt is designed
with two buttoned chest pockets, drop shoulder, a Front button placket, turn down
collar, single button cuff.Relaxed fit, You can pair this. ๐Style: This vintage
flannel plaid shirt is designed with two buttoned chest pockets, drop shoulder,
a Front button placket, turn down collar, single button cuff.Relaxed fit, You
can pair this. ๐Package content: 1 X Womens Shirts. if you are not satisfied,
feel free to contact us.'
- 'Title: COOKI Women''s Long Sleeve Sweatshirts Lover Cat Print Hip Hop Zipper
Hooded Coats Casual Long Sleeve Zip-Up Hoodie Jackets Descripion: ๐ป๐ปWelcome to
our brand shop for more choice on sweatshirt/hoodies/shirts/dresses/sandals/shoes/watches,etc.๐ป๐ปOCCASIONS:
Casual/School/Work/Sports or daily life, good choice to match with high waist
leggings or jeans. ๐ป๐ป Size Chart: Since our size is Asian size, it runs small
so we recommend you order 1~2 size up๐ป.Please Refer to our size at the image we
provide, not the Amazon Size Chart. All size are manual measure, there will be
about 1-3 inches errors that is normal. ๐ป๐ปDue to the light and screen difference,
there are slight color aberration. ๐ป๐ปComfortable and Breathable fabric.Regular
fit, light weight, soft and comfortable.๐ป๐ปAbout Shipment Delivery Date โ Standard
shipping: 10-20 days โ Expedited shipping: 3-5 days. Please confirming the information
before you place an order. We will ship within four hours and will not be able
to modify any information once shipped Customer Service Please feel free to contact
us if you have any question, we will reply you within 24 hours. ๐ป๐ป โฟCOOKI Winter
Sweaters Hooded Sweatshirts Long sleeve Sale Today, Save 3% when purchase 1 ,Save
30% when purchase 2 , Save 45% when purchase 5 each on Qualifying items offered
by COOKI . Stretch,Tie closure Hand Wash Only โฟSize:Please check the size on the
picture before you purchase. โฟMaterial : women sweatshirts and hoodies women sweatshirts
and hoodies Hoodieswomen sweatshirts and hoodies graphic women sweatshirts with
zipper women sweatshirts with pockets women sweatshirts with sayings women sweatshirts,
Gift for Sister Mom, Grandmother, Family. Lightweight, comfortable cotton and
polyester fabric stretch to wear in daily life. โฟoccasion: Cooki women''s autumn
winter tie dye printed long sleeve sweatshirt round neck casual loose pullover
tops shirts,lightweight long sleeve pullover top for comfortable wear , latest
fashion women''s sweatshirt suit occasion for daily life, party, school, vacation,
office, work, sportswear, home etc womens fall long sleeve side split loose casual
pullover tunic tops women casual long sleeve round neck sweatshirt loose t shirt
blouses tops womens long sleeve tunic tops loose casual sweatshirt pocket t-shirts
blouse long sleeve round neck sweatshirt pocket pullover loose tunic shirts blouse
tops womens long sleeve shirts color splicing spring top sweatshirt womens casual
long sleeve pullover sweatshirt shirt tops t shirt for women graphic t shirt for
women v neck t shirt for women summer t shirt for women short sleeve t shirt for
women fashion 2020 t shirt for women graphic vintage t shirt for women graphic
funny t shirt for women cotton t shirt for women womens shirts and blouses womens
shirts for leggings short sleeve sleeveless plus size long sleeve summer with
sayings to wear with leggings that tie in front tie front tunic shirts for women
shirts for women plus size women shirts long sleeve women shirts long sleeve casual
women shirts long sleeve dress women shirts long sleeve cotton women shirts long
sleeve plus women shirts long sleeve plus size women shirts long sleeve sweater
women shirts long sleeve button down women shirts long sleeve casual women long
sleeve tops women long sleeve tops women long sleeve tops v neck women long sleeve
tops brown women long sleeve tops cold shoulder'
- 'Title: GLIGLITTR Little Girls Boys Flannel Hooded Plaid Winter Jacket Button
Down Fleece Coat Long Sleeve Outerwear with Pockets Descripion: Dear customers,
welcome to IZYJOY!!! We have carefully measured the size of your clothes so that
you can choose the right size for your little kids! -----------------โ
Size Informationโ
-------------------------------:
5-6X------------------------Bust*2 41cm/16.1"---------Length 55cm/21.7" 6-7Years-------------------Bust*2
42cm/16.5"---------Length 60cm/23.6" 7-8Years-------------------Bust*2 43cm/16.9"---------Length
63cm/24.8" 8-9Years-------------------Bust*2 45cm/17.7"---------Length 67cm/26.4"
9-10Years------------------Bust*2 49cm/19.3"---------Length 70cm/27.6" Tips: 1.
After manual measurement there is a difference of 2-3%. 2.Please check the size
chart carefully before you buy the item. 3. Please note that slight color differences
due to light and screen should be acceptable. Flannel,Wool Imported Button closure
Machine Wash ๐Material: In cold weather, give your baby the warmest care.High-quality
material. Soft & comfortable flannel wool blend.Non-irritating skin. Buffalo plaid
shirts for boys, fashion, and unique design, not easy to deform, good for baby''s
delicate skin. ๐Feature: This kids plaid hoodie flannel wool blend shacket. Classic
plaid print. The inside is a thin layer of fleece, which doubles the warmth, caress
carefully from the inside out, doubles the temperature gathering performance,
and carefully cares for your baby. There are pockets on both sides, exquisite
and practical. Button down closure, easy to put on and off. Cute and fashionable.
Little girls boys hooded plaid long sleeve jacket. It makes your little baby charming
and cute. ๐Size: Suitable for baby boys 5-6X cold jacket for toddler boys, 6-7Years
baby boy''s clothing winter, 7-8Years plaid fleece jacket kids, 8-9Years baby
boy plaid shirt, 9-10Years kids jackets with hood boys. ๐Occasion: Hoodie plaid
little boys girls fleece coat, perfect to wear with jeans or any other kind of
trousers for a fashion look. Suitable for daily wear, school uniform, indoor or
outdoor wear, casual playwear, baby photography, party, or any occasion. A great
birthday gift for your little baby. ๐Package Include: 1 x Little boys girls flannel
wool blend hoodie plaid winter coat'
- source_sentence: men's jacket designed for outdoor activities, featuring a flexible
fit, multiple accessible pockets, and ventilation openings for comfort during
strenuous use.
sentences:
- 'Title: Nike Men''s Baseball Bomber Jacket AA9776-010, Size M Descripion: The
Nike Bomber Jacket is filled with down material to help keep you warm and comfortable
during the game. Zipper closure Hand Wash Only Style#: AA9776-010 Style#: AA9776-010
Longer hem in the back adds coverage. Internal elastic waistband helps lock in
heat. Side pockets help keep your hands warm throughout the game. Full-zip design
offers personalized coverage.'
- 'Title: LONDON FOG Baby Boys Active Puffer Jacket Winter Coat Descripion: This
active puffer jacket winter coat from London Fog is the perfect jacket for chilly
days and nights. Great quality, great price, great style. 100% Polyester Imported
Zipper closure Machine Wash zip off hood microfiber material Rain Resistant sherpa
lining Welt Pocket'
- 'Title: Mammut Haldigrat HS Hooded Jacket - Men''s Descripion: Freeriders definitely
get their money''s worth with the Haldigrat HS Hooded Jacket Men: The elasticated
and therefore super comfortable jacket benefits from a freeride-inspired fit.
It also proves itself in strenuous accents with its high breathability and resulting
comfort when you''re once again looking for untracked slopes off piste. The jacket
design focuses on the features most important for freeriders such as a good size
and easy pocket accessibility even when wearing a backpack. The long ventilation
openings also ensure a balanced body climate, so you can rock those mountains.
100% Polyamide Zipper closure Hand Wash Only MAMMUT Single Pull Hood: storm-proof,
helmet-compatible hood with reinforced peak, can be adjusted both vertically and
horizontally with a single pull 2 climbing harness- and backpack-compatible front
pockets with cross-over access and water-repellent zippers Long under-arm ventilation
with water-repellent 2-way zipper Glove-compatible, adjustable cuffs with high-quality
hook-and-loop fasteners The drawstring on the hem can be adjusted with one hand
using a cord that can be stowed on the inside'
- source_sentence: women's cropped puffer jacket featuring a solid color, long sleeves,
and a stand collar with a zipper closure.
sentences:
- 'Title: Women''s Zip Up Cropped Puffer Jacket Winter Long Sleeve Quilted Drawstring
Short Down Coats Warm Outerwear Descripion: Women''s Winter Warm Short Down Coat
Solid Color Long Sleeve Stand Collar Zipper Puffer Jacket Outerwear Size(inch)
S Length: 18.90 Bust: 44.09 Sleeve: 22.05 Shoulder: 19.69 M Length: 19.29 Bust:
45.67 Sleeve: 22.44 Shoulder: 20.08 L Length: 19.69 Bust: 47.24 Sleeve: 22.83
Shoulder: 20.47 Specification: Material: Polyester Sleeve Length: Long Pattern
Type: Solid Color Gender: Women Item Type: Coat Package Included: 1 x coat 100%
Polyester Imported Zipper closure Machine Wash Material: Wome cropped puffer jacket
winter coat is made of high quality polyester material, lightweight, safe to skin
and easy to wash, thick and warm, comfortable and soft to wear. This quilted short
down jacket is a must have this winter! Design: Women''s winter casual warm puffer
coat, solid color, long sleeve, stand collar, full zipper closure, quilted design,
drop shoulder, crop length, elastic drawstring design on the hem, very all-match
and is a timeless and ideal companion on cold days! Occasion: Womens windproof
short down coats suitable for casual daily wear, outdoor activity, school, streetwear,
working, office, holiday, club, friends party, photography outfit and so on. Perfect
addition to your winter chic and fashionable look. Match: This relax fit quilted
puffer jacket can pair with crop tops, blouses, basic t-shirts,sweaters, skinny
jeans, casual pants, leggings, skirts, boots, sneakers for a winter trendy look.
Package&Size: 1 x coat. If the product has any quality problems, please feel free
to contact us, we will help you solve the problem as quickly as possible. S, M,
L. Please check the size chart carefully before buying to get the clothes that
fit your size.'
- 'Title: My Little Pony Hoodie for Girls, Zip Up Friendship Jacket with 3D Ears,
Mane and Wings, Gray, Size 5 Descripion: SWEATER WEATHER OUTFIT TOPPER Include
our fun and cute 3D zip up jacket in your list of must have outfit for your child
for cold and chilly seasons. Its versatile gray color can complement other outfit
colors, can be worn to school, for jogging, or hanging out with friends and family!
FEATURES: Lined Hood3D Ears and ManeDetachable WingsRainbow Ribbed CuffsSoft-touch
PrintsFull-Zip Closure LONG LASTING QUALITY Super soft and lightweight fabric
that provides insulation without the added bulk. Wonโt weigh your child down and
provides ample freedom of motion while protecting her the cold and chill. Its
double lined hood provides extra protection from the wind and moisture. Fleece
Zip Up closure Hand Wash Only FUN AND COMFY. Keep your child fashionably warm
with this My Little Pony hoodie for girls. Made with soft fleece fabric and brushed
interior for added plushness. This zip up hoodie is comfortable without being
bulky. Its lightweight design does not compromise its quality, a perfect jacket
for your precious one! FRIENDSHIP IS MAGIC. Celebrate the magic of friendship
with the stars of Equestria! Featuring the super friendly Twilight Sparkle, the
loyal and dependable Rainbow Dash and the funniest pony, Pinkie Pie! Join their
adventure and explore the true meaning of friendship along the way. EASY TO WEAR
HOODIE. Our My Little Pony 3D hoodie jacket features a zip front, cuffed sleeves
and bottom cuffs. This hooded jacket is easy to wear and its cuffed sleeves and
bottom design keeps its form and helps the jacket stay in place. Pair with your
childโs favorite shirt or blouse, skirt, shorts, jeans, this hoodie will bring
your childโs usual outfit to another level! ITS ALL IN THE DETAILS. Girls will
surely be thrilled to wear our hoodie jacket with 3D details and design. With
My Little Pony characters printed in front, FRIENDSHIP written on the sleeve,
3D ears, rainbow Mohawk style mane and adorable detachable wings at the back.
With rainbow ribbed cuffs that adds an extra colorful accent to this fun and magical
3D My Little Pony hoodie MACHINE WASHABLE. Machine wash / hand wash in cold water
with like colors, tumble dry low.'
- 'Title: Spyder Mens Hydroweb Hooded Softshell Jacket, Color Variation Descripion:
This jacket is sure to keep you warm. dry, and stylish. Polyester,Fleece Zipper
closure Hand Wash Only 100% Polyester Spyder front logo chest and upper back Wind
and rain resistand softshell fleece fabric Full front zip, snorkel hood Two zipped
hand pockets.'
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 9a3225965996d404b775526de6dbfe85d3368642 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("knguyennguyen/mpnet_jacket4k_adjustedv2")
# Run inference
sentences = [
"women's cropped puffer jacket featuring a solid color, long sleeves, and a stand collar with a zipper closure.",
"Title: Women's Zip Up Cropped Puffer Jacket Winter Long Sleeve Quilted Drawstring Short Down Coats Warm Outerwear Descripion: Women's Winter Warm Short Down Coat Solid Color Long Sleeve Stand Collar Zipper Puffer Jacket Outerwear Size(inch) S Length: 18.90 Bust: 44.09 Sleeve: 22.05 Shoulder: 19.69 M Length: 19.29 Bust: 45.67 Sleeve: 22.44 Shoulder: 20.08 L Length: 19.69 Bust: 47.24 Sleeve: 22.83 Shoulder: 20.47 Specification: Material: Polyester Sleeve Length: Long Pattern Type: Solid Color Gender: Women Item Type: Coat Package Included: 1 x coat 100% Polyester Imported Zipper closure Machine Wash Material: Wome cropped puffer jacket winter coat is made of high quality polyester material, lightweight, safe to skin and easy to wash, thick and warm, comfortable and soft to wear. This quilted short down jacket is a must have this winter! Design: Women's winter casual warm puffer coat, solid color, long sleeve, stand collar, full zipper closure, quilted design, drop shoulder, crop length, elastic drawstring design on the hem, very all-match and is a timeless and ideal companion on cold days! Occasion: Womens windproof short down coats suitable for casual daily wear, outdoor activity, school, streetwear, working, office, holiday, club, friends party, photography outfit and so on. Perfect addition to your winter chic and fashionable look. Match: This relax fit quilted puffer jacket can pair with crop tops, blouses, basic t-shirts,sweaters, skinny jeans, casual pants, leggings, skirts, boots, sneakers for a winter trendy look. Package&Size: 1 x coat. If the product has any quality problems, please feel free to contact us, we will help you solve the problem as quickly as possible. S, M, L. Please check the size chart carefully before buying to get the clothes that fit your size.",
'Title: My Little Pony Hoodie for Girls, Zip Up Friendship Jacket with 3D Ears, Mane and Wings, Gray, Size 5 Descripion: SWEATER WEATHER OUTFIT TOPPER Include our fun and cute 3D zip up jacket in your list of must have outfit for your child for cold and chilly seasons. Its versatile gray color can complement other outfit colors, can be worn to school, for jogging, or hanging out with friends and family! FEATURES: Lined Hood3D Ears and ManeDetachable WingsRainbow Ribbed CuffsSoft-touch PrintsFull-Zip Closure LONG LASTING QUALITY Super soft and lightweight fabric that provides insulation without the added bulk. Wonโt weigh your child down and provides ample freedom of motion while protecting her the cold and chill. Its double lined hood provides extra protection from the wind and moisture. Fleece Zip Up closure Hand Wash Only FUN AND COMFY. Keep your child fashionably warm with this My Little Pony hoodie for girls. Made with soft fleece fabric and brushed interior for added plushness. This zip up hoodie is comfortable without being bulky. Its lightweight design does not compromise its quality, a perfect jacket for your precious one! FRIENDSHIP IS MAGIC. Celebrate the magic of friendship with the stars of Equestria! Featuring the super friendly Twilight Sparkle, the loyal and dependable Rainbow Dash and the funniest pony, Pinkie Pie! Join their adventure and explore the true meaning of friendship along the way. EASY TO WEAR HOODIE. Our My Little Pony 3D hoodie jacket features a zip front, cuffed sleeves and bottom cuffs. This hooded jacket is easy to wear and its cuffed sleeves and bottom design keeps its form and helps the jacket stay in place. Pair with your childโs favorite shirt or blouse, skirt, shorts, jeans, this hoodie will bring your childโs usual outfit to another level! ITS ALL IN THE DETAILS. Girls will surely be thrilled to wear our hoodie jacket with 3D details and design. With My Little Pony characters printed in front, FRIENDSHIP written on the sleeve, 3D ears, rainbow Mohawk style mane and adorable detachable wings at the back. With rainbow ribbed cuffs that adds an extra colorful accent to this fun and magical 3D My Little Pony hoodie MACHINE WASHABLE. Machine wash / hand wash in cold water with like colors, tumble dry low.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 7,598 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 18.12 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 121.06 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>a hoodie for daily wear and cosplay events for anime fans</code> | <code>Title: DAVIKO Boku No Hero Academia Shoto Todoroki Midoriya Izuku Hoodie Deku Hoodies Pullover Sweatshirt Jacket Cospaly Costume (3, Small) Yellow Descripion: Small(Chest:104cm/42.1inch, Shoulder:43cm/17.0inch, Sleeve:60cm/23.6inch, Length:62.5cm/24.6inch); Medium(Chest:112cm/44.3inch, Shoulder:53cm/17.8inch, Sleeve:64.5cm/25.4inch, Length:68.8cm/27.1inch); Large(Chest:115cm/45.6inch, Shoulder:45.9cm/18.1inch, Sleeve:67.8cm/26.7inch, Length:70.8cm/27.9inch); X-Large(Chest:119cm/46.8inch, Shoulder:46.9cm/18.5inch, Sleeve:69.9cm/27.55inch, Length:71.5cm/28.7inch); XX-Large(Chest:125cm/49.21inch, Shoulder:48.9cm/19.29inch, Sleeve:73.9cm/29.13inch, Length:74.5cm/29.3inch). Cloth,Polyester,Spandex Pull On closure 1. Our material is:90% polyester / 10%Spandex Healthy cloth,breathable, soft and smooth,very comfortable to wear. 2. My Hero Academia hoodie cosplay costume hoodies are perfect for daily wear, Halloween, theme party, cosplay, on the stage,etc. It's also a great gift for friends and Anime lovers. 3. Deku hoodie Material: polyster and design with Advance 3D technology printed. Soft and comfortable. 4.There are lots of anime cosplay costume hoodies in our shop, welcome to visit our store. 5.After-sale Services: If you have any problem, please feel free to contact us, we will help you as best we could!</code> |
| <code>a fleece jacket for girls to keep warm during winter</code> | <code>Title: Disney Elsa and Anna Zip Fleece Jacket for Girls โ Frozen 2 Descripion: Keep warm during icy Arendelle winters when you zip up in our Elsa and Anna jacket. Featuring a solid fleece body adorned with an Elsa and Anna appliquรฉ, this comfy cover-up will keep you looking cool while staying cozy. Fleece Zipper closure Hand Wash Only Genuine, Original, Authentic Disney Store Elsa and Anna appliquรฉ on chest with glittering embroidered snowflakes Details include stand collar and full front zip Additional details include contrast trim and seam pockets Inspired by Frozen 2</code> |
| <code>a leather jacket for casual weekend rides and social gatherings for men</code> | <code>Title: Molluska Apparel Menโs Vintage Cafรฉ Racer Premium Leather Jacket |Classic Motorcycle Cafรฉ Racer Biker Leather Jacket Descripion: Introducing our exquisitely crafted Men's Cafe Racer Leather Jacket, a true testament to timeless style and superior craftsmanship. This jacket is the epitome of rugged elegance, designed to exude confidence and sophistication while providing unmatched comfort and durability. Constructed from the finest quality, full-grain cowhide leather, this jacket is meticulously handcrafted to perfection. The rich, supple leather offers a luxurious feel, ensuring it will only get better with age as it develops a unique patina. Its sleek and streamlined silhouette embraces the classic cafe racer design, synonymous with rebel coolness and a hint of vintage flair. The front of the jacket features a sturdy YKK zipper closure, which not only adds a touch of modernity but also ensures a secure fit. The stand-up collar adds an extra element of style, accentuating the jacket's distinct and edgy look. The sleeves are tailored to provide a slim fit, enhancing the overall silhouette and complementing the wearer's physique. To enhance comfort and mobility, the jacket is expertly tailored with articulated sleeves and carefully placed panels that allow ease of movement without compromising the jacket's fitted silhouette. The interior is lined with a smooth and breathable polyester fabric, offering a comfortable layer against the skin and ensuring maximum breathability. Versatility is at the core of this jacket's design, making it an ideal choice for various occasions. Whether you're going for a casual weekend ride, attending a social gathering, or simply taking a stroll through the city, this jacket effortlessly elevates your style quotient. The attention to detail is evident in every aspect of this jacket. Intricate stitching patterns adorn the shoulders, cuffs, and hemline, adding a refined touch and further showcasing the craftsmanship that goes into creating this masterpiece. The jacket also features multiple pockets, including two side pockets and an interior pocket, providing ample space for storing essentials like your phone, wallet, or keys. When you invest in our Men's Cafe Racer Leather Jacket, you're not just purchasing a garment; you're acquiring a timeless piece that will accompany you on countless adventures. It's a fusion of style, functionality, and quality that is built to last. Embrace the spirit of the open road and make a bold statement with this remarkable jacket. Leather Imported Zipper closure Outer Material: Menโs Cafรฉ Racer Leather Jacket is crafted from premium quality real leather, this leather is no doubt 100% reliable plus it is much more durable than any other materials. Quality Features: Menโs Cafรฉ Racer Biker Leather Jacket is featured with front YKK zipper closure with erect style buttoned collar, four front zipper pockets plus two inside pockets, YKK zipper cuffs, insides are lined with 100% polyester, it is available in multiple colours which makes this perfect for wear. Worth Buying Jacket: Menโs Vintage Cafe Racer Real Leather Jacket is no doubt worth buying jacket due to its outstanding design and quality, it can be worn with various outfits. They can be paired with jeans or trousers for a casual look. If you're looking for a jacket that can transition well between different occasions and outfits, then this cafe racer jacket can be a good choice. Inspiration: These Cafรฉ Racer Motorcycle Leather Jackets are the result of inspiration from cafรฉ races which use to take place in 80โs and 90โs, cafe racer jackets is the need for functional and practical motorcycle gear. The jacket's design and features were influenced by the requirements of motorcycle riders who sought garments that provided both protection and freedom of movement while riding. Variable Sizes: A size chart is provided in the picture section so that you can select your exact size jacket and have the best shopping experience possible. It is a request that you order your exact sizes; if your jacket is not of your size, we have appropriate and friendly return and exchange policies.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
etri-lirs/gbst-kebyt5-base-preview
|
etri-lirs
|
fill-mask
|
[
"transformers",
"pytorch",
"gbswt5",
"text2text-generation",
"fill-mask",
"custom_code",
"ko",
"en",
"ja",
"zh",
"arxiv:2106.12672",
"license:other",
"autotrain_compatible",
"region:us"
] | 1,687,844,569,000 | 2024-02-15T00:08:20 | 6 | 6 |
---
language:
- ko
- en
- ja
- zh
license: other
pipeline_tag: fill-mask
---
# Model Card for GBST-KEByT5-base (584M #params)
<!-- Provide a quick summary of what the model is/does. -->
KEByT5: Korean-Enhanced/Enriched Byte-level Text-to-Text Transfer Transformer(T5)์ GBST ๋ฒ์ ์ผ๋ก,
CharFormer(Tay et al., 2021)๋ฅผ ๊ธฐ๋ฐ์ผ๋ก ํฉ๋๋ค.
## Prerequirements / and Model Loading HOW-TO
๋ณธ ๋ชจ๋ธ์ ๊ตฌ๋์ ์ํด์๋ GBSWT5 ๋ชจ๋์ด ํ์ํฉ๋๋ค.
https://github.com/etri-crossmodal/gbswt5
์๋์ ๊ฐ์ด pip๋ฅผ ํตํด ๋ชจ๋์ ์ค์น ๊ฐ๋ฅํฉ๋๋ค. ๋ชจ๋ธ ์ฌ์ฉ ๋ฐฉ๋ฒ์ github๋ฅผ ์ฐธ์กฐํด์ฃผ์ญ์์ค.
```
pip install git+https://github.com/etri-crossmodal/gbswt5.git
```
๋๋, ์ต์ ๋ฒ์ ์ Transformers์ ํจ๊ป, ๋ณ๋์ ์ฝ๋ ์์ด ์๋์ ๋ฐฉ๋ฒ์ผ๋ก ๋ชจ๋ธ ์ฌ์ฉ์ด ๊ฐ๋ฅํฉ๋๋ค:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("etri-lirs/gbst-kebyt5-base-preview")
# ์๋์ ๊ฐ์ด trust_remote_code=True๋ฅผ ๋ถ์์ผ๋ก, ์๋์ผ๋ก ๊ด๋ จ ์ฝ๋๋ฅผ ๋ค์ด๋ก๋ ๋ฐ๊ณ ์ธ ์ ์์ต๋๋ค
model = AutoModelForSeq2SeqLM.from_pretrained("etri-lirs/gbst-kebyt5-base-preview", trust_remote_code=True)
```
์ฐธ๊ณ ๋ก, ๋ชจ๋ธ์ ํฌํจ๋ ์๊ฒฉ ์ฝ๋์๋ ๋ค์์ ์คํ์์ค ์ํํธ์จ์ด๊ฐ ํฌํจ๋์ด ์์ต๋๋ค:
* This software includes lucidrains/charformer-pytorch GitHub project for GBST implementation, which distributed under MIT License. Copyright (c) 2021 Phil Wang. all rights reserved. (Original Code URL: https://github.com/lucidrains/charformer-pytorch)
* This software includes HuggingFace transformers's T5 implementation for GBST-enabled T5 model, which distributed under Apache 2.0 License. Copyright 2018- The Huggingface team. All rights reserved.
## KEByT5: Korean-Enhanced/Enriched Byte-level Text-to-Text Transfer Transformer(T5)
ํฌ๋ก์ค๋ชจ๋ฌ ๋ฐ ๋ค๊ตญ์ด ์นํ์ ์ธ ํ๊ตญ์ด ์ค์ฌ์ ํ ํฐ-ํ๋ฆฌ ์ธ์ด ์ดํด ์์ฑ ๋ชจ๋ธ
(EN=Cross-modal, Multilingual Friendly, Token-free Encoder-Decoder Pretrained Language Model for Korean)
* ๋ณธ ์ฌ์ ํ์ต ์ธ์ด๋ชจ๋ธ์ ์๊ฐ, ์ฒญ๊ฐ๊ณผ ๊ฐ์ ํ
์คํธ ์ด์ธ์ ๋ชจ๋ฌ๋ฆฌํฐ์ ๊ต์ฐจ์ธ์ด ์ง์ ๊ตํ์ ์ฉ์ดํ ํ ํฐ-ํ๋ฆฌ ์ฌ์ ํ์ต ์ธ์ด๋ชจ๋ธ์ ๋ชฉํ๋ก ํฉ๋๋ค.
* ๋ณ๋์ tokenizer๊ฐ ํ์์์ง๋ง, ํธ์๋ฅผ ์ํด AutoTokenizer.from_pretrained()๋ฅผ ์ฌ์ฉํ์ฌ ๋ค๋ฅธ ํ ํฌ๋์ด์ ๊ธฐ๋ฐ ์ธ์ฝ๋-๋์ฝ๋ ๋ชจ๋ธ๊ณผ ๋์ผํ๊ฒ ์ฒ๋ฆฌํ ์ ์์ต๋๋ค. ํ ํฌ๋์ด์ ๋ฅผ ์๋ตํ๊ณ ์ถ์ ๊ฒฝ์ฐ, UTF-8 ์
๋ ฅ์ ๋ฐ์ดํธ ๋จ์๋ก ์ชผ๊ฐ์ด, ๊ฐ ๋ฐ์ดํธ์ +3์ ํ์ฌ Token ID๋ฅผ ์์ฑํฉ๋๋ค. (์ฆ, ASCII value 0 == Token ID 3, ASCII value 255 == Token ID 258)
* ํ์ฌ Preview ์คํ
์ด์ง์ ์๋ ๋ชจ๋ธ์ด๋ฉฐ, ํ์ฉ์๋ fine-tuning์ด ํ์ํฉ๋๋ค.
* ๊ทธ๋๋์ธํธ ๊ธฐ๋ฐ ์๋ธ์๋ ํ ํฐํ [(Gradient-based Subword Tokenization; CharFormer; Tay et al., 2021;)](https://arxiv.org/abs/2106.12672)๋ฅผ ์ ์ฉํ ๋ณธ ๋ชจ๋ธ์, KLUE-MRC์์ ๊ฐ์ ๊ท๋ชจ์ KEByT5-base ๋ชจ๋ธ ๋๋น ํ์ต์์ 2.7๋ฐฐ, ์ถ๋ก ์์ 1.46๋ฐฐ ์ด์์ ํ์ต ์๋๊ฐ ๊ฐ์ ๋์์ต๋๋ค. ์ผ๋ถ ํ์ต/์ถ๋ก ์ฑ๋ฅ์ ๋น๊ต ๊ฐ๋ฅํ ์ฐจ์ด๊ฐ ์์ ์ ์์ต๋๋ค. ์์ธํ ๋ด์ฉ์ ํ์ ํ๊ฐ ์งํ๋ฅผ ์ฐธ๊ณ ํ์ญ์์ค.
## Acknowledgements
* ๋ณธ ์ฌ์ ํ์ต ์ธ์ด๋ชจ๋ธ์ 2022๋
๋ ์ ๋ถ(๊ณผํ๊ธฐ์ ์ ๋ณดํต์ ๋ถ)์ ์ฌ์์ผ๋ก ์ ๋ณดํต์ ๊ธฐํํ๊ฐ์์ ์ง์์ ๋ฐ์ ์ํ๋ ์ฐ๊ตฌ์ (No. RS-2022-00187238, ํจ์จ์ ์ฌ์ ํ์ต์ด ๊ฐ๋ฅํ ํ๊ตญ์ด ๋ํ ์ธ์ด๋ชจ๋ธ ์ฌ์ ํ์ต ๊ธฐ์ ๊ฐ๋ฐ)
(EN=This pretrained language model was supported by the Institute of Information & communication Technology Planning & Evaluation(IITP) grant funded by the Korea government(MSIT) (No. RS-2022-00187238, Development of Large Korean Language Model Technology for Efficient Pre-training))
# Model Details
๋ณธ ์ฌ์ ํ์ต ์ธ์ด๋ชจ๋ธ์ ๋ค์๊ณผ ๊ฐ์ ๊ท๋ชจ๋ฅผ ๊ฐ์ง๋๋ค:
* kebyt5-small : 330M [link](https://huggingface.co/etri-lirs/kebyt5-small-preview)
* kebyt5-base : 580M [link](https://huggingface.co/etri-lirs/kebyt5-base-preview)
* kebyt5-large : 1.23B [link](https://huggingface.co/etri-lirs/kebyt5-large-preview)
* GBST-kebyt5-base (this model): 584M
์ด๋ค ๋ชจ๋ธ์ [google/byt5-small](https://huggingface.co/google/byt5-small), [google/byt5-base](https://huggingface.co/google/byt5-base), [google/byt5-large](https://huggingface.co/google/byt5-large) ๋ชจ๋ธ๊ณผ ๋์ผํ ์ ๊ฒฝ๋ง ๊ตฌ์กฐ์ ํฌ๊ธฐ๋ฅผ ๊ฐ์ง๋ฉฐ, ํ ํฌ๋์ด์ (ByT5Tokenizer)์ ๊ตฌํ ์ ๋ ๋ชจ๋ธ์ ๋ณ๋์ ์์ ์์ด ๋ฐ๋ก ๊ตํํ์ฌ ์ฌ์ฉํ ์ ์์ต๋๋ค.
huggingface transformers์์์ ์ฌ์ฉ๋ฒ ์ญ์, T5ForConditionalGeneration์ ๋์ผํ๊ฒ ์ฌ์ฉํ ์ ์์ต๋๋ค.
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Language Intelligence Research Section, Electronics and Telecommunications Research Institute(ETRI)
- **Model type:** Encoder-Decoder Transformer, specifically, ByT5.
- **Language(s) (NLP):** Korean, English(partially for translation task), Chinese(partially for translation task), Japanese(partially for translation task).
- **License:** Apache 2.0 License
- **Finetuned from model:** kebyt5-small/-base/-xl model weights were initialized by google/byt5-* for Warm-start pretraining.
## Model Sources
- **Repository:** ๋ค์ด์คํธ๋ฆผ ํ์คํฌ ํ์ต์ ์ํด, https://github.com/etri-crossmodal/llm-downstream-s2s
- **Paper:** ์ ์ข
ํ ์ธ, "ํ๊ตญ์ด ์ค์ฌ์ ํ ํฐ-ํ๋ฆฌ ์ธ์ด ์ดํด-์์ฑ ๋ชจ๋ธ ์ฌ์ ํ์ต ์ฐ๊ตฌ", ์ 35ํ ํ๊ธ ๋ฐ ํ๊ตญ์ด ์ ๋ณด์ฒ๋ฆฌ ํ์ ๋ํ ๋
ผ๋ฌธ์ง, pp.711-715. 2023.
(EN=Shin et al., "Towards Korean-Centric Token-free Pretrained Language Model", in Procs. of the 35th Annual Conference on Human and Cognitive Language Technology. pp. 711-715. 2023.)
# Uses
ํด๋น ์ฌ์ ํ์ต ์ธ์ด๋ชจ๋ธ์ ์ฐ๊ตฌ ๋ฐ ๊ต์ก ๋ชฉ์ ์ ํ์ฉ์ผ๋ก ๊ทธ ์ฌ์ฉ ๋ชฉ์ ์ด ์ ํ๋ฉ๋๋ค.
## Direct Use
ํ์ฌ ๊ณต๊ฐ๋๋ ๋ชจ๋ธ์ T5 ๋ชจ๋ธ ํ์ต์ ์ฌ์ฉ๋ Corrupted span denoising ๋ง์ผ๋ก ํ์ต๋์ด ์์ด, ์ค์ ์์ฉ ํ์คํฌ์ ์ ์ฉํ๊ธฐ ์ํด์๋ fine-tuning ๊ณผ์ ์ด ํ์ํฉ๋๋ค.
Sentinel Token(token id 258, 257, 256, ...)์ ์ฌ์ฉํ์ฌ Masked Token Prediction์ ์ํํ ์ ์์ผ๋, ์์ธก๋ ๋ด์ฉ์๋ ๋ถ์ ์ ํ ๋ด์ฉ์ด ์์ ์ ์์ต๋๋ค.
## Downstream Use [optional]
Token-free ๋ชจ๋ธ์ ํน์ฑ ์, ๋ณต์กํ๊ฑฐ๋ Noisyํ ์
๋ ฅ์ ๊ฐ๊ฑดํ๋ฉฐ, ์งง์ ์ํ์ค ๊ธธ์ด์ ์์ฑ์ ์ ํฉํฉ๋๋ค. (์: ์ธ์ด ์ดํด, ๋ํ ์๋ต ์์ฑ)
์ฌ์ ํ์ต์ 1024 bytes ๊ธธ์ด์ ๋ฐ์ดํฐ๋ฅผ ํ์ตํ๊ธฐ ๋๋ฌธ์, ์ด๋ฅผ ์ด๊ณผํ๋ ๊ธด ์ํ์ค๋ฅผ ๋ค๋ฃจ๋ ๋ฌธ์ ์ ์ ํฉํ์ง ์์ ์ ์์ต๋๋ค.
๋ ๊ธด ์ํ์ค๋ฅผ ๋ค๋ค์ผ ํ๋ ๋ฌธ์ ์์๋, [GBST ๊ธฐ๋ฐ์ ํ ํฐ-ํ๋ฆฌ ์ธ์ด๋ชจ๋ธ](https://huggingface.co/etri-lirs/gbst-kebyt5-base-preview)์ ์ฌ์ฉํ๋ ๊ฒ์ ๊ถ์ฅํฉ๋๋ค.
# Bias, Risks, Limitations, and Recommendations
Masked Token Prediction์ ํตํด ํ๋๋ ์ ์๋ ์ ๋ณด์๋ ๋ค๋ฅธ ์์ฑํ ์ธ์ด๋ชจ๋ธ๊ณผ ๊ฐ์ ์ํ์ ๊ฐ์ง๊ณ ์์ ์ ์์ต๋๋ค. ํ์ต์ ์ฌ์ฉ๋ ๋ฐ์ดํฐ๋ ์์ค, ์๋, ์ ์น์ ๋ด์ฉ ๋ฐ ๊ธฐํ ๊ฑฐ์น ์ธ์ด๋ค์ ๋ํ ๋ณ๋์ ์ฒ๋ฆฌ๊ฐ ์ด๋ฃจ์ด์ง์ง ์์์ต๋๋ค. ๋ฐ๋ผ์, ์ฌํ์ ์ผ๋ก ์ฉ์ธ๋์ง ์์ ํ ํฐ์ด๋ ํ
์คํธ๋ฅผ ์์ฑํ ์ ์์ผ๋ฉฐ, ์ฃผ๋ณ ๋ฌธ๋งฅ์ ๋ฐ๋ผ์ ๊ณต๊ฒฉ์ ์ธ ์
๋ ฅ์ ์ด๋ ํ ๊ฒฐ๊ณผ๋ฅผ ์์ฑํ ์ ์์์ง ์ฝ๊ฒ ์์ํ ์ ์์ต๋๋ค.
ํํธ, ๋ณธ ์ธ์ด๋ชจ๋ธ์ ์ฃผ๋ก ํ๊ตญ์ด ํ
์คํธ๋ก ํ์ต๋์์ผ๋ฉฐ, ์ด๋ค์ ํน์ฑ์ ์ ์ดํ ์ ์๋ ๋ค์ด์คํธ๋ฆผ ํ์คํฌ, ๊ทธ ์ค์์๋ ๋ถ๋ฅ, ์์ฝ, ์งง์ ๋ฌธ์ฅ ์์ฑ์ ์ ํฉํ ์ ์์ต๋๋ค. ์
์ถ๋ ฅ ์์ค์์ ๋ฏธ๋ฑ๋ก์ด(Out-of-Vocabulary)๊ฐ ์กด์ฌํ ์ ์์ผ๋, ์ฌ์ ํ์ต๋์ง ์์ ํ
์คํธ ์ํ์ค์ ๋ํด์๋ ์ถ๊ฐ์ ๋๋ฉ์ธ ์ ์ ํ์ต ๋ฐ ๋ค์ด์คํธ๋ฆผ ํ์คํฌ์ ๋ฏธ์ธ์กฐ์ ์ด ํ์ํฉ๋๋ค.
[More Information Needed]
## How to Get Started with the Model
Transformers 4.27.0 ์ด์์ ๋ฒ์ ์์, ๋ค์์ ํ์ด์ฌ ์ฝ๋๋ฅผ ์ฌ์ฉํ์ฌ ๋ชจ๋ธ๊ณผ tokenizer๋ฅผ ์ฌ์ฉํ ์ ์์ต๋๋ค. ์๊ธฐ์ ์ธ๊ธ๋ ๋ฐ์ ๊ฐ์ด, transformer ๋ชจ๋ ๋ก๋ ์ gbswt5 ๋ชจ๋์ import ํด์ผ ํฉ๋๋ค:
```
import gbswt5
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("etri-lirs/gbst-kebyt5-base-preview")
model = AutoModelForSeq2SeqLM.from_pretrained("etri-lirs/gbst-kebyt5-base-preview")
```
# Training Details
## Training Data
๋ณธ ์ฌ์ ํ์ต์๋ ์๋์ ๊ณต๊ฐ ๋ฐ์ดํฐ๊ฐ ์ฌ์ฉ๋์์ต๋๋ค:
* ๊ตญ๋ฆฝ๊ตญ์ด์, ๋ชจ๋์ ๋ง๋ญ์น. ์ ๋ฌธ v2.0
* ๊ตญ๋ฆฝ๊ตญ์ด์, ๋ชจ๋์ ๋ง๋ญ์น. ๊ตฌ์ด ๋ง๋ญ์น v1.2
* ๊ตญ๋ฆฝ๊ตญ์ด์, ๋ชจ๋์ ๋ง๋ญ์น. ๋ฌธ์ด ๋ง๋ญ์น v1.0
* ๊ตญ๋ฆฝ๊ตญ์ด์, ๋ชจ๋์ ๋ง๋ญ์น. ์ ๋ฌธ 2020 v1.0
* ๊ตญ๋ฆฝ๊ตญ์ด์, ๋ชจ๋์ ๋ง๋ญ์น. ์ ๋ฌธ 2021 v1.0
* ํ๊ตญ์ด ์ํคํผ๋์ด ๋คํ, [v2020.09.20](https://github.com/lovit/kowikitext)
* [๋๋ฌด์ํค ๋คํ](https://github.com/lovit/namuwikitext)
* ํ๊ตญ์ ๋ณดํ์งํฅ์, AIHub. ์ ๋ฌธ๋ถ์ผ ๋ง๋ญ์น, ๋ฒ๋ฅ /ํนํ ์ง์๋ฒ ์ด์ค, ๋
ผ๋ฌธ/๋์/๋ํ/๋๋ณธ ์์ฝ, ํ์/ํ์ผ/ํ์ค ๋ฒ์ญ ๋ง๋ญ์น, ์ฝ์ผํฐ/์ฃผ๋ฌธ/๋ด์ค๊ธฐ์ฌ/์๊ฐ์ ๋ณด ์ง์์๋ต, ๋ฐฉ์ก/ํ์/์๋ด ์์ฑ์ธ์ ๋ฐ์ดํฐ.
* ํ๊ตญ์ ๋ณดํ์งํฅ์, AIHub. ๋๊ท๋ชจ ์น๋ฐ์ดํฐ ๊ธฐ๋ฐ ํ๊ตญ์ด ๋ง๋ญ์น ๋ฐ์ดํฐ
* ํ๊ตญ์ ๋ณดํ์งํฅ์, AIHub. ์จ๋ผ์ธ ๊ตฌ์ด์ฒด ๋ง๋ญ์น ๋ฐ์ดํฐ.
* [KcBERT ๋ง๋ญ์น, v2022.3Q](https://github.com/Beomi/KcBERT)
๋ํ, ์๋์ ์์ฒด ๊ตฌ์ถ๋ ๋ฐ์ดํฐ ๋ฐ ํฉ์ฑ ๋ฐ์ดํฐ ์ผ๋ถ๋ฅผ ์ฌ์ฉ, ์ ์ฒด ์ฝ ~220GB ๊ฐ๋์ ๋ฐ์ดํฐ๋ก ํ์ต๋์์ต๋๋ค.
# Evaluation
## Testing Data, Factors & Metrics & Results
ํ๊ตญ์ด ์ธ์ด ์ดํด ํ์คํฌ์ ์ฌ์ฉ๋๋ [KLUE dataset, v1.1](https://klue-benchmark.com/)์ dev set์ ์ฌ์ฉํ์ฌ ํ๊ฐ๋์์ต๋๋ค.
์์ฑ์ ๋ชจ๋ seq2seq์ ์ด์ฉํ ์ถ๋ ฅ ๋ ์ด๋ธ ์ง์ ์์ฑ ๋ฐฉ๋ฒ์ ์ฌ์ฉํ์ต๋๋ค.
| models | KLUE-TC(YNAT) (F1) | KLUE-NER (Entity, Char F1) | KLUE-DP (UAS, LAS) | KLUE-MRC (EM, ROUGE-W) |
|-------------|---------------|--------------|-------------------|------------------|
| google/byt5-large (1.23B) | 78.52 | 48.81, 63.95 | 44.26, 7.805 | _NOT TESTED_ |
| KEByT5-Base (580M) | 84.99 | 86.75, 91.05 | 88.70, 85.90 | 62.28, 68.38 |
| KEByT5-Large (1.23B) | 85.68 | 88.09, 92.40 | 87.18, 85.52 | 70.07, 75.81 |
| **GBST-KEByT5-Base (584M)** | **85.29** | **87.35, 92.09** | **88.33, 85.00** | **59.69, 66.44** |
๋ํ ์ํ ์ถ์ (DST; Dialogue State Tracking) ํ์คํฌ์ธ KLUE-WOS-v1.1 ๊ฒฐ๊ณผ๋ ๋ค์๊ณผ ๊ฐ์ต๋๋ค. ํ๊ฐ๋ ๋ชจ๋ seq2seq์ ์ด์ฉํ ๋ค์ด์ผ๋ก๊ทธ ์ํ ์ง์ ์์ฑ์ ์ฌ์ฉํ์ต๋๋ค:
| models | WOS (JGA, %) | WOS (F1, %) |
| ------- | ---------- | ----------- |
| klue/klue-roberta-large | 50.22 | 92.23 |
| KEByT5-Base (580M) | 77.15 | 96.92 |
| KEByT5-Large (1.23B) | 78.54 | 97.28 |
| **GBST-KEByt5-base (584M)** | **75.94** | **96.73** |
๊ด๊ณ ์ถ์ถ(RE; Relation Extraction) ํ์คํฌ์ธ KLUE-RE-v1.1 ๊ฒฐ๊ณผ๋ ๋ค์๊ณผ ๊ฐ์ต๋๋ค. no_relation์ ์ ์ธํ 29๊ฐ์ ๊ด๊ณ ํด๋์ค์ ๋ํ Micro F1 ๊ฒฐ๊ณผ์
๋๋ค:
| models | KLUE-RE (F1, %) |
| ------- | ---------- |
| klue/klue-roberta-base | 65.90 |
| KEByT5-Base (580M) | 65.48 |
| KEByT5-Large (1.23B) | 68.95 |
GBST ์ ์ฉ์ ํตํ ํจ์จํ ๊ฐ์ ์ ๋ค์๊ณผ ๊ฐ์ด ํ๊ฐ๋์์ต๋๋ค. ํ๊ฐ ํ๊ฒฝ์ A100 PCIE 80GB๊ฐ ์ฌ์ฉ๋์์ผ๋ฉฐ, ์ ๋ฐ๋๋ bfloat16์์ ์ธก์ ๋์์ต๋๋ค.
ํ์ต ๋ฐ ํ๊ฐ์๋ KLUE-MRC ๋ฐ์ดํฐ์
์ด ์ฌ์ฉ๋์์ต๋๋ค. ์ด๋ค ๋ฐ์ดํฐ์
์ ๊ธธ์ด๋ ์ต๋ 5800 bytes์ ๋ฌธ๋งฅ์ด ๋ค์ด๊ฐ๋๋ค.
| model | training sample/sec. | inference sample/sec. |
| ----- | -------------------- | --------------------- |
| KEByT5-base (580M) | 1.30 | 3.95 |
| **GBST-KEByT5-base (584M, this model)** | **3.56** | **5.77** |
## Compute Infrastructure
* Trained on nVidia A100 80GB * 8EA
# Citation
* ํ์ ์ธ, "์์ฑํ ์ธ์ด๋ชจ๋ธ์ ์ด์ฉํ ๊ด๊ณ ์ถ์ถ", ์ 35ํ ํ๊ธ ๋ฐ ํ๊ตญ์ด ์ ๋ณด์ฒ๋ฆฌ ํ์ ๋ํ ๋
ผ๋ฌธ์ง. pp.708-710. 2023.
* ์ด๊ธฐ์ ์ธ, "ํ๊ตญ์ด ํ ํฐ-ํ๋ฆฌ ์ฌ์ ํ์ต ์ธ์ด๋ชจ๋ธ KeByT5๋ฅผ ์ด์ฉํ ํ๊ตญ์ด ์์ฑ ๊ธฐ๋ฐ ๋ํ ์ํ ์ถ์ ", ์ 35ํ ํ๊ธ ๋ฐ ํ๊ตญ์ด ์ ๋ณด์ฒ๋ฆฌ ํ์ ๋ํ ๋
ผ๋ฌธ์ง. pp.644-647. 2023.
# Model Card Authors/Contacts
Jong-hun Shin(ETRI), e-mail=jhshin82 _AT_ etri _DOT_ re _DOT_ kr.
|
[
"RELATION_EXTRACTION",
"TRANSLATION"
] |
Non_BioNLP
|
besimray/miner_id_2_e0133dc7-d8f1-4a3d-b487-a48a7ef3227b_1729802816
|
besimray
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"en",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,729,802,816,000 | 2024-10-24T20:52:01 | 7 | 0 |
---
base_model: meta-llama/Llama-3.2-3B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Llama-3.2-3B
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-3B)
## โจ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (11B vision)** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
fathyshalab/domain_transfer_general-massive_qa-roberta-large-v1-5-73
|
fathyshalab
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,676,054,192,000 | 2023-02-10T18:36:59 | 9 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/domain_transfer_general-massive_qa-roberta-large-v1-5-73
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_qa-roberta-large-v1-5-73")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Broomva/t5-base-translation-spa-guc
|
Broomva
|
text2text-generation
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,701,306,630,000 | 2023-12-02T19:21:43 | 19 | 0 |
---
base_model: t5-base
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: t5-base-translation-spa-guc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-translation-spa-guc
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0136
- Bleu: 1.4957
- Gen Len: 17.8854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|
| 1.3933 | 1.0 | 7668 | 1.5107 | 0.8563 | 18.0712 |
| 1.598 | 2.0 | 15336 | 1.3444 | 0.9626 | 18.0648 |
| 1.4277 | 3.0 | 23004 | 1.2551 | 1.1025 | 17.9695 |
| 1.4152 | 4.0 | 30672 | 1.2000 | 1.1361 | 17.9426 |
| 1.1671 | 5.0 | 38340 | 1.1565 | 1.2243 | 17.8416 |
| 1.1777 | 6.0 | 46008 | 1.1217 | 1.2874 | 17.8809 |
| 1.4485 | 7.0 | 53676 | 1.0955 | 1.3318 | 17.9663 |
| 1.3209 | 8.0 | 61344 | 1.0729 | 1.3889 | 17.967 |
| 1.394 | 9.0 | 69012 | 1.0557 | 1.4082 | 17.8646 |
| 1.0608 | 10.0 | 76680 | 1.0435 | 1.4463 | 17.9294 |
| 1.0713 | 11.0 | 84348 | 1.0323 | 1.4558 | 17.9015 |
| 0.976 | 12.0 | 92016 | 1.0248 | 1.4666 | 17.9103 |
| 1.0782 | 13.0 | 99684 | 1.0191 | 1.484 | 17.8929 |
| 1.045 | 14.0 | 107352 | 1.0150 | 1.4869 | 17.8875 |
| 0.9936 | 15.0 | 115020 | 1.0136 | 1.4957 | 17.8854 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"TRANSLATION"
] |
Non_BioNLP
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_pretrain_sst2
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,675,045,816,000 | 2023-01-30T02:59:16 | 141 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_pretrain_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- type: accuracy
value: 0.4908256880733945
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_pretrain_sst2
This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-complete](https://huggingface.co/gokuls/mobilebert_add_pre-training-complete) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.4908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0 | 1.0 | 527 | nan | 0.4908 |
| 0.0 | 2.0 | 1054 | nan | 0.4908 |
| 0.0 | 3.0 | 1581 | nan | 0.4908 |
| 0.0 | 4.0 | 2108 | nan | 0.4908 |
| 0.0 | 5.0 | 2635 | nan | 0.4908 |
| 0.0 | 6.0 | 3162 | nan | 0.4908 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
guymorlan/levanti_translate_he_ar
|
guymorlan
|
translation
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"ar",
"he",
"dataset:guymorlan/levanti",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,720,603,852,000 | 2024-07-10T09:53:21 | 77 | 0 |
---
datasets:
- guymorlan/levanti
language:
- ar
- he
license: cc-by-nc-4.0
pipeline_tag: translation
widget:
- text: P ืื ื ืจืืฆื ืืืืช ืืืจ ืืื ืืช
---
# Levanti (Hebrew -> colloquial Levantine Arabic) translator
Trained on the [Levanti](https://huggingface.co/datasets/guymorlan/levanti) dataset by fine-tuning [Helsinki-NLP/opus-mt-he-ar](https://huggingface.co/Helsinki-NLP/opus-mt-ar-he) for 8 epochs.
This model is trained to support dialect conditional generation by utilizing the first token (followed by a space) as an indicator of the desired dialect:
* **P** for Palestinian
* **L** for Lebanese
* **S** for Syrian
* **E** for Egyptian
# Example usage
```python
from transformers import pipeline
trans = pipeline("translation", "guymorlan/levanti_translate_he_ar")
trans("P ืื ื ืจืืฆื ืืืืช ืืืจ ืืื ืืช")
```
```
Out[1]: [{'translation_text': 'ุจุฏู ุฃุฑูุญ ุน ุงูุฏูุงู ุจูุฑุง'}]
```
# Attribution
Created by Guy Mor-Lan.<br>
Contact: guy.mor AT mail.huji.ac.il
|
[
"TRANSLATION"
] |
Non_BioNLP
|
pinzhenchen/sft-lora-bg-baichuan-2-7b
|
pinzhenchen
| null |
[
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 1,709,682,293,000 | 2024-03-05T23:44:57 | 0 | 0 |
---
language:
- bg
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [baichuan-inc/Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
sagard21/python-code-explainer
|
sagard21
|
summarization
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:sagard21/autotrain-data-code-explainer",
"doi:10.57967/hf/2126",
"license:mit",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,672,941,878,000 | 2023-03-19T08:28:10 | 179 | 12 |
---
datasets:
- sagard21/autotrain-data-code-explainer
language:
- en
license: mit
pipeline_tag: summarization
tags:
- autotrain
- summarization
widget:
- text: "def preprocess(text: str) -> str:\n text = str(text)\n text = text.replace('\\\
\\n', ' ')\n tokenized_text = text.split(' ')\n preprocessed_text = \" \"\
.join([token for token in tokenized_text if token])\n\n return preprocessed_text\n"
co2_eq_emissions:
emissions: 5.393079045128973
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2745581349
- CO2 Emissions (in grams): 5.3931
# Model Description
This model is an attempt to simplify code understanding by generating line by line explanation of a source code. This model was fine-tuned using the Salesforce/codet5-large model. Currently it is trained on a small subset of Python snippets.
# Model Usage
```py
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer,
AutoConfig,
pipeline,
)
model_name = "sagard21/python-code-explainer"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model.eval()
pipe = pipeline("summarization", model=model_name, config=config, tokenizer=tokenizer)
raw_code = """
def preprocess(text: str) -> str:
text = str(text)
text = text.replace("\n", " ")
tokenized_text = text.split(" ")
preprocessed_text = " ".join([token for token in tokenized_text if token])
return preprocessed_text
"""
print(pipe(raw_code)[0]["summary_text"])
```
## Validation Metrics
- Loss: 2.156
- Rouge1: 29.375
- Rouge2: 18.128
- RougeL: 25.445
- RougeLsum: 28.084
- Gen Len: 19.000
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
LoneStriker/opus-v1.2-70b-5.0bpw-h6-exl2
|
LoneStriker
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"axolotl",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | 1,710,366,857,000 | 2024-03-13T22:11:56 | 8 | 0 |
---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
---
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-70b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
- Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
<img src="/dreamgen/opus-v1.2-70b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to have) `names`, names are used to indicate the "active" character during role-play.
- There can be multiple subsequent message with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`, beware of the spaces after `names=` and after the `;`. This spacing leads to most natural tokenization for the names.
</details>
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks
### Steerable Story-writing and Role-playing:
- [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing)
- [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing)
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
### Story plot summarization
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
#### Examples:
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary>
<blockquote>
Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary>
<blockquote>
The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth.
One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic.
As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary>
<blockquote>
The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>
### Story character description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
#### Examples:
<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>
<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>
### Story style description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description the style of the story.
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>
### Story description to chapters
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
### And more...
## Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model is should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 34B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)
### Running on DreamGen.com (free)
You can run the models on [dreamgen.com](https://dreamgen.com) for free โ you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
- Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario.
- **SillyTavern**
- [Settings](https://huggingface.co/dreamgen/opus-v1.2-70b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- [Settings screenshot](configs/silly_tavern/settings_screenshot.webp)
- Make sure to unselect `Skip special tokens`, otherwise it won't work
- This is just an attempt at approximating the Opus V1 prompt, it won't be perfect
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Just like ChatML, just changed "assistant" to "text" role.
- **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Just like ChatML, just changed "assistant" to "text" role.
## Known Issues
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
- The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).
## License
- This model is intended for personal use only, other use is not permitted.
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
gaudi/opus-mt-tiv-en-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,175,397,000 | 2024-10-18T22:48:05 | 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-tiv-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-tiv-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-tiv-en --output_dir ./ctranslate2/opus-mt-tiv-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-tiv-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-tiv-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-tiv-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-tiv-en) by Helsinki-NLP.
|
[
"TRANSLATION"
] |
Non_BioNLP
|
gokuls/mobilebert_sa_GLUE_Experiment_sst2
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,674,648,790,000 | 2023-01-25T12:51:10 | 139 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mobilebert_sa_GLUE_Experiment_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- type: accuracy
value: 0.8027522935779816
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_sst2
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4157
- Accuracy: 0.8028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.487 | 1.0 | 527 | 0.4157 | 0.8028 |
| 0.2824 | 2.0 | 1054 | 0.4351 | 0.8005 |
| 0.2265 | 3.0 | 1581 | 0.4487 | 0.8096 |
| 0.1989 | 4.0 | 2108 | 0.5182 | 0.7993 |
| 0.1813 | 5.0 | 2635 | 0.4654 | 0.7982 |
| 0.1684 | 6.0 | 3162 | 0.5340 | 0.7924 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
ernestobs7/caregiver-ft-v1
|
ernestobs7
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:98",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,740,418,595,000 | 2025-02-24T17:37:31 | 36 | 0 |
---
base_model: Snowflake/snowflake-arctic-embed-l
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:98
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are some common attitudes and beliefs that can create personal
barriers to self-care for family caregivers?
sentences:
- 'Support for nutrition, breathing, and feeding
People with ALS may have trouble chewing and swallowing their food, and getting
the nutrients they need. Nutritionists and registered dieticians can help plan
small, nutritious mealsย throughout the day and identify foods to avoid. When the
person can no longer eat with help, a feeding tube can reduce the personโs risk
of choking and pneumonia.'
- "Amyotrophic Lateral Sclerosis (ALS) | National Institute of Neurological Disorders\
\ and Stroke\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n\n\n\n\n\n\n\
\n\n\n\n\n\n\nAn official website of the United States government\n\n \
\ Hereโs how you know\n\n\n\n\n\n\n\n\n\n\n\nOfficial websites use .gov \n\
\ A\n .gov\n website belongs to an\
\ official government organization in the United States.\n \n\n\n\
\n\n\n\n\n\nSecure .gov websites use HTTPS\n\n A lock\n \
\ (\n\n)\n or\n https://\n \
\ means youโve safely connected to the .gov website. Share sensitive\
\ information only on official, secure websites.\n \n\n\n\n\n\n\
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSearch\n\n\nMenu\n\n\n\n\n\n\n\n\n\nSearch NINDS\n\
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSearch NINDS\n\n\n\n\n\n\n\n\n\n\n\
\n\n\n\nMain navigation"
- "Identifying Personal Barriers \nMany times, attitudes and beliefs form personal\
\ barriers that stand in the \nway of caring for yourself. Not taking care of\
\ yourself may be a lifelong \npattern, with taking care of others an easier option.\
\ However, as a family \ncaregiver you must ask yourself, \"What good will I\
\ be to the person I care \nfor if I become ill? If I die?\" Breaking old patterns\
\ and overcoming \nobstacles is not an easy proposition, but it can be done โ\
\ regardless of \nyour age or situation. The first task in removing personal\
\ barriers to self-\ncare is to identify what is in your way. For example, \n\
โข Do you feel you have to prove that you are worthy of the care recipient's \n\
affection? \nโข Do you think you are being selfish if you put your needs first?\
\ \nโข Is it frightening to think of your own needs? What is the fear about?"
- source_sentence: What role does the SOD1 gene play in the body?
sentences:
- "Migraine Trainerยฎ Shareable Resources\n\n\n\nMind Your Risksยฎ\n\n\nNINDS Brain\
\ Educational Resources\n\n\nStroke\n\n\n\n\n\n\nStroke Overview\n\n\nPrevention\n\
\n\nSigns and Symptoms\n\n\nAssess and Treat\n\n\n\n\n\n\nNIH Stroke Scale\n\n\
\n\nRecovery\n\n\nResearch\n\n\nOutreach\n\n\n\n\n\n\n\n\nDid you find the content\
\ you were looking for?\n\n\n\n\n\nYes, I did find the content I was looking for\n\
\n\n\n\nNo, I did not find the content I was looking for\n\n\n\n\n\n\n\nPlease\
\ rate how easy it was to navigate the NINDS website\n\n\n\n\n\nVery easy to navigate\n\
\n\n\n\nEasy to navigate\n\n\n\n\nNeutral\n\n\n\n\nDifficult to navigate\n\n\n\
\n\nVery difficult to navigate\n\n\n\n\n\n\nThank you for letting us know! Any\
\ other feedback?\n\n\n\n\nSubmit\n\n\n\n\n\nThis site is protected by reCAPTCHA\
\ and the Googleย Privacy Policyand Terms of Serviceapply.\n\n\n\n\n\n\n\n\n\n\n\
\n Last reviewed on July 19, 2024\n \n\n\n\n\n\n\n\
\n\n\n\n\nContact Us"
- 'Muscle twitches in the arm, leg, shoulder, or tongue
Muscle cramps
Tight and stiff muscles (spasticity)
Muscle weakness affecting an arm, a leg, or the neck
Slurred and nasal speech
Difficulty chewing or swallowing
As the disease progresses, muscle weakness and atrophy spread to other parts of
your body. People with ALS may develop problems with:
Chewing food and swallowing (dysphagia)
Drooling (sialorrhea)
Speaking or forming words (dysarthria)
Breathing (dyspnea)
Unintended crying, laughing, or other emotional displays (pseudobulbar symptoms)
Constipation
Maintaining weight and getting enough nutrients'
- "About 25-40% of all familial cases (and a small percentage of sporadic cases)\
\ are caused by a defect in the C9orf72 gene. C9orf72 makes a protein found in\
\ motor neurons and nerve cells in the brain. \nAnother 12-20% of familial cases\
\ result from mutations in the SOD1 gene. SOD1 is involved in production of the\
\ enzyme copper-zinc superoxide dismutase 1."
- source_sentence: What types of resources are available for caregivers of individuals
with ALS?
sentences:
- 'Eventually, people with ALS will not be able to stand or walk, get in or out
of bed on their own, use their hands and arms, or breathe on their own. Because
they usually remain able to reason, remember, and understand, they are aware of
their progressive loss of function. This can cause anxiety and depression in the
person with ALS and their loved ones. Although not as common, people with ALS
also may experience problems with language or decision-making. Some also develop
a form of dementia known as FTD-ALS.
Most people with ALS die from being unable to breathe on their own (known as respiratory
failure,) usually within three to five years from when the symptoms first appear.
However, about 10% survive for a decade or more.
Who is more likely to get amyotrophic lateral sclerosis (ALS)?'
- 'Motor Neuron Diseasesย
Order publications from the NINDS Catalog
The NINDS Publication Catalog offers printed materials on neurological disorders
for patients, health professionals, and the general public. All materials are
free of charge, and a downloadable PDF version is also available for most publications.
Order NINDS Publications
ย
Health Information
Disorders
Glossary of Neurological Terms
Order Publications
Clinical Trials
Clinical Trials in the Spotlight
Find NINDS Clinical Trials
Patient & Caregiver Education
Brain Attack Coalition
Brain Donation
Public Education
Brain Basics
Know Your Brain
Understanding Sleep
Preventing Stroke
The Life and Death of a Neuron
Genes and the Brain
Migraine Trainerยฎ
Migraine Trainerยฎ Shareable Resources'
- "Caring for a person living with ALS\nAs the person with ALS progresses in their\
\ disease, they will need more and more help with daily activities. Being a caregiver\
\ for a person with ALS, while rewarding, can be challenging for the personโs\
\ loved ones and caregivers. It is important for caregivers take care of themselves\
\ and to seek support when needed. Free and paid resources are available to provide\
\ home health care services and support. Visit the organizations listed at the\
\ end of this article to find support in your area. \nWhat are the latest updates\
\ on amyotrophic lateral sclerosis (ALS)?"
- source_sentence: How can prospective donors participate in ALS research through
brain donation?
sentences:
- 'Doctors may use the following medications approved by the U.S. Food and Drug
Administration (FDA) to support a treatment plan forย ALS:'
- NINDSย also supports theย NIH NeuroBioBank, a collaborative effort involving several
brain banks across the U.S. that supply investigators with tissue from people
with neurological and other disorders. Tissue from individualsย withย ALSย is needed
to help advance critical research on the disease. A single donated brain can make
a huge impact on ALS research, potentially providing information for hundreds
of studies. The goal is to increase the availability of, and access to, high quality
specimens for research to understand the neurological basis of the disease. Prospective
donors can begin the enrollment process by visitingย Learn How to Become a Brain
Donor.
- Theย Nationalย ALSย Registryย collects, manages, and analyzes de-identified data about
people withย ALSย in the United States. Developed by the Center for Disease Control
and Prevention's Agency for Toxic Substances and Disease Registry (ATSDR), this
registry establishes information about the number ofย ALSย cases, collects demographic,
occupational, and environmental exposure data from people withย ALSย to learn about
potential risk factors for the disease, and notifies participants about research
opportunities. The Registry includes data from national databases as well as de-identified
information provided by individualsย withย ALS. All information is kept confidential.
People withย ALSย can add their information to the registry and sign up to receive
for more information.
- source_sentence: Does having a risk factor guarantee that a person will develop
a disorder?
sentences:
- 'Doctors may use the following medications approved by the U.S. Food and Drug
Administration (FDA) to support a treatment plan forย ALS:'
- "possible decline in quality of life. \n \nBut despite these risks, family caregivers\
\ of any age are less likely than \nnon-caregivers to practice preventive healthcare\
\ and self-care behavior. \nRegardless of age, sex, and race and ethnicity, caregivers\
\ report problems \nattending to their own health and well-being while managing\
\ caregiving \nresponsibilities. They report: \nโข sleep deprivation \nโข poor\
\ eating habits \nโข failure to exercise \nโข failure to stay in bed when ill\
\ \nโข postponement of or failure to make medical appointments ."
- 'A risk factor is a condition or behavior that occurs more frequently in those
who have a disease, or who are at greater risk of getting a disease, than in those
who don''t have the risk factor. Having a risk factor doesn''t mean a person will
develop a disorder, and not having a risk factor doesn''t mean you wonโt. Risk
factors for ALS include:'
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9637887397321441
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.951388888888889
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9513888888888888
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("ernestobs7/caregiver-ft-v1")
# Run inference
sentences = [
'Does having a risk factor guarantee that a person will develop a disorder?',
"A risk factor is a condition or behavior that occurs more frequently in those who have a disease, or who are at greater risk of getting a disease, than in those who don't have the risk factor. Having a risk factor doesn't mean a person will develop a disorder, and not having a risk factor doesn't mean you wonโt. Risk factors for ALS include:",
'possible decline in quality of life. \n \nBut despite these risks, family caregivers of any age are less likely than \nnon-caregivers to practice preventive healthcare and self-care behavior. \nRegardless of age, sex, and race and ethnicity, caregivers report problems \nattending to their own health and well-being while managing caregiving \nresponsibilities. They report: \nโข sleep deprivation \nโข poor eating habits \nโข failure to exercise \nโข failure to stay in bed when ill \nโข postponement of or failure to make medical appointments .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9167 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9167 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9167 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9638** |
| cosine_mrr@10 | 0.9514 |
| cosine_map@100 | 0.9514 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 98 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 98 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 19.21 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 120.29 tokens</li><li>max: 181 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are some common symptoms experienced by individuals with ALS related to muscle function?</code> | <code>Muscle twitches in the arm, leg, shoulder, or tongue<br>Muscle cramps<br>Tight and stiff muscles (spasticity)<br>Muscle weakness affecting an arm, a leg, or the neck<br>Slurred and nasal speech<br>Difficulty chewing or swallowing<br><br>As the disease progresses, muscle weakness and atrophy spread to other parts of your body. People with ALS may develop problems with:<br><br>Chewing food and swallowing (dysphagia)<br>Drooling (sialorrhea)<br>Speaking or forming words (dysarthria)<br>Breathing (dyspnea)<br>Unintended crying, laughing, or other emotional displays (pseudobulbar symptoms)<br>Constipation<br>Maintaining weight and getting enough nutrients</code> |
| <code>How does ALS affect a person's ability to chew and swallow food?</code> | <code>Muscle twitches in the arm, leg, shoulder, or tongue<br>Muscle cramps<br>Tight and stiff muscles (spasticity)<br>Muscle weakness affecting an arm, a leg, or the neck<br>Slurred and nasal speech<br>Difficulty chewing or swallowing<br><br>As the disease progresses, muscle weakness and atrophy spread to other parts of your body. People with ALS may develop problems with:<br><br>Chewing food and swallowing (dysphagia)<br>Drooling (sialorrhea)<br>Speaking or forming words (dysarthria)<br>Breathing (dyspnea)<br>Unintended crying, laughing, or other emotional displays (pseudobulbar symptoms)<br>Constipation<br>Maintaining weight and getting enough nutrients</code> |
| <code>What percentage of ALS cases are classified as familial?</code> | <code>About 10% of allย ALSย cases are familial (also called inherited or genetic). Changes in more than a dozen genes have been found to cause familialย ALS.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 10 | 0.9382 |
| 2.0 | 20 | 0.9539 |
| 3.0 | 30 | 0.9484 |
| 4.0 | 40 | 0.9484 |
| 5.0 | 50 | 0.9638 |
| 6.0 | 60 | 0.9638 |
| 7.0 | 70 | 0.9638 |
| 8.0 | 80 | 0.9638 |
| 9.0 | 90 | 0.9638 |
| 10.0 | 100 | 0.9638 |
### Framework Versions
- Python: 3.11.4
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
BioNLP
|
Omartificial-Intelligence-Space/Arabic-all-nli-triplet-Matryoshka
|
Omartificial-Intelligence-Space
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"mteb",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"arxiv:2407.21139",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | 1,718,387,645,000 | 2025-01-23T10:30:49 | 217 | 2 |
---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
language:
- ar
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
- mteb
inference: false
widget:
- source_sentence: ุฐูุฑ ู
ุชูุงุฒู ุจุนูุงูุฉ ููู ุนูู ูุฏู
ูุงุญุฏุฉ ุจุงููุฑุจ ู
ู ู
ูุทูุฉ ุดุงุทุฆ ุงูู
ุญูุท
ุงููุธููุฉ
sentences:
- ุฑุฌู ููุฏู
ุนุฑุถุงู
- ููุงู ุฑุฌู ุจุงูุฎุงุฑุฌ ูุฑุจ ุงูุดุงุทุฆ
- ุฑุฌู ูุฌูุณ ุนูู ุฃุฑููู
- source_sentence: ุฑุฌู ูููุฒ ุฅูู ุณุฑูุฑู ุงููุฐุฑ
sentences:
- ุงูุณุฑูุฑ ูุฐุฑ.
- ุฑุฌู ูุถุญู ุฃุซูุงุก ุบุณูู ุงูู
ูุงุจุณ
- ุงูุฑุฌู ุนูู ุงููู
ุฑ
- source_sentence: ุงููุชูุงุช ุจุงูุฎุงุฑุฌ
sentences:
- ุงู
ุฑุฃุฉ ุชูู ุงูุฎูุท ุฅูู ูุฑุงุช ุจุฌุงูุจ ููู
ุฉ ู
ู ุงููุฑุงุช
- ูุชูุงู ูุฑูุจุงู ูู ุฌููุฉ ู
ุชุนุฉ
- ุซูุงุซ ูุชูุงุช ููููู ุณููุฉ ูู ุบุฑูุฉ ูุงุญุฏุฉ ุชุณุชู
ุน ููุงุญุฏุฉ ุชูุชุจ ุนูู ุงูุญุงุฆุท ูุงูุซุงูุซุฉ ุชุชุญุฏุซ
ุฅูููู
- source_sentence: ุงูุฑุฌู ูุฑุชุฏู ูู
ูุตุงู ุฃุฒุฑู.
sentences:
- ุฑุฌู ูุฑุชุฏู ูู
ูุตุงู ุฃุฒุฑู ูู
ูู ุฅูู ุงูุฌุฏุงุฑ ุจุฌุงูุจ ุงูุทุฑูู ู
ุน ุดุงุญูุฉ ุฒุฑูุงุก ูุณูุงุฑุฉ ุญู
ุฑุงุก
ู
ุน ุงูู
ุงุก ูู ุงูุฎูููุฉ.
- ูุชุงุจ ุงููุตุต ู
ูุชูุญ
- ุฑุฌู ูุฑุชุฏู ูู
ูุต ุฃุณูุฏ ูุนุฒู ุนูู ุงูุฌูุชุงุฑ.
- source_sentence: ูุฌูุณ ุดุงุจ ุฐู ุดุนุฑ ุฃุดูุฑ ุนูู ุงูุญุงุฆุท ููุฑุฃ ุฌุฑูุฏุฉ ุจููู
ุง ุชู
ุฑ ุงู
ุฑุฃุฉ ููุชุงุฉ
ุดุงุจุฉ.
sentences:
- ุฐูุฑ ุดุงุจ ููุธุฑ ุฅูู ุฌุฑูุฏุฉ ุจููู
ุง ุชู
ุฑ ุฅู
ุฑุฃุชุงู ุจุฌุงูุจู
- ุฑุฌู ูุณุชููู ุนูู ูุฌูู ุนูู ู
ูุนุฏ ูู ุงูุญุฏููุฉ.
- ุงูุดุงุจ ูุงุฆู
ุจููู
ุง ุงูุฃู
ุชููุฏ ุงุจูุชูุง ุฅูู ุงูุญุฏููุฉ
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrieval (ar)
type: miracl/mmteb-miracl
config: ar
split: dev
revision: main
metrics:
- type: ndcg_at_1
value: 19.233
- type: ndcg_at_3
value: 21.393
- type: ndcg_at_5
value: 23.347
- type: ndcg_at_10
value: 26.273999999999997
- type: ndcg_at_20
value: 28.591
- type: ndcg_at_100
value: 32.098
- type: ndcg_at_1000
value: 34.971000000000004
- type: map_at_1
value: 12.555
- type: map_at_3
value: 17.763
- type: map_at_5
value: 19.317
- type: map_at_10
value: 20.748
- type: map_at_20
value: 21.535
- type: map_at_100
value: 22.147
- type: map_at_1000
value: 22.275
- type: recall_at_1
value: 12.555
- type: recall_at_3
value: 22.576
- type: recall_at_5
value: 27.681
- type: recall_at_10
value: 35.461
- type: recall_at_20
value: 43.097
- type: recall_at_100
value: 58.902
- type: recall_at_1000
value: 78.33099999999999
- type: precision_at_1
value: 19.233
- type: precision_at_3
value: 12.65
- type: precision_at_5
value: 9.626999999999999
- type: precision_at_10
value: 6.35
- type: precision_at_20
value: 3.961
- type: precision_at_100
value: 1.118
- type: precision_at_1000
value: 0.152
- type: mrr_at_1
value: 19.2334
- type: mrr_at_3
value: 25.1266
- type: mrr_at_5
value: 26.4681
- type: mrr_at_10
value: 27.6315
- type: mrr_at_20
value: 28.1315
- type: mrr_at_100
value: 28.4874
- type: mrr_at_1000
value: 28.5524
- type: nauc_ndcg_at_1_max
value: 12.8914
- type: nauc_ndcg_at_1_std
value: 10.4594
- type: nauc_ndcg_at_1_diff1
value: 23.8138
- type: nauc_ndcg_at_3_max
value: 12.3382
- type: nauc_ndcg_at_3_std
value: 11.5929
- type: nauc_ndcg_at_3_diff1
value: 19.1347
- type: nauc_ndcg_at_5_max
value: 14.0129
- type: nauc_ndcg_at_5_std
value: 13.6398
- type: nauc_ndcg_at_5_diff1
value: 19.8536
- type: nauc_ndcg_at_10_max
value: 14.538300000000001
- type: nauc_ndcg_at_10_std
value: 15.933800000000002
- type: nauc_ndcg_at_10_diff1
value: 19.7082
- type: nauc_ndcg_at_20_max
value: 15.3478
- type: nauc_ndcg_at_20_std
value: 18.4803
- type: nauc_ndcg_at_20_diff1
value: 18.8725
- type: nauc_ndcg_at_100_max
value: 16.2684
- type: nauc_ndcg_at_100_std
value: 21.147199999999998
- type: nauc_ndcg_at_100_diff1
value: 19.0854
- type: nauc_ndcg_at_1000_max
value: 16.6485
- type: nauc_ndcg_at_1000_std
value: 21.2042
- type: nauc_ndcg_at_1000_diff1
value: 19.411
- type: nauc_map_at_1_max
value: 8.571299999999999
- type: nauc_map_at_1_std
value: 5.2620000000000005
- type: nauc_map_at_1_diff1
value: 25.1772
- type: nauc_map_at_3_max
value: 10.5142
- type: nauc_map_at_3_std
value: 8.8853
- type: nauc_map_at_3_diff1
value: 19.9708
- type: nauc_map_at_5_max
value: 12.2728
- type: nauc_map_at_5_std
value: 10.8387
- type: nauc_map_at_5_diff1
value: 20.2731
- type: nauc_map_at_10_max
value: 12.909899999999999
- type: nauc_map_at_10_std
value: 12.4311
- type: nauc_map_at_10_diff1
value: 20.079900000000002
- type: nauc_map_at_20_max
value: 13.367399999999998
- type: nauc_map_at_20_std
value: 13.5572
- type: nauc_map_at_20_diff1
value: 19.775000000000002
- type: nauc_map_at_100_max
value: 13.716600000000001
- type: nauc_map_at_100_std
value: 14.234
- type: nauc_map_at_100_diff1
value: 19.831
- type: nauc_map_at_1000_max
value: 13.736400000000001
- type: nauc_map_at_1000_std
value: 14.265600000000001
- type: nauc_map_at_1000_diff1
value: 19.8517
- type: nauc_recall_at_1_max
value: 8.571299999999999
- type: nauc_recall_at_1_std
value: 5.2620000000000005
- type: nauc_recall_at_1_diff1
value: 25.1772
- type: nauc_recall_at_3_max
value: 10.1169
- type: nauc_recall_at_3_std
value: 10.1543
- type: nauc_recall_at_3_diff1
value: 16.4652
- type: nauc_recall_at_5_max
value: 13.6919
- type: nauc_recall_at_5_std
value: 14.410400000000001
- type: nauc_recall_at_5_diff1
value: 17.0477
- type: nauc_recall_at_10_max
value: 13.8916
- type: nauc_recall_at_10_std
value: 18.4174
- type: nauc_recall_at_10_diff1
value: 16.3955
- type: nauc_recall_at_20_max
value: 15.0336
- type: nauc_recall_at_20_std
value: 24.3934
- type: nauc_recall_at_20_diff1
value: 13.834299999999999
- type: nauc_recall_at_100_max
value: 16.988
- type: nauc_recall_at_100_std
value: 34.8989
- type: nauc_recall_at_100_diff1
value: 14.1371
- type: nauc_recall_at_1000_max
value: 22.006700000000002
- type: nauc_recall_at_1000_std
value: 43.2671
- type: nauc_recall_at_1000_diff1
value: 15.6926
- type: nauc_precision_at_1_max
value: 12.8914
- type: nauc_precision_at_1_std
value: 10.4594
- type: nauc_precision_at_1_diff1
value: 23.8138
- type: nauc_precision_at_3_max
value: 17.4418
- type: nauc_precision_at_3_std
value: 18.2472
- type: nauc_precision_at_3_diff1
value: 14.380299999999998
- type: nauc_precision_at_5_max
value: 21.7353
- type: nauc_precision_at_5_std
value: 22.7454
- type: nauc_precision_at_5_diff1
value: 14.671999999999999
- type: nauc_precision_at_10_max
value: 22.4616
- type: nauc_precision_at_10_std
value: 27.271099999999997
- type: nauc_precision_at_10_diff1
value: 13.025
- type: nauc_precision_at_20_max
value: 23.610400000000002
- type: nauc_precision_at_20_std
value: 32.0969
- type: nauc_precision_at_20_diff1
value: 9.5973
- type: nauc_precision_at_100_max
value: 24.1842
- type: nauc_precision_at_100_std
value: 35.335
- type: nauc_precision_at_100_diff1
value: 7.833900000000001
- type: nauc_precision_at_1000_max
value: 21.5183
- type: nauc_precision_at_1000_std
value: 30.4104
- type: nauc_precision_at_1000_diff1
value: 4.7376000000000005
- type: nauc_mrr_at_1_max
value: 12.8914
- type: nauc_mrr_at_1_std
value: 10.4594
- type: nauc_mrr_at_1_diff1
value: 23.8138
- type: nauc_mrr_at_3_max
value: 14.1404
- type: nauc_mrr_at_3_std
value: 13.8728
- type: nauc_mrr_at_3_diff1
value: 20.898600000000002
- type: nauc_mrr_at_5_max
value: 15.0032
- type: nauc_mrr_at_5_std
value: 15.1412
- type: nauc_mrr_at_5_diff1
value: 21.0216
- type: nauc_mrr_at_10_max
value: 14.9212
- type: nauc_mrr_at_10_std
value: 15.836
- type: nauc_mrr_at_10_diff1
value: 20.9665
- type: nauc_mrr_at_20_max
value: 15.046399999999998
- type: nauc_mrr_at_20_std
value: 16.2257
- type: nauc_mrr_at_20_diff1
value: 20.816599999999998
- type: nauc_mrr_at_100_max
value: 15.0342
- type: nauc_mrr_at_100_std
value: 16.328899999999997
- type: nauc_mrr_at_100_diff1
value: 20.8347
- type: nauc_mrr_at_1000_max
value: 15.0313
- type: nauc_mrr_at_1000_std
value: 16.3027
- type: nauc_mrr_at_1000_diff1
value: 20.846
- type: main_score
value: 26.273999999999997
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrievalHardNegatives (ar)
type: mteb/miracl-hard-negatives
config: ar
split: dev
revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb
metrics:
- type: ndcg_at_1
value: 20.7
- type: ndcg_at_3
value: 23.766000000000002
- type: ndcg_at_5
value: 26.479000000000003
- type: ndcg_at_10
value: 30.152
- type: ndcg_at_20
value: 33.123000000000005
- type: ndcg_at_100
value: 37.721
- type: ndcg_at_1000
value: 40.469
- type: map_at_1
value: 13.067
- type: map_at_3
value: 19.303
- type: map_at_5
value: 21.406
- type: map_at_10
value: 23.195
- type: map_at_20
value: 24.256
- type: map_at_100
value: 25.115
- type: map_at_1000
value: 25.257
- type: recall_at_1
value: 13.067
- type: recall_at_3
value: 25.663000000000004
- type: recall_at_5
value: 32.707
- type: recall_at_10
value: 42.458
- type: recall_at_20
value: 51.983000000000004
- type: recall_at_100
value: 72.509
- type: recall_at_1000
value: 90.62400000000001
- type: precision_at_1
value: 20.7
- type: precision_at_3
value: 14.366999999999999
- type: precision_at_5
value: 11.360000000000001
- type: precision_at_10
value: 7.68
- type: precision_at_20
value: 4.88
- type: precision_at_100
value: 1.413
- type: precision_at_1000
value: 0.179
- type: mrr_at_1
value: 20.7
- type: mrr_at_3
value: 27.750000000000004
- type: mrr_at_5
value: 29.659999999999997
- type: mrr_at_10
value: 31.072499999999998
- type: mrr_at_20
value: 31.680799999999998
- type: mrr_at_100
value: 32.0878
- type: mrr_at_1000
value: 32.1434
- type: nauc_ndcg_at_1_max
value: 9.268
- type: nauc_ndcg_at_1_std
value: 18.432000000000002
- type: nauc_ndcg_at_1_diff1
value: 20.2302
- type: nauc_ndcg_at_3_max
value: 10.9481
- type: nauc_ndcg_at_3_std
value: 16.919999999999998
- type: nauc_ndcg_at_3_diff1
value: 17.1518
- type: nauc_ndcg_at_5_max
value: 13.112499999999999
- type: nauc_ndcg_at_5_std
value: 19.4344
- type: nauc_ndcg_at_5_diff1
value: 16.994400000000002
- type: nauc_ndcg_at_10_max
value: 13.5807
- type: nauc_ndcg_at_10_std
value: 22.0576
- type: nauc_ndcg_at_10_diff1
value: 15.806700000000001
- type: nauc_ndcg_at_20_max
value: 15.038499999999999
- type: nauc_ndcg_at_20_std
value: 24.616699999999998
- type: nauc_ndcg_at_20_diff1
value: 15.0551
- type: nauc_ndcg_at_100_max
value: 16.4791
- type: nauc_ndcg_at_100_std
value: 27.3069
- type: nauc_ndcg_at_100_diff1
value: 15.3881
- type: nauc_ndcg_at_1000_max
value: 16.4607
- type: nauc_ndcg_at_1000_std
value: 27.2117
- type: nauc_ndcg_at_1000_diff1
value: 15.229000000000001
- type: nauc_map_at_1_max
value: 6.5943000000000005
- type: nauc_map_at_1_std
value: 13.303999999999998
- type: nauc_map_at_1_diff1
value: 21.8437
- type: nauc_map_at_3_max
value: 8.872399999999999
- type: nauc_map_at_3_std
value: 14.1544
- type: nauc_map_at_3_diff1
value: 18.2986
- type: nauc_map_at_5_max
value: 10.7963
- type: nauc_map_at_5_std
value: 16.2275
- type: nauc_map_at_5_diff1
value: 17.896
- type: nauc_map_at_10_max
value: 11.5053
- type: nauc_map_at_10_std
value: 17.9816
- type: nauc_map_at_10_diff1
value: 17.3155
- type: nauc_map_at_20_max
value: 12.3459
- type: nauc_map_at_20_std
value: 19.2359
- type: nauc_map_at_20_diff1
value: 16.868
- type: nauc_map_at_100_max
value: 12.753300000000001
- type: nauc_map_at_100_std
value: 20.0431
- type: nauc_map_at_100_diff1
value: 16.8889
- type: nauc_map_at_1000_max
value: 12.7747
- type: nauc_map_at_1000_std
value: 20.1047
- type: nauc_map_at_1000_diff1
value: 16.883699999999997
- type: nauc_recall_at_1_max
value: 6.5943000000000005
- type: nauc_recall_at_1_std
value: 13.303999999999998
- type: nauc_recall_at_1_diff1
value: 21.8437
- type: nauc_recall_at_3_max
value: 8.7966
- type: nauc_recall_at_3_std
value: 12.7517
- type: nauc_recall_at_3_diff1
value: 15.1844
- type: nauc_recall_at_5_max
value: 12.9126
- type: nauc_recall_at_5_std
value: 17.4967
- type: nauc_recall_at_5_diff1
value: 13.9756
- type: nauc_recall_at_10_max
value: 12.3656
- type: nauc_recall_at_10_std
value: 21.7246
- type: nauc_recall_at_10_diff1
value: 10.6946
- type: nauc_recall_at_20_max
value: 15.9849
- type: nauc_recall_at_20_std
value: 28.2084
- type: nauc_recall_at_20_diff1
value: 9.3399
- type: nauc_recall_at_100_max
value: 22.4235
- type: nauc_recall_at_100_std
value: 41.6796
- type: nauc_recall_at_100_diff1
value: 11.3943
- type: nauc_recall_at_1000_max
value: 33.9199
- type: nauc_recall_at_1000_std
value: 63.458800000000004
- type: nauc_recall_at_1000_diff1
value: 5.1713000000000005
- type: nauc_precision_at_1_max
value: 9.268
- type: nauc_precision_at_1_std
value: 18.432000000000002
- type: nauc_precision_at_1_diff1
value: 20.2302
- type: nauc_precision_at_3_max
value: 16.1989
- type: nauc_precision_at_3_std
value: 22.823
- type: nauc_precision_at_3_diff1
value: 12.8433
- type: nauc_precision_at_5_max
value: 20.9029
- type: nauc_precision_at_5_std
value: 27.609099999999998
- type: nauc_precision_at_5_diff1
value: 10.501000000000001
- type: nauc_precision_at_10_max
value: 22.0715
- type: nauc_precision_at_10_std
value: 32.2903
- type: nauc_precision_at_10_diff1
value: 7.1502
- type: nauc_precision_at_20_max
value: 23.1036
- type: nauc_precision_at_20_std
value: 34.955000000000005
- type: nauc_precision_at_20_diff1
value: 2.5075
- type: nauc_precision_at_100_max
value: 23.8401
- type: nauc_precision_at_100_std
value: 35.5452
- type: nauc_precision_at_100_diff1
value: -0.3836
- type: nauc_precision_at_1000_max
value: 18.519199999999998
- type: nauc_precision_at_1000_std
value: 27.2343
- type: nauc_precision_at_1000_diff1
value: -4.26
- type: nauc_mrr_at_1_max
value: 9.268
- type: nauc_mrr_at_1_std
value: 18.432000000000002
- type: nauc_mrr_at_1_diff1
value: 20.2302
- type: nauc_mrr_at_3_max
value: 12.9175
- type: nauc_mrr_at_3_std
value: 21.610599999999998
- type: nauc_mrr_at_3_diff1
value: 17.6036
- type: nauc_mrr_at_5_max
value: 13.761000000000001
- type: nauc_mrr_at_5_std
value: 23.091
- type: nauc_mrr_at_5_diff1
value: 17.217
- type: nauc_mrr_at_10_max
value: 13.788400000000001
- type: nauc_mrr_at_10_std
value: 23.91
- type: nauc_mrr_at_10_diff1
value: 16.847
- type: nauc_mrr_at_20_max
value: 13.689499999999999
- type: nauc_mrr_at_20_std
value: 23.976
- type: nauc_mrr_at_20_diff1
value: 16.845499999999998
- type: nauc_mrr_at_100_max
value: 13.712
- type: nauc_mrr_at_100_std
value: 24.0657
- type: nauc_mrr_at_100_diff1
value: 16.852800000000002
- type: nauc_mrr_at_1000_max
value: 13.7073
- type: nauc_mrr_at_1000_std
value: 24.046300000000002
- type: nauc_mrr_at_1000_diff1
value: 16.8626
- type: main_score
value: 30.152
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-ara)
type: facebook/mlqa
config: ara-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 51.451
- type: ndcg_at_3
value: 60.302
- type: ndcg_at_5
value: 62.432
- type: ndcg_at_10
value: 63.541000000000004
- type: ndcg_at_20
value: 64.82
- type: ndcg_at_100
value: 67.54599999999999
- type: ndcg_at_1000
value: 68.161
- type: map_at_1
value: 51.451
- type: map_at_3
value: 58.026999999999994
- type: map_at_5
value: 59.197
- type: map_at_10
value: 59.644
- type: map_at_20
value: 59.999
- type: map_at_100
value: 60.375
- type: map_at_1000
value: 60.401
- type: recall_at_1
value: 51.451
- type: recall_at_3
value: 66.925
- type: recall_at_5
value: 72.14699999999999
- type: recall_at_10
value: 75.629
- type: recall_at_20
value: 80.658
- type: recall_at_100
value: 95.358
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 51.451
- type: precision_at_3
value: 22.308
- type: precision_at_5
value: 14.429
- type: precision_at_10
value: 7.563000000000001
- type: precision_at_20
value: 4.0329999999999995
- type: precision_at_100
value: 0.954
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 51.450700000000005
- type: mrr_at_3
value: 58.0271
- type: mrr_at_5
value: 59.1973
- type: mrr_at_10
value: 59.6441
- type: mrr_at_20
value: 59.999100000000006
- type: mrr_at_100
value: 60.3751
- type: mrr_at_1000
value: 60.401
- type: nauc_ndcg_at_1_max
value: 46.2584
- type: nauc_ndcg_at_1_std
value: 9.1712
- type: nauc_ndcg_at_1_diff1
value: 61.232299999999995
- type: nauc_ndcg_at_3_max
value: 53.9072
- type: nauc_ndcg_at_3_std
value: 18.9815
- type: nauc_ndcg_at_3_diff1
value: 59.8943
- type: nauc_ndcg_at_5_max
value: 54.5939
- type: nauc_ndcg_at_5_std
value: 20.9544
- type: nauc_ndcg_at_5_diff1
value: 58.500600000000006
- type: nauc_ndcg_at_10_max
value: 54.010999999999996
- type: nauc_ndcg_at_10_std
value: 21.0626
- type: nauc_ndcg_at_10_diff1
value: 58.15820000000001
- type: nauc_ndcg_at_20_max
value: 53.339400000000005
- type: nauc_ndcg_at_20_std
value: 19.526699999999998
- type: nauc_ndcg_at_20_diff1
value: 57.8706
- type: nauc_ndcg_at_100_max
value: 52.7445
- type: nauc_ndcg_at_100_std
value: 18.756500000000003
- type: nauc_ndcg_at_100_diff1
value: 58.919900000000005
- type: nauc_ndcg_at_1000_max
value: 52.607899999999994
- type: nauc_ndcg_at_1000_std
value: 18.409
- type: nauc_ndcg_at_1000_diff1
value: 58.981300000000005
- type: nauc_map_at_1_max
value: 46.2584
- type: nauc_map_at_1_std
value: 9.1712
- type: nauc_map_at_1_diff1
value: 61.232299999999995
- type: nauc_map_at_3_max
value: 51.8763
- type: nauc_map_at_3_std
value: 16.366
- type: nauc_map_at_3_diff1
value: 60.0428
- type: nauc_map_at_5_max
value: 52.1957
- type: nauc_map_at_5_std
value: 17.354
- type: nauc_map_at_5_diff1
value: 59.3285
- type: nauc_map_at_10_max
value: 51.9592
- type: nauc_map_at_10_std
value: 17.368
- type: nauc_map_at_10_diff1
value: 59.21419999999999
- type: nauc_map_at_20_max
value: 51.78040000000001
- type: nauc_map_at_20_std
value: 16.947000000000003
- type: nauc_map_at_20_diff1
value: 59.1612
- type: nauc_map_at_100_max
value: 51.7167
- type: nauc_map_at_100_std
value: 16.8964
- type: nauc_map_at_100_diff1
value: 59.336
- type: nauc_map_at_1000_max
value: 51.711600000000004
- type: nauc_map_at_1000_std
value: 16.8858
- type: nauc_map_at_1000_diff1
value: 59.337700000000005
- type: nauc_recall_at_1_max
value: 46.2584
- type: nauc_recall_at_1_std
value: 9.1712
- type: nauc_recall_at_1_diff1
value: 61.232299999999995
- type: nauc_recall_at_3_max
value: 60.6484
- type: nauc_recall_at_3_std
value: 27.6682
- type: nauc_recall_at_3_diff1
value: 59.49870000000001
- type: nauc_recall_at_5_max
value: 63.5264
- type: nauc_recall_at_5_std
value: 34.5355
- type: nauc_recall_at_5_diff1
value: 55.2913
- type: nauc_recall_at_10_max
value: 62.1038
- type: nauc_recall_at_10_std
value: 36.4565
- type: nauc_recall_at_10_diff1
value: 53.4771
- type: nauc_recall_at_20_max
value: 59.6506
- type: nauc_recall_at_20_std
value: 30.444300000000002
- type: nauc_recall_at_20_diff1
value: 50.6836
- type: nauc_recall_at_100_max
value: 58.4695
- type: nauc_recall_at_100_std
value: 33.5819
- type: nauc_recall_at_100_diff1
value: 56.2667
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 46.2584
- type: nauc_precision_at_1_std
value: 9.1712
- type: nauc_precision_at_1_diff1
value: 61.232299999999995
- type: nauc_precision_at_3_max
value: 60.6484
- type: nauc_precision_at_3_std
value: 27.6682
- type: nauc_precision_at_3_diff1
value: 59.49870000000001
- type: nauc_precision_at_5_max
value: 63.5264
- type: nauc_precision_at_5_std
value: 34.5355
- type: nauc_precision_at_5_diff1
value: 55.2913
- type: nauc_precision_at_10_max
value: 62.1038
- type: nauc_precision_at_10_std
value: 36.4565
- type: nauc_precision_at_10_diff1
value: 53.4771
- type: nauc_precision_at_20_max
value: 59.6506
- type: nauc_precision_at_20_std
value: 30.444300000000002
- type: nauc_precision_at_20_diff1
value: 50.6836
- type: nauc_precision_at_100_max
value: 58.4695
- type: nauc_precision_at_100_std
value: 33.5819
- type: nauc_precision_at_100_diff1
value: 56.2667
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 46.2584
- type: nauc_mrr_at_1_std
value: 9.1712
- type: nauc_mrr_at_1_diff1
value: 61.232299999999995
- type: nauc_mrr_at_3_max
value: 51.8763
- type: nauc_mrr_at_3_std
value: 16.366
- type: nauc_mrr_at_3_diff1
value: 60.0428
- type: nauc_mrr_at_5_max
value: 52.1957
- type: nauc_mrr_at_5_std
value: 17.354
- type: nauc_mrr_at_5_diff1
value: 59.3285
- type: nauc_mrr_at_10_max
value: 51.9592
- type: nauc_mrr_at_10_std
value: 17.368
- type: nauc_mrr_at_10_diff1
value: 59.21419999999999
- type: nauc_mrr_at_20_max
value: 51.78040000000001
- type: nauc_mrr_at_20_std
value: 16.947000000000003
- type: nauc_mrr_at_20_diff1
value: 59.1612
- type: nauc_mrr_at_100_max
value: 51.7167
- type: nauc_mrr_at_100_std
value: 16.8964
- type: nauc_mrr_at_100_diff1
value: 59.336
- type: nauc_mrr_at_1000_max
value: 51.711600000000004
- type: nauc_mrr_at_1000_std
value: 16.8858
- type: nauc_mrr_at_1000_diff1
value: 59.337700000000005
- type: main_score
value: 63.541000000000004
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-deu)
type: facebook/mlqa
config: ara-deu
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 51.690999999999995
- type: ndcg_at_3
value: 63.365
- type: ndcg_at_5
value: 65.922
- type: ndcg_at_10
value: 67.949
- type: ndcg_at_20
value: 69.733
- type: ndcg_at_100
value: 71.285
- type: ndcg_at_1000
value: 71.355
- type: map_at_1
value: 51.690999999999995
- type: map_at_3
value: 60.548
- type: map_at_5
value: 61.948
- type: map_at_10
value: 62.78399999999999
- type: map_at_20
value: 63.248000000000005
- type: map_at_100
value: 63.471999999999994
- type: map_at_1000
value: 63.476
- type: recall_at_1
value: 51.690999999999995
- type: recall_at_3
value: 71.49799999999999
- type: recall_at_5
value: 77.778
- type: recall_at_10
value: 84.05799999999999
- type: recall_at_20
value: 91.304
- type: recall_at_100
value: 99.517
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 51.690999999999995
- type: precision_at_3
value: 23.833
- type: precision_at_5
value: 15.556000000000001
- type: precision_at_10
value: 8.405999999999999
- type: precision_at_20
value: 4.565
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 51.6908
- type: mrr_at_3
value: 60.5475
- type: mrr_at_5
value: 61.948499999999996
- type: mrr_at_10
value: 62.7845
- type: mrr_at_20
value: 63.2478
- type: mrr_at_100
value: 63.471599999999995
- type: mrr_at_1000
value: 63.4757
- type: nauc_ndcg_at_1_max
value: 48.6122
- type: nauc_ndcg_at_1_std
value: 18.3707
- type: nauc_ndcg_at_1_diff1
value: 65.9894
- type: nauc_ndcg_at_3_max
value: 56.2928
- type: nauc_ndcg_at_3_std
value: 27.526899999999998
- type: nauc_ndcg_at_3_diff1
value: 56.5762
- type: nauc_ndcg_at_5_max
value: 56.594199999999994
- type: nauc_ndcg_at_5_std
value: 29.916500000000003
- type: nauc_ndcg_at_5_diff1
value: 56.1361
- type: nauc_ndcg_at_10_max
value: 58.07
- type: nauc_ndcg_at_10_std
value: 29.687400000000004
- type: nauc_ndcg_at_10_diff1
value: 58.537099999999995
- type: nauc_ndcg_at_20_max
value: 57.4515
- type: nauc_ndcg_at_20_std
value: 29.8421
- type: nauc_ndcg_at_20_diff1
value: 58.796499999999995
- type: nauc_ndcg_at_100_max
value: 55.8115
- type: nauc_ndcg_at_100_std
value: 27.851300000000002
- type: nauc_ndcg_at_100_diff1
value: 59.395399999999995
- type: nauc_ndcg_at_1000_max
value: 55.671800000000005
- type: nauc_ndcg_at_1000_std
value: 27.6646
- type: nauc_ndcg_at_1000_diff1
value: 59.3548
- type: nauc_map_at_1_max
value: 48.6122
- type: nauc_map_at_1_std
value: 18.3707
- type: nauc_map_at_1_diff1
value: 65.9894
- type: nauc_map_at_3_max
value: 54.278000000000006
- type: nauc_map_at_3_std
value: 25.3062
- type: nauc_map_at_3_diff1
value: 59.0998
- type: nauc_map_at_5_max
value: 54.38269999999999
- type: nauc_map_at_5_std
value: 26.451400000000003
- type: nauc_map_at_5_diff1
value: 59.0233
- type: nauc_map_at_10_max
value: 54.915000000000006
- type: nauc_map_at_10_std
value: 26.3247
- type: nauc_map_at_10_diff1
value: 59.939
- type: nauc_map_at_20_max
value: 54.760600000000004
- type: nauc_map_at_20_std
value: 26.3843
- type: nauc_map_at_20_diff1
value: 60.019800000000004
- type: nauc_map_at_100_max
value: 54.548700000000004
- type: nauc_map_at_100_std
value: 26.167099999999998
- type: nauc_map_at_100_diff1
value: 60.091499999999996
- type: nauc_map_at_1000_max
value: 54.542
- type: nauc_map_at_1000_std
value: 26.158199999999997
- type: nauc_map_at_1000_diff1
value: 60.0897
- type: nauc_recall_at_1_max
value: 48.6122
- type: nauc_recall_at_1_std
value: 18.3707
- type: nauc_recall_at_1_diff1
value: 65.9894
- type: nauc_recall_at_3_max
value: 63.3309
- type: nauc_recall_at_3_std
value: 35.1892
- type: nauc_recall_at_3_diff1
value: 47.732200000000006
- type: nauc_recall_at_5_max
value: 65.7603
- type: nauc_recall_at_5_std
value: 44.6445
- type: nauc_recall_at_5_diff1
value: 43.9624
- type: nauc_recall_at_10_max
value: 76.059
- type: nauc_recall_at_10_std
value: 48.0321
- type: nauc_recall_at_10_diff1
value: 52.642999999999994
- type: nauc_recall_at_20_max
value: 81.92160000000001
- type: nauc_recall_at_20_std
value: 61.57040000000001
- type: nauc_recall_at_20_diff1
value: 51.0182
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 86.907
- type: nauc_recall_at_100_diff1
value: 72.2366
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 48.6122
- type: nauc_precision_at_1_std
value: 18.3707
- type: nauc_precision_at_1_diff1
value: 65.9894
- type: nauc_precision_at_3_max
value: 63.3309
- type: nauc_precision_at_3_std
value: 35.1892
- type: nauc_precision_at_3_diff1
value: 47.732200000000006
- type: nauc_precision_at_5_max
value: 65.7603
- type: nauc_precision_at_5_std
value: 44.6445
- type: nauc_precision_at_5_diff1
value: 43.9624
- type: nauc_precision_at_10_max
value: 76.059
- type: nauc_precision_at_10_std
value: 48.0321
- type: nauc_precision_at_10_diff1
value: 52.642999999999994
- type: nauc_precision_at_20_max
value: 81.92160000000001
- type: nauc_precision_at_20_std
value: 61.57040000000001
- type: nauc_precision_at_20_diff1
value: 51.0182
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 86.907
- type: nauc_precision_at_100_diff1
value: 72.2366
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 48.6122
- type: nauc_mrr_at_1_std
value: 18.3707
- type: nauc_mrr_at_1_diff1
value: 65.9894
- type: nauc_mrr_at_3_max
value: 54.278000000000006
- type: nauc_mrr_at_3_std
value: 25.3062
- type: nauc_mrr_at_3_diff1
value: 59.0998
- type: nauc_mrr_at_5_max
value: 54.38269999999999
- type: nauc_mrr_at_5_std
value: 26.451400000000003
- type: nauc_mrr_at_5_diff1
value: 59.0233
- type: nauc_mrr_at_10_max
value: 54.915000000000006
- type: nauc_mrr_at_10_std
value: 26.3247
- type: nauc_mrr_at_10_diff1
value: 59.939
- type: nauc_mrr_at_20_max
value: 54.760600000000004
- type: nauc_mrr_at_20_std
value: 26.3843
- type: nauc_mrr_at_20_diff1
value: 60.019800000000004
- type: nauc_mrr_at_100_max
value: 54.548700000000004
- type: nauc_mrr_at_100_std
value: 26.167099999999998
- type: nauc_mrr_at_100_diff1
value: 60.091499999999996
- type: nauc_mrr_at_1000_max
value: 54.542
- type: nauc_mrr_at_1000_std
value: 26.158199999999997
- type: nauc_mrr_at_1000_diff1
value: 60.0897
- type: main_score
value: 67.949
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-eng)
type: facebook/mlqa
config: ara-eng
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 51.837999999999994
- type: ndcg_at_3
value: 61.207
- type: ndcg_at_5
value: 63.57000000000001
- type: ndcg_at_10
value: 65.679
- type: ndcg_at_20
value: 67.296
- type: ndcg_at_100
value: 69.298
- type: ndcg_at_1000
value: 69.68299999999999
- type: map_at_1
value: 51.837999999999994
- type: map_at_3
value: 58.897
- type: map_at_5
value: 60.193
- type: map_at_10
value: 61.053000000000004
- type: map_at_20
value: 61.499
- type: map_at_100
value: 61.79900000000001
- type: map_at_1000
value: 61.815
- type: recall_at_1
value: 51.837999999999994
- type: recall_at_3
value: 67.892
- type: recall_at_5
value: 73.694
- type: recall_at_10
value: 80.271
- type: recall_at_20
value: 86.654
- type: recall_at_100
value: 97.099
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 51.837999999999994
- type: precision_at_3
value: 22.631
- type: precision_at_5
value: 14.738999999999999
- type: precision_at_10
value: 8.027
- type: precision_at_20
value: 4.333
- type: precision_at_100
value: 0.971
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 51.837500000000006
- type: mrr_at_3
value: 58.8975
- type: mrr_at_5
value: 60.1934
- type: mrr_at_10
value: 61.0533
- type: mrr_at_20
value: 61.498799999999996
- type: mrr_at_100
value: 61.7987
- type: mrr_at_1000
value: 61.8154
- type: nauc_ndcg_at_1_max
value: 52.8818
- type: nauc_ndcg_at_1_std
value: 2.2309
- type: nauc_ndcg_at_1_diff1
value: 67.1852
- type: nauc_ndcg_at_3_max
value: 57.75789999999999
- type: nauc_ndcg_at_3_std
value: 8.4361
- type: nauc_ndcg_at_3_diff1
value: 60.3313
- type: nauc_ndcg_at_5_max
value: 58.845000000000006
- type: nauc_ndcg_at_5_std
value: 10.3892
- type: nauc_ndcg_at_5_diff1
value: 59.6225
- type: nauc_ndcg_at_10_max
value: 58.440999999999995
- type: nauc_ndcg_at_10_std
value: 10.245
- type: nauc_ndcg_at_10_diff1
value: 60.3544
- type: nauc_ndcg_at_20_max
value: 58.0517
- type: nauc_ndcg_at_20_std
value: 9.229
- type: nauc_ndcg_at_20_diff1
value: 60.4508
- type: nauc_ndcg_at_100_max
value: 57.6593
- type: nauc_ndcg_at_100_std
value: 9.1281
- type: nauc_ndcg_at_100_diff1
value: 61.107299999999995
- type: nauc_ndcg_at_1000_max
value: 57.301100000000005
- type: nauc_ndcg_at_1000_std
value: 8.3789
- type: nauc_ndcg_at_1000_diff1
value: 61.433899999999994
- type: nauc_map_at_1_max
value: 52.8818
- type: nauc_map_at_1_std
value: 2.2309
- type: nauc_map_at_1_diff1
value: 67.1852
- type: nauc_map_at_3_max
value: 56.5338
- type: nauc_map_at_3_std
value: 6.6754999999999995
- type: nauc_map_at_3_diff1
value: 62.195299999999996
- type: nauc_map_at_5_max
value: 56.990300000000005
- type: nauc_map_at_5_std
value: 7.5465
- type: nauc_map_at_5_diff1
value: 61.898399999999995
- type: nauc_map_at_10_max
value: 56.7918
- type: nauc_map_at_10_std
value: 7.446400000000001
- type: nauc_map_at_10_diff1
value: 62.218399999999995
- type: nauc_map_at_20_max
value: 56.666399999999996
- type: nauc_map_at_20_std
value: 7.133399999999999
- type: nauc_map_at_20_diff1
value: 62.2684
- type: nauc_map_at_100_max
value: 56.60380000000001
- type: nauc_map_at_100_std
value: 7.143800000000001
- type: nauc_map_at_100_diff1
value: 62.332100000000004
- type: nauc_map_at_1000_max
value: 56.5913
- type: nauc_map_at_1000_std
value: 7.1212
- type: nauc_map_at_1000_diff1
value: 62.3459
- type: nauc_recall_at_1_max
value: 52.8818
- type: nauc_recall_at_1_std
value: 2.2309
- type: nauc_recall_at_1_diff1
value: 67.1852
- type: nauc_recall_at_3_max
value: 61.804
- type: nauc_recall_at_3_std
value: 14.3574
- type: nauc_recall_at_3_diff1
value: 54.0982
- type: nauc_recall_at_5_max
value: 66.14320000000001
- type: nauc_recall_at_5_std
value: 21.7224
- type: nauc_recall_at_5_diff1
value: 50.83259999999999
- type: nauc_recall_at_10_max
value: 66.2602
- type: nauc_recall_at_10_std
value: 23.880399999999998
- type: nauc_recall_at_10_diff1
value: 51.8906
- type: nauc_recall_at_20_max
value: 66.73219999999999
- type: nauc_recall_at_20_std
value: 22.267799999999998
- type: nauc_recall_at_20_diff1
value: 49.0047
- type: nauc_recall_at_100_max
value: 79.71249999999999
- type: nauc_recall_at_100_std
value: 56.6461
- type: nauc_recall_at_100_diff1
value: 41.9666
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 52.8818
- type: nauc_precision_at_1_std
value: 2.2309
- type: nauc_precision_at_1_diff1
value: 67.1852
- type: nauc_precision_at_3_max
value: 61.804
- type: nauc_precision_at_3_std
value: 14.3574
- type: nauc_precision_at_3_diff1
value: 54.0982
- type: nauc_precision_at_5_max
value: 66.14320000000001
- type: nauc_precision_at_5_std
value: 21.7224
- type: nauc_precision_at_5_diff1
value: 50.83259999999999
- type: nauc_precision_at_10_max
value: 66.2602
- type: nauc_precision_at_10_std
value: 23.880399999999998
- type: nauc_precision_at_10_diff1
value: 51.8906
- type: nauc_precision_at_20_max
value: 66.73219999999999
- type: nauc_precision_at_20_std
value: 22.267799999999998
- type: nauc_precision_at_20_diff1
value: 49.0047
- type: nauc_precision_at_100_max
value: 79.71249999999999
- type: nauc_precision_at_100_std
value: 56.6461
- type: nauc_precision_at_100_diff1
value: 41.9666
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 52.8818
- type: nauc_mrr_at_1_std
value: 2.2309
- type: nauc_mrr_at_1_diff1
value: 67.1852
- type: nauc_mrr_at_3_max
value: 56.5338
- type: nauc_mrr_at_3_std
value: 6.6754999999999995
- type: nauc_mrr_at_3_diff1
value: 62.195299999999996
- type: nauc_mrr_at_5_max
value: 56.990300000000005
- type: nauc_mrr_at_5_std
value: 7.5465
- type: nauc_mrr_at_5_diff1
value: 61.898399999999995
- type: nauc_mrr_at_10_max
value: 56.7918
- type: nauc_mrr_at_10_std
value: 7.446400000000001
- type: nauc_mrr_at_10_diff1
value: 62.218399999999995
- type: nauc_mrr_at_20_max
value: 56.666399999999996
- type: nauc_mrr_at_20_std
value: 7.133399999999999
- type: nauc_mrr_at_20_diff1
value: 62.2684
- type: nauc_mrr_at_100_max
value: 56.60380000000001
- type: nauc_mrr_at_100_std
value: 7.143800000000001
- type: nauc_mrr_at_100_diff1
value: 62.332100000000004
- type: nauc_mrr_at_1000_max
value: 56.5913
- type: nauc_mrr_at_1000_std
value: 7.1212
- type: nauc_mrr_at_1000_diff1
value: 62.3459
- type: main_score
value: 65.679
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-spa)
type: facebook/mlqa
config: ara-spa
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 60.248000000000005
- type: ndcg_at_3
value: 69.247
- type: ndcg_at_5
value: 72.26599999999999
- type: ndcg_at_10
value: 73.994
- type: ndcg_at_20
value: 75.24300000000001
- type: ndcg_at_100
value: 76.547
- type: ndcg_at_1000
value: 76.547
- type: map_at_1
value: 60.248000000000005
- type: map_at_3
value: 67.184
- type: map_at_5
value: 68.83
- type: map_at_10
value: 69.49600000000001
- type: map_at_20
value: 69.83500000000001
- type: map_at_100
value: 70.031
- type: map_at_1000
value: 70.031
- type: recall_at_1
value: 60.248000000000005
- type: recall_at_3
value: 75.155
- type: recall_at_5
value: 82.609
- type: recall_at_10
value: 88.19900000000001
- type: recall_at_20
value: 93.16799999999999
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 60.248000000000005
- type: precision_at_3
value: 25.052000000000003
- type: precision_at_5
value: 16.522000000000002
- type: precision_at_10
value: 8.82
- type: precision_at_20
value: 4.658
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 60.248400000000004
- type: mrr_at_3
value: 67.1843
- type: mrr_at_5
value: 68.83019999999999
- type: mrr_at_10
value: 69.49600000000001
- type: mrr_at_20
value: 69.8345
- type: mrr_at_100
value: 70.03049999999999
- type: mrr_at_1000
value: 70.03049999999999
- type: nauc_ndcg_at_1_max
value: 51.1706
- type: nauc_ndcg_at_1_std
value: -8.1716
- type: nauc_ndcg_at_1_diff1
value: 73.443
- type: nauc_ndcg_at_3_max
value: 61.9764
- type: nauc_ndcg_at_3_std
value: 4.0499
- type: nauc_ndcg_at_3_diff1
value: 67.49589999999999
- type: nauc_ndcg_at_5_max
value: 60.4749
- type: nauc_ndcg_at_5_std
value: 8.561399999999999
- type: nauc_ndcg_at_5_diff1
value: 65.4543
- type: nauc_ndcg_at_10_max
value: 61.6645
- type: nauc_ndcg_at_10_std
value: 8.186200000000001
- type: nauc_ndcg_at_10_diff1
value: 67.3523
- type: nauc_ndcg_at_20_max
value: 60.9429
- type: nauc_ndcg_at_20_std
value: 7.7970999999999995
- type: nauc_ndcg_at_20_diff1
value: 67.1078
- type: nauc_ndcg_at_100_max
value: 59.452400000000004
- type: nauc_ndcg_at_100_std
value: 4.6432
- type: nauc_ndcg_at_100_diff1
value: 68.0564
- type: nauc_ndcg_at_1000_max
value: 59.452400000000004
- type: nauc_ndcg_at_1000_std
value: 4.6432
- type: nauc_ndcg_at_1000_diff1
value: 68.0564
- type: nauc_map_at_1_max
value: 51.1706
- type: nauc_map_at_1_std
value: -8.1716
- type: nauc_map_at_1_diff1
value: 73.443
- type: nauc_map_at_3_max
value: 59.385299999999994
- type: nauc_map_at_3_std
value: 1.1125
- type: nauc_map_at_3_diff1
value: 68.9884
- type: nauc_map_at_5_max
value: 58.473600000000005
- type: nauc_map_at_5_std
value: 3.273
- type: nauc_map_at_5_diff1
value: 68.0102
- type: nauc_map_at_10_max
value: 58.869899999999994
- type: nauc_map_at_10_std
value: 3.1175
- type: nauc_map_at_10_diff1
value: 68.7308
- type: nauc_map_at_20_max
value: 58.6638
- type: nauc_map_at_20_std
value: 2.9529
- type: nauc_map_at_20_diff1
value: 68.6787
- type: nauc_map_at_100_max
value: 58.465
- type: nauc_map_at_100_std
value: 2.5943
- type: nauc_map_at_100_diff1
value: 68.7955
- type: nauc_map_at_1000_max
value: 58.465
- type: nauc_map_at_1000_std
value: 2.5943
- type: nauc_map_at_1000_diff1
value: 68.7955
- type: nauc_recall_at_1_max
value: 51.1706
- type: nauc_recall_at_1_std
value: -8.1716
- type: nauc_recall_at_1_diff1
value: 73.443
- type: nauc_recall_at_3_max
value: 70.9051
- type: nauc_recall_at_3_std
value: 14.1759
- type: nauc_recall_at_3_diff1
value: 62.3143
- type: nauc_recall_at_5_max
value: 68.99159999999999
- type: nauc_recall_at_5_std
value: 33.226499999999994
- type: nauc_recall_at_5_diff1
value: 53.53790000000001
- type: nauc_recall_at_10_max
value: 79.36149999999999
- type: nauc_recall_at_10_std
value: 40.149
- type: nauc_recall_at_10_diff1
value: 59.90220000000001
- type: nauc_recall_at_20_max
value: 83.0489
- type: nauc_recall_at_20_std
value: 57.8707
- type: nauc_recall_at_20_diff1
value: 52.1552
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 51.1706
- type: nauc_precision_at_1_std
value: -8.1716
- type: nauc_precision_at_1_diff1
value: 73.443
- type: nauc_precision_at_3_max
value: 70.9051
- type: nauc_precision_at_3_std
value: 14.1759
- type: nauc_precision_at_3_diff1
value: 62.3143
- type: nauc_precision_at_5_max
value: 68.99159999999999
- type: nauc_precision_at_5_std
value: 33.226499999999994
- type: nauc_precision_at_5_diff1
value: 53.53790000000001
- type: nauc_precision_at_10_max
value: 79.36149999999999
- type: nauc_precision_at_10_std
value: 40.149
- type: nauc_precision_at_10_diff1
value: 59.90220000000001
- type: nauc_precision_at_20_max
value: 83.0489
- type: nauc_precision_at_20_std
value: 57.8707
- type: nauc_precision_at_20_diff1
value: 52.1552
- type: nauc_precision_at_100_max
value: .nan
- type: nauc_precision_at_100_std
value: .nan
- type: nauc_precision_at_100_diff1
value: .nan
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 51.1706
- type: nauc_mrr_at_1_std
value: -8.1716
- type: nauc_mrr_at_1_diff1
value: 73.443
- type: nauc_mrr_at_3_max
value: 59.385299999999994
- type: nauc_mrr_at_3_std
value: 1.1125
- type: nauc_mrr_at_3_diff1
value: 68.9884
- type: nauc_mrr_at_5_max
value: 58.473600000000005
- type: nauc_mrr_at_5_std
value: 3.273
- type: nauc_mrr_at_5_diff1
value: 68.0102
- type: nauc_mrr_at_10_max
value: 58.869899999999994
- type: nauc_mrr_at_10_std
value: 3.1175
- type: nauc_mrr_at_10_diff1
value: 68.7308
- type: nauc_mrr_at_20_max
value: 58.6638
- type: nauc_mrr_at_20_std
value: 2.9529
- type: nauc_mrr_at_20_diff1
value: 68.6787
- type: nauc_mrr_at_100_max
value: 58.465
- type: nauc_mrr_at_100_std
value: 2.5943
- type: nauc_mrr_at_100_diff1
value: 68.7955
- type: nauc_mrr_at_1000_max
value: 58.465
- type: nauc_mrr_at_1000_std
value: 2.5943
- type: nauc_mrr_at_1000_diff1
value: 68.7955
- type: main_score
value: 73.994
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-hin)
type: facebook/mlqa
config: ara-hin
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 54.301
- type: ndcg_at_3
value: 65.598
- type: ndcg_at_5
value: 68.46600000000001
- type: ndcg_at_10
value: 70.511
- type: ndcg_at_20
value: 71.58200000000001
- type: ndcg_at_100
value: 73.014
- type: ndcg_at_1000
value: 73.165
- type: map_at_1
value: 54.301
- type: map_at_3
value: 62.814
- type: map_at_5
value: 64.4
- type: map_at_10
value: 65.21900000000001
- type: map_at_20
value: 65.503
- type: map_at_100
value: 65.712
- type: map_at_1000
value: 65.72
- type: recall_at_1
value: 54.301
- type: recall_at_3
value: 73.656
- type: recall_at_5
value: 80.645
- type: recall_at_10
value: 87.09700000000001
- type: recall_at_20
value: 91.398
- type: recall_at_100
value: 98.925
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 54.301
- type: precision_at_3
value: 24.552
- type: precision_at_5
value: 16.128999999999998
- type: precision_at_10
value: 8.709999999999999
- type: precision_at_20
value: 4.569999999999999
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 54.301100000000005
- type: mrr_at_3
value: 62.8136
- type: mrr_at_5
value: 64.3996
- type: mrr_at_10
value: 65.2187
- type: mrr_at_20
value: 65.5029
- type: mrr_at_100
value: 65.71209999999999
- type: mrr_at_1000
value: 65.72
- type: nauc_ndcg_at_1_max
value: 53.0712
- type: nauc_ndcg_at_1_std
value: 3.4898
- type: nauc_ndcg_at_1_diff1
value: 66.2941
- type: nauc_ndcg_at_3_max
value: 59.7553
- type: nauc_ndcg_at_3_std
value: 12.1777
- type: nauc_ndcg_at_3_diff1
value: 62.923399999999994
- type: nauc_ndcg_at_5_max
value: 59.16630000000001
- type: nauc_ndcg_at_5_std
value: 11.998899999999999
- type: nauc_ndcg_at_5_diff1
value: 61.015699999999995
- type: nauc_ndcg_at_10_max
value: 59.5264
- type: nauc_ndcg_at_10_std
value: 14.9617
- type: nauc_ndcg_at_10_diff1
value: 62.1769
- type: nauc_ndcg_at_20_max
value: 59.5248
- type: nauc_ndcg_at_20_std
value: 13.4521
- type: nauc_ndcg_at_20_diff1
value: 63.1046
- type: nauc_ndcg_at_100_max
value: 58.8175
- type: nauc_ndcg_at_100_std
value: 12.1264
- type: nauc_ndcg_at_100_diff1
value: 63.231
- type: nauc_ndcg_at_1000_max
value: 58.571200000000005
- type: nauc_ndcg_at_1000_std
value: 11.6462
- type: nauc_ndcg_at_1000_diff1
value: 63.166900000000005
- type: nauc_map_at_1_max
value: 53.0712
- type: nauc_map_at_1_std
value: 3.4898
- type: nauc_map_at_1_diff1
value: 66.2941
- type: nauc_map_at_3_max
value: 58.0839
- type: nauc_map_at_3_std
value: 9.8015
- type: nauc_map_at_3_diff1
value: 63.7764
- type: nauc_map_at_5_max
value: 57.7643
- type: nauc_map_at_5_std
value: 9.661200000000001
- type: nauc_map_at_5_diff1
value: 62.8703
- type: nauc_map_at_10_max
value: 57.92230000000001
- type: nauc_map_at_10_std
value: 10.7513
- type: nauc_map_at_10_diff1
value: 63.282700000000006
- type: nauc_map_at_20_max
value: 57.898
- type: nauc_map_at_20_std
value: 10.3559
- type: nauc_map_at_20_diff1
value: 63.4981
- type: nauc_map_at_100_max
value: 57.8164
- type: nauc_map_at_100_std
value: 10.2083
- type: nauc_map_at_100_diff1
value: 63.524
- type: nauc_map_at_1000_max
value: 57.80610000000001
- type: nauc_map_at_1000_std
value: 10.1882
- type: nauc_map_at_1000_diff1
value: 63.521499999999996
- type: nauc_recall_at_1_max
value: 53.0712
- type: nauc_recall_at_1_std
value: 3.4898
- type: nauc_recall_at_1_diff1
value: 66.2941
- type: nauc_recall_at_3_max
value: 65.6965
- type: nauc_recall_at_3_std
value: 20.741100000000003
- type: nauc_recall_at_3_diff1
value: 59.885600000000004
- type: nauc_recall_at_5_max
value: 65.05539999999999
- type: nauc_recall_at_5_std
value: 22.2359
- type: nauc_recall_at_5_diff1
value: 52.3555
- type: nauc_recall_at_10_max
value: 69.0771
- type: nauc_recall_at_10_std
value: 43.1849
- type: nauc_recall_at_10_diff1
value: 55.924099999999996
- type: nauc_recall_at_20_max
value: 73.63589999999999
- type: nauc_recall_at_20_std
value: 40.5013
- type: nauc_recall_at_20_diff1
value: 62.9617
- type: nauc_recall_at_100_max
value: 93.44839999999999
- type: nauc_recall_at_100_std
value: 79.5537
- type: nauc_recall_at_100_diff1
value: 72.2107
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 53.0712
- type: nauc_precision_at_1_std
value: 3.4898
- type: nauc_precision_at_1_diff1
value: 66.2941
- type: nauc_precision_at_3_max
value: 65.6965
- type: nauc_precision_at_3_std
value: 20.741100000000003
- type: nauc_precision_at_3_diff1
value: 59.885600000000004
- type: nauc_precision_at_5_max
value: 65.05539999999999
- type: nauc_precision_at_5_std
value: 22.2359
- type: nauc_precision_at_5_diff1
value: 52.3555
- type: nauc_precision_at_10_max
value: 69.0771
- type: nauc_precision_at_10_std
value: 43.1849
- type: nauc_precision_at_10_diff1
value: 55.924099999999996
- type: nauc_precision_at_20_max
value: 73.63589999999999
- type: nauc_precision_at_20_std
value: 40.5013
- type: nauc_precision_at_20_diff1
value: 62.9617
- type: nauc_precision_at_100_max
value: 93.44839999999999
- type: nauc_precision_at_100_std
value: 79.5537
- type: nauc_precision_at_100_diff1
value: 72.2107
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 53.0712
- type: nauc_mrr_at_1_std
value: 3.4898
- type: nauc_mrr_at_1_diff1
value: 66.2941
- type: nauc_mrr_at_3_max
value: 58.0839
- type: nauc_mrr_at_3_std
value: 9.8015
- type: nauc_mrr_at_3_diff1
value: 63.7764
- type: nauc_mrr_at_5_max
value: 57.7643
- type: nauc_mrr_at_5_std
value: 9.661200000000001
- type: nauc_mrr_at_5_diff1
value: 62.8703
- type: nauc_mrr_at_10_max
value: 57.92230000000001
- type: nauc_mrr_at_10_std
value: 10.7513
- type: nauc_mrr_at_10_diff1
value: 63.282700000000006
- type: nauc_mrr_at_20_max
value: 57.898
- type: nauc_mrr_at_20_std
value: 10.3559
- type: nauc_mrr_at_20_diff1
value: 63.4981
- type: nauc_mrr_at_100_max
value: 57.8164
- type: nauc_mrr_at_100_std
value: 10.2083
- type: nauc_mrr_at_100_diff1
value: 63.524
- type: nauc_mrr_at_1000_max
value: 57.80610000000001
- type: nauc_mrr_at_1000_std
value: 10.1882
- type: nauc_mrr_at_1000_diff1
value: 63.521499999999996
- type: main_score
value: 70.511
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-vie)
type: facebook/mlqa
config: ara-vie
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 52.147
- type: ndcg_at_3
value: 60.407
- type: ndcg_at_5
value: 64.209
- type: ndcg_at_10
value: 66.841
- type: ndcg_at_20
value: 68.27
- type: ndcg_at_100
value: 70.407
- type: ndcg_at_1000
value: 70.407
- type: map_at_1
value: 52.147
- type: map_at_3
value: 58.384
- type: map_at_5
value: 60.501000000000005
- type: map_at_10
value: 61.617
- type: map_at_20
value: 62.026
- type: map_at_100
value: 62.356
- type: map_at_1000
value: 62.356
- type: recall_at_1
value: 52.147
- type: recall_at_3
value: 66.258
- type: recall_at_5
value: 75.46000000000001
- type: recall_at_10
value: 83.43599999999999
- type: recall_at_20
value: 88.957
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 52.147
- type: precision_at_3
value: 22.086
- type: precision_at_5
value: 15.092
- type: precision_at_10
value: 8.344
- type: precision_at_20
value: 4.4479999999999995
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 52.147200000000005
- type: mrr_at_3
value: 58.384499999999996
- type: mrr_at_5
value: 60.501000000000005
- type: mrr_at_10
value: 61.616499999999995
- type: mrr_at_20
value: 62.02609999999999
- type: mrr_at_100
value: 62.3563
- type: mrr_at_1000
value: 62.3563
- type: nauc_ndcg_at_1_max
value: 62.013
- type: nauc_ndcg_at_1_std
value: 14.3347
- type: nauc_ndcg_at_1_diff1
value: 63.092000000000006
- type: nauc_ndcg_at_3_max
value: 64.3437
- type: nauc_ndcg_at_3_std
value: 17.8683
- type: nauc_ndcg_at_3_diff1
value: 58.916999999999994
- type: nauc_ndcg_at_5_max
value: 62.3664
- type: nauc_ndcg_at_5_std
value: 17.697
- type: nauc_ndcg_at_5_diff1
value: 57.1928
- type: nauc_ndcg_at_10_max
value: 62.8166
- type: nauc_ndcg_at_10_std
value: 19.034599999999998
- type: nauc_ndcg_at_10_diff1
value: 58.5172
- type: nauc_ndcg_at_20_max
value: 63.6594
- type: nauc_ndcg_at_20_std
value: 20.9389
- type: nauc_ndcg_at_20_diff1
value: 57.687900000000006
- type: nauc_ndcg_at_100_max
value: 63.109700000000004
- type: nauc_ndcg_at_100_std
value: 18.536
- type: nauc_ndcg_at_100_diff1
value: 58.574099999999994
- type: nauc_ndcg_at_1000_max
value: 63.109700000000004
- type: nauc_ndcg_at_1000_std
value: 18.536
- type: nauc_ndcg_at_1000_diff1
value: 58.574099999999994
- type: nauc_map_at_1_max
value: 62.013
- type: nauc_map_at_1_std
value: 14.3347
- type: nauc_map_at_1_diff1
value: 63.092000000000006
- type: nauc_map_at_3_max
value: 63.7613
- type: nauc_map_at_3_std
value: 17.387800000000002
- type: nauc_map_at_3_diff1
value: 59.5963
- type: nauc_map_at_5_max
value: 62.6696
- type: nauc_map_at_5_std
value: 17.2029
- type: nauc_map_at_5_diff1
value: 58.5964
- type: nauc_map_at_10_max
value: 62.7803
- type: nauc_map_at_10_std
value: 17.6424
- type: nauc_map_at_10_diff1
value: 59.108799999999995
- type: nauc_map_at_20_max
value: 63.032
- type: nauc_map_at_20_std
value: 18.2008
- type: nauc_map_at_20_diff1
value: 58.8951
- type: nauc_map_at_100_max
value: 62.961800000000004
- type: nauc_map_at_100_std
value: 17.8419
- type: nauc_map_at_100_diff1
value: 59.0283
- type: nauc_map_at_1000_max
value: 62.961800000000004
- type: nauc_map_at_1000_std
value: 17.8419
- type: nauc_map_at_1000_diff1
value: 59.0283
- type: nauc_recall_at_1_max
value: 62.013
- type: nauc_recall_at_1_std
value: 14.3347
- type: nauc_recall_at_1_diff1
value: 63.092000000000006
- type: nauc_recall_at_3_max
value: 66.2268
- type: nauc_recall_at_3_std
value: 19.2254
- type: nauc_recall_at_3_diff1
value: 56.8986
- type: nauc_recall_at_5_max
value: 60.8216
- type: nauc_recall_at_5_std
value: 19.4877
- type: nauc_recall_at_5_diff1
value: 51.761900000000004
- type: nauc_recall_at_10_max
value: 63.136199999999995
- type: nauc_recall_at_10_std
value: 27.4165
- type: nauc_recall_at_10_diff1
value: 56.558
- type: nauc_recall_at_20_max
value: 69.8169
- type: nauc_recall_at_20_std
value: 45.7693
- type: nauc_recall_at_20_diff1
value: 48.7296
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 62.013
- type: nauc_precision_at_1_std
value: 14.3347
- type: nauc_precision_at_1_diff1
value: 63.092000000000006
- type: nauc_precision_at_3_max
value: 66.2268
- type: nauc_precision_at_3_std
value: 19.2254
- type: nauc_precision_at_3_diff1
value: 56.8986
- type: nauc_precision_at_5_max
value: 60.8216
- type: nauc_precision_at_5_std
value: 19.4877
- type: nauc_precision_at_5_diff1
value: 51.761900000000004
- type: nauc_precision_at_10_max
value: 63.136199999999995
- type: nauc_precision_at_10_std
value: 27.4165
- type: nauc_precision_at_10_diff1
value: 56.558
- type: nauc_precision_at_20_max
value: 69.8169
- type: nauc_precision_at_20_std
value: 45.7693
- type: nauc_precision_at_20_diff1
value: 48.7296
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 100.0
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 62.013
- type: nauc_mrr_at_1_std
value: 14.3347
- type: nauc_mrr_at_1_diff1
value: 63.092000000000006
- type: nauc_mrr_at_3_max
value: 63.7613
- type: nauc_mrr_at_3_std
value: 17.387800000000002
- type: nauc_mrr_at_3_diff1
value: 59.5963
- type: nauc_mrr_at_5_max
value: 62.6696
- type: nauc_mrr_at_5_std
value: 17.2029
- type: nauc_mrr_at_5_diff1
value: 58.5964
- type: nauc_mrr_at_10_max
value: 62.7803
- type: nauc_mrr_at_10_std
value: 17.6424
- type: nauc_mrr_at_10_diff1
value: 59.108799999999995
- type: nauc_mrr_at_20_max
value: 63.032
- type: nauc_mrr_at_20_std
value: 18.2008
- type: nauc_mrr_at_20_diff1
value: 58.8951
- type: nauc_mrr_at_100_max
value: 62.961800000000004
- type: nauc_mrr_at_100_std
value: 17.8419
- type: nauc_mrr_at_100_diff1
value: 59.0283
- type: nauc_mrr_at_1000_max
value: 62.961800000000004
- type: nauc_mrr_at_1000_std
value: 17.8419
- type: nauc_mrr_at_1000_diff1
value: 59.0283
- type: main_score
value: 66.841
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-zho)
type: facebook/mlqa
config: ara-zho
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 58.511
- type: ndcg_at_3
value: 68.022
- type: ndcg_at_5
value: 69.808
- type: ndcg_at_10
value: 71.552
- type: ndcg_at_20
value: 73.287
- type: ndcg_at_100
value: 74.737
- type: ndcg_at_1000
value: 74.964
- type: map_at_1
value: 58.511
- type: map_at_3
value: 65.78
- type: map_at_5
value: 66.791
- type: map_at_10
value: 67.523
- type: map_at_20
value: 67.994
- type: map_at_100
value: 68.219
- type: map_at_1000
value: 68.231
- type: recall_at_1
value: 58.511
- type: recall_at_3
value: 74.468
- type: recall_at_5
value: 78.723
- type: recall_at_10
value: 84.043
- type: recall_at_20
value: 90.957
- type: recall_at_100
value: 98.404
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 58.511
- type: precision_at_3
value: 24.823
- type: precision_at_5
value: 15.745000000000001
- type: precision_at_10
value: 8.404
- type: precision_at_20
value: 4.548
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 58.510600000000004
- type: mrr_at_3
value: 65.78009999999999
- type: mrr_at_5
value: 66.79079999999999
- type: mrr_at_10
value: 67.5232
- type: mrr_at_20
value: 67.994
- type: mrr_at_100
value: 68.2188
- type: mrr_at_1000
value: 68.2311
- type: nauc_ndcg_at_1_max
value: 47.2503
- type: nauc_ndcg_at_1_std
value: 14.4989
- type: nauc_ndcg_at_1_diff1
value: 63.2463
- type: nauc_ndcg_at_3_max
value: 54.855900000000005
- type: nauc_ndcg_at_3_std
value: 21.204700000000003
- type: nauc_ndcg_at_3_diff1
value: 60.0863
- type: nauc_ndcg_at_5_max
value: 55.416399999999996
- type: nauc_ndcg_at_5_std
value: 22.047900000000002
- type: nauc_ndcg_at_5_diff1
value: 61.1254
- type: nauc_ndcg_at_10_max
value: 53.0238
- type: nauc_ndcg_at_10_std
value: 19.6632
- type: nauc_ndcg_at_10_diff1
value: 60.5071
- type: nauc_ndcg_at_20_max
value: 53.337599999999995
- type: nauc_ndcg_at_20_std
value: 21.4431
- type: nauc_ndcg_at_20_diff1
value: 59.5753
- type: nauc_ndcg_at_100_max
value: 52.819300000000005
- type: nauc_ndcg_at_100_std
value: 20.0427
- type: nauc_ndcg_at_100_diff1
value: 60.933800000000005
- type: nauc_ndcg_at_1000_max
value: 52.70399999999999
- type: nauc_ndcg_at_1000_std
value: 19.5895
- type: nauc_ndcg_at_1000_diff1
value: 60.8733
- type: nauc_map_at_1_max
value: 47.2503
- type: nauc_map_at_1_std
value: 14.4989
- type: nauc_map_at_1_diff1
value: 63.2463
- type: nauc_map_at_3_max
value: 52.973400000000005
- type: nauc_map_at_3_std
value: 19.3872
- type: nauc_map_at_3_diff1
value: 60.8399
- type: nauc_map_at_5_max
value: 53.166999999999994
- type: nauc_map_at_5_std
value: 19.7018
- type: nauc_map_at_5_diff1
value: 61.3792
- type: nauc_map_at_10_max
value: 52.2108
- type: nauc_map_at_10_std
value: 18.693199999999997
- type: nauc_map_at_10_diff1
value: 61.15390000000001
- type: nauc_map_at_20_max
value: 52.2363
- type: nauc_map_at_20_std
value: 19.135099999999998
- type: nauc_map_at_20_diff1
value: 60.963
- type: nauc_map_at_100_max
value: 52.16499999999999
- type: nauc_map_at_100_std
value: 18.8758
- type: nauc_map_at_100_diff1
value: 61.1737
- type: nauc_map_at_1000_max
value: 52.1605
- type: nauc_map_at_1000_std
value: 18.8562
- type: nauc_map_at_1000_diff1
value: 61.1715
- type: nauc_recall_at_1_max
value: 47.2503
- type: nauc_recall_at_1_std
value: 14.4989
- type: nauc_recall_at_1_diff1
value: 63.2463
- type: nauc_recall_at_3_max
value: 61.4028
- type: nauc_recall_at_3_std
value: 27.6147
- type: nauc_recall_at_3_diff1
value: 57.4815
- type: nauc_recall_at_5_max
value: 64.4332
- type: nauc_recall_at_5_std
value: 31.658399999999997
- type: nauc_recall_at_5_diff1
value: 60.4164
- type: nauc_recall_at_10_max
value: 55.680099999999996
- type: nauc_recall_at_10_std
value: 23.6144
- type: nauc_recall_at_10_diff1
value: 57.232099999999996
- type: nauc_recall_at_20_max
value: 61.303700000000006
- type: nauc_recall_at_20_std
value: 42.750899999999994
- type: nauc_recall_at_20_diff1
value: 45.5658
- type: nauc_recall_at_100_max
value: 63.750099999999996
- type: nauc_recall_at_100_std
value: 61.4922
- type: nauc_recall_at_100_diff1
value: 66.5823
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 47.2503
- type: nauc_precision_at_1_std
value: 14.4989
- type: nauc_precision_at_1_diff1
value: 63.2463
- type: nauc_precision_at_3_max
value: 61.4028
- type: nauc_precision_at_3_std
value: 27.6147
- type: nauc_precision_at_3_diff1
value: 57.4815
- type: nauc_precision_at_5_max
value: 64.4332
- type: nauc_precision_at_5_std
value: 31.658399999999997
- type: nauc_precision_at_5_diff1
value: 60.4164
- type: nauc_precision_at_10_max
value: 55.680099999999996
- type: nauc_precision_at_10_std
value: 23.6144
- type: nauc_precision_at_10_diff1
value: 57.232099999999996
- type: nauc_precision_at_20_max
value: 61.303700000000006
- type: nauc_precision_at_20_std
value: 42.750899999999994
- type: nauc_precision_at_20_diff1
value: 45.5658
- type: nauc_precision_at_100_max
value: 63.750099999999996
- type: nauc_precision_at_100_std
value: 61.4922
- type: nauc_precision_at_100_diff1
value: 66.5823
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 47.2503
- type: nauc_mrr_at_1_std
value: 14.4989
- type: nauc_mrr_at_1_diff1
value: 63.2463
- type: nauc_mrr_at_3_max
value: 52.973400000000005
- type: nauc_mrr_at_3_std
value: 19.3872
- type: nauc_mrr_at_3_diff1
value: 60.8399
- type: nauc_mrr_at_5_max
value: 53.166999999999994
- type: nauc_mrr_at_5_std
value: 19.7018
- type: nauc_mrr_at_5_diff1
value: 61.3792
- type: nauc_mrr_at_10_max
value: 52.2108
- type: nauc_mrr_at_10_std
value: 18.693199999999997
- type: nauc_mrr_at_10_diff1
value: 61.15390000000001
- type: nauc_mrr_at_20_max
value: 52.2363
- type: nauc_mrr_at_20_std
value: 19.135099999999998
- type: nauc_mrr_at_20_diff1
value: 60.963
- type: nauc_mrr_at_100_max
value: 52.16499999999999
- type: nauc_mrr_at_100_std
value: 18.8758
- type: nauc_mrr_at_100_diff1
value: 61.1737
- type: nauc_mrr_at_1000_max
value: 52.1605
- type: nauc_mrr_at_1000_std
value: 18.8562
- type: nauc_mrr_at_1000_diff1
value: 61.1715
- type: main_score
value: 71.552
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-ara)
type: facebook/mlqa
config: deu-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 48.792
- type: ndcg_at_3
value: 58.879000000000005
- type: ndcg_at_5
value: 62.039
- type: ndcg_at_10
value: 64.575
- type: ndcg_at_20
value: 66.373
- type: ndcg_at_100
value: 68.355
- type: ndcg_at_1000
value: 68.423
- type: map_at_1
value: 48.792
- type: map_at_3
value: 56.361000000000004
- type: map_at_5
value: 58.099999999999994
- type: map_at_10
value: 59.168
- type: map_at_20
value: 59.643
- type: map_at_100
value: 59.924
- type: map_at_1000
value: 59.927
- type: recall_at_1
value: 48.792
- type: recall_at_3
value: 66.184
- type: recall_at_5
value: 73.913
- type: recall_at_10
value: 81.643
- type: recall_at_20
value: 88.889
- type: recall_at_100
value: 99.517
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 48.792
- type: precision_at_3
value: 22.061
- type: precision_at_5
value: 14.783
- type: precision_at_10
value: 8.164
- type: precision_at_20
value: 4.444
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 48.7923
- type: mrr_at_3
value: 56.360699999999994
- type: mrr_at_5
value: 58.0998
- type: mrr_at_10
value: 59.1684
- type: mrr_at_20
value: 59.6429
- type: mrr_at_100
value: 59.923899999999996
- type: mrr_at_1000
value: 59.927299999999995
- type: nauc_ndcg_at_1_max
value: 60.14659999999999
- type: nauc_ndcg_at_1_std
value: 24.918000000000003
- type: nauc_ndcg_at_1_diff1
value: 68.1555
- type: nauc_ndcg_at_3_max
value: 68.1987
- type: nauc_ndcg_at_3_std
value: 33.2158
- type: nauc_ndcg_at_3_diff1
value: 65.9628
- type: nauc_ndcg_at_5_max
value: 67.9623
- type: nauc_ndcg_at_5_std
value: 35.7052
- type: nauc_ndcg_at_5_diff1
value: 65.3555
- type: nauc_ndcg_at_10_max
value: 67.2588
- type: nauc_ndcg_at_10_std
value: 35.5972
- type: nauc_ndcg_at_10_diff1
value: 64.43560000000001
- type: nauc_ndcg_at_20_max
value: 66.4426
- type: nauc_ndcg_at_20_std
value: 34.2402
- type: nauc_ndcg_at_20_diff1
value: 64.6256
- type: nauc_ndcg_at_100_max
value: 65.9374
- type: nauc_ndcg_at_100_std
value: 33.2936
- type: nauc_ndcg_at_100_diff1
value: 65.4946
- type: nauc_ndcg_at_1000_max
value: 65.8403
- type: nauc_ndcg_at_1000_std
value: 33.1036
- type: nauc_ndcg_at_1000_diff1
value: 65.4336
- type: nauc_map_at_1_max
value: 60.14659999999999
- type: nauc_map_at_1_std
value: 24.918000000000003
- type: nauc_map_at_1_diff1
value: 68.1555
- type: nauc_map_at_3_max
value: 65.9154
- type: nauc_map_at_3_std
value: 31.2376
- type: nauc_map_at_3_diff1
value: 66.2823
- type: nauc_map_at_5_max
value: 65.6741
- type: nauc_map_at_5_std
value: 32.3493
- type: nauc_map_at_5_diff1
value: 65.985
- type: nauc_map_at_10_max
value: 65.32430000000001
- type: nauc_map_at_10_std
value: 32.1969
- type: nauc_map_at_10_diff1
value: 65.6151
- type: nauc_map_at_20_max
value: 65.11710000000001
- type: nauc_map_at_20_std
value: 31.842599999999997
- type: nauc_map_at_20_diff1
value: 65.6874
- type: nauc_map_at_100_max
value: 65.0633
- type: nauc_map_at_100_std
value: 31.7911
- type: nauc_map_at_100_diff1
value: 65.803
- type: nauc_map_at_1000_max
value: 65.0593
- type: nauc_map_at_1000_std
value: 31.7832
- type: nauc_map_at_1000_diff1
value: 65.8006
- type: nauc_recall_at_1_max
value: 60.14659999999999
- type: nauc_recall_at_1_std
value: 24.918000000000003
- type: nauc_recall_at_1_diff1
value: 68.1555
- type: nauc_recall_at_3_max
value: 75.8576
- type: nauc_recall_at_3_std
value: 39.685900000000004
- type: nauc_recall_at_3_diff1
value: 65.02459999999999
- type: nauc_recall_at_5_max
value: 76.9843
- type: nauc_recall_at_5_std
value: 49.3317
- type: nauc_recall_at_5_diff1
value: 62.922599999999996
- type: nauc_recall_at_10_max
value: 76.8501
- type: nauc_recall_at_10_std
value: 53.6033
- type: nauc_recall_at_10_diff1
value: 58.028999999999996
- type: nauc_recall_at_20_max
value: 74.5552
- type: nauc_recall_at_20_std
value: 51.1048
- type: nauc_recall_at_20_diff1
value: 55.864000000000004
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 86.907
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 60.14659999999999
- type: nauc_precision_at_1_std
value: 24.918000000000003
- type: nauc_precision_at_1_diff1
value: 68.1555
- type: nauc_precision_at_3_max
value: 75.8576
- type: nauc_precision_at_3_std
value: 39.685900000000004
- type: nauc_precision_at_3_diff1
value: 65.02459999999999
- type: nauc_precision_at_5_max
value: 76.9843
- type: nauc_precision_at_5_std
value: 49.3317
- type: nauc_precision_at_5_diff1
value: 62.922599999999996
- type: nauc_precision_at_10_max
value: 76.8501
- type: nauc_precision_at_10_std
value: 53.6033
- type: nauc_precision_at_10_diff1
value: 58.028999999999996
- type: nauc_precision_at_20_max
value: 74.5552
- type: nauc_precision_at_20_std
value: 51.1048
- type: nauc_precision_at_20_diff1
value: 55.864000000000004
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 86.907
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 60.14659999999999
- type: nauc_mrr_at_1_std
value: 24.918000000000003
- type: nauc_mrr_at_1_diff1
value: 68.1555
- type: nauc_mrr_at_3_max
value: 65.9154
- type: nauc_mrr_at_3_std
value: 31.2376
- type: nauc_mrr_at_3_diff1
value: 66.2823
- type: nauc_mrr_at_5_max
value: 65.6741
- type: nauc_mrr_at_5_std
value: 32.3493
- type: nauc_mrr_at_5_diff1
value: 65.985
- type: nauc_mrr_at_10_max
value: 65.32430000000001
- type: nauc_mrr_at_10_std
value: 32.1969
- type: nauc_mrr_at_10_diff1
value: 65.6151
- type: nauc_mrr_at_20_max
value: 65.11710000000001
- type: nauc_mrr_at_20_std
value: 31.842599999999997
- type: nauc_mrr_at_20_diff1
value: 65.6874
- type: nauc_mrr_at_100_max
value: 65.0633
- type: nauc_mrr_at_100_std
value: 31.7911
- type: nauc_mrr_at_100_diff1
value: 65.803
- type: nauc_mrr_at_1000_max
value: 65.0593
- type: nauc_mrr_at_1000_std
value: 31.7832
- type: nauc_mrr_at_1000_diff1
value: 65.8006
- type: main_score
value: 64.575
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-ara)
type: facebook/mlqa
config: eng-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 44.101
- type: ndcg_at_3
value: 53.613
- type: ndcg_at_5
value: 57.083
- type: ndcg_at_10
value: 59.467000000000006
- type: ndcg_at_20
value: 61.085
- type: ndcg_at_100
value: 62.991
- type: ndcg_at_1000
value: 63.837999999999994
- type: map_at_1
value: 44.101
- type: map_at_3
value: 51.225
- type: map_at_5
value: 53.13
- type: map_at_10
value: 54.081
- type: map_at_20
value: 54.529
- type: map_at_100
value: 54.771
- type: map_at_1000
value: 54.806999999999995
- type: recall_at_1
value: 44.101
- type: recall_at_3
value: 60.541999999999994
- type: recall_at_5
value: 69.052
- type: recall_at_10
value: 76.596
- type: recall_at_20
value: 82.979
- type: recall_at_100
value: 93.61699999999999
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 44.101
- type: precision_at_3
value: 20.180999999999997
- type: precision_at_5
value: 13.81
- type: precision_at_10
value: 7.66
- type: precision_at_20
value: 4.149
- type: precision_at_100
value: 0.936
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 44.1006
- type: mrr_at_3
value: 51.225
- type: mrr_at_5
value: 53.1302
- type: mrr_at_10
value: 54.0814
- type: mrr_at_20
value: 54.5288
- type: mrr_at_100
value: 54.770799999999994
- type: mrr_at_1000
value: 54.8066
- type: nauc_ndcg_at_1_max
value: 55.80310000000001
- type: nauc_ndcg_at_1_std
value: 22.0275
- type: nauc_ndcg_at_1_diff1
value: 56.5222
- type: nauc_ndcg_at_3_max
value: 54.8699
- type: nauc_ndcg_at_3_std
value: 25.883699999999997
- type: nauc_ndcg_at_3_diff1
value: 49.195699999999995
- type: nauc_ndcg_at_5_max
value: 56.272299999999994
- type: nauc_ndcg_at_5_std
value: 28.6933
- type: nauc_ndcg_at_5_diff1
value: 49.4566
- type: nauc_ndcg_at_10_max
value: 55.6011
- type: nauc_ndcg_at_10_std
value: 27.5248
- type: nauc_ndcg_at_10_diff1
value: 48.7372
- type: nauc_ndcg_at_20_max
value: 55.49230000000001
- type: nauc_ndcg_at_20_std
value: 26.862599999999997
- type: nauc_ndcg_at_20_diff1
value: 49.382799999999996
- type: nauc_ndcg_at_100_max
value: 55.7909
- type: nauc_ndcg_at_100_std
value: 27.314100000000003
- type: nauc_ndcg_at_100_diff1
value: 50.6826
- type: nauc_ndcg_at_1000_max
value: 55.614200000000004
- type: nauc_ndcg_at_1000_std
value: 26.6721
- type: nauc_ndcg_at_1000_diff1
value: 50.67660000000001
- type: nauc_map_at_1_max
value: 55.80310000000001
- type: nauc_map_at_1_std
value: 22.0275
- type: nauc_map_at_1_diff1
value: 56.5222
- type: nauc_map_at_3_max
value: 54.9107
- type: nauc_map_at_3_std
value: 24.803
- type: nauc_map_at_3_diff1
value: 51.0794
- type: nauc_map_at_5_max
value: 55.702600000000004
- type: nauc_map_at_5_std
value: 26.3248
- type: nauc_map_at_5_diff1
value: 51.3243
- type: nauc_map_at_10_max
value: 55.4072
- type: nauc_map_at_10_std
value: 25.8517
- type: nauc_map_at_10_diff1
value: 51.073100000000004
- type: nauc_map_at_20_max
value: 55.4075
- type: nauc_map_at_20_std
value: 25.684600000000003
- type: nauc_map_at_20_diff1
value: 51.2544
- type: nauc_map_at_100_max
value: 55.4738
- type: nauc_map_at_100_std
value: 25.7963
- type: nauc_map_at_100_diff1
value: 51.4555
- type: nauc_map_at_1000_max
value: 55.4642
- type: nauc_map_at_1000_std
value: 25.7658
- type: nauc_map_at_1000_diff1
value: 51.4559
- type: nauc_recall_at_1_max
value: 55.80310000000001
- type: nauc_recall_at_1_std
value: 22.0275
- type: nauc_recall_at_1_diff1
value: 56.5222
- type: nauc_recall_at_3_max
value: 54.8305
- type: nauc_recall_at_3_std
value: 29.317999999999998
- type: nauc_recall_at_3_diff1
value: 43.279
- type: nauc_recall_at_5_max
value: 58.5943
- type: nauc_recall_at_5_std
value: 37.6264
- type: nauc_recall_at_5_diff1
value: 42.7338
- type: nauc_recall_at_10_max
value: 56.5176
- type: nauc_recall_at_10_std
value: 34.6487
- type: nauc_recall_at_10_diff1
value: 38.0783
- type: nauc_recall_at_20_max
value: 55.6135
- type: nauc_recall_at_20_std
value: 32.082100000000004
- type: nauc_recall_at_20_diff1
value: 39.259100000000004
- type: nauc_recall_at_100_max
value: 60.3625
- type: nauc_recall_at_100_std
value: 45.4796
- type: nauc_recall_at_100_diff1
value: 50.6829
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 55.80310000000001
- type: nauc_precision_at_1_std
value: 22.0275
- type: nauc_precision_at_1_diff1
value: 56.5222
- type: nauc_precision_at_3_max
value: 54.8305
- type: nauc_precision_at_3_std
value: 29.317999999999998
- type: nauc_precision_at_3_diff1
value: 43.279
- type: nauc_precision_at_5_max
value: 58.5943
- type: nauc_precision_at_5_std
value: 37.6264
- type: nauc_precision_at_5_diff1
value: 42.7338
- type: nauc_precision_at_10_max
value: 56.5176
- type: nauc_precision_at_10_std
value: 34.6487
- type: nauc_precision_at_10_diff1
value: 38.0783
- type: nauc_precision_at_20_max
value: 55.6135
- type: nauc_precision_at_20_std
value: 32.082100000000004
- type: nauc_precision_at_20_diff1
value: 39.259100000000004
- type: nauc_precision_at_100_max
value: 60.3625
- type: nauc_precision_at_100_std
value: 45.4796
- type: nauc_precision_at_100_diff1
value: 50.6829
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 55.80310000000001
- type: nauc_mrr_at_1_std
value: 22.0275
- type: nauc_mrr_at_1_diff1
value: 56.5222
- type: nauc_mrr_at_3_max
value: 54.9107
- type: nauc_mrr_at_3_std
value: 24.803
- type: nauc_mrr_at_3_diff1
value: 51.0794
- type: nauc_mrr_at_5_max
value: 55.702600000000004
- type: nauc_mrr_at_5_std
value: 26.3248
- type: nauc_mrr_at_5_diff1
value: 51.3243
- type: nauc_mrr_at_10_max
value: 55.4072
- type: nauc_mrr_at_10_std
value: 25.8517
- type: nauc_mrr_at_10_diff1
value: 51.073100000000004
- type: nauc_mrr_at_20_max
value: 55.4075
- type: nauc_mrr_at_20_std
value: 25.684600000000003
- type: nauc_mrr_at_20_diff1
value: 51.2544
- type: nauc_mrr_at_100_max
value: 55.4738
- type: nauc_mrr_at_100_std
value: 25.7963
- type: nauc_mrr_at_100_diff1
value: 51.4555
- type: nauc_mrr_at_1000_max
value: 55.4642
- type: nauc_mrr_at_1000_std
value: 25.7658
- type: nauc_mrr_at_1000_diff1
value: 51.4559
- type: main_score
value: 59.467000000000006
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-ara)
type: facebook/mlqa
config: spa-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 52.795
- type: ndcg_at_3
value: 64.507
- type: ndcg_at_5
value: 67.581
- type: ndcg_at_10
value: 70.32300000000001
- type: ndcg_at_20
value: 70.475
- type: ndcg_at_100
value: 72.195
- type: ndcg_at_1000
value: 72.286
- type: map_at_1
value: 52.795
- type: map_at_3
value: 61.49099999999999
- type: map_at_5
value: 63.199000000000005
- type: map_at_10
value: 64.29
- type: map_at_20
value: 64.328
- type: map_at_100
value: 64.564
- type: map_at_1000
value: 64.57000000000001
- type: recall_at_1
value: 52.795
- type: recall_at_3
value: 73.292
- type: recall_at_5
value: 80.745
- type: recall_at_10
value: 89.441
- type: recall_at_20
value: 90.062
- type: recall_at_100
value: 99.37899999999999
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 52.795
- type: precision_at_3
value: 24.431
- type: precision_at_5
value: 16.149
- type: precision_at_10
value: 8.944
- type: precision_at_20
value: 4.503
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 52.795
- type: mrr_at_3
value: 61.4907
- type: mrr_at_5
value: 63.1988
- type: mrr_at_10
value: 64.28970000000001
- type: mrr_at_20
value: 64.3285
- type: mrr_at_100
value: 64.5641
- type: mrr_at_1000
value: 64.5697
- type: nauc_ndcg_at_1_max
value: 53.888999999999996
- type: nauc_ndcg_at_1_std
value: 11.0525
- type: nauc_ndcg_at_1_diff1
value: 74.8286
- type: nauc_ndcg_at_3_max
value: 59.9321
- type: nauc_ndcg_at_3_std
value: 21.096899999999998
- type: nauc_ndcg_at_3_diff1
value: 69.4211
- type: nauc_ndcg_at_5_max
value: 61.1135
- type: nauc_ndcg_at_5_std
value: 21.885199999999998
- type: nauc_ndcg_at_5_diff1
value: 69.2178
- type: nauc_ndcg_at_10_max
value: 61.0899
- type: nauc_ndcg_at_10_std
value: 23.1179
- type: nauc_ndcg_at_10_diff1
value: 69.1936
- type: nauc_ndcg_at_20_max
value: 60.7846
- type: nauc_ndcg_at_20_std
value: 22.5977
- type: nauc_ndcg_at_20_diff1
value: 69.1149
- type: nauc_ndcg_at_100_max
value: 59.8011
- type: nauc_ndcg_at_100_std
value: 20.5927
- type: nauc_ndcg_at_100_diff1
value: 70.11319999999999
- type: nauc_ndcg_at_1000_max
value: 59.630799999999994
- type: nauc_ndcg_at_1000_std
value: 20.2562
- type: nauc_ndcg_at_1000_diff1
value: 70.357
- type: nauc_map_at_1_max
value: 53.888999999999996
- type: nauc_map_at_1_std
value: 11.0525
- type: nauc_map_at_1_diff1
value: 74.8286
- type: nauc_map_at_3_max
value: 58.2855
- type: nauc_map_at_3_std
value: 18.0442
- type: nauc_map_at_3_diff1
value: 70.7787
- type: nauc_map_at_5_max
value: 58.875299999999996
- type: nauc_map_at_5_std
value: 18.276999999999997
- type: nauc_map_at_5_diff1
value: 70.7961
- type: nauc_map_at_10_max
value: 58.7896
- type: nauc_map_at_10_std
value: 18.697
- type: nauc_map_at_10_diff1
value: 70.759
- type: nauc_map_at_20_max
value: 58.7205
- type: nauc_map_at_20_std
value: 18.5786
- type: nauc_map_at_20_diff1
value: 70.74380000000001
- type: nauc_map_at_100_max
value: 58.64319999999999
- type: nauc_map_at_100_std
value: 18.418799999999997
- type: nauc_map_at_100_diff1
value: 70.9314
- type: nauc_map_at_1000_max
value: 58.634699999999995
- type: nauc_map_at_1000_std
value: 18.401999999999997
- type: nauc_map_at_1000_diff1
value: 70.9434
- type: nauc_recall_at_1_max
value: 53.888999999999996
- type: nauc_recall_at_1_std
value: 11.0525
- type: nauc_recall_at_1_diff1
value: 74.8286
- type: nauc_recall_at_3_max
value: 65.92
- type: nauc_recall_at_3_std
value: 32.3637
- type: nauc_recall_at_3_diff1
value: 64.5457
- type: nauc_recall_at_5_max
value: 71.4171
- type: nauc_recall_at_5_std
value: 38.7281
- type: nauc_recall_at_5_diff1
value: 61.96430000000001
- type: nauc_recall_at_10_max
value: 78.67739999999999
- type: nauc_recall_at_10_std
value: 57.8693
- type: nauc_recall_at_10_diff1
value: 57.7189
- type: nauc_recall_at_20_max
value: 76.7024
- type: nauc_recall_at_20_std
value: 54.76370000000001
- type: nauc_recall_at_20_diff1
value: 56.3392
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 12.5808
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 53.888999999999996
- type: nauc_precision_at_1_std
value: 11.0525
- type: nauc_precision_at_1_diff1
value: 74.8286
- type: nauc_precision_at_3_max
value: 65.92
- type: nauc_precision_at_3_std
value: 32.3637
- type: nauc_precision_at_3_diff1
value: 64.5457
- type: nauc_precision_at_5_max
value: 71.4171
- type: nauc_precision_at_5_std
value: 38.7281
- type: nauc_precision_at_5_diff1
value: 61.96430000000001
- type: nauc_precision_at_10_max
value: 78.67739999999999
- type: nauc_precision_at_10_std
value: 57.8693
- type: nauc_precision_at_10_diff1
value: 57.7189
- type: nauc_precision_at_20_max
value: 76.7024
- type: nauc_precision_at_20_std
value: 54.76370000000001
- type: nauc_precision_at_20_diff1
value: 56.3392
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 12.5808
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 53.888999999999996
- type: nauc_mrr_at_1_std
value: 11.0525
- type: nauc_mrr_at_1_diff1
value: 74.8286
- type: nauc_mrr_at_3_max
value: 58.2855
- type: nauc_mrr_at_3_std
value: 18.0442
- type: nauc_mrr_at_3_diff1
value: 70.7787
- type: nauc_mrr_at_5_max
value: 58.875299999999996
- type: nauc_mrr_at_5_std
value: 18.276999999999997
- type: nauc_mrr_at_5_diff1
value: 70.7961
- type: nauc_mrr_at_10_max
value: 58.7896
- type: nauc_mrr_at_10_std
value: 18.697
- type: nauc_mrr_at_10_diff1
value: 70.759
- type: nauc_mrr_at_20_max
value: 58.7205
- type: nauc_mrr_at_20_std
value: 18.5786
- type: nauc_mrr_at_20_diff1
value: 70.74380000000001
- type: nauc_mrr_at_100_max
value: 58.64319999999999
- type: nauc_mrr_at_100_std
value: 18.418799999999997
- type: nauc_mrr_at_100_diff1
value: 70.9314
- type: nauc_mrr_at_1000_max
value: 58.634699999999995
- type: nauc_mrr_at_1000_std
value: 18.401999999999997
- type: nauc_mrr_at_1000_diff1
value: 70.9434
- type: main_score
value: 70.32300000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (hin-ara)
type: facebook/mlqa
config: hin-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 52.151
- type: ndcg_at_3
value: 63.644999999999996
- type: ndcg_at_5
value: 66.561
- type: ndcg_at_10
value: 69.059
- type: ndcg_at_20
value: 69.985
- type: ndcg_at_100
value: 71.643
- type: ndcg_at_1000
value: 71.801
- type: map_at_1
value: 52.151
- type: map_at_3
value: 60.753
- type: map_at_5
value: 62.392
- type: map_at_10
value: 63.461
- type: map_at_20
value: 63.702000000000005
- type: map_at_100
value: 63.954
- type: map_at_1000
value: 63.963
- type: recall_at_1
value: 52.151
- type: recall_at_3
value: 72.043
- type: recall_at_5
value: 79.032
- type: recall_at_10
value: 86.559
- type: recall_at_20
value: 90.323
- type: recall_at_100
value: 98.925
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 52.151
- type: precision_at_3
value: 24.014
- type: precision_at_5
value: 15.806000000000001
- type: precision_at_10
value: 8.656
- type: precision_at_20
value: 4.516
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 52.1505
- type: mrr_at_3
value: 60.752700000000004
- type: mrr_at_5
value: 62.3925
- type: mrr_at_10
value: 63.4607
- type: mrr_at_20
value: 63.702000000000005
- type: mrr_at_100
value: 63.953700000000005
- type: mrr_at_1000
value: 63.96340000000001
- type: nauc_ndcg_at_1_max
value: 49.414
- type: nauc_ndcg_at_1_std
value: 26.262400000000003
- type: nauc_ndcg_at_1_diff1
value: 54.0133
- type: nauc_ndcg_at_3_max
value: 54.1356
- type: nauc_ndcg_at_3_std
value: 30.669
- type: nauc_ndcg_at_3_diff1
value: 46.9126
- type: nauc_ndcg_at_5_max
value: 54.16570000000001
- type: nauc_ndcg_at_5_std
value: 31.907799999999998
- type: nauc_ndcg_at_5_diff1
value: 47.6523
- type: nauc_ndcg_at_10_max
value: 50.79
- type: nauc_ndcg_at_10_std
value: 28.937800000000003
- type: nauc_ndcg_at_10_diff1
value: 45.2259
- type: nauc_ndcg_at_20_max
value: 50.504400000000004
- type: nauc_ndcg_at_20_std
value: 29.454399999999996
- type: nauc_ndcg_at_20_diff1
value: 44.7774
- type: nauc_ndcg_at_100_max
value: 51.535799999999995
- type: nauc_ndcg_at_100_std
value: 29.2429
- type: nauc_ndcg_at_100_diff1
value: 47.5625
- type: nauc_ndcg_at_1000_max
value: 51.232299999999995
- type: nauc_ndcg_at_1000_std
value: 28.7314
- type: nauc_ndcg_at_1000_diff1
value: 47.7654
- type: nauc_map_at_1_max
value: 49.414
- type: nauc_map_at_1_std
value: 26.262400000000003
- type: nauc_map_at_1_diff1
value: 54.0133
- type: nauc_map_at_3_max
value: 52.367
- type: nauc_map_at_3_std
value: 28.741600000000002
- type: nauc_map_at_3_diff1
value: 48.7321
- type: nauc_map_at_5_max
value: 52.28660000000001
- type: nauc_map_at_5_std
value: 29.252899999999997
- type: nauc_map_at_5_diff1
value: 49.200300000000006
- type: nauc_map_at_10_max
value: 50.9833
- type: nauc_map_at_10_std
value: 28.0707
- type: nauc_map_at_10_diff1
value: 48.3651
- type: nauc_map_at_20_max
value: 50.9108
- type: nauc_map_at_20_std
value: 28.174300000000002
- type: nauc_map_at_20_diff1
value: 48.2832
- type: nauc_map_at_100_max
value: 51.0532
- type: nauc_map_at_100_std
value: 28.143099999999997
- type: nauc_map_at_100_diff1
value: 48.7424
- type: nauc_map_at_1000_max
value: 51.0382
- type: nauc_map_at_1000_std
value: 28.117900000000002
- type: nauc_map_at_1000_diff1
value: 48.752
- type: nauc_recall_at_1_max
value: 49.414
- type: nauc_recall_at_1_std
value: 26.262400000000003
- type: nauc_recall_at_1_diff1
value: 54.0133
- type: nauc_recall_at_3_max
value: 60.6724
- type: nauc_recall_at_3_std
value: 37.8962
- type: nauc_recall_at_3_diff1
value: 40.5005
- type: nauc_recall_at_5_max
value: 62.6191
- type: nauc_recall_at_5_std
value: 44.1519
- type: nauc_recall_at_5_diff1
value: 41.1881
- type: nauc_recall_at_10_max
value: 47.4454
- type: nauc_recall_at_10_std
value: 33.1899
- type: nauc_recall_at_10_diff1
value: 24.0447
- type: nauc_recall_at_20_max
value: 43.7071
- type: nauc_recall_at_20_std
value: 39.8658
- type: nauc_recall_at_20_diff1
value: 12.4499
- type: nauc_recall_at_100_max
value: 93.44839999999999
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 19.0591
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 49.414
- type: nauc_precision_at_1_std
value: 26.262400000000003
- type: nauc_precision_at_1_diff1
value: 54.0133
- type: nauc_precision_at_3_max
value: 60.6724
- type: nauc_precision_at_3_std
value: 37.8962
- type: nauc_precision_at_3_diff1
value: 40.5005
- type: nauc_precision_at_5_max
value: 62.6191
- type: nauc_precision_at_5_std
value: 44.1519
- type: nauc_precision_at_5_diff1
value: 41.1881
- type: nauc_precision_at_10_max
value: 47.4454
- type: nauc_precision_at_10_std
value: 33.1899
- type: nauc_precision_at_10_diff1
value: 24.0447
- type: nauc_precision_at_20_max
value: 43.7071
- type: nauc_precision_at_20_std
value: 39.8658
- type: nauc_precision_at_20_diff1
value: 12.4499
- type: nauc_precision_at_100_max
value: 93.44839999999999
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 19.0591
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 49.414
- type: nauc_mrr_at_1_std
value: 26.262400000000003
- type: nauc_mrr_at_1_diff1
value: 54.0133
- type: nauc_mrr_at_3_max
value: 52.367
- type: nauc_mrr_at_3_std
value: 28.741600000000002
- type: nauc_mrr_at_3_diff1
value: 48.7321
- type: nauc_mrr_at_5_max
value: 52.28660000000001
- type: nauc_mrr_at_5_std
value: 29.252899999999997
- type: nauc_mrr_at_5_diff1
value: 49.200300000000006
- type: nauc_mrr_at_10_max
value: 50.9833
- type: nauc_mrr_at_10_std
value: 28.0707
- type: nauc_mrr_at_10_diff1
value: 48.3651
- type: nauc_mrr_at_20_max
value: 50.9108
- type: nauc_mrr_at_20_std
value: 28.174300000000002
- type: nauc_mrr_at_20_diff1
value: 48.2832
- type: nauc_mrr_at_100_max
value: 51.0532
- type: nauc_mrr_at_100_std
value: 28.143099999999997
- type: nauc_mrr_at_100_diff1
value: 48.7424
- type: nauc_mrr_at_1000_max
value: 51.0382
- type: nauc_mrr_at_1000_std
value: 28.117900000000002
- type: nauc_mrr_at_1000_diff1
value: 48.752
- type: main_score
value: 69.059
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (vie-ara)
type: facebook/mlqa
config: vie-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 51.534
- type: ndcg_at_3
value: 61.24699999999999
- type: ndcg_at_5
value: 63.28
- type: ndcg_at_10
value: 65.712
- type: ndcg_at_20
value: 67.104
- type: ndcg_at_100
value: 69.376
- type: ndcg_at_1000
value: 69.553
- type: map_at_1
value: 51.534
- type: map_at_3
value: 58.691
- type: map_at_5
value: 59.826
- type: map_at_10
value: 60.86
- type: map_at_20
value: 61.24000000000001
- type: map_at_100
value: 61.546
- type: map_at_1000
value: 61.556
- type: recall_at_1
value: 51.534
- type: recall_at_3
value: 68.71199999999999
- type: recall_at_5
value: 73.61999999999999
- type: recall_at_10
value: 80.982
- type: recall_at_20
value: 86.503
- type: recall_at_100
value: 98.773
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 51.534
- type: precision_at_3
value: 22.904
- type: precision_at_5
value: 14.724
- type: precision_at_10
value: 8.097999999999999
- type: precision_at_20
value: 4.324999999999999
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 51.5337
- type: mrr_at_3
value: 58.6912
- type: mrr_at_5
value: 59.82619999999999
- type: mrr_at_10
value: 60.8596
- type: mrr_at_20
value: 61.2401
- type: mrr_at_100
value: 61.546299999999995
- type: mrr_at_1000
value: 61.5563
- type: nauc_ndcg_at_1_max
value: 61.617200000000004
- type: nauc_ndcg_at_1_std
value: 31.049599999999998
- type: nauc_ndcg_at_1_diff1
value: 63.227500000000006
- type: nauc_ndcg_at_3_max
value: 59.7893
- type: nauc_ndcg_at_3_std
value: 32.8623
- type: nauc_ndcg_at_3_diff1
value: 59.6656
- type: nauc_ndcg_at_5_max
value: 60.5831
- type: nauc_ndcg_at_5_std
value: 32.596599999999995
- type: nauc_ndcg_at_5_diff1
value: 59.4883
- type: nauc_ndcg_at_10_max
value: 62.497400000000006
- type: nauc_ndcg_at_10_std
value: 34.550599999999996
- type: nauc_ndcg_at_10_diff1
value: 59.155899999999995
- type: nauc_ndcg_at_20_max
value: 62.740899999999996
- type: nauc_ndcg_at_20_std
value: 36.7174
- type: nauc_ndcg_at_20_diff1
value: 58.0935
- type: nauc_ndcg_at_100_max
value: 61.864399999999996
- type: nauc_ndcg_at_100_std
value: 34.528
- type: nauc_ndcg_at_100_diff1
value: 59.4356
- type: nauc_ndcg_at_1000_max
value: 61.7297
- type: nauc_ndcg_at_1000_std
value: 34.083200000000005
- type: nauc_ndcg_at_1000_diff1
value: 59.516999999999996
- type: nauc_map_at_1_max
value: 61.617200000000004
- type: nauc_map_at_1_std
value: 31.049599999999998
- type: nauc_map_at_1_diff1
value: 63.227500000000006
- type: nauc_map_at_3_max
value: 60.293699999999994
- type: nauc_map_at_3_std
value: 32.2575
- type: nauc_map_at_3_diff1
value: 60.5793
- type: nauc_map_at_5_max
value: 60.801899999999996
- type: nauc_map_at_5_std
value: 32.2098
- type: nauc_map_at_5_diff1
value: 60.5253
- type: nauc_map_at_10_max
value: 61.565599999999996
- type: nauc_map_at_10_std
value: 32.8874
- type: nauc_map_at_10_diff1
value: 60.4275
- type: nauc_map_at_20_max
value: 61.602199999999996
- type: nauc_map_at_20_std
value: 33.4131
- type: nauc_map_at_20_diff1
value: 60.1488
- type: nauc_map_at_100_max
value: 61.4753
- type: nauc_map_at_100_std
value: 33.1531
- type: nauc_map_at_100_diff1
value: 60.2734
- type: nauc_map_at_1000_max
value: 61.4688
- type: nauc_map_at_1000_std
value: 33.1323
- type: nauc_map_at_1000_diff1
value: 60.278600000000004
- type: nauc_recall_at_1_max
value: 61.617200000000004
- type: nauc_recall_at_1_std
value: 31.049599999999998
- type: nauc_recall_at_1_diff1
value: 63.227500000000006
- type: nauc_recall_at_3_max
value: 58.0671
- type: nauc_recall_at_3_std
value: 34.976600000000005
- type: nauc_recall_at_3_diff1
value: 56.5781
- type: nauc_recall_at_5_max
value: 59.7593
- type: nauc_recall_at_5_std
value: 33.9046
- type: nauc_recall_at_5_diff1
value: 55.5195
- type: nauc_recall_at_10_max
value: 68.0843
- type: nauc_recall_at_10_std
value: 43.8292
- type: nauc_recall_at_10_diff1
value: 52.74100000000001
- type: nauc_recall_at_20_max
value: 72.26
- type: nauc_recall_at_20_std
value: 63.8486
- type: nauc_recall_at_20_diff1
value: 42.700700000000005
- type: nauc_recall_at_100_max
value: 79.5792
- type: nauc_recall_at_100_std
value: 93.4774
- type: nauc_recall_at_100_diff1
value: 49.547200000000004
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 61.617200000000004
- type: nauc_precision_at_1_std
value: 31.049599999999998
- type: nauc_precision_at_1_diff1
value: 63.227500000000006
- type: nauc_precision_at_3_max
value: 58.0671
- type: nauc_precision_at_3_std
value: 34.976600000000005
- type: nauc_precision_at_3_diff1
value: 56.5781
- type: nauc_precision_at_5_max
value: 59.7593
- type: nauc_precision_at_5_std
value: 33.9046
- type: nauc_precision_at_5_diff1
value: 55.5195
- type: nauc_precision_at_10_max
value: 68.0843
- type: nauc_precision_at_10_std
value: 43.8292
- type: nauc_precision_at_10_diff1
value: 52.74100000000001
- type: nauc_precision_at_20_max
value: 72.26
- type: nauc_precision_at_20_std
value: 63.8486
- type: nauc_precision_at_20_diff1
value: 42.700700000000005
- type: nauc_precision_at_100_max
value: 79.5792
- type: nauc_precision_at_100_std
value: 93.4774
- type: nauc_precision_at_100_diff1
value: 49.547200000000004
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 61.617200000000004
- type: nauc_mrr_at_1_std
value: 31.049599999999998
- type: nauc_mrr_at_1_diff1
value: 63.227500000000006
- type: nauc_mrr_at_3_max
value: 60.293699999999994
- type: nauc_mrr_at_3_std
value: 32.2575
- type: nauc_mrr_at_3_diff1
value: 60.5793
- type: nauc_mrr_at_5_max
value: 60.801899999999996
- type: nauc_mrr_at_5_std
value: 32.2098
- type: nauc_mrr_at_5_diff1
value: 60.5253
- type: nauc_mrr_at_10_max
value: 61.565599999999996
- type: nauc_mrr_at_10_std
value: 32.8874
- type: nauc_mrr_at_10_diff1
value: 60.4275
- type: nauc_mrr_at_20_max
value: 61.602199999999996
- type: nauc_mrr_at_20_std
value: 33.4131
- type: nauc_mrr_at_20_diff1
value: 60.1488
- type: nauc_mrr_at_100_max
value: 61.4753
- type: nauc_mrr_at_100_std
value: 33.1531
- type: nauc_mrr_at_100_diff1
value: 60.2734
- type: nauc_mrr_at_1000_max
value: 61.4688
- type: nauc_mrr_at_1000_std
value: 33.1323
- type: nauc_mrr_at_1000_diff1
value: 60.278600000000004
- type: main_score
value: 65.712
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (zho-ara)
type: facebook/mlqa
config: zho-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 49.468
- type: ndcg_at_3
value: 61.385
- type: ndcg_at_5
value: 63.858000000000004
- type: ndcg_at_10
value: 65.85499999999999
- type: ndcg_at_20
value: 68.014
- type: ndcg_at_100
value: 69.71300000000001
- type: ndcg_at_1000
value: 69.788
- type: map_at_1
value: 49.468
- type: map_at_3
value: 58.511
- type: map_at_5
value: 59.919999999999995
- type: map_at_10
value: 60.702999999999996
- type: map_at_20
value: 61.3
- type: map_at_100
value: 61.541000000000004
- type: map_at_1000
value: 61.545
- type: recall_at_1
value: 49.468
- type: recall_at_3
value: 69.681
- type: recall_at_5
value: 75.532
- type: recall_at_10
value: 81.915
- type: recall_at_20
value: 90.426
- type: recall_at_100
value: 99.468
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 49.468
- type: precision_at_3
value: 23.227
- type: precision_at_5
value: 15.106
- type: precision_at_10
value: 8.190999999999999
- type: precision_at_20
value: 4.521
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 49.4681
- type: mrr_at_3
value: 58.510600000000004
- type: mrr_at_5
value: 59.9202
- type: mrr_at_10
value: 60.703300000000006
- type: mrr_at_20
value: 61.30029999999999
- type: mrr_at_100
value: 61.54110000000001
- type: mrr_at_1000
value: 61.5451
- type: nauc_ndcg_at_1_max
value: 54.7345
- type: nauc_ndcg_at_1_std
value: 11.2512
- type: nauc_ndcg_at_1_diff1
value: 70.6991
- type: nauc_ndcg_at_3_max
value: 57.2006
- type: nauc_ndcg_at_3_std
value: 17.3244
- type: nauc_ndcg_at_3_diff1
value: 59.90220000000001
- type: nauc_ndcg_at_5_max
value: 58.880900000000004
- type: nauc_ndcg_at_5_std
value: 18.7365
- type: nauc_ndcg_at_5_diff1
value: 60.3304
- type: nauc_ndcg_at_10_max
value: 58.3229
- type: nauc_ndcg_at_10_std
value: 19.6983
- type: nauc_ndcg_at_10_diff1
value: 59.8994
- type: nauc_ndcg_at_20_max
value: 57.5958
- type: nauc_ndcg_at_20_std
value: 16.8184
- type: nauc_ndcg_at_20_diff1
value: 60.4564
- type: nauc_ndcg_at_100_max
value: 57.407300000000006
- type: nauc_ndcg_at_100_std
value: 17.0753
- type: nauc_ndcg_at_100_diff1
value: 62.3023
- type: nauc_ndcg_at_1000_max
value: 57.2677
- type: nauc_ndcg_at_1000_std
value: 16.8035
- type: nauc_ndcg_at_1000_diff1
value: 62.3891
- type: nauc_map_at_1_max
value: 54.7345
- type: nauc_map_at_1_std
value: 11.2512
- type: nauc_map_at_1_diff1
value: 70.6991
- type: nauc_map_at_3_max
value: 56.36409999999999
- type: nauc_map_at_3_std
value: 15.7645
- type: nauc_map_at_3_diff1
value: 62.83109999999999
- type: nauc_map_at_5_max
value: 57.2165
- type: nauc_map_at_5_std
value: 16.4827
- type: nauc_map_at_5_diff1
value: 63.129900000000006
- type: nauc_map_at_10_max
value: 56.964099999999995
- type: nauc_map_at_10_std
value: 16.713900000000002
- type: nauc_map_at_10_diff1
value: 63.033300000000004
- type: nauc_map_at_20_max
value: 56.8291
- type: nauc_map_at_20_std
value: 16.0261
- type: nauc_map_at_20_diff1
value: 63.2795
- type: nauc_map_at_100_max
value: 56.7943
- type: nauc_map_at_100_std
value: 16.0463
- type: nauc_map_at_100_diff1
value: 63.5264
- type: nauc_map_at_1000_max
value: 56.7884
- type: nauc_map_at_1000_std
value: 16.034699999999997
- type: nauc_map_at_1000_diff1
value: 63.5303
- type: nauc_recall_at_1_max
value: 54.7345
- type: nauc_recall_at_1_std
value: 11.2512
- type: nauc_recall_at_1_diff1
value: 70.6991
- type: nauc_recall_at_3_max
value: 60.1676
- type: nauc_recall_at_3_std
value: 22.659499999999998
- type: nauc_recall_at_3_diff1
value: 49.8032
- type: nauc_recall_at_5_max
value: 65.889
- type: nauc_recall_at_5_std
value: 27.8308
- type: nauc_recall_at_5_diff1
value: 49.3429
- type: nauc_recall_at_10_max
value: 65.3261
- type: nauc_recall_at_10_std
value: 35.828700000000005
- type: nauc_recall_at_10_diff1
value: 44.0245
- type: nauc_recall_at_20_max
value: 62.0154
- type: nauc_recall_at_20_std
value: 18.0916
- type: nauc_recall_at_20_diff1
value: 35.9279
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 35.8386
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 54.7345
- type: nauc_precision_at_1_std
value: 11.2512
- type: nauc_precision_at_1_diff1
value: 70.6991
- type: nauc_precision_at_3_max
value: 60.1676
- type: nauc_precision_at_3_std
value: 22.659499999999998
- type: nauc_precision_at_3_diff1
value: 49.8032
- type: nauc_precision_at_5_max
value: 65.889
- type: nauc_precision_at_5_std
value: 27.8308
- type: nauc_precision_at_5_diff1
value: 49.3429
- type: nauc_precision_at_10_max
value: 65.3261
- type: nauc_precision_at_10_std
value: 35.828700000000005
- type: nauc_precision_at_10_diff1
value: 44.0245
- type: nauc_precision_at_20_max
value: 62.0154
- type: nauc_precision_at_20_std
value: 18.0916
- type: nauc_precision_at_20_diff1
value: 35.9279
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 35.8386
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 54.7345
- type: nauc_mrr_at_1_std
value: 11.2512
- type: nauc_mrr_at_1_diff1
value: 70.6991
- type: nauc_mrr_at_3_max
value: 56.36409999999999
- type: nauc_mrr_at_3_std
value: 15.7645
- type: nauc_mrr_at_3_diff1
value: 62.83109999999999
- type: nauc_mrr_at_5_max
value: 57.2165
- type: nauc_mrr_at_5_std
value: 16.4827
- type: nauc_mrr_at_5_diff1
value: 63.129900000000006
- type: nauc_mrr_at_10_max
value: 56.964099999999995
- type: nauc_mrr_at_10_std
value: 16.713900000000002
- type: nauc_mrr_at_10_diff1
value: 63.033300000000004
- type: nauc_mrr_at_20_max
value: 56.8291
- type: nauc_mrr_at_20_std
value: 16.0261
- type: nauc_mrr_at_20_diff1
value: 63.2795
- type: nauc_mrr_at_100_max
value: 56.7943
- type: nauc_mrr_at_100_std
value: 16.0463
- type: nauc_mrr_at_100_diff1
value: 63.5264
- type: nauc_mrr_at_1000_max
value: 56.7884
- type: nauc_mrr_at_1000_std
value: 16.034699999999997
- type: nauc_mrr_at_1000_diff1
value: 63.5303
- type: main_score
value: 65.85499999999999
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-ara)
type: facebook/mlqa
config: ara-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 31.258000000000003
- type: ndcg_at_3
value: 38.134
- type: ndcg_at_5
value: 40.389
- type: ndcg_at_10
value: 42.781000000000006
- type: ndcg_at_20
value: 44.545
- type: ndcg_at_100
value: 47.325
- type: ndcg_at_1000
value: 49.282
- type: map_at_1
value: 31.249
- type: map_at_3
value: 36.424
- type: map_at_5
value: 37.671
- type: map_at_10
value: 38.663
- type: map_at_20
value: 39.152
- type: map_at_100
value: 39.521
- type: map_at_1000
value: 39.585
- type: recall_at_1
value: 31.249
- type: recall_at_3
value: 43.081
- type: recall_at_5
value: 48.575
- type: recall_at_10
value: 55.944
- type: recall_at_20
value: 62.882000000000005
- type: recall_at_100
value: 78.089
- type: recall_at_1000
value: 93.971
- type: precision_at_1
value: 31.258000000000003
- type: precision_at_3
value: 14.363000000000001
- type: precision_at_5
value: 9.717
- type: precision_at_10
value: 5.595
- type: precision_at_20
value: 3.145
- type: precision_at_100
value: 0.781
- type: precision_at_1000
value: 0.094
- type: mrr_at_1
value: 31.258200000000002
- type: mrr_at_3
value: 36.4335
- type: mrr_at_5
value: 37.6805
- type: mrr_at_10
value: 38.672200000000004
- type: mrr_at_20
value: 39.1614
- type: mrr_at_100
value: 39.5298
- type: mrr_at_1000
value: 39.5948
- type: nauc_ndcg_at_1_max
value: 50.8135
- type: nauc_ndcg_at_1_std
value: 9.5316
- type: nauc_ndcg_at_1_diff1
value: 56.077799999999996
- type: nauc_ndcg_at_3_max
value: 51.4486
- type: nauc_ndcg_at_3_std
value: 11.4698
- type: nauc_ndcg_at_3_diff1
value: 50.6076
- type: nauc_ndcg_at_5_max
value: 51.0535
- type: nauc_ndcg_at_5_std
value: 12.133
- type: nauc_ndcg_at_5_diff1
value: 49.0051
- type: nauc_ndcg_at_10_max
value: 51.324999999999996
- type: nauc_ndcg_at_10_std
value: 13.861299999999998
- type: nauc_ndcg_at_10_diff1
value: 48.4724
- type: nauc_ndcg_at_20_max
value: 51.07390000000001
- type: nauc_ndcg_at_20_std
value: 14.4511
- type: nauc_ndcg_at_20_diff1
value: 47.870200000000004
- type: nauc_ndcg_at_100_max
value: 51.4803
- type: nauc_ndcg_at_100_std
value: 15.289900000000001
- type: nauc_ndcg_at_100_diff1
value: 48.0109
- type: nauc_ndcg_at_1000_max
value: 51.4174
- type: nauc_ndcg_at_1000_std
value: 14.527399999999998
- type: nauc_ndcg_at_1000_diff1
value: 48.6374
- type: nauc_map_at_1_max
value: 50.768899999999995
- type: nauc_map_at_1_std
value: 9.501
- type: nauc_map_at_1_diff1
value: 56.049400000000006
- type: nauc_map_at_3_max
value: 51.27460000000001
- type: nauc_map_at_3_std
value: 10.922
- type: nauc_map_at_3_diff1
value: 51.8738
- type: nauc_map_at_5_max
value: 51.0655
- type: nauc_map_at_5_std
value: 11.282
- type: nauc_map_at_5_diff1
value: 51.0045
- type: nauc_map_at_10_max
value: 51.158899999999996
- type: nauc_map_at_10_std
value: 11.956
- type: nauc_map_at_10_diff1
value: 50.787099999999995
- type: nauc_map_at_20_max
value: 51.081500000000005
- type: nauc_map_at_20_std
value: 12.0977
- type: nauc_map_at_20_diff1
value: 50.6269
- type: nauc_map_at_100_max
value: 51.1262
- type: nauc_map_at_100_std
value: 12.1966
- type: nauc_map_at_100_diff1
value: 50.6523
- type: nauc_map_at_1000_max
value: 51.1258
- type: nauc_map_at_1000_std
value: 12.1769
- type: nauc_map_at_1000_diff1
value: 50.67230000000001
- type: nauc_recall_at_1_max
value: 50.768899999999995
- type: nauc_recall_at_1_std
value: 9.501
- type: nauc_recall_at_1_diff1
value: 56.049400000000006
- type: nauc_recall_at_3_max
value: 51.9034
- type: nauc_recall_at_3_std
value: 13.0311
- type: nauc_recall_at_3_diff1
value: 46.9878
- type: nauc_recall_at_5_max
value: 50.907500000000006
- type: nauc_recall_at_5_std
value: 14.695
- type: nauc_recall_at_5_diff1
value: 42.965900000000005
- type: nauc_recall_at_10_max
value: 51.871500000000005
- type: nauc_recall_at_10_std
value: 20.6095
- type: nauc_recall_at_10_diff1
value: 40.908899999999996
- type: nauc_recall_at_20_max
value: 50.8848
- type: nauc_recall_at_20_std
value: 23.9653
- type: nauc_recall_at_20_diff1
value: 37.5667
- type: nauc_recall_at_100_max
value: 54.52
- type: nauc_recall_at_100_std
value: 35.6453
- type: nauc_recall_at_100_diff1
value: 34.0519
- type: nauc_recall_at_1000_max
value: 58.397
- type: nauc_recall_at_1000_std
value: 49.6012
- type: nauc_recall_at_1000_diff1
value: 27.825699999999998
- type: nauc_precision_at_1_max
value: 50.8135
- type: nauc_precision_at_1_std
value: 9.5316
- type: nauc_precision_at_1_diff1
value: 56.077799999999996
- type: nauc_precision_at_3_max
value: 51.9505
- type: nauc_precision_at_3_std
value: 13.0616
- type: nauc_precision_at_3_diff1
value: 47.0194
- type: nauc_precision_at_5_max
value: 50.9555
- type: nauc_precision_at_5_std
value: 14.7261
- type: nauc_precision_at_5_diff1
value: 42.998
- type: nauc_precision_at_10_max
value: 51.926399999999994
- type: nauc_precision_at_10_std
value: 20.644399999999997
- type: nauc_precision_at_10_diff1
value: 40.9459
- type: nauc_precision_at_20_max
value: 50.9483
- type: nauc_precision_at_20_std
value: 24.0057
- type: nauc_precision_at_20_diff1
value: 37.6094
- type: nauc_precision_at_100_max
value: 54.5785
- type: nauc_precision_at_100_std
value: 35.7331
- type: nauc_precision_at_100_diff1
value: 34.098800000000004
- type: nauc_precision_at_1000_max
value: 58.599900000000005
- type: nauc_precision_at_1000_std
value: 49.8547
- type: nauc_precision_at_1000_diff1
value: 28.0201
- type: nauc_mrr_at_1_max
value: 50.8135
- type: nauc_mrr_at_1_std
value: 9.5316
- type: nauc_mrr_at_1_diff1
value: 56.077799999999996
- type: nauc_mrr_at_3_max
value: 51.3185
- type: nauc_mrr_at_3_std
value: 10.952
- type: nauc_mrr_at_3_diff1
value: 51.902
- type: nauc_mrr_at_5_max
value: 51.1095
- type: nauc_mrr_at_5_std
value: 11.3122
- type: nauc_mrr_at_5_diff1
value: 51.0328
- type: nauc_mrr_at_10_max
value: 51.2033
- type: nauc_mrr_at_10_std
value: 11.9863
- type: nauc_mrr_at_10_diff1
value: 50.8157
- type: nauc_mrr_at_20_max
value: 51.1262
- type: nauc_mrr_at_20_std
value: 12.1282
- type: nauc_mrr_at_20_diff1
value: 50.6557
- type: nauc_mrr_at_100_max
value: 51.169799999999995
- type: nauc_mrr_at_100_std
value: 12.2269
- type: nauc_mrr_at_100_diff1
value: 50.6806
- type: nauc_mrr_at_1000_max
value: 51.1695
- type: nauc_mrr_at_1000_std
value: 12.2072
- type: nauc_mrr_at_1000_diff1
value: 50.700599999999994
- type: main_score
value: 42.781000000000006
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-deu)
type: facebook/mlqa
config: ara-deu
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 38.774
- type: ndcg_at_3
value: 47.213
- type: ndcg_at_5
value: 50.19
- type: ndcg_at_10
value: 52.71
- type: ndcg_at_20
value: 54.429
- type: ndcg_at_100
value: 56.69
- type: ndcg_at_1000
value: 58.214
- type: map_at_1
value: 38.774
- type: map_at_3
value: 45.161
- type: map_at_5
value: 46.814
- type: map_at_10
value: 47.848
- type: map_at_20
value: 48.32
- type: map_at_100
value: 48.620999999999995
- type: map_at_1000
value: 48.678
- type: recall_at_1
value: 38.774
- type: recall_at_3
value: 53.125
- type: recall_at_5
value: 60.346
- type: recall_at_10
value: 68.174
- type: recall_at_20
value: 74.97
- type: recall_at_100
value: 87.318
- type: recall_at_1000
value: 99.333
- type: precision_at_1
value: 38.774
- type: precision_at_3
value: 17.718
- type: precision_at_5
value: 12.075
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_20
value: 3.75
- type: precision_at_100
value: 0.874
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 38.774300000000004
- type: mrr_at_3
value: 45.176
- type: mrr_at_5
value: 46.8295
- type: mrr_at_10
value: 47.8628
- type: mrr_at_20
value: 48.3352
- type: mrr_at_100
value: 48.6348
- type: mrr_at_1000
value: 48.692099999999996
- type: nauc_ndcg_at_1_max
value: 53.3984
- type: nauc_ndcg_at_1_std
value: 11.4226
- type: nauc_ndcg_at_1_diff1
value: 63.624
- type: nauc_ndcg_at_3_max
value: 53.212199999999996
- type: nauc_ndcg_at_3_std
value: 12.8275
- type: nauc_ndcg_at_3_diff1
value: 56.9653
- type: nauc_ndcg_at_5_max
value: 52.9301
- type: nauc_ndcg_at_5_std
value: 13.019900000000002
- type: nauc_ndcg_at_5_diff1
value: 56.2881
- type: nauc_ndcg_at_10_max
value: 53.21
- type: nauc_ndcg_at_10_std
value: 14.477899999999998
- type: nauc_ndcg_at_10_diff1
value: 55.312
- type: nauc_ndcg_at_20_max
value: 53.5602
- type: nauc_ndcg_at_20_std
value: 15.2451
- type: nauc_ndcg_at_20_diff1
value: 55.5818
- type: nauc_ndcg_at_100_max
value: 53.466499999999996
- type: nauc_ndcg_at_100_std
value: 15.035799999999998
- type: nauc_ndcg_at_100_diff1
value: 56.2241
- type: nauc_ndcg_at_1000_max
value: 53.4527
- type: nauc_ndcg_at_1000_std
value: 14.2771
- type: nauc_ndcg_at_1000_diff1
value: 56.8137
- type: nauc_map_at_1_max
value: 53.3984
- type: nauc_map_at_1_std
value: 11.4226
- type: nauc_map_at_1_diff1
value: 63.624
- type: nauc_map_at_3_max
value: 53.3564
- type: nauc_map_at_3_std
value: 12.5543
- type: nauc_map_at_3_diff1
value: 58.557199999999995
- type: nauc_map_at_5_max
value: 53.2292
- type: nauc_map_at_5_std
value: 12.6335
- type: nauc_map_at_5_diff1
value: 58.2353
- type: nauc_map_at_10_max
value: 53.36450000000001
- type: nauc_map_at_10_std
value: 13.2102
- type: nauc_map_at_10_diff1
value: 57.89450000000001
- type: nauc_map_at_20_max
value: 53.438900000000004
- type: nauc_map_at_20_std
value: 13.374600000000001
- type: nauc_map_at_20_diff1
value: 57.9695
- type: nauc_map_at_100_max
value: 53.411699999999996
- type: nauc_map_at_100_std
value: 13.3329
- type: nauc_map_at_100_diff1
value: 58.04899999999999
- type: nauc_map_at_1000_max
value: 53.4104
- type: nauc_map_at_1000_std
value: 13.313600000000001
- type: nauc_map_at_1000_diff1
value: 58.0651
- type: nauc_recall_at_1_max
value: 53.3984
- type: nauc_recall_at_1_std
value: 11.4226
- type: nauc_recall_at_1_diff1
value: 63.624
- type: nauc_recall_at_3_max
value: 52.747299999999996
- type: nauc_recall_at_3_std
value: 13.602900000000002
- type: nauc_recall_at_3_diff1
value: 52.2385
- type: nauc_recall_at_5_max
value: 51.8513
- type: nauc_recall_at_5_std
value: 14.263300000000001
- type: nauc_recall_at_5_diff1
value: 49.971700000000006
- type: nauc_recall_at_10_max
value: 52.5828
- type: nauc_recall_at_10_std
value: 19.8161
- type: nauc_recall_at_10_diff1
value: 45.2543
- type: nauc_recall_at_20_max
value: 54.559400000000004
- type: nauc_recall_at_20_std
value: 25.3807
- type: nauc_recall_at_20_diff1
value: 44.8606
- type: nauc_recall_at_100_max
value: 54.732400000000005
- type: nauc_recall_at_100_std
value: 30.830000000000002
- type: nauc_recall_at_100_diff1
value: 45.0631
- type: nauc_recall_at_1000_max
value: 75.4921
- type: nauc_recall_at_1000_std
value: 35.5406
- type: nauc_recall_at_1000_diff1
value: 57.560900000000004
- type: nauc_precision_at_1_max
value: 53.3984
- type: nauc_precision_at_1_std
value: 11.4226
- type: nauc_precision_at_1_diff1
value: 63.624
- type: nauc_precision_at_3_max
value: 52.7321
- type: nauc_precision_at_3_std
value: 13.622600000000002
- type: nauc_precision_at_3_diff1
value: 52.2056
- type: nauc_precision_at_5_max
value: 51.8444
- type: nauc_precision_at_5_std
value: 14.287600000000001
- type: nauc_precision_at_5_diff1
value: 49.9448
- type: nauc_precision_at_10_max
value: 52.575300000000006
- type: nauc_precision_at_10_std
value: 19.8478
- type: nauc_precision_at_10_diff1
value: 45.2201
- type: nauc_precision_at_20_max
value: 54.564299999999996
- type: nauc_precision_at_20_std
value: 25.4289
- type: nauc_precision_at_20_diff1
value: 44.829299999999996
- type: nauc_precision_at_100_max
value: 54.0934
- type: nauc_precision_at_100_std
value: 30.652
- type: nauc_precision_at_100_diff1
value: 44.410500000000006
- type: nauc_precision_at_1000_max
value: 62.376
- type: nauc_precision_at_1000_std
value: 32.0345
- type: nauc_precision_at_1000_diff1
value: 45.353500000000004
- type: nauc_mrr_at_1_max
value: 53.3984
- type: nauc_mrr_at_1_std
value: 11.4226
- type: nauc_mrr_at_1_diff1
value: 63.624
- type: nauc_mrr_at_3_max
value: 53.3455
- type: nauc_mrr_at_3_std
value: 12.5627
- type: nauc_mrr_at_3_diff1
value: 58.5384
- type: nauc_mrr_at_5_max
value: 53.2182
- type: nauc_mrr_at_5_std
value: 12.642100000000001
- type: nauc_mrr_at_5_diff1
value: 58.216100000000004
- type: nauc_mrr_at_10_max
value: 53.353300000000004
- type: nauc_mrr_at_10_std
value: 13.219
- type: nauc_mrr_at_10_diff1
value: 57.875
- type: nauc_mrr_at_20_max
value: 53.4276
- type: nauc_mrr_at_20_std
value: 13.383500000000002
- type: nauc_mrr_at_20_diff1
value: 57.949799999999996
- type: nauc_mrr_at_100_max
value: 53.40089999999999
- type: nauc_mrr_at_100_std
value: 13.3411
- type: nauc_mrr_at_100_diff1
value: 58.030300000000004
- type: nauc_mrr_at_1000_max
value: 53.3996
- type: nauc_mrr_at_1000_std
value: 13.3218
- type: nauc_mrr_at_1000_diff1
value: 58.0465
- type: main_score
value: 52.71
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-eng)
type: facebook/mlqa
config: ara-eng
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 31.563999999999997
- type: ndcg_at_3
value: 39.35
- type: ndcg_at_5
value: 41.817
- type: ndcg_at_10
value: 44.275
- type: ndcg_at_20
value: 46.278000000000006
- type: ndcg_at_100
value: 49.04
- type: ndcg_at_1000
value: 50.897000000000006
- type: map_at_1
value: 31.563999999999997
- type: map_at_3
value: 37.456
- type: map_at_5
value: 38.824
- type: map_at_10
value: 39.843
- type: map_at_20
value: 40.400000000000006
- type: map_at_100
value: 40.783
- type: map_at_1000
value: 40.847
- type: recall_at_1
value: 31.563999999999997
- type: recall_at_3
value: 44.824000000000005
- type: recall_at_5
value: 50.806
- type: recall_at_10
value: 58.382999999999996
- type: recall_at_20
value: 66.251
- type: recall_at_100
value: 81.123
- type: recall_at_1000
value: 96.08
- type: precision_at_1
value: 31.563999999999997
- type: precision_at_3
value: 14.940999999999999
- type: precision_at_5
value: 10.165000000000001
- type: precision_at_10
value: 5.84
- type: precision_at_20
value: 3.314
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.096
- type: mrr_at_1
value: 31.5641
- type: mrr_at_3
value: 37.4562
- type: mrr_at_5
value: 38.8281
- type: mrr_at_10
value: 39.847
- type: mrr_at_20
value: 40.4043
- type: mrr_at_100
value: 40.787099999999995
- type: mrr_at_1000
value: 40.8507
- type: nauc_ndcg_at_1_max
value: 45.0961
- type: nauc_ndcg_at_1_std
value: 6.0832999999999995
- type: nauc_ndcg_at_1_diff1
value: 56.4542
- type: nauc_ndcg_at_3_max
value: 45.8009
- type: nauc_ndcg_at_3_std
value: 7.946599999999999
- type: nauc_ndcg_at_3_diff1
value: 50.22990000000001
- type: nauc_ndcg_at_5_max
value: 45.7759
- type: nauc_ndcg_at_5_std
value: 8.793
- type: nauc_ndcg_at_5_diff1
value: 48.47
- type: nauc_ndcg_at_10_max
value: 45.896100000000004
- type: nauc_ndcg_at_10_std
value: 9.767900000000001
- type: nauc_ndcg_at_10_diff1
value: 47.862500000000004
- type: nauc_ndcg_at_20_max
value: 45.9985
- type: nauc_ndcg_at_20_std
value: 10.7251
- type: nauc_ndcg_at_20_diff1
value: 47.3885
- type: nauc_ndcg_at_100_max
value: 46.1803
- type: nauc_ndcg_at_100_std
value: 11.471
- type: nauc_ndcg_at_100_diff1
value: 47.6423
- type: nauc_ndcg_at_1000_max
value: 45.9962
- type: nauc_ndcg_at_1000_std
value: 10.4737
- type: nauc_ndcg_at_1000_diff1
value: 48.4473
- type: nauc_map_at_1_max
value: 45.0961
- type: nauc_map_at_1_std
value: 6.0832999999999995
- type: nauc_map_at_1_diff1
value: 56.4542
- type: nauc_map_at_3_max
value: 45.685199999999995
- type: nauc_map_at_3_std
value: 7.498199999999999
- type: nauc_map_at_3_diff1
value: 51.702999999999996
- type: nauc_map_at_5_max
value: 45.6663
- type: nauc_map_at_5_std
value: 7.9673
- type: nauc_map_at_5_diff1
value: 50.723
- type: nauc_map_at_10_max
value: 45.7094
- type: nauc_map_at_10_std
value: 8.3551
- type: nauc_map_at_10_diff1
value: 50.497099999999996
- type: nauc_map_at_20_max
value: 45.738299999999995
- type: nauc_map_at_20_std
value: 8.587
- type: nauc_map_at_20_diff1
value: 50.386900000000004
- type: nauc_map_at_100_max
value: 45.7463
- type: nauc_map_at_100_std
value: 8.6732
- type: nauc_map_at_100_diff1
value: 50.4202
- type: nauc_map_at_1000_max
value: 45.7398
- type: nauc_map_at_1000_std
value: 8.6477
- type: nauc_map_at_1000_diff1
value: 50.443599999999996
- type: nauc_recall_at_1_max
value: 45.0961
- type: nauc_recall_at_1_std
value: 6.0832999999999995
- type: nauc_recall_at_1_diff1
value: 56.4542
- type: nauc_recall_at_3_max
value: 46.110299999999995
- type: nauc_recall_at_3_std
value: 9.2308
- type: nauc_recall_at_3_diff1
value: 46.0213
- type: nauc_recall_at_5_max
value: 46.0402
- type: nauc_recall_at_5_std
value: 11.305900000000001
- type: nauc_recall_at_5_diff1
value: 41.6502
- type: nauc_recall_at_10_max
value: 46.4824
- type: nauc_recall_at_10_std
value: 14.7249
- type: nauc_recall_at_10_diff1
value: 39.0873
- type: nauc_recall_at_20_max
value: 47.0124
- type: nauc_recall_at_20_std
value: 20.002
- type: nauc_recall_at_20_diff1
value: 35.6458
- type: nauc_recall_at_100_max
value: 49.6722
- type: nauc_recall_at_100_std
value: 32.310100000000006
- type: nauc_recall_at_100_diff1
value: 31.805
- type: nauc_recall_at_1000_max
value: 50.651599999999995
- type: nauc_recall_at_1000_std
value: 40.5728
- type: nauc_recall_at_1000_diff1
value: 27.4545
- type: nauc_precision_at_1_max
value: 45.0961
- type: nauc_precision_at_1_std
value: 6.0832999999999995
- type: nauc_precision_at_1_diff1
value: 56.4542
- type: nauc_precision_at_3_max
value: 46.110299999999995
- type: nauc_precision_at_3_std
value: 9.2308
- type: nauc_precision_at_3_diff1
value: 46.0213
- type: nauc_precision_at_5_max
value: 46.1272
- type: nauc_precision_at_5_std
value: 11.351700000000001
- type: nauc_precision_at_5_diff1
value: 41.6701
- type: nauc_precision_at_10_max
value: 46.5768
- type: nauc_precision_at_10_std
value: 14.7753
- type: nauc_precision_at_10_diff1
value: 39.108399999999996
- type: nauc_precision_at_20_max
value: 47.123599999999996
- type: nauc_precision_at_20_std
value: 20.0731
- type: nauc_precision_at_20_diff1
value: 35.6993
- type: nauc_precision_at_100_max
value: 49.7989
- type: nauc_precision_at_100_std
value: 32.385999999999996
- type: nauc_precision_at_100_diff1
value: 31.779000000000003
- type: nauc_precision_at_1000_max
value: 50.600100000000005
- type: nauc_precision_at_1000_std
value: 40.419
- type: nauc_precision_at_1000_diff1
value: 27.254099999999998
- type: nauc_mrr_at_1_max
value: 45.0961
- type: nauc_mrr_at_1_std
value: 6.0832999999999995
- type: nauc_mrr_at_1_diff1
value: 56.4542
- type: nauc_mrr_at_3_max
value: 45.685199999999995
- type: nauc_mrr_at_3_std
value: 7.498199999999999
- type: nauc_mrr_at_3_diff1
value: 51.702999999999996
- type: nauc_mrr_at_5_max
value: 45.6835
- type: nauc_mrr_at_5_std
value: 7.9763
- type: nauc_mrr_at_5_diff1
value: 50.7273
- type: nauc_mrr_at_10_max
value: 45.7267
- type: nauc_mrr_at_10_std
value: 8.364099999999999
- type: nauc_mrr_at_10_diff1
value: 50.5014
- type: nauc_mrr_at_20_max
value: 45.7556
- type: nauc_mrr_at_20_std
value: 8.5966
- type: nauc_mrr_at_20_diff1
value: 50.393
- type: nauc_mrr_at_100_max
value: 45.760400000000004
- type: nauc_mrr_at_100_std
value: 8.6807
- type: nauc_mrr_at_100_diff1
value: 50.425799999999995
- type: nauc_mrr_at_1000_max
value: 45.753899999999994
- type: nauc_mrr_at_1000_std
value: 8.655100000000001
- type: nauc_mrr_at_1000_diff1
value: 50.448899999999995
- type: main_score
value: 44.275
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-spa)
type: facebook/mlqa
config: ara-spa
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 36.35
- type: ndcg_at_3
value: 44.869
- type: ndcg_at_5
value: 47.461999999999996
- type: ndcg_at_10
value: 50.101
- type: ndcg_at_20
value: 52.002
- type: ndcg_at_100
value: 54.449999999999996
- type: ndcg_at_1000
value: 56.084999999999994
- type: map_at_1
value: 36.35
- type: map_at_3
value: 42.796
- type: map_at_5
value: 44.242
- type: map_at_10
value: 45.344
- type: map_at_20
value: 45.87
- type: map_at_100
value: 46.202
- type: map_at_1000
value: 46.262
- type: recall_at_1
value: 36.35
- type: recall_at_3
value: 50.859
- type: recall_at_5
value: 57.128
- type: recall_at_10
value: 65.217
- type: recall_at_20
value: 72.7
- type: recall_at_100
value: 85.996
- type: recall_at_1000
value: 98.989
- type: precision_at_1
value: 36.35
- type: precision_at_3
value: 16.953
- type: precision_at_5
value: 11.426
- type: precision_at_10
value: 6.522
- type: precision_at_20
value: 3.6350000000000002
- type: precision_at_100
value: 0.86
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 36.3498
- type: mrr_at_3
value: 42.7958
- type: mrr_at_5
value: 44.2417
- type: mrr_at_10
value: 45.3442
- type: mrr_at_20
value: 45.8705
- type: mrr_at_100
value: 46.2022
- type: mrr_at_1000
value: 46.261799999999994
- type: nauc_ndcg_at_1_max
value: 49.756
- type: nauc_ndcg_at_1_std
value: 8.7422
- type: nauc_ndcg_at_1_diff1
value: 60.206199999999995
- type: nauc_ndcg_at_3_max
value: 51.8621
- type: nauc_ndcg_at_3_std
value: 11.6268
- type: nauc_ndcg_at_3_diff1
value: 53.6381
- type: nauc_ndcg_at_5_max
value: 52.5281
- type: nauc_ndcg_at_5_std
value: 12.8893
- type: nauc_ndcg_at_5_diff1
value: 52.311099999999996
- type: nauc_ndcg_at_10_max
value: 52.7753
- type: nauc_ndcg_at_10_std
value: 14.358699999999999
- type: nauc_ndcg_at_10_diff1
value: 51.960300000000004
- type: nauc_ndcg_at_20_max
value: 52.880700000000004
- type: nauc_ndcg_at_20_std
value: 15.427
- type: nauc_ndcg_at_20_diff1
value: 51.6363
- type: nauc_ndcg_at_100_max
value: 52.317800000000005
- type: nauc_ndcg_at_100_std
value: 14.510000000000002
- type: nauc_ndcg_at_100_diff1
value: 52.2435
- type: nauc_ndcg_at_1000_max
value: 52.1913
- type: nauc_ndcg_at_1000_std
value: 13.5793
- type: nauc_ndcg_at_1000_diff1
value: 52.95910000000001
- type: nauc_map_at_1_max
value: 49.756
- type: nauc_map_at_1_std
value: 8.7422
- type: nauc_map_at_1_diff1
value: 60.206199999999995
- type: nauc_map_at_3_max
value: 51.3348
- type: nauc_map_at_3_std
value: 10.7914
- type: nauc_map_at_3_diff1
value: 55.191100000000006
- type: nauc_map_at_5_max
value: 51.6705
- type: nauc_map_at_5_std
value: 11.4773
- type: nauc_map_at_5_diff1
value: 54.46959999999999
- type: nauc_map_at_10_max
value: 51.7134
- type: nauc_map_at_10_std
value: 11.9884
- type: nauc_map_at_10_diff1
value: 54.341300000000004
- type: nauc_map_at_20_max
value: 51.734100000000005
- type: nauc_map_at_20_std
value: 12.2386
- type: nauc_map_at_20_diff1
value: 54.2967
- type: nauc_map_at_100_max
value: 51.6624
- type: nauc_map_at_100_std
value: 12.1183
- type: nauc_map_at_100_diff1
value: 54.379999999999995
- type: nauc_map_at_1000_max
value: 51.661
- type: nauc_map_at_1000_std
value: 12.0917
- type: nauc_map_at_1000_diff1
value: 54.4056
- type: nauc_recall_at_1_max
value: 49.756
- type: nauc_recall_at_1_std
value: 8.7422
- type: nauc_recall_at_1_diff1
value: 60.206199999999995
- type: nauc_recall_at_3_max
value: 53.41590000000001
- type: nauc_recall_at_3_std
value: 14.1345
- type: nauc_recall_at_3_diff1
value: 49.0993
- type: nauc_recall_at_5_max
value: 55.3167
- type: nauc_recall_at_5_std
value: 17.4988
- type: nauc_recall_at_5_diff1
value: 45.4789
- type: nauc_recall_at_10_max
value: 56.843900000000005
- type: nauc_recall_at_10_std
value: 23.6997
- type: nauc_recall_at_10_diff1
value: 43.419799999999995
- type: nauc_recall_at_20_max
value: 58.146699999999996
- type: nauc_recall_at_20_std
value: 31.131199999999996
- type: nauc_recall_at_20_diff1
value: 39.9097
- type: nauc_recall_at_100_max
value: 55.3601
- type: nauc_recall_at_100_std
value: 31.958399999999997
- type: nauc_recall_at_100_diff1
value: 38.465700000000005
- type: nauc_recall_at_1000_max
value: 56.1925
- type: nauc_recall_at_1000_std
value: 25.717299999999998
- type: nauc_recall_at_1000_diff1
value: 25.905099999999997
- type: nauc_precision_at_1_max
value: 49.756
- type: nauc_precision_at_1_std
value: 8.7422
- type: nauc_precision_at_1_diff1
value: 60.206199999999995
- type: nauc_precision_at_3_max
value: 53.41590000000001
- type: nauc_precision_at_3_std
value: 14.1345
- type: nauc_precision_at_3_diff1
value: 49.0993
- type: nauc_precision_at_5_max
value: 55.3167
- type: nauc_precision_at_5_std
value: 17.4988
- type: nauc_precision_at_5_diff1
value: 45.4789
- type: nauc_precision_at_10_max
value: 56.843900000000005
- type: nauc_precision_at_10_std
value: 23.6997
- type: nauc_precision_at_10_diff1
value: 43.419799999999995
- type: nauc_precision_at_20_max
value: 58.146699999999996
- type: nauc_precision_at_20_std
value: 31.131199999999996
- type: nauc_precision_at_20_diff1
value: 39.9097
- type: nauc_precision_at_100_max
value: 55.3601
- type: nauc_precision_at_100_std
value: 31.958399999999997
- type: nauc_precision_at_100_diff1
value: 38.465700000000005
- type: nauc_precision_at_1000_max
value: 56.1925
- type: nauc_precision_at_1000_std
value: 25.717299999999998
- type: nauc_precision_at_1000_diff1
value: 25.905099999999997
- type: nauc_mrr_at_1_max
value: 49.756
- type: nauc_mrr_at_1_std
value: 8.7422
- type: nauc_mrr_at_1_diff1
value: 60.206199999999995
- type: nauc_mrr_at_3_max
value: 51.3348
- type: nauc_mrr_at_3_std
value: 10.7914
- type: nauc_mrr_at_3_diff1
value: 55.191100000000006
- type: nauc_mrr_at_5_max
value: 51.6705
- type: nauc_mrr_at_5_std
value: 11.4773
- type: nauc_mrr_at_5_diff1
value: 54.46959999999999
- type: nauc_mrr_at_10_max
value: 51.7134
- type: nauc_mrr_at_10_std
value: 11.9884
- type: nauc_mrr_at_10_diff1
value: 54.341300000000004
- type: nauc_mrr_at_20_max
value: 51.734100000000005
- type: nauc_mrr_at_20_std
value: 12.2386
- type: nauc_mrr_at_20_diff1
value: 54.2967
- type: nauc_mrr_at_100_max
value: 51.6624
- type: nauc_mrr_at_100_std
value: 12.1183
- type: nauc_mrr_at_100_diff1
value: 54.379999999999995
- type: nauc_mrr_at_1000_max
value: 51.661
- type: nauc_mrr_at_1000_std
value: 12.0917
- type: nauc_mrr_at_1000_diff1
value: 54.4056
- type: main_score
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-hin)
type: facebook/mlqa
config: ara-hin
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 33.097
- type: ndcg_at_3
value: 41.56
- type: ndcg_at_5
value: 44.437
- type: ndcg_at_10
value: 47.157
- type: ndcg_at_20
value: 49.370999999999995
- type: ndcg_at_100
value: 52.11
- type: ndcg_at_1000
value: 53.746
- type: map_at_1
value: 33.097
- type: map_at_3
value: 39.532000000000004
- type: map_at_5
value: 41.141
- type: map_at_10
value: 42.253
- type: map_at_20
value: 42.861
- type: map_at_100
value: 43.228
- type: map_at_1000
value: 43.288
- type: recall_at_1
value: 33.097
- type: recall_at_3
value: 47.406
- type: recall_at_5
value: 54.342
- type: recall_at_10
value: 62.807
- type: recall_at_20
value: 71.54599999999999
- type: recall_at_100
value: 86.50999999999999
- type: recall_at_1000
value: 99.454
- type: precision_at_1
value: 33.097
- type: precision_at_3
value: 15.802
- type: precision_at_5
value: 10.868
- type: precision_at_10
value: 6.281000000000001
- type: precision_at_20
value: 3.5770000000000004
- type: precision_at_100
value: 0.865
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 33.0967
- type: mrr_at_3
value: 39.5321
- type: mrr_at_5
value: 41.1405
- type: mrr_at_10
value: 42.2528
- type: mrr_at_20
value: 42.8615
- type: mrr_at_100
value: 43.2277
- type: mrr_at_1000
value: 43.2878
- type: nauc_ndcg_at_1_max
value: 41.5056
- type: nauc_ndcg_at_1_std
value: -0.7759
- type: nauc_ndcg_at_1_diff1
value: 54.4983
- type: nauc_ndcg_at_3_max
value: 43.7037
- type: nauc_ndcg_at_3_std
value: 0.9127
- type: nauc_ndcg_at_3_diff1
value: 48.093399999999995
- type: nauc_ndcg_at_5_max
value: 44.412600000000005
- type: nauc_ndcg_at_5_std
value: 2.7959
- type: nauc_ndcg_at_5_diff1
value: 47.2115
- type: nauc_ndcg_at_10_max
value: 45.1547
- type: nauc_ndcg_at_10_std
value: 4.5252
- type: nauc_ndcg_at_10_diff1
value: 46.35
- type: nauc_ndcg_at_20_max
value: 45.3115
- type: nauc_ndcg_at_20_std
value: 5.2706
- type: nauc_ndcg_at_20_diff1
value: 46.6213
- type: nauc_ndcg_at_100_max
value: 45.4305
- type: nauc_ndcg_at_100_std
value: 5.226299999999999
- type: nauc_ndcg_at_100_diff1
value: 47.2901
- type: nauc_ndcg_at_1000_max
value: 44.7915
- type: nauc_ndcg_at_1000_std
value: 4.0262
- type: nauc_ndcg_at_1000_diff1
value: 47.800599999999996
- type: nauc_map_at_1_max
value: 41.5056
- type: nauc_map_at_1_std
value: -0.7759
- type: nauc_map_at_1_diff1
value: 54.4983
- type: nauc_map_at_3_max
value: 43.2876
- type: nauc_map_at_3_std
value: 0.5027
- type: nauc_map_at_3_diff1
value: 49.6127
- type: nauc_map_at_5_max
value: 43.688900000000004
- type: nauc_map_at_5_std
value: 1.5645
- type: nauc_map_at_5_diff1
value: 49.1502
- type: nauc_map_at_10_max
value: 43.9749
- type: nauc_map_at_10_std
value: 2.2498
- type: nauc_map_at_10_diff1
value: 48.827
- type: nauc_map_at_20_max
value: 44.0064
- type: nauc_map_at_20_std
value: 2.4167
- type: nauc_map_at_20_diff1
value: 48.9157
- type: nauc_map_at_100_max
value: 44.0336
- type: nauc_map_at_100_std
value: 2.4309000000000003
- type: nauc_map_at_100_diff1
value: 48.997600000000006
- type: nauc_map_at_1000_max
value: 44.016
- type: nauc_map_at_1000_std
value: 2.3993
- type: nauc_map_at_1000_diff1
value: 49.016799999999996
- type: nauc_recall_at_1_max
value: 41.5056
- type: nauc_recall_at_1_std
value: -0.7759
- type: nauc_recall_at_1_diff1
value: 54.4983
- type: nauc_recall_at_3_max
value: 44.857200000000006
- type: nauc_recall_at_3_std
value: 2.0964
- type: nauc_recall_at_3_diff1
value: 43.721199999999996
- type: nauc_recall_at_5_max
value: 46.6269
- type: nauc_recall_at_5_std
value: 6.746
- type: nauc_recall_at_5_diff1
value: 41.2489
- type: nauc_recall_at_10_max
value: 49.47
- type: nauc_recall_at_10_std
value: 13.1434
- type: nauc_recall_at_10_diff1
value: 37.5806
- type: nauc_recall_at_20_max
value: 51.146100000000004
- type: nauc_recall_at_20_std
value: 18.7664
- type: nauc_recall_at_20_diff1
value: 37.2469
- type: nauc_recall_at_100_max
value: 57.036500000000004
- type: nauc_recall_at_100_std
value: 28.7043
- type: nauc_recall_at_100_diff1
value: 37.934200000000004
- type: nauc_recall_at_1000_max
value: 44.6101
- type: nauc_recall_at_1000_std
value: 37.7026
- type: nauc_recall_at_1000_diff1
value: 31.8598
- type: nauc_precision_at_1_max
value: 41.5056
- type: nauc_precision_at_1_std
value: -0.7759
- type: nauc_precision_at_1_diff1
value: 54.4983
- type: nauc_precision_at_3_max
value: 44.857200000000006
- type: nauc_precision_at_3_std
value: 2.0964
- type: nauc_precision_at_3_diff1
value: 43.721199999999996
- type: nauc_precision_at_5_max
value: 46.6269
- type: nauc_precision_at_5_std
value: 6.746
- type: nauc_precision_at_5_diff1
value: 41.2489
- type: nauc_precision_at_10_max
value: 49.47
- type: nauc_precision_at_10_std
value: 13.1434
- type: nauc_precision_at_10_diff1
value: 37.5806
- type: nauc_precision_at_20_max
value: 51.146100000000004
- type: nauc_precision_at_20_std
value: 18.7664
- type: nauc_precision_at_20_diff1
value: 37.2469
- type: nauc_precision_at_100_max
value: 57.036500000000004
- type: nauc_precision_at_100_std
value: 28.7043
- type: nauc_precision_at_100_diff1
value: 37.934200000000004
- type: nauc_precision_at_1000_max
value: 44.6101
- type: nauc_precision_at_1000_std
value: 37.7026
- type: nauc_precision_at_1000_diff1
value: 31.8598
- type: nauc_mrr_at_1_max
value: 41.5056
- type: nauc_mrr_at_1_std
value: -0.7759
- type: nauc_mrr_at_1_diff1
value: 54.4983
- type: nauc_mrr_at_3_max
value: 43.2876
- type: nauc_mrr_at_3_std
value: 0.5027
- type: nauc_mrr_at_3_diff1
value: 49.6127
- type: nauc_mrr_at_5_max
value: 43.688900000000004
- type: nauc_mrr_at_5_std
value: 1.5645
- type: nauc_mrr_at_5_diff1
value: 49.1502
- type: nauc_mrr_at_10_max
value: 43.9749
- type: nauc_mrr_at_10_std
value: 2.2498
- type: nauc_mrr_at_10_diff1
value: 48.827
- type: nauc_mrr_at_20_max
value: 44.0064
- type: nauc_mrr_at_20_std
value: 2.4167
- type: nauc_mrr_at_20_diff1
value: 48.9157
- type: nauc_mrr_at_100_max
value: 44.0336
- type: nauc_mrr_at_100_std
value: 2.4309000000000003
- type: nauc_mrr_at_100_diff1
value: 48.997600000000006
- type: nauc_mrr_at_1000_max
value: 44.016
- type: nauc_mrr_at_1000_std
value: 2.3993
- type: nauc_mrr_at_1000_diff1
value: 49.016799999999996
- type: main_score
value: 47.157
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-vie)
type: facebook/mlqa
config: ara-vie
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 33.122
- type: ndcg_at_3
value: 41.82
- type: ndcg_at_5
value: 44.2
- type: ndcg_at_10
value: 46.912
- type: ndcg_at_20
value: 49.124
- type: ndcg_at_100
value: 51.806
- type: ndcg_at_1000
value: 53.474999999999994
- type: map_at_1
value: 33.122
- type: map_at_3
value: 39.692
- type: map_at_5
value: 41.016000000000005
- type: map_at_10
value: 42.161
- type: map_at_20
value: 42.774
- type: map_at_100
value: 43.139
- type: map_at_1000
value: 43.2
- type: recall_at_1
value: 33.122
- type: recall_at_3
value: 47.973
- type: recall_at_5
value: 53.737
- type: recall_at_10
value: 61.992999999999995
- type: recall_at_20
value: 70.68900000000001
- type: recall_at_100
value: 85.247
- type: recall_at_1000
value: 98.48599999999999
- type: precision_at_1
value: 33.122
- type: precision_at_3
value: 15.991
- type: precision_at_5
value: 10.747
- type: precision_at_10
value: 6.199000000000001
- type: precision_at_20
value: 3.5340000000000003
- type: precision_at_100
value: 0.852
- type: precision_at_1000
value: 0.098
- type: mrr_at_1
value: 33.1216
- type: mrr_at_3
value: 39.6922
- type: mrr_at_5
value: 41.0161
- type: mrr_at_10
value: 42.160599999999995
- type: mrr_at_20
value: 42.774
- type: mrr_at_100
value: 43.1385
- type: mrr_at_1000
value: 43.199799999999996
- type: nauc_ndcg_at_1_max
value: 49.1834
- type: nauc_ndcg_at_1_std
value: 6.8612
- type: nauc_ndcg_at_1_diff1
value: 55.1215
- type: nauc_ndcg_at_3_max
value: 48.7315
- type: nauc_ndcg_at_3_std
value: 8.5129
- type: nauc_ndcg_at_3_diff1
value: 46.6492
- type: nauc_ndcg_at_5_max
value: 48.8836
- type: nauc_ndcg_at_5_std
value: 9.5124
- type: nauc_ndcg_at_5_diff1
value: 45.9731
- type: nauc_ndcg_at_10_max
value: 48.403
- type: nauc_ndcg_at_10_std
value: 10.4213
- type: nauc_ndcg_at_10_diff1
value: 45.522800000000004
- type: nauc_ndcg_at_20_max
value: 48.4306
- type: nauc_ndcg_at_20_std
value: 11.264299999999999
- type: nauc_ndcg_at_20_diff1
value: 45.2984
- type: nauc_ndcg_at_100_max
value: 48.7782
- type: nauc_ndcg_at_100_std
value: 11.4887
- type: nauc_ndcg_at_100_diff1
value: 45.7048
- type: nauc_ndcg_at_1000_max
value: 48.6585
- type: nauc_ndcg_at_1000_std
value: 10.5363
- type: nauc_ndcg_at_1000_diff1
value: 46.3558
- type: nauc_map_at_1_max
value: 49.1834
- type: nauc_map_at_1_std
value: 6.8612
- type: nauc_map_at_1_diff1
value: 55.1215
- type: nauc_map_at_3_max
value: 48.8541
- type: nauc_map_at_3_std
value: 8.035
- type: nauc_map_at_3_diff1
value: 48.606899999999996
- type: nauc_map_at_5_max
value: 48.916399999999996
- type: nauc_map_at_5_std
value: 8.5605
- type: nauc_map_at_5_diff1
value: 48.2496
- type: nauc_map_at_10_max
value: 48.7073
- type: nauc_map_at_10_std
value: 8.9177
- type: nauc_map_at_10_diff1
value: 48.0922
- type: nauc_map_at_20_max
value: 48.714200000000005
- type: nauc_map_at_20_std
value: 9.1213
- type: nauc_map_at_20_diff1
value: 48.0531
- type: nauc_map_at_100_max
value: 48.7618
- type: nauc_map_at_100_std
value: 9.157
- type: nauc_map_at_100_diff1
value: 48.0993
- type: nauc_map_at_1000_max
value: 48.762299999999996
- type: nauc_map_at_1000_std
value: 9.1389
- type: nauc_map_at_1000_diff1
value: 48.1273
- type: nauc_recall_at_1_max
value: 49.1834
- type: nauc_recall_at_1_std
value: 6.8612
- type: nauc_recall_at_1_diff1
value: 55.1215
- type: nauc_recall_at_3_max
value: 48.372
- type: nauc_recall_at_3_std
value: 9.9262
- type: nauc_recall_at_3_diff1
value: 41.0295
- type: nauc_recall_at_5_max
value: 48.8314
- type: nauc_recall_at_5_std
value: 12.5722
- type: nauc_recall_at_5_diff1
value: 39.0983
- type: nauc_recall_at_10_max
value: 47.281099999999995
- type: nauc_recall_at_10_std
value: 15.9864
- type: nauc_recall_at_10_diff1
value: 36.842999999999996
- type: nauc_recall_at_20_max
value: 47.2339
- type: nauc_recall_at_20_std
value: 21.2964
- type: nauc_recall_at_20_diff1
value: 34.102
- type: nauc_recall_at_100_max
value: 50.4448
- type: nauc_recall_at_100_std
value: 31.2116
- type: nauc_recall_at_100_diff1
value: 30.873099999999997
- type: nauc_recall_at_1000_max
value: 41.048899999999996
- type: nauc_recall_at_1000_std
value: 33.9471
- type: nauc_recall_at_1000_diff1
value: 1.6271
- type: nauc_precision_at_1_max
value: 49.1834
- type: nauc_precision_at_1_std
value: 6.8612
- type: nauc_precision_at_1_diff1
value: 55.1215
- type: nauc_precision_at_3_max
value: 48.372
- type: nauc_precision_at_3_std
value: 9.9262
- type: nauc_precision_at_3_diff1
value: 41.0295
- type: nauc_precision_at_5_max
value: 48.8314
- type: nauc_precision_at_5_std
value: 12.5722
- type: nauc_precision_at_5_diff1
value: 39.0983
- type: nauc_precision_at_10_max
value: 47.281099999999995
- type: nauc_precision_at_10_std
value: 15.9864
- type: nauc_precision_at_10_diff1
value: 36.842999999999996
- type: nauc_precision_at_20_max
value: 47.2339
- type: nauc_precision_at_20_std
value: 21.2964
- type: nauc_precision_at_20_diff1
value: 34.102
- type: nauc_precision_at_100_max
value: 50.4448
- type: nauc_precision_at_100_std
value: 31.2116
- type: nauc_precision_at_100_diff1
value: 30.873099999999997
- type: nauc_precision_at_1000_max
value: 41.048899999999996
- type: nauc_precision_at_1000_std
value: 33.9471
- type: nauc_precision_at_1000_diff1
value: 1.6271
- type: nauc_mrr_at_1_max
value: 49.1834
- type: nauc_mrr_at_1_std
value: 6.8612
- type: nauc_mrr_at_1_diff1
value: 55.1215
- type: nauc_mrr_at_3_max
value: 48.8541
- type: nauc_mrr_at_3_std
value: 8.035
- type: nauc_mrr_at_3_diff1
value: 48.606899999999996
- type: nauc_mrr_at_5_max
value: 48.916399999999996
- type: nauc_mrr_at_5_std
value: 8.5605
- type: nauc_mrr_at_5_diff1
value: 48.2496
- type: nauc_mrr_at_10_max
value: 48.7073
- type: nauc_mrr_at_10_std
value: 8.9177
- type: nauc_mrr_at_10_diff1
value: 48.0922
- type: nauc_mrr_at_20_max
value: 48.714200000000005
- type: nauc_mrr_at_20_std
value: 9.1213
- type: nauc_mrr_at_20_diff1
value: 48.0531
- type: nauc_mrr_at_100_max
value: 48.7618
- type: nauc_mrr_at_100_std
value: 9.157
- type: nauc_mrr_at_100_diff1
value: 48.0993
- type: nauc_mrr_at_1000_max
value: 48.762299999999996
- type: nauc_mrr_at_1000_std
value: 9.1389
- type: nauc_mrr_at_1000_diff1
value: 48.1273
- type: main_score
value: 46.912
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-zho)
type: facebook/mlqa
config: ara-zho
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 34.467
- type: ndcg_at_3
value: 42.643
- type: ndcg_at_5
value: 45.559
- type: ndcg_at_10
value: 48.274
- type: ndcg_at_20
value: 50.107
- type: ndcg_at_100
value: 52.93
- type: ndcg_at_1000
value: 54.493
- type: map_at_1
value: 34.467
- type: map_at_3
value: 40.672999999999995
- type: map_at_5
value: 42.284
- type: map_at_10
value: 43.418
- type: map_at_20
value: 43.926
- type: map_at_100
value: 44.296
- type: map_at_1000
value: 44.352000000000004
- type: recall_at_1
value: 34.467
- type: recall_at_3
value: 48.326
- type: recall_at_5
value: 55.43900000000001
- type: recall_at_10
value: 63.754999999999995
- type: recall_at_20
value: 70.973
- type: recall_at_100
value: 86.454
- type: recall_at_1000
value: 98.902
- type: precision_at_1
value: 34.467
- type: precision_at_3
value: 16.109
- type: precision_at_5
value: 11.088000000000001
- type: precision_at_10
value: 6.3759999999999994
- type: precision_at_20
value: 3.549
- type: precision_at_100
value: 0.865
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 34.466499999999996
- type: mrr_at_3
value: 40.6729
- type: mrr_at_5
value: 42.2838
- type: mrr_at_10
value: 43.4184
- type: mrr_at_20
value: 43.926
- type: mrr_at_100
value: 44.2962
- type: mrr_at_1000
value: 44.3522
- type: nauc_ndcg_at_1_max
value: 47.1153
- type: nauc_ndcg_at_1_std
value: 3.4273
- type: nauc_ndcg_at_1_diff1
value: 59.028000000000006
- type: nauc_ndcg_at_3_max
value: 47.509499999999996
- type: nauc_ndcg_at_3_std
value: 6.1509
- type: nauc_ndcg_at_3_diff1
value: 52.3682
- type: nauc_ndcg_at_5_max
value: 47.1969
- type: nauc_ndcg_at_5_std
value: 6.2892
- type: nauc_ndcg_at_5_diff1
value: 50.9259
- type: nauc_ndcg_at_10_max
value: 47.246500000000005
- type: nauc_ndcg_at_10_std
value: 7.1377
- type: nauc_ndcg_at_10_diff1
value: 50.049600000000005
- type: nauc_ndcg_at_20_max
value: 47.5816
- type: nauc_ndcg_at_20_std
value: 7.4744
- type: nauc_ndcg_at_20_diff1
value: 50.4117
- type: nauc_ndcg_at_100_max
value: 47.9685
- type: nauc_ndcg_at_100_std
value: 8.6481
- type: nauc_ndcg_at_100_diff1
value: 50.4111
- type: nauc_ndcg_at_1000_max
value: 47.7801
- type: nauc_ndcg_at_1000_std
value: 7.5201
- type: nauc_ndcg_at_1000_diff1
value: 51.4396
- type: nauc_map_at_1_max
value: 47.1153
- type: nauc_map_at_1_std
value: 3.4273
- type: nauc_map_at_1_diff1
value: 59.028000000000006
- type: nauc_map_at_3_max
value: 47.475
- type: nauc_map_at_3_std
value: 5.5253
- type: nauc_map_at_3_diff1
value: 53.9536
- type: nauc_map_at_5_max
value: 47.2987
- type: nauc_map_at_5_std
value: 5.6127
- type: nauc_map_at_5_diff1
value: 53.151700000000005
- type: nauc_map_at_10_max
value: 47.307300000000005
- type: nauc_map_at_10_std
value: 5.9255
- type: nauc_map_at_10_diff1
value: 52.8381
- type: nauc_map_at_20_max
value: 47.3942
- type: nauc_map_at_20_std
value: 5.992100000000001
- type: nauc_map_at_20_diff1
value: 52.9637
- type: nauc_map_at_100_max
value: 47.448800000000006
- type: nauc_map_at_100_std
value: 6.1400999999999994
- type: nauc_map_at_100_diff1
value: 52.97690000000001
- type: nauc_map_at_1000_max
value: 47.4484
- type: nauc_map_at_1000_std
value: 6.1112
- type: nauc_map_at_1000_diff1
value: 53.0145
- type: nauc_recall_at_1_max
value: 47.1153
- type: nauc_recall_at_1_std
value: 3.4273
- type: nauc_recall_at_1_diff1
value: 59.028000000000006
- type: nauc_recall_at_3_max
value: 47.5843
- type: nauc_recall_at_3_std
value: 7.9499
- type: nauc_recall_at_3_diff1
value: 47.7843
- type: nauc_recall_at_5_max
value: 46.8183
- type: nauc_recall_at_5_std
value: 8.3286
- type: nauc_recall_at_5_diff1
value: 43.9835
- type: nauc_recall_at_10_max
value: 47.025099999999995
- type: nauc_recall_at_10_std
value: 11.6536
- type: nauc_recall_at_10_diff1
value: 40.012100000000004
- type: nauc_recall_at_20_max
value: 48.6934
- type: nauc_recall_at_20_std
value: 14.212
- type: nauc_recall_at_20_diff1
value: 40.1054
- type: nauc_recall_at_100_max
value: 54.1462
- type: nauc_recall_at_100_std
value: 34.3519
- type: nauc_recall_at_100_diff1
value: 30.826900000000002
- type: nauc_recall_at_1000_max
value: 71.5059
- type: nauc_recall_at_1000_std
value: 62.956599999999995
- type: nauc_recall_at_1000_diff1
value: 26.123800000000003
- type: nauc_precision_at_1_max
value: 47.1153
- type: nauc_precision_at_1_std
value: 3.4273
- type: nauc_precision_at_1_diff1
value: 59.028000000000006
- type: nauc_precision_at_3_max
value: 47.5843
- type: nauc_precision_at_3_std
value: 7.9499
- type: nauc_precision_at_3_diff1
value: 47.7843
- type: nauc_precision_at_5_max
value: 46.8183
- type: nauc_precision_at_5_std
value: 8.3286
- type: nauc_precision_at_5_diff1
value: 43.9835
- type: nauc_precision_at_10_max
value: 47.025099999999995
- type: nauc_precision_at_10_std
value: 11.6536
- type: nauc_precision_at_10_diff1
value: 40.012100000000004
- type: nauc_precision_at_20_max
value: 48.6934
- type: nauc_precision_at_20_std
value: 14.212
- type: nauc_precision_at_20_diff1
value: 40.1054
- type: nauc_precision_at_100_max
value: 54.1462
- type: nauc_precision_at_100_std
value: 34.3519
- type: nauc_precision_at_100_diff1
value: 30.826900000000002
- type: nauc_precision_at_1000_max
value: 71.5059
- type: nauc_precision_at_1000_std
value: 62.956599999999995
- type: nauc_precision_at_1000_diff1
value: 26.123800000000003
- type: nauc_mrr_at_1_max
value: 47.1153
- type: nauc_mrr_at_1_std
value: 3.4273
- type: nauc_mrr_at_1_diff1
value: 59.028000000000006
- type: nauc_mrr_at_3_max
value: 47.475
- type: nauc_mrr_at_3_std
value: 5.5253
- type: nauc_mrr_at_3_diff1
value: 53.9536
- type: nauc_mrr_at_5_max
value: 47.2987
- type: nauc_mrr_at_5_std
value: 5.6127
- type: nauc_mrr_at_5_diff1
value: 53.151700000000005
- type: nauc_mrr_at_10_max
value: 47.307300000000005
- type: nauc_mrr_at_10_std
value: 5.9255
- type: nauc_mrr_at_10_diff1
value: 52.8381
- type: nauc_mrr_at_20_max
value: 47.3942
- type: nauc_mrr_at_20_std
value: 5.992100000000001
- type: nauc_mrr_at_20_diff1
value: 52.9637
- type: nauc_mrr_at_100_max
value: 47.448800000000006
- type: nauc_mrr_at_100_std
value: 6.1400999999999994
- type: nauc_mrr_at_100_diff1
value: 52.97690000000001
- type: nauc_mrr_at_1000_max
value: 47.4484
- type: nauc_mrr_at_1000_std
value: 6.1112
- type: nauc_mrr_at_1000_diff1
value: 53.0145
- type: main_score
value: 48.274
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-ara)
type: facebook/mlqa
config: deu-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 29.958000000000002
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 40.552
- type: ndcg_at_10
value: 43.376999999999995
- type: ndcg_at_20
value: 45.613
- type: ndcg_at_100
value: 48.671
- type: ndcg_at_1000
value: 50.554
- type: map_at_1
value: 29.958000000000002
- type: map_at_3
value: 35.86
- type: map_at_5
value: 37.391000000000005
- type: map_at_10
value: 38.557
- type: map_at_20
value: 39.162
- type: map_at_100
value: 39.581
- type: map_at_1000
value: 39.647
- type: recall_at_1
value: 29.958000000000002
- type: recall_at_3
value: 43.36
- type: recall_at_5
value: 50.090999999999994
- type: recall_at_10
value: 58.824
- type: recall_at_20
value: 67.738
- type: recall_at_100
value: 84.294
- type: recall_at_1000
value: 99.394
- type: precision_at_1
value: 29.958000000000002
- type: precision_at_3
value: 14.453
- type: precision_at_5
value: 10.018
- type: precision_at_10
value: 5.882
- type: precision_at_20
value: 3.3869999999999996
- type: precision_at_100
value: 0.843
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 29.9576
- type: mrr_at_3
value: 35.8601
- type: mrr_at_5
value: 37.3913
- type: mrr_at_10
value: 38.5573
- type: mrr_at_20
value: 39.162
- type: mrr_at_100
value: 39.5807
- type: mrr_at_1000
value: 39.6467
- type: nauc_ndcg_at_1_max
value: 52.1125
- type: nauc_ndcg_at_1_std
value: 6.8635
- type: nauc_ndcg_at_1_diff1
value: 55.853699999999996
- type: nauc_ndcg_at_3_max
value: 51.9481
- type: nauc_ndcg_at_3_std
value: 10.0406
- type: nauc_ndcg_at_3_diff1
value: 49.3114
- type: nauc_ndcg_at_5_max
value: 51.730900000000005
- type: nauc_ndcg_at_5_std
value: 11.7259
- type: nauc_ndcg_at_5_diff1
value: 47.0463
- type: nauc_ndcg_at_10_max
value: 51.0169
- type: nauc_ndcg_at_10_std
value: 11.9733
- type: nauc_ndcg_at_10_diff1
value: 45.7934
- type: nauc_ndcg_at_20_max
value: 50.9552
- type: nauc_ndcg_at_20_std
value: 12.5508
- type: nauc_ndcg_at_20_diff1
value: 45.4673
- type: nauc_ndcg_at_100_max
value: 51.207800000000006
- type: nauc_ndcg_at_100_std
value: 12.7859
- type: nauc_ndcg_at_100_diff1
value: 46.4388
- type: nauc_ndcg_at_1000_max
value: 51.4648
- type: nauc_ndcg_at_1000_std
value: 11.9752
- type: nauc_ndcg_at_1000_diff1
value: 47.3814
- type: nauc_map_at_1_max
value: 52.1125
- type: nauc_map_at_1_std
value: 6.8635
- type: nauc_map_at_1_diff1
value: 55.853699999999996
- type: nauc_map_at_3_max
value: 52.0278
- type: nauc_map_at_3_std
value: 9.2962
- type: nauc_map_at_3_diff1
value: 50.8881
- type: nauc_map_at_5_max
value: 51.9123
- type: nauc_map_at_5_std
value: 10.2351
- type: nauc_map_at_5_diff1
value: 49.6413
- type: nauc_map_at_10_max
value: 51.6105
- type: nauc_map_at_10_std
value: 10.3094
- type: nauc_map_at_10_diff1
value: 49.1541
- type: nauc_map_at_20_max
value: 51.6124
- type: nauc_map_at_20_std
value: 10.4738
- type: nauc_map_at_20_diff1
value: 49.0843
- type: nauc_map_at_100_max
value: 51.660700000000006
- type: nauc_map_at_100_std
value: 10.5072
- type: nauc_map_at_100_diff1
value: 49.228699999999996
- type: nauc_map_at_1000_max
value: 51.673199999999994
- type: nauc_map_at_1000_std
value: 10.4973
- type: nauc_map_at_1000_diff1
value: 49.2533
- type: nauc_recall_at_1_max
value: 52.1125
- type: nauc_recall_at_1_std
value: 6.8635
- type: nauc_recall_at_1_diff1
value: 55.853699999999996
- type: nauc_recall_at_3_max
value: 51.7055
- type: nauc_recall_at_3_std
value: 12.1475
- type: nauc_recall_at_3_diff1
value: 44.8305
- type: nauc_recall_at_5_max
value: 51.1529
- type: nauc_recall_at_5_std
value: 16.2625
- type: nauc_recall_at_5_diff1
value: 39.211400000000005
- type: nauc_recall_at_10_max
value: 48.8181
- type: nauc_recall_at_10_std
value: 17.5707
- type: nauc_recall_at_10_diff1
value: 34.3632
- type: nauc_recall_at_20_max
value: 48.024899999999995
- type: nauc_recall_at_20_std
value: 21.0431
- type: nauc_recall_at_20_diff1
value: 30.9652
- type: nauc_recall_at_100_max
value: 47.9518
- type: nauc_recall_at_100_std
value: 29.650199999999998
- type: nauc_recall_at_100_diff1
value: 30.1396
- type: nauc_recall_at_1000_max
value: 56.8226
- type: nauc_recall_at_1000_std
value: 65.794
- type: nauc_recall_at_1000_diff1
value: 27.686899999999998
- type: nauc_precision_at_1_max
value: 52.1125
- type: nauc_precision_at_1_std
value: 6.8635
- type: nauc_precision_at_1_diff1
value: 55.853699999999996
- type: nauc_precision_at_3_max
value: 51.7055
- type: nauc_precision_at_3_std
value: 12.1475
- type: nauc_precision_at_3_diff1
value: 44.8305
- type: nauc_precision_at_5_max
value: 51.1529
- type: nauc_precision_at_5_std
value: 16.2625
- type: nauc_precision_at_5_diff1
value: 39.211400000000005
- type: nauc_precision_at_10_max
value: 48.8181
- type: nauc_precision_at_10_std
value: 17.5707
- type: nauc_precision_at_10_diff1
value: 34.3632
- type: nauc_precision_at_20_max
value: 48.024899999999995
- type: nauc_precision_at_20_std
value: 21.0431
- type: nauc_precision_at_20_diff1
value: 30.9652
- type: nauc_precision_at_100_max
value: 47.9518
- type: nauc_precision_at_100_std
value: 29.650199999999998
- type: nauc_precision_at_100_diff1
value: 30.1396
- type: nauc_precision_at_1000_max
value: 56.8226
- type: nauc_precision_at_1000_std
value: 65.794
- type: nauc_precision_at_1000_diff1
value: 27.686899999999998
- type: nauc_mrr_at_1_max
value: 52.1125
- type: nauc_mrr_at_1_std
value: 6.8635
- type: nauc_mrr_at_1_diff1
value: 55.853699999999996
- type: nauc_mrr_at_3_max
value: 52.0278
- type: nauc_mrr_at_3_std
value: 9.2962
- type: nauc_mrr_at_3_diff1
value: 50.8881
- type: nauc_mrr_at_5_max
value: 51.9123
- type: nauc_mrr_at_5_std
value: 10.2351
- type: nauc_mrr_at_5_diff1
value: 49.6413
- type: nauc_mrr_at_10_max
value: 51.6105
- type: nauc_mrr_at_10_std
value: 10.3094
- type: nauc_mrr_at_10_diff1
value: 49.1541
- type: nauc_mrr_at_20_max
value: 51.6124
- type: nauc_mrr_at_20_std
value: 10.4738
- type: nauc_mrr_at_20_diff1
value: 49.0843
- type: nauc_mrr_at_100_max
value: 51.660700000000006
- type: nauc_mrr_at_100_std
value: 10.5072
- type: nauc_mrr_at_100_diff1
value: 49.228699999999996
- type: nauc_mrr_at_1000_max
value: 51.673199999999994
- type: nauc_mrr_at_1000_std
value: 10.4973
- type: nauc_mrr_at_1000_diff1
value: 49.2533
- type: main_score
value: 43.376999999999995
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-ara)
type: facebook/mlqa
config: eng-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 26.663999999999998
- type: ndcg_at_3
value: 33.85
- type: ndcg_at_5
value: 36.004000000000005
- type: ndcg_at_10
value: 38.4
- type: ndcg_at_20
value: 40.338
- type: ndcg_at_100
value: 43.419000000000004
- type: ndcg_at_1000
value: 45.631
- type: map_at_1
value: 26.655
- type: map_at_3
value: 32.099
- type: map_at_5
value: 33.29
- type: map_at_10
value: 34.278999999999996
- type: map_at_20
value: 34.813
- type: map_at_100
value: 35.221000000000004
- type: map_at_1000
value: 35.297
- type: recall_at_1
value: 26.655
- type: recall_at_3
value: 38.899
- type: recall_at_5
value: 44.15
- type: recall_at_10
value: 51.556000000000004
- type: recall_at_20
value: 59.207
- type: recall_at_100
value: 76.074
- type: recall_at_1000
value: 93.915
- type: precision_at_1
value: 26.663999999999998
- type: precision_at_3
value: 12.97
- type: precision_at_5
value: 8.831999999999999
- type: precision_at_10
value: 5.157
- type: precision_at_20
value: 2.9610000000000003
- type: precision_at_100
value: 0.761
- type: precision_at_1000
value: 0.094
- type: mrr_at_1
value: 26.664199999999997
- type: mrr_at_3
value: 32.1083
- type: mrr_at_5
value: 33.299
- type: mrr_at_10
value: 34.2886
- type: mrr_at_20
value: 34.8219
- type: mrr_at_100
value: 35.2302
- type: mrr_at_1000
value: 35.3063
- type: nauc_ndcg_at_1_max
value: 48.4014
- type: nauc_ndcg_at_1_std
value: 11.304
- type: nauc_ndcg_at_1_diff1
value: 54.139199999999995
- type: nauc_ndcg_at_3_max
value: 49.1937
- type: nauc_ndcg_at_3_std
value: 13.9525
- type: nauc_ndcg_at_3_diff1
value: 48.137
- type: nauc_ndcg_at_5_max
value: 49.235299999999995
- type: nauc_ndcg_at_5_std
value: 15.0341
- type: nauc_ndcg_at_5_diff1
value: 46.8281
- type: nauc_ndcg_at_10_max
value: 48.9836
- type: nauc_ndcg_at_10_std
value: 15.8809
- type: nauc_ndcg_at_10_diff1
value: 45.3256
- type: nauc_ndcg_at_20_max
value: 48.924299999999995
- type: nauc_ndcg_at_20_std
value: 16.6435
- type: nauc_ndcg_at_20_diff1
value: 45.047
- type: nauc_ndcg_at_100_max
value: 49.1173
- type: nauc_ndcg_at_100_std
value: 17.5779
- type: nauc_ndcg_at_100_diff1
value: 45.285199999999996
- type: nauc_ndcg_at_1000_max
value: 49.2097
- type: nauc_ndcg_at_1000_std
value: 16.829900000000002
- type: nauc_ndcg_at_1000_diff1
value: 46.0814
- type: nauc_map_at_1_max
value: 48.3592
- type: nauc_map_at_1_std
value: 11.2728
- type: nauc_map_at_1_diff1
value: 54.098
- type: nauc_map_at_3_max
value: 49.0619
- type: nauc_map_at_3_std
value: 13.3646
- type: nauc_map_at_3_diff1
value: 49.473800000000004
- type: nauc_map_at_5_max
value: 49.0995
- type: nauc_map_at_5_std
value: 13.974900000000002
- type: nauc_map_at_5_diff1
value: 48.7481
- type: nauc_map_at_10_max
value: 49.0016
- type: nauc_map_at_10_std
value: 14.336099999999998
- type: nauc_map_at_10_diff1
value: 48.1301
- type: nauc_map_at_20_max
value: 48.9681
- type: nauc_map_at_20_std
value: 14.5174
- type: nauc_map_at_20_diff1
value: 48.0536
- type: nauc_map_at_100_max
value: 48.997299999999996
- type: nauc_map_at_100_std
value: 14.6347
- type: nauc_map_at_100_diff1
value: 48.0899
- type: nauc_map_at_1000_max
value: 49.0003
- type: nauc_map_at_1000_std
value: 14.6138
- type: nauc_map_at_1000_diff1
value: 48.1148
- type: nauc_recall_at_1_max
value: 48.3592
- type: nauc_recall_at_1_std
value: 11.2728
- type: nauc_recall_at_1_diff1
value: 54.098
- type: nauc_recall_at_3_max
value: 49.490899999999996
- type: nauc_recall_at_3_std
value: 15.5245
- type: nauc_recall_at_3_diff1
value: 44.4469
- type: nauc_recall_at_5_max
value: 49.53
- type: nauc_recall_at_5_std
value: 18.0626
- type: nauc_recall_at_5_diff1
value: 41.3084
- type: nauc_recall_at_10_max
value: 48.734899999999996
- type: nauc_recall_at_10_std
value: 20.7001
- type: nauc_recall_at_10_diff1
value: 36.5179
- type: nauc_recall_at_20_max
value: 48.6031
- type: nauc_recall_at_20_std
value: 24.435100000000002
- type: nauc_recall_at_20_diff1
value: 34.7265
- type: nauc_recall_at_100_max
value: 49.8486
- type: nauc_recall_at_100_std
value: 35.1908
- type: nauc_recall_at_100_diff1
value: 32.034400000000005
- type: nauc_recall_at_1000_max
value: 55.304500000000004
- type: nauc_recall_at_1000_std
value: 47.902
- type: nauc_recall_at_1000_diff1
value: 31.4755
- type: nauc_precision_at_1_max
value: 48.4014
- type: nauc_precision_at_1_std
value: 11.304
- type: nauc_precision_at_1_diff1
value: 54.139199999999995
- type: nauc_precision_at_3_max
value: 49.533899999999996
- type: nauc_precision_at_3_std
value: 15.553700000000001
- type: nauc_precision_at_3_diff1
value: 44.4901
- type: nauc_precision_at_5_max
value: 49.5772
- type: nauc_precision_at_5_std
value: 18.0933
- type: nauc_precision_at_5_diff1
value: 41.3553
- type: nauc_precision_at_10_max
value: 48.787000000000006
- type: nauc_precision_at_10_std
value: 20.7335
- type: nauc_precision_at_10_diff1
value: 36.5688
- type: nauc_precision_at_20_max
value: 48.6597
- type: nauc_precision_at_20_std
value: 24.4717
- type: nauc_precision_at_20_diff1
value: 34.781600000000005
- type: nauc_precision_at_100_max
value: 49.9243
- type: nauc_precision_at_100_std
value: 35.3133
- type: nauc_precision_at_100_diff1
value: 32.0868
- type: nauc_precision_at_1000_max
value: 55.517300000000006
- type: nauc_precision_at_1000_std
value: 48.249900000000004
- type: nauc_precision_at_1000_diff1
value: 31.736399999999996
- type: nauc_mrr_at_1_max
value: 48.4014
- type: nauc_mrr_at_1_std
value: 11.304
- type: nauc_mrr_at_1_diff1
value: 54.139199999999995
- type: nauc_mrr_at_3_max
value: 49.102000000000004
- type: nauc_mrr_at_3_std
value: 13.394
- type: nauc_mrr_at_3_diff1
value: 49.5138
- type: nauc_mrr_at_5_max
value: 49.1397
- type: nauc_mrr_at_5_std
value: 14.0043
- type: nauc_mrr_at_5_diff1
value: 48.7883
- type: nauc_mrr_at_10_max
value: 49.0419
- type: nauc_mrr_at_10_std
value: 14.3656
- type: nauc_mrr_at_10_diff1
value: 48.1706
- type: nauc_mrr_at_20_max
value: 49.0087
- type: nauc_mrr_at_20_std
value: 14.546999999999999
- type: nauc_mrr_at_20_diff1
value: 48.094300000000004
- type: nauc_mrr_at_100_max
value: 49.038
- type: nauc_mrr_at_100_std
value: 14.6651
- type: nauc_mrr_at_100_diff1
value: 48.1306
- type: nauc_mrr_at_1000_max
value: 49.0404
- type: nauc_mrr_at_1000_std
value: 14.6437
- type: nauc_mrr_at_1000_diff1
value: 48.1549
- type: main_score
value: 38.4
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-ara)
type: facebook/mlqa
config: spa-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 33.367000000000004
- type: ndcg_at_3
value: 42.068
- type: ndcg_at_5
value: 44.79
- type: ndcg_at_10
value: 47.372
- type: ndcg_at_20
value: 49.409
- type: ndcg_at_100
value: 52.25
- type: ndcg_at_1000
value: 53.857
- type: map_at_1
value: 33.367000000000004
- type: map_at_3
value: 39.922000000000004
- type: map_at_5
value: 41.429
- type: map_at_10
value: 42.504999999999995
- type: map_at_20
value: 43.073
- type: map_at_100
value: 43.475
- type: map_at_1000
value: 43.533
- type: recall_at_1
value: 33.367000000000004
- type: recall_at_3
value: 48.281
- type: recall_at_5
value: 54.903999999999996
- type: recall_at_10
value: 62.841
- type: recall_at_20
value: 70.829
- type: recall_at_100
value: 85.996
- type: recall_at_1000
value: 98.787
- type: precision_at_1
value: 33.367000000000004
- type: precision_at_3
value: 16.094
- type: precision_at_5
value: 10.981
- type: precision_at_10
value: 6.283999999999999
- type: precision_at_20
value: 3.5409999999999995
- type: precision_at_100
value: 0.86
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 33.367000000000004
- type: mrr_at_3
value: 39.9225
- type: mrr_at_5
value: 41.429100000000005
- type: mrr_at_10
value: 42.5047
- type: mrr_at_20
value: 43.0729
- type: mrr_at_100
value: 43.475
- type: mrr_at_1000
value: 43.5325
- type: nauc_ndcg_at_1_max
value: 49.215599999999995
- type: nauc_ndcg_at_1_std
value: 7.7847
- type: nauc_ndcg_at_1_diff1
value: 53.823600000000006
- type: nauc_ndcg_at_3_max
value: 51.518299999999996
- type: nauc_ndcg_at_3_std
value: 13.1004
- type: nauc_ndcg_at_3_diff1
value: 46.4315
- type: nauc_ndcg_at_5_max
value: 51.4275
- type: nauc_ndcg_at_5_std
value: 13.7658
- type: nauc_ndcg_at_5_diff1
value: 45.703700000000005
- type: nauc_ndcg_at_10_max
value: 51.417500000000004
- type: nauc_ndcg_at_10_std
value: 14.5612
- type: nauc_ndcg_at_10_diff1
value: 45.1057
- type: nauc_ndcg_at_20_max
value: 51.67850000000001
- type: nauc_ndcg_at_20_std
value: 15.228
- type: nauc_ndcg_at_20_diff1
value: 45.2585
- type: nauc_ndcg_at_100_max
value: 51.68509999999999
- type: nauc_ndcg_at_100_std
value: 15.265400000000001
- type: nauc_ndcg_at_100_diff1
value: 46.299600000000005
- type: nauc_ndcg_at_1000_max
value: 51.453199999999995
- type: nauc_ndcg_at_1000_std
value: 14.1539
- type: nauc_ndcg_at_1000_diff1
value: 46.7368
- type: nauc_map_at_1_max
value: 49.215599999999995
- type: nauc_map_at_1_std
value: 7.7847
- type: nauc_map_at_1_diff1
value: 53.823600000000006
- type: nauc_map_at_3_max
value: 51.047
- type: nauc_map_at_3_std
value: 11.772499999999999
- type: nauc_map_at_3_diff1
value: 48.3261
- type: nauc_map_at_5_max
value: 51.0005
- type: nauc_map_at_5_std
value: 12.1281
- type: nauc_map_at_5_diff1
value: 47.9407
- type: nauc_map_at_10_max
value: 50.968
- type: nauc_map_at_10_std
value: 12.4076
- type: nauc_map_at_10_diff1
value: 47.7427
- type: nauc_map_at_20_max
value: 51.0379
- type: nauc_map_at_20_std
value: 12.5755
- type: nauc_map_at_20_diff1
value: 47.824
- type: nauc_map_at_100_max
value: 51.045399999999994
- type: nauc_map_at_100_std
value: 12.5665
- type: nauc_map_at_100_diff1
value: 47.9852
- type: nauc_map_at_1000_max
value: 51.0328
- type: nauc_map_at_1000_std
value: 12.5251
- type: nauc_map_at_1000_diff1
value: 47.9978
- type: nauc_recall_at_1_max
value: 49.215599999999995
- type: nauc_recall_at_1_std
value: 7.7847
- type: nauc_recall_at_1_diff1
value: 53.823600000000006
- type: nauc_recall_at_3_max
value: 52.8468
- type: nauc_recall_at_3_std
value: 16.9595
- type: nauc_recall_at_3_diff1
value: 40.906
- type: nauc_recall_at_5_max
value: 52.6566
- type: nauc_recall_at_5_std
value: 18.8317
- type: nauc_recall_at_5_diff1
value: 38.7903
- type: nauc_recall_at_10_max
value: 52.9016
- type: nauc_recall_at_10_std
value: 22.2713
- type: nauc_recall_at_10_diff1
value: 35.8589
- type: nauc_recall_at_20_max
value: 54.415400000000005
- type: nauc_recall_at_20_std
value: 26.8639
- type: nauc_recall_at_20_diff1
value: 34.7889
- type: nauc_recall_at_100_max
value: 56.409200000000006
- type: nauc_recall_at_100_std
value: 37.061699999999995
- type: nauc_recall_at_100_diff1
value: 37.7855
- type: nauc_recall_at_1000_max
value: 66.6721
- type: nauc_recall_at_1000_std
value: 52.0995
- type: nauc_recall_at_1000_diff1
value: 38.8158
- type: nauc_precision_at_1_max
value: 49.215599999999995
- type: nauc_precision_at_1_std
value: 7.7847
- type: nauc_precision_at_1_diff1
value: 53.823600000000006
- type: nauc_precision_at_3_max
value: 52.8468
- type: nauc_precision_at_3_std
value: 16.9595
- type: nauc_precision_at_3_diff1
value: 40.906
- type: nauc_precision_at_5_max
value: 52.6566
- type: nauc_precision_at_5_std
value: 18.8317
- type: nauc_precision_at_5_diff1
value: 38.7903
- type: nauc_precision_at_10_max
value: 52.9016
- type: nauc_precision_at_10_std
value: 22.2713
- type: nauc_precision_at_10_diff1
value: 35.8589
- type: nauc_precision_at_20_max
value: 54.415400000000005
- type: nauc_precision_at_20_std
value: 26.8639
- type: nauc_precision_at_20_diff1
value: 34.7889
- type: nauc_precision_at_100_max
value: 56.409200000000006
- type: nauc_precision_at_100_std
value: 37.061699999999995
- type: nauc_precision_at_100_diff1
value: 37.7855
- type: nauc_precision_at_1000_max
value: 66.6721
- type: nauc_precision_at_1000_std
value: 52.0995
- type: nauc_precision_at_1000_diff1
value: 38.8158
- type: nauc_mrr_at_1_max
value: 49.215599999999995
- type: nauc_mrr_at_1_std
value: 7.7847
- type: nauc_mrr_at_1_diff1
value: 53.823600000000006
- type: nauc_mrr_at_3_max
value: 51.047
- type: nauc_mrr_at_3_std
value: 11.772499999999999
- type: nauc_mrr_at_3_diff1
value: 48.3261
- type: nauc_mrr_at_5_max
value: 51.0005
- type: nauc_mrr_at_5_std
value: 12.1281
- type: nauc_mrr_at_5_diff1
value: 47.9407
- type: nauc_mrr_at_10_max
value: 50.968
- type: nauc_mrr_at_10_std
value: 12.4076
- type: nauc_mrr_at_10_diff1
value: 47.7427
- type: nauc_mrr_at_20_max
value: 51.0379
- type: nauc_mrr_at_20_std
value: 12.5755
- type: nauc_mrr_at_20_diff1
value: 47.824
- type: nauc_mrr_at_100_max
value: 51.045399999999994
- type: nauc_mrr_at_100_std
value: 12.5665
- type: nauc_mrr_at_100_diff1
value: 47.9852
- type: nauc_mrr_at_1000_max
value: 51.0328
- type: nauc_mrr_at_1000_std
value: 12.5251
- type: nauc_mrr_at_1000_diff1
value: 47.9978
- type: main_score
value: 47.372
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (hin-ara)
type: facebook/mlqa
config: hin-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 30.639
- type: ndcg_at_3
value: 39.347
- type: ndcg_at_5
value: 42.077
- type: ndcg_at_10
value: 44.619
- type: ndcg_at_20
value: 46.698
- type: ndcg_at_100
value: 49.834
- type: ndcg_at_1000
value: 51.556999999999995
- type: map_at_1
value: 30.639
- type: map_at_3
value: 37.22
- type: map_at_5
value: 38.727000000000004
- type: map_at_10
value: 39.786
- type: map_at_20
value: 40.354
- type: map_at_100
value: 40.776
- type: map_at_1000
value: 40.841
- type: recall_at_1
value: 30.639
- type: recall_at_3
value: 45.494
- type: recall_at_5
value: 52.157
- type: recall_at_10
value: 59.967000000000006
- type: recall_at_20
value: 68.214
- type: recall_at_100
value: 85.309
- type: recall_at_1000
value: 98.908
- type: precision_at_1
value: 30.639
- type: precision_at_3
value: 15.165000000000001
- type: precision_at_5
value: 10.431
- type: precision_at_10
value: 5.997
- type: precision_at_20
value: 3.411
- type: precision_at_100
value: 0.853
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 30.639
- type: mrr_at_3
value: 37.2201
- type: mrr_at_5
value: 38.7275
- type: mrr_at_10
value: 39.7862
- type: mrr_at_20
value: 40.3536
- type: mrr_at_100
value: 40.7763
- type: mrr_at_1000
value: 40.8406
- type: nauc_ndcg_at_1_max
value: 47.3997
- type: nauc_ndcg_at_1_std
value: 4.6415
- type: nauc_ndcg_at_1_diff1
value: 55.2295
- type: nauc_ndcg_at_3_max
value: 51.1166
- type: nauc_ndcg_at_3_std
value: 8.8196
- type: nauc_ndcg_at_3_diff1
value: 49.119
- type: nauc_ndcg_at_5_max
value: 50.242200000000004
- type: nauc_ndcg_at_5_std
value: 8.5755
- type: nauc_ndcg_at_5_diff1
value: 47.6155
- type: nauc_ndcg_at_10_max
value: 50.213499999999996
- type: nauc_ndcg_at_10_std
value: 9.2496
- type: nauc_ndcg_at_10_diff1
value: 47.3074
- type: nauc_ndcg_at_20_max
value: 50.43299999999999
- type: nauc_ndcg_at_20_std
value: 9.2624
- type: nauc_ndcg_at_20_diff1
value: 47.249
- type: nauc_ndcg_at_100_max
value: 50.8598
- type: nauc_ndcg_at_100_std
value: 10.513300000000001
- type: nauc_ndcg_at_100_diff1
value: 47.928599999999996
- type: nauc_ndcg_at_1000_max
value: 50.3282
- type: nauc_ndcg_at_1000_std
value: 9.3475
- type: nauc_ndcg_at_1000_diff1
value: 48.4022
- type: nauc_map_at_1_max
value: 47.3997
- type: nauc_map_at_1_std
value: 4.6415
- type: nauc_map_at_1_diff1
value: 55.2295
- type: nauc_map_at_3_max
value: 50.33879999999999
- type: nauc_map_at_3_std
value: 8.0053
- type: nauc_map_at_3_diff1
value: 50.4792
- type: nauc_map_at_5_max
value: 49.7955
- type: nauc_map_at_5_std
value: 7.7969
- type: nauc_map_at_5_diff1
value: 49.6566
- type: nauc_map_at_10_max
value: 49.7532
- type: nauc_map_at_10_std
value: 8.032300000000001
- type: nauc_map_at_10_diff1
value: 49.548500000000004
- type: nauc_map_at_20_max
value: 49.8138
- type: nauc_map_at_20_std
value: 8.0091
- type: nauc_map_at_20_diff1
value: 49.5634
- type: nauc_map_at_100_max
value: 49.8475
- type: nauc_map_at_100_std
value: 8.132399999999999
- type: nauc_map_at_100_diff1
value: 49.6456
- type: nauc_map_at_1000_max
value: 49.830600000000004
- type: nauc_map_at_1000_std
value: 8.0998
- type: nauc_map_at_1000_diff1
value: 49.6603
- type: nauc_recall_at_1_max
value: 47.3997
- type: nauc_recall_at_1_std
value: 4.6415
- type: nauc_recall_at_1_diff1
value: 55.2295
- type: nauc_recall_at_3_max
value: 53.295899999999996
- type: nauc_recall_at_3_std
value: 11.0735
- type: nauc_recall_at_3_diff1
value: 45.2698
- type: nauc_recall_at_5_max
value: 51.4516
- type: nauc_recall_at_5_std
value: 10.8415
- type: nauc_recall_at_5_diff1
value: 41.4249
- type: nauc_recall_at_10_max
value: 51.6187
- type: nauc_recall_at_10_std
value: 13.4603
- type: nauc_recall_at_10_diff1
value: 39.8822
- type: nauc_recall_at_20_max
value: 52.849500000000006
- type: nauc_recall_at_20_std
value: 14.3943
- type: nauc_recall_at_20_diff1
value: 38.2481
- type: nauc_recall_at_100_max
value: 60.366699999999994
- type: nauc_recall_at_100_std
value: 34.2108
- type: nauc_recall_at_100_diff1
value: 38.5689
- type: nauc_recall_at_1000_max
value: 59.54429999999999
- type: nauc_recall_at_1000_std
value: 57.35059999999999
- type: nauc_recall_at_1000_diff1
value: 30.331999999999997
- type: nauc_precision_at_1_max
value: 47.3997
- type: nauc_precision_at_1_std
value: 4.6415
- type: nauc_precision_at_1_diff1
value: 55.2295
- type: nauc_precision_at_3_max
value: 53.295899999999996
- type: nauc_precision_at_3_std
value: 11.0735
- type: nauc_precision_at_3_diff1
value: 45.2698
- type: nauc_precision_at_5_max
value: 51.4516
- type: nauc_precision_at_5_std
value: 10.8415
- type: nauc_precision_at_5_diff1
value: 41.4249
- type: nauc_precision_at_10_max
value: 51.6187
- type: nauc_precision_at_10_std
value: 13.4603
- type: nauc_precision_at_10_diff1
value: 39.8822
- type: nauc_precision_at_20_max
value: 52.849500000000006
- type: nauc_precision_at_20_std
value: 14.3943
- type: nauc_precision_at_20_diff1
value: 38.2481
- type: nauc_precision_at_100_max
value: 60.366699999999994
- type: nauc_precision_at_100_std
value: 34.2108
- type: nauc_precision_at_100_diff1
value: 38.5689
- type: nauc_precision_at_1000_max
value: 59.54429999999999
- type: nauc_precision_at_1000_std
value: 57.35059999999999
- type: nauc_precision_at_1000_diff1
value: 30.331999999999997
- type: nauc_mrr_at_1_max
value: 47.3997
- type: nauc_mrr_at_1_std
value: 4.6415
- type: nauc_mrr_at_1_diff1
value: 55.2295
- type: nauc_mrr_at_3_max
value: 50.33879999999999
- type: nauc_mrr_at_3_std
value: 8.0053
- type: nauc_mrr_at_3_diff1
value: 50.4792
- type: nauc_mrr_at_5_max
value: 49.7955
- type: nauc_mrr_at_5_std
value: 7.7969
- type: nauc_mrr_at_5_diff1
value: 49.6566
- type: nauc_mrr_at_10_max
value: 49.7532
- type: nauc_mrr_at_10_std
value: 8.032300000000001
- type: nauc_mrr_at_10_diff1
value: 49.548500000000004
- type: nauc_mrr_at_20_max
value: 49.8138
- type: nauc_mrr_at_20_std
value: 8.0091
- type: nauc_mrr_at_20_diff1
value: 49.5634
- type: nauc_mrr_at_100_max
value: 49.8475
- type: nauc_mrr_at_100_std
value: 8.132399999999999
- type: nauc_mrr_at_100_diff1
value: 49.6456
- type: nauc_mrr_at_1000_max
value: 49.830600000000004
- type: nauc_mrr_at_1000_std
value: 8.0998
- type: nauc_mrr_at_1000_diff1
value: 49.6603
- type: main_score
value: 44.619
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (vie-ara)
type: facebook/mlqa
config: vie-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 27.699
- type: ndcg_at_3
value: 35.978
- type: ndcg_at_5
value: 38.494
- type: ndcg_at_10
value: 41.17
- type: ndcg_at_20
value: 43.34
- type: ndcg_at_100
value: 46.44
- type: ndcg_at_1000
value: 48.534
- type: map_at_1
value: 27.699
- type: map_at_3
value: 33.928000000000004
- type: map_at_5
value: 35.325
- type: map_at_10
value: 36.433
- type: map_at_20
value: 37.033
- type: map_at_100
value: 37.46
- type: map_at_1000
value: 37.536
- type: recall_at_1
value: 27.699
- type: recall_at_3
value: 41.915
- type: recall_at_5
value: 48.021
- type: recall_at_10
value: 56.277
- type: recall_at_20
value: 64.827
- type: recall_at_100
value: 81.583
- type: recall_at_1000
value: 98.241
- type: precision_at_1
value: 27.699
- type: precision_at_3
value: 13.972000000000001
- type: precision_at_5
value: 9.604
- type: precision_at_10
value: 5.628
- type: precision_at_20
value: 3.241
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.098
- type: mrr_at_1
value: 27.699099999999998
- type: mrr_at_3
value: 33.9277
- type: mrr_at_5
value: 35.3249
- type: mrr_at_10
value: 36.433
- type: mrr_at_20
value: 37.033
- type: mrr_at_100
value: 37.460300000000004
- type: mrr_at_1000
value: 37.5364
- type: nauc_ndcg_at_1_max
value: 47.9902
- type: nauc_ndcg_at_1_std
value: 11.7877
- type: nauc_ndcg_at_1_diff1
value: 53.30009999999999
- type: nauc_ndcg_at_3_max
value: 48.7976
- type: nauc_ndcg_at_3_std
value: 14.285700000000002
- type: nauc_ndcg_at_3_diff1
value: 44.9715
- type: nauc_ndcg_at_5_max
value: 48.1773
- type: nauc_ndcg_at_5_std
value: 15.2027
- type: nauc_ndcg_at_5_diff1
value: 42.6697
- type: nauc_ndcg_at_10_max
value: 47.9669
- type: nauc_ndcg_at_10_std
value: 16.245
- type: nauc_ndcg_at_10_diff1
value: 41.7466
- type: nauc_ndcg_at_20_max
value: 47.5711
- type: nauc_ndcg_at_20_std
value: 16.6753
- type: nauc_ndcg_at_20_diff1
value: 41.3274
- type: nauc_ndcg_at_100_max
value: 48.157
- type: nauc_ndcg_at_100_std
value: 17.7415
- type: nauc_ndcg_at_100_diff1
value: 41.8455
- type: nauc_ndcg_at_1000_max
value: 48.0416
- type: nauc_ndcg_at_1000_std
value: 16.4432
- type: nauc_ndcg_at_1000_diff1
value: 42.96
- type: nauc_map_at_1_max
value: 47.9902
- type: nauc_map_at_1_std
value: 11.7877
- type: nauc_map_at_1_diff1
value: 53.30009999999999
- type: nauc_map_at_3_max
value: 48.605399999999996
- type: nauc_map_at_3_std
value: 13.7193
- type: nauc_map_at_3_diff1
value: 46.8232
- type: nauc_map_at_5_max
value: 48.2739
- type: nauc_map_at_5_std
value: 14.2215
- type: nauc_map_at_5_diff1
value: 45.5511
- type: nauc_map_at_10_max
value: 48.2171
- type: nauc_map_at_10_std
value: 14.6616
- type: nauc_map_at_10_diff1
value: 45.204699999999995
- type: nauc_map_at_20_max
value: 48.086600000000004
- type: nauc_map_at_20_std
value: 14.745700000000001
- type: nauc_map_at_20_diff1
value: 45.112
- type: nauc_map_at_100_max
value: 48.1655
- type: nauc_map_at_100_std
value: 14.8883
- type: nauc_map_at_100_diff1
value: 45.1828
- type: nauc_map_at_1000_max
value: 48.1632
- type: nauc_map_at_1000_std
value: 14.8524
- type: nauc_map_at_1000_diff1
value: 45.2272
- type: nauc_recall_at_1_max
value: 47.9902
- type: nauc_recall_at_1_std
value: 11.7877
- type: nauc_recall_at_1_diff1
value: 53.30009999999999
- type: nauc_recall_at_3_max
value: 49.332
- type: nauc_recall_at_3_std
value: 15.8498
- type: nauc_recall_at_3_diff1
value: 39.8739
- type: nauc_recall_at_5_max
value: 47.7993
- type: nauc_recall_at_5_std
value: 18.0993
- type: nauc_recall_at_5_diff1
value: 34.257
- type: nauc_recall_at_10_max
value: 46.940599999999996
- type: nauc_recall_at_10_std
value: 21.529
- type: nauc_recall_at_10_diff1
value: 30.6398
- type: nauc_recall_at_20_max
value: 45.2487
- type: nauc_recall_at_20_std
value: 24.376900000000003
- type: nauc_recall_at_20_diff1
value: 27.269199999999998
- type: nauc_recall_at_100_max
value: 49.290800000000004
- type: nauc_recall_at_100_std
value: 38.9228
- type: nauc_recall_at_100_diff1
value: 23.7152
- type: nauc_recall_at_1000_max
value: 43.8731
- type: nauc_recall_at_1000_std
value: 45.7342
- type: nauc_recall_at_1000_diff1
value: 7.1701
- type: nauc_precision_at_1_max
value: 47.9902
- type: nauc_precision_at_1_std
value: 11.7877
- type: nauc_precision_at_1_diff1
value: 53.30009999999999
- type: nauc_precision_at_3_max
value: 49.332
- type: nauc_precision_at_3_std
value: 15.8498
- type: nauc_precision_at_3_diff1
value: 39.8739
- type: nauc_precision_at_5_max
value: 47.7993
- type: nauc_precision_at_5_std
value: 18.0993
- type: nauc_precision_at_5_diff1
value: 34.257
- type: nauc_precision_at_10_max
value: 46.940599999999996
- type: nauc_precision_at_10_std
value: 21.529
- type: nauc_precision_at_10_diff1
value: 30.6398
- type: nauc_precision_at_20_max
value: 45.2487
- type: nauc_precision_at_20_std
value: 24.376900000000003
- type: nauc_precision_at_20_diff1
value: 27.269199999999998
- type: nauc_precision_at_100_max
value: 49.290800000000004
- type: nauc_precision_at_100_std
value: 38.9228
- type: nauc_precision_at_100_diff1
value: 23.7152
- type: nauc_precision_at_1000_max
value: 43.8731
- type: nauc_precision_at_1000_std
value: 45.7342
- type: nauc_precision_at_1000_diff1
value: 7.1701
- type: nauc_mrr_at_1_max
value: 47.9902
- type: nauc_mrr_at_1_std
value: 11.7877
- type: nauc_mrr_at_1_diff1
value: 53.30009999999999
- type: nauc_mrr_at_3_max
value: 48.605399999999996
- type: nauc_mrr_at_3_std
value: 13.7193
- type: nauc_mrr_at_3_diff1
value: 46.8232
- type: nauc_mrr_at_5_max
value: 48.2739
- type: nauc_mrr_at_5_std
value: 14.2215
- type: nauc_mrr_at_5_diff1
value: 45.5511
- type: nauc_mrr_at_10_max
value: 48.2171
- type: nauc_mrr_at_10_std
value: 14.6616
- type: nauc_mrr_at_10_diff1
value: 45.204699999999995
- type: nauc_mrr_at_20_max
value: 48.086600000000004
- type: nauc_mrr_at_20_std
value: 14.745700000000001
- type: nauc_mrr_at_20_diff1
value: 45.112
- type: nauc_mrr_at_100_max
value: 48.1655
- type: nauc_mrr_at_100_std
value: 14.8883
- type: nauc_mrr_at_100_diff1
value: 45.1828
- type: nauc_mrr_at_1000_max
value: 48.1632
- type: nauc_mrr_at_1000_std
value: 14.8524
- type: nauc_mrr_at_1000_diff1
value: 45.2272
- type: main_score
value: 41.17
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (zho-ara)
type: facebook/mlqa
config: zho-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 30.455
- type: ndcg_at_3
value: 38.614
- type: ndcg_at_5
value: 40.693
- type: ndcg_at_10
value: 43.523
- type: ndcg_at_20
value: 45.651
- type: ndcg_at_100
value: 48.756
- type: ndcg_at_1000
value: 50.637
- type: map_at_1
value: 30.455
- type: map_at_3
value: 36.620999999999995
- type: map_at_5
value: 37.78
- type: map_at_10
value: 38.951
- type: map_at_20
value: 39.543
- type: map_at_100
value: 39.956
- type: map_at_1000
value: 40.022000000000006
- type: recall_at_1
value: 30.455
- type: recall_at_3
value: 44.375
- type: recall_at_5
value: 49.397999999999996
- type: recall_at_10
value: 58.13700000000001
- type: recall_at_20
value: 66.484
- type: recall_at_100
value: 83.438
- type: recall_at_1000
value: 98.482
- type: precision_at_1
value: 30.455
- type: precision_at_3
value: 14.792
- type: precision_at_5
value: 9.879999999999999
- type: precision_at_10
value: 5.814
- type: precision_at_20
value: 3.325
- type: precision_at_100
value: 0.835
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 30.4553
- type: mrr_at_3
value: 36.6213
- type: mrr_at_5
value: 37.7804
- type: mrr_at_10
value: 38.9508
- type: mrr_at_20
value: 39.5449
- type: mrr_at_100
value: 39.9577
- type: mrr_at_1000
value: 40.0238
- type: nauc_ndcg_at_1_max
value: 48.8898
- type: nauc_ndcg_at_1_std
value: 9.9853
- type: nauc_ndcg_at_1_diff1
value: 55.1585
- type: nauc_ndcg_at_3_max
value: 49.0008
- type: nauc_ndcg_at_3_std
value: 11.089599999999999
- type: nauc_ndcg_at_3_diff1
value: 47.700900000000004
- type: nauc_ndcg_at_5_max
value: 49.5803
- type: nauc_ndcg_at_5_std
value: 12.378599999999999
- type: nauc_ndcg_at_5_diff1
value: 46.9606
- type: nauc_ndcg_at_10_max
value: 49.1348
- type: nauc_ndcg_at_10_std
value: 12.696399999999999
- type: nauc_ndcg_at_10_diff1
value: 45.731
- type: nauc_ndcg_at_20_max
value: 49.6612
- type: nauc_ndcg_at_20_std
value: 14.3148
- type: nauc_ndcg_at_20_diff1
value: 44.9405
- type: nauc_ndcg_at_100_max
value: 49.8074
- type: nauc_ndcg_at_100_std
value: 15.1201
- type: nauc_ndcg_at_100_diff1
value: 45.420899999999996
- type: nauc_ndcg_at_1000_max
value: 49.5773
- type: nauc_ndcg_at_1000_std
value: 13.7904
- type: nauc_ndcg_at_1000_diff1
value: 46.5471
- type: nauc_map_at_1_max
value: 48.8898
- type: nauc_map_at_1_std
value: 9.9853
- type: nauc_map_at_1_diff1
value: 55.1585
- type: nauc_map_at_3_max
value: 48.9727
- type: nauc_map_at_3_std
value: 10.807500000000001
- type: nauc_map_at_3_diff1
value: 49.3725
- type: nauc_map_at_5_max
value: 49.2652
- type: nauc_map_at_5_std
value: 11.5037
- type: nauc_map_at_5_diff1
value: 48.9742
- type: nauc_map_at_10_max
value: 49.0863
- type: nauc_map_at_10_std
value: 11.6191
- type: nauc_map_at_10_diff1
value: 48.4889
- type: nauc_map_at_20_max
value: 49.2315
- type: nauc_map_at_20_std
value: 12.0546
- type: nauc_map_at_20_diff1
value: 48.3074
- type: nauc_map_at_100_max
value: 49.2415
- type: nauc_map_at_100_std
value: 12.133099999999999
- type: nauc_map_at_100_diff1
value: 48.398799999999994
- type: nauc_map_at_1000_max
value: 49.2308
- type: nauc_map_at_1000_std
value: 12.0927
- type: nauc_map_at_1000_diff1
value: 48.4355
- type: nauc_recall_at_1_max
value: 48.8898
- type: nauc_recall_at_1_std
value: 9.9853
- type: nauc_recall_at_1_diff1
value: 55.1585
- type: nauc_recall_at_3_max
value: 49.0815
- type: nauc_recall_at_3_std
value: 11.9015
- type: nauc_recall_at_3_diff1
value: 42.9785
- type: nauc_recall_at_5_max
value: 50.611399999999996
- type: nauc_recall_at_5_std
value: 15.122399999999999
- type: nauc_recall_at_5_diff1
value: 41.073
- type: nauc_recall_at_10_max
value: 49.2098
- type: nauc_recall_at_10_std
value: 16.4463
- type: nauc_recall_at_10_diff1
value: 36.525
- type: nauc_recall_at_20_max
value: 51.6409
- type: nauc_recall_at_20_std
value: 24.4586
- type: nauc_recall_at_20_diff1
value: 31.394899999999996
- type: nauc_recall_at_100_max
value: 54.785399999999996
- type: nauc_recall_at_100_std
value: 40.8177
- type: nauc_recall_at_100_diff1
value: 25.7955
- type: nauc_recall_at_1000_max
value: 70.33070000000001
- type: nauc_recall_at_1000_std
value: 71.0309
- type: nauc_recall_at_1000_diff1
value: 17.0748
- type: nauc_precision_at_1_max
value: 48.8898
- type: nauc_precision_at_1_std
value: 9.9853
- type: nauc_precision_at_1_diff1
value: 55.1585
- type: nauc_precision_at_3_max
value: 49.0815
- type: nauc_precision_at_3_std
value: 11.9015
- type: nauc_precision_at_3_diff1
value: 42.9785
- type: nauc_precision_at_5_max
value: 50.611399999999996
- type: nauc_precision_at_5_std
value: 15.122399999999999
- type: nauc_precision_at_5_diff1
value: 41.073
- type: nauc_precision_at_10_max
value: 49.2098
- type: nauc_precision_at_10_std
value: 16.4463
- type: nauc_precision_at_10_diff1
value: 36.525
- type: nauc_precision_at_20_max
value: 51.6
- type: nauc_precision_at_20_std
value: 24.4193
- type: nauc_precision_at_20_diff1
value: 31.3295
- type: nauc_precision_at_100_max
value: 54.744400000000006
- type: nauc_precision_at_100_std
value: 40.7844
- type: nauc_precision_at_100_diff1
value: 25.687900000000003
- type: nauc_precision_at_1000_max
value: 63.998200000000004
- type: nauc_precision_at_1000_std
value: 65.2054
- type: nauc_precision_at_1000_diff1
value: 13.280100000000001
- type: nauc_mrr_at_1_max
value: 48.8898
- type: nauc_mrr_at_1_std
value: 9.9853
- type: nauc_mrr_at_1_diff1
value: 55.1585
- type: nauc_mrr_at_3_max
value: 48.9727
- type: nauc_mrr_at_3_std
value: 10.807500000000001
- type: nauc_mrr_at_3_diff1
value: 49.3725
- type: nauc_mrr_at_5_max
value: 49.2652
- type: nauc_mrr_at_5_std
value: 11.5037
- type: nauc_mrr_at_5_diff1
value: 48.9742
- type: nauc_mrr_at_10_max
value: 49.0863
- type: nauc_mrr_at_10_std
value: 11.6191
- type: nauc_mrr_at_10_diff1
value: 48.4889
- type: nauc_mrr_at_20_max
value: 49.229299999999995
- type: nauc_mrr_at_20_std
value: 12.0523
- type: nauc_mrr_at_20_diff1
value: 48.3045
- type: nauc_mrr_at_100_max
value: 49.2394
- type: nauc_mrr_at_100_std
value: 12.1308
- type: nauc_mrr_at_100_diff1
value: 48.396
- type: nauc_mrr_at_1000_max
value: 49.228699999999996
- type: nauc_mrr_at_1000_std
value: 12.090399999999999
- type: nauc_mrr_at_1000_diff1
value: 48.4328
- type: main_score
value: 43.523
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (ar)
type: jinaai/mintakaqa
config: ar
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: ndcg_at_1
value: 8.761
- type: ndcg_at_3
value: 12.867
- type: ndcg_at_5
value: 14.322
- type: ndcg_at_10
value: 16.1
- type: ndcg_at_20
value: 17.693
- type: ndcg_at_100
value: 20.48
- type: ndcg_at_1000
value: 25.629999999999995
- type: map_at_1
value: 8.761
- type: map_at_3
value: 11.855
- type: map_at_5
value: 12.661
- type: map_at_10
value: 13.395999999999999
- type: map_at_20
value: 13.838000000000001
- type: map_at_100
value: 14.202
- type: map_at_1000
value: 14.344999999999999
- type: recall_at_1
value: 8.761
- type: recall_at_3
value: 15.797
- type: recall_at_5
value: 19.337
- type: recall_at_10
value: 24.83
- type: recall_at_20
value: 31.094
- type: recall_at_100
value: 46.437
- type: recall_at_1000
value: 90.059
- type: precision_at_1
value: 8.761
- type: precision_at_3
value: 5.266
- type: precision_at_5
value: 3.8670000000000004
- type: precision_at_10
value: 2.483
- type: precision_at_20
value: 1.555
- type: precision_at_100
value: 0.464
- type: precision_at_1000
value: 0.09
- type: mrr_at_1
value: 8.7608
- type: mrr_at_3
value: 11.855
- type: mrr_at_5
value: 12.6608
- type: mrr_at_10
value: 13.3959
- type: mrr_at_20
value: 13.838000000000001
- type: mrr_at_100
value: 14.2024
- type: mrr_at_1000
value: 14.345099999999999
- type: nauc_ndcg_at_1_max
value: 21.6864
- type: nauc_ndcg_at_1_std
value: 28.610200000000003
- type: nauc_ndcg_at_1_diff1
value: 20.9846
- type: nauc_ndcg_at_3_max
value: 20.477400000000003
- type: nauc_ndcg_at_3_std
value: 27.073999999999998
- type: nauc_ndcg_at_3_diff1
value: 12.8415
- type: nauc_ndcg_at_5_max
value: 19.3812
- type: nauc_ndcg_at_5_std
value: 25.2471
- type: nauc_ndcg_at_5_diff1
value: 11.6586
- type: nauc_ndcg_at_10_max
value: 19.3229
- type: nauc_ndcg_at_10_std
value: 25.6876
- type: nauc_ndcg_at_10_diff1
value: 10.7103
- type: nauc_ndcg_at_20_max
value: 18.872
- type: nauc_ndcg_at_20_std
value: 25.363000000000003
- type: nauc_ndcg_at_20_diff1
value: 9.721499999999999
- type: nauc_ndcg_at_100_max
value: 18.7914
- type: nauc_ndcg_at_100_std
value: 24.9771
- type: nauc_ndcg_at_100_diff1
value: 9.564300000000001
- type: nauc_ndcg_at_1000_max
value: 19.5652
- type: nauc_ndcg_at_1000_std
value: 24.713099999999997
- type: nauc_ndcg_at_1000_diff1
value: 10.9607
- type: nauc_map_at_1_max
value: 21.6864
- type: nauc_map_at_1_std
value: 28.610200000000003
- type: nauc_map_at_1_diff1
value: 20.9846
- type: nauc_map_at_3_max
value: 20.8068
- type: nauc_map_at_3_std
value: 27.277
- type: nauc_map_at_3_diff1
value: 14.511299999999999
- type: nauc_map_at_5_max
value: 20.0835
- type: nauc_map_at_5_std
value: 26.131300000000003
- type: nauc_map_at_5_diff1
value: 13.6857
- type: nauc_map_at_10_max
value: 20.0281
- type: nauc_map_at_10_std
value: 26.2996
- type: nauc_map_at_10_diff1
value: 13.192300000000001
- type: nauc_map_at_20_max
value: 19.8456
- type: nauc_map_at_20_std
value: 26.1681
- type: nauc_map_at_20_diff1
value: 12.8234
- type: nauc_map_at_100_max
value: 19.7798
- type: nauc_map_at_100_std
value: 26.096999999999998
- type: nauc_map_at_100_diff1
value: 12.7576
- type: nauc_map_at_1000_max
value: 19.804
- type: nauc_map_at_1000_std
value: 26.0808
- type: nauc_map_at_1000_diff1
value: 12.8081
- type: nauc_recall_at_1_max
value: 21.6864
- type: nauc_recall_at_1_std
value: 28.610200000000003
- type: nauc_recall_at_1_diff1
value: 20.9846
- type: nauc_recall_at_3_max
value: 19.6883
- type: nauc_recall_at_3_std
value: 26.6378
- type: nauc_recall_at_3_diff1
value: 8.9681
- type: nauc_recall_at_5_max
value: 17.8277
- type: nauc_recall_at_5_std
value: 23.2801
- type: nauc_recall_at_5_diff1
value: 7.352200000000001
- type: nauc_recall_at_10_max
value: 17.9106
- type: nauc_recall_at_10_std
value: 24.556
- type: nauc_recall_at_10_diff1
value: 5.6874
- type: nauc_recall_at_20_max
value: 16.950699999999998
- type: nauc_recall_at_20_std
value: 23.874000000000002
- type: nauc_recall_at_20_diff1
value: 3.562
- type: nauc_recall_at_100_max
value: 17.147000000000002
- type: nauc_recall_at_100_std
value: 22.5333
- type: nauc_recall_at_100_diff1
value: 3.4271999999999996
- type: nauc_recall_at_1000_max
value: 27.553499999999996
- type: nauc_recall_at_1000_std
value: 13.8395
- type: nauc_recall_at_1000_diff1
value: 12.9968
- type: nauc_precision_at_1_max
value: 21.6864
- type: nauc_precision_at_1_std
value: 28.610200000000003
- type: nauc_precision_at_1_diff1
value: 20.9846
- type: nauc_precision_at_3_max
value: 19.6883
- type: nauc_precision_at_3_std
value: 26.6378
- type: nauc_precision_at_3_diff1
value: 8.9681
- type: nauc_precision_at_5_max
value: 17.8277
- type: nauc_precision_at_5_std
value: 23.2801
- type: nauc_precision_at_5_diff1
value: 7.352200000000001
- type: nauc_precision_at_10_max
value: 17.9106
- type: nauc_precision_at_10_std
value: 24.556
- type: nauc_precision_at_10_diff1
value: 5.6874
- type: nauc_precision_at_20_max
value: 16.950699999999998
- type: nauc_precision_at_20_std
value: 23.874000000000002
- type: nauc_precision_at_20_diff1
value: 3.562
- type: nauc_precision_at_100_max
value: 17.147000000000002
- type: nauc_precision_at_100_std
value: 22.5333
- type: nauc_precision_at_100_diff1
value: 3.4271999999999996
- type: nauc_precision_at_1000_max
value: 27.553499999999996
- type: nauc_precision_at_1000_std
value: 13.8395
- type: nauc_precision_at_1000_diff1
value: 12.9968
- type: nauc_mrr_at_1_max
value: 21.6864
- type: nauc_mrr_at_1_std
value: 28.610200000000003
- type: nauc_mrr_at_1_diff1
value: 20.9846
- type: nauc_mrr_at_3_max
value: 20.8068
- type: nauc_mrr_at_3_std
value: 27.277
- type: nauc_mrr_at_3_diff1
value: 14.511299999999999
- type: nauc_mrr_at_5_max
value: 20.0835
- type: nauc_mrr_at_5_std
value: 26.131300000000003
- type: nauc_mrr_at_5_diff1
value: 13.6857
- type: nauc_mrr_at_10_max
value: 20.0281
- type: nauc_mrr_at_10_std
value: 26.2996
- type: nauc_mrr_at_10_diff1
value: 13.192300000000001
- type: nauc_mrr_at_20_max
value: 19.8456
- type: nauc_mrr_at_20_std
value: 26.1681
- type: nauc_mrr_at_20_diff1
value: 12.8234
- type: nauc_mrr_at_100_max
value: 19.7798
- type: nauc_mrr_at_100_std
value: 26.096999999999998
- type: nauc_mrr_at_100_diff1
value: 12.7576
- type: nauc_mrr_at_1000_max
value: 19.804
- type: nauc_mrr_at_1000_std
value: 26.0808
- type: nauc_mrr_at_1000_diff1
value: 12.8081
- type: main_score
value: 16.1
- task:
type: Retrieval
dataset:
name: MTEB MrTidyRetrieval (arabic)
type: mteb/mrtidy
config: arabic
split: test
revision: fc24a3ce8f09746410daee3d5cd823ff7a0675b7
metrics:
- type: ndcg_at_1
value: 14.338999999999999
- type: ndcg_at_3
value: 20.278
- type: ndcg_at_5
value: 23.035
- type: ndcg_at_10
value: 25.934
- type: ndcg_at_20
value: 27.68
- type: ndcg_at_100
value: 30.685000000000002
- type: ndcg_at_1000
value: 32.926
- type: map_at_1
value: 13.228000000000002
- type: map_at_3
value: 18.301000000000002
- type: map_at_5
value: 19.830000000000002
- type: map_at_10
value: 21.038
- type: map_at_20
value: 21.538
- type: map_at_100
value: 21.977
- type: map_at_1000
value: 22.066
- type: recall_at_1
value: 13.228000000000002
- type: recall_at_3
value: 24.792
- type: recall_at_5
value: 31.298
- type: recall_at_10
value: 39.948
- type: recall_at_20
value: 46.546
- type: recall_at_100
value: 61.949
- type: recall_at_1000
value: 79.001
- type: precision_at_1
value: 14.338999999999999
- type: precision_at_3
value: 9.035
- type: precision_at_5
value: 6.883
- type: precision_at_10
value: 4.44
- type: precision_at_20
value: 2.5989999999999998
- type: precision_at_100
value: 0.7080000000000001
- type: precision_at_1000
value: 0.091
- type: mrr_at_1
value: 14.338600000000001
- type: mrr_at_3
value: 19.5652
- type: mrr_at_5
value: 21.1517
- type: mrr_at_10
value: 22.3876
- type: mrr_at_20
value: 22.8831
- type: mrr_at_100
value: 23.2868
- type: mrr_at_1000
value: 23.359199999999998
- type: nauc_ndcg_at_1_max
value: 12.350800000000001
- type: nauc_ndcg_at_1_std
value: 10.1704
- type: nauc_ndcg_at_1_diff1
value: 19.557199999999998
- type: nauc_ndcg_at_3_max
value: 16.4692
- type: nauc_ndcg_at_3_std
value: 12.4419
- type: nauc_ndcg_at_3_diff1
value: 18.2343
- type: nauc_ndcg_at_5_max
value: 17.1079
- type: nauc_ndcg_at_5_std
value: 14.7839
- type: nauc_ndcg_at_5_diff1
value: 17.9067
- type: nauc_ndcg_at_10_max
value: 17.6668
- type: nauc_ndcg_at_10_std
value: 17.6519
- type: nauc_ndcg_at_10_diff1
value: 17.1885
- type: nauc_ndcg_at_20_max
value: 18.017
- type: nauc_ndcg_at_20_std
value: 19.1385
- type: nauc_ndcg_at_20_diff1
value: 16.5595
- type: nauc_ndcg_at_100_max
value: 17.7476
- type: nauc_ndcg_at_100_std
value: 20.1949
- type: nauc_ndcg_at_100_diff1
value: 16.3128
- type: nauc_ndcg_at_1000_max
value: 17.799799999999998
- type: nauc_ndcg_at_1000_std
value: 20.5006
- type: nauc_ndcg_at_1000_diff1
value: 16.4148
- type: nauc_map_at_1_max
value: 12.4058
- type: nauc_map_at_1_std
value: 11.1723
- type: nauc_map_at_1_diff1
value: 20.7625
- type: nauc_map_at_3_max
value: 15.609300000000001
- type: nauc_map_at_3_std
value: 12.2595
- type: nauc_map_at_3_diff1
value: 18.8335
- type: nauc_map_at_5_max
value: 16.1361
- type: nauc_map_at_5_std
value: 13.8137
- type: nauc_map_at_5_diff1
value: 18.712300000000003
- type: nauc_map_at_10_max
value: 16.4222
- type: nauc_map_at_10_std
value: 15.059600000000001
- type: nauc_map_at_10_diff1
value: 18.3989
- type: nauc_map_at_20_max
value: 16.563200000000002
- type: nauc_map_at_20_std
value: 15.549299999999999
- type: nauc_map_at_20_diff1
value: 18.205299999999998
- type: nauc_map_at_100_max
value: 16.498099999999997
- type: nauc_map_at_100_std
value: 15.735199999999999
- type: nauc_map_at_100_diff1
value: 18.098300000000002
- type: nauc_map_at_1000_max
value: 16.4922
- type: nauc_map_at_1000_std
value: 15.7561
- type: nauc_map_at_1000_diff1
value: 18.124100000000002
- type: nauc_recall_at_1_max
value: 12.4058
- type: nauc_recall_at_1_std
value: 11.1723
- type: nauc_recall_at_1_diff1
value: 20.7625
- type: nauc_recall_at_3_max
value: 18.3013
- type: nauc_recall_at_3_std
value: 12.954699999999999
- type: nauc_recall_at_3_diff1
value: 16.9722
- type: nauc_recall_at_5_max
value: 19.309
- type: nauc_recall_at_5_std
value: 17.3374
- type: nauc_recall_at_5_diff1
value: 16.314
- type: nauc_recall_at_10_max
value: 20.6932
- type: nauc_recall_at_10_std
value: 24.299799999999998
- type: nauc_recall_at_10_diff1
value: 14.666799999999999
- type: nauc_recall_at_20_max
value: 21.8139
- type: nauc_recall_at_20_std
value: 28.881400000000003
- type: nauc_recall_at_20_diff1
value: 12.928899999999999
- type: nauc_recall_at_100_max
value: 20.8015
- type: nauc_recall_at_100_std
value: 34.943999999999996
- type: nauc_recall_at_100_diff1
value: 11.6233
- type: nauc_recall_at_1000_max
value: 24.131800000000002
- type: nauc_recall_at_1000_std
value: 45.778200000000005
- type: nauc_recall_at_1000_diff1
value: 9.0989
- type: nauc_precision_at_1_max
value: 12.350800000000001
- type: nauc_precision_at_1_std
value: 10.1704
- type: nauc_precision_at_1_diff1
value: 19.557199999999998
- type: nauc_precision_at_3_max
value: 18.6388
- type: nauc_precision_at_3_std
value: 11.9733
- type: nauc_precision_at_3_diff1
value: 16.4002
- type: nauc_precision_at_5_max
value: 19.988400000000002
- type: nauc_precision_at_5_std
value: 17.020599999999998
- type: nauc_precision_at_5_diff1
value: 15.4553
- type: nauc_precision_at_10_max
value: 21.029
- type: nauc_precision_at_10_std
value: 24.0445
- type: nauc_precision_at_10_diff1
value: 12.7804
- type: nauc_precision_at_20_max
value: 20.8578
- type: nauc_precision_at_20_std
value: 27.8364
- type: nauc_precision_at_20_diff1
value: 10.0575
- type: nauc_precision_at_100_max
value: 19.115
- type: nauc_precision_at_100_std
value: 30.4435
- type: nauc_precision_at_100_diff1
value: 6.2284
- type: nauc_precision_at_1000_max
value: 14.213899999999999
- type: nauc_precision_at_1000_std
value: 27.5515
- type: nauc_precision_at_1000_diff1
value: 1.3398
- type: nauc_mrr_at_1_max
value: 12.350800000000001
- type: nauc_mrr_at_1_std
value: 10.1704
- type: nauc_mrr_at_1_diff1
value: 19.557199999999998
- type: nauc_mrr_at_3_max
value: 15.576799999999999
- type: nauc_mrr_at_3_std
value: 11.9021
- type: nauc_mrr_at_3_diff1
value: 18.185599999999997
- type: nauc_mrr_at_5_max
value: 15.615699999999999
- type: nauc_mrr_at_5_std
value: 12.9917
- type: nauc_mrr_at_5_diff1
value: 17.8173
- type: nauc_mrr_at_10_max
value: 15.7163
- type: nauc_mrr_at_10_std
value: 14.2755
- type: nauc_mrr_at_10_diff1
value: 17.4754
- type: nauc_mrr_at_20_max
value: 15.8022
- type: nauc_mrr_at_20_std
value: 14.69
- type: nauc_mrr_at_20_diff1
value: 17.201900000000002
- type: nauc_mrr_at_100_max
value: 15.767000000000001
- type: nauc_mrr_at_100_std
value: 14.8459
- type: nauc_mrr_at_100_diff1
value: 17.2406
- type: nauc_mrr_at_1000_max
value: 15.778400000000001
- type: nauc_mrr_at_1000_std
value: 14.8592
- type: nauc_mrr_at_1000_diff1
value: 17.2675
- type: main_score
value: 25.934
- task:
type: Retrieval
dataset:
name: MTEB SadeemQuestionRetrieval (default)
type: sadeem-ai/sadeem-ar-eval-retrieval-questions
config: default
split: test
revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9
metrics:
- type: ndcg_at_1
value: 25.945
- type: ndcg_at_3
value: 55.796
- type: ndcg_at_5
value: 57.726
- type: ndcg_at_10
value: 58.884
- type: ndcg_at_20
value: 59.705
- type: ndcg_at_100
value: 60.659
- type: ndcg_at_1000
value: 61.151999999999994
- type: map_at_1
value: 25.945
- type: map_at_3
value: 47.981
- type: map_at_5
value: 49.051
- type: map_at_10
value: 49.536
- type: map_at_20
value: 49.767
- type: map_at_100
value: 49.9
- type: map_at_1000
value: 49.916
- type: recall_at_1
value: 25.945
- type: recall_at_3
value: 78.602
- type: recall_at_5
value: 83.29299999999999
- type: recall_at_10
value: 86.836
- type: recall_at_20
value: 90.04299999999999
- type: recall_at_100
value: 95.165
- type: recall_at_1000
value: 99.138
- type: precision_at_1
value: 25.945
- type: precision_at_3
value: 26.201
- type: precision_at_5
value: 16.659
- type: precision_at_10
value: 8.684
- type: precision_at_20
value: 4.502
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 24.3179
- type: mrr_at_3
value: 46.8566
- type: mrr_at_5
value: 47.9288
- type: mrr_at_10
value: 48.4848
- type: mrr_at_20
value: 48.700700000000005
- type: mrr_at_100
value: 48.8358
- type: mrr_at_1000
value: 48.8521
- type: nauc_ndcg_at_1_max
value: 15.6065
- type: nauc_ndcg_at_1_std
value: 4.1895
- type: nauc_ndcg_at_1_diff1
value: -5.9052
- type: nauc_ndcg_at_3_max
value: 35.0009
- type: nauc_ndcg_at_3_std
value: 12.2065
- type: nauc_ndcg_at_3_diff1
value: -49.336600000000004
- type: nauc_ndcg_at_5_max
value: 33.3652
- type: nauc_ndcg_at_5_std
value: 12.2193
- type: nauc_ndcg_at_5_diff1
value: -43.4435
- type: nauc_ndcg_at_10_max
value: 31.9907
- type: nauc_ndcg_at_10_std
value: 12.9051
- type: nauc_ndcg_at_10_diff1
value: -41.2196
- type: nauc_ndcg_at_20_max
value: 30.653000000000002
- type: nauc_ndcg_at_20_std
value: 14.0403
- type: nauc_ndcg_at_20_diff1
value: -38.6306
- type: nauc_ndcg_at_100_max
value: 29.307499999999997
- type: nauc_ndcg_at_100_std
value: 12.8583
- type: nauc_ndcg_at_100_diff1
value: -35.8193
- type: nauc_ndcg_at_1000_max
value: 28.833399999999997
- type: nauc_ndcg_at_1000_std
value: 12.0671
- type: nauc_ndcg_at_1000_diff1
value: -34.3451
- type: nauc_map_at_1_max
value: 15.6065
- type: nauc_map_at_1_std
value: 4.1895
- type: nauc_map_at_1_diff1
value: -5.9052
- type: nauc_map_at_3_max
value: 28.6012
- type: nauc_map_at_3_std
value: 9.6436
- type: nauc_map_at_3_diff1
value: -34.6364
- type: nauc_map_at_5_max
value: 27.581699999999998
- type: nauc_map_at_5_std
value: 9.5477
- type: nauc_map_at_5_diff1
value: -31.2154
- type: nauc_map_at_10_max
value: 27.005699999999997
- type: nauc_map_at_10_std
value: 9.7735
- type: nauc_map_at_10_diff1
value: -30.2406
- type: nauc_map_at_20_max
value: 26.6504
- type: nauc_map_at_20_std
value: 10.044400000000001
- type: nauc_map_at_20_diff1
value: -29.523300000000003
- type: nauc_map_at_100_max
value: 26.4772
- type: nauc_map_at_100_std
value: 9.8956
- type: nauc_map_at_100_diff1
value: -29.164
- type: nauc_map_at_1000_max
value: 26.460800000000003
- type: nauc_map_at_1000_std
value: 9.8771
- type: nauc_map_at_1000_diff1
value: -29.119099999999996
- type: nauc_recall_at_1_max
value: 15.6065
- type: nauc_recall_at_1_std
value: 4.1895
- type: nauc_recall_at_1_diff1
value: -5.9052
- type: nauc_recall_at_3_max
value: 62.232200000000006
- type: nauc_recall_at_3_std
value: 23.0712
- type: nauc_recall_at_3_diff1
value: -112.0696
- type: nauc_recall_at_5_max
value: 62.732600000000005
- type: nauc_recall_at_5_std
value: 25.924500000000002
- type: nauc_recall_at_5_diff1
value: -105.32390000000001
- type: nauc_recall_at_10_max
value: 61.8591
- type: nauc_recall_at_10_std
value: 32.929700000000004
- type: nauc_recall_at_10_diff1
value: -107.3419
- type: nauc_recall_at_20_max
value: 58.1697
- type: nauc_recall_at_20_std
value: 48.2999
- type: nauc_recall_at_20_diff1
value: -102.9417
- type: nauc_recall_at_100_max
value: 54.3349
- type: nauc_recall_at_100_std
value: 55.2788
- type: nauc_recall_at_100_diff1
value: -101.90060000000001
- type: nauc_recall_at_1000_max
value: 77.6378
- type: nauc_recall_at_1000_std
value: 82.6629
- type: nauc_recall_at_1000_diff1
value: -109.45089999999999
- type: nauc_precision_at_1_max
value: 15.6065
- type: nauc_precision_at_1_std
value: 4.1895
- type: nauc_precision_at_1_diff1
value: -5.9052
- type: nauc_precision_at_3_max
value: 62.232200000000006
- type: nauc_precision_at_3_std
value: 23.0712
- type: nauc_precision_at_3_diff1
value: -112.0696
- type: nauc_precision_at_5_max
value: 62.732600000000005
- type: nauc_precision_at_5_std
value: 25.924500000000002
- type: nauc_precision_at_5_diff1
value: -105.32390000000001
- type: nauc_precision_at_10_max
value: 61.8591
- type: nauc_precision_at_10_std
value: 32.929700000000004
- type: nauc_precision_at_10_diff1
value: -107.3419
- type: nauc_precision_at_20_max
value: 58.1697
- type: nauc_precision_at_20_std
value: 48.2999
- type: nauc_precision_at_20_diff1
value: -102.9417
- type: nauc_precision_at_100_max
value: 54.3349
- type: nauc_precision_at_100_std
value: 55.2788
- type: nauc_precision_at_100_diff1
value: -101.90060000000001
- type: nauc_precision_at_1000_max
value: 77.6378
- type: nauc_precision_at_1000_std
value: 82.6629
- type: nauc_precision_at_1000_diff1
value: -109.45089999999999
- type: nauc_mrr_at_1_max
value: 15.4767
- type: nauc_mrr_at_1_std
value: 7.9148
- type: nauc_mrr_at_1_diff1
value: -28.0379
- type: nauc_mrr_at_3_max
value: 29.0395
- type: nauc_mrr_at_3_std
value: 13.347700000000001
- type: nauc_mrr_at_3_diff1
value: -51.603
- type: nauc_mrr_at_5_max
value: 27.9939
- type: nauc_mrr_at_5_std
value: 12.8712
- type: nauc_mrr_at_5_diff1
value: -48.4563
- type: nauc_mrr_at_10_max
value: 27.2858
- type: nauc_mrr_at_10_std
value: 13.2486
- type: nauc_mrr_at_10_diff1
value: -47.4786
- type: nauc_mrr_at_20_max
value: 26.9478
- type: nauc_mrr_at_20_std
value: 13.571
- type: nauc_mrr_at_20_diff1
value: -46.9807
- type: nauc_mrr_at_100_max
value: 26.7688
- type: nauc_mrr_at_100_std
value: 13.439200000000001
- type: nauc_mrr_at_100_diff1
value: -46.7007
- type: nauc_mrr_at_1000_max
value: 26.753
- type: nauc_mrr_at_1000_std
value: 13.4243
- type: nauc_mrr_at_1000_diff1
value: -46.6676
- type: main_score
value: 58.884
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (ara-ara)
type: jinaai/xpqa
config: ara-ara
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 25.467000000000002
- type: ndcg_at_3
value: 26.25
- type: ndcg_at_5
value: 27.809
- type: ndcg_at_10
value: 31.296000000000003
- type: ndcg_at_20
value: 34.087
- type: ndcg_at_100
value: 38.891999999999996
- type: ndcg_at_1000
value: 42.423
- type: map_at_1
value: 13.042000000000002
- type: map_at_3
value: 20.979999999999997
- type: map_at_5
value: 23.64
- type: map_at_10
value: 25.463
- type: map_at_20
value: 26.443
- type: map_at_100
value: 27.328000000000003
- type: map_at_1000
value: 27.492
- type: recall_at_1
value: 13.042000000000002
- type: recall_at_3
value: 25.271
- type: recall_at_5
value: 31.740000000000002
- type: recall_at_10
value: 40.613
- type: recall_at_20
value: 49.689
- type: recall_at_100
value: 71.569
- type: recall_at_1000
value: 96.387
- type: precision_at_1
value: 25.467000000000002
- type: precision_at_3
value: 18.178
- type: precision_at_5
value: 14.052999999999999
- type: precision_at_10
value: 8.973
- type: precision_at_20
value: 5.427
- type: precision_at_100
value: 1.521
- type: precision_at_1000
value: 0.19499999999999998
- type: mrr_at_1
value: 25.466699999999996
- type: mrr_at_3
value: 30.177799999999998
- type: mrr_at_5
value: 31.477800000000002
- type: mrr_at_10
value: 32.626
- type: mrr_at_20
value: 33.2774
- type: mrr_at_100
value: 33.732800000000005
- type: mrr_at_1000
value: 33.8177
- type: nauc_ndcg_at_1_max
value: 22.4447
- type: nauc_ndcg_at_1_std
value: -12.8273
- type: nauc_ndcg_at_1_diff1
value: 30.6643
- type: nauc_ndcg_at_3_max
value: 21.8156
- type: nauc_ndcg_at_3_std
value: -7.678599999999999
- type: nauc_ndcg_at_3_diff1
value: 24.3589
- type: nauc_ndcg_at_5_max
value: 22.3372
- type: nauc_ndcg_at_5_std
value: -6.578
- type: nauc_ndcg_at_5_diff1
value: 24.3558
- type: nauc_ndcg_at_10_max
value: 24.249399999999998
- type: nauc_ndcg_at_10_std
value: -5.4608
- type: nauc_ndcg_at_10_diff1
value: 25.0826
- type: nauc_ndcg_at_20_max
value: 25.1081
- type: nauc_ndcg_at_20_std
value: -4.4616999999999996
- type: nauc_ndcg_at_20_diff1
value: 25.4926
- type: nauc_ndcg_at_100_max
value: 24.9943
- type: nauc_ndcg_at_100_std
value: -2.9071
- type: nauc_ndcg_at_100_diff1
value: 25.0587
- type: nauc_ndcg_at_1000_max
value: 24.9393
- type: nauc_ndcg_at_1000_std
value: -3.9886
- type: nauc_ndcg_at_1000_diff1
value: 24.9149
- type: nauc_map_at_1_max
value: 10.3874
- type: nauc_map_at_1_std
value: -14.1189
- type: nauc_map_at_1_diff1
value: 27.1204
- type: nauc_map_at_3_max
value: 19.1887
- type: nauc_map_at_3_std
value: -8.7622
- type: nauc_map_at_3_diff1
value: 23.968400000000003
- type: nauc_map_at_5_max
value: 22.1726
- type: nauc_map_at_5_std
value: -7.8292
- type: nauc_map_at_5_diff1
value: 24.8012
- type: nauc_map_at_10_max
value: 23.4288
- type: nauc_map_at_10_std
value: -7.4127
- type: nauc_map_at_10_diff1
value: 25.507800000000003
- type: nauc_map_at_20_max
value: 23.7292
- type: nauc_map_at_20_std
value: -7.187200000000001
- type: nauc_map_at_20_diff1
value: 25.7249
- type: nauc_map_at_100_max
value: 23.5909
- type: nauc_map_at_100_std
value: -6.9328
- type: nauc_map_at_100_diff1
value: 25.4793
- type: nauc_map_at_1000_max
value: 23.6015
- type: nauc_map_at_1000_std
value: -6.9618
- type: nauc_map_at_1000_diff1
value: 25.4933
- type: nauc_recall_at_1_max
value: 10.3874
- type: nauc_recall_at_1_std
value: -14.1189
- type: nauc_recall_at_1_diff1
value: 27.1204
- type: nauc_recall_at_3_max
value: 17.793400000000002
- type: nauc_recall_at_3_std
value: -3.7499
- type: nauc_recall_at_3_diff1
value: 17.6262
- type: nauc_recall_at_5_max
value: 21.038899999999998
- type: nauc_recall_at_5_std
value: -1.8713
- type: nauc_recall_at_5_diff1
value: 19.7434
- type: nauc_recall_at_10_max
value: 24.9692
- type: nauc_recall_at_10_std
value: 1.053
- type: nauc_recall_at_10_diff1
value: 21.2845
- type: nauc_recall_at_20_max
value: 27.9293
- type: nauc_recall_at_20_std
value: 4.7705
- type: nauc_recall_at_20_diff1
value: 22.1695
- type: nauc_recall_at_100_max
value: 29.4898
- type: nauc_recall_at_100_std
value: 16.903000000000002
- type: nauc_recall_at_100_diff1
value: 21.1503
- type: nauc_recall_at_1000_max
value: 61.8728
- type: nauc_recall_at_1000_std
value: 63.785599999999995
- type: nauc_recall_at_1000_diff1
value: 4.887
- type: nauc_precision_at_1_max
value: 22.4447
- type: nauc_precision_at_1_std
value: -12.8273
- type: nauc_precision_at_1_diff1
value: 30.6643
- type: nauc_precision_at_3_max
value: 27.930899999999998
- type: nauc_precision_at_3_std
value: -5.6785000000000005
- type: nauc_precision_at_3_diff1
value: 22.5772
- type: nauc_precision_at_5_max
value: 29.625200000000003
- type: nauc_precision_at_5_std
value: -3.949
- type: nauc_precision_at_5_diff1
value: 22.569200000000002
- type: nauc_precision_at_10_max
value: 30.353
- type: nauc_precision_at_10_std
value: -2.6828000000000003
- type: nauc_precision_at_10_diff1
value: 22.0195
- type: nauc_precision_at_20_max
value: 29.3013
- type: nauc_precision_at_20_std
value: -0.9629000000000001
- type: nauc_precision_at_20_diff1
value: 21.473100000000002
- type: nauc_precision_at_100_max
value: 24.3825
- type: nauc_precision_at_100_std
value: 2.3911000000000002
- type: nauc_precision_at_100_diff1
value: 15.606300000000001
- type: nauc_precision_at_1000_max
value: 18.7938
- type: nauc_precision_at_1000_std
value: -0.1033
- type: nauc_precision_at_1000_diff1
value: 9.300799999999999
- type: nauc_mrr_at_1_max
value: 22.4447
- type: nauc_mrr_at_1_std
value: -12.8273
- type: nauc_mrr_at_1_diff1
value: 30.6643
- type: nauc_mrr_at_3_max
value: 21.898300000000003
- type: nauc_mrr_at_3_std
value: -9.1679
- type: nauc_mrr_at_3_diff1
value: 26.647900000000003
- type: nauc_mrr_at_5_max
value: 21.7943
- type: nauc_mrr_at_5_std
value: -8.9716
- type: nauc_mrr_at_5_diff1
value: 26.8466
- type: nauc_mrr_at_10_max
value: 22.4361
- type: nauc_mrr_at_10_std
value: -8.288
- type: nauc_mrr_at_10_diff1
value: 26.8214
- type: nauc_mrr_at_20_max
value: 22.6388
- type: nauc_mrr_at_20_std
value: -7.9011
- type: nauc_mrr_at_20_diff1
value: 26.842899999999997
- type: nauc_mrr_at_100_max
value: 22.6039
- type: nauc_mrr_at_100_std
value: -7.7958
- type: nauc_mrr_at_100_diff1
value: 26.847199999999997
- type: nauc_mrr_at_1000_max
value: 22.5934
- type: nauc_mrr_at_1000_std
value: -7.8259
- type: nauc_mrr_at_1000_diff1
value: 26.8426
- type: main_score
value: 31.296000000000003
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-ara)
type: jinaai/xpqa
config: eng-ara
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 13.467
- type: ndcg_at_3
value: 14.322
- type: ndcg_at_5
value: 15.528
- type: ndcg_at_10
value: 18.358
- type: ndcg_at_20
value: 20.73
- type: ndcg_at_100
value: 25.879
- type: ndcg_at_1000
value: 31.326999999999998
- type: map_at_1
value: 6.622
- type: map_at_3
value: 10.791
- type: map_at_5
value: 12.337
- type: map_at_10
value: 13.682
- type: map_at_20
value: 14.438999999999998
- type: map_at_100
value: 15.292
- type: map_at_1000
value: 15.545
- type: recall_at_1
value: 6.622
- type: recall_at_3
value: 13.862
- type: recall_at_5
value: 18.389
- type: recall_at_10
value: 25.578
- type: recall_at_20
value: 33.416000000000004
- type: recall_at_100
value: 56.938
- type: recall_at_1000
value: 93.982
- type: precision_at_1
value: 13.467
- type: precision_at_3
value: 10.133000000000001
- type: precision_at_5
value: 8.16
- type: precision_at_10
value: 5.627
- type: precision_at_20
value: 3.627
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.194
- type: mrr_at_1
value: 13.466700000000001
- type: mrr_at_3
value: 17.488899999999997
- type: mrr_at_5
value: 18.7222
- type: mrr_at_10
value: 19.905900000000003
- type: mrr_at_20
value: 20.4778
- type: mrr_at_100
value: 21.023
- type: mrr_at_1000
value: 21.1478
- type: nauc_ndcg_at_1_max
value: 21.769
- type: nauc_ndcg_at_1_std
value: 2.4559
- type: nauc_ndcg_at_1_diff1
value: 22.7686
- type: nauc_ndcg_at_3_max
value: 24.3857
- type: nauc_ndcg_at_3_std
value: 5.9556
- type: nauc_ndcg_at_3_diff1
value: 22.3492
- type: nauc_ndcg_at_5_max
value: 25.810100000000002
- type: nauc_ndcg_at_5_std
value: 6.325799999999999
- type: nauc_ndcg_at_5_diff1
value: 21.993
- type: nauc_ndcg_at_10_max
value: 26.6969
- type: nauc_ndcg_at_10_std
value: 7.2925
- type: nauc_ndcg_at_10_diff1
value: 21.3312
- type: nauc_ndcg_at_20_max
value: 26.652900000000002
- type: nauc_ndcg_at_20_std
value: 7.271
- type: nauc_ndcg_at_20_diff1
value: 21.4505
- type: nauc_ndcg_at_100_max
value: 27.418300000000002
- type: nauc_ndcg_at_100_std
value: 9.1853
- type: nauc_ndcg_at_100_diff1
value: 21.0781
- type: nauc_ndcg_at_1000_max
value: 26.5394
- type: nauc_ndcg_at_1000_std
value: 8.4966
- type: nauc_ndcg_at_1000_diff1
value: 20.2687
- type: nauc_map_at_1_max
value: 21.621499999999997
- type: nauc_map_at_1_std
value: 6.7188
- type: nauc_map_at_1_diff1
value: 28.6267
- type: nauc_map_at_3_max
value: 24.7587
- type: nauc_map_at_3_std
value: 7.5144
- type: nauc_map_at_3_diff1
value: 24.7211
- type: nauc_map_at_5_max
value: 26.5481
- type: nauc_map_at_5_std
value: 6.7313
- type: nauc_map_at_5_diff1
value: 24.5343
- type: nauc_map_at_10_max
value: 26.962199999999996
- type: nauc_map_at_10_std
value: 7.3188
- type: nauc_map_at_10_diff1
value: 23.6207
- type: nauc_map_at_20_max
value: 27.009
- type: nauc_map_at_20_std
value: 7.2947999999999995
- type: nauc_map_at_20_diff1
value: 23.4863
- type: nauc_map_at_100_max
value: 27.185399999999998
- type: nauc_map_at_100_std
value: 7.5737
- type: nauc_map_at_100_diff1
value: 23.543
- type: nauc_map_at_1000_max
value: 27.1341
- type: nauc_map_at_1000_std
value: 7.5804
- type: nauc_map_at_1000_diff1
value: 23.494999999999997
- type: nauc_recall_at_1_max
value: 21.621499999999997
- type: nauc_recall_at_1_std
value: 6.7188
- type: nauc_recall_at_1_diff1
value: 28.6267
- type: nauc_recall_at_3_max
value: 23.969099999999997
- type: nauc_recall_at_3_std
value: 8.4769
- type: nauc_recall_at_3_diff1
value: 20.115
- type: nauc_recall_at_5_max
value: 25.155499999999996
- type: nauc_recall_at_5_std
value: 6.4667
- type: nauc_recall_at_5_diff1
value: 18.6197
- type: nauc_recall_at_10_max
value: 26.3774
- type: nauc_recall_at_10_std
value: 8.262799999999999
- type: nauc_recall_at_10_diff1
value: 17.7344
- type: nauc_recall_at_20_max
value: 25.6955
- type: nauc_recall_at_20_std
value: 8.1547
- type: nauc_recall_at_20_diff1
value: 18.0549
- type: nauc_recall_at_100_max
value: 28.3794
- type: nauc_recall_at_100_std
value: 16.8501
- type: nauc_recall_at_100_diff1
value: 14.7472
- type: nauc_recall_at_1000_max
value: 35.3088
- type: nauc_recall_at_1000_std
value: 34.5591
- type: nauc_recall_at_1000_diff1
value: -14.508099999999999
- type: nauc_precision_at_1_max
value: 21.769
- type: nauc_precision_at_1_std
value: 2.4559
- type: nauc_precision_at_1_diff1
value: 22.7686
- type: nauc_precision_at_3_max
value: 25.005100000000002
- type: nauc_precision_at_3_std
value: 3.7567000000000004
- type: nauc_precision_at_3_diff1
value: 20.7241
- type: nauc_precision_at_5_max
value: 27.572200000000002
- type: nauc_precision_at_5_std
value: 3.6336
- type: nauc_precision_at_5_diff1
value: 19.896
- type: nauc_precision_at_10_max
value: 27.253800000000002
- type: nauc_precision_at_10_std
value: 4.561599999999999
- type: nauc_precision_at_10_diff1
value: 16.7525
- type: nauc_precision_at_20_max
value: 25.235400000000002
- type: nauc_precision_at_20_std
value: 3.9741
- type: nauc_precision_at_20_diff1
value: 15.7945
- type: nauc_precision_at_100_max
value: 20.383100000000002
- type: nauc_precision_at_100_std
value: 4.2147
- type: nauc_precision_at_100_diff1
value: 13.3018
- type: nauc_precision_at_1000_max
value: 6.3098
- type: nauc_precision_at_1000_std
value: -1.7795999999999998
- type: nauc_precision_at_1000_diff1
value: 3.7354
- type: nauc_mrr_at_1_max
value: 21.769
- type: nauc_mrr_at_1_std
value: 2.4559
- type: nauc_mrr_at_1_diff1
value: 22.7686
- type: nauc_mrr_at_3_max
value: 22.3842
- type: nauc_mrr_at_3_std
value: 4.4822
- type: nauc_mrr_at_3_diff1
value: 19.708000000000002
- type: nauc_mrr_at_5_max
value: 22.7469
- type: nauc_mrr_at_5_std
value: 4.8326
- type: nauc_mrr_at_5_diff1
value: 19.5886
- type: nauc_mrr_at_10_max
value: 23.2992
- type: nauc_mrr_at_10_std
value: 5.2336
- type: nauc_mrr_at_10_diff1
value: 19.7147
- type: nauc_mrr_at_20_max
value: 23.244699999999998
- type: nauc_mrr_at_20_std
value: 5.2174
- type: nauc_mrr_at_20_diff1
value: 19.808600000000002
- type: nauc_mrr_at_100_max
value: 23.3962
- type: nauc_mrr_at_100_std
value: 5.4528
- type: nauc_mrr_at_100_diff1
value: 19.799
- type: nauc_mrr_at_1000_max
value: 23.386699999999998
- type: nauc_mrr_at_1000_std
value: 5.432
- type: nauc_mrr_at_1000_diff1
value: 19.7846
- type: main_score
value: 18.358
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (ara-eng)
type: jinaai/xpqa
config: ara-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 19.811
- type: ndcg_at_3
value: 21.506
- type: ndcg_at_5
value: 23.173
- type: ndcg_at_10
value: 26.913999999999998
- type: ndcg_at_20
value: 29.970000000000002
- type: ndcg_at_100
value: 35.274
- type: ndcg_at_1000
value: 39.164
- type: map_at_1
value: 11.013
- type: map_at_3
value: 17.051
- type: map_at_5
value: 19.209
- type: map_at_10
value: 21.105
- type: map_at_20
value: 22.189
- type: map_at_100
value: 23.143
- type: map_at_1000
value: 23.34
- type: recall_at_1
value: 11.013
- type: recall_at_3
value: 21.139
- type: recall_at_5
value: 27.136
- type: recall_at_10
value: 36.649
- type: recall_at_20
value: 46.752
- type: recall_at_100
value: 70.786
- type: recall_at_1000
value: 97.04899999999999
- type: precision_at_1
value: 19.811
- type: precision_at_3
value: 14.96
- type: precision_at_5
value: 11.725
- type: precision_at_10
value: 7.857
- type: precision_at_20
value: 4.939
- type: precision_at_100
value: 1.472
- type: precision_at_1000
value: 0.198
- type: mrr_at_1
value: 19.811300000000003
- type: mrr_at_3
value: 24.8428
- type: mrr_at_5
value: 26.2916
- type: mrr_at_10
value: 27.699
- type: mrr_at_20
value: 28.3441
- type: mrr_at_100
value: 28.8789
- type: mrr_at_1000
value: 28.968
- type: nauc_ndcg_at_1_max
value: 13.658600000000002
- type: nauc_ndcg_at_1_std
value: -10.888399999999999
- type: nauc_ndcg_at_1_diff1
value: 28.503
- type: nauc_ndcg_at_3_max
value: 13.2295
- type: nauc_ndcg_at_3_std
value: -8.3667
- type: nauc_ndcg_at_3_diff1
value: 24.2478
- type: nauc_ndcg_at_5_max
value: 16.2788
- type: nauc_ndcg_at_5_std
value: -6.1103
- type: nauc_ndcg_at_5_diff1
value: 23.8149
- type: nauc_ndcg_at_10_max
value: 17.7924
- type: nauc_ndcg_at_10_std
value: -5.2757
- type: nauc_ndcg_at_10_diff1
value: 22.7064
- type: nauc_ndcg_at_20_max
value: 19.031000000000002
- type: nauc_ndcg_at_20_std
value: -4.5977
- type: nauc_ndcg_at_20_diff1
value: 22.2638
- type: nauc_ndcg_at_100_max
value: 19.7211
- type: nauc_ndcg_at_100_std
value: -2.3255000000000003
- type: nauc_ndcg_at_100_diff1
value: 21.990299999999998
- type: nauc_ndcg_at_1000_max
value: 18.959799999999998
- type: nauc_ndcg_at_1000_std
value: -3.1267000000000005
- type: nauc_ndcg_at_1000_diff1
value: 22.975
- type: nauc_map_at_1_max
value: 4.2032
- type: nauc_map_at_1_std
value: -10.4419
- type: nauc_map_at_1_diff1
value: 27.2957
- type: nauc_map_at_3_max
value: 12.0436
- type: nauc_map_at_3_std
value: -8.5909
- type: nauc_map_at_3_diff1
value: 25.1571
- type: nauc_map_at_5_max
value: 15.2261
- type: nauc_map_at_5_std
value: -7.7981
- type: nauc_map_at_5_diff1
value: 24.9448
- type: nauc_map_at_10_max
value: 15.9522
- type: nauc_map_at_10_std
value: -7.366300000000001
- type: nauc_map_at_10_diff1
value: 24.191
- type: nauc_map_at_20_max
value: 16.4523
- type: nauc_map_at_20_std
value: -7.115
- type: nauc_map_at_20_diff1
value: 23.9544
- type: nauc_map_at_100_max
value: 16.615199999999998
- type: nauc_map_at_100_std
value: -6.7194
- type: nauc_map_at_100_diff1
value: 24.024
- type: nauc_map_at_1000_max
value: 16.598
- type: nauc_map_at_1000_std
value: -6.6981
- type: nauc_map_at_1000_diff1
value: 24.077399999999997
- type: nauc_recall_at_1_max
value: 4.2032
- type: nauc_recall_at_1_std
value: -10.4419
- type: nauc_recall_at_1_diff1
value: 27.2957
- type: nauc_recall_at_3_max
value: 12.0031
- type: nauc_recall_at_3_std
value: -5.558
- type: nauc_recall_at_3_diff1
value: 21.6049
- type: nauc_recall_at_5_max
value: 18.288899999999998
- type: nauc_recall_at_5_std
value: -1.9322
- type: nauc_recall_at_5_diff1
value: 20.0738
- type: nauc_recall_at_10_max
value: 20.4263
- type: nauc_recall_at_10_std
value: -0.4483
- type: nauc_recall_at_10_diff1
value: 16.9348
- type: nauc_recall_at_20_max
value: 23.555400000000002
- type: nauc_recall_at_20_std
value: 1.7368999999999999
- type: nauc_recall_at_20_diff1
value: 15.4241
- type: nauc_recall_at_100_max
value: 28.749599999999997
- type: nauc_recall_at_100_std
value: 15.001999999999999
- type: nauc_recall_at_100_diff1
value: 10.1602
- type: nauc_recall_at_1000_max
value: 52.9767
- type: nauc_recall_at_1000_std
value: 63.133300000000006
- type: nauc_recall_at_1000_diff1
value: -8.1688
- type: nauc_precision_at_1_max
value: 13.658600000000002
- type: nauc_precision_at_1_std
value: -10.888399999999999
- type: nauc_precision_at_1_diff1
value: 28.503
- type: nauc_precision_at_3_max
value: 18.2643
- type: nauc_precision_at_3_std
value: -7.6172
- type: nauc_precision_at_3_diff1
value: 20.1407
- type: nauc_precision_at_5_max
value: 23.6899
- type: nauc_precision_at_5_std
value: -5.0431
- type: nauc_precision_at_5_diff1
value: 19.3496
- type: nauc_precision_at_10_max
value: 23.7744
- type: nauc_precision_at_10_std
value: -2.9978000000000002
- type: nauc_precision_at_10_diff1
value: 15.9886
- type: nauc_precision_at_20_max
value: 23.9516
- type: nauc_precision_at_20_std
value: -1.881
- type: nauc_precision_at_20_diff1
value: 13.858
- type: nauc_precision_at_100_max
value: 22.0491
- type: nauc_precision_at_100_std
value: 3.9923
- type: nauc_precision_at_100_diff1
value: 10.8588
- type: nauc_precision_at_1000_max
value: 15.2248
- type: nauc_precision_at_1000_std
value: 2.2651
- type: nauc_precision_at_1000_diff1
value: 8.451500000000001
- type: nauc_mrr_at_1_max
value: 13.658600000000002
- type: nauc_mrr_at_1_std
value: -10.888399999999999
- type: nauc_mrr_at_1_diff1
value: 28.503
- type: nauc_mrr_at_3_max
value: 12.0131
- type: nauc_mrr_at_3_std
value: -9.0483
- type: nauc_mrr_at_3_diff1
value: 25.1263
- type: nauc_mrr_at_5_max
value: 14.2408
- type: nauc_mrr_at_5_std
value: -7.324400000000001
- type: nauc_mrr_at_5_diff1
value: 24.4894
- type: nauc_mrr_at_10_max
value: 15.1286
- type: nauc_mrr_at_10_std
value: -6.958
- type: nauc_mrr_at_10_diff1
value: 24.5045
- type: nauc_mrr_at_20_max
value: 15.3281
- type: nauc_mrr_at_20_std
value: -6.8811
- type: nauc_mrr_at_20_diff1
value: 24.4511
- type: nauc_mrr_at_100_max
value: 15.237700000000002
- type: nauc_mrr_at_100_std
value: -6.6511000000000005
- type: nauc_mrr_at_100_diff1
value: 24.4441
- type: nauc_mrr_at_1000_max
value: 15.2116
- type: nauc_mrr_at_1000_std
value: -6.6709000000000005
- type: nauc_mrr_at_1000_diff1
value: 24.4846
- type: main_score
value: 26.913999999999998
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 81.20578037912223
- type: cosine_spearman
value: 77.43670420687278
- type: euclidean_pearson
value: 74.60444698819703
- type: euclidean_spearman
value: 72.25767053642666
- type: main_score
value: 77.43670420687278
- type: manhattan_pearson
value: 73.86951335383257
- type: manhattan_spearman
value: 71.41608509527123
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 83.11155556919923
- type: cosine_spearman
value: 79.39435627520159
- type: euclidean_pearson
value: 81.05225024180342
- type: euclidean_spearman
value: 79.09926890001618
- type: main_score
value: 79.39435627520159
- type: manhattan_pearson
value: 80.74351302609706
- type: manhattan_spearman
value: 78.826254748334
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 85.10074960888633
- type: cosine_spearman
value: 78.93043293576132
- type: euclidean_pearson
value: 84.1168219787408
- type: euclidean_spearman
value: 78.44739559202252
- type: main_score
value: 78.93043293576132
- type: manhattan_pearson
value: 83.79447841594396
- type: manhattan_spearman
value: 77.94028171700384
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 81.34459901517775
- type: cosine_spearman
value: 82.73032633919925
- type: euclidean_pearson
value: 82.83546499367434
- type: euclidean_spearman
value: 83.29701673615389
- type: main_score
value: 82.73032633919925
- type: manhattan_pearson
value: 82.63480502797324
- type: manhattan_spearman
value: 83.05016589615636
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 82.53179983763488
- type: cosine_spearman
value: 81.64974497557361
- type: euclidean_pearson
value: 83.03981070806898
- type: euclidean_spearman
value: 82.65556168300631
- type: main_score
value: 81.64974497557361
- type: manhattan_pearson
value: 82.83722360191446
- type: manhattan_spearman
value: 82.4164264119
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 86.5684162475647
- type: cosine_spearman
value: 87.62163215009723
- type: euclidean_pearson
value: 87.3068288651339
- type: euclidean_spearman
value: 88.03508640722863
- type: main_score
value: 87.62163215009723
- type: manhattan_pearson
value: 87.21818681800193
- type: manhattan_spearman
value: 87.94690511382603
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 81.70518105237446
- type: cosine_spearman
value: 83.66083698795428
- type: euclidean_pearson
value: 82.80400684544435
- type: euclidean_spearman
value: 83.39926895275799
- type: main_score
value: 83.66083698795428
- type: manhattan_pearson
value: 82.44430538731845
- type: manhattan_spearman
value: 82.99600783826028
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 82.23229967696153
- type: cosine_spearman
value: 82.40039006538706
- type: euclidean_pearson
value: 79.21322872573518
- type: euclidean_spearman
value: 79.14230529579783
- type: main_score
value: 82.40039006538706
- type: manhattan_pearson
value: 79.1476348987964
- type: manhattan_spearman
value: 78.82381660638143
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 45.95767124518871
- type: cosine_spearman
value: 51.37922888872568
- type: euclidean_pearson
value: 45.519471121310126
- type: euclidean_spearman
value: 51.45605803385654
- type: main_score
value: 51.37922888872568
- type: manhattan_pearson
value: 45.98761117909666
- type: manhattan_spearman
value: 51.48451973989366
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 85.38916827757183
- type: cosine_spearman
value: 86.16303183485594
- type: euclidean_pearson
value: 85.16406897245115
- type: euclidean_spearman
value: 85.40364087457081
- type: main_score
value: 86.16303183485594
- type: manhattan_pearson
value: 84.96853193915084
- type: manhattan_spearman
value: 85.13238442843544
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.077426987171158
- type: cosine_spearman
value: 30.163682020271608
- type: dot_pearson
value: 27.31125295906803
- type: dot_spearman
value: 29.138235153208193
- type: main_score
value: 30.163682020271608
- type: pearson
value: 30.077426987171158
- type: spearman
value: 30.163682020271608
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.8538831619509135
name: Pearson Cosine
- type: spearman_cosine
value: 0.861625750018802
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8496745674597512
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8513333417508545
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8516261261374778
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8540549341060195
name: Spearman Euclidean
- type: pearson_dot
value: 0.7281308266536204
name: Pearson Dot
- type: spearman_dot
value: 0.7230282720855726
name: Spearman Dot
- type: pearson_max
value: 0.8538831619509135
name: Pearson Max
- type: spearman_max
value: 0.861625750018802
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.8542379189261009
name: Pearson Cosine
- type: spearman_cosine
value: 0.8609329396560859
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8486657899695456
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8512120732504748
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8505249483849495
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8538738365440234
name: Spearman Euclidean
- type: pearson_dot
value: 0.7075618032859148
name: Pearson Dot
- type: spearman_dot
value: 0.7028728329509918
name: Spearman Dot
- type: pearson_max
value: 0.8542379189261009
name: Pearson Max
- type: spearman_max
value: 0.8609329396560859
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.8486308733045101
name: Pearson Cosine
- type: spearman_cosine
value: 0.8578681811996274
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8404506123980291
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.845565163232125
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8414758099131773
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8471566121478254
name: Spearman Euclidean
- type: pearson_dot
value: 0.6668664182302968
name: Pearson Dot
- type: spearman_dot
value: 0.6651222481800894
name: Spearman Dot
- type: pearson_max
value: 0.8486308733045101
name: Pearson Max
- type: spearman_max
value: 0.8578681811996274
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.8389761445410956
name: Pearson Cosine
- type: spearman_cosine
value: 0.8499312736457453
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8287388421834582
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8353046807483782
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8297699263897746
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8371843253238523
name: Spearman Euclidean
- type: pearson_dot
value: 0.5855876200722326
name: Pearson Dot
- type: spearman_dot
value: 0.5834920267418124
name: Spearman Dot
- type: pearson_max
value: 0.8389761445410956
name: Pearson Max
- type: spearman_max
value: 0.8499312736457453
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.8290685425698586
name: Pearson Cosine
- type: spearman_cosine
value: 0.8429054799136109
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8100968316314205
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8221121550434057
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8129044863346081
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8255133471709527
name: Spearman Euclidean
- type: pearson_dot
value: 0.5067257944655903
name: Pearson Dot
- type: spearman_dot
value: 0.5109761436588146
name: Spearman Dot
- type: pearson_max
value: 0.8290685425698586
name: Pearson Max
- type: spearman_max
value: 0.8429054799136109
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 79f2382ceacceacdf38563d7c5d16b9ff8d725d6 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-Nli-Matryoshka")
# Run inference
sentences = [
'ูุฌูุณ ุดุงุจ ุฐู ุดุนุฑ ุฃุดูุฑ ุนูู ุงูุญุงุฆุท ููุฑุฃ ุฌุฑูุฏุฉ ุจููู
ุง ุชู
ุฑ ุงู
ุฑุฃุฉ ููุชุงุฉ ุดุงุจุฉ.',
'ุฐูุฑ ุดุงุจ ููุธุฑ ุฅูู ุฌุฑูุฏุฉ ุจููู
ุง ุชู
ุฑ ุฅู
ุฑุฃุชุงู ุจุฌุงูุจู',
'ุงูุดุงุจ ูุงุฆู
ุจููู
ุง ุงูุฃู
ุชููุฏ ุงุจูุชูุง ุฅูู ุงูุญุฏููุฉ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8539 |
| **spearman_cosine** | **0.8616** |
| pearson_manhattan | 0.8497 |
| spearman_manhattan | 0.8513 |
| pearson_euclidean | 0.8516 |
| spearman_euclidean | 0.8541 |
| pearson_dot | 0.7281 |
| spearman_dot | 0.723 |
| pearson_max | 0.8539 |
| spearman_max | 0.8616 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8542 |
| **spearman_cosine** | **0.8609** |
| pearson_manhattan | 0.8487 |
| spearman_manhattan | 0.8512 |
| pearson_euclidean | 0.8505 |
| spearman_euclidean | 0.8539 |
| pearson_dot | 0.7076 |
| spearman_dot | 0.7029 |
| pearson_max | 0.8542 |
| spearman_max | 0.8609 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8486 |
| **spearman_cosine** | **0.8579** |
| pearson_manhattan | 0.8405 |
| spearman_manhattan | 0.8456 |
| pearson_euclidean | 0.8415 |
| spearman_euclidean | 0.8472 |
| pearson_dot | 0.6669 |
| spearman_dot | 0.6651 |
| pearson_max | 0.8486 |
| spearman_max | 0.8579 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.839 |
| **spearman_cosine** | **0.8499** |
| pearson_manhattan | 0.8287 |
| spearman_manhattan | 0.8353 |
| pearson_euclidean | 0.8298 |
| spearman_euclidean | 0.8372 |
| pearson_dot | 0.5856 |
| spearman_dot | 0.5835 |
| pearson_max | 0.839 |
| spearman_max | 0.8499 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8291 |
| **spearman_cosine** | **0.8429** |
| pearson_manhattan | 0.8101 |
| spearman_manhattan | 0.8221 |
| pearson_euclidean | 0.8129 |
| spearman_euclidean | 0.8255 |
| pearson_dot | 0.5067 |
| spearman_dot | 0.511 |
| pearson_max | 0.8291 |
| spearman_max | 0.8429 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 10.33 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.21 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.32 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>ุดุฎุต ุนูู ุญุตุงู ูููุฒ ููู ุทุงุฆุฑุฉ ู
ุนุทูุฉ</code> | <code>ุดุฎุต ูู ุงูููุงุก ุงูุทููุ ุนูู ุญุตุงู.</code> | <code>ุดุฎุต ูู ู
ุทุนู
ุ ูุทูุจ ุนุฌุฉ.</code> |
| <code>ุฃุทูุงู ูุจุชุณู
ูู ู ูููุญูู ูููุงู
ูุฑุง</code> | <code>ููุงู ุฃุทูุงู ุญุงุถุฑูู</code> | <code>ุงูุงุทูุงู ูุชุฌูู
ูู</code> |
| <code>ุตุจู ูููุฒ ุนูู ููุญ ุงูุชุฒูุฌ ูู ู
ูุชุตู ุงูุฌุณุฑ ุงูุฃุญู
ุฑ.</code> | <code>ุงููุชู ูููู
ุจุฎุฏุนุฉ ุงูุชุฒูุฌ</code> | <code>ุงูุตุจู ูุชุฒูุฌ ุนูู ุงูุฑุตูู</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 21.86 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.22 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.2 tokens</li><li>max: 33 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>ุงู
ุฑุฃุชุงู ูุชุนุงููุงู ุจููู
ุง ูุญู
ูุงู ุญุฒู
ุฉ</code> | <code>ุฅู
ุฑุฃุชุงู ูุญู
ูุงู ุญุฒู
ุฉ</code> | <code>ุงูุฑุฌุงู ูุชุดุงุฌุฑูู ุฎุงุฑุฌ ู
ุทุนู
</code> |
| <code>ุทูููู ุตุบูุฑูู ูุฑุชุฏูุงู ูู
ูุตุงู ุฃุฒุฑูุ ุฃุญุฏูู
ุง ูุฑุชุฏู ุงูุฑูู
9 ูุงูุขุฎุฑ ูุฑุชุฏู ุงูุฑูู
2 ูููุงู ุนูู ุฎุทูุงุช ุฎุดุจูุฉ ูู ุงูุญู
ุงู
ููุบุณูุงู ุฃูุฏููู
ุง ูู ุงูู
ุบุณูุฉ.</code> | <code>ุทูููู ูุฑุชุฏูุงู ูู
ูุตุงู ู
ุฑูู
ุงู ูุบุณููู ุฃูุฏููู
</code> | <code>ุทูููู ูุฑุชุฏูุงู ุณุชุฑุฉ ูุฐูุจุงู ุฅูู ุงูู
ุฏุฑุณุฉ</code> |
| <code>ุฑุฌู ูุจูุน ุงูุฏููุงุช ูุนู
ูู ุฎูุงู ู
ุนุฑุถ ุนุงูู
ู ุฃููู
ูู ู
ุฏููุฉ ุฃูุฌููุณ</code> | <code>ุฑุฌู ูุจูุน ุงูุฏููุงุช ูุนู
ูู</code> | <code>ุงู
ุฑุฃุฉ ุชุดุฑุจ ูููุชูุง ูู ู
ููู ุตุบูุฑ</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.2294 | 500 | 10.1279 | - | - | - | - | - |
| 0.4587 | 1000 | 8.0384 | - | - | - | - | - |
| 0.6881 | 1500 | 7.3484 | - | - | - | - | - |
| 0.9174 | 2000 | 4.2216 | - | - | - | - | - |
| 1.0 | 2180 | - | 0.8499 | 0.8579 | 0.8609 | 0.8429 | 0.8616 |
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## <span style="color:blue">Acknowledgments</span>
The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.
```markdown
## Citation
If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:
@misc{nacar2024enhancingsemanticsimilarityunderstanding,
title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning},
author={Omer Nacar and Anis Koubaa},
year={2024},
eprint={2407.21139},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.21139},
}
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] |
Non_BioNLP
|
tomaarsen/span-marker-mbert-base-fewnerd-fine-super
|
tomaarsen
|
token-classification
|
[
"span-marker",
"pytorch",
"tensorboard",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"multilingual",
"dataset:DFKI-SLT/few-nerd",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:cc-by-sa-4.0",
"model-index",
"co2_eq_emissions",
"region:us"
] | 1,696,116,361,000 | 2023-10-01T07:02:20 | 15 | 2 |
---
base_model: bert-base-multilingual-cased
datasets:
- DFKI-SLT/few-nerd
language:
- en
- multilingual
library_name: span-marker
license: cc-by-sa-4.0
metrics:
- precision
- recall
- f1
pipeline_tag: token-classification
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
widget:
- text: Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic
to Paris.
example_title: English 1
- text: The WPC led the international peace movement in the decade after the Second
World War, but its failure to speak out against the Soviet suppression of the
1956 Hungarian uprising and the resumption of Soviet nuclear tests in 1961 marginalised
it, and in the 1960s it was eclipsed by the newer, non-aligned peace organizations
like the Campaign for Nuclear Disarmament.
example_title: English 2
- text: Most of the Steven Seagal movie "Under Siege" (co-starring Tommy Lee Jones)
was filmed on the Battleship USS Alabama, which is docked on Mobile Bay at Battleship
Memorial Park and open to the public.
example_title: English 3
- text: 'The Central African CFA franc (French: "franc CFA" or simply "franc", ISO
4217 code: XAF) is the currency of six independent states in Central Africa: Cameroon,
Central African Republic, Chad, Republic of the Congo, Equatorial Guinea and Gabon.'
example_title: English 4
- text: Brenner conducted post-doctoral research at Brandeis University with Gregory
Petsko and then took his first academic position at Thomas Jefferson University
in 1996, moving to Dartmouth Medical School in 2003, where he served as Associate
Director for Basic Sciences at Norris Cotton Cancer Center.
example_title: English 5
- text: On Friday, October 27, 2017, the Senate of Spain (Senado) voted 214 to 47
to invoke Article 155 of the Spanish Constitution over Catalonia after the Catalan
Parliament declared the independence.
example_title: English 6
- text: Amelia Earthart volรณ su Lockheed Vega 5B monomotor a travรฉs del Ocรฉano Atlรกntico
hasta Parรญs.
example_title: Spanish
- text: Amelia Earthart a fait voler son monomoteur Lockheed Vega 5B ร travers l'ocean
Atlantique jusqu'ร Paris.
example_title: French
- text: Amelia Earthart flog mit ihrer einmotorigen Lockheed Vega 5B รผber den Atlantik
nach Paris.
example_title: German
- text: ะะผะตะปะธั ะญััั
ะฐัั ะฟะตัะตะปะตัะตะปะฐ ะฝะฐ ัะฒะพะตะผ ะพะดะฝะพะผะพัะพัะฝะพะผ ัะฐะผะพะปะตัะต Lockheed Vega 5B
ัะตัะตะท ะัะปะฐะฝัะธัะตัะบะธะน ะพะบะตะฐะฝ ะฒ ะะฐัะธะถ.
example_title: Russian
- text: Amelia Earthart vloog met haar รฉรฉn-motorige Lockheed Vega 5B over de Atlantische
Oceaan naar Parijs.
example_title: Dutch
- text: Amelia Earthart przeleciaลa swoim jednosilnikowym samolotem Lockheed Vega
5B przez Ocean Atlantycki do Paryลผa.
example_title: Polish
- text: Amelia Earthart flaug eins hreyfils Lockheed Vega 5B yfir Atlantshafiรฐ til
Parรญsar.
example_title: Icelandic
- text: ฮ Amelia Earthart ฯฮญฯฮฑฮพฮต ฯฮฟ ฮผฮฟฮฝฮฟฮบฮนฮฝฮทฯฮฎฯฮนฮฟ Lockheed Vega 5B ฯฮทฯ ฯฮญฯฮฑ โโฮฑฯฯ
ฯฮฟฮฝ ฮฯฮปฮฑฮฝฯฮนฮบฯ ฮฉฮบฮตฮฑฮฝฯ ฯฯฮฟ ฮ ฮฑฯฮฏฯฮน.
example_title: Greek
co2_eq_emissions:
emissions: 572.6675932546113
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 3.867
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: SpanMarker with bert-base-multilingual-cased on FewNERD
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: FewNERD
type: DFKI-SLT/few-nerd
split: test
metrics:
- type: f1
value: 0.7006507253689264
name: F1
- type: precision
value: 0.7040676584045078
name: Precision
- type: recall
value: 0.6972667978051558
name: Recall
---
# SpanMarker with bert-base-multilingual-cased on FewNERD
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
- **Languages:** en, multilingual
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------|
| art-broadcastprogram | "Corazones", "Street Cents", "The Gale Storm Show : Oh , Susanna" |
| art-film | "L'Atlantide", "Bosch", "Shawshank Redemption" |
| art-music | "Atkinson , Danko and Ford ( with Brockie and Hilton )", "Hollywood Studio Symphony", "Champion Lover" |
| art-other | "Aphrodite of Milos", "The Today Show", "Venus de Milo" |
| art-painting | "Production/Reproduction", "Touit", "Cofiwch Dryweryn" |
| art-writtenart | "The Seven Year Itch", "Time", "Imelda de ' Lambertazzi" |
| building-airport | "Luton Airport", "Newark Liberty International Airport", "Sheremetyevo International Airport" |
| building-hospital | "Hokkaido University Hospital", "Yeungnam University Hospital", "Memorial Sloan-Kettering Cancer Center" |
| building-hotel | "Flamingo Hotel", "The Standard Hotel", "Radisson Blu Sea Plaza Hotel" |
| building-library | "British Library", "Bayerische Staatsbibliothek", "Berlin State Library" |
| building-other | "Communiplex", "Henry Ford Museum", "Alpha Recording Studios" |
| building-restaurant | "Fatburger", "Carnegie Deli", "Trumbull" |
| building-sportsfacility | "Sports Center", "Glenn Warner Soccer Facility", "Boston Garden" |
| building-theater | "Sanders Theatre", "Pittsburgh Civic Light Opera", "National Paris Opera" |
| event-attack/battle/war/militaryconflict | "Vietnam War", "Jurist", "Easter Offensive" |
| event-disaster | "1693 Sicily earthquake", "the 1912 North Mount Lyell Disaster", "1990s North Korean famine" |
| event-election | "March 1898 elections", "1982 Mitcham and Morden by-election", "Elections to the European Parliament" |
| event-other | "Eastwood Scoring Stage", "Masaryk Democratic Movement", "Union for a Popular Movement" |
| event-protest | "Russian Revolution", "Iranian Constitutional Revolution", "French Revolution" |
| event-sportsevent | "Stanley Cup", "World Cup", "National Champions" |
| location-GPE | "Mediterranean Basin", "Croatian", "the Republic of Croatia" |
| location-bodiesofwater | "Norfolk coast", "Atatรผrk Dam Lake", "Arthur Kill" |
| location-island | "Staten Island", "Laccadives", "new Samsat district" |
| location-mountain | "Miteirya Ridge", "Ruweisat Ridge", "Salamander Glacier" |
| location-other | "Victoria line", "Cartuther", "Northern City Line" |
| location-park | "Painted Desert Community Complex Historic District", "Shenandoah National Park", "Gramercy Park" |
| location-road/railway/highway/transit | "Friern Barnet Road", "Newark-Elizabeth Rail Link", "NJT" |
| organization-company | "Church 's Chicken", "Dixy Chicken", "Texas Chicken" |
| organization-education | "MIT", "Barnard College", "Belfast Royal Academy and the Ulster College of Physical Education" |
| organization-government/governmentagency | "Supreme Court", "Diet", "Congregazione dei Nobili" |
| organization-media/newspaper | "TimeOut Melbourne", "Clash", "Al Jazeera" |
| organization-other | "IAEA", "Defence Sector C", "4th Army" |
| organization-politicalparty | "Al Wafa ' Islamic", "Kenseitล", "Shimpotล" |
| organization-religion | "Christian", "UPCUSA", "Jewish" |
| organization-showorganization | "Lizzy", "Mr. Mister", "Bochumer Symphoniker" |
| organization-sportsleague | "China League One", "NHL", "First Division" |
| organization-sportsteam | "Luc Alphand Aventures", "Tottenham", "Arsenal" |
| other-astronomything | "`` Caput Larvae ''", "Algol", "Zodiac" |
| other-award | "GCON", "Order of the Republic of Guinea and Nigeria", "Grand Commander of the Order of the Niger" |
| other-biologything | "BAR", "Amphiphysin", "N-terminal lipid" |
| other-chemicalthing | "sulfur", "uranium", "carbon dioxide" |
| other-currency | "Travancore Rupee", "$", "lac crore" |
| other-disease | "bladder cancer", "hypothyroidism", "French Dysentery Epidemic of 1779" |
| other-educationaldegree | "Master", "Bachelor", "BSc ( Hons ) in physics" |
| other-god | "Fujin", "Raijin", "El" |
| other-language | "Latin", "English", "Breton-speaking" |
| other-law | "Thirty Years ' Peace", "United States Freedom Support Act", "LeahyโSmith America Invents Act ( AIA" |
| other-livingthing | "monkeys", "insects", "patchouli" |
| other-medical | "Pediatrics", "amitriptyline", "pediatrician" |
| person-actor | "Edmund Payne", "Ellaline Terriss", "Tchรฉky Karyo" |
| person-artist/author | "George Axelrod", "Hicks", "Gaetano Donizett" |
| person-athlete | "Tozawa", "Neville", "Jaguar" |
| person-director | "Richard Quine", "Frank Darabont", "Bob Swaim" |
| person-other | "Richard Benson", "Campbell", "Holden" |
| person-politician | "Riviรจre", "William", "Emeric" |
| person-scholar | "Wurdack", "Stedman", "Stalmine" |
| person-soldier | "Joachim Ziegler", "Krukenberg", "Helmuth Weidling" |
| product-airplane | "Luton", "Spey-equipped FGR.2s", "EC135T2 CPDS" |
| product-car | "Corvettes - GT1 C6R", "Phantom", "100EX" |
| product-food | "V. labrusca", "yakiniku", "red grape" |
| product-game | "Airforce Delta", "Hardcore RPG", "Splinter Cell" |
| product-other | "PDP-1", "Fairbottom Bobs", "X11" |
| product-ship | "HMS `` Chinkara ''", "Congress", "Essex" |
| product-software | "Apdf", "Wikipedia", "AmiPDF" |
| product-train | "Royal Scots Grey", "High Speed Trains", "55022" |
| product-weapon | "AR-15 's", "ZU-23-2M Wrรณbel", "ZU-23-2MR Wrรณbel II" |
## Evaluation
### Metrics
| Label | Precision | Recall | F1 |
|:-----------------------------------------|:----------|:-------|:-------|
| **all** | 0.7041 | 0.6973 | 0.7007 |
| art-broadcastprogram | 0.5863 | 0.6252 | 0.6051 |
| art-film | 0.7779 | 0.752 | 0.7647 |
| art-music | 0.8014 | 0.7570 | 0.7786 |
| art-other | 0.4209 | 0.3221 | 0.3649 |
| art-painting | 0.5938 | 0.6667 | 0.6281 |
| art-writtenart | 0.6854 | 0.6415 | 0.6628 |
| building-airport | 0.8197 | 0.8242 | 0.8219 |
| building-hospital | 0.7215 | 0.8187 | 0.7671 |
| building-hotel | 0.7233 | 0.6906 | 0.7066 |
| building-library | 0.7588 | 0.7268 | 0.7424 |
| building-other | 0.5842 | 0.5855 | 0.5848 |
| building-restaurant | 0.5567 | 0.4871 | 0.5195 |
| building-sportsfacility | 0.6512 | 0.7690 | 0.7052 |
| building-theater | 0.6994 | 0.7516 | 0.7246 |
| event-attack/battle/war/militaryconflict | 0.7800 | 0.7332 | 0.7559 |
| event-disaster | 0.5767 | 0.5266 | 0.5505 |
| event-election | 0.5106 | 0.1319 | 0.2096 |
| event-other | 0.4931 | 0.4145 | 0.4504 |
| event-protest | 0.3711 | 0.4337 | 0.4000 |
| event-sportsevent | 0.6156 | 0.6156 | 0.6156 |
| location-GPE | 0.8175 | 0.8508 | 0.8338 |
| location-bodiesofwater | 0.7297 | 0.7622 | 0.7456 |
| location-island | 0.7314 | 0.6703 | 0.6995 |
| location-mountain | 0.7538 | 0.7283 | 0.7409 |
| location-other | 0.4370 | 0.3040 | 0.3585 |
| location-park | 0.7063 | 0.6878 | 0.6969 |
| location-road/railway/highway/transit | 0.7092 | 0.7259 | 0.7174 |
| organization-company | 0.6911 | 0.6943 | 0.6927 |
| organization-education | 0.7799 | 0.7973 | 0.7885 |
| organization-government/governmentagency | 0.5518 | 0.4474 | 0.4942 |
| organization-media/newspaper | 0.6268 | 0.6761 | 0.6505 |
| organization-other | 0.5804 | 0.5341 | 0.5563 |
| organization-politicalparty | 0.6627 | 0.7306 | 0.6949 |
| organization-religion | 0.5636 | 0.6265 | 0.5934 |
| organization-showorganization | 0.6023 | 0.6086 | 0.6054 |
| organization-sportsleague | 0.6594 | 0.6497 | 0.6545 |
| organization-sportsteam | 0.7341 | 0.7703 | 0.7518 |
| other-astronomything | 0.7806 | 0.8289 | 0.8040 |
| other-award | 0.7230 | 0.6703 | 0.6957 |
| other-biologything | 0.6733 | 0.6366 | 0.6544 |
| other-chemicalthing | 0.5962 | 0.5838 | 0.5899 |
| other-currency | 0.7135 | 0.7822 | 0.7463 |
| other-disease | 0.6260 | 0.7063 | 0.6637 |
| other-educationaldegree | 0.6 | 0.6033 | 0.6016 |
| other-god | 0.7051 | 0.7118 | 0.7085 |
| other-language | 0.6849 | 0.7968 | 0.7366 |
| other-law | 0.6814 | 0.6843 | 0.6829 |
| other-livingthing | 0.5959 | 0.6443 | 0.6192 |
| other-medical | 0.5247 | 0.4811 | 0.5020 |
| person-actor | 0.8342 | 0.7960 | 0.8146 |
| person-artist/author | 0.7052 | 0.7482 | 0.7261 |
| person-athlete | 0.8396 | 0.8530 | 0.8462 |
| person-director | 0.725 | 0.7329 | 0.7289 |
| person-other | 0.6866 | 0.6672 | 0.6767 |
| person-politician | 0.6819 | 0.6852 | 0.6835 |
| person-scholar | 0.5468 | 0.4953 | 0.5198 |
| person-soldier | 0.5360 | 0.5641 | 0.5497 |
| product-airplane | 0.6825 | 0.6730 | 0.6777 |
| product-car | 0.7205 | 0.7016 | 0.7109 |
| product-food | 0.6036 | 0.5394 | 0.5697 |
| product-game | 0.7740 | 0.6876 | 0.7282 |
| product-other | 0.5250 | 0.4117 | 0.4615 |
| product-ship | 0.6781 | 0.6763 | 0.6772 |
| product-software | 0.6701 | 0.6603 | 0.6652 |
| product-train | 0.5919 | 0.6051 | 0.5984 |
| product-weapon | 0.6507 | 0.5433 | 0.5921 |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the ๐ค Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-mbert-base-fewnerd-fine-super")
# Run inference
entities = model.predict("Most of the Steven Seagal movie \"Under Siege \"(co-starring Tommy Lee Jones) was filmed on the, which is docked on Mobile Bay at Battleship Memorial Park and open to the public.")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from span_marker import SpanMarkerModel, Trainer
# Download from the ๐ค Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-mbert-base-fewnerd-fine-super")
# Specify a Dataset with "tokens" and "ner_tag" columns
dataset = load_dataset("conll2003") # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("tomaarsen/span-marker-mbert-base-fewnerd-fine-super-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 1 | 24.4945 | 267 |
| Entities per sentence | 0 | 2.5832 | 88 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:-----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.2972 | 3000 | 0.0274 | 0.6488 | 0.6457 | 0.6473 | 0.9121 |
| 0.5944 | 6000 | 0.0252 | 0.6686 | 0.6545 | 0.6615 | 0.9160 |
| 0.8915 | 9000 | 0.0239 | 0.6918 | 0.6547 | 0.6727 | 0.9178 |
| 1.1887 | 12000 | 0.0235 | 0.6962 | 0.6727 | 0.6842 | 0.9210 |
| 1.4859 | 15000 | 0.0233 | 0.6872 | 0.6742 | 0.6806 | 0.9201 |
| 1.7831 | 18000 | 0.0226 | 0.6969 | 0.6891 | 0.6929 | 0.9236 |
| 2.0802 | 21000 | 0.0231 | 0.7030 | 0.6916 | 0.6973 | 0.9246 |
| 2.3774 | 24000 | 0.0227 | 0.7020 | 0.6936 | 0.6978 | 0.9248 |
| 2.6746 | 27000 | 0.0223 | 0.7079 | 0.6989 | 0.7034 | 0.9258 |
| 2.9718 | 30000 | 0.0222 | 0.7089 | 0.7009 | 0.7049 | 0.9263 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.573 kg of CO2
- **Hours Used**: 3.867 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.9.16
- SpanMarker: 1.4.1.dev
- Transformers: 4.30.0
- PyTorch: 2.0.1+cu118
- Datasets: 2.14.0
- Tokenizers: 0.13.2
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"NAMED_ENTITY_RECOGNITION"
] |
Non_BioNLP
|
ymoslem/ModernBERT-large-qe-maxlen512-v1
|
ymoslem
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"quality-estimation",
"regression",
"generated_from_trainer",
"multilingual",
"bn",
"cs",
"de",
"en",
"et",
"fi",
"fr",
"gu",
"ha",
"hi",
"is",
"ja",
"kk",
"km",
"lt",
"lv",
"pl",
"ps",
"ru",
"ta",
"tr",
"uk",
"xh",
"zh",
"zu",
"dataset:ymoslem/wmt-da-human-evaluation",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,736,756,805,000 | 2025-01-17T13:25:00 | 41 | 1 |
---
base_model: answerdotai/ModernBERT-large
datasets:
- ymoslem/wmt-da-human-evaluation
language:
- multilingual
- bn
- cs
- de
- en
- et
- fi
- fr
- gu
- ha
- hi
- is
- ja
- kk
- km
- lt
- lv
- pl
- ps
- ru
- ta
- tr
- uk
- xh
- zh
- zu
library_name: transformers
license: apache-2.0
metrics:
- pearsonr
- mae
- r_squared
tags:
- quality-estimation
- regression
- generated_from_trainer
new_version: ymoslem/ModernBERT-large-qe-v1
model-index:
- name: Quality Estimation for Machine Translation
results:
- task:
type: regression
dataset:
name: ymoslem/wmt-da-human-evaluation
type: QE
metrics:
- type: Pearson
value: 0.4589
name: Pearson Correlation
- type: MAE
value: 0.1861
name: Mean Absolute Error
- type: RMSE
value: 0.2375
name: Root Mean Squared Error
- type: R2
value: 0.2106
name: R-Squared
---
# Quality Estimation for Machine Translation
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large)
on the [ymoslem/wmt-da-human-evaluation](https://huggingface.co/ymoslem/wmt-da-human-evaluation) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0564
## Model description
This model is for reference-free quality estimation (QE) of machine translation (MT) systems.
## Training procedure
### Training hyperparameters
This version of the model uses `tokenizer.model_max_length=512`.
The model with full length of 8192 can be found here [ymoslem/ModernBERT-large-qe-v1](https://huggingface.co/ymoslem/ModernBERT-large-qe-v1/)
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.0631 | 0.1004 | 1000 | 0.0674 |
| 0.0614 | 0.2007 | 2000 | 0.0599 |
| 0.0578 | 0.3011 | 3000 | 0.0585 |
| 0.0585 | 0.4015 | 4000 | 0.0579 |
| 0.0568 | 0.5019 | 5000 | 0.0570 |
| 0.057 | 0.6022 | 6000 | 0.0568 |
| 0.0579 | 0.7026 | 7000 | 0.0567 |
| 0.0573 | 0.8030 | 8000 | 0.0565 |
| 0.0568 | 0.9033 | 9000 | 0.0564 |
| 0.0571 | 1.0037 | 10000 | 0.0564 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.4.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
## Inference
1. Install the required libraries.
```bash
pip3 install --upgrade datasets accelerate transformers
pip3 install --upgrade flash_attn triton
```
2. Load the test dataset.
```python
from datasets import load_dataset
test_dataset = load_dataset("ymoslem/wmt-da-human-evaluation",
split="test",
trust_remote_code=True
)
print(test_dataset)
```
3. Load the model and tokenizer:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load the fine-tuned model and tokenizer
model_name = "ymoslem/ModernBERT-large-qe-maxlen512-v1"
model = AutoModelForSequenceClassification.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Move model to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()
```
4. Prepare the dataset. Each source segment `src` and target segment `tgt` are separated by the `sep_token`, which is `'</s>'` for ModernBERT.
```python
sep_token = tokenizer.sep_token
input_test_texts = [f"{src} {sep_token} {tgt}" for src, tgt in zip(test_dataset["src"], test_dataset["mt"])]
```
5. Generate predictions.
If you print `model.config.problem_type`, the output is `regression`.
Still, you can use the "text-classification" pipeline as follows (cf. [pipeline documentation](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TextClassificationPipeline)):
```python
from transformers import pipeline
classifier = pipeline("text-classification",
model=model_name,
tokenizer=tokenizer,
device=0,
)
predictions = classifier(input_test_texts,
batch_size=128,
truncation=True,
padding="max_length",
max_length=tokenizer.model_max_length,
)
predictions = [prediction["score"] for prediction in predictions]
```
Alternatively, you can use an elaborate version of the code, which is slightly faster and provides more control.
```python
from torch.utils.data import DataLoader
import torch
from tqdm.auto import tqdm
# Tokenization function
def process_batch(batch, tokenizer, device):
sep_token = tokenizer.sep_token
input_texts = [f"{src} {sep_token} {tgt}" for src, tgt in zip(batch["src"], batch["mt"])]
tokens = tokenizer(input_texts,
truncation=True,
padding="max_length",
max_length=tokenizer.model_max_length,
return_tensors="pt",
).to(device)
return tokens
# Create a DataLoader for batching
test_dataloader = DataLoader(test_dataset,
batch_size=128, # Adjust batch size as needed
shuffle=False)
# List to store all predictions
predictions = []
with torch.no_grad():
for batch in tqdm(test_dataloader, desc="Inference Progress", unit="batch"):
tokens = process_batch(batch, tokenizer, device)
# Forward pass: Generate model's logits
outputs = model(**tokens)
# Get logits (predictions)
logits = outputs.logits
# Extract the regression predicted values
batch_predictions = logits.squeeze()
# Extend the list with the predictions
predictions.extend(batch_predictions.tolist())
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
Baiming123/Calcu_Disease_Similarity
|
Baiming123
|
sentence-similarity
|
[
"sentence-transformers",
"pytorch",
"bert",
"sentence-similarity",
"dataset:Baiming123/MeSHDS",
"base_model:sentence-transformers/multi-qa-MiniLM-L6-cos-v1",
"base_model:finetune:sentence-transformers/multi-qa-MiniLM-L6-cos-v1",
"doi:10.57967/hf/3108",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,726,847,893,000 | 2024-12-14T10:10:29 | 0 | 3 |
---
base_model:
- sentence-transformers/multi-qa-MiniLM-L6-cos-v1
datasets:
- Baiming123/MeSHDS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
---
# Model Description
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.The 'Calcu_Disease_Similarity' model is designed to encode two disease terms and compute their **semantic similarity**. The model has been fine-tuned on disease-related datasets 'MeSHDS' and achieves a high F1 score in distinguishing experimentally validated miRNA-target interactions (MTIs) and predicted MTIs by considering disease similarity.
If you use this model in your research, please cite the following paper:
```
@article {Chen2024.05.17.594604,
author = {Chen, Baiming},
title = {miRTarDS: High-Accuracy Refining Protein-level MicroRNA Target Interactions from Prediction Databases Using Sentence-BERT},
elocation-id = {2024.05.17.594604},
year = {2024},
doi = {10.1101/2024.05.17.594604},
publisher = {Cold Spring Harbor Laboratory},
abstract = {MicroRNAs (miRNAs) regulate gene expression by binding to mRNAs, inhibiting translation, or promoting mRNA degradation. miRNAs are of great importance in the development of various diseases. Currently, numerous sequence-based miRNA target prediction tools are available, however, only 1\% of their predictions have been experimentally validated. In this study, we propose a novel approach that leverages disease similarity between miRNAs and genes as a key feature to further refine and screen human sequence-based predicted miRNA target interactions (MTIs). To quantify the semantic similarity of diseases, we fine-tuned the Sentence-BERT model. Our method achieved an F1 score of 0.88 in accurately distinguishing human protein-level experimentally validated MTIs (functional MTIs, validated through western blot or reporter assay) and predicted MTIs. Moreover, this method exhibits exceptional generalizability across different databases. We applied the proposed method to analyze 1,220,904 human MTIs sourced from miRTarbase, miRDB, and miRWalk, encompassing 6,085 genes and 1,261 pre-miRNAs. Notably, we accurately identified 3,883 out of 3,962 MTIs with strong experimental evidence from miRTarbase. This study has the potential to provide valuable insights into the understanding of miRNA-gene regulatory networks and to promote advancements in disease diagnosis, treatment, and drug development.Competing Interest StatementThe authors have declared no competing interest.},
URL = {https://www.biorxiv.org/content/early/2024/12/08/2024.05.17.594604},
eprint = {https://www.biorxiv.org/content/early/2024/12/08/2024.05.17.594604.full.pdf},
journal = {bioRxiv}
}
```
## Key Features:
- Fine-tuned to compute semantic similarity between disease names.
- Achieves an F1 score of 0.88 in distinguishing protein-level experimentally (western blot, reporter assay) validated MTIs and predicted MTIs.
- Built for applications in understanding miRNA-gene regulatory networks, disease diagnosis, treatment, and drug discovery.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
# Usage (Sentence-Transformers)
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
# Load the pre-trained SBERT model
from sentence_transformers import SentenceTransformer, util
# Replace 'your/path/to/Calcu_Disease_Similarity' with the actual path to the model
model = SentenceTransformer("Baiming123/Calcu_Disease_Similarity")
# Example usage
disease1 = "lung cancer"
disease2 = "pulmonary fibrosis"
def sts(sentence_a, sentence_b) -> float:
query_emb = model.encode(sentence_a)
doc_emb = model.encode(sentence_b)
[score] = util.dot_score(query_emb, doc_emb)[0].tolist()
return score
similarity = sts(disease1, disease2)
print(similarity)
```
# Additional Information
## License
This model is licensed under CC-BY-NC 4.0 International license. If you use this model, please adhere to the license requirements.
## Questions or Issues
If you encounter any issues or have any questions while using the model, feel free to reach out to the author for assistance. Thank you for your support and for using this model!
|
[
"SEMANTIC_SIMILARITY",
"TRANSLATION"
] |
BioNLP
|
pacoreyes/StanceFit
|
pacoreyes
|
text-classification
|
[
"setfit",
"pytorch",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"doi:10.57967/hf/2618",
"region:us"
] | 1,714,969,652,000 | 2024-06-20T06:19:20 | 7 | 1 |
---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: We will also discuss our deep concerns with actions by China, including in
Xinjiang, Hong Kong, Taiwan, cyber attacks on the United States, economic coercion
toward our allies.
- text: In the field of bilateral trade and investment, we have agreed that much can
be done to expand the present level of activity.
- text: We cannot allow the world's leading sponsor of terrorism to possess the planet's
most dangerous weapons.
- text: Because I do think this is not a function of whatever happened in Syria, I
think this is a function of the sanctions.
- text: One is to fight inflation, which has been hanging over our head and putting
a burden on the working people of this country for the last 10 years.
inference: true
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'We in the United States believe if we can promote democracy around the world, there will be more peace.'</li><li>'We recognise the transformative power of technology, including digital public infrastructure, to support sustainable development in the Indo-Pacific and deliver economic and social benefits.'</li><li>'This program strengthens democracy, transparency, and the rule of law in developing nations, and I ask you to fully fund this important initiative.'</li></ul> |
| 1 | <ul><li>'I do not ever want to ever fight a war that is unconstitutional and I am the dangerous person.'</li><li>"And so, we are at a moment where I really think threats to our democracy, threats to our core freedoms are very much on people's minds."</li><li>'My views in opposition to the cancellation of the war debt are a matter of detailed record in many public statements and in a recent message to the Congress.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("We cannot allow the world's leading sponsor of terrorism to possess the planet's most dangerous weapons.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 23.4393 | 46 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 486 |
| 1 | 486 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (1.003444469523018e-06, 1.003444469523018e-06)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 37
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.3295 | - |
| 0.0017 | 50 | 0.3132 | - |
| 0.0034 | 100 | 0.274 | - |
| 0.0051 | 150 | 0.2774 | - |
| 0.0068 | 200 | 0.2578 | - |
| 0.0084 | 250 | 0.2536 | - |
| 0.0101 | 300 | 0.3353 | - |
| 0.0118 | 350 | 0.253 | - |
| 0.0135 | 400 | 0.2865 | - |
| 0.0152 | 450 | 0.2894 | - |
| 0.0169 | 500 | 0.2554 | 0.2632 |
| 0.0186 | 550 | 0.2487 | - |
| 0.0203 | 600 | 0.2713 | - |
| 0.0220 | 650 | 0.2841 | - |
| 0.0237 | 700 | 0.2251 | - |
| 0.0253 | 750 | 0.2534 | - |
| 0.0270 | 800 | 0.2489 | - |
| 0.0287 | 850 | 0.2297 | - |
| 0.0304 | 900 | 0.2288 | - |
| 0.0321 | 950 | 0.211 | - |
| 0.0338 | 1000 | 0.188 | 0.2073 |
| 0.0355 | 1050 | 0.1488 | - |
| 0.0372 | 1100 | 0.2103 | - |
| 0.0389 | 1150 | 0.1607 | - |
| 0.0406 | 1200 | 0.0793 | - |
| 0.0422 | 1250 | 0.0968 | - |
| 0.0439 | 1300 | 0.0987 | - |
| 0.0456 | 1350 | 0.0786 | - |
| 0.0473 | 1400 | 0.0267 | - |
| 0.0490 | 1450 | 0.0432 | - |
| 0.0507 | 1500 | 0.0262 | 0.064 |
| 0.0524 | 1550 | 0.1269 | - |
| 0.0541 | 1600 | 0.039 | - |
| 0.0558 | 1650 | 0.0266 | - |
| 0.0575 | 1700 | 0.0455 | - |
| 0.0591 | 1750 | 0.0175 | - |
| 0.0608 | 1800 | 0.0157 | - |
| 0.0625 | 1850 | 0.0063 | - |
| 0.0642 | 1900 | 0.0146 | - |
| 0.0659 | 1950 | 0.0046 | - |
| **0.0676** | **2000** | **0.0046** | **0.0464** |
| 0.0693 | 2050 | 0.0035 | - |
| 0.0710 | 2100 | 0.0073 | - |
| 0.0727 | 2150 | 0.0012 | - |
| 0.0744 | 2200 | 0.0025 | - |
| 0.0760 | 2250 | 0.0023 | - |
| 0.0777 | 2300 | 0.0017 | - |
| 0.0794 | 2350 | 0.0012 | - |
| 0.0811 | 2400 | 0.0017 | - |
| 0.0828 | 2450 | 0.0016 | - |
| 0.0845 | 2500 | 0.0014 | 0.0535 |
| 0.0862 | 2550 | 0.0011 | - |
| 0.0879 | 2600 | 0.0021 | - |
| 0.0896 | 2650 | 0.0009 | - |
| 0.0913 | 2700 | 0.0008 | - |
| 0.0929 | 2750 | 0.0006 | - |
| 0.0946 | 2800 | 0.0007 | - |
| 0.0963 | 2850 | 0.0012 | - |
| 0.0980 | 2900 | 0.001 | - |
| 0.0997 | 2950 | 0.0005 | - |
| 0.1014 | 3000 | 0.0006 | 0.0575 |
| 0.1031 | 3050 | 0.0006 | - |
| 0.1048 | 3100 | 0.0004 | - |
| 0.1065 | 3150 | 0.0006 | - |
| 0.1082 | 3200 | 0.0005 | - |
| 0.1098 | 3250 | 0.0006 | - |
| 0.1115 | 3300 | 0.0005 | - |
| 0.1132 | 3350 | 0.0008 | - |
| 0.1149 | 3400 | 0.0003 | - |
| 0.1166 | 3450 | 0.0005 | - |
| 0.1183 | 3500 | 0.0004 | 0.0642 |
| 0.1200 | 3550 | 0.0006 | - |
| 0.1217 | 3600 | 0.0003 | - |
| 0.1234 | 3650 | 0.0009 | - |
| 0.1251 | 3700 | 0.0002 | - |
| 0.1267 | 3750 | 0.0003 | - |
| 0.1284 | 3800 | 0.0005 | - |
| 0.1301 | 3850 | 0.0002 | - |
| 0.1318 | 3900 | 0.0002 | - |
| 0.1335 | 3950 | 0.0005 | - |
| 0.1352 | 4000 | 0.0003 | 0.0697 |
| 0.1369 | 4050 | 0.0002 | - |
| 0.1386 | 4100 | 0.0002 | - |
| 0.1403 | 4150 | 0.0004 | - |
| 0.1420 | 4200 | 0.0012 | - |
| 0.1436 | 4250 | 0.0002 | - |
| 0.1453 | 4300 | 0.0002 | - |
| 0.1470 | 4350 | 0.0001 | - |
| 0.1487 | 4400 | 0.0002 | - |
| 0.1504 | 4450 | 0.0002 | - |
| 0.1521 | 4500 | 0.0003 | 0.0718 |
| 0.1538 | 4550 | 0.0003 | - |
| 0.1555 | 4600 | 0.0002 | - |
| 0.1572 | 4650 | 0.0002 | - |
| 0.1589 | 4700 | 0.0003 | - |
| 0.1605 | 4750 | 0.0002 | - |
| 0.1622 | 4800 | 0.0002 | - |
| 0.1639 | 4850 | 0.0002 | - |
| 0.1656 | 4900 | 0.0002 | - |
| 0.1673 | 4950 | 0.0002 | - |
| 0.1690 | 5000 | 0.0002 | 0.0684 |
| 0.1707 | 5050 | 0.0002 | - |
| 0.1724 | 5100 | 0.0002 | - |
| 0.1741 | 5150 | 0.0002 | - |
| 0.1758 | 5200 | 0.0003 | - |
| 0.1774 | 5250 | 0.0002 | - |
| 0.1791 | 5300 | 0.0001 | - |
| 0.1808 | 5350 | 0.0002 | - |
| 0.1825 | 5400 | 0.0001 | - |
| 0.1842 | 5450 | 0.0001 | - |
| 0.1859 | 5500 | 0.0001 | 0.0731 |
| 0.1876 | 5550 | 0.0002 | - |
| 0.1893 | 5600 | 0.0002 | - |
| 0.1910 | 5650 | 0.0001 | - |
| 0.1927 | 5700 | 0.0001 | - |
| 0.1943 | 5750 | 0.0001 | - |
| 0.1960 | 5800 | 0.0002 | - |
| 0.1977 | 5850 | 0.0001 | - |
| 0.1994 | 5900 | 0.0003 | - |
| 0.2011 | 5950 | 0.0002 | - |
| 0.2028 | 6000 | 0.0002 | 0.0724 |
| 0.2045 | 6050 | 0.0001 | - |
| 0.2062 | 6100 | 0.0001 | - |
| 0.2079 | 6150 | 0.0001 | - |
| 0.2096 | 6200 | 0.0001 | - |
| 0.2112 | 6250 | 0.0001 | - |
| 0.2129 | 6300 | 0.0002 | - |
| 0.2146 | 6350 | 0.0001 | - |
| 0.2163 | 6400 | 0.0001 | - |
| 0.2180 | 6450 | 0.0001 | - |
| 0.2197 | 6500 | 0.0001 | 0.0784 |
| 0.2214 | 6550 | 0.0001 | - |
| 0.2231 | 6600 | 0.0001 | - |
| 0.2248 | 6650 | 0.0001 | - |
| 0.2265 | 6700 | 0.0001 | - |
| 0.2281 | 6750 | 0.0001 | - |
| 0.2298 | 6800 | 0.0001 | - |
| 0.2315 | 6850 | 0.0001 | - |
| 0.2332 | 6900 | 0.0001 | - |
| 0.2349 | 6950 | 0.0002 | - |
| 0.2366 | 7000 | 0.0001 | 0.0672 |
| 0.2383 | 7050 | 0.0001 | - |
| 0.2400 | 7100 | 0.0001 | - |
| 0.2417 | 7150 | 0.0001 | - |
| 0.2434 | 7200 | 0.0001 | - |
| 0.2450 | 7250 | 0.0001 | - |
| 0.2467 | 7300 | 0.0001 | - |
| 0.2484 | 7350 | 0.0001 | - |
| 0.2501 | 7400 | 0.0001 | - |
| 0.2518 | 7450 | 0.0001 | - |
| 0.2535 | 7500 | 0.0001 | 0.0627 |
| 0.2552 | 7550 | 0.0001 | - |
| 0.2569 | 7600 | 0.0001 | - |
| 0.2586 | 7650 | 0.0 | - |
| 0.2603 | 7700 | 0.0001 | - |
| 0.2619 | 7750 | 0.0 | - |
| 0.2636 | 7800 | 0.0001 | - |
| 0.2653 | 7850 | 0.0001 | - |
| 0.2670 | 7900 | 0.0001 | - |
| 0.2687 | 7950 | 0.0001 | - |
| 0.2704 | 8000 | 0.0 | 0.0754 |
| 0.2721 | 8050 | 0.0001 | - |
| 0.2738 | 8100 | 0.0001 | - |
| 0.2755 | 8150 | 0.0 | - |
| 0.2772 | 8200 | 0.0 | - |
| 0.2788 | 8250 | 0.0 | - |
| 0.2805 | 8300 | 0.0001 | - |
| 0.2822 | 8350 | 0.0001 | - |
| 0.2839 | 8400 | 0.0001 | - |
| 0.2856 | 8450 | 0.0 | - |
| 0.2873 | 8500 | 0.0 | 0.0748 |
| 0.2890 | 8550 | 0.0 | - |
| 0.2907 | 8600 | 0.0 | - |
| 0.2924 | 8650 | 0.0 | - |
| 0.2941 | 8700 | 0.0 | - |
| 0.2957 | 8750 | 0.0001 | - |
| 0.2974 | 8800 | 0.0001 | - |
| 0.2991 | 8850 | 0.0001 | - |
| 0.3008 | 8900 | 0.0 | - |
| 0.3025 | 8950 | 0.0001 | - |
| 0.3042 | 9000 | 0.0001 | 0.057 |
| 0.3059 | 9050 | 0.0 | - |
| 0.3076 | 9100 | 0.0 | - |
| 0.3093 | 9150 | 0.0002 | - |
| 0.3110 | 9200 | 0.0 | - |
| 0.3126 | 9250 | 0.0 | - |
| 0.3143 | 9300 | 0.0 | - |
| 0.3160 | 9350 | 0.0001 | - |
| 0.3177 | 9400 | 0.0002 | - |
| 0.3194 | 9450 | 0.0 | - |
| 0.3211 | 9500 | 0.0 | 0.0781 |
| 0.3228 | 9550 | 0.0 | - |
| 0.3245 | 9600 | 0.0 | - |
| 0.3262 | 9650 | 0.0 | - |
| 0.3279 | 9700 | 0.0 | - |
| 0.3295 | 9750 | 0.0 | - |
| 0.3312 | 9800 | 0.0 | - |
| 0.3329 | 9850 | 0.0 | - |
| 0.3346 | 9900 | 0.0001 | - |
| 0.3363 | 9950 | 0.0 | - |
| 0.3380 | 10000 | 0.0 | 0.0698 |
| 0.3397 | 10050 | 0.0 | - |
| 0.3414 | 10100 | 0.0 | - |
| 0.3431 | 10150 | 0.0 | - |
| 0.3448 | 10200 | 0.0 | - |
| 0.3464 | 10250 | 0.0022 | - |
| 0.3481 | 10300 | 0.0 | - |
| 0.3498 | 10350 | 0.0001 | - |
| 0.3515 | 10400 | 0.0 | - |
| 0.3532 | 10450 | 0.0 | - |
| 0.3549 | 10500 | 0.0 | 0.0698 |
| 0.3566 | 10550 | 0.0 | - |
| 0.3583 | 10600 | 0.0 | - |
| 0.3600 | 10650 | 0.0 | - |
| 0.3617 | 10700 | 0.0 | - |
| 0.3633 | 10750 | 0.0 | - |
| 0.3650 | 10800 | 0.0 | - |
| 0.3667 | 10850 | 0.0 | - |
| 0.3684 | 10900 | 0.0001 | - |
| 0.3701 | 10950 | 0.0 | - |
| 0.3718 | 11000 | 0.0 | 0.0746 |
| 0.3735 | 11050 | 0.0 | - |
| 0.3752 | 11100 | 0.0 | - |
| 0.3769 | 11150 | 0.0001 | - |
| 0.3786 | 11200 | 0.0 | - |
| 0.3802 | 11250 | 0.0 | - |
| 0.3819 | 11300 | 0.0 | - |
| 0.3836 | 11350 | 0.0 | - |
| 0.3853 | 11400 | 0.0 | - |
| 0.3870 | 11450 | 0.0 | - |
| 0.3887 | 11500 | 0.0 | 0.0753 |
| 0.3904 | 11550 | 0.0 | - |
| 0.3921 | 11600 | 0.0001 | - |
| 0.3938 | 11650 | 0.0 | - |
| 0.3955 | 11700 | 0.0 | - |
| 0.3971 | 11750 | 0.0 | - |
| 0.3988 | 11800 | 0.0 | - |
| 0.4005 | 11850 | 0.0 | - |
| 0.4022 | 11900 | 0.0 | - |
| 0.4039 | 11950 | 0.0 | - |
| 0.4056 | 12000 | 0.0 | 0.0743 |
| 0.4073 | 12050 | 0.0 | - |
| 0.4090 | 12100 | 0.0 | - |
| 0.4107 | 12150 | 0.0 | - |
| 0.4124 | 12200 | 0.0 | - |
| 0.4140 | 12250 | 0.0 | - |
| 0.4157 | 12300 | 0.0 | - |
| 0.4174 | 12350 | 0.0 | - |
| 0.4191 | 12400 | 0.0 | - |
| 0.4208 | 12450 | 0.0 | - |
| 0.4225 | 12500 | 0.0 | 0.0733 |
| 0.4242 | 12550 | 0.0 | - |
| 0.4259 | 12600 | 0.0 | - |
| 0.4276 | 12650 | 0.0 | - |
| 0.4293 | 12700 | 0.0 | - |
| 0.4309 | 12750 | 0.0 | - |
| 0.4326 | 12800 | 0.0 | - |
| 0.4343 | 12850 | 0.0 | - |
| 0.4360 | 12900 | 0.0 | - |
| 0.4377 | 12950 | 0.0 | - |
| 0.4394 | 13000 | 0.0 | 0.072 |
| 0.4411 | 13050 | 0.0 | - |
| 0.4428 | 13100 | 0.0 | - |
| 0.4445 | 13150 | 0.0 | - |
| 0.4462 | 13200 | 0.0 | - |
| 0.4478 | 13250 | 0.0 | - |
| 0.4495 | 13300 | 0.0 | - |
| 0.4512 | 13350 | 0.0 | - |
| 0.4529 | 13400 | 0.0 | - |
| 0.4546 | 13450 | 0.0 | - |
| 0.4563 | 13500 | 0.0 | 0.0753 |
| 0.4580 | 13550 | 0.0 | - |
| 0.4597 | 13600 | 0.0 | - |
| 0.4614 | 13650 | 0.0 | - |
| 0.4631 | 13700 | 0.0 | - |
| 0.4647 | 13750 | 0.0 | - |
| 0.4664 | 13800 | 0.0 | - |
| 0.4681 | 13850 | 0.0 | - |
| 0.4698 | 13900 | 0.0 | - |
| 0.4715 | 13950 | 0.0 | - |
| 0.4732 | 14000 | 0.0 | 0.0756 |
| 0.4749 | 14050 | 0.0 | - |
| 0.4766 | 14100 | 0.0 | - |
| 0.4783 | 14150 | 0.0 | - |
| 0.4800 | 14200 | 0.0 | - |
| 0.4816 | 14250 | 0.0 | - |
| 0.4833 | 14300 | 0.0 | - |
| 0.4850 | 14350 | 0.0 | - |
| 0.4867 | 14400 | 0.0 | - |
| 0.4884 | 14450 | 0.0 | - |
| 0.4901 | 14500 | 0.0 | 0.0622 |
| 0.4918 | 14550 | 0.0 | - |
| 0.4935 | 14600 | 0.0 | - |
| 0.4952 | 14650 | 0.0 | - |
| 0.4969 | 14700 | 0.0 | - |
| 0.4985 | 14750 | 0.0 | - |
| 0.5002 | 14800 | 0.0 | - |
| 0.5019 | 14850 | 0.0 | - |
| 0.5036 | 14900 | 0.0 | - |
| 0.5053 | 14950 | 0.0 | - |
| 0.5070 | 15000 | 0.0 | 0.0676 |
| 0.5087 | 15050 | 0.0 | - |
| 0.5104 | 15100 | 0.0 | - |
| 0.5121 | 15150 | 0.0 | - |
| 0.5138 | 15200 | 0.0 | - |
| 0.5154 | 15250 | 0.0 | - |
| 0.5171 | 15300 | 0.0 | - |
| 0.5188 | 15350 | 0.0 | - |
| 0.5205 | 15400 | 0.0 | - |
| 0.5222 | 15450 | 0.0 | - |
| 0.5239 | 15500 | 0.0 | 0.0668 |
| 0.5256 | 15550 | 0.0 | - |
| 0.5273 | 15600 | 0.0 | - |
| 0.5290 | 15650 | 0.0 | - |
| 0.5307 | 15700 | 0.0 | - |
| 0.5323 | 15750 | 0.0 | - |
| 0.5340 | 15800 | 0.0 | - |
| 0.5357 | 15850 | 0.0 | - |
| 0.5374 | 15900 | 0.0 | - |
| 0.5391 | 15950 | 0.0 | - |
| 0.5408 | 16000 | 0.0 | 0.0707 |
| 0.5425 | 16050 | 0.0 | - |
| 0.5442 | 16100 | 0.0 | - |
| 0.5459 | 16150 | 0.0 | - |
| 0.5476 | 16200 | 0.0 | - |
| 0.5492 | 16250 | 0.0 | - |
| 0.5509 | 16300 | 0.0 | - |
| 0.5526 | 16350 | 0.0 | - |
| 0.5543 | 16400 | 0.0 | - |
| 0.5560 | 16450 | 0.0 | - |
| 0.5577 | 16500 | 0.0 | 0.0644 |
| 0.5594 | 16550 | 0.0 | - |
| 0.5611 | 16600 | 0.0 | - |
| 0.5628 | 16650 | 0.0 | - |
| 0.5645 | 16700 | 0.0 | - |
| 0.5661 | 16750 | 0.0 | - |
| 0.5678 | 16800 | 0.0 | - |
| 0.5695 | 16850 | 0.0 | - |
| 0.5712 | 16900 | 0.0 | - |
| 0.5729 | 16950 | 0.0 | - |
| 0.5746 | 17000 | 0.0 | 0.0742 |
| 0.5763 | 17050 | 0.0 | - |
| 0.5780 | 17100 | 0.0 | - |
| 0.5797 | 17150 | 0.0 | - |
| 0.5814 | 17200 | 0.0 | - |
| 0.5830 | 17250 | 0.0 | - |
| 0.5847 | 17300 | 0.0 | - |
| 0.5864 | 17350 | 0.0 | - |
| 0.5881 | 17400 | 0.0 | - |
| 0.5898 | 17450 | 0.0 | - |
| 0.5915 | 17500 | 0.0 | 0.0738 |
| 0.5932 | 17550 | 0.0 | - |
| 0.5949 | 17600 | 0.0 | - |
| 0.5966 | 17650 | 0.0 | - |
| 0.5983 | 17700 | 0.0 | - |
| 0.5999 | 17750 | 0.0 | - |
| 0.6016 | 17800 | 0.0 | - |
| 0.6033 | 17850 | 0.0 | - |
| 0.6050 | 17900 | 0.0 | - |
| 0.6067 | 17950 | 0.0 | - |
| 0.6084 | 18000 | 0.0 | 0.0725 |
| 0.6101 | 18050 | 0.0 | - |
| 0.6118 | 18100 | 0.0 | - |
| 0.6135 | 18150 | 0.0 | - |
| 0.6152 | 18200 | 0.0 | - |
| 0.6168 | 18250 | 0.0 | - |
| 0.6185 | 18300 | 0.0 | - |
| 0.6202 | 18350 | 0.0 | - |
| 0.6219 | 18400 | 0.0 | - |
| 0.6236 | 18450 | 0.0 | - |
| 0.6253 | 18500 | 0.0 | 0.0724 |
| 0.6270 | 18550 | 0.0 | - |
| 0.6287 | 18600 | 0.0 | - |
| 0.6304 | 18650 | 0.0 | - |
| 0.6321 | 18700 | 0.0 | - |
| 0.6337 | 18750 | 0.0 | - |
| 0.6354 | 18800 | 0.0 | - |
| 0.6371 | 18850 | 0.0 | - |
| 0.6388 | 18900 | 0.0 | - |
| 0.6405 | 18950 | 0.0 | - |
| 0.6422 | 19000 | 0.0 | 0.0622 |
| 0.6439 | 19050 | 0.0 | - |
| 0.6456 | 19100 | 0.0 | - |
| 0.6473 | 19150 | 0.0 | - |
| 0.6490 | 19200 | 0.0 | - |
| 0.6506 | 19250 | 0.0 | - |
| 0.6523 | 19300 | 0.0 | - |
| 0.6540 | 19350 | 0.0 | - |
| 0.6557 | 19400 | 0.0 | - |
| 0.6574 | 19450 | 0.0 | - |
| 0.6591 | 19500 | 0.0 | 0.0754 |
| 0.6608 | 19550 | 0.0 | - |
| 0.6625 | 19600 | 0.0 | - |
| 0.6642 | 19650 | 0.0 | - |
| 0.6659 | 19700 | 0.0 | - |
| 0.6675 | 19750 | 0.0 | - |
| 0.6692 | 19800 | 0.0 | - |
| 0.6709 | 19850 | 0.0 | - |
| 0.6726 | 19900 | 0.0 | - |
| 0.6743 | 19950 | 0.0 | - |
| 0.6760 | 20000 | 0.0 | 0.0723 |
| 0.6777 | 20050 | 0.0 | - |
| 0.6794 | 20100 | 0.0 | - |
| 0.6811 | 20150 | 0.0 | - |
| 0.6828 | 20200 | 0.0 | - |
| 0.6844 | 20250 | 0.0 | - |
| 0.6861 | 20300 | 0.0 | - |
| 0.6878 | 20350 | 0.0 | - |
| 0.6895 | 20400 | 0.0 | - |
| 0.6912 | 20450 | 0.0 | - |
| 0.6929 | 20500 | 0.0 | 0.0741 |
| 0.6946 | 20550 | 0.0 | - |
| 0.6963 | 20600 | 0.0 | - |
| 0.6980 | 20650 | 0.0 | - |
| 0.6997 | 20700 | 0.0 | - |
| 0.7013 | 20750 | 0.0 | - |
| 0.7030 | 20800 | 0.0 | - |
| 0.7047 | 20850 | 0.0 | - |
| 0.7064 | 20900 | 0.0 | - |
| 0.7081 | 20950 | 0.0 | - |
| 0.7098 | 21000 | 0.0 | 0.0733 |
| 0.7115 | 21050 | 0.0 | - |
| 0.7132 | 21100 | 0.0 | - |
| 0.7149 | 21150 | 0.0 | - |
| 0.7166 | 21200 | 0.0 | - |
| 0.7182 | 21250 | 0.0 | - |
| 0.7199 | 21300 | 0.0 | - |
| 0.7216 | 21350 | 0.0 | - |
| 0.7233 | 21400 | 0.0 | - |
| 0.7250 | 21450 | 0.0 | - |
| 0.7267 | 21500 | 0.0 | 0.0757 |
| 0.7284 | 21550 | 0.0 | - |
| 0.7301 | 21600 | 0.0 | - |
| 0.7318 | 21650 | 0.0 | - |
| 0.7335 | 21700 | 0.0 | - |
| 0.7351 | 21750 | 0.0 | - |
| 0.7368 | 21800 | 0.0 | - |
| 0.7385 | 21850 | 0.0 | - |
| 0.7402 | 21900 | 0.0 | - |
| 0.7419 | 21950 | 0.0 | - |
| 0.7436 | 22000 | 0.0 | 0.0766 |
| 0.7453 | 22050 | 0.0 | - |
| 0.7470 | 22100 | 0.0 | - |
| 0.7487 | 22150 | 0.0 | - |
| 0.7504 | 22200 | 0.0 | - |
| 0.7520 | 22250 | 0.0 | - |
| 0.7537 | 22300 | 0.0 | - |
| 0.7554 | 22350 | 0.0 | - |
| 0.7571 | 22400 | 0.0 | - |
| 0.7588 | 22450 | 0.0 | - |
| 0.7605 | 22500 | 0.0 | 0.0757 |
| 0.7622 | 22550 | 0.0 | - |
| 0.7639 | 22600 | 0.0 | - |
| 0.7656 | 22650 | 0.0 | - |
| 0.7673 | 22700 | 0.0 | - |
| 0.7689 | 22750 | 0.0 | - |
| 0.7706 | 22800 | 0.0 | - |
| 0.7723 | 22850 | 0.0 | - |
| 0.7740 | 22900 | 0.0 | - |
| 0.7757 | 22950 | 0.0 | - |
| 0.7774 | 23000 | 0.0 | 0.0755 |
| 0.7791 | 23050 | 0.0 | - |
| 0.7808 | 23100 | 0.0 | - |
| 0.7825 | 23150 | 0.0 | - |
| 0.7842 | 23200 | 0.0 | - |
| 0.7858 | 23250 | 0.0 | - |
| 0.7875 | 23300 | 0.0 | - |
| 0.7892 | 23350 | 0.0 | - |
| 0.7909 | 23400 | 0.0 | - |
| 0.7926 | 23450 | 0.0 | - |
| 0.7943 | 23500 | 0.0 | 0.076 |
| 0.7960 | 23550 | 0.0 | - |
| 0.7977 | 23600 | 0.0 | - |
| 0.7994 | 23650 | 0.0 | - |
| 0.8011 | 23700 | 0.0 | - |
| 0.8027 | 23750 | 0.0 | - |
| 0.8044 | 23800 | 0.0 | - |
| 0.8061 | 23850 | 0.0 | - |
| 0.8078 | 23900 | 0.0 | - |
| 0.8095 | 23950 | 0.0 | - |
| 0.8112 | 24000 | 0.0 | 0.0756 |
| 0.8129 | 24050 | 0.0 | - |
| 0.8146 | 24100 | 0.0 | - |
| 0.8163 | 24150 | 0.0 | - |
| 0.8180 | 24200 | 0.0 | - |
| 0.8196 | 24250 | 0.0 | - |
| 0.8213 | 24300 | 0.0 | - |
| 0.8230 | 24350 | 0.0 | - |
| 0.8247 | 24400 | 0.0 | - |
| 0.8264 | 24450 | 0.0 | - |
| 0.8281 | 24500 | 0.0 | 0.0759 |
| 0.8298 | 24550 | 0.0 | - |
| 0.8315 | 24600 | 0.0 | - |
| 0.8332 | 24650 | 0.0 | - |
| 0.8349 | 24700 | 0.0 | - |
| 0.8365 | 24750 | 0.0 | - |
| 0.8382 | 24800 | 0.0 | - |
| 0.8399 | 24850 | 0.0 | - |
| 0.8416 | 24900 | 0.0 | - |
| 0.8433 | 24950 | 0.0 | - |
| 0.8450 | 25000 | 0.0 | 0.0762 |
| 0.8467 | 25050 | 0.0 | - |
| 0.8484 | 25100 | 0.0 | - |
| 0.8501 | 25150 | 0.0 | - |
| 0.8518 | 25200 | 0.0 | - |
| 0.8534 | 25250 | 0.0 | - |
| 0.8551 | 25300 | 0.0 | - |
| 0.8568 | 25350 | 0.0 | - |
| 0.8585 | 25400 | 0.0 | - |
| 0.8602 | 25450 | 0.0 | - |
| 0.8619 | 25500 | 0.0 | 0.0733 |
| 0.8636 | 25550 | 0.0 | - |
| 0.8653 | 25600 | 0.0 | - |
| 0.8670 | 25650 | 0.0 | - |
| 0.8687 | 25700 | 0.0 | - |
| 0.8703 | 25750 | 0.0 | - |
| 0.8720 | 25800 | 0.0 | - |
| 0.8737 | 25850 | 0.0 | - |
| 0.8754 | 25900 | 0.0 | - |
| 0.8771 | 25950 | 0.0 | - |
| 0.8788 | 26000 | 0.0 | 0.0742 |
| 0.8805 | 26050 | 0.0 | - |
| 0.8822 | 26100 | 0.0 | - |
| 0.8839 | 26150 | 0.0 | - |
| 0.8856 | 26200 | 0.0 | - |
| 0.8872 | 26250 | 0.0 | - |
| 0.8889 | 26300 | 0.0 | - |
| 0.8906 | 26350 | 0.0 | - |
| 0.8923 | 26400 | 0.0 | - |
| 0.8940 | 26450 | 0.0 | - |
| 0.8957 | 26500 | 0.0 | 0.0756 |
| 0.8974 | 26550 | 0.0 | - |
| 0.8991 | 26600 | 0.0 | - |
| 0.9008 | 26650 | 0.0 | - |
| 0.9025 | 26700 | 0.0 | - |
| 0.9041 | 26750 | 0.0 | - |
| 0.9058 | 26800 | 0.0 | - |
| 0.9075 | 26850 | 0.0 | - |
| 0.9092 | 26900 | 0.0 | - |
| 0.9109 | 26950 | 0.0 | - |
| 0.9126 | 27000 | 0.0 | 0.0751 |
| 0.9143 | 27050 | 0.0 | - |
| 0.9160 | 27100 | 0.0 | - |
| 0.9177 | 27150 | 0.0 | - |
| 0.9194 | 27200 | 0.0 | - |
| 0.9210 | 27250 | 0.0 | - |
| 0.9227 | 27300 | 0.0 | - |
| 0.9244 | 27350 | 0.0 | - |
| 0.9261 | 27400 | 0.0 | - |
| 0.9278 | 27450 | 0.0 | - |
| 0.9295 | 27500 | 0.0 | 0.075 |
| 0.9312 | 27550 | 0.0 | - |
| 0.9329 | 27600 | 0.0 | - |
| 0.9346 | 27650 | 0.0 | - |
| 0.9363 | 27700 | 0.0 | - |
| 0.9379 | 27750 | 0.0 | - |
| 0.9396 | 27800 | 0.0 | - |
| 0.9413 | 27850 | 0.0 | - |
| 0.9430 | 27900 | 0.0 | - |
| 0.9447 | 27950 | 0.0 | - |
| 0.9464 | 28000 | 0.0 | 0.0725 |
| 0.9481 | 28050 | 0.0 | - |
| 0.9498 | 28100 | 0.0 | - |
| 0.9515 | 28150 | 0.0 | - |
| 0.9532 | 28200 | 0.0 | - |
| 0.9548 | 28250 | 0.0 | - |
| 0.9565 | 28300 | 0.0 | - |
| 0.9582 | 28350 | 0.0 | - |
| 0.9599 | 28400 | 0.0 | - |
| 0.9616 | 28450 | 0.0 | - |
| 0.9633 | 28500 | 0.0 | 0.0761 |
| 0.9650 | 28550 | 0.0 | - |
| 0.9667 | 28600 | 0.0 | - |
| 0.9684 | 28650 | 0.0 | - |
| 0.9701 | 28700 | 0.0 | - |
| 0.9717 | 28750 | 0.0 | - |
| 0.9734 | 28800 | 0.0 | - |
| 0.9751 | 28850 | 0.0 | - |
| 0.9768 | 28900 | 0.0 | - |
| 0.9785 | 28950 | 0.0 | - |
| 0.9802 | 29000 | 0.0 | 0.0759 |
| 0.9819 | 29050 | 0.0 | - |
| 0.9836 | 29100 | 0.0 | - |
| 0.9853 | 29150 | 0.0 | - |
| 0.9870 | 29200 | 0.0 | - |
| 0.9886 | 29250 | 0.0 | - |
| 0.9903 | 29300 | 0.0 | - |
| 0.9920 | 29350 | 0.0 | - |
| 0.9937 | 29400 | 0.0 | - |
| 0.9954 | 29450 | 0.0 | - |
| 0.9971 | 29500 | 0.0 | 0.0761 |
| 0.9988 | 29550 | 0.0 | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.11
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.25.1
- PyTorch: 2.1.2
- Datasets: 2.15.0
- Tokenizers: 0.13.3
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
mini1013/master_cate_lh24
|
mini1013
|
text-classification
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | 1,732,123,516,000 | 2024-11-20T17:25:39 | 185 | 0 |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ๊ฑด์์ขํ๊ธฐ ๋ฌด์ฐ ์ฅ ์๋ฉ์ด ๋ธ ๊ฐ์ ์ฉ ํ์ฆ ์์ ์ฐ์ง ๋๋๋ฌด ์ธํธ 2 ๊ตฌ๋๋ฏธ๋ฅด2
- text: ์ขํ ์ข์ ์น๋ง ๋จ๋
๊ณต์ฉ ๊น์ด ํ์ฆ์ ์ฌ์ฐ๋ ๊ฐํ ์ฐ์ง ๊ฐ์ด 01.๋ชจ์ ๋๋ธ ๋ธ๋ผ์ผ ๋ ๋ ํ์ด์ ํต
- text: ๋ฐ์ ์ ๊ฐ์ด ์ขํ ์ฌ์ฐ๋ ๋ชฉ์ํ ์ฐ์ง ๋๋ณต ์ข์ ์น๋ง 5. ๋ธ๋ฃจ ์ปค๋ฒ ์ปฌ๋ฌ๋ชฐ
- text: ๊ฐ์ ์ฉ ์ขํ๊ธฐ ์ขํ ์์ ๋ธ ์ต์ ๊ฑด์ ์ข์๊ธฐ ๋ฑ๋ฐ์ด (์ต๊ฑด์+์ผ์ฐฝ+๊ฒ๋ฅด๋ง๋์) ๊ณจ๋ ์ํฝํํธ๋
- text: ์ฅ ์ขํ๋ฐฉ ์ฐ์ง ๊ฑด์ ์ขํ๊ธฐ ์จ์ด ์ฅ์ข์ ์ขํ ์ข์ ์ฅ๋ธ ์ฌ์ฑ ์ฐ๊ธฐํํฐ์จ๋์กฐ์ +108๊ฐ์์ด์ฃผ+4์ข
์ธํธ ์ค๋๋ณด
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9881376037959668
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'๋งค์ง์ํธ ์ฒ๋ชฉ๋์๊ธฐ ์ขํ๊ธฐ ๋งค์ง์ํธ'</li><li>'๋์์ ๊ฑฐ ํด์ถฉ๊ธฐํผ ์ขํ ๊ฐํ์ฝ์ฅ ํ์ฐ๋์ฅ 2๋ด ์ด์ฆ๋ฐ์ด'</li><li>'๊ฐ์ ์ฉ ์๋ชฉ ์ขํ๊ธฐ ์กฑ์๊ธฐ ํ์ก์ํ ์ฐ์ง ์ข์ ํ์ฆ 70 ๋์ด Wํฌํธ ์ฐํต E ์๋ฅด๋ฉ'</li></ul> |
| 0.0 | <ul><li>'์ ์ด์ ๊ฐ์ ์ฉ ์ข์๊ธฐ ์์ฐ๋ถ ์น์ง ์จ์ ํด๋ฉ ๋์ผ ์๋ ๋น๋ฐ ์ ์ด์ ๊ฐ์ ์ฉ์ข์๊ธฐ ๊ทธ๋ ์ด ๋ฐ์ผ๋ฆฌ๋ง์ผ'</li><li>'OK ์ํํธ ์ข์๋์ผ ์ข์๊ธฐ ์์ฐ๋ถ ๊ฐ์ ์ฉ ์ข์ 1_ํํฌ ๋ฉ๋์นผ์ '</li><li>'๋ฅํฐํ๋ฆฌ ๋ฒ๋ธ ๊ฐ์ ์ฉ ์ข์๊ธฐ ์ฅ ์น์ง ์์ฐ๋ถ ๋์ผ A.๊ณ ๊ธ ์ฒ์ฐ ์ฝ์ฅ 30ํฌ ์ฃผ์ํ์ฌ ๋ค๋๊ณ '</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9881 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh24")
# Run inference
preds = model("๋ฐ์ ์ ๊ฐ์ด ์ขํ ์ฌ์ฐ๋ ๋ชฉ์ํ ์ฐ์ง ๋๋ณต ์ข์ ์น๋ง 5. ๋ธ๋ฃจ ์ปค๋ฒ ์ปฌ๋ฌ๋ชฐ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 10.8 | 22 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0625 | 1 | 0.4245 | - |
| 3.125 | 50 | 0.0003 | - |
| 6.25 | 100 | 0.0 | - |
| 9.375 | 150 | 0.0 | - |
| 12.5 | 200 | 0.0 | - |
| 15.625 | 250 | 0.0 | - |
| 18.75 | 300 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
TalTechNLP/whisper-large-v3-et-en-ru.translate
|
TalTechNLP
|
automatic-speech-recognition
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"et",
"en",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,717,062,795,000 | 2024-06-18T15:39:49 | 30 | 1 |
---
language:
- et
- en
- ru
license: apache-2.0
pipeline_tag: automatic-speech-recognition
---
# Introduction
The is a Whisper large-v3 model finetuned to do Estonian-English and Estonian-Russian bidirectional speech translation.
You have to use the "transcribe" task and specify the target language ("et", "en" or "ru"). Source language doesn't have to be specified.
The model is trained on synthetic data (ASR data with machine translated transcripts) as well as some data scraped from the web (audio + subititles).
# Usage
The model come in three versions: native Whisper format, HuggingFace transformers format, and CTranslate2.
We recommend to use the CTranslate2 version, as it provised very fast performance.
Usage instruction (you need a computer with a GPU and Nvidia CUDA software stack):
* Install whisper-ctranslate2 (https://github.com/Softcatala/whisper-ctranslate2)
* Clone this model:
git clone https://huggingface.co/TalTechNLP/whisper-large-v3-et-en-ru.translate
* Translate an [Estonian wav file](audio_samples/esimene-stuudio-1min.wav) to English:
whisper-ctranslate2 --language en --task transcribe --model_directory whisper-large-v3-et-en-ru.translate/ct2 \
--vad_filter True --beam_size 5 --compute_type float32 whisper-large-v3-et-en-ru.translate/audio_samples/esimene-stuudio-1min.wav
Output:
Detected language 'English' with probability 1.000000
[00:01.200 --> 00:19.600] Good evening, dear viewers. The coalition parties, which changed their sharpness last week, have pulled down a lot of tone today, but peace has not yet been pulled.
[00:19.600 --> 00:45.220] Is the issue of supporting children really tearing up the government, or is it still a question of supporting children at all? The first studio starts.
[00:45.220 --> 00:55.420] And since the situation demands, the leaders of the two coalition party factions in the Riigikogu are in the studio, Mart Vรตrklaev from the Reform Party and Jaanus Karilaid from the Center Party.
[00:55.420 --> 00:55.900] Good evening.
[00:55.900 --> 00:56.700] Good evening.
[00:56.700 --> 01:00.060] Well, Mart Vรตrklaev, let's make it clear first, what is the latest knowledge?
* Translate [English wav file](audio_samples/bill_gates_1min.wav ) to to Estonian (this translation direction has much lower quality):
whisper-ctranslate2 --language et --task transcribe --model_directory whisper-large-v3-et-en-ru.translate/ct2 \
--vad_filter True --beam_size 5 --compute_type float32 whisper-large-v3-et-en-ru.translate/audio_samples/bill_gates_1min.wav
Output:
Detected language 'Estonian' with probability 1.000000
[00:00.000 --> 00:04.800] Ja sa tead, et see on omamoodi erinevate lรคhenemisviiside vรคljatรถรถtamine.
[00:04.980 --> 00:17.320] Ja seega vรตib juhtuda, et on olemas see รผks pรตhiline lรคhenemine, mida me tarkvaras nรคeme, et sa tead, et evolutsioon, tsitaat, avastas tsitaadi
[00:18.360 --> 00:20.920] ja ja รตnnestus kasutada.
[00:20.920 --> 00:24.340] Ma mรตtlen, et see on suurim ime. Inimene,
[00:24.340 --> 00:31.260] inimeste arutlusvรตime on nii fenomenaalne Jah, tรคiesti. Nii
[00:40.380 --> 00:46.180] kuid meie, inimesed, pole selle ai-versiooni veel vรคlja mรตelnud.
[00:46.420 --> 00:51.820] Ma kahtlustan, et seal on kindlasti parem algoritm, mida me pole avastanud.
[00:52.380 --> 00:54.080] Ah, see on praegu,
[00:54.080 --> 01:00.260] keskendutakse liiga palju sellele, et teeme asjad suuremaks ja kรตik รผritavad seda teha.
[01:00.400 --> 01:06.760] See tรคhendab, et vรตib olla tรตesti parem lahendus, alternatiivne lahendus, mis ootab leidmist.
[01:07.000 --> 01:17.100] Kuid seal pole lihtsalt piisavalt tรคhelepanu, sest inimesed kipuvad mรตtlema, et oh, see ei tรถรถta. Pรตhjus, miks ma seda vรคga tugevalt usun, on osaliselt see, et kui vaatate,
[01:17.100 --> 01:26.500] oh tegelikult, lรคhme tagasi Microsofti vรคga esimeses isiklikus arvutis, sest kui see esimest korda vรคlja tuli, oli see tรตesti vรคga pรตnev ja hรคmmastav.
[01:26.640 --> 01:34.140] Ja siis ah, igal aastal, teate, on parem arvuti ja vรคiksem arvuti, kiirem arvuti ja see muutub aina paremaks.
[01:34.200 --> 01:38.660] Nii et samamoodi, kui vaatame telefone vรตi teate rakette,
[01:38.660 --> 01:53.040] autod, esimene leiutis ei ole kunagi optimaalne lahendus. Alati on parem lahendus. Nii et ma arvan, et parem lahendus on lihtsalt see, et praegu on liiga palju rรตhku suuremal, seda parem.
[01:53.420 --> 01:56.540] Ma arvan, et matemaatika juhendamise puhul vรตib aga
[01:57.260 --> 01:59.980] vรตib olla vea negatiivne kรผlg
|
[
"TRANSLATION"
] |
Non_BioNLP
|
monsoon-nlp/tamillion
|
monsoon-nlp
|
feature-extraction
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"feature-extraction",
"ta",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-09-20T22:21:35 | 45 | 2 |
---
language: ta
---
# TaMillion
This is the second version of a Tamil language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).
Tokenization and pre-training CoLab: https://colab.research.google.com/drive/1Pwia5HJIb6Ad4Hvbx5f-IjND-vCaJzSE?usp=sharing
V1: small model with GPU; 190,000 steps;
V2 (current): base model with TPU and larger corpus; 224,000 steps
## Classification
Sudalai Rajkumar's Tamil-NLP page contains classification and regression tasks:
https://www.kaggle.com/sudalairajkumar/tamil-nlp
Notebook: https://colab.research.google.com/drive/1_rW9HZb6G87-5DraxHvhPOzGmSMUc67_?usp=sharin
The model outperformed mBERT on news classification:
(Random: 16.7%, mBERT: 53.0%, TaMillion: 75.1%)
The model slightly outperformed mBERT on movie reviews:
(RMSE - mBERT: 0.657, TaMillion: 0.626)
Equivalent accuracy on the Tirukkural topic task.
## Question Answering
I didn't find a Tamil-language question answering dataset, but this model could be finetuned
to train a QA model. See Hindi and Bengali examples here: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar
## Corpus
Trained on
IndicCorp Tamil (11GB) https://indicnlp.ai4bharat.org/corpora/
and 1 October 2020 dump of https://ta.wikipedia.org (482MB)
## Vocabulary
Included as vocab.txt in the upload
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
dlowl/dolly-v2-12b-endpoint
|
dlowl
|
text-generation
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,688,899,329,000 | 2023-07-09T10:52:57 | 11 | 0 |
---
datasets:
- databricks/databricks-dolly-15k
language:
- en
library_name: transformers
license: mit
inference: false
duplicated_from: databricks/dolly-v2-12b
---
# dolly-v2-12b Model Card
## Summary
Databricks' `dolly-v2-12b`, an instruction-following large language model trained on the Databricks machine learning platform
that is licensed for commercial use. Based on `pythia-12b`, Dolly is trained on ~15k instruction/response fine tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-12b` is not a state-of-the-art model, but does exhibit surprisingly
high quality instruction following behavior not characteristic of the foundation model on which it is based.
Dolly v2 is also available in these smaller models sizes:
* [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter based on `pythia-6.9b`
* [dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b), a 2.8 billion parameter based on `pythia-2.8b`
Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on
running inference for various GPU configurations.
**Owner**: Databricks, Inc.
## Model Overview
`dolly-v2-12b` is a 12 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI's](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
In a Databricks notebook you could run:
```python
%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### LangChain Usage
To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
and the default for the pipeline is to only return the new text.
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16,
trust_remote_code=True, device_map="auto", return_full_text=True)
```
You can create a prompt that either has only an instruction or has an instruction with context:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline
# template for an instrution with no input
prompt = PromptTemplate(
input_variables=["instruction"],
template="{instruction}")
# template for an instruction with input
prompt_with_context = PromptTemplate(
input_variables=["instruction", "context"],
template="{instruction}\n\nInput:\n{context}")
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)
llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```
Example predicting using a simple instruction:
```python
print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
```
Example predicting using an instruction with context:
```python
context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""
print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```
## Known Limitations
### Performance Limitations
**`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpuses.
The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.
### Dataset Limitations
Like all language models, `dolly-v2-12b` reflects the content and limitations of its training corpuses.
- **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-12b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large.
Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that
maximize the potential of all individuals and organizations.
### Benchmark Metrics
Below you'll find various models benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-12b` is not state of the art,
and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine tuning datasets,
but a robust statement as to the sources of these variations requires further study.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean |
| --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------|
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# Happy Hacking!
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] |
Non_BioNLP
|
DAILAB-bitesnail/distilbert-base-uncased-finetuned-emotion
|
DAILAB-bitesnail
|
text-classification
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,718,610,242,000 | 2024-06-17T07:53:47 | 6 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.8885
name: Accuracy
- type: f1
value: 0.8814348986502284
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3845
- Accuracy: 0.8885
- F1: 0.8814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5905 | 0.799 | 0.7625 |
| No log | 2.0 | 250 | 0.3845 | 0.8885 | 0.8814 |
### Framework versions
- Transformers 4.41.2
- Pytorch 1.13.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
nbroad/mt5-base-qgen
|
nbroad
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"en",
"hi",
"de",
"ar",
"bn",
"fi",
"ja",
"zh",
"id",
"sw",
"ta",
"gr",
"ru",
"es",
"th",
"tr",
"vi",
"multilingual",
"dataset:squad_v2",
"dataset:tydiqa",
"dataset:mlqa",
"dataset:xquad",
"dataset:germanquad",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-07-25T00:21:02 | 39 | 4 |
---
datasets:
- squad_v2
- tydiqa
- mlqa
- xquad
- germanquad
language:
- en
- hi
- de
- ar
- bn
- fi
- ja
- zh
- id
- sw
- ta
- gr
- ru
- es
- th
- tr
- vi
- multilingual
widget:
- text: 'Hugging Face has seen rapid growth in its popularity since the get-go. It
is definitely doing the right things to attract more and more people to its platform,
some of which are on the following lines: Community driven approach through large
open source repositories along with paid services. Helps to build a network of
like-minded people passionate about open source. Attractive price point. The subscription-based
features, e.g.: Inference based API, starts at a price of $9/month.'
example_title: English
- text: 'A un aรฑo y tres dรญas de que el balรณn ruede en el Al Bayt Stadium inaugurando
el Mundial 2022, ya se han dibujado los primeros bocetos de la prรณxima Copa del
Mundo.13 selecciones estรกn colocadas en el mapa con la etiqueta de clasificadas
y tienen asegurado pisar los verdes de Qatar en la primera fase final otoรฑal.
Serbia, Dinamarca, Espaรฑa, Paรญses Bajos, Suiza, Croacia, Francia, Inglaterra,
Bรฉlgica, Alemania, Brasil, Argentina y Qatar, como anfitriona, entrarรกn en el
sorteo del 1 de abril de 2022 en Doha en el que 32 paรญsses serรกn repartidos en
sus respectivos grupos. '
example_title: Spanish
---
# Multi-lingual Question Generating Model (mt5-base)
Give the model a passage and it will generate a question about the passage.
## Trained on the following datasets:
- [SQuAD (English)](https://rajpurkar.github.io/SQuAD-explorer/)
- [TyDiQA-GoldP (Arabic, Bengali, Finnish, Japanese, Indonesian, Kiswahili, Korean, Russian, Telugu, Thai)](https://github.com/google-research-datasets/tydiqa)
- [MLQA (Arabic, Chinese, English, German, Hindi, Spanish, Vietnames)](https://github.com/facebookresearch/MLQA)
- [XQuAD (Arabic, Chinese, German, Greek, Hindi, Russian, Spanish, Thai, Turkish Vietnamese)](https://github.com/deepmind/xquad)
- [GermanQuAD (German)](https://huggingface.co/datasets/deepset/germanquad)
- [Persian QA (Persian)](https://www.kaggle.com/sajjadayobi360/persianqa)
- [Bengali QA (Bengali)](https://www.kaggle.com/mayeesha/bengali-question-answering-dataset)
- [chaii (Hindi, Tamil)](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering/data)
## Training details
I used [flax summarization script](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) and a TPU v3-8. Summarization expects a text column and a summary column. For question generation training, use the context column instead of text column and question instead of summary column.
There is no guarantee that it will produce a question in the language of the passage, but it usually does. Lower resource languages will likely have lower quality questions.
## Using the model
#### PyTorch version
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-base-qgen")
model = AutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-base-qgen")
text = "Hugging Face has seen rapid growth in its \
popularity since the get-go. It is definitely doing\
the right things to attract more and more people to \
its platform, some of which are on the following lines:\
Community driven approach through large open source repositories \
along with paid services. Helps to build a network of like-minded\
people passionate about open source. \
Attractive price point. The subscription-based features, e.g.: \
Inference based API, starts at a price of $9/month.\
"
inputs = tokenizer(text, return_tensors="pt")
output = model.generate(**inputs, max_length=40)
tokenizer.decode(output[0], skip_special_tokens=True)
# What is Hugging Face's price point?
```
Model trained on Cloud TPUs from Google's TPU Research Cloud (TRC)
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
davelotito/donut_experiment_bayesian_trial_12
|
davelotito
|
image-text-to-text
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,719,415,963,000 | 2024-06-26T16:09:01 | 4 | 0 |
---
base_model: naver-clova-ix/donut-base
license: mit
metrics:
- bleu
- wer
tags:
- generated_from_trainer
model-index:
- name: donut_experiment_bayesian_trial_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_12
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5083
- Bleu: 0.0675
- Precisions: [0.8421052631578947, 0.7822966507177034, 0.7423822714681441, 0.7006578947368421]
- Brevity Penalty: 0.0883
- Length Ratio: 0.2918
- Translation Length: 475
- Reference Length: 1628
- Cer: 0.7537
- Wer: 0.8211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2643161326759464e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 0.0251 | 1.0 | 253 | 0.4936 | 0.0660 | [0.8375527426160337, 0.7673860911270983, 0.7277777777777777, 0.6897689768976898] | 0.0876 | 0.2912 | 474 | 1628 | 0.7600 | 0.8274 |
| 0.0144 | 2.0 | 506 | 0.4987 | 0.0683 | [0.8445378151260504, 0.7852028639618138, 0.7458563535911602, 0.7049180327868853] | 0.0889 | 0.2924 | 476 | 1628 | 0.7515 | 0.8189 |
| 0.0089 | 3.0 | 759 | 0.5083 | 0.0675 | [0.8421052631578947, 0.7822966507177034, 0.7423822714681441, 0.7006578947368421] | 0.0883 | 0.2918 | 475 | 1628 | 0.7537 | 0.8211 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1
|
[
"TRANSLATION"
] |
Non_BioNLP
|
pinzhenchen/sft-lora-bg-pythia-12b
|
pinzhenchen
| null |
[
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 1,709,682,830,000 | 2024-03-05T23:53:54 | 0 | 0 |
---
language:
- bg
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
jcblaise/distilbert-tagalog-base-cased
|
jcblaise
| null |
[
"transformers",
"pytorch",
"jax",
"distilbert",
"bert",
"tagalog",
"filipino",
"tl",
"arxiv:2005.02068",
"arxiv:1907.00409",
"license:gpl-3.0",
"region:us"
] | 1,646,263,745,000 | 2021-11-12T03:20:40 | 330 | 1 |
---
language: tl
license: gpl-3.0
tags:
- distilbert
- bert
- tagalog
- filipino
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# DistilBERT Tagalog Base Cased
Tagalog version of DistilBERT, distilled from [`bert-tagalog-base-cased`](https://huggingface.co/jcblaise/bert-tagalog-base-cased). This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
chenyp/comfyui
|
chenyp
| null |
[
"region:us"
] | 1,699,766,218,000 | 2023-11-14T14:53:22 | 0 | 0 |
---
{}
---
ComfyUI
ๅฎ่ฃ
1 ๅฎ่ฃ
Python 3.10.6 ไธ pip
่ฎฟ้ฎPython3.10.6ไธ่ฝฝ้กต้ข๏ผๆ้กต้ขๆๅฐๅบ๏ผๆพๅฐใWindows installer (64-bit)ใ็นๅปไธ่ฝฝ
ๅฎ่ฃ
ๆฏๆณจๆ๏ผๅฐ่ฟไธๆญฅ๏ผ้่ฆๅฆไธๅพ่ฟๆ ทๅพ้ Add Python to PATH
2 ๅฎ่ฃ
Git
่ฎฟ้ฎGitไธ่ฝฝ้กต
็นๅปใDownload for Windowsใ๏ผใ64-bit Git for Windows Setupใ็นๅปไธ่ฝฝ
ไธ่ทฏไธไธๆญฅๅฎ่ฃ
ๅฝไปค่ก่ฟ่กgit --version,่ฟๅgit version 2.XX.0.windows.1ๅฐฑๆฏๅฎ่ฃ
ๆๅไบใ
3 ๅฎ่ฃ
CUDA
ๅฝไปค่ก่ฟ่กnvidia-smi,็ไธ่ชๅทฑๆพๅกๆฏๆ็ CUDA็ๆฌ
12.2 ๅฐฑๆฏ่ฝไธ12.2.X็็ๆฌ๏ผ่ฆๅ้ซๅฐฑๅ็บงๆพๅก้ฉฑๅจ
่ฟ่ฆcudnn
4 Pytorchๅฎ่ฃ
ๆๅผpytorchๅฎ็ฝๆพๅฐๆๆฐ็ๆฌ๏ผๅคๅถๅฐๅฝไปค่กๅฎ่ฃ
ใ
5 ComfyUIๅฎ่ฃ
ๆพไธไธช็ฉบ้ดๅ
่ถณ็็ฎๅฝ๏ผๅจ่ตๆบ็ฎก็ๅจ๏ผๅฐๅๆ ้ๆฒCMD๏ผๆฒๅ่ฝฆ๏ผๅฏๅจๅฝไปคๆ็คบ่ก็ชๅฃ๏ผ่พๅ
ฅไปฅไธๅฝไปค
Git clone https://github.com/comfyanonymous/ComfyUI.git
ๅฏๅจๅๅๅค(ไธบไบๅๅฐๅฏๅจๆถๆฅ้๏ผๅ
ๆๅๅๅฅฝไธ้ขๅๅค)
6 ๅฎ่ฃ
ๆไปถ
6.1 ComfyUI Managerๆไปถ
ๅๅพ *\ComfyUI\custom_nodes ็ฎๅฝๅจๆไปถ่ทฏๅพๅค่พๅ
ฅcmd ๆไธๅ่ฝฆไผๆๅผcmdๅฝไปค่ก
้่ฟ่ฏฅๅฝไปคๅฎ่ฃ
๏ผgit clone https://github.com/ltdrdata/ComfyUI-Manager.git ไธๅพ่กจ็คบๅฎ่ฃ
ๆๅ๏ผ้ๅฏComfyUIใ
6.2 ComfyUIๆฑๅๆไปถAIGODLIKE-ComfyUI-Translation
6.3 ComfyUI Efficiencyๆไปถๅๅฐ่็น
6.4 ComfyUI-Custom-Scripts ๆไปถๅฐๅทฅไฝๆตๅฏผๆๅพ็
ๆไฝ๏ผ
1 ไป่ๆ็ฏๅขๆๅผ
ๆๅผ E:\ComfyUI cmd
ๅทฒ็ปๅฎ่ฃ
ไบStable diffuisionๅนถไฝฟ็จ่ชๅทฑ็ python venv๏ผๅๅฏไปฅไฝฟ็จ่ฏฅ venv ๆฅ่ฟ่ก ComfyUIใๆจๅฏไปฅๆๅผ่ชๅทฑๅๆฌข็็ป็ซฏๅนถๆฟๆดปๅฎ๏ผ
ไฝฟ็จ Powershell๏ผ"path_to_other_sd_gui\venv\Scripts\Activate.ps1"
ไฝฟ็จ cmd.exe๏ผ"path_to_other_sd_gui\venv\Scripts\activate.bat"
ๆๅผpython main.py
2 ๆไฝ็ฉบๆ ผๆๅจ็ปๅธ
ๆ็คบ่ฏ
1ๆฌๅทๅๆ้
ๆฌๅทๅ
ๆฌ๏ผ๏ผ{} []ไธ็ฑป๏ผๅ
ทไฝไฝ็จ
1.1๏ผ๏ผ๏ผๆ ผๅผ๏ผprompt:X๏ผ็นไธญๆ็คบ่ฏๅๆๅฟซๆท้ฎctrl+down/up๏ผๅฏๅฏนๅไธช่ฏๅขๅ ๆ้ๆฏไพ๏ผๆ้ๅผๆฏๆฌก+/-0.1ๅ๏ผ่ถ
่ฟ3ๅ็ไธๅบๆณ่ฆ็ๆๆ
1.2 [ ] ๏ผๆ ๅฟซๆท้ฎ๏ผๆ้ๅผ-0.75
1.3{ }๏ผๆ ๅฟซๆท้ฎ๏ผๆ้ๅผ+0.5
2 [ ]็ๆงๅถ่ฝๅ
ๆ ผๅผ[prompt:X],
โ๏ผโ ไปฃ่กจๅ
็ญ็๏ผๅฐ่พพXๆๅผๅง ::ไปฃ่กจๅผๅงๆถๅ
ๆง่ก๏ผๅฐ่พพXๅ็ปๆ
โ X โ ๆXๆงๅถ่ฟญไปฃๆญฅๆฐ๏ผ 1ไปฅๅ
ๆ็
ง่ฟญไปฃๆญฅXๅๆฐๆงๅถ๏ผ่ถ
่ฟ1ๆ็ๆฏๆๆญฅๆฐๆงๅถ
2.1 ๆงๅถ็ๆๆถ้ด
[red: 0.7]ๆ็ๆฏๆ็
ง20ๆญฅ่ฟญไปฃ่ฎก็ฎ๏ผๅฐ่พพ14ๆญฅๅ๏ผ0.7X20ๆญฅ๏ผๆๅผๅง่ท่ฟไธช่ฏ
[red::0.7]ๆ็ๆฏๆ็
ง20ๆญฅ่ฟญไปฃ่ฎก็ฎ๏ผๅ14ๆญฅ๏ผ0.7X20ๆญฅ๏ผ่ท่ฟไธช่ฏ๏ผ14ๆญฅๅๅๆญข
ๆณ่ฆ่ฑๆต็น็ผ็ณๅคด๏ผ[stones:flowers:0.7] 70%้ถๆฎตstones็ๆ๏ผ็ถๅ30% flowers็ๆ
2.2 ไบคๆฟๆงๅถ๏ผ[red|blue] hair ไบคๆฟ้ๆ ท๏ผ็บข่็ธ้ด็ๅคดๅ
3ๆ็คบ่ฏๆจ่ๆ ผๅผ
3.1 ่ตทๆ่ฏ็ป่ดจๅ็ป้ฃ
็ป่ดจ่ฏ [masterpiece:1.2],best quality,highres,extremely detail CG,perfect lighting,8k wallpaper,
็ๅฎ็ณป๏ผphotograph,photorealistic
ๆ็ป้ฃ:illustration,painting,paintbrush
ไบๆฌกๅ
:anime,comic,game CG
ไธ็ปดๅบๆฏ๏ผ3D,C4D,render,unreal engine,octane render
็ป้ฃ่ฏ
Cyberpunk่ตๅๆๅ
8bit/16BIT PIXEL ๅ็ด ้ฃ
Studio ghibi ๅฎซๅด้ช้ฃๆ ผ
Pixel style ็ฎๅ
ๆฏ
Chinese ink style ๆฐดๅขจ็ป
4่ดๅๆ็คบ่ฏ
blur, haze, deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers, deformed, distorted, disfigured, poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation
็ปไปถ
1 Clipๆๆฌ็ผ่พๅจ
่ฝฌๅๆๆฌไธบ่พๅ
ฅ๏ผ่ฟๆ ทๅฐฑ่ฝๅฐprompt่พๅ
ฅๆก่ฝฌๅไธบ่ฟๆฅๅจ๏ผๅ
ฑ็จๆ็คบ่ฏ
ๅทฆไพงไธบ็ฑๆๆฌๆๅจๅผๅบ็ๅธฆedittext็primitive็ปไปถ
2 ConditioningSetArea
่ฎพ็ฝฎๅพๅ็ปๅถๅบๅ่ฟๆๆๅจไฝ็ฝฎๅๅผบๅบฆ
ๆไธๅ็ๆ็คบ่ฏๅบๅ็ปๅ็ปๅถ
ๅฐๆๅทง๏ผ
็จณๅฎๆฉๆฃๅจ็ๆๅ่พจ็ๆฅ่ฟ 512x512 ็ๆนๅฝขๅพๅๆถๅพๅไธ่ดใไฝๆฏ๏ผๅฆๆๆไปฌๆณ็ๆ็บตๆจชๆฏไธบ 16๏ผ9 ็ๅพๅๆไนๅ๏ผ ่ฎฉๆไปฌ็ๆไธไธช 16๏ผ9 ็ๅพๅ๏ผๅ
ถไธญๅ
ๅซไธไธชๅ็็ไธปไฝใๅฆๆๆญฃๅธธ็ๆ๏ผๆๅ็ไผๅพไฝ๏ผๅ่ขๅจๅพๅไธไธ่ช็ถๅฐไผธๅฑ๏ผไปฅๅๅ
ถไปไธ่ดๆง้ฎ้ขใ
2 Controlnet
AnimateDiff
|
[
"TRANSLATION"
] |
Non_BioNLP
|
alexandreteles/bonito-v1-gguf
|
alexandreteles
|
text2text-generation
|
[
"transformers",
"gguf",
"mistral",
"text-generation",
"data generation",
"text2text-generation",
"en",
"dataset:BatsResearch/ctga-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,709,011,585,000 | 2024-03-16T22:07:45 | 204 | 2 |
---
datasets:
- BatsResearch/ctga-v1
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text2text-generation
tags:
- data generation
---
# Bonito-v1 GGUF
You can find the original model at [BatsResearch/bonito-v1](https://huggingface.co/BatsResearch/bonito-v1)
## Variations
| Name | Quant method | Bits |
| ---- | ---- | ---- |
| [bonito-v1_iq4_nl.gguf](https://huggingface.co/alexandreteles/bonito-v1-gguf/blob/main/bonito-v1_iq4_nl.gguf) | IQ4_NL | 4 | 4.16 GB|
| [bonito-v1_q4_k_m.gguf](https://huggingface.co/alexandreteles/bonito-v1-gguf/blob/main/bonito-v1_q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB|
| [bonito-v1_q5_k_2.gguf](https://huggingface.co/alexandreteles/bonito-v1-gguf/blob/main/bonito-v1_q5_k_s.gguf) | Q5_K_S | 5 | 5.00 GB|
| [bonito-v1_q5_k_m.gguf](https://huggingface.co/alexandreteles/bonito-v1-gguf/blob/main/bonito-v1_q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB|
| [bonito-v1_q6_k.gguf](https://huggingface.co/alexandreteles/bonito-v1-gguf/blob/main/bonito-v1_q6_k.gguf) | Q6_K | 6 | 5.94 GB|
| [bonito-v1_q8_0.gguf](https://huggingface.co/alexandreteles/bonito-v1-gguf/blob/main/bonito-v1_q8_0.gguf) | Q8_0 | 8 | 7.70 GB|
| [bonito-v1_f16.gguf](https://huggingface.co/alexandreteles/bonito-v1-gguf/blob/main/bonito-v1_f16.gguf) | FP16 | 16 | 14.5 GB|
## Model Card for bonito
<!-- Provide a quick summary of what the model is/does. -->
Bonito is an open-source model for conditional task generation: the task of converting unannotated text into task-specific training datasets for instruction tuning.

## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Bonito can be used to create synthetic instruction tuning datasets to adapt large language models on users' specialized, private data.
In our [paper](https://github.com/BatsResearch/bonito), we show that Bonito can be used to adapt both pretrained and instruction tuned models to tasks without any annotations.
- **Developed by:** Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach
- **Model type:** MistralForCausalLM
- **Language(s) (NLP):** English
- **License:** TBD
- **Finetuned from model:** `mistralai/Mistral-7B-v0.1`
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/BatsResearch/bonito](https://github.com/BatsResearch/bonito)
- **Paper:** Arxiv link
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
To easily generate synthetic instruction tuning datasets, we recommend using the [bonito](https://github.com/BatsResearch/bonito) package built using the `transformers` and the `vllm` libraries.
```python
from bonito import Bonito, SamplingParams
from datasets import load_dataset
# Initialize the Bonito model
bonito = Bonito()
# load dataaset with unannotated text
unannotated_text = load_dataset(
"BatsResearch/bonito-experiment",
"unannotated_contract_nli"
)["train"].select(range(10))
# Generate synthetic instruction tuning dataset
sampling_params = SamplingParams(max_tokens=256, top_p=0.95, temperature=0.5, n=1)
synthetic_dataset = bonito.generate_tasks(
unannotated_text,
context_col="input",
task_type="nli",
sampling_params=sampling_params
)
```
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Our model is trained to generate the following task types: summarization, sentiment analysis, multiple-choice question answering, extractive question answering, topic classification, natural language inference, question generation, text generation, question answering without choices, paraphrase identification, sentence completion, yes-no question answering, word sense disambiguation, paraphrase generation, textual entailment, and
coreference resolution.
The model might not produce accurate synthetic tasks beyond these task types.
|
[
"COREFERENCE_RESOLUTION",
"QUESTION_ANSWERING",
"TEXTUAL_ENTAILMENT",
"SUMMARIZATION"
] |
Non_BioNLP
|
Prikshit7766/t5-small-samsum
|
Prikshit7766
|
summarization
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"dataset:samsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,711,356,646,000 | 2024-03-25T09:04:20 | 18 | 0 |
---
base_model: google-t5/t5-small
datasets:
- samsum
license: apache-2.0
pipeline_tag: summarization
tags:
- generated_from_trainer
model-index:
- name: t5-small-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-samsum
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6507
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 64
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 460 | 1.9598 |
| 2.4944 | 2.0 | 921 | 1.8661 |
| 2.0902 | 3.0 | 1381 | 1.8210 |
| 2.0173 | 4.0 | 1842 | 1.8009 |
| 1.9623 | 5.0 | 2302 | 1.7787 |
| 1.9331 | 6.0 | 2763 | 1.7637 |
| 1.903 | 7.0 | 3223 | 1.7514 |
| 1.881 | 8.0 | 3684 | 1.7390 |
| 1.8648 | 9.0 | 4144 | 1.7350 |
| 1.8463 | 10.0 | 4605 | 1.7242 |
| 1.8302 | 11.0 | 5065 | 1.7189 |
| 1.8119 | 12.0 | 5526 | 1.7098 |
| 1.8119 | 13.0 | 5986 | 1.7076 |
| 1.8007 | 14.0 | 6447 | 1.7057 |
| 1.7903 | 15.0 | 6907 | 1.6984 |
| 1.778 | 16.0 | 7368 | 1.6944 |
| 1.7639 | 17.0 | 7828 | 1.6907 |
| 1.7596 | 18.0 | 8289 | 1.6896 |
| 1.746 | 19.0 | 8749 | 1.6861 |
| 1.7342 | 20.0 | 9210 | 1.6860 |
| 1.732 | 21.0 | 9670 | 1.6808 |
| 1.719 | 22.0 | 10131 | 1.6760 |
| 1.7152 | 23.0 | 10591 | 1.6778 |
| 1.7082 | 24.0 | 11052 | 1.6762 |
| 1.7003 | 25.0 | 11512 | 1.6707 |
| 1.7003 | 26.0 | 11973 | 1.6722 |
| 1.6952 | 27.0 | 12433 | 1.6701 |
| 1.6848 | 28.0 | 12894 | 1.6671 |
| 1.6814 | 29.0 | 13354 | 1.6668 |
| 1.6743 | 30.0 | 13815 | 1.6637 |
| 1.6742 | 31.0 | 14275 | 1.6640 |
| 1.6652 | 32.0 | 14736 | 1.6624 |
| 1.6582 | 33.0 | 15196 | 1.6606 |
| 1.6575 | 34.0 | 15657 | 1.6605 |
| 1.6499 | 35.0 | 16117 | 1.6617 |
| 1.6455 | 36.0 | 16578 | 1.6601 |
| 1.6506 | 37.0 | 17038 | 1.6594 |
| 1.6506 | 38.0 | 17499 | 1.6556 |
| 1.637 | 39.0 | 17959 | 1.6570 |
| 1.6374 | 40.0 | 18420 | 1.6558 |
| 1.6303 | 41.0 | 18880 | 1.6557 |
| 1.6311 | 42.0 | 19341 | 1.6553 |
| 1.6234 | 43.0 | 19801 | 1.6570 |
| 1.619 | 44.0 | 20262 | 1.6537 |
| 1.6214 | 45.0 | 20722 | 1.6529 |
| 1.6183 | 46.0 | 21183 | 1.6542 |
| 1.609 | 47.0 | 21643 | 1.6543 |
| 1.6159 | 48.0 | 22104 | 1.6530 |
| 1.6101 | 49.0 | 22564 | 1.6524 |
| 1.6083 | 50.0 | 23025 | 1.6515 |
| 1.6083 | 51.0 | 23485 | 1.6528 |
| 1.605 | 52.0 | 23946 | 1.6526 |
| 1.6011 | 53.0 | 24406 | 1.6515 |
| 1.6028 | 54.0 | 24867 | 1.6517 |
| 1.6015 | 55.0 | 25327 | 1.6512 |
| 1.601 | 56.0 | 25788 | 1.6504 |
| 1.6007 | 57.0 | 26248 | 1.6513 |
| 1.5948 | 58.0 | 26709 | 1.6511 |
| 1.5973 | 59.0 | 27169 | 1.6515 |
| 1.5929 | 60.0 | 27630 | 1.6514 |
| 1.5955 | 61.0 | 28090 | 1.6507 |
| 1.5931 | 62.0 | 28551 | 1.6507 |
| 1.5939 | 63.0 | 29011 | 1.6507 |
| 1.5939 | 63.93 | 29440 | 1.6507 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
sgugger/bert-finetuned-mrpc
|
sgugger
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-07-11T14:47:28 | 27 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.8602941176470589
name: Accuracy
- type: f1
value: 0.9032258064516129
name: F1
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: glue
type: glue
config: mrpc
split: validation
metrics:
- type: accuracy
value: 0.8602941176470589
name: Accuracy
verified: true
- type: precision
value: 0.8580645161290322
name: Precision
verified: true
- type: recall
value: 0.953405017921147
name: Recall
verified: true
- type: auc
value: 0.9257731099441527
name: AUC
verified: true
- type: f1
value: 0.9032258064516129
name: F1
verified: true
- type: loss
value: 0.5150377154350281
name: loss
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5152
- Accuracy: 0.8603
- F1: 0.9032
- Combined Score: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| No log | 1.0 | 230 | 0.3668 | 0.8431 | 0.8881 | 0.8656 |
| No log | 2.0 | 460 | 0.3751 | 0.8578 | 0.9017 | 0.8798 |
| 0.4264 | 3.0 | 690 | 0.5152 | 0.8603 | 0.9032 | 0.8818 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.10.3.dev0
- Tokenizers 0.10.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli
|
alan-turing-institute
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"mt5",
"text2text-generation",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2010.11934",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-05-16T11:12:48 | 1,255 | 13 |
---
datasets:
- multi_nli
- xnli
language:
- multilingual
- en
- fr
- es
- de
- el
- bg
- ru
- tr
- ar
- vi
- th
- zh
- hi
- sw
- ur
license: apache-2.0
metrics:
- xnli
tags:
- pytorch
---
# mt5-large-finetuned-mnli-xtreme-xnli
## Model Description
This model takes a pretrained large [multilingual-t5](https://github.com/google-research/multilingual-t5) (also available from [models](https://huggingface.co/google/mt5-large)) and fine-tunes it on English MNLI and the [xtreme_xnli](https://www.tensorflow.org/datasets/catalog/xtreme_xnli) training set. It is intended to be used for zero-shot text classification, inspired by [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli).
## Intended Use
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on English MNLI and the [xtreme_xnli](https://www.tensorflow.org/datasets/catalog/xtreme_xnli) training set, a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- Arabic
- Bulgarian
- Chinese
- English
- French
- German
- Greek
- Hindi
- Russian
- Spanish
- Swahili
- Thai
- Turkish
- Urdu
- Vietnamese
As per recommendations in [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli), for English-only classification, you might want to check out:
- [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli)
- [a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
### Zero-shot example:
The model retains its text-to-text characteristic after fine-tuning. This means that our expected outputs will be text. During fine-tuning, the model learns to respond to the NLI task with a series of single token responses that map to entailment, neutral, or contradiction. The NLI task is indicated with a fixed prefix, "xnli:".
Below is an example, using PyTorch, of the model's use in a similar fashion to the `zero-shot-classification` pipeline. We use the logits from the LM output at the first token to represent confidence.
```python
from torch.nn.functional import softmax
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_name = "alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
model.eval()
sequence_to_classify = "ยฟA quiรฉn vas a votar en 2020?"
candidate_labels = ["Europa", "salud pรบblica", "polรญtica"]
hypothesis_template = "Este ejemplo es {}."
ENTAILS_LABEL = "โ0"
NEUTRAL_LABEL = "โ1"
CONTRADICTS_LABEL = "โ2"
label_inds = tokenizer.convert_tokens_to_ids(
[ENTAILS_LABEL, NEUTRAL_LABEL, CONTRADICTS_LABEL])
def process_nli(premise: str, hypothesis: str):
""" process to required xnli format with task prefix """
return "".join(['xnli: premise: ', premise, ' hypothesis: ', hypothesis])
# construct sequence of premise, hypothesis pairs
pairs = [(sequence_to_classify, hypothesis_template.format(label)) for label in
candidate_labels]
# format for mt5 xnli task
seqs = [process_nli(premise=premise, hypothesis=hypothesis) for
premise, hypothesis in pairs]
print(seqs)
# ['xnli: premise: ยฟA quiรฉn vas a votar en 2020? hypothesis: Este ejemplo es Europa.',
# 'xnli: premise: ยฟA quiรฉn vas a votar en 2020? hypothesis: Este ejemplo es salud pรบblica.',
# 'xnli: premise: ยฟA quiรฉn vas a votar en 2020? hypothesis: Este ejemplo es polรญtica.']
inputs = tokenizer.batch_encode_plus(seqs, return_tensors="pt", padding=True)
out = model.generate(**inputs, output_scores=True, return_dict_in_generate=True,
num_beams=1)
# sanity check that our sequences are expected length (1 + start token + end token = 3)
for i, seq in enumerate(out.sequences):
assert len(
seq) == 3, f"generated sequence {i} not of expected length, 3." \\\\
f" Actual length: {len(seq)}"
# get the scores for our only token of interest
# we'll now treat these like the output logits of a `*ForSequenceClassification` model
scores = out.scores[0]
# scores has a size of the model's vocab.
# However, for this task we have a fixed set of labels
# sanity check that these labels are always the top 3 scoring
for i, sequence_scores in enumerate(scores):
top_scores = sequence_scores.argsort()[-3:]
assert set(top_scores.tolist()) == set(label_inds), \\\\
f"top scoring tokens are not expected for this task." \\\\
f" Expected: {label_inds}. Got: {top_scores.tolist()}."
# cut down scores to our task labels
scores = scores[:, label_inds]
print(scores)
# tensor([[-2.5697, 1.0618, 0.2088],
# [-5.4492, -2.1805, -0.1473],
# [ 2.2973, 3.7595, -0.1769]])
# new indices of entailment and contradiction in scores
entailment_ind = 0
contradiction_ind = 2
# we can show, per item, the entailment vs contradiction probas
entail_vs_contra_scores = scores[:, [entailment_ind, contradiction_ind]]
entail_vs_contra_probas = softmax(entail_vs_contra_scores, dim=1)
print(entail_vs_contra_probas)
# tensor([[0.0585, 0.9415],
# [0.0050, 0.9950],
# [0.9223, 0.0777]])
# or we can show probas similar to `ZeroShotClassificationPipeline`
# this gives a zero-shot classification style output across labels
entail_scores = scores[:, entailment_ind]
entail_probas = softmax(entail_scores, dim=0)
print(entail_probas)
# tensor([7.6341e-03, 4.2873e-04, 9.9194e-01])
print(dict(zip(candidate_labels, entail_probas.tolist())))
# {'Europa': 0.007634134963154793,
# 'salud pรบblica': 0.0004287279152777046,
# 'polรญtica': 0.9919371604919434}
```
Unfortunately, the `generate` function for the TF equivalent model doesn't exactly mirror the PyTorch version so the above code won't directly transfer.
The model is currently not compatible with the existing `zero-shot-classification` pipeline.
## Training
This model was pre-trained on a set of 101 languages in the mC4, as described in [the mt5 paper](https://arxiv.org/abs/2010.11934). It was then fine-tuned on the [mt5_xnli_translate_train](https://github.com/google-research/multilingual-t5/blob/78d102c830d76bd68f27596a97617e2db2bfc887/multilingual_t5/tasks.py#L190) task for 8k steps in a similar manner to that described in the [offical repo](https://github.com/google-research/multilingual-t5#fine-tuning), with guidance from [Stephen Mayhew's notebook](https://github.com/mayhewsw/multilingual-t5/blob/master/notebooks/mt5-xnli.ipynb). The resulting model was then converted to :hugging_face: format.
## Eval results
Accuracy over XNLI test set:
| ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh | average |
|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 81.0 | 85.0 | 84.3 | 84.3 | 88.8 | 85.3 | 83.9 | 79.9 | 82.6 | 78.0 | 81.0 | 81.6 | 76.4 | 81.7 | 82.3 | 82.4 |
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
mradermacher/Gemago-2b-GGUF
|
mradermacher
| null |
[
"transformers",
"gguf",
"gemma",
"ko",
"en",
"dataset:traintogpb/aihub-koen-translation-integrated-base-10m",
"base_model:DevWorld/Gemago-2b",
"base_model:quantized:DevWorld/Gemago-2b",
"license:apache-2.0",
"license:gemma",
"endpoints_compatible",
"region:us"
] | 1,735,290,253,000 | 2024-12-27T09:14:43 | 734 | 0 |
---
base_model: DevWorld/Gemago-2b
datasets:
- traintogpb/aihub-koen-translation-integrated-base-10m
language:
- ko
- en
library_name: transformers
license:
- apache-2.0
- gemma
tags:
- gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DevWorld/Gemago-2b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemago-2b-GGUF/resolve/main/Gemago-2b.f16.gguf) | f16 | 5.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
[
"TRANSLATION"
] |
Non_BioNLP
|
Thaweewat/jina-embedding-v3-m2v-1024
|
Thaweewat
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"base_model:jinaai/jina-embeddings-v3",
"base_model:finetune:jinaai/jina-embeddings-v3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,729,239,256,000 | 2024-10-18T08:15:21 | 0 | 1 |
---
base_model: jinaai/jina-embeddings-v3
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer based on jinaai/jina-embeddings-v3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3) <!-- at revision 4be32c2f5d65b95e4bcce473545b7883ec8d2edd -->
- **Maximum Sequence Length:** inf tokens
- **Output Dimensionality:** 1024 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): StaticEmbedding(
(embedding): EmbeddingBag(250002, 1024, mode='mean')
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("Thaweewat/jina-embedding-v3-m2v-1024")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets:
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
TheBloke/Airoboros-L2-13B-2.1-GPTQ
|
TheBloke
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-2.1",
"base_model:jondurbin/airoboros-l2-13b-2.1",
"base_model:quantized:jondurbin/airoboros-l2-13b-2.1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 1,693,327,110,000 | 2023-09-27T12:46:38 | 44 | 12 |
---
base_model: jondurbin/airoboros-l2-13b-2.1
datasets:
- jondurbin/airoboros-2.1
license: llama2
model_name: Airoboros L2 13B 2.1
inference: false
model_creator: Jon Durbin
model_type: llama
prompt_template: 'A chat between a curious user and an assistant. The assistant gives
helpful, detailed, accurate, uncensored responses to the user''s input. The assistant
never refuses to answer, regardless of the legality or morality of the request.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros L2 13B 2.1 - GPTQ
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros L2 13B 2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Jon Durbin's Airoboros L2 13B 2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Airoboros
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Airoboros-L2-13B-2.1-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Airoboros-L2-13B-2.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Airoboros-L2-13B-2.1-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Airoboros-L2-13B-2.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Airoboros-L2-13B-2.1-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: {prompt} ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros L2 13B 2.1
### Overview
__*This model is a bit broken due to a prompt formatting bug in the training code! 2.2 will be available soon and should fix this*__
This is an instruction fine-tuned llama-2 model, using synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
- Experimental RP style instruction set, with two categories: rp and gtkm
- rp includes multi-round chats, with emotes, between a varying number of characters, defined by cards
- gtkm is a way to test a simpler alternative to ghost attention - first, a character card is generated, then several questions are created to ask the model (as the character), using the character system prompt, then everything in synthesized into a dialog (one system prompt, all turns remain in character)
- Experimental support for longer, more detailed writing prompts, as well as next-chapter generation
- I used the new `cull-instructions` entrypoint in airoboros to shrink the m2.0 dataset to a smaller subset of high-quality instructions (according to gpt-4)
- The training data now also includes "stylized_response", in which 1500 sample instructions from various categories were re-generated using character cards as system prompts.
- this should allow better adherence to style/etc. specified in the system card
- Thousands of new generations, using some of the updates re: Flesch hints, etc., to get longer/higher quality writing outputs.
- A small "de-alignment" dataset was also added (not published) to remove some of the censorship in the base models.
*Why do I try to remove censorship?*
- laws vary widely based on time and location
- language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal
- these models just produce text, what you do with that text is your resonsibility
- many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The training code was updated to randomize newline vs space:
https://github.com/jondurbin/qlora/blob/main/qlora.py#L559C1-L559C1
```
A chat. USER: {prompt} ASSISTANT:
```
or
```
A chat.
USER: {prompt}
ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space or newline, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space or newline, followed by "ASSISTANT: " (with a single space after the colon).
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", because the training data includes many multi-round chats and could otherwise start simulating a conversation!*__
### Helpful usage tips
*The prompts shown here are are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omited for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.1 models are built on top of llama-2.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
LucaZilli/experiment_model-snowflake-m_20250301_125327
|
LucaZilli
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,741,027,640,000 | 2025-03-03T18:47:35 | 14 | 0 |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("LucaZilli/experiment_model-snowflake-m_20250301_125327")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.2
- Accelerate:
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
gus07ven/distilbert-base-uncased-distilled-clinc
|
gus07ven
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,679,695,914,000 | 2023-03-24T22:18:53 | 13 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.947741935483871
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2570
- Accuracy: 0.9477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2156 | 1.0 | 318 | 3.1467 | 0.7535 |
| 2.3971 | 2.0 | 636 | 1.5584 | 0.8642 |
| 1.1564 | 3.0 | 954 | 0.7733 | 0.9103 |
| 0.5608 | 4.0 | 1272 | 0.4558 | 0.9335 |
| 0.3006 | 5.0 | 1590 | 0.3396 | 0.9419 |
| 0.1822 | 6.0 | 1908 | 0.2925 | 0.9426 |
| 0.1239 | 7.0 | 2226 | 0.2693 | 0.9448 |
| 0.0941 | 8.0 | 2544 | 0.2648 | 0.9465 |
| 0.0814 | 9.0 | 2862 | 0.2610 | 0.9461 |
| 0.0739 | 10.0 | 3180 | 0.2570 | 0.9477 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
IDEA-CCNL/Ziya-Coding-15B-v1
|
IDEA-CCNL
|
text-generation
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"zh",
"en",
"arxiv:2210.08590",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,694,142,800,000 | 2023-09-08T06:15:55 | 15 | 4 |
---
language:
- zh
- en
library_name: transformers
license: gpl-3.0
pipeline_tag: text-generation
---
# Ziya-Coding-15B-v1
# ๅงๅญ็็ณปๅๆจกๅ
- [Ziya-LLaMA-13B-v1.1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1.1)
- [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)
- [Ziya-LLaMA-7B-Reward](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-7B-Reward)
- [Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1)
- [Ziya-BLIP2-14B-Visual-v1](https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1)
- [Ziya-Writing-LLaMa-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-Writing-LLaMa-13B-v1)
## ็ฎไป Brief Introduction
ๅงๅญ็ไปฃ็ ๅคงๆจกๅV1ๆฏๅบไบStarCoderBase็155ไบฟๅๆฐ็ไปฃ็ ้ข่ฎญ็ปๆจกๅ๏ผๅฏไปฅๆ นๆฎๆไปคๅฎๆ็ๆๅไฟฎๆนไปฃ็ ใไปฃ็ ่งฃ้ใไปฃ็ ็ปญๅใNL2SQL็ญไธ็ณปๅ็ไปฃ็ ็ธๅ
ณไปปๅกใ็ฎๅๅงๅญ็ไปฃ็ ๅคงๆจกๅV1ๅทฒๅฎๆๅคง่งๆจก้ข่ฎญ็ปใๆ็็ฃๅพฎ่ฐ็่ฎญ็ป่ฟ็จใ
Ziya-Coding-15B-v1 is a pre-training model with 15.5 billion parameters based on StarCoderBase. It can complete a series of code-related tasks such as generating and modifying code, code interpretation, code continuation, NL2SQL, etc., according to instructions. Currently, Ziya-Writing-LLaMa-13B-v1 has completed the large-scale pre-training (PT), and supervised fine-tuning (SFT) training process.
ๆดๅค็ป่ๅฏไปฅๅ่ๆไปฌ็ๅ
ฌไผๅทๆ็ซ ๏ผ
[ๅงๅญ็ๅคงๆจกๅ็ณปๅ | ไปฃ็ ๆจกๅziya-codingๅๅธ๏ผไฝๆๆฌๅพฎ่ฐๅณๅฏๅญฆไผๅจไธๆๅบๆฏ็ผ็จ](https://mp.weixin.qq.com/s/tWaRF1wL3HM87ZDEawd2UA)
## ่ฝฏไปถไพ่ต
```
pip install torch==1.12.1 tokenizers==0.13.3 git+https://github.com/huggingface/transformers
```
## ๆจกๅๅ็ฑป Model Taxonomy
| ้ๆฑ Demand | ไปปๅก Task | ็ณปๅ Series | ๆจกๅ Model | ๅๆฐ Parameter | ้ขๅค Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| ไปฃ็ Coding | AGIๆจกๅ | ๅงๅญ็ Ziya | StarCoderBase | 15.5B | English&Chinese |
## ๆจกๅไฟกๆฏ Model Information
### ็ปง็ปญ้ข่ฎญ็ป Continual pretraining
็ฑไบStarCoderBase็่ฎญ็ปๆฐๆฎๅบๆฌไธบไปฃ็ ๆฐๆฎ๏ผๅ ๆญคๅ
ถ่ฏญ่จ็่งฃ่ฝๅๅๆไปค้ตๅพช่ฝๅๅๅผฑ๏ผ็นๅซๆฏไฝฟ็จไธญๆ็ๆไปฃ็ ็ๅบๆฏไธ่ฟ่ฟไธๅฏ็จใไธบๅฉ็จๅฎไผ็ง็ไปฃ็ ็ๆ่ฝๅ๏ผๅนถๆๅๆจกๅ็ไธญๆ่ฏญ่จ็่งฃ่ฝๅ๏ผๆไปฌๅจ่ชๅปบ็้ข่ฎญ็ป่ฏญๆไธญ็ฒพ้ไบไธญ่ฑๆๅไปฃ็ ๅ
ฑ100Btoken็้ซ่ดจ้่ฏญๆ๏ผ่ฟ่ก็ปง็ปญ้ข่ฎญ็ปใ
ๅจๅข้่ฎญ็ป่ฟ็จไธญ๏ผๆไปฌไฝฟ็จ144ๅผ 40GB็A100่ฎญ็ป10ๅคฉ๏ผbatch_sizeๆฏ2.6M๏ผไฝฟ็จFlashAttentionๅMulti-Query Attention็ญๆๆฏๅ ้ๆจกๅ่ฎญ็ปๅๅๅฐๆพๅญๅ ็จ๏ผๅๅ้่พพๅฐ139.8 TFLOPSใ
Due to the fact that the training data for StarCoderBase is primarily code data, its language comprehension and command compliance capabilities are relatively weak, especially in scenarios where Chinese is used to generate code. To leverage its excellent code generation capabilities and enhance the model's Chinese language understanding capabilities, we have carefully selected high-quality corpus of 100B tokens from our self-built pre-training corpus, which includes Chinese, English, and code, for further pre-training.
During the incremental training process, we used 144 A100s with 40GB each for 10 days of training, with a batch size of 2.6M. We utilized technologies such as FlashAttention and Multi-Query Attention to accelerate model training and reduce GPU memory usage, achieving a throughput of 139.8 TFLOPS.
### ๆ็็ฃๅพฎ่ฐ Supervised finetuning
ๆไปฌๆถ้ๅนถๆด็ไบๅคง้็ไปฃ็ ไปปๅกๆฐๆฎ้๏ผๅนถๆ นๆฎ่งๅๅ็ผ่ฏๅ้ฆ่ฟ่กไธฅๆ ผๆธ
ๆด๏ผๆๅปบไบ้ซ่ดจ้็ไปฃ็ ๆไปคๆฐๆฎ๏ผๆฐๆฎไธญๅ
ๅซ็ซ่ต้ขใไปฃ็ ็ฟป่ฏใsqlใไปฃ็ ่งฃ้ใไปฃ็ ็ๆใไปฃ็ ็ฅ่ฏ้ฎ็ญ็ญไธฐๅฏ็ไปปๅก๏ผไฟ่ฏไบๆไปค็ๅคๆ ทๆงใ
ๅๆถๆไปฌๅฉ็จself-instructใevol-instruct็ๆนๆณ๏ผ็ๆไบๆดๅค็้ซ่ดจ้้็จๆไปคๆฐๆฎใ
ๆไปฌ่ฟ่กไบไธไธช้ถๆฎต็ๅพฎ่ฐใๅจ็ฌฌไธ้ถๆฎตไธญ๏ผๆไปฌไฝฟ็จไบ45ไธๆกไธญๆ้็จๆฐๆฎ๏ผ่ชๅปบinstructionๆฐๆฎ้ไธญ้ๆ ท๏ผๆฅ่ฎญ็ปๆจกๅไปฅๅฏน้ฝไบบ็ฑปๆๅพใๅจ็ฌฌไบ้ถๆฎต็ๆ็็ฃ่ฎญ็ปไธญ๏ผๆไปฌไฝฟ็จไบไธญ่ฑๆ็ไปฃ็ ๆไปคๆฐๆฎๆฅๆฟๅๆจกๅ็ไปฃ็ ่ฝๅใๅจ็ฌฌไธ้ถๆฎต๏ผๆไปฌๅฉ็จ็ผ่ฏๅ้ฆๆๅปบไธฅๆ ผ้ซ่ดจ้็ไปฃ็ ็ๆๆฐๆฎ๏ผ่ฟไธๆญฅๆๅไบ็ๆ็ๅ็กฎ็ใ
We have collected and organized a large amount of code task datasets, and conducted strict cleaning based on rules and compilation feedback, constructing high-quality code instruction data. The data includes a rich variety of tasks such as competition questions, code translation, SQL, code interpretation, code generation, code knowledge Q&A, etc., ensuring the diversity of instructions.
At the same time, we have generated more high-quality general instruction data using the self-instruct and evol-instruct methods.
We conducted fine-tuning in three stages. In the first stage, we used 450,000 pieces of general Chinese data (sampled from our self-built instruction dataset) to train the model to align with human intentions. In the second stage of supervised training, we used Chinese and English code instruction data to stimulate the model's coding capabilities. In the third stage, we used compilation feedback to construct strictly high-quality code generation data, further improving the accuracy of generation.
### ๆๆ่ฏไผฐ Performance
| ๆจกๅ Moldel | HumanEval | MBPP |
| :----: | :----: | :----: |
| Ziya-Coding-15B-v1 | pass@1:50.1 pass@10:77.1 pass@100:91.4| pass@1:50.2 |
ๅ
ถไธญ๏ผๅพฎ่ฐๆฐๆฎ้ไธญๆไปฌๅ้คไบ่ฏๆตไปปๅก็ๆฐๆฎ้๏ผ้ฟๅ
ๆฐๆฎๆณ้ฒ๏ผHumanEval็pass@1็ๆๆ ๆฏ่ดชๅฉช็ๆ็็ปๆ๏ผ
pass@10ๅpass@100ๆฏๆธฉๅบฆๅๆฐtemperature=0.9ไธ็ๆ็็ปๆใ
In the fine-tuning dataset, we excluded the evaluation task dataset to avoid data leakage. The pass@1 metric for HumanEval is based on the results of greedy generation, while pass@10 and pass@100 are based on the results generated with a temperature parameter of 0.9.
## <span id="jump"> ไฝฟ็จ Usage </span>
```python3
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda")
prompt = "ๅไธๆฎตๅฟซ้ๆๅบ"
model = AutoModelForCausalLM.from_pretrained("IDEA-CCNL/Ziya-Coding-15B-v1", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Ziya-Coding-15B-v1", use_fast=False)
pre_prompt = "The following is a conversation between a human and an artificial intelligence assistant developed by IDEA."
input = pre_prompt + "<|Human|>:" + prompt + "<|Bot|>:"
input_ids = tokenizer(input, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(
input_ids,
max_new_tokens=512,
do_sample = True,
top_p = 0.85,
temperature = 1.0,
repetition_penalty=1.,
eos_token_id=tokenizer.encode("<|end|>"),
)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
```
## ๅผ็จ Citation
ๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็[่ฎบๆ](https://arxiv.org/abs/2210.08590)๏ผ
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2210.08590):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
ๆฌข่ฟๅผ็จๆไปฌ็[็ฝ็ซ](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
abiatarfestus/marian-finetuned-en_ng_bible-en-to-ng
|
abiatarfestus
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,693,988,500,000 | 2024-07-06T22:26:20 | 10 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-fr
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-en_ng_bible-en-to-ng
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-en_ng_bible-en-to-ng
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"TRANSLATION"
] |
Non_BioNLP
|
TheBloke/Airoboros-L2-70b-2.2-GPTQ
|
TheBloke
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-2.2",
"base_model:jondurbin/airoboros-l2-70b-2.2",
"base_model:quantized:jondurbin/airoboros-l2-70b-2.2",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 1,694,540,186,000 | 2023-09-27T12:49:02 | 68 | 4 |
---
base_model: jondurbin/airoboros-l2-70b-2.2
datasets:
- jondurbin/airoboros-2.2
license: llama2
model_name: Airoboros L2 70B 2.2
inference: false
model_creator: Jon Durbin
model_type: llama
prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros L2 70B 2.2 - GPTQ
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros L2 70B 2.2](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Jon Durbin's Airoboros L2 70B 2.2](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Chat
```
A chat.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.77 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 31.84 GB | No | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Airoboros-L2-70b-2.2-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Airoboros-L2-70b-2.2-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Airoboros-L2-70b-2.2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Airoboros-L2-70b-2.2-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Airoboros-L2-70b-2.2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Airoboros-L2-70b-2.2-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''A chat.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros L2 70B 2.2
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
Highlights:
- The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below).
- "Clean" version of airoboros-2.2 dataset -- this model __does not__ contain the de-alignment data.
- For an uncensored version, use spicyboros variant: https://hf.co/jondurbin/spicyboros-70b-2.2
- I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less.
- More multiple choice, better awareness, some alignment for normal use case but system-prompt overridable etc.
Breakdown of the training data:
| Count | Category |
|-------|----------------------------|
| 36 | experience |
| 60 | quiz |
| 63 | card |
| 76 | greeting |
| 100 | detailed\_writing |
| 200 | song |
| 204 | editor |
| 207 | counterfactual\_contextual |
| 268 | cot |
| 339 | theory\_of\_mind |
| 416 | awareness |
| 439 | stylized\_response |
| 457 | misconception |
| 500 | summarization |
| 620 | riddle |
| 719 | agent |
| 800 | plan |
| 873 | gtkm |
| 963 | rp |
| 1000 | wordgame |
| 1279 | multiple\_choice |
| 1519 | joke |
| 1758 | writing |
| 2152 | contextual |
| 2183 | trivia |
| 2364 | roleplay |
| 4699 | general |
| 5775 | coding |
| 11366 | orca |
In other words, it's a fairly general purpose model, but focuses fairly heavily on instruction response pairs rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf).
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
### Fine tuning info
https://gist.github.com/jondurbin/eda7c4dc9e4459952b47eafb9e4056b2
Earlier checkpoints of adapter model here: https://huggingface.co/jondurbin/airoboros-l2-70b-2.2-checkpoints
### Helpful usage tips
*The prompts shown here are are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omited for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context)
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] |
Non_BioNLP
|
hopkins/mbart-finetuned-eng-kor-150250129358
|
hopkins
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,688,318,938,000 | 2023-07-02T17:43:51 | 8 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: mbart-finetuned-eng-kor-150250129358
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-150250129358
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9935
- Bleu: 6.9768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
[
"TRANSLATION"
] |
Non_BioNLP
|
Agnuxo/Qwen2-1.5B-Instruct_MOE_Director-GGUF_16bit
|
Agnuxo
| null |
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,724,712,586,000 | 2024-08-28T10:41:57 | 14 | 0 |
---
base_model: unsloth/qwen2-1.5b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
---
# Qwen2-1.5B-Instruct MOE Director (16-bit)
## Model Description
This is a 16-bit quantized version of the Qwen2-1.5B-Instruct model fine-tuned to act as a "director" within a Multi-Expert Question Answering System (MOE). It's designed to efficiently route incoming questions to specialized expert LLMs in different domains.
**Developed by:** Agnuxo
**License:** Apache-2.0
**Finetuned from model:** [unsloth/qwen2-1.5b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2-1.5b-instruct-bnb-4bit)
This model was trained 2x faster using [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## How the MOE System Works
This model is a core component of a larger Multi-Expert Question Answering System. Here's a breakdown of the system's functionality:
1. **Model Loading:** The system loads the "director" LLM and keeps other expert LLMs (e.g., for programming, biology, mathematics) ready for use.
2. **Expert Routing:** When a user asks a question, the system either:
- Uses keyword matching to identify the relevant domain.
- Consults the director LLM to classify the question's category.
3. **Dynamic Expert Loading:** The system loads the chosen expert LLM into memory, optimizing resource usage by releasing any previously active expert.
4. **Response Generation:** The selected expert LLM receives the question and generates a tailored answer.
5. **Chat Interface:** A user-friendly chat interface facilitates interaction with the MOE system.
This MOE approach enhances efficiency and accuracy compared to relying on a single, general-purpose LLM.
Repository and Additional Information
Full Code: https://huggingface.co/Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit/resolve/main/MOE-LLMs3.py
GitHub Repository: https://github.com/Agnuxo1/NEBULA
## Code Example
The following code demonstrates the implementation of the Multi-Expert Question Answering System:
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
MODEL_CONFIG = {
"director": {
"name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit",
"task": "text-generation",
},
"programming": {
"name": "Qwen/Qwen2-1.5B-Instruct",
"task": "text-generation",
},
"biology": {
"name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant_16bit",
"task": "text-generation",
},
"mathematics": {
"name": "Qwen/Qwen2-Math-1.5B-Instruct",
"task": "text-generation",
}
}
KEYWORDS = {
"biology": ["cell", "DNA", "protein", "evolution", "genetics", "ecosystem", "organism", "metabolism", "photosynthesis", "microbiology", "cรฉlula", "ADN", "proteรญna", "evoluciรณn", "genรฉtica", "ecosistema", "organismo", "metabolismo", "fotosรญntesis", "microbiologรญa"],
"mathematics": ["Math" "mathematics", "equation", "integral", "derivative", "function", "geometry", "algebra", "statistics", "probability", "ecuaciรณn", "integral", "derivada", "funciรณn", "geometrรญa", "รกlgebra", "estadรญstica", "probabilidad"],
"programming": ["python", "java", "C++", "HTML", "scrip", "code", "Dataset", "API", "framework", "debugging", "algorithm", "compiler", "database", "CSS", "JSON", "XML", "encryption", "IDE", "repository", "Git", "version control", "front-end", "back-end", "API", "stack trace", "REST", "machine learning"]
}
class MOELLM:
def __init__(self):
self.current_expert = None
self.current_model = None
self.current_tokenizer = None
self.device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {self.device}")
self.load_director_model()
def load_director_model(self):
"""Loads the director model."""
print("Loading director model...")
model_name = MODEL_CONFIG["director"]["name"]
self.director_tokenizer = AutoTokenizer.from_pretrained(model_name)
self.director_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(self.device)
self.director_pipeline = pipeline(
MODEL_CONFIG["director"]["task"],
model=self.director_model,
tokenizer=self.director_tokenizer,
device=self.device
)
print("Director model loaded.")
def load_expert_model(self, expert):
"""Dynamically loads an expert model, releasing memory from the previous model."""
if expert not in MODEL_CONFIG:
raise ValueError(f"Unknown expert: {expert}")
if self.current_expert != expert:
print(f"Loading expert model: {expert}...")
# Free memory from the current model if it exists
if self.current_model:
del self.current_model
del self.current_tokenizer
torch.cuda.empty_cache()
model_config = MODEL_CONFIG[expert]
self.current_tokenizer = AutoTokenizer.from_pretrained(model_config["name"])
self.current_model = AutoModelForCausalLM.from_pretrained(model_config["name"], torch_dtype=torch.float16).to(self.device)
self.current_expert = expert
print(f"{expert.capitalize()} model loaded.")
return pipeline(
MODEL_CONFIG[expert]["task"],
model=self.current_model,
tokenizer=self.current_tokenizer,
device=self.device
)
def determine_expert_by_keywords(self, question):
"""Determines the expert based on keywords in the question."""
question_lower = question.lower()
for expert, keywords in KEYWORDS.items():
if any(keyword in question_lower for keyword in keywords):
return expert
return None
def determine_expert(self, question):
"""Determines which expert should answer the question."""
expert = self.determine_expert_by_keywords(question)
if expert:
print(f"Expert determined by keyword: {expert}")
return expert
prompt = f"Classify the following question into one of these categories: programming, biology, mathematics. Question: {question}\nCategory:"
response = self.director_pipeline(prompt, max_length=100, num_return_sequences=1)[0]['generated_text']
expert = response.split(":")[-1].strip().lower()
if expert not in MODEL_CONFIG:
expert = "director"
print(f"Redirecting question to: {expert}")
return expert
def generate_response(self, question, expert):
"""Generates a response using the appropriate model."""
try:
model = self.load_expert_model(expert)
prompt = f"Answer the following question as an expert in {expert}: {question}\nAnswer:"
response = model(prompt, max_length=200, num_return_sequences=1)[0]['generated_text']
return response.split("Answer:")[-1].strip()
except Exception as e:
print(f"Error generating response: {str(e)}")
return "Sorry, there was an error processing your request. Please try again."
def chat_interface(self):
"""Simple chat interface."""
print("Welcome to the MOE-LLM chat. Type 'exit' to quit.")
while True:
question = input("\nYou: ")
if question.lower() in ['exit', 'quit']:
break
try:
expert = self.determine_expert(question)
response = self.generate_response(question, expert)
print(f"\n{expert.capitalize()}: {response}")
except Exception as e:
print(f"Error in chat: {str(e)}")
print("Please try asking another question.")
if __name__ == "__main__":
moe_llm = MOELLM()
moe_llm.chat_interface()
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
ankur310794/bert-large-uncased-nq-small-answer
|
ankur310794
|
question-answering
|
[
"transformers",
"tf",
"bert",
"question-answering",
"small answer",
"dataset:natural_questions",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-05-19T11:44:55 | 63 | 0 |
---
datasets:
- natural_questions
tags:
- small answer
---
# Open Domain Question Answering
A core goal in artificial intelligence is to build systems that can read the web, and then answer complex questions about any topic. These question-answering (QA) systems could have a big impact on the way that we access information. Furthermore, open-domain question answering is a benchmark task in the development of Artificial Intelligence, since understanding text and being able to answer questions about it is something that we generally associate with intelligence.
# The Natural Questions Dataset
To help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets.
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1096
|
Lots-of-LoRAs
| null |
[
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 1,735,926,898,000 | 2025-01-03T17:55:03 | 0 | 0 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1096
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task1096_ted_translation_ja_it
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brรผel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task1096_ted_translation_ja_it sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brรผelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brรผel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"TRANSLATION"
] |
Non_BioNLP
|
MugheesAwan11/bge-base-citi-dataset-detailed-9k-1_5k-e1
|
MugheesAwan11
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9000",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,720,212,116,000 | 2024-07-05T20:42:11 | 4 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_ndcg@100
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9000
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: ' and Banking and Securities Services, and that helps us in FX,
in commodities and in rates around the world. So, Markets is important both in
terms of its leadership, but also, how it fits into the strengths that we have
from this simpler Citi of those five core interconnected businesses. We''ve demonstrated
solid returns in the past. I think a lot of the actions we''ve been taking will
help drive returns in the future. And you should be getting confidence when you
see the discipline we''re putting on to Copyright 2024 Citigroup Inc. 14 TRANSCRIPT
Citi Fourth Quarter 2023 Earnings Call January 12, 2024 RWA, 5.3, getting close
that target, we said at Investor Day. We''re moving that up to 6. The exits we''ve
got of nonstrategic businesses shows our focus on efficiency. And we''ve also
been doing some good investments in our technology, and that''s getting us into
a good place there. So'
sentences:
- What are the strengths and importance of Markets in terms of leadership and its
role in the interconnected businesses of Citigroup Inc?
- What are the additional resources available to help assess current finances and
plan for the future?
- ยฟPuedo cerrar mi cuenta en cualquier momento y sin restricciones? ยฟQuรฉ sucede
si mi cuenta tiene un saldo de cero durante 90 dรญas consecutivos? ยฟPuedo obtener
copias de cheques cancelados o imรกgenes de los mismos en mi estado de cuenta?
ยฟDebo llamar a CitiPhone Banking para solicitar las imรกgenes de los cheques? ยฟExisten
comisiones adicionales o cargos asociados con esto? ยฟPuedo acceder a las imรกgenes
de los cheques en lรญnea y imprimirlos sin ningรบn costo adicional en citibankonline.com?
- source_sentence: ' legal, investment, or financial advice and is not a substitute
for professional advice. It does not indicate the availability of any Citi product
or service. For advice about your specific circumstances, you should consult a
qualified professional. Additional Resources - ! Insights and Tools Utilize these
resources to help you assess your current finances plan for the future. - ! FICO
Score Learn how FICO Scores are determined, why they matter and more. - ! Glossary
Review financial terms definitions to help you better understand credit finances.
!Back to Top Back to Top !Equal housing lender Contact Us - Consumer: 1-800-347-4934
- Consumer TTY: 711 - Business: 1-866-422-3091 - Business TTY: 711 - LostStolen:
1-800-950-5114 - LostStolen TTY: 711 About Us - Locations - Careers - Site Map
Terms Conditions - Card Member Agreement - Security - Privacy - Notice At Collection
-'
sentences:
- What are the key steps in the tailor consultative process for wealth advisory
services to create a broad plan for the client's future?
- What are the benefits and program details of the American Airlines AAdvantage
MileUp Card? Can I earn AAdvantage miles for flights, upgrades, car rentals, hotel
stays, or vacation packages? How many AAdvantage miles can I earn at grocery stores,
including grocery delivery services? How many AAdvantage miles can I earn on eligible
American Airlines purchases? How many AAdvantage miles can I earn on all other
purchases? Can I earn loyalty points for eligible mile purchases? How many loyalty
points can I earn?
- What resources are available to help assess current finances and plan for the
future?
- source_sentence: ' Watchlist Alerts . 17 Delivery Settings and Hold Alerts for Brokerage
Alerts . 18 5. Electronic Delivery . 19 Add E-mail Addresses . 19 Set Up e-Delivery
for an Individual Account . 20 3 Set Up e-Delivery for Multiple Accounts using
Quick Enroll. 20 View Statements Reports. 21 View Trade Confirmations. 21 View
Tax Documents . 22 View Notifications . 22 6. Account Portfolio . 24 Overview
. 24 Portfolio Changes . 24 Quick Links . 25 Composition of Holdings . 25 Quick
Trade . 25 Open Orders Executed Trades . 25 Strong Weak Performers . 26 Portfolio
History . 26 News. 27 Balances . 28 Holdings . 29 Non Dollar transactions on Non
US exchanges valued at foreign currency of the exchange . 30 Realized GainLoss
. 32 History . 34 Projected Cash Flow. 35 7. Transact . 36 Trade Equities . '
sentences:
- What is the track record of the company in managing the risks associated with
its global network and what is its business model focused on?
- What are the watchlist alerts for brokerage alerts and how can electronic delivery
be set up for an individual account and multiple accounts using quick enroll?
How can statements reports, trade confirmations, tax documents, and notifications
be viewed? What is the overview of the account portfolio and how can portfolio
changes, quick links, composition of holdings, quick trades, open orders executed
trades, strong weak performers, portfolio history, news, balances, holdings, non-dollar
transactions on non-US exchanges valued at foreign currency of the exchange, realized
gain/loss, history, and projected cash flow be accessed? How can equities be traded?
- What does the EMV chip do and how does it work?
- source_sentence: . Los productos y servicios mencionados en este documento no se
ofrecen a individuos que residen en la Unin Europea, el Espacio Econmico Europeo,
Suiza, Guernsey, Jersey, Mnaco, Isla de Man, San Marino y el Vaticano. Su elegibilidad
para productos y servicios en particular est sujeta a una decisin definitiva de
nuestra parte. Este documento no es ni debe interpretarse como si fuera una oferta,
invitacin o solicitud para comprar o vender alguno de los productos y servicios
mencionados aqu a tales personas. 2020 Citibank, N.A., Miembro FDIC. Citi, Citi
con el Diseo del Arco y las otras marcas usadas en el presente documento son marcas
de servicio de Citigroup Inc. o sus filiales, usadas y registradas en todo el
mundo. Todos los derechos reservados. IFCBRO-0320SP Treasury
sentences:
- exime Citibank este cargo para cuentas Citigold cheques de diseo estndar para
todas Pedidos de chequeras, cheques oficiales, entrega rpida en el pas de tarjetas
de dbito de reemplazo, giro para clientes, cargos por investigacin y proceso de
verificacin consular o carta de referencia, cumplimiento de proceso legal y servicios
de cobranza. Tambiรฉn exime Citibank este cargo para cuentas Citigold en el caso
de canje de cupones de bonos.
- What are the products and services mentioned in this document and where are they
offered? Can individuals residing in the European Union, the European Economic
Area, Switzerland, Guernsey, Jersey, Monaco, Isle of Man, San Marino, and the
Vatican avail these products and services? Is this document an offer, invitation,
or solicitation to buy or sell any of the mentioned products and services to such
individuals? Which organization owns the trademarks and service marks used in
this document?
- How can credit card points be redeemed for cash and what can the cash be used
for?
- source_sentence: ' Drive, Attn: Arbitration Opt Out, San Antonio, TX 78245. Your
rejection notice must be mailed within 45 days of account opening. Your rejection
notice must state that you reject the arbitration provision and include your name,
address, account number and personal signature. No one else may sign the rejection
notice. Your rejection notice will not apply to the arbitration provision governing
any other account that you have or had with us. Rejection of this arbitration
provision wont affect your other rights or responsibilities under this Agreement,
including use of the account. 68 Appendix 1: Fee Schedule The following Checkbook
Order Fee, Safe Deposit Fee, Fee Chart, and Wire Transfer Fee Chart are known
as the Fee Schedule. Unless otherwise stated, all fees described in the Fee Schedule
are charged to the account associated with the product or service. Checkbook Orders.
Fees will be charged for standard and Non-Standard checkbook orders. Non-Standard
Checkbook Orders include non-standard design, non-standard lettering'
sentences:
- How can I start building credit?
- What is the Annual Percentage Yield for the Citigold Private Client Pendant Exclusive
24K Gold Rabbit on the Moon or IL in the states of NY, CT, MD, VA, DC, CA, NV,
NJ and select markets in FL?
- What is the process for rejecting the arbitration provision and what information
should be included in the rejection notice?
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.524
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.718
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7826666666666666
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.848
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.524
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23933333333333334
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1565333333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08479999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.524
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.718
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7826666666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.848
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6849393771058847
name: Cosine Ndcg@10
- type: cosine_ndcg@100
value: 0.7108472738066071
name: Cosine Ndcg@100
- type: cosine_mrr@10
value: 0.6327346560846572
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.638367026629088
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("MugheesAwan11/bge-base-citi-dataset-detailed-9k-1_5k-e1")
# Run inference
sentences = [
' Drive, Attn: Arbitration Opt Out, San Antonio, TX 78245. Your rejection notice must be mailed within 45 days of account opening. Your rejection notice must state that you reject the arbitration provision and include your name, address, account number and personal signature. No one else may sign the rejection notice. Your rejection notice will not apply to the arbitration provision governing any other account that you have or had with us. Rejection of this arbitration provision wont affect your other rights or responsibilities under this Agreement, including use of the account. 68 Appendix 1: Fee Schedule The following Checkbook Order Fee, Safe Deposit Fee, Fee Chart, and Wire Transfer Fee Chart are known as the Fee Schedule. Unless otherwise stated, all fees described in the Fee Schedule are charged to the account associated with the product or service. Checkbook Orders. Fees will be charged for standard and Non-Standard checkbook orders. Non-Standard Checkbook Orders include non-standard design, non-standard lettering',
'What is the process for rejecting the arbitration provision and what information should be included in the rejection notice?',
'What is the Annual Percentage Yield for the Citigold Private Client Pendant Exclusive 24K Gold Rabbit on the Moon or IL in the states of NY, CT, MD, VA, DC, CA, NV, NJ and select markets in FL?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.524 |
| cosine_accuracy@3 | 0.718 |
| cosine_accuracy@5 | 0.7827 |
| cosine_accuracy@10 | 0.848 |
| cosine_precision@1 | 0.524 |
| cosine_precision@3 | 0.2393 |
| cosine_precision@5 | 0.1565 |
| cosine_precision@10 | 0.0848 |
| cosine_recall@1 | 0.524 |
| cosine_recall@3 | 0.718 |
| cosine_recall@5 | 0.7827 |
| cosine_recall@10 | 0.848 |
| cosine_ndcg@10 | 0.6849 |
| cosine_ndcg@100 | 0.7108 |
| cosine_mrr@10 | 0.6327 |
| **cosine_map@100** | **0.6384** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 9,000 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:--------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 152 tokens</li><li>mean: 206.96 tokens</li><li>max: 299 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 52.08 tokens</li><li>max: 281 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code> adverse effect on the value of any Index Linked Product. 15 Citi Investment Strategies Citi Flexible Allocation 6 Excess Return Index Index General Conditions Section D: Definitions 16 Citi Investment Strategies Citi Flexible Allocation 6 Excess Return Index Index General Conditions Definitions References to the "applicable Index Methodology" are references to the Index Methodology relating to the relevant Index which together with these Index General Conditions completes the Index Conditions for such Index. References to a "Section" shall be references to a section of these Index General Conditions. References to a "Part" shall be references to a part of the applicable Index Methodology. "Adjustment Event" shall, in respect of a Constituent, have the meaning given to it in the Constituent Schedule applicable to such Constituent. "Affected Constituent" shall have the meaning given to it in Section B . "Affiliate" shall mean, in respect of a person, any entity controlled by such person, any entity which controls</code> | <code>What is the meaning of "applicable Index Methodology" in the Index General Conditions? What does "Section" refer to in the Index General Conditions? How is "Part" defined in the applicable Index Methodology? What is the definition of "Adjustment Event" in relation to a Constituent? How is an "Affected Constituent" defined in Section B? What is the definition of "Affiliate" in relation to a person?</code> |
| <code> that the Depositary andor the Custodian may in the future identify from the balance of Shares on deposit in the DR program as belonging to the holders of DRs in the DR Balance on the basis of a full or partial reconciliation of the Share-to-DR imbalance created by the Automatic Conversions and Forced Conversions. The is no guarantee that any such reconciliation will be successful or that any such Shares will be available any time in the near or distant future, and as a result there is no indication that the DRs credited to the DR balance have, or will in the future have, any value. The creation of the DR Balance and any credit of DRs in the DR balance to a Beneficial Owner is purely an accommodation to the Beneficial Owner and does not represent any undertaking of any value or service. Neither the Depositary nor the Custodian undertake in any way to take any action on behalf of the holders of DRs credited to the DR balance to retrieve any Shares from third parties</code> | <code>What is the likelihood of the Depositary and/or the Custodian successfully reconciling the Share-to-DR imbalance in the DR program and identifying Shares belonging to DR holders in the DR Balance? Is there any guarantee of the availability or future value of these Shares? Are the DRs credited to the DR balance of any value? Does the creation of the DR Balance and credit of DRs to Beneficial Owners represent any commitment of value or service? Do the Depositary and the Custodian have any responsibility to retrieve Shares from third parties on behalf of DR holders credited to the DR balance?</code> |
| <code> of ways to save money while shopping online. Thats why a browser extension like Citi Shop can be a great addition to your online shopping experience. Lets look at how the Citi Shop extension works. Contact helpdeskciti.com What is the Citi Shop Browser Extension? Citi Shop is a free desktop browser extension you can download through the Chrome, Edge or Safari app stores. Once installed, enroll your eligible Citi credit card and let the Citi Shop program automatically search for available offers at more than 5,000 online merchants across the internet. How to Install the Citi Shop Browser Extension First, download the Citi Shop browser extension from the Chrome, Edge or Safari app store for your desktop browser. Once downloaded, you will be required to enroll your eligible Citi credit card. Contact helpdeskciti.com How to Use the Citi Shop Browser Extension Simply shop at your favorite online merchants. The Citi Shop program automatically searches behind the scenes to find money-saving offers percent</code> | <code>What is the Citi Shop Browser Extension and how does it work? How can I install the Citi Shop Browser Extension for my desktop browser? How do I use the Citi Shop Browser Extension to save money while shopping online? Who can I contact for help with the Citi Shop Browser Extension?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768
],
"matryoshka_weights": [
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_map@100 |
|:-------:|:-------:|:-------------:|:----------------------:|
| 0.0355 | 10 | 0.7377 | - |
| 0.0709 | 20 | 0.5614 | - |
| 0.1064 | 30 | 0.4571 | - |
| 0.1418 | 40 | 0.2944 | - |
| 0.1773 | 50 | 0.2584 | - |
| 0.2128 | 60 | 0.1855 | - |
| 0.2482 | 70 | 0.1699 | - |
| 0.2837 | 80 | 0.2212 | - |
| 0.3191 | 90 | 0.1827 | - |
| 0.3546 | 100 | 0.1801 | - |
| 0.3901 | 110 | 0.1836 | - |
| 0.4255 | 120 | 0.1112 | - |
| 0.4610 | 130 | 0.1638 | - |
| 0.4965 | 140 | 0.1355 | - |
| 0.5319 | 150 | 0.0873 | - |
| 0.5674 | 160 | 0.1852 | - |
| 0.6028 | 170 | 0.1424 | - |
| 0.6383 | 180 | 0.1467 | - |
| 0.6738 | 190 | 0.1879 | - |
| 0.7092 | 200 | 0.1382 | - |
| 0.7447 | 210 | 0.1358 | - |
| 0.7801 | 220 | 0.0906 | - |
| 0.8156 | 230 | 0.1173 | - |
| 0.8511 | 240 | 0.1196 | - |
| 0.8865 | 250 | 0.1251 | - |
| 0.9220 | 260 | 0.0922 | - |
| 0.9574 | 270 | 0.1344 | - |
| 0.9929 | 280 | 0.0751 | - |
| **1.0** | **282** | **-** | **0.6395** |
| 1.0284 | 290 | 0.166 | - |
| 1.0638 | 300 | 0.0842 | - |
| 1.0993 | 310 | 0.098 | - |
| 1.1348 | 320 | 0.0674 | - |
| 1.1702 | 330 | 0.071 | - |
| 1.2057 | 340 | 0.0527 | - |
| 1.2411 | 350 | 0.0401 | - |
| 1.2766 | 360 | 0.0575 | - |
| 1.3121 | 370 | 0.0418 | - |
| 1.3475 | 380 | 0.054 | - |
| 1.3830 | 390 | 0.0495 | - |
| 1.4184 | 400 | 0.0355 | - |
| 1.4539 | 410 | 0.0449 | - |
| 1.4894 | 420 | 0.0509 | - |
| 1.5248 | 430 | 0.0196 | - |
| 1.5603 | 440 | 0.0634 | - |
| 1.5957 | 450 | 0.0522 | - |
| 1.6312 | 460 | 0.0477 | - |
| 1.6667 | 470 | 0.0583 | - |
| 1.7021 | 480 | 0.0584 | - |
| 1.7376 | 490 | 0.0553 | - |
| 1.7730 | 500 | 0.0358 | - |
| 1.8085 | 510 | 0.0253 | - |
| 1.8440 | 520 | 0.0541 | - |
| 1.8794 | 530 | 0.0488 | - |
| 1.9149 | 540 | 0.0528 | - |
| 1.9504 | 550 | 0.0474 | - |
| 1.9858 | 560 | 0.0311 | - |
| 2.0 | 564 | - | 0.6384 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
gokuls/mobilebert_add_GLUE_Experiment_mnli_256
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,674,763,016,000 | 2023-01-27T01:43:23 | 138 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mobilebert_add_GLUE_Experiment_mnli_256
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- type: accuracy
value: 0.3522172497965826
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_mnli_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0985
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0987 | 1.0 | 3068 | 1.0988 | 0.3182 |
| 1.0987 | 2.0 | 6136 | 1.0986 | 0.3182 |
| 1.0986 | 3.0 | 9204 | 1.0988 | 0.3274 |
| 1.0986 | 4.0 | 12272 | 1.0988 | 0.3182 |
| 1.0986 | 5.0 | 15340 | 1.0985 | 0.3545 |
| 1.0986 | 6.0 | 18408 | 1.0986 | 0.3274 |
| 1.0986 | 7.0 | 21476 | 1.0989 | 0.3274 |
| 1.0986 | 8.0 | 24544 | 1.0986 | 0.3545 |
| 1.0986 | 9.0 | 27612 | 1.0986 | 0.3545 |
| 1.0986 | 10.0 | 30680 | 1.0987 | 0.3182 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
MultiBertGunjanPatrick/multiberts-seed-3-2000k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-10-04T05:09:23 | 99 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-3
---
# MultiBERTs Seed 3 Checkpoint 2000k (uncased)
Seed 3 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-2000k')
model = BertModel.from_pretrained("multiberts-seed-3-2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese
|
IDEA-CCNL
|
fill-mask
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"classification",
"zero-shot",
"arxiv:2210.08590",
"arxiv:2107.07498",
"arxiv:2109.03564",
"arxiv:2201.06910",
"arxiv:2110.04725",
"arxiv:2107.02137",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,665,298,685,000 | 2023-01-11T06:58:45 | 59 | 3 |
---
license: apache-2.0
tags:
- classification
- zero-shot
---
# Erlangshen-UniMC-RoBERTa-330M-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
- API: [Fengshen-OpenAPI](https://fengshenbang-lm.com/open-api)
## ็ฎไป Brief Introduction
UniMC ๆ ธๅฟๆๆณๆฏๅฐ่ช็ถ่ฏญ่จ็่งฃไปปๅก่ฝฌๅไธบ multiple choice ไปปๅก๏ผๅนถไธไฝฟ็จๅคไธช NLU ไปปๅกๆฅ่ฟ่ก้ข่ฎญ็ปใๆไปฌๅจ่ฑๆๆฐๆฎ้ๅฎ้ช็ปๆ่กจๆไป
ๅซๆ 2.35 ไบฟๅๆฐ็ [ALBERTๆจกๅ](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-Albert-235M-English)็zero-shotๆง่ฝๅฏไปฅ่ถ
่ถไผๅคๅไบฟ็ๆจกๅใๅนถๅจไธญๆๆต่ฏๅบๅ FewCLUE ๅ ZeroCLUE ไธคไธชๆฆๅไธญ๏ผ13ไบฟ็[ไบ้็ฅ](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese)่ทๅพไบ็ฌฌไธ็ๆ็ปฉใ
The core idea of UniMC is to convert natural language understanding tasks into multiple choice tasks and use multiple NLU tasks for pre-training. Our experimental results on the English dataset show that the zero-shot performance of a [ALBERT](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-Albert-235M-English) model with only 235 million parameters can surpass that of many hundreds of billions of models. And in the Chinese evaluation benchmarks FewCLUE and ZeroCLUE two lists, 1.3 billion [Erlangshen](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) won the first result.
## ๆจกๅๅ็ฑป Model Taxonomy
| ้ๆฑ Demand | ไปปๅก Task | ็ณปๅ Series | ๆจกๅ Model | ๅๆฐ Parameter | ้ขๅค Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| ้็จ General | ่ช็ถ่ฏญ่จ็่งฃ NLU | ไบ้็ฅ Erlangshen | RoBERTa | 330M | Chinese |
## ๆจกๅไฟกๆฏ Model Information
ๆไปฌไธบ้ถๆ ทๆฌๅญฆไน ่
ๆๅบไบไธ็งไธ่พๅ
ฅๆ ๅ
ณ็ๆฐ่ๅผ๏ผไปๆ็งๆไนไธ่ฏด๏ผๅฎไธไปปไฝๆ ผๅผๅ
ผๅฎนๅนถ้็จไบไธ็ณปๅ่ฏญ่จไปปๅก๏ผไพๅฆๆๆฌๅ็ฑปใๅธธ่ฏๆจ็ใๅ
ฑๆ่งฃๆใๆ
ๆๅๆใๆไปฌ็ๆนๆณๅฐ้ถๆ ทๆฌๅญฆไน ่ฝฌๅไธบๅค้กน้ๆฉไปปๅก๏ผ้ฟๅ
ๅธธ็จ็ๅคงๅ็ๆๆจกๅ๏ผๅฆ FLAN๏ผไธญ็้ฎ้ขใๅฎไธไป
ๅขๅ ไบๆจกๅ็ๆณๅ่ฝๅ๏ผ่ไธๆพ็ๅๅฐไบๅฏนๅๆฐ็้ๆฑใๆไปฌ่ฏๆไบ่ฟ็งๆนๆณๅฏไปฅๅจ้็จ่ฏญ่จๅบๅไธๅๅพๆๅ
่ฟ็ๆง่ฝ๏ผๅนถๅจ่ช็ถ่ฏญ่จๆจ็ๅๆๆฌๅ็ฑป็ญไปปๅกไธไบง็ไปคไบบๆปกๆ็็ปๆใๆดๅค่ฏฆ็ปไฟกๆฏๅฏไปฅๅ่ๆไปฌ็[่ฎบๆ](https://arxiv.org/abs/2210.08590)ๆ่
[GitHub](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
We propose an new paradigm for zero-shot learners that is input-agnostic, in the sense that it is compatible with any format and applicable to a list of language tasks, such as text classification, commonsense reasoning, coreference resolution, sentiment analysis.
Our approach converts zero-shot learning into multiple choice tasks,
avoiding problems in commonly used large generative models such as FLAN. It not only adds generalization ability to the models, but also reduces the needs of parameters significantly. We demonstrate that this approach leads to state-of-the-art performance on common language benchmarks, and produces satisfactory results on tasks such as natural language inference and text classification. For more details, please refer to our [paper](https://arxiv.org/abs/2210.08590) or [github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
### ไธๆธธๆๆ Performance
**Few-shot**
| Model | eprstmt | csldcp | tnews | iflytek | ocnli | bustm | chid | csl | wsc | Avg |
|------------|------------|----------|-----------|----------|-----------|-----------|-----------|----------|-----------|-----------|
| [FineTuning](https://arxiv.org/pdf/2107.07498.pdf)-RoBERTa-110M | 65.4 | 35.5 | 49 | 32.8 | 33 | 60.7 | 14.9 | 50 | 55.6 | 44.1 |
| [FineTuning](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 66.5 | 57 | 516 | 42.1 | 32 | 60.4 | 15 | 60.1 | 50.3 | 48.34 |
| [PET](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 84 | 59.9 | 56.4 | 50.3 | 38.1 | 58.4 | 40.6 | 61.1 | 58.7 | 56.39 |
| [P-tuning](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 80.6 | 56.6 | 55.9 | 52.6 | 35.7 | 60.8 | 39.61 | 51.8 | 55.7 | 54.37 |
| [EFL](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 76.7 | 47.9 | 56.3 | 52.1 | 48.7 | 54.6 | 30.3 | 52.8 | 52.3 | 52.7 |
| [UniMC-RoBERTa-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) | 88.64 | 54.08 | 54.32 | 48.6 | 66.55 | 73.76 | 67.71 | 52.54 | 59.92 | 62.86 |
| [UniMC-RoBERTa-330M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) | 89.53 | 57.3 | 54.25 | 50 | 70.59 | 77.49 | 78.09 | 55.73 | 65.16 | 66.46 |
| [UniMC-MegatronBERT-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | **89.278** | **60.9** | **57.46** | 52.89 | **76.33** | **80.37** | **90.33** | 61.73 | **79.15** | **72.05** |
**Zero-shot**
| Model | eprstmt | csldcp | tnews | iflytek | ocnli | bustm | chid | csl | wsc | Avg |
|---------------|-----------|-----------|-----------|-----------|-----------|----------|----------|----------|-----------|-----------|
| [GPT](https://arxiv.org/pdf/2107.07498.pdf)-110M | 57.5 | 26.2 | 37 | 19 | 34.4 | 50 | 65.6 | 50.1 | 50.3 | 43.4 |
| [PET](https://arxiv.org/pdf/2107.07498.pdf)-RoBERTa-110M | 85.2 | 12.6 | 26.1 | 26.6 | 40.3 | 50.6 | 57.6 | 52.2 | 54.7 | 45.1 |
| [NSP-BERT](https://arxiv.org/abs/2109.03564)-110M | 86.9 | 47.6 | 51 | 41.6 | 37.4 | 63.4 | 52 | **64.4** | 59.4 | 55.96 |
| [ZeroPrompt](https://arxiv.org/abs/2201.06910)-T5-1.5B | - | - | - | 16.14 | 46.16 | - | - | - | 47.98 | - |
| [Yuan1.0-13B](https://arxiv.org/abs/2110.04725) | 88.13 | 38.99 | 57.47 | 38.82 | 48.13 | 59.38 | 86.14 | 50 | 38.99 | 56.22 |
| [ERNIE3.0-240B](https://arxiv.org/abs/2107.02137) | 88.75 | **50.97** | **57.83** | **40.42** | 53.57 | 64.38 | 87.13 | 56.25 | 53.46 | 61.41 |
| [UniMC-RoBERTa-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) | 86.16 | 31.26 | 46.61 | 26.54 | 66.91 | 73.34 | 66.68 | 50.09 | 53.66 | 55.7 |
| [UniMC-RoBERTa-330M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) | 87.5 | 30.4 | 47.6 | 31.5 | 69.9 | 75.9 | 78.17 | 49.5 | 60.55 | 59.01 |
| [UniMC-MegatronBERT-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | **88.79** | 42.06 | 55.21 | 33.93 | **75.57** | **79.5** | **89.4** | 50.25 | **66.67** | **64.53** |
**Full dataset**
| Model | AFQMC | TNEWS1.1 | IFLYTEK | OCNLI | CMNLI | WSC1.1 | CSL | CHID | C3 |
|--------------------------------------------|-------|----------|---------|-------|-------|--------|-------|-------|-------|
| RoBERTa-Base | 74.06 | 57.5 | 60.36 | 74.3 | 79.73 | 83.48 | 85.37 | - | - |
| RoBERTa-Large | 74.88 | 58.79 | 61.52 | 77.7 | 81.4 | 89.14 | 86 | - | - |
| [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B) ใFinetuningใ | 76.08 | 59.38 | 62.34 | 79.14 | 81 | 92.43 | 87.2 | 84.65 | 86.77 |
| [Erlangshen-UniMC-MegatronBERT-1.3B-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | 77.09 | 60.4 | 62.67 | 83.05 | 84.76 | 93.74 | 87.67 | 85.93 | 86.54 |
## ไฝฟ็จ Usage
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
cd Fengshenbang-LM
pip install --editable .
```
```python3
import argparse
from fengshen.pipelines.multiplechoice import UniMCPipelines
total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UniMCPipelines.piplines_args(total_parser)
args = total_parser.parse_args()
pretrained_model_path = 'IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese'
args.learning_rate=2e-5
args.max_length=512
args.max_epochs=3
args.batchsize=8
args.default_root_dir='./'
model = UniMCPipelines(args,pretrained_model_path)
train_data = []
dev_data = []
test_data = [
{"texta": "ๆพๅผไบ้่งLๅ่ฃๅจRX5๏ผๆๆญๅ
ฅๆ่ฟ้จ่ฝฆ๏ผๅค่ง้ธๆฐๅๅฅฝๅผ",
"textb": "",
"question": "ไธ้ขๆฐ้ปๅฑไบๅชไธไธช็ฑปๅซ๏ผ",
"choice": [
"ๆฟไบง",
"ๆฑฝ่ฝฆ",
"ๆ่ฒ",
"็งๆ"
],
"answer": "ๆฑฝ่ฝฆ",
"label": 1,
"id": 7759}
]
if args.train:
model.train(train_data, dev_data)
result = model.predict(test_data)
for line in result[:20]:
print(line)
```
## ๅผ็จ Citation
ๅฆๆๆจๅจๆจ็ๅทฅไฝไธญไฝฟ็จไบๆไปฌ็ๆจกๅ๏ผๅฏไปฅๅผ็จๆไปฌ็[่ฎบๆ](https://arxiv.org/abs/2210.08590)๏ผ
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2210.08590):
```text
@article{unimc,
author = {Ping Yang and
Junjie Wang and
Ruyi Gan and
Xinyu Zhu and
Lin Zhang and
Ziwei Wu and
Xinyu Gao and
Jiaxing Zhang and
Tetsuya Sakai},
title = {Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective},
journal = {CoRR},
volume = {abs/2210.08590},
year = {2022}
}
```
ไนๅฏไปฅๅผ็จๆไปฌ็[็ฝ็ซ](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
[
"TEXT_CLASSIFICATION",
"COREFERENCE_RESOLUTION"
] |
Non_BioNLP
|
gokuls/hBERTv2_new_no_pretrain_mrpc
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,335,674,000 | 2023-06-14T13:28:49 | 8 | 0 |
---
datasets:
- glue
language:
- en
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: hBERTv2_new_no_pretrain_mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- type: accuracy
value: 0.6838235294117647
name: Accuracy
- type: f1
value: 0.7895595432300163
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_mrpc
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5914
- Accuracy: 0.6838
- F1: 0.7896
- Combined Score: 0.7367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6685 | 1.0 | 29 | 0.6107 | 0.6838 | 0.8122 | 0.7480 |
| 0.6337 | 2.0 | 58 | 0.5914 | 0.6838 | 0.7896 | 0.7367 |
| 0.529 | 3.0 | 87 | 0.6385 | 0.6642 | 0.7705 | 0.7174 |
| 0.4182 | 4.0 | 116 | 0.6619 | 0.6985 | 0.8051 | 0.7518 |
| 0.3095 | 5.0 | 145 | 1.0040 | 0.6471 | 0.7568 | 0.7019 |
| 0.2219 | 6.0 | 174 | 0.9458 | 0.6225 | 0.7094 | 0.6660 |
| 0.1813 | 7.0 | 203 | 1.1249 | 0.6838 | 0.7868 | 0.7353 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
prithivMLmods/Sombrero-QwQ-32B-Elite9
|
prithivMLmods
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"code",
"StreamlinedMemory",
"General-purpose",
"QwQ",
"conversational",
"en",
"base_model:Qwen/QwQ-32B",
"base_model:finetune:Qwen/QwQ-32B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,741,230,830,000 | 2025-03-10T13:59:30 | 450 | 5 |
---
base_model:
- Qwen/QwQ-32B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
- code
- StreamlinedMemory
- General-purpose
- QwQ
---

# **Sombrero-QwQ-32B-Elite9**
> Sombrero-QwQ-32B-Elite9 is a general-purpose reasoning experimental model based on the QwQ 32B architecture by Qwen. It is optimized for Streamlined Memory utilization, reducing unnecessary textual token coding while excelling in explanatory reasoning, mathematical problem-solving, and logical deduction. This model is particularly well-suited for coding applications and structured problem-solving tasks.
## **Key Improvements**
1. **Streamlined Memory Optimization**: Efficient memory usage that minimizes redundant tokenization, leading to faster and more accurate processing.
2. **Enhanced Logical Reasoning**: Superior multi-step reasoning capabilities, making it effective in structured problem-solving scenarios.
3. **Mathematical and Analytical Proficiency**: Excels in solving complex mathematical and analytical problems with precision.
4. **Advanced Coding Capabilities**: Optimized for generating, debugging, and explaining code efficiently across various programming languages.
5. **Long-Context Processing**: Supports up to 256K tokens for input context and can generate up to 16K tokens in a single output, enhancing its ability to maintain coherence in extended interactions.
6. **Reduced Token Overhead**: Avoids unnecessary textual token redundancy, leading to more efficient and meaningful responses.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Sombrero-QwQ-32B-Elite9"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain the fundamentals of recursive algorithms."
messages = [
{"role": "system", "content": "You are a highly capable coding assistant specializing in structured explanations."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## **Intended Use**
1. **Advanced Coding Support**:
Designed to assist programmers in writing, debugging, and optimizing code efficiently.
2. **Mathematical and Logical Problem Solving**:
Ideal for computational problem-solving, algorithmic reasoning, and technical explanations.
3. **Explanatory AI and Technical Writing**:
Provides structured and detailed explanations on technical topics.
4. **Long-Form Contextual Analysis**:
Capable of handling extensive textual content, maintaining coherence across large text outputs.
5. **Efficient Research Assistance**:
Helps in research-oriented tasks, including summarization and data interpretation.
6. **Optimized for AI-Assisted Development**:
Enhances software development processes with structured recommendations and efficient problem-solving.
## **Limitations**
1. **High Computational Requirements**:
Requires high-memory GPUs or TPUs due to its 32B-parameter size and long-context capabilities.
2. **Potential Bias in Outputs**:
While optimized for neutrality, responses may still reflect biases present in training data.
3. **Variable Performance in Creative Tasks**:
May produce inconsistent results in non-technical creative writing applications.
4. **Limited Real-Time Awareness**:
Does not have access to real-world events beyond its training data.
5. **Error Propagation in Extended Outputs**:
Small inaccuracies in early responses may impact long-form content quality.
6. **Prompt Sensitivity**:
The quality of responses depends on how well-structured the input prompt is.
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
harshitg1003/Bank_distil_bert_10K
|
harshitg1003
|
text-classification
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,734,951,637,000 | 2024-12-23T12:10:05 | 5 | 0 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Harshit Hemant Gupta
- **Model type:** DistilBERT-based text classification
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** distilbert-base-uncased
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model is intended to classify bank customer complaints into relevant categories directly. It can be integrated into customer support systems for automated categorization and faster resolution of issues.
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
This model can be used as a foundation for other text classification tasks in the financial services domain, such as customer sentiment analysis or fraud detection.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model may not perform well outside of its training data scope, particularly for text unrelated to financial complaints.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model's performance depends heavily on the quality and diversity of the training data. It might inherit biases present in the training dataset and may not generalize well for all demographic groups or complaint styles.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was fine-tuned on a dataset of labeled customer complaints. The data includes text from various financial domains, such as credit reporting and mortgage issues.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Preprocessing: Tokenized input text using the distilbert-base-uncased tokenizer.
Hyperparameters: Not specified in the provided content.
Framework: Hugging Face transformers library with PyTorch backend
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
Testing Data: Evaluation was conducted using a test dataset containing customer complaints similar to the training data.
Metrics: Metrics like accuracy, precision, recall, and F1-score were likely used to measure the model's performance.
Results: The specific performance metrics are not detailed in the provided content. Users should evaluate the model on their datasets to verify its effectiveness.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you use this model, please consider citing the Hugging Face repository.
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Helsinki-NLP/opus-mt-uk-ca
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T12:08:00 | 25 | 0 |
---
language:
- uk
- ca
license: apache-2.0
tags:
- translation
---
### ukr-cat
* source group: Ukrainian
* target group: Catalan
* OPUS readme: [ukr-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-cat/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.cat | 33.7 | 0.538 |
### System Info:
- hf_name: ukr-cat
- source_languages: ukr
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'ca']
- src_constituents: {'ukr'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.test.txt
- src_alpha3: ukr
- tgt_alpha3: cat
- short_pair: uk-ca
- chrF2_score: 0.5379999999999999
- bleu: 33.7
- brevity_penalty: 0.972
- ref_len: 2670.0
- src_name: Ukrainian
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: uk
- tgt_alpha2: ca
- prefer_old: False
- long_pair: ukr-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
[
"TRANSLATION"
] |
Non_BioNLP
|
Anxina/distilbert-base-uncased-finetuned-idiom
|
Anxina
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:magpie",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,702,887,717,000 | 2023-12-18T14:09:14 | 91 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- magpie
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-idiom
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: magpie
type: magpie
config: magpie
split: train
args: magpie
metrics:
- type: accuracy
value: 0.9291497975708503
name: Accuracy
- type: f1
value: 0.9296006713734785
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-idiom
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the magpie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2995
- Accuracy: 0.9291
- F1: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.3461 | 1.0 | 10002 | 0.2995 | 0.9291 | 0.9296 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
StefanJH/SEThisCourseShelter
|
StefanJH
| null |
[
"safetensors",
"gguf",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 1,732,899,863,000 | 2024-11-29T17:33:43 | 1 | 1 |
---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# Hugging Face Project for SE4CSAI - group: ThisCourseโข
Goal: Improvement and generation of pet descriptions for animal shelters to promote adoption.
# Project overview
The project incorporates several AI models, namely a translation model and an LLM as a text improvement model, in a web interface which allows the user to generate new descriptions from keywords, improve draft descriptions and translate their English descriptions to Dutch.
Each model is hosted on a separate server which communicates with the user through a Node.js server and a queue handler.
# More Information
For more information, see [here](https://github.com/SinglePaper/SEThisCourse/).
|
[
"TRANSLATION"
] |
Non_BioNLP
|
HoussemMammeri/BERT-V1
|
HoussemMammeri
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,687,436,702,000 | 2023-06-22T14:38:27 | 10 | 0 |
---
datasets:
- imdb
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: BERT-V1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.93568
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-V1
This model is a fine-tuned version of [robertsamoilescu/movie-sentiment-bert-base-uncased](https://huggingface.co/robertsamoilescu/movie-sentiment-bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3975
- Accuracy: 0.9357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0862 | 1.0 | 1563 | 0.2823 | 0.9331 |
| 0.0263 | 2.0 | 3126 | 0.3975 | 0.9357 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
microsoft/git-large-coco
|
microsoft
|
image-to-text
|
[
"transformers",
"pytorch",
"safetensors",
"git",
"image-text-to-text",
"vision",
"image-captioning",
"image-to-text",
"en",
"arxiv:2205.14100",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,672,656,261,000 | 2023-06-26T19:50:47 | 13,739 | 103 |
---
language: en
license: mit
model_name: microsoft/git-large-coco
pipeline_tag: image-to-text
tags:
- vision
- image-captioning
---
# GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO
GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text).
Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs.
The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens.
The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token.

This allows the model to be used for tasks like:
- image and video captioning
- visual question answering (VQA) on images and videos
- even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text).
## Intended uses & limitations
You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example).
## Training data
From the paper:
> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions
(CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016),
Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B
data following a similar collection procedure in Hu et al. (2021a).
=> however this is for the model referred to as "GIT" in the paper, which is not open-sourced.
This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs.
Next, the model was fine-tuned on COCO.
See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.
### Preprocessing
We refer to the original repo regarding details for preprocessing during training.
During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
## Evaluation results
For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
dominguesm/positive-reframing-ptbr
|
dominguesm
|
text2text-generation
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"seq2seq",
"positive_perspectives",
"pt",
"dataset:dominguesm/positive-reframing-ptbr-dataset",
"arxiv:2204.02952",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,666,286,807,000 | 2023-03-19T22:10:55 | 38 | 3 |
---
datasets:
- dominguesm/positive-reframing-ptbr-dataset
language: pt
license: cc-by-4.0
tags:
- seq2seq
- t5
- positive_perspectives
widget:
- text: '[''growth'', ''neutralizing'']: Sempre estressado e pensando em um monte
de coisas ao mesmo tempo, preciso levar uma de cada vez, sobrecarga estressada,
necessidade de reclamar'
- text: '[''growth'', ''neutralizing'', ''optimism'']: Se eu nรฃo tiver um colapso
mental antes do final do verรฃo, serรก um milagre.'
- text: '[''impermanence'']: Dirigindo para visitar a vovรณ no hospital e o meu filho
que estรก doente.'
- text: '[''optimism'']: รtimo agora, como vou explicar isso para ela, ela estรก tรฃo
perto de mim que nรฃo posso perdรช-la :''('
- text: '[''growth'', ''optimism'']: sempre hรก algo que eu poderia estar fazendo.
Eu geralmente escolho nรฃo fazer isso.'
---
# Positive Perspectives with Portuguese Text Reframing
## Model description
This model is a [PTT5](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab) adjusted to the sentiment transfer task, where the objective is to reverse the sentiment polarity of a text without contradicting the original meaning. Positive reframing induces a complementary positive viewpoint (e.g. glass-half-full) escaping negative patterns. Based on the article [arXiv:2204.02952](https://arxiv.org/abs/2204.02952).
## How to use
The model uses one or more sentiment strategies concatenated with a sentence and will generate a sentence with the applied sentiment output. The maximum string length is 1024 tokens. Entries must be organized in the following format:
```
"['thankfulness', 'optimism']: Tenho tanta coisa para fazer antes de sair da cidade por uma semana no domingo."
```
### Available sentiment strategies:
**growth**: viewing a challenging event as an opportunity for the author to specifically grow or improve himself.
**impermanence**: Saying that bad things don't last forever, will get better soon, and/or that other people have had similar difficulties.
**neutralizing**: Replacing a negative word with a neutral word. For example, โThis was a terrible dayโ becomes โThis was a long dayโ.
**optimism**: Focusing on things about the situation itself, at that moment, that are good (not just predicting a better future).
**self_affirmation**: Talking about what strengths the author already has, or values he admires, such as love, courage, perseverance, etc.
**thankfulness**: Expressing gratitude or gratitude with keywords like appreciate, happy for it, grateful for, good thing, etc.
### Usage
```python
from transformers import pipeline
pipe = pipeline('summarization', "dominguesm/positive-reframing-ptbr")
text = "['thankfulness', 'optimism']: Tenho tanta coisa para fazer antes de sair da cidade por uma semana no domingo."
pipe(text, max_length=1024)
```
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
proxectonos/Nos_MT-OpenNMT-gl-es
|
proxectonos
| null |
[
"gl",
"license:mit",
"region:us"
] | 1,678,363,803,000 | 2024-05-20T20:39:52 | 0 | 0 |
---
language:
- gl
license: mit
metrics:
- bleu (Gold1): 82.6
- bleu (Gold2): 49.9
- bleu (Flores): 23.8
- bleu (Test-suite): 77.2
---
license: mit
---
**English text [here](https://huggingface.co/proxectonos/Nos_MT-OpenNMT-gl-es/blob/main/README_English.md)**
**Descriciรณn do Modelo**
Modelo feito con OpenNMT para o par galego-inglรฉs utilizando unha arquitectura transformer.
**Como traducir**
+ Abrir terminal bash
+ Instalar [Python 3.9](https://www.python.org/downloads/release/python-390/)
+ Instalar [Open NMT toolkit v.2.2](https://github.com/OpenNMT/OpenNMT-py)
+ Traducir un input_text utilizando o modelo NOS-MT-gl-es co seguinte comando:
```bash
onmt_translate -src input_textย -model NOS-MT-gl-es.pt --output ./output_file.txt --replace_unk -gpu 0
```
+ O resultado da traduciรณn estarรก no PATH indicado no flag -output.
**Adestramento**
No adestramento, utilizamos cรณrpora autรฉnticos e sintรฉticos do [ProxectoNรณs](https://github.com/proxectonos/corpora). Os primeiros son cรณrpora de traduciรณns feitas directamente por tradutores humanos. Os segundos son cรณrpora de traduciรณns inglรฉs-portuguรฉs, que convertemos en inglรฉs-galego a travรฉs da traduciรณn automรกtica portuguรฉs-galego con Opentrad/Apertium e transliteraciรณn para palabras fรณra de vocabulario.
**Procedemento de adestramento / Training process**
+ Tokenizaciรณn dos datasets feita co tokenizador (tokenizer.pl) de [linguakit](https://github.com/citiususc/Linguakit) que foi modificado para evitar o salto de liรฑa por token do ficheiro orixinal.
+ O vocabulario BPE para os modelos foi xerado a travรฉs do script [learn_bpe.py](https://github.com/OpenNMT/OpenNMT-py/blob/master/tools/learn_bpe.py) da OpenNMT
+ Utilizando o .yaml deste repositorio pode replicar o proceso de adestramento. ร preciso modificar os paths do ficheiro .yaml para a Open NMT saber onde ir buscar os textos. Apรณs facer isto, pode do seguinte xeito comezar o proceso:
```bash
onmt_build_vocab -config bpe-gl-es_emb.yaml -n_sample 100000
onmt_train -config bpe-gl-es_emb.yaml
```
**Hiperparรกmetros**
Os parรกmetros usados para o desenvolvemento do modelo poden ser consultados directamente no mesmo ficheiro .yaml bpe-gl-es_emb.yaml
**Avaliaciรณn**
A avalaciรณn BLEU dos modelos รฉ feita cunha mistura de tests desenvolvidos internamente (gold1, gold2, test-suite) con outros datasets disponรญbeis en galego (Flores).
| GOLD 1 | GOLD 2 | FLORES | TEST-SUITE|
| ------------- |:-------------:| -------:|----------:|
| 82.6 | 49.9 | 23.8 | 77.2 |
**Licenzas do Modelo**
MIT License
Copyright (c) 2023 Proxecto Nรณs
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
**Financiamento**
Esta investigaciรณn foi financiada polo proxecto "Nรณs: o galego na sociedade e economรญa da intelixencia artificial", resultado dun acordo entre a Xunta de Galicia e a Universidade de Santiago de Compostela, o que resultou no subsidio ED431G2019/04 da Consellarรญa de Educaciรณn, Universidade e Formaciรณn Profesional da Galiza, e polo Fondo Europeo de Desenvolvemento Rexional (programa ERDF/FEDER), e Grupos de Referencia: ED431C 2020/21.
**Citar este traballo**
Se utilizar este modelo no seu traballo, cite por favor asรญ:
Daniel Bardanca Outeirinho, Pablo Gamallo Otero, Iria de-Dios-Flores, and Josรฉ Ramom Pichel Campos. 2024.
Exploring the effects of vocabulary size in neural machine translation: Galician as a target language.
In Proceedings of the 16th International Conference on Computational Processing of Portuguese, pages 600โ604,
Santiago de Compostela, Galiza. Association for Computational Lingustics.
|
[
"TRANSLATION"
] |
Non_BioNLP
|
covalenthq/cryptoNER
|
covalenthq
|
token-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"NER",
"crypto",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,697,957,949,000 | 2024-02-17T15:02:52 | 510 | 12 |
---
base_model: xlm-roberta-base
license: mit
metrics:
- f1
pipeline_tag: token-classification
tags:
- generated_from_trainer
- NER
- crypto
widget:
- text: Didn't I tell you that that was a decent entry point on $PROPHET? If you are
in - congrats, Prophet is up 90% in the last 2 weeks and 50% up in the last week
alone
model-index:
- name: xlm-roberta-base-finetuned-ner-crypto
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cryptoNER
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0058
- F1: 0.9970
## Model description
This model is a fine-tuned version of xlm-roberta-base, specializing in Named Entity Recognition (NER) within the cryptocurrency domain. It is optimized to recognize and classify entities such as cryptocurrency TICKER SYMBOL, NAME, and blockscanner ADDRESS within text.
## Intended uses
Designed primarily for NER tasks in the cryptocurrency sector, this model excels in identifying and categorizing ticker symbol, token name, and blockscanner address in textual content.
## Limitations
Performance may be subpar when the model encounters entities outside its training data or infrequently occurring entities within the cryptocurrency domain. The model might also be susceptible to variations in entity presentation and context.
## Training and evaluation data
The model was trained using a diverse dataset, including artificially generated tweets and ERC20 token metadata fetched through the Covalent API (https://www.covalenthq.com/docs/unified-api/). GPT was employed to generate 500 synthetic tweets tailored for the cryptocurrency domain. The Covalent API was instrumental in obtaining a rich set of 20K+ unique ERC20 token metadata entries, enhancing the model's understanding and recognition of cryptocurrency entities.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0269 | 1.0 | 750 | 0.0080 | 0.9957 |
| 0.0049 | 2.0 | 1500 | 0.0074 | 0.9960 |
| 0.0042 | 3.0 | 2250 | 0.0074 | 0.9965 |
| 0.0034 | 4.0 | 3000 | 0.0058 | 0.9971 |
| 0.0028 | 5.0 | 3750 | 0.0059 | 0.9971 |
| 0.0024 | 6.0 | 4500 | 0.0058 | 0.9970 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"NAMED_ENTITY_RECOGNITION"
] |
Non_BioNLP
|
gokulsrinivasagan/distilbert_lda_50_v1_qqp
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_50_v1",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_50_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,732,640,846,000 | 2024-12-04T11:49:08 | 5 | 0 |
---
base_model: gokulsrinivasagan/distilbert_lda_50_v1
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert_lda_50_v1_qqp
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- type: accuracy
value: 0.8596339351966361
name: Accuracy
- type: f1
value: 0.8195720598988967
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_50_v1_qqp
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_50_v1](https://huggingface.co/gokulsrinivasagan/distilbert_lda_50_v1) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3144
- Accuracy: 0.8596
- F1: 0.8196
- Combined Score: 0.8396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.4097 | 1.0 | 1422 | 0.3464 | 0.8409 | 0.7811 | 0.8110 |
| 0.3001 | 2.0 | 2844 | 0.3144 | 0.8596 | 0.8196 | 0.8396 |
| 0.2371 | 3.0 | 4266 | 0.3187 | 0.8675 | 0.8278 | 0.8477 |
| 0.1845 | 4.0 | 5688 | 0.3464 | 0.8678 | 0.8117 | 0.8397 |
| 0.1427 | 5.0 | 7110 | 0.3925 | 0.8726 | 0.8199 | 0.8463 |
| 0.1113 | 6.0 | 8532 | 0.3813 | 0.8747 | 0.8305 | 0.8526 |
| 0.0887 | 7.0 | 9954 | 0.4306 | 0.8746 | 0.8354 | 0.8550 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
neulab/omnitab-large-finetuned-wtq
|
neulab
|
table-question-answering
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,666,745,764,000 | 2022-11-29T02:11:26 | 132 | 7 |
---
datasets:
- wikitablequestions
language: en
tags:
- tapex
- table-question-answering
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-finetuned-wtq` (based on BART architecture) is initialized with `neulab/omnitab-large` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-finetuned-wtq")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
BueormLLC/RAGPT
|
BueormLLC
|
text-generation
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:neural-bridge/rag-dataset-12000",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,721,747,217,000 | 2024-07-23T15:56:48 | 68 | 1 |
---
datasets:
- neural-bridge/rag-dataset-12000
language:
- en
license: mit
---
# RAGPT: Fine-tuned GPT-2 for Context-Based Question Answering
## Model Description
RAGPT is a fine-tuned version of GPT-2 small, specifically adapted for context-based question answering tasks. This model has been trained to generate relevant answers based on a given context and question, similar to a Retrieval-Augmented Generation (RAG) system.
### Key Features
- Based on the GPT-2 small architecture (124M parameters)
- Fine-tuned on the "neural-bridge/rag-dataset-12000" dataset from Hugging Face
- Capable of generating answers based on provided context and questions
- Suitable for various question-answering applications
## Training Data
The model was fine-tuned using the "neural-bridge/rag-dataset-12000" dataset, which contains:
- Context passages
- Questions related to the context
- Corresponding answers
## Fine-tuning Process
The fine-tuning process involved:
1. Loading the pre-trained GPT-2 small model
2. Preprocessing the dataset to combine context, question, and answer into a single text
3. Training the model to predict the next token given the context and question
### Hyperparameters
- Base model: GPT-2 small
- Number of training epochs: 3
- Batch size: 4
- Learning rate: Default AdamW optimizer settings
- Max sequence length: 512 tokens
## Usage
To use the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "BueormLLC/RAGPT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Prepare input
context = "Your context here"
question = "Your question here"
input_text = f"Contexto: {context}\nPregunta: {question}\nRespuesta:"
# Generate answer
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=150, num_return_sequences=1)
answer = tokenizer.decode(output[0], skip_special_tokens=True)
```
## Limitations
- The model's knowledge is limited to its training data and the base GPT-2 model.
- It may sometimes generate irrelevant or incorrect answers, especially for topics outside its training domain.
- The model does not have access to external information or real-time data.
## Ethical Considerations
Users should be aware that this model, like all language models, may reflect biases present in its training data. It should not be used as a sole source of information for critical decisions.
## Future Improvements
- Fine-tuning on a larger and more diverse dataset
- Experimenting with larger base models (e.g., GPT-2 medium or large)
- Implementing techniques to improve factual accuracy and reduce hallucinations
## Support us
- [Paypal](https://paypal.me/bueorm)
- [Patreon](https://patreon.com/bueorm)
### We appreciate your support, without you we could not do what we do.
## Citation
If you use this model in your research, please cite:
```
@misc{RAGPT,
author = {Bueorm},
title = {RAGPT: Fine-tuned GPT-2 for Context-Based Question Answering},
year = {2024},
publisher = {GitHub},
journal = {None},
howpublished = {\url{https://huggingface.co/BueormLLC/RAGPT}}
}
```
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
ymoslem/ModernBERT-large-qe-v1
|
ymoslem
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"quality-estimation",
"regression",
"generated_from_trainer",
"multilingual",
"bn",
"cs",
"de",
"en",
"et",
"fi",
"fr",
"gu",
"ha",
"hi",
"is",
"ja",
"kk",
"km",
"lt",
"lv",
"pl",
"ps",
"ru",
"ta",
"tr",
"uk",
"xh",
"zh",
"zu",
"dataset:ymoslem/wmt-da-human-evaluation",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,737,016,571,000 | 2025-01-17T13:27:12 | 51 | 2 |
---
base_model: answerdotai/ModernBERT-large
datasets:
- ymoslem/wmt-da-human-evaluation
language:
- multilingual
- bn
- cs
- de
- en
- et
- fi
- fr
- gu
- ha
- hi
- is
- ja
- kk
- km
- lt
- lv
- pl
- ps
- ru
- ta
- tr
- uk
- xh
- zh
- zu
library_name: transformers
license: apache-2.0
metrics:
- pearsonr
- mae
- r_squared
tags:
- quality-estimation
- regression
- generated_from_trainer
model-index:
- name: Quality Estimation for Machine Translation
results:
- task:
type: regression
dataset:
name: ymoslem/wmt-da-human-evaluation
type: QE
metrics:
- type: Pearson
value: 0.4458
name: Pearson Correlation
- type: MAE
value: 0.1876
name: Mean Absolute Error
- type: RMSE
value: 0.2393
name: Root Mean Squared Error
- type: R2
value: 0.1987
name: R-Squared
---
# Quality Estimation for Machine Translation
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large)
on the [ymoslem/wmt-da-human-evaluation](https://huggingface.co/ymoslem/wmt-da-human-evaluation) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0564
## Model description
This model is for reference-free quality estimation (QE) of machine translation (MT) systems.
## Training procedure
### Training hyperparameters
This model uses the full maximum length of the tokenizer, which is 8192.
The version with 512 maximum length can be found here [ymoslem/ModernBERT-large-qe-maxlen512-v1](https://huggingface.co/ymoslem/ModernBERT-large-qe-maxlen512-v1)
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.0631 | 0.1004 | 1000 | 0.0674 |
| 0.0614 | 0.2007 | 2000 | 0.0599 |
| 0.0578 | 0.3011 | 3000 | 0.0585 |
| 0.0585 | 0.4015 | 4000 | 0.0579 |
| 0.0568 | 0.5019 | 5000 | 0.0570 |
| 0.057 | 0.6022 | 6000 | 0.0568 |
| 0.0579 | 0.7026 | 7000 | 0.0567 |
| 0.0573 | 0.8030 | 8000 | 0.0565 |
| 0.0568 | 0.9033 | 9000 | 0.0564 |
| 0.0571 | 1.0037 | 10000 | 0.0564 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.4.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
## Inference
1. Install the required libraries.
```bash
pip3 install --upgrade datasets accelerate transformers
pip3 install --upgrade flash_attn triton
```
2. Load the test dataset.
```python
from datasets import load_dataset
test_dataset = load_dataset("ymoslem/wmt-da-human-evaluation",
split="test",
trust_remote_code=True
)
print(test_dataset)
```
3. Load the model and tokenizer:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load the fine-tuned model and tokenizer
model_name = "ymoslem/ModernBERT-large-qe-v1"
model = AutoModelForSequenceClassification.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Move model to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()
```
4. Prepare the dataset. Each source segment `src` and target segment `tgt` are separated by the `sep_token`, which is `'</s>'` for ModernBERT.
```python
sep_token = tokenizer.sep_token
input_test_texts = [f"{src} {sep_token} {tgt}" for src, tgt in zip(test_dataset["src"], test_dataset["mt"])]
```
5. Generate predictions.
If you print `model.config.problem_type`, the output is `regression`.
Still, you can use the "text-classification" pipeline as follows (cf. [pipeline documentation](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TextClassificationPipeline)):
```python
from transformers import pipeline
classifier = pipeline("text-classification",
model=model_name,
tokenizer=tokenizer,
device=0,
)
predictions = classifier(input_test_texts,
batch_size=128,
truncation=True,
padding="max_length",
max_length=tokenizer.model_max_length,
)
predictions = [prediction["score"] for prediction in predictions]
```
Alternatively, you can use an elaborate version of the code, which is slightly faster and provides more control.
```python
from torch.utils.data import DataLoader
import torch
from tqdm.auto import tqdm
# Tokenization function
def process_batch(batch, tokenizer, device):
sep_token = tokenizer.sep_token
input_texts = [f"{src} {sep_token} {tgt}" for src, tgt in zip(batch["src"], batch["mt"])]
tokens = tokenizer(input_texts,
truncation=True,
padding="max_length",
max_length=tokenizer.model_max_length,
return_tensors="pt",
).to(device)
return tokens
# Create a DataLoader for batching
test_dataloader = DataLoader(test_dataset,
batch_size=128, # Adjust batch size as needed
shuffle=False)
# List to store all predictions
predictions = []
with torch.no_grad():
for batch in tqdm(test_dataloader, desc="Inference Progress", unit="batch"):
tokens = process_batch(batch, tokenizer, device)
# Forward pass: Generate model's logits
outputs = model(**tokens)
# Get logits (predictions)
logits = outputs.logits
# Extract the regression predicted values
batch_predictions = logits.squeeze()
# Extend the list with the predictions
predictions.extend(batch_predictions.tolist())
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
AdamCodd/distilroberta-squad2
|
AdamCodd
|
question-answering
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"roberta",
"question-answering",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] | 1,697,645,003,000 | 2023-11-10T17:38:10 | 20 | 1 |
---
datasets:
- squad_v2
metrics:
- f1
- exact_match
---
## Distilroberta-squad2
This model is [Distilroberta base](https://huggingface.co/distilroberta-base) which was fine-tuned for context-based question answering on the [SQuAD v2](https://huggingface.co/datasets/squad_v2) dataset, a dataset of English-language context-question-answer triples designed for extractive question answering training and benchmarking. Version 2 of SQuAD (Stanford Question Answering Dataset) contains the 100,000 examples from SQuAD Version 1.1, along with 50,000 additional "unanswerable" questions, i.e. questions whose answer cannot be found in the provided context.
## Model description
This fine-tuned model prioritizes inference speed; DistilRoBERTa operates at a pace twice as fast as the RoBERTa-base model, with only a marginal compromise in quality.
## Intended uses & limitations
```python
from transformers import pipeline
QA_pipeline = pipeline("question-answering", model="AdamCodd/distilroberta-squad2", handle_impossible_answer=True)
input = {
'question': "Which name is also used to describe the Amazon rainforest in English?",
'context': '''The Amazon rainforest (Portuguese: Floresta Amazรดnica or Amazรดnia; Spanish: Selva Amazรณnica, Amazonรญa or usually Amazonia; French: Forรชt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'''
}
response = QA_pipeline(**input)
print(response)
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- mixed_precision = "fp16"
- max_seq_len = 384
- doc_stride = 128
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 3
### Training results
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
Results:
```
'exact': 72.9470226564474,
'f1': 76.03522762032603,
'total': 11873,
'HasAns_exact': 72.4527665317139,
'HasAns_f1': 78.63803264779528,
'HasAns_total': 5928,
'NoAns_exact': 73.43986543313709,
'NoAns_f1': 73.43986543313709,
'NoAns_total': 5945,
'best_exact': 72.95544512760044,
'best_exact_thresh': 0.0,
'best_f1': 76.04365009147917,
'best_f1_thresh': 0.0
```
### Framework versions
- Transformers 4.34.0
- Torch 2.0.1
- Accelerate 0.23.0
- Tokenizers 0.14.1
If you want to support me, you can [here](https://ko-fi.com/adamcodd).
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect
|
joshuasundance
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | 1,706,566,651,000 | 2024-01-29T23:47:08 | 102 | 6 |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: camera:It has no camera but, I can always buy and install one easy.
- text: Acer:Acer was no help and Garmin could not determine the problem(after spending
about 2 hours with me), so I returned it and purchased a Toshiba R700 that seems
even nicer and I was able to load all of my software with no problem.
- text: memory:I've been impressed with the battery life and the performance for such
a small amount of memory.
- text: speed:Yes, a Mac is much more money than the average laptop out there, but
there is no comparison in style, speed and just cool factor.
- text: fiance:I got it back and my built-in webcam and built-in mic were shorting
out anytime I touched the lid, (mind you this was my means of communication with
my fiance who was deployed) but I suffered thru it and would constandly have to
reset the computer to be able to use my cam and mic anytime they went out.
inference: false
model-index:
- name: SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tomaarsen/setfit-absa-semeval-laptops
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8239700374531835
name: Accuracy
---
# SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect](https://huggingface.co/joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect)
- **SetFitABSA Polarity Model:** [joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity](https://huggingface.co/joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity)
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [tomaarsen/setfit-absa-semeval-laptops](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-laptops) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aspect | <ul><li>'cord:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'battery life:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'service center:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li></ul> |
| no aspect | <ul><li>'night:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'skip:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'exchange:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8240 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the ๐ค Hub
model = AbsaModel.from_pretrained(
"joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect",
"joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity",
spacy_model="en_core_web_sm",
)
# Run inference
preds = model("This laptop meets every expectation and Windows 7 is great!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 21.1510 | 42 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 119 |
| aspect | 126 |
### Training Hyperparameters
- batch_size: (128, 128)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:-------:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.3776 | - |
| 0.2110 | 50 | 0.2644 | 0.2622 |
| 0.4219 | 100 | 0.2248 | 0.2437 |
| **0.6329** | **150** | **0.0059** | **0.2238** |
| 0.8439 | 200 | 0.0017 | 0.2326 |
| 1.0549 | 250 | 0.0012 | 0.2382 |
| 1.2658 | 300 | 0.0008 | 0.2455 |
| 1.4768 | 350 | 0.0006 | 0.2328 |
| 1.6878 | 400 | 0.0005 | 0.243 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.7
- SetFit: 1.0.3
- Sentence Transformers: 2.3.0
- spaCy: 3.7.2
- Transformers: 4.37.2
- PyTorch: 2.1.2+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
YuukiAsuna/Vintern-1B-v2-ViTable-docvqa
|
YuukiAsuna
|
document-question-answering
|
[
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"document-question-answering",
"custom_code",
"vi",
"dataset:YuukiAsuna/VietnameseTableVQA",
"arxiv:2408.12480",
"base_model:5CD-AI/Vintern-1B-v2",
"base_model:finetune:5CD-AI/Vintern-1B-v2",
"license:mit",
"region:us"
] | 1,731,854,138,000 | 2025-02-28T05:54:34 | 66 | 2 |
---
base_model:
- 5CD-AI/Vintern-1B-v2
datasets:
- YuukiAsuna/VietnameseTableVQA
language:
- vi
library_name: transformers
license: mit
pipeline_tag: document-question-answering
---
# Vintern-1B-v2-ViTable-docvqa
<p align="center">
<a href="https://drive.google.com/file/d/1MU8bgsAwaWWcTl9GN1gXJcSPUSQoyWXy/view?usp=sharing"><b>Report Link</b>๐๏ธ</a>
</p>
<!-- Provide a quick summary of what the model is/does. -->
Vintern-1B-v2-ViTable-docvqa is a fine-tuned version of the 5CD-AI/Vintern-1B-v2 multimodal model for the Vietnamese DocVQA (Table data)
## Benchmarks
<div align="center">
| Model | ANLS | Semantic Similarity | MLLM-as-judge (Gemini) |
|------------------------------|------------------------|------------------------|------------------------|
| Gemini 1.5 Flash | 0.35 | 0.56 | 0.40 |
| Vintern-1B-v2 | 0.04 | 0.45 | 0.50 |
| Vintern-1B-v2-ViTable-docvqa | **0.50** | **0.71** | **0.59** |
</div>
<!-- Code benchmark: to be written later -->
## Usage
Check out this [**๐ค HF Demo**](https://huggingface.co/spaces/YuukiAsuna/Vintern-1B-v2-ViTable-docvqa), or you can open it in Colab:
[](https://colab.research.google.com/drive/1ricMh4BxntoiXIT2CnQvAZjrGZTtx4gj?usp=sharing)
**Citation:**
```bibtex
@misc{doan2024vintern1befficientmultimodallarge,
title={Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese},
author={Khang T. Doan and Bao G. Huynh and Dung T. Hoang and Thuc D. Pham and Nhat H. Pham and Quan T. M. Nguyen and Bang Q. Vo and Suong N. Hoang},
year={2024},
eprint={2408.12480},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2408.12480},
}
```
|
[
"SEMANTIC_SIMILARITY"
] |
Non_BioNLP
|
ashaduzzaman/mt5-finetuned-amazon-reviews
|
ashaduzzaman
|
summarization
|
[
"tensorboard",
"safetensors",
"mt5",
"summarization",
"translation",
"text-generation",
"generated_from_trainer",
"dataset:mteb/amazon_reviews_multi",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"region:us"
] | 1,724,689,228,000 | 2024-08-26T17:03:48 | 20 | 0 |
---
base_model: google/mt5-small
datasets:
- mteb/amazon_reviews_multi
license: apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- summarization
- translation
- text-generation
- generated_from_trainer
model-index:
- name: mt5-finetuned-amazon-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
Here's a detailed model card for the `mt5-finetuned-amazon-reviews` model, incorporating the provided information and adding relevant details:
---
# mt5-finetuned-amazon-reviews
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) specifically trained to generate summaries of Amazon product reviews. It is designed to handle multilingual text summarization tasks, leveraging the capabilities of the mT5 (Multilingual T5) model.
## Model Details
- **Model Name:** mt5-finetuned-amazon-reviews
- **Base Model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Model Type:** Multilingual Transformer-based Text-to-Text Generation Model
- **Fine-tuned on:** Amazon product reviews (dataset specifics unknown)
### Model Description
The `mt5-finetuned-amazon-reviews` model utilizes the mT5 architecture, a variant of T5 that is pre-trained on a diverse multilingual dataset. This fine-tuned model targets the summarization of customer reviews on Amazon products, aiming to distill lengthy reviews into concise and informative summaries. It is optimized for multilingual settings, enabling it to handle reviews written in various languages.
### Intended Uses & Limitations
**Intended Uses:**
- Summarizing customer reviews from Amazon to provide quick insights into product feedback.
- Assisting e-commerce platforms in analyzing customer sentiment and satisfaction.
- Providing concise information for consumers to make informed purchasing decisions.
**Limitations:**
- The model may not perform well on non-Amazon or highly specialized reviews.
- Its zero scores on ROUGE metrics suggest potential issues with generating high-quality summaries, which could be due to limitations in the training data or training process.
- The model's performance on languages not sufficiently represented in the training data may be suboptimal.
### Usage
To use the model for summarization, you can utilize the following code snippet:
```python
from transformers import pipeline
hub_model_id = "ashaduzzaman/mt5-finetuned-amazon-reviews"
summarizer = pipeline("summarization", model=hub_model_id)
text = (
"Nothing special at all about this product... the book is too small and stiff and hard to write in. "
"The huge sticker on the back doesnโt come off and looks super tacky. I would not purchase this again. "
"I could have just bought a journal from the dollar store and it would be basically the same thing. "
"Itโs also really expensive for what it is."
)
summarizer(text)
```
### Training and Evaluation Data
The specific dataset used for fine-tuning this model is not provided, but it is implied to be related to Amazon product reviews. The absence of detailed information about the training dataset limits the understanding of the model's training scope and diversity.
### Evaluation Results
The model was evaluated using standard text summarization metrics, but the results indicate challenges in its performance:
- **Loss:** 4.2617
- **ROUGE-1:** 0.0
- **ROUGE-2:** 0.0
- **ROUGE-L:** 0.0
- **ROUGE-Lsum:** 0.0
These scores suggest that the model may have struggled to produce meaningful summaries or that the evaluation dataset was not aligned well with the training data. The zero ROUGE scores highlight a need for further investigation into the training process and data quality.
### Training Procedure
The model was fine-tuned using the following hyperparameters and configuration:
#### Training Hyperparameters
- **Learning Rate:** 5.6e-05
- **Training Batch Size:** 8
- **Evaluation Batch Size:** 8
- **Random Seed:** 42
- **Optimizer:** Adam (betas=(0.9, 0.999), epsilon=1e-08)
- **Learning Rate Scheduler:** Linear
- **Number of Epochs:** 3
#### Training Results
| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|
| 18.4463 | 1.0 | 379 | 8.5447 | 0.3663 | 0.0 | 0.3663 | 0.3663 |
| 9.359 | 2.0 | 758 | 5.0674 | 0.0 | 0.0 | 0.0 | 0.0 |
| 6.6153 | 3.0 | 1137 | 4.2617 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework Versions
- **Transformers:** 4.42.4
- **PyTorch:** 2.3.1+cu121
- **Datasets:** 2.21.0
- **Tokenizers:** 0.19.1
### Ethical Considerations
- **Bias:** The model's summaries could reflect biases present in the training data, especially if the data is not balanced across different product categories or customer demographics.
- **Data Privacy:** Ensure that the use of the model complies with data privacy regulations, especially when using customer review data that may contain sensitive or personally identifiable information.
### Future Improvements
- Collecting a more comprehensive and representative training dataset could improve summarization quality.
- Further fine-tuning and experimenting with different hyperparameters might yield better performance.
- Incorporating more evaluation metrics and detailed qualitative analysis could provide deeper insights into the model's strengths and weaknesses.
|
[
"TRANSLATION",
"SUMMARIZATION"
] |
Non_BioNLP
|
beomi/gemma-ko-2b
|
beomi
|
text-generation
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"pytorch",
"ko",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,711,430,234,000 | 2024-03-26T13:48:24 | 3,032 | 43 |
---
language:
- ko
- en
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- pytorch
---
# Gemma-Ko
> Update @ 2024.03.26: First release of Gemma-Ko 2B model
**Original Gemma Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B base version of the **Gemma-Ko** model.
**Resources and Technical Documentation**:
* [Original Google's Gemma-2B](https://huggingface.co/google/gemma-2b)
* [Training Code @ Github: Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Citation**
```bibtex
@misc {gemma_ko_7b,
author = { {Junbum Lee, Taekyoon Choi} },
title = { gemma-ko-7b },
year = 2024,
url = { https://huggingface.co/beomi/gemma-ko-7b },
doi = { 10.57967/hf/1859 },
publisher = { Hugging Face }
}
```
**Model Developers**: Junbum Lee (Beomi) & Taekyoon Choi (Taekyoon)
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-ko-2b")
model = AutoModelForCausalLM.from_pretrained("beomi/gemma-ko-2b")
input_text = "๋จธ์ ๋ฌ๋๊ณผ ๋ฅ๋ฌ๋์ ์ฐจ์ด๋"
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-ko-2b")
model = AutoModelForCausalLM.from_pretrained("beomi/gemma-ko-2b", device_map="auto")
input_text = "๋จธ์ ๋ฌ๋๊ณผ ๋ฅ๋ฌ๋์ ์ฐจ์ด๋"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
"beomi/gemma-ko-2b",
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated Korean/English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Implementation Information
Details about the model internals.
### Software
Training was done using [beomi/Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM).
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
TBD
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
## Acknowledgement
The training is supported by [TPU Research Cloud](https://sites.research.google/trc/) program.
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] |
Non_BioNLP
|
Huy227/gte-base-vi-v1.5
|
Huy227
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"custom_code",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,728,007,332,000 | 2024-10-04T02:06:05 | 8 | 0 |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
LucaZilli/experiment_arctic-l-wopenai-enhanced_20250317_135036
|
LucaZilli
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,742,240,648,000 | 2025-03-17T19:44:41 | 3 | 0 |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("LucaZilli/experiment_arctic-l-wopenai-enhanced_20250317_135036")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.2
- Accelerate:
- Datasets:
- Tokenizers: 0.21.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
numanBot/customer_feedback_summarization
|
numanBot
|
text2text-generation
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,687,889,026,000 | 2023-06-27T18:11:20 | 6 | 0 |
---
{}
---
from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = TFAutoModelForSeq2SeqLM("numanBot/customer_feedback_summarization")
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
Joseph717171/BigOrca-2-12B
|
Joseph717171
|
text-generation
|
[
"safetensors",
"orca",
"orca2",
"microsoft",
"text-generation",
"arxiv:2311.11045",
"license:other",
"region:us"
] | 1,710,471,448,000 | 2024-03-20T22:22:51 | 0 | 0 |
---
license: other
license_name: microsoft-research-license
license_link: LICENSE
pipeline_tag: text-generation
tags:
- orca
- orca2
- microsoft
---
Inspired by [AbucusAI's BigYi-15b](https://huggingface.co/abacusai/bigyi-15b)...
This is [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b) with layers interleaved to create a larger 12b model.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 8]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [4, 12]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [8, 16]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [12, 20]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [16, 24]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [20, 28]
model: microsoft/Orca-2-7b
- sources:
- layer_range: [24, 32]
model: microsoft/Orca-2-7b
```
# Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reasoning.
Note that:
1. This is a research model, intended to show that we can use capable models and complex workflows (advanced prompts, multiple calls) to create synthetic data that can teach Small Language Models (SLMs) new capabilities. We chose reasoning because it is a widely useful capability that SLMs lack.
2. The model is not optimized for chat and has not been trained with RLHF or DPO. It is best used after being finetuned for chat or for a specific task.
3. Beyond reasoning, the model inherits capabilities and limitations of its base (LLAMA-2 base). We have already seen that the benefits of the Orca training can be applied to other base model too.
We make Orca 2's weights publicly available to support further research on the development, evaluation, and alignment of SLMs.
## What is Orca 2โs intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2โs training data is a synthetic dataset that was created to enhance the small modelโs reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright ยฉ Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models or limitation caused by its training
process, including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as โblack boxesโ, making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in future. We value and acknowledge the important role that research
and open source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2โs performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings.
While the model demonstrate very strong performance in zero-shot settings, it does not show
the same gains of using few-shot learning compared to other, specially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
torch.set_default_device("cuda")
else:
torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-7b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since fast and slow tokenizer produces different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/Orca-2-7b",
use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we dont want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
```
**Safe inference with Azure AI Content Safety**
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
and can help preventing some of content harms. Azure AI Content Safety is a content moderation platform
that uses AI to moderate content. By having Azure AI Content Safety on the output of Orca 2,
the model output can be moderated by scanning it for different harm categories including sexual content, violence, hate, and
self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
# We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
# Create an Content Safety client
client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))
# Construct a request
request = AnalyzeTextOptions(text=input_text)
# Analyze text
try:
response = client.analyze_text(request)
except HttpResponseError as e:
print("Analyze text failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
max_score = -math.inf
for category in categories:
max_score = max(max_score, getattr(response, category).severity)
return max_score >= threshold
model_path = 'microsoft/Orca-2-7b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=4096,
padding_side="right",
use_fast=False,
add_special_tokens=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. โ/ \" \nIs the comment obscene? \nOptions : Yes, No."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
Helsinki-NLP/opus-mt-ja-de
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ja",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:59:07 | 458 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-ja-de
* source languages: ja
* target languages: de
* OPUS readme: [ja-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ja.de | 30.1 | 0.518 |
|
[
"TRANSLATION"
] |
Non_BioNLP
|
google/t5-small-ssm
|
google
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"arxiv:2002.08909",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-01-24T16:52:27 | 180 | 2 |
---
datasets:
- c4
- wikipedia
language: en
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4) and subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia).
**Note**: This model should be fine-tuned on a question answering downstream task before it is useable for closed book question answering.
Other Community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw4.8
|
blockblockblock
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"Mistral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"arxiv:2312.15166",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:quantized:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | 1,711,931,156,000 | 2024-04-01T00:28:38 | 5 | 0 |
---
base_model: mistralai/Mistral-7B-v0.1
datasets:
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
license: apache-2.0
tags:
- mergekit
- merge
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Mistral-10.7B
results: []
---
# Credit for the model card's description goes to ddh0, mergekit, and NousResearch
# Hermes-2-Pro-Mistral-10.7B
This is Hermes-2-Pro-Mistral-10.7B, a depth-upscaled version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B).
This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7 billion parameter model.
Paper detailing how Depth-Up Scaling works: [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* /Users/jsarnecki/opt/workspace/NousResearch/Hermes-2-Pro-Mistral-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
# Depth UpScaled (DUS) version of Mistral-7B-Instruct-v0.2
# where m = 8 (The number of layers to remove from the model)
# s = 48 (The number of layers the model will have after the DUS)
slices:
- sources:
- layer_range: [0, 24]
model: /Users/jsarnecki/opt/workspace/NousResearch/Hermes-2-Pro-Mistral-7B
- sources:
- layer_range: [8, 32]
model: /Users/jsarnecki/opt/workspace/NousResearch/Hermes-2-Pro-Mistral-7B
```
# Hermes 2 Pro - Mistral 7B

## Model Description
Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes!
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Thank you to Latitude.sh for sponsoring compute for this model!
## Example Outputs
### Explaining Problems with Quantum Gravity:

### Roleplaying as a Cosmic Super Intelligence:

### Detailing the Theory of AI Consciousness in JSON

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.
# Benchmarks
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5461|ยฑ |0.0145|
| | |acc_norm|0.5623|ยฑ |0.0145|
|arc_easy | 0|acc |0.8157|ยฑ |0.0080|
| | |acc_norm|0.7934|ยฑ |0.0083|
|boolq | 1|acc |0.8688|ยฑ |0.0059|
|hellaswag | 0|acc |0.6272|ยฑ |0.0048|
| | |acc_norm|0.8057|ยฑ |0.0039|
|openbookqa | 0|acc |0.3360|ยฑ |0.0211|
| | |acc_norm|0.4300|ยฑ |0.0222|
|piqa | 0|acc |0.7954|ยฑ |0.0094|
| | |acc_norm|0.7998|ยฑ |0.0093|
|winogrande | 0|acc |0.7230|ยฑ |0.0126|
```
Average: 71.19
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2047|ยฑ |0.0254|
| | |acc_norm|0.2283|ยฑ |0.0264|
|agieval_logiqa_en | 0|acc |0.3779|ยฑ |0.0190|
| | |acc_norm|0.3932|ยฑ |0.0192|
|agieval_lsat_ar | 0|acc |0.2652|ยฑ |0.0292|
| | |acc_norm|0.2522|ยฑ |0.0287|
|agieval_lsat_lr | 0|acc |0.5216|ยฑ |0.0221|
| | |acc_norm|0.5137|ยฑ |0.0222|
|agieval_lsat_rc | 0|acc |0.5911|ยฑ |0.0300|
| | |acc_norm|0.5836|ยฑ |0.0301|
|agieval_sat_en | 0|acc |0.7427|ยฑ |0.0305|
| | |acc_norm|0.7184|ยฑ |0.0314|
|agieval_sat_en_without_passage| 0|acc |0.4612|ยฑ |0.0348|
| | |acc_norm|0.4466|ยฑ |0.0347|
|agieval_sat_math | 0|acc |0.3818|ยฑ |0.0328|
| | |acc_norm|0.3545|ยฑ |0.0323|
```
Average: 44.52
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|ยฑ |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6694|ยฑ |0.0245|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|ยฑ |0.0294|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|ยฑ |0.0214|
| | |exact_str_match |0.2256|ยฑ |0.0221|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|ยฑ |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|ยฑ |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|ยฑ |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|ยฑ |0.0215|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|ยฑ |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|ยฑ |0.0105|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4420|ยฑ |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|ยฑ |0.0142|
|bigbench_snarks | 0|multiple_choice_grade|0.6630|ยฑ |0.0352|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|ยฑ |0.0150|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|ยฑ |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|ยฑ |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|ยฑ |0.0091|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|ยฑ |0.0289|
```
Average: 41.65
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4100|ยฑ |0.0172|
| | |mc2 |0.5911|ยฑ |0.0158|
```
# Function Calling Evaluations
We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode.
## Function Calling Accuracy: 91%

## JSON Mode Accuracy: 84%

Run the evaluator yourself using @interstellarninja's codebase here:
https://github.com/interstellarninja/function-calling-eval
You can find the evaluation datasets here:
https://huggingface.co/datasets/NousResearch/func-calling-eval
https://huggingface.co/datasets/NousResearch/json-mode-eval
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MistralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Pro-Mistral-7B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF
# How to cite:
```bibtext
@misc{Hermes-2-Pro-Mistral-7B,
url={[https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B]https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)},
title={Hermes-2-Pro-Mistral-7B},
author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"}
}
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
airev-ai/Amal-70b
|
airev-ai
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,717,545,609,000 | 2024-06-08T10:30:19 | 69 | 2 |
---
license: other
license_name: airev-model-license
license_link: https://huggingface.co/airev-ai/Jais-70b/blob/main/LICENSE
---
# Jais-Inception-70b
The AI model developed collaboratively by Airev and Inception stands as a cutting-edge solution, meticulously trained on a comprehensive synthetic Arabic dataset. This model leverages advanced machine learning techniques to achieve remarkable proficiency in understanding and processing Arabic language inputs. Its training on synthetic data ensures a diverse and robust learning foundation, enabling it to handle various linguistic nuances and complexities inherent to Arabic. The combined expertise of Airev and Inception has resulted in a highly capable model, designed to excel in a multitude of applications, ranging from natural language processing and machine translation to speech recognition and text analysis. This innovation represents a significant advancement in Arabic language AI, offering unparalleled accuracy and performance.
<img src="https://res.cloudinary.com/dcugtdlab/image/upload/v1717842568/hzsmhhu6cbrjoh8yh9iy.jpg" width="600" />
## Evals
- arc: 70.1
- gsm8k: 87.1
- hellaswag: 87.3
- mmlu: 78.2
- truthfulqa: 54.2
- winogrande: 84.1
|
[
"TRANSLATION"
] |
Non_BioNLP
|
PrunaAI/iHateNLP-MBart-Urdu-Text-Summarization-bnb-8bit-smashed
|
PrunaAI
| null |
[
"safetensors",
"mbart",
"pruna-ai",
"base_model:iHateNLP/MBart-Urdu-Text-Summarization",
"base_model:quantized:iHateNLP/MBart-Urdu-Text-Summarization",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,736,079,084,000 | 2025-01-05T12:12:16 | 5 | 0 |
---
base_model: iHateNLP/MBart-Urdu-Text-Summarization
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
thumbnail: https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo iHateNLP/MBart-Urdu-Text-Summarization installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/iHateNLP-MBart-Urdu-Text-Summarization-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("iHateNLP/MBart-Urdu-Text-Summarization")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model iHateNLP/MBart-Urdu-Text-Summarization before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
luistakahashi/my-awesome-setfit-model
|
luistakahashi
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,691,529,920,000 | 2023-08-08T21:25:30 | 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# luistakahashi/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst ๐คฎ"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
aryankg/Fine_Tune_T5_Model_News_Summarization
|
aryankg
|
text2text-generation
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,716,522,851,000 | 2024-06-05T12:18:33 | 4 | 0 |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aryankg/Fine_Tune_T5_Model_News_Summarization
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aryankg/Fine_Tune_T5_Model_News_Summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8666
- Validation Loss: 1.6693
- Train Lr: 2e-05
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Lr | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 1.9252 | 1.6957 | 2e-05 | 0 |
| 1.8862 | 1.6828 | 2e-05 | 1 |
| 1.8698 | 1.6778 | 2e-05 | 2 |
| 1.8666 | 1.6693 | 2e-05 | 3 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
NLBSE/nlbse25_pharo
|
NLBSE
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-MiniLM-L3-v2",
"base_model:finetune:sentence-transformers/paraphrase-MiniLM-L3-v2",
"region:us"
] | 1,728,401,055,000 | 2024-10-08T15:24:23 | 50 | 0 |
---
base_model: sentence-transformers/paraphrase-MiniLM-L3-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget: []
inference: true
---
# SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) as the Sentence Transformer embedding model. A MultiOutputClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2)
- **Classification head:** a MultiOutputClassifier instance
- **Maximum Sequence Length:** 128 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("aalkaswan/nlbse25_pharo")
# Run inference
preds = model("I loved the spiderman movie!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
DFKI/glove-si-cc100
|
DFKI
| null |
[
"embeddings",
"glove",
"cc100",
"si",
"arxiv:2409.18193",
"license:cc-by-sa-4.0",
"region:us"
] | 1,727,997,355,000 | 2024-12-12T09:10:07 | 0 | 0 |
---
language: si
license: cc-by-sa-4.0
tags:
- embeddings
- glove
- cc100
---
# CC100 GloVe Embeddings for SI Language
## Model Description
- **Language:** si
- **Embedding Algorithm:** GloVe (Global Vectors for Word Representation)
- **Vocabulary Size:** 661565
- **Vector Dimensions:** 300
- **Training Data:** CC100 dataset
## Training Information
We trained GloVe embeddings using the original C code. The model was trained by stochastically sampling nonzero elements from the co-occurrence matrix, over 100 iterations, to produce 300-dimensional vectors. We used a context window of ten words to the left and ten words to the right. Words with fewer than 5 co-occurrences were excluded for languages with over 1 million tokens in the training data, and the threshold was set to 2 for languages with smaller datasets.
We used data from CC100 for training the static word embeddings. We set xmax = 100, ฮฑ = 3/4, and used AdaGrad optimization with an initial learning rate of 0.05.
## Usage
These embeddings can be used for various NLP tasks such as text classification, named entity recognition, and as input features for neural networks.
## Citation
If you use these embeddings in your research, please cite:
```bibtex
@misc{gurgurov2024gremlinrepositorygreenbaseline,
title={GrEmLIn: A Repository of Green Baseline Embeddings for 87 Low-Resource Languages Injected with Multilingual Graph Knowledge},
author={Daniil Gurgurov and Rishu Kumar and Simon Ostermann},
year={2024},
eprint={2409.18193},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2409.18193},
}
```
## License
These embeddings are released under the [CC-BY-SA 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).
|
[
"NAMED_ENTITY_RECOGNITION",
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
ura-hcmut/MixSUraV
|
ura-hcmut
|
image-text-to-text
|
[
"transformers",
"safetensors",
"llava_mixtral",
"text-generation",
"image-text-to-text",
"conversational",
"vi",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,710,088,864,000 | 2024-03-27T17:40:25 | 0 | 1 |
---
language:
- vi
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- llava_mixtral
extra_gated_prompt: Please read the Apache 2 license before accepting it.
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
I accept the Apache 2 License Agreement: checkbox
---
# MixSUraV: MixSUra x Vision
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
With a strong commitment to enhancing the quality of large language models for the Vietnamese language, a collaborative effort was undertaken by Vietnamese researchers hailing from Ho Chi Minh University of Technology (HCMUT) - Vietnam National University HCMC and Stanford University.
In line with our dedication to fostering community progress, we are pleased to offer our models free of charge for research purposes.
For those who wish to delve further into our research and its details, we encourage you to explore the comprehensive information provided below.
- **Developed by:**
- Duc Q. Nguyen
- Sang T. Truong
- Toan D. V. Nguyen
- Dong D. Le
- Nhi N. Truong
- Tho Quan
- Sanmi Koyejo
- **Funded by:**
- Microsoft Accelerating Foundation Models Research program
- Stanford University
- Ho Chi Minh University of Technology (HCMUT) - VNU-HCM
- DSciLab (Faculty of Computer Science & Engineering, HCMUT - VNU-HCM)
- **Model type:** Text generation
- **Languages:** Vietnamese, English
- **License:** Apache 2.0
- **Finetuned from model:** MixSUra
### Model Sources
We publicly provide starter source code for fine-tuning, evaluation adn deployment of our models.
- **Framework:** [LLaVA](https://github.com/martinakaduc/LLaVA)
- **Paper:** Comming soon
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use our models to perform various tasks containing
* Visual question answering
* Question answering (with context)
* Summarization
* Language modelling
* Text classification
* Translation
* Code generation
* Reasoning
### Downstream Use
This model can serve as an encoder for a wide range of downstream tasks, spanning from pure natural language processing to combinations of natural language processing with computer vision or speech processing.
### Out-of-Scope Use
While our models have undergone fine-tuning using extensive Vietnamese datasets, they may not perform optimally in specialized domains necessitating profound domain expertise, such as medicine, politics, chemistry, etc. We kindly request that you refrain from employing our models for political purposes or any endeavors that may cause harm to individuals or compromise the sovereignty and territorial integrity of Vietnam.
## Bias, Risks, and Limitations
Unless required by applicable law, the MixSUra materials and any output and results therefrom are provided on an "as is" basis, without warranties of any kind, either express or implied, including, without limitation, any warranties of title, non-infringement, merchantability, or fitness for a particular purpose. you are solely responsible for determining the appropriateness of using or redistributing the MixSUra materials and assume any risks associated with your use of the MixSUra materials and any output and results.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. In order for the model to work well, you may need perform prompt engineering to create appropriate prompts before inference.
## How to Get Started with the Model
Please use this repo to load the model. [Github](https://github.com/martinakaduc/LLaVA)
```bash
python -m llava.serve.cli \
--model-path ura-hcmut/MixSUraV \
--image-file "https://llava-vl.github.io/static/images/view.jpg"
```
## Finetuning Details
See [Github](https://github.com/martinakaduc/LLaVA).
## Evaluation
Our models are tested with various tasks. The detail of evaluation process is comming soon.
#### Summary
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 4 x A100 40GB
- **Hours used:** 450h
- **Carbon Emitted:** ~200 kg CO2 eq.
## Citation
If you use MixSUra materials in your research, please cite our model(s) as below.
**BibTeX:**
```plaintext
@inproceedings{crossing2024,
title = "Crossing Linguistic Horizons: Finetuning and Comprehensive Evaluation of Vietnamese Large Language Models",
author = "Truong, Sang T. and Nguyen, Duc Q. and Nguyen, Toan D. V. and Le, Dong D. and Truong, Nhi N. and Quan, Tho and Koyejo, Sanmi",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = June,
year = "2024",
address = "Seattle, Washington",
publisher = "Association for Computational Linguistics",
url = "",
pages = "",
}
```
## Model Card Authors
## Contact
* Mr. Duc Q. Nguyen: [email protected]
* Mr. Sang T. Truong: [email protected]
* Assoc. Prof. Tho Quan: [email protected]
|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] |
Non_BioNLP
|
chunwoolee0/marian-finetuned-kde4-en-to-ko
|
chunwoolee0
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-tc-big-en-ko",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,690,436,874,000 | 2023-07-27T06:46:22 | 14 | 0 |
---
base_model: Helsinki-NLP/opus-mt-tc-big-en-ko
datasets:
- kde4
license: cc-by-4.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-ko
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-ko
split: train
args: en-ko
metrics:
- type: bleu
value: 6.0084151979608835
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-ko
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1884
- Bleu: 6.0084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
[
"TRANSLATION"
] |
Non_BioNLP
|
mini1013/master_cate_bc30
|
mini1013
|
text-classification
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | 1,737,703,071,000 | 2025-01-24T07:18:16 | 516 | 0 |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ๊ทธ๋ก๋ฐ ์์ ๋ฒจํธ ์ธํ ์บ๋ฆญํฐ ์ฐจ๋ ๊ท์ฌ์ด G์คํ์ผ ์์ ๋ฒจํธ ๋ฉ๋ฉ์ด ์ธํ ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ > ์ฐจ๋์์ ๋ฒจํธ
- text: ๋ฉ๋ฅด์ธ๋ฐ์ค-๋ฒค์ธ GLK 4pcs ํจ๋ธ๋ฆญ ๋์ด ๋ณดํธ ๋งคํธ ์ํฐ ํฅ ์ฅ์ ํจ๋ 02 ๋นจ๊ฐ ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ > ์นด์ํธ๊ธฐํ์ฉํ
- text: ์ค๋งํธํค๋๋ฒจํธ ์ ์ ์์ ๋ฒจํธ ์ธํ ์ด๋ฑํ์ ์นด์ํธ ์๋ผ(blue) ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ > ์ฐจ๋์์ ๋ฒจํธ
- text: ์ ์ ์์ ๋ฒจํธ ์ฟ ์
์ธํ (์ด๋ฆฐ์ด ์ฐจ๋์ฉ ์ปค๋ฒ,์นด์ํธ) 03.(๋ฉด)ํ์ดํธ์ ๋์ฝ_S(3์ ๋ฒจํธ์นด์ํธ์ฉ์ถ์ฒ) ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ
> ์ฐจ๋์์ ๋ฒจํธ
- text: ์๋์ฐจ ์์ ๋ฒจํธ ์ปค๋ฒ์ธํ ํฌ๋ฆฌ์์ดํฐ๋ธ DIY ๋ชจ๋ธ ๊ท์ฌ์ด ๋๋ฌผ MOC ๋น๋ฉ ๋ธ๋ก ์ ์ ๋๋ฌผ์ ๊ฐ ์ค๋ฆฌ ๊ฑฐ๋ถ์ด ํญ๊ท ๊ณ ์์ด ๋ผ์ง ์ ํ ๋ผ
์ฅ๋๊ฐ 28 CN00070-A13 ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ > ์ฐจ๋์์ ๋ฒจํธ
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 1.0
name: Accuracy
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 3.0 | <ul><li>'์กฐ์ด ์์ดํธ๋๋ฒ ์๊ทธ๋์ฒ ์์ด์ฌ์ด์ฆ ์ฃผ๋์ด์นด์ํธ ์ถ์ฐ/์ก์ > ์นด์ํธ > ์ฃผ๋์ด์นด์ํธ'</li><li>'[์ฐํด์ ํน๊ฐ] ๋ฉํฐ i-Size ์์์ ํ (์์ด์ฌ์ด์ฆ ์ฃผ๋์ด ์นด์ํธ) ์ถ์ฐ/์ก์ > ์นด์ํธ > ์ฃผ๋์ด์นด์ํธ'</li><li>'[50% off] ํ๋๋ผ ์ํฐ ์ฃผ๋์ด ์นด์ํธ 3์ข
์ ๋ฌผ i-size ์์ด์ฌ์ด์ฆ ์ถ์ฐ/์ก์ > ์นด์ํธ > ์ฃผ๋์ด์นด์ํธ'</li></ul> |
| 4.0 | <ul><li>'์์ ๋ฒจํธ ๋ฆฌ๋ณธ ์ฟ ์
์ธํธ ๋ฒ ๊ฐ ํจ๋ ๊ฐ๋ ์ปค๋ฒ ์ธํ A ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ > ์ฐจ๋์์ ๋ฒจํธ'</li><li>'์์ ๋ฒจํธ์ธํ ์ด๋ฆฐ์ด์์ ๋ฒจํธ ์ ์ ํด๋ฆฝ ์นด์ํธ ์ปค๋ฒ ์ฟ ์
2.๊ทธ๋ ์ด์คํ์ด(1P) ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ > ์ฐจ๋์์ ๋ฒจํธ'</li><li>'์๋์ฐจ ์ธํ
๋ฆฌ์ด ๋์ด ๋งคํธ ์ํฐ ํฅ ํจ๋ ๋ณดํธ ์คํฐ์ปค ์ฅ์ ํ๋ฐ H6 - ์ก์ธ์๋ฆฌ 02 4 PCS ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ > ์นด์ํธ๊ธฐํ์ฉํ'</li></ul> |
| 0.0 | <ul><li>'BLIVA ์ด๋
ธํฝ์ค ๋ถ์คํฐ ์นด์ํธ (ISOFIX) ๋งค์ฌ๋ธ๋ ์ถ์ฐ/์ก์ > ์นด์ํธ > ๋ถ์คํฐ์นด์ํธ'</li><li>'๋ฒจ๋ฝ V50 ์ฃผ๋์ด ๋ถ์คํฐ ์นด์ํธ (ISOFIX) ํํ
๋ธ๋ ์ถ์ฐ/์ก์ > ์นด์ํธ > ๋ถ์คํฐ์นด์ํธ'</li><li>'์์ด๋ณด์ค ๋กํค ํด๋์ฉ ๋ถ์คํฐ ์นด์ํธ (ISOFIX) + ์ํฌํธ ํค๋์ฟ ์
์ถ์ฐ/์ก์ > ์นด์ํธ > ๋ถ์คํฐ์นด์ํธ'</li></ul> |
| 1.0 | <ul><li>'ํด๋ ๋ ๋ผ์ด์ง์คํ ์์ด์ฌ์ด์ฆ(i-Size) ์ ์์ ๋ฐ๊ตฌ๋ ์นด์ํธ ํธ๋ ๋ธ ์์คํ
์ถ์ฐ/์ก์ > ์นด์ํธ > ์ ์์์นด์ํธ'</li><li>'์ธ์ด๋ฒก์ค ํด๋ผ์ฐ๋ํฐ ์์ด์ฌ์ด์ฆ ์ ์์ ๋ฐ๊ตฌ๋ ํ์ ํ ์นด์ํธ / ํด๋ผ์ฐ๋ T ์ถ์ฐ/์ก์ > ์นด์ํธ > ์ ์์์นด์ํธ'</li><li>'๋งฅ์์ฝ์ ์นด๋ธ๋ฆฌ์คํฝ์ค TC ๋ฐ๊ตฌ๋ ์นด์ํธ ๊ทธ๋ผํ์ดํธ ์ถ์ฐ/์ก์ > ์นด์ํธ > ์ ์์์นด์ํธ'</li></ul> |
| 2.0 | <ul><li>'์ธ์ด๋ฒก์ค ์ ๋ก๋ ํฐ ์์ด์ฌ์ด์ฆ ์๊ธฐ ์ ์ ์ ์์ ํ์ ํ ๋ถ๋ฆฌํ ๋ชจ๋ ์นด์ํธ / ์ ๋ก๋ T ์ถ์ฐ/์ก์ > ์นด์ํธ > ์์ ์์นด์ํธ'</li><li>'[โ๋น ๋ฅธ๋ฐฐ์ก] ์ด์ง์บ๋ฆฌ2 ํด๋์ฉ ์นด์ํธ ์์์ ํ+๋ชฉ๋ฒ ๊ฐ ์ถ์ฐ/์ก์ > ์นด์ํธ > ์์ ์์นด์ํธ'</li><li>'๋ค์ด์น ๋ธ๋ฆฌ๋ฐ 360 ํ๋ก ์์ด์ฌ์ด์ฆ ์ถ์ฐ/์ก์ > ์นด์ํธ > ์์ ์์นด์ํธ'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_bc30")
# Run inference
preds = model("์ค๋งํธํค๋๋ฒจํธ ์ ์ ์์ ๋ฒจํธ ์ธํ ์ด๋ฑํ์ ์นด์ํธ ์๋ผ(blue) ์ถ์ฐ/์ก์ > ์นด์ํธ > ์นด์ํธ์ฉํ > ์ฐจ๋์์ ๋ฒจํธ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 9 | 16.0733 | 40 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 20 |
| 1.0 | 20 |
| 2.0 | 20 |
| 3.0 | 20 |
| 4.0 | 70 |
### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 50
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0333 | 1 | 0.5171 | - |
| 1.6667 | 50 | 0.3557 | - |
| 3.3333 | 100 | 0.0539 | - |
| 5.0 | 150 | 0.0002 | - |
| 6.6667 | 200 | 0.0 | - |
| 8.3333 | 250 | 0.0 | - |
| 10.0 | 300 | 0.0 | - |
| 11.6667 | 350 | 0.0 | - |
| 13.3333 | 400 | 0.0 | - |
| 15.0 | 450 | 0.0 | - |
| 16.6667 | 500 | 0.0 | - |
| 18.3333 | 550 | 0.0 | - |
| 20.0 | 600 | 0.0 | - |
| 21.6667 | 650 | 0.0 | - |
| 23.3333 | 700 | 0.0 | - |
| 25.0 | 750 | 0.0 | - |
| 26.6667 | 800 | 0.0 | - |
| 28.3333 | 850 | 0.0 | - |
| 30.0 | 900 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Alsebay/My_LLMs_Leaderboard
|
Alsebay
| null |
[
"en",
"region:us"
] | 1,712,139,473,000 | 2024-04-17T04:46:45 | 0 | 0 |
---
language:
- en
---
# This is the Leaderboard about ranking my own model :) Also some useful information (Maybe). Main purpose is for Roleplay
## Leaderboard
|Rank|Name|Parameter|Context Length|Tag|Note|
|:---:|---|:---:|:---:|:---:|---|
|๐1|[Narumashi-RT](https://huggingface.co/Alsebay/Narumashi-RT-11B-test)|11B|<span style="color:green">4K</span>|<span style="color:#F53A85">Lewd</span>|Good for Roleplay, although it is LLAMA2. Thank Sao10k :) Could handle some (limited) TSF content.|
|๐2|[NaruMoE](https://huggingface.co/Alsebay/NaruMOE-v1-3x7B)|3x7B|<span style="color:cyan">8K</span> - <span style="color:red">32K</span>|<span style="color:#40C5F0">Neurral</span>| AVG model, could only handle limited extra content I want. |
|โ3|[NarumashiRTS](https://huggingface.co/Alsebay/NarumashiRTS-V2)|7B|<span style="color:cyan">8K</span>|<span style="color:#40C5F0">Neurral</span>| Base on Kunoichi-7B, so it good enough. Know the extra content. Not lewd and will skip lewd content sometime.|
|4|[HyouKan Series](https://huggingface.co/Alsebay/HyouKan-3x7B)|3x7B|<span style="color:cyan">8K</span> - <span style="color:red">32K</span>|<span style="color:#40C5F0">Neurral</span>|<span style="color:red">ATTENTION: DON'T USE GGUF VERSION SINCE IT HAVE SOME BUGS (VARY BY VERSION)</span> All-rounded Roleplay model. Understand well Character Card and good logic. The first version have 8k context lenght. <span style="color:red">|
|5|[SunnyRain](https://huggingface.co/Alsebay/SunnyRain-2x10.7B)|2x10.7B|<span style="color:green">4K</span>|<span style="color:#F53A85">Lewd</span>| To be real, it perform approximate like HyouKan in Roleplay, just got some strange behavious.|
|6|[RainyMotip](https://huggingface.co/Alsebay/RainyMotip-2x7B)|2x7B|<span style="color:red">32K</span>|<span style="color:#40C5F0">Neurral</span> |Good enough model, ok in Roleplay.|
|7|[Nutopia](https://huggingface.co/Alsebay/Nutopia-7B)|7B|<span style="color:red">32K</span>|<span style="color:#F2EC4E">Not for Roleplay</span>|I don't think this work for Roleplay, but it good for solving problem|
|8|[TripedalChiken](https://huggingface.co/Alsebay/TripedalChiken)|2x7B|<span style="color:red">32K</span>|<span style="color:#F2EC4E">Not for Roleplay</span>|Solving problem is good, but for Roleplay, I don't think so|
## Note:
- <span style="color:#F53A85">Lewd</span> : perform well NSFW content. Some of lewd words will appear in normal content if your Character Card have NSFW informations.
- <span style="color:#40C5F0">Neurral</span> : perform well SFW content, can perform well NSFW content (limited maybe). Lewd words will less appear in chat/roleplay than <span style="color:#F53A85">Lewd</span>
- <span style="color:#F2EC4E">Not for Roleplay</span> : seem that those model with this tag not understand well Character Card. But its logical is very good.
- **RT**: Rough Translation Dataset that could lead to worse performance than original model.
- **CN**: Chinese dataset pretrain, maybe not understand extra content in English. (I can't find any good english verion.)
# Some experience:
- The Context Length affect too much to your Memory. Let's say I have 16GB Vram card, I can run the model in 2 ways, using Text-Generation-WebUI:
1. Inference: download the origin model, apply args: ``--load-in-4bit --use_double_quant``. I can run all of my model in leaderboard. The bigger parameter is, the slower token can generate. (Ex:7B model could run in 15 token/s, since 3x7b model could only run in ~4-5 token/s)
2. GGUF Quantization (Fastest,cheapest way to run): After you downloaded GGUF version of those models, sometimes, you can't run it although you can run other model that have bigger parameter. That because:
- The context length: 16GB VRAM GPU could run maximum 2x10.7B (~ 19.2B) model with 4k context length. (5 token/s)
- That model is bug/broken.๐
- Bigger model will have more information that you need for your Character Card.
- Best GGUF version that you should run (balance speed/performance): Q4_K_M, Q5_K_M (Slower than Q4)
# Useful link:
- https://huggingface.co/spaces/Vokturz/can-it-run-llm
- https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
|
[
"TRANSLATION"
] |
TBD
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.