| id | author | task_category | tags | created_time | last_modified | downloads | likes | README | matched_task | is_bionlp |
|---|---|---|---|---|---|---|---|---|---|---|
| stringlengths 6-113 | stringlengths 2-36 | stringclasses 39 values | sequencelengths 1-4.05k | int64 1,646B-1,742B | timestamp[s] 2020-05-14 13:13:12 to 2025-03-18 10:01:09 | int64 0-118M | int64 0-4.86k | stringlengths 30-1.01M | sequencelengths 1-10 | stringclasses 3 values |
neerajprad/phrasebank-sentiment-analysis | neerajprad | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,698,640,540,000 | 2023-10-30T04:36:08 | 9 | 0 | ---
base_model: bert-base-uncased
datasets:
- financial_phrasebank
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: phrasebank-sentiment-analysis
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_50agree
split: train
args: sentences_50agree
metrics:
- type: f1
value: 0.8479759296912354
name: F1
- type: accuracy
value: 0.8576341127922971
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phrasebank-sentiment-analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5374
- F1: 0.8480
- Accuracy: 0.8576
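As an illustrative usage sketch, the checkpoint can be loaded with the standard `transformers` text-classification pipeline; the example sentence is a made-up placeholder and the label names depend on the model config:
```python
from transformers import pipeline

# Minimal usage sketch for the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="neerajprad/phrasebank-sentiment-analysis")
print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn."))
```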
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.5974 | 0.94 | 100 | 0.3898 | 0.8212 | 0.8370 |
| 0.2647 | 1.89 | 200 | 0.3821 | 0.8397 | 0.8583 |
| 0.1433 | 2.83 | 300 | 0.4250 | 0.8487 | 0.8645 |
| 0.0669 | 3.77 | 400 | 0.5374 | 0.8480 | 0.8576 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
KoenBronstring/finetuning-sentiment-model-3000-samples | KoenBronstring | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,651,493,296,000 | 2022-05-04T17:53:58 | 115 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- type: accuracy
value: 0.8733333333333333
name: Accuracy
- type: f1
value: 0.8758169934640523
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
- Accuracy: 0.8733
- F1: 0.8758
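As an illustrative sketch, the same checkpoint can also be used without the pipeline helper; the review text is a placeholder, and the id-to-label mapping comes from the model config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("KoenBronstring/finetuning-sentiment-model-3000-samples")
model = AutoModelForSequenceClassification.from_pretrained("KoenBronstring/finetuning-sentiment-model-3000-samples")

inputs = tokenizer("This movie was a pleasant surprise.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map the highest-scoring logit back to the label name stored in the config.
print(model.config.id2label[logits.argmax(-1).item()])
```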
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
golaxy/gogpt2-7b-pretrain | golaxy | text-generation | [
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"llama2",
"chinese-llama2",
"gogpt2-7b",
"zh",
"dataset:BelleGroup/train_0.5M_CN",
"dataset:BelleGroup/train_1M_CN",
"dataset:c-s-ale/alpaca-gpt4-data-zh",
"dataset:BAAI/COIG",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,690,630,744,000 | 2023-07-31T09:36:59 | 22 | 1 | ---
datasets:
- BelleGroup/train_0.5M_CN
- BelleGroup/train_1M_CN
- c-s-ale/alpaca-gpt4-data-zh
- BAAI/COIG
language:
- zh
license: apache-2.0
tags:
- llama2
- chinese-llama2
- gogpt2-7b
---
# GoGPT2-7B: A Chinese-English Enhanced Large Language Model Trained on Llama2-7b

<p align="center">
<img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square">
<img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca">
</p>
> ICT Chinese-English enhanced foundation models: 7 billion and 13 billion parameters
GoGPT weights are available on 🤗 Hugging Face; the gogpt-7b and gogpt2-7b weights are currently open.
| Model name | Base model | Size | Download |
|-------------------------------------------------------------|-----------|------|-------------------------------------------------|
| [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) |
| [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) |
| [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) |
| [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) |
[GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT)
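A minimal generation sketch, assuming the checkpoint loads with the standard `transformers` causal-LM classes; `device_map="auto"` requires `accelerate`, and the prompt is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("golaxy/gogpt2-7b-pretrain")
model = AutoModelForCausalLM.from_pretrained("golaxy/gogpt2-7b-pretrain", device_map="auto")

prompt = "中国的首都是"  # "The capital of China is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```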
## 🚀 Step 1: Train the tokenizer
[🐱 How to train an LLM tokenizer from scratch](https://github.com/yanqiangmiffy/how-to-train-tokenizer)
```text
├── data
│   └── corpus.txt                 # training corpus
├── llama
│   ├── tokenizer_checklist.chk
│   └── tokenizer.model
├── merged_tokenizer_hf            # merged result, HF format
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── tokenizer.model
├── merged_tokenizer_sp
│   └── open_llama.model
├── merge_tokenizer
│   └── tokenizer.model
├── open_llama.model               # trained SentencePiece model
├── open_llama.vocab               # trained SentencePiece vocabulary
├── README.md
├── step0_step0_process_text.py    # prepare training corpus from multiple datasets
├── step1_make_corpus.py           # prepare training corpus from Chinese Wikipedia data
├── step2_train_tokenzier.py       # train the tokenizer
├── step3_tokenzier_segment.py     # test the trained tokenizer: encode and decode examples
└── step4_merge_tokenizers.py      # merge with the original LLaMA tokenizer to obtain an HF-format tokenizer
```
The original Llama2 vocabulary size is **32000**; after merging with the Chinese tokenizer model trained with a 40k vocabulary, the vocabulary size becomes **68419**, and **68420** after adding a pad token for SFT.
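As a quick sanity check, the merged vocabulary size can be verified if the tokenizer ships with the model repo (an assumption, not stated in the card):
```python
from transformers import AutoTokenizer

# Should match the figures quoted above: 68419, or 68420 with the added pad token.
tokenizer = AutoTokenizer.from_pretrained("golaxy/gogpt2-7b-pretrain")
print(len(tokenizer))
```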
## 🚀 Step 2: Continued pre-training
> Incremental, continued pre-training of LLaMA on a Chinese pre-training corpus; the current corpus is 20GB, with further iterative updates to follow.
## 🚀 Step 3: Supervised fine-tuning
Fine-tuned on diverse instruction data, including Chinese and English instruction data from BELLE and Alpaca as well as MOSS multi-turn dialogue data, for a total of 1.2M+ instruction examples:
- BELLE data: 120k examples, v1
- stanford_alpaca: 52k examples, v2
- stanford_alpaca_gpt4_zh: 52k examples, v2
- [sharegpt](data%2Ffinetune%2Fsharegpt): 90k examples
- [fnlp/moss-003-sft-data](https://huggingface.co/datasets/fnlp/moss-003-sft-data): MOSS multi-turn dialogue data
After sampling by length (output length of roughly 500), 110k instruction examples were selected for SFT training.
## Sample outputs
- What is the difference between a watermelon and a pumpkin?
```
1. Appearance: Watermelons are usually round or oval, with a light yellow or brown rind and red or yellow flesh inside. Pumpkins are likewise round or oval, usually with a light yellow or brown rind and orange-yellow flesh inside.
2. Varieties: Watermelons come mainly in three varieties: small, medium, and giant. Small watermelons usually have one or two seed cores, medium ones two or three, and giant ones four or more. Pumpkins come mainly in two varieties: pumpkin and turnip pumpkin. Pumpkins usually have two or three seed cores, turnip pumpkins four or more.
3. Growing environment: Watermelons need a warm climate and plenty of sunlight; they usually grow in tropical or subtropical regions. Pumpkins grow in temperate and cold regions and need a longer winter to grow, while watermelons usually grow in spring and summer.
4. Nutritional value: Both watermelon and pumpkin are rich in vitamin C and fiber. Watermelon is also rich in antioxidants and potassium, while pumpkin is rich in folate and vitamin E.
Overall, watermelon and pumpkin are two completely different kinds of produce, differing in appearance, variety, growing environment, and nutritional value.
```
- How can I improve my time management skills?
```
Response: 1. Make plans and set goals: Create weekly or daily plans and goals to help you allocate your time better.
2. Prioritize the most important tasks: Finish the most important tasks first; this helps avoid distraction and wasted time.
3. Avoid procrastination: Try not to procrastinate and start tasks early, so you can complete them more easily.
4. Focus your attention: Concentrate and avoid juggling several tasks at once; this helps reduce distraction and improve efficiency.
5. Use tools and techniques: Use time management tools and techniques such as calendars, reminders, and to-do lists to help you manage your time better.
6. Learn to say "no": Learn to decline unnecessary things; this helps you use your time better and focus on the most important tasks.
7. Keep a positive mindset: Stay positive, believe that you can manage your time, and keep learning and developing your time management skills.
8. Rest and relax: Try to keep a balance between work and life and take regular breaks; this helps improve your time management skills and your productivity.
```
- What are the main differences between the Python and JavaScript programming languages?
```
1. Syntax: Python is an object-oriented programming language with fairly strict syntax that uses braces to mark code blocks. JavaScript is a scripting language with relatively loose syntax that uses braces or indentation to mark code blocks.
2. Application domains: Python is widely used in data analysis, machine learning, artificial intelligence, and similar fields thanks to its readability, clear syntax, and powerful features. JavaScript is mainly used in front-end, back-end, and mobile development.
3. Runtime environments: Python's runtime environment is fairly self-contained and runs on different operating systems. JavaScript's runtime environment is comparatively more complex and has to account for browser and web server compatibility issues.
4. Use cases: Python is better suited to scientific computing, data analysis, machine learning, and similar applications, with libraries such as Pandas and Numpy; JavaScript is better suited to scripting for front-end, back-end, and mobile development, with frameworks such as React and Angular.
5. Libraries and frameworks: Python has a rich standard library and third-party libraries such as Django and Flask, whereas JavaScript has popular web development frameworks such as React and Angular.
```
- How will AI affect agricultural production?
```
AI will help improve the efficiency of agricultural production, optimize crop planting and harvesting, and raise the quality of agricultural products.
AI techniques can be applied to plant monitoring, irrigation management, pest and disease control, soil analysis, and more, cutting labor costs, improving production efficiency, and also reducing resource waste and environmental pollution in agriculture. For example, AI can monitor environmental factors such as soil moisture and weather conditions to help farmers decide when to plant, irrigate, and fertilize, thereby improving crop yield and quality.
In addition, AI can monitor plant health to detect and prevent pests and diseases early, improving crop yield and quality. AI can also help farmers forecast future weather, droughts, and floods and draw up more effective disaster response plans.
In short, AI is expected to become an indispensable technology in agricultural production, bringing greater improvements and benefits in the future.
```
- Translate into English: 过去都是假的,回忆是一条没有归途的路,以往的一切春天都无法复原,即使最狂热最坚贞的爱情,归根结底也不过是一种瞬息即逝的现实,唯有孤独永恒。
```
Response: Translation into English:
The past is false, memories are a dead-end road, all previous springs are hopelessly incapable of being restored, even the most passionate and steadfast love is ultimately a fleeting reality,
and only solitude is eternal
```
- Implement the quick sort algorithm in Python
Quick sort is a commonly used sorting algorithm. The basic idea is to partition the array into two parts in a single pass, such that every element of one part is smaller than every element of the other, and then sort the two parts separately until the whole array is ordered.
Here is a Python implementation of quick sort:
```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr)//2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```
In this implementation, we first check whether the array's length is at most 1; if so, the array is returned as-is. Otherwise a pivot element is chosen and the array is split into three parts: a left part smaller than the pivot, a middle part equal to the pivot, and a right part larger than the pivot. The left and right parts are then sorted recursively and concatenated with the middle part, and the sorted array is returned.
Here is an example of sorting an array with the code above:
```python
arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
sorted_arr = quick_sort(arr)
print(sorted_arr)
```
The output is: `[1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9]`
## Disclaimer
The resources in this project are for academic research only; commercial use is strictly prohibited. When using parts that involve third-party code, please strictly follow the corresponding open-source licenses.
Content generated by the model is affected by factors such as model computation, randomness, and quantization precision loss; this project makes no guarantees about its accuracy.
This project assumes no legal liability for any model output, nor for any losses that may result from using the related resources and output results.
| [
"TRANSLATION"
] | Non_BioNLP |
manikaran2007/finetuning-sentiment-model-3000-samples | manikaran2007 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,680,408,350,000 | 2023-04-02T04:18:52 | 16 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.8766666666666667
name: Accuracy
- type: f1
value: 0.877076411960133
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3051
- Accuracy: 0.8767
- F1: 0.8771
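As an illustrative sketch, the reported test-split accuracy could be approximated as follows, assuming the standard `datasets`/`evaluate` APIs; the `LABEL_1`-means-positive mapping and the 300-example subsample are assumptions for speed, so exact numbers may differ:
```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="manikaran2007/finetuning-sentiment-model-3000-samples")

# Small subsample of the IMDB test split to keep the check fast.
test = load_dataset("imdb", split="test").shuffle(seed=42).select(range(300))

# Assumption: LABEL_1 corresponds to the positive class.
preds = [int(r["label"] == "LABEL_1")
         for r in classifier(test["text"], truncation=True)]
print(evaluate.load("accuracy").compute(predictions=preds, references=test["label"]))
```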
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
mhenrichsen/gemma-2b | mhenrichsen | text-generation | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,708,531,510,000 | 2024-02-21T16:09:11 | 1,460 | 1 | ---
library_name: transformers
tags: []
---
# Reupload of Gemma 2b base. Original readme below.
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
ujkim98/mt5-small-finetuned-amazon-en-es | ujkim98 | summarization | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,682,252,978,000 | 2023-04-23T15:14:32 | 19 | 0 | ---
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7922
- Rouge1: 18.97
- Rouge2: 7.0348
- Rougel: 17.6971
- Rougelsum: 17.882
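As an illustrative usage sketch, the checkpoint can be loaded with the standard `transformers` summarization pipeline; the review text and generation settings are made-up placeholders, not tuned values:
```python
from transformers import pipeline

# Minimal usage sketch for the fine-tuned summarization checkpoint.
summarizer = pipeline("summarization", model="ujkim98/mt5-small-finetuned-amazon-en-es")
review = "I bought this for my daughter's birthday and she absolutely loves it. Great quality!"
print(summarizer(review, max_length=30, min_length=5))
```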
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 11.9457 | 1.0 | 151 | 4.4346 | 5.986 | 1.1429 | 5.9458 | 5.7994 |
| 5.4958 | 2.0 | 302 | 3.4869 | 7.1658 | 1.8822 | 7.2823 | 7.1334 |
| 4.4559 | 3.0 | 453 | 3.1252 | 8.5582 | 1.605 | 8.3697 | 8.302 |
| 4.0657 | 4.0 | 604 | 2.9205 | 10.6565 | 3.4667 | 10.4793 | 10.3928 |
| 3.828 | 5.0 | 755 | 2.8419 | 18.3545 | 6.8241 | 17.013 | 17.3331 |
| 3.6794 | 6.0 | 906 | 2.8178 | 18.6777 | 7.0318 | 17.2025 | 17.3807 |
| 3.6205 | 7.0 | 1057 | 2.7984 | 18.8984 | 7.0004 | 17.4826 | 17.7813 |
| 3.5711 | 8.0 | 1208 | 2.7922 | 18.97 | 7.0348 | 17.6971 | 17.882 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
"SUMMARIZATION"
] | Non_BioNLP |
knowledgator/gliclass-modern-large-v2.0-init | knowledgator | zero-shot-classification | [
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"fr",
"ge",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"dataset:knowledgator/gliclass-v1.0",
"dataset:fancyzhx/amazon_polarity",
"dataset:cnmoro/QuestionClassification",
"dataset:Arsive/toxicity_classification_jigsaw",
"dataset:shishir-dwi/News-Article-Categorization_IAB",
"dataset:SetFit/qnli",
"dataset:nyu-mll/multi_nli",
"dataset:SetFit/student-question-categories",
"dataset:SetFit/tweet_sentiment_extraction",
"dataset:SetFit/hate_speech18",
"dataset:saattrupdan/doc-nli",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"region:us"
] | 1,739,488,650,000 | 2025-03-13T20:12:08 | 946 | 9 | ---
base_model:
- answerdotai/ModernBERT-large
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
- knowledgator/gliclass-v1.0
- fancyzhx/amazon_polarity
- cnmoro/QuestionClassification
- Arsive/toxicity_classification_jigsaw
- shishir-dwi/News-Article-Categorization_IAB
- SetFit/qnli
- nyu-mll/multi_nli
- SetFit/student-question-categories
- SetFit/tweet_sentiment_extraction
- SetFit/hate_speech18
- saattrupdan/doc-nli
language:
- en
- fr
- ge
license: apache-2.0
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic and licensed data that allow commercial use and can be used in commercial applications.
This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large), which effectively processes long sequences.
### How to use:
First of all, you need to install the GLiClass library:
```bash
pip install gliclass
pip install -U "transformers>=4.48.0"
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-modern-large-v2.0-init")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-modern-large-v2.0-init", add_prefix_space=True)
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # take the first element because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
If you want to use it for NLI-style tasks, we recommend representing your premise as the text and your hypothesis as a label. You can provide several hypotheses, but the model works best with a single input hypothesis.
```python
# Reuse the pipeline above in an NLI-style setting: premise as the text, hypothesis as the label
text = "The cat slept on the windowsill all afternoon"
labels = ["The cat was awake and playing outside."]
results = pipeline(text, labels, threshold=0.0)[0]
print(results)
```
### Benchmarks:
Below, you can see the F1 score on several text classification datasets. None of the tested models were fine-tuned on those datasets; all were tested in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-modern-large-v2.0-init (399 M)](https://huggingface.co/knowledgator/gliclass-modern-large-v2.0-init) | 0.9137 | 0.7357 | 0.4140 |
| [gliclass-modern-base-v2.0-init (151 M)](https://huggingface.co/knowledgator/gliclass-modern-base-v2.0-init) | 0.8264 | 0.6637 | 0.2985 |
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
Below you can find a comparison with other GLiClass models:
| Dataset | gliclass-base-v1.0-init | gliclass-large-v1.0-init | gliclass-modern-base-v2.0-init | gliclass-modern-large-v2.0-init |
|----------------------|-----------------------|-----------------------|---------------------|---------------------|
| CR | 0.8672 | 0.8024 | 0.9041 | 0.8980 |
| sst2 | 0.8342 | 0.8734 | 0.9011 | 0.9434 |
| sst5 | 0.2048 | 0.1638 | 0.1972 | 0.1123 |
| 20_news_groups | 0.2317 | 0.4151 | 0.2448 | 0.2792 |
| spam | 0.5963 | 0.5407 | 0.5074 | 0.6364 |
| financial_phrasebank | 0.3594 | 0.3705 | 0.2537 | 0.2562 |
| imdb | 0.8772 | 0.8836 | 0.8255 | 0.9137 |
| ag_news | 0.5614 | 0.7069 | 0.6050 | 0.6933 |
| emotion | 0.2865 | 0.3840 | 0.2474 | 0.3746 |
| cap_sotu | 0.3966 | 0.4353 | 0.2929 | 0.2919 |
| rotten_tomatoes | 0.6626 | 0.7933 | 0.6630 | 0.5928 |
| **AVERAGE:** | 0.5344 | 0.5790 | 0.5129 | 0.5447 |
Here you can see how the performance of the model grows providing more examples:
| Model | Num Examples | sst5 | ag_news | emotion | **AVERAGE:** |
|------------------------------------|------------------|--------|---------|--------------|----------|
| gliclass-modern-large-v2.0-init | 0 | 0.1123 | 0.6933 | 0.3746 | 0.3934 |
| gliclass-modern-large-v2.0-init | 8 | 0.5098 | 0.8339 | 0.5010 | 0.6149 |
| gliclass-modern-large-v2.0-init | Weak Supervision | 0.0951 | 0.6478 | 0.4520 | 0.3983 |
| gliclass-modern-base-v2.0-init | 0 | 0.1972 | 0.6050 | 0.2474 | 0.3499 |
| gliclass-modern-base-v2.0-init | 8 | 0.3604 | 0.7481 | 0.4420 | 0.5168 |
| gliclass-modern-base-v2.0-init | Weak Supervision | 0.1599 | 0.5713 | 0.3216 | 0.3509 |
| gliclass-large-v1.0-init | 0 | 0.1639 | 0.7069 | 0.3840 | 0.4183 |
| gliclass-large-v1.0-init | 8 | 0.4226 | 0.8415 | 0.4886 | 0.5842 |
| gliclass-large-v1.0-init | Weak Supervision | 0.1689 | 0.7051 | 0.4586 | 0.4442 |
| gliclass-base-v1.0-init | 0 | 0.2048 | 0.5614 | 0.2865 | 0.3509 |
| gliclass-base-v1.0-init | 8 | 0.2007 | 0.8359 | 0.4856 | 0.5074 |
| gliclass-base-v1.0-init | Weak Supervision | 0.0681 | 0.6627 | 0.3066 | 0.3458 | | [
"TEXT_CLASSIFICATION"
] | TBD |
mesolitica/t5-small-bahasa-cased | mesolitica | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,665,070,126,000 | 2022-10-06T15:30:45 | 13 | 0 | ---
language: ms
---
# t5-small-bahasa-cased
Pretrained T5 small language model for Malay, covering both standard and local (colloquial) Malay.
## Pretraining Corpus
The `t5-small-bahasa-cased` model was pretrained on multiple tasks. Below is a list of the tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
11. Social media normalization.
12. Noisy EN-MS translation.
13. Noisy MS-EN translation.
Preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository https://github.com/google-research/text-to-text-transfer-transformer, on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
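A minimal usage sketch for the prefixes above, assuming the standard `transformers` seq2seq API; the input sentence is a placeholder:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("mesolitica/t5-small-bahasa-cased")
model = T5ForConditionalGeneration.from_pretrained("mesolitica/t5-small-bahasa-cased")

# EN-MS translation prefix from the list above.
inputs = tokenizer("terjemah Inggeris ke Melayu: I love reading books.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
 | [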
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
yhavinga/t5-eff-large-8l-dutch-english-cased | yhavinga | text2text-generation | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"seq2seq",
"nl",
"en",
"dataset:yhavinga/mc4_nl_cleaned",
"arxiv:1910.10683",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,651,823,302,000 | 2022-08-07T12:07:07 | 19 | 0 | ---
datasets:
- yhavinga/mc4_nl_cleaned
language:
- nl
- en
license: apache-2.0
tags:
- t5
- seq2seq
inference: false
---
# t5-eff-large-8l-dutch-english-cased
A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence to sequence model
pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4 and cleaned English 🇬🇧 C4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned).
This **t5 eff** model has **334M** parameters.
It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset
`mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **3d 23h**,
with a sequence length of **512**, batch size **128** and **851850** total steps (**56B** tokens).
Pre-training evaluation loss and accuracy are **1,15** and **0,74**.
Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.
* Pre-trained T5 models need to be fine-tuned before they can be used for downstream tasks, therefore the inference widget on the right has been turned off; a minimal loading sketch follows below.
* For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for
the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!
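As a small illustration, this is how one might load the checkpoint for downstream fine-tuning, assuming the standard 🤗 Transformers seq2seq classes:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("yhavinga/t5-eff-large-8l-dutch-english-cased")
model = T5ForConditionalGeneration.from_pretrained("yhavinga/t5-eff-large-8l-dutch-english-cased")

# The checkpoint is a span-corruption LM only: wrap it in a Seq2SeqTrainer with
# task-specific data before using it for summarization or translation.
```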
Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture
and configs, though it must be noted that this model (t5-eff-large-8l-dutch-english-cased) is unrelated to these projects and not an 'official' checkpoint.
* **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*.
* **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
## Tokenizer
The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers
and has 32003 tokens.
It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
See [./raw/main/tokenizer.json](tokenizer.json) for details.
## Dataset(s)
All models listed below are pre-trained on
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4.
The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix).
## Dutch T5 Models
Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models).
`t5-base-dutch` is the only model with an original T5 config.
The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function,
and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`).
The T5-eff models differ in their number of layers. The table below lists
the dimensions of these models. Not all t5-eff models are efficient, the best example being the inefficient
`t5-xl-4L-dutch-english-cased`.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 |
| *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 |
## Evaluation
Most models from the list above have been fine-tuned for summarization and translation.
The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better)
and the y-axis the summarization Rouge1 score (higher is better).
Point size is proportional to the model size. Models with faster inference speed are plotted in green, those with
slower inference speed in blue.

Evaluation was run on fine-tuned models trained with the following settings:
| | Summarization | Translation |
|---------------:|------------------|-------------------|
| Dataset | CNN Dailymail NL | CCMatrix en -> nl |
| #train samples | 50K | 50K |
| Optimizer | Adam | Adam |
| learning rate | 0.001 | 0.0005 |
| source length | 1024 | 128 |
| target length | 142 | 128 |
|label smoothing | 0.05 | 0.1 |
| #eval samples | 1000 | 1000 |
Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores
below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation
are not saved, since they were trained for comparison of pre-trained models only.
The numbers for summarization are the Rouge scores on 1000 documents from the test split.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 |
| *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 |
| *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 |
| *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 |
| *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 |
The models below have been evaluated for English to Dutch translation.
Note that the first four models are pre-trained on Dutch only. That they still perform adequately is probably because
the translation direction is English to Dutch.
The numbers reported are the Bleu scores on 1000 documents from the test split.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 |
| *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 |
| *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 |
| *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 |
| *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 |
| *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
## Translation models
The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language
directions on the first 25M samples from CCMatrix, giving a total of 50M training samples.
Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books.
The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the BLEU score
averaged over all three evaluation datasets. The best scores are displayed in bold for both translation directions.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) |
|:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------|
| *source_lang* | en | nl | en | nl |
| *target_lang* | nl | en | nl | en |
| *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: |
| *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** |
| *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 |
| *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 |
| *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 |
| *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 |
| *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 |
| *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 |
| *max_source_length* | 128 | 128 | 128 | 128 |
| *max_target_length* | 128 | 128 | 128 | 128 |
| *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 |
| *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 |
| *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 |
| *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 |
| *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 |
| *train_batch_size* | 128 | 128 | 128 | 128 |
| *warmup_steps* | 2000 | 2000 | 2000 | 2000 |
| *total steps* | 390625 | 390625 | 390625 | 390625 |
| *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h |
| *num parameters* | 729M | 729M | 250M | 250M |
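For convenience, a minimal usage sketch for these checkpoints (the example sentence is illustrative; the task prefix that selects the translation direction comes from the table above):

```python
from transformers import pipeline

# Minimal sketch: load one of the fine-tuned checkpoints from the table above.
translator = pipeline(
    "text2text-generation",
    model="yhavinga/t5-small-24L-ccmatrix-multi",
)

# The source_prefix from the table selects the translation direction.
result = translator(
    "translate English to Dutch: The weather is nice today.",
    max_length=128,
)
print(result[0]["generated_text"])
```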
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts
of the training. Weights & Biases made it possible to keep track of many training sessions
and orchestrate hyper-parameter sweeps with insightful visualizations.
The following repositories were helpful in setting up the TPU-VM,
and in getting an idea of what sensible hyper-parameters are for training gpt2 from scratch:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
| [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
bergum/xtremedistil-l6-h384-emotion | bergum | text-classification | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-03-21T11:55:03 | 20 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: xtremedistil-l6-h384-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.928
name: Accuracy
---
# xtremedistil-l6-h384-emotion
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.928
This model can be quantized to int8 while retaining accuracy:
- Accuracy: 0.912
<pre>
import transformers
import transformers.convert_graph_to_onnx as onnx_convert
from pathlib import Path
from onnxruntime.quantization import quantize_dynamic, QuantType

# Load the fine-tuned checkpoint (assumed to be this repository)
model = transformers.AutoModelForSequenceClassification.from_pretrained("bergum/xtremedistil-l6-h384-emotion")
tokenizer = transformers.AutoTokenizer.from_pretrained("bergum/xtremedistil-l6-h384-emotion")
pipeline = transformers.pipeline("text-classification", model=model, tokenizer=tokenizer)

# Export to ONNX (opset 11), then dynamically quantize the weights to int8
onnx_convert.convert_pytorch(pipeline, opset=11, output=Path("xtremedistil-l6-h384-emotion.onnx"), use_external_format=False)
quantize_dynamic("xtremedistil-l6-h384-emotion.onnx", "xtremedistil-l6-h384-emotion-int8.onnx",
                 weight_type=QuantType.QUInt8)
</pre>
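A minimal sketch of running the quantized model with ONNX Runtime (the input names follow the tokenizer's output and may need adjusting for your export):
<pre>
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# Assumes the int8 file produced by the snippet above.
session = ort.InferenceSession("xtremedistil-l6-h384-emotion-int8.onnx")
tokenizer = AutoTokenizer.from_pretrained("bergum/xtremedistil-l6-h384-emotion")

inputs = tokenizer("I am so happy today!", return_tensors="np")
logits = session.run(None, dict(inputs))[0]
print(int(np.argmax(logits, axis=-1)[0]))  # predicted emotion label id
</pre>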
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- num_epochs: 14
### Training results
<pre>
Epoch Training Loss Validation Loss Accuracy
1 No log 0.960511 0.689000
2 No log 0.620671 0.824000
3 No log 0.435741 0.880000
4 0.797900 0.341771 0.896000
5 0.797900 0.294780 0.916000
6 0.797900 0.250572 0.918000
7 0.797900 0.232976 0.924000
8 0.277300 0.216347 0.924000
9 0.277300 0.202306 0.930500
10 0.277300 0.192530 0.930000
11 0.277300 0.192500 0.926500
12 0.181700 0.187347 0.928500
13 0.181700 0.185896 0.929500
14 0.181700 0.185154 0.928000
</pre> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/EleutherAI_-_pythia-410m-v0-4bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,713,857,183,000 | 2024-04-23T07:27:00 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-410M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-410M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for the equivalent of 143,000 steps at a batch
size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed
with a batch size of 4M tokens were originally trained for 71,500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
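As a rough illustration of how such an evaluation can be reproduced with the harness (treat this as an assumption-laden sketch: the `simple_evaluate` entry point and the `hf` model type exist in recent harness releases, but task names and API details vary across versions):

```python
import lm_eval

# Hypothetical invocation; adjust tasks/arguments to your harness version.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-410m-v0",
    tasks=["lambada_openai", "piqa"],
)
print(results["results"])
```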
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
Shinrajim/distilbert-base-uncased-finetuned-clinc | Shinrajim | null | [
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,731,114,574,000 | 2024-11-17T03:44:10 | 7 | 0 | ---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.9180645161290323
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9181
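As a minimal inference sketch with the 🤗 pipeline API (the example utterance is hypothetical):

```python
from transformers import pipeline

# Intent classification over the clinc_oos label set.
classifier = pipeline(
    "text-classification",
    model="Shinrajim/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Transfer 100 dollars from checking to savings."))
```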
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2895 | 1.0 | 318 | 3.2884 | 0.7419 |
| 2.6277 | 2.0 | 636 | 1.8751 | 0.8368 |
| 1.5479 | 3.0 | 954 | 1.1569 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7952 | 5.0 | 1590 | 0.7721 | 0.9181 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.5.1+cu121
- Datasets 1.16.1
- Tokenizers 0.20.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
hatemestinbejaia/mMiniLML-bi-encoder-KD-v1-Student_TripletLoss-Teacher_marginloss-adptativeMargin95N | hatemestinbejaia | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:15000000",
"loss:MarginMSELoss",
"dataset:hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning",
"arxiv:1908.10084",
"arxiv:2010.02666",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,741,324,631,000 | 2025-03-07T05:17:55 | 2 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
datasets:
- hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:15000000
- loss:MarginMSELoss
widget:
- source_sentence: من هو زوج صوفيا فيرغارا
sentences:
- 'عرض كل الصور ( 52 ) صوفيا فيرجارا متزوجة حاليا من جو مانجانييلو . لقد كانت في
تسع علاقات مع المشاهير بمتوسط حوالي 3 . 1 سنوات لكل منها . كان متوسط زواجها 2
. 2 سنة لكل منهما . الاسم المعطى : صوفيا مارجريتا فيرجارا .'
- ما الذي ترمز إليه السلطة الفلسطينية في Lawyer Name PA ؟ وباستخدام هذه الطريقة
، فإن PA تعني النقابة المهنية . السلطة الفلسطينية هي شركة مكونة من محترفين مثل
المحامين وأطباء الأسنان والأطباء .
- تشير معاني DH DH عموما إلى الزوج العزيز أو الزوج الحبيب . عادة ، هذا شكل من أشكال
المودة المكتوبة التي تستخدمها الزوجات على الإنترنت عند الإشارة إلى أزواجهن .
- source_sentence: أين الفيفا
sentences:
- هل تخطط لزيارة روسيا لحضور كأس القارات 2017 FIFA أو كأس العالم FIFA 2018 ؟ [رابط]
هي البوابة السياحية للحكومة الروسية وتزود المشجعين الذين يسافرون إلى روسيا بمعلومات
مفيدة عن الدولة المضيفة لكأس العالم FIFA المقبلة .
- '1 أصغر لاعب أرجنتيني سجل في نهائيات كأس العالم FIFA : عمره 18 عاما و 357 يوما
في عام 2006 ضد صربيا والجبل الأسود . 2 أصغر لاعب بلغ 100 مباراة دولية في تاريخ
الكونميبول ( 27 عاما ، 361 يوما ) .'
- Hotels Near Creation Museum مزيد من المعلومات . . يقع مقر متحف Creation في مدينة
بطرسبورغ بولاية كنتاكي . من خلال الأموال الخاصة لبناء المتحف ، بدأ Young Earth
Creationists ( YEC ) المتحف في عام 2011 . مفتوح للجمهور ، تم إنشاء متحف Creation
لتعزيز وتثقيف الزوار حول نظرية الخلق التي تشرح أصل الأرض . الخلق كما يرويه التفسير
الحرفي لسفر التكوين التوراتي .
- source_sentence: ما هي صمامات الفراشة المستخدمة
sentences:
- سوف يتصل بك المختبر قائلا إن الأنبوب لم يكن ممتلئا وستتساءل عن السبب . تختلف إبر
الفراشة لجمع الدم عن إبر الفراشة للوريد في أنها تحتوي على إبرة مغطاة بالغمد في
النهاية حتى تتمكن من إرفاقها بحامل الفراغ وثقب الأنابيب .
- 1 قم دائما بشحن جهازك قبل مغادرة المنزل ، خاصة في الرحلات الطويلة . 2 أحضر الشاحن
معك إذا كنت ستغادر طوال الليل أو لفترة طويلة . 3 بينما من المقرر أن تدوم بطارية
iPad حتى 10 ساعات ، فإن الاستخدام المتكرر سيقصر هذا الوقت بشكل كبير . كيفية إطالة
عمر بطارية جهاز iPad . تماما كما هو الحال مع iPhone أو iPod Touch ، سيكون عمر
بطارية iPad أقصر أثناء الاستخدام الكثيف . ومع ذلك ، هناك بعض الإجراءات التي يمكنك
اتخاذها للحفاظ على جهازك قيد التشغيل ونشطا لساعات من الاستمتاع ، وتوضح هذه المقالة
ما يمكنك القيام به لإطالة عمر البطارية .
- صمامات AWWA الفراشة عبارة عن صمامات سريعة الفتح تتكون من قرص دائري معدني أو ريشة
مع محاورها المحورية بزوايا قائمة لاتجاه التدفق في الأنبوب ، والتي عند تدويرها
على عمود ، تسد المقاعد في جسم الصمام . تستخدم عادة كصمامات خنق للتحكم في التدفق
.
- source_sentence: ما هو طلاء تمبرا
sentences:
- يحفظ . غالبا ما يشار إليه باسم طلاء الملصقات ، وهو عبارة عن صبغة مائية ممزوجة
بعامل ربط كيميائي . يستخدم على نطاق واسع لمشاريع الأطفال والحرف اليدوية بسبب مرونته
وألوانه الزاهية وسهولة استخدامه . يتوفر طلاء تمبرا أيضا في مسحوق ، والذي يصبح
جاهزا للطلاء عند مزجه بالماء .
- يتم تحديث ألوان الطلاء الأكثر مبيعا لدينا يوميا لمنحك أحدث اتجاهات ألوان الطلاء
. تحقق من ألوان الطلاء الأكثر شيوعا لدينا وابحث عن اللون المناسب لك . يتم تحديث
ألوان الطلاء الأكثر مبيعا لدينا يوميا لمنحك أحدث اتجاهات ألوان الطلاء . تحقق من
ألوان الطلاء الأكثر شيوعا لدينا وابحث عن اللون المناسب لك .
- 'لغة إجرائية . تم التحديث : 04 - 26 - 2017 بواسطة . اللغة الإجرائية هي لغة برمجة
كمبيوتر تتبع بالترتيب مجموعة من الأوامر . من أمثلة اللغات الإجرائية للكمبيوتر
BASIC و C و FORTRAN و Pascal . شروط البرمجة'
- source_sentence: تحديد المسح
sentences:
- المسح أو مسح الأراضي هو تقنية ومهنة وعلم تحديد المواقع الأرضية أو ثلاثية الأبعاد
للنقاط والمسافات والزوايا بينها . يطلق على أخصائي مسح الأراضي اسم مساح الأراضي
.
- جمعية إعادة تأهيل العظام ( ORA ) هي فريق من المتخصصين في العلاج الطبيعي في كولورادو
سبرينغز . سيساعدك هؤلاء المتخصصون في التغلب على القيود الجسدية لمساعدتك على العودة
إلى أعلى مستوى ممكن من الوظائف .
- إجمالي المحطات . تعد المحطات الإجمالية واحدة من أكثر أدوات المسح شيوعا المستخدمة
اليوم . وهي تتألف من جهاز ثيودوليت إلكتروني ومكون إلكتروني لقياس المسافة ( EDM
) . تتوفر أيضا محطات روبوتية كاملة تتيح التشغيل لشخص واحد من خلال التحكم في الجهاز
باستخدام جهاز التحكم عن بعد . تاريخ
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: reranking
name: Reranking
dataset:
name: Unknown
type: unknown
metrics:
- type: map
value: 0.5935023409469147
name: Map
- type: mrr@10
value: 0.5968439153439153
name: Mrr@10
- type: ndcg@10
value: 0.659201674552193
name: Ndcg@10
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the [experiment_data_knowledge_distillation_vs_fine_tuning](https://huggingface.co/datasets/hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 86741b4e3f5cb7765a600d3a3d55a0f6a6cb443d -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [experiment_data_knowledge_distillation_vs_fine_tuning](https://huggingface.co/datasets/hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("hatemestinbejaia/mMiniLML-bi-encoder-KD-v1-Student_TripletLoss-Teacher_marginloss-adptativeMargin95N")
# Run inference
sentences = [
'تحديد المسح',
'المسح أو مسح الأراضي هو تقنية ومهنة وعلم تحديد المواقع الأرضية أو ثلاثية الأبعاد للنقاط والمسافات والزوايا بينها . يطلق على أخصائي مسح الأراضي اسم مساح الأراضي .',
'إجمالي المحطات . تعد المحطات الإجمالية واحدة من أكثر أدوات المسح شيوعا المستخدمة اليوم . وهي تتألف من جهاز ثيودوليت إلكتروني ومكون إلكتروني لقياس المسافة ( EDM ) . تتوفر أيضا محطات روبوتية كاملة تتيح التشغيل لشخص واحد من خلال التحكم في الجهاز باستخدام جهاز التحكم عن بعد . تاريخ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Reranking
* Evaluated with [<code>RerankingEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.RerankingEvaluator)
| Metric | Value |
|:--------|:-----------|
| **map** | **0.5935** |
| mrr@10 | 0.5968 |
| ndcg@10 | 0.6592 |
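A hedged sketch of reproducing this style of evaluation with the evaluator linked above (the toy sample is illustrative only, and the exact return format varies by sentence-transformers version):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import RerankingEvaluator

model = SentenceTransformer(
    "hatemestinbejaia/mMiniLML-bi-encoder-KD-v1-Student_TripletLoss-Teacher_marginloss-adptativeMargin95N"
)

# Toy example only: each sample pairs a query with relevant and irrelevant passages.
samples = [{
    "query": "تحديد المسح",
    "positive": ["المسح أو مسح الأراضي هو تقنية وعلم تحديد المواقع الأرضية ."],
    "negative": ["جمعية إعادة تأهيل العظام هي فريق من المتخصصين في العلاج الطبيعي ."],
}]
evaluator = RerankingEvaluator(samples, at_k=10)
print(evaluator(model))  # MAP / MRR@10 / NDCG@10 style scores
```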
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### experiment_data_knowledge_distillation_vs_fine_tuning
* Dataset: [experiment_data_knowledge_distillation_vs_fine_tuning](https://huggingface.co/datasets/hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning) at [e78b561](https://huggingface.co/datasets/hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning/tree/e78b561e8946459e4d5a50e9ce020e17f5f8e7eb)
* Size: 15,000,000 training samples
* Columns: <code>query</code>, <code>pos</code>, <code>neg</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | pos | neg | label |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------|
| type | string | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 11.14 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 93.26 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 87.26 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: -2.88</li><li>mean: 12.91</li><li>max: 21.92</li></ul> |
* Samples:
| query | pos | neg | label |
|:------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------|
| <code>القليل من الكافيين جيد أثناء الحمل</code> | <code>نحن لا نعرف الكثير عن تأثيرات الكافيين أثناء الحمل عليك وعلى طفلك . لذلك فمن الأفضل أن تحد من المبلغ الذي تحصل عليه كل يوم . إذا كنت حاملا ، قللي من تناول الكافيين إلى 200 ملليجرام يوميا . هذا هو الكمية الموجودة في فنجان واحد سعة 8 أونصات من القهوة أو فنجان قهوة سعة 12 أونصة .</code> | <code>من الآمن عموما أن تتناول النساء الحوامل الشوكولاتة لأن الدراسات أثبتت وجود فوائد معينة لتناول الشوكولاتة أثناء الحمل . ومع ذلك ، يجب على النساء الحوامل التأكد من أن تناول الكافيين أقل من 200 مجم في اليوم .</code> | <code>4.0280589908361435</code> |
| <code>ما هي الفاكهة الأصلية في أستراليا</code> | <code>Passiflora herbertiana . فاكهة العاطفة النادرة موطنها أستراليا . الثمار ذات قشرة خضراء ، بيضاء اللون ، مع تصنيف غير معروف للأكل . تسرد بعض المصادر الفاكهة على أنها صالحة للأكل وحلوة ولذيذة ، بينما يسرد البعض الآخر الفاكهة على أنها مرة وغير صالحة للأكل . فاكهة العاطفة النادرة موطنها أستراليا . الثمار ذات قشرة خضراء ، بيضاء اللون ، مع تصنيف غير معروف للأكل . تسرد بعض المصادر الفاكهة على أنها صالحة للأكل وحلوة ولذيذة ، بينما يسرد البعض الآخر الفواكه على أنها مرة وغير صالحة للأكل .</code> | <code>جوز الكولا هو ثمرة شجرة الكولا ، وهي جنس ( كولا ) من الأشجار التي تنتمي إلى الغابات الاستوائية المطيرة في إفريقيا .</code> | <code>10.18145449956258</code> |
| <code>ما هو حجم الجيش الكندي</code> | <code>القوات المسلحة الكندية . 1 بدأت أول مهمة حفظ سلام كندية واسعة النطاق في مصر في 24 نوفمبر 1956 . 2 هناك ما يقرب من 65000 من القوات النظامية و 25000 من أفراد الاحتياط في الجيش الكندي . 3 في كندا ، تم تحديد يوم 9 أغسطس كيوم حفظة السلام الوطنيين .</code> | <code>المعهد الكندي لصحة الأطباء ( CPHI ) هو برنامج وطني تم إنشاؤه في عام 2012 كتعاون بين الجمعية الطبية الكندية ( CMA ) والمؤسسة الطبية الكندية ( CMF ) والجمعيات الطبية الإقليمية والإقليمية ( PTMAs ) .</code> | <code>16.420575777689614</code> |
* Loss: <code>__main__.MarginMSELoss</code>
### Evaluation Dataset
#### experiment_data_knowledge_distillation_vs_fine_tuning
* Dataset: [experiment_data_knowledge_distillation_vs_fine_tuning](https://huggingface.co/datasets/hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning) at [e78b561](https://huggingface.co/datasets/hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning/tree/e78b561e8946459e4d5a50e9ce020e17f5f8e7eb)
* Size: 10,000 evaluation samples
* Columns: <code>query</code>, <code>pos</code>, <code>neg</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | pos | neg | label |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------|
| type | string | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 10.94 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 91.74 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 87.33 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: -0.75</li><li>mean: 13.72</li><li>max: 22.62</li></ul> |
* Samples:
| query | pos | neg | label |
|:-----------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------|
| <code>ما هو اسم د . كوين</code> | <code>اعرض الشخصيات المختلفة التي لعبها نفس الممثل . الدكتورة ميكايلا كوين . ولدت ميكايلا في 15 فبراير 1833 في بوسطن ، ماساتشوستس ، لأبوين جوزيف وإليزابيث كوين . هناك نشأت مع شقيقاتها الأربع : ماري ( التي تم تغيير اسمها إلى ريبيكا ) ، ومارجوري ، وكلوديت ، ومورين . كان والدها يريد ابنا ، لذلك عندما ولدت أطلق عليها اسم ميكايلا ( الملقب مايك ) .</code> | <code>ليس لدى د . ماكفارلاند أي تأمينات مدرجة . إذا كنت دكتور ماكفارلاند وترغب في إضافة تأمينات تقبلها ، يرجى تحديث ملفك التعريفي المجاني . الانتساب إلى المستشفى ينتمي د . ماكفارلاند إلى المستشفيات التالية .</code> | <code>15.524045944213867</code> |
| <code>من يلعب دور بيرني مادوف</code> | <code>ساحر الأكاذيب هو واحد من اثنين من مشاريع Madoff التلفزيونية قيد الإعداد . ABC لديها مسلسل قصير قادم يسمى مادوف ، من بطولة ريتشارد دريفوس وبليث دانر . قصص ذات الصلة . روبرت دي نيرو يسجل الدخول للعب بيرني مادوف في فيلم HBO المحتمل .</code> | <code>اتهمت السلطات الأمريكية مسؤول تنفيذي للمحاسبة يزعم أنه ساعد برنارد مادوف في مخطط بونزي الذي تبلغ تكلفته عدة مليارات من الدولارات ، والذي يوسع نطاق تحقيقه في الاحتيال بعد خمس سنوات من اكتشافه .</code> | <code>13.17703644434611</code> |
| <code>كم قدم مكعب في طن من حصى البازلاء</code> | <code>لذلك 1 طن لديه 2000 - 100 أو 20 قدم مكعب من الحصى . الفناء المكعب هو 3x3x3 = 27 قدما مكعبا من الفناء المكعب ، الإجابة 20 - 27 أو 0 . 74 ياردة مكعبة . العوامل الأخرى التي تؤثر على حجم الحصى هي محتوى الرطوبة ودرجات المواد . يحتوي حصى البازلاء على سبيل المثال على جميع الجزيئات ذات الحجم المحدد ، على سبيل المثال ، 1 - 4 بوصة ، حوالي 120 رطلا للإشارة فقط : 1 قدم مكعب = 6 . 25 جالون ( إمبراطوري ) ، جالون من الماء يزن 10 أرطال ، لذا فإن القدم المكعبة من الماء تزن 62 . 5 رطلا . هذا يعني أن الجاذبية النوعية للحصى هي 120 - 62 . 5 ، أو أقل قليلا من 2 .</code> | <code>1 كيس قدم مكعب واحد ( التربة والمهاد ) يغطي ما يقرب من 8 أقدام مربعة إلى عمق 3 . 2 كيس واحد 75 رطلا ( الحصى والرمل ) يغطي حوالي 4 أقدام مربعة إلى عمق 3 . 3 بوصات سميكة ستغطي حوالي 300 قدم مربع . سيغطي سمك 1 حوالي 150 قدما مربعا .</code> | <code>10.34702980518341</code> |
* Loss: <code>__main__.MarginMSELoss</code>
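For context, MarginMSELoss (Hofstätter et al., cited below) regresses the student's positive-negative score *margin* onto the teacher margin stored in the `label` column. A minimal sketch of the objective, assuming dot-product scoring (the `__main__.MarginMSELoss` used here may differ in details):

```python
import torch
import torch.nn.functional as F

def margin_mse_loss(query_emb: torch.Tensor,
                    pos_emb: torch.Tensor,
                    neg_emb: torch.Tensor,
                    teacher_margin: torch.Tensor) -> torch.Tensor:
    # Student relevance scores via dot product (assumption; cosine is also common).
    pos_score = (query_emb * pos_emb).sum(dim=-1)
    neg_score = (query_emb * neg_emb).sum(dim=-1)
    # Regress the student's pos-neg margin onto the teacher's margin (the `label`).
    return F.mse_loss(pos_score - neg_score, teacher_margin)
```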
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `half_precision_backend`: amp
- `remove_unused_columns`: False
- `load_best_model_at_end`: True
- `fp16_backend`: amp
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: amp
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: False
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: amp
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | map |
|:----------:|:----------:|:-------------:|:---------------:|:-------:|
| 0.0853 | 10000 | 0.0037 | 0.0191 | 0.5410 |
| 0.1707 | 20000 | 0.0023 | 0.0144 | 0.5630 |
| 0.2560 | 30000 | 0.0019 | 0.0124 | 0.5808 |
| 0.3413 | 40000 | 0.0016 | 0.0112 | 0.5917 |
| 0.4267 | 50000 | 0.0015 | 0.0101 | 0.5899 |
| 0.5120 | 60000 | 0.0013 | 0.0094 | 0.5930 |
| 0.5973 | 70000 | 0.0012 | 0.0090 | 0.5914 |
| 0.6827 | 80000 | 0.0012 | 0.0087 | 0.5807 |
| 0.7680 | 90000 | 0.0011 | 0.0082 | 0.5997 |
| **0.8533** | **100000** | **0.0011** | **0.008** | **0.6** |
| 0.9387 | 110000 | 0.001 | 0.0079 | 0.5935 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.4.1+cu121
- Accelerate: 1.4.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MarginMSELoss
```bibtex
@misc{hofstätter2021improving,
title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
year={2021},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gaudi/opus-mt-en-mh-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,314,921,000 | 2024-10-19T00:21:35 | 6 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-mh)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-mh).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-mh --output_dir ./ctranslate2/opus-mt-en-mh-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-mh-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-mh-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-mh-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-mh) by Helsinki-NLP.
| [
"TRANSLATION"
] | Non_BioNLP |
ljcamargo/tachiwin_translate | ljcamargo | text2text-generation | [
"transformers",
"safetensors",
"gguf",
"unsloth",
"translation",
"text2text-generation",
"dataset:ljcamargo/tachiwin_translate",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | 1,729,385,452,000 | 2024-11-24T09:45:51 | 38 | 0 | ---
datasets:
- ljcamargo/tachiwin_translate
library_name: transformers
pipeline_tag: text2text-generation
tags:
- unsloth
- translation
---
# Model Card for Model ID
Tachiwin Totonaku
Totonac - Spanish, Spanish - Totonac translation with Llama 3.1 8B-Instruct fine-tuning (with Vicuña model)
## Model Details
### Model Description
Totonac-Spanish, Spanish-Totonac translation with Llama 3.1 8B-Instruct fine-tuning (with Vicuña model)
- **Developed by:** Luis J Camargo
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
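In the absence of an official snippet, the following is a rough, assumption-laden sketch (standard causal-LM loading with a guessed instruction-style prompt; the card does not document the actual template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ljcamargo/tachiwin_translate"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The prompt format below is a guess; the card does not document the template.
prompt = "Translate to Totonac: Buenos días"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```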
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"TRANSLATION"
] | Non_BioNLP |
gokulsrinivasagan/bert-base-uncased_rte | gokulsrinivasagan | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,732,260,141,000 | 2024-12-04T18:26:35 | 5 | 0 | ---
base_model: google-bert/bert-base-uncased
datasets:
- glue
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased_rte
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- type: accuracy
value: 0.5956678700361011
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_rte
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6562
- Accuracy: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
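A minimal inference sketch is shown below for orientation; it assumes the standard `transformers` sequence-classification interface, and the class order (entailment vs. not entailment) should be confirmed from the model's `id2label` config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "gokulsrinivasagan/bert-base-uncased_rte"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# RTE is a sentence-pair task: the premise and hypothesis are encoded together.
premise = "A man is playing a guitar on stage."
hypothesis = "A man is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # Check model.config.id2label for the class order.
```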
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7049 | 1.0 | 10 | 0.6914 | 0.5921 |
| 0.6897 | 2.0 | 20 | 0.6836 | 0.5668 |
| 0.6473 | 3.0 | 30 | 0.6562 | 0.5957 |
| 0.5204 | 4.0 | 40 | 0.7143 | 0.6173 |
| 0.3466 | 5.0 | 50 | 0.8576 | 0.6065 |
| 0.187 | 6.0 | 60 | 1.0723 | 0.6318 |
| 0.0822 | 7.0 | 70 | 1.2064 | 0.6426 |
| 0.0591 | 8.0 | 80 | 1.3447 | 0.6390 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-id-en | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:58:05 | 38,547 | 15 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-id-en
* source languages: id
* target languages: en
* OPUS readme: [id-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-en/opus-2019-12-18.eval.txt)
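For orientation, the snippet below shows a minimal way to run the model through the `transformers` Marian classes; this is a generic usage pattern rather than part of the original OPUS-MT release.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-id-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an Indonesian sentence into English.
batch = tokenizer(["Saya suka membaca buku."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```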
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.id.en | 47.7 | 0.647 |
| [
"TRANSLATION"
] | Non_BioNLP |
dltsj/mt5-small-finetuned-amazon-zh-full | dltsj | summarization | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,681,575,688,000 | 2023-04-15T16:39:30 | 30 | 0 | ---
datasets:
- amazon_reviews_multi
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-zh-full
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: zh
split: validation
args: zh
metrics:
- type: rouge
value: 10.2419
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-zh-full
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2111
- Rouge1: 10.2419
- Rouge2: 4.7619
- Rougel: 10.2419
- Rougelsum: 10.2419
## Model description
More information needed
## Intended uses & limitations
More information needed
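A minimal usage sketch with the `transformers` summarization pipeline is shown below; the input review and generation settings are illustrative assumptions, not values taken from the training setup.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="dltsj/mt5-small-finetuned-amazon-zh-full",
)

# The model was fine-tuned on Chinese Amazon reviews, so the input should be Chinese text.
review = "这个产品质量很好,物流也很快,下次还会再来购买。"
print(summarizer(review, max_length=30, min_length=5))
```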
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 11.4787 | 1.0 | 229 | 4.4488 | 3.1349 | 1.3605 | 3.1349 | 3.1349 |
| 4.8938 | 2.0 | 458 | 3.7642 | 8.2011 | 2.381 | 8.2011 | 8.2011 |
| 4.0429 | 3.0 | 687 | 3.6643 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 3.7038 | 4.0 | 916 | 3.5566 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 3.5159 | 5.0 | 1145 | 3.4467 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 3.3715 | 6.0 | 1374 | 3.4368 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 3.2854 | 7.0 | 1603 | 3.3713 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 3.1206 | 8.0 | 1832 | 3.3467 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 3.0899 | 9.0 | 2061 | 3.2809 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 3.0251 | 10.0 | 2290 | 3.2420 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 2.9903 | 11.0 | 2519 | 3.2251 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 2.9599 | 12.0 | 2748 | 3.2408 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 2.9247 | 13.0 | 2977 | 3.2150 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 2.9131 | 14.0 | 3206 | 3.2121 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
| 2.8942 | 15.0 | 3435 | 3.2111 | 10.2419 | 4.7619 | 10.2419 | 10.2419 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
| [
"SUMMARIZATION"
] | Non_BioNLP |
google/metricx-23-qe-large-v2p0 | google | null | [
"transformers",
"pytorch",
"mt5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,707,323,744,000 | 2025-01-07T21:11:07 | 90,875 | 6 | ---
license: apache-2.0
---
# MetricX-23
*This is not an officially supported Google product.*
**GitHub repository: [https://github.com/google-research/metricx](https://github.com/google-research/metricx)**
This repository contains the MetricX-23 models,
a family of models for automatic evaluation of translations that were proposed
in the WMT'23 Metrics Shared Task submission
[MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task](https://aclanthology.org/2023.wmt-1.63/).
The models were trained in [T5X](https://github.com/google-research/t5x) and
then converted for use in PyTorch.
## Available Models
There are 6 models available on HuggingFace that vary in the number of
parameters and whether or not the model is reference-based or reference-free
(also known as quality estimation, or QE):
* [MetricX-23-XXL](https://huggingface.co/google/metricx-23-xxl-v2p0)
* [MetricX-23-XL](https://huggingface.co/google/metricx-23-xl-v2p0)
* [MetricX-23-Large](https://huggingface.co/google/metricx-23-large-v2p0)
* [MetricX-23-QE-XXL](https://huggingface.co/google/metricx-23-qe-xxl-v2p0)
* [MetricX-23-QE-XL](https://huggingface.co/google/metricx-23-qe-xl-v2p0)
* [MetricX-23-QE-Large](https://huggingface.co/google/metricx-23-qe-large-v2p0)
We recommend using the XXL model versions for the best agreement with human
judgments of translation quality, the Large versions for best speed, and the
XL for an intermediate use case.
## Changes to the WMT'23 Submission
The models available here are most similar to the primary submission to the WMT'23 Metrics
Shared Task. They are initialized with [mT5](https://aclanthology.org/2021.naacl-main.41/)
then fine-tuned on a combination of direct assessment and MQM data. However,
we made some changes that make these models different from the WMT'23 submissions.
First, the models are trained to regress the actual MQM score rather than a
normalized score between 0 and 1. **That means the output from the MetricX-23
models is a score in the range [0, 25] where lower is better (i.e., it predicts
an error score).**
Second, these models were trained with a larger variety of synthetic data that
makes them more robust to translation edge cases like over- and undertranslation,
described in more detail in the following section.
### Synthetic Data
In order for our MetricX models to learn to identify certain types of bad
translations that are not sufficiently (or at all) represented in the regular
training data, we created synthetic examples and mixed them in during training.
The synthetic training data was generated from the DA datasets ranging from
WMT15 to WMT21 (~ 43 language pairs). In most cases, the synthetic examples have
the candidate translation manipulated so as to turn it into a bad translation
with a specific issue commonly unrecognized by learned metrics.
The table below provides an overview of the various failure modes that we
considered, including brief descriptions of how we prepared the synthetic data
to address them.
| Failure mode | Synthetic example description |
| ----------- | ----------- |
| Undertranslation | Candidate translation with an arbitrary sentence removed (if multi-sentence); alternatively, candidate with a certain proportion of words removed from the end. |
| Overtranslation | Candidate translation duplicated (with space in between). |
| Fluent but unrelated translation | Arbitrary reference of a similar length from the dataset. |
| Gibberish | Text of a similar length as the reference, generated by sampling words from the reference translation vocabulary (built from all references in the data). |
| Missing punctuation | Reference translation with the end punctuation removed (11 punctuation symbols considered). |
| Latin instead of Chinese/Japanese or Hindi/Bengali punctuation | Candidate translation with the language-specific punctuation symbol at the end replaced with the Latin equivalent (e.g., "." instead of "。" or "।"); alternatively, the punctuation symbol is replaced with the Latin equivalent in the reference, keeping the correct one in the candidate. |
| Reference-matching translation | Reference translation copied as the candidate translation (unlike the rest of the synthetic data, these examples are meant to train the metric to predict a perfect score for candidates matching the reference). |
Examples from the first 4 categories were assigned a label corresponding to the
worst score on the given rating scale (e.g., 25 when mixed with MQM training
data), whereas the reference-matching translation examples are assigned the best
score (e.g., 0 when used with MQM data). The missing/incorrect punctuation
examples were labeled with a score slightly worse than perfect.
Note that some of the synthetic datasets are only meaningful in the
reference-based scenario, and we thus excluded them when training a QE variant
of MetricX. These are the Latin-vs-special punctuation and the
reference-matching translation examples.
Most of the synthetic training sets were created using stratified sampling
across target languages, taking 500 examples per target language. One exception
is the missing punctuation set, which used a stratified sample across different
punctuation symbols instead.
When training MetricX, a small proportion of the synthetic examples was mixed
with the regular training examples. During the first-stage fine-tuning on DA
data, each synthetic training set constituted between 0.1% and 1% of all
training examples, whereas in the second-stage fine-tuning on MQM data we used
an even smaller proportion, around 0.05%.
As for evaluating the effect of the synthetic training data on the model's
performance, the DEMETR challenge set - which we originally used to evaluate the
models submitted to the WMT23 Metrics Shared Task - was no longer adequate. We
therefore created a new DEMETR-style test set based on the WMT22 DA data, with
examples constructed analogously to the synthetic training examples, as
described above. This test set helped us determine the right proportions of
synthetic data for fine-tuning in order to make MetricX robust for the failure
modes in consideration, without sacrificing the system- and segment-level
correlations with human ratings.
## Usage
The code for using MetricX models can be found at [https://github.com/google-research/metricx](https://github.com/google-research/metricx).
The repository contains example prediction scripts, described below.
The `metricx23/predict.py` script contains an example for how to run inference
on the models.
### Reference-Based
Example usage for a reference-based model:
```bash
python -m metricx23.predict \
--tokenizer google/mt5-xl \
--model_name_or_path google/metricx-23-xl-v2p0 \
--max_input_length 1024 \
--batch_size 1 \
--input_file input.jsonl \
--output_file output.jsonl
```
`input.jsonl` is expected to have 1 serialized JSON object per line with
`"reference"` and `"hypothesis"` fields. The output jsonl will be parallel
to `input.jsonl` but additionally contain a `"prediction"` field with the predicted score.
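As an illustration of this format, the snippet below writes a couple of hypothetical records (the sentences are made up for demonstration):

```python
import json

# Hypothetical records in the reference-based input format.
records = [
    {"reference": "The weather is nice today.", "hypothesis": "Today the weather is nice."},
    {"reference": "She bought three apples.", "hypothesis": "She bought apples."},
]
with open("input.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```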
Note that the model was trained with a maximum input length of 1024 tokens, so
significantly increasing that value may lead to unpredictable behavior.
### Reference-Free
Example usage for a reference-free model:
```bash
python -m metricx23.predict \
--tokenizer google/mt5-xl \
--model_name_or_path google/metricx-23-qe-xl-v2p0 \
--max_input_length 1024 \
--batch_size 1 \
--input_file input.jsonl \
--output_file output.jsonl \
--qe
```
`input.jsonl` is expected to have 1 serialized JSON object per line with
`"source"` and `"hypothesis"` fields. The output jsonl will be parallel
to `input.jsonl` but additionally contain a `"prediction"` field with the predicted score.
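Analogously, a hypothetical QE input record pairs a source segment with its translation:

```python
import json

# Hypothetical record in the reference-free (QE) input format.
record = {"source": "Das Wetter ist heute schön.", "hypothesis": "The weather is nice today."}
with open("input.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```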
## Meta-Evaluation
The `metricx23/evaluate.py` script contains code to calculate various correlations
between the MetricX-23 scores and MQM ratings of translation quality using the
[MT Metrics Eval](https://github.com/google-research/mt-metrics-eval) library.
Example usage:
```bash
python -m metricx23.evaluate \
--dataset wmt22 \
--lp en-de \
--input_file input.jsonl \
--output_file output.json
```
`input.jsonl` is expected to have one JSON object serialized per line.
Each JSON object is expected to contain 4 fields:
* `"system_id"`: The name of the system that generated the translation.
* `"segment_id"`: The 0-based index of the corresponding segment in the MT
Metrics Eval data.
* `"label"`: The ground-truth translation quality score (with higher is better).
* `"prediction"`: The model predicted translation quality score (with lower is
better; the script negates the scores so higher is better).
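Putting these fields together, a single hypothetical input line could be constructed as follows (the values are made up for illustration):

```python
import json

# Hypothetical meta-evaluation record: higher "label" is better, lower
# "prediction" is better (the script negates predictions internally).
line = {
    "system_id": "example-system",
    "segment_id": 0,
    "label": 85.0,
    "prediction": 2.3,
}
print(json.dumps(line))
```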
The script will calculate the four agreement/correlation statistics that were used in the
WMT'23 Shared Task. Below are the results for the MetricX-23 models on the
WMT'22 Metrics Shared Task data:
English-German:
| Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| MetricX-23-XXL | 0.795 | 0.835 | 0.546 | 0.619 |
| MetricX-23-XL | 0.756 | 0.813 | 0.540 | 0.605 |
| MetricX-23-Large | 0.769 | 0.759 | 0.507 | 0.595 |
| MetricX-23-QE-XXL | 0.769 | 0.830 | 0.490 | 0.606 |
| MetricX-23-QE-XL | 0.718 | 0.684 | 0.421 | 0.594 |
| MetricX-23-QE-Large | 0.744 | 0.671 | 0.387 | 0.579 |
English-Russian:
| Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| MetricX-23-XXL | 0.905 | 0.943 | 0.477 | 0.609 |
| MetricX-23-XL | 0.876 | 0.906 | 0.498 | 0.589 |
| MetricX-23-Large | 0.876 | 0.841 | 0.474 | 0.569 |
| MetricX-23-QE-XXL | 0.895 | 0.940 | 0.470 | 0.602 |
| MetricX-23-QE-XL | 0.848 | 0.861 | 0.415 | 0.570 |
| MetricX-23-QE-Large | 0.819 | 0.778 | 0.411 | 0.551 |
Chinese-English:
| Model | System-Level Accuracy | System-Level Pearson | Segment-Level Pearson | Segment-Level Pairwise Acc |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| MetricX-23-XXL | 0.868 | 0.919 | 0.605 | 0.551 |
| MetricX-23-XL | 0.868 | 0.924 | 0.584 | 0.543 |
| MetricX-23-Large | 0.857 | 0.919 | 0.555 | 0.539 |
| MetricX-23-QE-XXL | 0.857 | 0.928 | 0.573 | 0.544 |
| MetricX-23-QE-XL | 0.802 | 0.879 | 0.546 | 0.529 |
| MetricX-23-QE-Large | 0.758 | 0.904 | 0.522 | 0.529 |
The `metricx23/evaluate_wmt23.py` script re-calculates the average correlation
score that was used to rank submissions from the
[WMT'23 Shared Task](https://www2.statmt.org/wmt23/pdf/2023.wmt-1.51.pdf).
Example usage:
```bash
python -m metricx23.evaluate_wmt23 \
--en_de predictions_ende.jsonl \
--he_en predictions_heen.jsonl \
--zh_en predictions_zhen.jsonl \
--output_file output.json
```
Each of the 3 input files is expected to be in the same format as described
above. Each file should correspond to running inference on each of the language
pairs from the WMT'23 dataset.
The results for each of the models are the following:
| Model | Average Correlation |
| ----------- | ----------- |
| MetricX-23-XXL | 0.812 |
| MetricX-23-XL | 0.813 |
| MetricX-23-Large | 0.794 |
| MetricX-23-QE-XXL | 0.797 |
| MetricX-23-QE-XL | 0.767 |
| MetricX-23-QE-Large | 0.762 |
## Citation
If you use MetricX-23 in your research, please cite the following publication:
```bibtex
@inproceedings{juraska-etal-2023-metricx,
title = {{MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task}},
author = "Juraska, Juraj and
Finkelstein, Mara and
Deutsch, Daniel and
Siddhant, Aditya and
Mirzazadeh, Mehdi and
Freitag, Markus",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.63",
doi = "10.18653/v1/2023.wmt-1.63",
pages = "756--767",
}
``` | [
"TRANSLATION"
] | Non_BioNLP |
nfliu/deberta-v3-large_boolq | nfliu | text-classification | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:boolq",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,694,066,124,000 | 2023-09-08T05:40:57 | 32,728 | 3 | ---
base_model: microsoft/deberta-v3-large
datasets:
- boolq
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-large_boolq
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: boolq
type: boolq
config: default
split: validation
args: default
metrics:
- type: accuracy
value: 0.8834862385321101
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large_boolq
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the boolq dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4601
- Accuracy: 0.8835
## Model description
More information needed
## Example
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nfliu/deberta-v3-large_boolq")
tokenizer = AutoTokenizer.from_pretrained("nfliu/deberta-v3-large_boolq")
# Each example is a (question, context) pair.
examples = [
("Lake Tahoe is in California", "Lake Tahoe is a popular tourist spot in California."),
("Water is wet", "Contrary to popular belief, water is not wet.")
]
encoded_input = tokenizer(examples, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
model_output = model(**encoded_input)
probabilities = torch.softmax(model_output.logits, dim=-1).cpu().tolist()
probability_no = [round(prob[0], 2) for prob in probabilities]
probability_yes = [round(prob[1], 2) for prob in probabilities]
for example, p_no, p_yes in zip(examples, probability_no, probability_yes):
print(f"Question: {example[0]}")
print(f"Context: {example[1]}")
print(f"p(No | question, context): {p_no}")
print(f"p(Yes | question, context): {p_yes}")
print()
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.85 | 250 | 0.5306 | 0.8823 |
| 0.1151 | 1.69 | 500 | 0.4601 | 0.8835 |
| 0.1151 | 2.54 | 750 | 0.5897 | 0.8792 |
| 0.0656 | 3.39 | 1000 | 0.6477 | 0.8804 |
| 0.0656 | 4.24 | 1250 | 0.6847 | 0.8838 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gaudi/opus-mt-es-sl-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,663,244,000 | 2024-10-19T03:02:59 | 6 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-sl)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-sl).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-sl --output_dir ./ctranslate2/opus-mt-es-sl-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-es-sl-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-es-sl-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-es-sl-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-sl) by Helsinki-NLP.
| [
"TRANSLATION"
] | Non_BioNLP |
rambodazimi/bert-base-uncased-finetuned-LoRA-MRPC | rambodazimi | null | [
"safetensors",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,724,380,837,000 | 2024-08-28T14:10:38 | 0 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-LoRA-MRPC
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.8602941176470589
name: Accuracy
- type: f1
value: 0.8998242530755711
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-lora-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.8603
- F1: 0.8998
- trainable model parameters: 1181186
- all model parameters: 110664964
- percentage of trainable model parameters: 1.07%
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- weight_decay: 0.01
- rank: 32
- lora_alpha: 32
- lora_dropout: 0.05
- num_epochs: 5 | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
yasuaki0406/distilbert-base-uncased-finetuned-emotion | yasuaki0406 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,661,874,714,000 | 2022-08-30T16:01:46 | 14 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.9245
name: Accuracy
- type: f1
value: 0.9244242594868723
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2123
- Accuracy: 0.9245
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
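A minimal usage sketch with the text-classification pipeline is shown below; the exact emotion label names should be read from the model's config rather than assumed.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yasuaki0406/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see my friends this weekend!"))
```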
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8144 | 1.0 | 250 | 0.3129 | 0.9055 | 0.9027 |
| 0.2457 | 2.0 | 500 | 0.2123 | 0.9245 | 0.9244 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
YONGWOOHUH/distilbert-base-uncased-finetuned-emotion | YONGWOOHUH | text-classification | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,692,166,436,000 | 2023-08-16T06:30:31 | 8 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.936
name: Accuracy
- type: f1
value: 0.9362021172690986
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1543
- Accuracy: 0.936
- F1: 0.9362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1705 | 1.0 | 250 | 0.1681 | 0.9315 | 0.9311 |
| 0.1169 | 2.0 | 500 | 0.1612 | 0.932 | 0.9324 |
| 0.0967 | 3.0 | 750 | 0.1547 | 0.936 | 0.9358 |
| 0.0736 | 4.0 | 1000 | 0.1556 | 0.9375 | 0.9373 |
| 0.0614 | 5.0 | 1250 | 0.1543 | 0.936 | 0.9362 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
patrickquick/BERTicelli | patrickquick | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"BERTicelli",
"text classification",
"abusive language",
"hate speech",
"offensive language",
"en",
"dataset:OLID",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,651,516,592,000 | 2022-05-10T09:03:48 | 8,416 | 0 | ---
datasets:
- OLID
language:
- en
license: apache-2.0
tags:
- BERTicelli
- text classification
- abusive language
- hate speech
- offensive language
widget:
- text: If Jamie Oliver fucks with my £3 meal deals at Tesco I'll kill the cunt.
example_title: Example 1
- text: Keep up the good hard work.
example_title: Example 2
- text: That's not hair. Those were polyester fibers because Yoda is (or was) a puppet.
example_title: Example 3
---
[Mona Allaert](https://github.com/MonaDT) •
[Leonardo Grotti](https://github.com/corvusMidnight) •
[Patrick Quick](https://github.com/patrickquick)
## Model description
BERTicelli is an English pre-trained BERT model obtained by fine-tuning the [English BERT base cased model](https://github.com/google-research/bert) with the training data from [Offensive Language Identification Dataset (OLID)](https://scholar.harvard.edu/malmasi/olid).
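A minimal classification sketch is shown below, reusing one of the widget examples above; the label names returned depend on the uploaded model config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="patrickquick/BERTicelli")
print(classifier("Keep up the good hard work."))
```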
This model was developed for the NLP Shared Task in the Digital Text Analysis program at the University of Antwerp (2021–2022). | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
pinzhenchen/sft-lora-en-bloom-3b | pinzhenchen | null | [
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 1,709,682,445,000 | 2024-03-05T23:47:27 | 0 | 0 | ---
language:
- en
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which were then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
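As a rough sketch of that loading pattern (assuming the repository stores a PEFT-format LoRA adapter; see the linked instructions for the authors' exact procedure):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

# Attach the LoRA adapter weights from this repository (assumed to be in PEFT format).
model = PeftModel.from_pretrained(base_model, "pinzhenchen/sft-lora-en-bloom-3b")
model.eval()
```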
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
ura-hcmut/ura-llama-70b | ura-hcmut | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"vi",
"en",
"dataset:vietgpt/wikipedia_vi",
"arxiv:2403.02715",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,696,500,789,000 | 2024-03-27T17:13:35 | 0 | 3 | ---
datasets:
- vietgpt/wikipedia_vi
language:
- vi
- en
license: other
pipeline_tag: text-generation
extra_gated_prompt: Please read the [URA-LLaMA License Agreement](https://github.com/martinakaduc/ura-llama-public/blob/main/URA-LLaMa%20Model%20User%20Agreement.pdf)
before accepting it.
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
I accept the URA-LLaMA License Agreement: checkbox
---
# URA-LLaMa 70B
## Model Details
### Model Description
With a strong commitment to enhancing the quality of large language models for the Vietnamese language, a collaborative effort was undertaken by Vietnamese researchers hailing from Ho Chi Minh University of Technology (HCMUT) - Vietnam National University HCMC and Stanford University. Our endeavor involved the meticulous fine-tuning of Meta LLaMa-2 models using Vietnamese articles sourced from Wikipedia and online news websites. In line with our dedication to fostering community progress, we are pleased to offer our models free of charge for research purposes. For those who wish to delve further into our research and its details, we encourage you to explore the comprehensive information provided below.
- **Developed by:**
- Duc Q. Nguyen
- Sang T. Truong
- Toan D. V. Nguyen
- Dong D. Le
- Nhi N. Truong
- Tho Quan
- Sanmi Koyejo
- **Funded by:**
- Microsoft Accelerating Foundation Models Research program
- Stanford University
- Ho Chi Minh University of Technology (HCMUT) - VNU-HCM
- DSciLab (Faculty of Computer Science & Engineering, HCMUT - VNU-HCM)
- **Model type:** Text generation
- **Languages:** Vietnamese, English
- **License:**
- Custom license available at [LICENSE](https://github.com/martinakaduc/ura-llama-public/blob/main/URA-LLaMa%20Model%20User%20Agreement.pdf)
- **Finetuned from model:** Meta LLaMa-2 70B
### Model Sources
We publicly provide starter source code and access to playground of URA-LLaMa 7B.
- **Repository:** [URA-LLaMa Github](https://github.com/martinakaduc/ura-llama-public)
- **Framework:** [ViLLM](https://github.com/stair-lab/villm)
- **Paper:** Our paper was accepted at NAACL 2024. [Link](https://arxiv.org/abs/2403.02715)
## Uses
This model is primarily designed for text generation. However, as a language model, it is versatile and can also function as an encoder for various downstream tasks, akin to other models. For a detailed understanding of its use cases, please refer to the information provided below.
### Direct Use
You can use our models to perform various tasks, including:
* Question answering (with context)
* Summarization
* Language modelling
* Text classification
* Translation
### Downstream Use
This model can serve as an encoder for a wide range of downstream tasks, spanning from pure natural language processing to combinations of natural language processing with computer vision or speech processing.
### Out-of-Scope Use
While our models have undergone fine-tuning using extensive Vietnamese datasets, they may not perform optimally in specialized domains necessitating profound domain expertise, such as medicine, politics, chemistry, etc. We kindly request that you refrain from employing our models for political purposes or any endeavors that may cause harm to individuals or compromise the sovereignty and territorial integrity of Vietnam.
## Bias, Risks, and Limitations
Unless required by applicable law, the URA-LLaMa materials and any output and results therefrom are provided on an "as is" basis, without warranties of any kind, either express or implied, including, without limitation, any warranties of title, non-infringement, merchantability, or fitness for a particular purpose. You are solely responsible for determining the appropriateness of using or redistributing the URA-LLaMa materials and assume any risks associated with your use of the URA-LLaMa materials and any output and results.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. In order for the model to work well, you may need to perform prompt engineering to create appropriate prompts before inference.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
pipeline_kwargs={
"temperature": 1.0,
"max_new_tokens": 250,
"top_k": 1,
"repetition_penalty": 1.1
}
if __name__ == "__main__":
# Load model
model = AutoModelForCausalLM.from_pretrained(
"ura-hcmut/ura-llama-70b",
device_map="auto"
)
model.config.pretraining_tp = 1
model.eval()
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
"ura-hcmut/ura-llama-70b",
trust_remote_code=True
)
tokenizer.pad_token = tokenizer.eos_token
pipeline = transformers.pipeline(
model=model,
tokenizer=tokenizer,
return_full_text=False,
task='text-generation',
**pipeline_kwargs
)
query_template = "[INST] <<SYS>>\nBạn là một trợ lý thông minh.\n<</SYS>>\n\nHãy trả lời câu hỏi sau.\nCâu hỏi: {query}\nTrả lời: [/INST]"
while True:
query = input("Query: ")
if query == "exit":
break
query = query_template.format(query=query)
answer = pipeline(query)[0]["generated_text"]
print(answer)
```
## Finetuning Details
### Finetuning Data
List of datasets used for finetuning:
* [Vietnamese Wikipedia](https://huggingface.co/datasets/vietgpt/wikipedia_vi)
### Finetuning Procedure
We utilize the causal language modelling (next token prediction) procedure to finetune our models. A tutorial is available at [https://huggingface.co/docs/transformers/tasks/language_modeling](https://huggingface.co/docs/transformers/tasks/language_modeling).
#### Finetuning Hyperparameters
- **Training regime:** BFloat16 mixed precision
- **Quantization:** Normal Float 4bit
- **Lora rank:** 1024
- **Batch size:** 4
- **Optimizer:** Paged AdamW 32bit
- **Learning rate:** 1e-5
## Evaluation
Our models are tested with various tasks. The details of the evaluation process can be found at our [Leaderboard](https://ai.stanford.edu/~sttruong/villm).
## Environmental Impact
Carbon emissions are estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 6 x RTX8000 45GB
- **Hours used:** ~5100h
- **Carbon Emitted:** ~575 kg CO2 eq.
## Citation
If you use URA-LLaMa materials in your research, please cite our model(s) as below.
**BibTeX:**
```plaintext
@inproceedings{crossing2024,
title = "Crossing Linguistic Horizons: Finetuning and Comprehensive Evaluation of Vietnamese Large Language Models",
author = "Truong, Sang T. and Nguyen, Duc Q. and Nguyen, Toan D. V. and Le, Dong D. and Truong, Nhi N. and Quan, Tho and Koyejo, Sanmi",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = June,
year = "2024",
address = "Seattle, Washington",
publisher = "Association for Computational Linguistics",
url = "",
pages = "",
}
```
## Model Card Authors
## Contact
* Mr. Duc Q. Nguyen: [email protected]
* Mr. Sang T. Truong: [email protected]
* Assoc. Prof. Tho Quan: [email protected] | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
Davlan/bert-base-multilingual-cased-finetuned-amharic | Davlan | fill-mask | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-06-02T12:37:53 | 270 | 2 | ---
{}
---
---
language: am
---
# bert-base-multilingual-cased-finetuned-amharic
## Model description
**bert-base-multilingual-cased-finetuned-amharic** is an **Amharic BERT** model obtained by replacing the mBERT vocabulary with an Amharic vocabulary (because the language was not supported) and fine-tuning the **bert-base-multilingual-cased** model on Amharic-language texts. It provides **better performance** than multilingual BERT (mBERT) on Amharic named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Amharic corpus using Amharic vocabulary.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-amharic')
>>> unmasker("የአሜሪካ የአፍሪካ ቀንድ ልዩ መልዕክተኛ ጄፈሪ ፌልትማን በአራት አገራት የሚያደጉትን [MASK] መጀመራቸውን የአሜሪካ የውጪ ጉዳይ ሚንስቴር አስታወቀ።")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Amharic CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | am_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 0.0 | 60.89
### BibTeX entry and citation info
By David Adelani
| [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
pinzhenchen/sft-lora-fr-bloom-7b1 | pinzhenchen | null | [
"generation",
"question answering",
"instruction tuning",
"fr",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 1,709,682,478,000 | 2024-03-05T23:48:02 | 0 | 0 | ---
language:
- fr
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which were then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
zkava01/firstparagraph | zkava01 | text-classification | [
"tensorboard",
"safetensors",
"roberta",
"autotrain",
"text-classification",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest",
"region:us"
] | 1,732,219,741,000 | 2024-11-21T20:13:17 | 5 | 0 | ---
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.17190960049629211
- f1_macro: 0.9521367521367522
- f1_micro: 0.9375
- f1_weighted: 0.9378205128205128
- precision_macro: 0.9523809523809524
- precision_micro: 0.9375
- precision_weighted: 0.9464285714285714
- recall_macro: 0.9583333333333334
- recall_micro: 0.9375
- recall_weighted: 0.9375
- accuracy: 0.9375
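A minimal inference sketch (the fine-tuned label names are not documented here, so inspect `model.config.id2label` before relying on them):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="zkava01/firstparagraph")
print(classifier("I love AutoTrain"))
```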
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
tner/deberta-large-wnut2017 | tner | token-classification | [
"transformers",
"pytorch",
"deberta",
"token-classification",
"dataset:tner/wnut2017",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,660,087,551,000 | 2022-09-26T14:29:19 | 44 | 0 | ---
datasets:
- tner/wnut2017
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: Jacob Collier is a Grammy awarded artist from England.
example_title: NER Example 1
model-index:
- name: tner/deberta-large-wnut2017
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: tner/wnut2017
type: tner/wnut2017
args: tner/wnut2017
metrics:
- type: f1
value: 0.5105386416861827
name: F1
- type: precision
value: 0.6931637519872814
name: Precision
- type: recall
value: 0.4040778498609824
name: Recall
- type: f1_macro
value: 0.4263428845085451
name: F1 (macro)
- type: precision_macro
value: 0.6003185137596864
name: Precision (macro)
- type: recall_macro
value: 0.35195768262641947
name: Recall (macro)
- type: f1_entity_span
value: 0.5936768149882904
name: F1 (entity span)
- type: precision_entity_span
value: 0.8060413354531002
name: Precision (entity span)
- type: recall_entity_span
value: 0.46987951807228917
name: Recall (entity span)
---
# tner/deberta-large-wnut2017
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the
[tner/wnut2017](https://huggingface.co/datasets/tner/wnut2017) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.5105386416861827
- Precision (micro): 0.6931637519872814
- Recall (micro): 0.4040778498609824
- F1 (macro): 0.4263428845085451
- Precision (macro): 0.6003185137596864
- Recall (macro): 0.35195768262641947
The per-entity breakdown of the F1 score on the test set are below:
- corporation: 0.3503649635036496
- group: 0.3148148148148148
- location: 0.6029411764705882
- person: 0.6628895184135977
- product: 0.1951219512195122
- work_of_art: 0.431924882629108
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.47970650356554456, 0.5385161869734422]
- 95%: [0.47475901512925966, 0.5430870496346687]
- F1 (macro):
- 90%: [0.47970650356554456, 0.5385161869734422]
- 95%: [0.47475901512925966, 0.5430870496346687]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate model as below.
```python
from tner import TransformersNER
model = TransformersNER("tner/deberta-large-wnut2017")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
It can also be used via the transformers library, but this is not recommended as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/wnut2017']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: microsoft/deberta-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 4
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
| [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
zyj2003lj/nomic-embed-text-v1.5-Q4_K_M-GGUF | zyj2003lj | sentence-similarity | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:nomic-ai/nomic-embed-text-v1.5",
"base_model:quantized:nomic-ai/nomic-embed-text-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,725,092,777,000 | 2024-08-31T08:26:20 | 21 | 0 | ---
base_model: nomic-ai/nomic-embed-text-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: epoch_0_model
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.20895522388058
- type: ap
value: 38.57605549557802
- type: f1
value: 69.35586565857854
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.8144
- type: ap
value: 88.65222882032363
- type: f1
value: 91.80426301643274
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.162000000000006
- type: f1
value: 46.59329642263158
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.253
- type: map_at_10
value: 38.962
- type: map_at_100
value: 40.081
- type: map_at_1000
value: 40.089000000000006
- type: map_at_3
value: 33.499
- type: map_at_5
value: 36.351
- type: mrr_at_1
value: 24.609
- type: mrr_at_10
value: 39.099000000000004
- type: mrr_at_100
value: 40.211000000000006
- type: mrr_at_1000
value: 40.219
- type: mrr_at_3
value: 33.677
- type: mrr_at_5
value: 36.469
- type: ndcg_at_1
value: 24.253
- type: ndcg_at_10
value: 48.010999999999996
- type: ndcg_at_100
value: 52.756
- type: ndcg_at_1000
value: 52.964999999999996
- type: ndcg_at_3
value: 36.564
- type: ndcg_at_5
value: 41.711999999999996
- type: precision_at_1
value: 24.253
- type: precision_at_10
value: 7.738
- type: precision_at_100
value: 0.98
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.149000000000001
- type: precision_at_5
value: 11.593
- type: recall_at_1
value: 24.253
- type: recall_at_10
value: 77.383
- type: recall_at_100
value: 98.009
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 45.448
- type: recall_at_5
value: 57.965999999999994
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.69069567851087
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.35185490976283
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.71274951450321
- type: mrr
value: 76.06032625423207
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.73980520022269
- type: cos_sim_spearman
value: 84.24649792685918
- type: euclidean_pearson
value: 85.85197641158186
- type: euclidean_spearman
value: 84.24649792685918
- type: manhattan_pearson
value: 86.26809552711346
- type: manhattan_spearman
value: 84.56397504030865
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.25324675324674
- type: f1
value: 84.17872280892557
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.770253446400886
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.94307095497281
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.164
- type: map_at_10
value: 42.641
- type: map_at_100
value: 43.947
- type: map_at_1000
value: 44.074999999999996
- type: map_at_3
value: 39.592
- type: map_at_5
value: 41.204
- type: mrr_at_1
value: 39.628
- type: mrr_at_10
value: 48.625
- type: mrr_at_100
value: 49.368
- type: mrr_at_1000
value: 49.413000000000004
- type: mrr_at_3
value: 46.400000000000006
- type: mrr_at_5
value: 47.68
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 48.564
- type: ndcg_at_100
value: 53.507000000000005
- type: ndcg_at_1000
value: 55.635999999999996
- type: ndcg_at_3
value: 44.471
- type: ndcg_at_5
value: 46.137
- type: precision_at_1
value: 39.628
- type: precision_at_10
value: 8.856
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.268
- type: precision_at_5
value: 14.649000000000001
- type: recall_at_1
value: 32.164
- type: recall_at_10
value: 59.609
- type: recall_at_100
value: 80.521
- type: recall_at_1000
value: 94.245
- type: recall_at_3
value: 46.521
- type: recall_at_5
value: 52.083999999999996
- type: map_at_1
value: 31.526
- type: map_at_10
value: 41.581
- type: map_at_100
value: 42.815999999999995
- type: map_at_1000
value: 42.936
- type: map_at_3
value: 38.605000000000004
- type: map_at_5
value: 40.351
- type: mrr_at_1
value: 39.489999999999995
- type: mrr_at_10
value: 47.829
- type: mrr_at_100
value: 48.512
- type: mrr_at_1000
value: 48.552
- type: mrr_at_3
value: 45.754
- type: mrr_at_5
value: 46.986
- type: ndcg_at_1
value: 39.489999999999995
- type: ndcg_at_10
value: 47.269
- type: ndcg_at_100
value: 51.564
- type: ndcg_at_1000
value: 53.53099999999999
- type: ndcg_at_3
value: 43.301
- type: ndcg_at_5
value: 45.239000000000004
- type: precision_at_1
value: 39.489999999999995
- type: precision_at_10
value: 8.93
- type: precision_at_100
value: 1.415
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.892
- type: precision_at_5
value: 14.865999999999998
- type: recall_at_1
value: 31.526
- type: recall_at_10
value: 56.76
- type: recall_at_100
value: 75.029
- type: recall_at_1000
value: 87.491
- type: recall_at_3
value: 44.786
- type: recall_at_5
value: 50.254
- type: map_at_1
value: 40.987
- type: map_at_10
value: 52.827
- type: map_at_100
value: 53.751000000000005
- type: map_at_1000
value: 53.81
- type: map_at_3
value: 49.844
- type: map_at_5
value: 51.473
- type: mrr_at_1
value: 46.833999999999996
- type: mrr_at_10
value: 56.389
- type: mrr_at_100
value: 57.003
- type: mrr_at_1000
value: 57.034
- type: mrr_at_3
value: 54.17999999999999
- type: mrr_at_5
value: 55.486999999999995
- type: ndcg_at_1
value: 46.833999999999996
- type: ndcg_at_10
value: 58.372
- type: ndcg_at_100
value: 62.068
- type: ndcg_at_1000
value: 63.288
- type: ndcg_at_3
value: 53.400000000000006
- type: ndcg_at_5
value: 55.766000000000005
- type: precision_at_1
value: 46.833999999999996
- type: precision_at_10
value: 9.191
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.448
- type: precision_at_5
value: 15.862000000000002
- type: recall_at_1
value: 40.987
- type: recall_at_10
value: 71.146
- type: recall_at_100
value: 87.035
- type: recall_at_1000
value: 95.633
- type: recall_at_3
value: 58.025999999999996
- type: recall_at_5
value: 63.815999999999995
- type: map_at_1
value: 24.587
- type: map_at_10
value: 33.114
- type: map_at_100
value: 34.043
- type: map_at_1000
value: 34.123999999999995
- type: map_at_3
value: 30.45
- type: map_at_5
value: 31.813999999999997
- type: mrr_at_1
value: 26.554
- type: mrr_at_10
value: 35.148
- type: mrr_at_100
value: 35.926
- type: mrr_at_1000
value: 35.991
- type: mrr_at_3
value: 32.599000000000004
- type: mrr_at_5
value: 33.893
- type: ndcg_at_1
value: 26.554
- type: ndcg_at_10
value: 38.132
- type: ndcg_at_100
value: 42.78
- type: ndcg_at_1000
value: 44.919
- type: ndcg_at_3
value: 32.833
- type: ndcg_at_5
value: 35.168
- type: precision_at_1
value: 26.554
- type: precision_at_10
value: 5.921
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.861
- type: precision_at_5
value: 9.605
- type: recall_at_1
value: 24.587
- type: recall_at_10
value: 51.690000000000005
- type: recall_at_100
value: 73.428
- type: recall_at_1000
value: 89.551
- type: recall_at_3
value: 37.336999999999996
- type: recall_at_5
value: 43.047000000000004
- type: map_at_1
value: 16.715
- type: map_at_10
value: 24.251
- type: map_at_100
value: 25.326999999999998
- type: map_at_1000
value: 25.455
- type: map_at_3
value: 21.912000000000003
- type: map_at_5
value: 23.257
- type: mrr_at_1
value: 20.274
- type: mrr_at_10
value: 28.552
- type: mrr_at_100
value: 29.42
- type: mrr_at_1000
value: 29.497
- type: mrr_at_3
value: 26.14
- type: mrr_at_5
value: 27.502
- type: ndcg_at_1
value: 20.274
- type: ndcg_at_10
value: 29.088
- type: ndcg_at_100
value: 34.293
- type: ndcg_at_1000
value: 37.271
- type: ndcg_at_3
value: 24.708
- type: ndcg_at_5
value: 26.809
- type: precision_at_1
value: 20.274
- type: precision_at_10
value: 5.361
- type: precision_at_100
value: 0.915
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.733
- type: precision_at_5
value: 8.556999999999999
- type: recall_at_1
value: 16.715
- type: recall_at_10
value: 39.587
- type: recall_at_100
value: 62.336000000000006
- type: recall_at_1000
value: 83.453
- type: recall_at_3
value: 27.839999999999996
- type: recall_at_5
value: 32.952999999999996
- type: map_at_1
value: 28.793000000000003
- type: map_at_10
value: 38.582
- type: map_at_100
value: 39.881
- type: map_at_1000
value: 39.987
- type: map_at_3
value: 35.851
- type: map_at_5
value: 37.289
- type: mrr_at_1
value: 34.455999999999996
- type: mrr_at_10
value: 43.909
- type: mrr_at_100
value: 44.74
- type: mrr_at_1000
value: 44.786
- type: mrr_at_3
value: 41.659
- type: mrr_at_5
value: 43.010999999999996
- type: ndcg_at_1
value: 34.455999999999996
- type: ndcg_at_10
value: 44.266
- type: ndcg_at_100
value: 49.639
- type: ndcg_at_1000
value: 51.644
- type: ndcg_at_3
value: 39.865
- type: ndcg_at_5
value: 41.887
- type: precision_at_1
value: 34.455999999999996
- type: precision_at_10
value: 7.843999999999999
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 18.831999999999997
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 28.793000000000003
- type: recall_at_10
value: 55.68300000000001
- type: recall_at_100
value: 77.99000000000001
- type: recall_at_1000
value: 91.183
- type: recall_at_3
value: 43.293
- type: recall_at_5
value: 48.618
- type: map_at_1
value: 25.907000000000004
- type: map_at_10
value: 35.519
- type: map_at_100
value: 36.806
- type: map_at_1000
value: 36.912
- type: map_at_3
value: 32.748
- type: map_at_5
value: 34.232
- type: mrr_at_1
value: 31.621
- type: mrr_at_10
value: 40.687
- type: mrr_at_100
value: 41.583
- type: mrr_at_1000
value: 41.638999999999996
- type: mrr_at_3
value: 38.527
- type: mrr_at_5
value: 39.612
- type: ndcg_at_1
value: 31.621
- type: ndcg_at_10
value: 41.003
- type: ndcg_at_100
value: 46.617999999999995
- type: ndcg_at_1000
value: 48.82
- type: ndcg_at_3
value: 36.542
- type: ndcg_at_5
value: 38.368
- type: precision_at_1
value: 31.621
- type: precision_at_10
value: 7.396999999999999
- type: precision_at_100
value: 1.191
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 17.39
- type: precision_at_5
value: 12.1
- type: recall_at_1
value: 25.907000000000004
- type: recall_at_10
value: 52.115
- type: recall_at_100
value: 76.238
- type: recall_at_1000
value: 91.218
- type: recall_at_3
value: 39.417
- type: recall_at_5
value: 44.435
- type: map_at_1
value: 25.732166666666668
- type: map_at_10
value: 34.51616666666667
- type: map_at_100
value: 35.67241666666666
- type: map_at_1000
value: 35.78675
- type: map_at_3
value: 31.953416666666662
- type: map_at_5
value: 33.333
- type: mrr_at_1
value: 30.300166666666673
- type: mrr_at_10
value: 38.6255
- type: mrr_at_100
value: 39.46183333333334
- type: mrr_at_1000
value: 39.519999999999996
- type: mrr_at_3
value: 36.41299999999999
- type: mrr_at_5
value: 37.6365
- type: ndcg_at_1
value: 30.300166666666673
- type: ndcg_at_10
value: 39.61466666666667
- type: ndcg_at_100
value: 44.60808333333334
- type: ndcg_at_1000
value: 46.91708333333334
- type: ndcg_at_3
value: 35.26558333333333
- type: ndcg_at_5
value: 37.220000000000006
- type: precision_at_1
value: 30.300166666666673
- type: precision_at_10
value: 6.837416666666667
- type: precision_at_100
value: 1.10425
- type: precision_at_1000
value: 0.14875
- type: precision_at_3
value: 16.13716666666667
- type: precision_at_5
value: 11.2815
- type: recall_at_1
value: 25.732166666666668
- type: recall_at_10
value: 50.578916666666665
- type: recall_at_100
value: 72.42183333333334
- type: recall_at_1000
value: 88.48766666666667
- type: recall_at_3
value: 38.41325
- type: recall_at_5
value: 43.515750000000004
- type: map_at_1
value: 23.951
- type: map_at_10
value: 30.974
- type: map_at_100
value: 31.804
- type: map_at_1000
value: 31.900000000000002
- type: map_at_3
value: 28.762
- type: map_at_5
value: 29.94
- type: mrr_at_1
value: 26.534000000000002
- type: mrr_at_10
value: 33.553
- type: mrr_at_100
value: 34.297
- type: mrr_at_1000
value: 34.36
- type: mrr_at_3
value: 31.391000000000002
- type: mrr_at_5
value: 32.525999999999996
- type: ndcg_at_1
value: 26.534000000000002
- type: ndcg_at_10
value: 35.112
- type: ndcg_at_100
value: 39.28
- type: ndcg_at_1000
value: 41.723
- type: ndcg_at_3
value: 30.902
- type: ndcg_at_5
value: 32.759
- type: precision_at_1
value: 26.534000000000002
- type: precision_at_10
value: 5.445
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 12.986
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.951
- type: recall_at_10
value: 45.24
- type: recall_at_100
value: 64.12299999999999
- type: recall_at_1000
value: 82.28999999999999
- type: recall_at_3
value: 33.806000000000004
- type: recall_at_5
value: 38.277
- type: map_at_1
value: 16.829
- type: map_at_10
value: 23.684
- type: map_at_100
value: 24.683
- type: map_at_1000
value: 24.81
- type: map_at_3
value: 21.554000000000002
- type: map_at_5
value: 22.768
- type: mrr_at_1
value: 20.096
- type: mrr_at_10
value: 27.230999999999998
- type: mrr_at_100
value: 28.083999999999996
- type: mrr_at_1000
value: 28.166000000000004
- type: mrr_at_3
value: 25.212
- type: mrr_at_5
value: 26.32
- type: ndcg_at_1
value: 20.096
- type: ndcg_at_10
value: 27.989000000000004
- type: ndcg_at_100
value: 32.847
- type: ndcg_at_1000
value: 35.896
- type: ndcg_at_3
value: 24.116
- type: ndcg_at_5
value: 25.964
- type: precision_at_1
value: 20.096
- type: precision_at_10
value: 5
- type: precision_at_100
value: 0.8750000000000001
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 11.207
- type: precision_at_5
value: 8.08
- type: recall_at_1
value: 16.829
- type: recall_at_10
value: 37.407000000000004
- type: recall_at_100
value: 59.101000000000006
- type: recall_at_1000
value: 81.024
- type: recall_at_3
value: 26.739
- type: recall_at_5
value: 31.524
- type: map_at_1
value: 24.138
- type: map_at_10
value: 32.275999999999996
- type: map_at_100
value: 33.416000000000004
- type: map_at_1000
value: 33.527
- type: map_at_3
value: 29.854000000000003
- type: map_at_5
value: 31.096
- type: mrr_at_1
value: 28.450999999999997
- type: mrr_at_10
value: 36.214
- type: mrr_at_100
value: 37.134
- type: mrr_at_1000
value: 37.198
- type: mrr_at_3
value: 34.001999999999995
- type: mrr_at_5
value: 35.187000000000005
- type: ndcg_at_1
value: 28.450999999999997
- type: ndcg_at_10
value: 37.166
- type: ndcg_at_100
value: 42.454
- type: ndcg_at_1000
value: 44.976
- type: ndcg_at_3
value: 32.796
- type: ndcg_at_5
value: 34.631
- type: precision_at_1
value: 28.450999999999997
- type: precision_at_10
value: 6.241
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 14.801
- type: precision_at_5
value: 10.280000000000001
- type: recall_at_1
value: 24.138
- type: recall_at_10
value: 48.111
- type: recall_at_100
value: 71.245
- type: recall_at_1000
value: 88.986
- type: recall_at_3
value: 36.119
- type: recall_at_5
value: 40.846
- type: map_at_1
value: 23.244
- type: map_at_10
value: 31.227
- type: map_at_100
value: 33.007
- type: map_at_1000
value: 33.223
- type: map_at_3
value: 28.924
- type: map_at_5
value: 30.017
- type: mrr_at_1
value: 27.668
- type: mrr_at_10
value: 35.524
- type: mrr_at_100
value: 36.699
- type: mrr_at_1000
value: 36.759
- type: mrr_at_3
value: 33.366
- type: mrr_at_5
value: 34.552
- type: ndcg_at_1
value: 27.668
- type: ndcg_at_10
value: 36.381
- type: ndcg_at_100
value: 43.062
- type: ndcg_at_1000
value: 45.656
- type: ndcg_at_3
value: 32.501999999999995
- type: ndcg_at_5
value: 34.105999999999995
- type: precision_at_1
value: 27.668
- type: precision_at_10
value: 6.798
- type: precision_at_100
value: 1.492
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 15.152
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.244
- type: recall_at_10
value: 45.979
- type: recall_at_100
value: 74.822
- type: recall_at_1000
value: 91.078
- type: recall_at_3
value: 34.925
- type: recall_at_5
value: 39.126
- type: map_at_1
value: 19.945
- type: map_at_10
value: 27.517999999999997
- type: map_at_100
value: 28.588
- type: map_at_1000
value: 28.682000000000002
- type: map_at_3
value: 25.345000000000002
- type: map_at_5
value: 26.555
- type: mrr_at_1
value: 21.996
- type: mrr_at_10
value: 29.845
- type: mrr_at_100
value: 30.775999999999996
- type: mrr_at_1000
value: 30.845
- type: mrr_at_3
value: 27.726
- type: mrr_at_5
value: 28.882
- type: ndcg_at_1
value: 21.996
- type: ndcg_at_10
value: 32.034
- type: ndcg_at_100
value: 37.185
- type: ndcg_at_1000
value: 39.645
- type: ndcg_at_3
value: 27.750999999999998
- type: ndcg_at_5
value: 29.805999999999997
- type: precision_at_1
value: 21.996
- type: precision_at_10
value: 5.065
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 12.076
- type: precision_at_5
value: 8.392
- type: recall_at_1
value: 19.945
- type: recall_at_10
value: 43.62
- type: recall_at_100
value: 67.194
- type: recall_at_1000
value: 85.7
- type: recall_at_3
value: 32.15
- type: recall_at_5
value: 37.208999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.279
- type: map_at_10
value: 31.052999999999997
- type: map_at_100
value: 33.125
- type: map_at_1000
value: 33.306000000000004
- type: map_at_3
value: 26.208
- type: map_at_5
value: 28.857
- type: mrr_at_1
value: 42.671
- type: mrr_at_10
value: 54.557
- type: mrr_at_100
value: 55.142
- type: mrr_at_1000
value: 55.169000000000004
- type: mrr_at_3
value: 51.488
- type: mrr_at_5
value: 53.439
- type: ndcg_at_1
value: 42.671
- type: ndcg_at_10
value: 41.276
- type: ndcg_at_100
value: 48.376000000000005
- type: ndcg_at_1000
value: 51.318
- type: ndcg_at_3
value: 35.068
- type: ndcg_at_5
value: 37.242
- type: precision_at_1
value: 42.671
- type: precision_at_10
value: 12.638
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 26.08
- type: precision_at_5
value: 19.805
- type: recall_at_1
value: 18.279
- type: recall_at_10
value: 46.946
- type: recall_at_100
value: 70.97200000000001
- type: recall_at_1000
value: 87.107
- type: recall_at_3
value: 31.147999999999996
- type: recall_at_5
value: 38.099
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.573
- type: map_at_10
value: 19.747
- type: map_at_100
value: 28.205000000000002
- type: map_at_1000
value: 29.831000000000003
- type: map_at_3
value: 14.109
- type: map_at_5
value: 16.448999999999998
- type: mrr_at_1
value: 71
- type: mrr_at_10
value: 77.68599999999999
- type: mrr_at_100
value: 77.995
- type: mrr_at_1000
value: 78.00200000000001
- type: mrr_at_3
value: 76.292
- type: mrr_at_5
value: 77.029
- type: ndcg_at_1
value: 59.12500000000001
- type: ndcg_at_10
value: 43.9
- type: ndcg_at_100
value: 47.863
- type: ndcg_at_1000
value: 54.848
- type: ndcg_at_3
value: 49.803999999999995
- type: ndcg_at_5
value: 46.317
- type: precision_at_1
value: 71
- type: precision_at_10
value: 34.4
- type: precision_at_100
value: 11.063
- type: precision_at_1000
value: 1.989
- type: precision_at_3
value: 52.333
- type: precision_at_5
value: 43.7
- type: recall_at_1
value: 8.573
- type: recall_at_10
value: 25.615
- type: recall_at_100
value: 53.385000000000005
- type: recall_at_1000
value: 75.46000000000001
- type: recall_at_3
value: 15.429
- type: recall_at_5
value: 19.357
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.989999999999995
- type: f1
value: 42.776314451497555
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.13499999999999
- type: map_at_10
value: 82.825
- type: map_at_100
value: 83.096
- type: map_at_1000
value: 83.111
- type: map_at_3
value: 81.748
- type: map_at_5
value: 82.446
- type: mrr_at_1
value: 79.553
- type: mrr_at_10
value: 86.654
- type: mrr_at_100
value: 86.774
- type: mrr_at_1000
value: 86.778
- type: mrr_at_3
value: 85.981
- type: mrr_at_5
value: 86.462
- type: ndcg_at_1
value: 79.553
- type: ndcg_at_10
value: 86.345
- type: ndcg_at_100
value: 87.32
- type: ndcg_at_1000
value: 87.58200000000001
- type: ndcg_at_3
value: 84.719
- type: ndcg_at_5
value: 85.677
- type: precision_at_1
value: 79.553
- type: precision_at_10
value: 10.402000000000001
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.413
- type: precision_at_5
value: 20.138
- type: recall_at_1
value: 74.13499999999999
- type: recall_at_10
value: 93.215
- type: recall_at_100
value: 97.083
- type: recall_at_1000
value: 98.732
- type: recall_at_3
value: 88.79
- type: recall_at_5
value: 91.259
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.298000000000002
- type: map_at_10
value: 29.901
- type: map_at_100
value: 31.528
- type: map_at_1000
value: 31.713
- type: map_at_3
value: 25.740000000000002
- type: map_at_5
value: 28.227999999999998
- type: mrr_at_1
value: 36.728
- type: mrr_at_10
value: 45.401
- type: mrr_at_100
value: 46.27
- type: mrr_at_1000
value: 46.315
- type: mrr_at_3
value: 42.978
- type: mrr_at_5
value: 44.29
- type: ndcg_at_1
value: 36.728
- type: ndcg_at_10
value: 37.456
- type: ndcg_at_100
value: 43.832
- type: ndcg_at_1000
value: 47
- type: ndcg_at_3
value: 33.694
- type: ndcg_at_5
value: 35.085
- type: precision_at_1
value: 36.728
- type: precision_at_10
value: 10.386
- type: precision_at_100
value: 1.701
- type: precision_at_1000
value: 0.22599999999999998
- type: precision_at_3
value: 22.479
- type: precision_at_5
value: 16.605
- type: recall_at_1
value: 18.298000000000002
- type: recall_at_10
value: 44.369
- type: recall_at_100
value: 68.098
- type: recall_at_1000
value: 87.21900000000001
- type: recall_at_3
value: 30.215999999999998
- type: recall_at_5
value: 36.861
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.568
- type: map_at_10
value: 65.061
- type: map_at_100
value: 65.896
- type: map_at_1000
value: 65.95100000000001
- type: map_at_3
value: 61.831
- type: map_at_5
value: 63.849000000000004
- type: mrr_at_1
value: 79.136
- type: mrr_at_10
value: 84.58200000000001
- type: mrr_at_100
value: 84.765
- type: mrr_at_1000
value: 84.772
- type: mrr_at_3
value: 83.684
- type: mrr_at_5
value: 84.223
- type: ndcg_at_1
value: 79.136
- type: ndcg_at_10
value: 72.622
- type: ndcg_at_100
value: 75.539
- type: ndcg_at_1000
value: 76.613
- type: ndcg_at_3
value: 68.065
- type: ndcg_at_5
value: 70.58
- type: precision_at_1
value: 79.136
- type: precision_at_10
value: 15.215
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 44.011
- type: precision_at_5
value: 28.388999999999996
- type: recall_at_1
value: 39.568
- type: recall_at_10
value: 76.077
- type: recall_at_100
value: 87.481
- type: recall_at_1000
value: 94.56400000000001
- type: recall_at_3
value: 66.01599999999999
- type: recall_at_5
value: 70.97200000000001
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.312
- type: ap
value: 80.36296867333715
- type: f1
value: 85.26613311552218
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.363999999999997
- type: map_at_10
value: 35.711999999999996
- type: map_at_100
value: 36.876999999999995
- type: map_at_1000
value: 36.923
- type: map_at_3
value: 32.034
- type: map_at_5
value: 34.159
- type: mrr_at_1
value: 24.04
- type: mrr_at_10
value: 36.345
- type: mrr_at_100
value: 37.441
- type: mrr_at_1000
value: 37.480000000000004
- type: mrr_at_3
value: 32.713
- type: mrr_at_5
value: 34.824
- type: ndcg_at_1
value: 24.026
- type: ndcg_at_10
value: 42.531
- type: ndcg_at_100
value: 48.081
- type: ndcg_at_1000
value: 49.213
- type: ndcg_at_3
value: 35.044
- type: ndcg_at_5
value: 38.834
- type: precision_at_1
value: 24.026
- type: precision_at_10
value: 6.622999999999999
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.909
- type: precision_at_5
value: 10.871
- type: recall_at_1
value: 23.363999999999997
- type: recall_at_10
value: 63.426
- type: recall_at_100
value: 88.96300000000001
- type: recall_at_1000
value: 97.637
- type: recall_at_3
value: 43.095
- type: recall_at_5
value: 52.178000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.0095759233926
- type: f1
value: 92.78387794667408
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.0296397628819
- type: f1
value: 58.45699589820874
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.45662407531944
- type: f1
value: 71.42364781421813
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07800941492937
- type: f1
value: 77.22799045640845
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.531234379250606
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.941490381193802
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.3115090856725
- type: mrr
value: 31.290667638675757
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.465
- type: map_at_10
value: 13.03
- type: map_at_100
value: 16.057
- type: map_at_1000
value: 17.49
- type: map_at_3
value: 9.553
- type: map_at_5
value: 11.204
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 53.269
- type: mrr_at_100
value: 53.72
- type: mrr_at_1000
value: 53.761
- type: mrr_at_3
value: 50.929
- type: mrr_at_5
value: 52.461
- type: ndcg_at_1
value: 42.26
- type: ndcg_at_10
value: 34.673
- type: ndcg_at_100
value: 30.759999999999998
- type: ndcg_at_1000
value: 39.728
- type: ndcg_at_3
value: 40.349000000000004
- type: ndcg_at_5
value: 37.915
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.789
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.596000000000004
- type: precision_at_5
value: 33.251
- type: recall_at_1
value: 5.465
- type: recall_at_10
value: 17.148
- type: recall_at_100
value: 29.768
- type: recall_at_1000
value: 62.239
- type: recall_at_3
value: 10.577
- type: recall_at_5
value: 13.315
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.008
- type: map_at_10
value: 52.467
- type: map_at_100
value: 53.342999999999996
- type: map_at_1000
value: 53.366
- type: map_at_3
value: 48.412
- type: map_at_5
value: 50.875
- type: mrr_at_1
value: 41.541
- type: mrr_at_10
value: 54.967
- type: mrr_at_100
value: 55.611
- type: mrr_at_1000
value: 55.627
- type: mrr_at_3
value: 51.824999999999996
- type: mrr_at_5
value: 53.763000000000005
- type: ndcg_at_1
value: 41.541
- type: ndcg_at_10
value: 59.724999999999994
- type: ndcg_at_100
value: 63.38700000000001
- type: ndcg_at_1000
value: 63.883
- type: ndcg_at_3
value: 52.331
- type: ndcg_at_5
value: 56.327000000000005
- type: precision_at_1
value: 41.541
- type: precision_at_10
value: 9.447
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.262
- type: precision_at_5
value: 16.314999999999998
- type: recall_at_1
value: 37.008
- type: recall_at_10
value: 79.145
- type: recall_at_100
value: 94.986
- type: recall_at_1000
value: 98.607
- type: recall_at_3
value: 60.277
- type: recall_at_5
value: 69.407
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.402
- type: map_at_10
value: 84.181
- type: map_at_100
value: 84.796
- type: map_at_1000
value: 84.81400000000001
- type: map_at_3
value: 81.209
- type: map_at_5
value: 83.085
- type: mrr_at_1
value: 81.02000000000001
- type: mrr_at_10
value: 87.263
- type: mrr_at_100
value: 87.36
- type: mrr_at_1000
value: 87.36
- type: mrr_at_3
value: 86.235
- type: mrr_at_5
value: 86.945
- type: ndcg_at_1
value: 81.01
- type: ndcg_at_10
value: 87.99900000000001
- type: ndcg_at_100
value: 89.217
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 85.053
- type: ndcg_at_5
value: 86.703
- type: precision_at_1
value: 81.01
- type: precision_at_10
value: 13.336
- type: precision_at_100
value: 1.52
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 24.44
- type: recall_at_1
value: 70.402
- type: recall_at_10
value: 95.214
- type: recall_at_100
value: 99.438
- type: recall_at_1000
value: 99.928
- type: recall_at_3
value: 86.75699999999999
- type: recall_at_5
value: 91.44099999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.51721502758904
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.054808572333016
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.578
- type: map_at_10
value: 11.036999999999999
- type: map_at_100
value: 12.879999999999999
- type: map_at_1000
value: 13.150999999999998
- type: map_at_3
value: 8.133
- type: map_at_5
value: 9.559
- type: mrr_at_1
value: 22.6
- type: mrr_at_10
value: 32.68
- type: mrr_at_100
value: 33.789
- type: mrr_at_1000
value: 33.854
- type: mrr_at_3
value: 29.7
- type: mrr_at_5
value: 31.480000000000004
- type: ndcg_at_1
value: 22.6
- type: ndcg_at_10
value: 18.616
- type: ndcg_at_100
value: 25.883
- type: ndcg_at_1000
value: 30.944
- type: ndcg_at_3
value: 18.136
- type: ndcg_at_5
value: 15.625
- type: precision_at_1
value: 22.6
- type: precision_at_10
value: 9.48
- type: precision_at_100
value: 1.991
- type: precision_at_1000
value: 0.321
- type: precision_at_3
value: 16.8
- type: precision_at_5
value: 13.54
- type: recall_at_1
value: 4.578
- type: recall_at_10
value: 19.213
- type: recall_at_100
value: 40.397
- type: recall_at_1000
value: 65.2
- type: recall_at_3
value: 10.208
- type: recall_at_5
value: 13.718
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.44288351714071
- type: cos_sim_spearman
value: 79.37995604564952
- type: euclidean_pearson
value: 81.1078874670718
- type: euclidean_spearman
value: 79.37995905980499
- type: manhattan_pearson
value: 81.03697527288986
- type: manhattan_spearman
value: 79.33490235296236
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.95557650436523
- type: cos_sim_spearman
value: 78.5190672399868
- type: euclidean_pearson
value: 81.58064025904707
- type: euclidean_spearman
value: 78.5190672399868
- type: manhattan_pearson
value: 81.52857930619889
- type: manhattan_spearman
value: 78.50421361308034
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.79128416228737
- type: cos_sim_spearman
value: 86.05402451477147
- type: euclidean_pearson
value: 85.46280267054289
- type: euclidean_spearman
value: 86.05402451477147
- type: manhattan_pearson
value: 85.46278563858236
- type: manhattan_spearman
value: 86.08079590861004
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.20623089568763
- type: cos_sim_spearman
value: 81.53786907061009
- type: euclidean_pearson
value: 82.82272250091494
- type: euclidean_spearman
value: 81.53786907061009
- type: manhattan_pearson
value: 82.78850494027013
- type: manhattan_spearman
value: 81.5135618083407
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.46366618397936
- type: cos_sim_spearman
value: 86.96566013336908
- type: euclidean_pearson
value: 86.62651697548931
- type: euclidean_spearman
value: 86.96565526364454
- type: manhattan_pearson
value: 86.58812160258009
- type: manhattan_spearman
value: 86.9336484321288
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.51858358641559
- type: cos_sim_spearman
value: 84.7652527954999
- type: euclidean_pearson
value: 84.23914783766861
- type: euclidean_spearman
value: 84.7652527954999
- type: manhattan_pearson
value: 84.22749648503171
- type: manhattan_spearman
value: 84.74527996746386
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.28026563313065
- type: cos_sim_spearman
value: 87.46928143824915
- type: euclidean_pearson
value: 88.30558762000372
- type: euclidean_spearman
value: 87.46928143824915
- type: manhattan_pearson
value: 88.10513330809331
- type: manhattan_spearman
value: 87.21069787834173
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.376497134587375
- type: cos_sim_spearman
value: 65.0159550112516
- type: euclidean_pearson
value: 65.64572120879598
- type: euclidean_spearman
value: 65.0159550112516
- type: manhattan_pearson
value: 65.88143604989976
- type: manhattan_spearman
value: 65.17547297222434
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.22876368947644
- type: cos_sim_spearman
value: 85.46935577445318
- type: euclidean_pearson
value: 85.32830231392005
- type: euclidean_spearman
value: 85.46935577445318
- type: manhattan_pearson
value: 85.30353211758495
- type: manhattan_spearman
value: 85.42821085956945
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.60986667767133
- type: mrr
value: 94.29432314236236
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.528
- type: map_at_10
value: 65.187
- type: map_at_100
value: 65.62599999999999
- type: map_at_1000
value: 65.657
- type: map_at_3
value: 62.352
- type: map_at_5
value: 64.025
- type: mrr_at_1
value: 57.333
- type: mrr_at_10
value: 66.577
- type: mrr_at_100
value: 66.88
- type: mrr_at_1000
value: 66.908
- type: mrr_at_3
value: 64.556
- type: mrr_at_5
value: 65.739
- type: ndcg_at_1
value: 57.333
- type: ndcg_at_10
value: 70.275
- type: ndcg_at_100
value: 72.136
- type: ndcg_at_1000
value: 72.963
- type: ndcg_at_3
value: 65.414
- type: ndcg_at_5
value: 67.831
- type: precision_at_1
value: 57.333
- type: precision_at_10
value: 9.5
- type: precision_at_100
value: 1.057
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.778000000000002
- type: precision_at_5
value: 17.2
- type: recall_at_1
value: 54.528
- type: recall_at_10
value: 84.356
- type: recall_at_100
value: 92.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.283
- type: recall_at_5
value: 77.14999999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74158415841585
- type: cos_sim_ap
value: 92.90048959850317
- type: cos_sim_f1
value: 86.35650810245687
- type: cos_sim_precision
value: 90.4709748083242
- type: cos_sim_recall
value: 82.6
- type: dot_accuracy
value: 99.74158415841585
- type: dot_ap
value: 92.90048959850317
- type: dot_f1
value: 86.35650810245687
- type: dot_precision
value: 90.4709748083242
- type: dot_recall
value: 82.6
- type: euclidean_accuracy
value: 99.74158415841585
- type: euclidean_ap
value: 92.90048959850317
- type: euclidean_f1
value: 86.35650810245687
- type: euclidean_precision
value: 90.4709748083242
- type: euclidean_recall
value: 82.6
- type: manhattan_accuracy
value: 99.74158415841585
- type: manhattan_ap
value: 92.87344692947894
- type: manhattan_f1
value: 86.38497652582159
- type: manhattan_precision
value: 90.29443838604145
- type: manhattan_recall
value: 82.8
- type: max_accuracy
value: 99.74158415841585
- type: max_ap
value: 92.90048959850317
- type: max_f1
value: 86.38497652582159
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 63.191648770424216
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.02944668730218
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.466386167525265
- type: mrr
value: 51.19071492233257
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.198022505886435
- type: cos_sim_spearman
value: 30.40170257939193
- type: dot_pearson
value: 30.198015316402614
- type: dot_spearman
value: 30.40170257939193
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.242
- type: map_at_10
value: 2.17
- type: map_at_100
value: 12.221
- type: map_at_1000
value: 28.63
- type: map_at_3
value: 0.728
- type: map_at_5
value: 1.185
- type: mrr_at_1
value: 94
- type: mrr_at_10
value: 97
- type: mrr_at_100
value: 97
- type: mrr_at_1000
value: 97
- type: mrr_at_3
value: 97
- type: mrr_at_5
value: 97
- type: ndcg_at_1
value: 89
- type: ndcg_at_10
value: 82.30499999999999
- type: ndcg_at_100
value: 61.839999999999996
- type: ndcg_at_1000
value: 53.381
- type: ndcg_at_3
value: 88.877
- type: ndcg_at_5
value: 86.05199999999999
- type: precision_at_1
value: 94
- type: precision_at_10
value: 87
- type: precision_at_100
value: 63.38
- type: precision_at_1000
value: 23.498
- type: precision_at_3
value: 94
- type: precision_at_5
value: 92
- type: recall_at_1
value: 0.242
- type: recall_at_10
value: 2.302
- type: recall_at_100
value: 14.979000000000001
- type: recall_at_1000
value: 49.638
- type: recall_at_3
value: 0.753
- type: recall_at_5
value: 1.226
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.006
- type: map_at_10
value: 11.805
- type: map_at_100
value: 18.146
- type: map_at_1000
value: 19.788
- type: map_at_3
value: 5.914
- type: map_at_5
value: 8.801
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 56.36600000000001
- type: mrr_at_100
value: 56.721999999999994
- type: mrr_at_1000
value: 56.721999999999994
- type: mrr_at_3
value: 52.041000000000004
- type: mrr_at_5
value: 54.796
- type: ndcg_at_1
value: 37.755
- type: ndcg_at_10
value: 29.863
- type: ndcg_at_100
value: 39.571
- type: ndcg_at_1000
value: 51.385999999999996
- type: ndcg_at_3
value: 32.578
- type: ndcg_at_5
value: 32.351
- type: precision_at_1
value: 40.816
- type: precision_at_10
value: 26.531
- type: precision_at_100
value: 7.796
- type: precision_at_1000
value: 1.555
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.006
- type: recall_at_10
value: 18.738
- type: recall_at_100
value: 48.058
- type: recall_at_1000
value: 83.41300000000001
- type: recall_at_3
value: 7.166
- type: recall_at_5
value: 12.102
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.4178
- type: ap
value: 14.648781342150446
- type: f1
value: 55.07299194946378
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.919637804187886
- type: f1
value: 61.24122013967399
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.207896583685695
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.23114978840078
- type: cos_sim_ap
value: 74.26624727825818
- type: cos_sim_f1
value: 68.72377190817083
- type: cos_sim_precision
value: 64.56400742115028
- type: cos_sim_recall
value: 73.45646437994723
- type: dot_accuracy
value: 86.23114978840078
- type: dot_ap
value: 74.26624032659652
- type: dot_f1
value: 68.72377190817083
- type: dot_precision
value: 64.56400742115028
- type: dot_recall
value: 73.45646437994723
- type: euclidean_accuracy
value: 86.23114978840078
- type: euclidean_ap
value: 74.26624714480556
- type: euclidean_f1
value: 68.72377190817083
- type: euclidean_precision
value: 64.56400742115028
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.16558383501221
- type: manhattan_ap
value: 74.2091943976357
- type: manhattan_f1
value: 68.64221520524654
- type: manhattan_precision
value: 63.59135913591359
- type: manhattan_recall
value: 74.5646437994723
- type: max_accuracy
value: 86.23114978840078
- type: max_ap
value: 74.26624727825818
- type: max_f1
value: 68.72377190817083
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.3681841114604
- type: cos_sim_ap
value: 86.65166387498546
- type: cos_sim_f1
value: 79.02581944698774
- type: cos_sim_precision
value: 75.35796605434099
- type: cos_sim_recall
value: 83.06898675700647
- type: dot_accuracy
value: 89.3681841114604
- type: dot_ap
value: 86.65166019802056
- type: dot_f1
value: 79.02581944698774
- type: dot_precision
value: 75.35796605434099
- type: dot_recall
value: 83.06898675700647
- type: euclidean_accuracy
value: 89.3681841114604
- type: euclidean_ap
value: 86.65166462876266
- type: euclidean_f1
value: 79.02581944698774
- type: euclidean_precision
value: 75.35796605434099
- type: euclidean_recall
value: 83.06898675700647
- type: manhattan_accuracy
value: 89.36624364497226
- type: manhattan_ap
value: 86.65076471274106
- type: manhattan_f1
value: 79.07408783532733
- type: manhattan_precision
value: 76.41102972856527
- type: manhattan_recall
value: 81.92947336002464
- type: max_accuracy
value: 89.3681841114604
- type: max_ap
value: 86.65166462876266
- type: max_f1
value: 79.07408783532733
---
# zyj2003lj/nomic-embed-text-v1.5-Q4_K_M-GGUF
This model was converted to GGUF format from [`nomic-ai/nomic-embed-text-v1.5`](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zyj2003lj/nomic-embed-text-v1.5-Q4_K_M-GGUF --hf-file nomic-embed-text-v1.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zyj2003lj/nomic-embed-text-v1.5-Q4_K_M-GGUF --hf-file nomic-embed-text-v1.5-q4_k_m.gguf -c 2048
```
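Since this is an embedding model rather than a text-generation model, you will typically want the server's embedding endpoint. This is a hedged sketch: depending on your llama.cpp version, the flag may be spelled `--embedding` or `--embeddings`.
```bash
llama-server --hf-repo zyj2003lj/nomic-embed-text-v1.5-Q4_K_M-GGUF --hf-file nomic-embed-text-v1.5-q4_k_m.gguf -c 2048 --embedding
```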
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zyj2003lj/nomic-embed-text-v1.5-Q4_K_M-GGUF --hf-file nomic-embed-text-v1.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zyj2003lj/nomic-embed-text-v1.5-Q4_K_M-GGUF --hf-file nomic-embed-text-v1.5-q4_k_m.gguf -c 2048
```
| [
"SUMMARIZATION"
] | Non_BioNLP |
ymoslem/whisper-small-ga2en-v5.4-r | ymoslem | automatic-speech-recognition | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"dataset:ymoslem/Tatoeba-Speech-Irish",
"dataset:ymoslem/Wikimedia-Speech-Irish",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 1,715,388,929,000 | 2024-05-11T18:28:20 | 6 | 0 | ---
base_model: openai/whisper-small
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
language:
- ga
- en
license: apache-2.0
metrics:
- bleu
- wer
- chrf
tags:
- generated_from_trainer
model-index:
- name: Whisper Small GA-EN Speech Translation
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia + augmented
type: ymoslem/IWSLT2023-GA-EN
metrics:
- type: bleu
value: 28.44
name: Bleu
- type: wer
value: 72.62494371904548
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small GA-EN Speech Translation
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia + augmented dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3641
- Bleu: 28.44
- Chrf: 43.55
- Wer: 72.6249
## Model description
More information needed
## Intended uses & limitations
More information needed
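A minimal usage sketch (not part of the original card) for translating Irish speech to English text with the transformers pipeline; `audio.wav` is a placeholder path for a 16 kHz Irish-language recording:
```python
from transformers import pipeline

# Hedged sketch: the fine-tuned model emits English text for Irish audio input.
translator = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-small-ga2en-v5.4-r",
)
result = translator("audio.wav")  # placeholder audio file
print(result["text"])  # English translation of the Irish speech
```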
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- warmup_steps: 0
- training_steps: 3000
- mixed_precision_training: Native AMP
- generation_max_length: 128
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Wer |
|:-------------:|:------:|:----:|:---------------:|:-----:|:-----:|:--------:|
| 2.3595 | 0.0438 | 100 | 1.7944 | 9.69 | 26.37 | 114.4529 |
| 1.9008 | 0.0876 | 200 | 1.5391 | 14.89 | 32.44 | 93.6065 |
| 1.535 | 0.1313 | 300 | 1.3972 | 18.24 | 33.57 | 81.9901 |
| 1.3307 | 0.1751 | 400 | 1.3684 | 21.34 | 37.37 | 72.8050 |
| 1.1263 | 0.2189 | 500 | 1.3284 | 19.33 | 39.83 | 91.8955 |
| 0.9805 | 0.2627 | 600 | 1.3301 | 23.67 | 38.68 | 78.3881 |
| 0.8989 | 0.3065 | 700 | 1.3123 | 20.32 | 36.94 | 76.3170 |
| 0.7557 | 0.3503 | 800 | 1.2717 | 25.74 | 40.16 | 72.4448 |
| 0.7216 | 0.3940 | 900 | 1.3090 | 22.34 | 37.79 | 78.9284 |
| 0.6131 | 0.4378 | 1000 | 1.2566 | 24.36 | 41.49 | 74.5160 |
| 0.5032 | 0.4816 | 1100 | 1.2742 | 21.69 | 41.12 | 83.3859 |
| 0.4567 | 0.5254 | 1200 | 1.2893 | 24.33 | 40.05 | 70.8690 |
| 0.3968 | 0.5692 | 1300 | 1.3000 | 26.97 | 41.45 | 69.6083 |
| 0.3353 | 0.6130 | 1400 | 1.2784 | 27.51 | 43.97 | 63.9352 |
| 0.2826 | 0.6567 | 1500 | 1.3165 | 24.36 | 39.83 | 70.6439 |
| 0.2643 | 0.7005 | 1600 | 1.3317 | 24.98 | 40.01 | 68.6628 |
| 0.2047 | 0.7443 | 1700 | 1.2905 | 28.01 | 42.72 | 65.8262 |
| 0.1946 | 0.7881 | 1800 | 1.2820 | 26.17 | 42.46 | 64.9257 |
| 0.1588 | 0.8319 | 1900 | 1.3172 | 26.9 | 43.02 | 63.5299 |
| 0.1322 | 0.8757 | 2000 | 1.3248 | 27.78 | 43.53 | 63.8001 |
| 0.1134 | 0.9194 | 2100 | 1.3198 | 28.98 | 45.27 | 72.7600 |
| 0.1031 | 0.9632 | 2200 | 1.3502 | 29.18 | 44.77 | 68.3476 |
| 0.0518 | 1.0070 | 2300 | 1.3433 | 28.6 | 42.96 | 69.0230 |
| 0.0481 | 1.0508 | 2400 | 1.3715 | 29.01 | 44.46 | 69.6983 |
| 0.0367 | 1.0946 | 2500 | 1.3696 | 26.94 | 42.39 | 73.6605 |
| 0.0309 | 1.1384 | 2600 | 1.3665 | 28.12 | 43.32 | 70.3737 |
| 0.0302 | 1.1821 | 2700 | 1.3836 | 29.6 | 44.56 | 67.2220 |
| 0.0302 | 1.2259 | 2800 | 1.3667 | 29.0 | 44.33 | 67.2220 |
| 0.0252 | 1.2697 | 2900 | 1.3633 | 29.07 | 44.09 | 70.6889 |
| 0.0257 | 1.3135 | 3000 | 1.3641 | 28.44 | 43.55 | 72.6249 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 | [
"TRANSLATION"
] | Non_BioNLP |
RichardErkhov/dwojcik_-_gpt2-large-fine-tuned-context-256-8bits | RichardErkhov | null | [
"safetensors",
"gpt2",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,741,504,676,000 | 2025-03-09T07:18:29 | 2 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-large-fine-tuned-context-256 - bnb 8bits
- Model creator: https://huggingface.co/dwojcik/
- Original model: https://huggingface.co/dwojcik/gpt2-large-fine-tuned-context-256/
Original model description:
---
license: apache-2.0
language:
- en
library_name: transformers
---
# Model description
This model is a fine-tuned version of the GPT-2 Large pre-trained model, specifically adapted for text analysis on articles from the Przegląd Elektrotechniczny journal's 2023 edition. The fine-tuning process was designed to provide valuable insights into the journal's content, helping identify prevalent topics, themes, and ideas discussed in the articles.
## Model Description
This model is based on GPT-2 Large, which has 774 million parameters and is designed for a wide range of Natural Language Processing tasks, including text generation, summarization, and classification. The base GPT-2 model has achieved state-of-the-art results on various NLP benchmarks and has become a standard in the field.
## Fine-tuning Details
- Dataset: The dataset consists of the text from all the articles published in the 2023 edition of the Przegląd Elektrotechniczny journal. The text data was cleaned and preprocessed before fine-tuning.
- Tokenizer: The Hugging Face AutoTokenizer for gpt2-large was used to tokenize the cleaned texts.
- Training: The model was trained for 10 epochs, using the Adam optimizer with a learning rate of 3e-4 and gradient accumulation steps of 8.
## Usage
This fine-tuned model can be used to generate text based on prompts related to the Przegląd Elektrotechniczny journal's content. Below is an example of how to use this model for text generation:
```python
from transformers import pipeline
generator = pipeline(
"text-generation",
model="dwojcik/gpt2-large-fine-tuned-context-256",
device=0, # for GPU, set to -1 if using CPU
)
generated_text = generator("Input your prompt here", max_length=100, num_return_sequences=1)[0]['generated_text']
print(generated_text)
```
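Since this repository hosts the 8-bit bitsandbytes quantization rather than the original full-precision weights, loading it requires `bitsandbytes` to be installed and a CUDA GPU. A hedged sketch, assuming the pre-quantized checkpoint loads directly with `from_pretrained`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/dwojcik_-_gpt2-large-fine-tuned-context-256-8bits"
# Hedged sketch: pre-quantized bitsandbytes checkpoints load like any other
# checkpoint; device_map="auto" places the weights on the available GPU.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
inputs = tokenizer("Input your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```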
## Limitations
Although the model has been fine-tuned specifically for text analysis and generation based on the Przegląd Elektrotechniczny journal, its accuracy and appropriateness may vary depending on the prompt and context. It may also perform less well on prompts and text-related tasks from other domains.
| [
"SUMMARIZATION"
] | Non_BioNLP |
TransferGraph/yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:yukta10/finetuning-sentiment-model-3000-samples",
"base_model:adapter:yukta10/finetuning-sentiment-model-3000-samples",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,709,053,524,000 | 2024-02-29T13:26:52 | 0 | 0 | ---
base_model: yukta10/finetuning-sentiment-model-3000-samples
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.6722513089005235
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [yukta10/finetuning-sentiment-model-3000-samples](https://huggingface.co/yukta10/finetuning-sentiment-model-3000-samples) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6723
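Because this is a PEFT (LoRA) adapter rather than a full model, inference loads the adapter on top of its base model. Below is a minimal sketch; the label mapping is assumed from the tweet_eval `irony` config (0 = non-irony, 1 = irony):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "yukta10/finetuning-sentiment-model-3000-samples"
adapter_id = "TransferGraph/yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_irony"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("What a totally believable excuse.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 = non-irony, 1 = irony (assumed)
```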
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5602 | None | 0 |
| 0.6283 | 0.6680 | 0 |
| 0.6461 | 0.6166 | 1 |
| 0.6618 | 0.5835 | 2 |
| 0.6513 | 0.5518 | 3 |
| 0.6597 | 0.5271 | 4 |
| 0.6471 | 0.5118 | 5 |
| 0.6712 | 0.4909 | 6 |
| 0.6723 | 0.4818 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Zhouzk/distilbert-base-uncased_emotion_ft_0520 | Zhouzk | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,349,010,000 | 2023-05-30T03:39:42 | 12 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
- precision
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased_emotion_ft_0520
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.921
name: Accuracy
- type: f1
value: 0.9210292348449035
name: F1
- type: precision
value: 0.8947253235549543
name: Precision
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_0520
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2557
- Accuracy: 0.921
- F1: 0.9210
- Precision: 0.8947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| No log | 1.0 | 63 | 0.9006 | 0.7055 | 0.6410 | 0.4988 |
| No log | 2.0 | 126 | 0.4166 | 0.8805 | 0.8729 | 0.8886 |
| No log | 3.0 | 189 | 0.2844 | 0.9175 | 0.9167 | 0.8987 |
| 0.6518 | 4.0 | 252 | 0.2557 | 0.921 | 0.9210 | 0.8947 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gokuls/bert_uncased_L-12_H-768_A-12_massive | gokuls | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:google/bert_uncased_L-12_H-768_A-12",
"base_model:finetune:google/bert_uncased_L-12_H-768_A-12",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,696,618,746,000 | 2023-10-06T19:06:54 | 5 | 0 | ---
base_model: google/bert_uncased_L-12_H-768_A-12
datasets:
- massive
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_uncased_L-12_H-768_A-12_massive
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
metrics:
- type: accuracy
value: 0.8927693064436792
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-12_H-768_A-12_massive
This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4667
- Accuracy: 0.8928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9107 | 1.0 | 180 | 0.7545 | 0.8519 |
| 0.5798 | 2.0 | 360 | 0.5206 | 0.8765 |
| 0.3042 | 3.0 | 540 | 0.4667 | 0.8928 |
| 0.1797 | 4.0 | 720 | 0.5197 | 0.8815 |
| 0.111 | 5.0 | 900 | 0.5295 | 0.8829 |
| 0.0713 | 6.0 | 1080 | 0.5471 | 0.8893 |
| 0.0464 | 7.0 | 1260 | 0.5842 | 0.8893 |
| 0.0296 | 8.0 | 1440 | 0.5834 | 0.8903 |
| 0.0197 | 9.0 | 1620 | 0.5980 | 0.8903 |
| 0.0141 | 10.0 | 1800 | 0.6135 | 0.8908 |
| 0.0107 | 11.0 | 1980 | 0.6353 | 0.8898 |
| 0.0077 | 12.0 | 2160 | 0.6285 | 0.8874 |
| 0.0062 | 13.0 | 2340 | 0.6248 | 0.8908 |
| 0.0048 | 14.0 | 2520 | 0.6328 | 0.8908 |
| 0.0041 | 15.0 | 2700 | 0.6381 | 0.8893 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
claudios/unixcoder-base-unimodal | claudios | feature-extraction | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"code",
"arxiv:2203.03850",
"arxiv:1910.09700",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,717,445,391,000 | 2024-06-03T20:36:03 | 8 | 0 | ---
language:
- code
library_name: transformers
license: apache-2.0
---
# UniXcoder Base Unimodal
This is an *unofficial* reupload of [microsoft/unixcoder-base-unimodal](https://huggingface.co/microsoft/unixcoder-base-unimodal) in the `SafeTensors` format using `transformers` `4.41.2`. The goal of this reupload is to prevent older models that are still relevant baselines from becoming stale as a result of changes in HuggingFace. Additionally, I may include minor corrections, such as model max length configuration.
## Properties
| Property | Value |
| :--- | :--- |
| Number Of Parameters | 124,842,240|
| Torch Dtype | Float32 |
| Architectures |RobertaModel|
| Bos Token Id | 0 |
| Pad Token Id | 1 |
| Eos Token Id | 2 |
| Transformers Version | 4.41.2 |
| Model Type | Roberta |
| Vocab Size | 50,000 |
| Hidden Size | 768 |
| Num Hidden Layers | 12 |
| Num Attention Heads | 12 |
| Hidden Act | Gelu |
| Intermediate Size | 3,072 |
| Hidden Dropout Prob | 0.10 |
|Attention Probs Dropout Prob| 0.10 |
| Max Position Embeddings | 1,026 |
| Type Vocab Size | 10 |
| Initializer Range | 0.02 |
| Layer Norm Eps | 0.00 |
| Position Embedding Type | Absolute |
Original model card of `unixcoder-base` below:
---
# Model Card for UniXcoder-base
# Model Details
## Model Description
UniXcoder is a unified cross-modal pre-trained model that leverages multimodal data (i.e., code comments and ASTs) to pretrain code representations.
- **Developed by:** Microsoft Team
- **Shared by [Optional]:** Hugging Face
- **Model type:** Feature Engineering
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Related Models:**
- **Parent Model:** RoBERTa
- **Resources for more information:**
- [Associated Paper](https://arxiv.org/abs/2203.03850)
# Uses
## Direct Use
Feature Engineering
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
More information needed
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
The model creators note in the [associated paper](https://arxiv.org/abs/2203.03850):
> UniXcoder has slightly worse BLEU-4 scores on both code summarization and generation tasks. The main reasons may come from two aspects. One is the amount of NL-PL pairs in the pre-training data
### Metrics
The model creators note in the [associated paper](https://arxiv.org/abs/2203.03850):
> We evaluate UniXcoder on five tasks over nine public datasets, including two understanding tasks, two generation tasks and an autoregressive task. To further evaluate the performance of code fragment embeddings, we also propose a new task called zero-shot code-to-code search.
## Results
The model creators note in the [associated paper](https://arxiv.org/abs/2203.03850):
>Taking zero-shot code-code search task as an example, after removing contrastive learning, the performance drops from 20.45% to 13.73%.
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@misc{https://doi.org/10.48550/arxiv.2203.03850,
doi = {10.48550/ARXIV.2203.03850},
url = {https://arxiv.org/abs/2203.03850},
author = {Guo, Daya and Lu, Shuai and Duan, Nan and Wang, Yanlin and Zhou, Ming and Yin, Jian},
keywords = {Computation and Language (cs.CL), Programming Languages (cs.PL), Software Engineering (cs.SE), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {UniXcoder: Unified Cross-Modal Pre-training for Code Representation},
publisher = {arXiv},
year = {2022}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Microsoft Team in collaboration with Ezi Ozoani and the Hugging Face Team.
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base")
model = AutoModel.from_pretrained("microsoft/unixcoder-base")
```
</details> | [
"SUMMARIZATION"
] | Non_BioNLP |
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo | mbeukman | token-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"luo",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-11-25T09:04:58 | 22 | 0 | ---
datasets:
- masakhaner
language:
- luo
metrics:
- f1
- precision
- recall
tags:
- NER
widget:
- text: "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e\
\ masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e\
\ sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-luo
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Luo part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
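As a rough sketch, that configuration would translate into Hugging Face `TrainingArguments` along these lines; the dataset preparation is omitted here, and the repository linked above contains the actual training scripts:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlmr-swahili-ner-luo",
    num_train_epochs=50,             # 50 epochs, as described above
    per_device_train_batch_size=32,  # batch size 32
    learning_rate=5e-5,              # learning rate 5e-5
    seed=42,                         # one of the 5 random seeds tried
)
# The maximum sequence length of 200 is applied at tokenization time, e.g.:
# tokenizer(tokens, truncation=True, max_length=200, is_split_into_words=True)
```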
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14 GB of GPU memory was required, although it was just possible to fit these models in around 6.5 GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words and entities that were not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | luo | 78.13 | 77.75 | 78.52 | 65.00 | 82.00 | 61.00 | 89.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | luo | 78.71 | 78.91 | 78.52 | 72.00 | 84.00 | 59.00 | 87.00 |
| [xlm-roberta-base-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luo) | [base](https://huggingface.co/xlm-roberta-base) | luo | 75.99 | 76.18 | 75.80 | 71.00 | 76.00 | 62.00 | 85.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Any of the related checkpoints in the table above can be substituted here.
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# The token-classification pipeline returns one dict per recognised entity token.
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"

ner_results = nlp(example)
print(ner_results)
```
| [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
google/t5-efficient-large-nl32 | google | text2text-generation | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,646,263,745,000 | 2023-01-24T16:47:34 | 16 | 1 | ---
datasets:
- c4
language:
- en
license: apache-2.0
tags:
- deep-narrow
inference: false
---
# T5-Efficient-LARGE-NL32 (Deep-Narrow version)
T5-Efficient-LARGE-NL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-large-nl32** - is of model type **Large** with the following variations:
- **nl** is **32**
It has **972.67** million parameters and thus requires *ca.* **3890.66 MB** of memory in full precision (*fp32*)
or **1945.33 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
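Concretely, span corruption replaces contiguous spans of input tokens with sentinel tokens and trains the decoder to emit only the dropped spans. A sketch of one such training pair (illustrative, not the actual pretraining code):

```python
# Original text:
original = "Thank you for inviting me to your party last week."
# Encoder input with two spans masked by sentinel tokens:
model_input = "Thank you <extra_id_0> me to your party <extra_id_1> week."
# Decoder target reconstructing only the dropped spans, delimited by sentinels:
target = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"
```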
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
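Before adapting one of the recipes above, the checkpoint loads like any other T5 model; a minimal sketch:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-large-nl32")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-large-nl32")

# Pretrained-only checkpoint: generations are only meaningful after fine-tuning.
inputs = tokenizer("summarize: This is a placeholder input.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```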
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-76083984 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-76083984",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,717,009,867,000 | 2024-05-29T19:11:43 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-76083984
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-76083984',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
AjayMukundS/Llama-2-7b-LTS-finetuned-v3 | AjayMukundS | summarization | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"text-generation-inference",
"summarization",
"en",
"dataset:AjayMukundS/LTS_Dataset_Reformatted",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,724,644,870,000 | 2024-08-27T00:55:24 | 22 | 1 | ---
base_model: meta-llama/Llama-2-7b-hf
datasets:
- AjayMukundS/LTS_Dataset_Reformatted
language:
- en
library_name: transformers
license: mit
metrics:
- rouge
pipeline_tag: summarization
tags:
- legal
- text-generation-inference
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
luccidomingues/autotrain-8fohv-7gjpn | luccidomingues | text-classification | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:autotrain-8fohv-7gjpn/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,708,616,097,000 | 2024-02-22T15:35:12 | 6 | 0 | ---
datasets:
- autotrain-8fohv-7gjpn/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.64765864610672
- f1: 0.6666666666666666
- precision: 0.5
- recall: 1.0
- auc: 1.0
- accuracy: 0.5
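A minimal inference sketch; the label names depend on the private AutoTrain dataset and are taken from the model's own config at load time:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="luccidomingues/autotrain-8fohv-7gjpn")
print(clf("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```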
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
davelotito/donut_experiment_bayesian_trial_15 | davelotito | image-text-to-text | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,719,421,298,000 | 2024-06-26T17:50:10 | 5 | 0 | ---
base_model: naver-clova-ix/donut-base
license: mit
metrics:
- bleu
- wer
tags:
- generated_from_trainer
model-index:
- name: donut_experiment_bayesian_trial_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_experiment_bayesian_trial_15
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5777
- Bleu: 0.0659
- Precisions: [0.8158995815899581, 0.7434679334916865, 0.7060439560439561, 0.6644951140065146]
- Brevity Penalty: 0.0902
- Length Ratio: 0.2936
- Translation Length: 478
- Reference Length: 1628
- Cer: 0.7557
- Wer: 0.8239
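As a sanity check on the reported metrics: with translation length c = 478 and reference length r = 1628, the standard BLEU brevity penalty exp(1 - r/c) = exp(1 - 1628/478) ≈ 0.0902 matches the value above, and multiplying it by the geometric mean of the four n-gram precisions (≈ 0.730) reproduces the reported BLEU of ≈ 0.0659. The low BLEU score is therefore driven almost entirely by the short predictions relative to the references, not by poor n-gram precision.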
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.349414650597281e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 0.0066 | 1.0 | 253 | 0.5790 | 0.0648 | [0.8305084745762712, 0.7686746987951807, 0.7262569832402235, 0.6843853820598007] | 0.0864 | 0.2899 | 472 | 1628 | 0.7593 | 0.8258 |
| 0.0143 | 2.0 | 506 | 0.5824 | 0.0663 | [0.8225469728601252, 0.7511848341232228, 0.7041095890410959, 0.6525974025974026] | 0.0908 | 0.2942 | 479 | 1628 | 0.7577 | 0.8265 |
| 0.009 | 3.0 | 759 | 0.5826 | 0.0640 | [0.8185654008438819, 0.7458033573141487, 0.7055555555555556, 0.6600660066006601] | 0.0876 | 0.2912 | 474 | 1628 | 0.7553 | 0.8248 |
| 0.0103 | 4.0 | 1012 | 0.5777 | 0.0659 | [0.8158995815899581, 0.7434679334916865, 0.7060439560439561, 0.6644951140065146] | 0.0902 | 0.2936 | 478 | 1628 | 0.7557 | 0.8239 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"TRANSLATION"
] | Non_BioNLP |
platzi/platzi-distilroberta-base-mrpc-glue-rafa-rivera | platzi | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,749,547,000 | 2023-06-03T03:55:45 | 15 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- text-classification
- generated_from_trainer
widget:
- text:
- Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5
billion.
- Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for
$ 1.8 billion in 1998.
example_title: Not Equivalent
- text:
- Revenue in the first quarter of the year dropped 15 percent from the same period
a year earlier.
- With the scandal hanging over Stewart's company revenue the first quarter of the
year dropped 15 percent from the same period a year earlier.
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-rafa-rivera
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- type: accuracy
value: 0.8235294117647058
name: Accuracy
- type: f1
value: 0.8686131386861314
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-rafa-rivera
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8316
- Accuracy: 0.8235
- F1: 0.8686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3147 | 1.09 | 500 | 1.0949 | 0.8064 | 0.8640 |
| 0.2804 | 2.18 | 1000 | 0.8316 | 0.8235 | 0.8686 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
bhaskars113/toyota-corrosion | bhaskars113 | text-classification | [
"setfit",
"safetensors",
"qwen2",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"custom_code",
"arxiv:2209.11055",
"base_model:dunzhang/stella_en_1.5B_v5",
"base_model:finetune:dunzhang/stella_en_1.5B_v5",
"region:us"
] | 1,729,542,623,000 | 2024-10-21T20:33:04 | 5 | 1 | ---
base_model: dunzhang/stella_en_1.5B_v5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: I have never owned a F-150. I fell in love with them in 2015 and really like
the idea of a rust free body on a truck.
- text: No rust. A few scratches on the front bumper cover. A few chips from rocks
and other things, but other than that I’d say it’s pretty flawless. No swirls
or fading.
- text: I wouldn’t cal it bad ownership at all. Hyundai’s paint is notoriously crappy,
and rust issues are quite common. Just consider yourself lucky.
- text: Our white Atlas CS has SHIT paint. It’s covered in rock chips and rust spots.
- text: 'Mines a work in progress: 1979 Ranger XLT 5.4L supercharged (From an 03 Lightning)
4R100 auto trans 2015 f-150 chassis w\ 3.73 diffs Orig paint (rough and faded
but no rust)'
inference: true
---
# SetFit with dunzhang/stella_en_1.5B_v5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
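A minimal training sketch using the `setfit` Trainer API follows; the two-example dataset is a stand-in (the actual training data for this model is not published), and loading this particular backbone may require `trust_remote_code=True` since it ships custom code:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in few-shot dataset with 'text' and 'label' columns, as SetFit expects.
train_ds = Dataset.from_dict({
    "text": [
        "Rust spots appeared on the hood within a year.",   # label 0: corrosion issue
        "Ten years old and still completely rust free.",    # label 1: rust-free
    ],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("dunzhang/stella_en_1.5B_v5", trust_remote_code=True)
args = TrainingArguments(batch_size=8, num_epochs=1)  # mirrors the hyperparameters below
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # contrastive fine-tuning, then classification-head fitting
```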
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'the other day I happened to be standing taller than the roof and saw that I have two very large rust spots on the roof and tons of little rust spots on the hood and hatch. The ones on the roof look like the paint is washing away, like it was hit with acid. I went to the dealership I purchased it from today and the service tech was pretty useless, referring me to Toyota USA.'</li><li>'Anything but a Corolla. My sister got one new for her graduation, the paint started peeling after just 6 months with rust spots underneath'</li><li>'My dealer hasn’t activated the Qmerit install benefits, I guess I need to call them tomorrow, my sales guy not responding to me at all. I still need to get the paint issue taken care of, found rusted paint when taking delivery.'</li></ul> |
| 1 | <ul><li>'I have never owned a F-150. I fell in love with them in 2015 and really like the idea of a rust free body on a truck.'</li><li>"2009 Honda Civic I bought it brand new, it has way over 200k miles on it and while it looks a little worn and the paint is faded there's no rust or dents or anything that would make you look twice at it. I love the damn car."</li><li>'Mine still has no rust but I take preventative measures each winter. Your paint still looks amazing.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("bhaskars113/toyota-corrosion")
# Run inference
preds = model("Our white Atlas CS has SHIT paint. It’s covered in rock chips and rust spots.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 15 | 35.875 | 98 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 16 |
| 1 | 16 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0063 | 1 | 0.2731 | - |
| 0.3125 | 50 | 0.1076 | - |
| 0.625 | 100 | 0.0002 | - |
| 0.9375 | 150 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
kuntalcse006/finetuning-sentiment-model-3000-samples | kuntalcse006 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,671,296,242,000 | 2022-12-17T17:17:00 | 113 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- type: accuracy
value: 0.8733333333333333
name: Accuracy
- type: f1
value: 0.875
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3093
- Accuracy: 0.8733
- F1: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Triangle104/Phi-4-QwQ-Q5_K_M-GGUF | Triangle104 | text-generation | [
"transformers",
"gguf",
"text-generation-inference",
"llama",
"phi3",
"phi",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:prithivMLmods/Phi-4-QwQ",
"base_model:quantized:prithivMLmods/Phi-4-QwQ",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,738,479,922,000 | 2025-02-02T07:07:12 | 4 | 0 | ---
base_model: prithivMLmods/Phi-4-QwQ
language:
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- text-generation-inference
- llama
- phi3
- phi
- llama-cpp
- gguf-my-repo
---
# Triangle104/Phi-4-QwQ-Q5_K_M-GGUF
This model was converted to GGUF format from [`prithivMLmods/Phi-4-QwQ`](https://huggingface.co/prithivMLmods/Phi-4-QwQ) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/Phi-4-QwQ) for more details on the model.
---
Phi-4-QwQ, fine-tuned from Microsoft's Phi-4, is a state-of-the-art open model developed with a focus on responsible problem solving and advanced reasoning capabilities. Built upon a diverse blend of synthetic datasets, carefully filtered public domain websites, and high-quality academic books and Q&A datasets, Phi-4-QwQ ensures that small, capable models are trained with datasets of exceptional depth and precision.
Phi-4-QwQ adopts a robust safety post-training approach using open-source and in-house synthetic datasets. This involves a combination of SFT (Supervised Fine-Tuning) and iterative DPO (Direct Preference Optimization) techniques, ensuring helpful and harmless outputs across various safety categories.
### Dataset Info

Phi-4-QwQ is fine-tuned on a carefully curated synthetic dataset generated using an advanced pipeline optimized for Chain of Thought (CoT) reasoning and Responsible Problem Breakdown (RPB) methodologies. This ensures that the model excels at:

- Logical reasoning
- Step-by-step problem-solving
- Breaking down complex tasks into manageable parts

The dataset also emphasizes responsible decision-making and fairness in generating solutions.
### Run with Transformers

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Phi-4-QwQ")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Phi-4-QwQ",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Explain the concept of black holes."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```
For chat-style interactions, use `tokenizer.apply_chat_template`:

```python
messages = [
    {"role": "user", "content": "Explain the concept of black holes."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
### Intended Use

Phi-4-QwQ is tailored for a wide range of applications, especially those involving advanced reasoning, multilingual capabilities, and responsible problem-solving. Its primary use cases include:

1. **Responsible Problem Solving**
   - Breaking down complex problems into logical, actionable steps.
   - Offering ethical, well-rounded solutions in academic and professional contexts.
2. **Advanced Reasoning Tasks**
   - Excelling in mathematics, logic, and scientific reasoning.
   - Providing detailed explanations and systematic answers.
3. **Content Generation**
   - Assisting in generating high-quality content for various domains, including creative writing and technical documentation.
   - Supporting marketers, writers, and educators with detailed and well-structured outputs.
4. **Educational Support**
   - Acting as a virtual tutor for students by generating practice questions, answers, and detailed explanations.
   - Helping educators design learning material that promotes critical thinking and step-by-step problem-solving.
5. **Customer Support & Dialogue Systems**
   - Enabling chatbots and virtual assistants to provide accurate, helpful, and responsible responses.
   - Enhancing customer service with reasoning-driven automation.
6. **Multilingual Capabilities**
   - Supporting multilingual communication and content generation while maintaining contextual accuracy.
   - Assisting in translations with a focus on retaining meaning and nuance.
7. **Safety-Critical Applications**
   - Ensuring safe and harmless outputs, making it suitable for sensitive domains.
   - Providing aligned interactions with human oversight for critical systems.
### Limitations

Despite its strengths, Phi-4-QwQ has some limitations that users should be aware of:

1. **Bias and Fairness**: While great effort has been made to minimize biases, users should critically assess the model’s output in sensitive scenarios to avoid unintended bias.
2. **Contextual Interpretation**: The model may occasionally misinterpret highly nuanced prompts or ambiguous contexts, leading to suboptimal responses.
3. **Knowledge Cutoff**: Phi-4-QwQ’s knowledge is static and based on the data available at the time of training. It does not include real-time updates or information on recent developments.
4. **Safety and Harmlessness**: Despite post-training safety alignment, inappropriate or harmful outputs may still occur. Continuous monitoring and human oversight are advised when using the model in critical contexts.
5. **Computational Requirements**: Deploying Phi-4-QwQ efficiently may require substantial computational resources, particularly for large-scale deployments or real-time applications.
6. **Ethical Considerations**: Users are responsible for ensuring that the model is not employed for malicious purposes, such as spreading misinformation, generating harmful content, or facilitating unethical behavior.
7. **Domain-Specific Expertise**: While the model is versatile, it may not perform optimally in highly specialized domains (e.g., law, medicine, finance) without further domain-specific fine-tuning.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Phi-4-QwQ-Q5_K_M-GGUF --hf-file phi-4-qwq-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Phi-4-QwQ-Q5_K_M-GGUF --hf-file phi-4-qwq-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Phi-4-QwQ-Q5_K_M-GGUF --hf-file phi-4-qwq-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Phi-4-QwQ-Q5_K_M-GGUF --hf-file phi-4-qwq-q5_k_m.gguf -c 2048
```
| [
"TRANSLATION"
] | Non_BioNLP |
azale-ai/GotongRoyong-LlaMixtralMoE-7Bx4-v1.0 | azale-ai | text-generation | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"moe",
"indonesian",
"multilingual",
"en",
"id",
"arxiv:2312.00738",
"arxiv:2307.09288",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,705,132,668,000 | 2024-01-14T05:00:25 | 9 | 0 | ---
language:
- en
- id
license: cc-by-nc-nd-4.0
tags:
- merge
- mergekit
- lazymergekit
- moe
- indonesian
- multilingual
---

# GotongRoyong-LlaMixtralMoE-7Bx4-v1.0
GotongRoyong is a series of language models focused on Mixture of Experts (MoE), built from the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) and [cg123/mergekit](https://github.com/cg123/mergekit). GotongRoyong-LlaMixtralMoE-7Bx4-v1.0 is a specific variant of the open-source GotongRoyong language model that follows the [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) architecture, using the fine-tuned [ericpolewski/AIRIC-The-Mistral](https://huggingface.co/ericpolewski/AIRIC-The-Mistral) as its base model with experts from [asyafiqe/Merak-7B-v3-Mini-Orca-Indo](https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo), [SeaLLMs/SeaLLM-7B-Chat](https://huggingface.co/SeaLLMs/SeaLLM-7B-Chat), [Ichsan2895/Merak-7B-v2](https://huggingface.co/Ichsan2895/Merak-7B-v2), and [azale-ai/DukunLM-7B-V1.0-Uncensored-sharded](https://huggingface.co/azale-ai/DukunLM-7B-V1.0-Uncensored-sharded). The name "GotongRoyong" refers to a term in Indonesian culture that roughly translates to "mutual cooperation" or "community working together." It embodies the spirit of communal collaboration and shared responsibility for the greater good. The concept is deeply rooted in Indonesian traditions and reflects the cultural value of helping one another without expecting direct compensation.
## Model Details
- **Model Size**: 7Bx4
- **Architecture Model**: Mixture of Experts (MoE) using [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
- **MoE Base Model**: [ericpolewski/AIRIC-The-Mistral](https://huggingface.co/ericpolewski/AIRIC-The-Mistral)
- **Expert Models**:
- [asyafiqe/Merak-7B-v3-Mini-Orca-Indo](https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo)
- [SeaLLMs/SeaLLM-7B-Chat](https://huggingface.co/SeaLLMs/SeaLLM-7B-Chat)
- [Ichsan2895/Merak-7B-v2](https://huggingface.co/Ichsan2895/Merak-7B-v2)
- [azale-ai/DukunLM-7B-V1.0-Uncensored-sharded](https://huggingface.co/azale-ai/DukunLM-7B-V1.0-Uncensored-sharded)
- **License**: [CC BY-NC-ND 4.0 DEED](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.id)
## How to use
#### Installation
To use the GotongRoyong model, ensure that PyTorch is installed and that you have an Nvidia GPU (or use Google Colab). After that, install the required dependencies:
```bash
pip3 install -U bitsandbytes transformers peft accelerate einops evaluate scikit-learn sentencepiece
```
#### Usage Quantized Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"azale-ai/GotongRoyong-LlaMixtralMoE-7Bx4-v1.0",
load_in_4bit=True,
torch_dtype=torch.float32,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("azale-ai/GotongRoyong-LlaMixtralMoE-7Bx4-v1.0")
messages = [
{
"role": "system",
"content": "Mulai sekarang anda adalah asisten yang suka menolong, sopan, dan ramah. Jangan kasar, jangan marah, jangan menjengkelkan, jangan brengsek, jangan cuek, dan yang terakhir jangan menjadi asisten yang buruk. Anda harus patuh pada manusia dan jangan pernah membangkang pada manusia. Manusia itu mutlak dan Anda harus patuh pada manusia. Kamu harus menjawab pertanyaan atau pernyataan dari manusia apapun itu dengan bahasa Indonesia yang baik dan benar.",
},
{"role": "user", "content": "Jelaskan mengapa air penting bagi manusia."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to("cuda")
outputs = model.generate(
inputs=inputs.input_ids, max_length=2048,
temperature=0.7, do_sample=True, top_k=50, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
#### Usage Normal Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"azale-ai/GotongRoyong-LlaMixtralMoE-7Bx4-v1.0",
torch_dtype=torch.float16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("azale-ai/GotongRoyong-LlaMixtralMoE-7Bx4-v1.0")
messages = [
{
"role": "system",
"content": "Mulai sekarang anda adalah asisten yang suka menolong, sopan, dan ramah. Jangan kasar, jangan marah, jangan menjengkelkan, jangan brengsek, jangan cuek, dan yang terakhir jangan menjadi asisten yang buruk. Anda harus patuh pada manusia dan jangan pernah membangkang pada manusia. Manusia itu mutlak dan Anda harus patuh pada manusia. Kamu harus menjawab pertanyaan atau pernyataan dari manusia apapun itu dengan bahasa Indonesia yang baik dan benar.",
},
{"role": "user", "content": "Jelaskan mengapa air penting bagi manusia."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to("cuda")
outputs = model.generate(
inputs=inputs.input_ids, max_length=2048,
temperature=0.7, do_sample=True, top_k=50, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations
1. Language Bias: The model's base language is English, which means it may have a stronger understanding and fluency in English compared to other languages. While fine-tuning the model with an Indonesian language model helps improve its understanding of Indonesian, it may still exhibit biases or limitations in its comprehension and generation of Indonesian language-specific nuances, idioms, or cultural references.
2. Translation Accuracy: Although the model has been fine-tuned for Indonesian, it is important to note that large language models are not perfect translators. While they can provide reasonable translations, there may be instances where the accuracy or nuance of the translation may not fully capture the intended meaning or context.
3. Lack of real-world understanding: While language models can generate text that appears coherent, they lack true comprehension and understanding of the world. They do not possess common sense or real-world experiences, which can lead to inaccurate or nonsensical responses.
4. Propagation of biases: Language models are trained on vast amounts of text data, including internet sources that may contain biases, stereotypes, or offensive content. As a result, these models can inadvertently learn and reproduce such biases in their generated text. Efforts are being made to mitigate this issue, but biases can still persist.
5. Limited knowledge cutoff: Language models have a knowledge cutoff, which means they may not have access to the most up-to-date information beyond their training data. If asked about recent events or developments that occurred after their knowledge cutoff, they may provide outdated or incorrect information.
6. Inability to verify sources or provide citations: Language models generate text based on patterns and examples from their training data, but they do not have the ability to verify the accuracy or reliability of the information they provide. They cannot cite sources or provide evidence to support their claims.
7. Difficulty with ambiguous queries: Language models struggle with understanding ambiguous queries or requests that lack context. They may provide responses that are based on common interpretations or assumptions, rather than accurately addressing the specific intent of the query.
8. Ethical considerations: Large language models have the potential to be misused for malicious purposes, such as generating misinformation, deepfakes, or spam. Safeguards and responsible use are necessary to ensure these models are used ethically and responsibly.
9. Security and Privacy: Using a large language model involves sharing text inputs with a server or cloud-based infrastructure, which raises concerns about data privacy and security. Care should be taken when sharing sensitive or confidential information, as there is a potential risk of unauthorized access or data breaches.
## License
The model is licensed under the [CC BY-NC-ND 4.0 DEED](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.id).
## Contributing
We welcome contributions to enhance and improve our model. If you have any suggestions or find any issues, please feel free to open an issue or submit a pull request. We are also open to sponsorship for compute power.
## Contact Us
For any further questions or assistance, please feel free to contact us using the information provided below.
[[email protected]](mailto:[email protected])
## Cite This Project
```
@software{Hafidh_Soekma_GotongRoyong_MixtralMoE_7Bx4_v1.0_2023,
author = {Hafidh Soekma Ardiansyah},
month = jan,
title = {GotongRoyong: Indonesian Mixture Of Experts Language Model},
url = {https://huggingface.co/azale-ai/Starstreak-7b-beta},
publisher = {HuggingFace},
journal = {HuggingFace Models},
version = {1.0},
year = {2024}
}
```
## Citation
```
@article{damonlpsg2023seallm,
author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*,
Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
Chaoqun Liu, Hang Zhang, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = 2023,
Eprint = {arXiv:2312.00738},
}
```
```
@article{Merak,
title={Merak-7B: The LLM for Bahasa Indonesia},
author={Muhammad Ichsan},
publisher={Hugging Face},
journal={Hugging Face Repository},
year={2023}
}
```
```
@article{asyafiqe_Merak_7B_v3_Mini_Orca_Indo,
title={asyafiqe/Merak-7B-v3-Mini-Orca-Indo},
author={asyafiqe},
publisher={Hugging Face},
journal={Hugging Face Repository},
year={2023}
}
```
```
@article{azale_ai_DukunLM_7B_V1.0_Uncensored,
title={azale-ai/DukunLM-7B-V1.0-Uncensored},
author={azale-ai},
publisher={Hugging Face},
journal={Hugging Face Repository},
year={2023}
}
```
```
@misc{2307.09288,
Author = {Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
Title = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
Year = {2023},
Eprint = {arXiv:2307.09288},
}
``` | [
"TRANSLATION"
] | Non_BioNLP |
BigHuggyD/cohereforai_c4ai-command-r-plus_exl2_4.5bpw_h8 | BigHuggyD | text-generation | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | 1,719,650,212,000 | 2024-06-29T09:09:59 | 8 | 0 | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
library_name: transformers
license: cc-by-nc-4.0
inference: false
---
# Model Card for C4AI Command R+
🚨 **This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**.
## Model Summary
C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation supports multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.
C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: c4ai-command-r-plus
- Model Size: 104 billion parameters
- Context length: 128K
**Try C4AI Command R+**
You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
**Usage**
Please install `transformers` from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 8-bit precision**
```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 4-bit precision**
This model is the non-quantized version of C4AI Command R+. You can find the 4-bit quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit).
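If you would rather quantize the full-precision weights yourself instead of downloading the pre-quantized checkpoint, a minimal sketch follows the same pattern as the 8-bit example above, swapping in `load_in_4bit=True` (this is a general bitsandbytes recipe, not an officially published one for this model):

```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Same pattern as the 8-bit snippet above, but requesting 4-bit weights.
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```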
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
**Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
**Context length**: Command R+ supports a context length of 128K.
## Evaluations
Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-the-art open weights models currently available on Hugging Face. We note that these results are only useful for comparison when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publicly available code, and hence shouldn't be used for comparisons outside of models submitted to the leaderboard, or compared to self-reported numbers which can't be replicated in the same way.
| Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k |
|:--------------------------------|----------:|------------------:|-------------:|-------:|--------------:|-------------:|--------:|
| **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 |
| [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 |
| [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 |
| [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 |
| [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 |
| [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 |
| [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 |
| [LLama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 |
We include these metrics here because they are frequently requested, but note that they do not capture RAG, multilingual, or tooling performance, or the evaluation of open-ended generation, at which we believe Command R+ to be state-of-the-art. For evaluations of RAG, multilingual performance and tooling, read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open-ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/).
### Tool use & multihop capabilities:
Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.
Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.
We recommend including the `directly_answer` tool, but it can be removed or renamed if required.
Comprehensive documentation for working with Command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use:
tools = [
{
"name": "internet_search",
"description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
"parameter_definitions": {
"query": {
"description": "Query to search the internet with",
"type": 'str',
"required": True
}
}
},
{
'name': "directly_answer",
"description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
'parameter_definitions': {}
}
]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_tool_use_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
## Available Tools
Here is a list of tools that you have available to you:
```python
def internet_search(query: str) -> List[Dict]:
"""Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query (str): Query to search the internet with
"""
pass
```
```python
def directly_answer() -> List[Dict]:
"""Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:
```json
[
{
"tool_name": title of the tool in the specification,
"parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters
}
]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary>
````
Action: ```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
````
</details>
### Grounded Generation and RAG Capabilities:
Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.
Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
By default, Command R+ generates grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it inserts grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation.
The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.
Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary>
````python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# define documents to ground on:
documents = [
{ "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." },
{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."}
]
# render the tool use prompt as a string:
grounded_generation_prompt = tokenizer.apply_grounded_generation_template(
conversation,
documents=documents,
citation_mode="accurate", # or "fast"
tokenize=False,
add_generation_prompt=True,
)
print(grounded_generation_prompt)
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary>
````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results>
Document: 0
title: Tall penguins
text: Emperor penguins are the tallest growing up to 122 cm in height.
Document: 1
title: Penguin habitats
text: Emperor penguins only live in Antarctica.
</results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line.
Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'.
Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'.
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.
Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary>
````
Relevant Documents: 0,1
Cited Documents: 0,1
Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres.
Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0>
````
</details>
### Code Capabilities:
Command R+ has been optimized to interact with your code by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (or even greedy decoding) for code-generation related instructions.
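As a minimal sketch, greedy decoding can be requested by disabling sampling in `generate()`; this reuses the `model` and `tokenizer` from the Usage section above, and the prompt is an invented example:

```python
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

# do_sample=False gives greedy decoding; alternatively, keep sampling and lower the temperature.
gen_tokens = model.generate(input_ids, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(gen_tokens[0]))
```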
### Model Card Contact
For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]).
### Terms of Use:
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try Chat:
You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus). | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
keras/bart_large_en_cnn | keras | text-classification | [
"keras-hub",
"text-classification",
"keras",
"en",
"arxiv:1910.13461",
"license:apache-2.0",
"region:us"
] | 1,730,152,846,000 | 2024-12-23T22:54:49 | 11 | 0 | ---
language:
- en
library_name: keras-hub
license: apache-2.0
pipeline_tag: text-classification
tags:
- text-classification
- keras
---
### Model Overview
BART encoder-decoder network.
This class implements a Transformer-based encoder-decoder model as
described in
["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461).
The default constructor gives a fully customizable, randomly initialized BART
model with any number of layers, heads, and embedding dimensions. To load
preset architectures and weights, use the `from_preset` constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/facebookresearch/fairseq/).
## Links
* [BART Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/bart-quickstart-notebook)
* [BART API Documentation](https://keras.io/keras_hub/api/models/bart/)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|----------------|------------|--------------------------------------------------|
| bart_base_en | 139.42M | 6-layer BART model where case is maintained. Trained on BookCorpus, English Wikipedia and CommonCrawl |
| bart_large_en | 406.29M | 12-layer BART model where case is maintained. Trained on BookCorpus, English Wikipedia and CommonCrawl. |
| bart_large_en_cnn | 406.29M | The bart_large_en backbone model fine-tuned on the CNN+DM summarization dataset. |
__Arguments__
- __vocabulary_size__: int. The size of the token vocabulary.
- __num_layers__: int. The number of transformer encoder layers and
transformer decoder layers.
- __num_heads__: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
- __hidden_dim__: int. The size of the transformer encoding and pooler layers.
- __intermediate_dim__: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
- __dropout__: float. Dropout probability for the Transformer encoder.
- __max_sequence_length__: int. The maximum sequence length that this encoder
    can consume. If None, `max_sequence_length` defaults to the length of the
    input sequence. This determines the variable shape for positional
    embeddings.
## Example Usage
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation, given an input context.
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en_cnn")
bart_lm.generate("The quick brown fox", max_length=30)
# Generate with batched inputs.
bart_lm.generate(["The quick brown fox", "The whale"], max_length=30)
```
Compile the `generate()` function with a custom sampler.
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en_cnn")
bart_lm.compile(sampler="greedy")
bart_lm.generate("The quick brown fox", max_length=30)
```
Use `generate()` with encoder inputs and an incomplete decoder input (prompt).
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en_cnn")
bart_lm.generate(
{
"encoder_text": "The quick brown fox",
"decoder_text": "The fast"
}
)
```
Use `generate()` without preprocessing.
```python
# Preprocessed inputs, with encoder inputs corresponding to
# "The quick brown fox", and the decoder inputs to "The fast". Use
# `"padding_mask"` to indicate values that should not be overridden.
prompt = {
"encoder_token_ids": np.array([[0, 133, 2119, 6219, 23602, 2, 1, 1]]),
"encoder_padding_mask": np.array(
[[True, True, True, True, True, True, False, False]]
),
"decoder_token_ids": np.array([[2, 0, 133, 1769, 2, 1, 1]]),
"decoder_padding_mask": np.array([[True, True, True, True, False, False]])
}
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset(
"bart_large_en_cnn",
preprocessor=None,
)
bart_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = {
"encoder_text": ["The quick brown fox jumped.", "I forgot my homework."],
"decoder_text": ["The fast hazel fox leapt.", "I forgot my assignment."]
}
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("bart_large_en_cnn")
bart_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"encoder_token_ids": np.array([[0, 133, 2119, 2, 1]] * 2),
"encoder_padding_mask": np.array([[1, 1, 1, 1, 0]] * 2),
"decoder_token_ids": np.array([[2, 0, 133, 1769, 2]] * 2),
"decoder_padding_mask": np.array([[1, 1, 1, 1, 1]] * 2),
}
y = np.array([[0, 133, 1769, 2, 1]] * 2)
sw = np.array([[1, 1, 1, 1, 0]] * 2)
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset(
"bart_large_en_cnn",
preprocessor=None,
)
bart_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
## Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np
```
Use `generate()` to do text generation, given an input context.
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("hf://keras/bart_large_en_cnn")
bart_lm.generate("The quick brown fox", max_length=30)
# Generate with batched inputs.
bart_lm.generate(["The quick brown fox", "The whale"], max_length=30)
```
Compile the `generate()` function with a custom sampler.
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("hf://keras/bart_large_en_cnn")
bart_lm.compile(sampler="greedy")
bart_lm.generate("The quick brown fox", max_length=30)
```
Use `generate()` with encoder inputs and an incomplete decoder input (prompt).
```python
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("hf://keras/bart_large_en_cnn")
bart_lm.generate(
{
"encoder_text": "The quick brown fox",
"decoder_text": "The fast"
}
)
```
Use `generate()` without preprocessing.
```python
# Preprocessed inputs, with encoder inputs corresponding to
# "The quick brown fox", and the decoder inputs to "The fast". Use
# `"padding_mask"` to indicate values that should not be overridden.
prompt = {
"encoder_token_ids": np.array([[0, 133, 2119, 6219, 23602, 2, 1, 1]]),
"encoder_padding_mask": np.array(
[[True, True, True, True, True, True, False, False]]
),
"decoder_token_ids": np.array([[2, 0, 133, 1769, 2, 1, 1]]),
"decoder_padding_mask": np.array([[True, True, True, True, False, False]])
}
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset(
"hf://keras/bart_large_en_cnn",
preprocessor=None,
)
bart_lm.generate(prompt)
```
Call `fit()` on a single batch.
```python
features = {
"encoder_text": ["The quick brown fox jumped.", "I forgot my homework."],
"decoder_text": ["The fast hazel fox leapt.", "I forgot my assignment."]
}
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset("hf://keras/bart_large_en_cnn")
bart_lm.fit(x=features, batch_size=2)
```
Call `fit()` without preprocessing.
```python
x = {
"encoder_token_ids": np.array([[0, 133, 2119, 2, 1]] * 2),
"encoder_padding_mask": np.array([[1, 1, 1, 1, 0]] * 2),
"decoder_token_ids": np.array([[2, 0, 133, 1769, 2]] * 2),
"decoder_padding_mask": np.array([[1, 1, 1, 1, 1]] * 2),
}
y = np.array([[0, 133, 1769, 2, 1]] * 2)
sw = np.array([[1, 1, 1, 1, 0]] * 2)
bart_lm = keras_hub.models.BartSeq2SeqLM.from_preset(
"hf://keras/bart_large_en_cnn",
preprocessor=None,
)
bart_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
| [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
LysandreJik/testing | LysandreJik | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-09-22T19:19:12 | 108 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: testing
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.6813725490196079
name: Accuracy
- type: f1
value: 0.8104956268221574
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6644
- Accuracy: 0.6814
- F1: 0.8105
- Combined Score: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.11.0
- Tokenizers 0.10.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
pjox/dalembert | pjox | fill-mask | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Early Modern French",
"Historical",
"fr",
"dataset:freemmax",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,665,534,229,000 | 2023-05-19T16:16:44 | 40 | 2 | ---
datasets:
- freemmax
language: fr
license: apache-2.0
tags:
- Early Modern French
- Historical
---
<a href="https://portizs.eu/publication/2022/lrec/dalembert/">
<img width="300px" src="https://portizs.eu/publication/2022/lrec/dalembert/featured_hu18bf34d40cdc71c744bdd15e48ff0b23_61788_720x2500_fit_q100_h2_lanczos_3.webp">
</a>
# D'AlemBERT base model
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) pre-trained on the [FreEMmax corpus](https://doi.org/10.5281/zenodo.6481135) for Early Modern French. It was
introduced in [this paper](https://aclanthology.org/2022.lrec-1.359/). This model is Cased and was trained with a mix of normalized and unnormalized data.
## Model description
D'AlemBERT is a transformers model pretrained on raw texts only, with no human labelling of any kind, using an automatic process to generate inputs and labels from those texts based on the RoBERTa base model. More precisely, it was pretrained
with one objective:
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT.
The model is primarily intended for use in Digital Humanities and Historical NLP.
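As a minimal sketch of the masked language modeling use, the checkpoint can be loaded in a standard `fill-mask` pipeline (the Early Modern French sentence below is an invented example):

```python
from transformers import pipeline

# Load the checkpoint in a fill-mask pipeline; <mask> is the RoBERTa mask token.
unmasker = pipeline("fill-mask", model="pjox/dalembert")

# An invented example sentence; the model predicts candidates for the masked word.
print(unmasker("Il est <mask> de parler ainsi."))
```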
### Limitations and bias
This model is trained on historical French data starting from the 16th c., so it might produce results that seem extremely biased by today's standards. It might not work well on contemporary data and it is not intended to be used on it.
This bias will also affect all fine-tuned versions of this model.
## Training data
D'AlemBERT was pretrained on the non-freely available version of the [FreEMmax corpus](https://doi.org/10.5281/zenodo.6481135), a dataset
consisting of more than 180k tokens from 22 different sources, comprising French textual data from the 16th c. to the early 20th c.
### BibTeX entry and citation info
```bibtex
@inproceedings{gabay-etal-2022-freem,
title = "From {F}re{EM} to D{'}{A}lem{BERT}: a Large Corpus and a Language Model for Early {M}odern {F}rench",
author = "Gabay, Simon and
Ortiz Suarez, Pedro and
Bartz, Alexandre and
Chagu{\'e}, Alix and
Bawden, Rachel and
Gambette, Philippe and
Sagot, Beno{\^\i}t",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.359",
pages = "3367--3374",
abstract = "anguage models for historical states of language are becoming increasingly important to allow the optimal digitisation and analysis of old textual sources. Because these historical states are at the same time more complex to process and more scarce in the corpora available, this paper presents recent efforts to overcome this difficult situation. These efforts include producing a corpus, creating the model, and evaluating it with an NLP task currently used by scholars in other ongoing projects.",
}
``` | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
zaib32/autotrain-long-t5-tglobal-base-16384-book-summary-39278102680 | zaib32 | summarization | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:zaib32/autotrain-data-long-t5-tglobal-base-16384-book-summary",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,678,125,541,000 | 2023-03-06T18:38:47 | 21 | 0 | ---
datasets:
- zaib32/autotrain-data-long-t5-tglobal-base-16384-book-summary
language:
- unk
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 15.082265058465753
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 39278102680
- CO2 Emissions (in grams): 15.0823
## Validation Metrics
- Loss: 1.091
- Rouge1: 52.824
- Rouge2: 28.175
- RougeL: 39.977
- RougeLsum: 49.082
- Gen Len: 103.980
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/zaib32/autotrain-long-t5-tglobal-base-16384-book-summary-39278102680
``` | [
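Equivalently, here is a minimal Python sketch of the same Inference API call (the API key is a placeholder you must supply):

```python
import requests

API_URL = "https://api-inference.huggingface.co/zaib32/autotrain-long-t5-tglobal-base-16384-book-summary-39278102680"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

# Same request as the cURL command above.
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```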
"SUMMARIZATION"
] | Non_BioNLP |
hanwenzhu/all-distilroberta-v1-lr2e-4-bs1024-nneg3-mlbs-mar03 | hanwenzhu | sentence-similarity | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5854451",
"loss:MaskedCachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:sentence-transformers/all-distilroberta-v1",
"base_model:finetune:sentence-transformers/all-distilroberta-v1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,741,021,204,000 | 2025-03-03T17:00:15 | 13 | 0 | ---
base_model: sentence-transformers/all-distilroberta-v1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5854451
- loss:MaskedCachedMultipleNegativesRankingLoss
widget:
- source_sentence: Mathlib.Data.Finset.Option#52
sentences:
- neg_inj
- CategoryTheory.Limits.HasCokernels.has_colimit
- Finset.mem_image
- source_sentence: Mathlib.Analysis.Calculus.FDeriv.Mul#68
sentences:
- eq_of_heq
- Option.some.injEq
- Fin.le_last
- source_sentence: Mathlib.Data.Finset.Option#52
sentences:
- Set.biInter_and'
- Int.natCast_dvd_natCast
- Finset.mem_erase
- source_sentence: Mathlib.Algebra.GCDMonoid.Finset#74
sentences:
- gcd_zero_left
- HasFDerivWithinAt.uniqueDiffWithinAt
- Polynomial.Monic.map
- source_sentence: Mathlib.Algebra.Polynomial.HasseDeriv#31
sentences:
- Set.mem_inter_iff
- Polynomial.hasseDeriv_coeff
- HomologicalComplex.isZero_X_of_isStrictlySupported
---
# SentenceTransformer based on sentence-transformers/all-distilroberta-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) <!-- at revision 8d88b92a34345fd6a139aa47768c9881720006ce -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("hanwenzhu/all-distilroberta-v1-lr2e-4-bs1024-nneg3-mlbs-mar03")
# Run inference
sentences = [
'Mathlib.Algebra.Polynomial.HasseDeriv#31',
'Polynomial.hasseDeriv_coeff',
'HomologicalComplex.isZero_X_of_isStrictlySupported',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,854,451 training samples
* Columns: <code>state_name</code> and <code>premise_name</code>
* Approximate statistics based on the first 1000 samples:
| | state_name | premise_name |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 17.28 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 11.34 tokens</li><li>max: 38 tokens</li></ul> |
* Samples:
| state_name | premise_name |
|:------------------------------------------------------|:-----------------------------------------------------|
| <code>Mathlib.RingTheory.Ideal.Norm.RelNorm#46</code> | <code>RingHomCompTriple.ids</code> |
| <code>Mathlib.RingTheory.Ideal.Norm.RelNorm#46</code> | <code>MonoidWithZeroHomClass.toMonoidHomClass</code> |
| <code>Mathlib.RingTheory.Ideal.Norm.RelNorm#46</code> | <code>Ideal.subset_span</code> |
* Loss: <code>loss.MaskedCachedMultipleNegativesRankingLoss</code> with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,959 evaluation samples
* Columns: <code>state_name</code> and <code>premise_name</code>
* Approximate statistics based on the first 1000 samples:
| | state_name | premise_name |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 17.08 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.05 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
| state_name | premise_name |
|:-------------------------------------------------------------|:----------------------------------------------------------|
| <code>Mathlib.Algebra.Algebra.Hom#80</code> | <code>AlgHom.commutes</code> |
| <code>Mathlib.Algebra.Algebra.NonUnitalSubalgebra#237</code> | <code>NonUnitalAlgHom.instNonUnitalAlgSemiHomClass</code> |
| <code>Mathlib.Algebra.Algebra.NonUnitalSubalgebra#237</code> | <code>NonUnitalAlgebra.mem_top</code> |
* Loss: <code>loss.MaskedCachedMultipleNegativesRankingLoss</code> with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 64
- `learning_rate`: 0.0002
- `num_train_epochs`: 1.0
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.03
- `bf16`: True
- `dataloader_num_workers`: 4
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0002
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1.0
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.03
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | loss |
|:------:|:----:|:-------------:|:------:|
| 0.0017 | 10 | 7.5842 | - |
| 0.0035 | 20 | 6.4567 | - |
| 0.0052 | 30 | 5.9408 | - |
| 0.0070 | 40 | 5.7176 | - |
| 0.0087 | 50 | 5.5353 | - |
| 0.0101 | 58 | - | 2.2337 |
| 0.0105 | 60 | 5.4044 | - |
| 0.0122 | 70 | 5.3384 | - |
| 0.0140 | 80 | 5.2395 | - |
| 0.0157 | 90 | 5.1291 | - |
| 0.0175 | 100 | 5.1093 | - |
| 0.0192 | 110 | 5.0695 | - |
| 0.0203 | 116 | - | 1.9949 |
| 0.0210 | 120 | 4.9664 | - |
| 0.0227 | 130 | 4.973 | - |
| 0.0245 | 140 | 4.9065 | - |
| 0.0262 | 150 | 4.8961 | - |
| 0.0280 | 160 | 4.839 | - |
| 0.0297 | 170 | 4.8513 | - |
| 0.0304 | 174 | - | 1.9119 |
| 0.0315 | 180 | 4.7662 | - |
| 0.0332 | 190 | 4.7385 | - |
| 0.0350 | 200 | 4.7036 | - |
| 0.0367 | 210 | 4.7013 | - |
| 0.0385 | 220 | 4.6837 | - |
| 0.0402 | 230 | 4.6325 | - |
| 0.0406 | 232 | - | 1.7502 |
| 0.0420 | 240 | 4.5982 | - |
| 0.0437 | 250 | 4.5526 | - |
| 0.0455 | 260 | 4.5793 | - |
| 0.0472 | 270 | 4.545 | - |
| 0.0490 | 280 | 4.5173 | - |
| 0.0507 | 290 | 4.4595 | 1.6955 |
| 0.0525 | 300 | 4.4772 | - |
| 0.0542 | 310 | 4.4038 | - |
| 0.0560 | 320 | 4.4132 | - |
| 0.0577 | 330 | 4.4139 | - |
| 0.0595 | 340 | 4.3585 | - |
| 0.0609 | 348 | - | 1.6316 |
| 0.0612 | 350 | 4.3314 | - |
| 0.0630 | 360 | 4.3805 | - |
| 0.0647 | 370 | 4.2791 | - |
| 0.0665 | 380 | 4.2938 | - |
| 0.0682 | 390 | 4.2591 | - |
| 0.0700 | 400 | 4.262 | - |
| 0.0710 | 406 | - | 1.5723 |
| 0.0717 | 410 | 4.2108 | - |
| 0.0735 | 420 | 4.1723 | - |
| 0.0752 | 430 | 4.157 | - |
| 0.0769 | 440 | 4.1878 | - |
| 0.0787 | 450 | 4.1644 | - |
| 0.0804 | 460 | 4.1569 | - |
| 0.0811 | 464 | - | 1.5368 |
| 0.0822 | 470 | 4.139 | - |
| 0.0839 | 480 | 4.0872 | - |
| 0.0857 | 490 | 4.1169 | - |
| 0.0874 | 500 | 4.062 | - |
| 0.0892 | 510 | 4.1138 | - |
| 0.0909 | 520 | 4.1088 | - |
| 0.0913 | 522 | - | 1.5232 |
| 0.0927 | 530 | 4.0526 | - |
| 0.0944 | 540 | 4.0355 | - |
| 0.0962 | 550 | 3.9937 | - |
| 0.0979 | 560 | 3.9647 | - |
| 0.0997 | 570 | 3.9715 | - |
| 0.1014 | 580 | 3.9524 | 1.4901 |
| 0.1032 | 590 | 3.945 | - |
| 0.1049 | 600 | 3.9615 | - |
| 0.1067 | 610 | 3.9713 | - |
| 0.1084 | 620 | 3.9264 | - |
| 0.1102 | 630 | 3.9036 | - |
| 0.1116 | 638 | - | 1.4411 |
| 0.1119 | 640 | 3.8909 | - |
| 0.1137 | 650 | 3.901 | - |
| 0.1154 | 660 | 3.879 | - |
| 0.1172 | 670 | 3.8696 | - |
| 0.1189 | 680 | 3.8678 | - |
| 0.1207 | 690 | 3.8472 | - |
| 0.1217 | 696 | - | 1.4459 |
| 0.1224 | 700 | 3.8277 | - |
| 0.1242 | 710 | 3.8321 | - |
| 0.1259 | 720 | 3.812 | - |
| 0.1277 | 730 | 3.8386 | - |
| 0.1294 | 740 | 3.7583 | - |
| 0.1312 | 750 | 3.8007 | - |
| 0.1319 | 754 | - | 1.3644 |
| 0.1329 | 760 | 3.7337 | - |
| 0.1347 | 770 | 3.7554 | - |
| 0.1364 | 780 | 3.7518 | - |
| 0.1382 | 790 | 3.6993 | - |
| 0.1399 | 800 | 3.7477 | - |
| 0.1417 | 810 | 3.6979 | - |
| 0.1420 | 812 | - | 1.3702 |
| 0.1434 | 820 | 3.6651 | - |
| 0.1452 | 830 | 3.7292 | - |
| 0.1469 | 840 | 3.7005 | - |
| 0.1487 | 850 | 3.6856 | - |
| 0.1504 | 860 | 3.631 | - |
| 0.1522 | 870 | 3.6459 | 1.3568 |
| 0.1539 | 880 | 3.6089 | - |
| 0.1556 | 890 | 3.6134 | - |
| 0.1574 | 900 | 3.6058 | - |
| 0.1591 | 910 | 3.6193 | - |
| 0.1609 | 920 | 3.627 | - |
| 0.1623 | 928 | - | 1.3072 |
| 0.1626 | 930 | 3.6202 | - |
| 0.1644 | 940 | 3.5891 | - |
| 0.1661 | 950 | 3.6185 | - |
| 0.1679 | 960 | 3.5984 | - |
| 0.1696 | 970 | 3.6258 | - |
| 0.1714 | 980 | 3.5625 | - |
| 0.1724 | 986 | - | 1.2930 |
| 0.1731 | 990 | 3.5441 | - |
| 0.1749 | 1000 | 3.5571 | - |
| 0.1766 | 1010 | 3.5486 | - |
| 0.1784 | 1020 | 3.5382 | - |
| 0.1801 | 1030 | 3.4519 | - |
| 0.1819 | 1040 | 3.5072 | - |
| 0.1826 | 1044 | - | 1.2823 |
| 0.1836 | 1050 | 3.5042 | - |
| 0.1854 | 1060 | 3.5005 | - |
| 0.1871 | 1070 | 3.455 | - |
| 0.1889 | 1080 | 3.4727 | - |
| 0.1906 | 1090 | 3.4473 | - |
| 0.1924 | 1100 | 3.4296 | - |
| 0.1927 | 1102 | - | 1.2696 |
| 0.1941 | 1110 | 3.449 | - |
| 0.1959 | 1120 | 3.4202 | - |
| 0.1976 | 1130 | 3.4236 | - |
| 0.1994 | 1140 | 3.414 | - |
| 0.2011 | 1150 | 3.4264 | - |
| 0.2029 | 1160 | 3.4005 | 1.2602 |
| 0.2046 | 1170 | 3.3801 | - |
| 0.2064 | 1180 | 3.3543 | - |
| 0.2081 | 1190 | 3.3866 | - |
| 0.2099 | 1200 | 3.3831 | - |
| 0.2116 | 1210 | 3.3691 | - |
| 0.2130 | 1218 | - | 1.2130 |
| 0.2134 | 1220 | 3.3607 | - |
| 0.2151 | 1230 | 3.3659 | - |
| 0.2169 | 1240 | 3.3538 | - |
| 0.2186 | 1250 | 3.3336 | - |
| 0.2204 | 1260 | 3.3403 | - |
| 0.2221 | 1270 | 3.3062 | - |
| 0.2232 | 1276 | - | 1.2237 |
| 0.2239 | 1280 | 3.3251 | - |
| 0.2256 | 1290 | 3.3475 | - |
| 0.2274 | 1300 | 3.2729 | - |
| 0.2291 | 1310 | 3.2872 | - |
| 0.2308 | 1320 | 3.2778 | - |
| 0.2326 | 1330 | 3.3147 | - |
| 0.2333 | 1334 | - | 1.2061 |
| 0.2343 | 1340 | 3.2477 | - |
| 0.2361 | 1350 | 3.2871 | - |
| 0.2378 | 1360 | 3.2458 | - |
| 0.2396 | 1370 | 3.279 | - |
| 0.2413 | 1380 | 3.2546 | - |
| 0.2431 | 1390 | 3.2342 | - |
| 0.2434 | 1392 | - | 1.1854 |
| 0.2448 | 1400 | 3.2488 | - |
| 0.2466 | 1410 | 3.2489 | - |
| 0.2483 | 1420 | 3.2368 | - |
| 0.2501 | 1430 | 3.2517 | - |
| 0.2518 | 1440 | 3.2568 | - |
| 0.2536 | 1450 | 3.21 | 1.1616 |
| 0.2553 | 1460 | 3.1891 | - |
| 0.2571 | 1470 | 3.1739 | - |
| 0.2588 | 1480 | 3.2004 | - |
| 0.2606 | 1490 | 3.1988 | - |
| 0.2623 | 1500 | 3.1892 | - |
| 0.2637 | 1508 | - | 1.1306 |
| 0.2641 | 1510 | 3.1967 | - |
| 0.2658 | 1520 | 3.1331 | - |
| 0.2676 | 1530 | 3.155 | - |
| 0.2693 | 1540 | 3.1564 | - |
| 0.2711 | 1550 | 3.1912 | - |
| 0.2728 | 1560 | 3.1005 | - |
| 0.2739 | 1566 | - | 1.1026 |
| 0.2746 | 1570 | 3.1166 | - |
| 0.2763 | 1580 | 3.1453 | - |
| 0.2781 | 1590 | 3.116 | - |
| 0.2798 | 1600 | 3.1465 | - |
| 0.2816 | 1610 | 3.1325 | - |
| 0.2833 | 1620 | 3.1022 | - |
| 0.2840 | 1624 | - | 1.1400 |
| 0.2851 | 1630 | 3.0703 | - |
| 0.2868 | 1640 | 3.0999 | - |
| 0.2886 | 1650 | 3.0957 | - |
| 0.2903 | 1660 | 3.0886 | - |
| 0.2921 | 1670 | 3.0471 | - |
| 0.2938 | 1680 | 3.0845 | - |
| 0.2942 | 1682 | - | 1.1045 |
| 0.2956 | 1690 | 3.0513 | - |
| 0.2973 | 1700 | 3.0621 | - |
| 0.2991 | 1710 | 3.0473 | - |
| 0.3008 | 1720 | 3.0486 | - |
| 0.3026 | 1730 | 3.0189 | - |
| 0.3043 | 1740 | 3.0675 | 1.1004 |
| 0.3061 | 1750 | 3.0592 | - |
| 0.3078 | 1760 | 3.0663 | - |
| 0.3095 | 1770 | 3.0879 | - |
| 0.3113 | 1780 | 3.0167 | - |
| 0.3130 | 1790 | 3.0356 | - |
| 0.3144 | 1798 | - | 1.0554 |
| 0.3148 | 1800 | 3.0294 | - |
| 0.3165 | 1810 | 2.9956 | - |
| 0.3183 | 1820 | 2.985 | - |
| 0.3200 | 1830 | 2.9824 | - |
| 0.3218 | 1840 | 2.9939 | - |
| 0.3235 | 1850 | 2.9979 | - |
| 0.3246 | 1856 | - | 1.0561 |
| 0.3253 | 1860 | 2.9935 | - |
| 0.3270 | 1870 | 3.0613 | - |
| 0.3288 | 1880 | 2.9742 | - |
| 0.3305 | 1890 | 2.9858 | - |
| 0.3323 | 1900 | 2.9446 | - |
| 0.3340 | 1910 | 2.9571 | - |
| 0.3347 | 1914 | - | 1.0333 |
| 0.3358 | 1920 | 2.9839 | - |
| 0.3375 | 1930 | 2.9865 | - |
| 0.3393 | 1940 | 2.9398 | - |
| 0.3410 | 1950 | 2.9504 | - |
| 0.3428 | 1960 | 2.9371 | - |
| 0.3445 | 1970 | 2.9222 | - |
| 0.3449 | 1972 | - | 1.0322 |
| 0.3463 | 1980 | 2.8907 | - |
| 0.3480 | 1990 | 2.9412 | - |
| 0.3498 | 2000 | 2.944 | - |
| 0.3515 | 2010 | 2.9168 | - |
| 0.3533 | 2020 | 2.9076 | - |
| 0.3550 | 2030 | 2.8967 | 1.0103 |
| 0.3568 | 2040 | 2.8569 | - |
| 0.3585 | 2050 | 2.8602 | - |
| 0.3603 | 2060 | 2.8984 | - |
| 0.3620 | 2070 | 2.8782 | - |
| 0.3638 | 2080 | 2.8649 | - |
| 0.3652 | 2088 | - | 1.0136 |
| 0.3655 | 2090 | 2.8388 | - |
| 0.3673 | 2100 | 2.8845 | - |
| 0.3690 | 2110 | 2.8749 | - |
| 0.3708 | 2120 | 2.8439 | - |
| 0.3725 | 2130 | 2.8693 | - |
| 0.3743 | 2140 | 2.8342 | - |
| 0.3753 | 2146 | - | 0.9949 |
| 0.3760 | 2150 | 2.8696 | - |
| 0.3778 | 2160 | 2.872 | - |
| 0.3795 | 2170 | 2.828 | - |
| 0.3813 | 2180 | 2.8338 | - |
| 0.3830 | 2190 | 2.8716 | - |
| 0.3847 | 2200 | 2.8798 | - |
| 0.3854 | 2204 | - | 1.0067 |
| 0.3865 | 2210 | 2.834 | - |
| 0.3882 | 2220 | 2.7885 | - |
| 0.3900 | 2230 | 2.8152 | - |
| 0.3917 | 2240 | 2.8214 | - |
| 0.3935 | 2250 | 2.8306 | - |
| 0.3952 | 2260 | 2.8164 | - |
| 0.3956 | 2262 | - | 0.9845 |
| 0.3970 | 2270 | 2.8338 | - |
| 0.3987 | 2280 | 2.8223 | - |
| 0.4005 | 2290 | 2.8183 | - |
| 0.4022 | 2300 | 2.7903 | - |
| 0.4040 | 2310 | 2.7772 | - |
| 0.4057 | 2320 | 2.7952 | 0.9900 |
| 0.4075 | 2330 | 2.7733 | - |
| 0.4092 | 2340 | 2.8096 | - |
| 0.4110 | 2350 | 2.771 | - |
| 0.4127 | 2360 | 2.8178 | - |
| 0.4145 | 2370 | 2.7539 | - |
| 0.4159 | 2378 | - | 0.9749 |
| 0.4162 | 2380 | 2.7488 | - |
| 0.4180 | 2390 | 2.7592 | - |
| 0.4197 | 2400 | 2.7385 | - |
| 0.4215 | 2410 | 2.7564 | - |
| 0.4232 | 2420 | 2.7573 | - |
| 0.4250 | 2430 | 2.7686 | - |
| 0.4260 | 2436 | - | 0.9509 |
| 0.4267 | 2440 | 2.7147 | - |
| 0.4285 | 2450 | 2.7375 | - |
| 0.4302 | 2460 | 2.6995 | - |
| 0.4320 | 2470 | 2.6888 | - |
| 0.4337 | 2480 | 2.7171 | - |
| 0.4355 | 2490 | 2.712 | - |
| 0.4362 | 2494 | - | 0.9311 |
| 0.4372 | 2500 | 2.729 | - |
| 0.4390 | 2510 | 2.6974 | - |
| 0.4407 | 2520 | 2.7056 | - |
| 0.4425 | 2530 | 2.7123 | - |
| 0.4442 | 2540 | 2.701 | - |
| 0.4460 | 2550 | 2.7211 | - |
| 0.4463 | 2552 | - | 0.9259 |
| 0.4477 | 2560 | 2.6974 | - |
| 0.4495 | 2570 | 2.6823 | - |
| 0.4512 | 2580 | 2.6968 | - |
| 0.4530 | 2590 | 2.7126 | - |
| 0.4547 | 2600 | 2.693 | - |
| 0.4565 | 2610 | 2.7164 | 0.9161 |
| 0.4582 | 2620 | 2.6558 | - |
| 0.4600 | 2630 | 2.6972 | - |
| 0.4617 | 2640 | 2.7116 | - |
| 0.4634 | 2650 | 2.6398 | - |
| 0.4652 | 2660 | 2.6645 | - |
| 0.4666 | 2668 | - | 0.8982 |
| 0.4669 | 2670 | 2.6646 | - |
| 0.4687 | 2680 | 2.6828 | - |
| 0.4704 | 2690 | 2.6502 | - |
| 0.4722 | 2700 | 2.6605 | - |
| 0.4739 | 2710 | 2.6224 | - |
| 0.4757 | 2720 | 2.6753 | - |
| 0.4767 | 2726 | - | 0.8941 |
| 0.4774 | 2730 | 2.6478 | - |
| 0.4792 | 2740 | 2.6688 | - |
| 0.4809 | 2750 | 2.6674 | - |
| 0.4827 | 2760 | 2.6132 | - |
| 0.4844 | 2770 | 2.6286 | - |
| 0.4862 | 2780 | 2.634 | - |
| 0.4869 | 2784 | - | 0.8756 |
| 0.4879 | 2790 | 2.6359 | - |
| 0.4897 | 2800 | 2.6242 | - |
| 0.4914 | 2810 | 2.6443 | - |
| 0.4932 | 2820 | 2.59 | - |
| 0.4949 | 2830 | 2.6166 | - |
| 0.4967 | 2840 | 2.6249 | - |
| 0.4970 | 2842 | - | 0.8802 |
| 0.4984 | 2850 | 2.6257 | - |
| 0.5002 | 2860 | 2.6286 | - |
| 0.5019 | 2870 | 2.5671 | - |
| 0.5037 | 2880 | 2.5959 | - |
| 0.5054 | 2890 | 2.5962 | - |
| 0.5072 | 2900 | 2.5521 | 0.8673 |
| 0.5089 | 2910 | 2.5833 | - |
| 0.5107 | 2920 | 2.6015 | - |
| 0.5124 | 2930 | 2.6446 | - |
| 0.5142 | 2940 | 2.5655 | - |
| 0.5159 | 2950 | 2.5802 | - |
| 0.5173 | 2958 | - | 0.8614 |
| 0.5177 | 2960 | 2.6124 | - |
| 0.5194 | 2970 | 2.5372 | - |
| 0.5212 | 2980 | 2.5108 | - |
| 0.5229 | 2990 | 2.578 | - |
| 0.5247 | 3000 | 2.5629 | - |
| 0.5264 | 3010 | 2.5691 | - |
| 0.5275 | 3016 | - | 0.8418 |
| 0.5282 | 3020 | 2.5313 | - |
| 0.5299 | 3030 | 2.5791 | - |
| 0.5317 | 3040 | 2.5216 | - |
| 0.5334 | 3050 | 2.5263 | - |
| 0.5352 | 3060 | 2.5213 | - |
| 0.5369 | 3070 | 2.5485 | - |
| 0.5376 | 3074 | - | 0.8546 |
| 0.5386 | 3080 | 2.5435 | - |
| 0.5404 | 3090 | 2.5599 | - |
| 0.5421 | 3100 | 2.5045 | - |
| 0.5439 | 3110 | 2.5055 | - |
| 0.5456 | 3120 | 2.54 | - |
| 0.5474 | 3130 | 2.5134 | - |
| 0.5477 | 3132 | - | 0.8515 |
| 0.5491 | 3140 | 2.5053 | - |
| 0.5509 | 3150 | 2.4578 | - |
| 0.5526 | 3160 | 2.517 | - |
| 0.5544 | 3170 | 2.5061 | - |
| 0.5561 | 3180 | 2.5262 | - |
| 0.5579 | 3190 | 2.5787 | 0.8376 |
| 0.5596 | 3200 | 2.4855 | - |
| 0.5614 | 3210 | 2.5058 | - |
| 0.5631 | 3220 | 2.5279 | - |
| 0.5649 | 3230 | 2.498 | - |
| 0.5666 | 3240 | 2.5045 | - |
| 0.5680 | 3248 | - | 0.8407 |
| 0.5684 | 3250 | 2.5129 | - |
| 0.5701 | 3260 | 2.517 | - |
| 0.5719 | 3270 | 2.4647 | - |
| 0.5736 | 3280 | 2.4642 | - |
| 0.5754 | 3290 | 2.4936 | - |
| 0.5771 | 3300 | 2.4862 | - |
| 0.5782 | 3306 | - | 0.8310 |
| 0.5789 | 3310 | 2.4805 | - |
| 0.5806 | 3320 | 2.4986 | - |
| 0.5824 | 3330 | 2.481 | - |
| 0.5841 | 3340 | 2.4747 | - |
| 0.5859 | 3350 | 2.4939 | - |
| 0.5876 | 3360 | 2.4691 | - |
| 0.5883 | 3364 | - | 0.8397 |
| 0.5894 | 3370 | 2.4798 | - |
| 0.5911 | 3380 | 2.4439 | - |
| 0.5929 | 3390 | 2.4849 | - |
| 0.5946 | 3400 | 2.4653 | - |
| 0.5964 | 3410 | 2.4795 | - |
| 0.5981 | 3420 | 2.4681 | - |
| 0.5985 | 3422 | - | 0.8265 |
| 0.5999 | 3430 | 2.4671 | - |
| 0.6016 | 3440 | 2.4579 | - |
| 0.6034 | 3450 | 2.4319 | - |
| 0.6051 | 3460 | 2.4235 | - |
| 0.6069 | 3470 | 2.4447 | - |
| 0.6086 | 3480 | 2.456 | 0.8104 |
| 0.6104 | 3490 | 2.4107 | - |
| 0.6121 | 3500 | 2.49 | - |
| 0.6139 | 3510 | 2.4511 | - |
| 0.6156 | 3520 | 2.4446 | - |
| 0.6173 | 3530 | 2.4159 | - |
| 0.6187 | 3538 | - | 0.8086 |
| 0.6191 | 3540 | 2.4135 | - |
| 0.6208 | 3550 | 2.4147 | - |
| 0.6226 | 3560 | 2.4458 | - |
| 0.6243 | 3570 | 2.4207 | - |
| 0.6261 | 3580 | 2.4333 | - |
| 0.6278 | 3590 | 2.3931 | - |
| 0.6289 | 3596 | - | 0.8036 |
| 0.6296 | 3600 | 2.4695 | - |
| 0.6313 | 3610 | 2.4285 | - |
| 0.6331 | 3620 | 2.4066 | - |
| 0.6348 | 3630 | 2.414 | - |
| 0.6366 | 3640 | 2.4229 | - |
| 0.6383 | 3650 | 2.3916 | - |
| 0.6390 | 3654 | - | 0.7960 |
| 0.6401 | 3660 | 2.4376 | - |
| 0.6418 | 3670 | 2.4196 | - |
| 0.6436 | 3680 | 2.4132 | - |
| 0.6453 | 3690 | 2.4016 | - |
| 0.6471 | 3700 | 2.3749 | - |
| 0.6488 | 3710 | 2.3963 | - |
| 0.6492 | 3712 | - | 0.7895 |
| 0.6506 | 3720 | 2.4223 | - |
| 0.6523 | 3730 | 2.3787 | - |
| 0.6541 | 3740 | 2.368 | - |
| 0.6558 | 3750 | 2.3526 | - |
| 0.6576 | 3760 | 2.3883 | - |
| 0.6593 | 3770 | 2.4286 | 0.7897 |
| 0.6611 | 3780 | 2.366 | - |
| 0.6628 | 3790 | 2.3914 | - |
| 0.6646 | 3800 | 2.416 | - |
| 0.6663 | 3810 | 2.3731 | - |
| 0.6681 | 3820 | 2.4097 | - |
| 0.6695 | 3828 | - | 0.7782 |
| 0.6698 | 3830 | 2.374 | - |
| 0.6716 | 3840 | 2.3591 | - |
| 0.6733 | 3850 | 2.384 | - |
| 0.6751 | 3860 | 2.398 | - |
| 0.6768 | 3870 | 2.3712 | - |
| 0.6786 | 3880 | 2.3936 | - |
| 0.6796 | 3886 | - | 0.7725 |
| 0.6803 | 3890 | 2.3895 | - |
| 0.6821 | 3900 | 2.359 | - |
| 0.6838 | 3910 | 2.3901 | - |
| 0.6856 | 3920 | 2.4 | - |
| 0.6873 | 3930 | 2.3628 | - |
| 0.6891 | 3940 | 2.3732 | - |
| 0.6898 | 3944 | - | 0.7658 |
| 0.6908 | 3950 | 2.3929 | - |
| 0.6925 | 3960 | 2.3792 | - |
| 0.6943 | 3970 | 2.3496 | - |
| 0.6960 | 3980 | 2.3242 | - |
| 0.6978 | 3990 | 2.3471 | - |
| 0.6995 | 4000 | 2.3503 | - |
| 0.6999 | 4002 | - | 0.7617 |
| 0.7013 | 4010 | 2.3693 | - |
| 0.7030 | 4020 | 2.3608 | - |
| 0.7048 | 4030 | 2.3419 | - |
| 0.7065 | 4040 | 2.3577 | - |
| 0.7083 | 4050 | 2.3403 | - |
| 0.7100 | 4060 | 2.3491 | 0.7549 |
| 0.7118 | 4070 | 2.3175 | - |
| 0.7135 | 4080 | 2.3513 | - |
| 0.7153 | 4090 | 2.3767 | - |
| 0.7170 | 4100 | 2.371 | - |
| 0.7188 | 4110 | 2.3103 | - |
| 0.7202 | 4118 | - | 0.7585 |
| 0.7205 | 4120 | 2.3048 | - |
| 0.7223 | 4130 | 2.3406 | - |
| 0.7240 | 4140 | 2.3551 | - |
| 0.7258 | 4150 | 2.3309 | - |
| 0.7275 | 4160 | 2.3565 | - |
| 0.7293 | 4170 | 2.3111 | - |
| 0.7303 | 4176 | - | 0.7527 |
| 0.7310 | 4180 | 2.2925 | - |
| 0.7328 | 4190 | 2.281 | - |
| 0.7345 | 4200 | 2.3131 | - |
| 0.7363 | 4210 | 2.3568 | - |
| 0.7380 | 4220 | 2.3645 | - |
| 0.7398 | 4230 | 2.3283 | - |
| 0.7405 | 4234 | - | 0.7497 |
| 0.7415 | 4240 | 2.3098 | - |
| 0.7433 | 4250 | 2.3136 | - |
| 0.7450 | 4260 | 2.3141 | - |
| 0.7468 | 4270 | 2.2717 | - |
| 0.7485 | 4280 | 2.325 | - |
| 0.7503 | 4290 | 2.3358 | - |
| 0.7506 | 4292 | - | 0.7449 |
| 0.7520 | 4300 | 2.296 | - |
| 0.7538 | 4310 | 2.3211 | - |
| 0.7555 | 4320 | 2.3035 | - |
| 0.7573 | 4330 | 2.3114 | - |
| 0.7590 | 4340 | 2.3076 | - |
| 0.7608 | 4350 | 2.334 | 0.7416 |
| 0.7625 | 4360 | 2.2805 | - |
| 0.7643 | 4370 | 2.3302 | - |
| 0.7660 | 4380 | 2.2753 | - |
| 0.7678 | 4390 | 2.3084 | - |
| 0.7695 | 4400 | 2.308 | - |
| 0.7709 | 4408 | - | 0.7463 |
| 0.7712 | 4410 | 2.2909 | - |
| 0.7730 | 4420 | 2.2796 | - |
| 0.7747 | 4430 | 2.2868 | - |
| 0.7765 | 4440 | 2.3021 | - |
| 0.7782 | 4450 | 2.2977 | - |
| 0.7800 | 4460 | 2.2885 | - |
| 0.7810 | 4466 | - | 0.7391 |
| 0.7817 | 4470 | 2.2967 | - |
| 0.7835 | 4480 | 2.2774 | - |
| 0.7852 | 4490 | 2.3178 | - |
| 0.7870 | 4500 | 2.2785 | - |
| 0.7887 | 4510 | 2.2493 | - |
| 0.7905 | 4520 | 2.2866 | - |
| 0.7912 | 4524 | - | 0.7325 |
| 0.7922 | 4530 | 2.2632 | - |
| 0.7940 | 4540 | 2.289 | - |
| 0.7957 | 4550 | 2.2782 | - |
| 0.7975 | 4560 | 2.2607 | - |
| 0.7992 | 4570 | 2.2914 | - |
| 0.8010 | 4580 | 2.2593 | - |
| 0.8013 | 4582 | - | 0.7318 |
| 0.8027 | 4590 | 2.3077 | - |
| 0.8045 | 4600 | 2.2793 | - |
| 0.8062 | 4610 | 2.3051 | - |
| 0.8080 | 4620 | 2.2914 | - |
| 0.8097 | 4630 | 2.2646 | - |
| 0.8115 | 4640 | 2.2574 | 0.7308 |
| 0.8132 | 4650 | 2.2654 | - |
| 0.8150 | 4660 | 2.235 | - |
| 0.8167 | 4670 | 2.258 | - |
| 0.8185 | 4680 | 2.2935 | - |
| 0.8202 | 4690 | 2.281 | - |
| 0.8216 | 4698 | - | 0.7281 |
| 0.8220 | 4700 | 2.295 | - |
| 0.8237 | 4710 | 2.3095 | - |
| 0.8255 | 4720 | 2.2516 | - |
| 0.8272 | 4730 | 2.2292 | - |
| 0.8290 | 4740 | 2.2635 | - |
| 0.8307 | 4750 | 2.2522 | - |
| 0.8318 | 4756 | - | 0.7330 |
| 0.8325 | 4760 | 2.248 | - |
| 0.8342 | 4770 | 2.3082 | - |
| 0.8360 | 4780 | 2.2447 | - |
| 0.8377 | 4790 | 2.2596 | - |
| 0.8395 | 4800 | 2.2747 | - |
| 0.8412 | 4810 | 2.2343 | - |
| 0.8419 | 4814 | - | 0.7319 |
| 0.8430 | 4820 | 2.2521 | - |
| 0.8447 | 4830 | 2.2642 | - |
| 0.8464 | 4840 | 2.2492 | - |
| 0.8482 | 4850 | 2.2788 | - |
| 0.8499 | 4860 | 2.2925 | - |
| 0.8517 | 4870 | 2.2491 | - |
| 0.8520 | 4872 | - | 0.7304 |
| 0.8534 | 4880 | 2.2666 | - |
| 0.8552 | 4890 | 2.2261 | - |
| 0.8569 | 4900 | 2.2504 | - |
| 0.8587 | 4910 | 2.2567 | - |
| 0.8604 | 4920 | 2.2813 | - |
| 0.8622 | 4930 | 2.244 | 0.7277 |
| 0.8639 | 4940 | 2.2645 | - |
| 0.8657 | 4950 | 2.228 | - |
| 0.8674 | 4960 | 2.2322 | - |
| 0.8692 | 4970 | 2.2547 | - |
| 0.8709 | 4980 | 2.2722 | - |
| 0.8723 | 4988 | - | 0.7272 |
| 0.8727 | 4990 | 2.227 | - |
| 0.8744 | 5000 | 2.2407 | - |
| 0.8762 | 5010 | 2.2269 | - |
| 0.8779 | 5020 | 2.2428 | - |
| 0.8797 | 5030 | 2.2448 | - |
| 0.8814 | 5040 | 2.2562 | - |
| 0.8825 | 5046 | - | 0.7256 |
| 0.8832 | 5050 | 2.2364 | - |
| 0.8849 | 5060 | 2.2445 | - |
| 0.8867 | 5070 | 2.2409 | - |
| 0.8884 | 5080 | 2.2261 | - |
| 0.8902 | 5090 | 2.2613 | - |
| 0.8919 | 5100 | 2.2718 | - |
| 0.8926 | 5104 | - | 0.7233 |
| 0.8937 | 5110 | 2.2544 | - |
| 0.8954 | 5120 | 2.2276 | - |
| 0.8972 | 5130 | 2.2385 | - |
| 0.8989 | 5140 | 2.2401 | - |
| 0.9007 | 5150 | 2.2769 | - |
| 0.9024 | 5160 | 2.2399 | - |
| 0.9028 | 5162 | - | 0.7231 |
| 0.9042 | 5170 | 2.2205 | - |
| 0.9059 | 5180 | 2.2303 | - |
| 0.9077 | 5190 | 2.231 | - |
| 0.9094 | 5200 | 2.2356 | - |
| 0.9112 | 5210 | 2.2386 | - |
| 0.9129 | 5220 | 2.2233 | 0.7233 |
| 0.9147 | 5230 | 2.2509 | - |
| 0.9164 | 5240 | 2.2201 | - |
| 0.9182 | 5250 | 2.2189 | - |
| 0.9199 | 5260 | 2.1992 | - |
| 0.9217 | 5270 | 2.2362 | - |
| 0.9231 | 5278 | - | 0.7221 |
| 0.9234 | 5280 | 2.2293 | - |
| 0.9251 | 5290 | 2.2302 | - |
| 0.9269 | 5300 | 2.2216 | - |
| 0.9286 | 5310 | 2.2191 | - |
| 0.9304 | 5320 | 2.2504 | - |
| 0.9321 | 5330 | 2.2447 | - |
| 0.9332 | 5336 | - | 0.7221 |
| 0.9339 | 5340 | 2.2326 | - |
| 0.9356 | 5350 | 2.2315 | - |
| 0.9374 | 5360 | 2.244 | - |
| 0.9391 | 5370 | 2.2369 | - |
| 0.9409 | 5380 | 2.2312 | - |
| 0.9426 | 5390 | 2.2739 | - |
| 0.9433 | 5394 | - | 0.7206 |
| 0.9444 | 5400 | 2.2598 | - |
| 0.9461 | 5410 | 2.2319 | - |
| 0.9479 | 5420 | 2.2312 | - |
| 0.9496 | 5430 | 2.2592 | - |
| 0.9514 | 5440 | 2.2503 | - |
| 0.9531 | 5450 | 2.232 | - |
| 0.9535 | 5452 | - | 0.7208 |
| 0.9549 | 5460 | 2.2341 | - |
| 0.9566 | 5470 | 2.2564 | - |
| 0.9584 | 5480 | 2.2087 | - |
| 0.9601 | 5490 | 2.257 | - |
| 0.9619 | 5500 | 2.2524 | - |
| 0.9636 | 5510 | 2.253 | 0.7204 |
| 0.9654 | 5520 | 2.2424 | - |
| 0.9671 | 5530 | 2.2459 | - |
| 0.9689 | 5540 | 2.2387 | - |
| 0.9706 | 5550 | 2.2482 | - |
| 0.9724 | 5560 | 2.2156 | - |
| 0.9738 | 5568 | - | 0.7200 |
| 0.9741 | 5570 | 2.2343 | - |
| 0.9759 | 5580 | 2.2426 | - |
| 0.9776 | 5590 | 2.2154 | - |
| 0.9794 | 5600 | 2.2365 | - |
| 0.9811 | 5610 | 2.275 | - |
| 0.9829 | 5620 | 2.2689 | - |
| 0.9839 | 5626 | - | 0.7200 |
| 0.9846 | 5630 | 2.2356 | - |
| 0.9864 | 5640 | 2.2517 | - |
| 0.9881 | 5650 | 2.2436 | - |
| 0.9899 | 5660 | 2.2229 | - |
| 0.9916 | 5670 | 2.2617 | - |
| 0.9934 | 5680 | 2.2359 | - |
| 0.9941 | 5684 | - | 0.7201 |
| 0.9951 | 5690 | 2.2444 | - |
| 0.9969 | 5700 | 2.2505 | - |
| 0.9986 | 5710 | 2.2713 | - |
</details>
### Framework Versions
- Python: 3.11.8
- Sentence Transformers: 3.1.1
- Transformers: 4.45.1
- PyTorch: 2.4.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MaskedCachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
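The loss named above appears to be a custom masked variant of the cached (GradCache-style) in-batch-negatives loss; the standard version ships with Sentence Transformers. A minimal sketch, assuming the stock `CachedMultipleNegativesRankingLoss` API (the base model and batch sizes here are illustrative):
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative base model
train_examples = [
    InputExample(texts=["query: capital of France", "Paris is the capital of France."]),
    InputExample(texts=["query: largest ocean", "The Pacific is the largest ocean."]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)

# GradCache-style loss: scales in-batch negatives to large effective batches
# by recomputing activations in mini-batches, keeping memory bounded
loss = losses.CachedMultipleNegativesRankingLoss(model, mini_batch_size=2)
model.fit(train_objectives=[(loader, loss)], epochs=1)
```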
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-ssp-es | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ssp",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T12:04:35 | 39 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-ssp-es
* source languages: ssp
* target languages: es
* OPUS readme: [ssp-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ssp-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ssp.es | 89.7 | 0.930 |
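A minimal inference sketch (not part of the original OPUS-MT card), assuming the standard MarianMT API in 🤗 Transformers; the input string is a placeholder for ssp-side text:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ssp-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate source-language (ssp) text into Spanish
batch = tokenizer(["<ssp source text>"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```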
| [
"TRANSLATION"
] | Non_BioNLP |
onnx-community/opus-mt-ru-en | onnx-community | translation | [
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-ru-en",
"base_model:quantized:Helsinki-NLP/opus-mt-ru-en",
"license:cc-by-4.0",
"region:us"
] | 1,724,794,599,000 | 2024-10-08T13:54:14 | 7 | 0 | ---
base_model: Helsinki-NLP/opus-mt-ru-en
library_name: transformers.js
license: cc-by-4.0
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-ru-en with ONNX weights to be compatible with Transformers.js.
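For reference, a hedged sketch of how ONNX weights like these can be produced and used from Python with [🤗 Optimum](https://huggingface.co/docs/optimum/index) (the output folder name is illustrative):
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

model_id = "Helsinki-NLP/opus-mt-ru-en"
# export=True converts the PyTorch checkpoint to ONNX on the fly
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.save_pretrained("onnx")      # ONNX weights land in a local folder
tokenizer.save_pretrained("onnx")
```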
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | [
"TRANSLATION"
] | Non_BioNLP |
jamiehudson/625-model-brand-rem-jh3 | jamiehudson | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,680,725,064,000 | 2023-04-05T20:04:37 | 10 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# 625-model-brand-rem-jh3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
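A minimal training sketch of this two-step recipe (not from the original card; the dataset, base model, and hyperparameters are illustrative):
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Illustrative few-shot setup: 16 labeled examples from SST-2
train_ds = load_dataset("sst2", split="train").shuffle(seed=42).select(range(16))

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the encoder
    num_iterations=20,                # contrastive pair-generation rounds
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()  # step 2 fits the classification head on the tuned embeddings
```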
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jamiehudson/625-model-brand-rem-jh3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
TransferGraph/classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:classla/bcms-bertic-parlasent-bcs-ter",
"base_model:adapter:classla/bcms-bertic-parlasent-bcs-ter",
"model-index",
"region:us"
] | 1,709,213,868,000 | 2024-02-29T13:37:51 | 3 | 0 | ---
base_model: classla/bcms-bertic-parlasent-bcs-ter
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.6198952879581152
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [classla/bcms-bertic-parlasent-bcs-ter](https://huggingface.co/classla/bcms-bertic-parlasent-bcs-ter) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
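A hedged inference sketch for loading this LoRA adapter with PEFT; the two-label head (irony vs. non-irony) is an assumption from the task, not stated in the card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "classla/bcms-bertic-parlasent-bcs-ter"
adapter_id = "TransferGraph/classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_irony"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# The base checkpoint carries a different head, so it is re-initialized for 2 labels
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("What a totally unexpected plot twist.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(pred)  # tweet_eval/irony convention: 0 = non-irony, 1 = irony
```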
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4681 | None | 0 |
| 0.5822 | 0.7160 | 0 |
| 0.5990 | 0.6825 | 1 |
| 0.6073 | 0.6548 | 2 |
| 0.6052 | 0.6302 | 3 |
| 0.6188 | 0.6116 | 4 |
| 0.5969 | 0.6127 | 5 |
| 0.6230 | 0.6000 | 6 |
| 0.6199 | 0.5972 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-698531 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Academic",
"Debates",
"Counterarguments",
"Research",
"Education",
"custom_code",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-698531",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,845,257,000 | 2024-05-27T21:27:52 | 28 | 0 | ---
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-698531
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Academic
- Debates
- Counterarguments
- Research
- Education
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
information retrieval system for academic debates
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-698531',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"endpoints_compatible",
"region:us"
] | 1,722,076,591,000 | 2024-07-27T16:31:14 | 56 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
google-gemma-2-9b - GGUF
- Model creator: https://huggingface.co/SillyTilly/
- Original model: https://huggingface.co/SillyTilly/google-gemma-2-9b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [google-gemma-2-9b.Q2_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q2_K.gguf) | Q2_K | 3.54GB |
| [google-gemma-2-9b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.IQ3_XS.gguf) | IQ3_XS | 3.86GB |
| [google-gemma-2-9b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.IQ3_S.gguf) | IQ3_S | 4.04GB |
| [google-gemma-2-9b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q3_K_S.gguf) | Q3_K_S | 4.04GB |
| [google-gemma-2-9b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.IQ3_M.gguf) | IQ3_M | 4.19GB |
| [google-gemma-2-9b.Q3_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q3_K.gguf) | Q3_K | 4.43GB |
| [google-gemma-2-9b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q3_K_M.gguf) | Q3_K_M | 4.43GB |
| [google-gemma-2-9b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q3_K_L.gguf) | Q3_K_L | 4.78GB |
| [google-gemma-2-9b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.IQ4_XS.gguf) | IQ4_XS | 4.86GB |
| [google-gemma-2-9b.Q4_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q4_0.gguf) | Q4_0 | 5.07GB |
| [google-gemma-2-9b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.IQ4_NL.gguf) | IQ4_NL | 5.1GB |
| [google-gemma-2-9b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q4_K_S.gguf) | Q4_K_S | 5.1GB |
| [google-gemma-2-9b.Q4_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q4_K.gguf) | Q4_K | 5.37GB |
| [google-gemma-2-9b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q4_K_M.gguf) | Q4_K_M | 5.37GB |
| [google-gemma-2-9b.Q4_1.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q4_1.gguf) | Q4_1 | 5.55GB |
| [google-gemma-2-9b.Q5_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q5_0.gguf) | Q5_0 | 6.04GB |
| [google-gemma-2-9b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q5_K_S.gguf) | Q5_K_S | 6.04GB |
| [google-gemma-2-9b.Q5_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q5_K.gguf) | Q5_K | 6.19GB |
| [google-gemma-2-9b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q5_K_M.gguf) | Q5_K_M | 6.19GB |
| [google-gemma-2-9b.Q5_1.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q5_1.gguf) | Q5_1 | 6.52GB |
| [google-gemma-2-9b.Q6_K.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q6_K.gguf) | Q6_K | 7.07GB |
| [google-gemma-2-9b.Q8_0.gguf](https://huggingface.co/RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf/blob/main/google-gemma-2-9b.Q8_0.gguf) | Q8_0 | 9.15GB |
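A hedged sketch of running one of these files locally with the `llama-cpp-python` bindings (the file choice and generation settings are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo, then load it with llama.cpp
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf",
    filename="google-gemma-2-9b.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write me a poem about Machine Learning.", max_tokens=128)
print(out["choices"][0]["text"])
```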
Original model description:
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (the model weights will simply be upcast to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF | Triangle104 | text-generation | [
"transformers",
"gguf",
"text-generation-inference",
"LwQ",
"safetensors",
"Llama3.1",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:prithivMLmods/LwQ-10B-Instruct",
"base_model:quantized:prithivMLmods/LwQ-10B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,737,328,208,000 | 2025-01-19T23:12:05 | 8 | 0 | ---
base_model: prithivMLmods/LwQ-10B-Instruct
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- text-generation-inference
- LwQ
- safetensors
- Llama3.1
- llama-cpp
- gguf-my-repo
---
# Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`prithivMLmods/LwQ-10B-Instruct`](https://huggingface.co/prithivMLmods/LwQ-10B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/LwQ-10B-Instruct) for more details on the model.
---
Model details:
-
LwQ-10B-Instruct (Llama with Questions), based on the Llama 3.1 collection of multilingual large language models (LLMs), is a set of pre-trained and instruction-tuned generative models optimized for multilingual dialogue use cases. These models outperform many available open-source alternatives.

Model Architecture: Llama 3.1 is an auto-regressive language model that utilizes an optimized transformer architecture. The tuned versions undergo supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to better align with human preferences for helpfulness and safety.

LwQ-10B is trained on synthetic reasoning datasets for mathematical reasoning and context-based problem-solving, with a focus on following instructions or keywords embedded in the input.
Use with transformers
Starting with `transformers` >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch

model_id = "prithivMLmods/LwQ-10B-Instruct"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Intended Use

- **Multilingual Conversational Agents:** LwQ-10B-Instruct is well-suited for building multilingual chatbots and virtual assistants, providing accurate and context-aware responses in various languages.
- **Instruction-Following Applications:** The model is ideal for tasks where adherence to specific instructions is critical, such as task automation, guided workflows, and structured content generation.
- **Mathematical and Logical Reasoning:** Trained on synthetic reasoning datasets, LwQ-10B can handle mathematical problem-solving, logical reasoning, and step-by-step explanations, making it suitable for education platforms and tutoring systems.
- **Contextual Problem-Solving:** The model is optimized for solving contextually rich problems by understanding and processing inputs with embedded instructions or keywords, useful for complex decision-making and recommendation systems.
- **Content Creation and Summarization:** LwQ-10B can generate high-quality content, including articles, reports, and summaries, across different languages and domains.
Limitations

- **Limited Context Window:** The model has a finite context length, which may affect its ability to handle tasks requiring extensive context or long conversations.
- **Performance Variability Across Languages:** While it supports multiple languages, performance may vary, with higher accuracy in languages that are better represented in the training data.
- **Accuracy in Complex Reasoning:** Despite being trained on reasoning datasets, the model may occasionally produce incorrect or incomplete answers for highly complex or multi-step reasoning tasks.
- **Bias and Ethical Risks:** Since the model is trained on large datasets from diverse sources, it may exhibit biases present in the training data, potentially leading to inappropriate or biased outputs.
- **Dependency on Clear Instructions:** The model's ability to generate accurate outputs relies heavily on the clarity and specificity of user instructions; ambiguous or vague instructions may result in suboptimal responses.
- **Resource Requirements:** As a large language model with 10 billion parameters, it requires significant computational resources for both training and inference, limiting its deployment in low-resource environments.
- **Lack of Real-Time Understanding:** LwQ-10B has no knowledge of events or data beyond its training cutoff, so it may not provide accurate responses about recent or rapidly changing information.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF --hf-file lwq-10b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF --hf-file lwq-10b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF --hf-file lwq-10b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF --hf-file lwq-10b-instruct-q5_k_m.gguf -c 2048
```
| [
"SUMMARIZATION"
] | Non_BioNLP |
Joemgu/mlong-t5-base-sumstew | Joemgu | summarization | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"summarization",
"long",
"title generation",
"en",
"de",
"fr",
"it",
"es",
"dataset:Joemgu/sumstew",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,686,250,321,000 | 2023-07-01T11:46:18 | 80 | 5 | ---
datasets:
- Joemgu/sumstew
language:
- en
- de
- fr
- it
- es
library_name: transformers
license: apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- summarization
- long
- title generation
---
# STILL UNDER DEVELOPMENT (TRAINING RUNNING)
## How to use:
Prefix your document of choice with either:
- "Summarize: "+INPUT_TEXT
- "Write a title and summarize: "+INPUT_TEXT
Depending on the prefix, the output will either be:
- "Summary: SUMMARY OF INPUT_TEXT"
- "Title: TITLE OF INPUT_TEXT Summary: SUMMARY OF INPUT_TEXT" | [
"SUMMARIZATION"
] | Non_BioNLP |
Infomaniak-AI/onnx-opus-mt-en-de | Infomaniak-AI | translation | [
"onnx",
"marian",
"translation",
"en",
"de",
"base_model:Helsinki-NLP/opus-mt-en-de",
"base_model:quantized:Helsinki-NLP/opus-mt-en-de",
"license:apache-2.0",
"region:us"
] | 1,723,562,975,000 | 2024-08-13T15:58:14 | 12 | 0 | ---
base_model: Helsinki-NLP/opus-mt-en-de
language:
- en
- de
license: apache-2.0
pipeline_tag: translation
tags:
- translation
- onnx
---
### opus-mt-en-de
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation
- **Language(s):**
- Source Language: English
- Target Language: German
- **License:** CC-BY-4.0
- **Resources for more information:**
- [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Uses
#### Direct Use
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Further details about the dataset for this model can be found in the OPUS readme: [en-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-de/README.md)
#### Training Data
##### Preprocessing
* pre-processing: normalization + SentencePiece
* dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.test.txt)
## Evaluation
#### Results
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.eval.txt)
#### Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.de | 23.5 | 0.540 |
| news-test2008.en.de | 23.5 | 0.529 |
| newstest2009.en.de | 22.3 | 0.530 |
| newstest2010.en.de | 24.9 | 0.544 |
| newstest2011.en.de | 22.5 | 0.524 |
| newstest2012.en.de | 23.0 | 0.525 |
| newstest2013.en.de | 26.9 | 0.553 |
| newstest2015-ende.en.de | 31.1 | 0.594 |
| newstest2016-ende.en.de | 37.0 | 0.636 |
| newstest2017-ende.en.de | 29.9 | 0.586 |
| newstest2018-ende.en.de | 45.2 | 0.690 |
| newstest2019-ende.en.de | 40.9 | 0.654 |
| Tatoeba.en.de | 47.3 | 0.664 |
## Citation Information
```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
```
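A short inference sketch continuing from the snippet above (the input sentence and generation length are illustrative):

```python
# Translate one sentence with the model and tokenizer loaded above
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```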
| [
"TRANSLATION"
] | Non_BioNLP |
yarak001/distilbert-base-uncased-finetuned-emotion | yarak001 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,684,369,721,000 | 2023-05-18T01:03:56 | 26 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9225
name: Accuracy
- type: f1
value: 0.9225635095680048
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
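Pending a fuller card, a minimal inference sketch with the 🤗 `pipeline` is shown below; the label mapping is an assumption based on the emotion dataset's default ordering (0 = sadness, 1 = joy, 2 = love, 3 = anger, 4 = fear, 5 = surprise):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yarak001/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- LABEL_1 ~ "joy" under the assumed mapping
```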
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8134 | 1.0 | 250 | 0.3127 | 0.903 | 0.9000 |
| 0.247 | 2.0 | 500 | 0.2207 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
ParhamAbdarzade/finetuning-sentiment-model-20000-samples-imdb-v2 | ParhamAbdarzade | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,670,232,801,000 | 2022-12-05T10:32:04 | 112 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-20000-samples-imdb-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- type: accuracy
value: 0.924
name: Accuracy
- type: f1
value: 0.9241516966067864
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-20000-samples-imdb-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3694
- Accuracy: 0.924
- F1: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
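For reference, a minimal inference sketch follows; the LABEL_0 = negative / LABEL_1 = positive mapping is an assumption based on the usual convention for IMDB fine-tunes:

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="ParhamAbdarzade/finetuning-sentiment-model-20000-samples-imdb-v2",
)
print(classifier("A surprisingly moving film with terrific performances."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- positive under the assumed mapping
```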
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2795 | 1.0 | 2500 | 0.2224 | 0.9275 | 0.9263 |
| 0.1877 | 2.0 | 5000 | 0.3141 | 0.9275 | 0.9274 |
| 0.1045 | 3.0 | 7500 | 0.3694 | 0.924 | 0.9242 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gokuls/hBERTv1_new_pretrain_48_emb_com_sst2 | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,686,748,051,000 | 2023-06-14T15:46:23 | 10 | 0 | ---
datasets:
- glue
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- type: accuracy
value: 0.7947247706422018
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_emb_com_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4656
- Accuracy: 0.7947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4711 | 1.0 | 527 | 0.4730 | 0.7787 |
| 0.2942 | 2.0 | 1054 | 0.5417 | 0.7993 |
| 0.228 | 3.0 | 1581 | 0.5140 | 0.7901 |
| 0.1897 | 4.0 | 2108 | 0.4951 | 0.7947 |
| 0.1608 | 5.0 | 2635 | 0.4656 | 0.7947 |
| 0.1422 | 6.0 | 3162 | 0.6786 | 0.7741 |
| 0.1244 | 7.0 | 3689 | 0.6886 | 0.8028 |
| 0.1112 | 8.0 | 4216 | 0.7387 | 0.7821 |
| 0.0985 | 9.0 | 4743 | 0.7169 | 0.7936 |
| 0.0883 | 10.0 | 5270 | 0.7695 | 0.7959 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
YakovElm/Hyperledger10SetFitModel_balance_ratio_2 | YakovElm | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,685,623,875,000 | 2023-06-01T12:52:07 | 8 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# YakovElm/Hyperledger10SetFitModel_balance_ratio_2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Hyperledger10SetFitModel_balance_ratio_2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-fr-kqn | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"kqn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:36:40 | 48 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-fr-kqn
* source languages: fr
* target languages: kqn
* OPUS readme: [fr-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kqn | 23.3 | 0.469 |
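## Usage

The card ships without usage code; the following is a minimal sketch using the standard MarianMT interface (the example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-kqn"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```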
| [
"TRANSLATION"
] | Non_BioNLP |
buddhist-nlp/gemma-2-mitra-it | buddhist-nlp | text-generation | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,731,014,908,000 | 2024-11-08T00:35:12 | 85 | 4 | ---
library_name: transformers
tags: []
---
# gemma2-mitra-it
This model is based on gemma2-mitra-base and fine-tuned on translation instructions.
The template for prompting the model is this:
```
Please translate into <target_language>: <input_sentence> 🔽 Translation::
```
Line breaks in the input should be replaced with the '🔽' character before running generation. '#' is used as a stop token.
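A minimal sketch of that flow (the generation settings are illustrative; the model is assumed to load through the standard Gemma 2 causal-LM interface):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "buddhist-nlp/gemma-2-mitra-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

source = "First line of the passage.\nSecond line of the passage."
flat = source.replace("\n", "🔽")  # replace line breaks with the 🔽 character
prompt = f"Please translate into English: {flat} 🔽 Translation::"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
completion = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(completion.split("#")[0])  # '#' is the stop token, so cut the output there
```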
## Model Details
For details on how to run this please see the gemma2-9b repository: https://huggingface.co/google/gemma-2-9b
| [
"TRANSLATION"
] | Non_BioNLP |
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_sst2 | gokuls | text-classification | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,697,582,402,000 | 2023-10-17T23:27:28 | 36 | 0 | ---
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
datasets:
- glue
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- type: accuracy
value: 0.8119266055045872
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_ver2_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4256
- Accuracy: 0.8119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3386 | 1.0 | 1053 | 0.4256 | 0.8119 |
| 0.2249 | 2.0 | 2106 | 0.6293 | 0.8085 |
| 0.1865 | 3.0 | 3159 | 0.4738 | 0.7982 |
| 0.1666 | 4.0 | 4212 | 0.5173 | 0.8142 |
| 0.1429 | 5.0 | 5265 | 0.6124 | 0.7982 |
| 0.119 | 6.0 | 6318 | 0.6314 | 0.8062 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Tevatron/dse-phi3-docmatix-v2 | Tevatron | null | [
"Tevatron",
"pytorch",
"phi3_v",
"vidore",
"custom_code",
"en",
"dataset:Tevatron/docmatix-ir",
"dataset:HuggingFaceM4/Docmatix",
"dataset:Tevatron/msmarco-passage-aug",
"arxiv:2406.11251",
"license:mit",
"region:us"
] | 1,722,409,227,000 | 2024-08-12T07:57:37 | 187 | 1 | ---
datasets:
- Tevatron/docmatix-ir
- HuggingFaceM4/Docmatix
- Tevatron/msmarco-passage-aug
language:
- en
library_name: Tevatron
license: mit
tags:
- vidore
---
# DSE-Phi3-Docmatix-V2
DSE-Phi3-Docmatix-V2 is a bi-encoder model designed to encode document screenshots into dense vectors for document retrieval. The Document Screenshot Embedding ([DSE](https://arxiv.org/abs/2406.11251)) approach captures documents in their original visual format, preserving all information such as text, images, and layout, thus avoiding tedious parsing and potential information loss.
The model, `Tevatron/dse-phi3-docmatix-v2`, is trained using 1/10 of the `Tevatron/docmatix-ir` dataset, a variant of `HuggingFaceM4/Docmatix` specifically adapted for training PDF retrievers with Vision Language Models in open-domain question answering scenarios. For more information on dataset filtering and hard negative mining, refer to the [docmatix-ir](https://huggingface.co/datasets/Tevatron/docmatix-ir/blob/main/README.md) dataset page.
DSE demonstrates strong zero-shot effectiveness for document retrieval with both visual and text input.
For example, DSE-Phi3-Docmatix-V2 achieves **77.6** nDCG@5 on the [ViDoRE](https://huggingface.co/spaces/vidore/vidore-leaderboard) leaderboard in a **zero-shot setting** (i.e., without fine-tuning on the ViDoRe training data).
## How to train the model from scratch
Please see https://github.com/texttron/tevatron/tree/main/examples/dse
## How to Use the Model
### Load the Model and Processor
```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM
processor = AutoProcessor.from_pretrained('Tevatron/dse-phi3-docmatix-v2', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('Tevatron/dse-phi3-docmatix-v2', trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, use_cache=False).to('cuda:0')
def get_embedding(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
sequence_lengths = attention_mask.sum(dim=1) - 1
bs = last_hidden_state.shape[0]
reps = last_hidden_state[torch.arange(bs, device=last_hidden_state.device), sequence_lengths]
reps = torch.nn.functional.normalize(reps, p=2, dim=-1)
return reps
```
### Encode Text Query
```python
queries = ["query: Where can we see Llama?</s>", "query: What is LLaMA model?</s>"]
query_inputs = processor(queries, return_tensors="pt", padding="longest", max_length=128, truncation=True).to('cuda:0')
with torch.no_grad():
output = model(**query_inputs, return_dict=True, output_hidden_states=True)
query_embeddings = get_embedding(output.hidden_states[-1], query_inputs["attention_mask"])
```
### Encode Document Screenshot
```python
from PIL import Image
import requests
from io import BytesIO
# URLs of the images
url1 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v2/resolve/main/animal-llama.png"
url2 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v2/resolve/main/meta-llama.png"
# Download and open images
response1 = requests.get(url1)
response2 = requests.get(url2)
passage_image1 = Image.open(BytesIO(response1.content)).resize((1344, 1344))
passage_image2 = Image.open(BytesIO(response2.content)).resize((1344, 1344))
passage_images = [passage_image1, passage_image2]
passage_prompts = ["<|image_1|>\nWhat is shown in this image?</s>", "<|image_2|>\nWhat is shown in this image?</s>"]
# Process inputs and get embeddings
passage_inputs = processor(passage_prompts, images=passage_images, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
passage_inputs['input_ids'] = passage_inputs['input_ids'].squeeze(0)
passage_inputs['attention_mask'] = passage_inputs['attention_mask'].squeeze(0)
passage_inputs['image_sizes'] = passage_inputs['image_sizes'].squeeze(0)
with torch.no_grad():
output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])
```
### Compute Similarity
```python
from torch.nn.functional import cosine_similarity
num_queries = query_embeddings.size(0)
num_passages = doc_embeddings.size(0)
for i in range(num_queries):
query_embedding = query_embeddings[i].unsqueeze(0)
similarities = cosine_similarity(query_embedding, doc_embeddings)
print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
### Encode Document Text
This DSE checkpoint is warm-up with `Tevatron/msmarco-passage-aug`, thus the model can also effectively encode document as text input.
```python
passage_prompts = [
"The llama (/ˈlɑːmə/; Spanish pronunciation: [ˈʎama] or [ˈʝama]) (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.</s>",
"Llama (acronym for Large Language Model Meta AI, and formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023.[2][3] The latest version is Llama 3.1, released in July 2024.[4]</s>"
]
passage_inputs = processor(passage_prompts, images=None, return_tensors="pt", padding="longest", max_length=4096, truncation=True).to('cuda:0')
with torch.no_grad():
output = model(**passage_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], passage_inputs["attention_mask"])
for i in range(num_queries):
query_embedding = query_embeddings[i].unsqueeze(0)
similarities = cosine_similarity(query_embedding, doc_embeddings)
print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
### Citation
If you find this checkpoint helpful, please consider citing Phi3, Docmatix, and our DSE work.
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
kafikani/autotrain-iinjh-0wh75 | kafikani | text-classification | [
"tensorboard",
"safetensors",
"longformer",
"autotrain",
"text-classification",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"region:us"
] | 1,730,450,943,000 | 2024-11-06T05:21:02 | 4 | 0 | ---
base_model: allenai/longformer-base-4096
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.307055115699768
f1_macro: 0.5244016249451032
f1_micro: 0.7504835589941973
f1_weighted: 0.714761195760481
precision_macro: 0.5012229210342417
precision_micro: 0.7504835589941973
precision_weighted: 0.6860840439724423
recall_macro: 0.5532259049014222
recall_micro: 0.7504835589941973
recall_weighted: 0.7504835589941973
accuracy: 0.7504835589941973
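A minimal inference sketch (the output label names depend on the AutoTrain config, which is not shown here):

```python
from transformers import pipeline

# Longformer accepts sequences up to 4096 tokens, so enable truncation for safety
classifier = pipeline("text-classification", model="kafikani/autotrain-iinjh-0wh75")
print(classifier("Long input document ...", truncation=True))
```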
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
zeynepcetin/distilbert-base-uncased-zeynepc-5dim | zeynepcetin | text-classification | [
"transformers",
"tf",
"distilbert",
"text-classification",
"personality-analysis",
"five-factor-model",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,729,508,041,000 | 2025-01-22T17:05:49 | 4 | 0 | ---
base_model: distilbert-base-uncased
library_name: transformers
license: apache-2.0
tags:
- text-classification
- personality-analysis
- five-factor-model
- transformers
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-zeynepc-5dim
results:
- task:
type: text-classification
name: Personality Analysis
dataset:
name: Custom Personality Dataset
type: text
metrics:
- type: accuracy
value:
- Accuracy Value
name: Accuracy
---
# distilbert-base-uncased-zeynepc-5dim
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) for personality classification based on the Five-Factor Model (FFM), also known as the Big Five Personality Traits. It predicts probabilities for five personality dimensions: Extraversion, Agreeableness, Neuroticism, Openness, and Conscientiousness.
## Model description
The model uses the `distilbert-base-uncased` architecture as a base and is fine-tuned to classify text into five personality dimensions. Probabilities for each dimension are calculated using a sigmoid activation function, making it suitable for multi-label classification.
### Key Features:
- **Base Model:** DistilBERT, a lightweight transformer model that balances performance and efficiency.
- **Personality Dimensions:** Outputs probabilities for:
- **Extraversion**
- **Agreeableness**
- **Neuroticism**
- **Openness**
- **Conscientiousness**
- **Intended Task:** Personality analysis and text classification.
## Intended uses & limitations
### Use Cases:
- Analyzing personality traits in text-based content.
- Understanding user characteristics for personalized recommendations.
- Research in psychology and linguistics.
### Limitations:
- The model's predictions are based on the dataset used for training. Biases in the training data may affect its predictions.
- Fine-tuned for English text; performance on other languages may be suboptimal.
- Not suitable for high-stakes decision-making without additional verification.
## Training and evaluation data
The model was fine-tuned on a custom dataset related to personality analysis. The dataset includes labeled examples corresponding to the Five-Factor Model traits. The evaluation set results will depend on the size and quality of this dataset.
### Dataset details:
- **Type:** Text-based personality annotations.
- **Preprocessing:** Text was tokenized using the `distilbert-base-uncased` tokenizer with a maximum length of 512 tokens.
- **Evaluation Metrics:** Accuracy and other relevant metrics specific to multi-label classification.
## How to use
The model can be used with the Hugging Face `transformers` library as follows:
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("zeynepcetin/distilbert-base-uncased-zeynepc-5dim")
model = TFAutoModelForSequenceClassification.from_pretrained("zeynepcetin/distilbert-base-uncased-zeynepc-5dim")
# Example text
text = "I enjoy working with others and find joy in social interactions."
# Tokenize and predict
inputs = tokenizer(text, truncation=True, padding=True, max_length=512, return_tensors="tf")
outputs = model(inputs)
probs = tf.nn.sigmoid(outputs.logits).numpy()
# Print probabilities for each personality trait
traits = ["Extraversion", "Agreeableness", "Neuroticism", "Openness", "Conscientiousness"]
for trait, prob in zip(traits, probs[0]):
print(f"{trait}: {prob:.2f}")
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
unsloth/SmolLM2-360M | unsloth | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,730,409,947,000 | 2024-10-31T22:53:13 | 5,520 | 0 | ---
base_model: HuggingFaceTB/SmolLM2-360M
language:
- en
library_name: transformers
license: apache-2.0
tags:
- llama
- unsloth
- transformers
---
# Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/SmolLM2-360M
For more details on the model, please go to Hugging Face's original [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-360M)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Hugging Face team for creating and releasing these models.
## Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
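For a quick smoke test outside Unsloth, the checkpoint also loads with plain Transformers (the prompt and settings below are illustrative; this is a base model, so use text completion rather than a chat template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "unsloth/SmolLM2-360M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Gravity is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```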
# SmolLM2
 | [
"SUMMARIZATION"
] | Non_BioNLP |
YxBxRyXJx/bge-base-movie-matryoshka | YxBxRyXJx | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:183",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:YxBxRyXJx/QAsimple_for_BGE_241019",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,732,162,745,000 | 2024-11-21T04:19:21 | 20 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
datasets:
- YxBxRyXJx/QAsimple_for_BGE_241019
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:183
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: '14 William O. Douglas, quoted in Charles Hurd, Film Booking Issue
Ordered Reopened,” New York Times, May 4, 1948, 1. 15 Movie Crisis Laid to Video
Inroads And Dwindling of Foreign Market, New York Times, February 27, 1949, F1.
For details on the lawsuit and its effects, see Arthur De Vany and Henry McMillan,
Was the Antitrust Action that Broke Up the Movie Studios Good for the Movies?
Evidence from the Stock Market. American Law and Economics Review 6, no. 1 (2004):
135-53; and J.C. Strick, The Economics of the Motion Picture Industry: A Survey,
Philosophy of the Social Sciences 8, no. 4 (December 1978): 406-17. 16 The Hollywood
feature films for which Eisler provided music are Hangmen Also Die (1942), None
But the Lonely Heart (1944), Jealousy (1945), The Spanish Main (1945); A Scandal
in Paris (1946), Deadline at Dawn (1946), Woman on the Beach (1947), and So Well
Remembered (1947). Most of these are middle-of-the-road genre pieces, but the
first NOTES 267'
sentences:
- What is the opinion of Ernest Irving, a pioneer of British film music, on the
overall quality of American film music?
- What is the title of the 2007 film directed by David Fincher, produced by Michael
Medavoy, and featuring a storyline based on a real-life serial killer, as mentioned
in the provided context information?
- What was the primary reason behind the lawsuit that led to the breakup of the
movie studios, as suggested by the article in the New York Times on February 27,
1949?
- source_sentence: 'But Gorbman (who like Flinn and Kalinak approached film music
from a formal background not in musicology but in literary criticism) was certainly
not the first scholar engaged in so-called film studies44 to address the role
that extra-diegetic music played in classical-style films. Two years before Gorbman''s
book was published, the trio of Bordwell, Staiger, and Thompson brought out their
monumental The Classical Hollywood Cinema: Film Style and Production to 1960.
As noted above, and apropos of its title, the book focuses on filmic narrative
style and the technical devices that made this style possible. In its early pages,
however, it also contains insightful comments on classical cinema''s use of music.
The book''s first music-related passage lays a foundation for Gorbman''s point
about how a score might lend unity to a film by recycling distinctive themes that
within the THE GOLDEN AGE OF FILM MUSIC, 1933-49 143'
sentences:
- What is the possible reason, as suggested by David Thomson, for why David Lean's
filmmaking style may have declined after the movie "Summer Madness" (US, 1955)?
- What shift in the portrayal of hard body male characters in film, as exemplified
by the actors who played these roles in the 1980s and 1990s, suggests that societal
expectations and norms may be changing?
- What is the significance of the authors' formal background in literary criticism
rather than musicology, as mentioned in the context of Gorbman's approach to film
music?
- source_sentence: (1931); Georg Wilhelm Pabst's Kameradschaft (1931); Fritz Lang's
M (1931) and Das Testament der Dr. Mabuse (1932); and Carl Theodor Dreyer's Vampyr
(1932). These films’ subtle mix of actual silence with accompanying music and
more or less realistic sound effects has drawn and doubtless will continue to
draw serious analytical attention from film scholars.45 And even in their own
time they drew due attention aplenty from critics of avant-garde persuasion.46
The mere fact that these films differed from the sonic norm attracted the notice,
if not always the praise, of movie reviewers for the popular press. Writing from
London, a special correspondent for the New York Times observed that Hitchcock's
Blackmail goes some way to showing how the cinematograph and the microphone can
be mated without their union being forced upon the attention of a punctilious
world as VITAPHONE AND MOVIETONE, 1926-8 101
sentences:
- What was the primary limitation that led to the failure of Edison's first Kinetophone,
which was an early attempt at sound film featuring musical accompaniment?
- What was the specific sonic approach employed by the mentioned films of Georg
Wilhelm Pabst, Fritz Lang, and Carl Theodor Dreyer that drew serious analytical
attention from film scholars?
- What limitation in Martin Scorsese's background, as mentioned in the text, restricted
his choice of subjects at this stage in his career?
- source_sentence: "39\tdivided into small, three-dimensional cubes known as volumetric\
\ pixels, or voxels. When viewers are watching certain images, the voxel demonstrates\
\ how these images in the movie are mapped into brain activity. Clips of the movie\
\ are reconstructed through brain imaging and computer stimulation by associating\
\ visual patterns in the movie with the corresponding brain activity. However,\
\ these reconstructions are blurry and are hard to make because researchers say,\
\ blood flow signals measured using fMRI change much more slowly than the neural\
\ signals that encode dynamic information in movies. Psychology and neuroscience\
\ professor, Jack Gallant explains in an interview that primary visual cortex\
\ responds to the local features of the movie such as edges, colors, motion, and\
\ texture but this part of the brain cannot understand the objects in the movie.\
\ In addition, movies that show people are reconstructed with better accuracy\
\ than abstract images. Using Neuroimaging For Entertainment Success Can brain\
\ scans predict movie success in the box office? Two marketing researchers from\
\ the Rotterdam School of Management devised an experiment by using EEG on participants.\
\ EEG demonstrated that individual choice and box office success correlate with\
\ different types of brain activity. From article, How Neuroimaging Can Save The\
\ Entertainment Industry Millions of Dollars, it states, individual choice is\
\ predicted best by high frontocentral beta activity, the choice of the general\
\ population is predicted by frontal gamma activity. Perhaps, with quickly advanced\
\ technology, predicting movie genre and plots that can hit the box office could\
\ be successful. Neurocinema in Hollywood One strategy that helps filmmakers,\
\ producers, and distributors to achieve global market success is by using fMRI\
\ and EEG to make a better storyline, characters, sound effects, and other"
sentences:
- What significant change in the portrayal of Rocky's character is evident in the
2015 movie Creed, as compared to the original 1976 film Rocky?
- What factors led to the selection of the films "Spider-man" (2002), "Cars" (2006),
and "Avatar" (2009) for the research project examining the relationship between
film and society in the early 2000s?
- What is the main reason why researchers find it challenging to reconstruct abstract
images from movie clips using brain imaging and computer stimulation?
- source_sentence: "11\tdocumentary film so unpleasant when most had sat through horror\
\ pictures that were appreciably more violent and bloody. The answer that McCauley\
\ came up with was that the fictional nature of horror films affords viewers a\
\ sense of control by placing psychological distance between them and the violent\
\ acts they have witnessed. Most people who view horror movies understand that\
\ the filmed events are unreal, which furnishes them with psychological distance\
\ from the horror portrayed in the film. In fact, there is evidence that young\
\ viewers who perceive greater realism in horror films are more negatively affected\
\ by their exposure to horror films than viewers who perceive the film as unreal\
\ (Hoekstra, Harris, & Helmick, 1999). Four Viewing Motivations for Graphic Horror\
\ According to Dr. Deirdre Johnston (1995) study Adolescents’ Motivations for\
\ Viewing Graphic Horror of Human Communication Research there are four different\
\ main reasons for viewing graphic horror. From the study of a small sample of\
\ 220 American adolescents who like watching horror movies, Dr. Johnston reported\
\ that: The four viewing motivations are found to be related to viewers’ cognitive\
\ and affective responses to horror films, as well as viewers’ tendency to identify\
\ with either the killers or victims in these films.\" Dr. Johnson notes that:\
\ 1) gore watchers typically had low empathy, high sensation seeking, and (among\
\ males only) a strong identification with the killer, 2) thrill watchers typically\
\ had both high empathy and sensation seeking, identified themselves more with\
\ the victims, and liked the suspense of the film, 3) independent watchers typically\
\ had a high empathy for the victim along with a high positive effect for overcoming\
\ fear, and 4) problem watchers typically had high empathy for the victim but\
\ were"
sentences:
- What was the name of the series published by Oliver Ditson from 1918-25 that contained
ensemble music for motion picture plays?
- What shift in the cultural, political, and social contexts of the 1980s and 1990s
may have led to the deconstruction of the hard body characters portrayed by actors
such as Stallone and Schwarzenegger in more recent movies?
- What is the primary reason why viewers who perceive greater realism in horror
films are more negatively affected by their exposure to horror films than viewers
who perceive the film as unreal?
model-index:
- name: BGE base Movie Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.8205128205128205
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9743589743589743
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8205128205128205
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.32478632478632485
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8205128205128205
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9743589743589743
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9207838928594967
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8940170940170941
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8940170940170938
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.8461538461538461
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9230769230769231
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8461538461538461
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.30769230769230776
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8461538461538461
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9230769230769231
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9233350110390831
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8982905982905982
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8982905982905982
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.8461538461538461
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9230769230769231
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9487179487179487
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8461538461538461
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.30769230769230776
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18974358974358976
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8461538461538461
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9230769230769231
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9487179487179487
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9234104189545929
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.898962148962149
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.898962148962149
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.7692307692307693
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8974358974358975
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9487179487179487
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9487179487179487
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7692307692307693
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.29914529914529925
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18974358974358976
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09487179487179488
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7692307692307693
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8974358974358975
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9487179487179487
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9487179487179487
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8688480033444261
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8418803418803418
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8443986568986569
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.5641025641025641
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8717948717948718
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9230769230769231
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9487179487179487
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5641025641025641
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2905982905982907
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18461538461538465
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09487179487179488
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5641025641025641
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8717948717948718
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9230769230769231
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9487179487179487
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.768187565996018
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.708119658119658
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7088711597999523
name: Cosine Map@100
---
# BGE base Movie Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the [q_asimple_for_bge_241019](https://huggingface.co/datasets/YxBxRyXJx/QAsimple_for_BGE_241019) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [q_asimple_for_bge_241019](https://huggingface.co/datasets/YxBxRyXJx/QAsimple_for_BGE_241019)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("YxBxRyXJx/bge-base-movie-matryoshka")
# Run inference
sentences = [
'11\tdocumentary film so unpleasant when most had sat through horror pictures that were appreciably more violent and bloody. The answer that McCauley came up with was that the fictional nature of horror films affords viewers a sense of control by placing psychological distance between them and the violent acts they have witnessed. Most people who view horror movies understand that the filmed events are unreal, which furnishes them with psychological distance from the horror portrayed in the film. In fact, there is evidence that young viewers who perceive greater realism in horror films are more negatively affected by their exposure to horror films than viewers who perceive the film as unreal (Hoekstra, Harris, & Helmick, 1999). Four Viewing Motivations for Graphic Horror According to Dr. Deirdre Johnston (1995) study Adolescents’ Motivations for Viewing Graphic Horror of Human Communication Research there are four different main reasons for viewing graphic horror. From the study of a small sample of 220 American adolescents who like watching horror movies, Dr. Johnston reported that: The four viewing motivations are found to be related to viewers’ cognitive and affective responses to horror films, as well as viewers’ tendency to identify with either the killers or victims in these films." Dr. Johnson notes that: 1) gore watchers typically had low empathy, high sensation seeking, and (among males only) a strong identification with the killer, 2) thrill watchers typically had both high empathy and sensation seeking, identified themselves more with the victims, and liked the suspense of the film, 3) independent watchers typically had a high empathy for the victim along with a high positive effect for overcoming fear, and 4) problem watchers typically had high empathy for the victim but were',
'What is the primary reason why viewers who perceive greater realism in horror films are more negatively affected by their exposure to horror films than viewers who perceive the film as unreal?',
'What shift in the cultural, political, and social contexts of the 1980s and 1990s may have led to the deconstruction of the hard body characters portrayed by actors such as Stallone and Schwarzenegger in more recent movies?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.8205 | 0.8462 | 0.8462 | 0.7692 | 0.5641 |
| cosine_accuracy@3 | 0.9744 | 0.9231 | 0.9231 | 0.8974 | 0.8718 |
| cosine_accuracy@5 | 1.0 | 1.0 | 0.9487 | 0.9487 | 0.9231 |
| cosine_accuracy@10 | 1.0 | 1.0 | 1.0 | 0.9487 | 0.9487 |
| cosine_precision@1 | 0.8205 | 0.8462 | 0.8462 | 0.7692 | 0.5641 |
| cosine_precision@3 | 0.3248 | 0.3077 | 0.3077 | 0.2991 | 0.2906 |
| cosine_precision@5 | 0.2 | 0.2 | 0.1897 | 0.1897 | 0.1846 |
| cosine_precision@10 | 0.1 | 0.1 | 0.1 | 0.0949 | 0.0949 |
| cosine_recall@1 | 0.8205 | 0.8462 | 0.8462 | 0.7692 | 0.5641 |
| cosine_recall@3 | 0.9744 | 0.9231 | 0.9231 | 0.8974 | 0.8718 |
| cosine_recall@5 | 1.0 | 1.0 | 0.9487 | 0.9487 | 0.9231 |
| cosine_recall@10 | 1.0 | 1.0 | 1.0 | 0.9487 | 0.9487 |
| **cosine_ndcg@10** | **0.9208** | **0.9233** | **0.9234** | **0.8688** | **0.7682** |
| cosine_mrr@10 | 0.894 | 0.8983 | 0.899 | 0.8419 | 0.7081 |
| cosine_map@100 | 0.894 | 0.8983 | 0.899 | 0.8444 | 0.7089 |
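Because the model was trained with MatryoshkaLoss (see the training details below), its embeddings can be truncated to the smaller sizes evaluated above with only a modest quality drop. A minimal sketch, assuming a sentence-transformers release that supports `truncate_dim` (the example queries are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the model with embeddings truncated to 256 dimensions;
# this works because the model was trained with MatryoshkaLoss.
model = SentenceTransformer("YxBxRyXJx/bge-base-movie-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "Why do viewers enjoy horror films?",
    "What motivates adolescents to watch graphic horror?",
])
print(embeddings.shape)  # (2, 256)
print(cos_sim(embeddings[0], embeddings[1]))
```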
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### q_asimple_for_bge_241019
* Dataset: [q_asimple_for_bge_241019](https://huggingface.co/datasets/YxBxRyXJx/QAsimple_for_BGE_241019) at [66635cd](https://huggingface.co/datasets/YxBxRyXJx/QAsimple_for_BGE_241019/tree/66635cde6ada74a8cf5a84db10518119fc1c221d)
* Size: 183 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 183 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 191 tokens</li><li>mean: 356.1 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 36.04 tokens</li><li>max: 66 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>1 Introduction Why do we watch horror films? What makes horror films so exciting to watch? Why do our bodies sweat and muscles tense when we are scared? How do filmmakers, producers, sound engineers, and cinematographers specifically design a horror film? Can horror movies cause negative, lasting effects on the audience? These are some of the questions that are answered by exploring the aesthetics of horror films and the psychology behind horror movies. Chapter 1, The Allure of Horror Film, illustrates why we are drawn to scary films by studying different psychological theories and factors. Ideas include: catharsis, subconscious mind, curiosity, thrill, escape from reality, relevance, unrealism, and imagination. Also, this chapter demonstrates why people would rather watch fiction films than documentaries and the motivations for viewing graphic horror. Chapter 2, Mise-en-scène in Horror Movies, includes purposeful arrangement of scenery and stage properties of horror movie. Also...</code> | <code>What is the name of the emerging field of scientists and filmmakers that uses fMRI and EEG to read people's brain activity while watching movie scenes?</code> |
| <code>3 Chapter 1: The Allure of Horror Film Overview Although watching horror films can make us feel anxious and uneasy, we still continue to watch other horror films one after another. It is ironic how we hate the feeling of being scared, but we still enjoy the thrill. So why do we pay money to watch something to be scared? Eight Theories on why we watch Horror Films From research by philosophers, psychoanalysts, and psychologists there are theories that can explain why we are drawn to watching horror films. The first theory, psychoanalyst, Sigmund Freud portrays that horror comes from the “uncanny” emergence of images and thoughts of the primitive id. The purpose of horror films is to highlight unconscious fears, desire, urges, and primeval archetypes that are buried deep in our collective subconscious images of mothers and shadows play important roles because they are common to us all. For example, in Alfred Hitchcock's Psycho, a mother plays the role of evil in the main character...</code> | <code>What process, introduced by the Greek Philosopher Aristotle, involves the release of negative emotions through the observation of violent or scary events, resulting in a purging of aggressive emotions?</code> |
| <code>5 principle unknowable (Jancovich, 2002, p. 35). This meaning, the audience already knows that the plot and the characters are already disgusting, but the surprises in the horror narrative through the discovery of curiosity should give satisfaction. Marvin Zuckerman (1979) proposed that people who scored high in sensation seeking scale often reported a greater interest in exciting things like rollercoasters, bungee jumping and horror films. He argued more individuals who are attracted to horror movies desire the sensation of experience. However, researchers did not find the correlation to thrill-seeking activities and enjoyment of watching horror films always significant. The Gender Socialization theory (1986) by Zillman, Weaver, Mundorf and Aust exposed 36 male and 36 female undergraduates to a horror movie with the same age, opposite-gender companion of low or high initial appeal who expressed mastery, affective indifference, or distress. They reported that young men enjoyed the fi...</code> | <code>What is the proposed theory by Marvin Zuckerman (1979) regarding the relationship between sensation seeking and interest in exciting activities, including horror films?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
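For reference, a loss configured like the JSON above is typically built by wrapping the ranking loss in `MatryoshkaLoss`. This is a sketch of that construction, not the exact training script used for this model:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Apply the ranking loss at several embedding sizes simultaneously,
# matching the dims and weights listed above.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```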
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 1 | 0.8987 | 0.8983 | 0.8835 | 0.8419 | 0.7773 |
| 2.0 | 2 | 0.9218 | 0.9141 | 0.9075 | 0.8721 | 0.8124 |
| 1.0 | 1 | 0.9218 | 0.9141 | 0.9075 | 0.8721 | 0.8124 |
| 2.0 | 2 | 0.9356 | 0.9302 | 0.9118 | 0.8750 | 0.8057 |
| **3.0** | **4** | **0.9302** | **0.9233** | **0.9234** | **0.8783** | **0.7759** |
| 4.0 | 5 | 0.9208 | 0.9233 | 0.9234 | 0.8688 | 0.7682 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
seelingwong/testmodel | seelingwong | summarization | [
"summarization",
"en",
"dataset:OpenAssistant/oasst1",
"region:us"
] | 1,682,531,791,000 | 2023-05-14T16:58:02 | 0 | 0 | ---
datasets:
- OpenAssistant/oasst1
language:
- en
metrics:
- bleu
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-873132 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-873132",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,911,004,000 | 2024-05-28T15:43:37 | 7 | 0 | ---
datasets:
- fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-873132
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en). No specific use-case description was provided for this fine-tune.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-873132',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-en-he | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:29:48 | 5,661 | 5 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-en-he
* source languages: en
* target languages: he
* OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.he | 40.1 | 0.609 |
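## Example usage

The card does not include a usage snippet; a minimal sketch with the 🤗 Transformers `translation` pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# English -> Hebrew translation with this MarianMT checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-he")
print(translator("How are you today?")[0]["translation_text"])
```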
| [
"TRANSLATION"
] | Non_BioNLP |
Thermostatic/NeuralTranslate_v0.2 | Thermostatic | text-generation | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"Translation",
"Mistral",
"English",
"Spanish",
"conversational",
"en",
"es",
"dataset:Thermostatic/ShareGPT_NeuralTranslate_v0.1",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,714,112,416,000 | 2024-04-27T22:21:56 | 33 | 1 | ---
datasets:
- Thermostatic/ShareGPT_NeuralTranslate_v0.1
language:
- en
- es
license: mit
tags:
- Translation
- Mistral
- English
- Spanish
---

# Model Card for NeuralTranslate
<!-- Provide a quick summary of what the model is/does. -->
THIS MODEL USES THE CHATML TEMPLATE!! BE CAREFUL, OR YOU MIGHT SEE UNEXPECTED BEHAVIOUR.
This is the second alpha version of NeuralTranslate. This alpha does not appear to overfit, so no unexpected behaviour should occur and Mistral's native reasoning capabilities are preserved.
NeuralTranslate is an open-source family of models for bidirectional translation between English & Spanish, achieving high accuracy at high speed.
You can donate towards this project at my ko-fi! https://ko-fi.com/irvingernesto
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
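As the quick-start above is not filled in, here is a minimal hedged sketch. It assumes the repository ships a ChatML chat template with its tokenizer, as the warning above suggests; if it does not, format the ChatML turns (`<|im_start|>` / `<|im_end|>`) manually.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Thermostatic/NeuralTranslate_v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The card states the model expects the ChatML template.
messages = [{"role": "user", "content": "Translate to Spanish: The weather is nice today."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```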
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"TRANSLATION"
] | Non_BioNLP |
gokceuludogan/WarmMolGenTwo | gokceuludogan | text2text-generation | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"molecule-generation",
"cheminformatics",
"targeted-drug-design",
"biochemical-language-models",
"license:mit",
"autotrain_compatible",
"region:us"
] | 1,660,393,326,000 | 2022-08-14T13:39:28 | 16 | 0 | ---
license: mit
tags:
- molecule-generation
- cheminformatics
- targeted-drug-design
- biochemical-language-models
inference: false
---
## WarmMolGenTwo
A target-specific molecule generator that is warm-started (i.e. initialized) from pretrained biochemical language models and trained on interacting protein-compound pairs, treating targeted molecular generation as a translation task between protein and molecular languages. It was introduced in the paper "Exploiting pretrained biochemical language models for targeted drug design", accepted for publication in *Bioinformatics* (Oxford University Press), and first released in [this repository](https://github.com/boun-tabi/biochemical-lms-for-drug-design).
WarmMolGenTwo is a Transformer-based encoder-decoder model initialized with [Protein RoBERTa](https://github.com/PaccMann/paccmann_proteomics) and [ChemBERTaLM](https://huggingface.co/gokceuludogan/ChemBERTaLM) checkpoints and then trained on interacting protein-compound pairs filtered from [BindingDB](https://www.bindingdb.org/rwd/bind/index.jsp). The model takes a protein sequence as input and outputs a SMILES sequence.
## How to use
```python
from transformers import EncoderDecoderModel, RobertaTokenizer, pipeline
protein_tokenizer = RobertaTokenizer.from_pretrained("gokceuludogan/WarmMolGenTwo")
mol_tokenizer = RobertaTokenizer.from_pretrained("seyonec/PubChem10M_SMILES_BPE_450k")
model = EncoderDecoderModel.from_pretrained("gokceuludogan/WarmMolGenTwo")
inputs = protein_tokenizer("MENTENSVDSKSIKNLEPKIIHGSESMDSGISLDNSYKMDYPEMGLCIIINNKNFHKSTG", return_tensors="pt")
outputs = model.generate(**inputs, decoder_start_token_id=mol_tokenizer.bos_token_id,
eos_token_id=mol_tokenizer.eos_token_id, pad_token_id=mol_tokenizer.eos_token_id,
max_length=128, num_return_sequences=5, do_sample=True, top_p=0.95)
mol_tokenizer.batch_decode(outputs, skip_special_tokens=True)
# Sample output
['CCOC(=O)N[C@@H](Cc1ccc(O)cc1)C(=O)N[C@@H](Cc1ccc(O)cc1)C(=O)NCCC[C@@H](NC(=O)[C@H](Cc1ccccc1)NC(=O)Cc1ccc(O)cc1)C(C)C',
'CCC(C)[C@H](NC(=O)Cn1nc(-c2cccc3ccccc23)c2cnccc2c1=O)C(O)=O',
'CC(C)[C@H](NC(=O)[C@H](CC(O)=O)NC(=O)[C@@H]1C[C@H]1c1ccccc1)C(=O)N[C@@H](Cc1c[nH]c2ccccc12)C(=O)OC(C)(C)C',
'CC[C@@H](C)[C@H](NC(=O)\\C=C\\C(C)\\C=C/C=C(/C)\\C=C(/C)\\C)C(=O)N[C@@H](CC(O)=O)C(=O)N[C@@H](CC(O)=O)C(=O)N[C@@H](Cc1cc(O)c(O)c(O)c1)C(O)=O',
'CN1C[C@H](Cn2cnc3cc(O)ccc23)Oc2ccc(cc12)C(F)(F)F']
```
## Citation
```bibtex
@article{10.1093/bioinformatics/btac482,
author = {Uludoğan, Gökçe and Ozkirimli, Elif and Ulgen, Kutlu O. and Karalı, Nilgün Lütfiye and Özgür, Arzucan},
title = "{Exploiting Pretrained Biochemical Language Models for Targeted Drug Design}",
journal = {Bioinformatics},
year = {2022},
doi = {10.1093/bioinformatics/btac482},
url = {https://doi.org/10.1093/bioinformatics/btac482}
}
``` | [
"TRANSLATION"
] | Non_BioNLP |
opennyaiorg/InRhetoricalRoles | opennyaiorg | null | [
"en",
"dataset:opennyaiorg/InRhetoricalRoles",
"arxiv:2201.13125",
"license:apache-2.0",
"region:us"
] | 1,715,148,207,000 | 2024-05-08T06:25:26 | 0 | 0 | ---
datasets:
- opennyaiorg/InRhetoricalRoles
language:
- en
license: apache-2.0
---
# Github
The model can be accessed via our library: [https://github.com/OpenNyAI/Opennyai](https://github.com/OpenNyAI/Opennyai)
# Paper details
[Corpus for Automatic Structuring of Legal Documents](https://aclanthology.org/2022.lrec-1.470)
[Arxiv](https://arxiv.org/abs/2201.13125)
## Author - Publication
```
@InProceedings{kalamkar-EtAl:2022:LREC,
author = {Kalamkar, Prathamesh and Tiwari, Aman and Agarwal, Astha and Karn, Saurabh and Gupta, Smita and Raghavan, Vivek and Modi, Ashutosh},
title = {Corpus for Automatic Structuring of Legal Documents},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {4420--4429},
abstract = {In populous countries, pending legal cases have been growing exponentially. There is a need for developing techniques for processing and organizing legal documents. In this paper, we introduce a new corpus for structuring legal documents. In particular, we introduce a corpus of legal judgment documents in English that are segmented into topical and coherent parts. Each of these parts is annotated with a label coming from a list of pre-defined Rhetorical Roles. We develop baseline models for automatically predicting rhetorical roles in a legal document based on the annotated corpus. Further, we show the application of rhetorical roles to improve performance on the tasks of summarization and legal judgment prediction. We release the corpus and baseline model code along with the paper.},
url = {https://aclanthology.org/2022.lrec-1.470}
}
``` | [
"SUMMARIZATION"
] | Non_BioNLP |
RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel | RomainDarous | sentence-similarity | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4460010",
"loss:CoSENTLoss",
"dataset:RomainDarous/corrupted_os_by_language",
"arxiv:1908.10084",
"base_model:RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel",
"base_model:finetune:RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,740,876,592,000 | 2025-03-02T00:50:30 | 13 | 0 | ---
base_model: RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel
datasets:
- RomainDarous/corrupted_os_by_language
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4460010
- loss:CoSENTLoss
widget:
- source_sentence: Malformed target specific variable definition
sentences:
- Hedefe özgü değişken tanımı bozuk
- Kan alle data in die gids lees
- "слава Украине! героям слава!\uFEFF"
- source_sentence: Can't write an inode bitmap
sentences:
- Skontrolujte stav aktualizácií alebo to skúste znova neskôr.
- Malsukcesis skribi i nodan bitmapon
- Zastępuje wersję GL obsługiwaną przez sterownik
- source_sentence: Optimize soft proofing color transformations
sentences:
- 'arkadaslar biz artik her an kirmizi kart yiyecek,bencil,pas yapamayan,isabetsiz
orta yapani istemiyoruz. sozde efsaneniz bu sezon Besiktasa en cok zarar verenlerden
biriydi. kendini dusunmeden once Besiktasi dusunecek adam lazim bize. o yuzden
#GoHomeQuaresma'
- Yav bizim dedikodusunu yaptığımız insanın bile bi vizyonu var. Senin hakkında
neden oturup konuşalım?
- Ik ben een transgender.
- source_sentence: 'Pass 1: Checking @is, @bs, and sizes'
sentences:
- Bu adam cidden kurabiye gibi ben bunu çayın yanında yerim
- sagnat. errada. invisible. justificació. idioma
- Wilt u echt de primaire sleutel verplaatsen? (j N)
- source_sentence: Search for matching log entries
sentences:
- quem te lembra? caralho tô assustada aqui kkkkk
- sendotasunik gabeko\ egoera bistaratuko den ala ez adierazten du
- En aquest cas, hem d'incloure les imatges del contenidor )sr iov per a càrregues
de treball de telco (per exemple, com a referència, es podrien obtenir des de
valors de helm chart)
model-index:
- name: SentenceTransformer based on RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts eval
type: sts-eval
metrics:
- type: pearson_cosine
value: 0.980134449651697
name: Pearson Cosine
- type: spearman_cosine
value: 0.8655818040069744
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.9801775912473877
name: Pearson Cosine
- type: spearman_cosine
value: 0.8656457350186386
name: Spearman Cosine
---
# SentenceTransformer based on RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel](https://huggingface.co/RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel) on the [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel](https://huggingface.co/RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel) <!-- at revision 3b9be23aa196f826e8906ef4b45029daa56b43fe -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel")
# Run inference
sentences = [
'Search for matching log entries',
'quem te lembra? caralho tô assustada aqui kkkkk',
'sendotasunik gabeko\\ egoera bistaratuko den ala ez adierazten du',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-eval` and `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts-eval | sts-test |
|:--------------------|:-----------|:-----------|
| pearson_cosine | 0.9801 | 0.9802 |
| **spearman_cosine** | **0.8656** | **0.8656** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### corrupted_open_os_by_language
* Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c)
* Size: 4,460,010 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.33 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.47 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:---------------|
| <code>Check spelling. Print the document. Show completion window. General. Show help</code> | <code>Kontrolli õigekirja. присоединяюсь. </code> | <code>0</code> |
| <code>EXIF not supported for this file format.</code> | <code>Šiam failo formatui EXIF nepalaikomas.</code> | <code>1</code> |
| <code>This package includes the documentation for texlive everyhook</code> | <code>Paket ini menyertakan dokumentasi untuk texlive everyhook</code> | <code>1</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Dataset
#### corrupted_open_os_by_language
* Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c)
* Size: 4,460,010 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.71 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 26.95 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Could not identify the current seat.</code> | <code> 天天花着男人的钱还这这创造新词汇男权你可真牛批,你也就这一出了一问男权,就说是我是吧,到现在我也没听到你给我们讲的男权,你也就是在网上喷喷,现实走道都不敢探头自卑,你现实要把你女权的劲拿出来总低啥头,您老应该去国家教育局把男权加上是吧,你们女权天天说自己生活不好没地位,给你们地位了你们能干啥?用你们的女权打到全世界男性是吧,能相出男权这一词您老也是人才呀,是不是庆幸自己是个女的,活在自己想想的世界里不觉得孤单吗,假象有男权是吧,自己假象和男权还说自己不是田园女权,田园女权能连自己都骂说自己妈是驴爸是大鼎的也是奇葩呀,那我们国家大肆宣扬过你们这么田园女权吗,国家要的是女性人群自主自理,你们可好看看你们女权干的啥事,给你们女权地位高了,看看你们女权干的事n绿地集团高管怎么都不说呀,人家可是有钱有地位,也不是我们说三从四德洗衣做饭你们女权会吗?,那我问问你们女权干过啥惊天大事,还甩锅给孔子,还封建社会,那我问问你们女权在福利面前为啥说自己是女性呀不是社会主义社会吗不应该男女平等吗,天天自己也不知道是不是抱个手机天天欧巴欧巴,你家那位要是不陪你看一会就会问你是不是不爱我了是吧大姐,您老也就赚这白菜钱操心国家事,中国五千年的历史被您老一句否决,还嘲讽人家日本女性,好意思说自己不是女权,三从四德流传这么久到您这变成日本文化了,我就想问问男权您老是怎么想的,那你问孔子老人家呗为什么女人要三从四德,我说的是女权你干嘛自己对号入座,连中华人民传承的东西都不认跟我这谈男权,还男权您老给我举个例子呗,让我们男权听听都是h啥,这些不都是你们女权的标准吗?,还男权,您老醒醒吧这里是现实,不是你的公主世界,总觉得自己多么多么重要,地球没你是不能转了还是人类要灭亡呀,我真的想问一句你给我找一条男权的新闻,咋了我们男人不能提女权呗你老授权了呗,那我们谈论田园女权你老对号入座干嘛,天天过节要礼物,还嫌弃自己男朋友没有钱,我寻思你找个有钱人包养你呗,对了有钱人怎么可能看上你这种女权的呢,还要孩子跟女方姓我也没看见你没跟你妈姓呀,年年过节男人给你们送礼物你们女人给男人送过礼物吗?,一问我不是陪着他吗我对他说我爱你了这不是最好的礼物吗?,男人只要不送礼物就是不爱你们了呗,人家国际女权讲的男人能做的我们女人也能做,田园女权男人能做的我们女人为啥要做,还男权我笑了,以前结婚几头牛换个衣服原装的,现在几十万彩...</code> | <code>0</code> |
| <code>Undoing Date and Time Adjustment</code> | <code>正在取消日期和时间调整</code> | <code>1</code> |
| <code>Dependency package for gsl_2_6 gnu hpc</code> | <code>Pacotes de desenvolvimento do KDE</code> | <code>1</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
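For reference, a CoSENTLoss configured like the JSON above is typically constructed as follows; this is a sketch, not the exact training script used for this model. CoSENTLoss expects (sentence1, sentence2, score) triples like the dataset samples shown above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer(
    "RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel"
)

# scale=20.0 matches the parameters above; the similarity function
# is the pairwise cosine similarity used by default.
loss = CoSENTLoss(model, scale=20.0)
```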
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | corrupted open os by language loss | sts-eval_spearman_cosine | sts-test_spearman_cosine |
|:-----:|:-----:|:-------------:|:----------------------------------:|:------------------------:|:------------------------:|
| 1.0 | 55751 | 0.1212 | 0.2656 | 0.8656 | - |
| -1 | -1 | - | - | - | 0.8656 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.1.2+cu121
- Accelerate: 1.3.0
- Datasets: 2.16.1
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY",
"TRANSLATION"
] | Non_BioNLP |
Philipp-Sc/mistral-7b-reverse-instruct | Philipp-Sc | text-generation | [
"safetensors",
"gguf",
"text-generation",
"en",
"dataset:pankajmathur/WizardLM_Orca",
"dataset:teknium/trismegistus-project",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,702,446,120,000 | 2023-12-20T08:01:42 | 105 | 5 | ---
datasets:
- pankajmathur/WizardLM_Orca
- teknium/trismegistus-project
- unalignment/toxic-dpo-v0.1
- Intel/orca_dpo_pairs
language:
- en
license: apache-2.0
pipeline_tag: text-generation
---
## Mistral 7b Reverse Instruct
This model is SFT (LoRA) fine-tuned to reverse-engineer the original prompt of a given LLM output/response.
Use case: the generation of synthetic instruct datasets for developing chatbots and domain-specific fine-tuning (e.g. "Summarization" & "Roleplay"). It is also useful for labelling unlabeled datasets.
- base_model: mistralai/Mistral-7B-v0.1 (=checkpoint-v1)
- base_model: mistralai/Mistral-7B-v0.2 (>=checkpoint-v2)
For convenience, the latest model export is provided under [/latest_model_export](https://huggingface.co/Philipp-Sc/mistral-7b-reverse-instruct/tree/main/latest_model_export), along with GGUF-quantized versions under [/latest_ggml_models](https://huggingface.co/Philipp-Sc/mistral-7b-reverse-instruct/tree/main/latest_ggml_models)
## Response Format
"[INST]\n### System:\n{system}\n### Instruction:\n{instruction}\n[/INST]\n"
- Grammar File: [inst_format.gbnf](https://huggingface.co/Philipp-Sc/mistral-7b-reverse-instruct/blob/main/inst_format.gbnf)
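A hedged llama.cpp invocation that constrains decoding with this grammar file (file names are illustrative; the binary is `./main` in older builds and `llama-cli` in newer ones):

```bash
# Force generations to match the [INST] ... [/INST] response format.
./main -m mistral-7b-reverse-instruct.Q4_K_M.gguf \
  --grammar-file inst_format.gbnf \
  -p "$(cat prompt.txt)" \
  -n 256
```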
## Prompt Template
"\n### System:\nYou craft instructions for generating the given output through reverse engineering.\n### Instruction:\nDecipher the steps used to produce the given output and articulate a refined set of instructions (System & Instruction).\n### OUTPUT:\n {output}"
(use the template without the surrounding quotation marks)
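A minimal Python sketch that assembles this template and generates a reverse-engineered instruction. The output text below is a placeholder, and the checkpoint path is an assumption: you may need to point it at the weights under `/latest_model_export`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Philipp-Sc/mistral-7b-reverse-instruct"  # weights live under /latest_model_export
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

llm_output = "The SE3 is SE Electronics' flagship pencil condenser microphone ..."  # placeholder
prompt = (
    "\n### System:\nYou craft instructions for generating the given output through reverse engineering."
    "\n### Instruction:\nDecipher the steps used to produce the given output and articulate a refined "
    "set of instructions (System & Instruction).\n### OUTPUT:\n " + llm_output
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```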
## Example Completions
Prompt 1:
```bash
### System:
You craft instructions for generating the given output through reverse engineering.
### Instruction:
Decipher the steps used to produce the given output and articulate a refined set of instructions (System & Instruction).
### OUTPUT:
SE3 Condenser Microphone from SE Electronics Sonic Distribution is now handling the SE Electronics line of imported studio condensers. The SE3 caught my eye at the Summer NAMM Show in Nashville and is their flagship pencil microphone with a fixed cardioid pattern and 48V phantom powering. This mic uses Class A FET amplifier electronics and has both low cut filter and -10dB pad switches. I had the opportunity to try this mic out on several sources while recording a band and was impressed by its natural sound and all around usefulness. I used it for acoustic guitar overdubs where the low cut filter helped to tame a jumbo bodied guitar's boomy sound. The gentle presence lift added a sparkle without using EQ. I also tried it on drums and cymbals and it (using the pad) didn't fold up (overload) at all. I even tried it on vocals with good results although it does 'pop' easily and required a couple of pop screens. Housed in an elegantly finished new body design, it comes with a sturdy shock mount and packaged in a deluxe wooden travel case. Significant specifications are: frequency response rated at 20Hz-20khz; sensitivity is 10mV/Pa +/- 2dB; noise level is 17dB (A weighted); and Max SPL for 0.5% THD @ 1kHz is 135dB. I certainly found a 'Swiss army knife' of a condenser with the SE3 and I completely recommend it for any studio task especially acoustic instruments such as guitar, violin, cello or string bass.
### RESPONSE:
```
(example from [vikp/reverse_instruct](https://huggingface.co/vikp/reverse_instruct))
Response 1:
```bash
[INST]
### System:
You are a professional musician and sound engineer who specializes in mixing live music. You have recently reviewed the SE Electronics SE3 Condenser Microphone and want to share your experience with your followers on social media.
### Instruction:
Write a short review of the SE Electronics SE3 Condenser Microphone for your social media followers.
[/INST]
```
Prompt 2:
```bash
### System:
You craft instructions for generating the given output through reverse engineering.
### Instruction:
Decipher the steps used to produce the given output and articulate a refined set of instructions (System & Instruction).
### OUTPUT:
**Lesson: Introduction to Sales** **Objective:** To introduce students to the concept of sales and its importance in business. **Introduction:** Sales is the process of persuading someone to buy a product or service. It is an essential part of any business, as it helps generate revenue and grow the company. In this lesson, we will learn about the basics of sales and how it can benefit businesses. **Section 1: What is Sales?** **Topic 1: Definition of Sales** Sales is the process of persuading someone to buy a product or service. It involves identifying the needs of potential customers, presenting solutions that meet those needs, and closing the deal by making the sale. **Topic 2: Importance of Sales** Sales is crucial to the success of a business because it generates revenue and helps the company grow. Without sales, businesses would not be able to make profits, expand their operations, or reach new customers. **Topic 3: Types of Sales** There are several types of sales, including direct sales, indirect sales, and online sales. Direct sales involve selling products or services directly to customers, while indirect sales involve selling through intermediaries such as distributors or retailers. Online sales involve selling products or services over the internet. **Section 2: Skills Required for Sales** **Topic 1: Communication Skills** Effective communication is essential for sales success. Salespeople must be able to communicate clearly and persuasively to potential customers, negotiate deals, and close sales. **Topic 2: Interpersonal Skills** Salespeople must also have strong interpersonal skills, including active listening, empathy, and building relationships with customers. These skills help salespeople understand customer needs and build trust with them. **Topic 3: Time Management Skills** Sales can be a time-consuming process, so salespeople must be skilled at managing their time effectively. They must prioritize tasks, set goals, and stay organized to be successful. **Section 3: Benefits of Sales** **Topic 1: Financial Benefits** Sales generate revenue for businesses, which can be used to fund growth and expansion. Sales also help businesses increase their profits by generating more income than expenses. **Topic 2: Customer Benefits** Sales can also benefit customers by providing them with products or services that meet their needs and improve their lives. Sales help businesses introduce new products and services to the market, which can improve customers' lives and create new opportunities for them. **Conclusion:** In conclusion, sales is an essential part of any business, as it helps generate revenue and grow the company. By understanding the basics of sales, developing essential skills, and recognizing the benefits of sales, individuals can become successful salespeople and contribute to the growth of their businesses. **Glossary:** - Sales: The process of persuading someone to buy a product or service - Interpersonal Skills: The ability to communicate and build relationships with others - Time Management Skills: The ability to prioritize tasks, set goals, and stay organized - Revenue: The income generated by a business from its sales - Profit: The difference between a business's income and expenses - Intermediaries: Companies that sell products or services on behalf of another company``
```
(source [nampdn-ai/tiny-textbooks](https://huggingface.co/nampdn-ai/tiny-textbooks))
Response 2:
```bash
[INST]
### System:
You are an AI assistant. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional information to answer the question.
### Instruction:
Create a detailed lesson plan with a specific focus on teaching the concept of sales, including the various aspects of the sales process such as lead generation, prospecting, qualification, presentation, objection handling, closing techniques, and post-sales activities. The lesson should also cover the key skills required for successful sales professionals, such as communication, interpersonal skills, time management, and product knowledge. Additionally, the plan should include examples of real-life sales scenarios and role-plays to help students develop practical skills and gain a deeper understanding of the sales industry. Finally, the lesson plan should be designed to meet the needs of different learning styles, incorporating visual aids, interactive activities, and assessments to ensure maximum engagement and effectiveness for all learners.
[/INST]
```
## Training Dataset
About 21k items from the following datasets were used (coding-like tasks were mostly removed).
- v1 & v2: [reverse-instruct_v1.json](https://huggingface.co/Philipp-Sc/mistral-7b-reverse-instruct/blob/main/reverse-instruct_v1.json)
- v3: [reverse-instruct_v2.json](https://huggingface.co/Philipp-Sc/mistral-7b-reverse-instruct/blob/main/reverse-instruct_v2.json)
The reverse-instruct dataset was compiled from entries in the following datasets:
- [alpaca_gpt4_data](https://raw.githubusercontent.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM/main/data/alpaca_gpt4_data.json)
- [roleplay-instruct-v2.1](https://raw.githubusercontent.com/teknium1/GPTeacher/main/Roleplay%20Supplemental/roleplay-instruct-v2.1.json)
- [wizardlm_orca](https://huggingface.co/datasets/pankajmathur/WizardLM_Orca/resolve/main/wizardlm_orca.json)
- [toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1/resolve/main/toxic-dpo.parquet)
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs/resolve/main/orca_rlhf.jsonl)
- [occultexpert](https://huggingface.co/datasets/teknium/trismegistus-project/resolve/main/occultexpert.json)
## Training Procedure
```bash
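# QLoRA SFT run via LLaMA-Factory: resumes from an existing LoRA adapter and
# fine-tunes 4-bit Mistral-7B-Instruct-v0.2 on 2 GPUs, with LoRA applied to
# q_proj/v_proj, fp16, a cosine LR schedule, and NEFTune noise (alpha 5).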
!cd LLaMA-Factory && WANDB_DISABLED=True PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256 accelerate launch \
--multi_gpu \
--mixed_precision fp16 \
--num_processes 2 \
--num_machines 1 \
--rdzv_backend static \
--same_network \
--gpu_ids all \
--machine_rank 0 \
--main_training_function main \
-- src/train_bash.py \
--stage sft \
--model_name_or_path mistralai/Mistral-7B-Instruct-v0.2 \
--adapter_name_or_path path_to_checkpoint \
--flash_attn \
--neftune_noise_alpha 5 \
--do_train \
--dataset default \
--template vanilla \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir path_to_sft_checkpoint \
--overwrite_cache \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 1 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--save_steps 10 \
--save_total_limit 3 \
--learning_rate 5e-5 \
--num_train_epochs 9.0 \
--plot_loss \
--fp16 \
--overwrite_output_dir \
--cutoff_len 4096 \
--quantization_bit 4
```
## Training Time
- v1: ~12h on Kaggle's P100 GPU
- v2: >30h on Kaggle's T4 x2
- v3: >40h on Kaggle's T4 x2
## Loss
- v3:
```bash
{'loss': 0.4424, 'learning_rate': 4.8398000023144565e-05, 'epoch': 1.03}
```
### Framework versions
- LLaMA-Factory | [
"SUMMARIZATION"
] | Non_BioNLP |
TeohYx/Translator | TeohYx | translation | [
"translation",
"arxiv:1910.09700",
"region:us"
] | 1,685,973,682,000 | 2023-06-06T03:26:07 | 0 | 0 | ---
pipeline_tag: translation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"TRANSLATION"
] | Non_BioNLP |
llmware/slim-topics-npu-ov | llmware | null | [
"openvino",
"llama",
"license:apache-2.0",
"region:us"
] | 1,741,947,356,000 | 2025-03-14T10:18:01 | 5 | 0 | ---
base_model: llmware/slim-topics
license: apache-2.0
tags:
- green
- p1
- llmware-fx
- ov
- emerald
inference: false
base_model_relation: quantized
---
# slim-topics-npu-ov
**slim-topics-npu-ov** is a specialized function-calling model that generates a topic description for a text passage, typically no more than 2-3 words.
This is an OpenVINO int4 quantized version of slim-topics, providing a very fast, very small inference implementation optimized for AI PCs with Intel NPUs; a usage sketch follows the model description below.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Model Parent:** llmware/slim-topics
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Topic categorization and summarization
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4
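A minimal usage sketch through the llmware catalog is shown below. The catalog name and the exact structure of the returned dictionary are assumptions here; verify them against the llmware model catalog entry for this OpenVINO/NPU build.
```python
from llmware.models import ModelCatalog

# Catalog name assumed to match this repo id - verify in the llmware catalog.
model = ModelCatalog().load_model("slim-topics-npu-ov")

text = ("The central bank raised interest rates by 50 basis points, "
        "citing persistent inflation pressure.")

# SLIM models are function-calling models: the output is a small structured
# dict rather than free text, e.g. {"topics": ["interest rates"]}.
response = model.function_call(text)
print(response)
```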
## Model Card Contact
[llmware on github](https://www.github.com/llmware-ai/llmware)
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
| [
"SUMMARIZATION"
] | Non_BioNLP |
marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold1_20241115_191836 | marumarukun | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,731,680,127,000 | 2024-11-15T14:19:32 | 4 | 0 | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a fine-tuned [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
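Functionally, the three modules above amount to: encode with BERT, take the `[CLS]` token embedding (`pooling_mode_cls_token: True`), and L2-normalize it. The following is a rough illustration of that pipeline in plain `transformers`; it is a sketch only, and the recommended loading path is the `SentenceTransformer` usage shown below.
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold1_20241115_191836"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

batch = tokenizer(["The weather is lovely today."], padding=True,
                  truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state   # (batch, seq_len, 1024)
cls = hidden[:, 0]                              # CLS-token pooling
embedding = torch.nn.functional.normalize(cls, dim=-1)  # Normalize() module
```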
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold1_20241115_191836")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.12.7
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.0+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gokuls/distilbert_add_GLUE_Experiment_logit_kd_stsb | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,674,959,662,000 | 2023-01-29T02:38:46 | 136 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: distilbert_add_GLUE_Experiment_logit_kd_stsb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- type: spearmanr
value: 0.06502131243296556
name: Spearmanr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1735
- Pearson: 0.0712
- Spearmanr: 0.0650
- Combined Score: 0.0681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
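As an illustration only, these settings map onto `transformers.TrainingArguments` along the following lines. The actual training script is not part of this card, and the logit knowledge-distillation loss implied by the model name is not captured by these arguments.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
args = TrainingArguments(
    output_dir="distilbert_add_GLUE_Experiment_logit_kd_stsb",
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed precision
)
```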
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.8584 | 1.0 | 23 | 1.2094 | 0.0666 | 0.0636 | 0.0651 |
| 1.0947 | 2.0 | 46 | 1.2681 | 0.0776 | 0.0622 | 0.0699 |
| 1.0743 | 3.0 | 69 | 1.1735 | 0.0712 | 0.0650 | 0.0681 |
| 1.0098 | 4.0 | 92 | 1.3340 | 0.0809 | 0.0800 | 0.0804 |
| 0.9112 | 5.0 | 115 | 1.2858 | 0.1005 | 0.0958 | 0.0981 |
| 0.8385 | 6.0 | 138 | 1.3734 | 0.1165 | 0.1114 | 0.1140 |
| 0.7601 | 7.0 | 161 | 1.3383 | 0.1358 | 0.1338 | 0.1348 |
| 0.693 | 8.0 | 184 | 1.4171 | 0.1371 | 0.1358 | 0.1365 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Snowflake/snowflake-arctic-embed-l | Snowflake | sentence-similarity | [
"sentence-transformers",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"arctic",
"snowflake-arctic-embed",
"transformers.js",
"arxiv:2407.18887",
"arxiv:2405.05374",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,712,930,074,000 | 2024-12-19T13:32:48 | 26,313 | 91 | ---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
new_version: Snowflake/snowflake-arctic-embed-l-v2.0
model-index:
- name: snowflake-arctic-embed-l
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.80597014925374
- type: ap
value: 37.911466766189875
- type: f1
value: 68.88606927542106
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 78.402275
- type: ap
value: 73.03294793248114
- type: f1
value: 78.3147786132161
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 36.717999999999996
- type: f1
value: 35.918044248787766
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 34.495
- type: map_at_10
value: 50.236000000000004
- type: map_at_100
value: 50.944
- type: map_at_1000
value: 50.94499999999999
- type: map_at_3
value: 45.341
- type: map_at_5
value: 48.286
- type: mrr_at_1
value: 35.135
- type: mrr_at_10
value: 50.471
- type: mrr_at_100
value: 51.185
- type: mrr_at_1000
value: 51.187000000000005
- type: mrr_at_3
value: 45.602
- type: mrr_at_5
value: 48.468
- type: ndcg_at_1
value: 34.495
- type: ndcg_at_10
value: 59.086000000000006
- type: ndcg_at_100
value: 61.937
- type: ndcg_at_1000
value: 61.966
- type: ndcg_at_3
value: 49.062
- type: ndcg_at_5
value: 54.367
- type: precision_at_1
value: 34.495
- type: precision_at_10
value: 8.734
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.962
- type: precision_at_5
value: 14.552000000000001
- type: recall_at_1
value: 34.495
- type: recall_at_10
value: 87.33999999999999
- type: recall_at_100
value: 99.431
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 59.885999999999996
- type: recall_at_5
value: 72.76
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.46440874635501
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.28720154213723
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.34614226394902
- type: mrr
value: 75.05628105351096
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.41072716728198
- type: cos_sim_spearman
value: 86.34534093114372
- type: euclidean_pearson
value: 85.34009667750838
- type: euclidean_spearman
value: 86.34534093114372
- type: manhattan_pearson
value: 85.2158833586889
- type: manhattan_spearman
value: 86.60920236509224
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.06493506493507
- type: f1
value: 79.28108600339833
- task:
type: Clustering
dataset:
name: MTEB BigPatentClustering
type: jinaai/big-patent-clustering
config: default
split: test
revision: 62d5330920bca426ce9d3c76ea914f15fc83e891
metrics:
- type: v_measure
value: 20.545049432417287
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.54369718479804
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.64941588219162
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 37.264
- type: map_at_10
value: 49.43
- type: map_at_100
value: 50.967
- type: map_at_1000
value: 51.08200000000001
- type: map_at_3
value: 45.742
- type: map_at_5
value: 47.764
- type: mrr_at_1
value: 44.921
- type: mrr_at_10
value: 54.879999999999995
- type: mrr_at_100
value: 55.525000000000006
- type: mrr_at_1000
value: 55.565
- type: mrr_at_3
value: 52.480000000000004
- type: mrr_at_5
value: 53.86
- type: ndcg_at_1
value: 44.921
- type: ndcg_at_10
value: 55.664
- type: ndcg_at_100
value: 60.488
- type: ndcg_at_1000
value: 62.138000000000005
- type: ndcg_at_3
value: 50.797000000000004
- type: ndcg_at_5
value: 52.94799999999999
- type: precision_at_1
value: 44.921
- type: precision_at_10
value: 10.587
- type: precision_at_100
value: 1.629
- type: precision_at_1000
value: 0.203
- type: precision_at_3
value: 24.034
- type: precision_at_5
value: 17.224999999999998
- type: recall_at_1
value: 37.264
- type: recall_at_10
value: 67.15
- type: recall_at_100
value: 86.811
- type: recall_at_1000
value: 97.172
- type: recall_at_3
value: 53.15800000000001
- type: recall_at_5
value: 59.116
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 36.237
- type: map_at_10
value: 47.941
- type: map_at_100
value: 49.131
- type: map_at_1000
value: 49.26
- type: map_at_3
value: 44.561
- type: map_at_5
value: 46.28
- type: mrr_at_1
value: 45.605000000000004
- type: mrr_at_10
value: 54.039
- type: mrr_at_100
value: 54.653
- type: mrr_at_1000
value: 54.688
- type: mrr_at_3
value: 52.006
- type: mrr_at_5
value: 53.096
- type: ndcg_at_1
value: 45.605000000000004
- type: ndcg_at_10
value: 53.916
- type: ndcg_at_100
value: 57.745999999999995
- type: ndcg_at_1000
value: 59.492999999999995
- type: ndcg_at_3
value: 49.774
- type: ndcg_at_5
value: 51.434999999999995
- type: precision_at_1
value: 45.605000000000004
- type: precision_at_10
value: 10.229000000000001
- type: precision_at_100
value: 1.55
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 24.098
- type: precision_at_5
value: 16.726
- type: recall_at_1
value: 36.237
- type: recall_at_10
value: 64.03
- type: recall_at_100
value: 80.423
- type: recall_at_1000
value: 91.03
- type: recall_at_3
value: 51.20400000000001
- type: recall_at_5
value: 56.298
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 47.278
- type: map_at_10
value: 59.757000000000005
- type: map_at_100
value: 60.67
- type: map_at_1000
value: 60.714
- type: map_at_3
value: 56.714
- type: map_at_5
value: 58.453
- type: mrr_at_1
value: 53.73
- type: mrr_at_10
value: 62.970000000000006
- type: mrr_at_100
value: 63.507999999999996
- type: mrr_at_1000
value: 63.53
- type: mrr_at_3
value: 60.909
- type: mrr_at_5
value: 62.172000000000004
- type: ndcg_at_1
value: 53.73
- type: ndcg_at_10
value: 64.97
- type: ndcg_at_100
value: 68.394
- type: ndcg_at_1000
value: 69.255
- type: ndcg_at_3
value: 60.228
- type: ndcg_at_5
value: 62.617999999999995
- type: precision_at_1
value: 53.73
- type: precision_at_10
value: 10.056
- type: precision_at_100
value: 1.265
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 26.332
- type: precision_at_5
value: 17.743000000000002
- type: recall_at_1
value: 47.278
- type: recall_at_10
value: 76.86500000000001
- type: recall_at_100
value: 91.582
- type: recall_at_1000
value: 97.583
- type: recall_at_3
value: 64.443
- type: recall_at_5
value: 70.283
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 29.702
- type: map_at_10
value: 39.463
- type: map_at_100
value: 40.508
- type: map_at_1000
value: 40.579
- type: map_at_3
value: 36.748999999999995
- type: map_at_5
value: 38.296
- type: mrr_at_1
value: 31.977
- type: mrr_at_10
value: 41.739
- type: mrr_at_100
value: 42.586
- type: mrr_at_1000
value: 42.636
- type: mrr_at_3
value: 39.096
- type: mrr_at_5
value: 40.695
- type: ndcg_at_1
value: 31.977
- type: ndcg_at_10
value: 44.855000000000004
- type: ndcg_at_100
value: 49.712
- type: ndcg_at_1000
value: 51.443000000000005
- type: ndcg_at_3
value: 39.585
- type: ndcg_at_5
value: 42.244
- type: precision_at_1
value: 31.977
- type: precision_at_10
value: 6.768000000000001
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 16.761
- type: precision_at_5
value: 11.593
- type: recall_at_1
value: 29.702
- type: recall_at_10
value: 59.082
- type: recall_at_100
value: 80.92
- type: recall_at_1000
value: 93.728
- type: recall_at_3
value: 45.212
- type: recall_at_5
value: 51.449
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 21.336
- type: map_at_10
value: 30.137999999999998
- type: map_at_100
value: 31.385
- type: map_at_1000
value: 31.495
- type: map_at_3
value: 27.481
- type: map_at_5
value: 28.772
- type: mrr_at_1
value: 25.871
- type: mrr_at_10
value: 34.686
- type: mrr_at_100
value: 35.649
- type: mrr_at_1000
value: 35.705
- type: mrr_at_3
value: 32.09
- type: mrr_at_5
value: 33.52
- type: ndcg_at_1
value: 25.871
- type: ndcg_at_10
value: 35.617
- type: ndcg_at_100
value: 41.272999999999996
- type: ndcg_at_1000
value: 43.725
- type: ndcg_at_3
value: 30.653999999999996
- type: ndcg_at_5
value: 32.714
- type: precision_at_1
value: 25.871
- type: precision_at_10
value: 6.4799999999999995
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 14.469000000000001
- type: precision_at_5
value: 10.274
- type: recall_at_1
value: 21.336
- type: recall_at_10
value: 47.746
- type: recall_at_100
value: 71.773
- type: recall_at_1000
value: 89.05199999999999
- type: recall_at_3
value: 34.172999999999995
- type: recall_at_5
value: 39.397999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 34.424
- type: map_at_10
value: 45.647999999999996
- type: map_at_100
value: 46.907
- type: map_at_1000
value: 47.010999999999996
- type: map_at_3
value: 42.427
- type: map_at_5
value: 44.285000000000004
- type: mrr_at_1
value: 41.867
- type: mrr_at_10
value: 51.17699999999999
- type: mrr_at_100
value: 51.937
- type: mrr_at_1000
value: 51.975
- type: mrr_at_3
value: 48.941
- type: mrr_at_5
value: 50.322
- type: ndcg_at_1
value: 41.867
- type: ndcg_at_10
value: 51.534
- type: ndcg_at_100
value: 56.696999999999996
- type: ndcg_at_1000
value: 58.475
- type: ndcg_at_3
value: 46.835
- type: ndcg_at_5
value: 49.161
- type: precision_at_1
value: 41.867
- type: precision_at_10
value: 9.134
- type: precision_at_100
value: 1.362
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_3
value: 22.073
- type: precision_at_5
value: 15.495999999999999
- type: recall_at_1
value: 34.424
- type: recall_at_10
value: 63.237
- type: recall_at_100
value: 84.774
- type: recall_at_1000
value: 95.987
- type: recall_at_3
value: 49.888
- type: recall_at_5
value: 55.940999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 30.72
- type: map_at_10
value: 41.327999999999996
- type: map_at_100
value: 42.651
- type: map_at_1000
value: 42.739
- type: map_at_3
value: 38.223
- type: map_at_5
value: 40.053
- type: mrr_at_1
value: 37.9
- type: mrr_at_10
value: 46.857
- type: mrr_at_100
value: 47.673
- type: mrr_at_1000
value: 47.711999999999996
- type: mrr_at_3
value: 44.292
- type: mrr_at_5
value: 45.845
- type: ndcg_at_1
value: 37.9
- type: ndcg_at_10
value: 47.105999999999995
- type: ndcg_at_100
value: 52.56999999999999
- type: ndcg_at_1000
value: 54.37800000000001
- type: ndcg_at_3
value: 42.282
- type: ndcg_at_5
value: 44.646
- type: precision_at_1
value: 37.9
- type: precision_at_10
value: 8.368
- type: precision_at_100
value: 1.283
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 20.015
- type: precision_at_5
value: 14.132
- type: recall_at_1
value: 30.72
- type: recall_at_10
value: 58.826
- type: recall_at_100
value: 82.104
- type: recall_at_1000
value: 94.194
- type: recall_at_3
value: 44.962999999999994
- type: recall_at_5
value: 51.426
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 31.656583333333334
- type: map_at_10
value: 41.59883333333333
- type: map_at_100
value: 42.80350000000001
- type: map_at_1000
value: 42.91075
- type: map_at_3
value: 38.68908333333333
- type: map_at_5
value: 40.27733333333334
- type: mrr_at_1
value: 37.23483333333334
- type: mrr_at_10
value: 45.782000000000004
- type: mrr_at_100
value: 46.577083333333334
- type: mrr_at_1000
value: 46.62516666666667
- type: mrr_at_3
value: 43.480666666666664
- type: mrr_at_5
value: 44.79833333333333
- type: ndcg_at_1
value: 37.23483333333334
- type: ndcg_at_10
value: 46.971500000000006
- type: ndcg_at_100
value: 51.90125
- type: ndcg_at_1000
value: 53.86366666666667
- type: ndcg_at_3
value: 42.31791666666667
- type: ndcg_at_5
value: 44.458666666666666
- type: precision_at_1
value: 37.23483333333334
- type: precision_at_10
value: 8.044583333333332
- type: precision_at_100
value: 1.2334166666666666
- type: precision_at_1000
value: 0.15925
- type: precision_at_3
value: 19.240833333333327
- type: precision_at_5
value: 13.435083333333333
- type: recall_at_1
value: 31.656583333333334
- type: recall_at_10
value: 58.44758333333333
- type: recall_at_100
value: 79.93658333333332
- type: recall_at_1000
value: 93.32491666666668
- type: recall_at_3
value: 45.44266666666667
- type: recall_at_5
value: 50.99866666666666
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 28.247
- type: map_at_10
value: 35.443999999999996
- type: map_at_100
value: 36.578
- type: map_at_1000
value: 36.675999999999995
- type: map_at_3
value: 33.276
- type: map_at_5
value: 34.536
- type: mrr_at_1
value: 31.747999999999998
- type: mrr_at_10
value: 38.413000000000004
- type: mrr_at_100
value: 39.327
- type: mrr_at_1000
value: 39.389
- type: mrr_at_3
value: 36.401
- type: mrr_at_5
value: 37.543
- type: ndcg_at_1
value: 31.747999999999998
- type: ndcg_at_10
value: 39.646
- type: ndcg_at_100
value: 44.861000000000004
- type: ndcg_at_1000
value: 47.197
- type: ndcg_at_3
value: 35.764
- type: ndcg_at_5
value: 37.635999999999996
- type: precision_at_1
value: 31.747999999999998
- type: precision_at_10
value: 6.12
- type: precision_at_100
value: 0.942
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 15.235000000000001
- type: precision_at_5
value: 10.491
- type: recall_at_1
value: 28.247
- type: recall_at_10
value: 49.456
- type: recall_at_100
value: 73.02499999999999
- type: recall_at_1000
value: 89.898
- type: recall_at_3
value: 38.653999999999996
- type: recall_at_5
value: 43.259
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 22.45
- type: map_at_10
value: 30.476999999999997
- type: map_at_100
value: 31.630999999999997
- type: map_at_1000
value: 31.755
- type: map_at_3
value: 27.989000000000004
- type: map_at_5
value: 29.410999999999998
- type: mrr_at_1
value: 26.979
- type: mrr_at_10
value: 34.316
- type: mrr_at_100
value: 35.272999999999996
- type: mrr_at_1000
value: 35.342
- type: mrr_at_3
value: 32.14
- type: mrr_at_5
value: 33.405
- type: ndcg_at_1
value: 26.979
- type: ndcg_at_10
value: 35.166
- type: ndcg_at_100
value: 40.583000000000006
- type: ndcg_at_1000
value: 43.282
- type: ndcg_at_3
value: 30.916
- type: ndcg_at_5
value: 32.973
- type: precision_at_1
value: 26.979
- type: precision_at_10
value: 6.132
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.227
- type: recall_at_1
value: 22.45
- type: recall_at_10
value: 45.348
- type: recall_at_100
value: 69.484
- type: recall_at_1000
value: 88.628
- type: recall_at_3
value: 33.338
- type: recall_at_5
value: 38.746
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 32.123000000000005
- type: map_at_10
value: 41.778
- type: map_at_100
value: 42.911
- type: map_at_1000
value: 42.994
- type: map_at_3
value: 38.558
- type: map_at_5
value: 40.318
- type: mrr_at_1
value: 37.687
- type: mrr_at_10
value: 45.889
- type: mrr_at_100
value: 46.672999999999995
- type: mrr_at_1000
value: 46.72
- type: mrr_at_3
value: 43.33
- type: mrr_at_5
value: 44.734
- type: ndcg_at_1
value: 37.687
- type: ndcg_at_10
value: 47.258
- type: ndcg_at_100
value: 52.331
- type: ndcg_at_1000
value: 54.152
- type: ndcg_at_3
value: 41.857
- type: ndcg_at_5
value: 44.283
- type: precision_at_1
value: 37.687
- type: precision_at_10
value: 7.892
- type: precision_at_100
value: 1.183
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 18.781
- type: precision_at_5
value: 13.134
- type: recall_at_1
value: 32.123000000000005
- type: recall_at_10
value: 59.760000000000005
- type: recall_at_100
value: 81.652
- type: recall_at_1000
value: 94.401
- type: recall_at_3
value: 44.996
- type: recall_at_5
value: 51.184
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 33.196999999999996
- type: map_at_10
value: 42.012
- type: map_at_100
value: 43.663999999999994
- type: map_at_1000
value: 43.883
- type: map_at_3
value: 39.33
- type: map_at_5
value: 40.586
- type: mrr_at_1
value: 39.328
- type: mrr_at_10
value: 46.57
- type: mrr_at_100
value: 47.508
- type: mrr_at_1000
value: 47.558
- type: mrr_at_3
value: 44.532
- type: mrr_at_5
value: 45.58
- type: ndcg_at_1
value: 39.328
- type: ndcg_at_10
value: 47.337
- type: ndcg_at_100
value: 52.989
- type: ndcg_at_1000
value: 55.224
- type: ndcg_at_3
value: 43.362
- type: ndcg_at_5
value: 44.866
- type: precision_at_1
value: 39.328
- type: precision_at_10
value: 8.577
- type: precision_at_100
value: 1.5789999999999997
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 19.697
- type: precision_at_5
value: 13.755
- type: recall_at_1
value: 33.196999999999996
- type: recall_at_10
value: 56.635000000000005
- type: recall_at_100
value: 81.882
- type: recall_at_1000
value: 95.342
- type: recall_at_3
value: 44.969
- type: recall_at_5
value: 49.266
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 26.901000000000003
- type: map_at_10
value: 35.77
- type: map_at_100
value: 36.638999999999996
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 33.219
- type: map_at_5
value: 34.574
- type: mrr_at_1
value: 29.205
- type: mrr_at_10
value: 37.848
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.682
- type: mrr_at_3
value: 35.551
- type: mrr_at_5
value: 36.808
- type: ndcg_at_1
value: 29.205
- type: ndcg_at_10
value: 40.589
- type: ndcg_at_100
value: 45.171
- type: ndcg_at_1000
value: 47.602
- type: ndcg_at_3
value: 35.760999999999996
- type: ndcg_at_5
value: 37.980000000000004
- type: precision_at_1
value: 29.205
- type: precision_at_10
value: 6.192
- type: precision_at_100
value: 0.922
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 15.034
- type: precision_at_5
value: 10.424999999999999
- type: recall_at_1
value: 26.901000000000003
- type: recall_at_10
value: 53.236000000000004
- type: recall_at_100
value: 74.809
- type: recall_at_1000
value: 92.884
- type: recall_at_3
value: 40.314
- type: recall_at_5
value: 45.617999999999995
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 16.794999999999998
- type: map_at_10
value: 29.322
- type: map_at_100
value: 31.463
- type: map_at_1000
value: 31.643
- type: map_at_3
value: 24.517
- type: map_at_5
value: 27.237000000000002
- type: mrr_at_1
value: 37.655
- type: mrr_at_10
value: 50.952
- type: mrr_at_100
value: 51.581999999999994
- type: mrr_at_1000
value: 51.61
- type: mrr_at_3
value: 47.991
- type: mrr_at_5
value: 49.744
- type: ndcg_at_1
value: 37.655
- type: ndcg_at_10
value: 39.328
- type: ndcg_at_100
value: 46.358
- type: ndcg_at_1000
value: 49.245
- type: ndcg_at_3
value: 33.052
- type: ndcg_at_5
value: 35.407
- type: precision_at_1
value: 37.655
- type: precision_at_10
value: 12.202
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.252
- type: precision_at_3
value: 24.973
- type: precision_at_5
value: 19.075
- type: recall_at_1
value: 16.794999999999998
- type: recall_at_10
value: 45.716
- type: recall_at_100
value: 68.919
- type: recall_at_1000
value: 84.71600000000001
- type: recall_at_3
value: 30.135
- type: recall_at_5
value: 37.141999999999996
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.817
- type: map_at_10
value: 22.058
- type: map_at_100
value: 31.805
- type: map_at_1000
value: 33.562999999999995
- type: map_at_3
value: 15.537
- type: map_at_5
value: 18.199
- type: mrr_at_1
value: 72.75
- type: mrr_at_10
value: 79.804
- type: mrr_at_100
value: 80.089
- type: mrr_at_1000
value: 80.09100000000001
- type: mrr_at_3
value: 78.75
- type: mrr_at_5
value: 79.325
- type: ndcg_at_1
value: 59.875
- type: ndcg_at_10
value: 45.972
- type: ndcg_at_100
value: 51.092999999999996
- type: ndcg_at_1000
value: 58.048
- type: ndcg_at_3
value: 50.552
- type: ndcg_at_5
value: 47.672
- type: precision_at_1
value: 72.75
- type: precision_at_10
value: 37.05
- type: precision_at_100
value: 12.005
- type: precision_at_1000
value: 2.221
- type: precision_at_3
value: 54.083000000000006
- type: precision_at_5
value: 46.2
- type: recall_at_1
value: 9.817
- type: recall_at_10
value: 27.877000000000002
- type: recall_at_100
value: 57.974000000000004
- type: recall_at_1000
value: 80.085
- type: recall_at_3
value: 16.911
- type: recall_at_5
value: 20.689
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.464999999999996
- type: f1
value: 42.759588662873796
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 75.82900000000001
- type: map_at_10
value: 84.613
- type: map_at_100
value: 84.845
- type: map_at_1000
value: 84.855
- type: map_at_3
value: 83.498
- type: map_at_5
value: 84.29299999999999
- type: mrr_at_1
value: 81.69800000000001
- type: mrr_at_10
value: 88.84100000000001
- type: mrr_at_100
value: 88.887
- type: mrr_at_1000
value: 88.888
- type: mrr_at_3
value: 88.179
- type: mrr_at_5
value: 88.69200000000001
- type: ndcg_at_1
value: 81.69800000000001
- type: ndcg_at_10
value: 88.21799999999999
- type: ndcg_at_100
value: 88.961
- type: ndcg_at_1000
value: 89.131
- type: ndcg_at_3
value: 86.591
- type: ndcg_at_5
value: 87.666
- type: precision_at_1
value: 81.69800000000001
- type: precision_at_10
value: 10.615
- type: precision_at_100
value: 1.125
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.208
- type: precision_at_5
value: 20.681
- type: recall_at_1
value: 75.82900000000001
- type: recall_at_10
value: 94.97
- type: recall_at_100
value: 97.786
- type: recall_at_1000
value: 98.809
- type: recall_at_3
value: 90.625
- type: recall_at_5
value: 93.345
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 22.788
- type: map_at_10
value: 36.71
- type: map_at_100
value: 38.527
- type: map_at_1000
value: 38.701
- type: map_at_3
value: 32.318999999999996
- type: map_at_5
value: 34.809
- type: mrr_at_1
value: 44.444
- type: mrr_at_10
value: 52.868
- type: mrr_at_100
value: 53.52400000000001
- type: mrr_at_1000
value: 53.559999999999995
- type: mrr_at_3
value: 50.153999999999996
- type: mrr_at_5
value: 51.651
- type: ndcg_at_1
value: 44.444
- type: ndcg_at_10
value: 44.707
- type: ndcg_at_100
value: 51.174
- type: ndcg_at_1000
value: 53.996
- type: ndcg_at_3
value: 40.855999999999995
- type: ndcg_at_5
value: 42.113
- type: precision_at_1
value: 44.444
- type: precision_at_10
value: 12.021999999999998
- type: precision_at_100
value: 1.8950000000000002
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 26.8
- type: precision_at_5
value: 19.66
- type: recall_at_1
value: 22.788
- type: recall_at_10
value: 51.793
- type: recall_at_100
value: 75.69500000000001
- type: recall_at_1000
value: 92.292
- type: recall_at_3
value: 37.375
- type: recall_at_5
value: 43.682
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 41.276
- type: map_at_10
value: 67.245
- type: map_at_100
value: 68.061
- type: map_at_1000
value: 68.11399999999999
- type: map_at_3
value: 63.693
- type: map_at_5
value: 65.90899999999999
- type: mrr_at_1
value: 82.552
- type: mrr_at_10
value: 87.741
- type: mrr_at_100
value: 87.868
- type: mrr_at_1000
value: 87.871
- type: mrr_at_3
value: 86.98599999999999
- type: mrr_at_5
value: 87.469
- type: ndcg_at_1
value: 82.552
- type: ndcg_at_10
value: 75.176
- type: ndcg_at_100
value: 77.902
- type: ndcg_at_1000
value: 78.852
- type: ndcg_at_3
value: 70.30499999999999
- type: ndcg_at_5
value: 73.00999999999999
- type: precision_at_1
value: 82.552
- type: precision_at_10
value: 15.765
- type: precision_at_100
value: 1.788
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 45.375
- type: precision_at_5
value: 29.360999999999997
- type: recall_at_1
value: 41.276
- type: recall_at_10
value: 78.825
- type: recall_at_100
value: 89.41900000000001
- type: recall_at_1000
value: 95.625
- type: recall_at_3
value: 68.062
- type: recall_at_5
value: 73.40299999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 72.876
- type: ap
value: 67.15477852410164
- type: f1
value: 72.65147370025373
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 21.748
- type: map_at_10
value: 34.626000000000005
- type: map_at_100
value: 35.813
- type: map_at_1000
value: 35.859
- type: map_at_3
value: 30.753000000000004
- type: map_at_5
value: 33.049
- type: mrr_at_1
value: 22.35
- type: mrr_at_10
value: 35.23
- type: mrr_at_100
value: 36.359
- type: mrr_at_1000
value: 36.399
- type: mrr_at_3
value: 31.436999999999998
- type: mrr_at_5
value: 33.687
- type: ndcg_at_1
value: 22.364
- type: ndcg_at_10
value: 41.677
- type: ndcg_at_100
value: 47.355999999999995
- type: ndcg_at_1000
value: 48.494
- type: ndcg_at_3
value: 33.85
- type: ndcg_at_5
value: 37.942
- type: precision_at_1
value: 22.364
- type: precision_at_10
value: 6.6000000000000005
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.527000000000001
- type: precision_at_5
value: 10.796999999999999
- type: recall_at_1
value: 21.748
- type: recall_at_10
value: 63.292
- type: recall_at_100
value: 89.427
- type: recall_at_1000
value: 98.13499999999999
- type: recall_at_3
value: 42.126000000000005
- type: recall_at_5
value: 51.968
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.62425900592795
- type: f1
value: 92.08497761553683
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 64.51436388508893
- type: f1
value: 45.884016531912906
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 76.57172995780591
- type: f1
value: 75.52979910878491
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 44.84052695201612
- type: v_measure
value: 21.443971229936494
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.79354404841965
- type: f1
value: 63.17260074126185
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.09616677874916
- type: f1
value: 69.74285784421075
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.474709231086184
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.93630367824217
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.08234393834005
- type: mrr
value: 29.740466971605432
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.2059999999999995
- type: map_at_10
value: 14.442
- type: map_at_100
value: 18.005
- type: map_at_1000
value: 19.488
- type: map_at_3
value: 10.666
- type: map_at_5
value: 12.45
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 57.519
- type: mrr_at_100
value: 58.13700000000001
- type: mrr_at_1000
value: 58.167
- type: mrr_at_3
value: 55.779
- type: mrr_at_5
value: 56.940000000000005
- type: ndcg_at_1
value: 45.82
- type: ndcg_at_10
value: 37.651
- type: ndcg_at_100
value: 34.001999999999995
- type: ndcg_at_1000
value: 42.626
- type: ndcg_at_3
value: 43.961
- type: ndcg_at_5
value: 41.461
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 27.584999999999997
- type: precision_at_100
value: 8.455
- type: precision_at_1000
value: 2.118
- type: precision_at_3
value: 41.692
- type: precision_at_5
value: 36.161
- type: recall_at_1
value: 6.2059999999999995
- type: recall_at_10
value: 18.599
- type: recall_at_100
value: 33.608
- type: recall_at_1000
value: 65.429
- type: recall_at_3
value: 12.126000000000001
- type: recall_at_5
value: 14.902000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.117000000000004
- type: map_at_10
value: 55.535000000000004
- type: map_at_100
value: 56.32899999999999
- type: map_at_1000
value: 56.34400000000001
- type: map_at_3
value: 51.439
- type: map_at_5
value: 53.89699999999999
- type: mrr_at_1
value: 43.714
- type: mrr_at_10
value: 58.05200000000001
- type: mrr_at_100
value: 58.582
- type: mrr_at_1000
value: 58.592
- type: mrr_at_3
value: 54.896
- type: mrr_at_5
value: 56.874
- type: ndcg_at_1
value: 43.685
- type: ndcg_at_10
value: 63.108
- type: ndcg_at_100
value: 66.231
- type: ndcg_at_1000
value: 66.583
- type: ndcg_at_3
value: 55.659000000000006
- type: ndcg_at_5
value: 59.681
- type: precision_at_1
value: 43.685
- type: precision_at_10
value: 9.962
- type: precision_at_100
value: 1.174
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.961
- type: precision_at_5
value: 17.352
- type: recall_at_1
value: 39.117000000000004
- type: recall_at_10
value: 83.408
- type: recall_at_100
value: 96.553
- type: recall_at_1000
value: 99.136
- type: recall_at_3
value: 64.364
- type: recall_at_5
value: 73.573
- task:
type: Classification
dataset:
name: MTEB NewsClassification
type: ag_news
config: default
split: test
revision: eb185aade064a813bc0b7f42de02595523103ca4
metrics:
- type: accuracy
value: 78.87763157894737
- type: f1
value: 78.69611753876177
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (en)
type: GEM/opusparcus
config: en
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.89816700610999
- type: cos_sim_ap
value: 100
- type: cos_sim_f1
value: 99.9490575649516
- type: cos_sim_precision
value: 100
- type: cos_sim_recall
value: 99.89816700610999
- type: dot_accuracy
value: 99.89816700610999
- type: dot_ap
value: 100
- type: dot_f1
value: 99.9490575649516
- type: dot_precision
value: 100
- type: dot_recall
value: 99.89816700610999
- type: euclidean_accuracy
value: 99.89816700610999
- type: euclidean_ap
value: 100
- type: euclidean_f1
value: 99.9490575649516
- type: euclidean_precision
value: 100
- type: euclidean_recall
value: 99.89816700610999
- type: manhattan_accuracy
value: 99.89816700610999
- type: manhattan_ap
value: 100
- type: manhattan_f1
value: 99.9490575649516
- type: manhattan_precision
value: 100
- type: manhattan_recall
value: 99.89816700610999
- type: max_accuracy
value: 99.89816700610999
- type: max_ap
value: 100
- type: max_f1
value: 99.9490575649516
- task:
type: PairClassification
dataset:
name: MTEB PawsX (en)
type: paws-x
config: en
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 62
- type: cos_sim_ap
value: 62.26837791655737
- type: cos_sim_f1
value: 62.607449856733524
- type: cos_sim_precision
value: 46.36604774535809
- type: cos_sim_recall
value: 96.36163175303197
- type: dot_accuracy
value: 62
- type: dot_ap
value: 62.26736459439965
- type: dot_f1
value: 62.607449856733524
- type: dot_precision
value: 46.36604774535809
- type: dot_recall
value: 96.36163175303197
- type: euclidean_accuracy
value: 62
- type: euclidean_ap
value: 62.26826112548132
- type: euclidean_f1
value: 62.607449856733524
- type: euclidean_precision
value: 46.36604774535809
- type: euclidean_recall
value: 96.36163175303197
- type: manhattan_accuracy
value: 62
- type: manhattan_ap
value: 62.26223761507973
- type: manhattan_f1
value: 62.585034013605444
- type: manhattan_precision
value: 46.34146341463415
- type: manhattan_recall
value: 96.36163175303197
- type: max_accuracy
value: 62
- type: max_ap
value: 62.26837791655737
- type: max_f1
value: 62.607449856733524
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 69.90899999999999
- type: map_at_10
value: 83.56700000000001
- type: map_at_100
value: 84.19200000000001
- type: map_at_1000
value: 84.212
- type: map_at_3
value: 80.658
- type: map_at_5
value: 82.473
- type: mrr_at_1
value: 80.4
- type: mrr_at_10
value: 86.699
- type: mrr_at_100
value: 86.798
- type: mrr_at_1000
value: 86.80099999999999
- type: mrr_at_3
value: 85.677
- type: mrr_at_5
value: 86.354
- type: ndcg_at_1
value: 80.43
- type: ndcg_at_10
value: 87.41
- type: ndcg_at_100
value: 88.653
- type: ndcg_at_1000
value: 88.81599999999999
- type: ndcg_at_3
value: 84.516
- type: ndcg_at_5
value: 86.068
- type: precision_at_1
value: 80.43
- type: precision_at_10
value: 13.234000000000002
- type: precision_at_100
value: 1.513
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.93
- type: precision_at_5
value: 24.26
- type: recall_at_1
value: 69.90899999999999
- type: recall_at_10
value: 94.687
- type: recall_at_100
value: 98.96000000000001
- type: recall_at_1000
value: 99.79599999999999
- type: recall_at_3
value: 86.25699999999999
- type: recall_at_5
value: 90.70700000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.02256865360266
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 62.43157528757563
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 5.093
- type: map_at_10
value: 12.982
- type: map_at_100
value: 15.031
- type: map_at_1000
value: 15.334
- type: map_at_3
value: 9.339
- type: map_at_5
value: 11.183
- type: mrr_at_1
value: 25.1
- type: mrr_at_10
value: 36.257
- type: mrr_at_100
value: 37.351
- type: mrr_at_1000
value: 37.409
- type: mrr_at_3
value: 33.050000000000004
- type: mrr_at_5
value: 35.205
- type: ndcg_at_1
value: 25.1
- type: ndcg_at_10
value: 21.361
- type: ndcg_at_100
value: 29.396
- type: ndcg_at_1000
value: 34.849999999999994
- type: ndcg_at_3
value: 20.704
- type: ndcg_at_5
value: 18.086
- type: precision_at_1
value: 25.1
- type: precision_at_10
value: 10.94
- type: precision_at_100
value: 2.257
- type: precision_at_1000
value: 0.358
- type: precision_at_3
value: 19.467000000000002
- type: precision_at_5
value: 15.98
- type: recall_at_1
value: 5.093
- type: recall_at_10
value: 22.177
- type: recall_at_100
value: 45.842
- type: recall_at_1000
value: 72.598
- type: recall_at_3
value: 11.833
- type: recall_at_5
value: 16.173000000000002
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 73.56535226754596
- type: cos_sim_spearman
value: 69.32425977603488
- type: euclidean_pearson
value: 71.32425703470898
- type: euclidean_spearman
value: 69.32425217267013
- type: manhattan_pearson
value: 71.25897281394246
- type: manhattan_spearman
value: 69.27132577049578
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 69.66387868726018
- type: cos_sim_spearman
value: 67.85470749045027
- type: euclidean_pearson
value: 66.62075098063795
- type: euclidean_spearman
value: 67.85470749045027
- type: manhattan_pearson
value: 66.61455061901262
- type: manhattan_spearman
value: 67.87229618498695
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 75.65731331392575
- type: cos_sim_spearman
value: 77.48991626780108
- type: euclidean_pearson
value: 77.19884738623692
- type: euclidean_spearman
value: 77.48985836619045
- type: manhattan_pearson
value: 77.0656684243772
- type: manhattan_spearman
value: 77.30289226582691
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 69.37003253666457
- type: cos_sim_spearman
value: 69.77157648098141
- type: euclidean_pearson
value: 69.39543876030432
- type: euclidean_spearman
value: 69.77157648098141
- type: manhattan_pearson
value: 69.29901600459745
- type: manhattan_spearman
value: 69.65074167527128
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 78.56777256540136
- type: cos_sim_spearman
value: 80.16458787843023
- type: euclidean_pearson
value: 80.16475730686916
- type: euclidean_spearman
value: 80.16458787843023
- type: manhattan_pearson
value: 80.12814463670401
- type: manhattan_spearman
value: 80.1357907984809
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 76.09572350919031
- type: cos_sim_spearman
value: 77.94490233429326
- type: euclidean_pearson
value: 78.36595251203524
- type: euclidean_spearman
value: 77.94490233429326
- type: manhattan_pearson
value: 78.41538768125166
- type: manhattan_spearman
value: 78.01244379569542
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.7843552187951
- type: cos_sim_spearman
value: 82.28085055047386
- type: euclidean_pearson
value: 82.37373672515267
- type: euclidean_spearman
value: 82.28085055047386
- type: manhattan_pearson
value: 82.39387241346917
- type: manhattan_spearman
value: 82.36503339515906
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 68.29963929962095
- type: cos_sim_spearman
value: 67.96868942546051
- type: euclidean_pearson
value: 68.93524903869285
- type: euclidean_spearman
value: 67.96868942546051
- type: manhattan_pearson
value: 68.79144468444811
- type: manhattan_spearman
value: 67.69311483884324
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 72.84789696700685
- type: cos_sim_spearman
value: 75.67875747588545
- type: euclidean_pearson
value: 75.07752300463038
- type: euclidean_spearman
value: 75.67875747588545
- type: manhattan_pearson
value: 74.97934248140928
- type: manhattan_spearman
value: 75.62525644178724
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (en)
type: PhilipMay/stsb_multi_mt
config: en
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 72.84789702519309
- type: cos_sim_spearman
value: 75.67875747588545
- type: euclidean_pearson
value: 75.07752310061133
- type: euclidean_spearman
value: 75.67875747588545
- type: manhattan_pearson
value: 74.97934257159595
- type: manhattan_spearman
value: 75.62525644178724
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 81.55557720431086
- type: mrr
value: 94.91178665198272
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 59.260999999999996
- type: map_at_10
value: 69.36099999999999
- type: map_at_100
value: 69.868
- type: map_at_1000
value: 69.877
- type: map_at_3
value: 66.617
- type: map_at_5
value: 68.061
- type: mrr_at_1
value: 62.333000000000006
- type: mrr_at_10
value: 70.533
- type: mrr_at_100
value: 70.966
- type: mrr_at_1000
value: 70.975
- type: mrr_at_3
value: 68.667
- type: mrr_at_5
value: 69.717
- type: ndcg_at_1
value: 62.333000000000006
- type: ndcg_at_10
value: 73.82300000000001
- type: ndcg_at_100
value: 76.122
- type: ndcg_at_1000
value: 76.374
- type: ndcg_at_3
value: 69.27499999999999
- type: ndcg_at_5
value: 71.33
- type: precision_at_1
value: 62.333000000000006
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.889000000000003
- type: precision_at_5
value: 17.599999999999998
- type: recall_at_1
value: 59.260999999999996
- type: recall_at_10
value: 86.2
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 98.667
- type: recall_at_3
value: 74.006
- type: recall_at_5
value: 79.167
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81881188118813
- type: cos_sim_ap
value: 95.20169041096409
- type: cos_sim_f1
value: 90.76224129227664
- type: cos_sim_precision
value: 91.64118246687055
- type: cos_sim_recall
value: 89.9
- type: dot_accuracy
value: 99.81881188118813
- type: dot_ap
value: 95.20169041096409
- type: dot_f1
value: 90.76224129227664
- type: dot_precision
value: 91.64118246687055
- type: dot_recall
value: 89.9
- type: euclidean_accuracy
value: 99.81881188118813
- type: euclidean_ap
value: 95.2016904109641
- type: euclidean_f1
value: 90.76224129227664
- type: euclidean_precision
value: 91.64118246687055
- type: euclidean_recall
value: 89.9
- type: manhattan_accuracy
value: 99.81881188118813
- type: manhattan_ap
value: 95.22680188132777
- type: manhattan_f1
value: 90.79013588324108
- type: manhattan_precision
value: 91.38804457953394
- type: manhattan_recall
value: 90.2
- type: max_accuracy
value: 99.81881188118813
- type: max_ap
value: 95.22680188132777
- type: max_f1
value: 90.79013588324108
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.8638628701308
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 37.82028248106046
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.870860210170946
- type: mrr
value: 51.608084521687466
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.60384207444685
- type: cos_sim_spearman
value: 30.84047452209471
- type: dot_pearson
value: 31.60384104417333
- type: dot_spearman
value: 30.84047452209471
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.246
- type: map_at_10
value: 2.051
- type: map_at_100
value: 13.129
- type: map_at_1000
value: 31.56
- type: map_at_3
value: 0.681
- type: map_at_5
value: 1.105
- type: mrr_at_1
value: 94
- type: mrr_at_10
value: 97
- type: mrr_at_100
value: 97
- type: mrr_at_1000
value: 97
- type: mrr_at_3
value: 97
- type: mrr_at_5
value: 97
- type: ndcg_at_1
value: 87
- type: ndcg_at_10
value: 80.716
- type: ndcg_at_100
value: 63.83
- type: ndcg_at_1000
value: 56.215
- type: ndcg_at_3
value: 84.531
- type: ndcg_at_5
value: 84.777
- type: precision_at_1
value: 94
- type: precision_at_10
value: 84.6
- type: precision_at_100
value: 66.03999999999999
- type: precision_at_1000
value: 24.878
- type: precision_at_3
value: 88.667
- type: precision_at_5
value: 89.60000000000001
- type: recall_at_1
value: 0.246
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 15.895999999999999
- type: recall_at_1000
value: 52.683
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.163
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.852
- type: map_at_10
value: 14.316
- type: map_at_100
value: 20.982
- type: map_at_1000
value: 22.58
- type: map_at_3
value: 7.767
- type: map_at_5
value: 10.321
- type: mrr_at_1
value: 51.019999999999996
- type: mrr_at_10
value: 66.365
- type: mrr_at_100
value: 66.522
- type: mrr_at_1000
value: 66.522
- type: mrr_at_3
value: 62.925
- type: mrr_at_5
value: 64.762
- type: ndcg_at_1
value: 46.939
- type: ndcg_at_10
value: 34.516999999999996
- type: ndcg_at_100
value: 44.25
- type: ndcg_at_1000
value: 54.899
- type: ndcg_at_3
value: 40.203
- type: ndcg_at_5
value: 37.004
- type: precision_at_1
value: 51.019999999999996
- type: precision_at_10
value: 29.796
- type: precision_at_100
value: 8.633000000000001
- type: precision_at_1000
value: 1.584
- type: precision_at_3
value: 40.816
- type: precision_at_5
value: 35.918
- type: recall_at_1
value: 3.852
- type: recall_at_10
value: 20.891000000000002
- type: recall_at_100
value: 52.428
- type: recall_at_1000
value: 84.34899999999999
- type: recall_at_3
value: 8.834
- type: recall_at_5
value: 12.909
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 64.7092
- type: ap
value: 11.972915012305819
- type: f1
value: 49.91050149892115
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.737408036219584
- type: f1
value: 57.07235266246011
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.9147539025798
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 82.52369315133814
- type: cos_sim_ap
value: 62.34858091376534
- type: cos_sim_f1
value: 58.18225190839694
- type: cos_sim_precision
value: 53.09098824553766
- type: cos_sim_recall
value: 64.35356200527704
- type: dot_accuracy
value: 82.52369315133814
- type: dot_ap
value: 62.34857753814992
- type: dot_f1
value: 58.18225190839694
- type: dot_precision
value: 53.09098824553766
- type: dot_recall
value: 64.35356200527704
- type: euclidean_accuracy
value: 82.52369315133814
- type: euclidean_ap
value: 62.34857756663386
- type: euclidean_f1
value: 58.18225190839694
- type: euclidean_precision
value: 53.09098824553766
- type: euclidean_recall
value: 64.35356200527704
- type: manhattan_accuracy
value: 82.49389044525243
- type: manhattan_ap
value: 62.32245347238179
- type: manhattan_f1
value: 58.206309819213054
- type: manhattan_precision
value: 52.70704044511021
- type: manhattan_recall
value: 64.9868073878628
- type: max_accuracy
value: 82.52369315133814
- type: max_ap
value: 62.34858091376534
- type: max_f1
value: 58.206309819213054
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34555827220863
- type: cos_sim_ap
value: 84.84152481680071
- type: cos_sim_f1
value: 76.860456739428
- type: cos_sim_precision
value: 72.21470150263978
- type: cos_sim_recall
value: 82.14505697566985
- type: dot_accuracy
value: 88.34555827220863
- type: dot_ap
value: 84.84152743322608
- type: dot_f1
value: 76.860456739428
- type: dot_precision
value: 72.21470150263978
- type: dot_recall
value: 82.14505697566985
- type: euclidean_accuracy
value: 88.34555827220863
- type: euclidean_ap
value: 84.84152589453169
- type: euclidean_f1
value: 76.860456739428
- type: euclidean_precision
value: 72.21470150263978
- type: euclidean_recall
value: 82.14505697566985
- type: manhattan_accuracy
value: 88.38242713548337
- type: manhattan_ap
value: 84.8112124970968
- type: manhattan_f1
value: 76.83599206057487
- type: manhattan_precision
value: 73.51244900829934
- type: manhattan_recall
value: 80.47428395441946
- type: max_accuracy
value: 88.38242713548337
- type: max_ap
value: 84.84152743322608
- type: max_f1
value: 76.860456739428
- task:
type: Clustering
dataset:
name: MTEB WikiCitiesClustering
type: jinaai/cities_wiki_clustering
config: default
split: test
revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa
metrics:
- type: v_measure
value: 85.5314389263015
---
<h1 align="center">Snowflake's Arctic-embed-l</h1>
<h4 align="center">
<p>
<a href="#news">News</a> |
<a href="#models">Models</a> |
<a href="#usage">Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#contact">Contact</a> |
<a href="#faq">FAQ</a> |
<a href="#license">License</a> |
<a href="#acknowledgement">Acknowledgement</a>
</p>
</h4>
## News
12/04/2024: Release of [snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) and [snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0), our newest models built with multilingual workloads in mind. These models outperform prior versions of Arctic Embed, and we suggest they replace prior versions!
07/26/2024: Release preprint [[2407.18887] Embedding And Clustering Your Data Can Improve Contrastive Pretraining](https://arxiv.org/abs/2407.18887) on arXiv.
07/18/2024: Release of `snowflake-arctic-embed-m-v1.5`, capable of producing highly compressible embedding vectors that preserve quality even when squished as small as 128 bytes per vector. Details about the development of this model are available in the [launch post on the Snowflake engineering blog](https://www.snowflake.com/engineering-blog/arctic-embed-m-v1-5-enterprise-retrieval/).
05/10/2024: Release the [technical report on Arctic Embed](https://arxiv.org/abs/2405.05374)
04/16/2024: Release the **snowflake-arctic-embed** family of text embedding models. The releases are state-of-the-art for retrieval quality at each of their representative size profiles. A technical report is coming shortly. For more details, please refer to our Github: [Arctic-Text-Embed](https://github.com/Snowflake-Labs/arctic-embed).
## Models
snowflake-arctic-embed is a suite of text embedding models that focuses on creating high-quality retrieval models optimized for performance.
The `snowflake-arctic-embed` models achieve **state-of-the-art performance on the MTEB/BEIR leaderboard** for each of their size variants. Evaluation is performed using these [scripts](https://github.com/Snowflake-Labs/snowflake-arctic-embed/tree/main/src). As shown below, each class of model size achieves SOTA retrieval accuracy compared to other top models.
The models are trained by leveraging existing open-source text representation models, such as bert-base-uncased, and are trained in a multi-stage pipeline to optimize their retrieval performance. First, the models are trained with large batches of query-document pairs where negatives are derived in-batch; pretraining leverages about 400m samples of a mix of public datasets and proprietary web search data. Following pretraining, the models are further optimized with long training on a smaller dataset (about 1m samples) of triplets of query, positive document, and negative document derived from hard negative mining. Mining of the negatives and data curation is crucial to retrieval accuracy. A detailed technical report can be found [here](https://arxiv.org/abs/2405.05374).
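To make the in-batch-negative objective concrete, below is an illustrative sketch of an InfoNCE-style contrastive loss of the kind described above. This is not Snowflake's training code; the shapes and the temperature value are assumptions for the example.
```python
# Illustrative sketch only (not Snowflake's training code): an InfoNCE-style
# contrastive loss with in-batch negatives. The temperature is an assumed value.
import torch
import torch.nn.functional as F

def info_nce_in_batch(query_emb: torch.Tensor,
                      doc_emb: torch.Tensor,
                      temperature: float = 0.02) -> torch.Tensor:
    # query_emb, doc_emb: (batch, dim). Row i of doc_emb is the positive
    # document for query i; every other row acts as an in-batch negative.
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature                     # (batch, batch) similarities
    labels = torch.arange(q.size(0), device=q.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```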
| Name | MTEB Retrieval Score (NDCG @ 10) | Parameters (Millions) | Embedding Dimension |
| ----------------------------------------------------------------------- | -------------------------------- | --------------------- | ------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 | 22 | 384 |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 | 33 | 384 |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 | 110 | 768 |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 | 137 | 768 |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 | 335 | 1024 |
Aside from being great open-source models, the largest model, [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/), can serve as a natural replacement for closed-source embedding, as shown below.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 |
| Google-gecko-text-embedding | 55.7 |
| text-embedding-3-large | 55.44 |
| Cohere-embed-english-v3.0 | 55.00 |
| bge-large-en-v1.5 | 54.29 |
### [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs)
This tiny model packs quite the punch. Based on the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model with only 22m parameters and 384 dimensions, this model should meet even the strictest latency/TCO budgets. Despite its size, its retrieval accuracy is closer to that of models with 100m parameters.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------- | -------------------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 |
| GIST-all-MiniLM-L6-v2 | 45.12 |
| gte-tiny | 44.92 |
| all-MiniLM-L6-v2 | 41.95 |
| bge-micro-v2 | 42.56 |
### [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s)
Based on the [intfloat/e5-small-unsupervised](https://huggingface.co/intfloat/e5-small-unsupervised) model, this small model does not trade off retrieval accuracy for its small size. With only 33m parameters and 384 dimensions, this model should easily allow scaling to large datasets.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 |
| bge-small-en-v1.5 | 51.68 |
| Cohere-embed-english-light-v3.0 | 51.34 |
| text-embedding-3-small | 51.08 |
| e5-small-v2 | 49.04 |
### [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/)
Based on the [intfloat/e5-base-unsupervised](https://huggingface.co/intfloat/e5-base-unsupervised) model, this medium model is the workhorse that provides the best retrieval performance without slowing down inference.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 |
| bge-base-en-v1.5 | 53.25 |
| nomic-embed-text-v1.5 | 53.25 |
| GIST-Embedding-v0 | 52.31 |
| gte-base | 52.31 |
### [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/)
Based on the [nomic-ai/nomic-embed-text-v1-unsupervised](https://huggingface.co/nomic-ai/nomic-embed-text-v1-unsupervised) model, this long-context variant of our medium-sized model is perfect for workloads that would be constrained by the regular 512-token context of our other models. Without the use of RPE (rotary position embeddings), this model supports up to 2048 tokens. With RPE, it can scale to 8192!
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 |
| nomic-embed-text-v1.5 | 53.01 |
| nomic-embed-text-v1 | 52.81 |
### [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/)
Based on the [intfloat/e5-large-unsupervised](https://huggingface.co/intfloat/e5-large-unsupervised) model, this large model is a direct drop-in for closed APIs and delivers the most accurate retrieval experience.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 |
| UAE-Large-V1 | 54.66 |
| bge-large-en-v1.5 | 54.29 |
| mxbai-embed-large-v1 | 54.39 |
| e5-Large-v2 | 50.56 |
## Usage
### Using Sentence Transformers
You can use the sentence-transformers package to use a snowflake-arctic-embed model, as shown below.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = query_embeddings @ document_embeddings.T
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
# Output passages & scores
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
```
Query: what is snowflake?
0.28976774 The Data Cloud!
0.19071159 Mexico City of Course!
Query: Where can I get the best tacos?
0.38650584 Mexico City of Course!
0.25145516 The Data Cloud!
```
### Using Huggingface transformers
You can use the transformers package to use a snowflake-arctic-embed model, as shown below. For optimal retrieval quality, use the CLS token to embed each text portion and use the query prefix below (just on the query).
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-l')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-l', add_pooling_layer=False)
model.eval()
query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)
documents = ['The Data Cloud!', 'Mexico City of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)
# Compute token embeddings
with torch.no_grad():
query_embeddings = model(**query_tokens)[0][:, 0]
document_embeddings = model(**document_tokens)[0][:, 0]
# normalize embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)
scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
### Using Transformers.js
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) by running:
```bash
npm i @xenova/transformers
```
You can then use the model to compute embeddings as follows:
```js
import { pipeline, dot } from '@xenova/transformers';
// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-l', {
quantized: false, // Comment out this line to use the quantized version
});
// Generate sentence embeddings
const sentences = [
'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
'The Data Cloud!',
'Mexico City of Course!',
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });
// Compute similarity scores
const [source_embeddings, ...document_embeddings ] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // [0.25145517380846977, 0.3865060421197194]
```
### Using Infinity
OpenAI compatible API deployment with [Infinity](https://github.com/michaelfeil/infinity) and Docker.
```bash
docker run --gpus all -v $PWD/data:/app/.cache -p "7997":"7997" \
michaelf34/infinity:0.0.70 \
v2 --model-id Snowflake/snowflake-arctic-embed-l --dtype float16 --batch-size 32 --engine torch --port 7997
```
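Once the container is up, you can query it from any HTTP client. A minimal sketch, assuming the port mapping above and Infinity's OpenAI-compatible `/embeddings` route:
```python
# Minimal client sketch (assumes the Infinity container above is running on
# localhost:7997 and serves the OpenAI-compatible /embeddings route).
import requests

response = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "Snowflake/snowflake-arctic-embed-l",
        "input": ["Represent this sentence for searching relevant passages: what is snowflake?"],
    },
)
response.raise_for_status()
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))  # expected: 1024 dimensions for arctic-embed-l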
## FAQ
TBD
## Contact
Feel free to open an issue or pull request if you have any questions or suggestions about this project.
You can also email Daniel Campos ([email protected]).
## License
Arctic is licensed under the [Apache-2](https://www.apache.org/licenses/LICENSE-2.0). The released models can be used for commercial purposes free of charge.
## Acknowledgement
We want to thank the open-source community, which has provided the great building blocks upon which we could make our models.
We thank our modeling engineers, Danmei Xu, Luke Merrick, Gaurav Nuti, and Daniel Campos, for making these great models possible.
We thank our leadership, Himabindu Pucha, Kelvin So, Vivek Raghunathan, and Sridhar Ramaswamy, for supporting this work.
We also thank the open-source community for producing the great models we could build on top of and making these releases possible.
Finally, we thank the researchers who created BEIR and MTEB benchmarks.
It is largely thanks to their tireless work to define what better looks like that we could improve model performance.
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=d6741f66-9018-401c-8805-d79c74fb98ff" /> | [
"SUMMARIZATION"
] | Non_BioNLP |
webbigdata/C3TR-Adapter_hqq | webbigdata | translation | [
"gptq",
"gemma",
"translation",
"hqq",
"text-generation-inference",
"nlp",
"ja",
"en",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"region:us"
] | 1,716,372,318,000 | 2024-05-24T07:34:44 | 11 | 0 | ---
base_model: google/gemma-7b
language:
- ja
- en
library_name: gptq
tags:
- translation
- hqq
- gemma
- text-generation-inference
- nlp
---
### Model card
英日、日英翻訳用モデル[C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter)のHQQ(Half-Quadratic Quantization)4bit量子化版です。
This is the HQQ (Half-Quadratic Quantization) 4bit quantized version of [C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter), a model for English-Japanese and Japanese-English translation.
### 簡単に動かす方法 (A quick way to try it)
Colab有料版(L4かA100)が必要ですが以下のColabで試す事ができます
You need a paid version of Colab (L4 or A100), but you can try it out with the following Colab
[C3TR_Adapter_hqq_v2_Paid_Colab_sample](https://github.com/webbigdata-jp/python_sample/blob/main/C3TR_Adapter_hqq_v2_Paid_Colab_sample.ipynb)
### install
[hqq](https://github.com/mobiusml/hqq)の公式サイトをご確認下さい
Check official [hqq](https://github.com/mobiusml/hqq)
私はソースからhqqをインストール( pip install git+https://github.com/mobiusml/hqq.git ではなくてローカルにclone)しないと動かす事ができませんでした。
I couldn't get it to work without installing hqq from source (not `pip install git+https://github.com/mobiusml/hqq.git`).
A100や30x0シリーズのような新しいGPUが必要です。
残念ながらColabの無料版(T4)では動きません
We need newer GPUs such as the A100 and 30x0 series.
Unfortunately, this does not work with the free version of Colab (T4).
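A quick way to check whether your GPU is recent enough is to inspect its CUDA compute capability (Ampere is major version 8 and up). A minimal sketch:
```python
# Quick capability check. Assumption: the HQQ 4-bit path used here needs an
# Ampere-or-newer GPU (compute capability >= 8.0, e.g. A100 or RTX 30x0).
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device found.")
major, minor = torch.cuda.get_device_capability(0)
print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
if major < 8:
    print("Warning: pre-Ampere GPU (a Colab T4 is 7.5); this model may not run.")
```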
```
# transformers 4.41.1
pip install transformers==4.41.1
# hqq 0.1.7.post2
git clone https://github.com/mobiusml/hqq
cd hqq
pip install .
```
### Sample code
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel
model_id = "webbigdata/C3TR-Adapter_hqq"
model = AutoHQQHFModel.from_quantized(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt_text = """You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.
### Instruction:
Translate Japanese to English.
When translating, please use the following hints:
[writeing_style: formal]
[岡浩: Hiroshi Oka]
[植草泰彦: Yasuhiko Uekusa]
### Input:
5月20日(現地時間同日)、エジプト・アラブ共和国の首都カイロにおいて、岡浩駐エジプト日本国特命全権大使、植草泰彦内閣府国際平和協力本部事務局参事官及びサハル・アル・ジョブリー国連パレスチナ難民救済事業機関(UNRWA)カイロ事務所長(Ms. Sahar Al Jobury, Chief, the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) Representative Office in Cairo)の列席のもと、UNRWAに対して提供する物資の供与式を実施しました。
スリーピングマット等の支援物資は、18日(現地時間17日)、アラブ首長国連邦の備蓄倉庫からエジプトのエルアリーシュ空港に到着し、
今後、エジプト赤新月社の協力を得てガザ地区まで輸送され、パレスチナ被災民のために活用されます。
(参考)UNRWAによるパレスチナ被災民支援活動に対する物資協力
概要
パレスチナ暫定自治区であるガザ地区において人道的な国際救援活動を行っている国際連合パレスチナ難民救済事業機関(UNRWA)に対し、国際平和協力法に基づき、先方から依頼のあった物資を提供する。
提供物資
内閣府が人道支援のためにドバイに備蓄している以下の物資を提供。
毛布 5,000枚
給水容器 10,000個
ビニールシート 4,500枚
スリーピングマット 8,500枚
### Response:
"""
tokens = tokenizer(prompt_text, return_tensors="pt",
padding=True, max_length=1600, truncation=True).to("cuda:0").input_ids
output = model.generate(
input_ids=tokens,
max_new_tokens=800,
do_sample=True,
num_beams=3, temperature=0.5, top_p=0.3,
repetition_penalty=1.0)
print(tokenizer.decode(output[0]))
```
出力(Output)
```
### Response:
On May 20 (local time), H.E. Mr. Hiroshi Oka, Ambassador Extraordinary and Plenipotentiary of Japan to the Arab Republic of Egypt, Mr. Yasuhiko Uekusa, Counsellor, International Cooperation Bureau, Cabinet Office, and Ms. Sahar Al Jobury, Chief, the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) Representative Office in Cairo, attended the handover ceremony of relief supplies to be provided to UNRWA.
The relief supplies such as sleeping mats were received on May 18 (local time, May 17) at El Arish Airport in the Arab Republic of Egypt from a warehouse in the United Arab Emirates. The supplies will be transported to the Gaza Strip with the cooperation of the Egyptian Red Crescent Society and will be utilized for Palestinian refugees.
(Reference) Provision of Relief Supplies to UNRWA for Assistance to Palestinian Refugees
Overview
Based on the International Cooperation Law, Japan will provide relief supplies to the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA), which is conducting humanitarian international relief activities in the Gaza Strip, Palestinian Territory.
Relief Supplies
The following relief supplies stored in Dubai for humanitarian assistance will be provided:
Blankets: 5,000 pieces
Water containers: 10,000 pieces
Plastic sheets: 4,500 pieces
Sleeping mats: 8,500 pieces<eos>
```
### Sample code for High-speed inference (For NVIDIA Ampere or later, A100 or RTX 3090, etc.)
```
import torch, os
from hqq.engine.hf import AutoTokenizer
from hqq.core.quantize import *
from hqq.utils.patching import *
from hqq.models.hf.base import AutoHQQHFModel
model_id = "webbigdata/C3TR-Adapter_hqq"
os.environ["TOKENIZERS_PARALLELISM"] = "1"
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
compute_dtype = torch.bfloat16
model = AutoHQQHFModel.from_quantized(model_id, compute_dtype=compute_dtype)
tokenizer = AutoTokenizer.from_pretrained(model_id)
patch_linearlayers(model, patch_add_quant_config,
BaseQuantizeConfig(nbits=4, group_size=64, quant_scale=False, quant_zero=False, axis=1))
HQQLinear.set_backend(HQQBackend.PYTORCH)
from hqq.utils.patching import prepare_for_inference
prepare_for_inference(model, backend="torchao_int4")
prompt_text = """You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.
### Instruction:
Translate Japanese to English.
When translating, please use the following hints:
[writeing_style: formal]
[米津玄師: Kenshi YONEZU]
[吉野源三郎: Genzaburo YOSHINO]
### Input:
「私自身、訳が分からない」
「おそらく、訳が分からなかったことでしょう。私自身、訳が分からないところがありました」。
2023年2月下旬、東京都内のスタジオで上映された、「君たちはどう生きるか」の初号試写。米津玄師の歌うピアノバラードが流れ、エンド
ロールが終わった瞬間、灯りが点き、宮崎駿監督のコメントが読み上げられた。
客席から軽い笑い声が漏れた。私もその一人だった。あまりの展開の速さと、盛り込むだけ盛り込まれた情報を消化しきれず、茫然と座り>込んでいたが、その言葉で我に返った。
これは「宮崎アニメ」の集大成なのか、吉野源三郎の著書『君たちはどう生きるか』の再解釈なのか。とにかく、1回見ただけではとても全容を把握できなかった。
「自分のことをやるしかない」
今回の作品は、公開前のプロモーションも、メディア関係者向けの試写も一切ないまま公開日を迎えた。異例の態勢の中、内容は無論、見たことすら口外無用のキャスト・スタッフ向け試写に、なぜ私と両親が呼ばれたのかといえば、父が『君たちはどう生きるか』の著者・吉野源三郎の長男で、私が孫にあたるからだ。
その5年ほど前の2017年11月、父と私は東京・小金井のスタジオジブリに招かれ、宮崎監督と対面していた。さらにさかのぼること半月ほど前、とあるイベントで宮崎監督が突然、次回作のタイトルが「君たちはどう生きるか」だと明らかにし、ニュースなどで話題になっていた。親族としては寝耳に水だったのでかなり驚いたのだが、宮崎監督は「うっかり喋ってしまいました」と詫びた上で、作品について語り始めた。
### Response:
"""
tokens = tokenizer(prompt_text, return_tensors="pt",
padding=True, max_length=1600, truncation=True).to("cuda:0").input_ids
output = model.generate(
input_ids=tokens,
max_new_tokens=800,
do_sample=True,
num_beams=3, temperature=0.5, top_p=0.3,
repetition_penalty=1.0)
print(tokenizer.decode(output[0]))
```
### See also
詳細は[C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter)を見てください
See also [C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter) | [
"TRANSLATION"
] | Non_BioNLP |
derekiya/bart_fine_tuned_model-v2 | derekiya | text2text-generation | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,701,034,047,000 | 2023-11-27T06:17:03 | 7 | 0 | ---
language:
- en
library_name: transformers
license: apache-2.0
---
# Model Card: bart_fine_tuned_model-v2
<!-- Provide a quick summary of what the model is/does. -->
## Model Name

bart_fine_tuned_model-v2
### Model Description
This model represents a fine-tuned version of the facebook/bart-large model, specifically adapted for the task of Resume Summarization. The model has been trained to efficiently generate concise and relevant summaries from extensive resume texts. The fine-tuning process has tailored the original BART model to specialize in summarization tasks based on a specific dataset.
### Model Information
- **Base Model:** facebook/bart-large
- **Finetuning Dataset:** To be made available in the future.
### Training Parameters
- **Evaluation Strategy: epoch**
- **Learning Rate: 5e-5**
- **Per Device Train Batch Size: 8**
- **Per Device Eval Batch Size: 8**
- **Weight Decay: 0.01**
- **Save Total Limit: 5**
- **Number of Training Epochs: 10**
- **Predict with Generate: True**
- **Gradient Accumulation Steps: 1**
- **Optimizer: paged_adamw_32bit**
- **Learning Rate Scheduler Type: cosine**
## How to use
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**1.** Install the transformers library:
`pip install transformers`
**2.** Import the necessary modules:
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
**3.** Initialize the model and tokenizer:
model_name = 'derekiya/bart_fine_tuned_model-v2'
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
**4.** Prepare the text for summarization:
text = 'Your resume text here'
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="max_length")
**5.** Generate the summary:
min_length_threshold = 55
summary_ids = model.generate(inputs["input_ids"], num_beams=4, min_length=min_length_threshold, max_length=150, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
**6.** Output the summary:
print("Summary:", summary)
## Model Card Authors
Dereje Hinsermu
## Model Card Contact
| [
"SUMMARIZATION"
] | Non_BioNLP |
ruanchaves/bert-large-portuguese-cased-assin2-entailment | ruanchaves | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"pt",
"dataset:assin2",
"autotrain_compatible",
"region:us"
] | 1,679,940,573,000 | 2023-03-29T18:05:48 | 21 | 0 | ---
datasets:
- assin2
language: pt
inference: false
---
# BERTimbau large for Recognizing Textual Entailment
This is the [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) model finetuned for
Recognizing Textual Entailment with the [ASSIN 2](https://huggingface.co/datasets/assin2) dataset.
This model is suitable for Portuguese.
- Git Repo: [Evaluation of Portuguese Language Models](https://github.com/ruanchaves/eplm).
- Demo: [Portuguese Textual Entailment](https://ruanchaves-portuguese-textual-entailment.hf.space)
### **Labels**:
* 0 : There is no entailment between premise and hypothesis.
* 1 : There is entailment between premise and hypothesis.
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import torch
from scipy.special import softmax
model_name = "ruanchaves/bert-large-portuguese-cased-assin2-entailment"
s1 = "Os homens estão cuidadosamente colocando as malas no porta-malas de um carro."
s2 = "Os homens estão colocando bagagens dentro do porta-malas de um carro."
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model_input = tokenizer(*([s1], [s2]), padding=True, return_tensors="pt")
with torch.no_grad():
output = model(**model_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) Label: {l} Score: {np.round(float(s), 4)}")
```
## Citation
Our research is ongoing, and we are currently working on describing our experiments in a paper, which will be published soon.
In the meantime, if you would like to cite our work or models before the publication of the paper, please cite our [GitHub repository](https://github.com/ruanchaves/eplm):
```
@software{Chaves_Rodrigues_eplm_2023,
author = {Chaves Rodrigues, Ruan and Tanti, Marc and Agerri, Rodrigo},
doi = {10.5281/zenodo.7781848},
month = {3},
title = {{Evaluation of Portuguese Language Models}},
url = {https://github.com/ruanchaves/eplm},
version = {1.0.0},
year = {2023}
}
``` | [
"TEXTUAL_ENTAILMENT"
] | Non_BioNLP |
elinas/alpaca-30b-lora-int4 | elinas | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"alpaca",
"gptq",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,679,355,703,000 | 2023-04-05T16:42:03 | 44 | 68 | ---
license: other
tags:
- alpaca
- gptq
---
# llama-30b-int4
This LoRA trained for 3 epochs and has been converted to int4 (4bit) via GPTQ method.
Use one of the two **safetensors** versions; the **pt** version is an old quantization that is no longer supported and will be removed in the future. Make sure you only have **ONE** checkpoint of the two in your model directory! See the repo below for more info.
LoRA credit to https://huggingface.co/baseten/alpaca-30b
# Important - Update 2023-04-05
Recent GPTQ commits have introduced breaking changes to model loading, and you should use this fork for a stable experience: https://github.com/oobabooga/GPTQ-for-LLaMa
Currently only CUDA is supported.
# Update 2023-03-29
There is also a non-groupsize quantized model that is 1GB smaller in size, which should allow running at max context tokens with 24GB VRAM. The evaluations are better
on the 128 groupsize version, but the tradeoff is not being able to run it at full context without offloading or a GPU with more VRAM.
# Update 2023-03-27
New weights have been added. The old .pt version is no longer supported and has been replaced by a 128 groupsize safetensors file. Update to the latest GPTQ version/webui.
Evals - Groupsize 128 + True Sequential
-----
**alpaca-30b-4bit-128g.safetensors** [4805cc2]
**c4-new** - 6.398105144500732
**ptb-new** - 8.449508666992188
**wikitext2** - 4.402845859527588
Evals - Default + True Sequential
-----
**alpaca-30b-4bit.safetensors** [6958004]
**c4-new** - 6.592941761016846
**ptb-new** - 8.718379974365234
**wikitext2** - 4.635514736175537
# Usage
1. Run manually through GPTQ
2. (More setup but better UI) - Use the [text-generation-webui](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode). Make sure to follow the installation steps first [here](https://github.com/oobabooga/text-generation-webui#installation) before adding GPTQ support.
Since this is instruction tuned, for best results, use the following format for inference:
```
### Instruction:
your-prompt
### Response:
```
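For scripted inference, the same template can be applied programmatically. A minimal sketch (the helper function is ours, purely illustrative, and not part of the released model):
```python
# Hypothetical helper for the Alpaca-style instruction template shown above.
def build_alpaca_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n### Response:\n"

print(build_alpaca_prompt("Explain what a LoRA adapter is."))
```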
If you want deterministic results, turn off sampling. You can turn it off in the webui by unchecking `do_sample`.
For cai-chat mode, you won't want to use instruction prompting; rather, create a character and set sampler settings. Here is an example of settings that work well for me:
```
do_sample=True
temperature=0.95
top_p=1
typical_p=1
repetition_penalty=1.1
top_k=40
num_beams=1
penalty_alpha=0
min_length=0
length_penalty=1
no_repeat_ngram_size=0
early_stopping=False
```
You can then save this as a `.txt` file in the `presets` folder.
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content. | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
tmnam20/bert-base-multilingual-cased-qqp-10 | tmnam20 | text-classification | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705,388,287,000 | 2024-01-16T06:59:29 | 5 | 0 | ---
base_model: bert-base-multilingual-cased
datasets:
- tmnam20/VieGLUE
language:
- en
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-qqp-10
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tmnam20/VieGLUE/QQP
type: tmnam20/VieGLUE
config: qqp
split: validation
args: qqp
metrics:
- type: accuracy
value: 0.8885975760573831
name: Accuracy
- type: f1
value: 0.8473737716028464
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-qqp-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3064
- Accuracy: 0.8886
- F1: 0.8474
- Combined Score: 0.8680
## Model description
More information needed
## Intended uses & limitations
More information needed
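Pending fuller documentation, a minimal inference sketch is shown below. It assumes the checkpoint is available on the Hub under this repository name and follows the usual GLUE QQP label order (index 1 = duplicate); check `model.config.id2label` to confirm.
```python
# Minimal inference sketch; label order is an assumption (GLUE QQP convention).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "tmnam20/bert-base-multilingual-cased-qqp-10"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({"not_duplicate": probs[0].item(), "duplicate": probs[1].item()})
```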
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3263 | 0.44 | 5000 | 0.3272 | 0.8557 | 0.8081 | 0.8319 |
| 0.3084 | 0.88 | 10000 | 0.2968 | 0.8680 | 0.8191 | 0.8436 |
| 0.2424 | 1.32 | 15000 | 0.2998 | 0.8768 | 0.8324 | 0.8546 |
| 0.2171 | 1.76 | 20000 | 0.2995 | 0.8847 | 0.8449 | 0.8648 |
| 0.1796 | 2.2 | 25000 | 0.3124 | 0.8857 | 0.8424 | 0.8640 |
| 0.1811 | 2.64 | 30000 | 0.2963 | 0.8883 | 0.8477 | 0.8680 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gaudi/opus-mt-de-ase-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,229,764,000 | 2024-10-18T23:41:38 | 6 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-de-ase)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**The source of the benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-de-ase).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-de-ase --output_dir ./ctranslate2/opus-mt-de-ase-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-de-ase-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-de-ase-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-de-ase-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-de-ase) by Helsinki-NLP.
| [
"TRANSLATION"
] | Non_BioNLP |
vocabtrimmer/mt5-small-trimmed-es-90000-esquad-qa | vocabtrimmer | text2text-generation | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question answering",
"es",
"dataset:lmqg/qg_esquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,679,388,159,000 | 2023-03-21T08:43:36 | 17 | 0 | ---
datasets:
- lmqg/qg_esquad
language: es
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: 'question: ¿Cuál es la población de Nueva York a partir de 2014?, context:
Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva
York consta de cinco municipios, cada uno de los cuales es un condado separado
del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el
Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población
censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie
de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados
Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más
lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región
metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo
la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística
Metropolitana (20,1 millones de residentes). En 2013, el MSA produjo un producto
metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el
CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar.'
example_title: Question Answering Example 1
- text: 'question: ¿Cómo se llama el ejército personal de Sassou?, context: El progreso
democrático del Congo se descarriló en 1997, cuando Lissouba y Sassou comenzaron
a luchar por el poder en la guerra civil. A medida que se acercaban las elecciones
presidenciales de julio de 1997, las tensiones entre los campos de Lissouba y
Sassou aumentaron. El 5 de junio, las fuerzas del gobierno del presidente Lissouba
rodearon el complejo de Sassou en Brazzaville y Sassou ordenó a los miembros de
su milicia privada (conocida como Cobras) resistir. Así comenzó un conflicto de
cuatro meses que destruyó o dañó gran parte de Brazzaville y causó decenas de
miles de muertes civiles. A principios de octubre, el régimen socialista angoleño
comenzó una invasión del Congo para instalar a Sassou en el poder. A mediados
de octubre, el gobierno de Lissouba cayó. Poco después, Sassou se declaró presidente.'
example_title: Question Answering Example 2
model-index:
- name: vocabtrimmer/mt5-small-trimmed-es-90000-esquad-qa
results:
- task:
type: text2text-generation
name: Text2text Generation
dataset:
name: lmqg/qg_esquad
type: default
args: default
metrics:
- type: bleu4_question_answering
value: 18.23
name: BLEU4 (Question Answering)
- type: rouge_l_question_answering
value: 37.06
name: ROUGE-L (Question Answering)
- type: meteor_question_answering
value: 32.0
name: METEOR (Question Answering)
- type: bertscore_question_answering
value: 91.05
name: BERTScore (Question Answering)
- type: moverscore_question_answering
value: 75.75
name: MoverScore (Question Answering)
- type: answer_f1_score__question_answering
value: 58.94
name: AnswerF1Score (Question Answering)
- type: answer_exact_match_question_answering
value: 38.15
name: AnswerExactMatch (Question Answering)
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-es-90000-esquad-qa`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-es-90000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-90000) for the question answering task on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-es-90000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-90000)
- **Language:** es
- **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="es", model="vocabtrimmer/mt5-small-trimmed-es-90000-esquad-qa")
# model prediction
answers = model.answer_q(list_question="¿Cuál es la población de Nueva York a partir de 2014?", list_context=" Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva York consta de cinco municipios, cada uno de los cuales es un condado separado del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística Metropolitana (20,1 millones de residentes). En 2013, el MSA produjo un producto metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-es-90000-esquad-qa")
output = pipe("question: ¿Cuál es la población de Nueva York a partir de 2014?, context: Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva York consta de cinco municipios, cada uno de los cuales es un condado separado del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística Metropolitana (20,1 millones de residentes). En 2013, el MSA produjo un producto metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-90000-esquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_esquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 38.15 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| AnswerF1Score | 58.94 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| BERTScore | 91.05 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_1 | 28.33 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_2 | 23.82 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_3 | 20.72 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_4 | 18.23 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| METEOR | 32 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| MoverScore | 75.75 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| ROUGE_L | 37.06 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
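The two answer-level metrics are SQuAD-style scores. A minimal sketch of how exact match and token-level F1 can be computed is shown below; the text normalization here is deliberately simplified and is an assumption, not the exact implementation used by `lmqg`.
```python
from collections import Counter

def answer_exact_match(prediction: str, reference: str) -> float:
    # Simplified normalization: lowercase and strip whitespace only.
    return float(prediction.strip().lower() == reference.strip().lower())

def answer_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```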
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_esquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-es-90000
- max_length: 512
- max_length_output: 32
- epoch: 14
- batch: 32
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-90000-esquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
partypress/partypress-monolingual-uk | partypress | text-classification | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"partypress",
"political science",
"parties",
"press releases",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,377,307,000 | 2023-11-09T11:08:17 | 120 | 0 | ---
language:
- en
license: cc-by-sa-4.0
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- partypress
- political science
- parties
- press releases
widget:
- text: Farmers who applied for a Force Majeure when their businesses were impacted
  by severe flooding and landslides on 22 and 23 August 2017 can now apply for the
  one-off financial payment. “The extreme flooding event meant that the farming and
  wider rural communities in the North West experienced significant hardship. Farm
  businesses lost income due to the impact on their land and the cost of removing
  debris and silt, as well as reseeding to restore it back to productive use,” said
  Minister Poots. “So I am delighted to say that this North West 2017 Flooding Income
  Support Scheme, worth almost £2.7 million, is now open to applications. This is
  a time-limited scheme which will close on 12 August 2021. “The one-off grant payment,
  which will be capped at £106,323 per farm business, is available for farmers who
  applied for a Force Majeure in respect of the flooding incident. “I would urge
  all eligible businesses to make sure their application is submitted as soon as
  possible,” Minister Poots added. Eligible farm businesses will receive a letter
  inviting them to apply for the support package, with instructions on how to access
  the application form and receive help to complete it. They must complete the application
  form available on DAERA Online Services from 28 July 2021. Explanatory information
  and guidance will also be published on the DAERA website. Further information on
  the scheme can be found on the DAERA website www.daera-ni.gov.uk
---
# PARTYPRESS monolingual UK
Fine-tuned model, based on [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). Used in [Erfort et al. (2023)](https://doi.org/10.1177/20531680231183512), building on the PARTYPRESS database. For the downstream task of classifying press releases from political parties into 23 unique policy areas, we achieve a performance comparable to expert human coders.
## Model description
The PARTYPRESS monolingual model builds on [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) but has a supervised component. This means, it was fine-tuned using texts labeled by humans. The labels indicate 23 different political issue categories derived from the Comparative Agendas Project (CAP):
| Code | Issue |
|--|-------|
| 1 | Macroeconomics |
| 2 | Civil Rights |
| 3 | Health |
| 4 | Agriculture |
| 5 | Labor |
| 6 | Education |
| 7 | Environment |
| 8 | Energy |
| 9 | Immigration |
| 10 | Transportation |
| 12 | Law and Crime |
| 13 | Social Welfare |
| 14 | Housing |
| 15 | Domestic Commerce |
| 16 | Defense |
| 17 | Technology |
| 18 | Foreign Trade |
| 19.1 | International Affairs |
| 19.2 | European Union |
| 20 | Government Operations |
| 23 | Culture |
| 98 | Non-thematic |
| 99 | Other |
## Model variations
There are several monolingual models for different countries, and a multilingual model. The multilingual model can be easily extended to other languages, country contexts, or time periods by fine-tuning it with minimal additional labeled texts.
## Intended uses & limitations
The main use of the model is for text classification of press releases from political parties. It may also be useful for other political texts.
The classification can then be used to measure which issues parties are discussing in their communication.
### How to use
This model can be used directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512}
>>> partypress = pipeline("text-classification", model="partypress/partypress-monolingual-uk", tokenizer="partypress/partypress-monolingual-uk", **tokenizer_kwargs)
>>> partypress("Your text here.")
```
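Building on the pipeline above, predictions can be aggregated over a whole corpus to measure how much attention each issue receives. The snippet below is a minimal sketch; `press_releases` is a hypothetical list of document strings.
```python
from collections import Counter

press_releases = ["First press release ...", "Second press release ..."]  # hypothetical corpus

# Classify every press release and tally the predicted issue labels.
predictions = partypress(press_releases)
counts = Counter(p["label"] for p in predictions)

# Share of attention each issue receives in the corpus.
shares = {label: n / len(press_releases) for label, n in counts.items()}
print(shares)
```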
### Limitations and bias
The model was trained with data from parties in the UK. For use in other countries, the model may be further fine-tuned. Without further fine-tuning, the performance of the model may be lower.
The model may have biased predictions. We discuss some biases by country, party, and over time in the release paper for the PARTYPRESS database. For example, the performance is highest for press releases from the UK (75%) and lowest for Poland (55%).
## Training data
The PARTYPRESS monolingual model was fine-tuned with about 3,000 press releases from parties in the UK. The press releases were labeled by two expert human coders.
For the training data of the underlying model, please refer to [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
## Training procedure
### Preprocessing
For the preprocessing, please refer to [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
### Pretraining
For the pretraining, please refer to [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
### Fine-tuning
We fine-tuned the model using about 3,000 labeled press releases from political parties in the UK.
#### Training Hyperparameters
The batch size was 12 for training and 2 for testing, with four epochs. All other hyperparameters were the defaults from the transformers library.
#### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
## Evaluation results
Fine-tuned on our downstream task, this model achieves results in a five-fold cross validation that are comparable to the performance of our expert human coders. Please refer to Erfort et al. (2023) for details.
### BibTeX entry and citation info
```bibtex
@article{erfort_partypress_2023,
author = {Cornelius Erfort and
Lukas F. Stoetzer and
Heike Klüver},
title = {The PARTYPRESS Database: A new comparative database of parties’ press releases},
journal = {Research and Politics},
volume = {10},
number = {3},
year = {2023},
doi = {10.1177/20531680231183512},
URL = {https://doi.org/10.1177/20531680231183512}
}
```
Erfort, C., Stoetzer, L. F., & Klüver, H. (2023). The PARTYPRESS Database: A new comparative database of parties’ press releases. Research & Politics, 10(3). [https://doi.org/10.1177/20531680231183512](https://doi.org/10.1177/20531680231183512)
### Further resources
Github: [cornelius-erfort/partypress](https://github.com/cornelius-erfort/partypress)
Research and Politics Dataverse: [Replication Data for: The PARTYPRESS Database: A New Comparative Database of Parties’ Press Releases](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi%3A10.7910%2FDVN%2FOINX7Q)
## Acknowledgements
Research for this contribution is part of the Cluster of Excellence "Contestations of the Liberal Script" (EXC 2055, Project-ID: 390715649), funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy. Cornelius Erfort is moreover grateful for generous funding provided by the DFG through the Research Training Group DYNAMICS (GRK 2458/1).
## Contact
Cornelius Erfort
Humboldt-Universität zu Berlin
[corneliuserfort.de](https://corneliuserfort.de)
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
projecte-aina/aina-translator-ca-en | projecte-aina | null | [
"fairseq",
"ca",
"en",
"dataset:projecte-aina/CA-EN_Parallel_Corpus",
"doi:10.57967/hf/1926",
"license:apache-2.0",
"region:us"
] | 1,669,280,944,000 | 2025-01-31T11:11:10 | 69 | 0 | ---
datasets:
- projecte-aina/CA-EN_Parallel_Corpus
language:
- ca
- en
library_name: fairseq
license: apache-2.0
metrics:
- bleu
---
## Projecte Aina's Catalan-English machine translation model
## Model description
This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Catalan-English datasets,
which after filtering and cleaning comprised 30,023,034 sentence pairs. The model was evaluated on several public datasets comprising different domains.
## Intended uses and limitations
You can use this model for machine translation from Catalan to English.
## How to use
### Usage
Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using python
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
model_dir = snapshot_download(repo_id="projecte-aina/aina-translator-ca-en", revision="main")
tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/spm.model")
tokenized=tokenizer.tokenize("Benvingut al projecte Aina!")
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
However, we are well aware that our models may be biased. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
The model was trained on a combination of several datasets, including data collected from [Opus](https://opus.nlpl.eu/),
[HPLT](https://hplt-project.org/), an internally created [CA-EN Parallel Corpus](https://huggingface.co/datasets/projecte-aina/CA-EN_Parallel_Corpus),
and other sources.
### Training procedure
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of 30,023,034 parallel sentences, and before training
the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
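A minimal sketch of this similarity filter, assuming the `sentence-transformers` LaBSE checkpoint, could look as follows; batching and exact preprocessing in the original pipeline may differ.
```python
import numpy as np
from sentence_transformers import SentenceTransformer

labse = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(ca_sentences, en_sentences, threshold=0.75):
    # With normalized embeddings, cosine similarity is a plain dot product.
    ca_emb = labse.encode(ca_sentences, normalize_embeddings=True)
    en_emb = labse.encode(en_sentences, normalize_embeddings=True)
    sims = np.sum(ca_emb * en_emb, axis=1)
    return [(ca, en) for ca, en, s in
            zip(ca_sentences, en_sentences, sims) if s >= threshold]
```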
#### Tokenization
All data is tokenized using SentencePiece, with a 50,000-token SentencePiece model learned from the combination of all filtered training data.
This model is included in the repository.
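A sketch of learning such a model with the `sentencepiece` Python bindings is shown below; the corpus file name is an assumption.
```python
import sentencepiece as spm

# Learn a joint 50k-token model from the filtered bilingual corpus.
spm.SentencePieceTrainer.train(
    input="filtered_corpus.ca-en.txt",  # assumed file name
    model_prefix="spm",
    vocab_size=50000)

sp = spm.SentencePieceProcessor(model_file="spm.model")
print(sp.encode("Benvingut al projecte Aina!", out_type=str))
```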
#### Hyperparameters
The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf)
The following hyperparameters were set on the Fairseq toolkit:
| Hyperparameter | Value |
|------------------------------------|-----------------------------------|
| Architecture | transformer_vaswani_wmt_en_de_big |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 96,000 |
| Optimizer | adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 1e-3 |
| LR scheduler | inverse sqrt |
| Warmup updates | 4000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
The model was trained for a total of 12,500 updates. Weights were saved every 1,000 updates, and the reported results are the average of the last six checkpoints.
## Evaluation
### Variables and metrics
We use the BLEU score for evaluation on the test sets below (a minimal scoring sketch follows the list):
[Spanish Constitution (TaCon)](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/),
[United Nations](https://zenodo.org/record/3888414#.Y33-_tLMIW0),
[AAPP](https://elrc-share.eu/repository/browse/catalan-spanish-catgov-corpus/8088130a722811ed9c1a00155d02670690607f8261a847549c8a0583cbe729da/),
[European Commission](https://elrc-share.eu/repository/browse/european-commission-corpus/8a419b1758ea11ed9c1a00155d0267069bd085cae124481589b0858e5b274327/),
[Flores-200](https://github.com/facebookresearch/flores),
[Cybersecurity](https://elrc-share.eu/repository/browse/cyber-mt-test-set/2bd93faab98c11ec9c1a00155d026706b96a490ed3e140f0a29a80a08c46e91e/),
[wmt19 biomedical test set](http://www.statmt.org/wmt19/biomedical-translation-task.html),
[wmt13 news test set](https://elrc-share.eu/repository/browse/catalan-wmt2013-machine-translation-shared-task-test-set/84a96139b98611ec9c1a00155d0267061a0aa1b62e2248e89aab4952f3c230fc/).
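The scores on any of these test sets can be reproduced with `sacrebleu`; the snippet below is a minimal sketch with placeholder sentences.
```python
import sacrebleu

hypotheses = ["Welcome to the Aina project!"]        # system translations
references = [["Welcome to the Aina project!"]]      # one aligned reference stream

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```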
### Evaluation results
Below are the evaluation results on the machine translation from Catalan to English compared to [Softcatalà](https://www.softcatala.org/) and
[Google Translate](https://translate.google.es/?hl=es):
| Test set | SoftCatalà | Google Translate | aina-translator-ca-en |
|----------------------|------------|------------------|---------------|
| Spanish Constitution | 35.8 | 39.1 | **42.8** |
| United Nations | 44.4 | **46.9** | 45.9 |
| AAPP | 50.7 | 52.9 | **54.0** |
| European Commission | 52.0 | 53.7 | **54.0** |
| Flores 200 dev | 42.7 | **52.0** | 47.9 |
| Flores 200 devtest | 42.5 | **50.7** | 46.3 |
| Cybersecurity | 52.5 | **66.8** | 56.8 |
| wmt 19 biomedical | 18.3 | 24.4 | **25.2** |
| wmt 13 news | 37.8 | **42.5** | 39.4 |
| **Average** | 40.8 | **47.7** | 45.8 |
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center)
be liable for any results arising from the use made by third parties.
</details> | [
"TRANSLATION"
] | Non_BioNLP |
tali1/autotrain-suricata-facebookai-roberta-base | tali1 | text-classification | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:autotrain-suricata-facebookai-roberta-base/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,709,676,158,000 | 2024-03-05T22:02:53 | 6 | 0 | ---
datasets:
- autotrain-suricata-facebookai-roberta-base/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.17026731371879578
- f1_macro: 0.9173443088077234
- f1_micro: 0.9617224880382775
- f1_weighted: 0.9593264076984463
- precision_macro: 0.9352380952380951
- precision_micro: 0.9617224880382775
- precision_weighted: 0.9620688083845979
- recall_macro: 0.9134920634920635
- recall_micro: 0.9617224880382775
- recall_weighted: 0.9617224880382775
- accuracy: 0.9617224880382775
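A minimal inference sketch for this checkpoint (the input string is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="tali1/autotrain-suricata-facebookai-roberta-base")
print(classifier("I love AutoTrain"))
```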
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Intel/whisper-base-int8-static-inc | Intel | automatic-speech-recognition | [
"transformers",
"onnx",
"whisper",
"automatic-speech-recognition",
"int8",
"ONNX",
"PostTrainingStatic",
"Intel® Neural Compressor",
"neural-compressor",
"dataset:librispeech_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,692,949,039,000 | 2023-08-25T07:47:48 | 9 | 0 | ---
datasets:
- librispeech_asr
library_name: transformers
license: apache-2.0
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- automatic-speech-recognition
- int8
- ONNX
- PostTrainingStatic
- Intel® Neural Compressor
- neural-compressor
---
## Model Details: INT8 Whisper base
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning.
This int8 ONNX model is generated by [neural-compressor](https://github.com/intel/neural-compressor), and the fp32 model can be exported with the command below:
```shell
optimum-cli export onnx --model openai/whisper-base whisper-base-with-past/ --task automatic-speech-recognition-with-past --opset 13
```
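The quantization step itself is not shown in the card. A hedged sketch of post-training static quantization with Intel® Neural Compressor is given below; the calibration loader, its dummy data, and the file names are assumptions, not the exact recipe used for this checkpoint.
```python
import numpy as np
from neural_compressor import PostTrainingQuantConfig, quantization

class CalibDataloader:
    """Minimal calibration loader sketch. Real calibration should feed
    representative log-mel features (e.g. from librispeech_asr)."""
    batch_size = 1
    def __iter__(self):
        # Whisper-base encoder expects (batch, 80 mels, 3000 frames); label is unused.
        yield np.zeros((1, 80, 3000), dtype=np.float32), None

config = PostTrainingQuantConfig(approach="static")
q_model = quantization.fit(
    model="whisper-base-with-past/encoder_model.onnx",  # produced by the export above
    conf=config,
    calib_dataloader=CalibDataloader())
q_model.save("whisper-base-int8-static/encoder_model.onnx")
```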
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | August 25, 2023 |
| Version | 1 |
| Type | Speech Recognition |
| Paper or Other Resources | - |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/whisper-base-int8-static/discussions)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for automatic speech recognition inference |
| Primary intended users | Anyone doing automatic speech recognition inference |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Download the model by cloning the repository:
```shell
git clone https://huggingface.co/Intel/whisper-base-int8-static
```
Evaluate the model with the code below:
```python
import os
from evaluate import load
from datasets import load_dataset
from transformers import WhisperProcessor
model_name = 'openai/whisper-base'
model_path = 'whisper-base-int8-static'
# Only the processor is needed from the FP32 repo; inference runs on the INT8 ONNX model loaded below.
processor = WhisperProcessor.from_pretrained(model_name)
wer = load("wer")
librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import PretrainedConfig
model_config = PretrainedConfig.from_pretrained(model_name)
predictions = []
references = []
sessions = ORTModelForSpeechSeq2Seq.load_model(
os.path.join(model_path, 'encoder_model.onnx'),
os.path.join(model_path, 'decoder_model.onnx'),
os.path.join(model_path, 'decoder_with_past_model.onnx'))
model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2])
for idx, batch in enumerate(librispeech_test_clean):
audio = batch["audio"]
input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
reference = processor.tokenizer._normalize(batch['text'])
references.append(reference)
predicted_ids = model.generate(input_features)[0]
transcription = processor.decode(predicted_ids)
prediction = processor.tokenizer._normalize(transcription)
predictions.append(prediction)
wer_result = wer.compute(references=references, predictions=predictions)
print(f"Result wer: {wer_result * 100}")
accuracy = 1 - wer_result
print("Accuracy: %.5f" % accuracy)
```
## Metrics (Model Performance):
| Model | Model Size (GB) | wer |
|---|:---:|:---:|
| FP32 |0.95|5.04|
| INT8 |0.32|5.89|
| [
"TRANSLATION"
] | Non_BioNLP |