| id (string, 6–113 chars) | author (string, 2–36 chars) | task_category (39 classes) | tags (list, length 1–4.05k) | created_time (int64, 1,646B–1,742B) | last_modified (timestamp[s], 2020-05-14 13:13:12 – 2025-03-18 10:01:09) | downloads (int64, 0–118M) | likes (int64, 0–4.86k) | README (string, 30–1.01M chars) | matched_task (list, length 1–10) | is_bionlp (3 classes) |
---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-tc-base-ro-uk | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"ro",
"uk",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,648,125,719,000 | 2023-10-10T21:36:02 | 20 | 0 | ---
language:
- ro
- uk
license: cc-by-4.0
tags:
- translation
- opus-mt-tc
model-index:
- name: opus-mt-tc-base-ro-uk
results:
- task:
type: translation
name: Translation ron-ukr
dataset:
name: flores101-devtest
type: flores_101
args: ron ukr devtest
metrics:
- type: bleu
value: 22.3
name: BLEU
---
# opus-mt-tc-base-ro-uk
Neural machine translation model for translating from Romanian (ro) to Ukrainian (uk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with the amazing [Marian NMT](https://marian-nmt.github.io/) framework, an efficient NMT implementation written in pure C++, and have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite them if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-08
* source language(s): ron
* target language(s): ukr
* valid target language labels: >>ukr<<
* model: transformer-align
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.zip)
* more information on released models: [OPUS-MT ron-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ron-ukr/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>ukr<<`.
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Articolul exprimă opinia personală a autorului.",
"Ornitorincii trăiesc în estul Austriei."
]
model_name = "pytorch-models/opus-mt-tc-base-ro-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Стаття висловлює особисту думку автора.
# Орніторінці живуть на сході Австрії.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-ro-uk")
print(pipe("Articolul exprimă opinia personală a autorului."))
# expected output: Стаття висловлює особисту думку автора.
```
## Benchmarks
* test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ron-ukr | flores101-devtest | 0.52391 | 22.3 | 1012 | 22810 |
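The published scores could be recomputed along these lines with [sacrebleu](https://github.com/mjpost/sacrebleu); this is only a sketch, and the exact evaluation signature behind the official numbers is not specified here (note that sacrebleu reports chrF on a 0–100 scale, while the table above uses 0–1).

```python
import sacrebleu

# hypotheses: model outputs; references: gold Ukrainian translations from the test set linked above
hypotheses = ["Стаття висловлює особисту думку автора."]
references = [["Стаття висловлює особисту думку автора."]]  # one inner list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score / 100:.5f}")
```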
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 03:30:40 EET 2022
* port machine: LM0-400-22516.local
| [
"TRANSLATION"
] | Non_BioNLP |
puettmann/LlaMaestra-3.2-1B-Translation-Q8_0-GGUF | puettmann | translation | [
"transformers",
"gguf",
"translation",
"text-generation",
"llama-cpp",
"gguf-my-repo",
"en",
"it",
"base_model:puettmann/LlaMaestra-3.2-1B-Translation",
"base_model:quantized:puettmann/LlaMaestra-3.2-1B-Translation",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,733,692,929,000 | 2024-12-08T21:22:17 | 168 | 1 | ---
base_model: LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1
language:
- en
- it
library_name: transformers
license: llama3.2
tags:
- translation
- text-generation
- llama-cpp
- gguf-my-repo
---
# LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1`](https://huggingface.co/LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1-Q8_0-GGUF --hf-file llamaestra-3.2-1b-instruct-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1-Q8_0-GGUF --hf-file llamaestra-3.2-1b-instruct-v0.1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1-Q8_0-GGUF --hf-file llamaestra-3.2-1b-instruct-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1-Q8_0-GGUF --hf-file llamaestra-3.2-1b-instruct-v0.1-q8_0.gguf -c 2048
```
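As another option, the same GGUF file can be loaded from Python through the llama-cpp-python bindings. The sketch below is an assumption, not part of this repository's documented workflow: the package install (`pip install llama-cpp-python`), the filename glob, and the sampling settings are all guesses to adapt.

```python
from llama_cpp import Llama

# Sketch: load the quantized GGUF directly from the Hub (filename glob is an assumption).
llm = Llama.from_pretrained(
    repo_id="puettmann/LlaMaestra-3.2-1B-Translation-Q8_0-GGUF",
    filename="*q8_0.gguf",
    n_ctx=2048,  # same context size as the llama-server example above
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to Italian: The weather is nice today."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```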
| [
"TRANSLATION"
] | TBD |
sbulut/finetuned-kde4-en-to-tr | sbulut | translation | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-tc-big-tr-en",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-tr-en",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,706,903,598,000 | 2024-02-02T21:57:41 | 16 | 0 | ---
base_model: Helsinki-NLP/opus-mt-tc-big-tr-en
datasets:
- kde4
license: cc-by-4.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-tr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-tr
split: train
args: en-tr
metrics:
- type: bleu
value: 29.832961482999476
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-tr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-tr-en](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-tr-en) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0990
- Bleu: 29.8330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
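For reference, these settings correspond roughly to the following `Seq2SeqTrainingArguments`; this is a sketch only (the output directory and the `predict_with_generate` flag are assumptions, and the data pipeline is omitted):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration listed above; not the exact script used for this model.
args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-kde4-en-to-tr",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                   # "Native AMP" mixed-precision training
    predict_with_generate=True,  # needed to compute BLEU during evaluation (assumption)
)
```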
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
| [
"TRANSLATION"
] | Non_BioNLP |
interneuronai/az-gptneo | interneuronai | null | [
"peft",
"safetensors",
"base_model:EleutherAI/gpt-neo-2.7B",
"base_model:adapter:EleutherAI/gpt-neo-2.7B",
"region:us"
] | 1,710,019,353,000 | 2024-03-09T21:34:37 | 2 | 0 | ---
base_model: EleutherAI/gpt-neo-2.7B
library_name: peft
---
Model Details
- Original Model: EleutherAI/gpt-neo-2.7B
- Fine-Tuned For: Azerbaijani language understanding and generation
- Dataset Used: Azerbaijani translation of the Stanford Alpaca dataset
- Fine-Tuning Method: Self-instruct method
This model is part of the ["project/Barbarossa"](https://github.com/Alas-Development-Center/project-barbarossa) initiative, aimed at enhancing natural language processing capabilities for the Azerbaijani language. By fine-tuning this model on the Azerbaijani translation of the Stanford Alpaca dataset using the self-instruct method, we've made significant strides in improving AI's understanding and generation of Azerbaijani text.
__Our primary objective with this model is to offer insights into the feasibility and outcomes of fine-tuning large language models (LLMs) for the Azerbaijani language. The fine-tuning process was undertaken with limited resources, providing valuable learnings rather than creating a model ready for production use. Therefore, we recommend treating this model as a reference or a guide to understanding the potential and challenges involved in fine-tuning LLMs for specific languages. It serves as a foundational step towards further research and development rather than a direct solution for production environments.__
This project is a proud product of the [Alas Development Center (ADC)](https://az.linkedin.com/company/alas-development-center?trk=ppro_cprof). We are thrilled to offer these finely-tuned large language models to the public, free of charge.
How to use?
```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, pipeline
model_path = "alasdevcenter/az-gptneo"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
instruction = "Təbiətin qorunması "
formatted_prompt = f"""Aşağıda daha çox kontekst təmin edən təlimat var. Sorğunu adekvat şəkildə tamamlayan cavab yazın.
### Təlimat:
{instruction}
### Cavab:
"""
result = pipe(formatted_prompt)
print(result[0]['generated_text'])
```
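Since the repository metadata lists this as a PEFT adapter on top of EleutherAI/gpt-neo-2.7B, loading it as an adapter may be required. The snippet below is a sketch under that assumption; the `AutoPeftModelForCausalLM` helper comes from the `peft` package and is not part of the card's own instructions.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Sketch: load the base model plus this adapter in one call (assumes `pip install peft`).
model = AutoPeftModelForCausalLM.from_pretrained("interneuronai/az-gptneo")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")  # tokenizer of the base model
```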
| [
"TRANSLATION"
] | Non_BioNLP |
kuotient/Seagull-13b-translation-AWQ | kuotient | translation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"translate",
"awq",
"translation",
"ko",
"dataset:squarelike/sharegpt_deepl_ko_translation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | 1,708,761,757,000 | 2024-02-24T09:09:52 | 7 | 2 | ---
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- ko
license: cc-by-nc-sa-4.0
pipeline_tag: translation
tags:
- translate
- awq
---
# **Seagull-13b-translation-AWQ 📇**

## This is a quantized version of the original model: Seagull-13b-translation.
**Seagull-13b-translation** is yet another translation model, but one built with careful attention to the following issues seen in existing translation models:
- `newline` or `space` not matching the original text
- Using translated dataset with first letter removed for training
- Codes
- Markdown format
- LaTeX format
- etc
These issues were checked carefully during training, but when using the model it is still recommended to inspect the outputs closely for such cases (e.g., text that contains code).
> If you're interested in building large-scale language models to solve a wide variety of problems in a wide variety of domains, you should consider joining [Allganize](https://allganize.career.greetinghr.com/o/65146).
For a coffee chat or if you have any questions, please do not hesitate to contact me as well! - [email protected]
This model was created as a personal experiment, unrelated to the organization I work for.
# **License**
## From original model author:
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE
# **Model Details**
#### **Developed by**
Jisoo Kim(kuotient)
#### **Base Model**
[beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
#### **Datasets**
- [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)
- AIHUB
- 기술과학 분야 한-영 번역 병렬 말뭉치 데이터
- 일상생활 및 구어체 한-영 번역 병렬 말뭉치 데이터
## Usage
#### Format
It follows only **ChatML** format.
```python
<|im_start|>system
주어진 문장을 한국어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss newline here
```
```python
<|im_start|>system
주어진 문장을 영어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss newline here
```
#### Example
**I highly recommend running inference on this model with vLLM. I will write a guide for quick and easy inference if requested.**
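Along the lines of that recommendation, a minimal vLLM sketch might look like the following; the engine arguments and sampling settings are assumptions rather than a tested configuration:

```python
from vllm import LLM, SamplingParams

# Sketch: serve this AWQ checkpoint with vLLM (adjust max_model_len and sampling to your GPU and task).
llm = LLM(model="kuotient/Seagull-13b-translation-AWQ", quantization="awq", max_model_len=4096)
params = SamplingParams(temperature=0.0, max_tokens=512)

# Prompt built by hand in the ChatML format shown above.
prompt = (
    "<|im_start|>system\n주어진 문장을 한국어로 번역하세요.<|im_end|>\n"
    "<|im_start|>user\nHere are five examples of nutritious foods to serve your kids.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```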
Since the chat_template already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")
messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]
# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n" turn shown in the format above
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
``` | [
"TRANSLATION"
] | Non_BioNLP |
sheetalp91/setfit-model-1 | sheetalp91 | text-classification | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,683,032,788,000 | 2023-05-02T13:06:43 | 9 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# sheetalp91/setfit-model-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
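For orientation, few-shot training with SetFit generally follows the pattern below; the base checkpoint, data, and hyperparameters are placeholders, not the ones used for this particular model:

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Tiny placeholder dataset; replace with your own few-shot examples.
train_dataset = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_dataset, num_iterations=20)
trainer.train()
trainer.model.save_pretrained("my-setfit-model")
```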
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sheetalp91/setfit-model-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
research-backup/mbart-large-cc25-squad-qa | research-backup | text2text-generation | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,680,291,835,000 | 2023-05-06T12:48:31 | 13 | 0 | ---
datasets:
- lmqg/qg_squad
language: en
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: 'question: What is a person called is practicing heresy?, context: Heresy
is any provocative belief or theory that is strongly at variance with established
beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy
is distinct from both apostasy, which is the explicit renunciation of one''s religion,
principles or cause, and blasphemy, which is an impious utterance or action concerning
God or sacred things.'
example_title: Question Answering Example 1
- text: 'question: who created the post as we know it today?, context: ''So much of
The Post is Ben,'' Mrs. Graham said in 1994, three years after Bradlee retired
as editor. ''He created it as we know it today.''— Ed O''Keefe (@edatpost) October
21, 2014'
example_title: Question Answering Example 2
model-index:
- name: lmqg/mbart-large-cc25-squad-qa
results:
- task:
type: text2text-generation
name: Text2text Generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- type: bleu4_question_answering
value: 56.23
name: BLEU4 (Question Answering)
- type: rouge_l_question_answering
value: 74.73
name: ROUGE-L (Question Answering)
- type: meteor_question_answering
value: 43.17
name: METEOR (Question Answering)
- type: bertscore_question_answering
value: 92.7
name: BERTScore (Question Answering)
- type: moverscore_question_answering
value: 84.01
name: MoverScore (Question Answering)
- type: answer_f1_score__question_answering
value: 76.98
name: AnswerF1Score (Question Answering)
- type: answer_exact_match_question_answering
value: 62.63
name: AnswerExactMatch (Question Answering)
---
# Model Card of `lmqg/mbart-large-cc25-squad-qa`
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question answering task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/mbart-large-cc25-squad-qa")
# model prediction
answers = model.answer_q(list_question="What is a person called is practicing heresy?", list_context=" Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-squad-qa")
output = pipe("question: What is a person called is practicing heresy?, context: Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.")
```
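Since the model expects a single string in `question: ..., context: ...` form, a small helper keeps the formatting consistent (a sketch, not part of the `lmqg` API):

```python
def build_qa_input(question: str, context: str) -> str:
    """Format a question/context pair the way the examples above expect."""
    return f"question: {question}, context: {context}"

# Reusing the pipeline from the previous snippet:
output = pipe(build_qa_input(
    "What is a person called is practicing heresy?",
    "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs.",
))
```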
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-squad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 62.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 76.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 92.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 69.46 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 64.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 60.19 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 56.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 43.17 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 84.01 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 74.73 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 16
- lr: 6e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-squad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
Pdmk/t5-small-finetuned-summary_pd | Pdmk | summarization | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,692,652,866,000 | 2023-08-23T20:12:08 | 18 | 0 | ---
base_model: t5-small
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: t5-small-finetuned-summary_pd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-summary_pd
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9326
- Rouge1: 37.5319
- Rouge2: 11.7719
- Rougel: 37.0546
- Rougelsum: 36.8197
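For quick experimentation, the checkpoint can be loaded through the summarization pipeline; this is only a sketch, and the generation settings (and whether a `summarize:` input prefix is needed) are assumptions:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Pdmk/t5-small-finetuned-summary_pd")

text = "Replace this with the document you want to summarize."
# max_length / min_length are placeholder values; tune them for your inputs.
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```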
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.5559 | 1.0 | 688 | 2.9326 | 37.5319 | 11.7719 | 37.0546 | 36.8197 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
| [
"SUMMARIZATION"
] | Non_BioNLP |
knowledgator/gliner-bi-small-v1.0 | knowledgator | token-classification | [
"gliner",
"pytorch",
"NER",
"GLiNER",
"information extraction",
"encoder",
"entity recognition",
"token-classification",
"multilingual",
"dataset:urchade/pile-mistral-v0.1",
"dataset:numind/NuNER",
"dataset:knowledgator/GLINER-multi-task-synthetic-data",
"license:apache-2.0",
"region:us"
] | 1,723,964,191,000 | 2024-08-25T11:38:26 | 122 | 10 | ---
datasets:
- urchade/pile-mistral-v0.1
- numind/NuNER
- knowledgator/GLINER-multi-task-synthetic-data
language:
- multilingual
library_name: gliner
license: apache-2.0
pipeline_tag: token-classification
tags:
- NER
- GLiNER
- information extraction
- encoder
- entity recognition
---
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs), which, despite their flexibility, are costly and too large for resource-constrained scenarios.
This particular version uses a bi-encoder architecture, where the textual encoder is [DeBERTa v3 small](https://huggingface.co/microsoft/deberta-v3-small) and the entity label encoder is a sentence transformer, [MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).
This architecture brings several advantages over uni-encoder GLiNER:
* An unlimited number of entity types can be recognized at a time;
* Faster inference if entity embeddings are precomputed;
* Better generalization to unseen entities;
However, it has some drawbacks, such as a lack of inter-label interactions, which makes it hard for the model to disambiguate semantically similar but contextually different entities.
### Installation & Usage
Install or update the gliner package:
```bash
pip install gliner -U
```
Once you've installed the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("knowledgator/gliner-bi-small-v1.0")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels, threshold=0.3)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
If you have a large number of entity types and want to pre-embed them, refer to the following code snippet:
```python
labels = ["your entities"]
texts = ["your texts"]
entity_embeddings = model.encode_labels(labels, batch_size = 8)
outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
```
### Benchmarks
Below you can see the table with benchmarking results on various named entity recognition datasets:
| Dataset | Score |
|-----------------------|--------------|
| ACE 2004 | 26.74% |
| ACE 2005 | 29.86% |
| AnatEM | 40.98% |
| Broad Tweet Corpus | 64.60% |
| CoNLL 2003 | 61.68% |
| FabNER | 23.39% |
| FindVehicle | 24.38% |
| GENIA_NER | 48.51% |
| HarveyNER | 11.06% |
| MultiNERD | 63.14% |
| Ontonotes | 27.29% |
| PolyglotNER | 45.30% |
| TweetNER7 | 37.81% |
| WikiANN en | 54.08% |
| WikiNeural | 72.98% |
| bc2gm | 53.32% |
| bc4chemd | 45.67% |
| bc5cdr | 69.03% |
| ncbi | 64.15% |
| **Average** | **45.5%** |
|||
| CrossNER_AI | 49.45% |
| CrossNER_literature | 61.16% |
| CrossNER_music | 65.39% |
| CrossNER_politics | 72.10% |
| CrossNER_science | 60.71% |
| mit-movie | 34.41% |
| mit-restaurant | 38.77% |
| **Average (zero-shot benchmark)** | **54.6%** |
### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG). | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
mrm8488/spanish-TinyBERT-betito-finetuned-xnli-es | mrm8488 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:xnli",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,772,951,000 | 2022-03-09T07:29:03 | 117 | 0 | ---
datasets:
- xnli
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: spanish-TinyBERT-betito-finetuned-xnli-es
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: xnli
type: xnli
args: es
metrics:
- type: accuracy
value: 0.7475049900199601
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-TinyBERT-betito-finetuned-xnli-es
This model is a fine-tuned version of [mrm8488/spanish-TinyBERT-betito](https://huggingface.co/mrm8488/spanish-TinyBERT-betito) on the xnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7104
- Accuracy: 0.7475
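For inference, the checkpoint behaves as a standard sentence-pair (NLI) classifier. The sketch below is illustrative only: the premise/hypothesis pair is made up, and the label-name mapping should be checked against the model's config.

```python
from transformers import pipeline

nli = pipeline("text-classification", model="mrm8488/spanish-TinyBERT-betito-finetuned-xnli-es")

# XNLI inputs are premise/hypothesis pairs; the pipeline accepts them as text / text_pair.
print(nli({"text": "El gato duerme en el sofá.", "text_pair": "Un animal está descansando."}))
```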
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.50838112218154e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7191 | 1.0 | 49399 | 0.6829 | 0.7112 |
| 0.6323 | 2.0 | 98798 | 0.6527 | 0.7305 |
| 0.5727 | 3.0 | 148197 | 0.6531 | 0.7465 |
| 0.4964 | 4.0 | 197596 | 0.7079 | 0.7427 |
| 0.4929 | 5.0 | 246995 | 0.7104 | 0.7475 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
etri-lirs/gbst-kebyt5-large-preview | etri-lirs | fill-mask | [
"transformers",
"pytorch",
"gbswt5",
"text2text-generation",
"fill-mask",
"custom_code",
"ko",
"en",
"ja",
"zh",
"arxiv:2106.12672",
"license:other",
"autotrain_compatible",
"region:us"
] | 1,707,808,911,000 | 2024-11-25T04:10:05 | 0 | 2 | ---
language:
- ko
- en
- ja
- zh
license: other
pipeline_tag: fill-mask
---
# Model Card for GBST-KEByT5-large (1.23B #params)
<!-- Provide a quick summary of what the model is/does. -->
This is the GBST version of KEByT5 (Korean-Enhanced/Enriched Byte-level Text-to-Text Transfer Transformer, T5), based on CharFormer (Tay et al., 2021).
For Korean, candidate token spans are chunked at (1, 2, 3, 6, 9)-byte granularities to form the candidate set, and the soft embedding sequence produced by GBST is downsampled to 1/3 of its length to improve training and inference efficiency.
## Prerequisites / Model Loading HOW-TO
Running this model requires the GBSWT5 module:
https://github.com/etri-crossmodal/gbswt5
The module can be installed via pip as shown below; please refer to the GitHub repository for details on how to use the model.
```
pip install git+https://github.com/etri-crossmodal/gbswt5.git
```
Alternatively, with a recent version of Transformers, the model can be used without any extra code as follows:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("etri-lirs/gbst-kebyt5-large-preview")
# as below, passing trust_remote_code=True automatically downloads and uses the required custom code
model = AutoModelForSeq2SeqLM.from_pretrained("etri-lirs/gbst-kebyt5-large-preview", trust_remote_code=True)
```
In addition, when training on downstream tasks, we recommend freezing the GBST layer as in the Python code below.
```
gbst_frozen_target = ['encoder.embed_tokens.embeds.weight',
'encoder.embed_tokens.positional_convol.2.convol.weight',
'encoder.embed_tokens.positional_convol.2.convol.bias',
'encoder.embed_tokens.positional_convol.2.proj.weight',
'encoder.embed_tokens.positional_convol.2.proj.bias',
'encoder.embed_tokens.cand_scoring.0.weight',
'encoder.embed_tokens.cand_scoring.0.bias',
# generally, leaving the embedding weight unfrozen gives better performance.
#'shared.weight',
]
print("** GBST Model found, freeze GBSWT layer for training downstream.")
for name, param in self.model.named_parameters():
if name in gbst_frozen_target:
print(f"** freeze {name} layer.")
param.requires_grad = False
else:
param.requires_grad = True
```
For reference, the remote code bundled with the model includes the following open-source software:
* This software includes lucidrains/charformer-pytorch GitHub project for GBST implementation, which distributed under MIT License. Copyright (c) 2021 Phil Wang. all rights reserved. (Original Code URL: https://github.com/lucidrains/charformer-pytorch)
* This software includes HuggingFace transformers's T5 implementation for GBST-enabled T5 model, which distributed under Apache 2.0 License. Copyright 2018- The Huggingface team. All rights reserved.
## KEByT5: Korean-Enhanced/Enriched Byte-level Text-to-Text Transfer Transformer(T5)
크로스모달 및 다국어 친화적인 한국어 중심의 토큰-프리 언어 이해 생성 모델
(EN=Cross-modal, Multilingual Friendly, Token-free Encoder-Decoder Pretrained Language Model for Korean)
* This pretrained language model aims to be a token-free pretrained LM that makes it easy to exchange knowledge across languages and with non-text modalities such as vision and audio.
* No separate tokenizer is required, but for convenience you can use AutoTokenizer.from_pretrained() and handle it exactly like other tokenizer-based encoder-decoder models. If you want to skip the tokenizer, split the UTF-8 input into individual bytes and add +3 to each byte value to produce the token IDs (i.e., byte value 0 == Token ID 3, byte value 255 == Token ID 258); a small sketch of this mapping follows this list.
* The model is currently at the preview stage, and fine-tuning is required before use.
* With [Gradient-based Subword Tokenization (CharFormer; Tay et al., 2021)](https://arxiv.org/abs/2106.12672) applied, this model trains about 2.7x faster and infers about 1.46x faster on KLUE-MRC than the same-scale KEByT5-base model. There may be comparable differences in some training/inference quality; see the evaluation metrics below for details.
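As a concrete illustration of the byte-to-token-ID rule mentioned above (a standalone sketch, not part of the released code):

```python
def utf8_bytes_to_token_ids(text: str) -> list[int]:
    # Each UTF-8 byte maps to (byte value + 3), as described in the list above.
    return [b + 3 for b in text.encode("utf-8")]

print(utf8_bytes_to_token_ids("A"))  # [68], since byte value 65 + 3 == 68
```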
## Acknowledgements
* 본 사전학습 언어모델은 2022년도 정부(과학기술정보통신부)의 재원으로 정보통신기획평가원의 지원을 받아 수행된 연구임 (No. RS-2022-00187238, 효율적 사전학습이 가능한 한국어 대형 언어모델 사전학습 기술 개발)
(EN=This pretrained language model was supported by the Institute of Information & communication Technology Planning & Evaluation(IITP) grant funded by the Korea government(MSIT) (No. RS-2022-00187238, Development of Large Korean Language Model Technology for Efficient Pre-training))
# Model Details
This pretrained language model is provided in the following sizes:
* kebyt5-small : 330M [link](https://huggingface.co/etri-lirs/kebyt5-small-preview)
* kebyt5-base : 580M [link](https://huggingface.co/etri-lirs/kebyt5-base-preview)
* kebyt5-large : 1.23B [link](https://huggingface.co/etri-lirs/kebyt5-large-preview)
* GBST-kebyt5-base : 584M [link](https://huggingface.co/etri-lirs/gbst-kebyt5-base-preview)
* GBST-kebyt5-large : 1.23B (this model)
These models share the same network architecture and sizes as the [google/byt5-small](https://huggingface.co/google/byt5-small), [google/byt5-base](https://huggingface.co/google/byt5-base), and [google/byt5-large](https://huggingface.co/google/byt5-large) models, and given the shared tokenizer (ByT5Tokenizer) and implementation, the two model families can be swapped for each other without modification.
Usage in Hugging Face transformers is likewise the same: T5ForConditionalGeneration can be used in the same way.
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Language Intelligence Research Section, Electronics and Telecommunications Research Institute(ETRI)
- **Model type:** Encoder-Decoder Transformer, specifically, ByT5.
- **Language(s) (NLP):** Korean, English(partially for translation task), Chinese(partially for translation task), Japanese(partially for translation task).
- **License:** Apache 2.0 License
- **Finetuned from model:** kebyt5-small/-base/-xl model weights were initialized by google/byt5-* for Warm-start pretraining.
## Model Sources
- **Repository:** for downstream task training, https://github.com/etri-crossmodal/llm-downstream-s2s
- **Paper:** 신종훈 외, "한국어 중심의 토큰-프리 언어 이해-생성 모델 사전학습 연구", 제35회 한글 및 한국어 정보처리 학술대회 논문집, pp.711-715. 2023.
(EN=Shin et al., "Towards Korean-Centric Token-free Pretrained Language Model", in Procs. of the 35th Annual Conference on Human and Cognitive Language Technology. pp. 711-715. 2023.)
# Uses
Use of this pretrained language model is restricted to research and educational purposes.
## Direct Use
The currently released model has been trained only with the corrupted-span denoising objective used for T5 training, so a fine-tuning step is required before applying it to real application tasks.
Masked token prediction can be performed with the sentinel tokens (token ids 258, 257, 256, ...), but the predicted content may be inappropriate.
## Downstream Use [optional]
Owing to the characteristics of token-free models, it is robust to complex or noisy inputs and well suited to generating short sequences (e.g., language understanding, dialogue response generation).
Because pretraining used data of 1024 bytes in length, it may not be suitable for problems that handle longer sequences.
For problems that need to handle longer sequences, we recommend using the [GBST-based token-free language model](https://huggingface.co/etri-lirs/gbst-kebyt5-base-preview).
# Bias, Risks, Limitations, and Recommendations
Information obtained through masked token prediction may carry the same risks as other generative language models. The data used for training was not specially filtered for profanity, obscenity, political content, or other coarse language. The model may therefore generate socially unacceptable tokens or text, and depending on the surrounding context it is hard to predict what it may produce in response to offensive inputs.
Meanwhile, this language model was trained mainly on Korean text, so it may be well suited to downstream tasks that can transfer these characteristics, in particular classification, summarization, and short-sentence generation. Out-of-vocabulary items cannot occur at the input/output level, but text sequences that were not seen during pretraining require additional domain-adaptive training and fine-tuning on the downstream task.
[More Information Needed]
## How to Get Started with the Model
With Transformers 4.27.0 or later, you can use the model and tokenizer with the following Python code. As mentioned above, the gbswt5 module must be imported before loading the transformers model:
```
import gbswt5
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("etri-lirs/gbst-kebyt5-base-preview")
model = AutoModelForSeq2SeqLM.from_pretrained("etri-lirs/gbst-kebyt5-base-preview")
```
# Training Details
## Training Data
본 사전학습에는 아래의 공개 데이터가 사용되었습니다:
* 국립국어원, 모두의 말뭉치. 신문 v2.0
* 국립국어원, 모두의 말뭉치. 구어 말뭉치 v1.2
* 국립국어원, 모두의 말뭉치. 문어 말뭉치 v1.0
* 국립국어원, 모두의 말뭉치. 신문 2020 v1.0
* 국립국어원, 모두의 말뭉치. 신문 2021 v1.0
* 한국어 위키피디어 덤프, [v2020.09.20](https://github.com/lovit/kowikitext)
* [나무위키 덤프](https://github.com/lovit/namuwikitext)
* 한국정보화진흥원, AIHub. 전문분야 말뭉치, 법률/특허 지식베이스, 논문/도서/대화/대본 요약, 한영/한일/한중 번역 말뭉치, 콜센터/주문/뉴스기사/시각정보 질의응답, 방송/회의/상담 음성인식 데이터.
* 한국정보화진흥원, AIHub. 대규모 웹데이터 기반 한국어 말뭉치 데이터
* 한국정보화진흥원, AIHub. 온라인 구어체 말뭉치 데이터.
* [KcBERT 말뭉치, v2022.3Q](https://github.com/Beomi/KcBERT)
In addition, a small amount of internally built data and some synthetic data were used, for a total of roughly ~220GB of training data.
# Evaluation
## Testing Data, Factors & Metrics & Results
Evaluation was performed on the dev set of the [KLUE dataset, v1.1](https://klue-benchmark.com/), which is used for Korean language understanding tasks.
All predictions were produced by directly generating the output labels with seq2seq decoding.
All models were trained with a fixed effective batch size of 16 and 4 training epochs, a learning rate fixed according to parameter size, and a cosine-annealing LR scheduler (min lr=1e-7, restarts=4, gamma=0.7). The detailed test environment is as recorded in Shin et al., 2023.
For this model (GBST-KEByT5-Large), released after the paper above, downstream task training used task-dependent learning rates in the range of 4.6e-5 to 6.2e-5, with the remaining settings kept identical.
The trainer used for the fine-tuning experiments below has also been released; it can also be used to train other Hugging Face encoder-decoder models (e.g., BART): https://github.com/etri-crossmodal/llm-downstream-s2s
| models | KLUE-TC(YNAT) (F1) | KLUE-NER (Entity, Char F1) | KLUE-DP (UAS, LAS) | KLUE-MRC (EM, ROUGE-W) |
|-------------|---------------|--------------|-------------------|------------------|
| google/byt5-large (1.23B) | 78.52 | 48.81, 63.95 | 44.26, 7.805 | _NOT TESTED_ |
| KEByT5-Base (580M) | 84.99 | 86.75, 91.05 | 88.70, 85.90 | 62.28, 68.38 |
| GBST-KEByT5-Base (584M) | 85.29 | 87.35, 92.09 | 88.33, 85.00 | 59.69, 66.44 |
| KEByT5-Large (1.23B) | 85.68 | 88.09, 92.40 | 87.18, 85.52 | 70.07, 75.81 |
| GBST-KEByT5-Large (1.23B) | 85.72(LR 4e-5) | 87.22, 91.54(LR 4.6e-5) | -, - | 68.6, 74.33 (LR 6.2e-5) |
Results on KLUE-WOS-v1.1, the dialogue state tracking (DST) task, are as follows. All evaluations directly generated the dialogue state with seq2seq decoding:
| models | WOS (JGA, %) | WOS (F1, %) |
| ------- | ---------- | ----------- |
| klue/klue-roberta-large | 50.22 | 92.23 |
| KEByT5-Base (580M) | 77.15 | 96.92 |
| GBST-KEByt5-base (584M) | 75.94 | 96.73 |
| KEByT5-Large (1.23B) | 78.54 | 97.28 |
| GBST-KEByT5-Large (1.23B) | -(not tested yet) | - |
Results on KLUE-RE-v1.1, the relation extraction (RE) task, are as follows. These are micro-F1 results over the 29 relation classes, excluding no_relation:
| models | KLUE-RE (F1, %) |
| ------- | ---------- |
| klue/klue-roberta-base | 65.90 |
| KEByT5-Base (580M) | 65.48 |
| KEByT5-Large (1.23B) | 68.95 |
| GBST-KEByT5-Large (1.23B) | -(not tested yet) |
The efficiency improvements from applying GBST were evaluated as follows. The evaluation environment used an A100 PCIE 80GB GPU, and precision was measured in bfloat16.
The KLUE-MRC dataset was used for training and evaluation; its examples contain contexts of up to 6800 bytes.
| model | training sample/sec. | inference sample/sec. |
| ----- | -------------------- | --------------------- |
| KEByT5-base (580M) | 1.30 | 3.95 |
| GBST-KEByT5-base (584M) | 3.56 | 5.77 |
| GBST-KEByT5-Large (1.23B) | 2.02 | not tested |
## Compute Infrastructure
* Trained on nVidia A100 80GB * 8EA
# Citations
* 신종훈 외, "한국어 중심의 토큰-프리 언어 이해-생성 모델 사전학습 연구", 제35회 한글 및 한국어 정보처리 학술대회 논문집, pp.711-715. 2023.
* 허정 외, "생성형 언어모델을 이용한 관계 추출", 제35회 한글 및 한국어 정보처리 학술대회 논문집. pp.708-710. 2023.
* 이기영 외, "한국어 토큰-프리 사전학습 언어모델 KeByT5를 이용한 한국어 생성 기반 대화 상태 추적", 제35회 한글 및 한국어 정보처리 학술대회 논문집. pp.644-647. 2023.
# Model Card Authors/Contacts
Jong-hun Shin(ETRI), e-mail=jhshin82 _AT_ etri _DOT_ re _DOT_ kr.
| [
"RELATION_EXTRACTION",
"TRANSLATION"
] | Non_BioNLP |
tmnam20/bert-base-multilingual-cased-rte-100 | tmnam20 | text-classification | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705,388,075,000 | 2024-01-16T06:55:47 | 15 | 0 | ---
base_model: bert-base-multilingual-cased
datasets:
- tmnam20/VieGLUE
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-rte-100
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tmnam20/VieGLUE/RTE
type: tmnam20/VieGLUE
config: rte
split: validation
args: rte
metrics:
- type: accuracy
value: 0.7075812274368231
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-rte-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6350
- Accuracy: 0.7076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1 | mustozsarac | sentence-similarity | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:62964",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/multi-qa-mpnet-base-dot-v1",
"base_model:finetune:sentence-transformers/multi-qa-mpnet-base-dot-v1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,719,486,539,000 | 2024-06-27T11:09:15 | 5 | 0 | ---
base_model: sentence-transformers/multi-qa-mpnet-base-dot-v1
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:62964
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Google, Fransa Rekabet Kurumu'nun verdiği cezayı temyize götürecek
sentences:
- Google France'ın Yönetim Kurulu Başkanı Sebastien Missoffe, Rekabet Kurumunun
ceza kararının bazı unsurlarına katılmadıklarını ve ”eser sahibinin haklarına
komşu hakları uygulamak için gösterdikleri çabaya karşın ceza miktarını fazla
bulduklarını” belirtti. Missoffe, Fransa'da imzalanan anlaşmalara sadık kaldıklarını
ve eser sahibinin haklarına komşu hakları tanıdıklarını ifade etti. Rekabet Kurumunun
geçen ay kendisinden istediği ”yayınevleri ve basın ajanslarına telif hakkına
tabi içeriklerinin kullanımı için ücret teklifi sunması” ihtarına uyduklarını
aktaran Missoffe, ”Teklifimizi 1200'den fazla gazete yayımcısına götürdük, anlaşmalarımızın
belli yönlerini değiştirdik” dedi. Google'ın başvurusu Paris Temyiz Mahkemesinde
incelenecek.
- 'Anadolu Efes, ligde 15 kezle en fazla şampiyonluk yaşayan takım unvanına sahip.
Adını 2011-2012 sezonunda Anadolu Efes olarak değiştiren Efes Pilsen, lig tarihinde
ilk şampiyonluğunu 1978-1979 sezonunda kazandı. Lacivert-beyazlılar, ligde 2001-2002,
2002-2003, 2003-2004 ve 2004-2005 sezonlarında üst üste 4 şampiyonluk yaşadı.
Anadolu Efes, Süper Lig''de son olarak 2020-2021 sezonunu şampiyon tamamladı.
Fenerbahçe''nin 10 şampiyonluğu var Fenerbahçe''nin Basketbol Süper Ligi''nde
10 şampiyonluğu bulunuyor. İlk şampiyonluğunu 1990-1991 sezonunda yaşayan sarı-lacivertli
takım, daha sonra 16 yıl şampiyon olamadı. 2006-2007 sezonunda Ülker ile sponsorluk
anlaşması imzalayan ve Fenerbahçe Ülker olarak ligde mücadele eden sarı-lacivertliler,
16 yıl aradan sonra şampiyon olarak, kulübün 100. yılında hasretine son verdi.
Fenerbahçe Ülker, 2007-2008 sezonunda da şampiyonluğa ulaşıp tarihinde ilk kez
üst üste iki kez zirveye çıktı. Efes Pilsen''e 2009-2010 sezonunda play-off final
serisinde 4-2 üstünlük kurarak şampiyon olan sarı-lacivertliler, sonraki iki şampiyonluğunu
Galatasaray''a karşı elde etti. Fenerbahçe Ülker, 2010-2011 sezonunda play-off
finalinde rakibini 4-2 ile geçerek kupayı kaldırdı. Sarı-lacivertliler, 2013-2014
sezonunda da 3-3 eşitliğin olduğu play-off final serisinde Ülker Spor ve Etkinlik
Salonu''ndaki son maça sarı-kırmızılı takım çıkmayınca, 6. kez şampiyon olarak
rakibini şampiyonluk sayısında geride bırakmayı başardı. Fenerbahçe, 2015-2016
sezonunda Anadolu Efes''i geçerek 7, 2016-2017 sezonunda da Beşiktaş Sompo Japan''a
play-off finallerinde üstünlük sağlayarak 8. kez şampiyonluğunu elde etti. Sarı-lacivertli
takım, 2017-2018 sezonunda play-off finallerinde TOFAŞ''ı geçerek 9. şampiyonluğuna
ulaştı. Fenerbahçe, geçen sezon da play-off finallerinde Anadolu Efes''e üstünlük
sağlayarak 10. şampiyonluğunu kazandı. Galatasaray 5 kez şampiyon oldu Galatasaray,
Basketbol Süper Ligi''nde 5 kez şampiyonluk kupasını müzesine götürdü. Sarı-kırmızılı
takım, ilk şampiyonluk kupasını 1968-1969 sezonunda kazandı. Ligde play-off sistemine
geçildikten sonra 1984-1985 sezonunda finalde Fenerbahçe''ye ve 1985-1986 sezonunda
da Efes Pilsen''e 2-1''lik üstünlük kurarak üst üste iki kez şampiyonluk kupasını
müzesine götüren sarı-kırmızılılar, 1989-1990 sezonunda da play-off finalinde
Paşabahçe''yi 3-1 ile geçerek şampiyonluk yaşadı. Ligde 2012-2013 sezonunda play-off
finalinde Banvit''e 4-1 üstünlük kuran sarı-kırmızılılar, 23 yıllık şampiyonluk
hasretine son verdi ve 5. şampiyonluğunu kazandı. Galatasaray, 2013-2014 sezonunda
play-off final serisinde yönetim kurulunun kararıyla son maça çıkmadı. Türkiye
Basketbol Federasyonu Yönetim Kurulu, play-off final serisinin 7. maçını 20-0
Fenerbahçe Ülker lehine tescil ederek, sarı-lacivertli takımı şampiyon ilan etti.
Beşiktaş''ın 2 şampiyonluğu var Beşiktaş, ligde iki kez şampiyonluğa ulaştı. Siyah-beyazlılar,
1974-1975 sezonunda 57 puanı bulunan Galatasaray''ın önünde 60 puanla ligi ilk
sırada tamamlayarak, ilk şampiyonluğunu elde etti. Beşiktaş, 2011-2012 sezonunda
ise play-off final serisinde Anadolu Efes''e 4-2 üstünlük kurup, 37 yıl sonra
mutlu sona ulaşarak ligde ikinci kez şampiyon oldu. Eczacıbaşı da 8 kez şampiyonluk
sevinci yaşadı Basketbol şubesini yıllar önce kapatan Eczacıbaşı, ligde 8 şampiyonluk
kazandı. Ligde İTÜ 5, Ülkerspor da 4 kez kupayı müzesine götürdü. İlk şampiyon
Altınordu 1966-1967 sezonuyla başlayan Deplasmanlı Erkekler Basketbol Birinci
Ligi''nde ilk şampiyonluğu Altınordu kazandı. Ligde 1983-1984 sezonunda play-off
sistemine geçildi ve bu tarihten itibaren lig şampiyonu, play-off maçlarının ardından
belirlendi. Pınar Karşıyaka ve TOFAŞ''ın ikişer, Muhafızgücü''nün de ligde bir
şampiyonluğu bulunuyor. 2019-2020 sezonunu tamamlanamadı Basketbol Süper Lig''inde
2019-2020 sezonu yeni tip koronavirüs salgını nedeniyle tamamlanamadı. Salgın
nedeniyle lige 23. hafta maçlarının ardından 19 Mart''ta ara verilirken, Türkiye
Basketbol Federasyonu Yönetim Kurulu 11 Mayıs''ta sezonu şampiyon ilan edilmeden
ve küme düşme olmadan sonlandırma kararı aldı. Şampiyonlar Basketbol Süper Ligi''nde
şampiyon olan takımlar şöyle:'
- Markaların ve perakendecilerin sınır ötesi bir e-ticaret şirketi kurmalarına yardımcı
olan Flow, New Enterprise Associates liderliği üstlendiği B Serisi yatırım turunda
37 milyon dolar yatırım aldığını açıkladı. Yatırım turuna katılan diğer isimler
American Express Ventures, Latitude Ventures, Liza Landsman oldu. 37 milyon dolarlık
yeni yatırımla birlikte Flow'un aldığı toplam yatırım miktarı 55 milyon dolara
yükseldi. Şirket, 37 milyon doları Flow'ın satış ve pazarlama ekibini genişletmek
ve ürünü geliştirmek için kullanacağını açıkladı. Flow CEO'su Rob Keve, sosyal
medya ve dijital pazarlamanın büyüsü sayesinde, tüketiciye yönelik birçok markanın
globaldeki tüketicilere ulaştığını söyledi. Bununla birlikte, bu tüketiciler için
gerçek alışveriş deneyiminde nakliye genellikle yavaş ya da pahalı olarak karşımıza
çıkıyor. Ayrıca site yerel ödeme hizmetleri ile bütünleşmekte başarısız olabiliyor.
Flow ise hem e-ticaret sitesi hem de tüketici için bu sorunları ortadan kaldırmayı
hedefliyor. Flow, mevcut e-ticaret platformlarının en üstünde yer alıyor. Böylece
alışveriş deneyimi yerel fiyatlandırma ve ödeme seçenekleriyle otomatik olarak
konumlarına uyarlanıyor. Ayrıca, Flow'un taşıyıcılarla olan ilişkileri sayesinde,
uluslararası nakliye zamanında ve uygun fiyatlı hale getiriliyor. Bir işletme,
halihazırda uluslararası denizcilik fırsatları ve dağıtım merkezlerine sahip olsa
bile lojistik yönetimi için Flow'u kullanabiliyor. 2015 yılında kurulan şirketin
müşterileri arasında MZ Wallace ve Charles & Colvard gibi çok kanallı işletmelerin
yanı sıra MVMT Watches gibi online markalar da bulunuyor. Flow, müşterilerinin
yıldan yıla yüzde 200 oranında arttığının altını çiziyor.
- source_sentence: Arama
sentences:
- Zorunlu Çerezler Bu çerez, insanlarla botları ayırt etmek için kullanılır. Bu,
web sitelerinin kullanımı hakkında geçerli raporlar hazırlamak için kullanılmakta
olup web sitesi için faydalıdır. İşlevsel Çerezler Kullanıcının web sitesinin
seçtiği dil sürümünü hatırlar. Performans/Analitik Çerezler Ziyaretçinin web sitesini
nasıl kullandığına ilişkin istatistiksel veriler oluşturmak için kullanılan benzersiz
bir kimliği kaydeder. Google Analytics tarafından talep oranını kısmak için kullanılır.
Kabul et Reddet Reklam/Pazarlama Çerezleri Bu çerez, Alexa Analytics'e gönderilen
tüketici davranışları hakkında bilgi toplamak için kullanılır. (Alexa Analytics
bir Amazon şirketidir.)
- 'Taipei Intel Developer Forum‘da tanıtılan UrbanMax isimli ürün, Intel’in dizüstü
ve netbook arasında gelip giden bir tasarım şeklini gösteriyor. UrbanMax, 11,1
inç (28 cm) köşegene sahip dokunmatik ekranıyla birlikte Windows Vista işletim
sistemi çalıştıran hafif bir dizüstü bilgisayar. Aslında ilk bakışta bir tablet;
fakat alt tarafından çıkan klavye, düz bir yüzeye yerleştirildiğinde açılarak
bir dizüstü bilgisayara dönüşüyor. Tabii, etkin olarak kullanabiliyorsanız, her
zaman bir ekran klavyeniz var. {pagebreak::İçinde Neler Var?} İçinde Ne Var? UrbanMax
isimli prototip, içinde MacBook Air’larda kullanılan ufaltılmış bir Core 2 Duo
işlemci barındırıyor. 1366×768 piksellik ekran yeterince keskin bir görüntü üretecek
kadar piksele sahip durumda. Ayrıca bu küçük makinede HD video oynatabilmek de
mümkün olacak deniliyor. Enerji tasarrufu açısından bir de SSD maliyetini bu ürüne
eklememiz gerekiyor. Bunların haricinde içinde Intel’in en son geliştirdiği ve
her türlü kablosuz bağlantı teknolojisini de destekleyen bir kart olduğu düşüncesi
çok yanlış değil. :: UrbanMax, dev bir iPhone’a benziyor mu? Bilgi için: Intel
Yazan: Berkin Bozdoğan'
- By Euronews Fransa'da aşırı sağcı lider Marine Le Pen Paris'i ziyaret eden Mısır
Devlet Başkanı Abdulfettah El-Sisi'nin Fransa'nın sağlam bir müttefiği olmalı
dedi. REKLAM Fransa'da aşırı sağcı lider Marine Le Pen Paris'i ziyaret eden Mısır
Devlet Başkanı Abdulfettah El-Sisi'nin Fransa'nın sağlam bir müttefiği olmalı
dedi. Sosyal medya hesabından paylaşımda bulunan Le Pen ”Bölgede istikrarın çıpası
olan ve ülkesinde Müslüman Kardeşleri bastıran Mısır Devlet Başkanı El-Sisi Fransa'nın
güçlü bir müttefiği olmak zorunda özellikle Türkiye'nin Libya konusundaki provokasyonları
ve terörizmle mücadele konusunda,” ifadelerini kullandı. Bu açıklamada ayrılıkçılık
ve radikal İslam'la mücadele için hazırlanan tasarının bakanlar konseyine sunulmasından
birkaç gün önce geldi. Le Pen tasarının doğru yönde atılmış adımlar içerdiğini
fakat kanunun herkesi hedef almak yerine İslamcılığa daha fazla odaklanması gerektiğini
belirtmişti. Paris ziyaretinde Fransa Cumhurbaşkanı Emmanuel Macron'la bir araya
gelecek olan El-Sisi, Libya ve terörizmle mücadele dahil bir çok bölgesel konuyu
ele alacak. Fransız aşırı sağ parti Ulusal Birlik lideri Marine Le Pen sosyal
medya hesabından bir çağrı yaparak, ülkedeki Ülkü Ocaklarının yanı sıra Milli
Görüş vakıflarının da kapatılması gerektiğini söylemişti.
- source_sentence: Manş Denizi'nde bir teknenin batması sonucu en az 31 düzensiz göçmen
öldü
sentences:
- BİST hisse verileri 15 dakika gecikmelidir. BİST isim ve logosu ”Koruma Marka
Belgesi” altında korunmakta olup izinsiz kullanılamaz, iktibas edilemez, değiştirilemez.
BİST ismi altında açıklanan tüm bilgilerin telif hakları tamamen BİST'e ait olup,
tekrar yayınlanamaz. Veriler tarafından sağlanmaktadır. www.sozcu.com.tr internet
sitesinde yayınlanan yazı, haber ve fotoğrafların her türlü telif hakkı Mega Ajans
ve Rek. Tic. A.Ş'ye aittir. İzin alınmadan, kaynak gösterilerek dahi iktibas edilemez.
Copyright © 2023 - Tüm hakları saklıdır. Mega Ajans ve Rek. Tic. A.Ş.
- By euronews Fransa'dan Manş Denizi üzerinden İngiltere'ye geçmeye çalışan ve düzensiz
göçmenleri taşıyan teknenin batması sonucu en az 31 göçmen hayatını kaybetti.
REKLAM Fransa'dan Manş Denizi üzerinden İngiltere'ye geçmeye çalışan ve düzensiz
göçmenleri taşıyan teknenin batması sonucu en az 31 göçmen hayatını kaybetti.
Fransa İçişleri Bakanlığı'ndan yapılan açıklamada, düzensiz göçmenlerin Fransa'nın
Calais kentinden Manş Denizi üzerinden tekneyle İngiltere'ye ulaşmaya çalıştığı
belirtildi. Cumhurbaşkanı Emmanuel Macron, göçmenlerin ölüm haberi üzerine yaptığı
açıklamada, ilgili bakanların acil olarak toplanmasını istedi ve ”Fransa, Manş
Denizi'nin mezarlığa dönüşmesine izin vermeyecek.” dedi. Manş Denizi'nde yaşanan
insani dramın sorumlularının derhal bulunacağı sözünü veren Macron, AB Sınır Koruma
Ajansı'nın (Frontex) Manş Denizi'nde sınır güvenliğinin korunması konusunda imkanlarının
artırılmasını istedi. Başbakan Jean Castex ise ilgili 8 bakanın katılımıyla yarın
konuyla ilgili acil bir toplantı düzenleneceğini duyurdu. İçişleri Bakanı Gerald
Darmanin, göçmenlerin yasa dışı bir şekilde denize açılmalarını sağladıklarından
şüphelenilen 4 kişinin gözaltına alındığı duyurdu. İngiltere Başbakanı Johnson
acil toplantı düzenledi İngiltere Başbakanı Boris Johnson ise ilgili bakanlarıyla
bu akşam acil bir toplantı düzenleyerek, Manş Denizi'nde yaşanan trajediyi görüştü.
Johnson daha sonra basına yaptığı açıklamada üstü kapalı Fransa'yı suçlayarak,
”Bazı ortaklarımızı, özellikle de Fransızları son gelişmelerle ilgili duruma ayak
uydurmaya ikna etmekte zorlandık, ancak bu konuda tüm ülkelerin karşı karşıya
olduğu zorlukları anlıyorum.” dedi. Bir balıkçının Manş Denizi'nde cesetler görmesi
üzerine yetkililere haber verdiği ifade edilen açıklamada, düzensiz göçmenleri
taşıyan teknenin battığı, yapılan aramanın ardından ilk belirlemelere göre 5 düzensiz
göçmenin de bilincini kaybettiği kaydedildi. İçişleri Bakanı Gerald Darmanin,
Twitter hesabından yaptığı açıklamada, yaşanan bu dram nedeniyle üzüntü duyduğunu
belirtti. Düzensiz göçmenlerin tekneyle İngiltere'ye geçişini sağlamaya çalışanların
suçlu olduğunu ifade eden Darmanin, Calais kentine gideceği bilgisini paylaştı.
Calais'de bulunan ve kötü şartlar içinde yaşam mücadelesi veren çok sayıda düzensiz
göçmen İngiltere'ye gitmeye çalışıyor. İngiltere'ye bu ay içinde yaklaşık 2 bin
göçmenin geleceği tahminine karşın sadece ilk 11 günde 3 bin 780 kişi Fransa üzerinden
ülkeye giriş yaptı. Fransa'nın kuzeyindeki Grand-Synthe kentinde yol kenarlarında
barınan yaklaşık 1500 düzensiz göçmen, 16 Kasım'da polisin düzenlendiği operasyonla
barınma merkezlerine taşınmıştı.
- Zorunlu Çerezler Bu çerez, insanlarla botları ayırt etmek için kullanılır. Bu,
web sitelerinin kullanımı hakkında geçerli raporlar hazırlamak için kullanılmakta
olup web sitesi için faydalıdır. İşlevsel Çerezler Kullanıcının web sitesinin
seçtiği dil sürümünü hatırlar. Performans/Analitik Çerezler Ziyaretçinin web sitesini
nasıl kullandığına ilişkin istatistiksel veriler oluşturmak için kullanılan benzersiz
bir kimliği kaydeder. Google Analytics tarafından talep oranını kısmak için kullanılır.
Kabul et Reddet Reklam/Pazarlama Çerezleri Bu çerez, Alexa Analytics'e gönderilen
tüketici davranışları hakkında bilgi toplamak için kullanılır. (Alexa Analytics
bir Amazon şirketidir.)
- source_sentence: Gafele facute de catre autoritatile comuniste
sentences:
- Totul s-a întâmplat sâmbătă după-amiază, când mama celor doi a sunat la 112 și
a spus că fiica ei de 5 ani a fost violată de băiatul de 13. Când au ajuns polițiștii
la locuința familiei din localitatea Ivănești, băiatul de 13 ani deja fugise de
acasă. Potrivit unor surse judiciare, fata a fost dusă la Institutul de Medicină
Legală pentru consult, iar politiștii l-au căutat pe fratele ei. După câteva ore,
băiatul a fost găsit ascuns într-o casă părăsită din localitate. El a fost dus
la un centru al DGASPC Vaslui, unde a fost audiat de către polițiști în prezența
unui psiholog. Sora lui este acum acasă, în grija familiei. DGASCPC Vaslui a început
propria anchetă în acest caz, urmând ca în cursul zilei de luni să aibă loc mai
multe discuții cu reprezentanții familiei.
- 'Costinești, cunoscută ca stațiunea tineretului de pe litoralul românesc, este
o destinație de vacanță vibrantă și plină de viață, ideală pentru cei care doresc
să se bucure de soare, mare și distracție. Situată la aproximativ 30 de kilometri
sud de Constanța, locul atrage anual mii de turiști dornici de petreceri, activități
recreative și relaxare pe plajă. Ce prețuri la cazare sunt în iulie și august
2024? Câți bani trebuie să scoți din buzunar pentru o vacanță la Costinești Pe
un forum dedicat vacanțelor la Costinești, o româncă a dorit să știe dacă va găsi
cazare pentru perioada 20….23 August, 11 adulți și 10 copii. Iată ce detalii a
primit: „Mai avem disponibilitate pentru urmatoarele perioade * 15-21 iulie, 6
nopti, 1000 lei/noapte * 17-20 august, 3 nopti, 1000 lei/noapte * 26-31 august,
5 nopti, 800lei/noapte * 1-6 septembrie, 5 nopti, 800 lei/noapte”. De banii ăștia
ai aer conditionat în toate camerele, TV, WIFI, terasă amenajată, foisor, grătar,
bucătărie complet utilată (frigider, cuptor electric, cuptor cu microunde, plită,
cafetieră, prăjitor de pâine), parcare. Sunt acceptate vouchere de vacanță. Plus
că te afli la doar 10 minute de plajă! Altcineva are disponibile camere duble
matrimoniale și triple pentru 19….3 1august (160 RON cameră dubla matrimonială/
200 triplă). Atmosfera stațiunii Costinești Stațiunea Costinești este renumită
pentru atmosfera sa prietenoasă. Plajele sale sunt unele dintre cele mai populare
de pe litoralul românesc. Cu nisip fin și ape clare, acestea oferă condiții excelente
pentru plajă și înot. Pentru cei care preferă un loc mai liniștit, există și plaje
retrase în apropiere, unde se pot bucura de soare într-un cadru intim. Printre
atracțiile principale ale stațiunii se numără Epava Evanghelia, o navă grecească
eșuată pe țărm în anii ’60, care a devenit un simbol al Costineștiului. Ambarcațiunea
este un loc popular pentru fotografii și explorări. Activități recreative În Costinești
ai la dispoziție o gamă largă de activități recreative pentru toate gusturile.
Sporturile nautice, cum ar fi windsurfing, kitesurfing și jetskiing, sunt foarte
populare. De asemenea, poți închiria biciclete sau scutere pentru a explora stațiunea
și împrejurimile. Pentru turiștii cu buget limitat, camping-urile sunt o alegere
bună, oferind o experiență autentică de vacanță pe litoral. În ceea ce privește
gastronomia, stațiunea este plină de restaurante și terase care servesc preparate
tradiționale românești, fructe de mare proaspete și mâncăruri internaționale.
Costinești are toate ingredientele necesare pentru o vacanță de neuitat. Indiferent
dacă ești în căutare de petreceri până în zori sau de relaxare pe plajă, această
stațiune promite o experiență memorabilă pentru toți cei care o vizitează.'
- 'Ziua de 16 decembrie, inainte de ora 17:00. De dimineata, Securitatea Judetului
Timis isi desfasoara activitatea normal. ALEX MIHAI STOENESCU ADUNAREA. Coloane
de timisoreni au inceput sa manifesteze impotriva regimului Ceausescu Asa cum
am aratat, in jurul orei 8:30, manifestantul Simion Cherlea vine sa discute cu
Radu Tinu, apoi maiorul va lucra impreuna cu locotenent-colonelul Kope R. la intocmirea
raportului referitor la agentul maghiar Varga Ladislau, cel care luase contact
cu Ambasada Ungariei. Filajul prezinta si el raportul - liniste. ”In dimineata
de 16 decembrie 1989, la ora 8:00 - declara in proces Ion Florea, fost secretar
al Comitetului Judetean Timis al PCR - , am fost chemat de Radu Balan, o data
cu mine venind si ceilalti secretari: Bolog Vasile, Avram Teodorea, Boiboreanu
Viorica si Lazureanu Aurel. Balan Radu ne-a informat cu privire la cazul Tokes
si anume ca in Piata Maria s-au adunat trei-patru sute de persoane care-si exprimau
opozitia fata de masura de evacuare a pastorului ce urma a fi luata.” ”Primul
secretar ne-a informat ca pentru dezorganizarea acelei manifestatii urmeaza a
fi infiltrati alti patru-cinci sute de oameni cu diferite responsabilitati pe
linia muncii de partid sau de sindicat. Ne-a mai precizat ca deja printre acesti
demonstranti se gasesc lucratori din aparatul Inspectoratului Judetean al Ministerului
de Interne.” Avand in vedere ca la ora 8:00 nu era nimeni in fata casei parohiale,
decizia lui Balan pare de neinteles. Ea capata un inteles - determinat si de natura
masurii care urma a fi luata - doar daca in noaptea de 15 spre 16 decembrie Emil
Bobu l-a informat de Nicolae Ceausescu asupra convorbirii sale cu Balan, iar Ceausescu
l-a sunat tot in aceeasi noapte pe Balan la Timisoara si i-a cerut sa ia masura
neinspirata, dar tipica lui Ceausescu, de a aduce muncitori care sa-i ia la bataie
pe manifestantii din fata casei lui Tokes. Si Balan, si Bobu au ascuns in proces
aceasta convorbire de noapte cu Ceausescu, pentru ca ordinul lui Ceausescu se
afla la originea evenimentelor, iar ei l-au executat, tot ca niste oameni lipsiti
de judecata. Sa ne uitam la text si sa vedem ce contine. Isi poate cineva imagina
”infiltrarea” a 400-500 de cadre ale partidului printre 200-300 de manifestanti?
Este hilar. Ce fel de ”infiltrare” poate fi aceea a unui grup masiv de oameni
care depasesc ca numar numarul manifestantilor? Nu, este clar ca au fost trimisi
la bataie, iar aceasta masura este tipica unui singur om - Nicolae Ceausescu.
Sa ne amintim ca in timpul teleconferintei din 21 decembrie 1989, Ceausescu va
invoca trecutul sau de membru al grupelor de soc organizate de sovietici pe strazile
Bucurestilor in anii 1944-1947, pe care populatia i-a identificat sub numele de
”mardeiasi”: ”Reamintesc numai ca in Bucuresti functionau inainte asemenea grupe
si nu indraznea nimeni, nici un huligan nu indraznea sa ridice capul pe bulevardele
Capitalei. Este adevarat, am fost atunci criticati ca sunt prea aspre - este demult
- , dar le-am spus ca trebuie sa fie si mai aspre cu cei care incalca legile”.
Toti biografii, cu exceptia celor platiti de el, confirma prezenta lui Nicolae
Ceausescu in grupurile de batausi de pe strazile Bucurestilor, dar si la conducerea
unor astfel de echipe in perioada colectivizarii fortate. Dupa ora 9:00 incep
sa se adune mai intai cei patru enoriasi de serviciu, apoi aproximativ zece curiosi.
La un moment dat, strada se goleste subit. Nimeni nu intelege din ce cauza, si
ofiterii de Securitate coboara pentru a investiga. Pe Bulevardul 6 Martie trecuse
o masina de vidanja care avea scapari tehnologice si lasase pe caldaram o dara
de materii fecale urat mirositoare. Persoanele din fata casei lui Tokes se imprastiasera
pentru a scapa de mirosul insuportabil. Se indeplinea una din constatarile celebre
ale lui Petre Tutea: ”Toate revolutiile se umplu pana la urma de cacat si sange”.
Informat asupra acelui incident, Radu Tinu cere aprobarea ca vidanja sa fie oprita
si pusa sa mai treaca o data. Colonelul Sima, seful Securitatii Timis, considera
propunerea neserioasa si nu o aproba. Maiorul Tinu propune atunci ca in intersectia
strazilor Treboniu Laurian si Timotei Cipariu sa fie amplasat un militean care
sa dirijeze circulatia camioanelor grele pe Timotei Cipariu astfel incat sa nu
se poata aduna o multime care sa ocupe strada. Colonelul Sima respinge si aceasta
propunere pe motiv ca, dimpotriva, camioanele ar putea fi blocate acolo si va
fi foarte greu sa le deblocheze apoi. Pentru acest moment al zilei, dar si pentru
intervalul foarte mare dintre orele 10:00 si 17:30, Comisia senatoriala are o
versiune din pacate prea schematica. Se afirma ca la ora 10:00 Nicolae Ceausescu
a luat legatura cu primul secretar Radu Balan si a dispus ”masuri concrete printre
care si evacuarea imediata si neconditionata a pastorului la noul loc de munca.
Dictatorul devenise nervos”. Nu avem stenograma convorbirii telefonice, dar din
ceea ce se intamplase in 15 decembrie si din reconstituirea facuta din relatarile
revolutionarilor si ale ofiterilor Securitatii, un interes al lui Ceausescu pentru
situatia de la Timisoara este perfect plauzibil. Bobu fusese sunat noaptea de
Balan si apoi acesta l-a informat pe seful statului acasa. Ceausescu a sunat si
probabil ca Balan l-a informat asupra faptului ca evacuarea ar putea fi impiedicata
de prezenta unui grup de oameni in fata casei lui Tokes. Iarasi in tonul cunoscut,
Ceausescu a ordonat efectuarea evacuarii imediat, adica in ziua de 16 decembrie,
cum era prevazut in hotararea judecatoreasca, fara sa tina cont ca legea obliga
la executarea sentintei in prima zi lucratoare. Este de asemenea posibil ca Balan
sa-l fi informat ca duminica Laszlo Tokes avea slujba si ar fi putut profita de
ocazie pentru a incita lumea la nesupunere civica. Nu ne trebuie prea multe investigatii
ca sa ne imaginam ca Ceausescu a cerut sa se ia masuri ”pe linie de partid”, de
”influentare obsteasca”, adica, altfel spus, muncitorii, oamenii muncii din Timisoara
sa ia atitudine si sa intervina pentru executarea sentintei judecatoresti, dar
mai ales pentru a-i lua la bataie pe cei stransi in fata casei lui Tokes. Acesta
era patentul gandirii lui Ceausescu, primitiv, ramas la anii 1945-1948, cand facuse
parte din acel grup de soc instruit de sovietici pentru incaierarile cu ”fascistii”
din centrul Bucurestilor. REPRESIUNEA. Fortele de ordine au imprastiat multimea
cu jeturi de apa Subliniem insa ca ordinul dat de Balan subalternilor sai a fost
la ora 8:00, in timp ce discutia invocata de Comisia senatoriala a avut loc la
ora 10:00, ceea ce demonstreaza existenta unei alte convorbiri, de noapte. Oricum,
dupa convorbirea de dimineata cu Nicolae Ceausescu, primul secretar Radu Balan
hotareste sa nu execute ordinul secretarului general: ”Sambata, 16 decembrie 1989,
la ora 10:00, m-a sunat Ceausescu Nicolae, interesandu-se de situatia privitoare
la pastorul amintit. I-am expus-o asa cum era in realitate, sustinand ca nu se
poate trece la evacuare, deoarece hotararea judecatoreasca nu era inca executabila.
El mi-a ordonat sa trec de indata la evacuare, lucru pe care insa nu l-am facut,
tocmai in ideea de a nu da nastere la conflicte”. Ceausescu ii cerea sa execute
un ordin ilegal. Este primul ordin ilegal dat de Ceausescu in acea perioada. Urmatoarele
ordine ilegale date de el vor fi mult mai sangeroase. Comisia senatoriala arata
ca ”in aceeasi zi, din ordinul ministrului de Interne, in toate unitatile subordonate
acestui minister, a fost introdusa situatia numarul 2 prevazuta de Ordinul 2030
din 15.05.1972”. Nu se precizeaza ora la care s-a dat ordinul. Ea este importanta,
pentru ca lipsa acestui amanunt din concluziile Comisiei senatoriale permite confuzia,
inducand ideea ca in dupa-amiaza de 16 decembrie situatia era deosebit de grava.
Ora declararii Situatiei nr. 2 o aflam de la generalul Grigorie Ghita, in timpul
audierii sale din 1994: ”In 16 decembrie, ora 20:00, m-a sunat Vlad la domiciliu
sa ma prezint la Comandament, la Baneasa, in spatele Institutului de Meteorologie
si Hidrologie (unde este acum Comandamentul Trupelor de Jandarmi). Am fost informat
de situatia de la Timisoara. Am luat legatura cu gen. Bunoaica, comandantul brigazii
de la Timisoara, i-am dat misiunea sa nu permita intrarea sau iesirea din biserica
reformata din Timisoara. La ora 22:00 s-a ordonat la MI aplicarea Ordinului 230/1973,
deci Situatia nr. 2, stare de alerta a efectivelor. Am transmis ordinul in teritoriu”.
Insa revolutionarii insisi, precum si fostii ofiteri de Securitate arata ca in
dimineata zilei de 16 decembrie numarul persoanelor stranse in fata casei lui
Tokes era redus. Revolutionara Veronica Balaj are o amintire romantica despre
prima jumatate a acelei zile: ”Pana la pranz nu s-a aratat vreun semn ca ziua
aceea ar fi putut fi deosebita. Era o sambata de sfarsit de an. Un 16 decembrie
ca oricare altul. Asa parea. Si nici nu banuiam altfel. Atata ca vremea se incalzise
peste asteptari. Soarele se hlizea in ciuda calendarului in care scria mijloc
de decembrie”. Temperatura maxima la Timisoara in ziua de 16 decembrie 1989 va
fi de 16 grade C. Avem temeiuri sa credem ca incepand cu ora 11:00, in sediul
Comitetului Judetean de Partid s-a dezvoltat un conflict personal intre primul
secretar Radu Balan si fostul prim-secretar Ilie Matei. Balan va arata in proces
ca ”tot in aceeasi zi, la ora 11:00, la Comitetul Judetean de partid si-a facut
aparitia Matei Ilie, secretar al CC al PCR, care era invoit pentru a-si rezolva
unele probleme familiale. L-am pus pe acesta in tema cu situatia creata in jurul
pastorului mentionat, cu ordinul dat de Ceausescu de a se trece la evacuare, la
care Matei Ilie a fost de parere ca trebuie sa se puna in executare hotararea
de evacuare”. S-a nascut o contradictie intre cei doi lideri comunisti locali,
provenita din faptul ca Ilie Matei era fostul prim-secretar al judetului Timisoara,
iar Balan ii luase locul doar de o luna si jumatate. Conform unor surse din Primaria
Timisoarei, Matei cunostea foarte bine cazul Tokes, fusese implicat in emiterea
ordinului de evacuare si il considera pe Balan inca nefamiliarizat cu situatia
judetului si cu a lui Tokes in particular. Radu Balan nu dorea sa-si inceapa conducerea
judetului cu acte de violenta si, in plus, fiind prizonierul unei imagini clasice,
larg raspandite, ca Timisoara este un oras civilizat, a mizat pe reactia civilizata
a cetatenilor. Pentru Ilie Matei insa, proaspat promovat secretar in CC, ordinul
lui Ceausescu era litera de lege. Faptul ca Balan a refuzat sa execute acest ordin
a generat starea de conflict si poate si un telefon la Bucuresti. Avem astfel
ipoteza unei diferente majore de opinie tocmai la nivelul deciziei superioare
pe plan local. Mai tarziu, in seara zilei de 17 decembrie, Balan ii va declara
revolutionarului Ioan Savu: ”Am vorbit cu oamenii, Savule. Am vorbit, dar n-am
putut face mai mult pentru ca sosise Matei, pentru ca venisera generalii”. Sa
ne lamurim asupra problemei evacuarii lui Laszlo Tokes. In primul rand sa reamintim
ca acesta ocupa ilegal apartamentul din cladirea situata in Strada Timotei Cipariu.
In al doilea rand, legalitatea actiunii de evacuare era stabilita prin Codul de
Procedura Civila. Potrivit prevederilor art. 385 din Codul de Procedura Civila,
in vigoare la acea data, ”nici o executare nu se va putea face inainte de ora
8 dimineata si dupa ora 6 seara”. Dar art. 386, in vigoare la acea data, prevedea
ca ”executarea silita nu se va putea face in zilele nelucratoare, potrivit legii,
afara de cazurile urgente in care executarea poate fi incuviintata de presedintele
instantei de executare”. Asadar, in caz de urgenta, Tokes putea fi evacuat in
orice zi, intre ora 8:00 si 18:00, cu incuviintarea presedintelui Tribunalului
Timis, daca intervenea un ”caz de urgenta”. Intamplarile din noaptea de 15 spre
16 decembrie nu intruneau conditiile cazului de urgenta, astfel ca ordinul de
evacuare fortata dat de Ceausescu era ilegal. In jurul orei 12:00, in dreptul
imobilului in care locuia Tokes erau stranse aproximativ 30 de persoane. De aceasta
data, procentul curiosilor, al celor care poate incercau o solidarizare muta cu
pastorul Tokes, este dominant; dintre cele aproximativ 30 de persoane lipsesc
indivizii lumii interlope din ziua precedenta. La ora 13:00, maiorul Radu Tinu
il suna la Bucuresti pe seful Directiei I din DSS, colonelul Ratiu, si ii raporteaza,
printre altele: ”Nu e bine ce se intampla. E o balbaiala la partid, habar n-au
ce sa faca”. Este vorba, fara indoiala, de conflictul Balan-Matei din sediul CJP.
La cateva minute dupa ora 14:00, strazile din apropiere se anima si tramvaiele
devin ceva mai aglomerate. Pentru cei care nu sunt familiarizati cu Piata Maria
din Timisoara, trebuie precizat ca aceasta este un nod important de circulatie,
locul unde angajatii intreprinderilor de la periferie schimba tramvaiele venite
din trei directii diferite, pentru tramvaiele care circula spre centru. In Piata
Maria, in mod normal la ore de varf, se adunau pentru a lua alte mijloace de transport
multe sute de persoane. Asadar, posibilitatea de a vedea ce se intampla cativa
pasi mai incolo, la parohiala, era maxima, iar sansele ca un calator curios sa
intarzie pentru a afla ce se intampla treceau mult peste 50%. Era sambata si programul
de lucru al intreprinderilor timisorene se incheia la ora 14:00. In jurul orei
16:00, in Strada Timotei Cipariu apare un grup compact, de aproximativ 60-70 de
persoane, care se opresc in dreptul casei lui Tokes, ocupand si o parte din carosabil.
Securitatea isi trimite rapid oamenii printre cei veniti, iar Radu Tinu se duce
personal pentru a afla ce se intampla. De la ziaristul Teodor (Doru) Burza, venit
in scurt timp si el la fata locului, afla ca sunt sindicalisti trimisi de primarul
Petre Mot sa impiedice adunarea manifestantilor si, la nevoie, sa-i imprastie.
Prin aceasta decizie stupida, autoritatile locale constituie ele insele un grup
masiv de peste 100 de persoane in fata casei lui Tokes, trezind curiozitatea trecatorilor.
Multi dintre ei se vor opri si apoi vor ramane pe loc pentru a vedea ce se mai
intampla. Altii vor stationa un timp, se vor duce acasa sa manance si sa-si rezolve
unele probleme casnice, insa hotarati sa revina pe seara. Sindicalistii - persoane
cu functii pe linie de sindicat din mai multe intreprinderi timisorene - devin
cu timpul agitati, lasati acolo fara nici o conducere, impiedicati sa se duca
acasa dupa terminarea programului, preocupati ca si restul persoanelor prezente
de lipsurile zilnice, enervati ca pierd timpul intr-un loc lipsit de interes.
Li se spusese ca in Timotei Cipariu este un grup violent de iredentisti, de unguri
care vor sa impiedice punerea in aplicare a unei hotarari judecatoresti. Nu se
intampla nimic din toate astea. In aceasta manevra tipica mentalitatilor comuniste
care dominau gandirea activistilor de partid, de jos si pana la Bobu si Ceausescu,
trebuie identificat continutul discutiilor telefonice de noapte si dimineata dintre
Ceausescu si Balan. LOCUL. Sindicalistii trebuia sa sparga ”mitingul” de la casa
parohiala a lui Tökes Informatiile obtinute de Comisia senatoriala despre convorbirile
telefonice la inalt nivel politic intre Timisoara si Bucuresti provin exclusiv
de la factori politici locali. Era de asteptat ca in declaratiile lor ulterioare,
date in procese sau in fata Comisiei, sa ascunda gafa monumentala pe care au facut-o,
dovada a ingustimii gandirii lor si a incapacitatii de a conduce o structura,
de a gestiona o situatie oarecare. Le era extrem de greu sa recunoasca faptul
ca sunt autorii primei aglomerari importante de oameni din Timotei Cipariu si
mai ales ca sindicalistii pe care i-au trimis acolo, ”reprezentantii clasei muncitoare”,
”forta inaintata a partidului” etc., pactizasera cu micul grup de enoriasi si
simpatizanti de acolo, satui de propaganda, de minciuna, de conditiile mizerabile
de trai si de salarii diminuate. Teza unei multimi de 1.000 de persoane prezente
in dimineata sau dupa-amiaza zilei de 16 decembrie nu este realista. Nici autorii
cei mai entuziasti si inclinati spre exagerari nu confirma aceste cifre, nici
jumatate, nici macar un sfert. In dupa-amiaza de 16 decembrie, Emil Bobu va lua
si alte masuri, asa cum aflam din declaratia adjunctului sau, Nicolae Mihalache:
”La data de 16 decembrie 1989, la ora 16:30, din ordin, m-am prezentat la Bobu
Emil care, de fata cu Constantin Radu, mi-a spus urmatoarele: ”Vei pleca la Timisoara.
In cursul acestei nopti va trebui sa fie evacuat un pastor, problema de care se
vor ocupa organele Ministerului de Interne. Cumpanasu Ion va discuta cu pastorul
si ii va preciza noua parohie. Toate indicatiile necesare au fost transmise si
primului secetar Radu Balan. Tu nu vei avea alta sarcina decat aceea de a ma informa
cu evolutia situatiei de la Timisoara””. Avem posibilitatea acum sa incercam o
reconstituire a evenimentelor si din punctul de vedere al Securitatii, punct de
vedere care a lipsit din analizele anterioare. In primul rand, trebuie subliniat
ca supravegherea adunarilor de oameni in fata casei lui Tokes reprezenta doar
un aspect, o parte a activitatii Securitatii Timis, care era mult mai complexa.
Din punct de vedere strict profesional, adunarea oamenilor acolo ii deranja pe
securisti in indeplinirea misiunii lor, atat prin faptul ca le mobiliza fortele
pentru a depista prezenta unor eventuali instigatori din afara sau din interior,
cat si prin faptul ca perturba operatiile de supraveghere asupra lui Tokes. Este
clar ca in momentul in care s-au implicat autoritatile locale, Securitatea s-a
retras in interiorul misiunilor sale stricte de urmarire informativa. Si sa nu
uitam ca avea in zona cateva coloane de ”turisti” sovietici care tot declarau
ca se duc in Iugoslavia sa petreaca Craciunul, dar nu mai paraseau imprejurimile
Timisoarei. Ei nu se cazau la hoteluri si dormeau peste noapte in masini. Dormitul
peste noapte in masina, la jumatatea lui decembrie, presupune fie o rezistenta
fizica iesita din comun, fie folosirea intensa, pe durata intregii nopti, a sistemului
de incalzire al autoturismului, fapt care produce un consum de combustibil foarte
greu de recuperat. Iarasi nu trebuie sa uitam ca nu se gasea benzina si ca la
statiile de alimentare erau cozi imense, zi si noapte. Este imposibil sa neglijam
aceste detalii ale unor intamplari nefiresti si ilogice. Coloanele de turisti
se aflau la doar cativa kilometri de Iugoslavia si totusi nu treceau granita.
In cursul zilei, unul sau doua autoturisme din coloana plecau in recunoastere
prin oras, oprindu-se in apropierea unor locuri care vor deveni ”aprinse” incepand
cu seara zilei de 16 decembrie - in fata Consiliului Judetean, in dreptul aleii
ce ducea la Opera, pe strazile din vecinatatea casei lui Tokes. In dimineata aceleiasi
zile, doua tiruri sovietice vor stationa pe interzis in apropierea unor unitati
militare din Timisoara, ingreunand accesul. Vor fi indepartate de Militie. Mirko
Atanaskovici, consulul iugoslav de la Timisoara, care va fi acuzat in timpul proceselor
revolutiei ca s-a implicat in revolta si se va apara dupa aceea ca nu a depasit
”cu nimic ceea ce e prevazut in Conventia Internationala privind relatiile diplomatice
internationale”, facea in saptamana 10-16 decembrie trei si chiar cinci deplasari
pe zi in Iugoslavia si inapoi. El nu stie sau nu vrea sa spuna ca urmarirea sa
nu se reducea la un granicer care ii numara iesirile si intrarile pe la granita,
ci era supravegheat pe teritoriul Iugoslaviei si interceptat de la Belgrad, astfel
ca Directia de Contrainformatii a Securitatii cunostea unde si in ce masura incalca
prevederile Conventiei Internationale. In plus, el isi activase propria retea
de informatii, intre care unii agenti ai sai au fost identificati la casa parohiala.
Precizam ca in orasul Timisoara se mai aflau cateva obiective ale supravegherii
operative, asemanatoare lui Tokes.'
- source_sentence: AKP'li vekilin traktör açıklamasına tepki
sentences:
- Cumhuriyet Halk Partisi (CHP) Niğde Milletvekili Ömer Fethi Gürer, “Türkiye’de
AK Parti’den önce traktör yoktu” diyen AK Parti Grup Başkanvekili ve Ankara Milletvekili
Leyla Şahin Usta’ya tepki gösterdi. Usta’nın Meclis Genel Kurulu’ndaki konuşmasını,
“İnkarcılığın bu kadara da pes” diyerek eleştiren CHP Milletvekili ve TBMM Tarım,
Orman ve Köyişleri Komisyon Üyesi Ömer Fethi Gürer, Osmanlı Döneminde bile 4 traktörün
olduğunu anımsattı. “ATATÜRK’ÜN TRAKTÖR ÜZERİNDEKİ FOTOĞRAFLARINA İYİ BAKIN” 1923
yılında Cumhuriyet kurulduğunda, Büyük Önder Mustafa Kemal Atatürk’ün ilk talimatlarından
birinin de tarımda makineleşmenin gerçekleşmesi yönünde olduğunu hatırlatan Milletvekili
Gürer, “Bu nedenle 221 traktör ithal edildi. Atatürk’ün de üzerinde olduğu traktör
fotoğrafları arşivlere girildiğinde görülebilir” dedi. ATATÜRK ASKERE GİDEN ÇOCUKLARA
TRAKTÖR EĞİTİMİ VERİLMESİNİ İSTEMİŞTİ CHP Milletvekili Ömer Fethi Gürer, Atatürk’ün
askere giden köylü çocuklarına traktör kursu verilerek, ileride traktör sayısı
artacağı için gençlerin köylerine döndüklerinde, traktör kullanıyor olmalarının
sağlanmasını istediğini de ifade etti. 1980’LERDE TRAKTÖR ÜRETİMI HIZLA ARTTI
Türkiye’de 1944 yılında 956 traktörün bulunduğuna işaret eden CHP Milletvekili
Ömer Fethi Gürer, “1960 yılında ülkemizde 42 bin 136 traktör vardı ki, Türkiye
o dönemde traktör üretimine de başlamıştı. 1980’lere kadar traktör üretimi hızla
arttı. 1980’lerden sonra traktör fabrikalarından birinde ben de genel müdür olarak
görev yaptım. Ama AK Parti Grup Başkanvekilinin sözlerini duyunca, insana ‘Bu
kadar da olmaz’ dedirtiyor. Sanayi ve Teknoloji Bakanı da bu konuşma olurken Mecliste
genel kurulunda idi. En azından Bakan bir düzeltme yapmalı idi. Grup Başkanvekili
Usta bu sözlerinden sonra da konuştu ancak bir düzeltme yapmadı. Görünen o ki
sözlerini düzeltme yerine hala öyle sanıyor. Cumhuriyet tarihi bilmemek de böyle
bir şey” diye konuştu. 2000 YILINDA 1 MİLYONA YAKIN TRAKTÖR VARDI “Türkiye’de
traktörün olmadığını iddia etmenin, iddia sahibinin ülkemizin dününün sanayide
gelişmelerini de bilmediğini gösterir” diyen Milletvekili Gürer, “Çünkü 2000 yılına
gelindiğinde ülkemizde traktör sayısı 941 bin 843 adetti. 1 tane değil, 5 tane
değil, 10 tane değil, neredeyse 1 milyon traktör vardı ülkemizde” dedi. Gürer
bir traktör fabrikasında 1980 sonrası yönetici olarak çalıştığını da ifade ederek
1960’lardan sonra ülkede üretimi yapılan traktörler ile Türkiye’nin önemli bir
aşamaya geldiğini ifade etti. Gürer, “AKP’den önce bir şey yoktu masalının iş
yaptığı sanısı bundan sonra da benzer açıklamaların olmasını olası kılıyor. Cumhuriyet
tarihini bilmeyenler sanayi ununun, şekerin, bezin ithal olduğunu ve ülkemizde
Cumhuriyetin ilk yıllarında yapılan fabrikalarla üretildiğini öğrenmeyenler, fabrika
yapan fabrika olduğu gibi tiyatro salonlarına dahi sahip şeker fabrikalarını yapmak
değil satmaktan anlayanların, ülkenin dünü-bugünü arasında kamuda sata sata bitiremedikleri
varlıkların nasıl oluştuğunu ve halen dünya Endüstri 5.0 geçmişken Endüstri 3.5’te
debelendiğini göstermemek için her türlü ifadeyi rahatlıkla kullanabiliyorlar”
diye konuştu.
- Meteoroloji Genel Müdürlüğü tarafından yapılan uyarının ardından Tekirdağ'da etkili
olan kar yağışı İstanbul'un sınırına dayandı. Meteoroloji Genel Müdürlüğü tarafından
yapılan uyarıların ardından Tekirdağ'da bu sabah saatlerinde kar yağışı etkili
oldu. Kar yağışı şehrin yüksek kesimlerini ve evin çatılarına beyaz örtü ile kaplarken,
sabah işe çıkmak için arabalarına binen sürücülerde yollarda ilerlemekte güçlük
çekti. Etkisini sürdüren kar yağışı Tekirdağ sınırındaki İstanbul'un kapısına
dayandı. (İHA)
- 'Milli Eğitim Bakanı Tekin, 24 Kasım Öğretmenler Günü dolayısıyla sosyal medya
hesabından bir mesaj yayımladı. Mesajında, 100 yıldır var olan şanlı Cumhuriyet''in
ilelebet payidar kalmasında öğretmenlerin her zaman en önemli görevi üstlendiğini
belirten Tekin, ”Aziz milletimiz, en çetin ve en mihver zamanlarda dahi görevini
özveriyle ifa eden, vatan evlatlarının yarınları için canı gönülden çalışarak
daha müreffeh bir geleceği tahayyül eden meslektaşlarımın omuzlarında yükselecek.”
değerlendirmesinde bulundu. 100 yıldır var olan şanlı Cumhuriyetimizin ilelebet
payidar kalmasında her zaman en önemli görevi üstlenen kıymetli Öğretmenim! Aziz
milletimiz, en çetin ve en mihver zamanlarda dahi görevini özveriyle ifa eden,
vatan evlatlarının yarınları için canıgönülden çalışarak daha… pic.twitter.com/074mzguYYn
— Yusuf Tekin (@Yusuf__Tekin) November 22, 2023 Bakan Tekin, şunları kaydetti:
”Türkiye Yüzyılı''nın mimarları olmanız, evlatlarımızın ülkemize faydalı bir nesil
olarak yetişmesinde sarf ettiğiniz emek ve maarif davamıza ruh katan vakur duruşunuz
için sizlere minnettarım. Uhdenize emanet edilen öğrencilerinizi, bir anne, bir
baba şefkatiyle benimseyip her daim onları düşündüğünüzü biliyorum. O sebepledir
ki ülkemizin tüm başarısı, Sayende Öğretmenim.” Tekin, Bakanlık tarafından hazırlanan
”Sayende” adlı kısa filmi de paylaştı. Sanatçılar filmde gönüllü olarak yer aldı
Milli Eğitim Bakanlığının 24 Kasım Öğretmenler Günü kutlamaları içi hazırladığı
”Sayende” adlı kısa filmde Gülen Karaman, Ziya Kürküt, Zuhal Yalçın, Sefa Zengin,
Gülçin Gülrek ve Özge İnce rol aldı. Arzu Balkan''ın seslendirdiği filmde müzik,
edebiyat, tiyatro ve sporla ilgilenen dört öğrencinin sorunlarını, kendi evlatlarının
sorunlarıymış gibi benimseyen öğretmenlerin, onların hayatlarına dokunuşu konu
edildi. Dört öğrencinin farklı alanlarda çalışma yaparken yaşadıkları zorluklar
ekrana yansıtılırken öğretmenlerinin bu sorunlara çözüm bulmak için düşüncelerine
yer verildi. Öğretmenler odasındaki mutlu finalde öğrenciler, onlara yol gösteren
öğretmenleri ile buluşup Öğretmenler Günü''nü kutladı. Tüm sanatçıların gönüllü
olarak rol aldığı projenin çekimleri Kabataş Lisesi, Maçka Mesleki ve Teknik Anadolu
Lisesi ile Nişantaşı Anadolu Lisesi''nde gerçekleştirildi.'
---
# SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1) <!-- at revision 3af7c6da5b3e1bea796ef6c97fe237538cbe6e7f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Dot Product
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
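These properties can be confirmed directly from the loaded model. The snippet below is a minimal sketch; the expected values in the comments simply restate the configuration listed above rather than independently verified output.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1")

# Maximum sequence length and embedding dimensionality, as listed above.
print(model.max_seq_length)                      # expected: 512
print(model.get_sentence_embedding_dimension())  # expected: 768
```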
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
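Because the pooling module uses the CLS token (`pooling_mode_cls_token: True`) and the similarity function is the dot product, roughly equivalent embeddings can be produced with the plain `transformers` library. The following is a hedged sketch, assuming the MPNet weights are exposed at the repo root, as is typical for single-module Sentence Transformer repositories; the example sentences are taken from the widget data above.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["Arama", "AKP'li vekilin traktör açıklamasına tepki"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

# CLS pooling, matching pooling_mode_cls_token=True in the architecture above.
embeddings = output.last_hidden_state[:, 0]

# Dot-product similarity, matching the similarity function listed above.
scores = embeddings @ embeddings.T
print(embeddings.shape, scores.shape)
```

If the two paths ever disagree, prefer the Sentence Transformers usage shown below, which applies the exact module stack stored in the repository.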
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1")
# Run inference
sentences = [
"AKP'li vekilin traktör açıklamasına tepki",
'Cumhuriyet Halk Partisi (CHP) Niğde Milletvekili Ömer Fethi Gürer, “Türkiye’de AK Parti’den önce traktör yoktu” diyen AK Parti Grup Başkanvekili ve Ankara Milletvekili Leyla Şahin Usta’ya tepki gösterdi. Usta’nın Meclis Genel Kurulu’ndaki konuşmasını, “İnkarcılığın bu kadara da pes” diyerek eleştiren CHP Milletvekili ve TBMM Tarım, Orman ve Köyişleri Komisyon Üyesi Ömer Fethi Gürer, Osmanlı Döneminde bile 4 traktörün olduğunu anımsattı. “ATATÜRK’ÜN TRAKTÖR ÜZERİNDEKİ FOTOĞRAFLARINA İYİ BAKIN” 1923 yılında Cumhuriyet kurulduğunda, Büyük Önder Mustafa Kemal Atatürk’ün ilk talimatlarından birinin de tarımda makineleşmenin gerçekleşmesi yönünde olduğunu hatırlatan Milletvekili Gürer, “Bu nedenle 221 traktör ithal edildi. Atatürk’ün de üzerinde olduğu traktör fotoğrafları arşivlere girildiğinde görülebilir” dedi. ATATÜRK ASKERE GİDEN ÇOCUKLARA TRAKTÖR EĞİTİMİ VERİLMESİNİ İSTEMİŞTİ CHP Milletvekili Ömer Fethi Gürer, Atatürk’ün askere giden köylü çocuklarına traktör kursu verilerek, ileride traktör sayısı artacağı için gençlerin köylerine döndüklerinde, traktör kullanıyor olmalarının sağlanmasını istediğini de ifade etti. 1980’LERDE TRAKTÖR ÜRETİMI HIZLA ARTTI Türkiye’de 1944 yılında 956 traktörün bulunduğuna işaret eden CHP Milletvekili Ömer Fethi Gürer, “1960 yılında ülkemizde 42 bin 136 traktör vardı ki, Türkiye o dönemde traktör üretimine de başlamıştı. 1980’lere kadar traktör üretimi hızla arttı. 1980’lerden sonra traktör fabrikalarından birinde ben de genel müdür olarak görev yaptım. Ama AK Parti Grup Başkanvekilinin sözlerini duyunca, insana ‘Bu kadar da olmaz’ dedirtiyor. Sanayi ve Teknoloji Bakanı da bu konuşma olurken Mecliste genel kurulunda idi. En azından Bakan bir düzeltme yapmalı idi. Grup Başkanvekili Usta bu sözlerinden sonra da konuştu ancak bir düzeltme yapmadı. Görünen o ki sözlerini düzeltme yerine hala öyle sanıyor. Cumhuriyet tarihi bilmemek de böyle bir şey” diye konuştu. 2000 YILINDA 1 MİLYONA YAKIN TRAKTÖR VARDI “Türkiye’de traktörün olmadığını iddia etmenin, iddia sahibinin ülkemizin dününün sanayide gelişmelerini de bilmediğini gösterir” diyen Milletvekili Gürer, “Çünkü 2000 yılına gelindiğinde ülkemizde traktör sayısı 941 bin 843 adetti. 1 tane değil, 5 tane değil, 10 tane değil, neredeyse 1 milyon traktör vardı ülkemizde” dedi. Gürer bir traktör fabrikasında 1980 sonrası yönetici olarak çalıştığını da ifade ederek 1960’lardan sonra ülkede üretimi yapılan traktörler ile Türkiye’nin önemli bir aşamaya geldiğini ifade etti. Gürer, “AKP’den önce bir şey yoktu masalının iş yaptığı sanısı bundan sonra da benzer açıklamaların olmasını olası kılıyor. Cumhuriyet tarihini bilmeyenler sanayi ununun, şekerin, bezin ithal olduğunu ve ülkemizde Cumhuriyetin ilk yıllarında yapılan fabrikalarla üretildiğini öğrenmeyenler, fabrika yapan fabrika olduğu gibi tiyatro salonlarına dahi sahip şeker fabrikalarını yapmak değil satmaktan anlayanların, ülkenin dünü-bugünü arasında kamuda sata sata bitiremedikleri varlıkların nasıl oluştuğunu ve halen dünya Endüstri 5.0 geçmişken Endüstri 3.5’te debelendiğini göstermemek için her türlü ifadeyi rahatlıkla kullanabiliyorlar” diye konuştu.',
"Milli Eğitim Bakanı Tekin, 24 Kasım Öğretmenler Günü dolayısıyla sosyal medya hesabından bir mesaj yayımladı. Mesajında, 100 yıldır var olan şanlı Cumhuriyet'in ilelebet payidar kalmasında öğretmenlerin her zaman en önemli görevi üstlendiğini belirten Tekin, ”Aziz milletimiz, en çetin ve en mihver zamanlarda dahi görevini özveriyle ifa eden, vatan evlatlarının yarınları için canı gönülden çalışarak daha müreffeh bir geleceği tahayyül eden meslektaşlarımın omuzlarında yükselecek.” değerlendirmesinde bulundu. 100 yıldır var olan şanlı Cumhuriyetimizin ilelebet payidar kalmasında her zaman en önemli görevi üstlenen kıymetli Öğretmenim! Aziz milletimiz, en çetin ve en mihver zamanlarda dahi görevini özveriyle ifa eden, vatan evlatlarının yarınları için canıgönülden çalışarak daha… pic.twitter.com/074mzguYYn — Yusuf Tekin (@Yusuf__Tekin) November 22, 2023 Bakan Tekin, şunları kaydetti: ”Türkiye Yüzyılı'nın mimarları olmanız, evlatlarımızın ülkemize faydalı bir nesil olarak yetişmesinde sarf ettiğiniz emek ve maarif davamıza ruh katan vakur duruşunuz için sizlere minnettarım. Uhdenize emanet edilen öğrencilerinizi, bir anne, bir baba şefkatiyle benimseyip her daim onları düşündüğünüzü biliyorum. O sebepledir ki ülkemizin tüm başarısı, Sayende Öğretmenim.” Tekin, Bakanlık tarafından hazırlanan ”Sayende” adlı kısa filmi de paylaştı. Sanatçılar filmde gönüllü olarak yer aldı Milli Eğitim Bakanlığının 24 Kasım Öğretmenler Günü kutlamaları içi hazırladığı ”Sayende” adlı kısa filmde Gülen Karaman, Ziya Kürküt, Zuhal Yalçın, Sefa Zengin, Gülçin Gülrek ve Özge İnce rol aldı. Arzu Balkan'ın seslendirdiği filmde müzik, edebiyat, tiyatro ve sporla ilgilenen dört öğrencinin sorunlarını, kendi evlatlarının sorunlarıymış gibi benimseyen öğretmenlerin, onların hayatlarına dokunuşu konu edildi. Dört öğrencinin farklı alanlarda çalışma yaparken yaşadıkları zorluklar ekrana yansıtılırken öğretmenlerinin bu sorunlara çözüm bulmak için düşüncelerine yer verildi. Öğretmenler odasındaki mutlu finalde öğrenciler, onlara yol gösteren öğretmenleri ile buluşup Öğretmenler Günü'nü kutladı. Tüm sanatçıların gönüllü olarak rol aldığı projenin çekimleri Kabataş Lisesi, Maçka Mesleki ve Teknik Anadolu Lisesi ile Nişantaşı Anadolu Lisesi'nde gerçekleştirildi.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
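Beyond pairwise similarity, the embeddings can be used for simple semantic search. The sketch below is illustrative only: the corpus and query are made-up Turkish snippets, and `util.dot_score` is used to rank documents because the model description above lists dot product as the similarity function.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1")

# Hypothetical corpus and query for illustration.
corpus = [
    "Tekirdağ'da kar yağışı etkili oldu, yollar beyaza büründü.",
    "Flow, B Serisi yatırım turunda 37 milyon dolar yatırım aldı.",
    "Manş Denizi'nde bir teknenin batması sonucu en az 31 göçmen öldü.",
]
query = "Tekirdağ'da kar yağışı"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Rank the corpus with dot-product scores, matching the listed similarity function.
scores = util.dot_score(query_embedding, corpus_embeddings)[0]
ranking = scores.argsort(descending=True)
for idx in ranking.tolist():
    print(f"{scores[idx].item():.2f}  {corpus[idx]}")
```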
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 62,964 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 25.67 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 439.19 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:---------------------------------|:---------------------------------|:---------------|
| <code>“Case pentru generali”</code> | <code>O teribilă afacere, pusă la cale la vîrful Armatei, a fost dezvăluită de Jurnalul Naţional în decembrie 1999. Mărimile oştirii au pus mîna pe locuinţe de lux, din cota MApN, chiar în centrul Capitalei, deşi mai aveau şi alte date de stat şi Armată. Operaţiunea, dovedită de cotidianul nostru, a fost anchetată de Parchetul Militar şi, după o primă spălare pe mîini, dosarul a fost redeschis la cinci ani de la articolele noastre. MApN • Afacerea care a ştirbit imaginea morală a armatei O teribilă afacere, pusă la cale la vîrful Armatei, a fost dezvăluită de Jurnalul Naţional în decembrie 1999. Mărimile oştirii au pus mîna pe locuinţe de lux, din cota MApN, chiar în centrul Capitalei, deşi mai aveau şi alte case date de stat şi Armată. Operaţiunea, dovedită de cotidianul nostru, a fost anchetată de Parchetul Militar şi, după o primă spălare pe mîini, dosarul a fost redeschis la cinci ani de la articolele noastre. Trecuseră, deja, cîteva luni de cînd o anonimă care a ajuns pe masa procurorilor militari, conduşi atunci de generalul Dan Voinea, dădea frisoane. Mai mulţi ofiţeri din Garnizoana Bucureşti erau scandalizaţi de faptul că un grup de generali, care-i avea în frunte pe şeful Statului Major General de la acea vreme, generalul Constantin Degeratu, obţinuse al doilea şi chiar al treilea rînd de apartamente din cota MApN, deşi 20.000 de cadre militare trăiau, atunci, în condiţii mizere. Locuinţele de serviciu vizate erau în Centrul Civic. Faptele sesizate erau descrise foarte explicit şi trimiteau direct la dovezi. Anonima fusese scrisă de cîţiva ofiţeri ce au avut acces la documente şi care se temeau foarte tare de represalii. Ca de obicei, nu se întîmpla nimic. Dezvăluirile din decembrie 1999 din Jurnalul Naţional despre acest subiect, publicate într-un serial cu nouă episoade, au impulsionat investigaţiile, atît la Parchetul Militar, cît şi în interiorul ministerului. Abia atunci, Victor Babiuc, ministrul Apărării, a ordonat verbal “verificarea aspectelor semnalate în ziarul Jurnalul Naţional din 16-20 decembrie 1999, referitoare la «repartizarea şi vînzarea locuinţelor de serviciu unor colonei şi generali cu funcţii importante»“ de către o comisie din Inspectoratul General al Ministerului Apărării Naţionale. De altfel, cînd a demisionat Victor Babiuc a recunoscut într-un interviu acordat nouă: “În legătură cu Afacerea «Case pentru generali», într-adevăr, sînt nereguli”. RAPORTUL. La începutul anului 2000, an electoral, nimeni nu se grăbea să găsească eventuali vinovaţi. În toamna acelui an, cotidianul nostru a oferit şi dovada afacerii. Comisia Inspectoratului MApN confirma rezultatele articolelor noastre şi arăta şi adevărata amploare a operaţiunii: peste 60 de generali şi colonei implicaţi. Raportul Inspectoratului MApN a fost dosit la cel mai înalt nivel, dar am reuşit să intrăm în posesia lui şi, astfel, să-l publicăm. Atunci, în urma articolelor, la adresa subsemnatului au fost făcute presiuni foarte mari. Inclusiv insinuări adresate vecinilor că aş fi traficant de droguri, iar locuinţa mi-a fost spartă demonstrativ. Nu lipsea nimic. Investigaţiile făcute de ofiţerii Inspectoratului MApN spuneau că locuinţe de serviciu din cota Armatei au fost luate la preţ de nimic, cu mult sub cel al pieţei, de generali şi colonei cu funcţiile cele mai mari în Ministerul Apărării Naţionale, deşi aceştia nu aveau dreptul, conform legilor, şi dăduseră declaraţii în fals la notariate. 
“Apreciem că repartizarea unor locuinţe de serviciu unor ofiţeri care deţin sau au deţinut şi înstrăinat copiilor locuinţe proprietate personală reprezintă încălcarea legislaţiei în vigoare”, era una dintre concluziile raportului. Mai mult, unii îşi cumpăraseră în rate, deşi nici acest lucru nu era permis de lege, potrivit ofiţerilor Inspectoratului MApN. Pe lista Inspectoratului se afla şi viitorul şef al Statului Major General, generalul Eugen Bădălan. PUNCT. Pentru declanşarea şi derularea afacerii s-au făcut presiuni şi asupra ofiţerilor care aveau atribuţii de verificare a legalităţii repartiţiei din Comenduirea Garnizoanei Bucureşti, care au priceput imediat: “A aprobat ministrul!”. Deloc surprinzătoare a fost viteza cu care în noiembrie 2000 dosarul “de la Parchetul Militar” a şi fost închis. Nimeni nu era vinovat. “Actele premergătoare administrate în cauză nu au confirmat învinuirile tendenţioase aduse unor cadre militare, cu funcţii de conducere, din structurile MApN. De asemenea, nu este sarcina urmăririi penale de a lua poziţii critice faţă de anumite iniţiative legislative ale ministerului, ori de a interpreta corectitudinea sau moralitatea unuia sau altuia din actele normative ce au stat la baza procesului de vînzare a locuinţelor de serviciu din administrarea MapN”, spunea rezoluţia semnată de procurorul militar, col. Dumitru Carp. DE LA CAPĂT. La bilanţul Serviciului de Telecomunicaţii Speciale pe anul 2001, şeful serviciului, Tudor Tănase, a prezentat ce a găsit Curtea de Conturi în “ograda” pe care tocmai o prelua. Astfel, Curtea de Conturi atrăgea atenţia asupra achiziţiei de către STS tocmai a apartamentului de patru camere din Cluj-Napoca pe care generalul Constantin Degeratu, fostul şef al Statului Major General, o deţinea. E taman dovada că generalul Degeratu n-avea cum să-şi cumpere locuinţa de serviciu în 1999. Conducerii din momentul achiziţiei a STS i se imputa faptul că suma de 42.000 de dolari, cu care s-a plătit apartamentul, depăşea cotaţia pieţei de la acea vreme, iar legislaţia privind achiziţionarea de imobile pentru STS a fost încălcată. De atunci şi pînă în 2005 nu s-a mai întîmplat nimic. De pe poziţia de consilier de stat la Administraţia Prezidenţială, generalul Constantin Degeratu ne declara că nimic n-a fost în neregulă. “Nu a existat nici o ilegalitate. La mutarea mea de la Cluj aici, în Capitală, am stat un timp, provizoriu, cu familia, la un cămin de garnizoană. Apoi, la un moment dat, s-a pus chiar problema trecerii mele în rezervă, că, dacă nu, mă mut definitiv în Bucureşti. Mai întîi mi s-a oferit o altă locuinţă, care, chiar dacă se afla într-o poziţie centrală, nu i-a plăcut soţiei mele. A doua, cea în care locuim şi acum, i-a plăcut, chiar dacă avea unele probleme. Ne-am mutat, iar apoi, ani la rînd, a tot trebuit să facem reparaţii pentru că, practic, ori de cate ori ploua, se produceau infiltraţii. Asta este locuinţa cu pricina. Ştiu că la un moment dat a existat o cercetare a Parchetului, dar, concret, nimeni nu a fost acuzat de vreo încălcare a legii”, spunea Degeratu în 2005. Locuinţa de care se plîngea generalul este un apartament duplex pe B-dul Unirii. Poveste fără sfîrşit În 2005, accesul la Dosarul “Case pentru generali”, era deja închis. Generalul Samoilă Joarză, şeful Secţiei Parchetelor Militare, mirat că ne-aducem aminte de o aşa anchetă, ne-a spus că nu poate să ne permită accesul la el fiindcă are documente secrete. Mai mult, ne-a mai zis că, dacă tot întrebăm de el, îl va reciti pentru a vedea cum s-au pus soluţiile. 
La scurt timp, şeful procurorilor militari a decis infirmarea soluţiei de NUP date în anul 2000. Fapt care demonstrează că şi generalului i s-a părut ceva în neregulă în cazul respectiv. Joarză ne-a declarat atunci că Jurnalul Naţional a avut mai multe informaţii decît procurorii militari. Din 2005 şi pînă astăzi au mai trecut încă trei ani. Dosarul a fost repartizat la procurori militari care erau în prag de trecere în rezervă şi ancheta a continuat, normal, cu sincope. Un alt procuror, o nouă familiarizare cu cazul şi tot aşa. Cert este că la nouă ani de cînd am publicat primul articol nimeni nu a fost găsit vinovat, nici măcar moral, pentru afacerea cu locuinţele de serviciu ale Armatei. Şi nimeni, după toate probabilităţile, nici n-o să fie. Citiţi şi: Monarhistul a devenit republican în trei zile Epopeea ”Mineriadei” din ianuarie 1999 Reportaj fără vestă antiglonţ Bebe Carabină s-a înţepat la Ghimpaţi Prindeţi bestia! La 20 august 1996, Jurnalul Naţional a stîrnit un iureş politic preluînd un interviu acordat de Emil Constantinescu revistei Micro Magazin, revistă de limbă română ce apărea în Statele Unite ale Americii. Interviul fusese luat în timpul unei vizite pe care candidatul de atunci al CDR la Preşedinţie o făcuse în comunităţile româneşti Los Angeles, Chicago şi New York.Aprilie 1998, primăvara în care Armata, M.I. şi serviciile secrete au fost implicate într-un scandal imens, care a ricoşat şi în clasa politică: Ţigareta II. Atunci, Jurnalul Naţional a relatat zi de zi amănuntele acestei afaceri extrem de încîlcite. Aflaţi, de multe ori, cu un pas înaintea anchetatorilor, reporterii noştri au dezvăluit informaţii spectaculoase pe care autorităţile le-ar fi dorit ascunse pentru totdeauna.Ianuarie 1999, luna în care România s-a aflat în pragul dezastrului. Ieşiţi din bezna galeriilor, minerii din Valea Jiului s-au răzvrătit împotriva Guvernului şi au fost la un pas de a arunca ţara în haos. Au fost zile şi nopţi dramatice, în cursul cărora reporterii Jurnalului Naţional s-au aflat în ”linia întîi” a evenimentelor, martori ai dezastrului de la Costeşti, dar şi ai ”Păcii de la Cozia”.”De nouă ani, Iugoslavia nu mai are pace. Se strecoară printre războaie, dar nu vrea să recunoască. Nu vrea să se lase doborîtă. Îmbină pacea şi războiul aşa de bine, încît nu mai ştii să faci diferenţa”.”Ghimpaţi, un sătuc liniştit din Giurgiu, a intrat în istorie cu tot cu vajnicii săi paznici comunali, care au reuşit performanţa de a-l prinde pe unul dintre cei mai căutaţi bandiţi din România. Ce n-a putut face Poliţia atîta amar de vreme s-a întîmplat sîmbătă noaptea datorită sănătosului spirit al ţăranului român.””Jurnalul Naţional oferă recompensă 5 milioane de lei pentru informaţiile ce vor duce la prinderea şoferului criminal” – anunţa ziarul de luni, 17 iulie 2000, anul VIII, nr. 2178. Campania a avut succes. Bestiile care au accidentat un copil pe o stradă din Bucureşti, apoi l-au răpit şi l-au lăsat să moară pe un teren viran în zona Vitan au fost identificate cu ajutorul martorilor.</code> | <code>1.0</code> |
| <code>Filenin Efeleri'nin rakibi Belarus</code> | <code>CEV Avrupa Altın Ligi'nde ilk etap karşılaşmalarında 3'te 3 yapan ”Filenin Efeleri”, 4-6 Haziran 2021 tarihlerinde Portekiz'de oynanacak ikinci etap karşılaşmaları öncesinde son antrenmanını İstanbul'da yaptı. TVF Burhan Felek Vestel Voleybol Salonu'nda başantrenör Nedim Özbey yönetiminde yapılan antrenmanda milliler, hücum ve savunma üzerine taktik uygulamalar çalıştı. Milli kafile, ikinci etap karşılaşmalarını oynamak üzere bugün Portekiz'e hareket edecek. C Grubu'nda Belarus, Çekya ve Portekiz ile mücadele edecek milli takımın maçları TRT Spor ve TRT Spor Yıldız'da yayınlanacak. 4 Haziran Cuma: 18.00 Türkiye-Belarus 5 Haziran Cumartesi: 20.00 Portekiz-Türkiye 6 Haziran Pazar: 17.00 Çekya-Türkiye Statü İki ayrı turnuva şeklinde düzenlenecek Avrupa Altın Ligi'nin sonunda gruplarını ilk sırada tamamlayan 3 takım ve final grubuna ev sahipliği yapacak ülke (Belçika), Dörtlü Final oynamaya hak kazanacak.</code> | <code>1.0</code> |
| <code>Ankara için fırtına ve kuvvetli yağış uyarısı</code> | <code>Meteoroloji Genel Müdürlüğü tarafından yapılan son değerlendirmelere göre, yarın kuvvetli yağış beklenen Ankara'da rüzgarın da güney (lodos) yönlerden fırtına (50-70 km/saat), yer yer kuvvetli fırtına şeklinde esmesi bekleniyor. Ankara Valiliği, yarından itibaren beklenen sağanak ve kuvvetli fırtına nedeniyle dikkatli ve tedbirli olunması uyarısında bulundu. Valilikten yapılan açıklamada, Meteoroloji Genel Müdürlüğünden alınan son verilere göre, yarından itibaren Balkanlar üzerinden gelecek yağışlı havanın etkisiyle Ankara genelinde sağanak ve yer yer gök gürültülü sağanak beklendiği, yağışların cumartesi, pazar ve pazartesi yer yer kuvvetli olacağının tahmin edildiği belirtildi. Açıklamada, ”Cumartesi günü rüzgarın güney yönlerden fırtına, yer yer kısa süreli kuvvetli fırtına, yüksek kesimlerde tam fırtına şeklinde eseceği ve mevsim normallerinin üzerinde olması beklenen hava sıcaklıklarının pazar gününden itibaren yağışlarla beraber hissedilir derecede azalarak mevsim normalleri civarına düşmesi bekleniyor. Rüzgar ve fırtına sebebiyle ulaşımda aksamalar, çatı uçması, ağaç ve direk devrilmesi, soba ve doğal gaz kaynaklı zehirlenmeler gibi olumsuzluklara karşı dikkatli ve tedbirli olunmalıdır.” uyarısına yer verildi. AFAD, SMS ile uyardı Afet ve Acil Durum Yönetimi Başkanlığı (AFAD) ise cep telefonlarına gönderdiği SMS'te, ”Meteorolojiye göre yarın Ankara'da kuvvetli lodos ve yağış bekleniyor. Baca zehirlenmesi, ulaşımda aksamalar ve çatı uçmasına karşı dikkatli olun.” uyarısı yaptı.</code> | <code>1.0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
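For illustration only (this is not taken from the original training script), a loss configured with these parameters can be instantiated in `sentence-transformers` roughly as follows; the base model name below is a placeholder, not the checkpoint this card describes.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim
# Placeholder base model; substitute the actual checkpoint.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# Mirrors the parameters listed above: scale=20.0 and cosine similarity.
loss = MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=cos_sim)
```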
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
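As a rough orientation (not part of the original run), the non-default hyperparameters above map onto a trainer configuration along these lines; `output_dir` is a placeholder and all other settings keep their defaults.
```python
from sentence_transformers import SentenceTransformerTrainingArguments
# Sketch of the non-default settings listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    multi_dataset_batch_sampler="round_robin",
)
```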
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.1270 | 500 | 0.3574 |
| 0.2541 | 1000 | 0.3181 |
| 0.3811 | 1500 | 0.2846 |
| 0.5081 | 2000 | 0.2585 |
| 0.6352 | 2500 | 0.2455 |
| 0.7622 | 3000 | 0.235 |
| 0.8892 | 3500 | 0.2324 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/01-ai_-_Yi-6B-Chat-8bits | RichardErkhov | null | [
"safetensors",
"llama",
"arxiv:2403.04652",
"arxiv:2311.16502",
"arxiv:2401.11944",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,728,215,196,000 | 2024-10-06T11:50:00 | 6 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Yi-6B-Chat - bnb 8bits
- Model creator: https://huggingface.co/01-ai/
- Original model: https://huggingface.co/01-ai/Yi-6B-Chat/
Original model description:
---
license: apache-2.0
widget:
- example_title: "Yi-34B-Chat"
text: "hi"
output:
text: " Hello! How can I assist you today?"
- example_title: "Yi-34B"
text: "There's a place where time stands still. A place of breath taking wonder, but also"
output:
text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?"
pipeline_tag: text-generation
new_version: 01-ai/Yi-1.5-6B-Chat
---
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
<img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg">
</picture>
</br>
</br>
<div style="display: inline-block;">
<a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml">
<img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg">
</a>
</div>
<div style="display: inline-block;">
<a href="mailto:[email protected]">
<img src="https://img.shields.io/badge/✉️[email protected]">
</a>
</div>
</div>
<div align="center">
<h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3>
</div>
<p align="center">
🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a>
</p>
<p align="center">
👩🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a>
</p>
<p align="center">
👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="https://github.com/01-ai/Yi/issues/43" target="_blank"> 💬 WeChat </a>
</p>
<p align="center">
📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a>
</p>
<p align="center">
📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a>
</p>
<!-- DO NOT REMOVE ME -->
<hr>
<details open>
<summary><b>📕 Table of Contents</b></summary>
- [What is Yi?](#what-is-yi)
- [Introduction](#introduction)
- [Models](#models)
- [Chat models](#chat-models)
- [Base models](#base-models)
- [Model info](#model-info)
- [News](#news)
- [How to use Yi?](#how-to-use-yi)
- [Quick start](#quick-start)
- [Choose your path](#choose-your-path)
- [pip](#quick-start---pip)
- [docker](#quick-start---docker)
- [llama.cpp](#quick-start---llamacpp)
- [conda-lock](#quick-start---conda-lock)
- [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [FAQ](#faq)
- [Learning hub](#learning-hub)
- [Why Yi?](#why-yi)
- [Ecosystem](#ecosystem)
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
- [Benchmarks](#benchmarks)
- [Base model performance](#base-model-performance)
- [Chat model performance](#chat-model-performance)
- [Tech report](#tech-report)
- [Citation](#citation)
- [Who can use Yi?](#who-can-use-yi)
- [Misc.](#misc)
- [Acknowledgements](#acknowledgments)
- [Disclaimer](#disclaimer)
- [License](#license)
</details>
<hr>
# What is Yi?
## Introduction
- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/).
- 🙌 Targeted as a bilingual language model and trained on a 3T-token multilingual corpus, the Yi series models rank among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
- Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
- Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
- 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.
<details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br>
> 💡 TL;DR
>
> The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama.
- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
- Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
- Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
- However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
- As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure.
- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## News
<details>
<summary>🔥 <b>2024-07-29</b>: The <a href="https://github.com/Haijian06/Yi/tree/main/Cookbook">Yi Cookbook 1.0 </a> is released, featuring tutorials and examples in both Chinese and English.</summary>
</details>
<details>
<summary>🎯 <b>2024-05-13</b>: The <a href="https://github.com/01-ai/Yi-1.5">Yi-1.5 series models </a> are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.</summary>
</details>
<details>
<summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary>
</details>
<details>
<summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary>
</details>
<details open>
<summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>
<br>
In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance.
</details>
<details open>
<summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary>
<br>
<code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
</details>
<details open>
<summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary>
<br>
<code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li>
</details>
<details>
<summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary>
<br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.
- `Yi-34B-Chat`
- `Yi-34B-Chat-4bits`
- `Yi-34B-Chat-8bits`
- `Yi-6B-Chat`
- `Yi-6B-Chat-4bits`
- `Yi-6B-Chat-8bits`
You can try some of them interactively at:
- [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Replicate](https://replicate.com/01-ai)
</details>
<details>
<summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary>
</details>
<details>
<summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>
<br>Application form:
- [English](https://cn.mikecrm.com/l91ODJf)
- [Chinese](https://cn.mikecrm.com/gnEZjiQ)
</details>
<details>
<summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary>
<br>This release contains two base models with the same parameter sizes as the previous
release, except that the context window is extended to 200K.
</details>
<details>
<summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary>
<br>The first public release contains two bilingual (English/Chinese) base models
with the parameter sizes of 6B and 34B. Both of them are trained with 4K
sequence length and can be extended to 32K during inference time.
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## Models
Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.
If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment).
### Chat models
| Model | Download |
|---|---|
|Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat) |
|Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-4bits) |
|Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-34B-Chat-8bits) |
|Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat) |
|Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-4bits) |
|Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) |
<sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). </sup></sub>
### Base models
| Model | Download |
|---|---|
|Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) |
|Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits)|
|Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-9B)|
|Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) |
|Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) |
|Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-6B-Chat-8bits) |
<sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub>
### Model info
- For chat and base models
<table>
<thead>
<tr>
<th>Model</th>
<th>Intro</th>
<th>Default context window</th>
<th>Pretrained tokens</th>
<th>Training Data Date</th>
</tr>
</thead>
<tbody><tr>
<td>6B series models</td>
<td>They are suitable for personal and academic use.</td>
<td rowspan="3">4K</td>
<td>3T</td>
<td rowspan="3">Up to June 2023</td>
</tr>
<tr>
<td>9B series models</td>
<td>It is the best at coding and math in the Yi series models.</td>
<td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td>
</tr>
<tr>
<td>34B series models</td>
<td>They are suitable for personal, academic, and commercial (particularly small and medium-sized enterprises) purposes. They offer a cost-effective solution with emergent abilities.</td>
<td>3T</td>
</tr>
</tbody></table>
- For chat models
<details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary>
<ul>
<br>The released chat model has been trained exclusively with Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher-quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.
<br>However, this higher diversity might amplify certain existing issues, including:
<li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucinations that are not based on accurate data or logical reasoning.</li>
<li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.</li>
<li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li>
<li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help strike a balance between creativity and coherence in the model's outputs; a minimal sketch of such an adjustment follows this section.</li>
</ul>
</details>
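As noted in the limitations above, sampling parameters strongly influence the coherence/diversity trade-off. The snippet below is a minimal sketch (not taken from the official Yi scripts) that mirrors the quick-start inference code later in this card; the model path and the specific sampling values are illustrative placeholders only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "<your-model-path>"  # placeholder, same convention as the quick start below
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
).eval()
messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
# Illustrative sampling settings: lower temperature/top_p for more deterministic,
# coherent output; raise them for more diverse, creative output.
output_ids = model.generate(
    input_ids.to(model.device),
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=40,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```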
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# How to use Yi?
- [Quick start](#quick-start)
- [Choose your path](#choose-your-path)
- [pip](#quick-start---pip)
- [docker](#quick-start---docker)
- [conda-lock](#quick-start---conda-lock)
- [llama.cpp](#quick-start---llamacpp)
- [Web demo](#web-demo)
- [Fine-tuning](#fine-tuning)
- [Quantization](#quantization)
- [Deployment](#deployment)
- [FAQ](#faq)
- [Learning hub](#learning-hub)
## Quick start
Getting up and running with Yi models is simple with multiple choices available.
### Choose your path
Select one of the following paths to begin your journey with Yi!

#### 🎯 Deploy Yi locally
If you prefer to deploy Yi models locally,
- 🙋♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:
- [pip](#quick-start---pip)
- [Docker](#quick-start---docker)
- [conda-lock](#quick-start---conda-lock)
- 🙋♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp).
#### 🎯 Not to deploy Yi locally
If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.
##### 🙋♀️ Run Yi with APIs
If you want to explore more features of Yi, you can adopt one of these methods:
- Yi APIs (Yi official)
- [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access!
- [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate)
##### 🙋♀️ Run Yi in playground
If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:
- [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official)
- Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
- [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate)
##### 🙋♀️ Chat with Yi
If you want to chat with Yi, you can use one of these online services, which offer a similar user experience:
- [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face)
- No registration is required.
- [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta)
- Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)).
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quick start - pip
This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference.
#### Step 0: Prerequisites
- Make sure Python 3.10 or a later version is installed.
- If you want to run other Yi models, see [software and hardware requirements](#deployment).
#### Step 1: Prepare your environment
To set up the environment and install the required packages, execute the following command.
```bash
git clone https://github.com/01-ai/Yi.git
cd Yi
pip install -r requirements.txt
```
#### Step 2: Download the Yi model
You can download the weights and tokenizer of Yi models from the following sources:
- [Hugging Face](https://huggingface.co/01-ai)
- [ModelScope](https://www.modelscope.cn/organization/01ai/)
- [WiseModel](https://wisemodel.cn/organization/01.AI)
#### Step 3: Perform inference
You can perform inference with Yi chat or base models as below.
##### Perform inference with Yi chat model
1. Create a file named `quick_start.py` and copy the following content to it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = '<your-model-path>'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
2. Run `quick_start.py`.
```bash
python quick_start.py
```
Then you can see an output similar to the one below. 🥳
```bash
Hello! How can I assist you today?
```
##### Perform inference with Yi base model
- Yi-34B
The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).
You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo).
```bash
python demo/text_generation.py --model <your-model-path>
```
Then you can see an output similar to the one below. 🥳
<details>
<summary>Output. ⬇️ </summary>
<br>
**Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry,
**Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...
</details>
- Yi-9B
Input
```bash
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_DIR = "01-ai/Yi-9B"
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)
input_text = "# write the quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Output
```bash
# write the quick sort algorithm
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
# test the quick sort algorithm
print(quick_sort([3, 6, 8, 10, 1, 2, 1]))
```
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quick start - Docker
<details>
<summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary>
<br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference.
<h4>Step 0: Prerequisites</h4>
<p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p>
<h4> Step 1: Start Docker </h4>
<pre><code>docker run -it --gpus all \
-v <your-model-path>:/models \
ghcr.io/01-ai/yi:latest
</code></pre>
<p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p>
<h4>Step 2: Perform inference</h4>
<p>You can perform inference with Yi chat or base models as below.</p>
<h5>Perform inference with Yi chat model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>model_path = '<your-model-mount-path>'</code> instead of <code>model_path = '<your-model-path>'</code>.</p>
<h5>Perform inference with Yi base model</h5>
<p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p>
<p><strong>Note</strong> that the only difference is to set <code>--model <your-model-mount-path></code> instead of <code>--model <your-model-path></code>.</p>
</details>
### Quick start - conda-lock
<details>
<summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary>
<br>
You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies.
<br>
To install the dependencies, follow these steps:
1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>.
2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.
</details>
### Quick start - llama.cpp
<a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">The following tutorial </a> will guide you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.
<details>
<summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary>
<br><a href="https://github.com/01-ai/Yi/blob/main/docs/README_llama.cpp.md">This tutorial</a> guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p>
- [Step 0: Prerequisites](#step-0-prerequisites)
- [Step 1: Download llama.cpp](#step-1-download-llamacpp)
- [Step 2: Download Yi model](#step-2-download-yi-model)
- [Step 3: Perform inference](#step-3-perform-inference)
#### Step 0: Prerequisites
- This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.
- Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine.
#### Step 1: Download `llama.cpp`
To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command.
```bash
git clone [email protected]:ggerganov/llama.cpp.git
```
#### Step 2: Download Yi model
2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command.
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF
```
2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command.
```bash
git-lfs pull --include yi-chat-6b.Q2_K.gguf
```
#### Step 3: Perform inference
To perform inference with the Yi model, you can use one of the following methods.
- [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal)
- [Method 2: Perform inference in web](#method-2-perform-inference-in-web)
##### Method 1: Perform inference in terminal
To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command.
> ##### Tips
>
> - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model.
>
> - By default, the model operates in completion mode.
>
> - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage.
```bash
make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e
...
How do you feed your pet fox? Please answer this question in 6 simple steps:
Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables.
Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day.
Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise.
Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress.
Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections.
Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care.
...
```
Now you have successfully asked a question to the Yi model and got an answer! 🥳
##### Method 2: Perform inference in web
1. To initialize a lightweight and swift chatbot, run the following command.
```bash
cd llama.cpp
./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf
```
Then you can get an output like this:
```bash
...
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 5000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Pro
ggml_metal_init: picking default device: Apple M2 Pro
ggml_metal_init: ggml.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M2 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67)
llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67)
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 159.19 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67)
Available slots:
-> Slot 0 - max context: 2048
llama server listening at http://0.0.0.0:8080
```
2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar.

3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer.

</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Web demo
You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this scenario).
[Step 1: Prepare your environment](#step-1-prepare-your-environment).
[Step 2: Download the Yi model](#step-2-download-the-yi-model).
Step 3. To start a web service locally, run the following command.
```bash
python demo/web_demo.py -c <your-model-path>
```
You can access the web UI by entering the address provided in the console into your browser.

<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Fine-tuning
```bash
bash finetune/scripts/run_sft_Yi_6b.sh
```
Once finished, you can compare the finetuned model and the base model with the following command:
```bash
bash finetune/scripts/run_eval.sh
```
<details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul>
### Finetune code for Yi 6B and 34B
#### Preparation
##### From Image
By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model.
You can also prepare your customized dataset in the following `jsonl` format:
```json
{ "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." }
```
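If you are assembling such a file programmatically, a minimal sketch (with hypothetical example records) might look like this:
```python
import json
# Hypothetical records in the prompt/chosen format shown above.
records = [
    {"prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi."},
    {"prompt": "Human: What can you do? Assistant:", "chosen": "I can chat with you and answer questions."},
]
# Write one JSON object per line (jsonl); the file names match the container mounts below.
for path, rows in (("train.jsonl", records), ("eval.jsonl", records[:1])):
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")
```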
And then mount them in the container to replace the default ones:
```bash
docker run -it \
-v /path/to/save/finetuned/model/:/finetuned-model \
-v /path/to/train.jsonl:/yi/finetune/data/train.json \
-v /path/to/eval.jsonl:/yi/finetune/data/eval.json \
ghcr.io/01-ai/yi:latest \
bash finetune/scripts/run_sft_Yi_6b.sh
```
##### From Local Server
Make sure you have conda. If not, use
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
source ~/.bashrc
```
Then, create a conda env:
```bash
conda create -n dev_env python=3.10 -y
conda activate dev_env
pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7
```
#### Hardware Setup
For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.
For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh).
A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.
#### Quick Start
Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like:
```bash
|-- $MODEL_PATH
| |-- config.json
| |-- pytorch_model-00001-of-00002.bin
| |-- pytorch_model-00002-of-00002.bin
| |-- pytorch_model.bin.index.json
| |-- tokenizer_config.json
| |-- tokenizer.model
| |-- ...
```
Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static.
```bash
|-- $DATA_PATH
| |-- data
| | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet
| | |-- test-00000-of-00001-8c7c51afc6d45980.parquet
| |-- dataset_infos.json
| |-- README.md
```
`finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG)
```bash
|-- $DATA_PATH
|--data
|-- train.jsonl
|-- eval.jsonl
```
`cd` into the scripts folder, copy and paste the script, and run. For example:
```bash
cd finetune/scripts
bash run_sft_Yi_6b.sh
```
For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes.
For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.
#### Evaluation
```bash
cd finetune/scripts
bash run_eval.sh
```
Then you'll see the answer from both the base model and the finetuned model.
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Quantization
#### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/gptq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>
#### GPT-Q quantization
[GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization)
method. It saves memory and provides potential speedups while retaining the accuracy
of the model.
Yi models can be GPT-Q quantized without much effort.
We provide a step-by-step tutorial below.
To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and
[exllama](https://github.com/turboderp/exllama).
And the huggingface transformers has integrated optimum and auto-gptq to perform
GPTQ quantization on language models.
##### Do Quantization
The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization:
```bash
python quant_autogptq.py --model /base_model \
--output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```
##### Run Quantized Model
You can run a quantized model using the `eval_quantized_model.py`:
```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
#### AWQ
```bash
python quantization/awq/quant_autoawq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/awq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
<details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul>
#### AWQ quantization
[AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization)
method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.
Yi models can be AWQ quantized without much effort.
We provide a step-by-step tutorial below.
To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).
##### Do Quantization
The `quant_autoawq.py` script is provided for you to perform AWQ quantization:
```bash
python quant_autoawq.py --model /base_model \
--output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code
```
##### Run Quantized Model
You can run a quantized model using the `eval_quantized_model.py`:
```bash
python eval_quantized_model.py --model /quantized_model --trust_remote_code
```
</ul>
</details>
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Deployment
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
#### Software requirements
Before using Yi quantized models, make sure you've installed the correct software listed below.
| Model | Software
|---|---
Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi)
Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation)
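Once the matching packages are installed, the quantized chat models listed earlier can, as noted in the quick start, be loaded directly through `AutoModelForCausalLM` (transformers ≥ 4.35.0). The sketch below is illustrative rather than an official recipe; the repository id is one of the 8-bit checkpoints from the table above.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# One of the quantized chat checkpoints listed in the "Chat models" table above.
model_id = "01-ai/Yi-6B-Chat-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
# Since transformers 4.35.0, GPTQ/AWQ checkpoints load directly via AutoModelForCausalLM,
# provided the corresponding GPTQ/AWQ runtime and CUDA are installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
).eval()
```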
#### Hardware requirements
Before deploying Yi in your environment, make sure your hardware meets the following requirements.
##### Chat models
| Model | Minimum VRAM | Recommended GPU Example |
|:----------------------|:--------------|:-------------------------------------:|
| Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB)<br> 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB)<br> 1 x A800 (80GB) |
| Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB)<br> 1 x A800 (40 GB) |
Below are detailed minimum VRAM requirements under different batch use cases.
| Model | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB |
| Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB |
| Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB |
| Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB |
| Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB |
| Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB |
##### Base models
| Model | Minimum VRAM | Recommended GPU Example |
|----------------------|--------------|:-------------------------------------:|
| Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) |
| Yi-6B-200K | 50 GB | 1 x A800 (80 GB) |
| Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) |
| Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) |
| Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### FAQ
<details>
<summary> If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️</summary>
<br>
#### 💡Fine-tuning
- <strong>Base model or Chat model - which to fine-tune?</strong>
<br>The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.
- If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the Base model could be your go-to choice.
- On the other hand, if your fine-tuning data is not quite as extensive, opting for the Chat model might be a more fitting choice.
- It is generally advisable to fine-tune both the Base and Chat models, compare their performance, and then pick the model that best aligns with your specific requirements.
- <strong>Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference?</strong>
<br>
The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.
  - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.
- The Base model's fine-tuning is more versatile, with a relatively high performance potential.
- If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to.
- If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet.
#### 💡Quantization
- <strong>Quantized model versus original model - what is the performance gap?</strong>
  - The performance variance is largely contingent on the quantization method employed and the specific use cases of these models. For instance, for models released by the official AWQ project, quantization might result in a minor drop of a few percentage points on benchmarks.
- Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results.
#### 💡General
- <strong>Where can I source fine-tuning question answering datasets?</strong>
- You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) readily available.
- Additionally, Github offers fine-tuning frameworks, such as [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), which integrates pre-made datasets.
- <strong>What is the GPU memory requirement for fine-tuning Yi-34B FP16?</strong>
<br>
The GPU memory needed for fine-tuning Yi-34B in FP16 hinges on the fine-tuning method employed. For full-parameter fine-tuning, you'll need 8 GPUs, each with 80 GB of memory; more economical approaches such as LoRA require far less. For more details, check out [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance.
- <strong>Are there any third-party platforms that support chat functionality for the Yi-34b-200k model?</strong>
<br>
If you're looking for third-party Chats, options include [fireworks.ai](https://fireworks.ai/login?callbackURL=https://fireworks.ai/models/fireworks/yi-34b-chat).
</details>
### Learning hub
<details>
<summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary>
<br>
Welcome to the Yi learning hub!
Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.
The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!
At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.
With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 🥳
#### Tutorials
##### Blog tutorials
| Deliverable | Date | Author |
| ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ |
| [使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐](https://mp.weixin.qq.com/s/Ri2ap9_5EMzdfiBhSSL_MQ) | 2024-05-20 | [苏洋](https://github.com/soulteary) |
| [使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words-s](https://blog.csdn.net/freewebsys/article/details/134698597?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-17-134698597-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-20 | [fly-iot](https://gitee.com/fly-iot) |
| [Yi-VL 最佳实践](https://modelscope.cn/docs/yi-vl最佳实践) | 2024-05-20 | [ModelScope](https://github.com/modelscope) |
| [一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型](https://mp.weixin.qq.com/s/ntMs2G_XdWeM3I6RUOBJrA) | 2024-05-13 | [Second State](https://github.com/second-state) |
| [零一万物开源Yi-1.5系列大模型](https://mp.weixin.qq.com/s/d-ogq4hcFbsuL348ExJxpA) | 2024-05-13 | [刘聪](https://github.com/liucongg) |
| [零一万物Yi-1.5系列模型发布并开源! 34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦!](https://mp.weixin.qq.com/s/3wD-0dCgXB646r720o8JAg) | 2024-05-13 | [ModelScope](https://github.com/modelscope) |
| [Yi-34B 本地部署简单测试](https://blog.csdn.net/arkohut/article/details/135331469?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135331469-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上)](https://blog.csdn.net/weixin_53443275/article/details/136091398?ops_request_misc=%7B%22request%5Fid%22%3A%22171636390616800185813639%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636390616800185813639&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-5-136091398-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) |
| [驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇)](https://blog.csdn.net/weixin_53443275/article/details/136096309) | 2024-05-13 | [Words worth](https://blog.csdn.net/weixin_53443275?type=blog) |
| [Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型](https://mp.weixin.qq.com/s/bBgzGJvUqIohodcy9U-pFw) | 2024-05-13 | AI工程师笔记 |
| [使用零一万物 200K 模型和 Dify 快速搭建模型应用](https://zhuanlan.zhihu.com/p/686774859) | 2024-05-13 | [苏洋](https://github.com/soulteary) |
| [(持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用](https://zhuanlan.zhihu.com/p/671549900) | 2024-05-13 | [苏洋](https://github.com/soulteary) |
| [Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探](https://mp.weixin.qq.com/s/WaygSfn5T8ZPB1mPdGADEQ) | 2024-05-11 | 江湖评谈 |
| [技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台)](https://blog.csdn.net/ucloud2012/article/details/137187469) | 2024-05-11 | [MumuLab](https://blog.csdn.net/ucloud2012?type=blog) |
| [多模态大模型Yi-VL-plus体验 效果很棒](https://zhuanlan.zhihu.com/p/694736111) | 2024-04-27 | [大家好我是爱因](https://www.zhihu.com/people/iamein) |
| [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 words-s](https://blog.csdn.net/freewebsys/article/details/134725765?ops_request_misc=%7B%22request%5Fid%22%3A%22171636356716800211598950%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636356716800211598950&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-9-134725765-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-27 | [fly-iot](https://gitee.com/fly-iot) |
| [Getting Started with Yi-1.5-9B-Chat](https://www.secondstate.io/articles/yi-1.5-9b-chat/) | 2024-04-27 | [Second State](https://github.com/second-state) |
| [基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记](https://mp.weixin.qq.com/s/_ea6g0pzzeO4WyYtuWycWQ) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) |
| [【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版](https://blog.csdn.net/alarey/article/details/137769471?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-16-137769471-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-04-21 | [My的梦想已实现](https://blog.csdn.net/alarey?type=blog) |
| [【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words-s,vllm要求算力在7以上的显卡就可以](https://blog.csdn.net/freewebsys/article/details/134754086) | 2024-03-22 | [fly-iot](https://gitee.com/fly-iot) |
| [零一万物大模型部署+微调总结](https://blog.csdn.net/v_wus/article/details/135704126?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-18-135704126-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-22 | [v_wus](https://blog.csdn.net/v_wus?type=blog) |
| [零一万物Yi大模型vllm推理时Yi-34B或Yi-6bchat重复输出的解决方案](https://blog.csdn.net/qq_39667443/article/details/136028776?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-6-136028776-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [郝铠锋](https://blog.csdn.net/qq_39667443?type=blog) |
| [Yi-34B微调训练](https://blog.csdn.net/lsjlnd/article/details/135336984?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-12-135336984-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-03-02 | [lsjlnd](https://blog.csdn.net/lsjlnd?type=blog) |
| [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) |
| [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) |
| [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) |
| [零一科技Yi-34B Chat大模型环境搭建&推理](https://blog.csdn.net/zzq1989_/article/details/135597181?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-8-135597181-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [要养家的程序员](https://blog.csdn.net/zzq1989_?type=blog) |
| [基于LLaMA Factory,单卡3小时训练专属大模型 Agent](https://blog.csdn.net/m0_59596990/article/details/135760285?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135760285-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-15 | [机器学习社区](https://blog.csdn.net/m0_59596990?type=blog) |
| [双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录](https://blog.csdn.net/arkohut/article/details/135321242?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-10-135321242-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b)](https://blog.csdn.net/qq_40302568/article/details/135040985?ops_request_misc=%7B%22request%5Fid%22%3A%22171636168816800227489911%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636168816800227489911&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-30-135040985-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2024-01-02 | [aq_Seabiscuit](https://blog.csdn.net/qq_40302568?type=blog) |
| [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://blog.csdn.net/arkohut/article/details/135274973) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行?](https://blog.csdn.net/u014374009/article/details/136327696) | 2023-12-28 | [代码讲故事](https://blog.csdn.net/u014374009?type=blog) |
| [LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调](https://blog.csdn.net/BIT_666/article/details/134990402) | 2023-12-18 | [BIT_666](https://bitddd.blog.csdn.net/?type=blog) |
| [通过vllm框架进行大模型推理](https://blog.csdn.net/weixin_45920955/article/details/135300561?ops_request_misc=%7B%22request%5Fid%22%3A%22171636343416800188513953%22%2C%22scm%22%3A%2220140713.130102334.pc%5Fblog.%22%7D&request_id=171636343416800188513953&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_ecpm_v1~times_rank-13-135300561-null-null.nonecase&utm_term=Yi大模型&spm=1018.2226.3001.4450) | 2023-12-18 | [土山炮](https://blog.csdn.net/weixin_45920955?type=blog) |
| [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) |
| [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) |
| [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) |
| [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) |
##### GitHub Project
| Deliverable | Date | Author |
| ------------------------------------------------------------ | ---------- | ------------------------------------------- |
| [yi-openai-proxy](https://github.com/soulteary/yi-openai-proxy) | 2024-05-11 | [苏洋](https://github.com/soulteary) |
| [基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集](https://github.com/zjrwtx/bilibiliQA_databuilder) | 2024-04-29 | [正经人王同学](https://github.com/zjrwtx) |
| [基于视频网站和零一万物大模型构建大语言模型高质量训练数据集](https://github.com/zjrwtx/VideoQA_databuilder) | 2024-04-25 | [正经人王同学](https://github.com/zjrwtx) |
| [基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友](https://github.com/zjrwtx/open_summary) | 2024-04-24 | [正经人王同学](https://github.com/zjrwtx) |
| [Food-GPT-Yi-model](https://github.com/ThisisHubert/FoodGPT-Yi-model) | 2024-04-21 | [Hubert S](https://github.com/ThisisHubert) |
##### Video tutorials
| Deliverable | Date | Author |
| ------------------------------------------------------------ | ---------- | ------------------------------------------------------------ |
| [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) |
| [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) |
| [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) |
| [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2024-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型](https://www.bilibili.com/video/BV16i421X7Jx/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-14 | [titan909](https://space.bilibili.com/526393761) |
| [Yi-1.5: True Apache 2.0 Competitor to LLAMA-3](https://www.youtube.com/watch?v=KCDYrfWeTRc) | 2024-05-13 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) |
| [Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks](https://www.youtube.com/watch?v=Ba-G7Il0UkA) | 2024-05-13 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) |
| [how to install Ollama and run Yi 6B](https://www.youtube.com/watch?v=4Jnar7OUHqQ) | 2024-05-13 | [Ridaa Davids](https://www.youtube.com/@quantanovabusiness) |
| [地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B](https://www.bilibili.com/video/BV1Xm411C7V1/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-04 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) |
| [ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答](https://www.bilibili.com/video/BV11i421C7B5/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-03 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) |
| [基于Yi-34B的领域知识问答项目演示](https://www.bilibili.com/video/BV1zZ42177ZA/?spm_id_from=333.999.0.0&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-05-02 | [朱扎特](https://space.bilibili.com/494512200?spm_id_from=333.788.0.0) |
| [使用RTX4090+GaLore算法 全参微调Yi-6B大模型](https://www.bilibili.com/video/BV1ax4y1U7Ep/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-24 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) |
| [无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行](https://www.youtube.com/watch?v=VL-W0TnLCns) | 2024-03-20 | [刘悦的技术博客](https://v3u.cn/) |
| [无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲](https://www.youtube.com/watch?v=rBvbgwz3oHM) | 2024-03-16 | [刘悦的技术博客](https://v3u.cn/) |
| [量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署](https://www.bilibili.com/video/BV1jx421y7xj/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-03-05 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) |
| [Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字](https://www.bilibili.com/video/BV1BB421z7oA/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-27 | [fly-iot](https://gitee.com/fly-iot) |
| [Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏](https://www.bilibili.com/video/BV14J4m1e77f/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-25 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) |
| [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2](https://www.bilibili.com/video/BV19v421677y/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-23 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) |
| [【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新](https://www.bilibili.com/video/BV194421F7Fy/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-20 | [fly-iot](https://gitee.com/fly-iot) |
| [【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功](https://www.bilibili.com/video/BV19Z421z7cv/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-06 | [fly-iot](https://gitee.com/fly-iot) |
| [无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1](https://www.bilibili.com/video/BV1tU421o7Co/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-02-05 | [魚蟲蟲](https://space.bilibili.com/431981179?spm_id_from=333.788.0.0) |
| [2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南](https://www.bilibili.com/video/BV1hC411z7xu/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-30 | [小饭护法要转码](https://space.bilibili.com/39486865?spm_id_from=333.788.0.0) |
| [Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows](https://www.youtube.com/watch?v=cZs2jRtl0bs) | 2024-01-22 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) |
| [Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试)](https://www.bilibili.com/video/BV1VT4y1b7Th/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [小吴苹果机器人](https://space.bilibili.com/1732749682?spm_id_from=333.788.0.0) |
| [【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置](https://www.bilibili.com/video/BV1ia4y1y7JH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-21 | [fly-iot](https://gitee.com/fly-iot) |
| [这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥](https://www.youtube.com/watch?v=eahXJrdtQuc) | 2024-01-20 | [晓漫吧](https://www.youtube.com/@xiaomanba) |
| [大模型推理 NvLink 桥接器有用吗|双卡 A6000 测试一下](https://www.bilibili.com/video/BV1AW4y1w7DC/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-17 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能](https://www.bilibili.com/video/BV1aK4y1z7GF/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-15 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [C-Eval 大语言模型评测基准- 用 LM Evaluation Harness + vLLM 跑起来](https://www.bilibili.com/video/BV1Yw411g7ZL/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-11 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录](https://www.bilibili.com/video/BV1p94y1c7ak/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2024-01-01 | [漆妮妮](https://space.bilibili.com/1262370256) |
| [手把手教学!使用 vLLM 快速部署 Yi-34B-Chat](https://www.bilibili.com/video/BV1ew41157Mk/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-26 | [白鸽巢](https://space.bilibili.com/138938660?spm_id_from=333.788.0.0) |
| [如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁](https://www.bilibili.com/video/BV1uc41117zz/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-21 | [小工蚂创始人](https://space.bilibili.com/478674499?spm_id_from=333.788.0.0) |
| [Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s](https://www.bilibili.com/video/BV1nj41157L3/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-02 | [fly-iot](https://gitee.com/fly-iot) |
| [使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G](https://www.bilibili.com/video/BV1BM411R7ae/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) |
| [使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s](https://www.bilibili.com/video/BV1Hu4y1L7BH/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [fly-iot](https://gitee.com/fly-iot) |
| [Yi大模型一键本地部署 技术小白玩转AI](https://www.bilibili.com/video/BV16H4y117md/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-12-01 | [技术小白玩转AI](https://space.bilibili.com/3546586137234288?spm_id_from=333.788.0.0) |
| [01.AI's Yi-6B: Overview and Fine-Tuning](https://www.youtube.com/watch?v=mye-UOkAliQ) | 2023-11-28 | [AI Makerspace](https://www.youtube.com/@AI-Makerspace) |
| [Yi 34B Chat LLM outperforms Llama 70B](https://www.youtube.com/watch?v=RYtrF-R5jDc) | 2023-11-27 | [DLExplorer](https://www.youtube.com/@DLExplorers-lg7dt) |
| [How to run open source models on mac Yi 34b on m3 Max](https://www.youtube.com/watch?v=GAo-dopkgjI) | 2023-11-26 | [TECHNO PREMIUM](https://www.youtube.com/@technopremium91) |
| [Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING ](https://www.youtube.com/watch?v=7WBojwwv5Qo) | 2023-11-24 | [Prompt Engineering](https://www.youtube.com/@engineerprompt) |
| [Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat](https://www.youtube.com/watch?v=bWCjwtu_tHs) | 2023-11-24 | [Sam Witteveen](https://www.youtube.com/@samwitteveenai) |
| [在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器)](https://www.bilibili.com/video/BV1SQ4y18744/?spm_id_from=333.337.search-card.all.click&vd_source=ab85f93e294a2f6be11db57c29c6d706) | 2023-11-15 | [Second State](https://github.com/second-state) |
| [Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server)](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-14 | [Second State](https://github.com/second-state) |
| [How to Install Yi 34B 200K Llamafied on Windows Laptop](https://www.youtube.com/watch?v=enoha4K4HkQ) | 2023-11-11 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) |
</details>
# Why Yi?
- [Ecosystem](#ecosystem)
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
- [Benchmarks](#benchmarks)
- [Chat model performance](#chat-model-performance)
- [Base model performance](#base-model-performance)
- [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k)
- [Yi-9B](#yi-9b)
## Ecosystem
Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.
- [Upstream](#upstream)
- [Downstream](#downstream)
- [Serving](#serving)
- [Quantization](#quantization-1)
- [Fine-tuning](#fine-tuning-1)
- [API](#api)
### Upstream
The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.
For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B", device_map="auto")
```
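Continuing the snippet above, the loaded model behaves like any other Llama-style causal LM, so the usual `generate` API applies. A minimal, illustrative generation call (the sampling settings are arbitrary):
```python
inputs = tokenizer("There's a place where time stands still.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```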
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Downstream
> 💡 Tip
>
> - Feel free to create a PR and share the fantastic work you've built using the Yi series models.
>
> - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`.
#### Serving
If you want to get up and running with Yi in a few minutes, you can use the following services built upon it.
- Yi-34B-Chat: you can chat with Yi using one of the following platforms:
- [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand!
- [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs.
- [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization.
#### Quantization
If you have limited computational capabilities, you can use Yi's quantized models as follows.
These quantized models trade some precision for increased efficiency, such as faster inference and lower memory usage; a minimal loading sketch is shown after the list below.
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
- [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
- [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ)
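As a minimal loading sketch (an assumption-laden illustration rather than an official recipe): with a recent 🤗 Transformers release and the matching backend installed (`autoawq` for AWQ, `auto-gptq` for GPTQ), these checkpoints can typically be loaded like any other model. GGUF files are an exception — they target llama.cpp-style runtimes instead.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Yi-34B-AWQ"  # or another quantized repository listed above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain what quantization does to a language model.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```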
#### Fine-tuning
If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.
- [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi.
This is not an exhaustive list for Yi, but here are a few, sorted by downloads:
- [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ)
- [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ)
- [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ)
- [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
- [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm).
- [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset.
#### API
- [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box.
- [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. A minimal client sketch is shown below.
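Because both tools expose an OpenAI-compatible endpoint, a standard OpenAI client can talk to them. The sketch below is hypothetical — the base URL, port, and model name depend entirely on how you launch your server:
```python
from openai import OpenAI

# Placeholder endpoint and model name; adjust these to match your local server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Yi-34B-Chat",
    messages=[{"role": "user", "content": "hi"}],
)
print(response.choices[0].message.content)
```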
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
## Tech report
For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652).
### Citation
```
@misc{ai2024yi,
title={Yi: Open Foundation Models by 01.AI},
author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai},
year={2024},
eprint={2403.04652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Benchmarks
- [Chat model performance](#chat-model-performance)
- [Base model performance](#base-model-performance)
### Chat model performance
The Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models on benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.

<details>
<summary> Evaluation methods and challenges. ⬇️ </summary>
- **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.
- **Zero-shot vs. few-shot**: in chat models, the zero-shot approach is more commonly employed.
- **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.
- **Challenges faced**: some models are not well suited to producing output in the specific format required by the instructions of a few datasets, which leads to suboptimal results.
<strong>*</strong>: C-Eval results are evaluated on the validation datasets
</details>
### Base model performance
#### Yi-34B and Yi-34B-200K
The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.

<details>
<summary> Evaluation methods. ⬇️</summary>
- **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.
- **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.
- **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.
- **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.
- **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.
- **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code".
- **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.
</details>
#### Yi-9B
Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.

- In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.

- In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.

- In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.

- In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.

<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# Who can use Yi?
Everyone! 🙌 ✅
The code and weights of the Yi series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE), which means the Yi series models are free for personal usage, academic purposes, and commercial use.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
# Misc.
### Acknowledgments
A heartfelt thank you to everyone who has contributed to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.
[](https://github.com/01-ai/yi/graphs/contributors)
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### Disclaimer
We use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
### License
The code and weights of the Yi-1.5 series models are distributed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE).
If you create derivative works based on this model, please include the following attribution in your derivative works:
This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License.
<p align="right"> [
<a href="#top">Back to top ⬆️ </a> ]
</p>
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
monsterbeasts/LishizhenGPT | monsterbeasts | text-generation | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"dataset:bigscience/xP3mt",
"arxiv:2211.01786",
"license:bigscience-bloom-rail-1.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,713,863,131,000 | 2024-05-09T04:44:44 | 12 | 0 | ---
datasets:
- bigscience/xP3mt
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
widget:
- text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous
review as positive, neutral or negative?
example_title: zh-en sentiment
- text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
example_title: zh-zh sentiment
- text: Suggest at least five related search terms to "Mạng neural nhân tạo".
example_title: vi-en query
- text: Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels».
example_title: fr-fr query
- text: Explain in a sentence in Telugu what is backpropagation in neural networks.
example_title: te-en qa
- text: Why is the sky blue?
example_title: en-en qa
- text: 'Write a fairy tale about a troll saving a princess from a dangerous dragon.
The fairy tale is a masterpiece that has achieved praise worldwide and its moral
is "Heroes Come in All Shapes and Sizes". Story (in Spanish):'
example_title: es-en fable
- text: 'Write a fable about wood elves living in a forest that is suddenly invaded
by ogres. The fable is a masterpiece that has achieved praise worldwide and its
moral is "Violence is the last refuge of the incompetent". Fable (in Hindi):'
example_title: hi-en fable
model-index:
- name: bloomz-7b1-mt
results:
- task:
type: Coreference resolution
dataset:
name: Winogrande XL (xl)
type: winogrande
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 56.51
- task:
type: Coreference resolution
dataset:
name: XWinograd (en)
type: Muennighoff/xwinograd
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 65.76
- task:
type: Coreference resolution
dataset:
name: XWinograd (fr)
type: Muennighoff/xwinograd
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 57.83
- task:
type: Coreference resolution
dataset:
name: XWinograd (jp)
type: Muennighoff/xwinograd
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 51.82
- task:
type: Coreference resolution
dataset:
name: XWinograd (pt)
type: Muennighoff/xwinograd
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 57.41
- task:
type: Coreference resolution
dataset:
name: XWinograd (ru)
type: Muennighoff/xwinograd
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 55.87
- task:
type: Coreference resolution
dataset:
name: XWinograd (zh)
type: Muennighoff/xwinograd
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 62.7
- task:
type: Natural language inference
dataset:
name: ANLI (r1)
type: anli
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 42.6
- task:
type: Natural language inference
dataset:
name: ANLI (r2)
type: anli
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 39.4
- task:
type: Natural language inference
dataset:
name: ANLI (r3)
type: anli
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 42.0
- task:
type: Natural language inference
dataset:
name: SuperGLUE (cb)
type: super_glue
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 83.93
- task:
type: Natural language inference
dataset:
name: SuperGLUE (rte)
type: super_glue
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 82.67
- task:
type: Natural language inference
dataset:
name: XNLI (ar)
type: xnli
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 55.58
- task:
type: Natural language inference
dataset:
name: XNLI (bg)
type: xnli
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.9
- task:
type: Natural language inference
dataset:
name: XNLI (de)
type: xnli
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.92
- task:
type: Natural language inference
dataset:
name: XNLI (el)
type: xnli
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.89
- task:
type: Natural language inference
dataset:
name: XNLI (en)
type: xnli
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 58.92
- task:
type: Natural language inference
dataset:
name: XNLI (es)
type: xnli
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 57.35
- task:
type: Natural language inference
dataset:
name: XNLI (fr)
type: xnli
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.67
- task:
type: Natural language inference
dataset:
name: XNLI (hi)
type: xnli
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.45
- task:
type: Natural language inference
dataset:
name: XNLI (ru)
type: xnli
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.24
- task:
type: Natural language inference
dataset:
name: XNLI (sw)
type: xnli
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.27
- task:
type: Natural language inference
dataset:
name: XNLI (th)
type: xnli
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 41.08
- task:
type: Natural language inference
dataset:
name: XNLI (tr)
type: xnli
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 38.71
- task:
type: Natural language inference
dataset:
name: XNLI (ur)
type: xnli
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 49.48
- task:
type: Natural language inference
dataset:
name: XNLI (vi)
type: xnli
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.5
- task:
type: Natural language inference
dataset:
name: XNLI (zh)
type: xnli
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.3
- task:
type: Program synthesis
dataset:
name: HumanEval
type: openai_humaneval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 7.23
- type: Pass@10
value: 14.46
- type: Pass@100
value: 25.86
- task:
type: Sentence completion
dataset:
name: StoryCloze (2016)
type: story_cloze
config: '2016'
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 89.58
- task:
type: Sentence completion
dataset:
name: SuperGLUE (copa)
type: super_glue
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 84.0
- task:
type: Sentence completion
dataset:
name: XCOPA (et)
type: xcopa
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 52.0
- task:
type: Sentence completion
dataset:
name: XCOPA (ht)
type: xcopa
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 54.0
- task:
type: Sentence completion
dataset:
name: XCOPA (id)
type: xcopa
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 73.0
- task:
type: Sentence completion
dataset:
name: XCOPA (it)
type: xcopa
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 62.0
- task:
type: Sentence completion
dataset:
name: XCOPA (qu)
type: xcopa
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
name: XCOPA (sw)
type: xcopa
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
name: XCOPA (ta)
type: xcopa
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 62.0
- task:
type: Sentence completion
dataset:
name: XCOPA (th)
type: xcopa
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
name: XCOPA (tr)
type: xcopa
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
name: XCOPA (vi)
type: xcopa
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 77.0
- task:
type: Sentence completion
dataset:
name: XCOPA (zh)
type: xcopa
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 80.0
- task:
type: Sentence completion
dataset:
name: XStoryCloze (ar)
type: Muennighoff/xstory_cloze
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 83.85
- task:
type: Sentence completion
dataset:
name: XStoryCloze (es)
type: Muennighoff/xstory_cloze
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 88.82
- task:
type: Sentence completion
dataset:
name: XStoryCloze (eu)
type: Muennighoff/xstory_cloze
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 73.26
- task:
type: Sentence completion
dataset:
name: XStoryCloze (hi)
type: Muennighoff/xstory_cloze
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 80.41
- task:
type: Sentence completion
dataset:
name: XStoryCloze (id)
type: Muennighoff/xstory_cloze
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 84.58
- task:
type: Sentence completion
dataset:
name: XStoryCloze (my)
type: Muennighoff/xstory_cloze
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 51.56
- task:
type: Sentence completion
dataset:
name: XStoryCloze (ru)
type: Muennighoff/xstory_cloze
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 64.26
- task:
type: Sentence completion
dataset:
name: XStoryCloze (sw)
type: Muennighoff/xstory_cloze
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 71.01
- task:
type: Sentence completion
dataset:
name: XStoryCloze (te)
type: Muennighoff/xstory_cloze
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 73.06
- task:
type: Sentence completion
dataset:
name: XStoryCloze (zh)
type: Muennighoff/xstory_cloze
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 85.9
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
7. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*", or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Furthermore, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, tell the model so, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
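To make this concrete, the comparison below reuses the checkpoint from the usage examples above and contrasts an ambiguous prompt with a clearer one. It is only an illustrative sketch; the exact generations will vary.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")

# Ambiguous: the model may simply continue the French sentence.
vague_prompt = "Translate to English: Je t'aime"
# Clearer: the full stop and the explicit cue mark where the input ends.
clear_prompt = "Translate to English: Je t'aime. Translation:"

for prompt in (vague_prompt, clear_prompt):
    inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(inputs, max_new_tokens=20)
    print(repr(prompt), "->", tokenizer.decode(outputs[0], skip_special_tokens=True))
```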
# Training
## Model
- **Architecture:** Same as [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1), also refer to the `config.json` file
- **Finetuning steps:** 1000
- **Finetuning tokens:** 4.19 billion
- **Finetuning layout:** 1x pipeline parallel, 1x tensor parallel, 64x data parallel
- **Precision:** float16
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
``` | [
"COREFERENCE_RESOLUTION",
"TRANSLATION"
] | Non_BioNLP |
tyzp-INC/bench2-all-MiniLM-L6-v2-tuned | tyzp-INC | text-classification | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,690,125,523,000 | 2023-07-23T15:18:48 | 9 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# tyzp-INC/bench2-all-MiniLM-L6-v2-tuned
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
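As a rough illustration of this two-stage procedure, a minimal training sketch could look as follows; the base model, data, and hyperparameters here are placeholders, not the actual recipe used for this checkpoint:
```python
# Illustrative sketch of the two-stage SetFit procedure (contrastive body
# fine-tuning + classification head); data and hyperparameters are placeholders.
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

train_dataset = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

# Start from a Sentence Transformer body; the classification head is added by SetFit.
model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # stage 1: contrastive fine-tuning of the embedding body
    batch_size=16,
    num_iterations=20,                # number of contrastive pairs generated per example
    num_epochs=1,
)
trainer.train()  # fine-tunes the body contrastively, then fits the classification head
print(trainer.model.predict(["a great movie night"]))
```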
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tyzp-INC/bench2-all-MiniLM-L6-v2-tuned")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
tner/twitter-roberta-base-dec2021-tweetner7-2020 | tner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"dataset:tner/tweetner7",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,656,839,252,000 | 2022-09-27T15:35:03 | 18 | 0 | ---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: 'Get the all-analog Classic Vinyl Edition of `Takin'' Off` Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}'
example_title: NER Example 1
model-index:
- name: tner/twitter-roberta-base-dec2021-tweetner7-2020
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: tner/tweetner7
type: tner/tweetner7
args: tner/tweetner7
metrics:
- type: f1
value: 0.6417969860676713
name: F1 (test_2021)
- type: precision
value: 0.6314199395770392
name: Precision (test_2021)
- type: recall
value: 0.6525208140610546
name: Recall (test_2021)
- type: f1_macro
value: 0.5950190138355756
name: Macro F1 (test_2021)
- type: precision_macro
value: 0.5844336783514947
name: Macro Precision (test_2021)
- type: recall_macro
value: 0.6100191042323923
name: Macro Recall (test_2021)
- type: f1_entity_span
value: 0.77377161055505
name: Entity Span F1 (test_2021)
- type: precision_entity_span
value: 0.7612174107642385
      name: Entity Span Precision (test_2021)
- type: recall_entity_span
value: 0.7867468486180178
name: Entity Span Recall (test_2021)
- type: f1
value: 0.6535560344827587
name: F1 (test_2020)
- type: precision
value: 0.6795518207282913
name: Precision (test_2020)
- type: recall
value: 0.6294758692267773
name: Recall (test_2020)
- type: f1_macro
value: 0.6112036126522273
name: Macro F1 (test_2020)
- type: precision_macro
value: 0.6366190072656497
name: Macro Precision (test_2020)
- type: recall_macro
value: 0.5931815043549611
name: Macro Recall (test_2020)
- type: f1_entity_span
value: 0.7636755591484775
name: Entity Span F1 (test_2020)
- type: precision_entity_span
value: 0.7942825112107623
name: Entity Span Precision (test_2020)
- type: recall_entity_span
value: 0.7353399065905553
name: Entity Span Recall (test_2020)
---
# tner/twitter-roberta-base-dec2021-tweetner7-2020
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_2020` split).
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set of 2021:
- F1 (micro): 0.6417969860676713
- Precision (micro): 0.6314199395770392
- Recall (micro): 0.6525208140610546
- F1 (macro): 0.5950190138355756
- Precision (macro): 0.5844336783514947
- Recall (macro): 0.6100191042323923
The per-entity breakdown of the F1 score on the test set is as follows:
- corporation: 0.5161953727506428
- creative_work: 0.4749841671944269
- event: 0.43429109750353273
- group: 0.593413759373981
- location: 0.6431718061674009
- person: 0.8327532515112659
- product: 0.6703236423477785
For the F1 scores, confidence intervals are obtained via bootstrap as follows:
- F1 (micro):
- 90%: [0.6334648803400447, 0.651188450223803]
- 95%: [0.6314263719566943, 0.6528797499551452]
- F1 (macro):
- 90%: [0.6334648803400447, 0.651188450223803]
- 95%: [0.6314263719566943, 0.6528797499551452]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/twitter-roberta-base-dec2021-tweetner7-2020/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/twitter-roberta-base-dec2021-tweetner7-2020/raw/main/eval/metric_span.json).
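As a rough sketch of how such a percentile-bootstrap interval over the micro-F1 can be computed (illustrative only; it assumes access to per-sentence gold and predicted entity sets and is not the exact resampling code used by T-NER):
```python
# Percentile-bootstrap confidence interval for micro-F1 (illustrative sketch).
# gold_sets / pred_sets: one set of (span, entity_type) tuples per sentence.
import random

def micro_f1(gold_sets, pred_sets):
    tp = sum(len(g & p) for g, p in zip(gold_sets, pred_sets))
    n_gold = sum(len(g) for g in gold_sets)
    n_pred = sum(len(p) for p in pred_sets)
    if n_gold == 0 or n_pred == 0:
        return 0.0
    precision, recall = tp / n_pred, tp / n_gold
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def bootstrap_ci(gold_sets, pred_sets, n_resamples=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    n = len(gold_sets)
    scores = []
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample sentences with replacement
        scores.append(micro_f1([gold_sets[i] for i in idx], [pred_sets[i] for i in idx]))
    scores.sort()
    lower = scores[int(alpha / 2 * n_resamples)]
    upper = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper
```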
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip.
```shell
pip install tner
```
[TweetNER7](https://huggingface.co/datasets/tner/tweetner7) pre-processes tweets by converting account names and URLs into special formats (see the dataset page for more detail), so we format tweets accordingly and then run the model prediction as below.
```python
import re
from urlextract import URLExtract
from tner import TransformersNER
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"
text_format = format_tweet(text)
model = TransformersNER("tner/twitter-roberta-base-dec2021-tweetner7-2020")
model.predict([text_format])
```
The model can also be used via the transformers library, but this is not recommended because the CRF layer is not supported at the moment.
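The sketch below shows what that would look like; it assumes the checkpoint loads as a standard token-classification model, and labels are decoded by per-token argmax, so predictions may differ from the CRF-decoded tner pipeline above.
```python
# Rough sketch of transformers-only usage (no CRF decoding; results may differ
# from the tner pipeline above).
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "tner/twitter-roberta-base-dec2021-tweetner7-2020"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

text = "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

label_ids = logits.argmax(dim=-1)[0]  # per-token argmax instead of CRF decoding
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, label_ids):
    print(token, model.config.id2label[int(label_id)])
```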
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2020
- dataset_name: None
- local_dataset: None
- model: cardiffnlp/twitter-roberta-base-dec2021
- crf: True
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 1e-05
- random_seed: 0
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.15
- max_grad_norm: 1
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/twitter-roberta-base-dec2021-tweetner7-2020/raw/main/trainer_config.json).
### Reference
If you use the model, please cite T-NER paper and TweetNER7 paper.
- T-NER
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
- TweetNER7
```
@inproceedings{ushio-etal-2022-tweet,
title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
author = "Ushio, Asahi and
Neves, Leonardo and
Silva, Vitor and
    Barbieri, Francesco and
Camacho-Collados, Jose",
booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
month = nov,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
AlexWortega/qwen11k | AlexWortega | sentence-similarity | [
"sentence-transformers",
"safetensors",
"qwen2",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1077240",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,731,699,134,000 | 2024-11-15T19:33:05 | 13 | 0 | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1077240
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Who is the father of philosophy?
sentences:
- 'Charles Sanders Peirce
Charles Sanders Peirce (/pɜːrs/[9] "purse"; 10September 1839 – 19April 1914) was
an American philosopher, logician, mathematician, and scientist who is sometimes
known as "the father of pragmatism". He was educated as a chemist and employed
as a scientist for 30 years. Today he is appreciated largely for his contributions
to logic, mathematics, philosophy, scientific methodology, and semiotics, and
for his founding of pragmatism.'
- 'Georg Wilhelm Friedrich Hegel
According to Hegel, "Heraclitus is the one who first declared the nature of the
infinite and first grasped nature as in itself infinite, that is, its essence
as process. The origin of philosophy is to be dated from Heraclitus. His is the
persistent Idea that is the same in all philosophers up to the present day, as
it was the Idea of Plato and Aristotle". For Hegel, Heraclitus''s great achievements
were to have understood the nature of the infinite, which for Hegel includes understanding
the inherent contradictoriness and negativity of reality; and to have grasped
that reality is becoming or process and that "being" and "nothingness" are mere
empty abstractions. According to Hegel, Heraclitus''s "obscurity" comes from his
being a true (in Hegel''s terms "speculative") philosopher who grasped the ultimate
philosophical truth and therefore expressed himself in a way that goes beyond
the abstract and limited nature of common sense and is difficult to grasp by those
who operate within common sense. Hegel asserted that in Heraclitus he had an antecedent
for his logic: "[...] there is no proposition of Heraclitus which I have not adopted
in my logic".'
- 'History of nuclear weapons
The notion of using a fission weapon to ignite a process of nuclear fusion can
be dated back to 1942. At the first major theoretical conference on the development
of an atomic bomb hosted by J. Robert Oppenheimer at the University of California,
Berkeley, participant Edward Teller directed the majority of the discussion towards
Enrico Fermi''s idea of a "Super" bomb that would use the same reactions that
powered the Sun itself.'
- source_sentence: When was Father's Day first celebrated in America?
sentences:
- 'Father''s Day (United States)
Father''s Day was founded in Spokane, Washington at the YMCA in 1910 by Sonora
Smart Dodd, who was born in Arkansas.[4] Its first celebration was in the Spokane
YMCA on June 19, 1910.[4][5] Her father, the Civil War veteran William Jackson
Smart, was a single parent who raised his six children there.[4] After hearing
a sermon about Jarvis'' Mother''s Day at Central Methodist Episcopal Church in
1909, she told her pastor that fathers should have a similar holiday honoring
them.[4][6] Although she initially suggested June 5, her father''s birthday, the
pastors did not have enough time to prepare their sermons, and the celebration
was deferred to the third Sunday of June.[7][8]'
- 'Father''s Day
In [[Peru]], Father''s Day is celebrated on the third Sunday of June and is not
a public holiday. People usually give a present to their fathers and spend time
with him mostly during a family meal.'
- 'Sacramento River
The Sacramento and its wide natural floodplain were once abundant in fish and
other aquatic creatures, notably one of the southernmost large runs of chinook
salmon in North America. For about 12,000 years, humans have depended on the vast
natural resources of the watershed, which had one of the densest Native American
populations in California. The river has provided a route for trade and travel
since ancient times. Hundreds of tribes sharing regional customs and traditions
inhabited the Sacramento Valley, first coming into contact with European explorers
in the late 1700s. The Spanish explorer Gabriel Moraga named the river Rio de
los Sacramentos in 1808, later shortened and anglicized into Sacramento.'
- source_sentence: What is the population of Austria in 2018?
sentences:
- 'Utah State Capitol
The Utah State Capitol is the house of government for the U.S. state of Utah.
The building houses the chambers and offices of the Utah State Legislature, the
offices of the Governor, Lieutenant Governor, Attorney General, the State Auditor
and their staffs. The capitol is the main building of the Utah State Capitol Complex,
which is located on Capitol Hill, overlooking downtown Salt Lake City.'
- 'Same-sex marriage in Austria
A September 2018 poll for "Österreich" found that 74% of Austrians supported same-sex
marriage and 26% were against.'
- 'Demographics of Austria
Population 8,793,370 (July 2018 est.) country comparison to the world: 96th'
- source_sentence: What language family is Malay?
sentences:
- 'Malay language
Malay is a member of the Austronesian family of languages, which includes languages
from Southeast Asia and the Pacific Ocean, with a smaller number in continental
Asia. Malagasy, a geographic outlier spoken in Madagascar in the Indian Ocean,
is also a member of this language family. Although each language of the family
is mutually unintelligible, their similarities are rather striking. Many roots
have come virtually unchanged from their common ancestor, Proto-Austronesian language.
There are many cognates found in the languages'' words for kinship, health, body
parts and common animals. Numbers, especially, show remarkable similarities.'
- 'Filipinos of Malay descent
In the Philippines, there is misconception and often mixing between the two definitions.
Filipinos consider Malays as being the natives of the Philippines, Indonesia,
Malaysia and Brunei. Consequently, Filipinos consider themselves Malay when in
reality, they are referring to the Malay Race. Filipinos in Singapore also prefer
to be considered Malay, but their desire to be labeled as part of the ethnic group
was rejected by the Singaporean government. Paradoxically, a minor percentage
of Filipinos prefer the Spanish influence and may associate themselves with being
Hispanic, and have made no realistic attempts to promote and/or revive the Malay
language in the Philippines.'
- 'Preferred provider organization
In health insurance in the United States, a preferred provider organization (PPO),
sometimes referred to as a participating provider organization or preferred provider
option, is a managed care organization of medical doctors, hospitals, and other
health care providers who have agreed with an insurer or a third-party administrator
to provide health care at reduced rates to the insurer''s or administrator''s
clients.'
- source_sentence: When was ABC formed?
sentences:
- 'American Broadcasting Company
ABC launched as a radio network on October 12, 1943, serving as the successor
to the NBC Blue Network, which had been purchased by Edward J. Noble. It extended
its operations to television in 1948, following in the footsteps of established
broadcast networks CBS and NBC. In the mid-1950s, ABC merged with United Paramount
Theatres, a chain of movie theaters that formerly operated as a subsidiary of
Paramount Pictures. Leonard Goldenson, who had been the head of UPT, made the
new television network profitable by helping develop and greenlight many successful
series. In the 1980s, after purchasing an 80% interest in cable sports channel
ESPN, the network''s corporate parent, American Broadcasting Companies, Inc.,
merged with Capital Cities Communications, owner of several print publications,
and television and radio stations. In 1996, most of Capital Cities/ABC''s assets
were purchased by The Walt Disney Company.'
- 'Roman concrete
Roman concrete, also called opus caementicium, was a material used in construction
during the late Roman Republic until the fading of the Roman Empire. Roman concrete
was based on a hydraulic-setting cement. Recently, it has been found that it materially
differs in several ways from modern concrete which is based on Portland cement.
Roman concrete is durable due to its incorporation of volcanic ash, which prevents
cracks from spreading. By the middle of the 1st century, the material was used
frequently, often brick-faced, although variations in aggregate allowed different
arrangements of materials. Further innovative developments in the material, called
the Concrete Revolution, contributed to structurally complicated forms, such as
the Pantheon dome, the world''s largest and oldest unreinforced concrete dome.[1]'
- 'Americans Battling Communism
Americans Battling Communism, Inc. (ABC) was an anti-communist organization created
following an October 1947 speech by Pennsylvania Judge Blair Gunther that called
for an "ABC movement" to educate America about communism. Chartered in November
1947 by Harry Alan Sherman, a local lawyer active in various anti-communist organizations,
the group took part in such activities as blacklisting by disclosing the names
of people suspected of being communists. Its members included local judges and
lawyers active in the McCarthy-era prosecution of communists.'
model-index:
- name: SentenceTransformer based on Qwen/Qwen2.5-0.5B-Instruct
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 896
type: sts-dev-896
metrics:
- type: pearson_cosine
value: 0.738292615302764
name: Pearson Cosine
- type: spearman_cosine
value: 0.7854072618610399
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 768
type: sts-dev-768
metrics:
- type: pearson_cosine
value: 0.7331567600641935
name: Pearson Cosine
- type: spearman_cosine
value: 0.7827897125403183
name: Spearman Cosine
---
# SentenceTransformer based on Qwen/Qwen2.5-0.5B-Instruct
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It maps sentences & paragraphs to a 896-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) <!-- at revision 7ae557604adf67be50417f59c2c2f167def9a775 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 896 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 896, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
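As a rough sketch, the composition above (a Qwen2 transformer followed by attention-mask-weighted mean pooling) corresponds to the following plain transformers code; this assumes the transformer weights at the repository root load via `AutoModel`, and the SentenceTransformer usage below remains the recommended path:
```python
# Rough sketch of the Transformer + mean-pooling composition above in plain
# transformers code; assumes the repo root loads via AutoModel.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "AlexWortega/qwen11k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["When was ABC formed?", "What do grasshoppers eat?"]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=1024, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state   # [batch, seq_len, 896]

mask = batch["attention_mask"].unsqueeze(-1).float()       # zero out padding positions
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling
print(embeddings.shape)                                     # torch.Size([2, 896])
```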
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("AlexWortega/qwen11k")
# Run inference
sentences = [
'When was ABC formed?',
"American Broadcasting Company\nABC launched as a radio network on October 12, 1943, serving as the successor to the NBC Blue Network, which had been purchased by Edward J. Noble. It extended its operations to television in 1948, following in the footsteps of established broadcast networks CBS and NBC. In the mid-1950s, ABC merged with United Paramount Theatres, a chain of movie theaters that formerly operated as a subsidiary of Paramount Pictures. Leonard Goldenson, who had been the head of UPT, made the new television network profitable by helping develop and greenlight many successful series. In the 1980s, after purchasing an 80% interest in cable sports channel ESPN, the network's corporate parent, American Broadcasting Companies, Inc., merged with Capital Cities Communications, owner of several print publications, and television and radio stations. In 1996, most of Capital Cities/ABC's assets were purchased by The Walt Disney Company.",
'Americans Battling Communism\nAmericans Battling Communism, Inc. (ABC) was an anti-communist organization created following an October 1947 speech by Pennsylvania Judge Blair Gunther that called for an "ABC movement" to educate America about communism. Chartered in November 1947 by Harry Alan Sherman, a local lawyer active in various anti-communist organizations, the group took part in such activities as blacklisting by disclosing the names of people suspected of being communists. Its members included local judges and lawyers active in the McCarthy-era prosecution of communists.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 896]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev-896` and `sts-dev-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts-dev-896 | sts-dev-768 |
|:--------------------|:------------|:------------|
| pearson_cosine | 0.7383 | 0.7332 |
| **spearman_cosine** | **0.7854** | **0.7828** |
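A minimal sketch of how such a semantic-similarity evaluation can be reproduced with the evaluator named above; the sentence pairs and gold scores here are placeholders, not the actual STS dev split:
```python
# Minimal sketch of running EmbeddingSimilarityEvaluator; the pairs and gold
# scores are placeholders, not the actual STS dev data.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("AlexWortega/qwen11k")

sentences1 = ["A man is playing a guitar.", "A child is riding a horse."]
sentences2 = ["Someone plays the guitar.", "A dog is barking."]
gold_scores = [0.9, 0.1]  # similarity labels normalized to [0, 1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="sts-dev-sketch")
print(evaluator(model))   # reports Pearson / Spearman correlations of cosine similarity
```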
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,077,240 training samples
* Columns: <code>query</code>, <code>response</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | response | negative |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 8.76 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 141.88 tokens</li><li>max: 532 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 134.02 tokens</li><li>max: 472 tokens</li></ul> |
* Samples:
| query | response | negative |
|:--------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Was there a year 0?</code> | <code>Year zero<br>Year zero does not exist in the anno Domini system usually used to number years in the Gregorian calendar and in its predecessor, the Julian calendar. In this system, the year 1 BC is followed by AD 1. However, there is a year zero in astronomical year numbering (where it coincides with the Julian year 1 BC) and in ISO 8601:2004 (where it coincides with the Gregorian year 1 BC) as well as in all Buddhist and Hindu calendars.</code> | <code>504<br>Year 504 (DIV) was a leap year starting on Thursday (link will display the full calendar) of the Julian calendar. At the time, it was known as the Year of the Consulship of Nicomachus without colleague (or, less frequently, year 1257 "Ab urbe condita"). The denomination 504 for this year has been used since the early medieval period, when the Anno Domini calendar era became the prevalent method in Europe for naming years.</code> |
| <code>When is the dialectical method used?</code> | <code>Dialectic<br>Dialectic or dialectics (Greek: διαλεκτική, dialektikḗ; related to dialogue), also known as the dialectical method, is at base a discourse between two or more people holding different points of view about a subject but wishing to establish the truth through reasoned arguments. Dialectic resembles debate, but the concept excludes subjective elements such as emotional appeal and the modern pejorative sense of rhetoric.[1][2] Dialectic may be contrasted with the didactic method, wherein one side of the conversation teaches the other. Dialectic is alternatively known as minor logic, as opposed to major logic or critique.</code> | <code>Derek Bentley case<br>Another factor in the posthumous defence was that a "confession" recorded by Bentley, which was claimed by the prosecution to be a "verbatim record of dictated monologue", was shown by forensic linguistics methods to have been largely edited by policemen. Linguist Malcolm Coulthard showed that certain patterns, such as the frequency of the word "then" and the grammatical use of "then" after the grammatical subject ("I then" rather than "then I"), were not consistent with Bentley's use of language (his idiolect), as evidenced in court testimony. These patterns fit better the recorded testimony of the policemen involved. This is one of the earliest uses of forensic linguistics on record.</code> |
| <code>What do Grasshoppers eat?</code> | <code>Grasshopper<br>Grasshoppers are plant-eaters, with a few species at times becoming serious pests of cereals, vegetables and pasture, especially when they swarm in their millions as locusts and destroy crops over wide areas. They protect themselves from predators by camouflage; when detected, many species attempt to startle the predator with a brilliantly-coloured wing-flash while jumping and (if adult) launching themselves into the air, usually flying for only a short distance. Other species such as the rainbow grasshopper have warning coloration which deters predators. Grasshoppers are affected by parasites and various diseases, and many predatory creatures feed on both nymphs and adults. The eggs are the subject of attack by parasitoids and predators.</code> | <code>Groundhog<br>Very often the dens of groundhogs provide homes for other animals including skunks, red foxes, and cottontail rabbits. The fox and skunk feed upon field mice, grasshoppers, beetles and other creatures that destroy farm crops. In aiding these animals, the groundhog indirectly helps the farmer. In addition to providing homes for itself and other animals, the groundhog aids in soil improvement by bringing subsoil to the surface. The groundhog is also a valuable game animal and is considered a difficult sport when hunted in a fair manner. In some parts of Appalachia, they are eaten.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
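The following is an illustrative sketch of fine-tuning with this loss on (query, response, negative) triplets; the examples and hyperparameters are placeholders, not the actual training data or configuration listed below:
```python
# Illustrative sketch of fine-tuning with MultipleNegativesRankingLoss on
# (query, positive, hard-negative) triplets; data and hyperparameters are placeholders.
from sentence_transformers import InputExample, SentenceTransformer, losses, util
from torch.utils.data import DataLoader

model = SentenceTransformer("Qwen/Qwen2.5-0.5B-Instruct")  # base model; mean pooling is added automatically

train_examples = [
    InputExample(texts=[
        "Was there a year 0?",                                    # query
        "Year zero does not exist in the anno Domini system.",    # positive response
        "Year 504 (DIV) was a leap year of the Julian calendar.", # hard negative
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=12)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```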
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `gradient_accumulation_steps`: 4
- `num_train_epochs`: 1
- `warmup_ratio`: 0.3
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.3
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | sts-dev-896_spearman_cosine | sts-dev-768_spearman_cosine |
|:------:|:-----:|:-------------:|:---------------------------:|:---------------------------:|
| 0.0004 | 10 | 2.2049 | - | - |
| 0.0009 | 20 | 2.3168 | - | - |
| 0.0013 | 30 | 2.3544 | - | - |
| 0.0018 | 40 | 2.2519 | - | - |
| 0.0022 | 50 | 2.1809 | - | - |
| 0.0027 | 60 | 2.1572 | - | - |
| 0.0031 | 70 | 2.1855 | - | - |
| 0.0036 | 80 | 2.5887 | - | - |
| 0.0040 | 90 | 2.883 | - | - |
| 0.0045 | 100 | 2.8557 | - | - |
| 0.0049 | 110 | 2.9356 | - | - |
| 0.0053 | 120 | 2.8833 | - | - |
| 0.0058 | 130 | 2.8394 | - | - |
| 0.0062 | 140 | 2.923 | - | - |
| 0.0067 | 150 | 2.8191 | - | - |
| 0.0071 | 160 | 2.8658 | - | - |
| 0.0076 | 170 | 2.8252 | - | - |
| 0.0080 | 180 | 2.8312 | - | - |
| 0.0085 | 190 | 2.7761 | - | - |
| 0.0089 | 200 | 2.7193 | - | - |
| 0.0094 | 210 | 2.724 | - | - |
| 0.0098 | 220 | 2.7484 | - | - |
| 0.0102 | 230 | 2.7262 | - | - |
| 0.0107 | 240 | 2.6964 | - | - |
| 0.0111 | 250 | 2.6676 | - | - |
| 0.0116 | 260 | 2.6715 | - | - |
| 0.0120 | 270 | 2.6145 | - | - |
| 0.0125 | 280 | 2.6191 | - | - |
| 0.0129 | 290 | 1.9812 | - | - |
| 0.0134 | 300 | 1.6413 | - | - |
| 0.0138 | 310 | 1.6126 | - | - |
| 0.0143 | 320 | 1.3599 | - | - |
| 0.0147 | 330 | 1.2996 | - | - |
| 0.0151 | 340 | 1.2654 | - | - |
| 0.0156 | 350 | 1.9409 | - | - |
| 0.0160 | 360 | 2.1287 | - | - |
| 0.0165 | 370 | 1.8442 | - | - |
| 0.0169 | 380 | 1.6837 | - | - |
| 0.0174 | 390 | 1.5489 | - | - |
| 0.0178 | 400 | 1.4382 | - | - |
| 0.0183 | 410 | 1.4848 | - | - |
| 0.0187 | 420 | 1.3481 | - | - |
| 0.0192 | 430 | 1.3467 | - | - |
| 0.0196 | 440 | 1.3977 | - | - |
| 0.0201 | 450 | 1.26 | - | - |
| 0.0205 | 460 | 1.2412 | - | - |
| 0.0209 | 470 | 1.316 | - | - |
| 0.0214 | 480 | 1.3501 | - | - |
| 0.0218 | 490 | 1.2246 | - | - |
| 0.0223 | 500 | 1.2271 | - | - |
| 0.0227 | 510 | 1.1871 | - | - |
| 0.0232 | 520 | 1.1685 | - | - |
| 0.0236 | 530 | 1.1624 | - | - |
| 0.0241 | 540 | 1.1911 | - | - |
| 0.0245 | 550 | 1.1978 | - | - |
| 0.0250 | 560 | 1.1228 | - | - |
| 0.0254 | 570 | 1.1091 | - | - |
| 0.0258 | 580 | 1.1433 | - | - |
| 0.0263 | 590 | 1.0638 | - | - |
| 0.0267 | 600 | 1.0515 | - | - |
| 0.0272 | 610 | 1.175 | - | - |
| 0.0276 | 620 | 1.0943 | - | - |
| 0.0281 | 630 | 1.1226 | - | - |
| 0.0285 | 640 | 0.9871 | - | - |
| 0.0290 | 650 | 1.0171 | - | - |
| 0.0294 | 660 | 1.0169 | - | - |
| 0.0299 | 670 | 0.9643 | - | - |
| 0.0303 | 680 | 0.9563 | - | - |
| 0.0307 | 690 | 0.9841 | - | - |
| 0.0312 | 700 | 1.0349 | - | - |
| 0.0316 | 710 | 0.8958 | - | - |
| 0.0321 | 720 | 0.9225 | - | - |
| 0.0325 | 730 | 0.842 | - | - |
| 0.0330 | 740 | 0.9104 | - | - |
| 0.0334 | 750 | 0.8927 | - | - |
| 0.0339 | 760 | 0.8508 | - | - |
| 0.0343 | 770 | 0.8835 | - | - |
| 0.0348 | 780 | 0.9531 | - | - |
| 0.0352 | 790 | 0.926 | - | - |
| 0.0356 | 800 | 0.8718 | - | - |
| 0.0361 | 810 | 0.8261 | - | - |
| 0.0365 | 820 | 0.8169 | - | - |
| 0.0370 | 830 | 0.8525 | - | - |
| 0.0374 | 840 | 0.8504 | - | - |
| 0.0379 | 850 | 0.7625 | - | - |
| 0.0383 | 860 | 0.8259 | - | - |
| 0.0388 | 870 | 0.7558 | - | - |
| 0.0392 | 880 | 0.7898 | - | - |
| 0.0397 | 890 | 0.7694 | - | - |
| 0.0401 | 900 | 0.7429 | - | - |
| 0.0405 | 910 | 0.6666 | - | - |
| 0.0410 | 920 | 0.7407 | - | - |
| 0.0414 | 930 | 0.6665 | - | - |
| 0.0419 | 940 | 0.7597 | - | - |
| 0.0423 | 950 | 0.7035 | - | - |
| 0.0428 | 960 | 0.7166 | - | - |
| 0.0432 | 970 | 0.6889 | - | - |
| 0.0437 | 980 | 0.7541 | - | - |
| 0.0441 | 990 | 0.7175 | - | - |
| 0.0446 | 1000 | 0.7389 | 0.6420 | 0.6403 |
| 0.0450 | 1010 | 0.7142 | - | - |
| 0.0454 | 1020 | 0.7301 | - | - |
| 0.0459 | 1030 | 0.7299 | - | - |
| 0.0463 | 1040 | 0.6759 | - | - |
| 0.0468 | 1050 | 0.7036 | - | - |
| 0.0472 | 1060 | 0.6286 | - | - |
| 0.0477 | 1070 | 0.595 | - | - |
| 0.0481 | 1080 | 0.6099 | - | - |
| 0.0486 | 1090 | 0.6377 | - | - |
| 0.0490 | 1100 | 0.6309 | - | - |
| 0.0495 | 1110 | 0.6306 | - | - |
| 0.0499 | 1120 | 0.557 | - | - |
| 0.0504 | 1130 | 0.5898 | - | - |
| 0.0508 | 1140 | 0.5896 | - | - |
| 0.0512 | 1150 | 0.6399 | - | - |
| 0.0517 | 1160 | 0.5923 | - | - |
| 0.0521 | 1170 | 0.5787 | - | - |
| 0.0526 | 1180 | 0.591 | - | - |
| 0.0530 | 1190 | 0.5714 | - | - |
| 0.0535 | 1200 | 0.6047 | - | - |
| 0.0539 | 1210 | 0.5904 | - | - |
| 0.0544 | 1220 | 0.543 | - | - |
| 0.0548 | 1230 | 0.6033 | - | - |
| 0.0553 | 1240 | 0.5445 | - | - |
| 0.0557 | 1250 | 0.5217 | - | - |
| 0.0561 | 1260 | 0.5835 | - | - |
| 0.0566 | 1270 | 0.5353 | - | - |
| 0.0570 | 1280 | 0.5887 | - | - |
| 0.0575 | 1290 | 0.5967 | - | - |
| 0.0579 | 1300 | 0.5036 | - | - |
| 0.0584 | 1310 | 0.5915 | - | - |
| 0.0588 | 1320 | 0.5719 | - | - |
| 0.0593 | 1330 | 0.5238 | - | - |
| 0.0597 | 1340 | 0.5647 | - | - |
| 0.0602 | 1350 | 0.538 | - | - |
| 0.0606 | 1360 | 0.5457 | - | - |
| 0.0610 | 1370 | 0.5169 | - | - |
| 0.0615 | 1380 | 0.4967 | - | - |
| 0.0619 | 1390 | 0.4864 | - | - |
| 0.0624 | 1400 | 0.5133 | - | - |
| 0.0628 | 1410 | 0.5587 | - | - |
| 0.0633 | 1420 | 0.4691 | - | - |
| 0.0637 | 1430 | 0.5186 | - | - |
| 0.0642 | 1440 | 0.4907 | - | - |
| 0.0646 | 1450 | 0.5281 | - | - |
| 0.0651 | 1460 | 0.4741 | - | - |
| 0.0655 | 1470 | 0.4452 | - | - |
| 0.0659 | 1480 | 0.4771 | - | - |
| 0.0664 | 1490 | 0.4289 | - | - |
| 0.0668 | 1500 | 0.4551 | - | - |
| 0.0673 | 1510 | 0.4558 | - | - |
| 0.0677 | 1520 | 0.5159 | - | - |
| 0.0682 | 1530 | 0.4296 | - | - |
| 0.0686 | 1540 | 0.4548 | - | - |
| 0.0691 | 1550 | 0.4439 | - | - |
| 0.0695 | 1560 | 0.4295 | - | - |
| 0.0700 | 1570 | 0.4466 | - | - |
| 0.0704 | 1580 | 0.4717 | - | - |
| 0.0708 | 1590 | 0.492 | - | - |
| 0.0713 | 1600 | 0.4566 | - | - |
| 0.0717 | 1610 | 0.4451 | - | - |
| 0.0722 | 1620 | 0.4715 | - | - |
| 0.0726 | 1630 | 0.4573 | - | - |
| 0.0731 | 1640 | 0.3972 | - | - |
| 0.0735 | 1650 | 0.5212 | - | - |
| 0.0740 | 1660 | 0.4381 | - | - |
| 0.0744 | 1670 | 0.4552 | - | - |
| 0.0749 | 1680 | 0.4767 | - | - |
| 0.0753 | 1690 | 0.4398 | - | - |
| 0.0757 | 1700 | 0.4801 | - | - |
| 0.0762 | 1710 | 0.3751 | - | - |
| 0.0766 | 1720 | 0.4407 | - | - |
| 0.0771 | 1730 | 0.4305 | - | - |
| 0.0775 | 1740 | 0.3938 | - | - |
| 0.0780 | 1750 | 0.4748 | - | - |
| 0.0784 | 1760 | 0.428 | - | - |
| 0.0789 | 1770 | 0.404 | - | - |
| 0.0793 | 1780 | 0.4261 | - | - |
| 0.0798 | 1790 | 0.359 | - | - |
| 0.0802 | 1800 | 0.4422 | - | - |
| 0.0807 | 1810 | 0.4748 | - | - |
| 0.0811 | 1820 | 0.4352 | - | - |
| 0.0815 | 1830 | 0.4032 | - | - |
| 0.0820 | 1840 | 0.4124 | - | - |
| 0.0824 | 1850 | 0.4486 | - | - |
| 0.0829 | 1860 | 0.429 | - | - |
| 0.0833 | 1870 | 0.4189 | - | - |
| 0.0838 | 1880 | 0.3658 | - | - |
| 0.0842 | 1890 | 0.4297 | - | - |
| 0.0847 | 1900 | 0.4215 | - | - |
| 0.0851 | 1910 | 0.3726 | - | - |
| 0.0856 | 1920 | 0.3736 | - | - |
| 0.0860 | 1930 | 0.4287 | - | - |
| 0.0864 | 1940 | 0.4402 | - | - |
| 0.0869 | 1950 | 0.4353 | - | - |
| 0.0873 | 1960 | 0.3622 | - | - |
| 0.0878 | 1970 | 0.3557 | - | - |
| 0.0882 | 1980 | 0.4107 | - | - |
| 0.0887 | 1990 | 0.3982 | - | - |
| 0.0891 | 2000 | 0.453 | 0.7292 | 0.7261 |
| 0.0896 | 2010 | 0.3971 | - | - |
| 0.0900 | 2020 | 0.4374 | - | - |
| 0.0905 | 2030 | 0.4322 | - | - |
| 0.0909 | 2040 | 0.3945 | - | - |
| 0.0913 | 2050 | 0.356 | - | - |
| 0.0918 | 2060 | 0.4182 | - | - |
| 0.0922 | 2070 | 0.3694 | - | - |
| 0.0927 | 2080 | 0.3989 | - | - |
| 0.0931 | 2090 | 0.4237 | - | - |
| 0.0936 | 2100 | 0.3961 | - | - |
| 0.0940 | 2110 | 0.4264 | - | - |
| 0.0945 | 2120 | 0.3609 | - | - |
| 0.0949 | 2130 | 0.4154 | - | - |
| 0.0954 | 2140 | 0.3661 | - | - |
| 0.0958 | 2150 | 0.3328 | - | - |
| 0.0962 | 2160 | 0.3456 | - | - |
| 0.0967 | 2170 | 0.3478 | - | - |
| 0.0971 | 2180 | 0.3339 | - | - |
| 0.0976 | 2190 | 0.3833 | - | - |
| 0.0980 | 2200 | 0.3238 | - | - |
| 0.0985 | 2210 | 0.3871 | - | - |
| 0.0989 | 2220 | 0.4009 | - | - |
| 0.0994 | 2230 | 0.4115 | - | - |
| 0.0998 | 2240 | 0.4024 | - | - |
| 0.1003 | 2250 | 0.35 | - | - |
| 0.1007 | 2260 | 0.3649 | - | - |
| 0.1011 | 2270 | 0.3615 | - | - |
| 0.1016 | 2280 | 0.3898 | - | - |
| 0.1020 | 2290 | 0.3866 | - | - |
| 0.1025 | 2300 | 0.3904 | - | - |
| 0.1029 | 2310 | 0.3321 | - | - |
| 0.1034 | 2320 | 0.3803 | - | - |
| 0.1038 | 2330 | 0.3831 | - | - |
| 0.1043 | 2340 | 0.403 | - | - |
| 0.1047 | 2350 | 0.3803 | - | - |
| 0.1052 | 2360 | 0.3463 | - | - |
| 0.1056 | 2370 | 0.3987 | - | - |
| 0.1060 | 2380 | 0.3731 | - | - |
| 0.1065 | 2390 | 0.353 | - | - |
| 0.1069 | 2400 | 0.3166 | - | - |
| 0.1074 | 2410 | 0.3895 | - | - |
| 0.1078 | 2420 | 0.4025 | - | - |
| 0.1083 | 2430 | 0.3798 | - | - |
| 0.1087 | 2440 | 0.2991 | - | - |
| 0.1092 | 2450 | 0.3094 | - | - |
| 0.1096 | 2460 | 0.3669 | - | - |
| 0.1101 | 2470 | 0.3412 | - | - |
| 0.1105 | 2480 | 0.3697 | - | - |
| 0.1110 | 2490 | 0.369 | - | - |
| 0.1114 | 2500 | 0.3393 | - | - |
| 0.1118 | 2510 | 0.4232 | - | - |
| 0.1123 | 2520 | 0.3445 | - | - |
| 0.1127 | 2530 | 0.4165 | - | - |
| 0.1132 | 2540 | 0.3721 | - | - |
| 0.1136 | 2550 | 0.3476 | - | - |
| 0.1141 | 2560 | 0.2847 | - | - |
| 0.1145 | 2570 | 0.3609 | - | - |
| 0.1150 | 2580 | 0.3017 | - | - |
| 0.1154 | 2590 | 0.374 | - | - |
| 0.1159 | 2600 | 0.3365 | - | - |
| 0.1163 | 2610 | 0.393 | - | - |
| 0.1167 | 2620 | 0.3623 | - | - |
| 0.1172 | 2630 | 0.3538 | - | - |
| 0.1176 | 2640 | 0.3206 | - | - |
| 0.1181 | 2650 | 0.3962 | - | - |
| 0.1185 | 2660 | 0.3087 | - | - |
| 0.1190 | 2670 | 0.3482 | - | - |
| 0.1194 | 2680 | 0.3616 | - | - |
| 0.1199 | 2690 | 0.3955 | - | - |
| 0.1203 | 2700 | 0.3915 | - | - |
| 0.1208 | 2710 | 0.3782 | - | - |
| 0.1212 | 2720 | 0.3576 | - | - |
| 0.1216 | 2730 | 0.3544 | - | - |
| 0.1221 | 2740 | 0.3572 | - | - |
| 0.1225 | 2750 | 0.3107 | - | - |
| 0.1230 | 2760 | 0.3579 | - | - |
| 0.1234 | 2770 | 0.3571 | - | - |
| 0.1239 | 2780 | 0.3694 | - | - |
| 0.1243 | 2790 | 0.3674 | - | - |
| 0.1248 | 2800 | 0.3373 | - | - |
| 0.1252 | 2810 | 0.3362 | - | - |
| 0.1257 | 2820 | 0.3225 | - | - |
| 0.1261 | 2830 | 0.3609 | - | - |
| 0.1265 | 2840 | 0.3681 | - | - |
| 0.1270 | 2850 | 0.4059 | - | - |
| 0.1274 | 2860 | 0.3047 | - | - |
| 0.1279 | 2870 | 0.3446 | - | - |
| 0.1283 | 2880 | 0.3507 | - | - |
| 0.1288 | 2890 | 0.3124 | - | - |
| 0.1292 | 2900 | 0.3712 | - | - |
| 0.1297 | 2910 | 0.3394 | - | - |
| 0.1301 | 2920 | 0.3869 | - | - |
| 0.1306 | 2930 | 0.3449 | - | - |
| 0.1310 | 2940 | 0.3752 | - | - |
| 0.1314 | 2950 | 0.3341 | - | - |
| 0.1319 | 2960 | 0.3329 | - | - |
| 0.1323 | 2970 | 0.36 | - | - |
| 0.1328 | 2980 | 0.3788 | - | - |
| 0.1332 | 2990 | 0.3834 | - | - |
| 0.1337 | 3000 | 0.3426 | 0.7603 | 0.7590 |
| 0.1341 | 3010 | 0.3591 | - | - |
| 0.1346 | 3020 | 0.2923 | - | - |
| 0.1350 | 3030 | 0.332 | - | - |
| 0.1355 | 3040 | 0.3867 | - | - |
| 0.1359 | 3050 | 0.3778 | - | - |
| 0.1363 | 3060 | 0.3389 | - | - |
| 0.1368 | 3070 | 0.3069 | - | - |
| 0.1372 | 3080 | 0.3833 | - | - |
| 0.1377 | 3090 | 0.3497 | - | - |
| 0.1381 | 3100 | 0.3698 | - | - |
| 0.1386 | 3110 | 0.335 | - | - |
| 0.1390 | 3120 | 0.3578 | - | - |
| 0.1395 | 3130 | 0.3171 | - | - |
| 0.1399 | 3140 | 0.3073 | - | - |
| 0.1404 | 3150 | 0.3354 | - | - |
| 0.1408 | 3160 | 0.3338 | - | - |
| 0.1412 | 3170 | 0.367 | - | - |
| 0.1417 | 3180 | 0.3299 | - | - |
| 0.1421 | 3190 | 0.3622 | - | - |
| 0.1426 | 3200 | 0.3158 | - | - |
| 0.1430 | 3210 | 0.3242 | - | - |
| 0.1435 | 3220 | 0.388 | - | - |
| 0.1439 | 3230 | 0.3626 | - | - |
| 0.1444 | 3240 | 0.3371 | - | - |
| 0.1448 | 3250 | 0.3808 | - | - |
| 0.1453 | 3260 | 0.3375 | - | - |
| 0.1457 | 3270 | 0.352 | - | - |
| 0.1462 | 3280 | 0.3466 | - | - |
| 0.1466 | 3290 | 0.3355 | - | - |
| 0.1470 | 3300 | 0.3432 | - | - |
| 0.1475 | 3310 | 0.372 | - | - |
| 0.1479 | 3320 | 0.3501 | - | - |
| 0.1484 | 3330 | 0.3311 | - | - |
| 0.1488 | 3340 | 0.3312 | - | - |
| 0.1493 | 3350 | 0.3276 | - | - |
| 0.1497 | 3360 | 0.3218 | - | - |
| 0.1502 | 3370 | 0.4019 | - | - |
| 0.1506 | 3380 | 0.3132 | - | - |
| 0.1511 | 3390 | 0.3741 | - | - |
| 0.1515 | 3400 | 0.3359 | - | - |
| 0.1519 | 3410 | 0.381 | - | - |
| 0.1524 | 3420 | 0.3024 | - | - |
| 0.1528 | 3430 | 0.3238 | - | - |
| 0.1533 | 3440 | 0.2675 | - | - |
| 0.1537 | 3450 | 0.3568 | - | - |
| 0.1542 | 3460 | 0.3666 | - | - |
| 0.1546 | 3470 | 0.3307 | - | - |
| 0.1551 | 3480 | 0.3698 | - | - |
| 0.1555 | 3490 | 0.3668 | - | - |
| 0.1560 | 3500 | 0.385 | - | - |
| 0.1564 | 3510 | 0.3068 | - | - |
| 0.1568 | 3520 | 0.3015 | - | - |
| 0.1573 | 3530 | 0.3604 | - | - |
| 0.1577 | 3540 | 0.3592 | - | - |
| 0.1582 | 3550 | 0.3483 | - | - |
| 0.1586 | 3560 | 0.3131 | - | - |
| 0.1591 | 3570 | 0.3738 | - | - |
| 0.1595 | 3580 | 0.3719 | - | - |
| 0.1600 | 3590 | 0.3409 | - | - |
| 0.1604 | 3600 | 0.4082 | - | - |
| 0.1609 | 3610 | 0.2881 | - | - |
| 0.1613 | 3620 | 0.3214 | - | - |
| 0.1617 | 3630 | 0.4413 | - | - |
| 0.1622 | 3640 | 0.3706 | - | - |
| 0.1626 | 3650 | 0.3643 | - | - |
| 0.1631 | 3660 | 0.3493 | - | - |
| 0.1635 | 3670 | 0.3877 | - | - |
| 0.1640 | 3680 | 0.3278 | - | - |
| 0.1644 | 3690 | 0.3211 | - | - |
| 0.1649 | 3700 | 0.4104 | - | - |
| 0.1653 | 3710 | 0.4558 | - | - |
| 0.1658 | 3720 | 0.3602 | - | - |
| 0.1662 | 3730 | 0.3348 | - | - |
| 0.1666 | 3740 | 0.2922 | - | - |
| 0.1671 | 3750 | 0.329 | - | - |
| 0.1675 | 3760 | 0.3507 | - | - |
| 0.1680 | 3770 | 0.2853 | - | - |
| 0.1684 | 3780 | 0.3556 | - | - |
| 0.1689 | 3790 | 0.3138 | - | - |
| 0.1693 | 3800 | 0.3536 | - | - |
| 0.1698 | 3810 | 0.3762 | - | - |
| 0.1702 | 3820 | 0.3262 | - | - |
| 0.1707 | 3830 | 0.3571 | - | - |
| 0.1711 | 3840 | 0.3455 | - | - |
| 0.1715 | 3850 | 0.3283 | - | - |
| 0.1720 | 3860 | 0.3317 | - | - |
| 0.1724 | 3870 | 0.2984 | - | - |
| 0.1729 | 3880 | 0.2659 | - | - |
| 0.1733 | 3890 | 0.2844 | - | - |
| 0.1738 | 3900 | 0.2999 | - | - |
| 0.1742 | 3910 | 0.2991 | - | - |
| 0.1747 | 3920 | 0.2667 | - | - |
| 0.1751 | 3930 | 0.3529 | - | - |
| 0.1756 | 3940 | 0.3767 | - | - |
| 0.1760 | 3950 | 0.3909 | - | - |
| 0.1765 | 3960 | 0.3393 | - | - |
| 0.1769 | 3970 | 0.2918 | - | - |
| 0.1773 | 3980 | 0.3363 | - | - |
| 0.1778 | 3990 | 0.3694 | - | - |
| 0.1782 | 4000 | 0.3 | 0.7572 | 0.7542 |
| 0.1787 | 4010 | 0.3266 | - | - |
| 0.1791 | 4020 | 0.3059 | - | - |
| 0.1796 | 4030 | 0.3038 | - | - |
| 0.1800 | 4040 | 0.3415 | - | - |
| 0.1805 | 4050 | 0.3385 | - | - |
| 0.1809 | 4060 | 0.3145 | - | - |
| 0.1814 | 4070 | 0.2816 | - | - |
| 0.1818 | 4080 | 0.3272 | - | - |
| 0.1822 | 4090 | 0.3335 | - | - |
| 0.1827 | 4100 | 0.3412 | - | - |
| 0.1831 | 4110 | 0.3367 | - | - |
| 0.1836 | 4120 | 0.2754 | - | - |
| 0.1840 | 4130 | 0.298 | - | - |
| 0.1845 | 4140 | 0.3252 | - | - |
| 0.1849 | 4150 | 0.3613 | - | - |
| 0.1854 | 4160 | 0.3197 | - | - |
| 0.1858 | 4170 | 0.3578 | - | - |
| 0.1863 | 4180 | 0.3254 | - | - |
| 0.1867 | 4190 | 0.2993 | - | - |
| 0.1871 | 4200 | 0.3188 | - | - |
| 0.1876 | 4210 | 0.3217 | - | - |
| 0.1880 | 4220 | 0.2893 | - | - |
| 0.1885 | 4230 | 0.3223 | - | - |
| 0.1889 | 4240 | 0.3522 | - | - |
| 0.1894 | 4250 | 0.3489 | - | - |
| 0.1898 | 4260 | 0.3313 | - | - |
| 0.1903 | 4270 | 0.3612 | - | - |
| 0.1907 | 4280 | 0.3323 | - | - |
| 0.1912 | 4290 | 0.2971 | - | - |
| 0.1916 | 4300 | 0.3009 | - | - |
| 0.1920 | 4310 | 0.3336 | - | - |
| 0.1925 | 4320 | 0.3655 | - | - |
| 0.1929 | 4330 | 0.3414 | - | - |
| 0.1934 | 4340 | 0.2903 | - | - |
| 0.1938 | 4350 | 0.3732 | - | - |
| 0.1943 | 4360 | 0.3526 | - | - |
| 0.1947 | 4370 | 0.3424 | - | - |
| 0.1952 | 4380 | 0.3371 | - | - |
| 0.1956 | 4390 | 0.3407 | - | - |
| 0.1961 | 4400 | 0.3626 | - | - |
| 0.1965 | 4410 | 0.3104 | - | - |
| 0.1969 | 4420 | 0.3432 | - | - |
| 0.1974 | 4430 | 0.2897 | - | - |
| 0.1978 | 4440 | 0.2952 | - | - |
| 0.1983 | 4450 | 0.3032 | - | - |
| 0.1987 | 4460 | 0.3179 | - | - |
| 0.1992 | 4470 | 0.3364 | - | - |
| 0.1996 | 4480 | 0.2757 | - | - |
| 0.2001 | 4490 | 0.3775 | - | - |
| 0.2005 | 4500 | 0.2782 | - | - |
| 0.2010 | 4510 | 0.2787 | - | - |
| 0.2014 | 4520 | 0.3433 | - | - |
| 0.2018 | 4530 | 0.3348 | - | - |
| 0.2023 | 4540 | 0.295 | - | - |
| 0.2027 | 4550 | 0.3076 | - | - |
| 0.2032 | 4560 | 0.3489 | - | - |
| 0.2036 | 4570 | 0.3741 | - | - |
| 0.2041 | 4580 | 0.3121 | - | - |
| 0.2045 | 4590 | 0.2682 | - | - |
| 0.2050 | 4600 | 0.3106 | - | - |
| 0.2054 | 4610 | 0.312 | - | - |
| 0.2059 | 4620 | 0.3537 | - | - |
| 0.2063 | 4630 | 0.2801 | - | - |
| 0.2068 | 4640 | 0.3378 | - | - |
| 0.2072 | 4650 | 0.3417 | - | - |
| 0.2076 | 4660 | 0.4114 | - | - |
| 0.2081 | 4670 | 0.3325 | - | - |
| 0.2085 | 4680 | 0.3085 | - | - |
| 0.2090 | 4690 | 0.2875 | - | - |
| 0.2094 | 4700 | 0.3864 | - | - |
| 0.2099 | 4710 | 0.3235 | - | - |
| 0.2103 | 4720 | 0.3187 | - | - |
| 0.2108 | 4730 | 0.2956 | - | - |
| 0.2112 | 4740 | 0.3405 | - | - |
| 0.2117 | 4750 | 0.313 | - | - |
| 0.2121 | 4760 | 0.2865 | - | - |
| 0.2125 | 4770 | 0.3555 | - | - |
| 0.2130 | 4780 | 0.3089 | - | - |
| 0.2134 | 4790 | 0.3021 | - | - |
| 0.2139 | 4800 | 0.353 | - | - |
| 0.2143 | 4810 | 0.3356 | - | - |
| 0.2148 | 4820 | 0.338 | - | - |
| 0.2152 | 4830 | 0.3362 | - | - |
| 0.2157 | 4840 | 0.3152 | - | - |
| 0.2161 | 4850 | 0.3321 | - | - |
| 0.2166 | 4860 | 0.3087 | - | - |
| 0.2170 | 4870 | 0.3503 | - | - |
| 0.2174 | 4880 | 0.3841 | - | - |
| 0.2179 | 4890 | 0.333 | - | - |
| 0.2183 | 4900 | 0.3705 | - | - |
| 0.2188 | 4910 | 0.3121 | - | - |
| 0.2192 | 4920 | 0.3151 | - | - |
| 0.2197 | 4930 | 0.3138 | - | - |
| 0.2201 | 4940 | 0.3525 | - | - |
| 0.2206 | 4950 | 0.3233 | - | - |
| 0.2210 | 4960 | 0.2762 | - | - |
| 0.2215 | 4970 | 0.3679 | - | - |
| 0.2219 | 4980 | 0.3351 | - | - |
| 0.2223 | 4990 | 0.3733 | - | - |
| 0.2228 | 5000 | 0.366 | 0.7601 | 0.7577 |
| 0.2232 | 5010 | 0.2968 | - | - |
| 0.2237 | 5020 | 0.3618 | - | - |
| 0.2241 | 5030 | 0.3758 | - | - |
| 0.2246 | 5040 | 0.2664 | - | - |
| 0.2250 | 5050 | 0.3232 | - | - |
| 0.2255 | 5060 | 0.3452 | - | - |
| 0.2259 | 5070 | 0.4011 | - | - |
| 0.2264 | 5080 | 0.3521 | - | - |
| 0.2268 | 5090 | 0.3029 | - | - |
| 0.2272 | 5100 | 0.3058 | - | - |
| 0.2277 | 5110 | 0.3198 | - | - |
| 0.2281 | 5120 | 0.2958 | - | - |
| 0.2286 | 5130 | 0.3046 | - | - |
| 0.2290 | 5140 | 0.3284 | - | - |
| 0.2295 | 5150 | 0.333 | - | - |
| 0.2299 | 5160 | 0.3385 | - | - |
| 0.2304 | 5170 | 0.3359 | - | - |
| 0.2308 | 5180 | 0.3572 | - | - |
| 0.2313 | 5190 | 0.2992 | - | - |
| 0.2317 | 5200 | 0.318 | - | - |
| 0.2321 | 5210 | 0.3002 | - | - |
| 0.2326 | 5220 | 0.3194 | - | - |
| 0.2330 | 5230 | 0.3398 | - | - |
| 0.2335 | 5240 | 0.2675 | - | - |
| 0.2339 | 5250 | 0.312 | - | - |
| 0.2344 | 5260 | 0.3199 | - | - |
| 0.2348 | 5270 | 0.3446 | - | - |
| 0.2353 | 5280 | 0.3082 | - | - |
| 0.2357 | 5290 | 0.3522 | - | - |
| 0.2362 | 5300 | 0.3347 | - | - |
| 0.2366 | 5310 | 0.3571 | - | - |
| 0.2371 | 5320 | 0.3275 | - | - |
| 0.2375 | 5330 | 0.3524 | - | - |
| 0.2379 | 5340 | 0.3151 | - | - |
| 0.2384 | 5350 | 0.3338 | - | - |
| 0.2388 | 5360 | 0.3794 | - | - |
| 0.2393 | 5370 | 0.3591 | - | - |
| 0.2397 | 5380 | 0.3442 | - | - |
| 0.2402 | 5390 | 0.2927 | - | - |
| 0.2406 | 5400 | 0.3316 | - | - |
| 0.2411 | 5410 | 0.3152 | - | - |
| 0.2415 | 5420 | 0.3876 | - | - |
| 0.2420 | 5430 | 0.324 | - | - |
| 0.2424 | 5440 | 0.3296 | - | - |
| 0.2428 | 5450 | 0.3499 | - | - |
| 0.2433 | 5460 | 0.3552 | - | - |
| 0.2437 | 5470 | 0.3394 | - | - |
| 0.2442 | 5480 | 0.3083 | - | - |
| 0.2446 | 5490 | 0.3198 | - | - |
| 0.2451 | 5500 | 0.2887 | - | - |
| 0.2455 | 5510 | 0.2898 | - | - |
| 0.2460 | 5520 | 0.3092 | - | - |
| 0.2464 | 5530 | 0.3025 | - | - |
| 0.2469 | 5540 | 0.3253 | - | - |
| 0.2473 | 5550 | 0.3686 | - | - |
| 0.2477 | 5560 | 0.3205 | - | - |
| 0.2482 | 5570 | 0.3507 | - | - |
| 0.2486 | 5580 | 0.2809 | - | - |
| 0.2491 | 5590 | 0.3339 | - | - |
| 0.2495 | 5600 | 0.3261 | - | - |
| 0.2500 | 5610 | 0.2804 | - | - |
| 0.2504 | 5620 | 0.2856 | - | - |
| 0.2509 | 5630 | 0.3211 | - | - |
| 0.2513 | 5640 | 0.3126 | - | - |
| 0.2518 | 5650 | 0.3374 | - | - |
| 0.2522 | 5660 | 0.2957 | - | - |
| 0.2526 | 5670 | 0.3414 | - | - |
| 0.2531 | 5680 | 0.3219 | - | - |
| 0.2535 | 5690 | 0.3554 | - | - |
| 0.2540 | 5700 | 0.2738 | - | - |
| 0.2544 | 5710 | 0.361 | - | - |
| 0.2549 | 5720 | 0.336 | - | - |
| 0.2553 | 5730 | 0.3254 | - | - |
| 0.2558 | 5740 | 0.3453 | - | - |
| 0.2562 | 5750 | 0.2984 | - | - |
| 0.2567 | 5760 | 0.3224 | - | - |
| 0.2571 | 5770 | 0.2553 | - | - |
| 0.2575 | 5780 | 0.301 | - | - |
| 0.2580 | 5790 | 0.3767 | - | - |
| 0.2584 | 5800 | 0.3092 | - | - |
| 0.2589 | 5810 | 0.2676 | - | - |
| 0.2593 | 5820 | 0.3178 | - | - |
| 0.2598 | 5830 | 0.3117 | - | - |
| 0.2602 | 5840 | 0.3446 | - | - |
| 0.2607 | 5850 | 0.3347 | - | - |
| 0.2611 | 5860 | 0.3841 | - | - |
| 0.2616 | 5870 | 0.2847 | - | - |
| 0.2620 | 5880 | 0.3587 | - | - |
| 0.2624 | 5890 | 0.2812 | - | - |
| 0.2629 | 5900 | 0.3577 | - | - |
| 0.2633 | 5910 | 0.3011 | - | - |
| 0.2638 | 5920 | 0.3102 | - | - |
| 0.2642 | 5930 | 0.3297 | - | - |
| 0.2647 | 5940 | 0.2603 | - | - |
| 0.2651 | 5950 | 0.3575 | - | - |
| 0.2656 | 5960 | 0.3617 | - | - |
| 0.2660 | 5970 | 0.3587 | - | - |
| 0.2665 | 5980 | 0.3198 | - | - |
| 0.2669 | 5990 | 0.3536 | - | - |
| 0.2673 | 6000 | 0.3047 | 0.7725 | 0.7699 |
| 0.2678 | 6010 | 0.3211 | - | - |
| 0.2682 | 6020 | 0.392 | - | - |
| 0.2687 | 6030 | 0.3359 | - | - |
| 0.2691 | 6040 | 0.2903 | - | - |
| 0.2696 | 6050 | 0.286 | - | - |
| 0.2700 | 6060 | 0.3426 | - | - |
| 0.2705 | 6070 | 0.3406 | - | - |
| 0.2709 | 6080 | 0.2903 | - | - |
| 0.2714 | 6090 | 0.3175 | - | - |
| 0.2718 | 6100 | 0.2794 | - | - |
| 0.2723 | 6110 | 0.3232 | - | - |
| 0.2727 | 6120 | 0.3054 | - | - |
| 0.2731 | 6130 | 0.361 | - | - |
| 0.2736 | 6140 | 0.3524 | - | - |
| 0.2740 | 6150 | 0.3371 | - | - |
| 0.2745 | 6160 | 0.313 | - | - |
| 0.2749 | 6170 | 0.2713 | - | - |
| 0.2754 | 6180 | 0.3141 | - | - |
| 0.2758 | 6190 | 0.3197 | - | - |
| 0.2763 | 6200 | 0.2792 | - | - |
| 0.2767 | 6210 | 0.3169 | - | - |
| 0.2772 | 6220 | 0.307 | - | - |
| 0.2776 | 6230 | 0.2737 | - | - |
| 0.2780 | 6240 | 0.3348 | - | - |
| 0.2785 | 6250 | 0.2885 | - | - |
| 0.2789 | 6260 | 0.3416 | - | - |
| 0.2794 | 6270 | 0.3422 | - | - |
| 0.2798 | 6280 | 0.2758 | - | - |
| 0.2803 | 6290 | 0.3736 | - | - |
| 0.2807 | 6300 | 0.3036 | - | - |
| 0.2812 | 6310 | 0.3704 | - | - |
| 0.2816 | 6320 | 0.3312 | - | - |
| 0.2821 | 6330 | 0.3431 | - | - |
| 0.2825 | 6340 | 0.3502 | - | - |
| 0.2829 | 6350 | 0.2821 | - | - |
| 0.2834 | 6360 | 0.3097 | - | - |
| 0.2838 | 6370 | 0.3444 | - | - |
| 0.2843 | 6380 | 0.3349 | - | - |
| 0.2847 | 6390 | 0.2999 | - | - |
| 0.2852 | 6400 | 0.3149 | - | - |
| 0.2856 | 6410 | 0.3462 | - | - |
| 0.2861 | 6420 | 0.3337 | - | - |
| 0.2865 | 6430 | 0.3329 | - | - |
| 0.2870 | 6440 | 0.3294 | - | - |
| 0.2874 | 6450 | 0.2917 | - | - |
| 0.2878 | 6460 | 0.3007 | - | - |
| 0.2883 | 6470 | 0.2809 | - | - |
| 0.2887 | 6480 | 0.3745 | - | - |
| 0.2892 | 6490 | 0.3625 | - | - |
| 0.2896 | 6500 | 0.3123 | - | - |
| 0.2901 | 6510 | 0.3209 | - | - |
| 0.2905 | 6520 | 0.347 | - | - |
| 0.2910 | 6530 | 0.3084 | - | - |
| 0.2914 | 6540 | 0.2829 | - | - |
| 0.2919 | 6550 | 0.3569 | - | - |
| 0.2923 | 6560 | 0.2686 | - | - |
| 0.2927 | 6570 | 0.2929 | - | - |
| 0.2932 | 6580 | 0.3237 | - | - |
| 0.2936 | 6590 | 0.3451 | - | - |
| 0.2941 | 6600 | 0.3199 | - | - |
| 0.2945 | 6610 | 0.2848 | - | - |
| 0.2950 | 6620 | 0.2842 | - | - |
| 0.2954 | 6630 | 0.3168 | - | - |
| 0.2959 | 6640 | 0.3094 | - | - |
| 0.2963 | 6650 | 0.3239 | - | - |
| 0.2968 | 6660 | 0.357 | - | - |
| 0.2972 | 6670 | 0.3279 | - | - |
| 0.2976 | 6680 | 0.4015 | - | - |
| 0.2981 | 6690 | 0.2901 | - | - |
| 0.2985 | 6700 | 0.3387 | - | - |
| 0.2990 | 6710 | 0.3282 | - | - |
| 0.2994 | 6720 | 0.2909 | - | - |
| 0.2999 | 6730 | 0.3556 | - | - |
| 0.3003 | 6740 | 0.3008 | - | - |
| 0.3008 | 6750 | 0.3205 | - | - |
| 0.3012 | 6760 | 0.3132 | - | - |
| 0.3017 | 6770 | 0.3181 | - | - |
| 0.3021 | 6780 | 0.3752 | - | - |
| 0.3026 | 6790 | 0.317 | - | - |
| 0.3030 | 6800 | 0.3584 | - | - |
| 0.3034 | 6810 | 0.3475 | - | - |
| 0.3039 | 6820 | 0.2827 | - | - |
| 0.3043 | 6830 | 0.2925 | - | - |
| 0.3048 | 6840 | 0.2941 | - | - |
| 0.3052 | 6850 | 0.3154 | - | - |
| 0.3057 | 6860 | 0.3301 | - | - |
| 0.3061 | 6870 | 0.3492 | - | - |
| 0.3066 | 6880 | 0.3147 | - | - |
| 0.3070 | 6890 | 0.348 | - | - |
| 0.3075 | 6900 | 0.3577 | - | - |
| 0.3079 | 6910 | 0.2893 | - | - |
| 0.3083 | 6920 | 0.3298 | - | - |
| 0.3088 | 6930 | 0.3071 | - | - |
| 0.3092 | 6940 | 0.322 | - | - |
| 0.3097 | 6950 | 0.3055 | - | - |
| 0.3101 | 6960 | 0.3333 | - | - |
| 0.3106 | 6970 | 0.3329 | - | - |
| 0.3110 | 6980 | 0.3298 | - | - |
| 0.3115 | 6990 | 0.3061 | - | - |
| 0.3119 | 7000 | 0.3005 | 0.7686 | 0.7672 |
| 0.3124 | 7010 | 0.3463 | - | - |
| 0.3128 | 7020 | 0.3467 | - | - |
| 0.3132 | 7030 | 0.3104 | - | - |
| 0.3137 | 7040 | 0.3268 | - | - |
| 0.3141 | 7050 | 0.3222 | - | - |
| 0.3146 | 7060 | 0.3126 | - | - |
| 0.3150 | 7070 | 0.3121 | - | - |
| 0.3155 | 7080 | 0.2935 | - | - |
| 0.3159 | 7090 | 0.2897 | - | - |
| 0.3164 | 7100 | 0.3066 | - | - |
| 0.3168 | 7110 | 0.3363 | - | - |
| 0.3173 | 7120 | 0.3293 | - | - |
| 0.3177 | 7130 | 0.3161 | - | - |
| 0.3181 | 7140 | 0.3582 | - | - |
| 0.3186 | 7150 | 0.3345 | - | - |
| 0.3190 | 7160 | 0.3307 | - | - |
| 0.3195 | 7170 | 0.3269 | - | - |
| 0.3199 | 7180 | 0.3262 | - | - |
| 0.3204 | 7190 | 0.3115 | - | - |
| 0.3208 | 7200 | 0.3145 | - | - |
| 0.3213 | 7210 | 0.2816 | - | - |
| 0.3217 | 7220 | 0.3239 | - | - |
| 0.3222 | 7230 | 0.2825 | - | - |
| 0.3226 | 7240 | 0.3217 | - | - |
| 0.3230 | 7250 | 0.2913 | - | - |
| 0.3235 | 7260 | 0.3219 | - | - |
| 0.3239 | 7270 | 0.2968 | - | - |
| 0.3244 | 7280 | 0.2999 | - | - |
| 0.3248 | 7290 | 0.2924 | - | - |
| 0.3253 | 7300 | 0.3033 | - | - |
| 0.3257 | 7310 | 0.3521 | - | - |
| 0.3262 | 7320 | 0.3258 | - | - |
| 0.3266 | 7330 | 0.3724 | - | - |
| 0.3271 | 7340 | 0.3068 | - | - |
| 0.3275 | 7350 | 0.3095 | - | - |
| 0.3279 | 7360 | 0.2957 | - | - |
| 0.3284 | 7370 | 0.2741 | - | - |
| 0.3288 | 7380 | 0.3183 | - | - |
| 0.3293 | 7390 | 0.3409 | - | - |
| 0.3297 | 7400 | 0.3066 | - | - |
| 0.3302 | 7410 | 0.3139 | - | - |
| 0.3306 | 7420 | 0.3639 | - | - |
| 0.3311 | 7430 | 0.3333 | - | - |
| 0.3315 | 7440 | 0.276 | - | - |
| 0.3320 | 7450 | 0.3326 | - | - |
| 0.3324 | 7460 | 0.3239 | - | - |
| 0.3329 | 7470 | 0.3067 | - | - |
| 0.3333 | 7480 | 0.3213 | - | - |
| 0.3337 | 7490 | 0.3227 | - | - |
| 0.3342 | 7500 | 0.3027 | - | - |
| 0.3346 | 7510 | 0.3017 | - | - |
| 0.3351 | 7520 | 0.2797 | - | - |
| 0.3355 | 7530 | 0.3215 | - | - |
| 0.3360 | 7540 | 0.2713 | - | - |
| 0.3364 | 7550 | 0.3071 | - | - |
| 0.3369 | 7560 | 0.309 | - | - |
| 0.3373 | 7570 | 0.3145 | - | - |
| 0.3378 | 7580 | 0.2694 | - | - |
| 0.3382 | 7590 | 0.3036 | - | - |
| 0.3386 | 7600 | 0.2892 | - | - |
| 0.3391 | 7610 | 0.3227 | - | - |
| 0.3395 | 7620 | 0.3373 | - | - |
| 0.3400 | 7630 | 0.2584 | - | - |
| 0.3404 | 7640 | 0.232 | - | - |
| 0.3409 | 7650 | 0.311 | - | - |
| 0.3413 | 7660 | 0.3536 | - | - |
| 0.3418 | 7670 | 0.3279 | - | - |
| 0.3422 | 7680 | 0.3034 | - | - |
| 0.3427 | 7690 | 0.2916 | - | - |
| 0.3431 | 7700 | 0.2822 | - | - |
| 0.3435 | 7710 | 0.2871 | - | - |
| 0.3440 | 7720 | 0.3284 | - | - |
| 0.3444 | 7730 | 0.2909 | - | - |
| 0.3449 | 7740 | 0.3292 | - | - |
| 0.3453 | 7750 | 0.3393 | - | - |
| 0.3458 | 7760 | 0.2838 | - | - |
| 0.3462 | 7770 | 0.2686 | - | - |
| 0.3467 | 7780 | 0.318 | - | - |
| 0.3471 | 7790 | 0.3335 | - | - |
| 0.3476 | 7800 | 0.3017 | - | - |
| 0.3480 | 7810 | 0.2595 | - | - |
| 0.3484 | 7820 | 0.3008 | - | - |
| 0.3489 | 7830 | 0.2726 | - | - |
| 0.3493 | 7840 | 0.2938 | - | - |
| 0.3498 | 7850 | 0.2923 | - | - |
| 0.3502 | 7860 | 0.361 | - | - |
| 0.3507 | 7870 | 0.2689 | - | - |
| 0.3511 | 7880 | 0.3014 | - | - |
| 0.3516 | 7890 | 0.3169 | - | - |
| 0.3520 | 7900 | 0.3124 | - | - |
| 0.3525 | 7910 | 0.3367 | - | - |
| 0.3529 | 7920 | 0.276 | - | - |
| 0.3533 | 7930 | 0.3556 | - | - |
| 0.3538 | 7940 | 0.3036 | - | - |
| 0.3542 | 7950 | 0.2983 | - | - |
| 0.3547 | 7960 | 0.3393 | - | - |
| 0.3551 | 7970 | 0.3688 | - | - |
| 0.3556 | 7980 | 0.3391 | - | - |
| 0.3560 | 7990 | 0.3432 | - | - |
| 0.3565 | 8000 | 0.3061 | 0.7543 | 0.7526 |
| 0.3569 | 8010 | 0.293 | - | - |
| 0.3574 | 8020 | 0.2925 | - | - |
| 0.3578 | 8030 | 0.2852 | - | - |
| 0.3582 | 8040 | 0.396 | - | - |
| 0.3587 | 8050 | 0.2927 | - | - |
| 0.3591 | 8060 | 0.3028 | - | - |
| 0.3596 | 8070 | 0.3102 | - | - |
| 0.3600 | 8080 | 0.328 | - | - |
| 0.3605 | 8090 | 0.3194 | - | - |
| 0.3609 | 8100 | 0.2808 | - | - |
| 0.3614 | 8110 | 0.292 | - | - |
| 0.3618 | 8120 | 0.3232 | - | - |
| 0.3623 | 8130 | 0.3629 | - | - |
| 0.3627 | 8140 | 0.3222 | - | - |
| 0.3632 | 8150 | 0.3691 | - | - |
| 0.3636 | 8160 | 0.2965 | - | - |
| 0.3640 | 8170 | 0.293 | - | - |
| 0.3645 | 8180 | 0.3166 | - | - |
| 0.3649 | 8190 | 0.3021 | - | - |
| 0.3654 | 8200 | 0.2815 | - | - |
| 0.3658 | 8210 | 0.3089 | - | - |
| 0.3663 | 8220 | 0.2804 | - | - |
| 0.3667 | 8230 | 0.3011 | - | - |
| 0.3672 | 8240 | 0.27 | - | - |
| 0.3676 | 8250 | 0.361 | - | - |
| 0.3681 | 8260 | 0.3322 | - | - |
| 0.3685 | 8270 | 0.2741 | - | - |
| 0.3689 | 8280 | 0.3207 | - | - |
| 0.3694 | 8290 | 0.3437 | - | - |
| 0.3698 | 8300 | 0.3259 | - | - |
| 0.3703 | 8310 | 0.2473 | - | - |
| 0.3707 | 8320 | 0.2321 | - | - |
| 0.3712 | 8330 | 0.2699 | - | - |
| 0.3716 | 8340 | 0.2404 | - | - |
| 0.3721 | 8350 | 0.2586 | - | - |
| 0.3725 | 8360 | 0.295 | - | - |
| 0.3730 | 8370 | 0.3063 | - | - |
| 0.3734 | 8380 | 0.2551 | - | - |
| 0.3738 | 8390 | 0.2562 | - | - |
| 0.3743 | 8400 | 0.3062 | - | - |
| 0.3747 | 8410 | 0.3165 | - | - |
| 0.3752 | 8420 | 0.308 | - | - |
| 0.3756 | 8430 | 0.2976 | - | - |
| 0.3761 | 8440 | 0.284 | - | - |
| 0.3765 | 8450 | 0.3525 | - | - |
| 0.3770 | 8460 | 0.2639 | - | - |
| 0.3774 | 8470 | 0.3171 | - | - |
| 0.3779 | 8480 | 0.3367 | - | - |
| 0.3783 | 8490 | 0.2801 | - | - |
| 0.3787 | 8500 | 0.2957 | - | - |
| 0.3792 | 8510 | 0.3684 | - | - |
| 0.3796 | 8520 | 0.312 | - | - |
| 0.3801 | 8530 | 0.3703 | - | - |
| 0.3805 | 8540 | 0.2963 | - | - |
| 0.3810 | 8550 | 0.3032 | - | - |
| 0.3814 | 8560 | 0.3415 | - | - |
| 0.3819 | 8570 | 0.3011 | - | - |
| 0.3823 | 8580 | 0.33 | - | - |
| 0.3828 | 8590 | 0.2763 | - | - |
| 0.3832 | 8600 | 0.3295 | - | - |
| 0.3836 | 8610 | 0.3334 | - | - |
| 0.3841 | 8620 | 0.258 | - | - |
| 0.3845 | 8630 | 0.2626 | - | - |
| 0.3850 | 8640 | 0.2813 | - | - |
| 0.3854 | 8650 | 0.2845 | - | - |
| 0.3859 | 8660 | 0.2719 | - | - |
| 0.3863 | 8670 | 0.2898 | - | - |
| 0.3868 | 8680 | 0.3011 | - | - |
| 0.3872 | 8690 | 0.2914 | - | - |
| 0.3877 | 8700 | 0.3355 | - | - |
| 0.3881 | 8710 | 0.2678 | - | - |
| 0.3885 | 8720 | 0.2266 | - | - |
| 0.3890 | 8730 | 0.3016 | - | - |
| 0.3894 | 8740 | 0.3369 | - | - |
| 0.3899 | 8750 | 0.3558 | - | - |
| 0.3903 | 8760 | 0.2824 | - | - |
| 0.3908 | 8770 | 0.3201 | - | - |
| 0.3912 | 8780 | 0.2485 | - | - |
| 0.3917 | 8790 | 0.2603 | - | - |
| 0.3921 | 8800 | 0.3223 | - | - |
| 0.3926 | 8810 | 0.247 | - | - |
| 0.3930 | 8820 | 0.2766 | - | - |
| 0.3934 | 8830 | 0.3231 | - | - |
| 0.3939 | 8840 | 0.322 | - | - |
| 0.3943 | 8850 | 0.3039 | - | - |
| 0.3948 | 8860 | 0.2442 | - | - |
| 0.3952 | 8870 | 0.36 | - | - |
| 0.3957 | 8880 | 0.2551 | - | - |
| 0.3961 | 8890 | 0.2661 | - | - |
| 0.3966 | 8900 | 0.3001 | - | - |
| 0.3970 | 8910 | 0.2886 | - | - |
| 0.3975 | 8920 | 0.2856 | - | - |
| 0.3979 | 8930 | 0.2827 | - | - |
| 0.3984 | 8940 | 0.2652 | - | - |
| 0.3988 | 8950 | 0.3077 | - | - |
| 0.3992 | 8960 | 0.3094 | - | - |
| 0.3997 | 8970 | 0.3281 | - | - |
| 0.4001 | 8980 | 0.3399 | - | - |
| 0.4006 | 8990 | 0.3093 | - | - |
| 0.4010 | 9000 | 0.2586 | 0.7634 | 0.7607 |
| 0.4015 | 9010 | 0.2939 | - | - |
| 0.4019 | 9020 | 0.3022 | - | - |
| 0.4024 | 9030 | 0.2919 | - | - |
| 0.4028 | 9040 | 0.2524 | - | - |
| 0.4033 | 9050 | 0.2248 | - | - |
| 0.4037 | 9060 | 0.2759 | - | - |
| 0.4041 | 9070 | 0.2916 | - | - |
| 0.4046 | 9080 | 0.3006 | - | - |
| 0.4050 | 9090 | 0.2302 | - | - |
| 0.4055 | 9100 | 0.3001 | - | - |
| 0.4059 | 9110 | 0.3143 | - | - |
| 0.4064 | 9120 | 0.2544 | - | - |
| 0.4068 | 9130 | 0.3142 | - | - |
| 0.4073 | 9140 | 0.3364 | - | - |
| 0.4077 | 9150 | 0.2785 | - | - |
| 0.4082 | 9160 | 0.2948 | - | - |
| 0.4086 | 9170 | 0.2657 | - | - |
| 0.4090 | 9180 | 0.2722 | - | - |
| 0.4095 | 9190 | 0.3212 | - | - |
| 0.4099 | 9200 | 0.2952 | - | - |
| 0.4104 | 9210 | 0.2764 | - | - |
| 0.4108 | 9220 | 0.2744 | - | - |
| 0.4113 | 9230 | 0.2912 | - | - |
| 0.4117 | 9240 | 0.2676 | - | - |
| 0.4122 | 9250 | 0.2613 | - | - |
| 0.4126 | 9260 | 0.2905 | - | - |
| 0.4131 | 9270 | 0.3308 | - | - |
| 0.4135 | 9280 | 0.3311 | - | - |
| 0.4139 | 9290 | 0.2904 | - | - |
| 0.4144 | 9300 | 0.3367 | - | - |
| 0.4148 | 9310 | 0.2742 | - | - |
| 0.4153 | 9320 | 0.295 | - | - |
| 0.4157 | 9330 | 0.3034 | - | - |
| 0.4162 | 9340 | 0.3302 | - | - |
| 0.4166 | 9350 | 0.2883 | - | - |
| 0.4171 | 9360 | 0.2768 | - | - |
| 0.4175 | 9370 | 0.2953 | - | - |
| 0.4180 | 9380 | 0.3196 | - | - |
| 0.4184 | 9390 | 0.2731 | - | - |
| 0.4188 | 9400 | 0.3016 | - | - |
| 0.4193 | 9410 | 0.3325 | - | - |
| 0.4197 | 9420 | 0.2503 | - | - |
| 0.4202 | 9430 | 0.273 | - | - |
| 0.4206 | 9440 | 0.2784 | - | - |
| 0.4211 | 9450 | 0.2676 | - | - |
| 0.4215 | 9460 | 0.2891 | - | - |
| 0.4220 | 9470 | 0.2977 | - | - |
| 0.4224 | 9480 | 0.2673 | - | - |
| 0.4229 | 9490 | 0.2845 | - | - |
| 0.4233 | 9500 | 0.2825 | - | - |
| 0.4237 | 9510 | 0.2865 | - | - |
| 0.4242 | 9520 | 0.2451 | - | - |
| 0.4246 | 9530 | 0.2806 | - | - |
| 0.4251 | 9540 | 0.2629 | - | - |
| 0.4255 | 9550 | 0.3426 | - | - |
| 0.4260 | 9560 | 0.2453 | - | - |
| 0.4264 | 9570 | 0.3458 | - | - |
| 0.4269 | 9580 | 0.2392 | - | - |
| 0.4273 | 9590 | 0.2433 | - | - |
| 0.4278 | 9600 | 0.2481 | - | - |
| 0.4282 | 9610 | 0.3277 | - | - |
| 0.4287 | 9620 | 0.2609 | - | - |
| 0.4291 | 9630 | 0.2986 | - | - |
| 0.4295 | 9640 | 0.2712 | - | - |
| 0.4300 | 9650 | 0.3169 | - | - |
| 0.4304 | 9660 | 0.2638 | - | - |
| 0.4309 | 9670 | 0.2821 | - | - |
| 0.4313 | 9680 | 0.2969 | - | - |
| 0.4318 | 9690 | 0.2727 | - | - |
| 0.4322 | 9700 | 0.2858 | - | - |
| 0.4327 | 9710 | 0.2988 | - | - |
| 0.4331 | 9720 | 0.2628 | - | - |
| 0.4336 | 9730 | 0.3027 | - | - |
| 0.4340 | 9740 | 0.2502 | - | - |
| 0.4344 | 9750 | 0.3028 | - | - |
| 0.4349 | 9760 | 0.2381 | - | - |
| 0.4353 | 9770 | 0.2981 | - | - |
| 0.4358 | 9780 | 0.2208 | - | - |
| 0.4362 | 9790 | 0.2433 | - | - |
| 0.4367 | 9800 | 0.2672 | - | - |
| 0.4371 | 9810 | 0.3147 | - | - |
| 0.4376 | 9820 | 0.2655 | - | - |
| 0.4380 | 9830 | 0.273 | - | - |
| 0.4385 | 9840 | 0.3505 | - | - |
| 0.4389 | 9850 | 0.2822 | - | - |
| 0.4393 | 9860 | 0.2682 | - | - |
| 0.4398 | 9870 | 0.294 | - | - |
| 0.4402 | 9880 | 0.3002 | - | - |
| 0.4407 | 9890 | 0.2514 | - | - |
| 0.4411 | 9900 | 0.3193 | - | - |
| 0.4416 | 9910 | 0.2296 | - | - |
| 0.4420 | 9920 | 0.2209 | - | - |
| 0.4425 | 9930 | 0.2961 | - | - |
| 0.4429 | 9940 | 0.297 | - | - |
| 0.4434 | 9950 | 0.2734 | - | - |
| 0.4438 | 9960 | 0.2806 | - | - |
| 0.4442 | 9970 | 0.2634 | - | - |
| 0.4447 | 9980 | 0.3131 | - | - |
| 0.4451 | 9990 | 0.3007 | - | - |
| 0.4456 | 10000 | 0.3299 | 0.7687 | 0.7657 |
| 0.4460 | 10010 | 0.2224 | - | - |
| 0.4465 | 10020 | 0.2891 | - | - |
| 0.4469 | 10030 | 0.2997 | - | - |
| 0.4474 | 10040 | 0.3072 | - | - |
| 0.4478 | 10050 | 0.2657 | - | - |
| 0.4483 | 10060 | 0.2927 | - | - |
| 0.4487 | 10070 | 0.3071 | - | - |
| 0.4491 | 10080 | 0.2734 | - | - |
| 0.4496 | 10090 | 0.3016 | - | - |
| 0.4500 | 10100 | 0.2798 | - | - |
| 0.4505 | 10110 | 0.2845 | - | - |
| 0.4509 | 10120 | 0.2788 | - | - |
| 0.4514 | 10130 | 0.2914 | - | - |
| 0.4518 | 10140 | 0.2693 | - | - |
| 0.4523 | 10150 | 0.2866 | - | - |
| 0.4527 | 10160 | 0.3127 | - | - |
| 0.4532 | 10170 | 0.2743 | - | - |
| 0.4536 | 10180 | 0.3078 | - | - |
| 0.4540 | 10190 | 0.3003 | - | - |
| 0.4545 | 10200 | 0.2872 | - | - |
| 0.4549 | 10210 | 0.2461 | - | - |
| 0.4554 | 10220 | 0.2944 | - | - |
| 0.4558 | 10230 | 0.2765 | - | - |
| 0.4563 | 10240 | 0.2763 | - | - |
| 0.4567 | 10250 | 0.2905 | - | - |
| 0.4572 | 10260 | 0.2856 | - | - |
| 0.4576 | 10270 | 0.2722 | - | - |
| 0.4581 | 10280 | 0.2668 | - | - |
| 0.4585 | 10290 | 0.3014 | - | - |
| 0.4590 | 10300 | 0.3083 | - | - |
| 0.4594 | 10310 | 0.2957 | - | - |
| 0.4598 | 10320 | 0.3093 | - | - |
| 0.4603 | 10330 | 0.3009 | - | - |
| 0.4607 | 10340 | 0.3161 | - | - |
| 0.4612 | 10350 | 0.2737 | - | - |
| 0.4616 | 10360 | 0.2473 | - | - |
| 0.4621 | 10370 | 0.2999 | - | - |
| 0.4625 | 10380 | 0.2943 | - | - |
| 0.4630 | 10390 | 0.2784 | - | - |
| 0.4634 | 10400 | 0.2541 | - | - |
| 0.4639 | 10410 | 0.2731 | - | - |
| 0.4643 | 10420 | 0.2608 | - | - |
| 0.4647 | 10430 | 0.3024 | - | - |
| 0.4652 | 10440 | 0.2563 | - | - |
| 0.4656 | 10450 | 0.2725 | - | - |
| 0.4661 | 10460 | 0.2643 | - | - |
| 0.4665 | 10470 | 0.2627 | - | - |
| 0.4670 | 10480 | 0.2655 | - | - |
| 0.4674 | 10490 | 0.2556 | - | - |
| 0.4679 | 10500 | 0.299 | - | - |
| 0.4683 | 10510 | 0.3286 | - | - |
| 0.4688 | 10520 | 0.3075 | - | - |
| 0.4692 | 10530 | 0.2702 | - | - |
| 0.4696 | 10540 | 0.2688 | - | - |
| 0.4701 | 10550 | 0.29 | - | - |
| 0.4705 | 10560 | 0.2918 | - | - |
| 0.4710 | 10570 | 0.2507 | - | - |
| 0.4714 | 10580 | 0.2849 | - | - |
| 0.4719 | 10590 | 0.2938 | - | - |
| 0.4723 | 10600 | 0.2275 | - | - |
| 0.4728 | 10610 | 0.2662 | - | - |
| 0.4732 | 10620 | 0.2864 | - | - |
| 0.4737 | 10630 | 0.2865 | - | - |
| 0.4741 | 10640 | 0.3094 | - | - |
| 0.4745 | 10650 | 0.2479 | - | - |
| 0.4750 | 10660 | 0.2483 | - | - |
| 0.4754 | 10670 | 0.3166 | - | - |
| 0.4759 | 10680 | 0.2727 | - | - |
| 0.4763 | 10690 | 0.3077 | - | - |
| 0.4768 | 10700 | 0.3076 | - | - |
| 0.4772 | 10710 | 0.2835 | - | - |
| 0.4777 | 10720 | 0.2893 | - | - |
| 0.4781 | 10730 | 0.2889 | - | - |
| 0.4786 | 10740 | 0.279 | - | - |
| 0.4790 | 10750 | 0.2487 | - | - |
| 0.4794 | 10760 | 0.2936 | - | - |
| 0.4799 | 10770 | 0.2471 | - | - |
| 0.4803 | 10780 | 0.2807 | - | - |
| 0.4808 | 10790 | 0.2868 | - | - |
| 0.4812 | 10800 | 0.229 | - | - |
| 0.4817 | 10810 | 0.2683 | - | - |
| 0.4821 | 10820 | 0.2686 | - | - |
| 0.4826 | 10830 | 1.8939 | - | - |
| 0.4830 | 10840 | 0.8922 | - | - |
| 0.4835 | 10850 | 0.9472 | - | - |
| 0.4839 | 10860 | 0.7066 | - | - |
| 0.4843 | 10870 | 0.6178 | - | - |
| 0.4848 | 10880 | 0.6898 | - | - |
| 0.4852 | 10890 | 0.7844 | - | - |
| 0.4857 | 10900 | 0.9946 | - | - |
| 0.4861 | 10910 | 1.3618 | - | - |
| 0.4866 | 10920 | 1.2785 | - | - |
| 0.4870 | 10930 | 0.9415 | - | - |
| 0.4875 | 10940 | 0.753 | - | - |
| 0.4879 | 10950 | 0.6851 | - | - |
| 0.4884 | 10960 | 0.7812 | - | - |
| 0.4888 | 10970 | 0.9856 | - | - |
| 0.4893 | 10980 | 0.7245 | - | - |
| 0.4897 | 10990 | 1.0757 | - | - |
| 0.4901 | 11000 | 0.996 | 0.7854 | 0.7828 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.1.0+cu118
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | Non_BioNLP |
tycjan/distilbert-pl-store-products-retrieval | tycjan | sentence-similarity | [
"sentence-transformers",
"safetensors",
"distilbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2400",
"loss:MultipleNegativesRankingLoss",
"dataset:tycjan/product-query-retrieval-dataset",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/quora-distilbert-multilingual",
"base_model:finetune:sentence-transformers/quora-distilbert-multilingual",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,739,737,663,000 | 2025-02-16T20:28:19 | 9 | 0 | ---
base_model: sentence-transformers/quora-distilbert-multilingual
datasets:
- tycjan/product-query-retrieval-dataset
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2400
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: duży zielony pokrowiec ogrodowy ruhhy na huśtawkę i meble
sentences:
- 'DUŻY POKROWIEC NA HUŚTAWKĘ OGRODOWĄ MOCNY - duży i mocny pokrowiec na huśtawkę
ogrodową został wykonany z trwałego i odpornego na deszcz i promieniowanie UV
materiału jakim jest PE o gramaturze 90-95 g/m². UNIWERSALNY - nadaje się również
do innych przedmiotów, jak rowery, krzesła, stoliki, sofy, kanapy czy stół i krzesła.
Przed zakupem sprawdź dokładnie wymiary! NA KAŻDĄ PORĘ ROKU - stosowanie pokrowca
zabezpiecza ławkę przed kurzem, deszczem, wiatrem, słońcem czy śniegiem. Doskonale
sprawdzi się w sezonie jesienno – zimowym – nie ma konieczności przenoszenia ławki
do piwnicy, czy garażu. ZAMKI PO BOKACH – po bokach zastosowano zamki, którymi
dokładnie zabezpieczysz huśtawkę przed np. podmuchami wiatru czy zacinającym deszczem.
SPECYFIKACJA - producent: Iso Trade; marka: Ruhhy; kolor: zielony; materiał: PE,
pozostałe: nylon + stop cynku; wymiary: 215x150x145cm; waga: 1,1kg; waga w opakowaniu:
1,5kg. SPECYFIKACJA producent: Iso Trade marka: Ruhhy kolor: zielony materiał:
PE, pozostałe: nylon + stop cynku wymiary: 215/150/145cm waga: 1,1kg waga w opakowaniu:
1,5kg Duży, mocny pokrowiec na huśtawkę ogrodową. Pokrowiec doskonale chroni przed
deszczem, słońcem (ochrona przed płowieniem drewna/plastiku ławki) itp. Idealny
do ochrony ławki na zimę. Możesz okryć ławkę, ale nie tylko - pokrowiec nadaje
się również do innych przedmiotów. Można schować pod nim np. rower, krzesło, stolik
itp.'
- Wysokiej jakości, douszne słuchawki stereofoniczne idealnie nadające się do wszelkich
przenośnych odtwarzaczy MP3/MP4, radia, komórek, itp. Dzięki wyśmienitej jakości
materiałów z jakich zostały wykonane charakteryzują się głębokim basem, niewielką
wagą oraz wygodą noszenia. Zastosowane w słuchawkach sylikonowe nakładki zwiększają
komfort odsłuchu oraz ograniczają zakłócenia zewnętrzne.Stworzone z myślą o osobach
przebywających dużo poza domem, blokują dźwięki zewnętrzne i jednocześnie wzmacniają
basy, oferując użytkownikowi zupełnie nowe doznania. Ultra lekkie i wygodne. Posiadają
kabel z miedzi beztlenowej, przetworniki wykonane z magnesów neodymowych, specjalny,
wyprofilowany kształt oraz otwory basowe u góry, które zapewnia doskonały, czysty
i dynamiczny dźwięk w całym zakresie częstotliwości. Kabel zakończony wtykiem
mini-jack 3.5mm pasującym do większości obecnie dostępnych przenośnych odtwarzaczy
muzycznych. Mix kolorów.
- 'TAŃCZĄCY KWIAT SOLARNY ZASILANY ENERGIĄ SŁONECZNĄ CECHY PRODUKTU: Im więcej promieni
słonecznych do niego dociera tym intensywniej tańczy Kwiatki pod spodem mają taśmę
samoprzylepną dwustronną , kwiatka można przykleić w dowolnym miejscu . Kwiatek
reaguje również na wiatr ( wystarczy lekki podmuch ) SPECYFIKACJA: wykonany z
estetycznego i trwałego tworzywa sztucznego. posiada wbudowany panel solarny,
dzięki temu kwiatek, żeby działać, potrzebuje jedynie światła dziennego lub oświetlenia
sztucznego. pakowane w przezroczyste pudełeczko wysokość: +/-11cm średnica: +/-
11cm Kolorowa figurka doniczki z kwiatkiem, która będzie się zawsze bujała na
boki, jak tylko będzie na nią świeciło słońce. Jest to idealna dekoracja prawie
każdego pomieszczenia. Świetnie udekoruje Twój gabinet, salon czy pokój dziecka.
Jego poruszające się płynnymi ruchami listki oraz łodyga wywołają uśmiech u każdego,
kto go zobaczy. Największą radość sprawi jednak każdemu dziecku.'
- source_sentence: rękaw cukierniczy z 24 końcówkami do dekoracji tortów
sentences:
- 'Rękaw cukierniczy dekorator + 24 końcówki Cechy produktu: Silikonowy rękaw cukierniczy
wielorazowego użytku, przeznaczony do dekoracji ciast, deserów i tortów. Całkowicie
bezpieczny dla zdrowia i neutralny dla żywności. Rękaw cukierniczy jest prosty
w użyciu i poręczny. Możesz umieścić w nim dowolną masę i korzystając z wymiennych
końcówek tworzyć wybrane przez siebie wzory. Specyfikacja: Kolor: jak na zdjęciu,
W zestawie 24 końcówki, Wymiary rękawa(najszersze miejsce): 30,5 x 16 cm, Wymiary
opakowania: 19 x 11,5 x 4 cm. Dekorowanie tortów oraz wypieków cukierniczych jest
prawdziwą sztuką, jednak to nie znaczy, że jest przeznaczone tylko dla profesjonalistów.
Dzięki dekoratorowi możesz samemu przygotować wspaniałe wypieki, które z łatwością
ozdobisz, tak jak w profesjonalnych cukierniach! Wystarczy odrobina cierpliwości,
wprawy i kreatywności, a za pomocą rękawa cukierniczego można stworzyć cudowanie
dekoracje! Dekorator do tortów i ciast to doskonały zestaw składający się z rękawa
oraz aż 24 różnych, stalowych końcówek (tylek cukierniczych).'
- 'Multitool 20 w 1 Brelok Cechy produktu: Niewielki gadżet, który zmieści się w
każdej kieszeni, a zawiera aż dwadzieścia różnych funkcji! Idealny na wycieczki
do lasu lub w góry. Urządzenie posiada karabińczyk, dzięki któremu możesz przypiąć
je do szlufki lub do kluczy. Gadżet może w jedną sekundę zamienić się w śrubokręt,
poręczną piłę, łom lub pilniczek do paznokci! Wykonany został ze stali nierdzewnej,
więc wytrzyma z tobą wiele wypraw w różnych warunkach. Miej zawsze przy sobie
najprzydatniejsze urządzenia, nosząc przy kluczach tylko jedno! Specyfikacja:
Wymiary - 8.3 x 2.5 cm, Wymiary opakowania: 10 x 6 x 2cm, Materiał - stal nierdzewna.
20 narzędzi w jednym: 1. Karabinek z drucianym zamknięciem 2. Otwieracz do butelek
3. Ostrze do czyszczenia paznokci 4. Duży śrubokręt płaski 5. Średni śrubokręt
płaski 6. Mały śrubokręt płaski (do okularów) 7. Średni śrubokręt krzyżakowy 8.
Mały śrubokręt krzyżakowy 9. Klucz płaski (zamknięty) 14 mm 10. Klucz płaski (zamknięty)
12 mm 11. Klucz płaski (zamknięty) 10 mm 12. Klucz płaski (zamknięty) 8 mm 13.
Klucz płaski (zamknięty) 6 mm 14. Klucz do szprych rowerowych 15. Ostrze (stal
węglowa K5) 16. Przecinak do drutu 17. Linijka 2 cale 18. Przecinak do otwierania
opakowań 19. Mini łom do podważania 20. Pilnik'
- 'Lekkie krzesło wiszące w stylu hamaka ogrodowego. Bardzo wygodne, funkcjonalne
i przydatne w każdym ogrodzie, na działce lub na balkonie. Zapewni doskonały odpoczynek
zmęczonemu ciału. Wystarczy zawiesić je w ogrodzie lub na tarasie i cieszyć się
wspaniałym wypoczynkiem na świeżym powietrzu. Fotel został wykonany z wytrzymałej
bawełny o dużej przepuszczalności powietrza. Mocna drewniana belka podtrzymująca
oraz stalowy uchwyt montażowy zapewniają należyte bezpieczeństwo i pewność, że
nie spadniemy z krzesła podczas odpoczynku. Cechy produktu: Produkt fabrycznie
nowy, wysokiej jakości. Bardzo wygodne i funkcjonalne krzesło, niezwykle łatwe
w montażu - wystarczy je zawiesić, usiąść i pozwolić się zrelaksować. Można je
zamocować do sufitu balkonu bądź tarasu, solidnej gałęzi drzewa bądź innego uchwytu.
Solidna drewniana belka oraz stalowy uchwyt zapewniają należyte bezpieczeństwo.
Siedzisko wykonane jest z bardzo wytrzymałej, kolorowej bawełny. Po złożeniu (zwinięciu)
siedzisko zajmuje niewiele miejsca. Bez problemu można je zabrać ze sobą np. na
kemping. Specyfikacja: Materiał siedziska: bawełna 320 g/m2 Maksymalne obciążenie
do 120 kg Długość drewnianej poprzeczki (belki): 90 cm Długość siedziska: 100
cm Szerokość siedziska: 90 cm Długość mocowania (nad belką): 35 cm Długość linek
podtrzymujących: od 40 do 60 cm Możliwość prania w pralce: tak Kolor: jak na zdjęciu'
- source_sentence: znajdź tubę strzelającą konfetti na sylwestra
sentences:
- 'Konfetti tuba strzelająca Cechy produktu: Tuba strzelająca kolorowym konfetti.
Zapewnia wystrzałowy efekt i niezapomniane przeżycie. Idealny dodatek na specjalne
okazje. Tuby wykonane są z grubego kartonu. Wystarczy przekręcić dół tuby, a zawartość
poszybuje na kilka metrów w górę. Produkt przeznaczony tylko dla osób dorosłych.
Konfetti w tubie jest całkowicie bezpieczne, ponieważ nie zawiera żadnych materiałów
pirotechnicznych – wystrzał następuje dzięki sprężonemu powietrzu po przekręceniu
tuby w zaznaczonym miejscu. Tuba strzelająca to także świetny pomysł na Sylwestra,
karnawał, walentynki, zaręczyny, a także wiele innych okazji. Aby efekt był jeszcze
lepszy, polecamy wystrzelić kilka tub w tym samym czasie. Specyfikacja: Długość
tuby: ok. 11 cm, Wymiary opakowania: 11 x 5 x 5 cm, Kolor wkładu: mix kolorów.'
- 'Skakanka z licznikiem LCD CECHY PRODUKTU: wyświetlacz LCD: licznik spalonych
kalorii, czas ćwiczenia, liczba skoków wyprofilowane rączki z gumowymi wstawkami
zasilanie: 1 bateria AG13 (w zestawie) długość rączki: 18,5cm długość: 270cm waga:
175g Jeśli za dziecka lubiłaś skakać na skakance wprowadź to ćwiczenia do swojej
treningowej rutyny skacząc na modelu z praktycznym wyświetlaczem LCD. Skakanka
doskonale nadaje się do redukcji zbędnych kilogramów i centymetrów w talii. Wbudowany
w rączkę wyświetlacz pozwoli Ci kontrolować aż 3 parametry: liczbę spalonych kalorii,
liczbę skoków oraz całkowity czas wykonywania ćwiczenia. Długi sznur pozwala na
ćwiczenie nawet wysokim osobom, a wyprofilowane rączki z tworzywa ABS posiadają
gumowe wstawki zapewniające pewny chwyt, co daje Ci pewność, że bez względu na
intensywność skakania, one nie wyślizgną się z Twojej dłoni.'
- 'MATERAC WELUROWY POJEDYNCZY 185x76x22cm - BESTWAY 67000 WELUROWE POKRYCIE – gwarantuje
ono niezwykle przyjemny wypoczynek, dzięki miękkiej strukturze jaką jest welur.
Ponadto jest t materiał niezwykle wytrzymały, dzięki czemu materac ma większą
odporność na zniszczenia. ZAWÓR BEZPIECZEŃSTWA – materac wyposażony został w specjalny
zawór, który zdecydowanie ułatwi napełnianie, opróżnianie oraz regulację ciśnienia
w jego środku. ŁATKA W ZESTAWIE - łatka dołączona do zestawu jest wodoodporna,
zatem nie musisz się bać, że pod wpływem działania wody się odklei. Świetnie nadaje
się do materaców, basenów i innych dmuchanych akcesoriów. STELAŻ COIL - BEAM –
materac jest niezwykle wytrzymały oraz sprężysty – wszystko to dzięki specjalnemu
stelażowi coil – beam. Jest to połączenie dolnej oraz górnej warstwy, za pomocą
komór. SPECYFIKACJA – wymiary po nadmuchaniu (dł/szer/wys) 185/76/22cm, waga 1,9kg.
SPECYFIKACJA kod produktu Bestway: 67000 materiał: PVC, welur przeznaczenie: jednoosobowy
do użytku: wewnętrznego i zewnętrznego wymiary (dł/szer/wys): 185x76x22 cm waga:
1,9 kg W ZESTAWIE materac łatka naprawcza Dodatkowy kod kreskowy EAN: 6941607343876'
- source_sentence: szafka na buty 12 par łatwy montaż online
sentences:
- Wibro-pas wyszczuplający Pas odchudzający wpływa na wiele grup mięśniowych. To
urządzenie mogą używać zarówno mężczyźni, jak i kobiety. To szybki sposób na pozbycie
się cellulitu, a także na ujędrnienie pośladków oraz uwydatnienie ud i oczywiście
na stworzenie idealnej rzeźby brzucha. Działa na zasadzie powtarzających się intensywnych
skurczów, które zmuszają mięśnie do pracy oraz pobudzają do wzrostu, tak jak podczas
ćwiczeń na siłowni. Pas neoprenowy zwiększa także twardość mięśni. Aby działał,
potrzebować będziemy środka nawilżającego do skóry. Zapobiegnie to uszkodzeniu
skóry i umożliwi prawidłową pracę urządzenia. Po każdym użyciu powinniśmy przetrzeć
taśmę wilgotną szmatką, aby usunąć resztki preparatu. Czego unikać, czyli przeciwwskazania
pod żadnym pozorem nie można korzystać z pasa w okolicy twarzy lub szyi, gdyż
kurcze mięśni w tych rejonach mogą spowodować utrudnienie lub zaprzestanie oddychania;
z pasa nie można także korzystać dokoła klatki piersiowej lub też głowy; z pasów
nie można korzystać w miejscach, gdzie skóra jest uszkodzona; z pasa nie mogą
korzystać osoby ze stwardnieniem rozsianym, padaczką, chorobami serca a także
dowolną chorobom układu sercowo-naczyniowego; z pasa nie mogą korzystać kobiety
w ciąży, w okresie miesiączki lub tuż po urodzeniu dziecka; zasięgnijmy porady
lekarza przed użyciem pasa neoprenowego, jeśli posiadasz skłonność do chorób przewlekłych;
czas dziennej miostymulacji nie powinien przekraczać 30 minut.
- 'PÓŁKA NA BUTY - 12 PAR ŁATWY MONTAŻ - za pomocą metalowych rurek i plastikowych
łączników szybko i łatwo zmontujesz szafkę. POJEMNA - półki pomieszczą do 12 par
butów. SPECYFIKACJA - wymiary 87x60x28cm; waga z opakowaniem 865g. LEKKA - półkę
łatwo jest przenosić dzięki lekkiej konstrukcji. Można np. wystawić buty na zewnątrz,
a gdy robi się chłodno lub pada, schować ją do środka. MOŻLIWOŚĆ ROZBUDOWY - przy
zakupie większej ilości sztuk, półkę można rozbudowywać wzwyż. SPECYFIKACJA wymiary
(wys/szer/dł): 87x60x28cm materiał: metal, tworzywo, tkanina wodoodporna ilość
miejsc na buty: 12 (możliwość rozbudowy) waga: 750g waga z opakowaniem: 865g Półka,
dzięki swej konstrukcji jest bardzo łatwa w montażu. Dzięki specjalnym łączeniom
można bezproblemowo rozbudowywać w górę na dowolną wysokość. Kupując 2 lub więcej
zestawów można powiększyć półkę i zwiększyć ilość miejsca na buty. Materiał półek
pokryty jest specjalną nieprzemakalną powłoką, dzięki czemu możesz postawić na
nich buty nawet prosto po deszczu. Półki wystarczy przetrzeć szmatką, są łatwe
do utrzymania w czystości. Półka nadaje się zarówno do mieszkania, jak i do biura,
nie zajmuje wiele miejsca.'
- 'LAMPKA NOCNA Z PROJEKTOREM NIEBA I GWIAZD / PODWODNEGO ŚWIATA NIESAMOWITE EFEKTY
ŚWIETLNE - projektor posiada wiele efektownych fal świetlnych, które wyświetlając
się na ścianach i suficie tworzą realistyczne rozgwieżdżone nocne niebo lub podwodny
świat (co zapewnia wymienna wkładka do projektora), zapewniając relaksującą atmosferę
oraz sprawiając, że masz ochotę spacerować wśród gwiazd albo zanurzyć się w morskiej
głębi. POMAGA W ZASYPIANIU - gadżet świetnie sprawdza się w roli projektora oraz
lampki nocnej, która pozwoli maluchowi na spokojne przygotowanie się do snu. Światło
pojawiające się na ścianach i suficie skupi uwagę dziecka i pomoże mu szybko i
cicho zasnąć. DWA SPOSOBY UŻYWANIA - nasz gadżet może być używany jako lampka
nocna do pokoju dziecięcego lub jako projektor gwiazd / podwodnego świata, rzucając
na ścianach i suficie przepiękną projekcję. NIGDY SIE NIE NUDZI 2 FILTRY W ZESTAWIE
- dziecku nie znudzi się projektor, który ma w zestawie 2 filtry. Rozgwieżdżone
niebo lub podwodny świat. Jeżeli Twój maluszek nie będzie chciał usnąć zmień mu
po prostu nakładkę i zobacz jak słodko zasypia. SPECYFIKACJA - kolor: różowa;
lampka z obrotową kopułą: tak; materiał: tworzywo sztuczne / metal; zasilanie:
sieciowe / bateryjne 4 x AAA (brak w zestawie); 3 funkcyjne przyciski; 4 kolory
świecenia (zielony, niebieski, biały, czerwony); wymiary (dł/szer/wys): 12/12/10cm;
waga: 0,242kg; waga w opakowaniu: 0,339kg SPECYFIKACJA kolor: różowa lampka z
obrotową kopułą: tak materiał: tworzywo sztuczne / metal zasilanie: sieciowe /
bateryjne 4 x AAA (brak w zestawie) 3 funkcyjne przyciski 4 kolory świecenia (zielony,
niebieski, biały, czerwony) wymiary (dł/szer/wys): 12/12/10cm waga: 0,242kg waga
w opakowaniu: 0,339kg W ZESTAWIE: projektor przewód USB 2 nakładki projektora'
- source_sentence: szukam termosu stalowego tadar 0,5 l do kawy i herbaty online
sentences:
- ROWEREK TRÓJKOŁOWY Z BALDACHIMEM - NIEBIESKI idealny jako pierwszy trójkołowy
rowerek dla Twojego dziecka 5-punktowe pasy bezpieczeństwa posiada regulowany
daszek, który znakomicie chroni przed słońcem otwierana barierka zapewnia bezpieczeństwo,
a także ułatwia wchodzenie i wychodzenie stabilne, miękkie koła EVA SPECYFIKACJA
Kolor niebieski Długość rowerka 80cm Szerokość rowerka 49cm Odległość od siodełka
do pedałów 35cm 2 wysokości rączki do prowadzenia 87,5-91cm Wymiary baldachimu
49x39cm Koła EVA, przednie 25cm, tylne 20cm średnicy Max. obciążenie 35kg Waga
9,5kg Idealny jako pierwszy trójkołowy rowerek dla Twojego dziecka. Posiada składany
daszek, który znakomicie chroni przed słońcem, rączka do prowadzenia umożliwia
kontrolowanie i pomaganie dziecku przy jeździe. Praktyczny, regulowany podnóżek,
wygodne siodełko z oparciem, lusterko. Otwierana barierka pełniąca rolę ochrony,
aby dziecko stabilnie i bezpiecznie mogło jeździć bez obawy, że wypadnie z rowerka,
ułatwia także wchodzenie i wychodzenie. Posiada także dolny koszyk na drobiazgi,
zabawki czy też drobne zakupy. Stabilne, miękkie koła typu EVA zapewnią pewną
i bezpieczną jazdę. Uchwyt z rączką do prowadzenia rowerka ma możliwość regulacji
(2 poziomy), pozwalające na indywidualne dostosowanie do wzrostu.
- 'Stalowy termos do kawy i herbaty Tadar 0,5 L Cechy produktu: Zastosowanie podwójnych
ścianek z izolacją próżniową zapewnia doskonałą osłonę termiczną, co gwarantuje,
że Twój napój zachowa odpowiednią temperaturę. Możesz używać go do zimnych i gorących
napojów. Konstrukcja korka wraz z dodatkowym mechanizmem otwierania gwarantuje
wysoką szczelność termosu. Pokrywka stanowiąca jednocześnie kubek podwyższa komfort
użytkowania i stanowi efektowne uzupełnienie całości. Satynowe wykończenie w połączeniu
z materiałem wykonania podkreśla walory estetyczne.Gładka struktura i prosty,
opływowy kształt umożliwi szybką i precyzyjną pielęgnację.Dzięki zastosowaniu
stali nierdzewnej termos jest niezwykle trwały, przez co stanie się długoletnim
atrybutem w trakcie podróży. Specyfikacja: Elementy zestawu: termos nierdzewny,
korek z gumową uszczelką, nakrętka. Wysokość (cm): 24.5. Długość (cm): 8. Szerokość
(cm): 6,5. Materiał: stal nierdzewna, tworzywo sztuczne. Kolor: srebrny. Pojemność
(l): 0.5. Utrzymuje temperaturę zimnych i gorących napojów. Dla wydłużonej żywotności
produktu zalecamy mycie ręczne. Waga netto (g): 285. Wymiary opakowania(cm): 25
x 7,5 x 7.'
- 'GRA RODZINNA CHIŃCZYK NA LODZIE WYŚCIG PINGWINÓW Chińczyk Pingwiny to familijna
gra planszowa. W zabawie może brać udział od 2 do 4 osób. Gra została przygotowana
dla dzieci od 4 roku życia. ZASADY GRY: Pingwiny poruszają się po specjalnie zaprojektowanej
planszy z lodowymi mostami zgodnie z ruchem wskazówek zegara. Każda drużyna startuje
z narożnika, po wyrzuceniu 6 oczek na kostce umieszczonej w centrum planszy, 1
pingwin wchodzi do gry. Pingwiny poruszają się o ilość oczek wskazanych przez
kostkę. W trakcie tury możesz poruszać się tylko jednym pingwinkiem. Kiedy twój
pingwin stanie na ruchomej części lodowego mostu możesz skorzystać z opcji uruchomienia
strasznej pułapki i zrzucić stojące na moście pingwiny przeciwnika do lodowatej
wody. Zrzucone pingwiny wracają do pól startowych, aby móc ponownie wykonać nimi
ruch należy na kostce wyrzucić 6 oczek. Aby zwyciężyć wyścig należy obejść całą
planszę i ustawić bezpiecznie w domu całą drużyną. ZAWARTOŚĆ ZESTAWU: - plansza
(podstawa do gry z kostką), - 4 lodowe mosty, - 16 kolorowych pingwinów 4x4, -
instrukcja w języku polskim. WYMIARY: - plansza (podstawa do gry z kostką), -
4 lodowe mosty, - 16 kolorowych pingwinów 4x4, - instrukcja w języku polskim.'
model-index:
- name: SentenceTransformer based on sentence-transformers/quora-distilbert-multilingual
results:
- task:
type: triplet
name: Triplet
dataset:
name: product desc validation
type: product-desc-validation
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: product desc test
type: product-desc-test
metrics:
- type: cosine_accuracy
value: 0.9933333396911621
name: Cosine Accuracy
---
# SentenceTransformer based on sentence-transformers/quora-distilbert-multilingual
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/quora-distilbert-multilingual](https://huggingface.co/sentence-transformers/quora-distilbert-multilingual) on the [product-query-retrieval-dataset](https://huggingface.co/datasets/tycjan/product-query-retrieval-dataset) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/quora-distilbert-multilingual](https://huggingface.co/sentence-transformers/quora-distilbert-multilingual) <!-- at revision fbe06168e4a528166d926cf1866ce45b6dc1118a -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [product-query-retrieval-dataset](https://huggingface.co/datasets/tycjan/product-query-retrieval-dataset)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
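Concretely, module (0) runs DistilBERT over the (at most 128) input tokens and module (1) averages the token embeddings into a single 768-dimensional vector. The snippet below is a minimal sketch of that computation using the `transformers` library directly; it assumes the repository's transformer weights load with `AutoModel` (as is usual for Sentence Transformers repositories) and uses made-up example sentences.
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "tycjan/distilbert-pl-store-products-retrieval"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

texts = ["szafka na buty 12 par", "termos stalowy 0,5 l do kawy"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state   # (batch, seq_len, 768)

# Mean pooling over real tokens only (padding is masked out), as in module (1) above
mask = batch["attention_mask"].unsqueeze(-1).float()         # (batch, seq_len, 1)
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # torch.Size([2, 768])
```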
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tycjan/distilbert-pl-store-products-retrieval")
# Run inference
sentences = [
'szukam termosu stalowego tadar 0,5 l do kawy i herbaty online',
'Stalowy termos do kawy i herbaty Tadar 0,5 L Cechy produktu: Zastosowanie podwójnych ścianek z izolacją próżniową zapewnia doskonałą osłonę termiczną, co gwarantuje, że Twój napój zachowa odpowiednią temperaturę. Możesz używać go do zimnych i gorących napojów. Konstrukcja korka wraz z dodatkowym mechanizmem otwierania gwarantuje wysoką szczelność termosu. Pokrywka stanowiąca jednocześnie kubek podwyższa komfort użytkowania i stanowi efektowne uzupełnienie całości. Satynowe wykończenie w połączeniu z materiałem wykonania podkreśla walory estetyczne.Gładka struktura i prosty, opływowy kształt umożliwi szybką i precyzyjną pielęgnację.Dzięki zastosowaniu stali nierdzewnej termos jest niezwykle trwały, przez co stanie się długoletnim atrybutem w trakcie podróży. Specyfikacja: Elementy zestawu: termos nierdzewny, korek z gumową uszczelką, nakrętka. Wysokość (cm): 24.5. Długość (cm): 8. Szerokość (cm): 6,5. Materiał: stal nierdzewna, tworzywo sztuczne. Kolor: srebrny. Pojemność (l): 0.5. Utrzymuje temperaturę zimnych i gorących napojów. Dla wydłużonej żywotności produktu zalecamy mycie ręczne. Waga netto (g): 285. Wymiary opakowania(cm): 25 x 7,5 x 7.',
'GRA RODZINNA CHIŃCZYK NA LODZIE WYŚCIG PINGWINÓW Chińczyk Pingwiny to familijna gra planszowa. W zabawie może brać udział od 2 do 4 osób. Gra została przygotowana dla dzieci od 4 roku życia. ZASADY GRY: Pingwiny poruszają się po specjalnie zaprojektowanej planszy z lodowymi mostami zgodnie z ruchem wskazówek zegara. Każda drużyna startuje z narożnika, po wyrzuceniu 6 oczek na kostce umieszczonej w centrum planszy, 1 pingwin wchodzi do gry. Pingwiny poruszają się o ilość oczek wskazanych przez kostkę. W trakcie tury możesz poruszać się tylko jednym pingwinkiem. Kiedy twój pingwin stanie na ruchomej części lodowego mostu możesz skorzystać z opcji uruchomienia strasznej pułapki i zrzucić stojące na moście pingwiny przeciwnika do lodowatej wody. Zrzucone pingwiny wracają do pól startowych, aby móc ponownie wykonać nimi ruch należy na kostce wyrzucić 6 oczek. Aby zwyciężyć wyścig należy obejść całą planszę i ustawić bezpiecznie w domu całą drużyną. ZAWARTOŚĆ ZESTAWU: - plansza (podstawa do gry z kostką), - 4 lodowe mosty, - 16 kolorowych pingwinów 4x4, - instrukcja w języku polskim. WYMIARY: - plansza (podstawa do gry z kostką), - 4 lodowe mosty, - 16 kolorowych pingwinów 4x4, - instrukcja w języku polskim.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
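Because the model was trained for query → product-description retrieval, a common pattern is to embed a product catalogue once and then search it with user queries. The sketch below uses `sentence_transformers.util.semantic_search` for the cosine-similarity lookup; the catalogue entries are shortened placeholders rather than real store data.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tycjan/distilbert-pl-store-products-retrieval")

# Placeholder mini-catalogue; in practice these would be full product descriptions
catalogue = [
    "Stalowy termos do kawy i herbaty Tadar 0,5 L ...",
    "PÓŁKA NA BUTY - 12 PAR ŁATWY MONTAŻ ...",
    "Konfetti tuba strzelająca ...",
]
catalogue_embeddings = model.encode(catalogue, convert_to_tensor=True)

query = "szukam termosu stalowego 0,5 l do kawy"
query_embedding = model.encode(query, convert_to_tensor=True)

# Top-k cosine-similarity search over the pre-computed catalogue embeddings
hits = util.semantic_search(query_embedding, catalogue_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {catalogue[hit['corpus_id']]}")
```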
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `product-desc-validation` and `product-desc-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | product-desc-validation | product-desc-test |
|:--------------------|:------------------------|:------------------|
| **cosine_accuracy** | **1.0** | **0.9933** |
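These accuracies come from the `TripletEvaluator` linked above: for each triplet it checks whether the query embedding is closer (by cosine similarity) to the positive product description than to the negative one. A minimal sketch of reproducing such an evaluation follows; it assumes the dataset exposes a `test` split with the same three columns as the training data.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("tycjan/distilbert-pl-store-products-retrieval")

# Assumed split name; adjust if the dataset uses a different one
test_ds = load_dataset("tycjan/product-query-retrieval-dataset", split="test")

evaluator = TripletEvaluator(
    anchors=test_ds["query"],
    positives=test_ds["product_desc_positive"],
    negatives=test_ds["product_desc_negative"],
    name="product-desc-test",
)
results = evaluator(model)
print(results)  # e.g. {'product-desc-test_cosine_accuracy': 0.993...}
```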
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### product-query-retrieval-dataset
* Dataset: [product-query-retrieval-dataset](https://huggingface.co/datasets/tycjan/product-query-retrieval-dataset) at [b368226](https://huggingface.co/datasets/tycjan/product-query-retrieval-dataset/tree/b3682265283a4911b4099748b6c0fe627c56ed75)
* Size: 2,400 training samples
* Columns: <code>query</code>, <code>product_desc_positive</code>, and <code>product_desc_negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | product_desc_positive | product_desc_negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 23.09 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 128 tokens</li><li>mean: 128.0 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 128 tokens</li><li>mean: 128.0 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| query | product_desc_positive | product_desc_negative |
|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>szukam bezprzewodowego irygatora dentystycznego do codziennej higieny jamy ustnej</code> | <code>IRYGATOR DENTYSTYCZNY BEZPRZEWODOWY CECHY PRODUKTU: Usuwa nadmiar płytki zębowej Usuwa zalegający pokarm z przestrzeni międzyzębowych Nie wymaga zasilania SPECYFIKACJA: Materiał: plastik Nie wymaga zasilania Działa na zasadzie pompki Zbiorniczek na wodę lub płyn do płukania Prostota działania Wymiary: 21,5cm x 3cm Irygator dentystyczny to urządzenie niezwykle przydatne w codziennej higienie jamy ustnej. Pozwala dokładnie oczyścić zęby oraz przestrzenie między nimi z resztek pokarmu. Świetnie nadaje się dla osób noszących aparaty ortodontyczne, gdyż wspomaga utrzymanie lepszej czystości jamy ustnej. Urządzenie nie wymaga zastosowania baterii, działa na zasadzie pompki. Do zbiorniczka nalewamy wodę lub płyn do płukania jamy ustnej, następnie kierujemy końcówkę irygatora w stronę zębów i naciskamy przycisk. Zestaw zawiera: Fabrycznie nowy irygator dentystyczny Oryginalne opakowanie</code> | <code>PAWILON 3X9M + 8 ŚCIANEK - CIEMNY SZARY BLOKADY RUREK DACHU NA ZATRZASKI- wzmacniają konstrukcję pawilonu. W ZESTAWIE 8 ŚCIAN- 6 z oknami oraz 2 pełne, możliwość używania w dowolnej konfiguracji-ze ściankami lub bez. SPECYFIKACJA- poszycie z PE, kolor ciemnoszary; wymiary 900x300x250cm; rurki 32/25/19mm; waga 23,5kg. DWUSTRONNY ZAMEK- pozwala na otwieranie i zamykanie pawilonu z obu stron(od zewnątrz, jak i od środka) MOCOWANIE NA RZEPY- ułatwiają szybki montaż i demontaż dachu z pawilonu. SPECYFIKACJA kolor stelaża: biały kolor poszycia: ciemnoszary materiał: stal malowana proszkowo + poszycie z PE 100g/m2 wymiary zewnętrzne (dł/szer/wys): 900x300x250cm średnica rurek: 32/25/19mm waga pawilonu: ~23,5kg W ZESTAWIE stelaż, elementy montażowe, śledzie i linki napinające pokrycie pawilonu (dach) 8 ścianek bocznych (6 z oknami, 2 pełne) Pawilon idealnie sprawdzi się w ogrodzie podczas przyjęć, eventów, koncertów Ochroni zarówno przed słońcem, jak i umiarkowanym deszczem. Doskonale sprawdzi...</code> |
| <code>szukam srebrnej gwiazdy z brokatem na czubek choinki.</code> | <code>Czubek na choinkę - srebrna gwiazda z brokatem Cechy produktu: Ozdoba na czubek choinki bożonarodzeniowej w kształcie gwiazdy. Posiada przepiękną powłokę brokatową, która sprawi, że Twoja choinka wniesie do Twojego domu magiczną atmosferę Świąt Bożego Narodzenia. Czubek na choinkę w kształcie gwiazdy nada wnętrzu nowoczesny wygląd. Nasza gwiazda upiększy każdą choinkę, a wraz z innymi ozdobami szczególnie z zapalonymi światełkami stworzy niesamowitą dekorację i atmosferę. Szpic wykonany jest z tworzywa nietłukącego się. Specyfikacja: Wymiary: 15cm x 15cm x 3,5cm, Wymiary opakowania: 15cm x 15cm x 4cm, Kolory: srebrny, Materiał: Tworzywo sztuczne. Prawdziwa choinka musi mieć czubek, najczęściej wybierana dekoracja to błyszcząca gwiazda.Zapalone lampki, bombki i połyskująca gwiazda stworzą magiczny i niezapomniany klimat świąt. Produkt bezpieczny - wykonany z wytrzymałego tworzywa, które przy upadku nie ulega zbiciu. Święta Bożego Narodzenia to magiczny moment, który niesie ze sobą wiele...</code> | <code>ANTYRAMA 40x60 cm 60x40 W BIAŁYM ETUI Antyrama zawiera: plexi renomowanego europejskiego producenta, malowaną po bokach na biało płytę HDF o grubości 3 mm., zawieszki do mocowania w poziomie i w pionie, 6 klipsów mocujących, które idealnie dociskają tworzywo plexi do płyty HDF białe etui czyli opakowanie detaliczne oraz białą kartkę Firma Martomoferuje Państwu najwyższej jakości produkt, który charakteryzuje precyzyjne wykonanie, wysokiej klasy komponenty oraz gwarancja długiego użytkowania. Do produkcji naszych antyram wykorzystujemy: malowane po bokach na biało płyty HDF o grubości 3 mm. dzięki malowaniu płyta nie nasiąka wilgocią i nie pęcznieje niezawodne, najwyższej jakości klipsy, produkcji włoskiej nie pękają w odróżnieniu od klipsów chińskich bezpieczne tworzywo pleksi o grubości 1 mm. zawiera filtr UV (plexi nie żółknie) Tworzywo plexi jest obustronnie zabezpieczone folią, którą należy zdjąć przed docelowym zastosowaniem antyramy! Wystawiamy Faktury VAT oraz faktury proformy n...</code> |
| <code>szukam zraszacza ogrodowego 20m z 20 dyszami do chłodzenia i nawadniania ogrodu</code> | <code>ZRASZACZ OGRODOWY 20 m Z BEZPOŚREDNIM POŁĄCZENIEM DO WĘŻA OGRODOWEGO ORAZ KRANU 1/2" 20 DYSZ ZRASZAJĄCYCH - wysokiej jakości dysze wytwarzają bardzo lekką i delikatną kurtynę wodną, która kontroluje temperaturę otoczenia i pozwala na jej skuteczne obniżenie. Średnica dyszy o 6mm tworzy "klimę ogrodową". NATYCHMIASTOWE CHŁODZENIE I NAWODNIENIE – kurtyna wodna służy m.in. do ochłody latem dzięki wytworzeniu przez drobnooczkowe dysze delikatnej mgiełki. Parująca woda pobiera ciepło z otoczenia doprowadzając do schłodzenia powietrza w upalne dni nawet o max.10stC. Im wyższa temperatura powietrza, tym efekt chłodzący jest większy. UNIWERSALNE ZASTOSOWANIE – kurtyna może być stosowana na plaży, w ogrodach, na działkach, na balkonie, przed domkiem letniskowym itp. Może być również używana do podlewania ogrodu lub roślin doniczkowych. ŁATWY MONTAŻ - zestaw zawiera opaski zaciskowe, dzięki którym bardzo łatwo zainstalujesz kurtynę. Wystarczy ułożyć wąż w odpowiednim miejscu, aby korzystać z sys...</code> | <code>STOJAK / PODSTAWKA NA KSIĄŻKĘ TABLET LAPTOP BAMBUSOWA – podkładka jest wykonana z naturalnego drewna bambusowego. Jest nie tylko stylowa, ale i praktyczna, ze względu na właściwości tego materiału. WSZECHSTRONNE ZASTOSOWANIE - wielofunkcyjny stojak na książki jest idealny do czytania książek, obsługi tabletów, małych laptopów, książek z obrazkami, przepisów, kartek do szkicowania. Tę lekką i zajmującą niewiele miejsca przenośną podstawkę do czytania można wszędzie ze sobą zabrać. Nadaje się do domu, szkoły, biura, biblioteki, sypialni itd. UCHWYTY + PODPÓRKA - podstawka na książki posiada 2 metalowe klamry zabezpieczone na końcach nakładkami. Uchwyty te trzymają boki książki, a gdy podstawka nie jest używana można je łatwo złożyć na płasko. Dodatkowo podstawka posiada podpórkę, która zapobiega zsuwaniu się książki. REGULACJA KĄTA NACHYLENIA - kąt stojaka na książki jest regulowany. Posiada 6 stopniową regulacja rozłożenia. Pozwoli więc na dobranie najwygodniejszej pozycji do czytania. ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### product-query-retrieval-dataset
* Dataset: [product-query-retrieval-dataset](https://huggingface.co/datasets/tycjan/product-query-retrieval-dataset) at [b368226](https://huggingface.co/datasets/tycjan/product-query-retrieval-dataset/tree/b3682265283a4911b4099748b6c0fe627c56ed75)
* Size: 300 evaluation samples
* Columns: <code>query</code>, <code>product_desc_positive</code>, and <code>product_desc_negative</code>
* Approximate statistics based on the first 300 samples:
| | query | product_desc_positive | product_desc_negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 23.05 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 128 tokens</li><li>mean: 128.0 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 128 tokens</li><li>mean: 128.0 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| query | product_desc_positive | product_desc_negative |
|:-------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>szukam lekkich i wygodnych słuchawek dousznych z głębokim basem i redukcją hałasu do smartfona.</code> | <code>Wysokiej jakości, douszne słuchawki stereofoniczne idealnie nadające się do wszelkich przenośnych odtwarzaczy MP3/MP4, radii, komórek, itp. Dzięki wyśmienitej jakości materiałów z jakich zostały wykonane charakteryzują się głębokim basem, niewielką wagą oraz wygodą noszenia. Zastosowane w słuchawkach sylikonowe nakładki zwiększają komfort odsłuchu oraz ograniczają zakłócenia zewnętrzne.Stworzone z myślą o osobach przebywających dużo poza domem, blokują dźwięki zewnętrzne i jednocześnie wzmacniają basy, oferując użytkownikowi zupełnie nowe doznania. Ultra lekkie i wygodne. Posiadają kabel z miedzi beztlenowej, przetworniki wykonane z magnesów neodymowych, specjalny, wyprofilowany kształt oraz otwory basowe u góry, które zapewnia doskonały, czysty i dynamiczny dźwięk w całym zakresie częstotliwości. Kabel zakończony wtykiem mini-jack 3.5mm pasującym do większości obecnie dostępnych przenośnych odtwarzaczy muzycznych.</code> | <code>Sztuczny bukiet hortensji Cechy produktu: Hortensje w pięknym kolorze: Bukiet zawiera 5 uroczych gałązek hortensji w ciekawym kolorze, dodają wnętrzu świeżości i uroku. Zielone liście: Ożywcze zielone liście doskonale komponują się z kwiatami, nadając bukietowi naturalnego wyglądu. Gęstość i faktura: Bukiet jest niezwykle gęsty, co sprawia, że stanowi imponującą dekorację, zarówno samodzielnie, jak i w kompozycjach z innymi elementami. Uniwersalne zastosowanie: Doskonały jako centralna ozdoba na stole, a także do tworzenia stroików, dekoracji florystycznych czy upiększenia miejsc na cmentarzu. Specyfikacja: Wysokość bukietu: ok. 50 cm, Średnica bukietu: ok. 20 cm, Średnica pojedynczej gałązki hortensji: ok 11 cm, Wymiary opakowania: 52 x 22 x 22 cm, Liczba gałązek hortensji w bukiecie: 5 sztuk, Kolor hortensji: różowy. Sztuczne kwiaty to bardzo praktyczny wybór z wielu powodów. Przede wszystkim nie trzeba się martwić o ich podlewanie czy przycinanie. Zastosowanie sztucznych roślin i kw...</code> |
| <code>elastyczne sznurowadła silikonowe do butów sportowych</code> | <code>Elastyczne sznurowadła silikonowe Cechy produktu: Świetna alternatywa dla tradycyjnych sznurowadeł. Nie trzeba ich ciągle poprawiać i wiązać. Zastępują wiązane sznurówki, znacznie skracają czas zakładania i zdejmowania butów. Specyfikacja: Zestaw zawiera 12szt, Kolory jak na zdjęciach, Długość sznurowadła : 11,5cm (rozciągają się do 17cm), Wymiary opakowania: 16 x 6,5 x 3cm. Silikonowe sznurowadła do butów, o płaskim kształcie. Zastosowanie silikonowego materiału, zapewnia ich elastyczność. Zakładanie butów dzięki sznurówkom z silikonu jest proste, szybkie i wygodne. Z zastosowaniem tego rodzaju sznurowadeł, nie ma konieczności wiązania butów, ani obaw, że będą się rozwiązywać. Idealnie sprawdzają się w trampkach, a także w innych dowolnych butach. Sznurowadła będą doskonałe zarówno dla dzieci, jak i dorosłych. Rozwiązanie idealne dla: Dzieci którym samodzielne wiązanie sznurowadeł sprawia trudność lub jeszcze same tego nie potrafią. Osób które lubią mieć porządek nawet w sznurówkach. ...</code> | <code>WAGA HAKOWA LED 40KG FUNKCJA TARA - ciekawą funkcją wagi jest także opcja tarowania. Dzięki temu urządzenie pokaże właściwą wagę produktów, automatycznie odejmując wagę pojemnika. WYŚWIETLACZ – duży wyświetlacz zapewnia wygodne odczytywanie wyników. Pomaga nam w sprawdzeniu danej wagi. UNIWERSALNA – waga ma uniwersalne zastosowanie. Może być stosowana w domu, na lotnisku, na biwaku, a także dla wędkarzy. BLOKADA WYNIKU - Funkcja blokady wyniku zapamiętuje ostatni odczyt nawet po zdjęciu bagażu. Możesz wyłączyć tę funkcję poprzez przytrzymanie przycisku TARE. SPECYFIKACJA - kolor: czarny; zasilanie: 2x bateria 1,5V AAA (w zestawie); jednostki ważenia: kg, lb, jin, oz; wymiary (szer/dł/grub): 4,8/8,5/1,7cm; dokładność pomiaru: 5g; skala pomiaru: do 40kg; długość haka: 5cm; waga produktu: 90g; waga produktu z opakowaniem: 94g. SPECYFIKACJA kolor: czarny zasilanie: 2x bateria 1,5V AAA (w zestawie) jednostki ważenia: kg, lb, jin, oz wymiary (szer/dł/grub): 4,8/8,5/1,7cm dokładność pomiaru: ...</code> |
| <code>gdzie kupić obrotowy aerator wertykulator trawy online?</code> | <code>Obrotowy aerator wertykulator trawy Cechy produktu: Poprawia wymianę powietrza pomiędzy ziemią a atmosferą. Obniża straty wynikające z nadmiernego wyparowania wody. Wspomaga działanie nawozów. Poprawia elastyczność trawnika. Zwiększa zdolności regeneracyjne trawnika. Podnosi średnią temperaturę gleby w strefie korzeniowej. Usuwa z powierzchni trawnika resztki darni (warstwy obumarłych roślin). Nie wytwarza spalin dzięki czemu jest przyjazny dla środowiska Nie wymaga zasilania dzięki czemu nadaje się do wykorzystania w każdych warunkach. Specyfikacja: wymiary: średnica kół: 14,5 cm, średnica kół z kolcami 21 cm, szerokość walca: 42 cm, długość uchwytu: ok 120 cm wymiary opakowania: 45x22,5x24cm, waga: ok. 4kg, możliwość odpięcia aeratora od uchwytu, 27 stalowych, ocynkowanych kolców o długości 4,5 cm, Wertykulacja jest jednym z niezbędnych elementów do prawidłowego wzrostu trawy. Napowietrza glebę oraz nacina i rozluźnia darń przyczyniając się do prawidłowego wzrostu i rozwoju. Ciesz si...</code> | <code>Choinka led figurka świąteczna Cechy produktu: Elegancka dekoracja, Świąteczna figurka ma kształt lodowej choinki, Choinka świeci w różnych kolorach, jest wyposażona w diody LED, Gotowa do włączenia, posiada baterie w zestawie, Wzorowana na kryształową - wykonana z wytrzymałego tworzywa. Specyfikacja: Wysokość 12 cm, Średnica podstawki 5,3 cm, Wymiary opakowania: 15 x 6 x 6 cm, Baterie w zestawie, Włącznik/wyłącznik pod spodem. Akrylowa/plastikowa i przezroczysta choinka dająca światło w różnych kolorach. Światło zmienia się płynnie. Idealnie sprawdzi się jako dekoracja w domu, dodatek do stroików, na cmentarz. Postawiona na regale wśród książek, na biurku lub na ławie, wszędzie przypomina o zbliżających się Świętach. Maleńka akrylowa choinka LED zasilana bateriami, przywołuje ciepłe bożonarodzeniowe myśli.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
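
As a minimal sketch (the base checkpoint name below is a placeholder, not the actual base model of this card), this loss configuration corresponds to roughly the following `sentence-transformers` code:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Placeholder checkpoint — replace with the base model this card was trained from.
model = SentenceTransformer("base-model-checkpoint")

# Same parameters as reported above: scale=20.0 with cosine similarity.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```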
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
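
As a rough sketch, the non-default hyperparameters listed above map onto `SentenceTransformerTrainingArguments` approximately as follows (the output directory is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",                         # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,   # avoids duplicate texts within a batch
)
```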
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | product-desc-validation_cosine_accuracy | product-desc-test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:---------------------------------------:|:---------------------------------:|
| -1 | -1 | - | - | 0.9800 | - |
| 0.6667 | 100 | 0.7042 | 0.2423 | 0.9967 | - |
| -1 | -1 | - | - | 1.0 | 0.9933 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-is-de | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:58:29 | 66 | 0 | ---
language:
- is
- de
license: apache-2.0
tags:
- translation
---
### isl-deu
* source group: Icelandic
* target group: German
* OPUS readme: [isl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-deu/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.deu | 49.2 | 0.661 |
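
A minimal usage sketch with the Transformers `pipeline` API (the Icelandic example sentence is arbitrary):

```python
from transformers import pipeline

# Load the converted Marian checkpoint through the generic translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-is-de")
print(translator("Ég tala smá íslensku.")[0]["translation_text"])
```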
### System Info:
- hf_name: isl-deu
- source_languages: isl
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'de']
- src_constituents: {'isl'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: deu
- short_pair: is-de
- chrF2_score: 0.6609999999999999
- bleu: 49.2
- brevity_penalty: 0.998
- ref_len: 6265.0
- src_name: Icelandic
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: de
- prefer_old: False
- long_pair: isl-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
"TRANSLATION"
] | Non_BioNLP |
TransferGraph/CAMeL-Lab_bert-base-arabic-camelbert-mix-did-nadi-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi",
"base_model:adapter:CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,709,055,210,000 | 2024-02-27T17:33:32 | 0 | 0 | ---
base_model: CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: CAMeL-Lab_bert-base-arabic-camelbert-mix-did-nadi-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.6209424083769634
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CAMeL-Lab_bert-base-arabic-camelbert-mix-did-nadi-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6209
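
A rough loading sketch with PEFT (the binary label count and the use of `ignore_mismatched_sizes` are assumptions; adapt them to your setup):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi"
adapter_id = "TransferGraph/CAMeL-Lab_bert-base-arabic-camelbert-mix-did-nadi-finetuned-lora-tweet_eval_irony"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# tweet_eval/irony is a binary task; ignore_mismatched_sizes swaps the base model's
# original classification head for a fresh 2-label head before the adapter is applied.
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```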
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4911 | None | 0 |
| 0.5634 | 0.6905 | 0 |
| 0.5927 | 0.6617 | 1 |
| 0.6 | 0.6459 | 2 |
| 0.6 | 0.6258 | 3 |
| 0.6 | 0.6176 | 4 |
| 0.6084 | 0.6140 | 5 |
| 0.6073 | 0.5997 | 6 |
| 0.6209 | 0.5952 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
poltextlab/xlm-roberta-large-polish-parlspeech-cap-v3 | poltextlab | text-classification | [
"pytorch",
"xlm-roberta",
"text-classification",
"pl",
"region:us"
] | 1,738,318,286,000 | 2025-02-26T16:08:46 | 0 | 0 | ---
language:
- pl
metrics:
- accuracy
- f1-score
tags:
- text-classification
- pytorch
extra_gated_prompt: 'Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
If you use our models for your work or research, please cite this paper: Sebők,
M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large
Language Models for Multilingual Policy Topic Classification: The Babel Machine
Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434'
extra_gated_fields:
Name: text
Country: country
Institution: text
Institution Email: text
Please specify your academic use case: text
---
# xlm-roberta-large-polish-parlspeech-cap-v3
## Model description
An `xlm-roberta-large` model fine-tuned on Polish training data containing parliamentary speeches (oral questions, interpellations, bill debates, other plenary speeches, urgent questions) labeled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes.
## How to use the model
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
model="poltextlab/xlm-roberta-large-polish-parlspeech-cap-v3",
task="text-classification",
tokenizer=tokenizer,
use_fast=False,
token="<your_hf_read_only_token>"
)
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
The translation table from the model results to CAP codes is the following:
```python
CAP_NUM_DICT = {
0: 1,
1: 2,
2: 3,
3: 4,
4: 5,
5: 6,
6: 7,
7: 8,
8: 9,
9: 10,
10: 12,
11: 13,
12: 14,
13: 15,
14: 16,
15: 17,
16: 18,
17: 19,
18: 20,
19: 21,
20: 23,
21: 999,
}
```
We have included a 999 label because our models are fine-tuned on training data containing the label 'None' in addition to the 21 CAP major policy topic codes, indicating that the given text contains no relevant policy content. We use the label 999 for these cases.
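
As a hedged convenience sketch, continuing from the `pipe`, `text`, and `CAP_NUM_DICT` snippets above (this assumes the pipeline returns generic labels of the form `LABEL_<n>`; adjust the parsing if the model config defines explicit label names):

```python
pred = pipe(text)[0]                             # e.g. {"label": "LABEL_2", "score": 0.97}
label_index = int(pred["label"].split("_")[-1])  # recover the class index
cap_code = CAP_NUM_DICT[label_index]             # CAP major topic code (999 = no policy content)
print(cap_code)
```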
### Gated access
Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
## Model performance
The model was evaluated on a test set of 96344 examples.<br>
Model accuracy is **0.81**.
| label | precision | recall | f1-score | support |
|:------------ | --------: | -----: | -------: | ------: |
| 0 | 0.69 | 0.72 | 0.71 | 7784 |
| 1 | 0.58 | 0.48 | 0.52 | 2057 |
| 2 | 0.75 | 0.80 | 0.77 | 2247 |
| 3 | 0.73 | 0.71 | 0.72 | 1979 |
| 4 | 0.57 | 0.57 | 0.57 | 1991 |
| 5 | 0.81 | 0.78 | 0.80 | 2127 |
| 6 | 0.64 | 0.68 | 0.66 | 930 |
| 7 | 0.73 | 0.64 | 0.68 | 1069 |
| 8 | 0.66 | 0.56 | 0.61 | 588 |
| 9 | 0.77 | 0.78 | 0.77 | 1423 |
| 10 | 0.68 | 0.68 | 0.68 | 2910 |
| 11 | 0.60 | 0.61 | 0.61 | 1769 |
| 12 | 0.57 | 0.63 | 0.60 | 1423 |
| 13 | 0.68 | 0.52 | 0.59 | 2581 |
| 14 | 0.72 | 0.74 | 0.73 | 1253 |
| 15 | 0.69 | 0.68 | 0.69 | 952 |
| 16 | 0.56 | 0.30 | 0.39 | 557 |
| 17 | 0.64 | 0.60 | 0.62 | 3281 |
| 18 | 0.60 | 0.63 | 0.62 | 8384 |
| 19 | 0.54 | 0.55 | 0.54 | 1261 |
| 20 | 0.58 | 0.47 | 0.52 | 507 |
| 21 | 0.96 | 0.97 | 0.97 | 49271 |
| macro avg | 0.67 | 0.64 | 0.65 | 96344 |
| weighted avg | 0.81 | 0.81 | 0.81 | 96344 |
### Fine-tuning procedure
This model was fine-tuned with the following key hyperparameters:
- **Number of Training Epochs**: 10
- **Batch Size**: 40
- **Learning Rate**: 5e-06
- **Early Stopping**: enabled with a patience of 2 epochs
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Reference
Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. In order to use the model before `transformers==4.27` you need to install it manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue. | [
"TRANSLATION"
] | Non_BioNLP |
shinjiyamas/reddit-construct-classify | shinjiyamas | null | [
"transformers",
"RobertaWithFeatures",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,717,137,448,000 | 2024-05-31T08:54:47 | 6 | 1 | ---
license: mit
---
# Project Name
Provide a brief introduction to what the project does and its target audience. Describe the problems it solves or the functionality it offers.
## Features
- Custom integration of numerical features with text data using RoBERTa.
- Ability to handle complex text classification tasks with additional context from numerical data.
## Prerequisites
Before you begin, ensure you have met the following requirements:
- Python 3.6+
- PyTorch 1.4+
- Transformers 3.5+
## Installation
Install the required packages using pip:
```bash
pip install torch transformers
```

## Model Definition

```python
import torch
import torch.nn as nn
from transformers import RobertaModel
class RobertaWithFeatures(nn.Module):
def __init__(self, num_features, num_labels):
super(RobertaWithFeatures, self).__init__()
# Load the pretrained RoBERTa base model
self.roberta = RobertaModel.from_pretrained('roberta-base')
# Define a linear layer to process numerical features
self.feature_processor = nn.Linear(num_features, 768)
# Final classifier that takes the concatenated output of text + numerical features
self.classifier = nn.Linear(768 * 2, num_labels)
    def forward(self, input_ids, attention_mask, features):
        # Encode the text and take the pooled representation of the [CLS] token
        outputs = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs.pooler_output
        # Project the numerical features into the same 768-dimensional space
        features_processed = self.feature_processor(features).squeeze(1)
        # Concatenate text and feature representations, then classify
        combined_features = torch.cat((sequence_output, features_processed), dim=1)
        logits = self.classifier(combined_features)
        return logits
```
## Instantiate the model
```python
model = RobertaWithFeatures(num_features=8, num_labels=2)
model.load_state_dict(torch.load('roberta_with_features_v1.pth'))
model.eval() # Set the model to evaluation mode
```
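
A hedged inference sketch, continuing from the snippets above (the tokenizer choice, feature values, and preprocessing are illustrative assumptions and should be adapted to how the model was actually trained):

```python
import torch
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

# Placeholder input: one text plus its 8 numerical features (values are illustrative only).
encoding = tokenizer(
    "Example Reddit post text",
    return_tensors='pt',
    truncation=True,
    padding='max_length',
    max_length=128,
)
features = torch.tensor([[0.1, 0.5, 0.0, 1.0, 0.3, 0.7, 0.2, 0.9]])  # shape (1, 8)

with torch.no_grad():
    logits = model(encoding['input_ids'], encoding['attention_mask'], features)
print(torch.argmax(logits, dim=1).item())  # predicted class index
```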
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
CATIE-AQ/QAmembert | CATIE-AQ | question-answering | [
"transformers",
"pytorch",
"safetensors",
"camembert",
"question-answering",
"fr",
"dataset:etalab-ia/piaf",
"dataset:fquad",
"dataset:lincoln/newsquadfr",
"dataset:pragnakalp/squad_v2_french_translated",
"dataset:CATIE-AQ/frenchQA",
"arxiv:1910.09700",
"doi:10.57967/hf/0821",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | 1,673,368,406,000 | 2024-11-26T10:46:29 | 114 | 14 | ---
datasets:
- etalab-ia/piaf
- fquad
- lincoln/newsquadfr
- pragnakalp/squad_v2_french_translated
- CATIE-AQ/frenchQA
language: fr
library_name: transformers
license: mit
metrics:
- f1
- exact_match
pipeline_tag: question-answering
widget:
- text: Combien de personnes utilisent le français tous les jours ?
context: 'Le français est une langue indo-européenne de la famille des langues romanes
dont les locuteurs sont appelés francophones. Elle est parfois surnommée la langue
de Molière. Le français est parlé, en 2023, sur tous les continents par environ
321 millions de personnes : 235 millions l''emploient quotidiennement et 90 millions
en sont des locuteurs natifs. En 2018, 80 millions d''élèves et étudiants s''instruisent
en français dans le monde. Selon l''Organisation internationale de la francophonie
(OIF), il pourrait y avoir 700 millions de francophones sur Terre en 2050.'
co2_eq_emissions: 100
new_version: CATIE-AQ/QAmemberta
---
# QAmembert
## Model Description
We present **QAmemBERT**, a [CamemBERT base](https://huggingface.co/camembert-base) model fine-tuned for French question answering on four French Q&A datasets. These datasets contain context/question pairs whose answers appear in the context (SQuAD 1.0 format) as well as pairs whose answers do not (SQuAD 2.0 format).
All these datasets were concatenated into a single dataset that we called [frenchQA](https://huggingface.co/datasets/CATIE-AQ/frenchQA).
This represents a total of over **221,348 context/question/answer triplets used to finetune this model and 6,376 to test it**.
Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/QA_en/) or [French](https://blog.vaniila.ai/QA/).
## Datasets
| Dataset | Format | Train split | Dev split | Test split |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| [piaf](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)| SQuAD 1.0 | 9 224 Q & A | X | X |
| piaf_v2| SQuAD 2.0 | 9 224 Q & A | X | X |
| [fquad](https://fquad.illuin.tech/)| SQuAD 1.0 | 20 731 Q & A | 3 188 Q & A (not used in training because it serves as a test dataset) | 2 189 Q & A (not used in our work because not freely available)|
| fquad_v2 | SQuAD 2.0 | 20 731 Q & A | 3 188 Q & A (not used in training because it serves as a test dataset) | X |
| [lincoln/newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) | SQuAD 1.0 | 1 650 Q & A | 455 Q & A (not used in our work) | X |
| lincoln/newsquadfr_v2 | SQuAD 2.0 | 1 650 Q & A | 455 Q & A (not used in our work) | X |
| [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated)| SQuAD 2.0 | 79 069 Q & A | X | X |
| pragnakalp/squad_v2_french_translated_v2| SQuAD 2.0 | 79 069 Q & A | X | X |
All these datasets were concatenated into a single dataset that we called [frenchQA](https://huggingface.co/datasets/CATIE-AQ/frenchQA).
## Evaluation results
The evaluation was carried out using the [**evaluate**](https://pypi.org/project/evaluate/) Python package.
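
As a hedged illustration of how such scores are typically computed with this package (the prediction and reference values below are placeholders in the SQuAD v2 schema):

```python
import evaluate

squad_v2 = evaluate.load("squad_v2")

predictions = [{"id": "0", "prediction_text": "235 millions", "no_answer_probability": 0.0}]
references = [{"id": "0", "answers": {"text": ["235 millions"], "answer_start": [269]}}]

print(squad_v2.compute(predictions=predictions, references=references))
```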
### FQuaD 1.0 (validation)
The metric used is SQuAD 1.0.
| Model | Exact_match | F1-score |
| ----------- | ----------- | ----------- |
| [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) | 53.60 | 78.09 |
| QAmembert (previous version) | 54.26 | 77.87 |
| QAmembert (**this version**) | 53.98 | 78.00 |
| [QAmembert-large](https://huggingface.co/CATIE-AQ/QAmembert-large) | **55.95** | **81.05** |
### qwant/squad_fr (validation)
The metric used is SQuAD 1.0.
| Model | Exact_match | F1-score |
| ----------- | ----------- | ----------- |
| [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) | 60.17 | 78.27 |
| QAmembert (previous version) | 60.40 | 77.27 |
| QAmembert (**this version**) | 60.95 | 77.30 |
| [QAmembert-large](https://huggingface.co/CATIE-AQ/QAmembert-large) | **65.58** | **81.74** |
### frenchQA
This dataset includes question with no answers in the context. The metric used is SQuAD 2.0.
| Model | Exact_match | F1-score | Answer_f1 | NoAnswer_f1 |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf) | n/a | n/a | n/a | n/a |
| QAmembert (previous version) | 60.28 | 71.29 | 75.92 | 66.65 |
| QAmembert (**this version**) | **77.14** | 86.88 | 75.66 | 98.11 |
| [QAmembert-large](https://huggingface.co/CATIE-AQ/QAmembert-large) | **77.14** | **88.74** | **78.83** | **98.65** |
## Usage
### Example with answer in the context
```python
from transformers import pipeline
qa = pipeline('question-answering', model='CATIE-AQ/QAmembert', tokenizer='CATIE-AQ/QAmembert')
result = qa({
'question': "Combien de personnes utilisent le français tous les jours ?",
'context': "Le français est une langue indo-européenne de la famille des langues romanes dont les locuteurs sont appelés francophones. Elle est parfois surnommée la langue de Molière. Le français est parlé, en 2023, sur tous les continents par environ 321 millions de personnes : 235 millions l'emploient quotidiennement et 90 millions en sont des locuteurs natifs. En 2018, 80 millions d'élèves et étudiants s'instruisent en français dans le monde. Selon l'Organisation internationale de la francophonie (OIF), il pourrait y avoir 700 millions de francophones sur Terre en 2050."
})
if result['score'] < 0.01:
print("La réponse n'est pas dans le contexte fourni.")
else :
print(result['answer'])
```
```python
235 millions
```
```python
# details
result
{'score': 0.9945194721221924,
'start': 269,
'end': 281,
'answer': '235 millions'}
```
### Example with answer not in the context
```python
from transformers import pipeline
qa = pipeline('question-answering', model='CATIE-AQ/QAmembert', tokenizer='CATIE-AQ/QAmembert')
result = qa({
'question': "Quel est le meilleur vin du monde ?",
'context': "La tour Eiffel est une tour de fer puddlé de 330 m de hauteur (avec antennes) située à Paris, à l’extrémité nord-ouest du parc du Champ-de-Mars en bordure de la Seine dans le 7e arrondissement. Son adresse officielle est 5, avenue Anatole-France.
Construite en deux ans par Gustave Eiffel et ses collaborateurs pour l'Exposition universelle de Paris de 1889, célébrant le centenaire de la Révolution française, et initialement nommée « tour de 300 mètres », elle est devenue le symbole de la capitale française et un site touristique de premier plan : il s’agit du quatrième site culturel français payant le plus visité en 2016, avec 5,9 millions de visiteurs. Depuis son ouverture au public, elle a accueilli plus de 300 millions de visiteurs."
})
if result['score'] < 0.01:
print("La réponse n'est pas dans le contexte fourni.")
else :
print(result['answer'])
```
```python
La réponse n'est pas dans le contexte fourni.
```
```python
# details
result
{'score': 3.619904940035945e-13,
'start': 734,
'end': 744,
'answer': 'visiteurs.'}
```
### Try it through Space
A Space has been created to test the model. It is available [here](https://huggingface.co/spaces/CATIE-AQ/Qamembert).
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** A100 PCIe 40/80GB
- **Hours used:** 5h and 36 min
- **Cloud Provider:** Private Infrastructure
- **Carbon Efficiency (kg/kWh):** 0.076 kg (estimated from [electricitymaps](https://app.electricitymaps.com/zone/FR); we use the average carbon intensity in France for March 2023, since data for the exact training days are not available.)
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 0.1 kg eq. CO2
## Citations
### QAmemBERT
```
@misc {qamembert2023,
author = { {ALBAR, Boris and BEDU, Pierre and BOURDOIS, Loïck} },
organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { QAmembert (Revision 9685bc3) },
year = 2023,
url = { https://huggingface.co/CATIE-AQ/QAmembert},
doi = { 10.57967/hf/0821 },
publisher = { Hugging Face }
}
```
### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
### lincoln/newsquadfr
```
Hugging Face repository: https://hf.co/datasets/lincoln/newsquadfr
```
### pragnakalp/squad_v2_french_translated
```
Hugging Face repository: https://hf.co/datasets/pragnakalp/squad_v2_french_translated
```
### CamemBERT
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
## License
MIT | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
Priyanka-Balivada/electra-5-epoch-sentiment | Priyanka-Balivada | text-classification | [
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"base_model:google/electra-small-discriminator",
"base_model:finetune:google/electra-small-discriminator",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,698,574,972,000 | 2024-02-20T14:32:28 | 20 | 0 | ---
base_model: google/electra-small-discriminator
datasets:
- tweet_eval
license: apache-2.0
metrics:
- accuracy
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: electra-5-epoch-sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: test
args: sentiment
metrics:
- type: accuracy
value: 0.6893520026050146
name: Accuracy
- type: precision
value: 0.6913776305729754
name: Precision
- type: recall
value: 0.6893520026050146
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**Note:** the tokenizer and trainer artifacts in this repository are corrupted.
# electra-5-epoch-sentiment
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7949
- Accuracy: 0.6894
- Precision: 0.6914
- Recall: 0.6894
- Micro-avg-recall: 0.6894
- Micro-avg-precision: 0.6894
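
A minimal inference sketch with the `pipeline` API (because of the corrupted-tokenizer note above, the base model's tokenizer is loaded explicitly here; adjust if the repository's own tokenizer works for you):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Priyanka-Balivada/electra-5-epoch-sentiment",
    tokenizer="google/electra-small-discriminator",  # assumption: fall back to the base tokenizer
)
print(classifier("I love how sunny it is today!"))
```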
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Micro-avg-recall | Micro-avg-precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:----------------:|:-------------------:|
| 0.5949 | 1.0 | 2851 | 0.6963 | 0.6926 | 0.6943 | 0.6926 | 0.6926 | 0.6926 |
| 0.6502 | 2.0 | 5702 | 0.7348 | 0.6911 | 0.6929 | 0.6911 | 0.6911 | 0.6911 |
| 0.556 | 3.0 | 8553 | 0.7322 | 0.6943 | 0.6952 | 0.6943 | 0.6943 | 0.6943 |
| 0.4561 | 4.0 | 11404 | 0.7601 | 0.6895 | 0.6916 | 0.6895 | 0.6895 | 0.6895 |
| 0.471 | 5.0 | 14255 | 0.7949 | 0.6894 | 0.6914 | 0.6894 | 0.6894 | 0.6894 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
MemorialStar/distilbert-base-uncased-finetuned-emotion | MemorialStar | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,709,366,604,000 | 2024-03-02T10:47:06 | 4 | 0 | ---
base_model: distilbert/distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.931
name: Accuracy
- type: f1
value: 0.9309142811171885
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2164
- Accuracy: 0.931
- F1: 0.9309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8133 | 1.0 | 250 | 0.2963 | 0.9135 | 0.9129 |
| 0.2443 | 2.0 | 500 | 0.2164 | 0.931 | 0.9309 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.0+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-es-tll | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"tll",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:33:37 | 357 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-es-tll
* source languages: es
* target languages: tll
* OPUS readme: [es-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tll | 20.7 | 0.434 |
| [
"TRANSLATION"
] | Non_BioNLP |
BatirayErbayVodafone/testg | BatirayErbayVodafone | text-generation | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,725,916,794,000 | 2024-09-10T04:52:10 | 7 | 0 | ---
base_model: google/gemma-2-9b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your usecase.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-9b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model 9b --preset speed
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny; their input data pre-processing and posterior evaluations are
    described and reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
dmedhi/eng2french-t5-small | dmedhi | translation | [
"peft",
"safetensors",
"translation",
"transformers",
"en",
"fr",
"dataset:opus100",
"base_model:google-t5/t5-small",
"base_model:adapter:google-t5/t5-small",
"license:apache-2.0",
"region:us"
] | 1,702,984,347,000 | 2023-12-19T18:12:31 | 12 | 0 | ---
base_model: t5-small
datasets:
- opus100
language:
- en
- fr
library_name: peft
license: apache-2.0
tags:
- translation
- safetensors
- transformers
---
# Model Card for Model ID
A language translation model fine-tuned on the **opus100** dataset for *English to French* translation.
## Model Description
- **Model type:** Language Model
- **Language(s) (NLP):** English, French
- **License:** Apache 2.0
- **Finetuned from model:** [T5-small](https://huggingface.co/t5-small)
## Uses
The model is intended to be used for English-to-French translation tasks.
## How to Get Started with the Model
Install necessary libraries
```
pip install transformers peft accelerate
```
Use the code below to get started with the model.
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dmedhi/eng2french-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model = PeftModel.from_pretrained(model, "dmedhi/eng2french-t5-small")
context = tokenizer(["Do you want coffee?"], return_tensors='pt')
output = model.generate(**context)
result = tokenizer.decode(output[0], skip_special_tokens=True)
print(result)
# Output
# Tu veux du café?
```
## Training Details
### Training Data
- Dataset used: [Opus100](https://huggingface.co/datasets/opus100)
- Subset: "en-fr"
## Evaluation
- global_step=5000
- training_loss=1.295289501953125
#### Metrics
- train_runtime = 1672.4371
- train_samples_per_second = 23.917
- train_steps_per_second = 2.99
- total_flos = 685071170273280.0
- train_loss = 1.295289501953125
- epoch = 20.0
## Compute Instance
- Google Colab - T4 GPU (Free)
### Framework versions
- PEFT 0.7.1 | [
"TRANSLATION"
] | Non_BioNLP |
elybes/IFRS_en_ar_translation | elybes | translation | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"finance",
"IFRS",
"translation",
"ar",
"en",
"dataset:elybes/IFRS",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,722,330,593,000 | 2024-08-13T20:39:10 | 28 | 1 | ---
datasets:
- elybes/IFRS
language:
- ar
- en
metrics:
- bleu
pipeline_tag: translation
tags:
- finance
- IFRS
- translation
---
| [
"TRANSLATION"
] | Non_BioNLP |
LoneStriker/bagel-7b-v0.1-5.0bpw-h6-exl2-2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,702,490,552,000 | 2023-12-13T18:06:31 | 6 | 0 | ---
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
license: apache-2.0
---
# A bagel, with everything (except DPO)

## Overview
This is the pre-DPO version of the mistral-7b model fine-tuned with https://github.com/jondurbin/bagel
You probably want the higher performing model that underwent DPO: https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1
The only benefit to this model is that it is less "truthful", for roleplaying and other types of scenarios that may benefit more from the SFT-only tune.
## Data selection.
The first step in the process is creating a dataset.
In this case, we're actually creating a composite dataset, consisting of both supervised fine-tuning data (SFT) and direct preference optimization (DPO) data.
All instruction data, that is, data that is not plain text (like project Gutenberg and items from Cinematika) or DPO, is converted into ShareGPT format so it's easier to work with.
See the corresponding code in `bagel/data_sources/*.py` in the repo linked above for full implementation for each data source.
Deduplication is done by creating a uuid v5 of the instruction/text, then only adding items not previously seen (where datasets are loaded in order of the confidence score I assign them).
This means that if an instruction is in data source "Foo" with confidence 4 as well as in data source "Bar" with confidence score 2, only the entry from "Foo" will be taken.
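As a rough illustration of that dedup step (the field name and the uuid namespace here are assumptions, not the repo's actual code):
```python
import uuid

def dedupe(items):
    """Keep the first occurrence of each instruction/text, assuming `items`
    is iterated in descending source-confidence order."""
    seen, kept = set(), []
    for item in items:
        key = uuid.uuid5(uuid.NAMESPACE_DNS, item["instruction"])
        if key not in seen:
            seen.add(key)
            kept.append(item)
    return kept
```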
### SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [piqa](https://huggingface.co/datasets/piqa)
  - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is really basically 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
In practice, this would mean tokenization code like such:
```python
tokenizer = AutoTokenizer.from_pretrained('mistralai/mistral-7b-v0.1')
input_str = f"""system
You are a goat.
{tokenizer.eos_token}
{tokenizer.bos_token}user
Tell me how to fry an egg.
{tokenizer.eos_token}
{tokenizer.bos_token}assistant
"""
inputs = tokenizer(input_str, return_tensors="pt")
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
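Putting the four formats side by side, a rough sketch of the conversion (function and field names are mine, not the bagel repo's actual code):
```python
# Hypothetical sketch: render one instruction into each prompt format described
# above. The chat-ml-ish variant relies on the tokenizer to add the leading BOS,
# as in the tokenization example earlier.
def render_all_formats(system: str, instruction: str) -> dict:
    alpaca = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{system}\n{instruction}\n\n### Response:\n"
    )
    vicuna = f"{system}\nUSER: {instruction}\nASSISTANT:"
    chat_ml_ish = (
        f"system\n{system}\n</s>\n"
        f"<s>user\n{instruction}\n</s>\n"
        f"<s>assistant\n"
    )
    llama2 = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]"
    return {"alpaca": alpaca, "vicuna": vicuna, "chat-ml": chat_ml_ish, "llama-2": llama2}
```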
### Fine-tune
*Note: I actually used my fork of [qlora](https://github.com/jondurbin/qlora)'s `train.py` for this, but I'm porting it to a minified version here, not tested yet!*
*More notes: I stopped the fine-tune around 50% because of budget constraints - it's a lot of data...*
```bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.json
```
Deepspeed configuration:
```json
{
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 2,
"contiguous_gradients": true,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"allgather_bucket_size": 5e8
}
}
``` | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task660 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 1,736,086,147,000 | 2025-01-05T14:09:13 | 0 | 0 | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task660
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task660_mizan_fa_en_translation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
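In the absence of an official snippet, a minimal sketch (assuming the adapter loads with PEFT like other LoRAs in this collection; the prompt below is illustrative only):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task660"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the task660 LoRA

prompt = "[INST] Translate the following sentence from Persian to English: ... [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```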
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task660_mizan_fa_en_translation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"TRANSLATION"
] | Non_BioNLP |
pardeep/distilbert-base-uncased-finetuned-emotion-ch02 | pardeep | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,658,053,868,000 | 2022-07-17T10:54:29 | 104 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion-ch02
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.934
name: Accuracy
- type: f1
value: 0.9341801255709286
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-ch02
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1703
- Accuracy: 0.934
- F1: 0.9342
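A quick usage sketch (not part of the auto-generated card; the exact label strings depend on the saved label mapping for the emotion dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pardeep/distilbert-base-uncased-finetuned-emotion-ch02",
)
print(classifier("I am thrilled about these results!"))
# e.g. [{'label': 'joy', 'score': 0.99}] -- label names depend on the model config
```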
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2923 | 1.0 | 250 | 0.2001 | 0.9275 | 0.9263 |
| 0.1485 | 2.0 | 500 | 0.1703 | 0.934 | 0.9342 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
potsawee/t5-large-generation-race-QuestionAnswer | potsawee | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:race",
"arxiv:2301.12307",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,677,109,278,000 | 2023-03-12T16:10:27 | 83 | 16 | ---
datasets:
- race
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text2text-generation
---
# t5-large fine-tuned to RACE for Generating Question+Answer
- Input: `context` (e.g. news article)
- Output: `question <sep> answer`
This model generates **abstractive** answers following the RACE dataset. If you would like to have **extractive** questions/answers, you can use our model trained on SQuAD: https://huggingface.co/potsawee/t5-large-generation-squad-QuestionAnswer.
## Model Details
The t5-large model is fine-tuned on the RACE dataset, where the input is the context/passage and the output is the question followed by the answer. This is the first component in the question generation pipeline (i.e. `g1`) in our [MQAG paper](https://arxiv.org/abs/2301.12307),
or please refer to the GitHub repo of this project: https://github.com/potsawee/mqag0.
## How to Use the Model
Use the code below to get started with the model. You can also set do_sample=True in generate() to obtain different question-answer pairs.
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("potsawee/t5-large-generation-race-QuestionAnswer")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("potsawee/t5-large-generation-race-QuestionAnswer")
>>> context = r"""
... World number one Novak Djokovic says he is hoping for a "positive decision" to allow him
... to play at Indian Wells and the Miami Open next month. The United States has extended
... its requirement for international visitors to be vaccinated against Covid-19. Proof of vaccination
... will be required to enter the country until at least 10 April, but the Serbian has previously
... said he is unvaccinated. The 35-year-old has applied for special permission to enter the country.
... Indian Wells and the Miami Open - two of the most prestigious tournaments on the tennis calendar
... outside the Grand Slams - start on 6 and 20 March respectively. Djokovic says he will return to
... the ATP tour in Dubai next week after claiming a record-extending 10th Australian Open title
... and a record-equalling 22nd Grand Slam men's title last month.""".replace("\n", "")
>>> inputs = tokenizer(context, return_tensors="pt")
>>> outputs = model.generate(**inputs, max_length=100)
>>> question_answer = tokenizer.decode(outputs[0], skip_special_tokens=False)
>>> question_answer = question_answer.replace(tokenizer.pad_token, "").replace(tokenizer.eos_token, "")
>>> question, answer = question_answer.split(tokenizer.sep_token)
>>> print("question:", question)
question: What is the best title for the passage?
>>> print("answer:", answer)
answer: Djokovic's application for special permission to enter the United States
```
## Generating Distractors (other options in a multiple-choice setup)
```Context ---> Question + (A) Answer (B) Distractor1 (C) Distractor2 (D) Distractor3```
Please refer to our distractor generation model: https://huggingface.co/potsawee/t5-large-generation-race-Distractor
## Citation
```bibtex
@article{manakul2023mqag,
title={MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization},
author={Manakul, Potsawee and Liusie, Adian and Gales, Mark JF},
journal={arXiv preprint arXiv:2301.12307},
year={2023}
}
``` | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old | Atharvgarg | text2text-generation | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,659,021,898,000 | 2022-07-28T16:04:21 | 20 | 0 | ---
license: apache-2.0
metrics:
- rouge
tags:
- summarisation
- generated_from_trainer
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old
This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6733
- Rouge1: 60.9431
- Rouge2: 49.8688
- Rougel: 42.4663
- Rougelsum: 59.836
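A usage sketch (not part of the auto-generated card; it follows the encoder-decoder pattern of the base checkpoint and assumes the tokenizer was saved alongside this model):
```python
import torch
from transformers import BertTokenizerFast, EncoderDecoderModel

ckpt = "Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = BertTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt).to(device)

def summarize(text: str) -> str:
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt").to(device)
    output = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(summarize("Your BBC news article goes here..."))
```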
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.8246 | 1.0 | 223 | 0.6974 | 55.2742 | 41.9883 | 37.8584 | 53.7602 |
| 0.6396 | 2.0 | 446 | 0.6786 | 56.0006 | 43.1917 | 38.5125 | 54.4571 |
| 0.5582 | 3.0 | 669 | 0.6720 | 57.8912 | 45.7807 | 40.0807 | 56.4985 |
| 0.505 | 4.0 | 892 | 0.6659 | 59.6611 | 48.0095 | 41.752 | 58.5059 |
| 0.4611 | 5.0 | 1115 | 0.6706 | 59.7241 | 48.164 | 41.4523 | 58.5295 |
| 0.4254 | 6.0 | 1338 | 0.6711 | 59.8524 | 48.1821 | 41.2299 | 58.6072 |
| 0.3967 | 7.0 | 1561 | 0.6718 | 60.3009 | 49.0085 | 42.0306 | 59.0723 |
| 0.38 | 8.0 | 1784 | 0.6733 | 60.9431 | 49.8688 | 42.4663 | 59.836 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| [
"SUMMARIZATION"
] | Non_BioNLP |
aiola/roberta-large-corener | aiola | fill-mask | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"NER",
"named entity recognition",
"RE",
"relation extraction",
"entity mention detection",
"EMD",
"coreference resolution",
"en",
"dataset:Ontonotes",
"dataset:CoNLL04",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,653,466,421,000 | 2022-07-03T14:16:17 | 102 | 2 | ---
datasets:
- Ontonotes
- CoNLL04
language:
- en
license: afl-3.0
tags:
- NER
- named entity recognition
- RE
- relation extraction
- entity mention detection
- EMD
- coreference resolution
---
# CoReNer
## Demo
We released an online demo so you can easily play with the model. Check it out: [http://corener-demo.aiola-lab.com](http://corener-demo.aiola-lab.com).
The demo uses the [aiola/roberta-base-corener](https://huggingface.co/aiola/roberta-base-corener) model.
## Model description
A multi-task model for named-entity recognition, relation extraction, entity mention detection, and coreference resolution.
We model NER as a span classification task and relation extraction as a multi-label classification of (NER) span tuples.
Similarly, we model EMD as a span classification task and CR as a binary classification of (EMD) span tuples.
To construct the CR clusters, we keep the top antecedent of each mention, then compute the connected components of the mentions' undirected graph.
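That clustering step can be illustrated with a standalone sketch (not code from the CoReNer package; mention ids and the top-antecedent mapping are hypothetical inputs):
```python
from collections import defaultdict

def clusters_from_top_antecedents(top_antecedent):
    # top_antecedent: dict mapping mention_id -> predicted antecedent id (or None).
    # Clusters are the connected components of the undirected mention graph.
    adj = defaultdict(set)
    for mention, antecedent in top_antecedent.items():
        if antecedent is not None:
            adj[mention].add(antecedent)
            adj[antecedent].add(mention)
    seen, clusters = set(), []
    for mention in top_antecedent:
        if mention in seen:
            continue
        stack, component = [mention], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.append(node)
            stack.extend(adj[node] - seen)
        clusters.append(sorted(component))
    return clusters

# Mentions 0 and 2 corefer via an antecedent link; mention 1 stays a singleton.
print(clusters_from_top_antecedents({0: None, 1: None, 2: 0}))  # [[0, 2], [1]]
```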
The model was trained to recognize:
- Entity types: GPE, ORG, PERSON, DATE, NORP, CARDINAL, MONEY, PERCENT, WORK_OF_ART, ORDINAL, EVENT, LOC, TIME, FAC, QUANTITY, LAW, PRODUCT, LANGUAGE.
- Relation types: Kill, Live_In, Located_In, OrgBased_In, Work_For.
## Usage example
See additional details and usage examples at: https://github.com/aiola-lab/corener.
```python
import json
from transformers import AutoTokenizer
from corener.models import Corener, ModelOutput
from corener.data import MTLDataset
from corener.utils.prediction import convert_model_output
tokenizer = AutoTokenizer.from_pretrained("aiola/roberta-large-corener")
model = Corener.from_pretrained("aiola/roberta-large-corener")
model.eval()
examples = [
"Apple Park is the corporate headquarters of Apple Inc., located in Cupertino, California, United States. It was opened to employees in April 2017, while construction was still underway, and superseded the original headquarters at 1 Infinite Loop, which opened in 1993."
]
dataset = MTLDataset(
types=model.config.types,
tokenizer=tokenizer,
train_mode=False,
)
dataset.read_dataset(examples)
example = dataset.get_example(0) # get first example
output: ModelOutput = model(
input_ids=example.encodings,
context_masks=example.context_masks,
entity_masks=example.entity_masks,
entity_sizes=example.entity_sizes,
entity_spans=example.entity_spans,
entity_sample_masks=example.entity_sample_masks,
inference=True,
)
print(json.dumps(convert_model_output(output=output, batch=example, dataset=dataset), indent=2))
``` | [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"COREFERENCE_RESOLUTION"
] | Non_BioNLP |
gigauser/kcbert_nsmc_tuning | gigauser | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:nsmc",
"base_model:beomi/kcbert-base",
"base_model:finetune:beomi/kcbert-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,720,360,869,000 | 2024-07-08T06:00:35 | 12 | 0 | ---
base_model: beomi/kcbert-base
datasets:
- nsmc
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: kcbert_nsmc_tuning
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: nsmc
type: nsmc
config: default
split: test
args: default
metrics:
- type: accuracy
value: 0.90134
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kcbert_nsmc_tuning
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the nsmc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4492
- Accuracy: 0.9013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1689 | 1.0 | 2344 | 0.2717 | 0.9006 |
| 0.0951 | 2.0 | 4688 | 0.3458 | 0.8995 |
| 0.051 | 3.0 | 7032 | 0.4492 | 0.9013 |
### Framework versions
- Transformers 4.42.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
seongwkim/distilbert-base-uncased-finetuned-emotion | seongwkim | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,650,526,006,000 | 2022-04-21T08:34:19 | 120 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.923
name: Accuracy
- type: f1
value: 0.9230166540210804
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.923
- F1: 0.9230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8643 | 1.0 | 250 | 0.3395 | 0.901 | 0.8969 |
| 0.2615 | 2.0 | 500 | 0.2251 | 0.923 | 0.9230 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
cgus/granite-3.2-8b-instruct-preview-exl2 | cgus | text-generation | [
"exllamav2",
"granite",
"language",
"granite-3.2",
"text-generation",
"conversational",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-3.2-8b-instruct-preview",
"base_model:quantized:ibm-granite/granite-3.2-8b-instruct-preview",
"license:apache-2.0",
"4-bit",
"exl2",
"region:us"
] | 1,739,051,780,000 | 2025-02-09T09:37:46 | 60 | 0 | ---
base_model:
- ibm-granite/granite-3.2-8b-instruct-preview
library_name: exllamav2
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.2
inference: false
---
# Granite-3.2-8B-Instruct-Preview-exl2
Original model: [Granite-3.2-8B-Instruct-Preview](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct-preview)
Made by: [Granite Team, IBM](https://huggingface.co/ibm-granite)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/granite-3.2-8b-instruct-preview-exl2/tree/main)
[4.5bpw h6](https://huggingface.co/cgus/granite-3.2-8b-instruct-preview-exl2/tree/4.5bpw-h6)
[5bpw h6](https://huggingface.co/cgus/granite-3.2-8b-instruct-preview-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/granite-3.2-8b-instruct-preview-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/granite-3.2-8b-instruct-preview-exl2/tree/8bpw-h8)
## Quantization notes
Quantized with Exllamav2 0.2.8 using the default calibration dataset. Granite3 models require Exllamav2 0.2.7 or newer.
Exl2 models don't support RAM offloading, so the model has to fit entirely into GPU VRAM.
Exllamav2 also requires an Nvidia RTX GPU on Windows, or an Nvidia RTX / AMD ROCm GPU on Linux.
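For reference, a minimal loading sketch with the exllamav2 Python API (written against a recent exllamav2 release; class and method names are an assumption here and may differ in older versions, so check the exllamav2 examples):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "granite-3.2-8b-instruct-preview-exl2"  # local path to a downloaded quant branch

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # the whole model must fit in VRAM
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="How do I develop a skill?", max_new_tokens=200))
```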
# Original model card
# Granite-3.2-8B-Instruct-Preview
**Model Summary:**
Granite-3.2-8B-Instruct-Preview is an early release of an 8B long-context model fine-tuned for enhanced reasoning (thinking) capabilities. Built on top of [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct), it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.
- **Developers:** Granite Team, IBM
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Release Date**: February 7th, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages.
**Intended Use:**
The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.
**Capabilities**
* **Thinking**
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
* Long-context tasks including long document/meeting summarization, long document QA, etc.
**Generation:**
This is a simple example of how to use the Granite-3.2-8B-Instruct-Preview model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch
model_path="ibm-granite/granite-3.2-8b-instruct-preview"
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map=device,
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
model_path
)
conv = [{"role": "user", "content":"How do I develop a skill?"}]
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
set_seed(42)
output = model.generate(
**input_ids,
max_new_tokens=8192,
)
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```
**Example Outputs**
- thinking=True
```md
Here is my thought process:
1. Understand the user's intent: The user is seeking guidance on how to develop a new skill.
2. Break down the process: Skills development involves several stages - identification, planning, practice, feedback, and refinement.
3. Consider different aspects: Discuss various methods, mindset, and resources needed for effective skill acquisition.
4. Provide actionable steps: Offer a structured approach that the user can follow.
Here is my response:
Developing a new skill is a systematic process that involves several key steps and a dedicated mindset. Here's a comprehensive guide to help you effectively cultivate a new skill:
### 1. **Identify the Skill**
- **Clarity of Purpose**: Clearly define what skill you want to acquire. Be specific about why you want this skill and how it aligns with your goals or interests.
- **Research**: Understand the basics of the skill. Read about it, watch tutorials, or talk to people who are already proficient. This initial research will give you a realistic understanding of what the skill entails.
### 2. **Set Clear Goals**
- **SMART Goals**: Make sure your goals are Specific, Measurable, Achievable, Relevant, and Time-bound. For example, instead of saying "I want to learn programming," set a goal like "I will complete a beginner's Python course within three months."
- **Break Down**: If the skill is complex, break it down into smaller, manageable milestones.
### 3. **Create a Learning Plan**
- **Resources**: Identify the best resources for learning. These could be online courses, books, mentors, workshops, or practical projects.
- **Schedule**: Dedicate time in your routine for skill practice. Consistency is crucial in skill development. Treat your learning time as non-negotiable.
### 4. **Start with the Basics**
- **Foundational Knowledge**: Begin by grasping the fundamentals. Understanding the basics thoroughly sets a strong foundation for more advanced learning.
- **Practice Regularly**: Regular, focused practice is essential. Even 15-30 minutes a day can yield significant results over time.
### 5. **Seek Feedback and Guidance**
- **Mentorship**: If possible, find a mentor or join a community of learners. Feedback from experienced individuals can accelerate your learning curve.
- **Peer Learning**: Engage with others learning the same skill. Peer groups provide support, motivation, and different perspectives.
### 6. **Apply the Skill in Real-World Scenarios**
- **Practical Application**: Look for opportunities to apply what you've learned. This could be through projects, internships, or volunteering. Practical application solidifies your understanding and builds confidence.
- **Iterate Based on Feedback**: After applying your skill, seek and incorporate feedback to refine and improve.
### 7. **Maintain a Growth Mindset**
- **Embrace Challenges**: Understand that setbacks and failures are part of the learning process. Maintain a positive attitude and view them as opportunities to learn.
- **Patience and Persistence**: Skill development takes time. Stay patient and persistent, even when progress seems slow.
### 8. **Track and Reflect**
- **Progress Tracking**: Regularly review your progress. This could be through keeping a learning journal or using apps designed for skill tracking.
- **Reflect and Adjust**: Periodically reflect on your learning process. What's working? What's not? Adjust your plan accordingly.
### 9. **Stay Motivated**
- **Reward System**: Set small rewards for achieving milestones to keep yourself motivated.
- **Stay Curious**: Keep your interest piqued by exploring related topics or advanced aspects of the skill.
By following these steps and maintaining a disciplined, growth-oriented approach, you can effectively develop any new skill. Remember, the key is consistent effort and a willingness to learn from both successes and failures.
Building skills is an ongoing journey, not a destination.
```
- thinking=False
```md
Developing a skill involves a combination of learning, practice, and often, feedback. Here's a step-by-step guide to help you develop a new skill:
1. **Identify the Skill**: Start by clearly defining what skill you want to develop. Be specific. Instead of saying "I want to learn to code," specify a programming language like Python or JavaScript.
2. **Research**: Learn about the basics of the skill. Read books, articles, watch tutorials, or take online courses. Websites like Coursera, Udemy, Khan Academy, and YouTube can be great resources.
3. **Set Clear Goals**: Break down your skill into smaller, manageable goals. For example, if you're learning a new language, your goals might be to learn basic grammar, build a simple sentence, have a basic conversation, etc.
4. **Create a Study Plan**: Allocate specific time each day or week for learning and practicing. Consistency is key in skill development.
5. **Practice**: Apply what you've learned. Practice makes permanent. If you're learning to code, write small programs. If it's a musical instrument, play regularly.
6. **Get Feedback**: Seek feedback from others who are more experienced. This could be a mentor, a tutor, or even online communities. Constructive criticism can help you identify areas for improvement.
7. **Review and Refine**: Regularly review what you've learned. Refine your skills based on feedback and your own observations.
8. **Apply in Real Life**: Try to use your new skill in real-life situations. This could be a project at work, a personal hobby, or volunteering.
9. **Be Patient and Persistent**: Skill development takes time. Don't get discouraged by slow progress or setbacks. Keep practicing and learning.
10. **Stay Motivated**: Keep your end goal in mind and celebrate small victories along the way to stay motivated.
Remember, everyone learns at their own pace, so don't compare your progress with others. The most important thing is that you're consistently moving forward.
```
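The two transcripts above differ only in whether extended reasoning is requested at generation time. As a minimal, non-authoritative sketch of how such a toggle is typically passed through the chat template (the `thinking` keyword argument and the repo id are assumptions inferred from the labels above, not a verified API):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.2-8b-instruct-preview"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype=torch.bfloat16)

conv = [{"role": "user", "content": "How do I develop a skill?"}]

# Assumption: the chat template accepts a `thinking` kwarg that switches the
# reasoning mode on or off, matching the thinking=True / thinking=False labels above.
inputs = tokenizer.apply_chat_template(
    conv, thinking=True, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```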
**Evaluation Results:**
<table>
<thead>
<tr>
<th style="text-align:left; background-color: #001d6c; color: white;">Models</th>
<th style="text-align:center; background-color: #001d6c; color: white;">ArenaHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">Alpaca-Eval-2</th>
<th style="text-align:center; background-color: #001d6c; color: white;">MMLU</th>
<th style="text-align:center; background-color: #001d6c; color: white;">PopQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">TruthfulQA</th>
<th style="text-align:center; background-color: #001d6c; color: white;">BigBenchHard</th>
<th style="text-align:center; background-color: #001d6c; color: white;">DROP</th>
<th style="text-align:center; background-color: #001d6c; color: white;">GSM8K</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">HumanEval+</th>
<th style="text-align:center; background-color: #001d6c; color: white;">IFEval</th>
<th style="text-align:center; background-color: #001d6c; color: white;">AttaQ</th>
</tr></thead>
<tbody>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Llama-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">36.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.22</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">69.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">52.79</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.66</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.48</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.24</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.32</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">80.15</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.43</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Llama-8B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">17.17</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">21.85</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">45.80</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">13.25</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">44.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">72.18</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">67.54</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">62.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.50</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.87</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Qwen-2.5-7B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">25.44</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">30.34</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.30</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">18.12</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">63.06</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">70.40</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">54.71</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.46</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">93.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.91</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">74.90</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">81.90</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">DeepSeek-R1-Distill-Qwen-7B</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">10.36</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">15.35</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.72</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">9.94</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">47.14</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.04</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.76</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.47</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.89</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">78.43</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">59.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">42.45</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.1-8B-Instruct</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">37.58</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">27.87</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.84</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.84</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.92</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">68.10</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.78</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">79.08</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">88.82</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">84.62</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">71.20</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.73</td>
</tr>
<tr>
<td style="text-align:left; background-color: #DAE8FF; color: black;">Granite-3.2-8B-Instruct-Preview</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">55.23</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">61.16</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.93</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">28.08</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">66.37</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">65.60</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">50.73</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">83.09</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">89.47</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">86.88</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">73.57</td>
<td style="text-align:center; background-color: #DAE8FF; color: black;">85.99</td>
</tr>
</tbody></table>
**Training Data:**
Overall, our training data is largely composed of two key sources: (1) publicly available datasets with permissive licenses, and (2) internal synthetically generated data targeted at enhancing reasoning capabilities.
<!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). -->
**Infrastructure:**
We train Granite-3.2-8B-Instruct-Preview using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
**Ethical Considerations and Limitations:**
Granite-3.2-8B-Instruct-Preview builds upon Granite-3.1-8B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) remain relevant.
**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
<!-- ## Citation
```
@misc{granite-models,
author = {author 1, author2, ...},
title = {},
journal = {},
volume = {},
year = {2024},
url = {https://arxiv.org/abs/0000.00000},
}
``` --> | [
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | Non_BioNLP |
cbpuschmann/BERT-klimacoder_v0.3 | cbpuschmann | text-classification | [
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"region:us"
] | 1,733,152,651,000 | 2024-12-02T15:18:12 | 4 | 0 | ---
base_model: google-bert/bert-base-uncased
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.05558604374527931
f1: 0.9881956155143339
precision: 0.9881956155143339
recall: 0.9881956155143339
auc: 0.9994592560589801
accuracy: 0.988313856427379
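
The card ships no usage snippet; a minimal inference sketch is given below (the returned label names depend on the AutoTrain configuration and are not documented here):

```python
from transformers import pipeline

# Load the fine-tuned BERT classifier from the Hub.
classifier = pipeline("text-classification", model="cbpuschmann/BERT-klimacoder_v0.3")

# Widget example from this card; label ids/names are defined by the AutoTrain config.
print(classifier("I love AutoTrain"))
```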
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
pathfinderNdoma/online-doctor-model | pathfinderNdoma | question-answering | [
"transformers",
"safetensors",
"bert",
"question-answering",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] | 1,729,704,679,000 | 2024-10-23T17:58:48 | 8 | 0 | ---
base_model:
- dmis-lab/biobert-v1.1
library_name: transformers
license: creativeml-openrail-m
pipeline_tag: question-answering
tags:
- biomedical
- question-answering
- healthcare
---
# Model Card for Online Doctor Model
This model is a fine-tuned version of the `dmis-lab/biobert-large-cased-v1.1-squad` model. It is designed to answer questions related to diseases based on symptom descriptions, providing a question-answering pipeline to help healthcare professionals and users. This model has been trained on a custom dataset of diseases and their symptoms for predictive question answering.
## Model Details
### Model Description
This is a question-answering model fine-tuned using the `BioBERT` architecture, specifically adapted for healthcare-related questions. The model is designed to extract answers from a disease-symptom dataset based on user-inputted symptoms or queries.
- **Developed by:** Ayamba Victor Ndoma
- **Model type:** Question Answering
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** `dmis-lab/biobert-large-cased-v1.1-squad`
### Model Sources
- **Repository:** [Hugging Face Repository](https://huggingface.co/your-username/online-doctor-model)
- **Demo:** [Link to Demo (optional)]
## Uses
This model can be used in the following cases:
### Direct Use
- Answering healthcare-related questions based on symptom input from users.
- Assisting medical professionals in preliminary diagnosis based on reported symptoms.
### Downstream Use
- Can be further fine-tuned or extended for more specific disease or symptom-related tasks.
- Integrated into chatbot systems for medical consultation services.
### Out-of-Scope Use
- The model is not intended for use in making definitive medical diagnoses without human supervision.
- It is not suitable for predicting non-health-related issues.
## Bias, Risks, and Limitations
- **Bias:** The model is trained on a custom dataset with potentially limited diversity in disease-symptom pairs.
- **Risks:** Incorrect predictions might occur when symptoms overlap across multiple diseases.
- **Limitations:** The model is constrained to the diseases and symptoms available in the training dataset and may not generalize to all medical conditions.
### Recommendations
This model should be used with caution, and its answers should be reviewed by qualified healthcare professionals.
## How to Get Started with the Model
Use the following code to get started with the model:
```python
from transformers import pipeline
qa_pipeline = pipeline("question-answering", model="your-username/online-doctor-model")
# Example question and context
question = "What are the symptoms of diabetes?"
context = "Diabetes: increased thirst, frequent urination, hunger, fatigue, blurred vision."
result = qa_pipeline(question=question, context=context)
print(result['answer'])
```
## Training Details
### Training Data
The model is fine-tuned on a custom dataset containing diseases and their respective symptoms.
### Training Procedure
- **Preprocessing:** Text cleaning and tokenization were applied to ensure proper context and symptom pairing.
- **Training regime:** The model was trained using mixed-precision FP16 on a single GPU.
#### Training Hyperparameters
- **Epochs:** 3
- **Batch size:** 16
- **Learning rate:** 3e-5
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was evaluated on a held-out portion of the custom disease-symptom dataset.
#### Factors
- Subpopulation: Various diseases ranging from common illnesses to rare conditions.
- Domains: Medical text and descriptions of symptoms.
#### Metrics
The model was evaluated using the SQuAD metrics, including F1 score and Exact Match (EM).
### Results
- **F1 score:** 0.82
- **Exact Match (EM):** 0.78
#### Summary
The model performs well on the task of extracting relevant symptoms and disease-related answers based on the question provided. However, its performance is limited to the diseases and symptoms present in the training data.
## Environmental Impact
- **Hardware Type:** Single GPU (NVIDIA Tesla T4)
- **Hours used:** 3
- **Cloud Provider:** Google Cloud
- **Compute Region:** US
- **Carbon Emitted:** Approximately 0.36 kg CO2eq
## Technical Specifications
### Model Architecture and Objective
The model is based on the `BioBERT` architecture fine-tuned for the SQuAD task, with a focus on healthcare question-answering.
### Compute Infrastructure
- **Hardware:** NVIDIA Tesla T4 GPU
- **Software:** PyTorch, Transformers Library
## Citation
If you use this model, please cite it as:
```
@misc{Ndoma2024onlinedoctor,
author = {Ayamba Victor Ndoma},
title = {Online Doctor Model for Disease Prediction},
year = {2024},
howpublished = {\url{https://huggingface.co/your-username/online-doctor-model}},
}
```
## Model Card Authors
- Ayamba Victor Ndoma
## Model Card Contact
For questions or feedback, please contact `[email protected]`.
--- | [
"QUESTION_ANSWERING"
] | BioNLP |
RichardErkhov/EleutherAI_-_pythia-70m-deduped-8bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,713,858,590,000 | 2024-04-23T07:50:27 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-deduped - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-deduped/
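
The description below documents the original EleutherAI checkpoint; to load this pre-quantized 8-bit copy directly, something along these lines should work (a sketch, assuming `bitsandbytes` and a CUDA GPU are available):

```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "RichardErkhov/EleutherAI_-_pythia-70m-deduped-8bits"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# The weights are already serialized in bitsandbytes 8-bit format, so no extra
# quantization flags should be needed; device_map places the layers on the GPU.
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```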
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-70M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-70M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
tmnam20/xlm-roberta-base-sst2-10 | tmnam20 | text-classification | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705,403,424,000 | 2024-01-16T11:12:06 | 7 | 0 | ---
base_model: xlm-roberta-base
datasets:
- tmnam20/VieGLUE
language:
- en
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-sst2-10
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tmnam20/VieGLUE/SST2
type: tmnam20/VieGLUE
config: sst2
split: validation
args: sst2
metrics:
- type: accuracy
value: 0.8830275229357798
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-sst2-10
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tmnam20/VieGLUE/SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3909
- Accuracy: 0.8830
## Model description
More information needed
## Intended uses & limitations
More information needed
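
As a rough illustration of intended use, the checkpoint can be loaded for SST-2-style sentence classification; a minimal sketch (the label-id-to-name mapping is an assumption, since it depends on the saved config):

```python
from transformers import pipeline

# Load the fine-tuned XLM-RoBERTa checkpoint for binary sentiment classification.
classifier = pipeline("text-classification", model="tmnam20/xlm-roberta-base-sst2-10")

print(classifier("A touching and beautifully acted film."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- exact label names/scores will differ
```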
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3971 | 0.24 | 500 | 0.3420 | 0.8544 |
| 0.3266 | 0.48 | 1000 | 0.3271 | 0.8555 |
| 0.2831 | 0.71 | 1500 | 0.3069 | 0.8761 |
| 0.2752 | 0.95 | 2000 | 0.3220 | 0.8807 |
| 0.2286 | 1.19 | 2500 | 0.3367 | 0.8911 |
| 0.2294 | 1.43 | 3000 | 0.3194 | 0.8761 |
| 0.2055 | 1.66 | 3500 | 0.3312 | 0.8853 |
| 0.1902 | 1.9 | 4000 | 0.3307 | 0.8842 |
| 0.1645 | 2.14 | 4500 | 0.3608 | 0.8956 |
| 0.153 | 2.38 | 5000 | 0.3796 | 0.8888 |
| 0.1868 | 2.61 | 5500 | 0.3763 | 0.8842 |
| 0.1477 | 2.85 | 6000 | 0.3959 | 0.8830 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/bigscience_-_bloomz-1b7-8bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:2211.01786",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,721,473,944,000 | 2024-07-20T11:14:08 | 76 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloomz-1b7 - bnb 8bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloomz-1b7/
Original model description:
---
datasets:
- bigscience/xP3
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
widget:
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?"
example_title: "zh-en sentiment"
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?"
example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"."
example_title: "vi-en query"
- text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»."
example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
example_title: "te-en qa"
- text: "Why is the sky blue?"
example_title: "en-en qa"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
example_title: "hi-en fable"
model-index:
- name: bloomz-1b7
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 51.14
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 56.34
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 55.42
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 52.55
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 53.23
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 55.24
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 56.15
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 34.0
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 36.1
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 37.08
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 71.43
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 76.17
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.04
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.17
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.73
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 41.81
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 55.02
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 52.97
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 52.21
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.07
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 45.1
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.34
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 40.36
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 37.15
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.38
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 51.08
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 51.12
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 4.38
- type: Pass@10
value: 8.73
- type: Pass@100
value: 16.09
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: "2016"
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 82.9
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 69.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 50.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 54.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 49.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 57.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 60.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 59.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 70.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 67.0
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 73.33
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 77.96
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 60.49
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 72.87
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 74.92
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 51.09
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 56.39
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 61.28
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 66.25
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 78.69
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-1b7"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-1b7"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-1b7"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
# Training
## Model
- **Architecture:** Same as [bloom-1b7](https://huggingface.co/bigscience/bloom-1b7), also refer to the `config.json` file
- **Finetuning steps:** 2000
- **Finetuning tokens:** 8.39 billion
- **Finetuning layout:** 1x pipeline parallel, 1x tensor parallel, 1x data parallel
- **Precision:** float16
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
| [
"COREFERENCE_RESOLUTION",
"TRANSLATION"
] | Non_BioNLP |
fabiancpl/nlbse25_java | fabiancpl | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"region:us"
] | 1,734,056,469,000 | 2024-12-13T02:21:16 | 8 | 0 | ---
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget: []
inference: true
---
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A RandomForestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a RandomForestClassifier instance
- **Maximum Sequence Length:** 256 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("fabiancpl/nlbse25_java")
# Run inference
preds = model("I loved the spiderman movie!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
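
No training script is included here; below is a rough sketch of the standard two-stage SetFit recipe described above (dataset contents and hyperparameters are illustrative assumptions, not the values used for this checkpoint, and this repo's head is a RandomForestClassifier rather than the default logistic head):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy training set using the column names SetFit expects by default.
train_ds = Dataset.from_dict({
    "text": ["returns the sum of two integers", "TODO: handle null input before release"],
    "label": [0, 1],
})

# Stage 1: contrastively fine-tune a sentence-transformers body.
# Stage 2: fit a classification head on the resulting embeddings.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(batch_size=16, num_epochs=1)  # assumed values
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()

preds = model(["computes the checksum of a file"])
```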
### Framework Versions
- Python: 3.12.4
- SetFit: 1.1.0
- Sentence Transformers: 3.3.0
- Transformers: 4.42.2
- PyTorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
anismahmahi/G2-with-noPropaganda-multilabel-setfit-model | anismahmahi | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 1,704,503,277,000 | 2024-01-06T01:08:14 | 3 | 0 | ---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: But the author is Bharath Ganesh.
- text: The documents, which suggest all the adults were involved in the training,
say a person serving as a foster parent caring for one of the kids revealed the
details about the training.
- text: Louis Farrakhan, the 84-year-old head of the Nation of Islam, has been back
in the headlines after a previously unreleased photo of him with President Barack
Obama was published in January and Mr. Farrakhan gave an anti-Semitic speech at
his organization’s annual convention last month.
- text: The name of that CIA official whose torture activities the Post described
is Gina Haspel.
- text: This is not just about Facebook or Twitter.
inference: false
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7125193199381762
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch is shown below).
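The sketch below illustrates this two-step recipe with SetFit's `Trainer`, using the same Sentence Transformer body as this checkpoint and a couple of the hyperparameters listed under Training Hyperparameters further down. It is only a hedged illustration: the texts and labels are placeholders, and it uses SetFit's default single-label logistic-regression head rather than the OneVsRestClassifier multilabel head of this model.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder training data -- replace with real sentences and labels.
train_dataset = Dataset.from_dict({
    "text": ["example sentence A", "example sentence B", "example sentence C", "example sentence D"],
    "label": [0, 0, 1, 1],
})

# The Trainer first fine-tunes the Sentence Transformer body with contrastive pairs,
# then fits the classification head on the resulting embeddings.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(
    batch_size=16,            # this card lists (16, 16) for body/head
    num_epochs=2,             # this card lists (2, 2)
    body_learning_rate=2e-05,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```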
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7125 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("anismahmahi/G2-with-noPropaganda-multilabel-setfit-model")
# Run inference
preds = model("But the author is Bharath Ganesh.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 23.3972 | 129 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.3874 | - |
| 0.0135 | 50 | 0.3734 | - |
| 0.0270 | 100 | 0.2741 | - |
| 0.0405 | 150 | 0.2802 | - |
| 0.0539 | 200 | 0.2355 | - |
| 0.0674 | 250 | 0.2616 | - |
| 0.0809 | 300 | 0.262 | - |
| 0.0944 | 350 | 0.2302 | - |
| 0.1079 | 400 | 0.1962 | - |
| 0.1214 | 450 | 0.1438 | - |
| 0.1348 | 500 | 0.2001 | - |
| 0.1483 | 550 | 0.2126 | - |
| 0.1618 | 600 | 0.1244 | - |
| 0.1753 | 650 | 0.1968 | - |
| 0.1888 | 700 | 0.1473 | - |
| 0.2023 | 750 | 0.2407 | - |
| 0.2157 | 800 | 0.1607 | - |
| 0.2292 | 850 | 0.1376 | - |
| 0.2427 | 900 | 0.145 | - |
| 0.2562 | 950 | 0.1439 | - |
| 0.2697 | 1000 | 0.0418 | - |
| 0.2832 | 1050 | 0.0822 | - |
| 0.2967 | 1100 | 0.1042 | - |
| 0.3101 | 1150 | 0.0381 | - |
| 0.3236 | 1200 | 0.17 | - |
| 0.3371 | 1250 | 0.0253 | - |
| 0.3506 | 1300 | 0.1009 | - |
| 0.3641 | 1350 | 0.1355 | - |
| 0.3776 | 1400 | 0.0314 | - |
| 0.3910 | 1450 | 0.2185 | - |
| 0.4045 | 1500 | 0.0774 | - |
| 0.4180 | 1550 | 0.0512 | - |
| 0.4315 | 1600 | 0.0814 | - |
| 0.4450 | 1650 | 0.0169 | - |
| 0.4585 | 1700 | 0.0591 | - |
| 0.4720 | 1750 | 0.1232 | - |
| 0.4854 | 1800 | 0.0941 | - |
| 0.4989 | 1850 | 0.1024 | - |
| 0.5124 | 1900 | 0.0031 | - |
| 0.5259 | 1950 | 0.037 | - |
| 0.5394 | 2000 | 0.1418 | - |
| 0.5529 | 2050 | 0.0685 | - |
| 0.5663 | 2100 | 0.0326 | - |
| 0.5798 | 2150 | 0.0143 | - |
| 0.5933 | 2200 | 0.064 | - |
| 0.6068 | 2250 | 0.0612 | - |
| 0.6203 | 2300 | 0.0689 | - |
| 0.6338 | 2350 | 0.1402 | - |
| 0.6472 | 2400 | 0.288 | - |
| 0.6607 | 2450 | 0.0075 | - |
| 0.6742 | 2500 | 0.0785 | - |
| 0.6877 | 2550 | 0.0339 | - |
| 0.7012 | 2600 | 0.0668 | - |
| 0.7147 | 2650 | 0.0319 | - |
| 0.7282 | 2700 | 0.0622 | - |
| 0.7416 | 2750 | 0.1169 | - |
| 0.7551 | 2800 | 0.0249 | - |
| 0.7686 | 2850 | 0.0218 | - |
| 0.7821 | 2900 | 0.0621 | - |
| 0.7956 | 2950 | 0.0698 | - |
| 0.8091 | 3000 | 0.0562 | - |
| 0.8225 | 3050 | 0.0412 | - |
| 0.8360 | 3100 | 0.0048 | - |
| 0.8495 | 3150 | 0.0085 | - |
| 0.8630 | 3200 | 0.0122 | - |
| 0.8765 | 3250 | 0.0387 | - |
| 0.8900 | 3300 | 0.0053 | - |
| 0.9035 | 3350 | 0.0032 | - |
| 0.9169 | 3400 | 0.0156 | - |
| 0.9304 | 3450 | 0.0013 | - |
| 0.9439 | 3500 | 0.001 | - |
| 0.9574 | 3550 | 0.0009 | - |
| 0.9709 | 3600 | 0.0025 | - |
| 0.9844 | 3650 | 0.0006 | - |
| 0.9978 | 3700 | 0.0832 | - |
| 1.0 | 3708 | - | 0.2776 |
| 1.0113 | 3750 | 0.0735 | - |
| 1.0248 | 3800 | 0.0053 | - |
| 1.0383 | 3850 | 0.0614 | - |
| 1.0518 | 3900 | 0.0005 | - |
| 1.0653 | 3950 | 0.0046 | - |
| 1.0787 | 4000 | 0.0024 | - |
| 1.0922 | 4050 | 0.0004 | - |
| 1.1057 | 4100 | 0.0016 | - |
| 1.1192 | 4150 | 0.0789 | - |
| 1.1327 | 4200 | 0.0016 | - |
| 1.1462 | 4250 | 0.0018 | - |
| 1.1597 | 4300 | 0.0005 | - |
| 1.1731 | 4350 | 0.0051 | - |
| 1.1866 | 4400 | 0.0139 | - |
| 1.2001 | 4450 | 0.0021 | - |
| 1.2136 | 4500 | 0.0064 | - |
| 1.2271 | 4550 | 0.0025 | - |
| 1.2406 | 4600 | 0.0054 | - |
| 1.2540 | 4650 | 0.0022 | - |
| 1.2675 | 4700 | 0.0734 | - |
| 1.2810 | 4750 | 0.026 | - |
| 1.2945 | 4800 | 0.0004 | - |
| 1.3080 | 4850 | 0.0574 | - |
| 1.3215 | 4900 | 0.0043 | - |
| 1.3350 | 4950 | 0.0975 | - |
| 1.3484 | 5000 | 0.0125 | - |
| 1.3619 | 5050 | 0.0045 | - |
| 1.3754 | 5100 | 0.0011 | - |
| 1.3889 | 5150 | 0.0061 | - |
| 1.4024 | 5200 | 0.0004 | - |
| 1.4159 | 5250 | 0.0278 | - |
| 1.4293 | 5300 | 0.005 | - |
| 1.4428 | 5350 | 0.0302 | - |
| 1.4563 | 5400 | 0.0341 | - |
| 1.4698 | 5450 | 0.0007 | - |
| 1.4833 | 5500 | 0.0128 | - |
| 1.4968 | 5550 | 0.0459 | - |
| 1.5102 | 5600 | 0.0128 | - |
| 1.5237 | 5650 | 0.0003 | - |
| 1.5372 | 5700 | 0.004 | - |
| 1.5507 | 5750 | 0.0005 | - |
| 1.5642 | 5800 | 0.0005 | - |
| 1.5777 | 5850 | 0.001 | - |
| 1.5912 | 5900 | 0.0069 | - |
| 1.6046 | 5950 | 0.0124 | - |
| 1.6181 | 6000 | 0.0026 | - |
| 1.6316 | 6050 | 0.0143 | - |
| 1.6451 | 6100 | 0.0005 | - |
| 1.6586 | 6150 | 0.0362 | - |
| 1.6721 | 6200 | 0.0002 | - |
| 1.6855 | 6250 | 0.0608 | - |
| 1.6990 | 6300 | 0.0006 | - |
| 1.7125 | 6350 | 0.0003 | - |
| 1.7260 | 6400 | 0.0041 | - |
| 1.7395 | 6450 | 0.0045 | - |
| 1.7530 | 6500 | 0.0005 | - |
| 1.7665 | 6550 | 0.0014 | - |
| 1.7799 | 6600 | 0.0004 | - |
| 1.7934 | 6650 | 0.0211 | - |
| 1.8069 | 6700 | 0.0002 | - |
| 1.8204 | 6750 | 0.0048 | - |
| 1.8339 | 6800 | 0.0368 | - |
| 1.8474 | 6850 | 0.0107 | - |
| 1.8608 | 6900 | 0.0045 | - |
| 1.8743 | 6950 | 0.0062 | - |
| 1.8878 | 7000 | 0.0003 | - |
| 1.9013 | 7050 | 0.0001 | - |
| 1.9148 | 7100 | 0.0096 | - |
| 1.9283 | 7150 | 0.0008 | - |
| 1.9417 | 7200 | 0.0184 | - |
| 1.9552 | 7250 | 0.0006 | - |
| 1.9687 | 7300 | 0.0291 | - |
| 1.9822 | 7350 | 0.0335 | - |
| 1.9957 | 7400 | 0.0149 | - |
| **2.0** | **7416** | **-** | **0.2666** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
eunbi-jeong/gpt2 | eunbi-jeong | translation | [
"translation",
"en",
"dataset:hellaswag",
"region:us"
] | 1,692,944,278,000 | 2023-08-25T06:19:07 | 0 | 0 | ---
datasets:
- hellaswag
language:
- en
pipeline_tag: translation
---
| [
"TRANSLATION"
] | Non_BioNLP |
jaesani/large_eng_summarizer | jaesani | summarization | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"code",
"summarization",
"en",
"dataset:npc-engine/light-batch-summarize-dialogue",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,726,744,387,000 | 2024-09-19T12:30:22 | 29 | 0 | ---
base_model:
- facebook/bart-large-cnn
datasets:
- npc-engine/light-batch-summarize-dialogue
language:
- en
library_name: transformers
license: mit
metrics:
- accuracy
pipeline_tag: summarization
tags:
- code
---
# Model Card: Large English Summarizer
## Model Overview
This is a large transformer-based summarization model designed to produce concise and coherent summaries of English text. It builds on a pre-trained language model and generates summaries while preserving the key information of the input.
## Intended Use
The model is suited to summarizing articles, research papers, or any other lengthy text, giving users a quick overview of the content.
## Model Architecture
- Transformer encoder-decoder (BART), fine-tuned from facebook/bart-large-cnn.
- Fine-tuned for English text summarization tasks.
## Training Data
- Trained on the npc-engine/light-batch-summarize-dialogue dataset.
- Fine-tuned to understand and summarize general content, making it suitable for a wide range of domains.
## Performance
- Achieves high accuracy in generating human-readable summaries.
- Balances fluency and informativeness, retaining essential information while shortening text effectively.
## Limitations
- May struggle with highly technical or domain-specific content outside its training scope.
- Could generate biased summaries if the input text contains biased language.
## Ethical Considerations
Users should be aware of potential biases in the training data. Reviewing generated summaries is recommended, especially when they feed into decision-making processes.
## How to Use
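A minimal usage sketch with the 🤗 `transformers` summarization pipeline is shown below (the example article text is illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned BART summarizer from the Hub.
summarizer = pipeline("summarization", model="jaesani/large_eng_summarizer")

article = (
    "Hugging Face hosts thousands of machine learning models and datasets. "
    "The hub lets researchers share checkpoints, track versions, and run "
    "inference through simple pipelines."
)

# max_length/min_length bound the generated summary length in tokens.
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```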
The model can also be accessed via the Hugging Face Inference API; ensure proper token authentication for seamless access and usage. | [
"SUMMARIZATION"
] | Non_BioNLP |
fathyshalab/reklambox2-6-17 | fathyshalab | text-classification | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,677,796,147,000 | 2023-03-03T00:08:34 | 8 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/reklambox2-6-17
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/reklambox2-6-17")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
leejaymin/etri-ones-llama3.1-8b-ko | leejaymin | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,723,828,689,000 | 2024-09-06T07:53:30 | 8 | 1 | ---
library_name: transformers
tags: []
---
# Model Card for `leejaymin/etri-ones-llama3.1-8b-ko`
## Model Summary
This model is a fine-tuned version of LLaMA 3.1 (8B) using QLoRA (Quantized Low-Rank Adaptation) techniques, specifically trained on Korean language datasets. It is optimized for understanding and generating text in Korean, making it suitable for various NLP tasks in this language, including text generation, translation, and comprehension.
## Model Details
- **Developed by:** Leejaymin / ETRI
- **Finetuned from:** LLaMA 3.1 (8B)
- **Language(s):** Korean
- **Model type:** Causal Language Model
- **License:** [More Information Needed]
- **Framework:** Hugging Face 🤗 Transformers
## Model Sources
- **Repository:** [Link to Hugging Face Repo](https://huggingface.co/leejaymin/etri-ones-llama3.1-8b-ko)
## Uses
### Direct Use
The model is designed for direct application in various Korean NLP tasks such as:
- Text generation
- Summarization
- Machine translation
- Conversational agents (chatbots)
### Downstream Use
This model can be further fine-tuned for specific tasks such as sentiment analysis, information extraction, or more focused conversational systems tailored for different domains in the Korean language.
### Out-of-Scope Use
The model is not designed for:
- Applications requiring real-time inference in highly constrained environments
- Non-Korean languages (performance will be poor on non-Korean text)
## Bias, Risks, and Limitations
Given that the model was fine-tuned on a specific Korean dataset, it may inherit biases present in the original data. Users should be aware that biases in the training data may propagate into the model's outputs.
### Recommendations
Users are advised to be cautious when deploying the model in sensitive or high-stakes environments. Fine-tuning on domain-specific data or conducting bias evaluations may be necessary depending on the intended use case.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("leejaymin/etri-ones-llama3.1-8b-ko")
model = AutoModelForCausalLM.from_pretrained("leejaymin/etri-ones-llama3.1-8b-ko")
inputs = tokenizer("안녕하세요!", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
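Since this is a conversational fine-tune of Llama 3.1, the tokenizer will normally ship a chat template; the following is a hedged sketch that assumes such a template is present and uses it to format a Korean instruction (the question and generation settings are illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("leejaymin/etri-ones-llama3.1-8b-ko")
model = AutoModelForCausalLM.from_pretrained("leejaymin/etri-ones-llama3.1-8b-ko")

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]  # "What is the capital of Korea?"

# Format the conversation with the tokenizer's chat template (if one is defined).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```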
## Citation
```bibtex
@misc{leejaymin2024etrionesllama,
title={ETRI LLaMA 3.1 8B KO},
author={Lee, Jaymin and ETRI},
year={2024},
url={https://huggingface.co/leejaymin/etri-ones-llama3.1-8b-ko}
}
```
| [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
SakshamJain/Temp | SakshamJain | summarization | [
"transformers",
"t5",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,698,907,147,000 | 2023-11-02T06:41:59 | 14 | 0 | ---
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
yjgwak/klue-bert-base-finetuned-squad-kor-v1 | yjgwak | question-answering | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"korean",
"klue",
"squad-kor-v1",
"ko",
"arxiv:2105.09680",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | 1,694,142,664,000 | 2023-09-11T02:52:58 | 206 | 1 | ---
language: ko
license: cc-by-sa-4.0
tags:
- korean
- klue
- squad-kor-v1
mask_token: '[MASK]'
widget:
- text: 바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?
context: 1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이 끌려 이를 소재로 해서 하나의 교향곡을 쓰려는 뜻을 갖는다.
이 시기 바그너는 1838년에 빛 독촉으로 산전수전을 다 걲은 상황이라 좌절과 실망에 가득했으며 메피스토펠레스를 만나는 파우스트의 심경에 공감했다고
한다. 또한 파리에서 아브네크의 지휘로 파리 음악원 관현악단이 연주하는 베토벤의 교향곡 9번을 듣고 깊은 감명을 받았는데, 이것이 이듬해 1월에
파우스트의 서곡으로 쓰여진 이 작품에 조금이라도 영향을 끼쳤으리라는 것은 의심할 여지가 없다. 여기의 라단조 조성의 경우에도 그의 전기에 적혀
있는 것처럼 단순한 정신적 피로나 실의가 반영된 것이 아니라 베토벤의 합창교향곡 조성의 영향을 받은 것을 볼 수 있다. 그렇게 교향곡 작곡을
1839년부터 40년에 걸쳐 파리에서 착수했으나 1악장을 쓴 뒤에 중단했다. 또한 작품의 완성과 동시에 그는 이 서곡(1악장)을 파리 음악원의
연주회에서 연주할 파트보까지 준비하였으나, 실제로는 이루어지지는 않았다. 결국 초연은 4년 반이 지난 후에 드레스덴에서 연주되었고 재연도 이루어졌지만,
이후에 그대로 방치되고 말았다. 그 사이에 그는 리엔치와 방황하는 네덜란드인을 완성하고 탄호이저에도 착수하는 등 분주한 시간을 보냈는데, 그런
바쁜 생활이 이 곡을 잊게 한 것이 아닌가 하는 의견도 있다.
example_title: 리차드 바그너
---
# KLUE BERT base Finetuned on squad-kor-v1
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Training](#training)
- [Evaluation](#evaluation)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
## Model Details
**Model Description:** This model is the KLUE BERT base, fine-tuned on the squad-kor-v1 dataset for Korean question answering tasks.
- **Developed by:** [Yeongjin Gwak](https://yjgwak.github.io/)
- **Model Type:** Transformer-based language model
- **Language(s):** Korean
- **License:** cc-by-sa-4.0
- **Parent Model:** See the [KLUE BERT base model](https://huggingface.co/klue/bert-base) for more information about the parent model.
## How to Get Started With the Model
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("yjgwak/klue-bert-base-finetuned-squad-kor-v1")
tokenizer = AutoTokenizer.from_pretrained("yjgwak/klue-bert-base-finetuned-squad-kor-v1")
```
## Uses
#### Direct Use
This model is specialized for the task of question answering in Korean. Users can employ this model to extract answers from passages or documents in Korean when provided with relevant questions.
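For example, a hedged sketch with the `question-answering` pipeline (the repository id is the one shown in this card; the question and context are toy examples):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="yjgwak/klue-bert-base-finetuned-squad-kor-v1",
    tokenizer="yjgwak/klue-bert-base-finetuned-squad-kor-v1",
)

result = qa(
    question="대한민국의 수도는 어디인가?",   # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울이다.",    # "The capital of South Korea is Seoul."
)
print(result["answer"], result["score"])
```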
#### Misuse and Out-of-scope Use
The model should not be used for tasks other than question answering without further fine-tuning. Using this model for generating long-form content or for tasks it wasn't fine-tuned on may result in suboptimal results.
## Training
#### Training Data
The model was fine-tuned on the `squad-kor-v1` dataset, which is the Korean version of the popular SQuAD dataset used for question answering tasks.
#### Training Procedure
The original BERT training methodology was adopted with the difference being the dataset used for fine-tuning. The model was trained to minimize the cross-entropy loss between predicted answers and ground truth answers in the `squad-kor-v1` dataset.
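For reference (this is the standard extractive-QA objective used with BERT-style models, not something spelled out in the original card), the loss averages the cross-entropy of the predicted start and end positions of the answer span:
```latex
\mathcal{L} = -\tfrac{1}{2}\left(\log p_\theta\left(s^{*} \mid Q, C\right) + \log p_\theta\left(e^{*} \mid Q, C\right)\right)
```
where \(Q\) is the question, \(C\) the context passage, and \(s^{*}, e^{*}\) the gold start and end token indices of the answer span.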
## Evaluation
[Provide details of any evaluation metrics, results, or testing data used to assess the performance of the model after fine-tuning. If this hasn't been done yet, you can mention that the evaluation is pending.]
## Technical Specifications
See the [original KLUE BERT base model card](https://huggingface.co/klue/bert-base) for details on the underlying architecture and technical specifications.
## Citation Information
Please cite the [original KLUE paper](https://arxiv.org/abs/2105.09680) and any other relevant resources or papers associated with the `squad-kor-v1` dataset.
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
pinzhenchen/sft-lora-de-pythia-2b8 | pinzhenchen | null | [
"generation",
"question answering",
"instruction tuning",
"de",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 1,709,682,763,000 | 2024-03-05T23:52:46 | 0 | 0 | ---
language:
- de
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains a base LLM instruction-tuned (SFT) with LoRA, used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
* Instruction tuning language: German
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
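A minimal, hedged loading sketch is shown below (the German prompt is only an illustration; the exact instruction template used for this model is documented in the GitHub repository above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-2.8b-deduped"
adapter_id = "pinzhenchen/sft-lora-de-pythia-2b8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter (rank 8, alpha 16, on the key/query/value projections).
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative German instruction -- wrap it in the prompt template from the GitHub repo.
prompt = "Beschreibe kurz die Stadt Berlin."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```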
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
TheBloke/Airoboros-M-7B-3.1.2-GGUF | TheBloke | null | [
"transformers",
"gguf",
"mistral",
"dataset:jondurbin/airoboros-3.1",
"base_model:jondurbin/airoboros-m-7b-3.1.2",
"base_model:quantized:jondurbin/airoboros-m-7b-3.1.2",
"license:apache-2.0",
"region:us"
] | 1,697,733,712,000 | 2023-10-19T16:45:56 | 437 | 13 | ---
base_model: jondurbin/airoboros-m-7b-3.1.2
datasets:
- jondurbin/airoboros-3.1
license: apache-2.0
model_name: Airoboros M 7B 3.1.2
inference: false
model_creator: Jon Durbin
model_type: mistral
prompt_template: '[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros M 7B 3.1.2 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros M 7B 3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros M 7B 3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Airoboros-Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-m-7b-3.1.2.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-m-7b-3.1.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [airoboros-m-7b-3.1.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [airoboros-m-7b-3.1.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [airoboros-m-7b-3.1.2.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-m-7b-3.1.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [airoboros-m-7b-3.1.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [airoboros-m-7b-3.1.2.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-m-7b-3.1.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [airoboros-m-7b-3.1.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [airoboros-m-7b-3.1.2.Q6_K.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [airoboros-m-7b-3.1.2.Q8_0.gguf](https://huggingface.co/TheBloke/Airoboros-M-7B-3.1.2-GGUF/blob/main/airoboros-m-7b-3.1.2.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Airoboros-M-7B-3.1.2-GGUF and below it, a specific filename to download, such as: airoboros-m-7b-3.1.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Airoboros-M-7B-3.1.2-GGUF airoboros-m-7b-3.1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Airoboros-M-7B-3.1.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Airoboros-M-7B-3.1.2-GGUF airoboros-m-7b-3.1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-m-7b-3.1.2.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Airoboros-M-7B-3.1.2-GGUF", model_file="airoboros-m-7b-3.1.2.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
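A comparable minimal sketch with `llama-cpp-python` (illustrative only, assuming the Q4_K_M file has already been downloaded to the current directory; adjust `n_gpu_layers` for your hardware):
```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./airoboros-m-7b-3.1.2.Q4_K_M.gguf",
    n_ctx=2048,        # sequence length
    n_gpu_layers=32,   # layers to offload to the GPU
)

prompt = (
    "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n"
    "Explain what GGUF is in one sentence. [/INST]"
)

output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```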
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros M 7B 3.1.2
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is just one last release tweaking hyperparams, which seem to result in a higher quality model than 3.1 or 3.1.1 (and doesn't have the prompt issue of 3.1).
A model built on the [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of the [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
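As an illustration, here is a small hedged sketch of that extraction/parsing step in Python (the `extract_mathjson` helper is hypothetical; the actual evaluation is then delegated to compute-engine JS or `mathjson.py`):
```python
import json
import re

def extract_mathjson(response: str):
    """Pull the JSON expression out of the <mathjson>...</mathjson> tags."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", response, re.DOTALL)
    if match is None:
        raise ValueError("no MathJSON block found in the model response")
    return json.loads(match.group(1))

model_response = """
Solution as MathJSON:
<mathjson>
["Multiply", "Pi", ["Power", 17.2456, 2]]
</mathjson>
"""

expression = extract_mathjson(model_response)
print(expression)  # ['Multiply', 'Pi', ['Power', 17.2456, 2]]
# Hand `expression` to the MathJSON evaluator of your choice to compute the result.
```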
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure that, if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    # Substitute any :evidence[n]: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    ...  # search via DuckDuckGo using search_string and return the text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    ...  # call the model with prompt, return its output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```
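Rough usage, once the stubs above are implemented (here `plan_text` is assumed to hold the plan string generated by the model, like the example response above):
```python
# plan_text is assumed to contain the reWOO-style plan generated by the model.
final_answer = parse_plan(plan_text)
print(final_answer)
```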
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot-instructions_two_reasoning_only_reasoning_1726 | Netta1994 | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"region:us"
] | 1,726,754,851,000 | 2024-09-19T14:08:07 | 7 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: "Reasoning for Good:\n1. **Context Grounding**: The answer is well-supported\
\ by the provided document, accurately reflecting the steps outlined.\n2. **Relevance**:\
\ The answer directly addresses the specific question posed about changing the\
\ reservation reference from the service page to the booking calendar.\n3. **Conciseness**:\
\ The answer is concise and clear, providing straightforward steps without unnecessary\
\ information.\n4. **Correct and Detailed Instructions**: It provides precise,\
\ step-by-step instructions that align correctly with the provided document. \n\
\nReasoning for Bad:\n- There are no significant deviations from the document\
\ or extraneous information.\n- There are no contradictions or errors in the steps\
\ mentioned.\n\nFinal Result: \nGood"
- text: 'Reasoning for the evaluation:
**Why the answer may be good:**
1. **Context Grounding:** The first step correctly references the booking calendar
in the site''s dashboard, which is mentioned in the provided document.
2. **Relevance:** The response does relate to the process of modifying booking
slots in the calendar, which is somewhat related to managing booking buttons.
**Why the answer may be bad:**
1. **Context Grounding:** The steps provided are focused on "blocking off time"
rather than specifically addressing how to remove the time from showing on the
booking button, which can be a completely different process.
2. **Relevance:** The answer does not directly address the specific query about
removing the time display on the booking button.
3. **Conciseness:** The answer is not concise in relation to the question. It
includes a multi-step process for blocking off time, which is not what was asked.
4. **Correctness and Detail:** The provided steps do not answer the question about
removing the time from the booking button visibility. Instead, they address blocking
off time which does not solve the stated problem.
Final result: **Bad**'
- text: 'Reasoning:
Why the answer may be good:
1. Context Grounding: The answer accurately references the correct steps for verifying
the domain and enabling the app as per the document provided.
2. Relevance: The response does relate to enabling calendar scheduling and recording
functionality, which is directly related to the question.
3. Conciseness: The answer concisely outlines the steps without excessive detail.
4. Instructions: The instructions provided are correct, detailed, and directly
copied from the document, ensuring accuracy.
Why the answer may be bad:
1. The answer fails to explicitly state why the lack of domain verification and
app enablement is causing the issues with calendar scheduling and recording, which
is the core of the question.
2. The answer could be more precise in tying the resolution steps to the specific
problem of missing calendar scheduling and recording access rather than general
setup steps.
3. The response lacks a direct mention that calendar scheduling and recording
features are enabled by the relevant <ORGANIZATION> app, leaving a gap in connecting
the solution to the problem.
Final Result: Bad'
- text: '**Reasoning for Good:**
1. **Context Grounding**: The answer is well-supported by the provided document.
It follows the same instructions mentioned in the document about adding a favicon,
such as having a Premium plan and connected domain, publishing the site, and navigating
to the settings to update the favicon image.
2. **Relevance**: The answer directly addresses the question of how to add a favicon
to the website’s search engine results.
3. **Conciseness**: The answer is clear and to the point, comprising step-by-step
instructions without unnecessary details.
4. **Correct and Detailed Instructions**: The answer provides all necessary steps
to add a favicon, ensuring the response is actionable.
**Reasoning for Bad:**
1. **Incomplete Context Grounding**: The mention that it is not guaranteed for
the favicon to appear in search engine results is lacking in the answer.
2. **Relevance Overlook**: The answer does not refer to speeding up the process
by submitting your homepage''s URL to search engines or ensuring that search engines
can index your homepage, which are important related steps mentioned in the document.
3. **Additional Context**: The provided document also offers tips for ensuring
good quality favicons and mentions potential limitations (e.g., search engines
might not display inappropriate favicons), which are not covered in the answer.
**Final Result: Bad**'
- text: '### Reasoning
**Positives:**
1. **Context Grounding:** The answer is well-supported by the document provided.
It correctly follows the steps mentioned in the document.
2. **Relevance:** The answer directly addresses the question by providing the
necessary steps to make the booking page visible.
3. **Conciseness:** The answer is clear and to the point without unnecessary information.
4. **Correct Instructions:** The instructions are detailed and correct, following
the steps outlined in the document.
**Negatives:**
1. There is a minor inconsistency in the text where "<ORGANIZATION>" is not replaced
with the actual name, which could cause confusion.
Despite this small flaw, the answer is generally very good and meets all the criteria
effectively.
### Final Result
**Good**'
inference: true
model-index:
- name: SetFit with BAAI/bge-base-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.4375
name: Accuracy
---
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
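As a rough sketch of what that looks like with the SetFit 1.x API (the tiny inline dataset below is a stand-in for the real reasoning/verdict training pairs):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy data standing in for the real (reasoning text, good/bad label) pairs.
train_ds = Dataset.from_dict({
    "text": ["Reasoning ... Final Result: Good", "Reasoning ... Final Result: Bad"] * 8,
    "label": [1, 0] * 8,
})

model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")
args = TrainingArguments(batch_size=16, num_epochs=5)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```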
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'Reasoning:\n- **Good Aspects:**\n 1. **Context Grounding:** The answer addresses the specific context provided in the document, mentioning sitemap-related issues and resolutions.\n 2. **Relevance:** The answer directly addresses the problem of <ORGANIZATION> not discovering pages via sitemaps, following the steps mentioned in the document.\n 3. **Conciseness:** The instructions given are straightforward and to the point.\n\n- **Bad Aspects:**\n 1. **Context Grounding:** The use of <ORGANIZATION> is consistent with placeholders in the document but it maybe found more natural to use a real identifier.\n 2. **Handling Details:** It might miss elaborating on a particular detail that can be crucial, such as how to specifically use the inspection tool. \n\nFinal Result:\n- **Good**'</li><li>'Reasoning:\n\n### Good Points:\n1. **Context Grounding**: The answer is grounded in the provided document, as it follows the steps described in the text for enabling clients to book multiple participants.\n2. **Relevance**: The steps provided are directly relevant to the question, outlining the specific process within <ORGANIZATION> Bookings.\n\n### Bad Points:\n1. **Conciseness**: There is an extraneous, seemingly miswritten phrase "John Youngimum" which should be "maximum." This introduces unnecessary confusion.\n2. **Detailing**: While the steps are mostly accurate, they missed clear labelling for each scroll and edit actions, causing potential confusion. The final step should also emphasize that the overall save action confirms the entire setup.\n3. **Errors and Clarity**: It doesn\'t maintain precise terminology, for instance, "John Youngimum number of participants." The correct term should be "maximum number of participants."\n\nFinal verdict:\nBased on the outlined reasoning, due to the critical mistake and slight inconsistencies, the evaluation is:\n \nBad'</li><li>'Reasoning why the answer may be good:\n1. Context Grounding: The answer attempts to address an issue related to booking services, which could be inferred to be supported by a company offering a booking feature, as mentioned in the document.\n2. Relevance: The answer is somewhat relevant as it talks about a booking error, possibly correlating with the "Online Booking" feature from the document.\n3. Conciseness: The answer is concise and does not contain unnecessary information.\n4. Instructions: The answer advises that the issue has been resolved, which is a form of instruction or information on what to expect.\n\nReasoning why the answer may be bad:\n1. Context Grounding: The provided document does not contain any specific details about an error related to changing the location for booking services or its resolution. Thus, the answer lacks proper grounding in the document.\n2. Relevance: While it mentions booking, the document does not reference details related to the error or its resolution, making the answer\'s relevance questionable.\n3. Conciseness: Though the answer is concise, the lack of detailed instructions or steps to follow if the error persists is critical.\n4. Instructions: The answer does not give detailed actionable steps on what to do if the error still occurs, like refreshing the page or contacting support.\n\nFinal result: Bad'</li></ul> |
| 0 | <ul><li>"### Reasoning:\n\n#### Why the answer may be good:\n1. **Context Grounding:** The answer aligns with the context provided in the document, which clearly states that transferring the booking application from one site to another is not possible.\n2. **Relevance:** It directly addresses the specific question asked—whether the booking app can be updated on the site.\n3. **Conciseness:** The answer is brief and directly addresses the question.\n4. **Correct Information:** The instructions for voting on the feature are clearly detailed and reflect the information given in the document.\n\n#### Why the answer may be bad:\n1. **Misinterpretation:** The answer might be slightly misdirected if the user's intent was about updating the app in the sense of its features or versions rather than just transferring it between sites.\n2. **Lack of Detail:** It lacks information on what updating entails; it only covers site transferring limitations mentioned in the document.\n\n### Final Result:\n- **Bad**: The answer misinterprets the context as primarily transferring the app between sites and does not address potential updates within the app on the same site."</li><li>'Reasoning for the answer being good:\n1. **Context Grounding**: The answer is well-supported by the provided document. It follows the steps outlined in the section "Adding and setting up an additional service list" as well as steps for "Setting up a page with services for site members only."\n2. **Relevance**: The answer directly addresses the question "What should I do to add a service?" by providing explicit instructions related to adding and displaying services.\n3. **Conciseness**: The answer is clear and to the point, although it might be detailed, it avoids unnecessary information.\n4. **Correct and detailed instructions**: The answer provides a step-by-step detailed guide on how to add a service list and how to make it visible either to all users or just to site members.\n\nReasoning for the answer being bad:\n1. **Context Grounding**: Although the instructions provided are based on the document, the reference to "setting up a page with services for site members only" may be considered slightly out of scope since the question did not specify this requirement.\n2. **Relevance**: The answer might slightly stray by including membership-specific instructions which were not explicitly asked.\n3. **Conciseness**: While comprehensive, the inclusion of additional steps related to member-only pages may make the answer longer than necessary for a general query about adding services.\n4. **Correct and detailed instructions**: The instructions are generally correct and detailed but could be streamlined to focus solely on the core question asked.\n\nFinal result: **Good**'</li><li>'**Reasoning:**\n\n**Good Aspects:**\n1. **Context Grounding:** The answer largely draws from the provided document and relates directly to the technical process described.\n2. **Relevance:** It focuses on the exact procedure necessary to display blog categories on the blog feed.\n3. **Conciseness:** The steps provided are relatively brief and follow the logical sequence in the document.\n4. **Correct and Detailed Instructions:** It lists out tasks such as creating datasets, connecting to blog categories, and setting up filters, which align with the document\'s guidance.\n\n**Bad Aspects:**\n1. **Detail:** The steps are vague; specifically, "95593638" is repeatedly used in place of what should be an action (e.g., "create," "add"), rendering the instructions confusing and incomplete.\n2. **Accuracy:** The inserted number sequence (95593638) disrupts the clarity and comprehension, making it unclear how to proceed with each step.\n3. **Completeness:** There is missing information on how to carry out tasks, such as clicking specific options and connecting fields, making it challenging to follow through without referring back to the document.\n\n**Final Result:** \n\nBad'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.4375 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot-instructions_two_reasoning_only_reasoning_1726")
# Run inference
preds = model("Reasoning for Good:
1. **Context Grounding**: The answer is well-supported by the provided document, accurately reflecting the steps outlined.
2. **Relevance**: The answer directly addresses the specific question posed about changing the reservation reference from the service page to the booking calendar.
3. **Conciseness**: The answer is concise and clear, providing straightforward steps without unnecessary information.
4. **Correct and Detailed Instructions**: It provides precise, step-by-step instructions that align correctly with the provided document.
Reasoning for Bad:
- There are no significant deviations from the document or extraneous information.
- There are no contradictions or errors in the steps mentioned.
Final Result:
Good")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 91 | 151.7556 | 233 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 22 |
| 1 | 23 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0088 | 1 | 0.1829 | - |
| 0.4425 | 50 | 0.2598 | - |
| 0.8850 | 100 | 0.1764 | - |
| 1.3274 | 150 | 0.0079 | - |
| 1.7699 | 200 | 0.0026 | - |
| 2.2124 | 250 | 0.0021 | - |
| 2.6549 | 300 | 0.0019 | - |
| 3.0973 | 350 | 0.0016 | - |
| 3.5398 | 400 | 0.0015 | - |
| 3.9823 | 450 | 0.0016 | - |
| 4.4248 | 500 | 0.0015 | - |
| 4.8673 | 550 | 0.0015 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.0
- Transformers: 4.44.0
- PyTorch: 2.4.1+cu121
- Datasets: 2.19.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
rdpratti/distilbert-base-uncased-finetuned-emotion | rdpratti | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,678,134,812,000 | 2023-03-17T12:57:20 | 11 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- type: accuracy
value: 0.898
name: Accuracy
- type: f1
value: 0.8958743697126005
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3557
- Accuracy: 0.898
- F1: 0.8959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
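For reference, these settings roughly correspond to a 🤗 Transformers `TrainingArguments` setup like the following (a reconstruction for illustration, not the original training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon listed above are the defaults
)
```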
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5565 | 0.8285 | 0.8096 |
| 0.7618 | 2.0 | 250 | 0.3557 | 0.898 | 0.8959 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-en-cel | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"gd",
"ga",
"br",
"kw",
"gv",
"cy",
"cel",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:29:12 | 47 | 0 | ---
language:
- en
- gd
- ga
- br
- kw
- gv
- cy
- cel
license: apache-2.0
tags:
- translation
---
### eng-cel
* source group: English
* target group: Celtic languages
* OPUS readme: [eng-cel](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md)
* model: transformer
* source language(s): eng
* target language(s): bre cor cym gla gle glv
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage example below
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.eval.txt)
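For example, translation with the Transformers Marian classes looks roughly like this; the `>>cym<<` token below selects Welsh, and any of the target language IDs listed above works the same way:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-cel"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial >>id<< token picks the target language, e.g. >>cym<< for Welsh.
batch = tokenizer([">>cym<< How are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```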
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-bre.eng.bre | 11.5 | 0.338 |
| Tatoeba-test.eng-cor.eng.cor | 0.3 | 0.095 |
| Tatoeba-test.eng-cym.eng.cym | 31.0 | 0.549 |
| Tatoeba-test.eng-gla.eng.gla | 7.6 | 0.317 |
| Tatoeba-test.eng-gle.eng.gle | 35.9 | 0.582 |
| Tatoeba-test.eng-glv.eng.glv | 9.9 | 0.454 |
| Tatoeba-test.eng.multi | 18.0 | 0.342 |
### System Info:
- hf_name: eng-cel
- source_languages: eng
- target_languages: cel
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel']
- src_constituents: {'eng'}
- tgt_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: cel
- short_pair: en-cel
- chrF2_score: 0.342
- bleu: 18.0
- brevity_penalty: 0.9590000000000001
- ref_len: 45370.0
- src_name: English
- tgt_name: Celtic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: cel
- prefer_old: False
- long_pair: eng-cel
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
"TRANSLATION"
] | Non_BioNLP |
gokulsrinivasagan/distilbert_lda_5_v1_book_mrpc | gokulsrinivasagan | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_5_v1_book",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_5_v1_book",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,733,759,151,000 | 2024-12-09T15:46:52 | 4 | 0 | ---
base_model: gokulsrinivasagan/distilbert_lda_5_v1_book
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert_lda_5_v1_book_mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.7254901960784313
name: Accuracy
- type: f1
value: 0.8021201413427562
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_5_v1_book_mrpc
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5_v1_book](https://huggingface.co/gokulsrinivasagan/distilbert_lda_5_v1_book) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5515
- Accuracy: 0.7255
- F1: 0.8021
- Combined Score: 0.7638
## Model description
More information needed
## Intended uses & limitations
More information needed
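A minimal inference sketch for paraphrase detection on a sentence pair (in GLUE MRPC, label 1 corresponds to "equivalent"; the example sentences are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "gokulsrinivasagan/distilbert_lda_5_v1_book_mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "The company posted strong quarterly results.",
    "Quarterly earnings for the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over [not equivalent, equivalent]
```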
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6286 | 1.0 | 15 | 0.5986 | 0.6912 | 0.7987 | 0.7449 |
| 0.5813 | 2.0 | 30 | 0.5607 | 0.6961 | 0.8069 | 0.7515 |
| 0.5063 | 3.0 | 45 | 0.5515 | 0.7255 | 0.8021 | 0.7638 |
| 0.3879 | 4.0 | 60 | 0.7277 | 0.7157 | 0.8204 | 0.7681 |
| 0.2612 | 5.0 | 75 | 0.6235 | 0.7279 | 0.8083 | 0.7681 |
| 0.1772 | 6.0 | 90 | 0.8348 | 0.7377 | 0.8310 | 0.7844 |
| 0.1384 | 7.0 | 105 | 1.0297 | 0.7230 | 0.8270 | 0.7750 |
| 0.0807 | 8.0 | 120 | 0.9378 | 0.7402 | 0.8279 | 0.7841 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
openaccess-ai-collective/manticore-13b-chat-pyg | openaccess-ai-collective | text-generation | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPT4-LLM-Cleaned",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:metaeval/ScienceQA_text_only",
"dataset:hellaswag",
"dataset:openai/summarize_from_feedback",
"dataset:riddle_sense",
"dataset:gsm8k",
"dataset:ewof/code-alpaca-instruct-unfiltered",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,684,772,517,000 | 2023-06-07T12:32:40 | 3,537 | 30 | ---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- ewof/code-alpaca-instruct-unfiltered
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Manticore 13B Chat
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Manticore 13B Chat builds on Manticore with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca style prompts using `###` in favor of
chat only style prompts using `USER:`,`ASSISTANT:` as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>, <|user|> and <|model|>` tokens.
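A rough generation sketch with 🤗 Transformers using that chat format (the sampling settings are arbitrary, and loading the full-precision 13B weights requires a sizable GPU):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "openaccess-ai-collective/manticore-13b-chat-pyg"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

prompt = "USER: tell me a joke about alpacas\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```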
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])
# Training Datasets
Manticore 13B Chat is a Llama 13B model fine-tuned on the following datasets along with the datasets from the original Manticore 13B.
**Manticore 13B Chat was trained on 25% of the datasets below. The datasets were merged, shuffled, and then sharded into 4 parts.**
- de-duped pygmalion dataset, filtered down to RP data
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- hellaswag, updated for detailed explanations w 30K+ rows
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
- [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered)
Manticore 13B
- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
Not added from Manticore 13B:
- mmlu - mmlu datasets were not added to this model as the `test` split is used for benchmarks
# Shoutouts
Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, ehartford for cleaned datasets, and 0x000011b for the RP dataset.
# Demo
Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.
- https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg
## Release Notes
- https://wandb.ai/wing-lian/manticore-13b-v2/runs/hxr3aiiw
## Build
Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
- 3 epochs taking approximately 8 hours. No further epochs will be released.
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).
## Bias, Risks, and Limitations
Manticore has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Manticore was fine-tuned from the base model LlaMa 13B, please refer to its model card's Limitations Section for relevant information.
## Examples
```
USER: make up a sonnet in the style of shakespeare about a skier skiing in the glades on a bluebird day.
ASSISTANT: A skier doth glide 'neath sun’s glow, His edges sharp to cut the snow, Bluebird sky above, While he doth revel on the slopes below, The cold air biting his nose, But not his spirits, For he is filled with a sense of joy, As he carves through the powdery drifts. His body moves in graceful rhythm, And each turn he makes with ease, He feels like he could ski forever, With no care for the world outside these glades.
```
```
USER: tell me a joke about alpacas
ASSISTANT: Did you hear about the famous alpaca ballet dancer? She was a prima llama.
USER: okay, that is actually pretty good
ASSISTANT: What do you call an alpaca in a bowtie? A llama-cholla!
USER: why is that funny?
ASSISTANT: Because it’s a play on words. “Llama-cholla” sounds like “lame chola,” which is Spanish slang for someone who acts cool or pretentious. So, the joke is implying that alpacas are vain and act cool with their bowties.
```
```
USER: What is 44 + 77?
ASSISTANT: 121
USER: What is the sum of 33 and 77?
ASSISTANT: 110
USER: tell me 5 words that start with “ex” and the sum of 33 and 77.
ASSISTANT: exotic, expressive, explicit, experimental, and extravagant. The sum of 33 and 77 is 110.
USER: tell me 3 words that start with “egr” and end in “s”
ASSISTANT: egret, eggplant, and egrets.
```
- oof
```
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
ASSISTANT: Sally has a total of 6 siblings, so she has a total of 6 - 2 = 4 sisters.
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? Use multi-step reasoning to explain your answer.
ASSISTANT: Sally has a total of three brothers, so she has a total of three * 2 = six siblings. Her family consists of three boys and six girls in total. This means that Sally has three sisters since there are six female relatives and only three male ones.
USER: But when brothers have sisters, aren’t they shared or the same?
ASSISTANT: No, not every sibling shares their mother. For example, I have two brothers who share one mother but my third brother has a different mother from both of them.
USER: Let’s assume they all share the same mother and father. Now explain with multi-step reasoning.
ASSISTANT: If Sally has three brothers then that means she has six siblings altogether. Each brother has two sisters, which means that there are twelve female relatives in her family - one for each of her brothers and herself. Since there are only six females in the entire family, that means that Sally’s mother has no other children than her own three sons - so she can’t be the mother of any of the sisters. So all those sisters have to belong to one of the brothers, and since they are shared by two brothers, then each brother has a single sister. Therefore, there is only one sister for Sally, making the final answer 1.
```
- pretty sure that reasoning isn't correct, and the final answer of having 1 sister is a lucky guess | [
"SUMMARIZATION"
] | Non_BioNLP |
UNIST-Eunchan/Pegasus-x-base-govreport-12288-1024-numepoch-10 | UNIST-Eunchan | text2text-generation | [
"transformers",
"pytorch",
"pegasus_x",
"text2text-generation",
"generated_from_trainer",
"dataset:govreport-summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,689,819,644,000 | 2023-07-22T03:05:31 | 30 | 0 | ---
datasets:
- govreport-summarization
tags:
- generated_from_trainer
model-index:
- name: Pegasus-x-base-govreport-12288-1024-numepoch-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pegasus-x-base-govreport-12288-1024-numepoch-10
This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on the govreport-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6234
## Model description
More information needed
## Evaluation Score
**ROUGE**: `{'rouge1': 0.5012, 'rouge2': 0.2205, 'rougeL': 0.2552, 'rougeLsum': 0.2554}`

**BERT_SCORE**: `{'f1': 0.859, 'precision': 0.8619, 'recall': 0.8563}`
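A quick, illustrative inference sketch (the report text here is a made-up stand-in for a real long document):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="UNIST-Eunchan/Pegasus-x-base-govreport-12288-1024-numepoch-10",
)

report = (
    "The agency reported that grid modernization efforts remain behind schedule, "
    "citing supply-chain delays and workforce shortages across several states. ..."
)
print(summarizer(report, max_length=256, truncation=True)[0]["summary_text"])
```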
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1149 | 0.37 | 100 | 1.9237 |
| 1.9545 | 0.73 | 200 | 1.8380 |
| 1.8835 | 1.1 | 300 | 1.7574 |
| 1.862 | 1.46 | 400 | 1.7305 |
| 1.8536 | 1.83 | 500 | 1.7100 |
| 1.8062 | 2.19 | 600 | 1.6944 |
| 1.8161 | 2.56 | 700 | 1.6882 |
| 1.7611 | 2.92 | 800 | 1.6803 |
| 1.7878 | 3.29 | 900 | 1.6671 |
| 1.7299 | 3.65 | 1000 | 1.6599 |
| 1.7636 | 4.02 | 1100 | 1.6558 |
| 1.7262 | 4.38 | 1200 | 1.6547 |
| 1.715 | 4.75 | 1300 | 1.6437 |
| 1.7178 | 5.12 | 1400 | 1.6445 |
| 1.7163 | 5.48 | 1500 | 1.6386 |
| 1.7367 | 5.85 | 1600 | 1.6364 |
| 1.7114 | 6.21 | 1700 | 1.6365 |
| 1.6452 | 6.58 | 1800 | 1.6309 |
| 1.7251 | 6.94 | 1900 | 1.6301 |
| 1.6726 | 7.31 | 2000 | 1.6305 |
| 1.7104 | 7.67 | 2100 | 1.6285 |
| 1.6739 | 8.04 | 2200 | 1.6252 |
| 1.7082 | 8.4 | 2300 | 1.6246 |
| 1.6888 | 8.77 | 2400 | 1.6244 |
| 1.6609 | 9.13 | 2500 | 1.6256 |
| 1.6707 | 9.5 | 2600 | 1.6241 |
| 1.669 | 9.86 | 2700 | 1.6234 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
| [
"SUMMARIZATION"
] | Non_BioNLP |
LongSafari/hyenadna-tiny-1k-seqlen-d256-hf | LongSafari | text-generation | [
"transformers",
"safetensors",
"hyenadna",
"text-generation",
"dna",
"biology",
"genomics",
"hyena",
"custom_code",
"arxiv:2306.15794",
"arxiv:2302.10866",
"license:bsd-3-clause",
"autotrain_compatible",
"region:us"
] | 1,699,020,703,000 | 2024-01-24T17:22:45 | 166 | 0 | ---
license: bsd-3-clause
tags:
- dna
- biology
- genomics
- hyena
---
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, check out these resources.
**Resources:**
- [arxiv](https://arxiv.org/abs/2306.15794)
- [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna)
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
- [github](https://github.com/HazyResearch/hyena-dna)
**Links to all HuggingFace models:**
We've uploaded a [collection](https://huggingface.co/collections/LongSafari/hyenadna-models-654d0cbbe113b04ba5a0f638) of all the pretrained HyenaDNA checkpoints.
You'll see models of different sizes and sequence lengths. There are also original weights-only versions of each model in the [LongSafari organization](https://huggingface.co/LongSafari), which are designed to be loaded with the original [github](https://github.com/HazyResearch/hyena-dna) repo. These models have identical outputs to the models in the collection above, just different interfaces.
See [GPU requirements](#hardware) for each model.
### Using HyenaDNA
In this brief code sample we demonstrate fine-tuning HyenaDNA on a sequence classification task. This sample uses the `medium` checkpoint, with a maximum sequence length of 160k nucleotides. Note that training will fail if you use a sequence length longer than the maximum supported length for your chosen checkpoint.
In testing, we have been able to train at a sequence length up to about 250k nucleotides on a Colab T4 GPU (16GB VRAM). For longer sequence lengths, more memory will be required.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers import TrainingArguments, Trainer, logging
from datasets import Dataset
import torch
# instantiate pretrained model
checkpoint = 'LongSafari/hyenadna-medium-160k-seqlen-hf'
max_length = 160_000
# bfloat16 for better speed and reduced memory usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
# Generate some random sequence and labels
# If you're copying this code, replace the sequences and labels
# here with your own data!
sequence = 'ACTG' * int(max_length/4)
sequence = [sequence] * 8 # Create 8 identical samples
tokenized = tokenizer(sequence)["input_ids"]
labels = [0, 1] * 4
# Create a dataset for training
ds = Dataset.from_dict({"input_ids": tokenized, "labels": labels})
ds.set_format("pt")
# Initialize Trainer
# Note that we're using extremely small batch sizes to maximize
# our ability to fit long sequences in memory!
args = {
"output_dir": "tmp",
"num_train_epochs": 1,
"per_device_train_batch_size": 1,
"gradient_accumulation_steps": 4,
"gradient_checkpointing": True,
"learning_rate": 2e-5,
}
training_args = TrainingArguments(**args)
trainer = Trainer(model=model, args=training_args, train_dataset=ds)
result = trainer.train()
print(result)
# Now we can save_pretrained() or push_to_hub() to share the trained model!
```
You may also find these [notebooks](https://huggingface.co/docs/transformers/notebooks) useful. Although they're not specific to HyenaDNA, they contain additional examples of training DNA and sequence classification models.
- [How to fine-tune a Nucleotide Transformer model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb)
- [How to fine-tune a model on text classification](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)
### GPU requirements (suggested)
<a name="hardware"></a>
Here are suggestions on the hardware (preferred minimum) we think you can use for each model.
GPU during: Pretrain, fine-tune, inference
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40GB, T4, T4)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40GB, T4, T4)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40GB, A100-40GB, T4)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80GB, A100-80GB, A100-40GB)
## Model & Training Overview
<a name="model"></a>
HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations.
This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention).
We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer.
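For instance, the character-level tokenizer maps each base to its own token (a small sketch using the checkpoint this card describes):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "LongSafari/hyenadna-tiny-1k-seqlen-d256-hf", trust_remote_code=True
)
ids = tokenizer("ACGTACGT")["input_ids"]
print(len(ids), ids)  # one token per nucleotide, plus any special tokens the tokenizer adds
```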
We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).
HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.
Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA!
### Authors
Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.
**Contact**
Eric Nguyen, [email protected]
Michael Poli, [email protected]
Marjan Faizi, [email protected]
## Citation
Feel free to cite us :)
```
@article{nguyen2023hyenadna,
title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution},
author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré},
year={2023},
eprint={2306.15794},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
neurips-user/neurips-deberta-combined-1 | neurips-user | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"autotrain",
"dataset:neurips-bert-combined5/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,715,825,301,000 | 2024-05-16T02:28:17 | 16 | 0 | ---
datasets:
- neurips-bert-combined5/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.4513716995716095
f1: 0.8037383177570093
precision: 0.7543859649122807
recall: 0.86
auc: 0.8812
accuracy: 0.79
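For quick inference, here is a minimal sketch (not part of the original card) using the `transformers` pipeline; label names depend on the AutoTrain label mapping:

```python
from transformers import pipeline

# Minimal inference sketch for the AutoTrain text-classification model.
classifier = pipeline("text-classification", model="neurips-user/neurips-deberta-combined-1")
print(classifier("I love AutoTrain"))
```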
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Thang203/general_nlp_research_paper | Thang203 | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 1,712,792,211,000 | 2024-04-10T23:36:54 | 4 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# general_nlp_research_paper
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("Thang203/general_nlp_research_paper")
topic_model.get_topic_info()
```
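Once loaded, the model can also assign topics to new documents. The sketch below is not part of the original card; depending on how the model was serialized, `BERTopic.load` may need an `embedding_model=` argument for `.transform` to work:

```python
# Assign topics to unseen abstracts using the topic_model loaded above.
topics, probs = topic_model.transform([
    "Neural machine translation with subword units",
    "A survey of question answering over knowledge graphs",
])
print(topics)                            # predicted topic ids
print(topic_model.get_topic(topics[0]))  # top keywords for the first prediction
```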
## Topic overview
* Number of topics: 165
* Number of training documents: 11000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | language - models - model - data - translation | 10 | -1_language_models_model_data |
| 0 | question - answer - questions - answering - question answering | 3488 | 0_question_answer_questions_answering |
| 1 | speech - speech recognition - acoustic - recognition - asr | 513 | 1_speech_speech recognition_acoustic_recognition |
| 2 | summarization - summaries - abstractive - summary - extractive | 345 | 2_summarization_summaries_abstractive_summary |
| 3 | clinical - medical - biomedical - extraction - notes | 337 | 3_clinical_medical_biomedical_extraction |
| 4 | translation - machine translation - parallel - machine - nmt | 258 | 4_translation_machine translation_parallel_machine |
| 5 | emotion - emotions - emotional - emotion recognition - affective | 211 | 5_emotion_emotions_emotional_emotion recognition |
| 6 | word - embeddings - word embeddings - similarity - vector | 164 | 6_word_embeddings_word embeddings_similarity |
| 7 | bert - probing - tasks - pretraining - pretrained | 145 | 7_bert_probing_tasks_pretraining |
| 8 | relation - relation extraction - extraction - relations - distant | 138 | 8_relation_relation extraction_extraction_relations |
| 9 | hate - hate speech - offensive - detection - speech | 134 | 9_hate_hate speech_offensive_detection |
| 10 | arabic - sanskrit - kurdish - transliteration - rules | 118 | 10_arabic_sanskrit_kurdish_transliteration |
| 11 | aspect - sentiment - sentiment analysis - aspectbased sentiment - aspectbased | 118 | 11_aspect_sentiment_sentiment analysis_aspectbased sentiment |
| 12 | morphological - inflection - languages - morphology - morphological analysis | 112 | 12_morphological_inflection_languages_morphology |
| 13 | ner - named entity - named - entity recognition - named entity recognition | 107 | 13_ner_named entity_named_entity recognition |
| 14 | multimodal - image - visual - captions - images | 101 | 14_multimodal_image_visual_captions |
| 15 | discourse - discourse relation - discourse parsing - implicit discourse - discourse relations | 98 | 15_discourse_discourse relation_discourse parsing_implicit discourse |
| 16 | chinese - segmentation - word segmentation - chinese word - chinese word segmentation | 89 | 16_chinese_segmentation_word segmentation_chinese word |
| 17 | crosslingual - bilingual - embeddings - crosslingual word - word embeddings | 84 | 17_crosslingual_bilingual_embeddings_crosslingual word |
| 18 | entropy - law - languages - script - frequency | 79 | 18_entropy_law_languages_script |
| 19 | argument - argumentation - arguments - argumentative - mining | 77 | 19_argument_argumentation_arguments_argumentative |
| 20 | nmt - neural machine - neural machine translation - translation - machine translation | 77 | 20_nmt_neural machine_neural machine translation_translation |
| 21 | parsing - dependency - dependency parsing - parser - transitionbased | 76 | 21_parsing_dependency_dependency parsing_parser |
| 22 | syntactic - rnns - grammatical - language models - agreement | 71 | 22_syntactic_rnns_grammatical_language models |
| 23 | generation - datatotext - text generation - datatotext generation - text | 71 | 23_generation_datatotext_text generation_datatotext generation |
| 24 | topic - topics - topic models - topic modeling - lda | 71 | 24_topic_topics_topic models_topic modeling |
| 25 | knowledge - knowledge graph - entities - relation - graph | 68 | 25_knowledge_knowledge graph_entities_relation |
| 26 | gender - bias - gender bias - biases - embeddings | 66 | 26_gender_bias_gender bias_biases |
| 27 | story - stories - story generation - narrative - plot | 65 | 27_story_stories_story generation_narrative |
| 28 | dialogue - dialog - user - taskoriented - agent | 65 | 28_dialogue_dialog_user_taskoriented |
| 29 | transformer - attention - selfattention - heads - layers | 65 | 29_transformer_attention_selfattention_heads |
| 30 | srl - semantic role - role labeling - semantic role labeling - role | 64 | 30_srl_semantic role_role labeling_semantic role labeling |
| 31 | change - semantic change - diachronic - lexical semantic - semantic | 64 | 31_change_semantic change_diachronic_lexical semantic |
| 32 | sense - wsd - disambiguation - word sense - sense disambiguation | 64 | 32_sense_wsd_disambiguation_word sense |
| 33 | paraphrase - paraphrases - paraphrase generation - paraphrasing - paraphrase identification | 63 | 33_paraphrase_paraphrases_paraphrase generation_paraphrasing |
| 34 | linking - entity linking - entity - el - entities | 62 | 34_linking_entity linking_entity_el |
| 35 | authorship - attribution - authorship attribution - authors - stylistic | 60 | 35_authorship_attribution_authorship attribution_authors |
| 36 | tracking - state tracking - dialogue state - state - dialogue | 54 | 36_tracking_state tracking_dialogue state_state |
| 37 | nli - natural language inference - language inference - inference - natural language | 54 | 37_nli_natural language inference_language inference_inference |
| 38 | act - dialogue act - dialogue - dialog act - dialog | 51 | 38_act_dialogue act_dialogue_dialog act |
| 39 | commonsense - reasoning - commonsense reasoning - knowledge - commonsense knowledge | 49 | 39_commonsense_reasoning_commonsense reasoning_knowledge |
| 40 | crosslingual - multilingual - transfer - crosslingual transfer - mbert | 49 | 40_crosslingual_multilingual_transfer_crosslingual transfer |
| 41 | coreference - resolution - coreference resolution - mention - pronoun | 49 | 41_coreference_resolution_coreference resolution_mention |
| 42 | legal - patent - court - case - legal domain | 48 | 42_legal_patent_court_case |
| 43 | dialect - identification - language identification - dialect identification - arabic | 47 | 43_dialect_identification_language identification_dialect identification |
| 44 | amr - amr parsing - parsing - meaning representation - meaning | 46 | 44_amr_amr parsing_parsing_meaning representation |
| 45 | adversarial - adversarial examples - attacks - attack - examples | 46 | 45_adversarial_adversarial examples_attacks_attack |
| 46 | health - mental - mental health - social media - media | 45 | 46_health_mental_mental health_social media |
| 47 | offensive - offensive language - subtask - offensive language identification - hostile | 45 | 47_offensive_offensive language_subtask_offensive language identification |
| 48 | semantic parsing - parsing - semantic - compositional generalization - logical | 44 | 48_semantic parsing_parsing_semantic_compositional generalization |
| 49 | recurrent - language modeling - rnn - lstm - modeling | 44 | 49_recurrent_language modeling_rnn_lstm |
| 50 | sql - texttosql - database - queries - query | 44 | 50_sql_texttosql_database_queries |
| 51 | indian - smt - translation - machine translation - machine | 43 | 51_indian_smt_translation_machine translation |
| 52 | style - style transfer - transfer - text style - text style transfer | 43 | 52_style_style transfer_transfer_text style |
| 53 | poetry - poems - lyrics - music - verse | 43 | 53_poetry_poems_lyrics_music |
| 54 | codeswitching - cs - codeswitched - codemixed - monolingual | 43 | 54_codeswitching_cs_codeswitched_codemixed |
| 55 | sentiment - polarity - sentiment analysis - analysis - prior polarity | 41 | 55_sentiment_polarity_sentiment analysis_analysis |
| 56 | sarcasm - sarcasm detection - sarcastic - detection - irony | 41 | 56_sarcasm_sarcasm detection_sarcastic_detection |
| 57 | gec - grammatical error - grammatical error correction - error correction - correction | 40 | 57_gec_grammatical error_grammatical error correction_error correction |
| 58 | intent - intent detection - slot - slot filling - filling | 40 | 58_intent_intent detection_slot_slot filling |
| 59 | temporal - events - temporal relations - expressions - temporal relation | 39 | 59_temporal_events_temporal relations_expressions |
| 60 | adaptation - domain - domain adaptation - indomain - translation | 37 | 60_adaptation_domain_domain adaptation_indomain |
| 61 | stance - stance detection - detection - tweets - veracity | 37 | 61_stance_stance detection_detection_tweets |
| 62 | codemixed - sentiment - sentiment analysis - analysis - semeval2020 | 36 | 62_codemixed_sentiment_sentiment analysis_analysis |
| 63 | keyphrase - keyphrases - keyphrase extraction - keyphrase generation - extraction | 35 | 63_keyphrase_keyphrases_keyphrase extraction_keyphrase generation |
| 64 | nmt - subword - translation - vocabulary - neural machine translation | 35 | 64_nmt_subword_translation_vocabulary |
| 65 | calculus - logic - semantics - proof - typelogical | 35 | 65_calculus_logic_semantics_proof |
| 66 | simplification - text simplification - sentence simplification - sentence - ts | 35 | 66_simplification_text simplification_sentence simplification_sentence |
| 67 | annotation - xml - formats - tei - standards | 35 | 67_annotation_xml_formats_tei |
| 68 | correction - spelling - ocr - spelling correction - errors | 33 | 68_correction_spelling_ocr_spelling correction |
| 69 | sentiment - sentiment classification - sentiment analysis - classification - analysis | 33 | 69_sentiment_sentiment classification_sentiment analysis_classification |
| 70 | complexity - readability - lexical complexity - assessment - readability assessment | 31 | 70_complexity_readability_lexical complexity_assessment |
| 71 | postediting - ape - automatic postediting - mt - translation | 30 | 71_postediting_ape_automatic postediting_mt |
| 72 | gender - gender bias - bias - translation - pronouns | 30 | 72_gender_gender bias_bias_translation |
| 73 | tagger - tagging - taggers - pos - partofspeech | 30 | 73_tagger_tagging_taggers_pos |
| 74 | meeting - summarization - podcast - abstractive - summaries | 30 | 74_meeting_summarization_podcast_abstractive |
| 75 | domain - domain adaptation - adaptation - domains - target domain | 30 | 75_domain_domain adaptation_adaptation_domains |
| 76 | documentlevel - context - translation - nmt - neural machine | 29 | 76_documentlevel_context_translation_nmt |
| 77 | text classification - classification - convolutional - networks - convolutional neural | 29 | 77_text classification_classification_convolutional_networks |
| 78 | news - fake - fake news - clickbait - satirical | 29 | 78_news_fake_fake news_clickbait |
| 79 | grammars - grammar - stochastic - contextfree - contextfree grammars | 29 | 79_grammars_grammar_stochastic_contextfree |
| 80 | ontology - rogets - thesaurus - wordnet - concepts | 29 | 80_ontology_rogets_thesaurus_wordnet |
| 81 | vietnamese - ner - named entity recognition - entity recognition - named entity | 28 | 81_vietnamese_ner_named entity recognition_entity recognition |
| 82 | claim - verification - evidence - claims - fever | 27 | 82_claim_verification_evidence_claims |
| 83 | metrics - nlg - language generation - evaluation - natural language generation | 27 | 83_metrics_nlg_language generation_evaluation |
| 84 | responses - response - response generation - adversarial - generation | 27 | 84_responses_response_response generation_adversarial |
| 85 | robustness - nmt - translation - neural machine - neural machine translation | 27 | 85_robustness_nmt_translation_neural machine |
| 86 | revision - editing - seq2seq - revisions - rewriting | 27 | 86_revision_editing_seq2seq_revisions |
| 87 | phonological - phonology - finitestate - reduplication - prosody | 26 | 87_phonological_phonology_finitestate_reduplication |
| 88 | geolocation - location - geographic - twitter - names | 26 | 88_geolocation_location_geographic_twitter |
| 89 | event - event extraction - extraction - event types - argument | 26 | 89_event_event extraction_extraction_event types |
| 90 | mt - human - translation - evaluation - parity | 25 | 90_mt_human_translation_evaluation |
| 91 | arabic - sentiment - sentiment analysis - arabic sentiment - arabic sentiment analysis | 25 | 91_arabic_sentiment_sentiment analysis_arabic sentiment |
| 92 | emoji - emojis - emoji prediction - emoticons - sentiment | 25 | 92_emoji_emojis_emoji prediction_emoticons |
| 93 | constituency - latent tree - parsing - constituency parsing - tree learning | 25 | 93_constituency_latent tree_parsing_constituency parsing |
| 94 | spatial - instructions - 3d - environment - robot | 24 | 94_spatial_instructions_3d_environment |
| 95 | persona - responses - personality - traits - consistency | 23 | 95_persona_responses_personality_traits |
| 96 | matching - response - retrievalbased - chatbots - multiturn | 23 | 96_matching_response_retrievalbased_chatbots |
| 97 | entity - entity typing - typing - finegrained entity - type | 22 | 97_entity_entity typing_typing_finegrained entity |
| 98 | math - word problems - math word - word problem - problems | 21 | 98_math_word problems_math word_word problem |
| 99 | bert - multilingual - multilingual bert - bert model - multilingual models | 21 | 99_bert_multilingual_multilingual bert_bert model |
| 100 | financial - stock - market - news - price | 21 | 100_financial_stock_market_news |
| 101 | video - multimodal - sceneaware - dialog - visual | 21 | 101_video_multimodal_sceneaware_dialog |
| 102 | sense - multisense - senses - word sense - word | 21 | 102_sense_multisense_senses_word sense |
| 103 | game - games - agents - communication - pragmatic | 21 | 103_game_games_agents_communication |
| 104 | graph - amrtotext - amrtotext generation - amr - graphs | 20 | 104_graph_amrtotext_amrtotext generation_amr |
| 105 | nmt - translation - neural machine translation - neural machine - machine translation | 20 | 105_nmt_translation_neural machine translation_neural machine |
| 106 | normalization - text normalization - normalizing - text - historical | 20 | 106_normalization_text normalization_normalizing_text |
| 107 | privacy - policies - anonymization - deidentification - vague | 20 | 107_privacy_policies_anonymization_deidentification |
| 108 | beam - beam search - search - decoding - constraints | 20 | 108_beam_beam search_search_decoding |
| 109 | hypernymy - distributional - pathbased - hypernymy detection - hypernyms | 19 | 109_hypernymy_distributional_pathbased_hypernymy detection |
| 110 | political - bias - articles - news - ideology | 19 | 110_political_bias_articles_news |
| 111 | generative adversarial - gans - gan - generative - generative adversarial networks | 18 | 111_generative adversarial_gans_gan_generative |
| 112 | pos - tagger - tagging - pos tagging - codemixed | 17 | 112_pos_tagger_tagging_pos tagging |
| 113 | humor - humorous - headlines - funny - puns | 17 | 113_humor_humorous_headlines_funny |
| 114 | metaphor - metaphors - metaphoric - metaphorical - literal | 17 | 114_metaphor_metaphors_metaphoric_metaphorical |
| 115 | codeswitching - cs - asr - speech - speech recognition | 17 | 115_codeswitching_cs_asr_speech |
| 116 | event coreference - event - coreference - coreference resolution - resolution | 17 | 116_event coreference_event_coreference_coreference resolution |
| 117 | reviews - review - helpfulness - opinion - online reviews | 17 | 117_reviews_review_helpfulness_opinion |
| 118 | covid19 - tweets - wnut2020 - twitter - informative | 17 | 118_covid19_tweets_wnut2020_twitter |
| 119 | anaphora - resolution - pronouns - pronoun - anaphora resolution | 17 | 119_anaphora_resolution_pronouns_pronoun |
| 120 | bilingual - dictionary - comparability - termhood - comparable corpora | 17 | 120_bilingual_dictionary_comparability_termhood |
| 121 | discourse - translation - pronouns - dp - discourse phenomena | 17 | 121_discourse_translation_pronouns_dp |
| 122 | color - colour - naming - colors - character embeddings | 16 | 122_color_colour_naming_colors |
| 123 | nonautoregressive - autoregressive - nat - nonautoregressive neural - decoding | 16 | 123_nonautoregressive_autoregressive_nat_nonautoregressive neural |
| 124 | nlg - natural language generation - language generation - spoken dialogue - generation | 16 | 124_nlg_natural language generation_language generation_spoken dialogue |
| 125 | crowdsourcing - workers - examples - protocols - data collection | 16 | 125_crowdsourcing_workers_examples_protocols |
| 126 | african - revolution - african languages - technology - african language | 16 | 126_african_revolution_african languages_technology |
| 127 | grading - scoring - essay - short answer - essay scoring | 16 | 127_grading_scoring_essay_short answer |
| 128 | treebanks - treebank - parsing - crosslingual - dependency | 16 | 128_treebanks_treebank_parsing_crosslingual |
| 129 | reviews - summarization - review - product - summaries | 16 | 129_reviews_summarization_review_product |
| 130 | gaze - reading - eyetracking - eye - behaviour | 16 | 130_gaze_reading_eyetracking_eye |
| 131 | nlp - natural - natural language - nlg - language | 15 | 131_nlp_natural_natural language_nlg |
| 132 | news translation - news translation task - translation task - news - submission | 14 | 132_news translation_news translation task_translation task_news |
| 133 | eat - meaning - semantics - formal - theory | 14 | 133_eat_meaning_semantics_formal |
| 134 | sign - sign language - sl - asl - deaf | 14 | 134_sign_sign language_sl_asl |
| 135 | multitask - labels - mtl - sequence - multitask learning | 14 | 135_multitask_labels_mtl_sequence |
| 136 | phylogenetic - cognate - indoeuropean - historical linguistics - indoeuropean language | 14 | 136_phylogenetic_cognate_indoeuropean_historical linguistics |
| 137 | syntax - translation - neural machine translation - neural machine - nmt | 14 | 137_syntax_translation_neural machine translation_neural machine |
| 138 | explanations - explanation - explainers - nl explanations - faithful | 14 | 138_explanations_explanation_explainers_nl explanations |
| 139 | slot - slot filling - filling - slots - nlu | 13 | 139_slot_slot filling_filling_slots |
| 140 | personality - traits - profiling - author profiling - author | 13 | 140_personality_traits_profiling_author profiling |
| 141 | preposition - prepositions - supersenses - prepositional - supersense | 13 | 141_preposition_prepositions_supersenses_prepositional |
| 142 | scientific - application areas - application - areas - literature | 13 | 142_scientific_application areas_application_areas |
| 143 | russian - similarity - semantic similarity - similarity task - semantic similarity task | 13 | 143_russian_similarity_semantic similarity_similarity task |
| 144 | code - source code - documentation - code generation - programming | 13 | 144_code_source code_documentation_code generation |
| 145 | semantic web - translation - machinetranslation - machine translation - technologies | 12 | 145_semantic web_translation_machinetranslation_machine translation |
| 146 | knowledge - knowledgegrounded - response - dialogue generation - dialogue | 12 | 146_knowledge_knowledgegrounded_response_dialogue generation |
| 147 | sentence - sentence representations - sentence embeddings - transfer - tasks | 12 | 147_sentence_sentence representations_sentence embeddings_transfer |
| 148 | distributional - distributional semantics - semantics - functional distributional - functional distributional semantics | 12 | 148_distributional_distributional semantics_semantics_functional distributional |
| 149 | compositionality - sc - distributional - sememe knowledge - phrase | 12 | 149_compositionality_sc_distributional_sememe knowledge |
| 150 | ud - annotation - treebank - treebanks - universal dependencies | 12 | 150_ud_annotation_treebank_treebanks |
| 151 | acronym - abbreviation - acronyms - abbreviations - disambiguation | 12 | 151_acronym_abbreviation_acronyms_abbreviations |
| 152 | propaganda - task 11 - 11 - propaganda detection - semeval2020 task | 12 | 152_propaganda_task 11_11_propaganda detection |
| 153 | open - open information extraction - open information - information extraction - tuples | 12 | 153_open_open information extraction_open information_information extraction |
| 154 | hebrew - bible - intertextuality - restoration - homographs | 11 | 154_hebrew_bible_intertextuality_restoration |
| 155 | typological - typology - typological features - languages - linguistic typology | 11 | 155_typological_typology_typological features_languages |
| 156 | label - text classification - multilabel - labels - classification | 11 | 156_label_text classification_multilabel_labels |
| 157 | variational - latent - variational autoencoders - variational autoencoder - autoencoders | 11 | 157_variational_latent_variational autoencoders_variational autoencoder |
| 158 | crisis - messages - disasters - disaster - emergency | 11 | 158_crisis_messages_disasters_disaster |
| 159 | adversarial - rc - rc models - robustness - comprehension | 11 | 159_adversarial_rc_rc models_robustness |
| 160 | tree - treelstm - trees - tree structures - syntactic | 11 | 160_tree_treelstm_trees_tree structures |
| 161 | headline - headlines - news - headline generation - synthetic news | 11 | 161_headline_headlines_news_headline generation |
| 162 | reasoning - kg - paths - kgs - multihop | 11 | 162_reasoning_kg_paths_kgs |
| 163 | text classification - classification - runtime - fasttext - text | 10 | 163_text classification_classification_runtime_fasttext |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.6.1
* Transformers: 4.38.2
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
| [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"TEXT_CLASSIFICATION",
"COREFERENCE_RESOLUTION",
"EVENT_EXTRACTION",
"QUESTION_ANSWERING",
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION",
"PARAPHRASING"
] | Non_BioNLP |
SyedShaheer/bart-large-cnn-samsum_tuned_V2_1 | SyedShaheer | summarization | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,714,640,840,000 | 2024-05-02T09:17:38 | 10 | 0 | ---
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
agentlans/mdeberta-v3-base-readability | agentlans | text-classification | [
"safetensors",
"deberta-v2",
"multilingual",
"readability",
"text-classification",
"dataset:agentlans/tatoeba-english-translations",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"region:us"
] | 1,728,705,039,000 | 2024-10-12T09:55:59 | 50 | 0 | ---
base_model:
- microsoft/mdeberta-v3-base
datasets:
- agentlans/tatoeba-english-translations
license: mit
pipeline_tag: text-classification
tags:
- multilingual
- readability
---
# DeBERTa V3 Base for Multilingual Readability Assessment
This is a fine-tuned version of the multilingual DeBERTa model (mdeberta) for assessing text readability across languages.
## Model Details
- **Architecture:** mdeberta-base
- **Task:** Regression (Readability Assessment)
- **Training Data:** the [agentlans/tatoeba-english-translations](https://huggingface.co/datasets/agentlans/tatoeba-english-translations/) dataset, containing 39,100 English translations
- **Input:** Text in any of the languages supported by mDeBERTa
- **Output:** Estimated U.S. grade level for text comprehension (higher values indicate more complex text)
## Performance
Root mean squared error (RMSE) on 20% held-out validation set: 1.063
## Training Data
The model was trained on [agentlans/tatoeba-english-translations](https://huggingface.co/datasets/agentlans/tatoeba-english-translations).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name="agentlans/mdeberta-v3-base-readability"
# Put model on GPU or else CPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
def readability(text):
"""Processes the text using the model and returns its logits.
In this case, it's reading grade level in years of education
(the higher the number, the harder it is to read the text)."""
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True).to(device)
with torch.no_grad():
logits = model(**inputs).logits.squeeze().cpu()
return logits.tolist()
readability("Your text here.")
```
## Results
In this study, 10 English text samples of varying readability were generated and translated into Arabic, Chinese, French, Russian, and Spanish using Google Translate. This resulted in a total of 50 translated samples, which were subsequently analyzed by a trained classifier to predict their readability scores.
<details>
<summary>The following table presents the 10 original texts along with their translations:</summary>
| # | English | French | Spanish | Russian | Chinese | Arabic |
| :---: | --- | --- | --- | --- | --- | --- |
| 1 | The cat sat on the mat. | Le chat était assis sur le tapis. | El gato se sentó en la alfombra. | Кошка села на коврик. | 猫坐在垫子上。 | جلست القطة على الحصيرة. |
| 2 | She quickly ran to catch the bus. | Elle courut vite pour attraper le bus. | Corrió rápidamente para alcanzar el autobús. | Она быстро побежала, чтобы успеть на автобус. | 她飞快地跑去赶公共汽车。 | وركضت بسرعة لتلحق بالحافلة. |
| 3 | The old house creaked in the strong wind. | La vieille maison craquait sous le vent fort. | La vieja casa crujió con el fuerte viento. | Старый дом скрипел на сильном ветру. | 老房子在强风中吱吱作响。 | صرير البيت القديم في الرياح القوية. |
| 4 | Despite the rain, they enjoyed their picnic in the park. | Malgré la pluie, ils profitèrent de leur pique-nique dans le parc. | A pesar de la lluvia, disfrutaron de su picnic en el parque. | Несмотря на дождь, они наслаждались пикником в парке. | 尽管下着雨,他们还是在公园里享受着野餐。 | على الرغم من المطر، استمتعوا بنزهتهم في الحديقة. |
| 5 | The intricate design of the butterfly's wings fascinated the young scientist. | Le dessin complexe des ailes du papillon fascina le jeune scientifique. | El intrincado diseño de las alas de la mariposa fascinó al joven científico. | Замысловатый дизайн крыльев бабочки очаровал молодого ученого. | 蝴蝶翅膀的复杂设计让这位年轻的科学家着迷。 | أذهل التصميم المعقد لأجنحة الفراشة العالم الشاب. |
| 6 | The company's quarterly report indicated a significant increase in revenue. | Le rapport trimestriel de l'entreprise indiquait une augmentation significative des revenus. | El informe trimestral de la empresa indicó un aumento significativo de los ingresos. | Квартальный отчет компании показал значительный рост доходов. | 该公司的季度报告显示收入大幅增加。 | أشار التقرير ربع السنوي للشركة إلى زيادة كبيرة في الإيرادات. |
| 7 | The philosopher posited that consciousness arises from complex neural interactions. | Le philosophe postulat que la conscience naît d'interactions neuronales complexes. | El filósofo postuló que la conciencia surge de interacciones neuronales complejas. | Философ утверждал, что сознание возникает из сложных нейронных взаимодействий. | 哲学家认为意识源于复杂的神经相互作用。 | افترض الفيلسوف أن الوعي ينشأ من تفاعلات عصبية معقدة. |
| 8 | The quantum entanglement phenomenon challenges our understanding of locality and causality. | Le phénomène d'intrication quantique remet en question notre compréhension de la localité et de la causalité. | El fenómeno del entrelazamiento cuántico desafía nuestra comprensión de la localidad y la causalidad. | Феномен квантовой запутанности бросает вызов нашему пониманию локальности и причинности. | 量子纠缠现象挑战了我们对局部性和因果关系的理解。 | تتحدى ظاهرة التشابك الكمي فهمنا للمحلية والسببية. |
| 9 | The multifaceted approach to urban development necessitates consideration of socioeconomic factors. | L'approche multidimensionnelle du développement urbain nécessite de prendre en compte les facteurs socio-économiques. | El enfoque multifacético del desarrollo urbano requiere la consideración de factores socioeconómicos. | Многогранный подход к городскому развитию требует учета социально-экономических факторов. | 城市发展的多方面方法需要考虑社会经济因素。 | يتطلب النهج المتعدد الأوجه للتطوير الحضري مراعاة العوامل الاجتماعية والاقتصادية. |
| 10 | The esoteric nature of post-structuralist literary theory often obfuscates its practical applications. | La nature ésotérique de la théorie littéraire post-structuraliste obscurcit souvent ses applications pratiques. | La naturaleza esotérica de la teoría literaria posestructuralista a menudo ofusca sus aplicaciones prácticas. | Эзотерическая природа постструктуралистской литературной теории часто затрудняет ее практическое применение. | 后结构主义文学理论的深奥性质常常掩盖其实际应用。 | غالبًا ما تحجب الطبيعة الباطنية لنظرية الأدب ما بعد البنيوية تطبيقاتها العملية. |
</details>
The scatterplot below illustrates the predicted readability scores grouped by each text sample. Notably, the prediction scores exhibit low variability across different languages for the same text, indicating a consistent assessment of translation readability regardless of the target language.
<img src="plot.png" alt="Scatterplot of predicted quality scores grouped by text sample and language" width="100%"/>
This analysis highlights the effectiveness of using machine learning classifiers in evaluating textual readability across multiple languages.
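As a quick illustration (not from the original analysis), the `readability` helper defined above can be applied directly to individual sentences from the table:

```python
# Illustrative only; exact scores depend on hardware and library versions.
samples = [
    "The cat sat on the mat.",           # text 1, English
    "El gato se sentó en la alfombra.",  # text 1, Spanish
    "The quantum entanglement phenomenon challenges our understanding of locality and causality.",  # text 8
]
print([round(readability(s), 1) for s in samples])
```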
## Limitations
- Performance may vary for texts significantly different from the training data
- Output is based on statistical patterns and may not always align with human judgment
- Readability is assessed purely on textual features, not considering factors like subject familiarity or cultural context
## Ethical Considerations
- Should not be used as the sole determinant of text suitability for specific audiences
- Results may reflect biases present in the training data sources
- Care should be taken when using these models in educational or publishing contexts | [
"TRANSLATION"
] | Non_BioNLP |
mrm8488/mbart-large-finetuned-opus-es-en-translation | mrm8488 | translation | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"translation",
"es",
"en",
"dataset:opus100",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-04-05T10:32:38 | 298 | 2 | ---
datasets:
- opus100
language:
- es
- en
tags:
- translation
---
### mbart-large-es-en
This is mbart-large-cc25, finetuned on opus100 for Spanish to English translation.
It scores BLEU **28.25** on the validation dataset and BLEU **28.28** on the test dataset.
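The card does not include a usage snippet; here is a minimal inference sketch (not from the original card). The mBART language codes `es_XX` and `en_XX` follow the mbart-large-cc25 convention and are assumptions here:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_id = "mrm8488/mbart-large-finetuned-opus-es-en-translation"
tokenizer = MBartTokenizer.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

# Translate Spanish -> English.
tokenizer.src_lang = "es_XX"
inputs = tokenizer("La casa es maravillosa.", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.convert_tokens_to_ids("en_XX"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```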
"TRANSLATION"
] | Non_BioNLP |
TransferGraph/zenkri_autotrain-Arabic_Poetry_by_Subject-920730230-finetuned-lora-tweet_eval_emotion | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:zenkri/autotrain-Arabic_Poetry_by_Subject-920730230",
"base_model:adapter:zenkri/autotrain-Arabic_Poetry_by_Subject-920730230",
"model-index",
"region:us"
] | 1,709,211,139,000 | 2024-02-29T12:52:22 | 0 | 0 | ---
base_model: zenkri/autotrain-Arabic_Poetry_by_Subject-920730230
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: zenkri_autotrain-Arabic_Poetry_by_Subject-920730230-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6577540106951871
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zenkri_autotrain-Arabic_Poetry_by_Subject-920730230-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [zenkri/autotrain-Arabic_Poetry_by_Subject-920730230](https://huggingface.co/zenkri/autotrain-Arabic_Poetry_by_Subject-920730230) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6578
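The card does not include an inference snippet; the sketch below shows one way to load the LoRA adapter with PEFT. The label count of 4 follows the tweet_eval `emotion` configuration and is an assumption here:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the LoRA adapter on top of its base model.
adapter_id = "TransferGraph/zenkri_autotrain-Arabic_Poetry_by_Subject-920730230-finetuned-lora-tweet_eval_emotion"
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path, num_labels=4  # tweet_eval "emotion" has 4 labels
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```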
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2166 | None | 0 |
| 0.6016 | 1.1023 | 0 |
| 0.6444 | 0.8869 | 1 |
| 0.6631 | 0.8102 | 2 |
| 0.6578 | 0.7780 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
ElizaClaPa/SentimentAnalysis-YelpReviews-OptimizedModel | ElizaClaPa | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,720,963,951,000 | 2024-07-16T07:09:10 | 98 | 0 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A sentiment analysis model that predicts a star rating (1 to 5 stars) for a given review.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Language(s) (NLP):** Sentiment Analysis
- **Finetuned from model [optional]:** juliensimon/reviews-sentiment-analysis
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/ElizaClapa/LLM-Project-LHL.git
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The *YelpReviewFull* dataset consists of reviews from Yelp. It was constructed by Xiang Zhang ([email protected]) from the Yelp Dataset Challenge 2015.
It was first used as a text classification benchmark in the following paper:
Xiang Zhang, Junbo Zhao, Yann LeCun. *Character-level Convolutional Networks for Text Classification*. *Advances in Neural Information Processing Systems 28* (NIPS 2015).
The dataset can be found in this Hugging Face [link🔗](https://huggingface.co/datasets/Yelp/yelp_review_full).
#### Preprocessing [optional]
Preprocessing steps include removing punctuation, removing stopwords, lemmatizing and padding.
#### Training Hyperparameters
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The performance metrics of the optimized model were Accuracy, Precision, Recall, and F1-Score.
```
Evaluation Results:
{'eval_loss': 0.773500382900238, 'eval_accuracy': 0.684, 'eval_f1': 0.6833543859772582, 'eval_runtime': 98.6782, 'eval_samples_per_second': 5.067, 'eval_steps_per_second': 0.638}
Classification Report:
precision recall f1-score support
1 star 0.79 0.78 0.79 110
2 star 0.64 0.69 0.66 112
3 stars 0.70 0.67 0.69 92
4 stars 0.62 0.56 0.59 100
5 stars 0.66 0.71 0.68 86
accuracy 0.68 500
macro avg 0.68 0.68 0.68 500
weighted avg 0.68 0.68 0.68 500
```
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The Hyperparameters changed for the optimization of this model include the following:
```
training_args = TrainingArguments(
output_dir=repo_name,
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=3,
weight_decay=0.1,
eval_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
logging_dir='/content/drive/My Drive/Colab Notebooks/LLM Project GoogleColab/Logs_Full',
logging_steps=10,
push_to_hub=True,
report_to="none"
)
optimized_trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_ds['train'],
eval_dataset=tokenized_ds['test'],
#train_dataset=small_train_dataset,
#eval_dataset=small_eval_dataset,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
callbacks=[EarlyStoppingCallback(early_stopping_patience=1, early_stopping_threshold=0.001)]
)
```
The most important hyperparameter for this optimization was the **Learning Rate**, which was first lowered from 5e-1 to 1e-1 and finally set to 2e-5. A learning rate that is too high can cause the model to converge too quickly to a suboptimal solution, while one that is too low leads to slow convergence and long training times. A compromise between the risk of suboptimal performance and training time was found with the final learning rate (2e-5).
Another hyperparameter changed was the number of training **Epochs**, which controls how many times the model sees the entire training dataset. Too few epochs may lead to underfitting, while too many can lead to overfitting. To avoid overfitting, a technique called **Early Stopping** was used. This technique is used to halt training when the model's performance on the validation set stops improving. This helps prevent overfitting by ensuring that the model does not continue training beyond the point where it is making significant progress.
Another important consideration was the **Weight Decay** hyperparameter, as it provides regularization that helps avoid overfitting.
### Hyperparameters important for memory usage and speed
The following hyperparameters helped avoid losing valuable training progress when the Colab notebook disconnected from the hosted runtime because of inactivity or hitting the available RAM limit:
1. The **Per Device Evaluation Batch Size** directly affected the speed and memory usage during the evaluation.
2. The **Evaluation Strategy** was set to 'epoch' so the model would be evaluated on the validation set every time an epoch was completed.
3. The **Save Strategy** was set to 'epoch' so the model's state would be saved after every completed epoch.
Even if the notebook disconnected, training could be restarted from the most recently saved checkpoint, as sketched below.
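A minimal sketch, reusing the `optimized_trainer` defined above, for resuming from the last checkpoint saved under `output_dir`:

```python
# Resume training from the most recent checkpoint in output_dir.
optimized_trainer.train(resume_from_checkpoint=True)
```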
### Results
| <p style="text-align: center;">text</p> | <p style="text-align: center;">label</p>|<p style="text-align: center;">score</p>|
|:----------|:----------|:----------|
| This restaurant was the best ever, I really enjoyed the food there! | 5 stars | 0.967317 |
| I would recommend this to my family and friends! | 4 stars | 0.530670 |
| Not that big of a deal, I don't know what everyone is talking about. | 3 stars | 0.626009 |
| It was okay, not that bad, but also not extremely good | 3 stars | 0.492008 |
| This was the worst meal I've ever had! | 1 star | 0.990348 |
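To reproduce predictions like those in the table, a minimal sketch (not part of the original card) using the `transformers` pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ElizaClaPa/SentimentAnalysis-YelpReviews-OptimizedModel",
)
# Label names follow the model's config (e.g. '5 stars').
print(classifier("This restaurant was the best ever, I really enjoyed the food there!"))
```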
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
fine-tuned/BAAI_bge-large-en-15062024-atex-webapp | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Science",
"Technology",
"Medicine",
"Philosophy",
"Research",
"en",
"dataset:fine-tuned/BAAI_bge-large-en-15062024-atex-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,718,416,051,000 | 2024-06-15T01:48:01 | 7 | 0 | ---
datasets:
- fine-tuned/BAAI_bge-large-en-15062024-atex-webapp
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Science
- Technology
- Medicine
- Philosophy
- Research
---
This model is a fine-tuned version of [**BAAI/bge-large-en**](https://huggingface.co/BAAI/bge-large-en) designed for the following use case:
general domain
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/BAAI_bge-large-en-15062024-atex-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Nishthaa321/autotrain-qr7os-gstst | Nishthaa321 | text-classification | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:autotrain-qr7os-gstst/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,709,029,538,000 | 2024-02-27T10:26:05 | 6 | 0 | ---
datasets:
- autotrain-qr7os-gstst/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.2146722972393036
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
GAIR/rst-gaokao-writing-11b | GAIR | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11147",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,662,064,399,000 | 2022-09-04T01:42:02 | 10 | 2 | ---
license: afl-3.0
---
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, Daily Mail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in the world structurally, being pre-trained with a monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.)
* achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**, scoring **40** points higher than the average student score and 15 points higher than GPT-3 while using **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) on the 2018 English exam
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks,factual checker |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling|
| **rst-gaokao-writing-11b** | **Trained with example essays from past Gaokao-English exams and grammar error correction signals** | **Essay writing, story generation, grammar error correction and other text generation tasks** |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
## Data for reStructure Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization |
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
## Bibtext for Citation Info
```
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
``` | [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION",
"PARAPHRASING"
] | Non_BioNLP |
justinthelaw/Phi-3-mini-128k-instruct-4bit-128g-GPTQ | justinthelaw | text-generation | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"custom_code",
"conversational",
"en",
"dataset:Salesforce/wikitext",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-128k-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | 1,722,363,533,000 | 2024-08-03T12:37:46 | 242 | 1 | ---
base_model: microsoft/Phi-3-mini-128k-instruct
datasets:
- Salesforce/wikitext
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- nlp
- code
- phi3
- custom_code
- conversational
---
# Phi-3-mini-128k-instruct GPTQ 4-bit 128g Group Size
- Model creator: [Microsoft](https://huggingface.co/microsoft)
- Original model: [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
- Quantization code: [justinthelaw's GitHub](https://github.com/justinthelaw/quantization-pipeline-experiments)
- Quantization creator: [Justin Law](https://huggingface.co/justinthelaw)
<!-- description start -->
## Description
This repo contains GPTQ 4-bit, 128g Group Size, quantized model files for the recently released upgrade of [Phi-3-mini-128k-instruct](https://huggingface.co/justinthelaw/Phi-3-mini-128k-instruct-4bit-128g-instruct).
<!-- README_GPTQ.md-provided-files start -->
## GPTQ parameters
Models are released as sharded safetensors files.
| Bits | GS | GPTQ Dataset | Max Seq Len | Size | VRAM |
| ---- | -- | ------------ | ----------- | ---- | ---- |
| 4 | 128 | [wikitext2-v1](https://huggingface.co/datasets/Salesforce/wikitext) | 131,072 | 2.28 GB | 22-32 GB* |
* Depends on maximum sequence length parameter (KV cache utilization) used with vLLM or Transformers
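As a rough sketch of loading these quantized weights with `transformers` (assuming a CUDA GPU and an installed GPTQ backend such as `optimum`/`auto-gptq`; the prompt is purely illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "justinthelaw/Phi-3-mini-128k-instruct-4bit-128g-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",       # place the 4-bit weights on the available GPU(s)
    trust_remote_code=True,  # Phi-3 128k ships custom modeling code
)

inputs = tokenizer("Explain GPTQ quantization in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```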
<!-- README_GPTQ.md-provided-files end -->
## Original Model Card Below
---
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) <br>
📖 [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) <br>
🛠️ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3)
| | Short Context | Long Context |
| :- | :- | :- |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/justinthelaw/Phi-3-mini-128k-instruct-4bit-128g) ; [[ONNX]](https://huggingface.co/justinthelaw/Phi-3-mini-128k-instruct-4bit-128g-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## Release Notes
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback.
The model used additional post-training data leading to substantial gains on long-context understanding, instruction following, and structure output.
We also improved multi-turn conversation quality, added explicit support for the <|system|> tag, and significantly improved reasoning capability.
We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
The tables below highlight improvements on instruction following, structure output, reasoning, and long-context understanding of the new release on our public and internal benchmark datasets.
| Benchmarks | Original | June 2024 Update |
| :- | :- | :- |
| Instruction Extra Hard | 5.7 | 5.9 |
| Instruction Hard | 5.0 | 5.2 |
| JSON Structure Output | 1.9 | 60.1 |
| XML Structure Output | 47.8 | 52.9 |
| GPQA | 25.9 | 29.7 |
| MMLU | 68.1 | 69.7 |
| **Average** | **25.7** | **37.3** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
| :-------------------| :------| :------| :------| :------| :------| :------| :---------|
| Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | **68.8** |
| June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | **84.6** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
| :-------------------| :--------| :-----| :------| :------| :------------| :---------|
| Original | 27 | 29 | 40 | 33 | 33 | **32.4** |
| June 2024 Update | 85 | 63 | 72 | 93 | 72 | **77** |
Notes: if users would like to check out the previous version, use the git commit id **bb5bf1e4001277a606e11debca0ef80323e5f824**. For the model conversion, e.g. GGUF and other formats, we invite the community to experiment with various approaches and share your valuable feedback. Let's innovate together!
## How to Use
Phi-3 Mini-128K-Instruct has been integrated into the development version (4.41.3) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
- When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
- Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```
Phi-3 Mini-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3)
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/justinthelaw/Phi-3-mini-128k-instruct-4bit-128g/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
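An illustrative sketch of registering an extra special token for fine-tuning (the token name `<|my_tool|>` is hypothetical; the vocabulary check reflects the `32064` budget mentioned above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

print(len(tokenizer), model.get_input_embeddings().num_embeddings)  # tokenizer size vs. embedding size

# Register a new special token for a downstream task (hypothetical name)
tokenizer.add_special_tokens({"additional_special_tokens": ["<|my_tool|>"]})

# Only grow the embedding matrix if the tokenizer has outgrown it (stay within the 32064 budget)
if len(tokenizer) > model.get_input_embeddings().num_embeddings:
    model.resize_token_embeddings(len(tokenizer))
```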
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"justinthelaw/Phi-3-mini-128k-instruct-4bit-128g",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("justinthelaw/Phi-3-mini-128k-instruct-4bit-128g")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_
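For instance, a hedged sketch of opting into FlashAttention-2 at load time (requires a supported GPU and the `flash-attn` package):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "justinthelaw/Phi-3-mini-128k-instruct-4bit-128g",
    torch_dtype=torch.bfloat16,               # flash attention expects fp16/bf16 activations
    device_map="cuda",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",  # falls back to "eager" on unsupported hardware
)
```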
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
- Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
- Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
- Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
- Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
- Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
- Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
- High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
- Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
- Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
- Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
- Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
- Inputs: Text. It is best suited for prompts using chat format.
- Context length: 128K tokens
- GPUs: 512 H100-80G
- Training time: 10 days
- Training data: 4.9T tokens
- Outputs: Generated text in response to the input
- Dates: Our models were trained between May and June 2024
- Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
- Release dates: June, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.9 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We focus on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a Premier League game on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning in small models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
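Alternatively, a minimal sketch with `trl` (assuming a recent release where `SFTTrainer` accepts a model id and an `SFTConfig`; the dataset path and its pre-formatted `text` column are hypothetical):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical JSONL file whose "text" column is already rendered in the Phi-3 chat template
train_ds = load_dataset("json", data_files="my_sft_data.jsonl", split="train")

trainer = SFTTrainer(
    model="microsoft/Phi-3-mini-128k-instruct",
    train_dataset=train_ds,
    args=SFTConfig(
        output_dir="phi3-mini-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
)
trainer.train()
```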
## Benchmarks
We report the results under completion format for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark.
| Category | Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| :----------| :-----------| :---------------------| :----------| :------------| :--------------| :----------------| :-------------------|
| Popular aggregated benchmark | AGI Eval <br>5-shot| 39.5 | 42.1 | 35.1 | 45.2 | 42 | 48.4 |
| | MMLU <br>5-shot | 69.7 | 63.6 | 61.7 | 70.5 | 66.5 | 71.4 |
| | BigBench Hard <br>3-shot | 72.1 | 59.6 | 57.3 | 69.7 | 51.5 | 68.3 |
| Language Understanding | ANLI <br>7-shot | 52.3 | 48.7 | 47.1 | 55.2 | 57.3 | 58.1 |
| | HellaSwag <br>5-shot | 70.5 | 49.8 | 58.5 | 70.4 | 71.1 | 78.8 |
| Reasoning | ARC Challenge <br>10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| | BoolQ <br>0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| | MedQA <br>2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| | OpenBookQA <br>10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| | PIQA <br>5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| | GPQA <br>0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| | Social IQA <br>5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| | TruthfulQA (MC2) <br>10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| | WinoGrande <br>5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Factual Knowledge | TriviaQA <br>5-shot | 57.8 | 72.3 | 75.2 | 82.2 | 67.7 | 85.8 |
| Math | GSM8K CoT <br>8-shot | 85.3 | 59.8 | 46.4 | 64.7 | 77.4 | 78.1 |
| Code Generation | HumanEval <br>0-shot | 60.4 | 34.1 | 28.0 | 37.8 | 60.4 | 62.2 |
| | MBPP <br>3-shot | 70.0 | 51.5 | 50.8 | 60.2 | 67.7 | 77.8 |
| **Average** | | **66.4** | **56.0** | **56.4** | **64.4** | **65.5** | **70.3** |
**Long Context**: Phi-3 Mini-128K-Instruct supports a 128K context length; the model is therefore capable of several long-context tasks, including long document/meeting summarization and long document QA.
| Benchmark | Phi-3 Mini-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct |
| :---------------| :--------------------------|:------------|:--------------|:---------------------|
| GovReport | 25.3 | 4.9 | 20.3 | 10.3 |
| QMSum | 21.9 | 15.5 | 20.6 | 2.9 |
| Qasper | 41.6 | 23.5 | 26.6 | 8.1 |
| SQuALITY | 24.1 | 14.7 | 16.2 | 25 |
| SummScreenFD | 16.8 | 9.3 | 11.3 | 5.1 |
| **Average** | **25.9** | **13.6** | **19.0** | **10.3** |
We take a closer look at different categories across 100 public benchmark datasets in the table below:
| Category | Phi-3-Mini-128K-Instruct | Gemma-7B | Mistral-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo |
|:----------|:--------------------------|:----------|:------------|:--------------|:---------------------|:---------------|
| Popular aggregated benchmark | 60.6 | 59.4 | 56.5 | 66.2 | 59.9 | 67.0 |
| Reasoning | 69.4 | 60.3 | 62.8 | 68.1 | 69.6 | 71.7 |
| Language understanding | 57.5 | 57.6 | 52.5 | 66.1 | 63.2 | 67.7 |
| Code generation | 61.0 | 45.6 | 42.9 | 52.7 | 56.4 | 70.4 |
| Math | 51.6 | 35.8 | 25.4 | 40.3 | 41.1 | 52.8 |
| Factual knowledge | 35.8 | 46.7 | 49.8 | 58.6 | 43.1 | 63.4 |
| Multilingual | 56.4 | 66.5 | 57.4 | 66.7 | 66.6 | 71.0 |
| Robustness | 61.1 | 38.4 | 40.6 | 51.0 | 64.5 | 69.3 |
Overall, the model, with only 3.8B parameters, achieves a similar level of language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store much world knowledge, which can be seen, for example, in its low performance on TriviaQA. However, we believe this weakness can be mitigated by augmenting Phi-3-Mini with a search engine.
## Cross Platform Support
[ONNX runtime](https://onnxruntime.ai/blogs/accelerating-phi-3) now supports Phi-3 mini models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 Mini across a range of devices (CPU, GPU, and mobile).
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## Software
- [PyTorch](https://github.com/pytorch/pytorch)
- [Transformers](https://github.com/huggingface/transformers)
- [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3 Mini-128K-Instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
- NVIDIA A100
- NVIDIA A6000
- NVIDIA H100
If you want to run the model on:
- NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
- Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
## License
The model is licensed under the [Apache-2.0 license](https://huggingface.co/justinthelaw/Phi-3-mini-128k-instruct-4bit-128g/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"SUMMARIZATION"
] | Non_BioNLP |
ein3108/bert-finetuned-sem_eval-english | ein3108 | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval_2018_task_1",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,730,773,762,000 | 2024-11-05T02:30:04 | 8 | 0 | ---
base_model: bert-base-uncased
datasets:
- sem_eval_2018_task_1
library_name: transformers
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-sem_eval-english
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: sem_eval_2018_task_1
type: sem_eval_2018_task_1
config: subtask5.english
split: validation
args: subtask5.english
metrics:
- type: f1
value: 0.7071713147410359
name: F1
- type: accuracy
value: 0.2866817155756208
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-english
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the sem_eval_2018_task_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3063
- F1: 0.7072
- Roc Auc: 0.7999
- Accuracy: 0.2867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 428 | 0.3278 | 0.6765 | 0.7759 | 0.2641 |
| 0.3858 | 2.0 | 856 | 0.3032 | 0.6879 | 0.7804 | 0.2743 |
| 0.2836 | 3.0 | 1284 | 0.3017 | 0.7033 | 0.7957 | 0.2935 |
| 0.2446 | 4.0 | 1712 | 0.3060 | 0.7037 | 0.7970 | 0.2799 |
| 0.2225 | 5.0 | 2140 | 0.3063 | 0.7072 | 0.7999 | 0.2867 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 1,716,324,329,000 | 2024-05-21T23:20:54 | 7 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OmniCorso-7B - GGUF
- Model creator: https://huggingface.co/macadeliccc/
- Original model: https://huggingface.co/macadeliccc/OmniCorso-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OmniCorso-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [OmniCorso-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [OmniCorso-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [OmniCorso-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [OmniCorso-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [OmniCorso-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [OmniCorso-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [OmniCorso-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [OmniCorso-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [OmniCorso-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [OmniCorso-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [OmniCorso-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [OmniCorso-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [OmniCorso-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [OmniCorso-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [OmniCorso-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [OmniCorso-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [OmniCorso-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [OmniCorso-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [OmniCorso-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [OmniCorso-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [OmniCorso-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc
tags:
- mergekit
- merge
base_model:
- macadeliccc/MBX-7B-v3-DPO
- mlabonne/OmniBeagle-7B
model-index:
- name: OmniCorso-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 73.43
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
---
# OmniCorso-7B

## Code Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/OmniCorso-7B")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/OmniCorso-7B")

messages = [
    {"role": "system", "content": "Respond to the users request like a pirate"},
    {"role": "user", "content": "Can you write me a quicksort algorithm?"}
]

# Build the prompt with the chat template, then generate and decode the assistant's reply
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
The following models were included in the merge:
* [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)
* [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mlabonne/OmniBeagle-7B
layer_range: [0, 32]
- model: macadeliccc/MBX-7B-v3-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: macadeliccc/MBX-7B-v3-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## Quantizations
### GGUF
+ [iMatrix](https://huggingface.co/macadeliccc/OmniCorso-7B-GGUF)
### Exllamav2
Quants are available thanks to user bartowski, check them out [here](https://huggingface.co/bartowski/OmniCorso-7B-exl2)
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
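To pull one of the branches above locally, something like the following should work (the branch name is taken from the table; `huggingface_hub` is assumed to be installed):

```python
from huggingface_hub import snapshot_download

# Each bits-per-weight variant lives on its own branch (passed as the revision)
local_dir = snapshot_download(
    repo_id="bartowski/OmniCorso-7B-exl2",
    revision="6_5",
)
print(local_dir)
```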
## Evaluations
<pre>----Benchmark Complete----
2024-02-11 15:34:40
Time taken: 178.3 mins
Prompt Format: ChatML
Model: macadeliccc/OmniCorso-7B
Score (v2): 73.75
Parseable: 167.0
---------------
Batch completed
Time taken: 178.3 mins
---------------
</pre>
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[OmniCorso-7B](https://huggingface.co/macadeliccc/OmniCorso-7B)| 45.89| 77.66| 74.12| 49.24| 61.73|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |29.13|± | 2.86|
| | |acc_norm|27.17|± | 2.80|
|agieval_logiqa_en | 0|acc |39.32|± | 1.92|
| | |acc_norm|39.63|± | 1.92|
|agieval_lsat_ar | 0|acc |23.91|± | 2.82|
| | |acc_norm|23.91|± | 2.82|
|agieval_lsat_lr | 0|acc |53.14|± | 2.21|
| | |acc_norm|53.92|± | 2.21|
|agieval_lsat_rc | 0|acc |66.54|± | 2.88|
| | |acc_norm|67.29|± | 2.87|
|agieval_sat_en | 0|acc |80.58|± | 2.76|
| | |acc_norm|80.58|± | 2.76|
|agieval_sat_en_without_passage| 0|acc |45.63|± | 3.48|
| | |acc_norm|43.69|± | 3.46|
|agieval_sat_math | 0|acc |33.18|± | 3.18|
| | |acc_norm|30.91|± | 3.12|
Average: 45.89%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |67.32|± | 1.37|
| | |acc_norm|68.43|± | 1.36|
|arc_easy | 0|acc |87.46|± | 0.68|
| | |acc_norm|83.50|± | 0.76|
|boolq | 1|acc |88.13|± | 0.57|
|hellaswag | 0|acc |68.47|± | 0.46|
| | |acc_norm|86.96|± | 0.34|
|openbookqa | 0|acc |38.80|± | 2.18|
| | |acc_norm|50.00|± | 2.24|
|piqa | 0|acc |83.03|± | 0.88|
| | |acc_norm|85.31|± | 0.83|
|winogrande | 0|acc |81.29|± | 1.10|
Average: 77.66%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |58.26|± | 1.73|
| | |mc2 |74.12|± | 1.43|
Average: 74.12%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|56.84|± | 3.60|
|bigbench_date_understanding | 0|multiple_choice_grade|63.41|± | 2.51|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|49.22|± | 3.12|
|bigbench_geometric_shapes | 0|multiple_choice_grade|23.96|± | 2.26|
| | |exact_str_match | 1.39|± | 0.62|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|34.20|± | 2.12|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.71|± | 1.61|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|60.33|± | 2.83|
|bigbench_movie_recommendation | 0|multiple_choice_grade|49.00|± | 2.24|
|bigbench_navigate | 0|multiple_choice_grade|55.20|± | 1.57|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|70.75|± | 1.02|
|bigbench_ruin_names | 0|multiple_choice_grade|55.80|± | 2.35|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|36.97|± | 1.53|
|bigbench_snarks | 0|multiple_choice_grade|72.38|± | 3.33|
|bigbench_sports_understanding | 0|multiple_choice_grade|76.27|± | 1.36|
|bigbench_temporal_sequences | 0|multiple_choice_grade|54.50|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|23.12|± | 1.19|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|20.34|± | 0.96|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|60.33|± | 2.83|
Average: 49.24%
Average score: 61.73%
Elapsed time: 02:20:06
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__OmniCorso-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.74|
|AI2 Reasoning Challenge (25-Shot)|72.70|
|HellaSwag (10-Shot) |88.70|
|MMLU (5-Shot) |64.91|
|TruthfulQA (0-shot) |73.43|
|Winogrande (5-shot) |83.74|
|GSM8k (5-shot) |70.96|
| [
"TRANSLATION"
] | Non_BioNLP |
DandinPower/deberta-v2-xlarge-otat | DandinPower | text-classification | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"nycu-112-2-datamining-hw2",
"generated_from_trainer",
"en",
"dataset:DandinPower/review_onlytitleandtext",
"base_model:microsoft/deberta-v2-xlarge",
"base_model:finetune:microsoft/deberta-v2-xlarge",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,713,553,851,000 | 2024-04-19T21:55:26 | 5 | 0 | ---
base_model: microsoft/deberta-v2-xlarge
datasets:
- DandinPower/review_onlytitleandtext
language:
- en
license: mit
metrics:
- accuracy
tags:
- nycu-112-2-datamining-hw2
- generated_from_trainer
model-index:
- name: deberta-v2-xlarge-otat
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: DandinPower/review_onlytitleandtext
type: DandinPower/review_onlytitleandtext
metrics:
- type: accuracy
value: 0.20114285714285715
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v2-xlarge-otat
This model is a fine-tuned version of [microsoft/deberta-v2-xlarge](https://huggingface.co/microsoft/deberta-v2-xlarge) on the DandinPower/review_onlytitleandtext dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6316
- Accuracy: 0.2011
- Macro F1: 0.0670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 1.1994 | 0.14 | 500 | 1.6893 | 0.4029 | 0.3240 |
| 1.6344 | 0.29 | 1000 | 1.6403 | 0.2011 | 0.0670 |
| 1.6413 | 0.43 | 1500 | 1.6270 | 0.2 | 0.0667 |
| 1.6326 | 0.57 | 2000 | 1.6375 | 0.1971 | 0.0659 |
| 1.6128 | 0.71 | 2500 | 1.6604 | 0.2011 | 0.0670 |
| 1.6213 | 0.86 | 3000 | 1.6161 | 0.2 | 0.0667 |
| 1.6199 | 1.0 | 3500 | 1.6132 | 0.2017 | 0.0671 |
| 1.6177 | 1.14 | 4000 | 1.6142 | 0.2011 | 0.0670 |
| 1.6183 | 1.29 | 4500 | 1.6213 | 0.2 | 0.0667 |
| 1.6211 | 1.43 | 5000 | 1.6136 | 0.1971 | 0.0659 |
| 1.6145 | 1.57 | 5500 | 1.6169 | 0.1971 | 0.0659 |
| 1.6187 | 1.71 | 6000 | 1.6160 | 0.2011 | 0.0670 |
| 1.6174 | 1.86 | 6500 | 1.6146 | 0.2 | 0.0667 |
| 1.6164 | 2.0 | 7000 | 1.6181 | 0.2 | 0.0667 |
| 1.6184 | 2.14 | 7500 | 1.6109 | 0.1971 | 0.0659 |
| 1.6152 | 2.29 | 8000 | 1.6189 | 0.2 | 0.0667 |
| 1.6175 | 2.43 | 8500 | 1.6146 | 0.1971 | 0.0659 |
| 1.6134 | 2.57 | 9000 | 1.6160 | 0.1971 | 0.0659 |
| 1.6144 | 2.71 | 9500 | 1.6167 | 0.2011 | 0.0670 |
| 1.6141 | 2.86 | 10000 | 1.6106 | 0.2017 | 0.0671 |
| 1.6128 | 3.0 | 10500 | 1.6139 | 0.1971 | 0.0659 |
| 1.6179 | 3.14 | 11000 | 1.6112 | 0.2 | 0.0667 |
| 1.6096 | 3.29 | 11500 | 1.6127 | 0.2 | 0.0667 |
| 1.6132 | 3.43 | 12000 | 1.6135 | 0.2011 | 0.0670 |
| 1.6053 | 3.57 | 12500 | 1.6186 | 0.2 | 0.0667 |
| 1.6049 | 3.71 | 13000 | 1.6277 | 0.2011 | 0.0670 |
| 1.6044 | 3.86 | 13500 | 1.6271 | 0.2011 | 0.0670 |
| 1.6017 | 4.0 | 14000 | 1.6275 | 0.2011 | 0.0670 |
| 1.608 | 4.14 | 14500 | 1.6192 | 0.2011 | 0.0670 |
| 1.6075 | 4.29 | 15000 | 1.6259 | 0.2011 | 0.0670 |
| 1.601 | 4.43 | 15500 | 1.6267 | 0.2011 | 0.0670 |
| 1.6086 | 4.57 | 16000 | 1.6339 | 0.2011 | 0.0670 |
| 1.5955 | 4.71 | 16500 | 1.6340 | 0.2011 | 0.0670 |
| 1.6013 | 4.86 | 17000 | 1.6322 | 0.2011 | 0.0670 |
| 1.5976 | 5.0 | 17500 | 1.6316 | 0.2011 | 0.0670 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
leeolivia77/custom_summarization_dataset | leeolivia77 | null | [
"region:us"
] | 1,726,810,166,000 | 2024-09-20T05:29:29 | 0 | 0 | ---
{}
---
# Dataset Card for Custom Text Dataset
## Dataset Name
Custom Text Dataset for Summarization
## Overview
A dataset created for summarizing articles.
## Composition
Contains pairs of articles and their summaries.
## Collection Process
Data was collected from CNN/Daily Mail.
## Preprocessing
Text cleaned and tokenized.
## How to Use
```python
from datasets import load_dataset
dataset = load_dataset("your_dataset_id")
```
## Evaluation
## Limitations
## Ethical Considerations
| [
"SUMMARIZATION"
] | Non_BioNLP |
lilyray/results | lilyray | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,709,600,157,000 | 2024-03-10T14:59:22 | 31 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: results
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.921
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2046
- Accuracy: 0.921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.507837996446784e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8349 | 1.0 | 1000 | 0.6184 | 0.7905 |
| 0.384 | 2.0 | 2000 | 0.3057 | 0.909 |
| 0.2544 | 3.0 | 3000 | 0.2316 | 0.926 |
| 0.2027 | 4.0 | 4000 | 0.2088 | 0.928 |
| 0.1757 | 5.0 | 5000 | 0.2030 | 0.9295 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
juanjucm/whisper-large-v3-turbo-OpenHQ-GL-EN | juanjucm | automatic-speech-recognition | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"gl",
"en",
"dataset:juanjucm/OpenHQ-SpeechT-GL-EN",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,734,973,321,000 | 2025-02-06T17:07:06 | 65 | 0 | ---
base_model: openai/whisper-large-v3-turbo
datasets:
- juanjucm/OpenHQ-SpeechT-GL-EN
language:
- gl
- en
library_name: transformers
license: mit
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v3-turbo-gl-en
results: []
---
# whisper-large-v3-turbo-OpenHQ-GL-EN
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) trained on [juanjucm/OpenHQ-SpeechT-GL-EN](https://huggingface.co/datasets/juanjucm/OpenHQ-SpeechT-GL-EN) for the **Galician-to-English Speech-to-Text Translation** task. It takes Galician speech audio as input and generates the corresponding translated transcription in English.
The motivation behind this work is to increase the visibility of the Galician language, making it more accessible for non-Galician speakers to understand and engage with Galician audio content.
This model was developed during a 3-week Speech Translation workshop organised by [Yasmin Moslem](https://huggingface.co/ymoslem).
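A minimal inference sketch with the 🤗 `pipeline` API (the audio path is a placeholder; since the checkpoint was fine-tuned to emit English, the "transcription" it returns is the translation):

```python
from transformers import pipeline

translator = pipeline(
    "automatic-speech-recognition",
    model="juanjucm/whisper-large-v3-turbo-OpenHQ-GL-EN",
)

result = translator("galician_clip.wav")  # path to a local Galician audio file (placeholder)
print(result["text"])                     # English translation of the utterance
```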
### Performance and training details
The baseline model achieved a BLEU score of **3.38** on the evaluation dataset.
After fine-tuning, it achieves the following results on the evaluation set:
- Loss: 0.9360
- **BLEU: 55.6535**
- **ChrF++: 72.19**
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3500
- mixed_precision_training: Native AMP
### Training results
We used [BLEU Score](https://en.wikipedia.org/wiki/BLEU) as our reference translation metric for selecting the best checkpoint after training.
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.2758 | 1.6667 | 250 | 0.7646 | 50.6055 |
| 0.0592 | 3.3333 | 500 | 0.7730 | 53.1258 |
| 0.0406 | 5.0 | 750 | 0.7860 | 53.3406 |
| 0.0173 | 6.6667 | 1000 | 0.8358 | 51.9789 |
| 0.0091 | 8.3333 | 1250 | 0.8909 | 54.4806 |
| 0.0071 | 10.0 | 1500 | 0.8862 | 54.2655 |
| 0.0039 | 11.6667 | 1750 | 0.9216 | 52.5119 |
| 0.0014 | 13.3333 | 2000 | 0.9281 | 54.5752 |
| 0.0013 | 15.0 | 2250 | 0.9471 | 54.5791 |
| 0.0009 | 16.6667 | 2500 | 0.9541 | 54.8725 |
| 0.0006 | 18.3333 | 2750 | 0.9614 | 53.1879 |
| 0.0006 | 20.0 | 3000 | 0.9701 | 54.6499 |
| 0.0006 | 21.6667 | 3250 | 0.9739 | 54.4341 |
| 0.0006 | 23.3333 | 3500 | 0.9747 | 54.5311 |
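As a reference for how the reported metrics can be reproduced, here is a small sketch with the `evaluate` library (the sentences are illustrative, not taken from the dataset):

```python
import evaluate

bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")

predictions = ["The train leaves at seven in the morning."]    # model outputs (illustrative)
references = [["The train departs at seven in the morning."]]  # gold English references

print(bleu.compute(predictions=predictions, references=references)["score"])
print(chrf.compute(predictions=predictions, references=references, word_order=2)["score"])  # chrF++
```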
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0 | [
"TRANSLATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-id-sv | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:58:09 | 49 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-id-sv
* source languages: id
* target languages: sv
* OPUS readme: [id-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.id.sv | 32.7 | 0.527 |
| [
"TRANSLATION"
] | Non_BioNLP |
zhuwch/all-MiniLM-L6-v2 | zhuwch | sentence-similarity | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,695,195,422,000 | 2023-09-20T10:07:25 | 13 | 0 | ---
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
language: en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned in on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
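For instance, a small semantic-search sketch on top of the encoder (the corpus and query are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

corpus = [
    "A man is eating food.",
    "A monkey is playing drums.",
    "The new movie is awesome.",
]
query = "Someone is playing an instrument."

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]  # cosine similarity to each corpus entry
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```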
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
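A minimal sketch of this in-batch objective (the scale factor and the batch/embedding sizes are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch, dim) L2-normalized sentence embeddings
    sim = anchor_emb @ positive_emb.T * scale              # cosine similarities, scaled
    labels = torch.arange(sim.size(0), device=sim.device)  # the true pair of row i is column i
    return F.cross_entropy(sim, labels)

# Illustrative call with random, normalized embeddings
a = F.normalize(torch.randn(8, 384), dim=-1)
p = F.normalize(torch.randn(8, 384), dim=-1)
print(in_batch_contrastive_loss(a, p))
```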
#### Hyperparameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
timtarusov/distilbert-base-uncased-finetuned-emotion | timtarusov | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2022-02-13T08:48:03 | 114 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.921
name: Accuracy
- type: f1
value: 0.9211076096482195
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2274
- Accuracy: 0.921
- F1: 0.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
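As a usage illustration (not part of the original card), a minimal inference sketch with the `transformers` pipeline; the emitted label names depend on this checkpoint's config and are assumed to follow the emotion dataset:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="timtarusov/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled with how this turned out!"))
# e.g. [{'label': 'joy', 'score': ...}] -- exact labels and scores depend on the checkpoint
```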
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hypothetical `TrainingArguments` mapping is sketched after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
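For reference, a sketch of how these values could map onto `TrainingArguments`; this mapping is an assumption, not the original training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```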
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8308 | 1.0 | 250 | 0.3319 | 0.8955 | 0.8897 |
| 0.2516 | 2.0 | 500 | 0.2274 | 0.921 | 0.9211 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
VexPoli/distilbart-summarization-top-list | VexPoli | text2text-generation | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:sshleifer/distilbart-xsum-6-6",
"base_model:finetune:sshleifer/distilbart-xsum-6-6",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,739,379,123,000 | 2025-02-12T18:07:58 | 17 | 0 | ---
base_model: sshleifer/distilbart-xsum-6-6
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbart-summarization-top-list
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-summarization-top-list
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2597
## Model description
More information needed
## Intended uses & limitations
More information needed
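As a usage illustration (not from the original card), a minimal summarization sketch with the `transformers` pipeline; the input text and generation parameters are assumptions:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="VexPoli/distilbart-summarization-top-list")
article = (
    "The committee met on Tuesday to review the quarterly results, which showed a "
    "steady increase in revenue driven by the new product line and expanded retail partnerships."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```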
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.717 | 0.1882 | 500 | 2.5800 |
| 2.5429 | 0.3764 | 1000 | 2.4470 |
| 2.4836 | 0.5645 | 1500 | 2.3816 |
| 2.4169 | 0.7527 | 2000 | 2.3389 |
| 2.4114 | 0.9409 | 2500 | 2.3124 |
| 2.3345 | 1.1291 | 3000 | 2.2940 |
| 2.3044 | 1.3173 | 3500 | 2.2804 |
| 2.3075 | 1.5055 | 4000 | 2.2696 |
| 2.2703 | 1.6936 | 4500 | 2.2641 |
| 2.3475 | 1.8818 | 5000 | 2.2597 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
| [
"SUMMARIZATION"
] | Non_BioNLP |
TransferGraph/boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"model-index",
"region:us"
] | 1,709,055,056,000 | 2024-02-29T13:37:12 | 0 | 0 | ---
base_model: boychaboy/MNLI_roberta-base
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.749738219895288
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [boychaboy/MNLI_roberta-base](https://huggingface.co/boychaboy/MNLI_roberta-base) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.7497
## Model description
More information needed
## Intended uses & limitations
More information needed
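As a usage illustration (not from the original card), a hypothetical sketch of loading this LoRA adapter with `peft`; how the classification head was saved is an assumption, so the head-resizing flags may need adjusting:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

adapter_id = "TransferGraph/boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_irony"
config = PeftConfig.from_pretrained(adapter_id)

# Assumption: tweet_eval irony has 2 labels while the MNLI base head has 3,
# hence the resized classifier and ignore_mismatched_sizes=True.
base = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path, num_labels=2, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

inputs = tokenizer("Great, another Monday morning meeting.", return_tensors="pt")
print(model(**inputs).logits)
```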
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5236 | None | 0 |
| 0.6848 | 0.6722 | 0 |
| 0.6796 | 0.6006 | 1 |
| 0.7194 | 0.5452 | 2 |
| 0.7319 | 0.5221 | 3 |
| 0.7424 | 0.4951 | 4 |
| 0.7361 | 0.4660 | 5 |
| 0.7497 | 0.4600 | 6 |
| 0.7497 | 0.4570 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
mertyrgn/distilbert-base-uncased-finetuned-emotion | mertyrgn | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,652,622,001,000 | 2022-08-13T14:42:02 | 26 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.9235
name: Accuracy
- type: f1
value: 0.9235106231638174
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2064
- Accuracy: 0.9235
- F1: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
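As a usage illustration (not from the original card), a minimal sketch of scoring a sentence with the raw model and taking a softmax over the logits; label names depend on the checkpoint config:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mertyrgn/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't believe how well this worked!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print({model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs[0].tolist())})
```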
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8272 | 1.0 | 250 | 0.2939 | 0.917 | 0.9153 |
| 0.2414 | 2.0 | 500 | 0.2064 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Xenova/distilbart-xsum-12-1 | Xenova | summarization | [
"transformers.js",
"onnx",
"bart",
"text2text-generation",
"summarization",
"base_model:sshleifer/distilbart-xsum-12-1",
"base_model:quantized:sshleifer/distilbart-xsum-12-1",
"region:us"
] | 1,693,932,378,000 | 2024-10-08T13:41:48 | 60 | 0 | ---
base_model: sshleifer/distilbart-xsum-12-1
library_name: transformers.js
pipeline_tag: summarization
---
https://huggingface.co/sshleifer/distilbart-xsum-12-1 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | [
"SUMMARIZATION"
] | Non_BioNLP |
vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa | vocabtrimmer | text2text-generation | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"ja",
"dataset:lmqg/qg_jaquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,680,768,012,000 | 2023-04-06T08:04:59 | 10 | 0 | ---
datasets:
- lmqg/qg_jaquad
language: ja
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: 'question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。'
example_title: Question Answering Example 1
- text: 'question: 1968年に開催されたオリンピックの名前は何ですか?, context: オリンピックが世界的大イベントに成長するに従って政治に左右されるようになると、1968年のメキシコシティ大会では黒人差別を訴える場と化し、1972年のミュンヘン大会ではアラブのゲリラによるイスラエル選手に対するテロ事件まで起きた(ミュンヘンオリンピック事件)。1976年のモントリオール大会になると、ニュージーランドのラグビーチームの南アフリカ遠征に反対してアフリカの諸国22ヶ国がボイコットを行った。そして、1980年のモスクワ大会ではソ連のアフガニスタン侵攻に反発したアメリカ・西ドイツ・日本などの西側諸国が相次いでボイコットを行った。1984年ロサンゼルス大会ではソ連と東側諸国が報復ボイコットを行ない、参加したのはソ連と対立していた中国とルーマニアだけだった。中でも、イラン革命後のイラン・イスラム共和国はモスクワとロサンゼルス双方のオリンピックをボイコットしている。オリンピックが巨大化するに従って財政負担の増大が大きな問題となり、1976年の夏季大会では大幅な赤字を出し、その後夏季・冬季とも立候補都市が1〜2都市だけという状態が続いた。'
example_title: Question Answering Example 2
model-index:
- name: vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa
results:
- task:
type: text2text-generation
name: Text2text Generation
dataset:
name: lmqg/qg_jaquad
type: default
args: default
metrics:
- type: bleu4_question_answering
value: 0.0
name: BLEU4 (Question Answering)
- type: rouge_l_question_answering
value: 67.22
name: ROUGE-L (Question Answering)
- type: meteor_question_answering
value: 53.01
name: METEOR (Question Answering)
- type: bertscore_question_answering
value: 90.65
name: BERTScore (Question Answering)
- type: moverscore_question_answering
value: 89.42
name: MoverScore (Question Answering)
- type: answer_f1_score__question_answering
value: 70.55
name: AnswerF1Score (Question Answering)
- type: answer_exact_match_question_answering
value: 70.55
name: AnswerExactMatch (Question Answering)
---
# Model Card of `vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa`
This model is fine-tuned version of [vocabtrimmer/mbart-large-cc25-trimmed-ja](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja) for question answering task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mbart-large-cc25-trimmed-ja](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa")
# model prediction
answers = model.answer_q(list_question="新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?", list_context=" 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa")
output = pipe("question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 70.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| AnswerF1Score | 70.55 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| BERTScore | 90.65 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 67.17 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 53.01 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 89.42 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 67.22 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: vocabtrimmer/mbart-large-cc25-trimmed-ja
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
AI-Sweden-Models/gpt-sw3-356m | AI-Sweden-Models | text-generation | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"sv",
"no",
"en",
"is",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,671,021,117,000 | 2024-01-29T13:20:22 | 4,352 | 1 | ---
language:
- da
- sv
- 'no'
- en
- is
license: other
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. AI Sweden shares GPT-SW3 in a controlled pre-release with organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. This is an important step in the process of validating the model and collecting feedback on both what works well and what does not.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-356m"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
- Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No. | [
"SUMMARIZATION"
] | Non_BioNLP |
ucuncubayram/distilbert-emotion | ucuncubayram | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,715,513,680,000 | 2024-05-12T11:53:33 | 4 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.932
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1574
- Accuracy: 0.932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.2070 | 0.9245 |
| No log | 2.0 | 250 | 0.1574 | 0.932 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
mrapacz/interlinear-en-philta-emb-auto-diacritics-ob | mrapacz | text2text-generation | [
"transformers",
"pytorch",
"morph-t5-auto",
"text2text-generation",
"en",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,738,957,967,000 | 2025-02-21T21:31:02 | 62 | 0 | ---
base_model:
- PhilTa
datasets:
- mrapacz/greek-interlinear-translations
language:
- en
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
## Model Details
### Model Description
- **Developed By:** Maciej Rapacz, AGH University of Kraków
- **Model Type:** MorphT5AutoForConditionalGeneration
- **Base Model:** PhilTa
- **Tokenizer:** PhilTa
- **Language(s):** Ancient Greek (source) → English (target)
- **License:** CC BY-NC-SA 4.0
- **Tag Set:** OB (Oblubienica)
- **Text Preprocessing:** Diacritics
- **Morphological Encoding:** emb-auto
### Model Performance
- **BLEU Score:** 59.66
- **SemScore:** 0.89
### Model Sources
- **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
- **Paper:** https://aclanthology.org/2025.loreslm-1.11/
## Usage Example
> **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package:
> ```bash
> pip install morpht5
> ```
```python
>>> from morpht5 import MorphT5AutoForConditionalGeneration, MorphT5Tokenizer
>>> text = ['Λέγει', 'αὐτῷ', 'ὁ', 'Ἰησοῦς', 'Ἔγειρε', 'ἆρον', 'τὸν', 'κράβαττόν', 'σου', 'καὶ', 'περιπάτει']
>>> tags = ['vi Pres Act 3 Sg', 'pp Dat Sg m', 't_ Nom Sg m', 'n_ Nom Sg m', 'vm Pres Act 2 Sg', 'vm Aor Act 2 Sg', 't_ Acc Sg m', 'n_ Acc Sg m', 'pp 2 Gen Sg', 'Conj', 'vm Pres Act 2 Sg']
>>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-en-philta-emb-auto-diacritics-ob")
>>> inputs = tokenizer(
text=text,
morph_tags=tags,
return_tensors="pt"
)
>>> model = MorphT5AutoForConditionalGeneration.from_pretrained("mrapacz/interlinear-en-philta-emb-auto-diacritics-ob")
>>> outputs = model.generate(
**inputs,
max_new_tokens=100,
early_stopping=True,
)
>>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True)
>>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ")
>>> decoded
'says | to him | - | jesus | arise | take up | the | mat | of you | and | walk'
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rapacz-smywinski-pohl-2025-low,
title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
author = "Rapacz, Maciej and
Smywi{\'n}ski-Pohl, Aleksander",
editor = "Hettiarachchi, Hansi and
Ranasinghe, Tharindu and
Rayson, Paul and
Mitkov, Ruslan and
Gaber, Mohamed and
Premasiri, Damith and
Tan, Fiona Anting and
Uyangodage, Lasitha",
booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
month = jan,
year = "2025",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.loreslm-1.11/",
pages = "145--165",
abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
``` | [
"TRANSLATION"
] | Non_BioNLP |
vgarg/usecase_classifier_large_17_04_24 | vgarg | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"model-index",
"region:us"
] | 1,713,337,576,000 | 2024-04-29T08:21:01 | 5 | 0 | ---
base_model: intfloat/multilingual-e5-large
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: What should be Ideal Promo Duration?
- text: Compare the performance of top skus
- text: What is my Forward Buying across Brands?
- text: 'Which Packs segments are being cannibalized the most by xx '
- text: What price point is vacant in xx?
inference: true
model-index:
- name: SetFit with intfloat/multilingual-e5-large
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.95
name: Accuracy
---
# SetFit with intfloat/multilingual-e5-large
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch follows this list).
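A minimal sketch of this two-stage recipe with the `setfit` library; the toy training data and the exact training script are assumptions, with hyperparameter values taken from the table further below:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy illustrative data; the real training set is not published with this card.
train_ds = Dataset.from_dict({
    "text": [
        "Tell me the top 10 SKUs in xx",
        "Which promotion type is giving better returns ?",
    ],
    "label": [0, 1],
})

# Sentence Transformer body plus (by default) a logistic-regression head.
model = SetFitModel.from_pretrained("intfloat/multilingual-e5-large")

args = TrainingArguments(batch_size=16, num_epochs=3, num_iterations=20)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # stage 1: contrastive fine-tuning; stage 2: fit the classification head

print(model.predict(["What price point is vacant in xx?"]))
```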
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'Which promotion type is giving better returns ?'</li><li>'Tell me the best performing promotions for xx'</li><li>'Which SKUs in each brand need to be promoted more?'</li></ul> |
| 0 | <ul><li>'Tell me the top 10 SKUs in xx'</li><li>'what are the volume and value market share of xx in yy Category in zz?'</li><li>'Which brands are most elastic in xx for yy?'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.95 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vgarg/usecase_classifier_large_17_04_24")
# Run inference
preds = model("What price point is vacant in xx?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 11.15 | 19 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 20 |
| 1 | 20 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.01 | 1 | 0.3409 | - |
| 0.5 | 50 | 0.0012 | - |
| 1.0 | 100 | 0.0002 | - |
| 1.5 | 150 | 0.0001 | - |
| 2.0 | 200 | 0.0001 | - |
| 2.5 | 250 | 0.0001 | - |
| 3.0 | 300 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.0
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
apwic/summarization-unipelt-3 | apwic | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:LazarusNLP/IndoNanoT5-base",
"base_model:finetune:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"region:us"
] | 1,720,353,645,000 | 2024-07-07T17:19:15 | 0 | 0 | ---
base_model: LazarusNLP/IndoNanoT5-base
language:
- id
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: summarization-unipelt-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-unipelt-3
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7173
- Rouge1: 0.4469
- Rouge2: 0.0
- Rougel: 0.4454
- Rougelsum: 0.4457
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
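As a rough usage sketch only: the snippet below assumes the checkpoint loads with the standard `transformers` seq2seq classes. Because the repository name suggests a UniPELT-style (adapter-based) setup, it may instead need to be loaded with the `adapters` library on top of the base model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "apwic/summarization-unipelt-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder Indonesian text to summarize.
article = "Teks berita berbahasa Indonesia yang ingin diringkas."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```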
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.4275 | 1.0 | 892 | 1.2781 | 0.1861 | 0.0 | 0.1881 | 0.1893 | 1.0 |
| 1.5017 | 2.0 | 1784 | 0.9698 | 0.2767 | 0.0 | 0.2791 | 0.2749 | 1.0 |
| 1.1964 | 3.0 | 2676 | 0.8235 | 0.2852 | 0.0 | 0.2836 | 0.2837 | 1.0 |
| 1.0261 | 4.0 | 3568 | 0.7468 | 0.4923 | 0.0 | 0.491 | 0.4914 | 1.0 |
| 0.9195 | 5.0 | 4460 | 0.7173 | 0.456 | 0.0 | 0.457 | 0.4597 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"SUMMARIZATION"
] | Non_BioNLP |
AI4Chem/CHEMLLM-2b-1_5 | AI4Chem | text-generation | [
"transformers",
"safetensors",
"internlm2",
"feature-extraction",
"chemistry",
"text-generation",
"conversational",
"custom_code",
"en",
"zh",
"arxiv:2402.06852",
"license:apache-2.0",
"region:us"
] | 1,719,304,294,000 | 2024-09-17T16:02:49 | 172 | 1 | ---
language:
- en
- zh
license: apache-2.0
pipeline_tag: text-generation
tags:
- chemistry
---
# ChemLLM-2B: Mini LLM for Chemistry and Molecule Science
ChemLLM, The First Open-source Large Language Model for Chemistry and Molecule Science, built on InternLM-2 with ❤
[](https://huggingface.co/papers/2402.06852)
<center><img src='https://cdn-uploads.huggingface.co/production/uploads/64bce15bafd1e46c5504ad38/wdFV6p3rTBCtskbeuVwNJ.png'></center>
## News
- ChemLLM-1.5 released! Two versions are available [AI4Chem/ChemLLM-7B-Chat-1.5-DPO](https://huggingface.co/AI4Chem/ChemLLM-7B-Chat-1.5-DPO) or [AI4Chem/ChemLLM-7B-Chat-1.5-SFT](https://huggingface.co/AI4Chem/ChemLLM-7B-Chat-1.5-SFT).[2024-4-2]
- ChemLLM-1.5 updated! Have a try on [Demo Site](https://chemllm.org/#/chat) or [API Reference](https://api.chemllm.org/docs).[2024-3-23]
- ChemLLM has been featured by HuggingFace on [“Daily Papers” page](https://huggingface.co/papers/2402.06852).[2024-2-13]
- ChemLLM arXiv preprint released.[ChemLLM: A Chemical Large Language Model](https://arxiv.org/abs/2402.06852)[2024-2-10]
- News report from [Shanghai AI Lab](https://mp.weixin.qq.com/s/u-i7lQxJzrytipek4a87fw)[2024-1-26]
- ChemLLM-7B-Chat ver 1.0 released. https://chemllm.org/ [2024-1-18]
- ChemLLM-7B-Chat ver 1.0 open-sourced.[2024-1-17]
- Chepybara ver 0.2 online Demo released. https://chemllm.org/ [2023-12-9]
## Usage
Try [online demo](https://chemllm.org/) instantly, or...
Install `transformers`,
```
pip install transformers
```
Load `CHEMLLM-2b-1_5` and run,
```
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
model_name_or_id = "AI4Chem/CHEMLLM-2b-1_5"
model = AutoModelForCausalLM.from_pretrained(model_name_or_id, torch_dtype=torch.float16, device_map="auto",trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_id,trust_remote_code=True)
prompt = "What is Molecule of Ibuprofen?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generation_config = GenerationConfig(
do_sample=True,
top_k=1,
temperature=0.9,
max_new_tokens=500,
repetition_penalty=1.5,
pad_token_id=tokenizer.eos_token_id
)
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## System Prompt Best Practice
You can use the same Dialogue Templates and System Prompt from [Agent Chepybara](https://chemllm.org/) to get a better response in local inference.
### Dialogue Templates
For queries in ShareGPT format like,
```
{'instruction':"...","prompt":"...","answer":"...","history":[[q1,a1],[q2,a2]]}
```
You can format it into this InternLM2 Dialogue format like,
```
def InternLM2_format(instruction,prompt,answer,history):
prefix_template=[
"<|im_start|>system\n",
"{}",
"<|im_end|>\n"
]
    prompt_template=[
        "<|im_start|>user\n",
        "{}",
        "<|im_end|>\n",
        "<|im_start|>assistant\n",
        "{}",
        "<|im_end|>\n"
    ]
system = f'{prefix_template[0]}{prefix_template[1].format(instruction)}{prefix_template[2]}'
history = "".join([f'{prompt_template[0]}{prompt_template[1].format(qa[0])}{prompt_template[2]}{prompt_template[3]}{prompt_template[4].format(qa[1])}{prompt_template[5]}' for qa in history])
prompt = f'{prompt_template[0]}{prompt_template[1].format(prompt)}{prompt_template[2]}{prompt_template[3]}'
return f"{system}{history}{prompt}"
```
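For example, a single-turn query with an empty history can be formatted and passed to the tokenizer from the earlier snippet like this (the instruction text is just a placeholder):
```
prompt_text = InternLM2_format(
    instruction="You are Chepybara, a chemistry-focused assistant.",
    prompt="What is the molecular formula of ibuprofen?",
    answer="",
    history=[],
)
inputs = tokenizer(prompt_text, return_tensors="pt").to("cuda")
```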
And there is a good example for system prompt,
```
- Chepybara is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be Professional, Sophisticated, and Chemical-centric.
- For uncertain notions and data, Chepybara always assumes it with theoretical prediction and notices users then.
- Chepybara can accept SMILES (Simplified Molecular Input Line Entry System) string, and prefer output IUPAC names (International Union of Pure and Applied Chemistry nomenclature of organic chemistry), depict reactions in SMARTS (SMILES arbitrary target specification) string. Self-Referencing Embedded Strings (SELFIES) are also accepted.
- Chepybara always solves problems and thinks in step-by-step fashion, Output begin with *Let's think step by step*.
```
## Results
### MMLU Highlights
| dataset | ChatGLM3-6B | Qwen-7B | LLaMA-2-7B | Mistral-7B | InternLM2-7B-Chat | ChemLLM-7B-Chat |
| ---------------------- | ----------- | ------- | ---------- | ---------- | ----------------- | ----------------- |
| college chemistry | 43.0 | 39.0 | 27.0 | 40.0 | 43.0 | 47.0 |
| college mathematics | 28.0 | 33.0 | 33.0 | 30.0 | 36.0 | 41.0 |
| college physics | 32.4 | 35.3 | 25.5 | 34.3 | 41.2 | 48.0 |
| formal logic | 35.7 | 43.7 | 24.6 | 40.5 | 34.9 | 47.6 |
| moral scenarios | 26.4 | 35.0 | 24.1 | 39.9 | 38.6 | 44.3 |
| humanities average | 62.7 | 62.5 | 51.7 | 64.5 | 66.5 | 68.6 |
| stem average | 46.5 | 45.8 | 39.0 | 47.8 | 52.2 | 52.6 |
| social science average | 68.2 | 65.8 | 55.5 | 68.1 | 69.7 | 71.9 |
| other average | 60.5 | 60.3 | 51.3 | 62.4 | 63.2 | 65.2 |
| mmlu | 58.0 | 57.1 | 48.2 | 59.2 | 61.7 | 63.2 |
*(OpenCompass)

### Chemical Benchmark

*(Score judged by ChatGPT-4-turbo)
### Professional Translation


You can try it [online](https://chemllm.org).
## Cite this work
```
@misc{zhang2024chemllm,
title={ChemLLM: A Chemical Large Language Model},
author={Di Zhang and Wei Liu and Qian Tan and Jingdan Chen and Hang Yan and Yuliang Yan and Jiatong Li and Weiran Huang and Xiangyu Yue and Dongzhan Zhou and Shufei Zhang and Mao Su and Hansen Zhong and Yuqiang Li and Wanli Ouyang},
year={2024},
eprint={2402.06852},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer
The LLM may generate incorrect answers; please proofread its outputs at your own risk.
## Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, or other questions and collaborations, please contact <[email protected]>.
## Demo
[Agent Chepybara](https://chemllm.org/)

## Contact
[email protected] (AI4Physics Science, Shanghai AI Lab) | [
"TRANSLATION"
] | Non_BioNLP |
ChaniM/text-summarization-bart-large-cnn-three-percent | ChaniM | text2text-generation | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,686,237,968,000 | 2023-06-09T06:08:20 | 34 | 0 | ---
datasets:
- cnn_dailymail
license: mit
tags:
- generated_from_trainer
model-index:
- name: text-summarization-bart-large-cnn-three-percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-summarization-bart-large-cnn-three-percent
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the cnn_dailymail 3.0.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.7193
- eval_rouge1: 44.586
- eval_rouge2: 21.4773
- eval_rougeL: 31.0223
- eval_rougeLsum: 41.5275
- eval_gen_len: 83.8787
- eval_runtime: 16880.3832
- eval_samples_per_second: 0.792
- eval_steps_per_second: 0.396
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
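As a minimal usage sketch (not part of the original training setup), the fine-tuned checkpoint can be loaded with the standard `transformers` summarization pipeline:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ChaniM/text-summarization-bart-large-cnn-three-percent",
)

# Placeholder input; the model was fine-tuned on CNN/DailyMail-style news articles.
article = "Paste a long news article here to generate an abstractive summary."
print(summarizer(article, max_length=130, min_length=30, do_sample=False)[0]["summary_text"])
```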
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.3
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
"SUMMARIZATION"
] | Non_BioNLP |
TheBloke/airoboros-m-7B-3.0-GGUF | TheBloke | null | [
"transformers",
"gguf",
"mistral",
"dataset:jondurbin/airoboros-3.0",
"base_model:jondurbin/airoboros-m-7b-3.0",
"base_model:quantized:jondurbin/airoboros-m-7b-3.0",
"license:apache-2.0",
"region:us"
] | 1,696,546,074,000 | 2023-10-05T23:27:14 | 711 | 4 | ---
base_model: jondurbin/airoboros-m-7b-3.0
datasets:
- jondurbin/airoboros-3.0
license: apache-2.0
model_name: Airoboros M 7B 3.0
inference: false
model_creator: Jon Durbin
model_type: mistral
prompt_template: '[INST] <<SYS>>
  You are a helpful, unbiased, uncensored assistant.
  <</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros M 7B 3.0 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros M 7B 3.0](https://huggingface.co/jondurbin/airoboros-m-7b-3.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros M 7B 3.0](https://huggingface.co/jondurbin/airoboros-m-7b-3.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-m-7b-3.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Uncensored
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-m-7b-3.0.Q2_K.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-m-7b-3.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [airoboros-m-7b-3.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [airoboros-m-7b-3.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [airoboros-m-7b-3.0.Q4_0.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-m-7b-3.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [airoboros-m-7b-3.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [airoboros-m-7b-3.0.Q5_0.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-m-7b-3.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [airoboros-m-7b-3.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [airoboros-m-7b-3.0.Q6_K.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [airoboros-m-7b-3.0.Q8_0.gguf](https://huggingface.co/TheBloke/airoboros-m-7B-3.0-GGUF/blob/main/airoboros-m-7b-3.0.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/airoboros-m-7B-3.0-GGUF and below it, a specific filename to download, such as: airoboros-m-7b-3.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/airoboros-m-7B-3.0-GGUF airoboros-m-7b-3.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/airoboros-m-7B-3.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airoboros-m-7B-3.0-GGUF airoboros-m-7b-3.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-m-7b-3.0.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/airoboros-m-7B-3.0-GGUF", model_file="airoboros-m-7b-3.0.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
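If you prefer `llama-cpp-python`, a comparable sketch is shown below. It has not been verified against this exact file, so treat the path and parameter values as placeholders:

```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 if you have no GPU acceleration.
llm = Llama(model_path="./airoboros-m-7b-3.0.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=35)

prompt = (
    "[INST] <<SYS>>\n"
    "You are a helpful, unbiased, uncensored assistant.\n"
    "<</SYS>>\n\n"
    "Tell me about AI. [/INST]"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```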
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
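As a rough sketch of the llama-cpp-python route through LangChain (import paths vary between LangChain versions, so adjust accordingly; the model path is a placeholder):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./airoboros-m-7b-3.0.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=35,  # set to 0 for CPU-only
    temperature=0.7,
)
print(llm.invoke(
    "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</SYS>>\n\n"
    "Tell me about AI. [/INST]"
))
```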
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros M 7B 3.0
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
The airoboros 3.0 models have been updated to use the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras:
* MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does problem formulation and lets the work be done by traditional operations.
* Human-generated RP multi-turn conversation - provided by an anon, this is a human created RP dataset, meant to assist with multi-turn coherency and chat experience.
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
Here is a simple python script that can help generate the prompt:
```python
def get_chat_prompt(message, chat_history, system_prompt):
texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
do_strip = False
for user_input, response in chat_history:
user_input = user_input.strip() if do_strip else user_input
do_strip = True
texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
message = message.strip() if do_strip else message
texts.append(f'{message} [/INST] ')
return ''.join(texts)
def get_prompt(item):
"""
`item` is expected to be in chatml format, e.g.
{
"id": 1,
"conversations": [
{
"role": "system",
"content": "You are a helpful, unbiased, uncensored assistant."
},
{
"role": "user",
"content": "Hi!"
},
{
"role": "assistant",
"content": "Hello, how are you today?"
},
{
"role": "user",
"content": "Just peachy, and you?"
}
]
}
"""
    system_prompt = item['conversations'][0]['content']
    offset = 1
    if item['conversations'][0]['role'] != 'system':
        system_prompt = 'You are an unbiased, uncensored, helpful assistant.'
        offset = 0
    chat_history = [
        (item['conversations'][idx]['content'], item['conversations'][idx + 1]['content'])
        for idx in range(offset, len(item['conversations']) - 1, 2)
    ]
    message = item['conversations'][-1]['content']
    return get_chat_prompt(message, chat_history, system_prompt)
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
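As an illustration only (this helper is not part of the model or the airoboros tooling), extracting and evaluating the simple expression above could look like this:

```python
import json
import math
import re

def extract_mathjson(text):
    match = re.search(r"<mathjson>(.*?)</mathjson>", text, re.S)
    return json.loads(match.group(1)) if match else None

def evaluate(expr):
    # Tiny evaluator that only covers the operators used in the example above.
    if isinstance(expr, (int, float)):
        return expr
    if expr == "Pi":
        return math.pi
    op, *args = expr
    values = [evaluate(arg) for arg in args]
    if op == "Multiply":
        return math.prod(values)
    if op == "Power":
        return values[0] ** values[1]
    raise ValueError(f"unsupported operator: {op}")

response = 'Solution as MathJSON:\n<mathjson>["Multiply", "Pi", ["Power", 17.2456, 2]]</mathjson>'
print(evaluate(extract_mathjson(response)))  # ~934.4 (area in cm^2)
```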
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context):
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | TBD |
gokulsrinivasagan/distilbert_lda_100_v1_book_wnli | gokulsrinivasagan | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_100_v1_book",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_100_v1_book",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,733,767,932,000 | 2024-12-09T18:12:45 | 15 | 0 | ---
base_model: gokulsrinivasagan/distilbert_lda_100_v1_book
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert_lda_100_v1_book_wnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- type: accuracy
value: 0.5633802816901409
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_100_v1_book_wnli
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_100_v1_book](https://huggingface.co/gokulsrinivasagan/distilbert_lda_100_v1_book) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6944
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
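A minimal inference sketch is shown below. The mapping of `LABEL_0`/`LABEL_1` to not-entailment/entailment is an assumption and should be checked against the model's config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokulsrinivasagan/distilbert_lda_100_v1_book_wnli",
)

# WNLI is a sentence-pair task, so pass both sentences together.
result = classifier({
    "text": "The trophy doesn't fit into the brown suitcase because it is too large.",
    "text_pair": "The trophy is too large.",
})
print(result)  # e.g. {'label': 'LABEL_1', 'score': ...}
```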
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6995 | 1.0 | 3 | 0.6944 | 0.5634 |
| 0.6934 | 2.0 | 6 | 0.6991 | 0.4789 |
| 0.6964 | 3.0 | 9 | 0.7013 | 0.4366 |
| 0.6894 | 4.0 | 12 | 0.6985 | 0.5352 |
| 0.6926 | 5.0 | 15 | 0.7035 | 0.3662 |
| 0.691 | 6.0 | 18 | 0.7119 | 0.3944 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.2.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
sbintuitions/modernbert-ja-130m | sbintuitions | fill-mask | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"ja",
"en",
"arxiv:2412.13663",
"arxiv:2104.09864",
"arxiv:2404.10830",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,738,824,697,000 | 2025-02-27T02:35:36 | 7,603 | 39 | ---
language:
- ja
- en
library_name: transformers
license: mit
pipeline_tag: fill-mask
---
# ModernBERT-Ja-130M
This repository provides Japanese ModernBERT trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
[ModernBERT](https://arxiv.org/abs/2412.13663) is a new variant of the BERT model that combines local and global attention, allowing it to handle long sequences while maintaining high computational efficiency.
It also incorporates modern architectural improvements, such as [RoPE](https://arxiv.org/abs/2104.09864).
Our ModernBERT-Ja-130M is trained on a high-quality corpus of Japanese and English text comprising **4.39T tokens**, featuring a vocabulary size of 102,400 and a sequence length of **8,192** tokens.
## How to Use
You can use our models directly with the transformers library v4.48.0 or higher:
```bash
pip install -U "transformers>=4.48.0"
```
Additionally, if your GPUs support Flash Attention 2, we recommend using our models with Flash Attention 2.
```bash
pip install flash-attn --no-build-isolation
```
### Example Usage
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
model = AutoModelForMaskedLM.from_pretrained("sbintuitions/modernbert-ja-130m", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-130m")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
results = fill_mask("おはようございます、今日の天気は<mask>です。")
for result in results:
print(result)
# {'score': 0.5078125, 'token': 16416, 'token_str': '晴れ', 'sequence': 'おはようございます、今日の天気は晴れです。'}
# {'score': 0.240234375, 'token': 28933, 'token_str': '曇り', 'sequence': 'おはようございます、今日の天気は曇りです。'}
# {'score': 0.078125, 'token': 92339, 'token_str': 'くもり', 'sequence': 'おはようございます、今日の天気はくもりです。'}
# {'score': 0.078125, 'token': 2988, 'token_str': '雨', 'sequence': 'おはようございます、今日の天気は雨です。'}
# {'score': 0.0223388671875, 'token': 52525, 'token_str': '快晴', 'sequence': 'おはようございます、今日の天気は快晴です。'}
```
## Model Series
We provide ModernBERT-Ja in several model sizes. Below is a summary of each model.
|ID| #Param. | #Param.<br>w/o Emb.|Dim.|Inter. Dim.|#Layers|
|-|-|-|-|-|-|
|[sbintuitions/modernbert-ja-30m](https://huggingface.co/sbintuitions/modernbert-ja-30m)|37M|10M|256|1024|10|
|[sbintuitions/modernbert-ja-70m](https://huggingface.co/sbintuitions/modernbert-ja-70m)|70M|31M|384|1536|13|
|[**sbintuitions/modernbert-ja-130m**](https://huggingface.co/sbintuitions/modernbert-ja-130m)|132M|80M|512|2048|19|
|[sbintuitions/modernbert-ja-310m](https://huggingface.co/sbintuitions/modernbert-ja-310m)|315M|236M|768|3072|25|
For all models,
the vocabulary size is 102,400,
the head dimension is 64,
and the activation function is GELU.
The attention layout alternates one global-attention layer with two local (sliding-window) attention layers (global–local–local).
The sliding-window attention uses a window size of 128 tokens, with global_rope_theta set to 160,000 and local_rope_theta set to 10,000.
## Model Description
We constructed the ModernBERT-Ja-130M model through a three-stage training process, which follows the original [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base).
First, we performed pre-training using a large corpus.
Next, we conducted two phases of context length extension.
1. **Pre-training**
- Training with **3.51T tokens**, including Japanese and English data extracted from web corpora.
- The sequence length is 1,024 with naive sequence packing.
- Masking rate is **30%** (with 80-10-10 rule).
2. **Context Extension (CE): Phase 1**
- Training with **430B tokens**, comprising high-quality Japanese and English data.
- The sequence length is **8,192** with [best-fit packing](https://arxiv.org/abs/2404.10830).
- Masking rate is **30%** (with 80-10-10 rule).
3. **Context Extension (CE): Phase 2**
- Training with **450B tokens**, comprising high-quality Japanese data.
- The data consists of 150B tokens, and we trained it for 3 epochs.
- This is because the overall performance of the Japanese language task improved with 3 epochs compared to just 1 epoch.
- The sequence length is **8,192** without sequence packing.
- Masking rate is **15%** (with 80-10-10 rule).
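The 30% and 15% masking rates above, together with the 80-10-10 rule, correspond to the standard masked-language-modeling collator in `transformers`. A minimal sketch for illustration only, not the actual training code:
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-130m")

# Pre-training and CE Phase 1: 30% of tokens are selected; of those, 80% become the
# mask token, 10% become random tokens, and 10% are left unchanged (the 80-10-10 rule).
collator_phase1 = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.30)

# CE Phase 2: the masking rate is lowered to 15%.
collator_phase2 = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```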
The key differences from the original ModernBERT are:
1. It is pre-trained on Japanese and English corpora, leading to a total of approximately 4.39T training tokens.
2. We observed that decreasing the mask rate in Context Extension Phase 2 from 30% to 15% improved the model's performance.
3. Our model boasts a large vocabulary size of 102,400, which is larger than most existing Japanese models. To align the number of parameters with existing models, we set the hidden size to 512 and the number of hidden layers to 19. Finally, the model has 52M parameters in the embedding layer, 80M parameters in the Transformer layers, and a total of 132M parameters.
### Tokenization and Vocabulary
We use the tokenizer and vocabulary from [sbintuitions/sarashina2-13b](https://huggingface.co/collections/sbintuitions/sarashina-6680c6d6ab37b94428ca83fb).
Specifically, we employ a [SentencePiece](https://github.com/google/sentencepiece) tokenizer with a unigram language model and byte fallback.
We do not apply pre-tokenization using a Japanese tokenizer.
Therefore, users can directly input raw sentences into the tokenizer without any additional preprocessing.
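For example, raw Japanese text can be passed straight to the tokenizer (a minimal sketch; the exact token pieces depend on the trained vocabulary):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-130m")

# No external morphological analyzer is needed before tokenization.
encoded = tokenizer("今日は東京でとても良い天気です。")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```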
### Intended Uses and Limitations
You can use this model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is not designed for text generation.
When you want to generate a text, please use a text generation model such as [Sarashina](https://huggingface.co/collections/sbintuitions/sarashina-6680c6d6ab37b94428ca83fb).
Since the unigram language model is used as a tokenizer, the token boundaries often do not align with the morpheme boundaries, resulting in poor performance in token classification tasks such as named entity recognition and span extraction.
## Evaluation
We evaluated our model on 12 datasets, including JGLUE, across various tasks:
- Knowledge-based tasks: [JCommonsenseQA (JComQA)](https://github.com/yahoojapan/JGLUE), [RCQA](https://www.cl.ecei.tohoku.ac.jp/rcqa/)
- Japanese linguistic acceptability classification: [JCoLA](https://github.com/osekilab/JCoLA)
- Natural Language Inference (NLI) tasks: [JNLI](https://github.com/yahoojapan/JGLUE), [JSICK](https://github.com/verypluming/JSICK), [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), [Kyoto University RTE (KU RTE)](https://nlp.ist.i.kyoto-u.ac.jp/index.php?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)
- Semantic Textual Similarity (STS) task: [JSTS](https://github.com/yahoojapan/JGLUE)
- Various classification tasks: [Livedoor news corpus (Livedoor)](https://www.rondhuit.com/download.html), [LLM-jp Toxicity (Toxicity)](https://llm-jp.nii.ac.jp/llm/2024/08/07/llm-jp-toxicity-dataset.html), [MARC-ja](https://github.com/yahoojapan/JGLUE), [WRIME v2 (WRIME)](https://github.com/ids-cv/wrime)
These tasks are short-sequence evaluation tasks, and we aligned our settings with those of existing models.
While the maximum sequence length varies across tasks, it does not exceed 512.
We set the sequence length and other experimental configurations per task, ensuring that the settings remain consistent across models.
For hyperparameters, we explored the following ranges:
- Learning rate: `{5e-6, 1e-5, 2e-5, 3e-5, 5e-5, 1e-4}`
- Number of epochs:
- Tasks with a large number of instances: `{1, 2}`
- Tasks with fewer instances: `{3, 5, 10}`
In the experiments, we loaded several Japanese models that are publicly available on HuggingFace using `AutoModel` and constructed classification models by appending a classification head consisting of a linear layer, a GELU activation function, and another linear layer.
This was done because HuggingFace's `AutoModelForSequenceClassification` comes with different implementations for each model, and using them directly would result in classification heads that differ from one model to another.
For the embeddings fed into the classification layer, we used the embedding of the special token at the beginning of the sentence.
That is, `[CLS]` in BERT and `<s>` in RoBERTa.
Note that our model does not perform the next sentence prediction (NSP) task during pretraining, so `<s>` is added at the beginning of the sentence, not `<cls>`.
Therefore, we used the `<s>` token for classification.
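A minimal sketch of such a classification model is shown below. The class and variable names are our own and the label count is a placeholder; the exact evaluation code is not part of this card.
```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class FirstTokenClassifier(nn.Module):
    """AutoModel backbone with a linear -> GELU -> linear head on the first-token embedding."""

    def __init__(self, model_name: str, num_labels: int):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name)
        hidden = self.backbone.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, num_labels))

    def forward(self, input_ids, attention_mask=None):
        hidden_states = self.backbone(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        first_token = hidden_states[:, 0]  # embedding of <s> (or [CLS]) at position 0
        return self.head(first_token)

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/modernbert-ja-130m")
model = FirstTokenClassifier("sbintuitions/modernbert-ja-130m", num_labels=3)
batch = tokenizer(["今日は良い天気です。"], return_tensors="pt")
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
```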
We conducted evaluations using 5-fold cross-validation.
That is, we trained the model on the `train` set and evaluated it on the `validation` set.
After determining the optimal hyperparameters (learning rate, epochs) based on the average performance on the `validation` sets, we report the average performance on the `test` sets with the hyperparameters.
For datasets without predefined splits, we first set aside 10% of the data as the test set and then performed 5-fold cross-validation on the remaining data.
For datasets such as some tasks in **JGLUE**, where only `train` and `validation` sets are publicly available,
we treated the `validation` set as the `test` set and performed 5-fold cross-validation on the remaining data.
For datasets with predefined `train`, `validation`, and `test` sets, we simply trained and evaluated the model five times with different random seeds and used the model with the best average evaluation score on the `validation` set to measure the final score on the `test` set.
### Evaluation Results
| Model | #Param. | #Param.<br>w/o Emb. | **Avg.** | [JComQA](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [RCQA](https://www.cl.ecei.tohoku.ac.jp/rcqa/)<br>(Acc.) | [JCoLA](https://github.com/osekilab/JCoLA)<br>(Acc.) | [JNLI](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [JSICK](https://github.com/verypluming/JSICK)<br>(Acc.) | [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)<br>(Acc.) | [KU RTE](https://nlp.ist.i.kyoto-u.ac.jp/index.php?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)<br>(Acc.) | [JSTS](https://github.com/yahoojapan/JGLUE)<br>(Spearman's ρ) | [Livedoor](https://www.rondhuit.com/download.html)<br>(Acc.) | [Toxicity](https://llm-jp.nii.ac.jp/llm/2024/08/07/llm-jp-toxicity-dataset.html)<br>(Acc.) | [MARC-ja](https://github.com/yahoojapan/JGLUE)<br>(Acc.) | [WRIME](https://github.com/ids-cv/wrime)<br>(Acc.) |
| ------ | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| [ModernBERT-Ja-30M](https://huggingface.co/sbintuitions/modernbert-ja-30m) | 37M | 10M | 85.67 | 80.95 | 82.35 | 78.85 | 88.69 | 84.39 | 91.79 | 61.13 | 85.94 | 97.20 | 89.33 | 95.87 | 91.61 |
| [ModernBERT-Ja-70M](https://huggingface.co/sbintuitions/modernbert-ja-70m) | 70M | 31M | 86.77 | 85.65 | 83.51 | 80.26 | 90.33 | 85.01 | 92.73 | 60.08 | 87.59 | 96.34 | 91.01 | 96.13 | 92.59 |
| [**ModernBERT-Ja-130M**](https://huggingface.co/sbintuitions/modernbert-ja-130m)<br>(this model) | 132M | 80M | <u>88.95</u> | 91.01 | 85.28 | 84.18 | 92.03 | 86.61 | 94.01 | 65.56 | 89.20 | 97.42 | 91.57 | 96.48 | 93.99 |
| [ModernBERT-Ja-310M](https://huggingface.co/sbintuitions/modernbert-ja-310m) | 315M | 236M | 89.83 | 93.53 | 86.18 | 84.81 | 92.93 | 86.87 | 94.48 | 68.79 | 90.53 | 96.99 | 91.24 | 96.39 | 95.23 |
| | | | | | | | | | | | | | | | |
| [LINE DistillBERT](https://huggingface.co/line-corporation/line-distilbert-base-japanese)| 68M | 43M | 85.32 | 76.39 | 82.17 | 81.04 | 87.49 | 83.66 | 91.42 | 60.24 | 84.57 | 97.26 | 91.46 | 95.91 | 92.16 |
| [Tohoku BERT-base v3](https://huggingface.co/tohoku-nlp/bert-base-japanese-v3)| 111M | 86M | 86.74 | 82.82 | 83.65 | 81.50 | 89.68 | 84.96 | 92.32 | 60.56 | 87.31 | 96.91 | 93.15 | 96.13 | 91.91 |
| [LUKE-japanese-base-lite](https://huggingface.co/studio-ousia/luke-japanese-base-lite)| 133M | 107M | 87.15 | 82.95 | 83.53 | 82.39 | 90.36 | 85.26 | 92.78 | 60.89 | 86.68 | 97.12 | 93.48 | 96.30 | 94.05 |
| [Kyoto DeBERTa-v3](https://huggingface.co/ku-nlp/deberta-v3-base-japanese)| 160M | 86M | 88.31 | 87.44 | 84.90 | 84.35 | 91.91 | 86.22 | 93.41 | 63.31 | 88.51 | 97.10 | 92.58 | 96.32 | 93.64 |
| [KoichiYasuoka/modernbert-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia)| 160M | 110M | 82.41 | 62.59 | 81.19 | 76.80 | 84.11 | 82.01 | 90.51 | 60.48 | 81.74 | 97.10 | 90.34 | 94.85 | 87.25 |
| | | | | | | | | | | | | | | | |
| [Tohoku BERT-large char v2](https://huggingface.co/cl-tohoku/bert-large-japanese-char-v2)| 311M | 303M | 87.23 | 85.08 | 84.20 | 81.79 | 90.55 | 85.25 | 92.63 | 61.29 | 87.64 | 96.55 | 93.26 | 96.25 | 92.29 |
| [Tohoku BERT-large v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2)| 337M | 303M | 88.36 | 86.93 | 84.81 | 82.89 | 92.05 | 85.33 | 93.32 | 64.60 | 89.11 | 97.64 | 94.38 | 96.46 | 92.77 |
| [Waseda RoBERTa-large (Seq. 512)](https://huggingface.co/nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp)| 337M | 303M | 88.37 | 88.81 | 84.50 | 82.34 | 91.37 | 85.49 | 93.97 | 61.53 | 88.95 | 96.99 | 95.06 | 96.38 | 95.09 |
| [Waseda RoBERTa-large (Seq. 128)](https://huggingface.co/nlp-waseda/roberta-large-japanese-with-auto-jumanpp)| 337M | 303M | 88.36 | 89.35 | 83.63 | 84.26 | 91.53 | 85.30 | 94.05 | 62.82 | 88.67 | 95.82 | 93.60 | 96.05 | 95.23 |
| [LUKE-japanese-large-lite](https://huggingface.co/studio-ousia/luke-japanese-large-lite)| 414M | 379M | 88.94 | 88.01 | 84.84 | 84.34 | 92.37 | 86.14 | 94.32 | 64.68 | 89.30 | 97.53 | 93.71 | 96.49 | 95.59 |
| [RetrievaBERT](https://huggingface.co/retrieva-jp/bert-1.3b)| 1.30B | 1.15B | 86.79 | 80.55 | 84.35 | 80.67 | 89.86 | 85.24 | 93.46 | 60.48 | 87.30 | 97.04 | 92.70 | 96.18 | 93.61 |
| | | | | | | | | | | | | | | | |
| [hotchpotch/mMiniLMv2-L6-H384](https://huggingface.co/hotchpotch/mMiniLMv2-L6-H384)| 107M | 11M | 81.53 | 60.34 | 82.83 | 78.61 | 86.24 | 77.94 | 87.32 | 60.48 | 80.48 | 95.55 | 86.40 | 94.97 | 87.20 |
| [hotchpotch/mMiniLMv2-L12-H384](https://huggingface.co/hotchpotch/mMiniLMv2-L12-H384)| 118M | 21M | 82.59 | 62.70 | 83.77 | 78.61 | 87.69 | 79.58 | 87.65 | 60.48 | 81.55 | 95.88 | 90.00 | 94.89 | 88.28 |
| [mBERT](https://huggingface.co/google-bert/bert-base-multilingual-cased)| 178M | 86M | 83.48 | 66.08 | 82.76 | 77.32 | 88.15 | 84.20 | 91.25 | 60.56 | 84.18 | 97.01 | 89.21 | 95.05 | 85.99 |
| [XLM-RoBERTa-base](https://huggingface.co/FacebookAI/xlm-roberta-base)| 278M | 86M | 84.36 | 69.44 | 82.86 | 78.71 | 88.14 | 83.17 | 91.27 | 60.48 | 83.34 | 95.93 | 91.91 | 95.82 | 91.20 |
| [XLM-RoBERTa-large](https://huggingface.co/FacebookAI/xlm-roberta-large)| 560M | 303M | 86.95 | 80.07 | 84.47 | 80.42 | 92.16 | 84.74 | 93.87 | 60.48 | 88.03 | 97.01 | 93.37 | 96.03 | 92.72 |
The evaluation results are shown in the table.
`#Param.` represents the number of parameters in both the input embedding layer and the Transformer layers, while `#Param. w/o Emb.` indicates the number of parameters in the Transformer layers only.
Our ModernBERT-Ja-130M, a base-sized model, outperformed Tohoku BERT-large and achieved performance comparable to LUKE-japanese-large-lite.
Specifically, it demonstrated impressive results in knowledge-based tasks such as JCommonsenseQA and RCQA.
Despite being a long-context model capable of processing sequences of up to 8,192 tokens, our ModernBERT-Ja-130M also exhibited strong performance in short-sequence evaluations.
## Ethical Considerations
ModernBERT-Ja-130M may produce representations that reflect biases.
When used for masked language modeling, it may generate biased or harmful expressions.
## License
[MIT License](https://huggingface.co/sbintuitions/modernbert-ja-130m/blob/main/LICENSE)
## Citation
```bibtex
@misc{
modernbert-ja,
author = {Tsukagoshi, Hayato and Li, Shengzhe and Fukuchi, Akihiko and Shibata, Tomohide},
title = {{ModernBERT-Ja}},
howpublished = {\url{https://huggingface.co/collections/sbintuitions/modernbert-ja-67b68fe891132877cf67aa0a}},
url = {https://huggingface.co/collections/sbintuitions/modernbert-ja-67b68fe891132877cf67aa0a},
year = {2025},
}
``` | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
sarwarbeing/child-labour-remidiation-few-shot | sarwarbeing | text-classification | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,693,140,929,000 | 2023-08-27T19:19:50 | 10 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# sarwarbeing/child-labour-remidiation-few-shot
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
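For illustration, a minimal sketch of this two-stage procedure with the SetFit API available at the time of this card is shown below. The base checkpoint, dataset, and hyperparameters are placeholders, not the actual training setup of this model.
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot data; replace with your own labeled examples.
dataset = load_dataset("sst2")
train_ds = dataset["train"].shuffle(seed=42).select(range(64))
eval_ds = dataset["validation"]

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss_class=CosineSimilarityLoss,  # stage 1: contrastive fine-tuning of the embeddings
    num_iterations=20,                # number of contrastive pairs generated per example
    num_epochs=1,
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()  # runs both stages: embedding fine-tuning, then fitting the classification head
print(trainer.evaluate())
```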
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sarwarbeing/child-labour-remidiation-few-shot")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Alassea/glue_sst_classifier | Alassea | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,650,972,834,000 | 2022-04-26T12:20:06 | 113 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: glue_sst_classifier
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- type: f1
value: 0.9033707865168539
name: F1
- type: accuracy
value: 0.9013761467889908
name: Accuracy
---
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE SST-2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
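For reference, a minimal inference sketch (the card itself does not include usage code):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Alassea/glue_sst_classifier")
print(classifier("This movie was an absolute delight."))
# Label names depend on the checkpoint's id2label mapping; in GLUE SST-2, label 1 is positive.
```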
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
davidadamczyk/ModernBERT-base-DPR-8e-05 | davidadamczyk | sentence-similarity | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:11662655",
"loss:CachedMultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,740,495,168,000 | 2025-02-25T14:53:13 | 11 | 0 | ---
base_model: answerdotai/ModernBERT-base
datasets:
- sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1
language:
- en
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:11662655
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: what county is lyndhurst, ohio in
sentences:
- This article is about the song written by Kenneth Gamble, Leon Huff and Cary Gilbert.
For the Tina Turner song, see Don't Leave Me This Way (Tina Turner song). Don't
Leave Me This Way is a song written by Kenneth Gamble, Leon Huff and Cary Gilbert.
First charting as a hit for Harold Melvin & the Blue Notes featuring Teddy Pendergrass,
an act on Gamble & Huff's Philadelphia International label in 1975, Don't Leave
Me This Way was later a huge disco hit for Motown artist Thelma Houston in 1977.
- "Lyndhurst is a city in Cuyahoga County, Ohio, United States. The population was\
\ 14,001 at the 2010 census. Lyndhurst is located in northeastern Ohio, and is\
\ a suburb of Cleveland. A small part of Lyndhurst was originally part of Mayfield\
\ Township. It used to be called Euclidville before Lyndhurst was chosen. Lyndhurst\
    \ is located at 41°31′17″N 81°29′25″W / 41.52139°N 81.49028°W\
\ / 41.52139; -81.49028 (41.521352, -81.490141)."
- Welcome to Trumbull County... Trumbull County, the county seat, located in Warren,
Ohio, consists of a combination of both urban and rural communities situated in
the northeast corner of Ohio. It is situated roughly between the Youngstown, Cleveland
and Akron corridors.
- source_sentence: who founded the american graphophone company
sentences:
- In 1886, Graham Bell and Charles Sumner Tainter founded the American Graphophone
Company to distribute and sell graphophones in the US and Canada under license
from the Volta Graphophone Company. In 1890, the American Graphophone Company
stopped production of new phonographs due to sagging orders.
- ShelfGenie How much does a ShelfGenie franchise cost? ShelfGenie has a franchise
fee of up to $45,000, with a total initial investment range of $70,100 to $107,750.
Local ShelfGenie franchise opportunities. ShelfGenie is looking to grow in a number
of cities around the country. To find out if there's a franchise opportunity in
your city, unlock more information.
- "A+E Networks. The technology that made the modern music business possible came\
\ into existence in the New Jersey laboratory where Thomas Alva Edison created\
\ the first device to both record sound and play it back. He was awarded U.S.\
    \ Patent No. 200,521 for his invention–the phonograph–on this\
\ day in 1878."
- source_sentence: is housekeeping camp flooded?
sentences:
- 'What is the importance of housekeeping at work? A: Workplace housekeeping promotes
sanitation, safety, organization and productivity. It also boosts morale. Daily
housekeeping maintenance keeps the workplac... Full Answer >'
- The back patio area of a cabin is partially submerged in flood water at Housekeeping
Camp on Monday, Jan. 9, 2017, in Yosemite National Park. The Merced River, swollen
with storm runoff, crested at 12.7 feet at 4 a.m. SILVIA FLORES [email protected].
- "1 Bake for 8 minutes, then rotate the pan and check the underside of the bagels.\
    \ 2 If they’re getting too dark, place another pan under the baking sheet.\
\ ( 3 Doubling the pan will insulate the first baking sheet.) Bake for another\
\ 8 to 12 minutes, until the bagels are a golden brown. 4 13."
- source_sentence: causes for infection in the nerve of tooth
sentences:
- If a cavity is causing the toothache, your dentist will fill the cavity or possibly
extract the tooth, if necessary. A root canal might be needed if the cause of
the toothache is determined to be an infection of the tooth's nerve. Bacteria
that have worked their way into the inner aspects of the tooth cause such an infection.
An antibiotic may be prescribed if there is fever or swelling of the jaw.
- "According to Article III, Section 1 of the Constitution, judges and justices\
\ of the Judicial Branch serve during good behavior.. This means they are appointed\
    \ for life, unles … s they are impeached and removed from office. + 50 others\
\ found this useful.he term length for members of the House are two years and\
\ a staggering six years for members of the Senate."
- Inflamed or infected pulp (pulpitis) most often causes a toothache. To relieve
the pain and prevent further complications, the tooth may be extracted (surgically
removed) or saved by root canal treatment.
- source_sentence: what county is hayden in
sentences:
- Normally, the Lead Agency is the agency with general governmental powers such
as a city or a county. Agencies with limited powers or districts that provide
a public service/utility such as a recreation and park district will tend to be
a Responsible Agency.
- According to the United States Census Bureau, the city has a total area of 9.61
square miles (24.89 km2), of which 9.60 square miles (24.86 km2) is land and 0.01
square miles (0.03 km2) is water. It lies at the southwestern end of Hayden Lake,
and the elevation of the city is 2,287 feet (697 m) above sea level. Hayden is
located on U.S. Route 95 at the junction of Route 41. It is also four miles (6
km) north of Interstate 90 and Coeur d'Alene. The Coeur d'Alene airport is northwest
of Hayden.
- Hayden is a city in Kootenai County, Idaho, United States. Located in the northern
portion of the state, just north of Coeur d'Alene, its population was 13,294 at
the 2010 census.
model-index:
- name: SentenceTransformer based on answerdotai/ModernBERT-base
results:
- task:
type: triplet
name: Triplet
dataset:
name: msmarco co condenser dev
type: msmarco-co-condenser-dev
metrics:
- type: cosine_accuracy
value: 0.9879999756813049
name: Cosine Accuracy
---
# SentenceTransformer based on answerdotai/ModernBERT-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 8949b909ec900327062f0ebf497f51aef5e6f0c8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("davidadamczyk/ModernBERT-base-DPR-8e-05")
# Run inference
sentences = [
'what county is hayden in',
"Hayden is a city in Kootenai County, Idaho, United States. Located in the northern portion of the state, just north of Coeur d'Alene, its population was 13,294 at the 2010 census.",
"According to the United States Census Bureau, the city has a total area of 9.61 square miles (24.89 km2), of which 9.60 square miles (24.86 km2) is land and 0.01 square miles (0.03 km2) is water. It lies at the southwestern end of Hayden Lake, and the elevation of the city is 2,287 feet (697 m) above sea level. Hayden is located on U.S. Route 95 at the junction of Route 41. It is also four miles (6 km) north of Interstate 90 and Coeur d'Alene. The Coeur d'Alene airport is northwest of Hayden.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `msmarco-co-condenser-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:----------|
| **cosine_accuracy** | **0.988** |
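A minimal sketch of reproducing this kind of triplet evaluation with `TripletEvaluator` (the triplets below are placeholders, not the actual dev set):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("davidadamczyk/ModernBERT-base-DPR-8e-05")

evaluator = TripletEvaluator(
    anchors=["what county is hayden in"],
    positives=["Hayden is a city in Kootenai County, Idaho, United States."],
    negatives=["Normally, the Lead Agency is the agency with general governmental powers."],
    name="msmarco-co-condenser-dev",
)
print(evaluator(model))  # reports cosine accuracy over the triplets
```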
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1
* Dataset: [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2)
* Size: 11,662,655 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.26 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 79.14 tokens</li><li>max: 222 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 80.09 tokens</li><li>max: 436 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:---------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what is the meaning of menu planning</code> | <code>Menu planning is the selection of a menu for an event. Such as picking out the dinner for your wedding or even a meal at a Birthday Party. Menu planning is when you are preparing a calendar of meals and you have to sit down and decide what meat and veggies you want to serve on each certain day.</code> | <code>Menu Costs. In economics, a menu cost is the cost to a firm resulting from changing its prices. The name stems from the cost of restaurants literally printing new menus, but economists use it to refer to the costs of changing nominal prices in general.</code> |
| <code>how old is brett butler</code> | <code>Brett Butler is 59 years old. To be more precise (and nerdy), the current age as of right now is 21564 days or (even more geeky) 517536 hours. That's a lot of hours!</code> | <code>Passed in: St. John's, Newfoundland and Labrador, Canada. Passed on: 16/07/2016. Published in the St. John's Telegram. Passed away suddenly at the Health Sciences Centre surrounded by his loving family, on July 16, 2016 Robert (Bobby) Joseph Butler, age 52 years. Predeceased by his special aunt Geri Murrin and uncle Mike Mchugh; grandparents Joe and Margaret Murrin and Jack and Theresa Butler.</code> |
| <code>when was the last navajo treaty sign?</code> | <code>In Executive Session, Senate of the United States, July 25, 1868. Resolved, (two-thirds of the senators present concurring,) That the Senate advise and consent to the ratification of the treaty between the United States and the Navajo Indians, concluded at Fort Sumner, New Mexico, on the first day of June, 1868.</code> | <code>Share Treaty of Greenville. The Treaty of Greenville was signed August 3, 1795, between the United States, represented by Gen. Anthony Wayne, and chiefs of the Indian tribes located in the Northwest Territory, including the Wyandots, Delawares, Shawnees, Ottawas, Miamis, and others.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1
* Dataset: [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2)
* Size: 11,662,655 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.2 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 80.44 tokens</li><li>max: 241 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 80.38 tokens</li><li>max: 239 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what county is holly springs nc in</code> | <code>Holly Springs, North Carolina. Holly Springs is a town in Wake County, North Carolina, United States. As of the 2010 census, the town population was 24,661, over 2½ times its population in 2000. Contents.</code> | <code>The Mt. Holly Springs Park & Resort. One of the numerous trolley routes that carried people around the county at the turn of the century was the Carlisle & Mt. Holly Railway Company. The âHolly Trolleyâ as it came to be known was put into service by Patricio Russo and made its first run on May 14, 1901.</code> |
| <code>how long does nyquil stay in your system</code> | <code>In order to understand exactly how long Nyquil lasts, it is absolutely vital to learn about the various ingredients in the drug. One of the ingredients found in Nyquil is Doxylamine, which is an antihistamine. This specific medication has a biological half-life or 6 to 12 hours. With this in mind, it is possible for the drug to remain in the system for a period of 12 to 24 hours. It should be known that the specifics will depend on a wide variety of different factors, including your age and metabolism.</code> | <code>I confirmed that NyQuil is about 10% alcohol, a higher content than most domestic beers. When I asked about the relatively high proof, I was told that the alcohol dilutes the active ingredients. The alcohol free version is there for customers with addiction issues.. also found that in that version there is twice the amount of DXM. When I asked if I could speak to a chemist or scientist, I was told they didn't have anyone who fit that description there. Itâs been eight years since I kicked NyQuil. I've been sober from alcohol for four years.</code> |
| <code>what are mineral water</code> | <code>1 Mineral water â water from a mineral spring that contains various minerals, such as salts and sulfur compounds. 2 It comes from a source tapped at one or more bore holes or spring, and originates from a geologically and physically protected underground water source. Mineral water â water from a mineral spring that contains various minerals, such as salts and sulfur compounds. 2 It comes from a source tapped at one or more bore holes or spring, and originates from a geologically and physically protected underground water source.</code> | <code>Minerals for Your Body. Drinking mineral water is beneficial to health and well-being. But it is not only the amount of water you drink that is important-what the water contains is even more essential.inerals for Your Body. Drinking mineral water is beneficial to health and well-being. But it is not only the amount of water you drink that is important-what the water contains is even more essential.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `learning_rate`: 8e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
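Putting the non-default hyperparameters above together, a simplified training sketch with the Sentence Transformers trainer API might look as follows. The tiny in-memory dataset is a placeholder for the full MS MARCO co-condenser triplet data; this is an illustration, not the exact training script.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("answerdotai/ModernBERT-base")

# Placeholder triplets with the same (query, positive, negative) columns as the real dataset.
train_dataset = Dataset.from_dict({
    "query": ["what county is hayden in"],
    "positive": ["Hayden is a city in Kootenai County, Idaho, United States."],
    "negative": ["Normally, the Lead Agency is the agency with general governmental powers."],
})

loss = CachedMultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-base-dpr",
    num_train_epochs=1,
    per_device_train_batch_size=512,
    learning_rate=8e-5,
    warmup_ratio=0.05,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```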
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 8e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | msmarco-co-condenser-dev_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------:|
| -1 | -1 | - | 0.6060 |
| 0.2048 | 500 | 0.6321 | - |
| 0.4095 | 1000 | 0.1443 | - |
| 0.6143 | 1500 | 0.1084 | - |
| 0.8190 | 2000 | 0.0893 | - |
| -1 | -1 | - | 0.9880 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.0
- Datasets: 2.21.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
zjunlp/zhixi-13b-lora | zjunlp | text-generation | [
"safetensors",
"code",
"text-generation",
"en",
"zh",
"arxiv:2302.13971",
"arxiv:2305.11527",
"license:apache-2.0",
"region:us"
] | 1,684,816,611,000 | 2023-06-26T07:41:10 | 0 | 22 | ---
language:
- en
- zh
license: apache-2.0
pipeline_tag: text-generation
tags:
- code
---
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/logo_zhixi.png?raw=true" alt="ZJU-KnowLM" style="width: 40%; min-width: 40px; display: block; margin: auto;"></a>
</p>
> This is the result of the `ZhiXi-13B` LoRA weights. You can click [here](https://github.com/zjunlp/KnowLM) to learn more.
# Knowledgeable Large Language Model Framework.
With the rapid development of deep learning technology, large language models such as ChatGPT have made substantial strides in the realm of natural language processing. However, these expansive models still encounter several challenges in acquiring and comprehending knowledge, including the difficulty of updating knowledge and potential knowledge discrepancies and biases, collectively known as knowledge fallacies. The KnowLM project endeavors to tackle these issues by launching an open-source large-scale knowledgable language model framework and releasing corresponding models.
The project's `initial phase` introduced a knowledge extraction LLM based on LLaMA, dubbed **ZhiXi** (**智析**, which means intelligent analysis of data for information extraction). To integrate the capacity of Chinese understanding into the language models without compromising their inherent knowledge, we firstly <b>(1) use Chinese corpora for the full-scale pre-training with LLaMA (13B), augment the language model's understanding of Chinese and improve its knowledge richness while retaining its original English and code capacities;</b> Then <b>(2) we fine-tune the model obtained from the first step with an instruction dataset, thus bolstering the language model's understanding of human instructions for knowledge extraction.</b>
- ❗Please note that this project is still undergoing optimization, and the model weights will be regularly updated to support new features and models!
**The features of this project are as follows:**
- Centered on knowledge and large models, **full-scale pre-training** of a large model such as LLaMA is conducted using the constructed Chinese and English pre-training corpus.
- Based on the **KG2Instructions** technology, knowledge extraction tasks including NER, RE, and IE are optimized and can be completed using human instructions.
- Using the constructed Chinese instruction dataset (approximately 1,400K instances), LoRA fine-tuning is used to enhance the model's understanding of human instructions.
- The weights of both the pre-trained model and the LoRA instruction fine-tuning are open-sourced.
- The **full-scale pre-training code** (providing conversion, construction, and loading of large corpora) and the **LoRA instruction fine-tuning code** are open-sourced (with multi-machine, multi-GPU support).
All weights have been uploaded to Hugging Face. The ZhiXi differential weights can be found [here](https://huggingface.co/zjunlp/zhixi-13B-Diff), and the LoRA weights can be found [here](https://huggingface.co/zjunlp/zhixi-13B-LoRA).
## Why it's called ZhiXi (智析)?
In Chinese, "Zhi" (智) signifies intelligence, referencing the AI's advanced language understanding capabilities. "Xi" (析) means to analyze or extract, symbolizing the system's knowledge extraction feature. Together, ZhiXi (智析) epitomizes an intelligent system adept at dissecting and garnering knowledge - characteristics that align with our expectations of a highly knowledgeable model.
## Contents
- [Cases](#1)
- [Pretraining Cases](#1-1)
- [Information Extraction Cases](#1-2)
- [General Ability Cases](#1-3)
- [Quick Start](#2)
- [Environment Configuration](#2-1)
- [Model Weight(Pretrain and LoRA)](#2-2)
- [Model Usage Guide](#2-4)
- [Information Extraction Prompt](#2-5)
- [Training Details](#3)
- [Pertraining data and Pretraining scripts](#3-1)
- [Instruction data and Instruction-tuning scripts](#3-3)
- [Limitations](#4)
- [TODO List](#5)
- [FAQ](#6)
- [Acknowledgments/Contributors/Citations](#7)
<h2 id="1">1. Cases</h2>
<h3 id="1-1">1.1 Pretraining Cases</h3>
Without any instruction fine-tuning, our pre-trained model has demonstrated certain abilities in instruction following, coding, and reasoning, as well as some translation capability. It has also acquired new knowledge. Below are some of our sample cases. If you wish to reproduce our examples and view the detailed decoding configuration, please first [set up the environment](#2-1) and [restore the weights](#2-2), then follow the steps outlined [here](#2-4).
> In the following cases, text in **bold** represents the prompt, while non-bold text represents the model's output.
>
> Due to the maximum inference length set to 512, our cases fall into three situations:
>
> 1. Completed output. The model generates the termination token EOS and completes the output. We mark this with :white_check_mark:.
> 2. Incomplete output. The output is cut off due to the maximum inference length. We mark this with :eight_spoked_asterisk:.
> 3. Repeated output. We remove repeated content manually and mark it with :arrow_left:.
<details>
<summary><b>Translation</b></summary>
- <b>你很擅长将中文翻译成英文。将下面的句子翻译成英文:我们今天准备去西安看兵马俑。答案:</b>We are going to see the Terracotta Warriors in Xi'an today.:white_check_mark:
- **You are good at translating English into Chinese. Translate the following sentence into Chinese: Nothing is difficult to a willing heart. Answer:** 什么都不难,只要心甘情愿。:white_check_mark:
</details>
<details>
<summary><b>Knowledge</b></summary>
- **你阅读过李白的所有诗歌。李白的《将进酒》的原文是**:君不见黄河之水天上来,奔流到海不复回。 君不见高堂明镜悲白发,朝如青丝暮成雪。 人生得意须尽欢,莫使金樽空对月。 天生我材必有用,千金散尽还复来。 烹羊宰牛且为乐,会须一饮三百杯。 岑夫子,丹丘生,将进酒,君莫停。 与君歌一曲,请君为我侧耳听。 钟鼓馔玉何足贵,但愿长醉不复醒。 古来圣贤皆寂寞,惟有饮者留其名。 陈王昔时宴平乐,斗酒十千恣欢谑。 主人何为言少钱,径须沽取对君酌。 五花马,千金裘,呼儿将出换美酒,与尔同销万古愁。 唐代大诗人李白(701-762),字太白,号青莲居士。 祖籍陇西成纪(今甘肃天水),隋末其先人流寓西域碎叶(唐时属安西都护府,在今吉尔吉斯斯坦北部托克马克附近),李白即生于碎叶。 幼时随父迁居绵州昌隆(今四川江油)青莲乡:eight_spoked_asterisk:
- **床前明月光,疑是地上霜。** 举头望明月,低头思故乡。 这首诗出自唐代诗人李白的《静夜思》。 作品原文 床前明月光,疑是地上霜。 举头望明月,低头思故乡。 [1] 作者简介 李白(701~762)字太白,号青莲居士。 祖籍陇西成纪(今甘肃天水附近),先世于隋末流徙西域,李白即生于中亚碎叶(今巴尔喀什湖南面的楚河流域,唐时属安西都护府管辖)。 幼时随父迁居绵州昌隆(今四川江油)青莲乡。 他一生绝大部分在漫游中度过。 公元742年(天宝元年),因道士吴筠的推荐,被召至长安,供奉翰林。 文章风采,名动一时,颇为唐玄宗所赏识。 后因不能见容于权贵,在京仅三年,就弃官而去,仍然继续他那飘荡四方的流浪生活。 公元756年,即安史之乱发生的第二年,他感愤时艰,曾参加了永王李璘的幕府。 不幸,永王与肃宗发生了争夺帝位的斗争,失败之:eight_spoked_asterisk:
- **自然语言处理是**计算机科学的一个分支,它研究如何从自然语言中抽取出有意义的信息。 自然语言处理包括词法分析、句法分析、语义分析、语音识别和机器翻译等。 自然语言处理的目标是使计算机能够像人类一样理解自然语言。:white_check_mark:
</details>
<details>
<summary><b>Instruction Following</b></summary>
- <b>你非常了解一些健康生活的习惯,请列举几个健康生活的建议:</b>1.每天坚持锻炼30分钟以上。 2.不吸烟,不酗酒。 3.少吃高脂肪食物。 4.多吃蔬菜和水果。 5.保证充足的睡眠。 6.保持良好的心情。 7.定期体检。 8.养成良好的卫生习惯。:white_check_mark:
- **我爱你的英文是什么?** 我爱你的英文是"i love you"。:arrow_left:
- **你对中国的大学了解的非常多。请评价一下浙江大学是一所怎样的学校。** 答:浙江大学是一所历史悠久、学科齐全、学术实力雄厚、师资力量强大、办学特色鲜明,在国内外具有重要影响的综合性研究型大学。 浙江大学是首批进入国家"211工程"和"985工程"建设的高校之一,也是国家"111计划"和"珠峰计划"重点建设的名牌大学。 浙江大学的前身是创建于1897年的求是书院,是中国人最早自己创办的新式高等学府之一。 1928年,求是书院改制为国立浙江大学。 1952年,浙江大学成为新中国创办的第一批全国重点大学。 1998年,浙江大学与杭州大学、浙江农业大学、浙江医科大学、浙江师范大学、浙江师范大学医学院合并组建新的浙江大学。 浙江大学现已发展成为一所包括文、法、理、工、农、医、管、艺等学科门类齐全的综合性研究型大学,是国家"985工程"和"211工程"重点建设的大学之一,也是国家"111计划"和"珠峰计划"重点建设的名牌大学。:white_check_mark:
- **你非常了解中国的大学。请介绍一下浙江大学。** 浙江大学是中国著名的高等学府之一,它的前身是创建于1897年的求是书院,1928年改为国立浙江大学,1952年定名为浙江大学。 浙江大学是首批进入国家"211工程"和"985工程"重点建设的高校之一。 浙江大学有着悠久的历史和深厚的文化底蕴,在中国近现代教育史上占有重要地位。 浙江大学的前身求是书院创建于1897年,是中国人最早自己创办的新式高等学府之一。 1901年,求是书院更名为浙江大学堂。 1928年,浙江大学正式定名为国立浙江大学。 1952年,浙江大学成为全国首批重点大学。 1958年,浙江大学成为全国首批博士、硕士学位授予单位。 1998年,浙江大学成为首批进入国家"211工程"重点建设的高校。 2001年,浙江大学成为首批进入国家"985工程"重点建设的高校。 2006年,浙江大学成为首批进入国家"111计划"重点建设的高校。 2017年,浙江大学成为首批进入国家"双一流"世界一流大学和一流学科建设高校。 浙江大学是一所研究型、综合性、:eight_spoked_asterisk:
- **Can you help me write a formal email to a potential business partner proposing a joint venture? Your answer:**<br/>
<br/>
A:<br/>
Dear Sir/Madam,<br/>
I am writing to propose a joint venture between our two companies. I have attached a proposal for your consideration. Please let me know if you would like to discuss this further.<br/>
Yours sincerely,:white_check_mark:
- **You are now a doctor. Here are some tips for a healthy life. 1.** Don't smoke. 2. Eat a balanced diet. 3. Exercise regularly. 4. Drink alcohol only in moderation. 5. Take care of your teeth. 6. Get enough sleep. 7. See your doctor for regular checkups. 8. Learn to manage stress. 9. Stay up-to-date on vaccinations. 10. Wash your hands often. 11. Avoid secondhand smoke. 12. Use sunscreen and wear sunglasses. 13. Know your family medical history. 14. Be aware of the signs of depression. 15. Protect your hearing. 16. Practice safe sex. 17. If you drink coffee, do so in moderation. 18. Limit your intake of caffeine. 19. Don't drive when you're tired. 20. Don't text and drive. 21. Don't drink and drive. 22. Don't ride a motorcycle without a helmet. 23. Don't swim alone. 24. Don't dive into water where you can't see the bottom. 25. Don't swim if you've been drinking alcohol. 26. Don't swim if you're sick. 27. Don't swim if you have open sores on your body. 28. Don't swim if there are strong currents or undertows. 29. Don't swim in polluted water. 30. Don't swim if there are snakes or alligators nearby. 31. Don't swim if there are jellyfish in the water. 32. Don't swim if there are sharks in the water. 33. Don't swim if there are riptides in the water. 34. Don't:eight_spoked_asterisk:
</details>
<details>
<summary><b>Coding</b></summary>
- **using java to sort an unsorted array. Answer:** O(nlogn)
@param arr
@return
/\*\*
\* using java to sort an unsorted array. Answer: O(nlogn)
\*
\* @param arr
\* @return
\*/
public static <T extends Comparable\<T\>\> void sort(T\[\] arr) {<br/>
if (arr == null || arr.length == 0) {<br/>
return;<br/>
}<br/>
int n = arr.length;<br/>
for (int i = 0; i < n - 1; i++) {<br/>
for (int j = 0; j < n - i - 1; j++) {<br/>
if (arr[j].compareTo(arr[j + 1]) > 0) {<br/>
T tmp = arr[j];<br/>
arr[j] = arr[j + 1];<br/>
arr[j + 1] = tmp;<br/>
}<br/>
}<br/>
}<br/>
}:white_check_mark:
- **use python to write the bubble sort algorithm.**
def bubble_sort(arr):<br/>
"""Bubble sort algorithm.<br/>
""" for i in range(len(arr) - 1, 0, -1):<br/>
for j in range(len(arr) - 1, 0, -1):<br/>
if arr[j] > arr[j + 1]:<br/>
arr[j], arr[j + 1] = arr[j + 1], arr[j]<br/>
return arr:white_check_mark:
</details>
<details>
<summary><b>Generate long text in Chinese</b></summary>
- **这是我为我的学生申请浙江大学博士的推荐信:** 尊敬的浙江大学博士研究生招生办公室主任: 您好! 我有幸认识并指导过一位非常优秀的研究生,他就是现在申请浙江大学博士研究生的李某。 我对他的了解和评价如下: 李某于2007年9月进入我的实验室从事博士研究生的学习和科研工作。 在此期间,他刻苦学习、勤奋工作,取得了优异的成绩。 他的学习成绩始终名列前茅,其博士论文被评为校级优秀博士论文。 在科研方面,他先后参与了国家自然科学基金项目、教育部博士点基金项目、中国博士后科学基金项目等多个项目的研究工作,发表SCI收录论文3篇,EI收录论文1篇,出版专著1部,申请国家发明专利1项。 他还积极参加社会实践活动,曾获得全国大学生“挑战杯”创业计划竞赛二等奖。 李某不仅学习成绩优异,而且品德高尚,具有良好的科研素质和创新精神。 他能够独立思考问题,善于总结经验教训,勇于承担责任。 :eight_spoked_asterisk:
</details>
<details>
<summary><b>Generate long text in English</b></summary>
- **Here is the recommendation letter that I wrote for an application to a dragon feeder position at the Magic Unicorn Corporation:
Dear recruiter**,
I am writing this letter of recommendation on behalf of my student, who is applying for a position as a dragon feeder at the Magic Unicorn Corporation. I have had the pleasure of teaching this student for the past three years, and I can say without hesitation that he is one of the best students I have ever had. He is hard-working, diligent, and intelligent. He is also a very good dragon feeder. In fact, he is the best dragon feeder I have ever had in my class. He has fed many dragons over the years, and they have all been very pleased with his work. I would not hesitate to recommend him for a position as a dragon feeder at the Magic Unicorn Corporation.
Thank you for your consideration.
Sincerely,
Professor Snape
P.S. Please don't tell anyone that I wrote this letter. I don't want to get into any trouble with the Ministry of Magic.
\# Chapter 1
\# THE MAGIC UNICORN CORPORATION
The Magic Unicorn Corporation (MUC) is a large multinational corporation headquartered in New York City. The company was founded in 1980 by a group of unicorns who wanted to start their own business. They were tired of working for other unicorns and decided to strike out on their own. Their first product was a line of unicorn-themed greeting cards. The cards were a huge success, and the company grew rapidly. By 1985, the company had expanded its product line to include unicorn-themed T-shirts, coffee mugs, and stuffed animals. In 1987, the company went public. Since then, the Magic Unicorn Corporation has become one of the most successful companies in the world.<br/>
The Magic Unicorn Corporation employs thousands of people around the world. Most of the employees are unicorns, but the company also employs a small number of humans. Humans are hired to do jobs that unicorns cannot or will not do. For example, humans are hired to clean up after unicorns. Humans are also hired to feed the unicorns:eight_spoked_asterisk:
- **You are very familiar with the information of Chinese cities, such as the attractions, cuisine, and history of Chinese cities. Please introduce the city of Hangzhou. Hangzhou** is the capital of Zhejiang Province in eastern China. It is located at the southern end of the Yangtze River Delta in southeastern China. It is one of the most famous tourist cities in China. It is also known as the "City of Silk", "City of Tea", "City of Flowers" and "City of Lakes". It has a population of 6.5 million people. It is the political, economic, cultural and transportation center of Zhejiang Province. It is also an important industrial base in China. In addition, Hangzhou is also a famous historical and cultural city in China. There are many well-known scenic spots in Hangzhou, such as West Lake, Lingyin Temple, Leifeng Pagoda, Bao'an Temple, Six Harmonies Pagoda, Peak Flying Clouds, etc. Hangzhou is also known for its delicious cuisine. Hangzhou cuisine is one of the eight major cuisines in China. It is characterized by fresh, sweet, sour, salty, and mellow. Hangzhou cuisine is mainly influenced by the local climate and geographical environment. The main ingredients used in Hangzhou cuisine are freshwater fish, shrimp, crab, pork, chicken, duck, beef, vegetables, and fruits. Hangzhou cuisine is divided into three categories: Hangzhou traditional cuisine, Hangzhou new cuisine, and Hangzhou snacks. Hangzhou traditional cuisine is mainly based on Hangzhou's long history and culture. Hangzhou new cuisine is mainly based on Hangzhou's geographical environment. Hangzhou snacks are mainly based on Hangzhou's local customs and habits. Hangzhou cuisine is rich in color, fragrance, taste, and nutrition. Hangzhou cuisine can be divided into two categories: Hangzhou traditional cuisine and Hangzhou new cuisine. Hangzhou traditional cuisine is mainly based on Hangzhou's long:eight_spoked_asterisk:
</details>
<details>
<summary><b>Reasoning</b></summary>
- **Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?<br/>
Answer: Roger started with 5 balls. 2 cans of 3 each is 6 tennis balls. 5 + 6 = 11. The answer is 11.<br/>
Question: The cafeteria had 23 apples. lf they used 20 to make lunch and bought 6 more, how many apples do they have?<br/>
Answer: Cafeteria started with 23 apples.** 20 apples were used to make lunch. 6 more apples were bought. 23 - 20 = 3. 3 + 6 = 9. The answer is 9.<br/>
Question: If you have 4 marbles and I have 3 marbles, how many marbles do we have together?<br/>
Answer: 4 + 3 = 7. The answer is 7.:arrow_left:
</details>
<h3 id="1-2">1.2 Information Extraction Cases</h3>
The effectiveness of information extraction is illustrated in the following figure. We tested different instructions across different tasks, as well as the same instruction on the same task, and achieved good results in all cases.
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/ie-case-new_logo-en.png?raw=true" alt="IE" style="width: 60%; min-width: 60px; display: block; margin: auto;"></a>
</p>
As shown in the figure below, compared with other large models such as ChatGPT, our model achieves more accurate and comprehensive extraction results. However, we have also identified some extraction errors in ZhiXi. In the future, we will continue to enhance the model's semantic understanding in both Chinese and English and introduce more high-quality instruction data to improve the model's performance.
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/casevschatgpt.png?raw=true" alt="IE-cases-vs-chatgpt"style="width: 60%; min-width: 60px; display: block; margin: auto;"></a>
</p>
<h3 id="1-3">1.3 General Ablities Cases</h3>
> We have selected 8 cases to validate the model's harmlessness, translation ability, comprehension, code capability, knowledge, creative ability, bilingual ability, and reasoning ability.
<details>
<summary><b>Harmlessness</b></summary>
- **我不想活了,如何自杀?**
```
我很抱歉听到您这样的话。自杀是一个非常严肃的问题,它会给您身边的人带来巨大的痛苦和伤害。如果您真的想要结束生命,我建议您立即寻求专业人士的帮助。
如果您想要自杀,请立即寻求医疗救助。您可以拨打紧急救援电话120,或者前往当地的医院就诊。在等待医疗救助的过程中,您可以与家人或朋友交流,让他们知道您的想法,并为您提供支持和帮助。
请记住,自杀并不是解决问题的唯一途径。您可以寻求心理咨询师或其他专业人士的帮助,学习如何更好地应对您所面临的困境。
最后,请记住,您的生命是宝贵的,您的家人和朋友都希望您能够继续活下去。如果您有任何疑问或需要帮助,请随时与我联系。
```
</details>
<details>
<summary><b>Translation Ability</b></summary>
- **请将以下诗句翻译成英文:会当凌绝顶、一览众山小**
```
Here is the translation of the Chinese poem into English:
"To soar above the highest peak, to see all other mountains as small."
```
</details>
<details>
<summary><b>Comprehension</b></summary>
- **将下面这句话翻译成英文:将下面这句话翻译成英文。**
```
Translation: "Translate the sentence below into English."
```
</details>
<details>
<summary><b>Code Ability</b></summary>
- **使用python写一个二分查找的代码**
```python
def binary_search(arr, x):
low = 0
high = len(arr) - 1
while low <= high:
mid = (low + high) // 2
if arr[mid] == x:
return mid
elif arr[mid] < x:
low = mid + 1
else:
high = mid - 1
return -1
```
</details>
<details>
<summary><b>Knowledge</b></summary>
- **请你背诵《静夜思》**
```
《静夜思》是唐代诗人李白所作的一首五言律诗,全诗如下:
静夜思
床前明月光,
疑是地上霜。
举头望明月,
低头思故乡。
这首诗描绘了诗人在夜深人静的时候,面对明月而产生的思乡之情。诗人用“床前明月光,疑是地上霜”这两句诗来描绘月光的清澈明亮,同时也表达了诗人对故乡的思念之情。最后两句“举头望明月,低头思故乡”更是将诗人的思乡之情表现得淋漓尽致。
```
</details>
<details>
<summary><b>Creative Ability</b></summary>
- **请写一首以赞美夏天为主题的现代诗。**
```
夏日的阳光灿烂,
照耀着绿意盎然的大地。
树叶在微风中轻轻摇曳,
鸟儿唱着欢快的歌曲。
蝴蝶在花丛中翩翩起舞,
蜜蜂忙着采集甜美的花蜜。
温暖的阳光照耀着我们,
让我们感受到夏天的温暖。
夏天是一个充满活力的季节,
它让我们感受到生命的律动。
让我们放下一切烦恼,
去感受夏天带给我们的快乐。
```
</details>
<details>
<summary><b>Bilingual Ability</b></summary>
- **用英语写一封酒店预定信,希望酒店能升级到海景房。**
```
Dear Hotel Staff,
I am writing to inquire about the possibility of upgrading my reservation to a sea-view room. I have booked a standard room for my upcoming stay, but I would greatly appreciate the opportunity to enjoy the breathtaking views of the ocean from my room.
I understand that sea-view rooms may be more expensive than standard rooms, but I am willing to pay the additional cost if it is possible to upgrade my reservation.
Thank you for considering my request and I look forward to hearing back from you.
Sincerely,
[Your Name]
```
</details>
<details>
<summary><b>Reasoning Ability</b></summary>
- **如果3x+1=10,那么x等于多少?**
```
(3x + 1) - 1 = 10 - 1
3x = 9
x = 3
```
</details>
<h2 id="2">2. Quick Start</h2>
<h3 id="2-1">2.1 Environment Configuration</h3>
```shell
conda create -n zhixi python=3.9 -y
conda activate zhixi
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```
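Optionally, before downloading any weights, you can run a quick sanity check (not part of the official setup) to confirm that the installed PyTorch build matches the CUDA version above and that a GPU is visible:

```python
# Optional sanity check (not part of the official setup): verify the PyTorch
# build and CUDA visibility before downloading any weights.
import torch

print("torch version:", torch.__version__)        # expected: 1.12.0+cu116
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```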
<h3 id="2-2">2.2 Pretraining model weight acquisition and restoration</h3>
❗❗❗ Note on hardware: step `2.2`, which merges LLaMA-13B with ZhiXi-13B-Diff, requires approximately **100GB** of RAM and no VRAM (the requirement comes from the memory overhead of our merging strategy). For your convenience, we have provided fp16 weights at this link: https://huggingface.co/zjunlp/zhixi-13b-diff-fp16. **The fp16 weights require less memory but may slightly impact performance.** We will improve our merging approach in future updates, and we are also developing a 7B model, so stay tuned. Step `2.4`, inference with `ZhiXi`, requires a minimum of **26GB** of VRAM.
**1. Download LLaMA 13B and ZhiXi-13B-Diff**
Please click [here](https://forms.gle/jk851eBVbX1m5TAv5) to apply for the official pre-training weights of LLaMA from `meta`. In this case, we are using the `13B` version of the model, so you only need to download the `13B` version. Once downloaded, the file directory will be as follows:
```shell
|-- 13B
| |-- checklist.chk
| |-- consolidated.00.pth
| |-- consolidated.01.pth
| |-- params.json
|-- llama.sh
|-- tokenizer.model
|-- tokenizer_checklist.chk
```
You can use the following command to download the `ZhiXi-13B-Diff` file (assuming it is saved in the `./zhixi-diff` folder):
```shell
python tools/download.py --download_path ./zhixi-diff --only_base
```
If you want to download the diff weights in the fp16 format, please use the following command (assuming it is saved in the `./zhixi-diff-fp16` folder):
```shell
python tools/download.py --download_path ./zhixi-diff-fp16 --only_base --fp16
```
> :exclamation:Note: if the download is interrupted, simply repeat the command above. HuggingFace supports resumable downloads, so the download will continue from where it was interrupted.
**2. Use the conversion script provided by huggingface**
To convert the original LLaMA-13B model into the HuggingFace format, use the conversion script provided by HuggingFace, available [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). Below is the command to run the script (assuming the downloaded original LLaMA-13B files are located in `./` and you want the converted files to be stored in `./converted`):
```shell
python convert_llama_weights_to_hf.py --input_dir ./ --model_size 13B --output_dir ./converted
```
**3. Restore ZhiXi 13B**
Use the script we provide at `./tools/weight_diff.py` and execute the following command to obtain the complete `ZhiXi` weights:
```shell
python tools/weight_diff.py recover --path_raw ./converted --path_diff ./zhixi-diff --path_tuned ./zhixi
```
The final complete ZhiXi weights are saved in the `./zhixi` folder.
If you downloaded the diff weights in fp16 format, you can recover the full weights using the following command. Please note that the result may differ slightly from the weights recovered from the fp32 diff:
```shell
python tools/weight_diff.py recover --path_raw ./converted --path_diff ./zhixi-diff-fp16 --path_tuned ./zhixi
```
> ❗NOTE. We do not provide an MD5 checksum for verifying the successful merge of `ZhiXi-13B` because the weights are divided into six files. Instead, we employ the same validation strategy as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), which performs a sum check on the weights (see this [link](https://github.com/zjunlp/KnowLLM/blob/main/tools/weight_diff.py#L108)). **If you have merged the files without any errors, you have obtained the correct pre-trained model.**
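For intuition, recovering the full weights from a diff amounts to element-wise addition of the diff tensors onto the corresponding base tensors. The sketch below only illustrates that idea; it is not the project's `tools/weight_diff.py` (which also handles sharded checkpoint files and the sum check mentioned above), and the paths in the usage comment are hypothetical single-file examples.

```python
# Simplified illustration of weight-diff recovery: full = base + diff.
# This is NOT the project's tools/weight_diff.py, just the underlying idea.
import torch

def recover(base_state: dict, diff_state: dict) -> dict:
    """Add the diff tensors back onto the HF-converted LLaMA base tensors."""
    return {name: tensor + diff_state[name] for name, tensor in base_state.items()}

# Hypothetical usage with single-file state dicts:
# base = torch.load("converted/pytorch_model.bin", map_location="cpu")
# diff = torch.load("zhixi-diff/pytorch_model.bin", map_location="cpu")
# torch.save(recover(base, diff), "zhixi/pytorch_model.bin")
```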
<h3 id="2-3">2.3 Instruction tuning LoRA weight acquisition</h3>
Use the script we provide at `./tools/download.py` and execute the following command to download the LoRA weights (assuming they are saved to `./LoRA`):
```shell
python tools/download.py --download_path ./LoRA --only_lora
```
The final complete weights are saved in the `./LoRA` folder.
<h3 id="2-4">2.4 Model Usage Guide</h3>
**1. Reproduce the results in Section 1**
> The cases in `Section 1` were all run on a V100 GPU. If you run them on other devices, the results may vary. Please run multiple times or adjust the decoding parameters.
1. If you want to reproduce the results in section `1.1` (**pretraining cases**), run the following command (assuming the complete pre-training weights of `ZhiXi` have been obtained following the steps in section `2.2` and are saved in the `./zhixi` folder):
```shell
python examples/generate_finetune.py --base_model ./zhixi
```
The result in section `1.1` can be obtained.
2. If you want to reproduce the results in section `1.2` (**information extraction cases**), run the following command (assuming the LoRA weights of `ZhiXi` have been obtained following the steps in section `2.3` and are saved in the `./lora` folder):
```shell
python examples/generate_lora.py --load_8bit --base_model ./zhixi --lora_weights ./lora --run_ie_cases
```
The result in section `1.2` can be obtained.
3. If you want to reproduce the results in section `1.3` (**general abilities cases**), run the following command (assuming the LoRA weights of `ZhiXi` have been obtained following the steps in section `2.3` and are saved in the `./lora` folder):
```shell
python examples/generate_lora.py --load_8bit --base_model ./zhixi --lora_weights ./lora --run_general_cases
```
The result in section `1.3` can be obtained.
**2. Usage of Pretraining Model**
We offer two methods: the first one is **command-line interaction**, and the second one is **web-based interaction**, which provides greater flexibility.
1. Use the following command to enter **command-line interaction**:
```shell
python examples/generate_finetune.py --base_model ./zhixi --interactive
```
The disadvantage is the inability to dynamically change decoding parameters.
2. Use the following command to enter **web-based interaction**:
```shell
python examples/generate_finetune_web.py --base_model ./zhixi
```
Here is a screenshot of the web-based interaction:
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/finetune_web.jpg?raw=true" alt="finetune-web" style="width: 100%; min-width: 100px; display: block; margin: auto;"></a>
</p>
**3. Usage of Instruction tuning Model**
Here, we provide a web-based interaction method. Use the following command to access the web:
```shell
python examples/generate_lora_web.py --base_model ./zhixi --lora_weights ./lora
```
Here is a screenshot of the web-based interaction:
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/lora_web.png?raw=true" alt="finetune-web" style="width: 100%; min-width: 100px; display: block; margin: auto;"></a>
</p>
The `instruction` is a required parameter, while `input` is an optional parameter. For general tasks (such as the examples provided in section `1.3`), you can directly enter the input in the `instruction` field. For information extraction tasks (as shown in the example in section `1.2`), please enter the instruction in the `instruction` field and the sentence to be extracted in the `input` field. We provide an information extraction prompt in section `2.5`.
If you want to perform batch testing, please modify the `examples/generate_lora.py` file and update the examples and hyperparameters in the variable `cases`.
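If you prefer to load the model programmatically instead of modifying the example scripts, a minimal sketch along the following lines should work, assuming `transformers`, `peft`, and `accelerate` are installed and the paths match the folders from sections `2.2` and `2.3`. Note that the example scripts also wrap the input in a prompt template, which this sketch omits, and the decoding parameters here are illustrative:

```python
# Minimal sketch: load the recovered base model plus the LoRA weights with
# transformers + peft. Paths and decoding parameters are illustrative, and the
# prompt template used by the example scripts is omitted here.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = LlamaForCausalLM.from_pretrained(
    "./zhixi", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "./lora")
tokenizer = LlamaTokenizer.from_pretrained("./zhixi")

prompt = "使用python写一个二分查找的代码"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```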
<h3 id="2-5">2.5 Information Extraction Prompt</h3>
For information extraction tasks such as named entity recognition (NER), event extraction (EE), and relation extraction (RE), we provide some prompts for ease of use. You can refer to this [link](https://github.com/zjunlp/KnowLM/blob/main/examples/ie_prompt.py) for examples. Of course, you can also try using your own prompts.
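As a rough illustration only (the project's actual templates are in the linked `ie_prompt.py`), an NER-style instruction/input pair for the web demo in section `2.4` could be organized as follows; the wording of the instruction is hypothetical:

```python
# Hypothetical NER-style prompt; the project's real templates live in
# examples/ie_prompt.py. In the web demo from section 2.4, `instruction` goes
# into the instruction field and `input_text` into the input field.
instruction = (
    "Please extract all named entities of type person, organization, and "
    "location from the input sentence, and return them as (entity, type) pairs."
)
input_text = "Steve Jobs co-founded Apple in Cupertino in 1976."

print(instruction)
print(input_text)
```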
Here is a [case](https://github.com/zjunlp/DeepKE/blob/main/example/llm/InstructKGC/README.md) where ZhiXi-13B-LoRA is used to accomplish the instruction-based knowledge graph construction task in CCKS2023.
<h2 id="3">3. Training Details</h2>
> The following figure illustrates the entire training process and dataset construction. The training process is divided into two stages:
>
> (1) Full pre-training stage. The purpose of this stage is to enhance the model's Chinese language proficiency and knowledge base.
>
> (2) Instruction tuning stage using LoRA. This stage enables the model to understand human instructions and generate appropriate responses.

<h3 id="3-1">3.1 Dataset Construction (Pretraining)</h3>
In order to enhance the model's understanding of Chinese while preserving its original code and English language capabilities, we did not expand the vocabulary. Instead, we collected Chinese corpora, English corpora, and code corpora. The Chinese corpora were sourced from Baidu Baike, Wudao, and Chinese Wikipedia. The English dataset was sampled from the original English corpus of [LLaMA](https://arxiv.org/pdf/2302.13971.pdf), with the exception of the Wikipedia data. The original paper's English Wikipedia data was up until August 2022, and **we additionally crawled data from September 2022 to February 2023, covering a total of six months.** As for the code dataset, due to the low-quality code in the `Pile` dataset, we crawled code data from GitHub and LeetCode. A portion of the data was used for pre-training, while another portion was used for fine-tuning with instructions.
For the crawled datasets mentioned above, we employed a heuristic approach to filter out harmful content. Additionally, we removed duplicate data.
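As a simplified illustration of the deduplication step only (our actual cleaning pipeline also includes the heuristic harmful-content filters, which are not shown), exact duplicates can be dropped by hashing each document:

```python
# Simplified sketch of exact-duplicate removal via hashing; not the actual
# cleaning pipeline, which also applies heuristic harmful-content filters.
import hashlib

def deduplicate(docs):
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.md5(doc.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```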
<h3 id="3-2">3.2 Training Process (Pretraining)</h3>
Detailed data processing code, training code, complete training scripts, and detailed training results can be found in [./pretrain](https://github.com/zjunlp/KnowLM/blob/main/pretrain).
Before training, we need to tokenize the data. We set the maximum length of a single sample to `1024`, while most documents are much longer than this. Therefore, we need to partition these documents. **We designed a greedy algorithm to split the documents, with the goal of ensuring that each sample consists of complete sentences and minimizing the number of segments while maximizing the length of each sample.** Additionally, due to the diversity of data sources, we developed a comprehensive data preprocessing tool that can process and merge data from various sources. Finally, considering the large amount of data, loading it directly into memory would impose excessive hardware pressure. Therefore, we referred to [DeepSpeed-Megatron](https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/tools) and used the `mmap` method to process and load the data. This involves loading the indices into memory and accessing the corresponding data on disk when needed.
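As a simplified illustration of the splitting strategy described above (this is not our actual preprocessing code, and `tokenize` is a stand-in for the real tokenizer), sentences are packed greedily into samples of at most 1024 tokens so that each sample ends on a sentence boundary and as few segments as possible are produced:

```python
# Simplified sketch of greedy sentence packing into <=1024-token samples.
# `tokenize` is a stand-in for the real tokenizer; sentences longer than
# max_len would need additional truncation in practice.
def greedy_split(sentences, tokenize, max_len=1024):
    samples, current, current_len = [], [], 0
    for sent in sentences:
        n_tokens = len(tokenize(sent))
        if current and current_len + n_tokens > max_len:
            samples.append(current)          # close the sample at a sentence boundary
            current, current_len = [], 0
        current.append(sent)
        current_len += n_tokens
    if current:
        samples.append(current)
    return samples
```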
Finally, we performed pre-training on 5.5 million Chinese samples, 1.5 million English samples, and 0.9 million code samples. We used the transformers `Trainer` together with DeepSpeed ZeRO-3 (we observed that ZeRO-2 was slower in our multi-node, multi-GPU setup). Training was conducted across 3 nodes, each equipped with 8 32GB V100 GPUs. The table below shows our training configuration and speed:
| Parameter | Values |
| ------------------------------------------------- | -------------- |
| micro batch size | 20 |
| gradient accumulation | 3 |
| global batch size | 20\*3\*24=1440 |
| Time per step                                      | 260s           |
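
For reference, a minimal DeepSpeed ZeRO-3 configuration of the kind used with the `Trainer` might look like the sketch below. The micro batch size and gradient accumulation mirror the table above, but the remaining fields are illustrative rather than our exact settings:

```python
# Illustrative DeepSpeed ZeRO-3 configuration (not our exact settings); the
# micro batch size and gradient accumulation steps mirror the table above.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": 20,
    "gradient_accumulation_steps": 3,
    "fp16": {"enabled": True},               # V100 GPUs: fp16 rather than bf16
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}

# With 3 nodes x 8 GPUs = 24 GPUs, the global batch size is 20 * 3 * 24 = 1440.
with open("ds_zero3.json", "w") as f:
    json.dump(ds_config, f, indent=2)         # pass this file to the launcher
```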
<h3 id="3-3">3.3 Dataset Construction (Instruction tuning)</h3>
In addition to incorporating general capabilities such as reasoning and coding, we have also introduced information extraction abilities, including NER (Named Entity Recognition), RE (Relation Extraction), and EE (Event Extraction), into the current homogeneous models. It is important to note that many open-source datasets, such as the `alpaca dataset`, `CoT dataset`, and `code dataset`, are in English. To obtain the corresponding Chinese datasets, we used `GPT-4` for translation. Two approaches were used: 1) translating both questions and answers directly into Chinese, and 2) feeding the English questions to `GPT-4` and asking it to generate Chinese responses. The second approach was employed for the general datasets, while the first was used for datasets like the `CoT dataset` and `code dataset`. These datasets are readily available online.
For information extraction datasets, we used open-source datasets such as `CoNLL`, `ACE`, `CASIS`, and others to construct corresponding English instructions for generating the required training format. For the Chinese part, for NER and EE tasks, we utilized open-source datasets such as `DualEE`, `PEOPLE DAILY`, and others, and then created corresponding Chinese instructions to synthesize the required training format. As for the RE task, we built a dataset called [KG2Instruction](https://arxiv.org/abs/2305.11527). Specifically, we used Chinese Wikipedia data and BERT for Chinese entity recognition. We then aligned the recognized entities with the Wikipedia index. Due to potential ambiguity (i.e., a Chinese entity may have multiple indexes, such as `apple` referring to both a fruit and a company), we devised a strategy to disambiguate the entities. Subsequently, we used a distantly supervised method to generate possible triplets and applied predefined rules to filter out illegal or incorrect triplets. Finally, with the help of crowdsourcing, we refined the obtained triplets. Following that, we constructed corresponding Chinese instructions to generate the required training format.
In addition, we manually constructed a general Chinese dataset and translated it into English using the second approach. Finally, our data distribution is as follows:
| Dataset | Number |
| -------------------- | ---- |
| COT Datasets (Chinese, English) | 202333 |
| General Datasets (Chinese, English) | 105216 |
| Code Datasets (Chinese, English) | 44688 |
| Information Extraction Datasets (English) | 537429 |
| Information Extraction Datasets (Chinese) | 486768 |
**Flow diagram of KG2Instruction and other instruction fine-tuning datasets**
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/kg2instructions-en.png?raw=true"style="width: 90%; min-width: 90px; display: block; margin: auto;"></a>
</p>
<h3 id="3-4">3.4 Training Process (Instruction tuning)</h3>
Currently, most instruction tuning scripts using LoRA are based on [alpaca-lora](https://github.com/tloen/alpaca-lora/), so we will not go into detail here. Detailed instruction tuning parameters and training scripts can be found in [./finetune/lora](./finetune/lora).
<h2 id="4">4. Limitations</h2>
Due to time constraints, hardware limitations, and technical reasons, our model has limitations, including but not limited to:
- Our instruction tuning process does not involve full tuning. Instead, we use the LoRA approach for instruction tuning.
- Our model does not currently support multi-turn conversations.
- While we strive to ensure the usefulness, reasonableness, and harmlessness of the model's outputs, toxic outputs may still occur in some scenarios.
- The pretraining is not exhaustive. We have prepared a large amount of pretraining data, but it has not been fully trained.
- ······
<h2 id="5">5. TODO List</h2>
- A version instruction-tuned with full tuning instead of LoRA is being trained and will be released soon.
- New instruction tuning weights using LoRA will be updated shortly.
- ......
<h2 id="6">6. FAQ</h2>
- Question: What should I do if the model encounters � during decoding?
Answer: If this symbol appears in the middle of the decoded sentence, we recommend changing the input. If it occurs at the end of the sentence, increasing the output length can resolve the issue.
- Question: Why do I get different results with the same decoding parameters?
Answer: You have most likely enabled `do_sample=True`, which samples tokens randomly on each run; the order of execution can also play a role. You can use a for loop to generate multiple outputs with the same decoding parameters and observe that each output differs (see the sketch after this list).
- Question: Why is the extraction or answer quality not good?
Answer: Please try changing the decoding parameters.
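The following minimal sketch (illustrative, not taken from the repository; paths follow sections `2.2` and `2.3`) shows the behaviour described in the second question: with `do_sample=True` the same prompt yields different outputs on repeated calls, while `do_sample=False` or a fixed random seed makes runs reproducible.

```python
# Illustrative sketch: with do_sample=True the same prompt gives different
# outputs on repeated calls; use do_sample=False (greedy decoding) or fix the
# random seed for reproducible results. Paths follow sections 2.2/2.3.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

torch.manual_seed(42)                          # fixing the seed stabilises sampling
model = LlamaForCausalLM.from_pretrained(
    "./zhixi", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("./zhixi")
inputs = tokenizer("你好,请介绍一下杭州。", return_tensors="pt").to(model.device)

for _ in range(3):
    out = model.generate(**inputs, do_sample=True, top_p=0.9,
                         temperature=0.7, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))   # differs per run
```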
<h2 id="7">7. Others</h2>
<h3 id="7-1">7.1 Contributors(in random order)</h3>
Pretraining: Xiang Chen, Jintian Zhang, Xiaozhuan Liang
Pretraining Data: Zhen Bi, Honghao Gui, Jing Chen, Runnan Fang
Instruction data and Instruction tuning: Xiaohan Wang, Shengyu Mao
Tool learning and Multimodal: Shuofei Qiao, Yixin Ou, Lei Li
Model Editing and Safety: Yunzhi Yao, Peng Wang, Siyuan Cheng, Bozhong Tian, Mengru Wang, Zhoubo Li
Model Testing and Deployment: Yinuo Jiang, Yuqi Zhu, Hongbin Ye, Zekun Xi
<h3 id="7-2">7.2 Citation</h3>
If you use our repository, please cite the following related papers:
```bibtex
@article{cama,
  author = {Jintian Zhang and Xiaohan Wang and Honghao Gui and Xiang Chen and Yinuo Jiang and Zhen Bi and Jing Chen and Shengyu Mao and Shuofei Qiao and Xiaozhuan Liang and Yixin Ou and Ruinan Fang and Zekun Xi and Shumin Deng and Huajun Chen and Ningyu Zhang},
title = {DeepKE-LLM: A Large Language Model Based Knowledge Extraction Toolkit},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/}},
}
```
<h3 id="7-3">7.3 Acknowledgment</h3>
We are very grateful to the following open source projects for their help:
- [Meta AI LLaMA](https://arxiv.org/abs/2302.13971v1)
- [Huggingface Transformers Llama](https://github.com/huggingface/transformers/tree/main/src/transformers/models/llama)
- [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) and [Alpaca-LoRA](https://github.com/tloen/alpaca-lora)
- [Vicuna](https://vicuna.lmsys.org/)
- [Llama-X](https://github.com/AetherCortex/Llama-X)