Dataset Viewer

| Column | Type | Range / Stats |
| --- | --- | --- |
| id | string | length 6 to 113 |
| author | string | length 2 to 36 |
| task_category | string | 39 classes |
| tags | sequence | length 1 to 4.05k |
| created_time | timestamp[s] | 2022-03-02 23:29:04 to 2025-04-07 20:40:27 |
| last_modified | timestamp[s] | 2020-05-14 13:13:12 to 2025-04-19 04:15:39 |
| downloads | int64 | 0 to 118M |
| likes | int64 | 0 to 4.86k |
| README | string | length 30 to 1.01M |
| matched_task | sequence | length 1 to 10 |
| is_bionlp | string | 3 classes |
| model_cards | string | length 0 to 1M |
| metadata | string | length 2 to 698k |
fathyshalab/massive_play-roberta-large-v1-2-0.64 | fathyshalab | text-classification | ["sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us"] | 2023-02-08T16:17:52 | 2023-02-08T16:18:14 | 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/massive_play-roberta-large-v1-2-0.64
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
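For illustration, the sketch below shows what this two-step loop looks like with the legacy `SetFitTrainer` API (newer `setfit` releases expose `Trainer` and `TrainingArguments` instead); the base checkpoint, labels, and hyperparameters are placeholders, not the ones used for this model.
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot dataset: a handful of labeled examples per class.
train_dataset = Dataset.from_dict({
    "text": ["play my workout playlist", "skip this track", "what movies are on tonight"],
    "label": [0, 0, 1],
})

# Wrap a Sentence Transformer body with a freshly initialized classification head.
model = SetFitModel.from_pretrained("sentence-transformers/all-roberta-large-v1")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embeddings
    num_iterations=20,                # contrastive pairs generated per example
)
trainer.train()  # step 2 fits the classification head on the tuned embeddings
```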
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-2-0.64")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| ["TEXT_CLASSIFICATION"] | Non_BioNLP |
| {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]} |
LoneStriker/gemma-7b-4.0bpw-h6-exl2 | LoneStriker | text-generation | ["transformers", "safetensors", "gemma", "text-generation", "arxiv:2305.14314", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | 2024-02-22T15:55:08 | 2024-02-22T15:57:48 | 6 | 0 |
---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
tags: []
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning examples
You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide:
* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314)
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on an English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
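For reference, a runnable version of that snippet might look as follows; this is a sketch that assumes `flash-attn` is installed and a CUDA GPU is available, with `google/gemma-7b` standing in for `model_id`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",  # requires flash-attn and a supported GPU
).to(0)
```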
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and how to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1803.05457) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1803.05457) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
human review) and the exploration of de-biasing techniques are encouraged
during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development, compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
| ["QUESTION_ANSWERING", "SUMMARIZATION"] | Non_BioNLP |
| {"library_name": "transformers", "license": "other", "license_name": "gemma-terms-of-use", "license_link": "https://ai.google.dev/gemma/terms", "tags": [], "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately.", "extra_gated_button_content": "Acknowledge license"} |
ravimehta/Test | ravimehta | summarization | ["asteroid", "summarization", "en", "dataset:togethercomputer/RedPajama-Data-1T", "region:us"] | 2023-06-22T17:34:38 | 2023-06-22T17:35:55 | 0 | 0 |
---
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: asteroid
metrics:
- bleurt
pipeline_tag: summarization
---
| ["SUMMARIZATION"] | Non_BioNLP | {"datasets": ["togethercomputer/RedPajama-Data-1T"], "language": ["en"], "library_name": "asteroid", "metrics": ["bleurt"], "pipeline_tag": "summarization"} |
Ahmed107/nllb200-ar-en_v11.1 | Ahmed107 | translation | ["transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "translation", "generated_from_trainer", "base_model:Ahmed107/nllb200-ar-en_v8", "base_model:finetune:Ahmed107/nllb200-ar-en_v8", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | 2023-12-07T06:57:33 | 2023-12-07T08:02:05 | 7 | 1 |
---
base_model: Ahmed107/nllb200-ar-en_v8
license: cc-by-nc-4.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: nllb200-ar-en_v11.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb200-ar-en_v11.1
This model is a fine-tuned version of [Ahmed107/nllb200-ar-en_v8](https://huggingface.co/Ahmed107/nllb200-ar-en_v8) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5309
- Bleu: 65.0906
## Model description
More information needed
## Intended uses & limitations
More information needed
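Although usage is not documented here, NLLB-derived checkpoints load as `m2m_100` models in `transformers`, so inference would typically look like the sketch below; the language codes `arb_Arab`/`eng_Latn` and the sample sentence are assumptions for illustration.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Ahmed107/nllb200-ar-en_v11.1"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="arb_Arab")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# NLLB translates by forcing the target-language token at the start of generation.
inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```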
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
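For illustration, these settings roughly correspond to the `Seq2SeqTrainingArguments` below; this is a hypothetical reconstruction, and any field not listed above is left at its default.
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="nllb200-ar-en_v11.1",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # native AMP mixed-precision training
)
```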
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| ["TRANSLATION"] | Non_BioNLP |
| {"base_model": "Ahmed107/nllb200-ar-en_v8", "license": "cc-by-nc-4.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "nllb200-ar-en_v11.1", "results": []}]} |
satish860/distilbert-base-uncased-finetuned-emotion | satish860 | text-classification | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | 2022-04-12T09:35:34 | 2022-08-11T12:44:06 | 47 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.923
name: Accuracy
- type: f1
value: 0.9232534263543563
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
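While the card gives no usage example, a minimal inference sketch with the `transformers` pipeline API would look like the following; the input sentence is illustrative, and the label names come from the checkpoint's config.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="satish860/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))  # -> [{'label': ..., 'score': ...}]
```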
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.839 | 1.0 | 250 | 0.3212 | 0.907 | 0.9049 |
| 0.2516 | 2.0 | 500 | 0.2174 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0a0+17540c5
- Datasets 1.16.1
- Tokenizers 0.10.3
| ["TEXT_CLASSIFICATION"] | Non_BioNLP |
| {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.923, "name": "Accuracy"}, {"type": "f1", "value": 0.9232534263543563, "name": "F1"}]}]}]} |
muhtasham/medium-mlm-imdb-target-tweet | muhtasham | text-classification | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | 2022-12-11T07:07:40 | 2022-12-11T07:10:48 | 114 | 0 |
---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: medium-mlm-imdb-target-tweet
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- type: accuracy
value: 0.7620320855614974
name: Accuracy
- type: f1
value: 0.7599032399785389
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/medium-mlm-imdb](https://huggingface.co/muhtasham/medium-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6869
- Accuracy: 0.7620
- F1: 0.7599
## Model description
More information needed
## Intended uses & limitations
More information needed
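As with other fine-tuned classifiers, the model can be queried through the `transformers` pipeline; the sketch below uses an illustrative input and requests the full score distribution over the tweet_eval emotion labels (assuming a `transformers` version that supports `top_k=None`).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="muhtasham/medium-mlm-imdb-target-tweet",
)
# top_k=None returns scores for every label rather than only the argmax.
print(classifier("I'm thrilled with how this turned out!", top_k=None))
```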
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.456 | 4.9 | 500 | 0.8890 | 0.7754 | 0.7720 |
| 0.0578 | 9.8 | 1000 | 1.3492 | 0.7540 | 0.7509 |
| 0.0173 | 14.71 | 1500 | 1.6143 | 0.7594 | 0.7584 |
| 0.0124 | 19.61 | 2000 | 1.6869 | 0.7620 | 0.7599 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
| ["TEXT_CLASSIFICATION"] | Non_BioNLP |
| {"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "medium-mlm-imdb-target-tweet", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "train", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.7620320855614974, "name": "Accuracy"}, {"type": "f1", "value": 0.7599032399785389, "name": "F1"}]}]}]} |
ericzzz/falcon-rw-1b-instruct-openorca | ericzzz | text-generation | ["transformers", "safetensors", "falcon", "text-generation", "text-generation-inference", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "model-index", "autotrain_compatible", "region:us"] | 2023-11-24T20:50:32 | 2024-03-05T00:49:13 | 2,405 | 11 |
---
datasets:
- Open-Orca/SlimOrca
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
inference: false
model-index:
- name: falcon-rw-1b-instruct-openorca
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 34.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 60.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.42
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 3.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca
name: Open LLM Leaderboard
---
# 🌟 Falcon-RW-1B-Instruct-OpenOrca
Falcon-RW-1B-Instruct-OpenOrca is a 1B parameter, causal decoder-only model based on [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) and finetuned on the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.
**✨Check out our new conversational model [Falcon-RW-1B-Chat](https://huggingface.co/ericzzz/falcon-rw-1b-chat)!✨**
**📊 Evaluation Results**
Falcon-RW-1B-Instruct-OpenOrca was the #1-ranked model (unfortunately not anymore) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) in the ~1.5B parameter category! Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca).
| Metric | falcon-rw-1b-instruct-openorca | falcon-rw-1b |
|------------|-------------------------------:|-------------:|
| ARC | 34.56 | 35.07 |
| HellaSwag | 60.93 | 63.56 |
| MMLU | 28.77 | 25.28 |
| TruthfulQA | 37.42 | 35.96 |
| Winogrande | 60.69 | 62.04 |
| GSM8K | 3.41 | 0.53 |
| **Average**| **37.63** | **37.07** |
**🚀 Motivations**
1. To create a smaller, open-source, instruction-finetuned, ready-to-use model accessible for users with limited computational resources (lower-end consumer GPUs).
2. To harness the strength of Falcon-RW-1B, a competitive model in its own right, and enhance its capabilities with instruction finetuning.
## 📖 How to Use
The model operates with a structured prompt format, incorporating `<SYS>`, `<INST>`, and `<RESP>` tags to demarcate different parts of the input. The system message and instruction are placed within these tags, with the `<RESP>` tag triggering the model's response.
**📝 Example Code**
```python
from transformers import AutoTokenizer
import transformers
import torch
model = 'ericzzz/falcon-rw-1b-instruct-openorca'
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
'text-generation',
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map='auto',
)
system_message = 'You are a helpful assistant. Give short answers.'
instruction = 'What is AI? Give some examples.'
prompt = f'<SYS> {system_message} <INST> {instruction} <RESP> '
response = pipeline(
prompt,
max_length=200,
repetition_penalty=1.05
)
print(response[0]['generated_text'])
# AI, or Artificial Intelligence, refers to the ability of machines and software to perform tasks that require human intelligence, such as learning, reasoning, and problem-solving. It can be used in various fields like computer science, engineering, medicine, and more. Some common applications include image recognition, speech translation, and natural language processing.
```
## ⚠️ Limitations
This model may generate inaccurate or misleading information and is prone to hallucination, creating plausible but false narratives. It lacks the ability to discern factual content from fiction and may inadvertently produce biased, harmful or offensive content. Its understanding of complex, nuanced queries is limited. Users should be aware of this and verify any information obtained from the model.
The model is provided 'as is' without any warranties, and the creators are not liable for any damages arising from its use. Users are responsible for their interactions with the model.
## 📬 Contact
For further inquiries or feedback, please contact us at [email protected].
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.63|
|AI2 Reasoning Challenge (25-Shot)|34.56|
|HellaSwag (10-Shot) |60.93|
|MMLU (5-Shot) |28.77|
|TruthfulQA (0-shot) |37.42|
|Winogrande (5-shot) |60.69|
|GSM8k (5-shot) | 3.41|
| [
"TRANSLATION"
] | Non_BioNLP |
# 🌟 Falcon-RW-1B-Instruct-OpenOrca
Falcon-RW-1B-Instruct-OpenOrca is a 1B parameter, causal decoder-only model based on [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) and finetuned on the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.
**✨Check out our new conversational model [Falcon-RW-1B-Chat](https://huggingface.co/ericzzz/falcon-rw-1b-chat)!✨**
**📊 Evaluation Results**
Falcon-RW-1B-Instruct-OpenOrca was the #1-ranked model (unfortunately not anymore) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) in the ~1.5B parameter category! Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca).
| Metric | falcon-rw-1b-instruct-openorca | falcon-rw-1b |
|------------|-------------------------------:|-------------:|
| ARC | 34.56 | 35.07 |
| HellaSwag | 60.93 | 63.56 |
| MMLU | 28.77 | 25.28 |
| TruthfulQA | 37.42 | 35.96 |
| Winogrande | 60.69 | 62.04 |
| GSM8K | 3.41 | 0.53 |
| **Average**| **37.63** | **37.07** |
**🚀 Motivations**
1. To create a smaller, open-source, instruction-finetuned, ready-to-use model accessible for users with limited computational resources (lower-end consumer GPUs).
2. To harness the strength of Falcon-RW-1B, a competitive model in its own right, and enhance its capabilities with instruction finetuning.
## 📖 How to Use
The model operates with a structured prompt format, incorporating `<SYS>`, `<INST>`, and `<RESP>` tags to demarcate different parts of the input. The system message and instruction are placed within these tags, with the `<RESP>` tag triggering the model's response.
**📝 Example Code**
```python
from transformers import AutoTokenizer
import transformers
import torch
model = 'ericzzz/falcon-rw-1b-instruct-openorca'
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
'text-generation',
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map='auto',
)
system_message = 'You are a helpful assistant. Give short answers.'
instruction = 'What is AI? Give some examples.'
prompt = f'<SYS> {system_message} <INST> {instruction} <RESP> '
response = pipeline(
prompt,
max_length=200,
repetition_penalty=1.05
)
print(response[0]['generated_text'])
# AI, or Artificial Intelligence, refers to the ability of machines and software to perform tasks that require human intelligence, such as learning, reasoning, and problem-solving. It can be used in various fields like computer science, engineering, medicine, and more. Some common applications include image recognition, speech translation, and natural language processing.
```
## ⚠️ Limitations
This model may generate inaccurate or misleading information and is prone to hallucination, creating plausible but false narratives. It lacks the ability to discern factual content from fiction and may inadvertently produce biased, harmful or offensive content. Its understanding of complex, nuanced queries is limited. Users should be aware of this and verify any information obtained from the model.
The model is provided 'as is' without any warranties, and the creators are not liable for any damages arising from its use. Users are responsible for their interactions with the model.
## 📬 Contact
For further inquiries or feedback, please contact us at [email protected].
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.63|
|AI2 Reasoning Challenge (25-Shot)|34.56|
|HellaSwag (10-Shot) |60.93|
|MMLU (5-Shot) |28.77|
|TruthfulQA (0-shot) |37.42|
|Winogrande (5-shot) |60.69|
|GSM8k (5-shot) | 3.41|
| {"datasets": ["Open-Orca/SlimOrca"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference"], "inference": false, "model-index": [{"name": "falcon-rw-1b-instruct-openorca", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 34.56, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 60.93, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 28.77, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 37.42}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 60.69, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 3.41, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca", "name": "Open LLM Leaderboard"}}]}]} |
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-23T10:26:10 | 2024-05-23T10:26:22 | 9 | 0 | ---
datasets:
- fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-base-en-v1.5**](https://huggingface.co/BAAI/bge-base-en-v1.5) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP | This model is a fine-tuned version of [**BAAI/bge-base-en-v1.5**](https://huggingface.co/BAAI/bge-base-en-v1.5) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| {"datasets": ["fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]} |
PragmaticPete/tinyqwen | PragmaticPete | text-generation | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-17T19:15:42 | 2024-06-17T19:19:41 | 14 | 0 | ---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
---
# Qwen2-0.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.
## Requirements
The code of Qwen2 has been included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
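A compatible version can be installed with, for example:
```bash
pip install "transformers>=4.37.0"
```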
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
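As a minimal, hedged sketch (not part of the official Qwen2 documentation), loading this checkpoint as a starting point for post-training might look like the following; the repo id is taken from this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base checkpoint as a starting point for SFT / RLHF / continued pretraining.
model_name = "PragmaticPete/tinyqwen"  # repo id from this card; any Qwen2 base checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hand `model` and `tokenizer` to your post-training framework of choice;
# the base model itself is not recommended for direct text generation.
```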
## Performance
The evaluation of base models mainly focuses on model performance in natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5-shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-0.5B & Qwen2-1.5B performances
| Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B |
| :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: |
|#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B |
|MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** |
|MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 |
|Theorem QA | - | - | - |- | 8.9 | **15.0** |
|HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 |
|MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 |
|GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** |
|MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** |
|BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 |
|HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 |
|Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 |
|ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 |
|TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** |
|C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** |
|CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
# Qwen2-0.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.
## Requirements
The code of Qwen2 has been included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
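A compatible version can be installed with, for example:
```bash
pip install "transformers>=4.37.0"
```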
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
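As a minimal, hedged sketch (not part of the official Qwen2 documentation), loading this checkpoint as a starting point for post-training might look like the following; the repo id is taken from this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base checkpoint as a starting point for SFT / RLHF / continued pretraining.
model_name = "PragmaticPete/tinyqwen"  # repo id from this card; any Qwen2 base checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hand `model` and `tokenizer` to your post-training framework of choice;
# the base model itself is not recommended for direct text generation.
```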
## Performance
The evaluation of base models mainly focuses on model performance in natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5-shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-0.5B & Qwen2-1.5B performances
| Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B |
| :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: |
|#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B |
|MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** |
|MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 |
|Theorem QA | - | - | - |- | 8.9 | **15.0** |
|HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 |
|MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 |
|GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** |
|MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** |
|BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 |
|HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 |
|Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 |
|ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 |
|TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** |
|C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** |
|CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
| {"language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["pretrained"]} |
Pclanglais/Larth-Mistral | Pclanglais | text-generation | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"fr",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 2023-10-10T12:36:53 | 2023-10-21T21:16:07 | 20 | 5 | ---
language:
- fr
library_name: transformers
license: cc-by-4.0
pipeline_tag: text-generation
widget:
- text: 'Answer in Etruscan: Who is the father of Lars?'
example_title: Lars
inference:
parameters:
temperature: 0.7
repetition_penalty: 1.2
---
Larth-Mistral is the first LLM based on the Etruscan language, fine-tuned on 1087 original inscriptions.
Larth-Mistral supports cross-linguistic instructions (question in English, answer in Etruscan) and automated translations. The formulas to use are as follows (a minimal usage sketch follows the list):
* *Answer in Etruscan: [Instruction in English]*
* *Translate in Etruscan: [Text in English]*
* *Translate in English: [Text in Etruscan]*
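As an assumed sketch (not from the original card), applying one of these formulas with the standard `transformers` pipeline might look like this; the prompt and generation settings are taken from the widget configuration on this card:
```python
from transformers import pipeline

# Assumed usage sketch: apply one of the instruction formulas above.
# Generation settings mirror this card's widget (temperature 0.7, repetition penalty 1.2).
generator = pipeline("text-generation", model="Pclanglais/Larth-Mistral")

prompt = "Answer in Etruscan: Who is the father of Lars?"
output = generator(
    prompt,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    max_new_tokens=50,
)
print(output[0]["generated_text"])
```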
Larth-Mistral is not really an LLM *in* Etruscan but rather a poetic approximation of what an LLM in Etruscan could be. Currently there is not even enough material to fully decipher the language, let alone to support text generation.
The inscriptions used by Larth-Mistral were collected by the [Larth-Etruscan-NLP](https://github.com/GianlucaVico/Larth-Etruscan-NLP/tree/main) project.
## Examples of instructions
**Who is the father of Lars?**
θre larsusas clareza ler θvele lupi lsveneii lcaii vnii lcaii avl ppa clesu claσ' menu hraχ
**What is the best recipe for an apple pie?**
θuva {m} : apuσ kaklaχ anaχ turuχ leθthturic lam θas acraχ clesinice leslumna τiχie huθ
**What is the best city of the dodecapolis?**
εις σθαχθρ τημενθ σπολισ δωδεκα πολισ σθαχθρ συφθ
**Unexpectedly, Larth-Mistral has switched to Ancient Greek.**
| [
"TRANSLATION"
] | Non_BioNLP |
Larth-Mistral is the first LLM based on the Etruscan language, fine-tuned on 1087 original inscriptions.
Larth-Mistral supports cross-linguistic instructions (question in English, answer in Etruscan) and automated translations. The formulas to use are as follows (a minimal usage sketch follows the list):
* *Answer in Etruscan: [Instruction in English]*
* *Translate in Etruscan: [Text in English]*
* *Translate in English: [Text in Etruscan]*
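As an assumed sketch (not from the original card), applying one of these formulas with the standard `transformers` pipeline might look like this; the prompt and generation settings are taken from the widget configuration on this card:
```python
from transformers import pipeline

# Assumed usage sketch: apply one of the instruction formulas above.
# Generation settings mirror this card's widget (temperature 0.7, repetition penalty 1.2).
generator = pipeline("text-generation", model="Pclanglais/Larth-Mistral")

prompt = "Answer in Etruscan: Who is the father of Lars?"
output = generator(
    prompt,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    max_new_tokens=50,
)
print(output[0]["generated_text"])
```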
Larth-Mistral is not really an LLM *in* Etruscan but rather a poetic approximation of what an LLM in Etruscan could be. Currently there is not even enough material to fully decipher the language, let alone to support text generation.
The inscriptions used by Larth-Mistral were collected by the [Larth-Etruscan-NLP](https://github.com/GianlucaVico/Larth-Etruscan-NLP/tree/main) project.
## Examples of instructions
**Who is the father of Lars?**
θre larsusas clareza ler θvele lupi lsveneii lcaii vnii lcaii avl ppa clesu claσ' menu hraχ
**What is the best recipe for an apple pie?**
θuva {m} : apuσ kaklaχ anaχ turuχ leθthturic lam θas acraχ clesinice leslumna τiχie huθ
**What is the best city of the dodecapolis?**
εις σθαχθρ τημενθ σπολισ δωδεκα πολισ σθαχθρ συφθ
**Unexpectedly, Larth-Mistral has switched to Ancient Greek.**
| {"language": ["fr"], "library_name": "transformers", "license": "cc-by-4.0", "pipeline_tag": "text-generation", "widget": [{"text": "Answer in Etruscan: Who is the father of Lars?", "example_title": "Lars"}], "inference": {"parameters": {"temperature": 0.7, "repetition_penalty": 1.2}}} |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-28T18:54:18 | 2024-05-28T18:54:49 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP | This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| {"datasets": ["fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241", "allenai/c4"], "language": ["en", "en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]} |
pEpOo/catastrophy8 | pEpOo | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-18T14:14:04 | 2023-12-18T14:14:25 | 50 | 0 | ---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: "Rly tragedy in MP: Some live to recount horror: \x89ÛÏWhen I saw coaches\
\ of my train plunging into water I called my daughters and said t..."
- text: You must be annihilated!
- text: 'Severe Thunderstorms and Flash Flooding Possible in the Mid-South and Midwest
http://t.co/uAhIcWpIh4 #WEATHER #ENVIRONMENT #CLIMATE #NATURE'
- text: 'everyone''s wonder who will win and I''m over here wondering are those grapes
real ?????? #BB17'
- text: i swea it feels like im about to explode ??
inference: true
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9203152364273205
name: Accuracy
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'To fight bioterrorism sir.'</li><li>'85V-265V 10W LED Warm White Light Motion Sensor Outdoor Flood Light PIR Lamp AUC http://t.co/NJVPXzMj5V http://t.co/Ijd7WzV5t9'</li><li>'Photo: referencereference: xekstrin: I THOUGHT THE NOSTRILS WERE EYES AND I ALMOST CRIED FROM FEAR partake... http://t.co/O7yYjLuKfJ'</li></ul> |
| 1 | <ul><li>'Police officer wounded suspect dead after exchanging shots: RICHMOND Va. (AP) \x89ÛÓ A Richmond police officer wa... http://t.co/Y0qQS2L7bS'</li><li>"There's a weird siren going off here...I hope Hunterston isn't in the process of blowing itself to smithereens..."</li><li>'Iranian warship points weapon at American helicopter... http://t.co/cgFZk8Ha1R'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9203 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("pEpOo/catastrophy8")
# Run inference
preds = model("You must be annihilated!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 14.5506 | 54 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 438 |
| 1 | 323 |
### Training Hyperparameters
- batch_size: (20, 20)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.3847 | - |
| 0.0044 | 50 | 0.3738 | - |
| 0.0088 | 100 | 0.2274 | - |
| 0.0131 | 150 | 0.2747 | - |
| 0.0175 | 200 | 0.2251 | - |
| 0.0219 | 250 | 0.2562 | - |
| 0.0263 | 300 | 0.2623 | - |
| 0.0307 | 350 | 0.1904 | - |
| 0.0350 | 400 | 0.2314 | - |
| 0.0394 | 450 | 0.1669 | - |
| 0.0438 | 500 | 0.1135 | - |
| 0.0482 | 550 | 0.1489 | - |
| 0.0525 | 600 | 0.1907 | - |
| 0.0569 | 650 | 0.1728 | - |
| 0.0613 | 700 | 0.125 | - |
| 0.0657 | 750 | 0.109 | - |
| 0.0701 | 800 | 0.0968 | - |
| 0.0744 | 850 | 0.2101 | - |
| 0.0788 | 900 | 0.1974 | - |
| 0.0832 | 950 | 0.1986 | - |
| 0.0876 | 1000 | 0.0747 | - |
| 0.0920 | 1050 | 0.1117 | - |
| 0.0963 | 1100 | 0.1092 | - |
| 0.1007 | 1150 | 0.1582 | - |
| 0.1051 | 1200 | 0.1243 | - |
| 0.1095 | 1250 | 0.2873 | - |
| 0.1139 | 1300 | 0.2415 | - |
| 0.1182 | 1350 | 0.1264 | - |
| 0.1226 | 1400 | 0.127 | - |
| 0.1270 | 1450 | 0.1308 | - |
| 0.1314 | 1500 | 0.0669 | - |
| 0.1358 | 1550 | 0.1218 | - |
| 0.1401 | 1600 | 0.114 | - |
| 0.1445 | 1650 | 0.0612 | - |
| 0.1489 | 1700 | 0.0527 | - |
| 0.1533 | 1750 | 0.1421 | - |
| 0.1576 | 1800 | 0.0048 | - |
| 0.1620 | 1850 | 0.0141 | - |
| 0.1664 | 1900 | 0.0557 | - |
| 0.1708 | 1950 | 0.0206 | - |
| 0.1752 | 2000 | 0.1171 | - |
| 0.1795 | 2050 | 0.0968 | - |
| 0.1839 | 2100 | 0.0243 | - |
| 0.1883 | 2150 | 0.0233 | - |
| 0.1927 | 2200 | 0.0738 | - |
| 0.1971 | 2250 | 0.0071 | - |
| 0.2014 | 2300 | 0.0353 | - |
| 0.2058 | 2350 | 0.0602 | - |
| 0.2102 | 2400 | 0.003 | - |
| 0.2146 | 2450 | 0.0625 | - |
| 0.2190 | 2500 | 0.0173 | - |
| 0.2233 | 2550 | 0.1017 | - |
| 0.2277 | 2600 | 0.0582 | - |
| 0.2321 | 2650 | 0.0437 | - |
| 0.2365 | 2700 | 0.104 | - |
| 0.2408 | 2750 | 0.0156 | - |
| 0.2452 | 2800 | 0.0034 | - |
| 0.2496 | 2850 | 0.0343 | - |
| 0.2540 | 2900 | 0.1106 | - |
| 0.2584 | 2950 | 0.001 | - |
| 0.2627 | 3000 | 0.004 | - |
| 0.2671 | 3050 | 0.0074 | - |
| 0.2715 | 3100 | 0.0849 | - |
| 0.2759 | 3150 | 0.0009 | - |
| 0.2803 | 3200 | 0.0379 | - |
| 0.2846 | 3250 | 0.0109 | - |
| 0.2890 | 3300 | 0.0019 | - |
| 0.2934 | 3350 | 0.0154 | - |
| 0.2978 | 3400 | 0.0017 | - |
| 0.3022 | 3450 | 0.0003 | - |
| 0.3065 | 3500 | 0.0002 | - |
| 0.3109 | 3550 | 0.0025 | - |
| 0.3153 | 3600 | 0.0123 | - |
| 0.3197 | 3650 | 0.0007 | - |
| 0.3240 | 3700 | 0.0534 | - |
| 0.3284 | 3750 | 0.0004 | - |
| 0.3328 | 3800 | 0.0084 | - |
| 0.3372 | 3850 | 0.0088 | - |
| 0.3416 | 3900 | 0.0201 | - |
| 0.3459 | 3950 | 0.0002 | - |
| 0.3503 | 4000 | 0.0102 | - |
| 0.3547 | 4050 | 0.0043 | - |
| 0.3591 | 4100 | 0.0124 | - |
| 0.3635 | 4150 | 0.0845 | - |
| 0.3678 | 4200 | 0.0002 | - |
| 0.3722 | 4250 | 0.0014 | - |
| 0.3766 | 4300 | 0.1131 | - |
| 0.3810 | 4350 | 0.0612 | - |
| 0.3854 | 4400 | 0.0577 | - |
| 0.3897 | 4450 | 0.0235 | - |
| 0.3941 | 4500 | 0.0156 | - |
| 0.3985 | 4550 | 0.0078 | - |
| 0.4029 | 4600 | 0.0356 | - |
| 0.4073 | 4650 | 0.0595 | - |
| 0.4116 | 4700 | 0.0001 | - |
| 0.4160 | 4750 | 0.0018 | - |
| 0.4204 | 4800 | 0.0013 | - |
| 0.4248 | 4850 | 0.0008 | - |
| 0.4291 | 4900 | 0.0832 | - |
| 0.4335 | 4950 | 0.0083 | - |
| 0.4379 | 5000 | 0.0007 | - |
| 0.4423 | 5050 | 0.0417 | - |
| 0.4467 | 5100 | 0.0001 | - |
| 0.4510 | 5150 | 0.0218 | - |
| 0.4554 | 5200 | 0.0001 | - |
| 0.4598 | 5250 | 0.0012 | - |
| 0.4642 | 5300 | 0.0002 | - |
| 0.4686 | 5350 | 0.0006 | - |
| 0.4729 | 5400 | 0.0223 | - |
| 0.4773 | 5450 | 0.0612 | - |
| 0.4817 | 5500 | 0.0004 | - |
| 0.4861 | 5550 | 0.0 | - |
| 0.4905 | 5600 | 0.0007 | - |
| 0.4948 | 5650 | 0.0007 | - |
| 0.4992 | 5700 | 0.0116 | - |
| 0.5036 | 5750 | 0.0262 | - |
| 0.5080 | 5800 | 0.0336 | - |
| 0.5123 | 5850 | 0.026 | - |
| 0.5167 | 5900 | 0.0004 | - |
| 0.5211 | 5950 | 0.0001 | - |
| 0.5255 | 6000 | 0.0001 | - |
| 0.5299 | 6050 | 0.0001 | - |
| 0.5342 | 6100 | 0.0029 | - |
| 0.5386 | 6150 | 0.0001 | - |
| 0.5430 | 6200 | 0.0699 | - |
| 0.5474 | 6250 | 0.0262 | - |
| 0.5518 | 6300 | 0.0269 | - |
| 0.5561 | 6350 | 0.0002 | - |
| 0.5605 | 6400 | 0.0666 | - |
| 0.5649 | 6450 | 0.0209 | - |
| 0.5693 | 6500 | 0.0003 | - |
| 0.5737 | 6550 | 0.0001 | - |
| 0.5780 | 6600 | 0.0115 | - |
| 0.5824 | 6650 | 0.0003 | - |
| 0.5868 | 6700 | 0.0001 | - |
| 0.5912 | 6750 | 0.0056 | - |
| 0.5956 | 6800 | 0.0603 | - |
| 0.5999 | 6850 | 0.0002 | - |
| 0.6043 | 6900 | 0.0003 | - |
| 0.6087 | 6950 | 0.0092 | - |
| 0.6131 | 7000 | 0.0562 | - |
| 0.6174 | 7050 | 0.0408 | - |
| 0.6218 | 7100 | 0.0001 | - |
| 0.6262 | 7150 | 0.0035 | - |
| 0.6306 | 7200 | 0.0337 | - |
| 0.6350 | 7250 | 0.0024 | - |
| 0.6393 | 7300 | 0.0005 | - |
| 0.6437 | 7350 | 0.0001 | - |
| 0.6481 | 7400 | 0.0 | - |
| 0.6525 | 7450 | 0.0001 | - |
| 0.6569 | 7500 | 0.0002 | - |
| 0.6612 | 7550 | 0.0004 | - |
| 0.6656 | 7600 | 0.0125 | - |
| 0.6700 | 7650 | 0.0005 | - |
| 0.6744 | 7700 | 0.0157 | - |
| 0.6788 | 7750 | 0.0055 | - |
| 0.6831 | 7800 | 0.0 | - |
| 0.6875 | 7850 | 0.0053 | - |
| 0.6919 | 7900 | 0.0 | - |
| 0.6963 | 7950 | 0.0002 | - |
| 0.7006 | 8000 | 0.0002 | - |
| 0.7050 | 8050 | 0.0001 | - |
| 0.7094 | 8100 | 0.0001 | - |
| 0.7138 | 8150 | 0.0001 | - |
| 0.7182 | 8200 | 0.0007 | - |
| 0.7225 | 8250 | 0.0002 | - |
| 0.7269 | 8300 | 0.0001 | - |
| 0.7313 | 8350 | 0.0 | - |
| 0.7357 | 8400 | 0.0156 | - |
| 0.7401 | 8450 | 0.0098 | - |
| 0.7444 | 8500 | 0.0 | - |
| 0.7488 | 8550 | 0.0001 | - |
| 0.7532 | 8600 | 0.0042 | - |
| 0.7576 | 8650 | 0.0 | - |
| 0.7620 | 8700 | 0.0 | - |
| 0.7663 | 8750 | 0.0056 | - |
| 0.7707 | 8800 | 0.0 | - |
| 0.7751 | 8850 | 0.0 | - |
| 0.7795 | 8900 | 0.013 | - |
| 0.7839 | 8950 | 0.0 | - |
| 0.7882 | 9000 | 0.0001 | - |
| 0.7926 | 9050 | 0.0 | - |
| 0.7970 | 9100 | 0.0 | - |
| 0.8014 | 9150 | 0.0 | - |
| 0.8057 | 9200 | 0.0 | - |
| 0.8101 | 9250 | 0.0 | - |
| 0.8145 | 9300 | 0.0007 | - |
| 0.8189 | 9350 | 0.0 | - |
| 0.8233 | 9400 | 0.0002 | - |
| 0.8276 | 9450 | 0.0 | - |
| 0.8320 | 9500 | 0.0 | - |
| 0.8364 | 9550 | 0.0089 | - |
| 0.8408 | 9600 | 0.0001 | - |
| 0.8452 | 9650 | 0.0 | - |
| 0.8495 | 9700 | 0.0 | - |
| 0.8539 | 9750 | 0.0 | - |
| 0.8583 | 9800 | 0.0565 | - |
| 0.8627 | 9850 | 0.0161 | - |
| 0.8671 | 9900 | 0.0 | - |
| 0.8714 | 9950 | 0.0246 | - |
| 0.8758 | 10000 | 0.0 | - |
| 0.8802 | 10050 | 0.0 | - |
| 0.8846 | 10100 | 0.012 | - |
| 0.8889 | 10150 | 0.0 | - |
| 0.8933 | 10200 | 0.0 | - |
| 0.8977 | 10250 | 0.0 | - |
| 0.9021 | 10300 | 0.0 | - |
| 0.9065 | 10350 | 0.0 | - |
| 0.9108 | 10400 | 0.0 | - |
| 0.9152 | 10450 | 0.0 | - |
| 0.9196 | 10500 | 0.0 | - |
| 0.9240 | 10550 | 0.0023 | - |
| 0.9284 | 10600 | 0.0 | - |
| 0.9327 | 10650 | 0.0006 | - |
| 0.9371 | 10700 | 0.0 | - |
| 0.9415 | 10750 | 0.0 | - |
| 0.9459 | 10800 | 0.0 | - |
| 0.9503 | 10850 | 0.0 | - |
| 0.9546 | 10900 | 0.0 | - |
| 0.9590 | 10950 | 0.0243 | - |
| 0.9634 | 11000 | 0.0107 | - |
| 0.9678 | 11050 | 0.0001 | - |
| 0.9721 | 11100 | 0.0 | - |
| 0.9765 | 11150 | 0.0 | - |
| 0.9809 | 11200 | 0.0274 | - |
| 0.9853 | 11250 | 0.0 | - |
| 0.9897 | 11300 | 0.0 | - |
| 0.9940 | 11350 | 0.0 | - |
| 0.9984 | 11400 | 0.0 | - |
| 0.0007 | 1 | 0.2021 | - |
| 0.0329 | 50 | 0.1003 | - |
| 0.0657 | 100 | 0.2282 | - |
| 0.0986 | 150 | 0.0507 | - |
| 0.1314 | 200 | 0.046 | - |
| 0.1643 | 250 | 0.0001 | - |
| 0.1971 | 300 | 0.0495 | - |
| 0.2300 | 350 | 0.0031 | - |
| 0.2628 | 400 | 0.0004 | - |
| 0.2957 | 450 | 0.0002 | - |
| 0.3285 | 500 | 0.0 | - |
| 0.3614 | 550 | 0.0 | - |
| 0.3942 | 600 | 0.0 | - |
| 0.4271 | 650 | 0.0001 | - |
| 0.4599 | 700 | 0.0 | - |
| 0.4928 | 750 | 0.0 | - |
| 0.5256 | 800 | 0.0 | - |
| 0.5585 | 850 | 0.0 | - |
| 0.5913 | 900 | 0.0001 | - |
| 0.6242 | 950 | 0.0 | - |
| 0.6570 | 1000 | 0.0001 | - |
| 0.6899 | 1050 | 0.0 | - |
| 0.7227 | 1100 | 0.0 | - |
| 0.7556 | 1150 | 0.0 | - |
| 0.7884 | 1200 | 0.0 | - |
| 0.8213 | 1250 | 0.0 | - |
| 0.8541 | 1300 | 0.0 | - |
| 0.8870 | 1350 | 0.0 | - |
| 0.9198 | 1400 | 0.0 | - |
| 0.9527 | 1450 | 0.0001 | - |
| 0.9855 | 1500 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'To fight bioterrorism sir.'</li><li>'85V-265V 10W LED Warm White Light Motion Sensor Outdoor Flood Light PIR Lamp AUC http://t.co/NJVPXzMj5V http://t.co/Ijd7WzV5t9'</li><li>'Photo: referencereference: xekstrin: I THOUGHT THE NOSTRILS WERE EYES AND I ALMOST CRIED FROM FEAR partake... http://t.co/O7yYjLuKfJ'</li></ul> |
| 1 | <ul><li>'Police officer wounded suspect dead after exchanging shots: RICHMOND Va. (AP) \x89ÛÓ A Richmond police officer wa... http://t.co/Y0qQS2L7bS'</li><li>"There's a weird siren going off here...I hope Hunterston isn't in the process of blowing itself to smithereens..."</li><li>'Iranian warship points weapon at American helicopter... http://t.co/cgFZk8Ha1R'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9203 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("pEpOo/catastrophy8")
# Run inference
preds = model("You must be annihilated!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 14.5506 | 54 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 438 |
| 1 | 323 |
### Training Hyperparameters
- batch_size: (20, 20)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.3847 | - |
| 0.0044 | 50 | 0.3738 | - |
| 0.0088 | 100 | 0.2274 | - |
| 0.0131 | 150 | 0.2747 | - |
| 0.0175 | 200 | 0.2251 | - |
| 0.0219 | 250 | 0.2562 | - |
| 0.0263 | 300 | 0.2623 | - |
| 0.0307 | 350 | 0.1904 | - |
| 0.0350 | 400 | 0.2314 | - |
| 0.0394 | 450 | 0.1669 | - |
| 0.0438 | 500 | 0.1135 | - |
| 0.0482 | 550 | 0.1489 | - |
| 0.0525 | 600 | 0.1907 | - |
| 0.0569 | 650 | 0.1728 | - |
| 0.0613 | 700 | 0.125 | - |
| 0.0657 | 750 | 0.109 | - |
| 0.0701 | 800 | 0.0968 | - |
| 0.0744 | 850 | 0.2101 | - |
| 0.0788 | 900 | 0.1974 | - |
| 0.0832 | 950 | 0.1986 | - |
| 0.0876 | 1000 | 0.0747 | - |
| 0.0920 | 1050 | 0.1117 | - |
| 0.0963 | 1100 | 0.1092 | - |
| 0.1007 | 1150 | 0.1582 | - |
| 0.1051 | 1200 | 0.1243 | - |
| 0.1095 | 1250 | 0.2873 | - |
| 0.1139 | 1300 | 0.2415 | - |
| 0.1182 | 1350 | 0.1264 | - |
| 0.1226 | 1400 | 0.127 | - |
| 0.1270 | 1450 | 0.1308 | - |
| 0.1314 | 1500 | 0.0669 | - |
| 0.1358 | 1550 | 0.1218 | - |
| 0.1401 | 1600 | 0.114 | - |
| 0.1445 | 1650 | 0.0612 | - |
| 0.1489 | 1700 | 0.0527 | - |
| 0.1533 | 1750 | 0.1421 | - |
| 0.1576 | 1800 | 0.0048 | - |
| 0.1620 | 1850 | 0.0141 | - |
| 0.1664 | 1900 | 0.0557 | - |
| 0.1708 | 1950 | 0.0206 | - |
| 0.1752 | 2000 | 0.1171 | - |
| 0.1795 | 2050 | 0.0968 | - |
| 0.1839 | 2100 | 0.0243 | - |
| 0.1883 | 2150 | 0.0233 | - |
| 0.1927 | 2200 | 0.0738 | - |
| 0.1971 | 2250 | 0.0071 | - |
| 0.2014 | 2300 | 0.0353 | - |
| 0.2058 | 2350 | 0.0602 | - |
| 0.2102 | 2400 | 0.003 | - |
| 0.2146 | 2450 | 0.0625 | - |
| 0.2190 | 2500 | 0.0173 | - |
| 0.2233 | 2550 | 0.1017 | - |
| 0.2277 | 2600 | 0.0582 | - |
| 0.2321 | 2650 | 0.0437 | - |
| 0.2365 | 2700 | 0.104 | - |
| 0.2408 | 2750 | 0.0156 | - |
| 0.2452 | 2800 | 0.0034 | - |
| 0.2496 | 2850 | 0.0343 | - |
| 0.2540 | 2900 | 0.1106 | - |
| 0.2584 | 2950 | 0.001 | - |
| 0.2627 | 3000 | 0.004 | - |
| 0.2671 | 3050 | 0.0074 | - |
| 0.2715 | 3100 | 0.0849 | - |
| 0.2759 | 3150 | 0.0009 | - |
| 0.2803 | 3200 | 0.0379 | - |
| 0.2846 | 3250 | 0.0109 | - |
| 0.2890 | 3300 | 0.0019 | - |
| 0.2934 | 3350 | 0.0154 | - |
| 0.2978 | 3400 | 0.0017 | - |
| 0.3022 | 3450 | 0.0003 | - |
| 0.3065 | 3500 | 0.0002 | - |
| 0.3109 | 3550 | 0.0025 | - |
| 0.3153 | 3600 | 0.0123 | - |
| 0.3197 | 3650 | 0.0007 | - |
| 0.3240 | 3700 | 0.0534 | - |
| 0.3284 | 3750 | 0.0004 | - |
| 0.3328 | 3800 | 0.0084 | - |
| 0.3372 | 3850 | 0.0088 | - |
| 0.3416 | 3900 | 0.0201 | - |
| 0.3459 | 3950 | 0.0002 | - |
| 0.3503 | 4000 | 0.0102 | - |
| 0.3547 | 4050 | 0.0043 | - |
| 0.3591 | 4100 | 0.0124 | - |
| 0.3635 | 4150 | 0.0845 | - |
| 0.3678 | 4200 | 0.0002 | - |
| 0.3722 | 4250 | 0.0014 | - |
| 0.3766 | 4300 | 0.1131 | - |
| 0.3810 | 4350 | 0.0612 | - |
| 0.3854 | 4400 | 0.0577 | - |
| 0.3897 | 4450 | 0.0235 | - |
| 0.3941 | 4500 | 0.0156 | - |
| 0.3985 | 4550 | 0.0078 | - |
| 0.4029 | 4600 | 0.0356 | - |
| 0.4073 | 4650 | 0.0595 | - |
| 0.4116 | 4700 | 0.0001 | - |
| 0.4160 | 4750 | 0.0018 | - |
| 0.4204 | 4800 | 0.0013 | - |
| 0.4248 | 4850 | 0.0008 | - |
| 0.4291 | 4900 | 0.0832 | - |
| 0.4335 | 4950 | 0.0083 | - |
| 0.4379 | 5000 | 0.0007 | - |
| 0.4423 | 5050 | 0.0417 | - |
| 0.4467 | 5100 | 0.0001 | - |
| 0.4510 | 5150 | 0.0218 | - |
| 0.4554 | 5200 | 0.0001 | - |
| 0.4598 | 5250 | 0.0012 | - |
| 0.4642 | 5300 | 0.0002 | - |
| 0.4686 | 5350 | 0.0006 | - |
| 0.4729 | 5400 | 0.0223 | - |
| 0.4773 | 5450 | 0.0612 | - |
| 0.4817 | 5500 | 0.0004 | - |
| 0.4861 | 5550 | 0.0 | - |
| 0.4905 | 5600 | 0.0007 | - |
| 0.4948 | 5650 | 0.0007 | - |
| 0.4992 | 5700 | 0.0116 | - |
| 0.5036 | 5750 | 0.0262 | - |
| 0.5080 | 5800 | 0.0336 | - |
| 0.5123 | 5850 | 0.026 | - |
| 0.5167 | 5900 | 0.0004 | - |
| 0.5211 | 5950 | 0.0001 | - |
| 0.5255 | 6000 | 0.0001 | - |
| 0.5299 | 6050 | 0.0001 | - |
| 0.5342 | 6100 | 0.0029 | - |
| 0.5386 | 6150 | 0.0001 | - |
| 0.5430 | 6200 | 0.0699 | - |
| 0.5474 | 6250 | 0.0262 | - |
| 0.5518 | 6300 | 0.0269 | - |
| 0.5561 | 6350 | 0.0002 | - |
| 0.5605 | 6400 | 0.0666 | - |
| 0.5649 | 6450 | 0.0209 | - |
| 0.5693 | 6500 | 0.0003 | - |
| 0.5737 | 6550 | 0.0001 | - |
| 0.5780 | 6600 | 0.0115 | - |
| 0.5824 | 6650 | 0.0003 | - |
| 0.5868 | 6700 | 0.0001 | - |
| 0.5912 | 6750 | 0.0056 | - |
| 0.5956 | 6800 | 0.0603 | - |
| 0.5999 | 6850 | 0.0002 | - |
| 0.6043 | 6900 | 0.0003 | - |
| 0.6087 | 6950 | 0.0092 | - |
| 0.6131 | 7000 | 0.0562 | - |
| 0.6174 | 7050 | 0.0408 | - |
| 0.6218 | 7100 | 0.0001 | - |
| 0.6262 | 7150 | 0.0035 | - |
| 0.6306 | 7200 | 0.0337 | - |
| 0.6350 | 7250 | 0.0024 | - |
| 0.6393 | 7300 | 0.0005 | - |
| 0.6437 | 7350 | 0.0001 | - |
| 0.6481 | 7400 | 0.0 | - |
| 0.6525 | 7450 | 0.0001 | - |
| 0.6569 | 7500 | 0.0002 | - |
| 0.6612 | 7550 | 0.0004 | - |
| 0.6656 | 7600 | 0.0125 | - |
| 0.6700 | 7650 | 0.0005 | - |
| 0.6744 | 7700 | 0.0157 | - |
| 0.6788 | 7750 | 0.0055 | - |
| 0.6831 | 7800 | 0.0 | - |
| 0.6875 | 7850 | 0.0053 | - |
| 0.6919 | 7900 | 0.0 | - |
| 0.6963 | 7950 | 0.0002 | - |
| 0.7006 | 8000 | 0.0002 | - |
| 0.7050 | 8050 | 0.0001 | - |
| 0.7094 | 8100 | 0.0001 | - |
| 0.7138 | 8150 | 0.0001 | - |
| 0.7182 | 8200 | 0.0007 | - |
| 0.7225 | 8250 | 0.0002 | - |
| 0.7269 | 8300 | 0.0001 | - |
| 0.7313 | 8350 | 0.0 | - |
| 0.7357 | 8400 | 0.0156 | - |
| 0.7401 | 8450 | 0.0098 | - |
| 0.7444 | 8500 | 0.0 | - |
| 0.7488 | 8550 | 0.0001 | - |
| 0.7532 | 8600 | 0.0042 | - |
| 0.7576 | 8650 | 0.0 | - |
| 0.7620 | 8700 | 0.0 | - |
| 0.7663 | 8750 | 0.0056 | - |
| 0.7707 | 8800 | 0.0 | - |
| 0.7751 | 8850 | 0.0 | - |
| 0.7795 | 8900 | 0.013 | - |
| 0.7839 | 8950 | 0.0 | - |
| 0.7882 | 9000 | 0.0001 | - |
| 0.7926 | 9050 | 0.0 | - |
| 0.7970 | 9100 | 0.0 | - |
| 0.8014 | 9150 | 0.0 | - |
| 0.8057 | 9200 | 0.0 | - |
| 0.8101 | 9250 | 0.0 | - |
| 0.8145 | 9300 | 0.0007 | - |
| 0.8189 | 9350 | 0.0 | - |
| 0.8233 | 9400 | 0.0002 | - |
| 0.8276 | 9450 | 0.0 | - |
| 0.8320 | 9500 | 0.0 | - |
| 0.8364 | 9550 | 0.0089 | - |
| 0.8408 | 9600 | 0.0001 | - |
| 0.8452 | 9650 | 0.0 | - |
| 0.8495 | 9700 | 0.0 | - |
| 0.8539 | 9750 | 0.0 | - |
| 0.8583 | 9800 | 0.0565 | - |
| 0.8627 | 9850 | 0.0161 | - |
| 0.8671 | 9900 | 0.0 | - |
| 0.8714 | 9950 | 0.0246 | - |
| 0.8758 | 10000 | 0.0 | - |
| 0.8802 | 10050 | 0.0 | - |
| 0.8846 | 10100 | 0.012 | - |
| 0.8889 | 10150 | 0.0 | - |
| 0.8933 | 10200 | 0.0 | - |
| 0.8977 | 10250 | 0.0 | - |
| 0.9021 | 10300 | 0.0 | - |
| 0.9065 | 10350 | 0.0 | - |
| 0.9108 | 10400 | 0.0 | - |
| 0.9152 | 10450 | 0.0 | - |
| 0.9196 | 10500 | 0.0 | - |
| 0.9240 | 10550 | 0.0023 | - |
| 0.9284 | 10600 | 0.0 | - |
| 0.9327 | 10650 | 0.0006 | - |
| 0.9371 | 10700 | 0.0 | - |
| 0.9415 | 10750 | 0.0 | - |
| 0.9459 | 10800 | 0.0 | - |
| 0.9503 | 10850 | 0.0 | - |
| 0.9546 | 10900 | 0.0 | - |
| 0.9590 | 10950 | 0.0243 | - |
| 0.9634 | 11000 | 0.0107 | - |
| 0.9678 | 11050 | 0.0001 | - |
| 0.9721 | 11100 | 0.0 | - |
| 0.9765 | 11150 | 0.0 | - |
| 0.9809 | 11200 | 0.0274 | - |
| 0.9853 | 11250 | 0.0 | - |
| 0.9897 | 11300 | 0.0 | - |
| 0.9940 | 11350 | 0.0 | - |
| 0.9984 | 11400 | 0.0 | - |
| 0.0007 | 1 | 0.2021 | - |
| 0.0329 | 50 | 0.1003 | - |
| 0.0657 | 100 | 0.2282 | - |
| 0.0986 | 150 | 0.0507 | - |
| 0.1314 | 200 | 0.046 | - |
| 0.1643 | 250 | 0.0001 | - |
| 0.1971 | 300 | 0.0495 | - |
| 0.2300 | 350 | 0.0031 | - |
| 0.2628 | 400 | 0.0004 | - |
| 0.2957 | 450 | 0.0002 | - |
| 0.3285 | 500 | 0.0 | - |
| 0.3614 | 550 | 0.0 | - |
| 0.3942 | 600 | 0.0 | - |
| 0.4271 | 650 | 0.0001 | - |
| 0.4599 | 700 | 0.0 | - |
| 0.4928 | 750 | 0.0 | - |
| 0.5256 | 800 | 0.0 | - |
| 0.5585 | 850 | 0.0 | - |
| 0.5913 | 900 | 0.0001 | - |
| 0.6242 | 950 | 0.0 | - |
| 0.6570 | 1000 | 0.0001 | - |
| 0.6899 | 1050 | 0.0 | - |
| 0.7227 | 1100 | 0.0 | - |
| 0.7556 | 1150 | 0.0 | - |
| 0.7884 | 1200 | 0.0 | - |
| 0.8213 | 1250 | 0.0 | - |
| 0.8541 | 1300 | 0.0 | - |
| 0.8870 | 1350 | 0.0 | - |
| 0.9198 | 1400 | 0.0 | - |
| 0.9527 | 1450 | 0.0001 | - |
| 0.9855 | 1500 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"base_model": "sentence-transformers/all-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Rly tragedy in MP: Some live to recount horror: ÛÏWhen I saw coaches of my train plunging into water I called my daughters and said t..."}, {"text": "You must be annihilated!"}, {"text": "Severe Thunderstorms and Flash Flooding Possible in the Mid-South and Midwest http://t.co/uAhIcWpIh4 #WEATHER #ENVIRONMENT #CLIMATE #NATURE"}, {"text": "everyone's wonder who will win and I'm over here wondering are those grapes real ?????? #BB17"}, {"text": "i swea it feels like im about to explode ??"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/all-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9203152364273205, "name": "Accuracy"}]}]}]} |
Anjaan-Khadka/Nepali-Summarization | Anjaan-Khadka | summarization | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"ne",
"dataset:csebuetnlp/xlsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-23T11:44:58 | 2023-03-17T08:45:04 | 21 | 0 | ---
datasets:
- csebuetnlp/xlsum
language:
- ne
tags:
- summarization
- mT5
widget:
- text: तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र
गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा
उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६०
वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ ।
model-index:
- name: Anjaan-Khadka/summarization_nepali
results:
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- type: rouge
value: 36.5002
name: ROUGE-1
verified: false
---
# Adaptation of mT5-multilingual-XLSum for Nepali Language
This repository contains an adapted version of mT5-multilingual-XLSum for a single language (Nepali). View the original [mT5-multilingual-XLSum model](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

article_text = " तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ ।"

# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
model_name = "Anjaan-Khadka/summarization_nepali"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the article, padding/truncating to 512 tokens
input_ids = tokenizer(
    article_text,
    return_tensors="pt",
    padding="max_length",
    truncation=True,
    max_length=512
)["input_ids"]

# Beam-search decoding; no_repeat_ngram_size=2 suppresses repeated bigrams
output_ids = model.generate(
    input_ids=input_ids,
    max_length=84,
    no_repeat_ngram_size=2,
    num_beams=4
)[0]

summary = tokenizer.decode(
    output_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)
print(summary)
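# (Hedged note, not part of the original card.) The ROUGE-1 figure in the
# model metadata could in principle be checked with the `evaluate` library,
# given a reference summary:
#   import evaluate
#   rouge = evaluate.load("rouge")
#   rouge.compute(predictions=[summary], references=[reference_summary])
# Caveat: the stock ROUGE tokenizer is English-oriented; XL-Sum evaluations
# use a multilingual ROUGE variant for languages such as Nepali.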
``` | [
"SUMMARIZATION"
] | Non_BioNLP |
# Adaptation of mT5-multilingual-XLSum for Nepali Language
This repository contains an adapted version of mT5-multilingual-XLSum for a single language (Nepali). View the original [mT5-multilingual-XLSum model](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

article_text = " तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ ।"

# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
model_name = "Anjaan-Khadka/summarization_nepali"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the article, padding/truncating to 512 tokens
input_ids = tokenizer(
    article_text,
    return_tensors="pt",
    padding="max_length",
    truncation=True,
    max_length=512
)["input_ids"]

# Beam-search decoding; no_repeat_ngram_size=2 suppresses repeated bigrams
output_ids = model.generate(
    input_ids=input_ids,
    max_length=84,
    no_repeat_ngram_size=2,
    num_beams=4
)[0]

summary = tokenizer.decode(
    output_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)
print(summary)
``` | {"datasets": ["csebuetnlp/xlsum"], "language": ["ne"], "tags": ["summarization", "mT5"], "widget": [{"text": "तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ ।"}], "model-index": [{"name": "Anjaan-Khadka/summarization_nepali", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "test"}, "metrics": [{"type": "rouge", "value": 36.5002, "name": "ROUGE-1", "verified": false}]}]}]} |
sndsabin/fake-news-classifier | sndsabin | null | [
"license:gpl-3.0",
"region:us"
] | 2022-03-31T08:53:49 | 2022-04-07T08:58:17 | 0 | 0 | ---
license: gpl-3.0
---
**Fake News Classifier**: Text classification model to detect fake news articles!
**Dataset**: [Kaggle Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
**Fake News Classifier**: Text classification model to detect fake news articles!
**Dataset**: [Kaggle Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
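The card stops at naming the dataset; no architecture, training code, or usage snippet is given. Below is a minimal sketch of how such a classifier could be trained on the linked Kaggle data — assuming its usual `Fake.csv`/`True.csv` layout and a TF-IDF + logistic-regression baseline, neither of which is confirmed by this repository:
```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Assumed layout: the Kaggle dataset ships as Fake.csv and True.csv,
# each with a "text" column containing the article body.
fake = pd.read_csv("Fake.csv").assign(label=1)  # 1 = fake
real = pd.read_csv("True.csv").assign(label=0)  # 0 = real
df = pd.concat([fake, real], ignore_index=True)

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# TF-IDF + logistic regression: a common baseline for this dataset,
# not necessarily what this repository actually used.
clf = make_pipeline(
    TfidfVectorizer(max_features=50_000),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```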
| {"license": "gpl-3.0"} |