---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: we get some truly unique character studies and a cross-section of americana
that hollywood could n't possibly fictionalize and be believed .
- text: the movie is one of the best examples of artful large format filmmaking you
are likely to see anytime soon .
- text: my response to the film is best described as lukewarm .
- text: the movie 's ripe , enrapturing beauty will tempt those willing to probe its
inscrutable mysteries .
- text: fear dot com is so rambling and disconnected it never builds any suspense
.
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.4235294117647059
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 4     | <ul><li>'this is a sincerely crafted picture that deserves to emerge from the traffic jam of holiday movies .'</li><li>'e.t. works because its flabbergasting principals , 14-year-old robert macnaughton , 6-year-old drew barrymore and 10-year-old henry thomas , convince us of the existence of the wise , wizened visitor from a faraway planet .'</li><li>'jones has delivered a solidly entertaining and moving family drama .'</li></ul> |
| 0     | <ul><li>'nothing more than an amiable but unfocused bagatelle that plays like a loosely-connected string of acting-workshop exercises .'</li><li>'makes a joke out of car chases for an hour and then gives us half an hour of car chases .'</li><li>'in all the annals of the movies , few films have been this odd , inexplicable and unpleasant .'</li></ul> |
| 2     | <ul><li>'in gleefully , thumpingly hyperbolic terms , it covers just about every cliche in the compendium about crass , jaded movie types and the phony baloney movie biz .'</li><li>"this may be dover kosashvili 's feature directing debut , but it looks an awful lot like life -- gritty , awkward and ironic ."</li><li>"the filmmaker 's heart is in the right place ..."</li></ul> |
| 3     | <ul><li>"if you 're as happy listening to movies as you are watching them , and the slow parade of human frailty fascinates you , then you 're at the right film ."</li><li>'a bracing , unblinking work that serves as a painful elegy and sobering cautionary tale .'</li><li>"now as a former gong show addict , i 'll admit it , my only complaint is that we did n't get more re-creations of all those famous moments from the show ."</li></ul> |
| 1     | <ul><li>"you 'll feel like you ate a reeses without the peanut butter ... '"</li><li>"but , like silence , it 's a movie that gets under your skin ."</li><li>'he seems to want both , but succeeds in making neither .'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.4235 |
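
The accuracy above was computed on a held-out test split that is not documented in this card. As a rough illustration of how such a number could be reproduced, here is a minimal sketch; it assumes the SST-5 dataset hosted on the Hub as `SetFit/sst5` (suggested by the model name, but not confirmed by the card), so treat it as illustrative rather than an exact reproduction.

```python
from datasets import load_dataset
from setfit import SetFitModel

# Assumption: evaluation on the SST-5 test split hosted as "SetFit/sst5";
# the card itself lists the evaluation dataset as unknown.
test_ds = load_dataset("SetFit/sst5", split="test")

model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-sst5")
preds = model.predict(test_ds["text"])

# Simple accuracy over the test split
correct = sum(int(p) == y for p, y in zip(preds, test_ds["label"]))
print("accuracy:", correct / len(test_ds))
```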
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-sst5")
# Run inference
preds = model("my response to the film is best described as lukewarm .")
```
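The model also accepts a batch of texts, and the classification head can return per-class probabilities. A minimal sketch (the example sentences are taken from the widget above; the exact predicted labels are not guaranteed):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-sst5")

# Batch inference: pass a list of texts to get one predicted label per text
texts = [
    "my response to the film is best described as lukewarm .",
    "fear dot com is so rambling and disconnected it never builds any suspense .",
]
preds = model.predict(texts)

# Per-class probabilities over the 5 labels (0-4), useful for thresholding or ranking
probs = model.predict_proba(texts)
print(preds, probs)
```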
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 3   | 18.325 | 35  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
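
For reference, a training run with these hyperparameters could look roughly like the sketch below. It assumes the `SetFit/sst5` dataset on the Hub (not confirmed by this card) and uses setfit's `sample_dataset` helper to draw 8 examples per class, matching the sample counts above; treat it as an illustration rather than the exact script used.

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments, sample_dataset

# Assumption: SST-5 from the Hub ("SetFit/sst5"); the original training data is not documented here.
dataset = load_dataset("SetFit/sst5")
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=8)
eval_dataset = dataset["validation"]

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(8, 8),              # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    num_iterations=20,              # contrastive pairs generated per example
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    end_to_end=False,
    use_amp=False,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
metrics = trainer.evaluate()
print(metrics)
```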
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.005 | 1 | 0.3225 | - |
| 0.25 | 50 | 0.2032 | - |
| 0.5 | 100 | 0.0987 | - |
| 0.75 | 150 | 0.062 | - |
| 1.0 | 200 | 0.016 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```