---
tags:
  - setfit
  - absa
  - sentence-transformers
  - text-classification
  - generated_from_setfit_trainer
widget: []
metrics:
  - accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
language:
  - en
base_model:
  - sentence-transformers/all-distilroberta-v1
---

## Usage

This model was created with a fork of SetFit, using a custom aspect extractor.

It is intended to be used in conjunction with IMDb_ABSA only.

The default SetFit model card follows below:

# SetFit Polarity Model

This is a SetFit model that can be used for Aspect Based Sentiment Analysis (ABSA). A LogisticRegression instance is used for classification. In particular, this model is in charge of classifying aspect polarities.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
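As a rough illustration of step 2, here is a minimal sketch of training the classification head with scikit-learn. Random vectors stand in for the embeddings that the fine-tuned Sentence Transformer would produce from text; in the real pipeline, the features come from encoding aspect spans in context.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for embeddings from the fine-tuned Sentence Transformer.
# In practice these would come from encoding aspect spans in context.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(8, 768))   # 8 examples, 768-dim vectors
labels = ["positive", "negative"] * 4    # polarity labels

# Step 2: train the classification head on the embedded features.
head = LogisticRegression(max_iter=1000)
head.fit(embeddings, labels)

# The trained head maps new embeddings to polarity labels.
preds = head.predict(embeddings)
```

The key point is the division of labor: contrastive fine-tuning (step 1) shapes the embedding space so that a simple linear head like this is sufficient for the final classification.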

This model was trained within the context of a larger system for ABSA, which looks like so:

  1. Use a spaCy model to select possible aspect span candidates.
  2. Use a SetFit model to filter these possible aspect span candidates.
  3. Use this SetFit model to classify the filtered aspect span candidates.
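Schematically, the three stages above compose as follows. The function names and stub implementations here are hypothetical placeholders for illustration, not the actual spaCy or SetFit APIs:

```python
# Hypothetical stubs showing how the three ABSA stages compose.
def extract_candidates(text):
    # Stage 1 (spaCy model): propose possible aspect spans.
    # Stub: treat capitalized tokens as candidates.
    return [tok for tok in text.split() if tok.istitle()]

def is_aspect(span):
    # Stage 2 (aspect SetFit model): filter out non-aspect spans.
    return span != "The"

def polarity(span, text):
    # Stage 3 (this polarity SetFit model): classify each kept span.
    return "positive" if "great" in text else "negative"

def absa(text):
    spans = [s for s in extract_candidates(text) if is_aspect(s)]
    return [{"span": s, "polarity": polarity(s, text)} for s in spans]

print(absa("The Acting was great"))
# → [{'span': 'Acting', 'polarity': 'positive'}]
```

Each stage narrows the work for the next: candidate generation is cheap and recall-oriented, filtering removes false candidates, and only the surviving spans reach the polarity classifier.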

## Model Details

### Model Description

### Original SetFit Model Sources

## Training Details

### Framework Versions

  • Python: 3.10.6
  • SetFit: 1.1.2
  • Sentence Transformers: 4.1.0
  • spaCy: 3.7.5
  • Transformers: 4.52.4
  • PyTorch: 2.7.0+cu128
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```