SetFit with FacebookAI/xlm-roberta-base

This is a SetFit model for text classification. It uses FacebookAI/xlm-roberta-base as the Sentence Transformer embedding model, with a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
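
As a rough sketch, these two steps map onto the SetFit training API as follows. The dataset contents here are hypothetical placeholders; the actual training data is described under Training Details below.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot data; "text" and "label" are the column names
# the SetFit Trainer expects by default.
train_dataset = Dataset.from_dict({
    "text": ["Ensimmäinen esimerkkilause.", "Toinen esimerkkilause."],
    "label": [0, 1],
})

# Wrap the base embedding model; SetFit attaches a LogisticRegression head by default.
model = SetFitModel.from_pretrained("FacebookAI/xlm-roberta-base")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=4),
    train_dataset=train_dataset,
)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the classification head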

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: FacebookAI/xlm-roberta-base
  • Classification head: LogisticRegression
  • Number of Classes: 2
  • Model size: 278M parameters (F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label Examples
Label 0:
  • 'Uusi maakuntajohtaja haluaa tehdä määrätietoisesti avointa elinkeinopolitiikkaa.'
  • '– Verot eivät saa nousta.'
  • 'Enemmänkin ulkoinen apu, sikäli kun sitä tarvitaan, on luonteeltaan täydentävää, läsnäoloa.'
Label 1:
  • 'nan'
  • '– Seuraava laskumarkkina voi olla edessä, kun keskuspankit alkavat vähentää likviditeettiä pois markkinoilta, Rothovius tuumaa.'
  • 'Hallituspuolueista puolestaan huomautettiin, että sotea on yritetty saadaan kuntoon 15 vuotta, josta kokoomus on ollut hallitusvastuussa valtaosan.'

Evaluation

Metrics

Label   Metric
all     0.8210
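
A held-out split can be scored with the SetFit Trainer. This is a minimal sketch; the evaluation examples below are hypothetical placeholders, and the Trainer's default metric is accuracy.

from datasets import Dataset
from setfit import SetFitModel, Trainer

model = SetFitModel.from_pretrained("HY-Aalto-DIME/FinnClaim-detect-FinBERT-CF2")

# Hypothetical held-out examples; real evaluation would use a proper test split.
eval_dataset = Dataset.from_dict({
    "text": ["Ensimmäinen esimerkkilause.", "Toinen esimerkkilause."],
    "label": [0, 1],
})

trainer = Trainer(model=model, eval_dataset=eval_dataset)
print(trainer.evaluate())  # e.g. {'accuracy': ...}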

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("HY-Aalto-DIME/FinnClaim-detect-FinBERT-CF2")
# Run inference
preds = model("Toinen pulma on se, lopahtaako harrastajien into.")
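
Because the classification head is a scikit-learn LogisticRegression, per-class probabilities are also available. A small sketch, reusing sentences from the examples above:

# Batch inference: one predicted label per input sentence.
preds = model([
    "Toinen pulma on se, lopahtaako harrastajien into.",
    "– Verot eivät saa nousta.",
])

# Per-class probabilities from the LogisticRegression head.
probs = model.predict_proba(["Toinen pulma on se, lopahtaako harrastajien into."])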

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     1     11.0767   29

Label   Training Sample Count
0       727
1       329
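
These statistics are simple to recompute; a minimal sketch over a hypothetical list of training texts and labels:

import statistics
from collections import Counter

texts = ["Ensimmäinen esimerkkilause.", "Toinen, hieman pidempi esimerkkilause."]  # hypothetical
labels = [0, 1]  # hypothetical

word_counts = [len(t.split()) for t in texts]
print(min(word_counts), statistics.median(word_counts), max(word_counts))
print(Counter(labels))  # per-label training sample counts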

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (4, 4)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 6
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
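
For reproducibility, the list above maps onto setfit.TrainingArguments roughly as follows. This is a sketch using the SetFit 1.0 argument names; distance_metric is left at its cosine-distance default, and margin only affects triplet-style losses.

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                 # (embedding phase, classifier phase)
    num_epochs=(4, 4),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=6,
    body_learning_rate=(2e-05, 1e-05),   # (body during contrastive phase, body during head phase)
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,                         # only used by triplet-style losses
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)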

Training Results

Epoch Step Training Loss Validation Loss
0.0013 1 0.4903 -
0.0631 50 0.4847 -
0.1263 100 0.2626 -
0.1894 150 0.2597 -
0.2525 200 0.1716 -
0.3157 250 0.3103 -
0.3788 300 0.0955 -
0.4419 350 0.1019 -
0.5051 400 0.167 -
0.5682 450 0.0886 -
0.6313 500 0.0591 -
0.6944 550 0.161 -
0.7576 600 0.031 -
0.8207 650 0.0947 -
0.8838 700 0.0087 -
0.9470 750 0.0055 -
1.0 792 - 0.2484
1.0101 800 0.0034 -
1.0732 850 0.0037 -
1.1364 900 0.0043 -
1.1995 950 0.0031 -
1.2626 1000 0.0007 -
1.3258 1050 0.002 -
1.3889 1100 0.0004 -
1.4520 1150 0.0021 -
1.5152 1200 0.0005 -
1.5783 1250 0.0002 -
1.6414 1300 0.0009 -
1.7045 1350 0.0002 -
1.7677 1400 0.0392 -
1.8308 1450 0.0039 -
1.8939 1500 0.0002 -
1.9571 1550 0.0411 -
2.0 1584 - 0.2935
2.0202 1600 0.0002 -
2.0833 1650 0.0003 -
2.1465 1700 0.0548 -
2.2096 1750 0.0042 -
2.2727 1800 0.0002 -
2.3359 1850 0.0002 -
2.3990 1900 0.0001 -
2.4621 1950 0.0001 -
2.5253 2000 0.0003 -
2.5884 2050 0.0001 -
2.6515 2100 0.0001 -
2.7146 2150 0.0002 -
2.7778 2200 0.0002 -
2.8409 2250 0.0001 -
2.9040 2300 0.0002 -
2.9672 2350 0.0002 -
3.0 2376 - 0.3097
3.0303 2400 0.0001 -
3.0934 2450 0.0 -
3.1566 2500 0.0001 -
3.2197 2550 0.0001 -
3.2828 2600 0.0001 -
3.3460 2650 0.0001 -
3.4091 2700 0.0001 -
3.4722 2750 0.0001 -
3.5354 2800 0.0001 -
3.5985 2850 0.0001 -
3.6616 2900 0.0001 -
3.7247 2950 0.0001 -
3.7879 3000 0.0001 -
3.8510 3050 0.0001 -
3.9141 3100 0.0001 -
3.9773 3150 0.0001 -
4.0 3168 - 0.2759

Framework Versions

  • Python: 3.11.9
  • SetFit: 1.0.3
  • Sentence Transformers: 3.2.0
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu124
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
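
To approximate this environment, the listed versions can be pinned at install time (the PyTorch build, 2.4.0+cu124, additionally depends on your platform's CUDA setup):

pip install "setfit==1.0.3" "sentence-transformers==3.2.0" "transformers==4.44.0" "datasets==2.21.0" "tokenizers==0.19.1"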

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}