# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a SetFit model that can be used for text classification. It uses sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 as the Sentence Transformer embedding model, with a OneVsRestClassifier instance as the classification head, so each input can receive multiple labels.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
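The second step — fitting a one-vs-rest head on the sentence embeddings — can be sketched as follows. This is a minimal illustration, not the SetFit internals: random vectors stand in for the fine-tuned embeddings, and the three label columns (e.g. SSID, IMEI, Email) are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Stand-ins for embeddings from the fine-tuned Sentence Transformer
# (MiniLM-L12-v2 produces 384-dim vectors); SetFit computes these internally.
X = rng.normal(size=(40, 384))
# Multi-label targets: a sentence may mention several PII types at once.
y = rng.integers(0, 2, size=(40, 3))

# One binary classifier per label, mirroring the OneVsRestClassifier head
head = OneVsRestClassifier(LogisticRegression(max_iter=1000))
head.fit(X, y)

preds = head.predict(X)  # one 0/1 column per label
```

Because each label gets its own binary classifier, adding or removing a PII class does not disturb the others.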
## Model Details

### Model Description

### Model Sources
## Evaluation

### Metrics

| Label | Accuracy | Precision | Recall | F1     | Roc_Auc | Hamming_Loss |
|:------|:---------|:----------|:-------|:-------|:--------|:-------------|
| all   | 0.9028   | 0.9795    | 0.9262 | 0.9498 | 0.9608  | 0.0170       |
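For multi-label evaluation, Hamming loss is the fraction of individual label cells that are wrong (lower is better), which is why it can be very small even when exact-match accuracy is lower. A toy computation with invented labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss

# Hypothetical labels for 3 documents x 4 PII classes
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 1],   # one spurious label on the second document
                   [1, 1, 0, 1]])

# Exact-match accuracy: a document counts only if ALL labels match -> 2/3
exact = accuracy_score(y_true, y_pred)
# Hamming loss: wrong label cells / total cells -> 1/12
hamming = hamming_loss(y_true, y_pred)
```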
## Uses

### Direct Use for Inference
First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download the model from the Hugging Face Hub
model = SetFitModel.from_pretrained("etham13/consent-form-PII-corrected")
# Run inference; the one-vs-rest head returns one binary decision per PII label
preds = model("We may use your device's SSID to provide location-based services.")
```
## Training Details

### Training Set Metrics

| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 8   | 17.0764 | 83  |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 30
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
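These values map directly onto SetFit's `TrainingArguments`; a sketch of how this configuration could be reproduced (the dataset and trainer wiring are omitted, exact argument support may vary across SetFit versions, and `distance_metric`/`margin` are left at their defaults here since they only affect triplet-style losses, not `CosineSimilarityLoss`):

```python
from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=30,
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)
```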
### Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:-------|:-----|:--------------|:----------------|
| 0.0019 | 1    | 0.2177        | -               |
| 0.0926 | 50   | 0.1652        | -               |
| 0.1852 | 100  | 0.1008        | -               |
| 0.2778 | 150  | 0.0743        | -               |
| 0.3704 | 200  | 0.0704        | -               |
| 0.4630 | 250  | 0.0642        | -               |
| 0.5556 | 300  | 0.0586        | -               |
| 0.6481 | 350  | 0.0528        | -               |
| 0.7407 | 400  | 0.0594        | -               |
| 0.8333 | 450  | 0.0537        | -               |
| 0.9259 | 500  | 0.0584        | -               |
### External Validation (Batch 3, updated)

#### Overall Metrics

| Metric        | Value  |
|:--------------|:-------|
| Accuracy      | 0.8966 |
| F1 Score      | 0.9249 |
| ROC AUC Score | 0.9654 |
| Hamming Loss  | 0.0115 |
#### Per-Class Performance

| Class                  | Precision | Recall | F1-Score | Support |
|:-----------------------|:----------|:-------|:---------|:--------|
| AAID                   | 1.00      | 0.83   | 0.91     | 6       |
| SSID                   | 0.75      | 1.00   | 0.86     | 3       |
| BSSID                  | 1.00      | 1.00   | 1.00     | 4       |
| Bluetooth MAC          | 1.00      | 1.00   | 1.00     | 2       |
| IMEI                   | 0.75      | 1.00   | 0.86     | 3       |
| Email                  | 0.86      | 1.00   | 0.92     | 6       |
| IMSI                   | 1.00      | 0.80   | 0.89     | 5       |
| Phone Number           | 1.00      | 1.00   | 1.00     | 1       |
| (Device) Serial Number | 1.00      | 0.80   | 0.89     | 5       |
#### Averaged Performance

| Avg Type     | Precision | Recall | F1-Score | Support |
|:-------------|:----------|:-------|:---------|:--------|
| Micro Avg    | 0.91      | 0.91   | 0.91     | 35      |
| Macro Avg    | 0.93      | 0.94   | 0.92     | 35      |
| Weighted Avg | 0.93      | 0.91   | 0.91     | 35      |
| Samples Avg  | 0.47      | 0.47   | 0.47     | 35      |
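The gap between the micro/macro averages (around 0.91) and the samples average (0.47) is typical for sparse multi-label data: samples averaging scores each document separately, and documents with no true or predicted labels contribute a score of zero. The effect can be reproduced with made-up data:

```python
import numpy as np
from sklearn.metrics import f1_score

# 4 documents x 2 labels; half the documents carry no PII label at all (hypothetical)
y_true = np.array([[1, 0], [0, 1], [0, 0], [0, 0]])
y_pred = np.array([[1, 0], [0, 1], [0, 0], [0, 0]])

# Micro: pool all label decisions across documents -> perfect here
micro = f1_score(y_true, y_pred, average="micro")
# Samples: average per-document F1; the two all-negative documents score 0
samples = f1_score(y_true, y_pred, average="samples", zero_division=0)
```

So a low samples average does not by itself indicate poor per-label performance.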
### External Validation (old)

#### Overall Metrics

| Metric        | Value  |
|:--------------|:-------|
| Accuracy      | 0.8182 |
| F1 Score      | 0.8522 |
| ROC AUC Score | 0.9171 |
| Hamming Loss  | 0.0219 |
#### Per-Class Performance

| Class                  | Precision | Recall | F1-Score | Support |
|:-----------------------|:----------|:-------|:---------|:--------|
| AAID                   | 0.50      | 1.00   | 0.67     | 2       |
| SSID                   | 1.00      | 1.00   | 1.00     | 4       |
| BSSID                  | 1.00      | 1.00   | 1.00     | 4       |
| Bluetooth MAC          | 1.00      | 1.00   | 1.00     | 2       |
| IMEI                   | 1.00      | 0.67   | 0.80     | 6       |
| Email                  | 0.83      | 1.00   | 0.91     | 5       |
| IMSI                   | 1.00      | 0.50   | 0.67     | 6       |
| Phone Number           | 1.00      | 0.82   | 0.90     | 11      |
| (Device) Serial Number | 1.00      | 0.57   | 0.73     | 7       |
#### Averaged Performance

| Avg Type     | Precision | Recall | F1-Score | Support |
|:-------------|:----------|:-------|:---------|:--------|
| Micro Avg    | 0.93      | 0.79   | 0.85     | 47      |
| Macro Avg    | 0.93      | 0.84   | 0.85     | 47      |
| Weighted Avg | 0.96      | 0.79   | 0.84     | 47      |
| Samples Avg  | 0.50      | 0.45   | 0.47     | 47      |
## Framework Versions
- Python: 3.11.11
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```