
SetFit with BAAI/bge-small-en-v1.5

This is a SetFit model that can be used for Text Classification. It uses BAAI/bge-small-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves the following two phases, sketched in code below:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
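
A minimal sketch of these two phases with the SetFit Trainer API (the tiny inline dataset and its column names are illustrative assumptions, not this card's actual training data):

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot data: "text" holds OCR'd document text, "label" is 0 or 1.
train_dataset = Dataset.from_dict({
    "text": ["Tax Invoice ...", "Fw: Pending Bills ..."],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")
args = TrainingArguments(batch_size=32, num_epochs=2)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# Phase 1: contrastive fine-tuning of the embedding body.
# Phase 2: fitting the LogisticRegression head on the tuned embeddings.
trainer.train()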

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-small-en-v1.5
  • Classification head: a LogisticRegression instance
  • Maximum Sequence Length: 512 tokens
  • Number of Classes: 2

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label  Examples
0      '. : Pt oh mM\nBaw iS\n\nWw tere\nPr pe 0 ok ji the\nFw: Pending Bills jer\n\n, Ronit Sarangi to: Vinit K Sinha i 22-01-2020 11:37\n
1      'Tax Invoice\nOriginal for Buyer/ Duplicate for Transporter/ Triplicate for Assessee\n\nSupplier Legal Name; Mahanadi Coalfields Area Code :MO01\n7 Limited Area Description :Jagannath\nSupplier Address , Jagriti Vihar, Bur.a Invaice Number 19100066504\nSambalpur 768020 Involee Date :Dee 15, 2022\nSupplier City : Sambalpur Contract Reference: 3030007756\nSupplier State Odisha Contract type :Spot Auction\nSupplier Pincode : 768020 Salas Order 1240002677\n.P.O. - dJagriti vihar, Burla

Evaluation

Metrics

Label  Accuracy
all    1.0
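
The reported accuracy can be checked against any held-out split; a minimal sketch, where test_texts and test_labels are hypothetical placeholders for the evaluation texts and gold labels:

from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("Gopal2002/COAL_INVOICE_ZEON")

# Hypothetical held-out split; the card's actual evaluation data is not published.
test_texts = ["Tax Invoice ...", "Fw: Pending Bills ..."]
test_labels = [1, 0]

preds = model.predict(test_texts)          # predicted labels, e.g. [1, 0]
print(accuracy_score(test_labels, preds))  # the card reports 1.0 on its split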

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Gopal2002/COAL_INVOICE_ZEON")
# Run inference (triple-quoted so the multi-line OCR text is a valid Python string)
preds = model("""UNITED MEDICAL STORE Patient Name: KASTURI uENA
‘EW MARKET, C/O PRAFULLA KUMAR JENA
HIRAKUD. SAMBALPUR. Dr. Name :

Medicine Advice Slip: MA/2223/0668 “
Phone :0663-2431670 Prescription Indent:M/2223/06299

DL No. :SAWZ 486 R/487 RC Invoice No. ; 0002785 Date : 21/11/2022

Se|__Qiy. [Pack [Product “Batch [Exp] HSN [ MRP | Table | Dis [5051] CO3i] Amount |

1. 30 TAB] 30'S TELMA H TAB 11/24 | 30049099; 484.00! 432.14 0.001 6.00
NEOPRIDE TOTAL CAP 7/24 30049099) 445.00) 0,00; 6.00

 

 

 

SUB TOTAL :

SGST
er rH 2 ROFF :
— ha GRAND TOTAL

Te & Con itions For UNITED MEDICAL STORE R a ah
BILL GRAND TOTAL IS CALCULATED ACCORDING TO 1D- 3306 Im- 1220
MRP PRICE ( INCLUDING ALL GST TAXES ) Q _ 06 (ped)

 
")

Training Details

Training Set Metrics

Training set  Min  Median    Max
Word count    1    270.5442  4241

Label  Training Sample Count
0      130
1      85

Training Hyperparameters

  • batch_size: (32, 32)
  • num_epochs: (2, 2)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
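
These settings map directly onto setfit's TrainingArguments; a sketch reconstructing the configuration above (distance_metric and margin are left as comments because they match the defaults and only apply to triplet-style losses):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(32, 32),                # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    max_steps=-1,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),  # (body in phase 1, body in phase 2)
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    # distance_metric=cosine_distance, margin=0.25 (defaults, triplet losses only)
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)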

Training Results

Epoch Step Training Loss Validation Loss
0.0013 1 0.2394 -
0.0657 50 0.1203 -
0.1314 100 0.0095 -
0.1971 150 0.0029 -
0.2628 200 0.0014 -
0.3285 250 0.0014 -
0.3942 300 0.0011 -
0.4599 350 0.0009 -
0.5256 400 0.0008 -
0.5913 450 0.0007 -
0.6570 500 0.0008 -
0.7227 550 0.0008 -
0.7884 600 0.0006 -
0.8541 650 0.0005 -
0.9198 700 0.0004 -
0.9855 750 0.0005 -
1.0512 800 0.0004 -
1.1170 850 0.0005 -
1.1827 900 0.0004 -
1.2484 950 0.0004 -
1.3141 1000 0.0003 -
1.3798 1050 0.0004 -
1.4455 1100 0.0004 -
1.5112 1150 0.0004 -
1.5769 1200 0.0005 -
1.6426 1250 0.0004 -
1.7083 1300 0.0003 -
1.7740 1350 0.0004 -
1.8397 1400 0.0005 -
1.9054 1450 0.0004 -
1.9711 1500 0.0003 -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 2.2.2
  • Transformers: 4.35.2
  • PyTorch: 2.1.0+cu121
  • Datasets: 2.16.1
  • Tokenizers: 0.15.0
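
A matching environment can be pinned at install time; a sketch (the exact PyTorch CUDA build, 2.1.0+cu121 above, is installed from the PyTorch wheel index rather than plain PyPI):

pip install setfit==1.0.3 sentence-transformers==2.2.2 transformers==4.35.2 datasets==2.16.1 tokenizers==0.15.0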

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
