Model Card for SBERT IBD

The model classifies clinical documents as either IBD or Not IBD.

Model Details

Model Description

This model was trained to detect inflammatory bowel disease (IBD) patients from clinical text.

  • Developed by: Matt Stammers
  • Funded by: University Hospital Southampton NHS Foundation Trust (UHSFT)
  • Shared by: Matt Stammers - SETT Data and AI Clinical Lead
  • Model type: BERT Transformer
  • Language(s) (NLP): English
  • License: cc-by-nc-4.0
  • Finetuned from model: sentence-transformers/all-mpnet-base-v2

Model Sources

Uses

For document classification tasks that differentiate between documents likely to be diagnostic of IBD and those unlikely to be.

Direct Use

This model can be used directly via the Cohort Identification Demo.

Downstream Use

Others can build on this model and improve it, but only for non-commercial purposes.

Out-of-Scope Use

This model is 1–2% less powerful (in terms of F1 score) when making predictions at the patient level rather than the document level. It can still be used for that purpose, but with care; one possible aggregation approach is sketched below.
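A minimal sketch of rolling document-level predictions up to the patient level. The mean-score rule and the 0.5 threshold are illustrative assumptions, not the method evaluated in the paper:

```python
from collections import defaultdict

def patient_level_labels(doc_predictions, threshold=0.5):
    """Aggregate document-level IBD probabilities into one label per patient.

    doc_predictions: iterable of (patient_id, ibd_probability) pairs.
    A patient is flagged IBD if their mean document score exceeds the
    threshold; both the rule and the 0.5 default are assumptions.
    """
    scores = defaultdict(list)
    for patient_id, prob in doc_predictions:
        scores[patient_id].append(prob)
    return {pid: ("IBD" if sum(p) / len(p) > threshold else "Not IBD")
            for pid, p in scores.items()}

# Example: two documents for patient "A", one for patient "B".
print(patient_level_labels([("A", 0.91), ("A", 0.40), ("B", 0.12)]))
# {'A': 'IBD', 'B': 'Not IBD'}
```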

Bias, Risks, and Limitations

This model has no major known biases, only some slight bias tendencies.

Recommendations

The model may generalise to other populations, but it has not yet been tested in this regard, so performance outside the training site cannot be assumed.

How to Get Started with the Model

Use the code below to get started with the model. It is best used with the transformers library.
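A minimal sketch, assuming the checkpoint is loaded by its Hub id (MattStammers/SBERT_IBD) through the standard text-classification pipeline; the exact label strings returned depend on the model config and are assumptions here:

```python
# Minimal usage sketch with the transformers pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="MattStammers/SBERT_IBD")

document = (
    "Colonoscopy demonstrates patchy inflammation and granulomas "
    "in the terminal ileum."
)
print(classifier(document, truncation=True))
# e.g. [{'label': 'IBD', 'score': 0.98}] -- labels/scores shown are illustrative
```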

Training Details

Training Data

The model was trained on fully pseudonymised clinical information at UHSFT, carefully labelled by a consultant (attending) physician, and evaluated against a randomly selected internal holdout set.

Training Procedure

See the paper for full details of the training procedure.

Training Hyperparameters

  • Training regime: fp32
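For orientation only, an fp32 fine-tuning run with the Hugging Face Trainer might look like the sketch below; the epoch count and batch size are placeholder assumptions, not the published settings (see the paper):

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Base checkpoint per this card; binary IBD / Not-IBD classification head.
base = "sentence-transformers/all-mpnet-base-v2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

args = TrainingArguments(
    output_dir="sbert-ibd",
    fp16=False,                      # fp32 training regime, as stated above
    num_train_epochs=3,              # placeholder assumption
    per_device_train_batch_size=16,  # placeholder assumption
)

# train_ds / eval_ds would be tokenised clinical documents with 0/1 labels;
# the actual data cannot be shared (see Training Data above).
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```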

Speeds, Sizes, Times

Training the full set of models, of which this is one, took 213.55 minutes in total. The final checkpoint for this model has roughly 109M parameters, stored as fp32 safetensors.

Evaluation

The model was internally validated against a holdout set.

Testing Data, Factors & Metrics

Testing Data

The testing data cannot be released due to information governance (IG) regulations and to remain compliant with GDPR; only the resulting model can be shared.

Factors

IBD vs Not-IBD

Metrics

Full evaluation metrics are available in the paper; a summary appears below.

Results

| Model | Doc Coverage | Accuracy | Precision | Recall | Specificity | NPV | F1 Score | MCC |
|---|---|---|---|---|---|---|---|---|
| SBERT | 768 (100.00%) | 89.67% (CI: 86.64% - 92.08%) | 88.81% (CI: 85.44% - 91.48%) | 99.20% (CI: 97.68% - 99.73%) | 56.48% (CI: 47.07% - 65.45%) | 95.31% (CI: 87.10% - 98.39%) | 93.72% (CI: 92.50% - 94.90%) | 0.6844 (CI: 0.6120 - 0.7545) |

Summary

Overall performance of the model is high, with an F1 score of 93.72% on our internal holdout set.

Environmental Impact

Training the full set of models used 2.01 kWh of energy, emitting 416.73 grams of CO₂; the figures for this model alone are below.

  • Hardware Type: L40S
  • Hours used: 0.1
  • Carbon Emitted: 0.009 kg CO₂
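As a cross-check, the energy-to-emissions conversion uses the 0.20705 kg CO₂e/kWh factor listed in the glossary:

```python
# Energy-to-CO2 conversion using the glossary's 0.20705 kg CO2e/kWh factor.
KG_CO2E_PER_KWH = 0.20705

print(2.01 * KG_CO2E_PER_KWH)    # ~0.416 kg CO2e for the full model set
print(0.009 / KG_CO2E_PER_KWH)   # ~0.043 kWh implied for this model alone
```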

Citation

arXiv (pending)

Glossary

| Term | Description |
|---|---|
| Accuracy | The percentage of results that were correct among all results from the system. Calc: (TP + TN) / (TP + FP + TN + FN). |
| Precision (PPV) | Also called positive predictive value (PPV): the percentage of true positive results among all results the system flagged as positive. Calc: TP / (TP + FP). |
| Negative Predictive Value (NPV) | The percentage of true negative (TN) results among all results the system flagged as negative. Calc: TN / (TN + FN). |
| Recall | Also called sensitivity. The percentage of results flagged positive among all results that should have been flagged. Calc: TP / (TP + FN). |
| Specificity | The percentage of results flagged negative among all negative results. Calc: TN / (TN + FP). |
| F1-Score | The harmonic mean of precision (PPV) and recall (sensitivity). Calc: 2 × (Precision × Recall) / (Precision + Recall). Moderately useful in the context of class imbalance. |
| Matthews’ Correlation Coefficient (MCC) | A statistical measure used to evaluate the quality of binary classifications. Unlike other metrics, MCC considers all four categories of a confusion matrix. Calc: (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)). |
| Precision / Recall AUC | The area under the Precision-Recall curve, which plots precision against recall at various threshold settings. More resistant to class imbalance than alternatives such as AUROC. |
| Demographic Parity (DP) | Also known as Statistical Parity: requires that the probability of a positive prediction is the same across demographic groups. Calc: DP = P(Ŷ=1∣A=a) = P(Ŷ=1∣A=b). Reported as an absolute difference, where positive values suggest the more privileged group gains and negative values the reverse. |
| Equal Opportunity (EO) | Focuses on equalising true positive rates across groups: among those who truly belong to the positive class, the model should predict positive outcomes at equal rates. Calc: EO = P(Ŷ=1∣Y=1, A=a) = P(Ŷ=1∣Y=1, A=b). A higher value indicates bias against the more vulnerable group. |
| Disparate Impact (DI) | Divides the protected group’s positive prediction rate by that of the most-favoured group. Calc: DI = P(Ŷ=1∣A=unfavoured) / P(Ŷ=1∣A=favoured). Values outside the 0.8–1.25 range suggest disparate impact is present. |
| Execution Time / Energy / CO₂ Emissions | Execution time is measured in minutes and total energy consumption in kilowatt-hours (kWh), which is then converted to CO₂ emissions using a factor of 0.20705 kg CO₂e per kWh. |
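For reference, the point metrics above can all be reproduced from a single confusion matrix; a minimal sketch with illustrative counts only:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Compute the glossary's point metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
        "f1": 2 * precision * recall / (precision + recall),
        "mcc": (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

print(binary_metrics(tp=90, fp=10, tn=85, fn=15))  # illustrative counts
```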

Model Card Authors

Matt Stammers - Computational Gastroenterologist

Model Card Contact

[email protected]

Base Training Data for the Model

Training data

We used a concatenation of multiple datasets to fine-tune the base model; the total number of sentence pairs exceeds 1 billion. Each dataset was sampled with a weighted probability, with the configuration detailed in the data_config.json file.

| Dataset | Paper | Number of training tuples |
|---|---|---|
| Reddit comments (2015-2018) | paper | 726,484,430 |
| S2ORC Citation pairs (Abstracts) | paper | 116,288,806 |
| WikiAnswers Duplicate question pairs | paper | 77,427,422 |
| PAQ (Question, Answer) pairs | paper | 64,371,441 |
| S2ORC Citation pairs (Titles) | paper | 52,603,982 |
| S2ORC (Title, Abstract) | paper | 41,769,185 |
| Stack Exchange (Title, Body) pairs | - | 25,316,456 |
| Stack Exchange (Title+Body, Answer) pairs | - | 21,396,559 |
| Stack Exchange (Title, Answer) pairs | - | 21,396,559 |
| MS MARCO triplets | paper | 9,144,553 |
| GOOAQ: Open Question Answering with Diverse Answer Types | paper | 3,012,496 |
| Yahoo Answers (Title, Answer) | paper | 1,198,260 |
| Code Search | - | 1,151,414 |
| COCO Image captions | paper | 828,395 |
| SPECTER citation triplets | paper | 684,100 |
| Yahoo Answers (Question, Answer) | paper | 681,164 |
| Yahoo Answers (Title, Question) | paper | 659,896 |
| SearchQA | paper | 582,261 |
| Eli5 | paper | 325,475 |
| Flickr 30k | paper | 317,695 |
| Stack Exchange Duplicate questions (titles) | - | 304,525 |
| AllNLI (SNLI and MultiNLI) | paper (SNLI), paper (MultiNLI) | 277,230 |
| Stack Exchange Duplicate questions (bodies) | - | 250,519 |
| Stack Exchange Duplicate questions (titles+bodies) | - | 250,460 |
| Sentence Compression | paper | 180,000 |
| Wikihow | paper | 128,542 |
| Altlex | paper | 112,696 |
| Quora Question Triplets | - | 103,663 |
| Simple Wikipedia | paper | 102,225 |
| Natural Questions (NQ) | paper | 100,231 |
| SQuAD2.0 | paper | 87,599 |
| TriviaQA | - | 73,346 |
| **Total** | | 1,170,060,424 |