Model Checkpoints and Logs

Name        Few-Shot    Base-to-Novel
BiomedCoOp  link        link

Reproducing Results

Run the following scripts to evaluate the released checkpoints and reproduce the reported test results. Note that each script automatically downloads the required model weights:

(1) Few-shot Evaluation
CUDA_VISIBLE_DEVICES=<GPU number> bash scripts/biomedcoop/eval_fewshot.sh <data directory> <dataset> <nb of shots>
# Example on BTMRI using 16 shots and the BiomedCLIP model on GPU 0
CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_fewshot.sh data btmri 16
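If you want to evaluate across several shot settings, the command above can be wrapped in a simple loop. The sketch below only prints the commands it would run (a dry run); the GPU index, data directory, and dataset follow the example above, and the shot counts are an assumption about which settings the checkpoints cover.

```shell
# Hypothetical dry-run sweep over shot counts for few-shot evaluation on BTMRI.
# Replace `echo` with direct execution once paths and checkpoints are verified.
for shots in 1 2 4 8 16; do
  echo "CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_fewshot.sh data btmri $shots"
done
```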
(2) Base-to-Novel Generalization
CUDA_VISIBLE_DEVICES=<GPU number> bash scripts/biomedcoop/eval_base2new.sh <data directory> <dataset> <nb of shots>
# Example on BTMRI using 16 shots and the BiomedCLIP model on GPU 0
CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_base2new.sh data btmri 16
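Both evaluation modes share the same argument layout, so they can be driven from one loop. The sketch below is a dry run that prints each command rather than executing it; the dataset and shot count follow the examples above.

```shell
# Hypothetical dry run covering both evaluation scripts with identical arguments.
# Drop the `echo` to actually launch the evaluations.
for task in eval_fewshot eval_base2new; do
  echo "CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/$task.sh data btmri 16"
done
```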

Citation

If you use our work, please consider citing:

@article{koleilat2024biomedcoop,
    title={BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models},
    author={Koleilat, Taha and Asgariandehkordi, Hojat and Rivaz, Hassan and Xiao, Yiming},
    journal={arXiv preprint arXiv:2411.15232},
    year={2024}
}