---
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---
## Authors

Jun Ma* 1,2, Zongxin Yang* 3, Sumin Kim2,4,5, Bihui Chen2,4,5, Mohammed Baharoon2,3,5,
Adibvafa Fallahpour2,4,5, Reza Asakereh4,7, Hongwei Lyu4, Bo Wang† 1,2,4,5,6

\* Equal contribution     † Corresponding author

1AI Collaborative Centre, University Health Network, Toronto, Canada
2Vector Institute for Artificial Intelligence, Toronto, Canada
3Department of Biomedical Informatics, Harvard Medical School, Harvard University, Boston, USA
4Peter Munk Cardiac Centre, University Health Network, Toronto, Canada
5Department of Computer Science, University of Toronto, Toronto, Canada
6Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada
7Roche Canada and Genentech

## About

The [DeepLesion](https://nihcc.app.box.com/v/DeepLesion) dataset contains 32,735 diverse lesions in 32,120 CT slices from 10,594 studies of 4,427 unique patients. Each lesion has a bounding box annotation on the key slice, derived from the longest diameter and the longest perpendicular diameter. We annotated 5,000 of these lesions with [MedSAM2](https://github.com/bowang-lab/MedSAM2) in a human-in-the-loop pipeline.

```py
# Install the required package first: pip install datasets
from datasets import load_dataset

# Download and load the dataset
dataset = load_dataset("wanglab/CT_DeepLesion-MedSAM2")

# Access the train split
train_dataset = dataset["train"]

# Display the first example
print(train_dataset[0])
```

Please cite both DeepLesion and MedSAM2 when using this dataset.

```bibtex
@article{DeepLesion,
  title={DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning},
  author={Yan, Ke and Wang, Xiaosong and Lu, Le and Summers, Ronald M},
  journal={Journal of Medical Imaging},
  volume={5},
  number={3},
  pages={036501--036501},
  year={2018}
}

@article{MedSAM2,
  title={MedSAM2: Segment Anything in 3D Medical Images and Videos},
  author={Ma, Jun and Yang, Zongxin and Kim, Sumin and Chen, Bihui and Baharoon, Mohammed and Fallahpour, Adibvafa and Asakereh, Reza and Lyu, Hongwei and Wang, Bo},
  journal={arXiv preprint arXiv:2504.63609},
  year={2025}
}
```
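If you want to sanity-check what each record contains before building a training pipeline, the snippet below prints the column names of the first example. This is a minimal sketch: it assumes only the dataset ID used above, and the printed field names are simply whatever the `datasets` library reports for this repository.

```py
from datasets import load_dataset

# Load only the train split; streaming avoids downloading the full dataset
# just to inspect its structure.
ds = load_dataset("wanglab/CT_DeepLesion-MedSAM2", split="train", streaming=True)

# Pull the first record and list its fields (the exact columns depend on
# how the dataset is structured, so we discover them rather than assume them).
first = next(iter(ds))
print(sorted(first.keys()))
```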