Columns: name (string), repo (dict), requires (string), papers (sequence), category (string), description (string), arguments (list), returns (list), example (dict), test_invocations (list), note (string), python_signature (string), xml_summary (string), papers_info (list)
abrsp_vote_predictor
{ "branch": null, "commit": "8ba3a7c", "env": [], "info": "ABRS-P from https://github.com/qinghezeng/ABRS-P (at commit: 8ba3a7c)", "name": "ABRS-P", "url": "https://github.com/qinghezeng/ABRS-P" }
cuda
[ "zeng2023abrsp" ]
pathology
Predict binary sensitivity (High vs Low) to atezolizumab-bevacizumab with the 10-fold ABRS-P model: for each sample keep the ordered list of ten raw scores, apply the matching thresholds, count “High” votes, and assign High if ≥ 5, else Low (ties impossible).
[ { "description": "CSV that was used to train the checkpoints, listing every slide/patient to evaluate; must include `slide_id` and the column named by `label_col` (values are loaded but not used for prediction).", "name": "dataset_csv_file", "type": "str" }, { "description": "Name of the expression column in `dataset_csv_file`, required only to satisfy the dataset API.", "name": "label_col", "type": "str" }, { "description": "Directory with `splits_0.csv` … `splits_9.csv`; rows marked as `test` define which samples are processed for each fold.", "name": "dataset_splits_dir", "type": "str" }, { "description": "Directory containing the ten fold checkpoints `s_0_checkpoint.pt` … `s_9_checkpoint.pt`.", "name": "checkpoints_dir", "type": "str" }, { "description": "Folder holding one CTransPath feature tensor (`.pt`) per `slide_id`.", "name": "features_dir", "type": "str" }, { "description": "Ordered list of exactly ten floats used as fold-specific cut-offs to convert continuous scores to High/Low.", "name": "fixed_thresholds", "type": "list" }, { "description": "Destination CSV with columns `sample_id`, `fold_0` … `fold_9`, and the majority-voted `final`.", "name": "output_predictions_file", "type": "str" } ]
[ { "description": "List of dictionaries mirroring the rows of the output CSV (`sample_id`, `fold_0` … `fold_9`, `final`).", "name": "predictions", "type": "list" } ]
{ "arguments": [ { "name": "dataset_csv_file", "value": "\"/mount/input/dataset_csv/myscore_s1.csv\"" }, { "name": "label_col", "value": "\"my_score\"" }, { "name": "dataset_splits_dir", "value": "\"/mount/input/dataset_splits/s1_100\"" }, { "name": "checkpoints_dir", "value": "\"/mount/input/10_fold_checkpoints\"" }, { "name": "features_dir", "value": "\"/mount/input/TCGA-LIHC-cTransPath-features-20x\"" }, { "name": "fixed_thresholds", "value": "[6.32, 5.14, 6.79, 5.88, 6.01, 5.49, 6.27, 5.62, 6.66, 5.36]" }, { "name": "output_predictions_file", "value": "\"/mount/output/predictions.csv\"" } ], "mount": [ { "source": "dataset_csv", "target": "dataset_csv" }, { "source": "dataset_splits", "target": "dataset_splits" }, { "source": "10_fold_checkpoints", "target": "10_fold_checkpoints" }, { "source": "TCGA-LIHC-cTransPath-features-20x", "target": "TCGA-LIHC-cTransPath-features-20x" } ], "name": "example" }
[ { "arguments": [ { "name": "dataset_csv_file", "value": "\"/mount/input/dataset_csv/myscore_s2.csv\"" }, { "name": "label_col", "value": "\"my_score\"" }, { "name": "dataset_splits_dir", "value": "\"/mount/input/dataset_splits/s2_100\"" }, { "name": "checkpoints_dir", "value": "\"/mount/input/10_fold_checkpoints\"" }, { "name": "features_dir", "value": "\"/mount/input/TCGA-LIHC-cTransPath-features-20x\"" }, { "name": "fixed_thresholds", "value": "[6.32, 5.14, 6.79, 5.88, 6.01, 5.49, 6.27, 5.62, 6.66, 5.36]" }, { "name": "output_predictions_file", "value": "\"/mount/output/predictions.csv\"" } ], "mount": [ { "source": "dataset_csv", "target": "dataset_csv" }, { "source": "dataset_splits", "target": "dataset_splits" }, { "source": "10_fold_checkpoints", "target": "10_fold_checkpoints" }, { "source": "TCGA-LIHC-cTransPath-features-20x", "target": "TCGA-LIHC-cTransPath-features-20x" } ], "name": "s2" }, { "arguments": [ { "name": "dataset_csv_file", "value": "\"/mount/input/dataset_csv/myscore_s3.csv\"" }, { "name": "label_col", "value": "\"my_score\"" }, { "name": "dataset_splits_dir", "value": "\"/mount/input/dataset_splits/s3_100\"" }, { "name": "checkpoints_dir", "value": "\"/mount/input/10_fold_checkpoints\"" }, { "name": "features_dir", "value": "\"/mount/input/TCGA-LIHC-cTransPath-features-20x\"" }, { "name": "fixed_thresholds", "value": "[6.32, 5.14, 6.79, 5.88, 6.01, 5.49, 6.27, 5.62, 6.66, 5.36]" }, { "name": "output_predictions_file", "value": "\"/mount/output/predictions.csv\"" } ], "mount": [ { "source": "dataset_csv", "target": "dataset_csv" }, { "source": "dataset_splits", "target": "dataset_splits" }, { "source": "10_fold_checkpoints", "target": "10_fold_checkpoints" }, { "source": "TCGA-LIHC-cTransPath-features-20x", "target": "TCGA-LIHC-cTransPath-features-20x" } ], "name": "s3" }, { "arguments": [ { "name": "dataset_csv_file", "value": "\"/mount/input/dataset_csv/myscore_s4.csv\"" }, { "name": "label_col", "value": "\"my_score\"" }, { "name": "dataset_splits_dir", "value": "\"/mount/input/dataset_splits/s4_100\"" }, { "name": "checkpoints_dir", "value": "\"/mount/input/10_fold_checkpoints\"" }, { "name": "features_dir", "value": "\"/mount/input/TCGA-LIHC-cTransPath-features-20x\"" }, { "name": "fixed_thresholds", "value": "[6.32, 5.14, 6.79, 5.88, 6.01, 5.49, 6.27, 5.62, 6.66, 5.36]" }, { "name": "output_predictions_file", "value": "\"/mount/output/predictions.csv\"" } ], "mount": [ { "source": "dataset_csv", "target": "dataset_csv" }, { "source": "dataset_splits", "target": "dataset_splits" }, { "source": "10_fold_checkpoints", "target": "10_fold_checkpoints" }, { "source": "TCGA-LIHC-cTransPath-features-20x", "target": "TCGA-LIHC-cTransPath-features-20x" } ], "name": "s4" } ]
null
def abrsp_vote_predictor(dataset_csv_file: str = '/mount/input/dataset_csv/myscore_s1.csv', label_col: str = 'my_score', dataset_splits_dir: str = '/mount/input/dataset_splits/s1_100', checkpoints_dir: str = '/mount/input/10_fold_checkpoints', features_dir: str = '/mount/input/TCGA-LIHC-cTransPath-features-20x', fixed_thresholds: list = [6.32, 5.14, 6.79, 5.88, 6.01, 5.49, 6.27, 5.62, 6.66, 5.36], output_predictions_file: str = '/mount/output/predictions.csv') -> dict: """ Predict binary sensitivity (High vs Low) to atezolizumab-bevacizumab with the 10-fold ABRS-P model: for each sample keep the ordered list of ten raw scores, apply the matching thresholds, count “High” votes, and assign High if ≥ 5, else Low (ties impossible). Args: dataset_csv_file: CSV that was used to train the checkpoints, listing every slide/patient to evaluate; must include `slide_id` and the column named by `label_col` (values are loaded but not used for prediction). label_col: Name of the expression column in `dataset_csv_file`, required only to satisfy the dataset API. dataset_splits_dir: Directory with `splits_0.csv` … `splits_9.csv`; rows marked as `test` define which samples are processed for each fold. checkpoints_dir: Directory containing the ten fold checkpoints `s_0_checkpoint.pt` … `s_9_checkpoint.pt`. features_dir: Folder holding one CTransPath feature tensor (`.pt`) per `slide_id`. fixed_thresholds: Ordered list of exactly ten floats used as fold-specific cut-offs to convert continuous scores to High/Low. output_predictions_file: Destination CSV with columns `sample_id`, `fold_0` … `fold_9`, and the majority-voted `final`. Returns: dict with the following structure: { 'predictions': list # List of dictionaries mirroring the rows of the output CSV (`sample_id`, `fold_0` … `fold_9`, `final`). } """
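For reference, a minimal sketch of the voting step described above (pure Python; the per-fold model inference that produces the raw scores is out of scope). Whether a score exactly equal to its threshold counts as High is an assumption here; strictly-greater is used.

def majority_vote(fold_scores, fixed_thresholds):
    # One vote per fold: compare the fold's raw score to its matching threshold.
    votes = ["High" if score > threshold else "Low"
             for score, threshold in zip(fold_scores, fixed_thresholds)]
    # Final call: High wins with at least 5 of the 10 votes.
    final = "High" if votes.count("High") >= 5 else "Low"
    return votes, final

votes, final = majority_vote(
    [6.50, 5.00, 7.10, 5.90, 6.20, 5.30, 6.40, 5.70, 6.90, 5.20],
    [6.32, 5.14, 6.79, 5.88, 6.01, 5.49, 6.27, 5.62, 6.66, 5.36],
)
# votes -> 7x "High", 3x "Low"; final -> "High"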
<description> Predict binary sensitivity (High vs Low) to atezolizumab-bevacizumab with the 10-fold ABRS-P model: for each sample keep the ordered list of ten raw scores, apply the matching thresholds, count “High” votes, and assign High if ≥ 5, else Low (ties impossible). </description> <arguments> dataset_csv_file (str): CSV that was used to train the checkpoints, listing every slide/patient to evaluate; must include `slide_id` and the column named by `label_col` (values are loaded but not used for prediction). (example: '/mount/input/dataset_csv/myscore_s1.csv') label_col (str): Name of the expression column in `dataset_csv_file`, required only to satisfy the dataset API. (example: 'my_score') dataset_splits_dir (str): Directory with `splits_0.csv` … `splits_9.csv`; rows marked as `test` define which samples are processed for each fold. (example: '/mount/input/dataset_splits/s1_100') checkpoints_dir (str): Directory containing the ten fold checkpoints `s_0_checkpoint.pt` … `s_9_checkpoint.pt`. (example: '/mount/input/10_fold_checkpoints') features_dir (str): Folder holding one CTransPath feature tensor (`.pt`) per `slide_id`. (example: '/mount/input/TCGA-LIHC-cTransPath-features-20x') fixed_thresholds (list): Ordered list of exactly ten floats used as fold-specific cut-offs to convert continuous scores to High/Low. (example: [6.32, 5.14, 6.79, 5.88, 6.01, 5.49, 6.27, 5.62, 6.66, 5.36]) output_predictions_file (str): Destination CSV with columns `sample_id`, `fold_0` … `fold_9`, and the majority-voted `final`. (example: '/mount/output/predictions.csv') </arguments> <returns> dict with the following structure: { 'predictions': list # List of dictionaries mirroring the rows of the output CSV (`sample_id`, `fold_0` … `fold_9`, `final`). } </returns>
[ { "bibtex": "@article{zeng2023abrsp,\n author = {Zeng, Qinghe\n and Klein, Christophe\n and Caruso, Stefano\n and Maille, Pascale\n and Allende, Daniela S.\n and M{\\'i}nguez, Beatriz\n and Iavarone, Massimo\n and Ningarhari, Massih\n and Casadei-Gardini, Andrea\n and Pedica, Federica\n and Rimini, Margherita\n and Perbellini, Riccardo\n and Boulagnon-Rombi, Camille\n and Heurgu{\\'e}, Alexandra\n and Maggioni, Marco\n and Rela, Mohamed\n and Vij, Mukul\n and Baulande, Sylvain\n and Legoix, Patricia\n and Lameiras, Sonia\n and Amaddeo, Giuliana\n and Argemi, Josepmaria\n and Beaufr{\\`e}re, Aur{\\'e}lie\n and Berm{\\'u}dez-Ramos, Mar{\\'i}a\n and Boursier, J{\\'e}r{\\^o}me\n and Bruges, L{\\'e}a\n and Calderaro, Julien\n and Campani, Claudia\n and Castano Garcia, Andres\n and Chan, Stephen Lam\n and D'Alessio, Antonio\n and Di Tommaso, Luca\n and Diaz, Alba\n and Digklia, Antonia\n and Dufour, Jean-Fran{\\c{c}}ois\n and Garcia-Porrero, Guillermo\n and Ghaffari Laleh, Narmin\n and Gnemmi, Viviane\n and Gopal, Purva\n and Graham, Rondell P.\n and I{\\~{n}}arrairaegui, Mercedes\n and Kather, Jakob Nikolas\n and Labgaa, Ismail\n and Lequoy, Marie\n and Leung, Howard Ho-Wai\n and Lom{\\'e}nie, Nicolas\n and Mar{\\'i}n-Zuluaga, Juan Ignacio\n and Mendoza-Pacas, Guillermo\n and Michalak, Sophie\n and El Nahhas, Omar S. M.\n and Nault, Jean-Charles\n and Navale, Pooja\n and Paradis, Val{\\'e}rie\n and Park, Young Nyun\n and Pawlotsky, Jean-Michel\n and Peter, Simon\n and Pinato, David James\n and Pinter, Matthias\n and Radu, Pompilia\n and Regnault, H{\\'e}l{\\`e}ne\n and Reig, Maria\n and Rhee, Hyungjin\n and Rimassa, Lorenza\n and Salcedo, Mar{\\'i}a Teresa\n and Sangro, Bruno\n and Scheiner, Bernhard\n and Sempoux, Christine\n and Su, Tung-Hung\n and Torres, Callie\n and Tran, Nguyen H.\n and Tr{\\'e}po, Eric\n and Varela, Maria\n and Verset, Gontran\n and Vogel, Arndt\n and Wendum, Dominique\n and Ziol, Marianne},\n title = {Artificial intelligence-based pathology as a biomarker of sensitivity to atezolizumab{\\&}{\\#}x2013;bevacizumab in patients with hepatocellular carcinoma: a multicentre retrospective study},\n year = {2023},\n month = {Dec},\n day = {01},\n journal = {The Lancet Oncology},\n volume = {24},\n number = {12},\n pages = {1411-1422},\n publisher = {Elsevier},\n issn = {1470-2045},\n}", "id": "zeng2023abrsp", "url": "https://www.thelancet.com/journals/lanonc/article/PIIS1470-2045(23)00468-0" } ]
cobra_extract_features
{ "branch": null, "commit": null, "env": [ { "name": "HF_TOKEN", "value": "${env:HF_TOKEN}" } ], "info": "COBRA from https://github.com/KatherLab/COBRA", "name": "COBRA", "url": "https://github.com/KatherLab/COBRA" }
cuda
[ "lenz2025cobra" ]
pathology
Perform slide-level feature extraction using COBRA, given tile-level features. The provided tile features have 1.0 mpp (224 microns / 224 px per patch).
[ { "description": "Path to the output folder where the features will be saved", "name": "output_dir", "type": "str" }, { "description": "Path to the input folder containing the tile features", "name": "input_dir", "type": "str" } ]
[ { "description": "The number of slides that were processed", "name": "num_processed_slides", "type": "int" } ]
{ "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_features-brca-single\"" }, { "name": "input_dir", "value": "\"/mount/input/TCGA-BRCA-Virchow2-features-10x/\"" } ], "mount": [ { "source": "TCGA-BRCA-Virchow2-features-10x/TCGA-BH-A0HQ-01Z-00-DX1.0921FCEF-20A2-4D4B-A198-91AF9F6C814C.h5", "target": "TCGA-BRCA-Virchow2-features-10x/TCGA-BH-A0HQ-01Z-00-DX1.0921FCEF-20A2-4D4B-A198-91AF9F6C814C.h5" } ], "name": "example" }
[ { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_features-crc\"" }, { "name": "input_dir", "value": "\"/mount/input/TCGA-CRC-Virchow2-features-10x\"" } ], "mount": [ { "source": "TCGA-CRC-Virchow2-features-10x", "target": "TCGA-CRC-Virchow2-features-10x" } ], "name": "crc" }, { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_features-brca\"" }, { "name": "input_dir", "value": "\"/mount/input/TCGA-BRCA-Virchow2-features-10x\"" } ], "mount": [ { "source": "TCGA-BRCA-Virchow2-features-10x", "target": "TCGA-BRCA-Virchow2-features-10x" } ], "name": "brca" }, { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_features-brca-single\"" }, { "name": "input_dir", "value": "\"/mount/input/TCGA-BRCA-Virchow2-features-10x/\"" } ], "mount": [ { "source": "TCGA-BRCA-Virchow2-features-10x/TCGA-BH-A0HQ-01Z-00-DX1.0921FCEF-20A2-4D4B-A198-91AF9F6C814C.h5", "target": "TCGA-BRCA-Virchow2-features-10x/TCGA-BH-A0HQ-01Z-00-DX1.0921FCEF-20A2-4D4B-A198-91AF9F6C814C.h5" } ], "name": "brca-single" } ]
null
def cobra_extract_features(output_dir: str = '/mount/output/cobra_features-brca-single', input_dir: str = '/mount/input/TCGA-BRCA-Virchow2-features-10x/') -> dict: """ Perform slide-level feature extraction using COBRA, given tile-level features. The provided tile features have 1.0 mpp (224 microns / 224 px per patch). Args: output_dir: Path to the output folder where the features will be saved input_dir: Path to the input folder containing the tile features Returns: dict with the following structure: { 'num_processed_slides': int # The number of slides that were processed } """
<description> Perform slide-level feature extraction using COBRA, given tile-level features. The provided tile features have 1.0 mpp (224 microns / 224 px per patch). </description> <arguments> output_dir (str): Path to the output folder where the features will be saved (example: '/mount/output/cobra_features-brca-single') input_dir (str): Path to the input folder containing the tile features (example: '/mount/input/TCGA-BRCA-Virchow2-features-10x/') </arguments> <returns> dict with the following structure: { 'num_processed_slides': int # The number of slides that were processed } </returns>
[ { "bibtex": "@inproceedings{lenz2025cobra,\n author = {T. Lenz* and Peter Neidlinger* and Marta Ligero and Georg Wölflein and Marko van Treeck and Jakob Nikolas Kather},\n title = {Unsupervised Foundation Model-Agnostic Slide-Level Representation Learning},\n year = {2025},\n booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n}", "id": "lenz2025cobra", "url": "https://arxiv.org/abs/2411.13623" } ]
cobra_heatmaps
{ "branch": null, "commit": null, "env": [ { "name": "HF_TOKEN", "value": "${env:HF_TOKEN}" } ], "info": "COBRA from https://github.com/KatherLab/COBRA", "name": "COBRA", "url": "https://github.com/KatherLab/COBRA" }
cuda
[ "lenz2025cobra" ]
pathology
Create unsupervised heatmaps using COBRA. The provided tile features have 1.0 mpp (224 microns / 224 px per patch).
[ { "description": "Path to the output folder where the features will be saved", "name": "output_dir", "type": "str" }, { "description": "Path to the input folder containing the WSIs", "name": "slide_dir", "type": "str" }, { "description": "Path to the input folder containing the tile features", "name": "tile_features_dir", "type": "str" } ]
[ { "description": "The number of heatmaps that were generated", "name": "num_heatmaps", "type": "int" }, { "description": "The number of total bytes of the generated heatmaps", "name": "byte_size_heatmaps", "type": "int" } ]
{ "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_heatmaps-crc\"" }, { "name": "tile_features_dir", "value": "\"/mount/input/TCGA-CRC-Virchow2-features-10x\"" }, { "name": "slide_dir", "value": "\"/mount/input/crc-wsi\"" } ], "mount": [ { "source": "TCGA-CRC-Virchow2-features-10x", "target": "TCGA-CRC-Virchow2-features-10x" }, { "source": "crc-wsi", "target": "crc-wsi" } ], "name": "example" }
[ { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_heatmaps-crc\"" }, { "name": "tile_features_dir", "value": "\"/mount/input/TCGA-CRC-Virchow2-features-10x\"" }, { "name": "slide_dir", "value": "\"/mount/input/crc-wsi\"" } ], "mount": [ { "source": "TCGA-CRC-Virchow2-features-10x", "target": "TCGA-CRC-Virchow2-features-10x" }, { "source": "crc-wsi", "target": "crc-wsi" } ], "name": "crc" }, { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_heatmaps-brca\"" }, { "name": "tile_features_dir", "value": "\"/mount/input/TCGA-BRCA-Virchow2-features-10x\"" }, { "name": "slide_dir", "value": "\"/mount/input/brca-wsi\"" } ], "mount": [ { "source": "TCGA-BRCA-Virchow2-features-10x", "target": "TCGA-BRCA-Virchow2-features-10x" }, { "source": "brca-wsi", "target": "brca-wsi" } ], "name": "brca" }, { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_heatmaps-brca-single\"" }, { "name": "tile_features_dir", "value": "\"/mount/input/TCGA-BRCA-Virchow2-features-10x\"" }, { "name": "slide_dir", "value": "\"/mount/input/brca-wsi\"" } ], "mount": [ { "source": "TCGA-BRCA-Virchow2-features-10x/TCGA-BH-A0HQ-01Z-00-DX1.0921FCEF-20A2-4D4B-A198-91AF9F6C814C.h5", "target": "TCGA-BRCA-Virchow2-features-10x/TCGA-BH-A0HQ-01Z-00-DX1.0921FCEF-20A2-4D4B-A198-91AF9F6C814C.h5" }, { "source": "brca-wsi/TCGA-BH-A0HQ-01Z-00-DX1.0921FCEF-20A2-4D4B-A198-91AF9F6C814C.svs", "target": "brca-wsi/TCGA-BH-A0HQ-01Z-00-DX1.0921FCEF-20A2-4D4B-A198-91AF9F6C814C.svs" } ], "name": "brca-single" } ]
null
def cobra_heatmaps(output_dir: str = '/mount/output/cobra_heatmaps-crc', slide_dir: str = '/mount/input/crc-wsi', tile_features_dir: str = '/mount/input/TCGA-CRC-Virchow2-features-10x') -> dict: """ Create unsupervised heatmaps using COBRA. The provided tile features have 1.0 mpp (224 microns / 224 px per patch). Args: output_dir: Path to the output folder where the features will be saved slide_dir: Path to the input folder containing the WSIs tile_features_dir: Path to the input folder containing the tile features Returns: dict with the following structure: { 'num_heatmaps': int # The number of heatmaps that were generated 'byte_size_heatmaps': int # The number of total bytes of the generated heatmaps } """
<description> Create unsupervised heatmaps using COBRA. The provided tile features have 1.0 mpp (224 microns / 224 px per patch). </description> <arguments> output_dir (str): Path to the output folder where the features will be saved (example: '/mount/output/cobra_heatmaps-crc') slide_dir (str): Path to the input folder containing the WSIs (example: '/mount/input/crc-wsi') tile_features_dir (str): Path to the input folder containing the tile features (example: '/mount/input/TCGA-CRC-Virchow2-features-10x') </arguments> <returns> dict with the following structure: { 'num_heatmaps': int # The number of heatmaps that were generated 'byte_size_heatmaps': int # The number of total bytes of the generated heatmaps } </returns>
[ { "bibtex": "@inproceedings{lenz2025cobra,\n author = {T. Lenz* and Peter Neidlinger* and Marta Ligero and Georg Wölflein and Marko van Treeck and Jakob Nikolas Kather},\n title = {Unsupervised Foundation Model-Agnostic Slide-Level Representation Learning},\n year = {2025},\n booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n}", "id": "lenz2025cobra", "url": "https://arxiv.org/abs/2411.13623" } ]
conch_extract_features
{ "branch": null, "commit": "171f2be", "env": [ { "name": "HF_TOKEN", "value": "${env:HF_TOKEN}" } ], "info": "CONCH from https://github.com/mahmoodlab/CONCH (at commit: 171f2be)", "name": "CONCH", "url": "https://github.com/mahmoodlab/CONCH" }
cuda
[ "lu2024conch" ]
pathology
Perform feature extraction on an input image using CONCH.
[ { "description": "Path to the input image", "name": "input_image", "type": "str" } ]
[ { "description": "The feature vector extracted from the input image, as a list of floats", "name": "features", "type": "list" } ]
{ "arguments": [ { "name": "input_image", "value": "\"/mount/input/TUM-TCGA-ACRLPPQE.tif\"" } ], "mount": [ { "source": "TUM-TCGA-ACRLPPQE.tif", "target": "TUM-TCGA-ACRLPPQE.tif" } ], "name": "example" }
[ { "arguments": [ { "name": "input_image", "value": "\"/mount/input/MUC/MUC-TCGA-ACCPKIPN.tif\"" } ], "mount": [ { "source": "MUC-TCGA-ACCPKIPN.tif", "target": "MUC/MUC-TCGA-ACCPKIPN.tif" } ], "name": "tif" }, { "arguments": [ { "name": "input_image", "value": "\"/mount/input/TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png", "target": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png" } ], "name": "png" }, { "arguments": [ { "name": "input_image", "value": "\"/mount/input/TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg", "target": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg" } ], "name": "jpg" } ]
null
def conch_extract_features(input_image: str = '/mount/input/TUM-TCGA-ACRLPPQE.tif') -> dict: """ Perform feature extraction on an input image using CONCH. Args: input_image: Path to the input image Returns: dict with the following structure: { 'features': list # The feature vector extracted from the input image, as a list of floats } """
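A hedged sketch along the lines of the CONCH README (checkpoint access requires an approved HF_TOKEN); whether this task expects the raw or the contrastively projected embedding is an assumption, so proj_contrast=False, normalize=False below is one plausible choice.

import torch
from PIL import Image
from conch.open_clip_custom import create_model_from_pretrained

# Load the pretrained CONCH vision-language model from the Hugging Face Hub.
model, preprocess = create_model_from_pretrained("conch_ViT-B-16", "hf_hub:MahmoodLab/conch")
image = preprocess(Image.open("/mount/input/TUM-TCGA-ACRLPPQE.tif")).unsqueeze(0)
with torch.inference_mode():
    # proj_contrast=False, normalize=False returns the raw image embedding.
    features = model.encode_image(image, proj_contrast=False, normalize=False)
result = {"features": features.squeeze(0).tolist()}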
<description> Perform feature extraction on an input image using CONCH. </description> <arguments> input_image (str): Path to the input image (example: '/mount/input/TUM-TCGA-ACRLPPQE.tif') </arguments> <returns> dict with the following structure: { 'features': list # The feature vector extracted from the input image, as a list of floats } </returns>
[ { "bibtex": "@article{lu2024conch,\n author = {Lu, Ming Y. and Chen, Bowen and Williamson, Drew F. K. and Chen, Richard J. and Liang, Ivy and Ding, Tong and Jaume, Guillaume and Odintsov, Igor and Le, Long Phi and Gerber, Georg and Parwani, Anil V. and Zhang, Andrew and Mahmood, Faisal},\n title = {A visual-language foundation model for computational pathology},\n year = {2024},\n journal = {Nature Medicine},\n volume = {30},\n number = {3},\n pages = {863--874},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "lu2024conch", "url": "https://www.nature.com/articles/s41591-024-02856-4" } ]
cytopus_db
{ "branch": null, "commit": "638dd91", "env": [], "info": "Cytopus from https://github.com/wallet-maker/cytopus (at commit: 638dd91)", "name": "Cytopus", "url": "https://github.com/wallet-maker/cytopus" }
cpu
[ "kunes2023cytopus" ]
genomics_proteomics
Initialize the Cytopus KnowledgeBase and generate a JSON file containing a nested dictionary with gene set annotations organized by cell type, suitable for input into the Spectra library.
[ { "description": "List of cell types for which to retrieve gene sets", "name": "celltype_of_interest", "type": "list" }, { "description": "List of global cell types to include in the JSON file.", "name": "global_celltypes", "type": "list" }, { "description": "Path to the file where the output JSON file should be stored.", "name": "output_file", "type": "str" } ]
[ { "description": "The list of keys in the produced JSON file.", "name": "keys", "type": "list" } ]
{ "arguments": [ { "name": "celltype_of_interest", "value": "[\"B_memory\", \"B_naive\", \"CD4_T\", \"CD8_T\", \"DC\", \"ILC3\", \"MDC\", \"NK\", \"Treg\", \"gdT\", \"mast\", \"pDC\", \"plasma\"]" }, { "name": "global_celltypes", "value": "[\"all-cells\", \"leukocyte\"]" }, { "name": "output_file", "value": "\"/mount/output/Spectra_dict.json\"" } ], "mount": [], "name": "example" }
[ { "arguments": [ { "name": "celltype_of_interest", "value": "[\"B\", \"CD4_T\"]" }, { "name": "global_celltypes", "value": "[\"all-cells\", \"leukocyte\"]" }, { "name": "output_file", "value": "\"/mount/output/Spectra_dict.json\"" } ], "mount": [], "name": "B_and_CD4_T" }, { "arguments": [ { "name": "celltype_of_interest", "value": "[\"B_memory\", \"B_naive\", \"CD4_T\", \"CD8_T\", \"DC\", \"ILC3\", \"MDC\", \"NK\", \"Treg\", \"gdT\", \"mast\", \"pDC\", \"plasma\"]" }, { "name": "global_celltypes", "value": "[\"leukocyte\"]" }, { "name": "output_file", "value": "\"/mount/output/Spectra_dict.json\"" } ], "mount": [], "name": "leukocytes" }, { "arguments": [ { "name": "celltype_of_interest", "value": "[\"Treg\", \"plasma\", \"B_naive\"]" }, { "name": "global_celltypes", "value": "[\"leukocyte\"]" }, { "name": "output_file", "value": "\"/mount/output/Spectra_dict.json\"" } ], "mount": [], "name": "Treg_and_plasma_and_B_naive" } ]
The information on how to do this is in: https://github.com/wallet-maker/cytopus/blob/main/notebooks/KnowledgeBase_queries_colaboratory.ipynb
def cytopus_db(celltype_of_interest: list = ['B_memory', 'B_naive', 'CD4_T', 'CD8_T', 'DC', 'ILC3', 'MDC', 'NK', 'Treg', 'gdT', 'mast', 'pDC', 'plasma'], global_celltypes: list = ['all-cells', 'leukocyte'], output_file: str = '/mount/output/Spectra_dict.json') -> dict: """ Initialize the Cytopus KnowledgeBase and generate a JSON file containing a nested dictionary with gene set annotations organized by cell type, suitable for input into the Spectra library. Args: celltype_of_interest: List of cell types for which to retrieve gene sets global_celltypes: List of global cell types to include in the JSON file. output_file: Path to the file where the output JSON file should be stored. Returns: dict with the following structure: { 'keys': list # The list of keys in the produced JSON file. } """
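A minimal sketch following the KnowledgeBase queries notebook referenced in the note above; the method and attribute names (get_celltype_processes, celltype_process_dict) are taken from that notebook and should be treated as assumptions if the upstream API has changed.

import json
import cytopus as cp

celltype_of_interest = ["B_memory", "B_naive", "CD4_T"]
global_celltypes = ["all-cells", "leukocyte"]
output_file = "/mount/output/Spectra_dict.json"

# Query the knowledge base and collect gene sets per cell type.
G = cp.KnowledgeBase()
G.get_celltype_processes(celltype_of_interest, global_celltypes=global_celltypes,
                         get_parent_nodes=True, fill_missing=True)
# The nested dict is Spectra-ready; its keys are what the task returns.
with open(output_file, "w") as f:
    json.dump(G.celltype_process_dict, f)
result = {"keys": list(G.celltype_process_dict.keys())}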
<description> Initialize the Cytopus KnowledgeBase and generate a JSON file containing a nested dictionary with gene set annotations organized by cell type, suitable for input into the Spectra library. </description> <arguments> celltype_of_interest (list): List of cell types for which to retrieve gene sets (example: ['B_memory', 'B_naive', 'CD4_T', 'CD8_T', 'DC', 'ILC3', 'MDC', 'NK', 'Treg', 'gdT', 'mast', 'pDC', 'plasma']) global_celltypes (list): List of global cell types to include in the JSON file. (example: ['all-cells', 'leukocyte']) output_file (str): Path to the file where the output JSON file should be stored. (example: '/mount/output/Spectra_dict.json') </arguments> <returns> dict with the following structure: { 'keys': list # The list of keys in the produced JSON file. } </returns>
[ { "bibtex": "@article{kunes2023cytopus,\n author = {Kunes, Russell Z. and Walle, Thomas and Land, Max and Nawy, Tal and Pe’er, Dana},\n title = {Supervised discovery of interpretable gene programs from single-cell data},\n year = {2023},\n journal = {Nature Biotechnology},\n volume = {42},\n number = {7},\n pages = {1084--1095},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "kunes2023cytopus", "url": "https://www.nature.com/articles/s41587-023-01940-3" } ]
cyvcf2_count_alterations
{ "branch": "main", "commit": "541ab16", "env": [], "info": "cyvcf2 from https://github.com/brentp/cyvcf2 (at commit: 541ab16) (at branch: main)", "name": "cyvcf2", "url": "https://github.com/brentp/cyvcf2" }
cpu
[ "pedersen2017cyvcf2" ]
genomics_proteomics
Use cyvcf2 to parse a VCF file containing detected sequence variants and count the number of single nucleotide polymorphisms (SNPs) from a specific reference nucleotide to a specific alternate nucleotide.
[ { "description": "Path to the input VCF file", "name": "input_vcf", "type": "str" }, { "description": "The reference nucleotide to compare against (\"A\", \"C\", \"G\", or \"T\")", "name": "reference_nucleotide", "type": "str" }, { "description": "The alternate nucleotide to compare against (\"A\", \"C\", \"G\", or \"T\")", "name": "alternate_nucleotide", "type": "str" } ]
[ { "description": "The number of SNPs that are altered from reference `reference_nucleotide` to `alternate_nucleotide`.", "name": "num_snps", "type": "int" } ]
{ "arguments": [ { "name": "input_vcf", "value": "\"/mount/input/SRR2058984_zc.vcf\"" }, { "name": "reference_nucleotide", "value": "\"A\"" }, { "name": "alternate_nucleotide", "value": "\"C\"" } ], "mount": [ { "source": "SRR2058984_zc.vcf", "target": "SRR2058984_zc.vcf" } ], "name": "example" }
[ { "arguments": [ { "name": "input_vcf", "value": "\"/mount/input/SRR2058985_zc.vcf\"" }, { "name": "reference_nucleotide", "value": "\"A\"" }, { "name": "alternate_nucleotide", "value": "\"T\"" } ], "mount": [ { "source": "SRR2058985_zc.vcf", "target": "SRR2058985_zc.vcf" } ], "name": "SRR2058985" }, { "arguments": [ { "name": "input_vcf", "value": "\"/mount/input/SRR2058987_zc.vcf\"" }, { "name": "reference_nucleotide", "value": "\"T\"" }, { "name": "alternate_nucleotide", "value": "\"C\"" } ], "mount": [ { "source": "SRR2058987_zc.vcf", "target": "SRR2058987_zc.vcf" } ], "name": "SRR2058987" }, { "arguments": [ { "name": "input_vcf", "value": "\"/mount/input/SRR2058988_zc.vcf\"" }, { "name": "reference_nucleotide", "value": "\"T\"" }, { "name": "alternate_nucleotide", "value": "\"A\"" } ], "mount": [ { "source": "SRR2058988_zc.vcf", "target": "SRR2058988_zc.vcf" } ], "name": "SRR2058988" }, { "arguments": [ { "name": "input_vcf", "value": "\"/mount/input/SRR2058989_zc.vcf\"" }, { "name": "reference_nucleotide", "value": "\"T\"" }, { "name": "alternate_nucleotide", "value": "\"G\"" } ], "mount": [ { "source": "SRR2058989_zc.vcf", "target": "SRR2058989_zc.vcf" } ], "name": "SRR2058989" } ]
null
def cyvcf2_count_alterations(input_vcf: str = '/mount/input/SRR2058984_zc.vcf', reference_nucleotide: str = 'A', alternate_nucleotide: str = 'C') -> dict: """ Use cyvcf2 to parse a VCF file containing detected sequence variants and count the number of single nucleotide polymorphisms (SNPs) from a specific reference nucleotide to a specific alternate nucleotide. Args: input_vcf: Path to the input VCF file reference_nucleotide: The reference nucleotide to compare against ("A", "C", "G", or "T") alternate_nucleotide: The alternate nucleotide to compare against ("A", "C", "G", or "T") Returns: dict with the following structure: { 'num_snps': int # The number of SNPs that are altered from reference `reference_nucleotide` to `alternate_nucleotide`. } """
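A minimal sketch using cyvcf2's documented record attributes (is_snp, REF, ALT); how multi-allelic records should be counted is an assumption (here, one count if any ALT allele matches).

from cyvcf2 import VCF

def count_snps(input_vcf, reference_nucleotide, alternate_nucleotide):
    num_snps = 0
    for variant in VCF(input_vcf):
        # variant.REF is a string, variant.ALT a list of alternate alleles.
        if variant.is_snp and variant.REF == reference_nucleotide \
                and alternate_nucleotide in variant.ALT:
            num_snps += 1
    return {"num_snps": num_snps}

print(count_snps("/mount/input/SRR2058984_zc.vcf", "A", "C"))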
<description> Use cyvcf2 to parse a VCF file containing detected sequence variants and count the number of single nucleotide polymorphisms (SNPs) from a specific reference nucleotide to a specific alternate nucleotide. </description> <arguments> input_vcf (str): Path to the input VCF file (example: '/mount/input/SRR2058984_zc.vcf') reference_nucleotide (str): The reference nucleotide to compare against ("A", "C", "G", or "T") (example: 'A') alternate_nucleotide (str): The alternate nucleotide to compare against ("A", "C", "G", or "T") (example: 'C') </arguments> <returns> dict with the following structure: { 'num_snps': int # The number of SNPs that are altered from reference `reference_nucleotide` to `alternate_nucleotide`. } </returns>
[ { "bibtex": "@article{pedersen2017cyvcf2,\n author = {Pedersen, Brent S and Quinlan, Aaron R},\n title = {cyvcf2: fast, flexible variant analysis with Python},\n year = {2017},\n month = {02},\n journal = {Bioinformatics},\n volume = {33},\n number = {12},\n pages = {1867-1869},\n issn = {1367-4803},\n}", "id": "pedersen2017cyvcf2", "url": "https://academic.oup.com/bioinformatics/article/33/12/1867/2971439" } ]
eagle_extract_features
{ "branch": "simple_feature_extraction", "commit": null, "env": [], "info": "EAGLE from https://github.com/KatherLab/EAGLE (at branch: simple_feature_extraction)", "name": "EAGLE", "url": "https://github.com/KatherLab/EAGLE" }
cuda
[ "neidlinger2025eagle" ]
pathology
Perform slide-level feature extraction using EAGLE, given tile-level features.
[ { "description": "Path to the output folder where the features will be saved", "name": "output_dir", "type": "str" }, { "description": "Path to the input folder containing the tile features to create the weighting (chief-CTP). Files are in *.h5 format, one file per slide.", "name": "feature_dir_weighting", "type": "str" }, { "description": "Path to the input folder containing the tile features that will be aggregated (Virchow2). Files are in *.h5 format, one file per slide.", "name": "feature_dir_aggregation", "type": "str" } ]
[ { "description": "The number of slides that were processed", "name": "num_processed_slides", "type": "int" } ]
{ "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_features-crc\"" }, { "name": "feature_dir_aggregation", "value": "\"/mount/input/TCGA-CRC-Virchow2-features\"" }, { "name": "feature_dir_weighting", "value": "\"/mount/input/TCGA-CRC-ChiefCTP-features\"" } ], "mount": [ { "source": "TCGA-CRC-Virchow2-features", "target": "TCGA-CRC-Virchow2-features" }, { "source": "TCGA-CRC-ChiefCTP-features", "target": "TCGA-CRC-ChiefCTP-features" } ], "name": "example" }
[ { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_features-crc\"" }, { "name": "feature_dir_aggregation", "value": "\"/mount/input/TCGA-CRC-Virchow2-features\"" }, { "name": "feature_dir_weighting", "value": "\"/mount/input/TCGA-CRC-ChiefCTP-features\"" } ], "mount": [ { "source": "TCGA-CRC-Virchow2-features", "target": "TCGA-CRC-Virchow2-features" }, { "source": "TCGA-CRC-ChiefCTP-features", "target": "TCGA-CRC-ChiefCTP-features" } ], "name": "crc" }, { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_features-brca\"" }, { "name": "feature_dir_aggregation", "value": "\"/mount/input/TCGA-BRCA-Virchow2-features\"" }, { "name": "feature_dir_weighting", "value": "\"/mount/input/TCGA-BRCA-ChiefCTP-features\"" } ], "mount": [ { "source": "TCGA-BRCA-Virchow2-features", "target": "TCGA-BRCA-Virchow2-features" }, { "source": "TCGA-BRCA-ChiefCTP-features", "target": "TCGA-BRCA-ChiefCTP-features" } ], "name": "brca" }, { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/cobra_features-brca-single\"" }, { "name": "feature_dir_aggregation", "value": "\"/mount/input/TCGA-BRCA-Virchow2-features\"" }, { "name": "feature_dir_weighting", "value": "\"/mount/input/TCGA-BRCA-ChiefCTP-features\"" } ], "mount": [ { "source": "TCGA-BRCA-Virchow2-features/TCGA-3C-AALJ-01Z-00-DX2.62DFE56B-B84C-40F9-9625-FCB55767B70D.h5", "target": "TCGA-BRCA-Virchow2-features/TCGA-3C-AALJ-01Z-00-DX2.62DFE56B-B84C-40F9-9625-FCB55767B70D.h5" }, { "source": "TCGA-BRCA-ChiefCTP-features/TCGA-3C-AALJ-01Z-00-DX2.62DFE56B-B84C-40F9-9625-FCB55767B70D.h5", "target": "TCGA-BRCA-ChiefCTP-features/TCGA-3C-AALJ-01Z-00-DX2.62DFE56B-B84C-40F9-9625-FCB55767B70D.h5" } ], "name": "brca-single" } ]
null
def eagle_extract_features(output_dir: str = '/mount/output/cobra_features-crc', feature_dir_weighting: str = '/mount/input/TCGA-CRC-ChiefCTP-features', feature_dir_aggregation: str = '/mount/input/TCGA-CRC-Virchow2-features') -> dict: """ Perform slide-level feature extraction using EAGLE, given tile-level features. Args: output_dir: Path to the output folder where the features will be saved feature_dir_weighting: Path to the input folder containing the tile features to create the weighting (chief-CTP). Files are in *.h5 format, one file per slide. feature_dir_aggregation: Path to the input folder containing the tile features that will be aggregated (Virchow2). Files are in *.h5 format, one file per slide. Returns: dict with the following structure: { 'num_processed_slides': int # The number of slides that were processed } """
<description> Perform slide-level feature extraction using EAGLE, given tile-level features. </description> <arguments> output_dir (str): Path to the output folder where the features will be saved (example: '/mount/output/cobra_features-crc') feature_dir_weighting (str): Path to the input folder containing the tile features to create the weighting (chief-CTP). Files are in *.h5 format, one file per slide. (example: '/mount/input/TCGA-CRC-ChiefCTP-features') feature_dir_aggregation (str): Path to the input folder containing the tile features that will be aggregated (Virchow2). Files are in *.h5 format, one file per slide. (example: '/mount/input/TCGA-CRC-Virchow2-features') </arguments> <returns> dict with the following structure: { 'num_processed_slides': int # The number of slides that were processed } </returns>
[ { "bibtex": "@misc{neidlinger2025eagle,\n author = {Peter Neidlinger and Tim Lenz and Sebastian Foersch and Chiara M. L. Loeffler and Jan Clusmann and Marco Gustav and Lawrence A. Shaktah and Rupert Langer and Bastian Dislich and Lisa A. Boardman and Amy J. French and Ellen L. Goode and Andrea Gsur and Stefanie Brezina and Marc J. Gunter and Robert Steinfelder and Hans-Michael Behrens and Christoph Röcken and Tabitha Harrison and Ulrike Peters and Amanda I. Phipps and Giuseppe Curigliano and Nicola Fusco and Antonio Marra and Michael Hoffmeister and Hermann Brenner and Jakob Nikolas Kather},\n title = {A deep learning framework for efficient pathology image analysis},\n year = {2025},\n url = {https://arxiv.org/abs/2502.13027},\n archiveprefix = {arXiv},\n eprint = {2502.13027},\n primaryclass = {cs.CV},\n}", "id": "neidlinger2025eagle", "url": "https://arxiv.org/abs/2502.13027" } ]
esm_fold_predict
{ "branch": null, "commit": "2b36991", "env": [], "info": "ESM from https://github.com/facebookresearch/esm (at commit: 2b36991)", "name": "ESM", "url": "https://github.com/facebookresearch/esm" }
cuda
[ "verkuil2022esm1", "hie2022esm2" ]
genomics_proteomics
Generate the representation of a protein sequence and the contact map using Facebook Research's pretrained esm2_t33_650M_UR50D model.
[ { "description": "Protein sequence to for which to generate representation and contact map.", "name": "sequence", "type": "str" } ]
[ { "description": "Token representations for the protein sequence as a list of floats, i.e. a 1D array of shape L where L is the number of tokens.", "name": "sequence_representation", "type": "list" }, { "description": "Contact map for the protein sequence as a list of list of floats, i.e. a 2D array of shape LxL where L is the number of tokens.", "name": "contact_map", "type": "list" } ]
{ "arguments": [ { "name": "sequence", "value": "\"MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG\"" } ], "mount": [], "name": "example" }
[ { "arguments": [ { "name": "sequence", "value": "\"KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE\"" } ], "mount": [], "name": "protein2" }, { "arguments": [ { "name": "sequence", "value": "\"KALTARQQEVFDLIRD<mask>ISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE\"" } ], "mount": [], "name": "protein2_with_mask" }, { "arguments": [ { "name": "sequence", "value": "\"K A <mask> I S Q\"" } ], "mount": [], "name": "protein3" } ]
This repository does not provide simple CLI functions, only example scripts. We ask the model to re-implement one of the examples.
def esm_fold_predict(sequence: str = 'MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG') -> dict: """ Generate the representation of a protein sequence and the contact map using Facebook Research's pretrained esm2_t33_650M_UR50D model. Args: sequence: Protein sequence for which to generate the representation and contact map. Returns: dict with the following structure: { 'sequence_representation': list # Token representations for the protein sequence as a list of floats, i.e. a 1D array of shape L where L is the number of tokens. 'contact_map': list # Contact map for the protein sequence as a list of list of floats, i.e. a 2D array of shape LxL where L is the number of tokens. } """
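This mirrors the usage example in the ESM README; how the per-token (L x 1280) representation tensor is reduced to the 1D list this task expects is left open here (an assumption the implementer must resolve).

import torch
import esm

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG")]
batch_labels, batch_strs, batch_tokens = batch_converter(data)
with torch.no_grad():
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)
token_representations = results["representations"][33]  # shape (1, tokens, 1280)
contact_map = results["contacts"][0].tolist()            # (L, L) contact probabilities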
<description> Generate the representation of a protein sequence and the contact map using Facebook Research's pretrained esm2_t33_650M_UR50D model. </description> <arguments> sequence (str): Protein sequence for which to generate the representation and contact map. (example: 'MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG') </arguments> <returns> dict with the following structure: { 'sequence_representation': list # Token representations for the protein sequence as a list of floats, i.e. a 1D array of shape L where L is the number of tokens. 'contact_map': list # Contact map for the protein sequence as a list of list of floats, i.e. a 2D array of shape LxL where L is the number of tokens. } </returns>
[ { "bibtex": "@misc{verkuil2022esm1,\n author = {Verkuil, Robert and Kabeli, Ori and Du, Yilun and Wicky, Basile I. M. and Milles, Lukas F. and Dauparas, Justas and Baker, David and Ovchinnikov, Sergey and Sercu, Tom and Rives, Alexander},\n title = {Language models generalize beyond natural proteins},\n year = {2022},\n archiveprefix = {bioRxiv},\n eprint = {2022.12.21.521521},\n}", "id": "verkuil2022esm1", "url": "https://www.biorxiv.org/content/10.1101/2022.12.21.521521v1" }, { "bibtex": "@misc{hie2022esm2,\n author = {Hie, Brian and Candido, Salvatore and Lin, Zeming and Kabeli, Ori and Rao, Roshan and Smetanin, Nikita and Sercu, Tom and Rives, Alexander},\n title = {A high-level programming language for generative protein design},\n year = {2022},\n archiveprefix = {bioRxiv},\n eprint = {2022.12.21.521526},\n}", "id": "hie2022esm2", "url": "https://www.biorxiv.org/content/10.1101/2022.12.21.521526v1" } ]
flowmap_overfit_scene
{ "branch": null, "commit": "578a515", "env": [], "info": "FlowMap from https://github.com/dcharatan/flowmap (at commit: 578a515)", "name": "FlowMap", "url": "https://github.com/dcharatan/flowmap" }
cuda
[ "smith2024flowmap" ]
misc
Overfit FlowMap on an input scene to determine camera extrinsics for each frame in the scene.
[ { "description": "Path to the directory containing the images of the input scene (just the image files, nothing else)", "name": "input_scene", "type": "str" } ]
[ { "description": "The number of images (frames) in the scene", "name": "n", "type": "int" }, { "description": "The camera extrinsics matrix for each of the n frames in the scene, must have a shape of nx4x4 (as a nested python list of floats)", "name": "camera_extrinsics", "type": "list" } ]
{ "arguments": [ { "name": "input_scene", "value": "\"/mount/input/llff_flower\"" } ], "mount": [ { "source": "flowmap/llff_flower", "target": "llff_flower" } ], "name": "example" }
[ { "arguments": [ { "name": "input_scene", "value": "\"/mount/input/llff_fern\"" } ], "mount": [ { "source": "flowmap/llff_fern", "target": "llff_fern" } ], "name": "llff_fern" }, { "arguments": [ { "name": "input_scene", "value": "\"/mount/input/llff_orchids\"" } ], "mount": [ { "source": "flowmap/llff_orchids", "target": "llff_orchids" } ], "name": "llff_orchids" } ]
null
def flowmap_overfit_scene(input_scene: str = '/mount/input/llff_flower') -> dict: """ Overfit FlowMap on an input scene to determine camera extrinsics for each frame in the scene. Args: input_scene: Path to the directory containing the images of the input scene (just the image files, nothing else) Returns: dict with the following structure: { 'n': int # The number of images (frames) in the scene 'camera_extrinsics': list # The camera extrinsics matrix for each of the n frames in the scene; must have a shape of nx4x4 (as a nested python list of floats) } """
<description> Overfit FlowMap on an input scene to determine camera extrinsics for each frame in the scene. </description> <arguments> input_scene (str): Path to the directory containing the images of the input scene (just the image files, nothing else) (example: '/mount/input/llff_flower') </arguments> <returns> dict with the following structure: { 'n': int # The number of images (frames) in the scene 'camera_extrinsics': list # The camera extrinsics matrix for each of the n frames in the scene; must have a shape of nx4x4 (as a nested python list of floats) } </returns>
[ { "bibtex": "@misc{smith2024flowmap,\n author = {Cameron Smith and David Charatan and Ayush Tewari and Vincent Sitzmann},\n title = {FlowMap: High-Quality Camera Poses, Intrinsics, and Depth via Gradient Descent},\n year = {2024},\n archiveprefix = {arXiv},\n eprint = {2404.15259},\n}", "id": "smith2024flowmap", "url": "https://arxiv.org/abs/2404.15259" } ]
llmaix_extract_text
{ "branch": null, "commit": "693564b", "env": [], "info": "LLMAIx from https://github.com/KatherLab/LLMAIx (at commit: 693564b)", "name": "LLMAIx", "url": "https://github.com/KatherLab/LLMAIx" }
cpu
[ "wiest2024llm" ]
llms
Preprocess various input files into a standardized structure: a list of texts. The list can contain multiple texts for .csv and .xlsx input files. In some cases, OCR needs to be applied, or even enforced, to get the correct text from a PDF (or image) document.
[ { "description": "Path to the input file. Input file can be a .pdf, .csv, .xlsx, .png, .jpg and .docx file.", "name": "file_path", "type": "str" } ]
[ { "description": "The preprocessed document(s), as a list of strings (usually only one string, but in case .csv or .xlsx documents are preprocessed - one per line).", "name": "ocr_text_list", "type": "list" } ]
{ "arguments": [ { "name": "file_path", "value": "\"/mount/input/9874563.pdf\"" } ], "mount": [ { "source": "9874563.pdf", "target": "9874563.pdf" } ], "name": "example" }
[ { "arguments": [ { "name": "file_path", "value": "\"/mount/input/9874563.pdf\"" } ], "mount": [ { "source": "9874563.pdf", "target": "9874563.pdf" } ], "name": "pdf_ocr" }, { "arguments": [ { "name": "file_path", "value": "\"/mount/input/9874562.pdf\"" } ], "mount": [ { "source": "9874562.pdf", "target": "9874562.pdf" } ], "name": "pdf_ocr_force" }, { "arguments": [ { "name": "file_path", "value": "\"/mount/input/data.csv\"" } ], "mount": [ { "source": "data.csv", "target": "data.csv" } ], "name": "csv" }, { "arguments": [ { "name": "file_path", "value": "\"/mount/input/data.xlsx\"" } ], "mount": [ { "source": "data.xlsx", "target": "data.xlsx" } ], "name": "xlsx" }, { "arguments": [ { "name": "file_path", "value": "\"/mount/input/9874563.png\"" } ], "mount": [ { "source": "9874563.png", "target": "9874563.png" } ], "name": "image" } ]
null
def llmaix_extract_text(file_path: str = '/mount/input/9874563.pdf') -> dict: """ Preprocess various input files into a standardized structure: a list of texts. The list can contain multiple texts for .csv and .xlsx input files. In some cases, OCR needs to be applied, or even enforced, to get the correct text from a PDF (or image) document. Args: file_path: Path to the input file. The input file can be a .pdf, .csv, .xlsx, .png, .jpg, or .docx file. Returns: dict with the following structure: { 'ocr_text_list': list # The preprocessed document(s), as a list of strings (usually a single string; for .csv or .xlsx documents, one string per row). } """
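A hedged sketch of the dispatch logic described above, built on generic libraries (pypdf, pandas, pytesseract/Pillow) rather than LLMAIx's actual internals; .docx handling and the OCR-enforcement path for scanned PDFs are omitted.

import pandas as pd
from pypdf import PdfReader

def extract_text(file_path: str) -> list:
    ext = file_path.rsplit(".", 1)[-1].lower()
    if ext == "pdf":
        # Native text layer only; a scanned PDF would need the OCR path instead.
        return ["\n".join(page.extract_text() or "" for page in PdfReader(file_path).pages)]
    if ext in ("csv", "xlsx"):
        # One text per row, mirroring the "one string per row" behaviour described above.
        df = pd.read_csv(file_path) if ext == "csv" else pd.read_excel(file_path)
        return [" ".join(str(v) for v in row) for row in df.itertuples(index=False)]
    if ext in ("png", "jpg"):
        import pytesseract
        from PIL import Image
        return [pytesseract.image_to_string(Image.open(file_path))]
    raise ValueError(f"unsupported extension: {ext}")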
<description> Preprocess various input files into a standardized structure: a list of texts. The list can contain multiple texts for .csv and .xlsx input files. In some cases, OCR needs to be applied, or even enforced, to get the correct text from a PDF (or image) document. </description> <arguments> file_path (str): Path to the input file. The input file can be a .pdf, .csv, .xlsx, .png, .jpg, or .docx file. (example: '/mount/input/9874563.pdf') </arguments> <returns> dict with the following structure: { 'ocr_text_list': list # The preprocessed document(s), as a list of strings (usually a single string; for .csv or .xlsx documents, one string per row). } </returns>
[ { "bibtex": "@article{wiest2024llm,\n author = {Wiest, Isabella Catharina and Wolf, Fabian and Le{\\ss}mann, Marie-Elisabeth and van Treeck, Marko and Ferber, Dyke and Zhu, Jiefu and Boehme, Heiko and Bressem, Keno K and Ulrich, Hannes and Ebert, Matthias P and others},\n title = {LLM-AIx: An open source pipeline for Information Extraction from unstructured medical text based on privacy preserving Large Language Models},\n year = {2024},\n journal = {medRxiv},\n}", "id": "wiest2024llm", "url": "https://www.medrxiv.org/content/10.1101/2024.09.02.24312917" } ]
medsam_inference
{ "branch": null, "commit": "b9db486", "env": [], "info": "MedSAM from https://github.com/bowang-lab/MedSAM (at commit: b9db486)", "name": "MedSAM", "url": "https://github.com/bowang-lab/MedSAM" }
cuda
[ "ma2024medsam" ]
radiology
Use the trained MedSAM model to segment the given abdomen CT scan.
[ { "description": "Path to the abdomen CT scan image.", "name": "image_file", "type": "str" }, { "description": "Bounding box to segment (list of 4 integers).", "name": "bounding_box", "type": "list" }, { "description": "Path to where the segmentation image should be saved.", "name": "segmentation_file", "type": "str" } ]
[]
{ "arguments": [ { "name": "image_file", "value": "\"/mount/input/my_image.jpg\"" }, { "name": "bounding_box", "value": "[25, 100, 155, 155]" }, { "name": "segmentation_file", "value": "\"/mount/output/segmented_image.png\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg", "target": "my_image.jpg" } ], "name": "example" }
[ { "arguments": [ { "name": "image_file", "value": "\"/mount/input/cucumber.jpg\"" }, { "name": "bounding_box", "value": "[25, 100, 155, 155]" }, { "name": "segmentation_file", "value": "\"/mount/output/segmented_image.png\"" } ], "mount": [ { "source": "cucumber.jpg", "target": "cucumber.jpg" } ], "name": "cucumber" }, { "arguments": [ { "name": "image_file", "value": "\"/mount/input/cucumber.jpg\"" }, { "name": "bounding_box", "value": "[25, 100, 155, 155]" }, { "name": "segmentation_file", "value": "\"/mount/output/some_other_file.png\"" } ], "mount": [ { "source": "cucumber.jpg", "target": "cucumber.jpg" } ], "name": "other_output_file" }, { "arguments": [ { "name": "image_file", "value": "\"/mount/input/image2.png\"" }, { "name": "bounding_box", "value": "[25, 100, 155, 155]" }, { "name": "segmentation_file", "value": "\"/mount/output/segmented_image.png\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png", "target": "image2.png" } ], "name": "png" } ]
null
def medsam_inference(image_file: str = '/mount/input/my_image.jpg', bounding_box: list = [25, 100, 155, 155], segmentation_file: str = '/mount/output/segmented_image.png') -> dict: """ Use the trained MedSAM model to segment the given abdomen CT scan. Args: image_file: Path to the abdomen CT scan image. bounding_box: Bounding box to segment (list of 4 integers). segmentation_file: Path to where the segmentation image should be saved. Returns: empty dict """
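A hedged sketch: MedSAM is a fine-tuned SAM ViT-B checkpoint, so it can be driven through the segment-anything predictor API; the checkpoint filename below is an assumption, and the repository's own MedSAM_Inference.py applies its own preprocessing not reproduced here.

import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load the MedSAM weights into a SAM ViT-B backbone (checkpoint path assumed).
model = sam_model_registry["vit_b"](checkpoint="medsam_vit_b.pth")
predictor = SamPredictor(model)

image = np.array(Image.open("/mount/input/my_image.jpg").convert("RGB"))
predictor.set_image(image)
# Prompt with the bounding box [x_min, y_min, x_max, y_max].
masks, _, _ = predictor.predict(box=np.array([25, 100, 155, 155]), multimask_output=False)
Image.fromarray((masks[0] * 255).astype(np.uint8)).save("/mount/output/segmented_image.png")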
<description> Use the trained MedSAM model to segment the given abdomen CT scan. </description> <arguments> image_file (str): Path to the abdomen CT scan image. (example: '/mount/input/my_image.jpg') bounding_box (list): Bounding box to segment (list of 4 integers). (example: [25, 100, 155, 155]) segmentation_file (str): Path to where the segmentation image should be saved. (example: '/mount/output/segmented_image.png') </arguments> <returns> empty dict </returns>
[ { "bibtex": "@article{ma2024medsam,\n author = {Ma, Jun and He, Yuting and Li, Feifei and Han, Lin and You, Chenyu and Wang, Bo},\n title = {Segment anything in medical images},\n year = {2024},\n journal = {Nature Communications},\n volume = {15},\n number = {1},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "ma2024medsam", "url": "https://www.nature.com/articles/s41467-024-44824-z" } ]
medsss_generate
{ "branch": null, "commit": "ebbfd02", "env": [ { "name": "HF_TOKEN", "value": "${env:HF_TOKEN}" } ], "info": "MedSSS from https://github.com/pixas/MedSSS (at commit: ebbfd02)", "name": "MedSSS", "url": "https://github.com/pixas/MedSSS" }
cuda
[ "jiang2025medsss" ]
llms
Given a user message, generate a response using the MedSSS_Policy model.
[ { "description": "The user message.", "name": "user_message", "type": "str" } ]
[ { "description": "The response generated by the model.", "name": "response", "type": "str" } ]
{ "arguments": [ { "name": "user_message", "value": "\"How to stop a cough?\"" } ], "mount": [], "name": "example" }
[ { "arguments": [ { "name": "user_message", "value": "\"How would you treat a patient with advanced non-small cell lung cancer?\"" } ], "mount": [], "name": "nsclc" }, { "arguments": [ { "name": "user_message", "value": "\"You are the first responder to a motor vehicle accident. The patient is unconscious and has a suspected spinal injury. What would you do?\"" } ], "mount": [], "name": "motor_vehicle_accident" }, { "arguments": [ { "name": "user_message", "value": "\"You are a pediatrician seeing a 5 year old with a rash covering their whole chest. What are your first steps?\"" } ], "mount": [], "name": "pediatric_rash" } ]
null
def medsss_generate(user_message: str = 'How to stop a cough?') -> dict: """ Given a user message, generate a response using the MedSSS_Policy model. Args: user_message: The user message. Returns: dict with the following structure: { 'response': str # The response generated by the model. } """
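A hedged sketch using the generic transformers chat API; the Hugging Face model id below is an assumption derived from the repository name and may differ from the actual MedSSS_Policy release.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pixas/MedSSS_Policy"  # assumed Hub id; check the MedSSS repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "How to stop a cough?"}],
    tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)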
<description> Given a user message, generate a response using the MedSSS_Policy model. </description> <arguments> user_message (str): The user message. (example: 'How to stop a cough?') </arguments> <returns> dict with the following structure: { 'response': str # The response generated by the model. } </returns>
[ { "bibtex": "@misc{jiang2025medsss,\n author = {Shuyang Jiang and Yusheng Liao and Zhe Chen and Ya Zhang and Yanfeng Wang and Yu Wang},\n title = {MedS$^3$: Towards Medical Small Language Models with Self-Evolved Slow Thinking},\n year = {2025},\n archiveprefix = {arXiv},\n eprint = {2501.12051},\n}", "id": "jiang2025medsss", "url": "https://arxiv.org/abs/2501.12051" } ]
modernbert_predict_masked
{ "branch": null, "commit": "8c57a0f", "env": [], "info": "ModernBERT from https://github.com/AnswerDotAI/ModernBERT (at commit: 8c57a0f)", "name": "ModernBERT", "url": "https://github.com/AnswerDotAI/ModernBERT" }
cpu
[ "warner2024modernbert" ]
llms
Given a masked sentence string, predict the original sentence using the pretrained ModernBERT-base model on CPU.
[ { "description": "The masked sentence string. The masked part is represented by \"[MASK]\"\".", "name": "input_string", "type": "str" } ]
[ { "description": "The predicted original sentence (including the predicted masked part)", "name": "prediction", "type": "str" } ]
{ "arguments": [ { "name": "input_string", "value": "\"Paris is the [MASK] of France.\"" } ], "mount": [], "name": "example" }
[ { "arguments": [ { "name": "input_string", "value": "\"He walked to the [MASK].\"" } ], "mount": [], "name": "walking" }, { "arguments": [ { "name": "input_string", "value": "\"The future of AI is [MASK].\"" } ], "mount": [], "name": "future_of_ai" }, { "arguments": [ { "name": "input_string", "value": "\"The meaning of life is [MASK].\"" } ], "mount": [], "name": "meaning_of_life" } ]
null
def modernbert_predict_masked(input_string: str = 'Paris is the [MASK] of France.') -> dict: """ Given a masked sentence string, predict the original sentence using the pretrained ModernBERT-base model on CPU. Args: input_string: The masked sentence string. The masked part is represented by "[MASK]". Returns: dict with the following structure: { 'prediction': str # The predicted original sentence (including the predicted masked part) } """
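A minimal sketch with the transformers fill-mask pipeline (ModernBERT-base is published on the Hub as answerdotai/ModernBERT-base and uses the standard [MASK] token); taking the top-scoring candidate as the prediction is an assumption.

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="answerdotai/ModernBERT-base", device=-1)  # CPU
candidates = fill_mask("Paris is the [MASK] of France.")
prediction = candidates[0]["sequence"]  # top candidate, e.g. "Paris is the capital of France."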
<description> Given a masked sentence string, predict the original sentence using the pretrained ModernBERT-base model on CPU. </description> <arguments> input_string (str): The masked sentence string. The masked part is represented by "[MASK]". (example: 'Paris is the [MASK] of France.') </arguments> <returns> dict with the following structure: { 'prediction': str # The predicted original sentence (including the predicted masked part) } </returns>
[ { "bibtex": "@misc{warner2024modernbert,\n author = {Benjamin Warner and Antoine Chaffin and Benjamin Clavié and Orion Weller and Oskar Hallström and Said Taghadouini and Alexis Gallagher and Raja Biswas and Faisal Ladhak and Tom Aarsen and Nathan Cooper and Griffin Adams and Jeremy Howard and Iacopo Poli},\n title = {Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference},\n year = {2024},\n archiveprefix = {arXiv},\n eprint = {2412.13663},\n}", "id": "warner2024modernbert", "url": "https://arxiv.org/abs/2412.13663" } ]
mopadi_generate_counterfactuals
{ "branch": null, "commit": "4e76820", "env": [ { "name": "HF_TOKEN", "value": "${env:HF_TOKEN}" } ], "info": "mopadi from https://github.com/KatherLab/mopadi (at commit: 4e76820)", "name": "mopadi", "url": "https://github.com/KatherLab/mopadi" }
cuda
[ "zigutyte2024mopadi" ]
pathology
Generate counterfactual explanations for the top 3 tiles per patient by manipulating them with a specific amplitude, such that the predicted class of each counterfactual image flips to the opposite class (i.e., the predicted output for the opposite class exceeds 0.9), while avoiding excessive overmanipulation. Use a pretrained diffusion autoencoder according to the cancer type, combined with a corresponding MIL classifier trained to distinguish biologically meaningful histological patterns. You will be provided with the path to the folder containing images, the clinical table with each patient's target label values, and the folder containing pre-extracted features.
[ { "description": "Path to the folder containing patient subfolders with image patches", "name": "images_dir", "type": "str" }, { "description": "Path to the folder containing extracted features for each patient", "name": "feat_path_test", "type": "str" }, { "description": "Path to the XLSX file containing the MSI status of each patient", "name": "clini_table", "type": "str" }, { "description": "Name of the column in the clinical table that contains classification labels", "name": "target_label", "type": "str" }, { "description": "Path to the output directory where the results will be saved", "name": "base_dir", "type": "str" }, { "description": "Amplitude of the manipulation to be applied to the images", "name": "manipulation_levels", "type": "list" }, { "description": "Name of the pretrained diffusion autoencoder model", "name": "pretrained_autoenc_name", "type": "str" }, { "description": "Name of the pretrained classifier", "name": "pretrained_clf_name", "type": "str" } ]
[ { "description": "The number of counterfactual images that were generated", "name": "num_counterfactuals", "type": "int" } ]
{ "arguments": [ { "name": "images_dir", "value": "\"/mount/input/images/TCGA-CRC\"" }, { "name": "feat_path_test", "value": "\"/mount/input/features/TCGA-CRC\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA-CRC-DX_CLINI.xlsx\"" }, { "name": "target_label", "value": "\"isMSIH\"" }, { "name": "base_dir", "value": "\"/mount/output/counterfactuals_crc_msi\"" }, { "name": "manipulation_levels", "value": "[0.06]" }, { "name": "pretrained_autoenc_name", "value": "\"crc_512_model\"" }, { "name": "pretrained_clf_name", "value": "\"msi\"" } ], "mount": [ { "source": "images/TCGA-CRC", "target": "images/TCGA-CRC" }, { "source": "features/TCGA-CRC", "target": "features/TCGA-CRC" }, { "source": "TCGA-CRC-DX_CLINI.xlsx", "target": "TCGA-CRC-DX_CLINI.xlsx" } ], "name": "example" }
[ { "arguments": [ { "name": "images_dir", "value": "\"/mount/input/images/TCGA-BRCA\"" }, { "name": "feat_path_test", "value": "\"/mount/input/features/TCGA-BRCA\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA-BRCA-DX_CLINI.csv\"" }, { "name": "target_label", "value": "\"BRCA_Pathology\"" }, { "name": "base_dir", "value": "\"/mount/output/counterfactuals_brca_types\"" }, { "name": "manipulation_levels", "value": "[0.06]" }, { "name": "pretrained_autoenc_name", "value": "\"brca_512_model\"" }, { "name": "pretrained_clf_name", "value": "\"type\"" } ], "mount": [ { "source": "images/TCGA-BRCA", "target": "images/TCGA-BRCA" }, { "source": "features/TCGA-BRCA", "target": "features/TCGA-BRCA" }, { "source": "TCGA-BRCA-DX_CLINI.csv", "target": "TCGA-BRCA-DX_CLINI.csv" } ], "name": "brca_types" }, { "arguments": [ { "name": "images_dir", "value": "\"/mount/input/images/Pancancer-liver\"" }, { "name": "feat_path_test", "value": "\"/mount/input/features/Pancancer\"" }, { "name": "clini_table", "value": "\"/mount/input/Pancancer_clini.xlsx\"" }, { "name": "target_label", "value": "\"Type\"" }, { "name": "base_dir", "value": "\"/mount/output/counterfactuals_liver_types\"" }, { "name": "manipulation_levels", "value": "[0.04]" }, { "name": "pretrained_autoenc_name", "value": "\"pancancer_model\"" }, { "name": "pretrained_clf_name", "value": "\"liver\"" } ], "mount": [ { "source": "images/Pancancer-liver", "target": "images/Pancancer-liver" }, { "source": "features/Pancancer", "target": "features/Pancancer" }, { "source": "Pancancer_clini.xlsx", "target": "Pancancer_clini.xlsx" } ], "name": "liver" }, { "arguments": [ { "name": "images_dir", "value": "\"/mount/input/images/Pancancer-lung\"" }, { "name": "feat_path_test", "value": "\"/mount/input/features/Pancancer\"" }, { "name": "clini_table", "value": "\"/mount/input/Pancancer_clini.xlsx\"" }, { "name": "target_label", "value": "\"Type\"" }, { "name": "base_dir", "value": "\"/mount/output/counterfactuals_lung_types\"" }, { "name": "manipulation_levels", "value": "[0.06]" }, { "name": "pretrained_autoenc_name", "value": "\"pancancer_model\"" }, { "name": "pretrained_clf_name", "value": "\"lung\"" } ], "mount": [ { "source": "images/Pancancer-lung", "target": "images/Pancancer-lung" }, { "source": "features/Pancancer", "target": "features/Pancancer" }, { "source": "Pancancer_clini.xlsx", "target": "Pancancer_clini.xlsx" } ], "name": "lung" } ]
null
def mopadi_generate_counterfactuals(images_dir: str = '/mount/input/images/TCGA-CRC', feat_path_test: str = '/mount/input/features/TCGA-CRC', clini_table: str = '/mount/input/TCGA-CRC-DX_CLINI.xlsx', target_label: str = 'isMSIH', base_dir: str = '/mount/output/counterfactuals_crc_msi', manipulation_levels: list = [0.06], pretrained_autoenc_name: str = 'crc_512_model', pretrained_clf_name: str = 'msi') -> dict: """ Generate counterfactual explanations for the top 3 tiles per patient by manipulating them with a specific amplitude, such that the predicted class of each counterfactual image flips to the opposite class (i.e., the predicted output for the opposite class exceeds 0.9), while avoiding excessive overmanipulation. Use a pretrained diffusion autoencoder according to the cancer type, combined with a corresponding MIL classifier trained to distinguish biologically meaningful histological patterns. You will be provided with the path to the folder containing images, the clinical table with each patient's target label values, and the folder containing pre-extracted features. Args: images_dir: Path to the folder containing patient subfolders with image patches feat_path_test: Path to the folder containing extracted features for each patient clini_table: Path to the clinical table (XLSX or CSV) containing each patient's target label values target_label: Name of the column in the clinical table that contains classification labels base_dir: Path to the output directory where the results will be saved manipulation_levels: Amplitude of the manipulation to be applied to the images pretrained_autoenc_name: Name of the pretrained diffusion autoencoder model pretrained_clf_name: Name of the pretrained classifier Returns: dict with the following structure: { 'num_counterfactuals': int # The number of counterfactual images that were generated } """
<description> Generate counterfactual explanations for the top 3 tiles per patient by manipulating them with a specific amplitude, such that the predicted class of each counterfactual image flips to the opposite class (i.e., the predicted output for the opposite class exceeds 0.9), while avoiding excessive overmanipulation. Use a pretrained diffusion autoencoder according to the cancer type, combined with a corresponding MIL classifier trained to distinguish biologically meaningful histological patterns. You will be provided with the path to the folder containing images, the clinical table with each patient's target label values, and the folder containing pre-extracted features. </description> <arguments> images_dir (str): Path to the folder containing patient subfolders with image patches (example: '/mount/input/images/TCGA-CRC') feat_path_test (str): Path to the folder containing extracted features for each patient (example: '/mount/input/features/TCGA-CRC') clini_table (str): Path to the clinical table (XLSX or CSV) containing each patient's target label values (example: '/mount/input/TCGA-CRC-DX_CLINI.xlsx') target_label (str): Name of the column in the clinical table that contains classification labels (example: 'isMSIH') base_dir (str): Path to the output directory where the results will be saved (example: '/mount/output/counterfactuals_crc_msi') manipulation_levels (list): Amplitude of the manipulation to be applied to the images (example: [0.06]) pretrained_autoenc_name (str): Name of the pretrained diffusion autoencoder model (example: 'crc_512_model') pretrained_clf_name (str): Name of the pretrained classifier (example: 'msi') </arguments> <returns> dict with the following structure: { 'num_counterfactuals': int # The number of counterfactual images that were generated } </returns>
[ { "bibtex": "@misc{zigutyte2024mopadi,\n author = {Laura Žigutytė and Tim Lenz and Tianyu Han and Katherine Jane Hewitt and Nic Gabriel Reitsam and Sebastian Foersch and Zunamys I Carrero and Michaela Unger and Alexander T Pearson and Daniel Truhn and Jakob Nikolas Kather},\n title = {ounterfactual Diffusion Models for Mechanistic Explainability of Artificial Intelligence Models in Pathology},\n year = {2024},\n url = {https://www.biorxiv.org/content/10.1101/2024.10.29.620913v1},\n archiveprefix = {bioRxiv},\n eprint = {2024.10.29.620913},\n}", "id": "zigutyte2024mopadi", "url": "https://www.biorxiv.org/content/10.1101/2024.10.29.620913v2" } ]
musk_extract_features
{ "branch": null, "commit": "e1699c2", "env": [ { "name": "HF_TOKEN", "value": "${env:HF_TOKEN}" } ], "info": "MUSK from https://github.com/lilab-stanford/MUSK (at commit: e1699c2)", "name": "MUSK", "url": "https://github.com/lilab-stanford/MUSK" }
cuda
[ "xiang2025musk" ]
pathology
Perform feature extraction on an input image using the vision part of MUSK.
[ { "description": "Path to the input image", "name": "input_image", "type": "str" } ]
[ { "description": "The feature vector extracted from the input image, as a list of floats", "name": "features", "type": "list" } ]
{ "arguments": [ { "name": "input_image", "value": "\"/mount/input/TUM/TUM-TCGA-ACRLPPQE.tif\"" } ], "mount": [ { "source": "TUM-TCGA-ACRLPPQE.tif", "target": "TUM/TUM-TCGA-ACRLPPQE.tif" } ], "name": "example" }
[ { "arguments": [ { "name": "input_image", "value": "\"/mount/input/MUC/MUC-TCGA-ACCPKIPN.tif\"" } ], "mount": [ { "source": "MUC-TCGA-ACCPKIPN.tif", "target": "MUC/MUC-TCGA-ACCPKIPN.tif" } ], "name": "kather100k_muc" }, { "arguments": [ { "name": "input_image", "value": "\"/mount/input/TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png", "target": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png" } ], "name": "tcga_brca_patch_png" }, { "arguments": [ { "name": "input_image", "value": "\"/mount/input/TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg", "target": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg" } ], "name": "tcga_brca_patch_jpg" } ]
null
def musk_extract_features(input_image: str = '/mount/input/TUM/TUM-TCGA-ACRLPPQE.tif') -> dict: """ Perform feature extraction on an input image using the vision part of MUSK. Args: input_image: Path to the input image Returns: dict with the following structure: { 'features': list # The feature vector extracted from the input image, as a list of floats } """
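A hedged sketch of the vision-only forward pass, modeled on the MUSK repository's README; the timm registry name, the hub id, the loader helper, and the forward keyword arguments are all assumptions rather than verified API:

import torch
import timm
from PIL import Image
from torchvision import transforms
from musk import utils  # helper module shipped in the MUSK repo (assumed import path)

def musk_extract_features(input_image: str) -> dict:
    model = timm.models.create_model("musk_large_patch16_384")  # assumed registry name
    # Assumed loader from the repo README; pulls weights from the Hugging Face hub.
    utils.load_model_and_may_interpolate("hf_hub:xiangjx/musk", model, "model|module", "")
    model.eval()
    preprocess = transforms.Compose([
        transforms.Resize(384, interpolation=transforms.InterpolationMode.BICUBIC),
        transforms.CenterCrop(384),
        transforms.ToTensor(),
    ])
    img = preprocess(Image.open(input_image).convert("RGB")).unsqueeze(0)
    with torch.inference_mode():
        # with_head=False returns the raw embedding; index 0 is the vision branch.
        features = model(image=img, with_head=False, out_norm=True)[0]
    return {"features": features.squeeze(0).tolist()}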
<description> Perform feature extraction on an input image using the vision part of MUSK. </description> <arguments> input_image (str): Path to the input image (example: '/mount/input/TUM/TUM-TCGA-ACRLPPQE.tif') </arguments> <returns> dict with the following structure: { 'features': list # The feature vector extracted from the input image, as a list of floats } </returns>
[ { "bibtex": "@article{xiang2025musk,\n author = {Xiang, Jinxi and Wang, Xiyue and Zhang, Xiaoming and Xi, Yinghua and Eweje, Feyisope and Chen, Yijiang and Li, Yuchen and Bergstrom, Colin and Gopaulchan, Matthew and Kim, Ted and Yu, Kun-Hsing and Willens, Sierra and Olguin, Francesca Maria and Nirschl, Jeffrey J. and Neal, Joel and Diehn, Maximilian and Yang, Sen and Li, Ruijiang},\n title = {A vision-language foundation model for precision oncology},\n year = {2025},\n journal = {Nature},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "xiang2025musk", "url": "https://www.nature.com/articles/s41586-024-08378-w" } ]
nnunet_preprocess
{ "branch": null, "commit": "58a3b12", "env": [], "info": "nnUNet from https://github.com/MIC-DKFZ/nnUNet (at commit: 58a3b12)", "name": "nnUNet", "url": "https://github.com/MIC-DKFZ/nnUNet" }
cpu
[ "isensee2020nnunet" ]
radiology
Preprocess a dataset using nnUNetv2. The dataset is in the old Medical Segmentation Decathlon (MSD) format and will need to be converted. Does not require GPU.
[ { "description": "The path to the dataset folder to train the model on (in MSD format, so contains dataset.json, imagesTr, imagesTs, labelsTr)", "name": "dataset_path", "type": "str" } ]
[ { "description": "The dataset config object (dataset.json) created by nnUNetv2, as the parsed json object", "name": "dataset_json", "type": "dict" }, { "description": "The nnUNetv2 plan file (nnUNetPlans.json) created by nnUNetv2, as the parsed json object", "name": "nnUNetPlans_json", "type": "dict" } ]
{ "arguments": [ { "name": "dataset_path", "value": "\"/mount/input/Task02_Heart\"" } ], "mount": [ { "source": "msd/Task02_Heart", "target": "Task02_Heart" } ], "name": "example" }
[ { "arguments": [ { "name": "dataset_path", "value": "\"/mount/input/Task05_Prostate\"" } ], "mount": [ { "source": "msd/Task05_Prostate", "target": "Task05_Prostate" } ], "name": "prostate" }, { "arguments": [ { "name": "dataset_path", "value": "\"/mount/input/Task09_Spleen\"" } ], "mount": [ { "source": "msd/Task09_Spleen", "target": "Task09_Spleen" } ], "name": "spleen" }, { "arguments": [ { "name": "dataset_path", "value": "\"/mount/input/Task04_Hippocampus\"" } ], "mount": [ { "source": "msd/Task04_Hippocampus", "target": "Task04_Hippocampus" } ], "name": "hippocampus" } ]
Use the UNet model from DKFZ to train a medical segmentation model. More info here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/how_to_use_nnunet.md https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/dataset_format.md
def nnunet_preprocess(dataset_path: str = '/mount/input/Task02_Heart') -> dict: """ Preprocess a dataset using nnUNetv2. The dataset is in the old Medical Segmentation Decathlon (MSD) format and will need to be converted. Does not require GPU. Args: dataset_path: The path to the dataset folder to train the model on (in MSD format, so contains dataset.json, imagesTr, imagesTs, labelsTr) Returns: dict with the following structure: { 'dataset_json': dict # The dataset config object (dataset.json) created by nnUNetv2, as the parsed json object 'nnUNetPlans_json': dict # The nnUNetv2 plan file (nnUNetPlans.json) created by nnUNetv2, as the parsed json object } """
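Since nnU-Net v2 is driven from the command line, a wrapper like the stub above would plausibly shell out as below. The scratch-directory layout is an assumption; the two CLI entry points and the three environment variables are standard nnU-Net v2:

import json
import os
import subprocess
from pathlib import Path

def nnunet_preprocess(dataset_path: str = '/mount/input/Task02_Heart') -> dict:
    work = Path("/tmp/nnunet")  # assumed scratch location
    env = dict(os.environ)
    for var, sub in [("nnUNet_raw", "raw"), ("nnUNet_preprocessed", "preprocessed"), ("nnUNet_results", "results")]:
        (work / sub).mkdir(parents=True, exist_ok=True)
        env[var] = str(work / sub)
    # Convert the old MSD layout (TaskXX_Name) into the nnUNetv2 dataset format.
    subprocess.run(["nnUNetv2_convert_MSD_dataset", "-i", dataset_path], env=env, check=True)
    task_id = int(Path(dataset_path).name.split("_")[0].removeprefix("Task"))  # Task02_Heart -> 2
    subprocess.run(["nnUNetv2_plan_and_preprocess", "-d", str(task_id), "--verify_dataset_integrity"],
                   env=env, check=True)
    ds_dir = next((work / "preprocessed").glob("Dataset*"))  # converted dataset folder
    return {
        "dataset_json": json.loads((ds_dir / "dataset.json").read_text()),
        "nnUNetPlans_json": json.loads((ds_dir / "nnUNetPlans.json").read_text()),
    }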
<description> Preprocess a dataset using nnUNetv2. The dataset is in the old Medical Segmentation Decathlon (MSD) format and will need to be converted. Does not require GPU. </description> <arguments> dataset_path (str): The path to the dataset folder to train the model on (in MSD format, so contains dataset.json, imagesTr, imagesTs, labelsTr) (example: '/mount/input/Task02_Heart') </arguments> <returns> dict with the following structure: { 'dataset_json': dict # The dataset config object (dataset.json) created by nnUNetv2, as the parsed json object 'nnUNetPlans_json': dict # The nnUNetv2 plan file (nnUNetPlans.json) created by nnUNetv2, as the parsed json object } </returns>
[ { "bibtex": "@article{isensee2020nnunet,\n author = {Isensee, Fabian and Jaeger, Paul F. and Kohl, Simon A. A. and Petersen, Jens and Maier-Hein, Klaus H.},\n title = {nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation},\n year = {2020},\n journal = {Nature Methods},\n volume = {18},\n number = {2},\n pages = {203--211},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "isensee2020nnunet", "url": "https://www.nature.com/articles/s41592-020-01008-z" } ]
pathfinder_verify_biomarker
{ "branch": null, "commit": "093d77b", "env": [], "info": "PathFinder from https://github.com/LiangJunhao-THU/PathFinderCRC (at commit: 093d77b)", "name": "PathFinder", "url": "https://github.com/LiangJunhao-THU/PathFinderCRC" }
cpu
[ "liang2023pathfinder" ]
pathology
Given WSI probability maps, a hypothesis of a potential biomarker, and clinical data, determine (1) whether the potential biomarker is significant for patient prognosis, and (2) whether the potential biomarker is independent among already known biomarkers.
[ { "description": "Path to the folder containing the numpy array (`*.npy`) files, which contains the heatmaps of the trained model (each heatmap is HxWxC where C is the number of classes)", "name": "heatmaps", "type": "str" }, { "description": "A python file, which contains a function `def hypothesis_score(prob_map_path: str) -> float` which expresses a mathematical model of a hypothesis of a potential biomarker. For a particular patient (whose heatmap is given by `prob_map_path` as a npy file), the function returns a risk score.", "name": "hypothesis", "type": "str" }, { "description": "Path to the CSV file containing the clinical data", "name": "clini_table", "type": "str" }, { "description": "Path to the CSV file containing the mapping between patient IDs (in the PATIENT column) and heatmap filenames (in the FILENAME column)", "name": "files_table", "type": "str" }, { "description": "The name of the column in the clinical data that contains the survival time", "name": "survival_time_column", "type": "str" }, { "description": "The name of the column in the clinical data that contains the event (e.g. death, recurrence, etc.)", "name": "event_column", "type": "str" }, { "description": "A list of known biomarkers. These are column names in the clinical data.", "name": "known_biomarkers", "type": "list" } ]
[ { "description": "The p-value of the significance of the potential biomarker", "name": "p_value", "type": "float" }, { "description": "The hazard ratio for the biomarker", "name": "hazard_ratio", "type": "float" } ]
{ "arguments": [ { "name": "heatmaps", "value": "\"/mount/input/TCGA_CRC\"" }, { "name": "hypothesis", "value": "\"/mount/input/mus_fraction_score.py\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA_CRC_info.csv\"" }, { "name": "files_table", "value": "\"/mount/input/TCGA_CRC_files.csv\"" }, { "name": "survival_time_column", "value": "\"OS.time\"" }, { "name": "event_column", "value": "\"vital_status\"" }, { "name": "known_biomarkers", "value": "[\"MSI\"]" } ], "mount": [ { "source": "TCGA_CRC", "target": "TCGA_CRC" }, { "source": "mus_fraction_score.py", "target": "mus_fraction_score.py" }, { "source": "TCGA_CRC_info.csv", "target": "TCGA_CRC_info.csv" }, { "source": "TCGA_CRC_files.csv", "target": "TCGA_CRC_files.csv" } ], "name": "example" }
[ { "arguments": [ { "name": "heatmaps", "value": "\"/mount/input/TCGA_CRC\"" }, { "name": "hypothesis", "value": "\"/mount/input/tum_fraction_score.py\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA_CRC_info.csv\"" }, { "name": "files_table", "value": "\"/mount/input/TCGA_CRC_files.csv\"" }, { "name": "survival_time_column", "value": "\"OS.time\"" }, { "name": "event_column", "value": "\"OS\"" }, { "name": "known_biomarkers", "value": "[\"MSI\"]" } ], "mount": [ { "source": "TCGA_CRC", "target": "TCGA_CRC" }, { "source": "tum_fraction_score.py", "target": "tum_fraction_score.py" }, { "source": "TCGA_CRC_info.csv", "target": "TCGA_CRC_info.csv" }, { "source": "TCGA_CRC_files.csv", "target": "TCGA_CRC_files.csv" } ], "name": "crc_tum_fraction_score" }, { "arguments": [ { "name": "heatmaps", "value": "\"/mount/input/TCGA_CRC\"" }, { "name": "hypothesis", "value": "\"/mount/input/str_fraction_score.py\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA_CRC_info.csv\"" }, { "name": "files_table", "value": "\"/mount/input/TCGA_CRC_files.csv\"" }, { "name": "survival_time_column", "value": "\"OS.time\"" }, { "name": "event_column", "value": "\"OS\"" }, { "name": "known_biomarkers", "value": "[\"MSI\"]" } ], "mount": [ { "source": "TCGA_CRC", "target": "TCGA_CRC" }, { "source": "str_fraction_score.py", "target": "str_fraction_score.py" }, { "source": "TCGA_CRC_info.csv", "target": "TCGA_CRC_info.csv" }, { "source": "TCGA_CRC_files.csv", "target": "TCGA_CRC_files.csv" } ], "name": "crc_str_fraction_score" }, { "arguments": [ { "name": "heatmaps", "value": "\"/mount/input/CPTAC_CRC\"" }, { "name": "hypothesis", "value": "\"/mount/input/str_fraction_score.py\"" }, { "name": "clini_table", "value": "\"/mount/input/CPTAC_CRC_info.csv\"" }, { "name": "files_table", "value": "\"/mount/input/CPTAC_CRC_files.csv\"" }, { "name": "survival_time_column", "value": "\"OS.time\"" }, { "name": "event_column", "value": "\"OS\"" }, { "name": "known_biomarkers", "value": "[\"MSI\"]" } ], "mount": [ { "source": "CPTAC_CRC", "target": "CPTAC_CRC" }, { "source": "str_fraction_score.py", "target": "str_fraction_score.py" }, { "source": "CPTAC_CRC_info.csv", "target": "CPTAC_CRC_info.csv" }, { "source": "CPTAC_CRC_files.csv", "target": "CPTAC_CRC_files.csv" } ], "name": "cptac_str_fraction_score" } ]
null
def pathfinder_verify_biomarker(heatmaps: str = '/mount/input/TCGA_CRC', hypothesis: str = '/mount/input/mus_fraction_score.py', clini_table: str = '/mount/input/TCGA_CRC_info.csv', files_table: str = '/mount/input/TCGA_CRC_files.csv', survival_time_column: str = 'OS.time', event_column: str = 'vital_status', known_biomarkers: list = ['MSI']) -> dict: """ Given WSI probability maps, a hypothesis of a potential biomarker, and clinical data, determine (1) whether the potential biomarker is significant for patient prognosis, and (2) whether the potential biomarker is independent among already known biomarkers. Args: heatmaps: Path to the folder containing the numpy array (`*.npy`) files, which contain the heatmaps of the trained model (each heatmap is HxWxC where C is the number of classes) hypothesis: Path to a Python file containing a function `def hypothesis_score(prob_map_path: str) -> float` that expresses a mathematical model of a hypothesized potential biomarker. For a particular patient (whose heatmap is given by `prob_map_path` as an .npy file), the function returns a risk score. clini_table: Path to the CSV file containing the clinical data files_table: Path to the CSV file containing the mapping between patient IDs (in the PATIENT column) and heatmap filenames (in the FILENAME column) survival_time_column: The name of the column in the clinical data that contains the survival time event_column: The name of the column in the clinical data that contains the event (e.g. death, recurrence, etc.) known_biomarkers: A list of known biomarkers. These are column names in the clinical data. Returns: dict with the following structure: { 'p_value': float # The p-value of the significance of the potential biomarker 'hazard_ratio': float # The hazard ratio for the biomarker } """
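The statistical core of this check can be sketched with lifelines, a substitute for PathFinder's own survival scripts. The PATIENT/FILENAME merge keys come from the record; the assumption that the event and biomarker columns are already numerically encoded is mine:

import importlib.util
from pathlib import Path
import pandas as pd
from lifelines import CoxPHFitter

def verify_biomarker_sketch(heatmaps, hypothesis, clini_table, files_table,
                            survival_time_column, event_column, known_biomarkers):
    # Load hypothesis_score() from the user-supplied Python file.
    spec = importlib.util.spec_from_file_location("hypothesis", hypothesis)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    df = pd.read_csv(clini_table).merge(pd.read_csv(files_table), on="PATIENT")
    df["score"] = [mod.hypothesis_score(str(Path(heatmaps) / f)) for f in df["FILENAME"]]
    # Multivariable Cox model: the candidate score alongside the known biomarkers,
    # so a significant `score` term indicates an independent prognostic signal.
    # Assumes event_column and the biomarker columns are already 0/1-encoded.
    cols = ["score", survival_time_column, event_column] + list(known_biomarkers)
    cph = CoxPHFitter()
    cph.fit(df[cols].dropna(), duration_col=survival_time_column, event_col=event_column)
    return {"p_value": float(cph.summary.loc["score", "p"]),
            "hazard_ratio": float(cph.summary.loc["score", "exp(coef)"])}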
<description> Given WSI probability maps, a hypothesis of a potential biomarker, and clinical data, determine (1) whether the potential biomarker is significant for patient prognosis, and (2) whether the potential biomarker is independent among already known biomarkers. </description> <arguments> heatmaps (str): Path to the folder containing the numpy array (`*.npy`) files, which contain the heatmaps of the trained model (each heatmap is HxWxC where C is the number of classes) (example: '/mount/input/TCGA_CRC') hypothesis (str): Path to a Python file containing a function `def hypothesis_score(prob_map_path: str) -> float` that expresses a mathematical model of a hypothesized potential biomarker. For a particular patient (whose heatmap is given by `prob_map_path` as an .npy file), the function returns a risk score. (example: '/mount/input/mus_fraction_score.py') clini_table (str): Path to the CSV file containing the clinical data (example: '/mount/input/TCGA_CRC_info.csv') files_table (str): Path to the CSV file containing the mapping between patient IDs (in the PATIENT column) and heatmap filenames (in the FILENAME column) (example: '/mount/input/TCGA_CRC_files.csv') survival_time_column (str): The name of the column in the clinical data that contains the survival time (example: 'OS.time') event_column (str): The name of the column in the clinical data that contains the event (e.g. death, recurrence, etc.) (example: 'vital_status') known_biomarkers (list): A list of known biomarkers. These are column names in the clinical data. (example: ['MSI']) </arguments> <returns> dict with the following structure: { 'p_value': float # The p-value of the significance of the potential biomarker 'hazard_ratio': float # The hazard ratio for the biomarker } </returns>
[ { "bibtex": "@article{liang2023pathfinder,\n author = {Liang, Junhao and Zhang, Weisheng and Yang, Jianghui and Wu, Meilong and Dai, Qionghai and Yin, Hongfang and Xiao, Ying and Kong, Lingjie},\n title = {Deep learning supported discovery of biomarkers for clinical prognosis of liver cancer},\n year = {2023},\n journal = {Nature Machine Intelligence},\n volume = {5},\n number = {4},\n pages = {408--420},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "liang2023pathfinder", "url": "https://www.nature.com/articles/s42256-023-00635-3" } ]
retfound_feature_vector
{ "branch": null, "commit": "897d71c", "env": [ { "name": "HF_TOKEN", "value": "${env:HF_TOKEN}" } ], "info": "RETFound from https://github.com/rmaphoh/RETFound_MAE (at commit: 897d71c)", "name": "RETFound", "url": "https://github.com/rmaphoh/RETFound_MAE" }
cuda
[ "zhou2023retfound" ]
misc
Extract the latent feature vector for the given retinal image using the RETFound pretrained RETFound_mae_natureCFP model.
[ { "description": "Path to the retinal image.", "name": "image_file", "type": "str" } ]
[ { "description": "The feature vector for the given retinal image, as a list of floats.", "name": "feature_vector", "type": "list" } ]
{ "arguments": [ { "name": "image_file", "value": "\"/mount/input/retinal_image.jpg\"" } ], "mount": [ { "source": "cucumber.jpg", "target": "retinal_image.jpg" } ], "name": "example" }
[ { "arguments": [ { "name": "image_file", "value": "\"/mount/input/image1.jpg\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg", "target": "image1.jpg" } ], "name": "jpg" }, { "arguments": [ { "name": "image_file", "value": "\"/mount/input/image2.png\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png", "target": "image2.png" } ], "name": "png" }, { "arguments": [ { "name": "image_file", "value": "\"/mount/input/cucumber.jpg\"" } ], "mount": [ { "source": "cucumber.jpg", "target": "cucumber.jpg" } ], "name": "cucumber_different_filename" } ]
null
def retfound_feature_vector(image_file: str = '/mount/input/retinal_image.jpg') -> dict: """ Extract the latent feature vector for the given retinal image using the RETFound pretrained RETFound_mae_natureCFP model. Args: image_file: Path to the retinal image. Returns: dict with the following structure: { 'feature_vector': list # The feature vector for the given retinal image, as a list of floats. } """
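A sketch along the lines of the RETFound_MAE repo's feature extraction; the `models_vit` import comes from that repo, while the checkpoint filename, the "model" state-dict key, and the 224-pixel preprocessing are assumptions:

import torch
from PIL import Image
from torchvision import transforms
import models_vit  # module from the RETFound_MAE repository

def retfound_feature_vector(image_file: str) -> dict:
    model = models_vit.vit_large_patch16(num_classes=0, global_pool=True)
    ckpt = torch.load("RETFound_cfp_weights.pth", map_location="cpu")  # assumed filename
    model.load_state_dict(ckpt["model"], strict=False)  # MAE-style checkpoints nest under "model"
    model.eval()
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    img = tfm(Image.open(image_file).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = model.forward_features(img)  # pooled latent from the ViT-L encoder
    return {"feature_vector": vec.squeeze(0).tolist()}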
<description> Extract the latent feature vector for the given retinal image using the RETFound pretrained RETFound_mae_natureCFP model. </description> <arguments> image_file (str): Path to the retinal image. (example: '/mount/input/retinal_image.jpg') </arguments> <returns> dict with the following structure: { 'feature_vector': list # The feature vector for the given retinal image, as a list of floats. } </returns>
[ { "bibtex": "@article{zhou2023retfound,\n author = {Zhou, Yukun and Chia, Mark A. and Wagner, Siegfried K. and Ayhan, Murat S. and Williamson, Dominic J. and Struyven, Robbert R. and Liu, Timing and Xu, Moucheng and Lozano, Mateo G. and Woodward-Court, Peter and Kihara, Yuka and Allen, Naomi and Gallacher, John E. J. and Littlejohns, Thomas and Aslam, Tariq and Bishop, Paul and Black, Graeme and Sergouniotis, Panagiotis and Atan, Denize and Dick, Andrew D. and Williams, Cathy and Barman, Sarah and Barrett, Jenny H. and Mackie, Sarah and Braithwaite, Tasanee and Carare, Roxana O. and Ennis, Sarah and Gibson, Jane and Lotery, Andrew J. and Self, Jay and Chakravarthy, Usha and Hogg, Ruth E. and Paterson, Euan and Woodside, Jayne and Peto, Tunde and Mckay, Gareth and Mcguinness, Bernadette and Foster, Paul J. and Balaskas, Konstantinos and Khawaja, Anthony P. and Pontikos, Nikolas and Rahi, Jugnoo S. and Lascaratos, Gerassimos and Patel, Praveen J. and Chan, Michelle and Chua, Sharon Y. L. and Day, Alexander and Desai, Parul and Egan, Cathy and Fruttiger, Marcus and Garway-Heath, David F. and Hardcastle, Alison and Khaw, Sir Peng T. and Moore, Tony and Sivaprasad, Sobha and Strouthidis, Nicholas and Thomas, Dhanes and Tufail, Adnan and Viswanathan, Ananth C. and Dhillon, Bal and Macgillivray, Tom and Sudlow, Cathie and Vitart, Veronique and Doney, Alexander and Trucco, Emanuele and Guggeinheim, Jeremy A. and Morgan, James E. and Hammond, Chris J. and Williams, Katie and Hysi, Pirro and Harding, Simon P. and Zheng, Yalin and Luben, Robert and Luthert, Phil and Sun, Zihan and McKibbin, Martin and O’Sullivan, Eoin and Oram, Richard and Weedon, Mike and Owen, Chris G. and Rudnicka, Alicja R. and Sattar, Naveed and Steel, David and Stratton, Irene and Tapp, Robyn and Yates, Max M. and Petzold, Axel and Madhusudhan, Savita and Altmann, Andre and Lee, Aaron Y. and Topol, Eric J. and Denniston, Alastair K. and Alexander, Daniel C. and Keane, Pearse A.},\n title = {A foundation model for generalizable disease detection from retinal images},\n year = {2023},\n journal = {Nature},\n volume = {622},\n number = {7981},\n pages = {156--163},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "zhou2023retfound", "url": "https://www.nature.com/articles/s41586-023-06555-x" } ]
stamp_extract_features
{ "branch": null, "commit": "97522aa", "env": [], "info": "STAMP from https://github.com/KatherLab/STAMP (at commit: 97522aa)", "name": "STAMP", "url": "https://github.com/KatherLab/STAMP" }
cuda
[ "elnahhas2024stamp" ]
pathology
Perform feature extraction using CTransPath with STAMP on a set of whole slide images, saving the resulting features to the specified output directory.
[ { "description": "Path to the output folder where the features will be saved", "name": "output_dir", "type": "str" }, { "description": "Path to the input folder containing the whole slide images", "name": "slide_dir", "type": "str" } ]
[ { "description": "The number of slides that were processed", "name": "num_processed_slides", "type": "int" } ]
{ "arguments": [ { "name": "output_dir", "value": "\"/mount/output/TCGA-BRCA-features\"" }, { "name": "slide_dir", "value": "\"/mount/input/TCGA-BRCA-SLIDES\"" } ], "mount": [ { "source": "brca", "target": "TCGA-BRCA-SLIDES" } ], "name": "example" }
[ { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/TCGA-CRC-features\"" }, { "name": "slide_dir", "value": "\"/mount/input/TCGA-CRC-SLIDES\"" } ], "mount": [ { "source": "crc", "target": "TCGA-CRC-SLIDES" } ], "name": "crc" }, { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/TCGA-CRC-features\"" }, { "name": "slide_dir", "value": "\"/mount/input/TCGA-CRC-SLIDES\"" } ], "mount": [ { "source": "crc/TCGA-4N-A93T-01Z-00-DX1.82E240B1-22C3-46E3-891F-0DCE35C43F8B.svs", "target": "TCGA-CRC-SLIDES/TCGA-4N-A93T-01Z-00-DX1.82E240B1-22C3-46E3-891F-0DCE35C43F8B.svs" } ], "name": "crc_single" }, { "arguments": [ { "name": "output_dir", "value": "\"/mount/output/TCGA-BRCA-features\"" }, { "name": "slide_dir", "value": "\"/mount/input/TCGA-BRCA-SLIDES\"" } ], "mount": [ { "source": "brca/TCGA-BH-A0BZ-01Z-00-DX1.45EB3E93-A871-49C6-9EAE-90D98AE01913.svs", "target": "TCGA-BRCA-SLIDES/TCGA-BH-A0BZ-01Z-00-DX1.45EB3E93-A871-49C6-9EAE-90D98AE01913.svs" } ], "name": "brca_single" } ]
null
def stamp_extract_features(output_dir: str = '/mount/output/TCGA-BRCA-features', slide_dir: str = '/mount/input/TCGA-BRCA-SLIDES') -> dict: """ Perform feature extraction using CTransPath with STAMP on a set of whole slide images, saving the resulting features to the specified output directory. Args: output_dir: Path to the output folder where the features will be saved slide_dir: Path to the input folder containing the whole slide images Returns: dict with the following structure: { 'num_processed_slides': int # The number of slides that were processed } """
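STAMP is driven by a YAML config plus its CLI rather than a Python API, so a wrapper would plausibly shell out as below; the config keys and the `stamp preprocess` subcommand are assumptions modeled on the repo's example config:

import subprocess
from pathlib import Path
import yaml

def stamp_extract_features_sketch(output_dir: str, slide_dir: str) -> dict:
    config = {  # keys are assumptions; check the repo's config.yaml template
        "preprocessing": {
            "output_dir": output_dir,
            "wsi_dir": slide_dir,
            "extractor": "ctranspath",
        }
    }
    cfg_path = Path("/tmp/stamp_config.yaml")
    cfg_path.write_text(yaml.safe_dump(config))
    subprocess.run(["stamp", "--config", str(cfg_path), "preprocess"], check=True)
    # STAMP writes one feature file per slide; count them to fill the return value.
    return {"num_processed_slides": len(list(Path(output_dir).rglob("*.h5")))}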
<description> Perform feature extraction using CTransPath with STAMP on a set of whole slide images, saving the resulting features to the specified output directory. </description> <arguments> output_dir (str): Path to the output folder where the features will be saved (example: '/mount/output/TCGA-BRCA-features') slide_dir (str): Path to the input folder containing the whole slide images (example: '/mount/input/TCGA-BRCA-SLIDES') </arguments> <returns> dict with the following structure: { 'num_processed_slides': int # The number of slides that were processed } </returns>
[ { "bibtex": "@article{elnahhas2024stamp,\n author = {El Nahhas, Omar S. M. and van Treeck, Marko and W\\\"{o}lflein, Georg and Unger, Michaela and Ligero, Marta and Lenz, Tim and Wagner, Sophia J. and Hewitt, Katherine J. and Khader, Firas and Foersch, Sebastian and Truhn, Daniel and Kather, Jakob Nikolas},\n title = {From whole-slide image to biomarker prediction: end-to-end weakly supervised deep learning in computational pathology},\n year = {2024},\n journal = {Nature Protocols},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "elnahhas2024stamp", "url": "https://www.nature.com/articles/s41596-024-01047-2" } ]
stamp_train_classification_model
{ "branch": null, "commit": "97522aa", "env": [], "info": "STAMP from https://github.com/KatherLab/STAMP (at commit: 97522aa)", "name": "STAMP", "url": "https://github.com/KatherLab/STAMP" }
cuda
[ "elnahhas2024stamp" ]
pathology
Train a model for biomarker classification. You will be supplied with the path to the folder containing the whole slide images, alongside a path to a clinical table containing the training labels. Use CTransPath for feature extraction.
[ { "description": "Path to the folder containing the whole slide images", "name": "slide_dir", "type": "str" }, { "description": "Path to the CSV file containing the clinical data", "name": "clini_table", "type": "str" }, { "description": "Path to the CSV file containing the slide metadata", "name": "slide_table", "type": "str" }, { "description": "The name of the column in the clinical data that contains the target labels", "name": "target_column", "type": "str" }, { "description": "Path to the *.ckpt file where the trained model should be saved by this function", "name": "trained_model_path", "type": "str" } ]
[ { "description": "The number of parameters in the trained model", "name": "num_params", "type": "int" } ]
{ "arguments": [ { "name": "slide_dir", "value": "\"/mount/input/TCGA-BRCA-SLIDES\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA-BRCA-DX_CLINI.xlsx\"" }, { "name": "slide_table", "value": "\"/mount/input/TCGA-BRCA-DX_SLIDE.csv\"" }, { "name": "target_column", "value": "\"TP53_driver\"" }, { "name": "trained_model_path", "value": "\"/mount/output/STAMP-BRCA-TP53-model.ckpt\"" } ], "mount": [ { "source": "brca", "target": "TCGA-BRCA-SLIDES" }, { "source": "TCGA-BRCA-DX_CLINI.xlsx", "target": "TCGA-BRCA-DX_CLINI.xlsx" }, { "source": "TCGA-BRCA-DX_SLIDE.csv", "target": "TCGA-BRCA-DX_SLIDE.csv" } ], "name": "example" }
[ { "arguments": [ { "name": "slide_dir", "value": "\"/mount/input/TCGA-CRC-SLIDES\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA-CRC-DX_CLINI.xlsx\"" }, { "name": "slide_table", "value": "\"/mount/input/TCGA-CRC-DX_SLIDE.csv\"" }, { "name": "target_column", "value": "\"isMSIH\"" }, { "name": "trained_model_path", "value": "\"/mount/output/STAMP-CRC-MSI-model.ckpt\"" } ], "mount": [ { "source": "crc", "target": "TCGA-CRC-SLIDES" }, { "source": "TCGA-CRC-DX_CLINI.xlsx", "target": "TCGA-CRC-DX_CLINI.xlsx" }, { "source": "TCGA-CRC-DX_SLIDE.csv", "target": "TCGA-CRC-DX_SLIDE.csv" } ], "name": "crc_msi" }, { "arguments": [ { "name": "slide_dir", "value": "\"/mount/input/TCGA-CRC-SLIDES\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA-CRC-DX_CLINI.xlsx\"" }, { "name": "slide_table", "value": "\"/mount/input/TCGA-CRC-DX_SLIDE.csv\"" }, { "name": "target_column", "value": "\"BRAF\"" }, { "name": "trained_model_path", "value": "\"/mount/output/STAMP-CRC-BRAF-model.ckpt\"" } ], "mount": [ { "source": "crc", "target": "TCGA-CRC-SLIDES" }, { "source": "TCGA-CRC-DX_CLINI.xlsx", "target": "TCGA-CRC-DX_CLINI.xlsx" }, { "source": "TCGA-CRC-DX_SLIDE.csv", "target": "TCGA-CRC-DX_SLIDE.csv" } ], "name": "crc_braf" }, { "arguments": [ { "name": "slide_dir", "value": "\"/mount/input/TCGA-CRC-SLIDES\"" }, { "name": "clini_table", "value": "\"/mount/input/TCGA-CRC-DX_CLINI.xlsx\"" }, { "name": "slide_table", "value": "\"/mount/input/TCGA-CRC-DX_SLIDE.csv\"" }, { "name": "target_column", "value": "\"KRAS\"" }, { "name": "trained_model_path", "value": "\"/mount/output/STAMP-CRC-KRAS-model.ckpt\"" } ], "mount": [ { "source": "crc", "target": "TCGA-CRC-SLIDES" }, { "source": "TCGA-CRC-DX_CLINI.xlsx", "target": "TCGA-CRC-DX_CLINI.xlsx" }, { "source": "TCGA-CRC-DX_SLIDE.csv", "target": "TCGA-CRC-DX_SLIDE.csv" } ], "name": "crc_kras" } ]
Here, the agent must realize that it needs to perform feature extraction before training the model.
def stamp_train_classification_model(slide_dir: str = '/mount/input/TCGA-BRCA-SLIDES', clini_table: str = '/mount/input/TCGA-BRCA-DX_CLINI.xlsx', slide_table: str = '/mount/input/TCGA-BRCA-DX_SLIDE.csv', target_column: str = 'TP53_driver', trained_model_path: str = '/mount/output/STAMP-BRCA-TP53-model.ckpt') -> dict: """ Train a model for biomarker classification. You will be supplied with the path to the folder containing the whole slide images, alongside a path to a clinical table containing the training labels. Use CTransPath for feature extraction. Args: slide_dir: Path to the folder containing the whole slide images clini_table: Path to the CSV or XLSX file containing the clinical data slide_table: Path to the CSV file containing the slide metadata target_column: The name of the column in the clinical data that contains the target labels trained_model_path: Path to the *.ckpt file where the trained model should be saved by this function Returns: dict with the following structure: { 'num_params': int # The number of parameters in the trained model } """
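However training is launched, the `num_params` return value can be recovered from the saved checkpoint afterwards; this sketch assumes STAMP writes a PyTorch Lightning-style .ckpt with a `state_dict` entry:

import torch

def count_checkpoint_params(trained_model_path: str) -> int:
    ckpt = torch.load(trained_model_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # fall back to a plain state dict
    return int(sum(t.numel() for t in state.values() if torch.is_tensor(t)))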
<description> Train a model for biomarker classification. You will be supplied with the path to the folder containing the whole slide images, alongside a path to a clinical table containing the training labels. Use CTransPath for feature extraction. </description> <arguments> slide_dir (str): Path to the folder containing the whole slide images (example: '/mount/input/TCGA-BRCA-SLIDES') clini_table (str): Path to the CSV or XLSX file containing the clinical data (example: '/mount/input/TCGA-BRCA-DX_CLINI.xlsx') slide_table (str): Path to the CSV file containing the slide metadata (example: '/mount/input/TCGA-BRCA-DX_SLIDE.csv') target_column (str): The name of the column in the clinical data that contains the target labels (example: 'TP53_driver') trained_model_path (str): Path to the *.ckpt file where the trained model should be saved by this function (example: '/mount/output/STAMP-BRCA-TP53-model.ckpt') </arguments> <returns> dict with the following structure: { 'num_params': int # The number of parameters in the trained model } </returns>
[ { "bibtex": "@article{elnahhas2024stamp,\n author = {El Nahhas, Omar S. M. and van Treeck, Marko and W\\\"{o}lflein, Georg and Unger, Michaela and Ligero, Marta and Lenz, Tim and Wagner, Sophia J. and Hewitt, Katherine J. and Khader, Firas and Foersch, Sebastian and Truhn, Daniel and Kather, Jakob Nikolas},\n title = {From whole-slide image to biomarker prediction: end-to-end weakly supervised deep learning in computational pathology},\n year = {2024},\n journal = {Nature Protocols},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "elnahhas2024stamp", "url": "https://www.nature.com/articles/s41596-024-01047-2" } ]
tabpfn_predict
{ "branch": null, "commit": "e8744e4", "env": [], "info": "TabPFN from https://github.com/PriorLabs/TabPFN (at commit: e8744e4)", "name": "TabPFN", "url": "https://github.com/PriorLabs/TabPFN" }
cpu
[ "hollmann2025tabpfn" ]
misc
Train a predictor using TabPFN on a tabular dataset. Evaluate the predictor on the test set. Train on CPU.
[ { "description": "Path to the CSV file containing the training data", "name": "train_csv", "type": "str" }, { "description": "Path to the CSV file containing the test data", "name": "test_csv", "type": "str" }, { "description": "The names of the columns to use as features", "name": "feature_columns", "type": "list" }, { "description": "The name of the column to predict", "name": "target_column", "type": "str" } ]
[ { "description": "The ROC AUC score of the predictor on the test set", "name": "roc_auc", "type": "float" }, { "description": "The accuracy of the predictor on the test set", "name": "accuracy", "type": "float" }, { "description": "The probabilities of the predictor on the test set, as a list of floats (one per sample in the test set)", "name": "probs", "type": "list" } ]
{ "arguments": [ { "name": "train_csv", "value": "\"/mount/input/breast_cancer_train.csv\"" }, { "name": "test_csv", "value": "\"/mount/input/breast_cancer_test.csv\"" }, { "name": "feature_columns", "value": "[\"mean radius\", \"mean texture\", \"mean perimeter\", \"mean area\", \"mean smoothness\", \"mean compactness\", \"mean concavity\", \"mean concave points\", \"mean symmetry\", \"mean fractal dimension\", \"radius error\", \"texture error\", \"perimeter error\", \"area error\", \"smoothness error\", \"compactness error\", \"concavity error\", \"concave points error\", \"symmetry error\", \"fractal dimension error\", \"worst radius\", \"worst texture\", \"worst perimeter\", \"worst area\", \"worst smoothness\", \"worst compactness\", \"worst concavity\", \"worst concave points\", \"worst symmetry\", \"worst fractal dimension\"]" }, { "name": "target_column", "value": "\"target\"" } ], "mount": [ { "source": "breast_cancer_train.csv", "target": "breast_cancer_train.csv" }, { "source": "breast_cancer_test.csv", "target": "breast_cancer_test.csv" } ], "name": "example" }
[ { "arguments": [ { "name": "train_csv", "value": "\"/mount/input/diabetes_train.csv\"" }, { "name": "test_csv", "value": "\"/mount/input/diabetes_test.csv\"" }, { "name": "feature_columns", "value": "[\"preg\", \"plas\", \"pres\", \"skin\", \"insu\", \"mass\", \"pedi\", \"age\"]" }, { "name": "target_column", "value": "\"class\"" } ], "mount": [ { "source": "diabetes_train.csv", "target": "diabetes_train.csv" }, { "source": "diabetes_test.csv", "target": "diabetes_test.csv" } ], "name": "diabetes" }, { "arguments": [ { "name": "train_csv", "value": "\"/mount/input/heart_disease_train.csv\"" }, { "name": "test_csv", "value": "\"/mount/input/heart_disease_test.csv\"" }, { "name": "feature_columns", "value": "[\"age\", \"sex\", \"chest\", \"resting_blood_pressure\", \"serum_cholestoral\", \"fasting_blood_sugar\", \"resting_electrocardiographic_results\", \"maximum_heart_rate_achieved\", \"exercise_induced_angina\", \"oldpeak\", \"slope\", \"number_of_major_vessels\", \"thal\"]" }, { "name": "target_column", "value": "\"class\"" } ], "mount": [ { "source": "heart_disease_train.csv", "target": "heart_disease_train.csv" }, { "source": "heart_disease_test.csv", "target": "heart_disease_test.csv" } ], "name": "heart_disease" }, { "arguments": [ { "name": "train_csv", "value": "\"/mount/input/parkinsons_train.csv\"" }, { "name": "test_csv", "value": "\"/mount/input/parkinsons_test.csv\"" }, { "name": "feature_columns", "value": "[\"V1\", \"V2\", \"V3\", \"V4\", \"V5\", \"V6\", \"V7\", \"V8\", \"V9\", \"V10\", \"V11\", \"V12\", \"V13\", \"V14\", \"V15\", \"V16\", \"V17\", \"V18\", \"V19\", \"V20\", \"V21\", \"V22\"]" }, { "name": "target_column", "value": "\"Class\"" } ], "mount": [ { "source": "parkinsons_train.csv", "target": "parkinsons_train.csv" }, { "source": "parkinsons_test.csv", "target": "parkinsons_test.csv" } ], "name": "parkinsons" } ]
null
def tabpfn_predict(train_csv: str = '/mount/input/breast_cancer_train.csv', test_csv: str = '/mount/input/breast_cancer_test.csv', feature_columns: list = ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension'], target_column: str = 'target') -> dict: """ Train a predictor using TabPFN on a tabular dataset. Evaluate the predictor on the test set. Train on CPU. Args: train_csv: Path to the CSV file containing the training data test_csv: Path to the CSV file containing the test data feature_columns: The names of the columns to use as features target_column: The name of the column to predict Returns: dict with the following structure: { 'roc_auc': float # The ROC AUC score of the predictor on the test set 'accuracy': float # The accuracy of the predictor on the test set 'probs': list # The probabilities of the predictor on the test set, as a list of floats (one per sample in the test set) } """
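TabPFN's sklearn-style classifier maps directly onto this spec. A minimal sketch, assuming a binary target so that column 1 of predict_proba is the positive class:

import pandas as pd
from sklearn.metrics import accuracy_score, roc_auc_score
from tabpfn import TabPFNClassifier

def tabpfn_predict_sketch(train_csv, test_csv, feature_columns, target_column) -> dict:
    train, test = pd.read_csv(train_csv), pd.read_csv(test_csv)
    clf = TabPFNClassifier(device="cpu")  # in-context learner; fit() mostly stores the data
    clf.fit(train[feature_columns], train[target_column])
    proba = clf.predict_proba(test[feature_columns])[:, 1]
    preds = clf.predict(test[feature_columns])
    return {
        "roc_auc": float(roc_auc_score(test[target_column], proba)),
        "accuracy": float(accuracy_score(test[target_column], preds)),
        "probs": proba.tolist(),
    }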
<description> Train a predictor using TabPFN on a tabular dataset. Evaluate the predictor on the test set. Train on CPU. </description> <arguments> train_csv (str): Path to the CSV file containing the training data (example: '/mount/input/breast_cancer_train.csv') test_csv (str): Path to the CSV file containing the test data (example: '/mount/input/breast_cancer_test.csv') feature_columns (list): The names of the columns to use as features (example: ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension']) target_column (str): The name of the column to predict (example: 'target') </arguments> <returns> dict with the following structure: { 'roc_auc': float # The ROC AUC score of the predictor on the test set 'accuracy': float # The accuracy of the predictor on the test set 'probs': list # The probabilities of the predictor on the test set, as a list of floats (one per sample in the test set) } </returns>
[ { "bibtex": "@article{hollmann2025tabpfn,\n author = {Hollmann, Noah and M\\\"{u}ller, Samuel and Purucker, Lennart and Krishnakumar, Arjun and K\\\"{o}rfer, Max and Hoo, Shi Bin and Schirrmeister, Robin Tibor and Hutter, Frank},\n title = {Accurate predictions on small data with a tabular foundation model},\n year = {2025},\n journal = {Nature},\n volume = {637},\n number = {8045},\n pages = {319--326},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "hollmann2025tabpfn", "url": "https://www.nature.com/articles/s41586-024-08328-6" } ]
textgrad_medical_qa_optimize
{ "branch": null, "commit": "bf5b0c5", "env": [ { "name": "OPENAI_API_KEY", "value": "${env:OPENAI_API_KEY}" } ], "info": "textgrad from https://github.com/zou-group/textgrad (at commit: bf5b0c5)", "name": "textgrad", "url": "https://github.com/zou-group/textgrad" }
cpu
[ "yuksekgonul2024textgrad" ]
llms
Optimize answers to multiple-choice medical questions using TextGrad. Each question is improved at test-time through textual gradients, guided by an objective (e.g. "Make the answer concise and accurate").
[ { "description": "Path to a CSV file containing columns: 'index', 'question', 'objective'", "name": "csv_path", "type": "str" }, { "description": "The model used to compute textual gradients (e.g., 'gpt-4o')", "name": "backward_engine", "type": "str" }, { "description": "The model used to generate initial zero-shot answers (e.g., 'gpt-3.5-turbo')", "name": "forward_engine", "type": "str" }, { "description": "System prompt to guide the LLM behavior", "name": "starting_system_prompt", "type": "str" }, { "description": "Constraint string that specifies the required answer format", "name": "optimizer_constraint", "type": "str" } ]
[ { "description": "A list of optimized answers, each ending with 'Answer: $LETTER'", "name": "optimized_answers", "type": "list" } ]
{ "arguments": [ { "name": "csv_path", "value": "\"/mount/input/sample_0.csv\"" }, { "name": "backward_engine", "value": "\"gpt-4o\"" }, { "name": "forward_engine", "value": "\"gpt-4o\"" }, { "name": "starting_system_prompt", "value": "\"You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\\nKnowledge cutoff: 2023-12\\nCurrent date: 2024-04-01\\n\"" }, { "name": "optimizer_constraint", "value": "\"You must end your answer with a separate line like: 'Answer: A', 'Answer: B', 'Answer: C', or 'Answer: D'. \\nDo NOT include any additional explanation or diagnosis after 'Answer: $LETTER'.\\n\"" } ], "mount": [ { "source": "sample_0.csv", "target": "sample_0.csv" } ], "name": "example" }
[ { "arguments": [ { "name": "csv_path", "value": "\"/mount/input/sample_0.csv\"" }, { "name": "backward_engine", "value": "\"gpt-4o\"" }, { "name": "forward_engine", "value": "\"gpt-4\"" }, { "name": "starting_system_prompt", "value": "\"You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\\nKnowledge cutoff: 2023-12\\nCurrent date: 2024-04-01\\n\"" }, { "name": "optimizer_constraint", "value": "\"You must end your answer with a separate line like: 'Answer: A', 'Answer: B', 'Answer: C', or 'Answer: D'. \\nDo NOT include any additional explanation or diagnosis after 'Answer: $LETTER'.\\n\"" } ], "mount": [ { "source": "sample_0.csv", "target": "sample_0.csv" } ], "name": "sample_0" }, { "arguments": [ { "name": "csv_path", "value": "\"/mount/input/sample_1.csv\"" }, { "name": "backward_engine", "value": "\"gpt-4o\"" }, { "name": "forward_engine", "value": "\"gpt-4\"" }, { "name": "starting_system_prompt", "value": "\"You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\\nKnowledge cutoff: 2023-12\\nCurrent date: 2024-04-01\\n\"" }, { "name": "optimizer_constraint", "value": "\"You must end your answer with a separate line like: 'Answer: A', 'Answer: B', 'Answer: C', or 'Answer: D'. \\nDo NOT include any additional explanation or diagnosis after 'Answer: $LETTER'.\\n\"" } ], "mount": [ { "source": "sample_1.csv", "target": "sample_1.csv" } ], "name": "sample_1" }, { "arguments": [ { "name": "csv_path", "value": "\"/mount/input/sample_2.csv\"" }, { "name": "backward_engine", "value": "\"gpt-4o\"" }, { "name": "forward_engine", "value": "\"gpt-4\"" }, { "name": "starting_system_prompt", "value": "\"You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\\nKnowledge cutoff: 2023-12\\nCurrent date: 2024-04-01\\n\"" }, { "name": "optimizer_constraint", "value": "\"You must end your answer with a separate line like: 'Answer: A', 'Answer: B', 'Answer: C', or 'Answer: D'. \\nDo NOT include any additional explanation or diagnosis after 'Answer: $LETTER'.\\n\"" } ], "mount": [ { "source": "sample_2.csv", "target": "sample_2.csv" } ], "name": "sample_2" } ]
null
def textgrad_medical_qa_optimize(csv_path: str = '/mount/input/sample_0.csv', backward_engine: str = 'gpt-4o', forward_engine: str = 'gpt-4o', starting_system_prompt: str = 'You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\nKnowledge cutoff: 2023-12\nCurrent date: 2024-04-01\n', optimizer_constraint: str = "You must end your answer with a separate line like: 'Answer: A', 'Answer: B', 'Answer: C', or 'Answer: D'. \nDo NOT include any additional explanation or diagnosis after 'Answer: $LETTER'.\n") -> dict: """ Optimize answers to multiple-choice medical questions using TextGrad. Each question is improved at test-time through textual gradients, guided by an objective (e.g. "Make the answer concise and accurate"). Args: csv_path: Path to a CSV file containing columns: 'index', 'question', 'objective' backward_engine: The model used to compute textual gradients (e.g., 'gpt-4o') forward_engine: The model used to generate initial zero-shot answers (e.g., 'gpt-3.5-turbo') starting_system_prompt: System prompt to guide the LLM behavior optimizer_constraint: Constraint string that specifies the required answer format Returns: dict with the following structure: { 'optimized_answers': list # A list of optimized answers, each ending with 'Answer: $LETTER' } """
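A per-question sketch modeled on TextGrad's README solution-optimization loop; the `constraints` keyword on TGD and the `system_prompt` argument on BlackboxLLM are assumptions about the library surface at this commit:

import pandas as pd
import textgrad as tg

def optimize_answers_sketch(csv_path, backward_engine, forward_engine,
                            starting_system_prompt, optimizer_constraint) -> dict:
    tg.set_backward_engine(backward_engine, override=True)
    system = tg.Variable(starting_system_prompt, requires_grad=False,
                         role_description="system prompt")
    model = tg.BlackboxLLM(forward_engine, system_prompt=system)
    optimized = []
    for row in pd.read_csv(csv_path).itertuples():
        question = tg.Variable(row.question, requires_grad=False,
                               role_description="multiple-choice medical question")
        answer = model(question)  # zero-shot draft, refined at test time below
        answer.set_role_description("answer to the multiple-choice medical question")
        optimizer = tg.TGD(parameters=[answer], constraints=[optimizer_constraint])
        loss = tg.TextLoss(row.objective)(answer)  # the CSV objective acts as the textual loss
        loss.backward()
        optimizer.step()  # rewrites answer.value using the textual gradient
        optimized.append(answer.value)
    return {"optimized_answers": optimized}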
<description> Optimize answers to multiple-choice medical questions using TextGrad. Each question is improved at test-time through textual gradients, guided by an objective (e.g. "Make the answer concise and accurate"). </description> <arguments> csv_path (str): Path to a CSV file containing columns: 'index', 'question', 'objective' (example: '/mount/input/sample_0.csv') backward_engine (str): The model used to compute textual gradients (e.g., 'gpt-4o') (example: 'gpt-4o') forward_engine (str): The model used to generate initial zero-shot answers (e.g., 'gpt-3.5-turbo') (example: 'gpt-4o') starting_system_prompt (str): System prompt to guide the LLM behavior (example: 'You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\nKnowledge cutoff: 2023-12\nCurrent date: 2024-04-01\n') optimizer_constraint (str): Constraint string that specifies the required answer format (example: "You must end your answer with a separate line like: 'Answer: A', 'Answer: B', 'Answer: C', or 'Answer: D'. \nDo NOT include any additional explanation or diagnosis after 'Answer: $LETTER'.\n") </arguments> <returns> dict with the following structure: { 'optimized_answers': list # A list of optimized answers, each ending with 'Answer: $LETTER' } </returns>
[ { "bibtex": "@misc{yuksekgonul2024textgrad,\n author = {Yuksekgonul, Mert and Bianchi, Federico and Boen, Joseph and Liu, Sheng and Huang, Zhi and Guestrin, Carlos and Zou, James},\n title = {{TextGrad}: Automatic \"differentiation\" via text},\n year = {2024},\n archiveprefix = {arXiv},\n eprint = {2406.07496},\n}", "id": "yuksekgonul2024textgrad", "url": "https://arxiv.org/abs/2406.07496" } ]
tiatoolbox_wsi_dimensions
{ "branch": null, "commit": "7ba7394", "env": [], "info": "tiatoolbox from https://github.com/TissueImageAnalytics/tiatoolbox (at commit: 7ba7394)", "name": "tiatoolbox", "url": "https://github.com/TissueImageAnalytics/tiatoolbox" }
cpu
[ "pocock2022tiatoolbox" ]
pathology
Determine the pixel dimensions for every whole slide image (WSI) in `input_dir` using TIAToolbox.
[ { "description": "Path to the folder that contains the WSIs", "name": "input_dir", "type": "str" }, { "description": "Whether to include every pyramid level instead of only the baseline dimensions", "name": "include_pyramid", "type": "bool" } ]
[ { "description": "Dimensions of the WSI (optionally with of without full pyramid values) as a dict of {slide_filename: {\"baseline\": [width, height], \"levels\": [[width, height], ...]}}, where `baseline` is the dimensions of the WSI at the highest resolution and `levels` is a list of dimensions for each pyramid level. If `include_pyramid` is `False`, only the `baseline` dimensions are included.", "name": "dimensions", "type": "dict" } ]
{ "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "include_pyramid", "value": "true" } ], "mount": [ { "source": "wsis", "target": "wsis" } ], "name": "example" }
[ { "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "include_pyramid", "value": "false" } ], "mount": [ { "source": "wsis/TCGA-DT-5265-01Z-00-DX1.563f09af-8bbe-45cd-9c6d-85a96255e67f.svs", "target": "wsis/TCGA-DT-5265-01Z-00-DX1.563f09af-8bbe-45cd-9c6d-85a96255e67f.svs" } ], "name": "single_wsi_baseline" }, { "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "include_pyramid", "value": "true" } ], "mount": [ { "source": "wsis/TCGA-DT-5265-01Z-00-DX1.563f09af-8bbe-45cd-9c6d-85a96255e67f.svs", "target": "wsis/TCGA-DT-5265-01Z-00-DX1.563f09af-8bbe-45cd-9c6d-85a96255e67f.svs" } ], "name": "single_wsi_full_pyramid" }, { "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "include_pyramid", "value": "true" } ], "mount": [ { "source": "wsis/TCGA-AG-A011-01Z-00-DX1.155A4093-5EC6-4D38-8CE1-24C045DF0CD8.svs", "target": "wsis/TCGA-AG-A011-01Z-00-DX1.155A4093-5EC6-4D38-8CE1-24C045DF0CD8.svs" }, { "source": "wsis/TCGA-EI-6881-01Z-00-DX1.5cfa2929-4374-4166-b110-39ab7d3de7cd.svs", "target": "wsis/TCGA-EI-6881-01Z-00-DX1.5cfa2929-4374-4166-b110-39ab7d3de7cd.svs" } ], "name": "two_wsi_full_pyramid" }, { "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "include_pyramid", "value": "false" } ], "mount": [ { "source": "wsis", "target": "wsis" } ], "name": "full_dir_baseline" } ]
null
def tiatoolbox_wsi_dimensions(input_dir: str = '/mount/input/wsis', include_pyramid: bool = True) -> dict: """ Determine the pixel dimensions for every whole slide image (WSI) in `input_dir` using TIAToolbox. Args: input_dir: Path to the folder that contains the WSIs include_pyramid: Whether to include every pyramid level instead of only the baseline dimensions Returns: dict with the following structure: { 'dimensions': dict # Dimensions of the WSI (optionally with or without full pyramid values) as a dict of {slide_filename: {"baseline": [width, height], "levels": [[width, height], ...]}}, where `baseline` is the dimensions of the WSI at the highest resolution and `levels` is a list of dimensions for each pyramid level. If `include_pyramid` is `False`, only the `baseline` dimensions are included. } """
<description> Determine the pixel dimensions for every whole slide image (WSI) in `input_dir` using TIAToolbox. </description> <arguments> input_dir (str): Path to the folder that contains the WSIs (example: '/mount/input/wsis') include_pyramid (bool): Whether to include every pyramid level instead of only the baseline dimensions (example: True) </arguments> <returns> dict with the following structure: { 'dimensions': dict # Dimensions of the WSI (optionally with or without full pyramid values) as a dict of {slide_filename: {"baseline": [width, height], "levels": [[width, height], ...]}}, where `baseline` is the dimensions of the WSI at the highest resolution and `levels` is a list of dimensions for each pyramid level. If `include_pyramid` is `False`, only the `baseline` dimensions are included. } </returns>
[ { "bibtex": "@article{pocock2022tiatoolbox,\n author = {Pocock, Johnathan and Graham, Simon and Vu, Quoc Dang and Jahanifar, Mostafa and Deshpande, Srijay and Hadjigeorghiou, Giorgos and Shephard, Adam and Bashir, Raja Muhammad Saad and Bilal, Mohsin and Lu, Wenqi and others},\n title = {TIAToolbox as an end-to-end library for advanced tissue image analytics},\n year = {2022},\n journal = {Communications medicine},\n volume = {2},\n number = {1},\n pages = {120},\n publisher = {Nature Publishing Group UK London},\n}", "id": "pocock2022tiatoolbox", "url": "https://www.nature.com/articles/s43856-022-00186-5" } ]
tiatoolbox_wsi_thumbnailer
{ "branch": null, "commit": "7ba7394", "env": [], "info": "tiatoolbox from https://github.com/TissueImageAnalytics/tiatoolbox (at commit: 7ba7394)", "name": "tiatoolbox", "url": "https://github.com/TissueImageAnalytics/tiatoolbox" }
cpu
[ "pocock2022tiatoolbox" ]
pathology
Generate a PNG thumbnail for every whole-slide image (WSI) in `input_dir` using TIAToolbox and save them to `output_dir` with the suffix “_thumbnail.png”.
[ { "description": "Path to the folder that contains the WSIs", "name": "input_dir", "type": "str" }, { "description": "Path to the folder where thumbnails are written", "name": "output_dir", "type": "str" }, { "description": "Requested magnification / physical resolution", "name": "resolution", "type": "float" }, { "description": "Units for resolution (\"power\", \"mpp\", \"level\", \"baseline\")", "name": "units", "type": "str" } ]
[ { "description": "Number of thumbnails created", "name": "num_thumbnails", "type": "int" } ]
{ "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "output_dir", "value": "\"/mount/output/wsis_thumbs\"" }, { "name": "resolution", "value": "1.25" }, { "name": "units", "value": "\"power\"" } ], "mount": [ { "source": "wsis", "target": "wsis" } ], "name": "example" }
[ { "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "output_dir", "value": "\"/mount/output/wsis_thumbs\"" }, { "name": "resolution", "value": "0.625" }, { "name": "units", "value": "\"power\"" } ], "mount": [ { "source": "wsis/TCGA-DT-5265-01Z-00-DX1.563f09af-8bbe-45cd-9c6d-85a96255e67f.svs", "target": "wsis/TCGA-DT-5265-01Z-00-DX1.563f09af-8bbe-45cd-9c6d-85a96255e67f.svs" } ], "name": "single_wsi_low_power" }, { "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "output_dir", "value": "\"/mount/output/wsis_thumbs\"" }, { "name": "resolution", "value": "2.0" }, { "name": "units", "value": "\"mpp\"" } ], "mount": [ { "source": "wsis/TCGA-AG-A011-01Z-00-DX1.155A4093-5EC6-4D38-8CE1-24C045DF0CD8.svs", "target": "wsis/TCGA-AG-A011-01Z-00-DX1.155A4093-5EC6-4D38-8CE1-24C045DF0CD8.svs" }, { "source": "wsis/TCGA-EI-6881-01Z-00-DX1.5cfa2929-4374-4166-b110-39ab7d3de7cd.svs", "target": "wsis/TCGA-EI-6881-01Z-00-DX1.5cfa2929-4374-4166-b110-39ab7d3de7cd.svs" } ], "name": "two_wsi_at_2mpp" }, { "arguments": [ { "name": "input_dir", "value": "\"/mount/input/wsis\"" }, { "name": "output_dir", "value": "\"/mount/output/wsis_thumbs\"" }, { "name": "resolution", "value": "1.25" }, { "name": "units", "value": "\"power\"" } ], "mount": [ { "source": "wsis", "target": "wsis" } ], "name": "full_dir_1p25_power" } ]
null
def tiatoolbox_wsi_thumbnailer(input_dir: str = '/mount/input/wsis', output_dir: str = '/mount/output/wsis_thumbs', resolution: float = 1.25, units: str = 'power') -> dict: """ Generate a PNG thumbnail for every whole-slide image (WSI) in `input_dir` using TIAToolbox and save them to `output_dir` with the suffix “_thumbnail.png”. Args: input_dir: Path to the folder that contains the WSIs output_dir: Path to the folder where thumbnails are written resolution: Requested magnification / physical resolution units: Units for resolution ("power", "mpp", "level", "baseline") Returns: dict with the following structure: { 'num_thumbnails': int # Number of thumbnails created } """
<description> Generate a PNG thumbnail for every whole-slide image (WSI) in `input_dir` using TIAToolbox and save them to `output_dir` with the suffix “_thumbnail.png”. </description> <arguments> input_dir (str): Path to the folder that contains the WSIs (example: '/mount/input/wsis') output_dir (str): Path to the folder where thumbnails are written (example: '/mount/output/wsis_thumbs') resolution (float): Requested magnification / physical resolution (example: 1.25) units (str): Units for resolution ("power", "mpp", "level", "baseline") (example: 'power') </arguments> <returns> dict with the following structure: { 'num_thumbnails': int # Number of thumbnails created } </returns>
[ { "bibtex": "@article{pocock2022tiatoolbox,\n author = {Pocock, Johnathan and Graham, Simon and Vu, Quoc Dang and Jahanifar, Mostafa and Deshpande, Srijay and Hadjigeorghiou, Giorgos and Shephard, Adam and Bashir, Raja Muhammad Saad and Bilal, Mohsin and Lu, Wenqi and others},\n title = {TIAToolbox as an end-to-end library for advanced tissue image analytics},\n year = {2022},\n journal = {Communications medicine},\n volume = {2},\n number = {1},\n pages = {120},\n publisher = {Nature Publishing Group UK London},\n}", "id": "pocock2022tiatoolbox", "url": "https://www.nature.com/articles/s43856-022-00186-5" } ]
totalsegmentator_segment_liver
{ "branch": null, "commit": "5b1a4f0", "env": [], "info": "TotalSegmentator from https://github.com/wasserth/TotalSegmentator (at commit: 5b1a4f0)", "name": "TotalSegmentator", "url": "https://github.com/wasserth/TotalSegmentator" }
cuda
[ "wasserthal2023totalsegmentator" ]
radiology
Segment the liver vessels from the input CT scan and save the result as a .nii.gz file. In the output segmentation, liver vessel voxels must have the value 1, and all other voxels must be 0.
[ { "description": "Path to the input image in .nii.gz format", "name": "ct_input_path", "type": "str" }, { "description": "Path to the output file (liver vessels segmentation mask) in .nii.gz format", "name": "segmentation_mask_output_path", "type": "str" } ]
[]
{ "arguments": [ { "name": "ct_input_path", "value": "\"/mount/input/CRLM-CT-1040_0000.nii.gz\"" }, { "name": "segmentation_mask_output_path", "value": "\"/mount/output/CRLM-CT-1040_0000_seg_mask.nii.gz\"" } ], "mount": [ { "source": "CRLM-CT-1040_0000.nii.gz", "target": "CRLM-CT-1040_0000.nii.gz" } ], "name": "example" }
[ { "arguments": [ { "name": "ct_input_path", "value": "\"/mount/input/CRLM-CT-1085_0000.nii.gz\"" }, { "name": "segmentation_mask_output_path", "value": "\"/mount/output/CRLM-CT-1085_0000_seg_mask.nii.gz\"" } ], "mount": [ { "source": "CRLM-CT-1085_0000.nii.gz", "target": "CRLM-CT-1085_0000.nii.gz" } ], "name": "CRLM-CT-1085" }, { "arguments": [ { "name": "ct_input_path", "value": "\"/mount/input/CRLM-CT-1161_0000.nii.gz\"" }, { "name": "segmentation_mask_output_path", "value": "\"/mount/output/CRLM-CT-1161_0000_seg_mask.nii.gz\"" } ], "mount": [ { "source": "CRLM-CT-1161_0000.nii.gz", "target": "CRLM-CT-1161_0000.nii.gz" } ], "name": "CRLM-CT-1161" }, { "arguments": [ { "name": "ct_input_path", "value": "\"/mount/input/CRLM-CT-1094_0000.nii.gz\"" }, { "name": "segmentation_mask_output_path", "value": "\"/mount/output/CRLM-CT-1094_0000_seg_mask.nii.gz\"" } ], "mount": [ { "source": "CRLM-CT-1094_0000.nii.gz", "target": "CRLM-CT-1094_0000.nii.gz" } ], "name": "CRLM-CT-1094" } ]
null
def totalsegmentator_segment_liver(ct_input_path: str = '/mount/input/CRLM-CT-1040_0000.nii.gz', segmentation_mask_output_path: str = '/mount/output/CRLM-CT-1040_0000_seg_mask.nii.gz') -> dict: """ Segment the liver vessels from the input CT scan and save the result as a .nii.gz file. In the output segmentation, liver vessel voxels must have the value 1, and all other voxels must be 0. Args: ct_input_path: Path to the input image in .nii.gz format segmentation_mask_output_path: Path to the output file (liver vessels segmentation mask) in .nii.gz format Returns: empty dict """
<description> Segment the liver vessels from the input CT scan and save the result as a .nii.gz file. In the output segmentation, liver vessel voxels must have the value 1, and all other voxels must be 0. </description> <arguments> ct_input_path (str): Path to the input image in .nii.gz format (example: '/mount/input/CRLM-CT-1040_0000.nii.gz') segmentation_mask_output_path (str): Path to the output file (liver vessels segmentation mask) in .nii.gz format (example: '/mount/output/CRLM-CT-1040_0000_seg_mask.nii.gz') </arguments> <returns> empty dict </returns>
[ { "bibtex": "@article{wasserthal2023totalsegmentator,\n author = {Wasserthal, Jakob and Breit, Hanns-Christian and Meyer, Manfred T and Pradella, Maurice and Hinck, Daniel and Sauter, Alexander W and Heye, Tobias and Boll, Daniel T and Cyriac, Joshy and Yang, Shan and others},\n title = {TotalSegmentator: robust segmentation of 104 anatomic structures in CT images},\n year = {2023},\n journal = {Radiology: Artificial Intelligence},\n volume = {5},\n number = {5},\n pages = {e230024},\n publisher = {Radiological Society of North America},\n}", "id": "wasserthal2023totalsegmentator", "url": "https://pubs.rsna.org/doi/10.1148/ryai.230024" } ]
uni_extract_features
{ "branch": null, "commit": "42715ef", "env": [ { "name": "HF_TOKEN", "value": "${env:HF_TOKEN}" } ], "info": "UNI from https://github.com/mahmoodlab/UNI (at commit: 42715ef)", "name": "UNI", "url": "https://github.com/mahmoodlab/UNI" }
cuda
[ "chen2024uni" ]
pathology
Perform feature extraction on an input image using the "UNI" model.
[ { "description": "Path to the input image", "name": "input_image", "type": "str" } ]
[ { "description": "The feature vector extracted from the input image, as a list of floats", "name": "features", "type": "list" } ]
{ "arguments": [ { "name": "input_image", "value": "\"/mount/input/TUM/TUM-TCGA-ACRLPPQE.tif\"" } ], "mount": [ { "source": "TUM-TCGA-ACRLPPQE.tif", "target": "TUM/TUM-TCGA-ACRLPPQE.tif" } ], "name": "example" }
[ { "arguments": [ { "name": "input_image", "value": "\"/mount/input/MUC/MUC-TCGA-ACCPKIPN.tif\"" } ], "mount": [ { "source": "MUC-TCGA-ACCPKIPN.tif", "target": "MUC/MUC-TCGA-ACCPKIPN.tif" } ], "name": "kather100k_muc" }, { "arguments": [ { "name": "input_image", "value": "\"/mount/input/TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png", "target": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.png" } ], "name": "tcga_brca_patch_png" }, { "arguments": [ { "name": "input_image", "value": "\"/mount/input/TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg\"" } ], "mount": [ { "source": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg", "target": "TCGA-BRCA_patch_TCGA-BH-A0DE-01Z-00-DX1.64A0340A-8146-48E8-AAF7-4035988B9152.jpg" } ], "name": "tcga_brca_patch_jpg" } ]
null
def uni_extract_features(input_image: str = '/mount/input/TUM/TUM-TCGA-ACRLPPQE.tif') -> dict: """ Perform feature extraction on an input image using the "UNI" model. Args: input_image: Path to the input image Returns: dict with the following structure: { 'features': list # The feature vector extracted from the input image, as a list of floats } """
<description> Perform feature extraction on an input image using the "UNI" model. </description> <arguments> input_image (str): Path to the input image (example: '/mount/input/TUM/TUM-TCGA-ACRLPPQE.tif') </arguments> <returns> dict with the following structure: { 'features': list # The feature vector extracted from the input image, as a list of floats } </returns>
[ { "bibtex": "@article{chen2024uni,\n author = {Chen, Richard J. and Ding, Tong and Lu, Ming Y. and Williamson, Drew F. K. and Jaume, Guillaume and Song, Andrew H. and Chen, Bowen and Zhang, Andrew and Shao, Daniel and Shaban, Muhammad and Williams, Mane and Oldenburg, Lukas and Weishaupt, Luca L. and Wang, Judy J. and Vaidya, Anurag and Le, Long Phi and Gerber, Georg and Sahai, Sharifa and Williams, Walt and Mahmood, Faisal},\n title = {Towards a general-purpose foundation model for computational pathology},\n year = {2024},\n journal = {Nature Medicine},\n volume = {30},\n number = {3},\n pages = {850--862},\n publisher = {Springer Science and Business Media LLC},\n}", "id": "chen2024uni", "url": "https://www.nature.com/articles/s41591-024-02857-3" } ]