---
language:
- en
tags:
- vision-transformer
- dinov2 # Or another base architecture if more appropriate
- neuropathology
- image-classification # Or image-segmentation, object-detection, etc.
- university-of-kentucky
# - add-other-relevant-tags-here
license: "apache-2.0" # IMPORTANT: Replace with your actual chosen license ID (e.g., mit, cc-by-nc-4.0). Must be a valid SPDX license identifier or 'other'.
datasets:
- "uky-neuropathology-placeholder" # IMPORTANT: Replace with an actual dataset identifier if available on the Hub, or a descriptive name for your dataset. Cannot be empty.
# pipeline_tag: "image-classification" # Uncomment and set if applicable (e.g., image-classification, image-segmentation)
base_model: "facebook/dinov2-giant" # IMPORTANT: Replace with the actual Hugging Face Hub model ID of the base model if this is a fine-tune (e.g., google/vit-base-patch16-224-in21k). If not fine-tuned from a Hub model, REMOVE this entire 'base_model' line. It cannot be empty if present.
# metrics: # Uncomment and fill if you have structured evaluation results
# - accuracy
# - f1
# - roc_auc
# model-index: # For detailed, structured evaluation results (see Hugging Face docs)
# - name: "[Your Model Name]"
#   results:
#   - task:
#       type: "image-classification" # e.g., image-classification
#     dataset:
#       name: "UKy Neuropathology Test Set Placeholder" # e.g., UKy Neuropathology Test Set
#       type: "private" # e.g., private-institutional-dataset, or a Hub dataset identifier
#     metrics:
#     - name: "Accuracy"
#       type: "accuracy"
#       value: 0.0 # e.g., 0.925
#     - name: "F1-score"
#       type: "f1"
#       value: 0.0 # e.g., 0.924
#     source:
#       name: "Internal Evaluation Report Placeholder" # e.g., Internal Evaluation Report or Link to Paper
#       url: "" # Link if available
co2_eq_emissions: # This is the standard field name
  emissions: 1.0 # IMPORTANT: Replace with your estimated CO2 emissions in kg. This is a placeholder value.
  source: "Estimated" # IMPORTANT: Replace with how you got this value (e.g., "ML CO2 Impact tool", "CodeCarbon", "Estimated")
  # training_type: "fine-tuning" # Optional: e.g., pretraining, fine-tuning
  # geographical_location: "Lexington, KY, USA" # Optional
  # hardware_used: "NVIDIA A100" # Optional
# thumbnail: "url-to-your-thumbnail-image.jpg" # Optional: URL to a thumbnail image for the model card
---

# Model Card for Neuropathology Vision Transformer

This model is a Vision Transformer adapted for neuropathology tasks, developed using data from the University of Kentucky. It leverages principles from self-supervised learning models like DINOv2.

## Model Details

* **Model Type:** Vision Transformer (ViT) for neuropathology.
* **Developed by:** Center for Applied Artificial Intelligence
* **Model Date:** 05/05/2025
* **Base Model Architecture:** DINOv2-Giant (ViT-g/14)
* **Input:** Image (224x224).
* **Embedding Dimension:** 1536
* **Patch Size:** 14
* **Image Size Compatibility:**
    * The model was trained on images/patches of size 224x224.
    * The model can accept larger images provided the image dimensions are multiples of the patch size. If not, cropping to the closest smaller multiple may occur (see the sketch after this list).
* **License:** [PLACEHOLDER: Reiterate license chosen in YAML, e.g., Apache 2.0. Add link to full license if custom or 'other'.]
* **Repository:** [PLACEHOLDER: Link to your model repository (e.g., GitHub, Hugging Face Hub)]
* **Paper(s)/Reference(s):**
    * [PLACEHOLDER: Link to your paper if applicable]
    * [Optional: Link to relevant University of Kentucky data descriptor or study paper]
    * Oquab et al., "DINOv2: Learning Robust Visual Features without Supervision" (https://arxiv.org/abs/2304.07193)
    * Darcet et al., "Vision Transformers Need Registers" (https://arxiv.org/abs/2309.16588) (if registers are used)
* **Demo:** [PLACEHOLDER: Link to your demo, if any]
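The image-size note above can be made concrete with a small helper. The sketch below is illustrative only: the 14-pixel patch size and the crop-to-a-smaller-multiple behavior come from the details above, while the helper name `crop_to_patch_multiple` is ours.

```python
from PIL import Image

PATCH_SIZE = 14  # patch size of the ViT-g/14 backbone described above


def crop_to_patch_multiple(image: Image.Image, patch_size: int = PATCH_SIZE) -> Image.Image:
    """Center-crop an image so that height and width are multiples of the patch size."""
    width, height = image.size
    new_width = (width // patch_size) * patch_size
    new_height = (height // patch_size) * patch_size
    left = (width - new_width) // 2
    top = (height - new_height) // 2
    return image.crop((left, top, left + new_width, top + new_height))


# Example: a 500x375 tile is cropped to 490x364, i.e. a 35x26 grid of patch tokens
# (plus the [CLS] token, which the embedding examples below read out).
```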
## Intended Uses

This model is intended for research purposes in the field of neuropathology.

* **Primary Intended Uses:**
    * [PLACEHOLDER: e.g., Automated detection of specific neuropathological features (e.g., amyloid plaques, neurofibrillary tangles, Lewy bodies) in digitized histopathological slides.]
    * [PLACEHOLDER: e.g., Classification of tissue samples based on the presence/severity of neuropathological changes.]
    * [PLACEHOLDER: e.g., Feature extraction for quantitative analysis of neuropathology.]
    * [PLACEHOLDER: e.g., A research tool to explore correlations between image features and disease states/progression.]
* **Primary Intended Users:**
    * [PLACEHOLDER: e.g., Neuropathology researchers]
    * [PLACEHOLDER: e.g., Computational pathology scientists]
    * [PLACEHOLDER: e.g., AI developers working on medical imaging solutions for neurodegenerative diseases]
* **Out-of-Scope Uses:**
    * [PLACEHOLDER: e.g., Direct clinical diagnosis or patient management decisions without expert human neuropathologist review and confirmation.]
    * [PLACEHOLDER: e.g., Use on staining methods, tissue types, or species significantly different from the training data without thorough validation.]
    * [PLACEHOLDER: e.g., Any application with legal or primary diagnostic implications without regulatory clearance.]

## How to Get Started with the Model

The model extracts embeddings from pathology images in three ways: with an image processor for standardized preprocessing, without explicit resizing to preserve the original image dimensions, or with forced 224×224 resizing for consistent inputs. All three normalize the input; choose the approach that best matches your data and research workflow.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoImageProcessor
from torchvision import transforms


def get_embeddings_with_processor(image_path, model_path, processor_path):
    """
    Extract embeddings using a HuggingFace image processor.
    This approach handles normalization and resizing automatically.

    Args:
        image_path: Path to the image file
        model_path: Path to the model directory
        processor_path: Path to the processor config directory

    Returns:
        Image embeddings from the model
    """
    # Load model
    model = AutoModel.from_pretrained(model_path)
    model.eval()

    # Load processor from config
    image_processor = AutoImageProcessor.from_pretrained(processor_path)

    # Process the image
    with torch.no_grad():
        image = Image.open(image_path).convert('RGB')
        inputs = image_processor(images=image, return_tensors="pt")
        outputs = model(**inputs)
        embeddings = outputs.last_hidden_state[:, 0, :]

    return embeddings


def get_embeddings_direct(image_path, model_path, mean=[0.485, 0.456, 0.406],
                          std=[0.229, 0.224, 0.225]):
    """
    Extract embeddings directly without an image processor.
    This approach preserves the original image resolution; as noted in the
    model details, image dimensions should be multiples of the patch size (14),
    otherwise the model may crop to the closest smaller multiple.

    Args:
        image_path: Path to the image file
        model_path: Path to the model directory
        mean: Normalization mean values
        std: Normalization standard deviation values

    Returns:
        Image embeddings from the model
    """
    # Load model
    model = AutoModel.from_pretrained(model_path)
    model.eval()

    # Define transformation - just converting to tensor and normalizing
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=mean, std=std)
    ])

    # Process the image
    with torch.no_grad():
        # Open image and convert to RGB
        image = Image.open(image_path).convert('RGB')

        # Convert image to tensor
        image_tensor = transform(image).unsqueeze(0)  # Add batch dimension

        # Feed to model
        outputs = model(pixel_values=image_tensor)

        # Get embeddings
        embeddings = outputs.last_hidden_state[:, 0, :]

    return embeddings


def get_embeddings_resized(image_path, model_path, size=(224, 224),
                           mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]):
    """
    Extract embeddings with explicit resizing to 224x224.
    This approach ensures a consistent input size regardless of the original
    image dimensions.

    Args:
        image_path: Path to the image file
        model_path: Path to the model directory
        size: Target size for resizing (default: 224x224)
        mean: Normalization mean values
        std: Normalization standard deviation values

    Returns:
        Image embeddings from the model
    """
    # Load model
    model = AutoModel.from_pretrained(model_path)
    model.eval()

    # Define transformation with explicit resize
    transform = transforms.Compose([
        transforms.Resize(size, interpolation=transforms.InterpolationMode.BICUBIC),
        transforms.ToTensor(),
        transforms.Normalize(mean=mean, std=std)
    ])

    # Process the image
    with torch.no_grad():
        image = Image.open(image_path).convert('RGB')
        image_tensor = transform(image).unsqueeze(0)  # Add batch dimension
        outputs = model(pixel_values=image_tensor)
        embeddings = outputs.last_hidden_state[:, 0, :]

    return embeddings


# Example usage
if __name__ == "__main__":
    image_path = "test.jpg"
    model_path = "outputs/training_test_3/teacher_checkpoints/iter_40"
    processor_path = "processor_config.json"  # Path to the preprocessor config (directory or JSON file)

    # Method 1: Using image processor (recommended for consistency)
    embeddings1 = get_embeddings_with_processor(image_path, model_path, processor_path)
    print('Embedding shape (with processor):', embeddings1.shape)

    # Method 2: Direct approach without resizing (works with various resolutions)
    embeddings2 = get_embeddings_direct(image_path, model_path)
    print('Embedding shape (direct):', embeddings2.shape)

    # Method 3: With explicit resize to 224x224
    embeddings3 = get_embeddings_resized(image_path, model_path)
    print('Embedding shape (resized):', embeddings3.shape)
```
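As a quick follow-up to the example above, the sketch below shows one common use of the extracted embeddings: comparing two tiles by cosine similarity. It reuses the helper functions and placeholder checkpoint path from the block above; the tile file names are hypothetical.

```python
import torch.nn.functional as F

model_path = "outputs/training_test_3/teacher_checkpoints/iter_40"  # same placeholder checkpoint as above

# Hypothetical tile paths; each call returns a [1, 1536] CLS embedding.
emb_a = get_embeddings_direct("tile_a.jpg", model_path)
emb_b = get_embeddings_direct("tile_b.jpg", model_path)

# Cosine similarity between the two tile embeddings.
similarity = F.cosine_similarity(emb_a, emb_b).item()
print(f"Cosine similarity between tiles: {similarity:.3f}")
```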
## Training Data

* **Dataset(s):** The model was trained on data from the University of Kentucky.
    * **Name/Identifier:** [PLACEHOLDER: Specify the formal name or internal identifier of the dataset, e.g., "UKy Alzheimer's Disease Center Neuropathology Whole Slide Image Cohort v1.0"]
    * **Source:** University of Kentucky, [PLACEHOLDER: Specific Department, Center, or PI, e.g., Sanders-Brown Center on Aging, Department of Pathology]
    * **Description:** [PLACEHOLDER: Describe the data. E.g., "Digitized whole slide images (WSIs) of human post-mortem brain tissue sections from [number] subjects. Sections were stained with [e.g., Hematoxylin and Eosin (H&E), and immunohistochemistry for Amyloid-beta (Aβ) and phosphorylated Tau (pTau)]. Images were acquired using [e.g., Aperio AT2 scanner at 20x magnification]."]
    * **Preprocessing:** [PLACEHOLDER: Describe significant preprocessing steps. E.g., "WSIs were tiled into non-overlapping [e.g., 224x224 pixel] patches. Tiles with excessive background or artifacts were excluded. Color normalization using [Method, e.g., Macenko method] was applied."]
    * **Annotation (if applicable for supervised fine-tuning or evaluation):** [PLACEHOLDER: Describe the annotation process. E.g., "Regions of interest (ROIs) for [pathologies] were annotated by board-certified neuropathologists. For classification tasks, slide-level or region-level labels for [disease/pathology presence/severity] were provided."]
* **Data Collection and Bias:**
    * **Demographics & Characteristics:** [PLACEHOLDER: Describe characteristics of the subjects providing data – e.g., age range, sex distribution, ethnicity distribution (if available and ethically appropriate to share), primary diagnoses, disease stages. Note any significant imbalances or selection criteria. E.g., "Data primarily from individuals over 65 years of age, with a representation of [X% female, Y% male]. The cohort includes cases spanning a spectrum of Alzheimer's Disease neuropathologic change (ADNC)."]
    * **Known Biases in Data:** [PLACEHOLDER: Address any known or potential biases in the dataset. E.g., "The dataset is derived from a single academic medical center (University of Kentucky), potentially limiting geographic and scanner-type diversity.", "Underrepresentation of certain comorbid conditions or early disease stages.", "Potential for selection bias based on consent or case availability."]

## Training Procedure

* **Training System/Framework:** [PLACEHOLDER: e.g., "PyTorch", "Hugging Face Transformers library". If custom or specific framework features were essential, mention them, e.g., "Custom training loop implementing DINOv2 self-distillation loss and iBOT masked image modeling."]
* **Base Model (if fine-tuning):** [PLACEHOLDER: e.g., "Pretrained `facebook/dinov2-vitb14` loaded from Hugging Face Hub."]
* **Training Objective(s):** [PLACEHOLDER: Describe the loss functions and training paradigm. E.g., "Self-supervised learning using DINO loss, iBOT masked-image modeling loss, and KoLeo regularization on [CLS] tokens.", or for fine-tuning: "Fine-tuned for [specific task, e.g., multi-class classification of neuropathological features] using a cross-entropy loss function."]
* **Key Hyperparameters (example):**
    * Batch size: [PLACEHOLDER]
    * Learning rate: [PLACEHOLDER] (and schedule if any)
    * Epochs/Iterations: [PLACEHOLDER]
    * Optimizer: [PLACEHOLDER: e.g., AdamW]
    * Weight decay: [PLACEHOLDER]
    * [Optional: Other important parameters like temperature for DINO, mask ratio for iBOT]
* **Data Augmentation:** [PLACEHOLDER: List specific augmentations used. E.g., "Standard augmentations including random cropping, horizontal/vertical flipping, rotations. Color augmentations such as random brightness, contrast, and HED color jitter specifically for histopathology images. [Optional: Stain augmentation techniques if used.]"] (An illustrative pipeline is sketched after this list.)
* **Training Regime:** [PLACEHOLDER: e.g., "Trained with fp16 mixed-precision using PyTorch FSDP on [Number]x NVIDIA [Type, e.g., A100] GPUs."]
* [Optional: Parameter-Efficient Fine-Tuning (PEFT): If used, describe e.g., "LoRA was applied to attention and feed-forward network layers with a rank of [r]."]
* [Optional: Layer Freezing: If used, e.g., "The first N layers of the pretrained backbone were frozen during fine-tuning."]
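The Data Augmentation item above only lists example transforms; the sketch below is purely illustrative (the real recipe, probabilities, and parameters are placeholders to be filled in) and shows how such a pipeline could be written with torchvision. A generic `ColorJitter` stands in for the histopathology-specific HED color jitter, which torchvision does not provide.

```python
from torchvision import transforms

# Illustrative only: substitute the augmentations actually used in training.
example_augmentations = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=90),
    # Stand-in for HED color jitter, which needs a histopathology-specific implementation.
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.1, hue=0.05),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```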
## Evaluation

* **Task(s):** [PLACEHOLDER: Clearly define the task(s) the model was evaluated on. E.g., "Patch-level classification of [pathology A vs. B vs. healthy]", "Detection of [specific cellular feature]", "Slide-level prediction of [disease grade]"]
* **Metrics:** [PLACEHOLDER: List the metrics used for evaluation. E.g., "For classification: Accuracy, Precision, Recall, F1-score (macro/micro/weighted), AUC-ROC, AUC-PR. For detection: mean Average Precision (mAP) at [IoU threshold(s)]."]
* **Evaluation Data:**
    * **Dataset(s):** [PLACEHOLDER: Describe the dataset(s) used for evaluation. E.g., "A held-out test set from the University of Kentucky dataset, comprising [N] images/slides from [M] subjects, ensuring no overlap with the training set.", "Optional: An external validation dataset from [Source Y] consisting of [details]."]
    * **Demographics and Characteristics:** [PLACEHOLDER: Describe the evaluation set similarly to the training data, highlighting any differences.]
* **Results:** [PLACEHOLDER: Present key quantitative results. Tables are good for multiple metrics/classes. Include confidence intervals or standard deviations if available. E.g., "The model achieved an accuracy of X% and an F1-score of Y for classifying [pathology Z] on the internal UKy test set. On the external validation set [Dataset Name], it achieved an accuracy of A%."]

## Bias, Risks, and Limitations

* **Model Biases:**
    * [PLACEHOLDER: Reflect on potential biases. E.g., "Performance may be unequal across different demographic groups if these were imbalanced in the UKy training data and these characteristics correlate with image features.", "Model may exhibit bias towards features prevalent in the specific scanner or staining protocols used at the University of Kentucky.", "Bias may arise from class imbalance in the training data, leading to better performance on majority classes."]
* **Risks:**
    * [PLACEHOLDER: Identify potential risks. E.g., "Over-reliance on model predictions in a research setting without thorough critical assessment by domain experts could lead to erroneous scientific conclusions.", "Risk of algorithmic bias perpetuating or amplifying existing disparities if the model is naively applied to populations or data sources different from the training set without careful validation.", "Misinterpretation of model outputs as definitive diagnostic statements (model is for research/assistive use)."]
* **Limitations:**
    * [PLACEHOLDER: State known limitations. E.g., "The model was trained primarily on [specific stains/markers, e.g., H&E, Aβ, pTau] and its performance on other stains is not guaranteed.", "Generalization to images from different institutions, scanners, or significantly different tissue preparation protocols may be limited without further fine-tuning or validation.", "Performance on very rare neuropathological features or subtle morphological changes may be suboptimal due to limited representation in the training data.", "The model requires high-quality input images; performance may degrade with significant artifacts (e.g., blur, tissue folds, pen marks)."]
* **Recommendations:**
    * Users should critically evaluate model outputs, especially in novel contexts or with data from different sources.
    * Extensive validation is recommended before use on datasets with different characteristics than the training data.
    * [PLACEHOLDER: Add any other specific recommendations for users.]

## Ethical Considerations

* **Data Usage:**
    * [PLACEHOLDER: E.g., "The data from the University of Kentucky used for training and evaluating this model was collected and utilized under Institutional Review Board (IRB) protocol #[XYZ] at the University of Kentucky.", "All data was de-identified prior to its use in this research in accordance with IRB-approved procedures and applicable privacy regulations (e.g., HIPAA)."]
* **Patient Privacy:**
    * [PLACEHOLDER: E.g., "Measures were taken to ensure de-identification of patient data. The model outputs do not contain personally identifiable information."]
* **Intended Use Context:**
    * This model is intended for research purposes to augment the capabilities of neuropathology researchers. It is not a medical device and should not be used for direct clinical decision-making, diagnosis, or treatment planning without comprehensive validation, regulatory approval (if applicable), and oversight by qualified medical professionals.
* **Fairness and Bias Mitigation:**
    * [PLACEHOLDER: Describe any steps taken during development to assess or mitigate bias, or plans for future work in this area. E.g., "Ongoing work includes evaluating model performance across different demographic subgroups represented in the University of Kentucky dataset to identify and address potential disparities."]

## Environmental Impact

* **Hardware Type:** [PLACEHOLDER: e.g., NVIDIA A100 80GB, NVIDIA V100 32GB, or specific University of Kentucky HPC node types]
* **Hours Used:** [PLACEHOLDER: Estimate total GPU/TPU hours for training/fine-tuning, e.g., "Approximately X GPU hours"]
* **Cloud Provider:** [PLACEHOLDER: e.g., University of Kentucky Lipscomb Compute Cluster, AWS, GCP, Azure, Private Infrastructure]
* **Compute Region:** [PLACEHOLDER: e.g., Lexington, KY (for UKy HPC); us-east-1 (if cloud); Not Applicable (if local HPC)]
* **Carbon Emitted (CO2eq):** [PLACEHOLDER: e.g., "X kg". Estimate if possible using tools like CodeCarbon or ML CO2 Impact (see the sketch after this list). If not measured, state "Not quantitatively measured." Consider adding: "We encourage users to be mindful of the computational cost of using and retraining deep learning models."]
* **Software:** [PLACEHOLDER: e.g., PyTorch X.Y, Transformers Z.A, CUDA B.C]
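If emissions were not measured during training, future runs can log an estimate directly. The sketch below uses the CodeCarbon package mentioned above; the project name and the `run_training()` call are placeholders for the actual training entry point.

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="uky-neuropathology-vit")  # placeholder project name
tracker.start()
try:
    run_training()  # placeholder: call the actual training/fine-tuning entry point here
finally:
    emissions_kg = tracker.stop()  # estimated emissions for the tracked run, in kg CO2eq
    print(f"Estimated emissions: {emissions_kg:.3f} kg CO2eq")
```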
## Citation / BibTeX

[PLACEHOLDER: If your model is described in a publication, provide its BibTeX entry here.]

```bibtex
@misc{yourlastname_year_modelname,
  author    = {[PLACEHOLDER: Your Name/Group Name, e.g., Doe, John and The University of Kentucky Neuropathology AI Group]},
  title     = {[PLACEHOLDER: Neuropathology Vision Transformer (University of Kentucky Data)]},
  year      = {[PLACEHOLDER: YYYY]},
  publisher = {[PLACEHOLDER: e.g., Hugging Face or arXiv if pre-print, or Journal Name if published]},
  url       = {[PLACEHOLDER: Link to model Hub page or paper]}
}
```

[Optional: Add BibTeX for the DINOv2 and Vision Transformers Need Registers papers if they are core to your methodology.]

```bibtex
@misc{oquab2023dinov2,
  title={DINOv2: Learning Robust Visual Features without Supervision},
  author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patr