---
language:
- en
tags:
- vision-transformer
- dinov2
- neuropathology
- image-classification
- university-of-kentucky
license: "apache-2.0"
datasets:
- "uky-neuropathology-placeholder" # PLACEHOLDER: replace with a Hub dataset identifier, or a descriptive name for the dataset. Cannot be empty.
# pipeline_tag: "image-classification" # Uncomment and set if applicable (e.g., image-classification, image-segmentation).
base_model: "facebook/dinov2-giant"
# metrics / model-index: structured evaluation results can be added here once finalized; see the Evaluation section below for the current numbers.
co2_eq_emissions:
  emissions: 1.0 # PLACEHOLDER value; the Hub expects the estimate in grams of CO2-eq.
  source: "Estimated" # Replace with how the value was obtained (e.g., "ML CO2 Impact tool", "CodeCarbon").
  # training_type: "fine-tuning" # Optional
  # geographical_location: "Lexington, KY, USA" # Optional
  # hardware_used: "NVIDIA A100" # Optional
---

# Model Card for Neuropathology Vision Transformer

This model is a Vision Transformer adapted for neuropathology tasks, developed using data from the University of Kentucky. It leverages principles from self-supervised learning models such as DINOv2.

## Model Details

* **Model Type:** Vision Transformer (ViT) for neuropathology.
* **Developed by:** Center for Applied Artificial Intelligence (CAAI), University of Kentucky
* **Model Date:** May 2025
* **Base Model Architecture:** [facebook/dinov2-giant](https://huggingface.co/facebook/dinov2-giant)
* **Input:** Image (224x224).
* **Output:** Class token and patch tokens (illustrated in the sketch below). These can be used for various downstream tasks (e.g., classification, segmentation, similarity search).
* **Embedding Dimension:** 1536
* **Patch Size:** 14
* **Image Size Compatibility:**
    * The model was trained on images/patches of size 224x224.
    * Other image sizes are also accepted (positional embeddings are interpolated), not just the 224x224 used in training.
* **License:** Apache 2.0
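As a concrete illustration of the output format listed above: with a patch size of 14, a 224x224 input yields a 16x16 grid of 256 patch tokens plus one class token, each of dimension 1536. The snippet below is a minimal sketch for inspecting these shapes; it assumes the model is loaded from the Hub ID used in the examples under "How to Get Started", and it feeds a random tensor in place of a real, properly normalized patch.

```python
import torch
from transformers import AutoModel

# Minimal shape check (assumes the Hub ID used in the examples below;
# a random tensor stands in for a real, normalized 224x224 RGB patch).
model = AutoModel.from_pretrained("IBI-CAAI/NP-TEST-0")
model.eval()

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)               # one 224x224 RGB image
    tokens = model(pixel_values=dummy).last_hidden_state

print(tokens.shape)                                   # expected: (1, 257, 1536) = class token + 16*16 patch tokens
cls_token = tokens[:, 0, :]                           # (1, 1536) global image embedding
patch_grid = tokens[:, 1:, :].reshape(1, 16, 16, -1)  # (1, 16, 16, 1536) spatial feature map
```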
## Intended Uses

This model is intended for research purposes in the field of neuropathology.

* **Primary Intended Uses:**
    * Classification of tissue samples based on the presence/severity of neuropathological changes (see the linear-probe sketch after the code examples below).
    * Feature extraction for quantitative analysis of neuropathology.

## How to Get Started with the Model

The following examples demonstrate three approaches to extracting embeddings from images with this model using Hugging Face `transformers`. Choose the one that matches your preprocessing requirements, and adjust as needed for your own model and task.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoImageProcessor
from torchvision import transforms


def get_embeddings_with_processor(image_path, model_path):
    """
    Extract embeddings using a Hugging Face image processor.
    This approach handles normalization and resizing automatically.

    Args:
        image_path: Path to the image file
        model_path: Path to the model directory

    Returns:
        Image embeddings from the model
    """
    # Load model
    model = AutoModel.from_pretrained(model_path)
    model.eval()

    # Load the image processor shipped with the model
    image_processor = AutoImageProcessor.from_pretrained(model_path)

    # Process the image
    with torch.no_grad():
        image = Image.open(image_path).convert('RGB')
        inputs = image_processor(images=image, return_tensors="pt")
        outputs = model(**inputs)
        embeddings = outputs.last_hidden_state[:, 0, :]

    return embeddings


def get_embeddings_direct(image_path, model_path,
                          mean=[0.83800817, 0.6516568, 0.78056043],
                          std=[0.08324149, 0.09973671, 0.07153901]):
    """
    Extract embeddings directly without an image processor.
    This approach works with various image resolutions, since the model
    interpolates its positional embeddings to match the input size.

    Args:
        image_path: Path to the image file
        model_path: Path to the model directory
        mean: Normalization mean values
        std: Normalization standard deviation values

    Returns:
        Image embeddings from the model
    """
    # Load model
    model = AutoModel.from_pretrained(model_path)
    model.eval()

    # Define transformation - just converting to tensor and normalizing
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=mean, std=std)
    ])

    # Process the image
    with torch.no_grad():
        # Open image and convert to RGB
        image = Image.open(image_path).convert('RGB')

        # Convert image to tensor and add batch dimension
        image_tensor = transform(image).unsqueeze(0)

        # Feed to model
        outputs = model(pixel_values=image_tensor)

        # Get embeddings (class token)
        embeddings = outputs.last_hidden_state[:, 0, :]

    return embeddings


def get_embeddings_resized(image_path, model_path, size=(224, 224),
                           mean=[0.485, 0.456, 0.406],
                           std=[0.229, 0.224, 0.225]):
    """
    Extract embeddings with explicit resizing to 224x224.
    This approach ensures a consistent input size regardless of the original
    image dimensions.

    Args:
        image_path: Path to the image file
        model_path: Path to the model directory
        size: Target size for resizing (default: 224x224)
        mean: Normalization mean values (defaults are the standard ImageNet statistics)
        std: Normalization standard deviation values

    Returns:
        Image embeddings from the model
    """
    # Load model
    model = AutoModel.from_pretrained(model_path)
    model.eval()

    # Define transformation with explicit resize
    transform = transforms.Compose([
        transforms.Resize(size, interpolation=transforms.InterpolationMode.BICUBIC),
        transforms.ToTensor(),
        transforms.Normalize(mean=mean, std=std)
    ])

    # Process the image
    with torch.no_grad():
        image = Image.open(image_path).convert('RGB')
        image_tensor = transform(image).unsqueeze(0)  # Add batch dimension
        outputs = model(pixel_values=image_tensor)
        embeddings = outputs.last_hidden_state[:, 0, :]

    return embeddings


# Example usage
if __name__ == "__main__":
    image_path = "test.jpg"
    model_path = "IBI-CAAI/NP-TEST-0"

    # Method 1: Using the image processor (recommended for consistency)
    embeddings1 = get_embeddings_with_processor(image_path, model_path)
    print('Embedding shape (with processor):', embeddings1.shape)

    # Method 2: Direct approach without resizing (works with various resolutions)
    embeddings2 = get_embeddings_direct(image_path, model_path)
    print('Embedding shape (direct):', embeddings2.shape)

    # Method 3: With explicit resize to 224x224
    embeddings3 = get_embeddings_resized(image_path, model_path)
    print('Embedding shape (resized):', embeddings3.shape)
```
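The class-token embeddings returned by the functions above can be fed directly into lightweight downstream models, which is how the classification use case listed under Intended Uses is typically realized. The following is a hedged sketch using scikit-learn: the `.npy` file names are hypothetical stand-ins for embeddings and labels precomputed with one of the methods above, and this is not the exact protocol behind the evaluation numbers reported below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical inputs: an (N, 1536) array of class-token embeddings and the
# matching tile labels (e.g., "Gray Matter", "White Matter", "Leptomeninges").
embeddings = np.load("tile_embeddings.npy")
labels = np.load("tile_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, stratify=labels, random_state=0
)

# Linear probe: a single linear classifier on frozen features.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print(classification_report(y_test, probe.predict(X_test)))
```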
## Training Data

* **Dataset(s):** The model was trained on data from the University of Kentucky.
    * **Name/Identifier:** UK Alzheimer's Disease Center Neuropathology Whole Slide Image Cohort [BDSA TEST v1.0]
    * **Source:** [UK-ADRC Neuropathology Lab at the University of Kentucky](https://neuropathlab.createuky.net/), [PLACEHOLDER: Specific Department, Center, or PI, e.g., Sanders-Brown Center on Aging, Department of Pathology]
    * **Description:** [PLACEHOLDER: Describe the data. E.g., "Digitized whole slide images (WSIs) of human post-mortem brain tissue sections from [number] subjects. Sections were stained with [e.g., Hematoxylin and Eosin (H&E), and immunohistochemistry for Amyloid-beta (Aβ) and phosphorylated Tau (pTau)]. Images were acquired using [e.g., Aperio AT2 scanner at 20x magnification]."]
    * **Preprocessing:** WSIs were tiled into non-overlapping 224x224-pixel patches at multiple magnification levels (40x, 10x, 2.5x, and 1.25x). For each magnification level, at most 1000 tiles per annotation label were extracted to ensure balanced representation across pathological features (a simplified sketch of this tiling step follows this list).
    * **Annotation:** Regions of interest (ROIs) for Gray Matter, White Matter, Leptomeninges, Exclude, and Superficial were annotated by board-certified neuropathologists.
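The tiling step can be sketched roughly as follows. This is a simplified illustration rather than the actual pipeline: it assumes an annotated region has already been extracted as an image at the desired magnification (real WSI handling would typically go through a library such as OpenSlide), and the 1000-tile cap per label is applied by random subsampling.

```python
import random
from PIL import Image

TILE = 224            # tile edge length in pixels
MAX_PER_LABEL = 1000  # cap on tiles kept per annotation label at each magnification

def tile_region(region_image: Image.Image, label: str, seed: int = 0):
    """Cut one labeled region into non-overlapping 224x224 tiles and keep at
    most MAX_PER_LABEL of them (random subsample when there are more)."""
    width, height = region_image.size
    tiles = [
        region_image.crop((left, top, left + TILE, top + TILE))
        for top in range(0, height - TILE + 1, TILE)
        for left in range(0, width - TILE + 1, TILE)
    ]
    random.Random(seed).shuffle(tiles)
    return [(tile, label) for tile in tiles[:MAX_PER_LABEL]]

# Hypothetical usage, one region image per annotation label:
# gray_matter_tiles = tile_region(Image.open("gray_matter_roi.png").convert("RGB"), "Gray Matter")
```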
## Training Procedure

* **Training System/Framework:** DINO-MX (Modular & Flexible Self-Supervised Training Framework)
* **Base Model (if fine-tuning):** Pretrained `facebook/dinov2-giant` loaded from the Hugging Face Hub.
* **Training Objective(s):** Self-supervised learning using the DINO loss and the iBOT masked-image-modeling loss.
* **Key Hyperparameters (example):**
    * Batch size: 32
    * Learning rate: 1.0e-4
    * Iterations: 5000
    * Optimizer: AdamW
    * Weight decay: 0.04-0.4

## Evaluation

* **Task(s):** Classification (linear probe), k-nearest-neighbor classification, clustering, robustness
* **Metrics:** Accuracy, Precision, Recall, F1
* **Dataset(s):** Neuro Path dataset
* **Results:** The model (based on the `facebook/dinov2-giant` architecture) achieved strong performance across multiple evaluation methods on the Neuro Path dataset.

**Linear Probe Performance:**
- Accuracy: 80.17%
- Precision: 79.20%
- Recall: 79.60%
- F1 Score: 77.88%

**K-Nearest Neighbors Classification:**
- Accuracy: 83.76%
- Precision: 83.34%
- Recall: 83.76%
- F1 Score: 83.40%

**Clustering Quality:**
- Silhouette Score: 0.267
- Adjusted Mutual Information: 0.473

**Robustness Score:** 0.574

**Overall Performance Score:** 0.646

## Ethical Considerations

* **Data Usage:**
    * [PLACEHOLDER: E.g., "The data from the University of Kentucky used for training and evaluating this model was collected and utilized under Institutional Review Board (IRB) protocol #[XYZ] at the University of Kentucky.", "All data was de-identified prior to its use in this research in accordance with IRB-approved procedures and applicable privacy regulations (e.g., HIPAA)."]
* **Patient Privacy:**
    * [PLACEHOLDER: E.g., "Measures were taken to ensure de-identification of patient data. The model outputs do not contain personally identifiable information."]
* **Intended Use Context:**
    * This model is intended for research purposes to augment the capabilities of neuropathology researchers. It is not a medical device and should not be used for direct clinical decision-making, diagnosis, or treatment planning without comprehensive validation, regulatory approval (if applicable), and oversight by qualified medical professionals.
* **Fairness and Bias Mitigation:**
    * [PLACEHOLDER: Describe any steps taken during development to assess or mitigate bias, or plans for future work in this area. E.g., "Ongoing work includes evaluating model performance across different demographic subgroups represented in the University of Kentucky dataset to identify and address potential disparities."]

## Contact

For any additional questions or comments, contact CAAI (`ai@uky.edu`), Mahmut Gokmen (`m.gokmen@uky.edu`), or Cody Bumgardner (`cody@uky.edu`).

## Citation / BibTeX

TBD