---
language: en
tags:
- vision
- image-classification
- medical-imaging
- tumor-classification
license: apache-2.0
base_model: google/vit-base-patch16-224
model-index:
- name: vit_tumor_classifier
  results:
  - task:
      name: Image Classification
      type: image-classification
    metrics:
      - name: Accuracy
        type: accuracy
        value: 0.85  # Replace with your actual accuracy
      - name: F1 Score
        type: f1
        value: 0.84  # Replace with your actual F1 score
---

# Vision Transformer for Tumor Classification

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) for binary tumor classification in medical images.

## Model Details

- **Model Type:** Vision Transformer (ViT)
- **Base Model:** google/vit-base-patch16-224
- **Task:** Binary Image Classification
- **Training Data:** Medical image dataset with tumor/non-tumor annotations
- **Input:** Medical images (224x224 pixels)
- **Output:** Binary classification (tumor/non-tumor)
- **Model Size:** 85.8M parameters
- **Framework:** PyTorch
- **License:** Apache 2.0
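
As a quick sanity check, the input size, label count, and parameter count listed above can be read off the loaded model. This is a minimal sketch assuming the `SIATCN/vit_tumor_classifier` checkpoint name used in the Usage section below:

```python
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("SIATCN/vit_tumor_classifier")

# Expected: image_size=224 and num_labels=2 for the binary tumor/non-tumor head
print(model.config.image_size, model.config.num_labels)

# Roughly 85.8M parameters for ViT-Base plus the classification head
print(sum(p.numel() for p in model.parameters()))
```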

## Intended Use

This model is designed for tumor classification in medical imaging. It should be used as part of a larger medical diagnostic system and not as a standalone diagnostic tool.

## Usage

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load model and processor
processor = AutoImageProcessor.from_pretrained("SIATCN/vit_tumor_classifier")
model = AutoModelForImageClassification.from_pretrained("SIATCN/vit_tumor_classifier")

# Load and preprocess the image (convert to RGB in case the source is grayscale)
image = Image.open("path_to_your_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

predictions = outputs.logits.softmax(dim=-1)
predicted_label = predictions.argmax(dim=-1).item()
confidence = predictions[0][predicted_label].item()

# Map the predicted index to a class name
class_names = ["non-tumor", "tumor"]
print(f"Predicted: {class_names[predicted_label]} (confidence: {confidence:.2f})")
```