Venkata Pydipalli committed
Commit 22937f2 · 1 Parent(s): f7782b6

Updated README and config.json.

Files changed (2)
  1. README.md +24 -10
  2. config.json +20 -0
README.md CHANGED
@@ -3,19 +3,33 @@
  ## Overview
  This repository contains a fine-tuned version of the [CLIP ViT Base Patch32](https://huggingface.co/tanganke/clip-vit-base-patch32_pcam) model on the [PatchCamelyon (PCAM)](https://huggingface.co/datasets/1aurent/PatchCamelyon) dataset. The model is optimized for histopathological image classification.

- ## Model Details
- - **Base Model**: CLIP ViT Base Patch32
- - **Dataset**: PatchCamelyon (PCAM)
- - **Optimizer**: AdamW
- - **Loss Function**: Cross-Entropy Loss
- - **Batch Size**: 32
- - **Hardware**: Trained on GPU
+ ---
+ tags:
+ - vision
+ - clip
+ - fine-tuned
+ - PatchCamelyon
+ - medical-imaging
+ license: apache-2.0
+ library_name: transformers
+ model_type: clip_vision_model
+ datasets:
+ - 1aurent/PatchCamelyon
+ ---

- ## Training Performance
- - **Epoch 1 Results:**
- - **Train Loss**: 0.1520
+ ## Model Details
+ - **Base Model**: `openai/clip-vit-base-patch32`
+ - **Dataset**: `PatchCamelyon`
+ - **Fine-tuned for**: Medical image classification (tumor vs. non-tumor)
+ - **Evaluation Results**:
  - **Train Accuracy**: 94.35%
  - **Validation Accuracy**: 95.16%
+ - **Hardware**: Trained on GPU-A100
+
+ ## Training Performance
+ - **Train Loss**: 0.1520
+ - **Train Accuracy**: 94.35%
+ - **Validation Accuracy**: 95.16%

  ## Usage
  ### Installation
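The updated card's Usage section is truncated in this diff, so a minimal inference sketch follows for reference. It is a sketch under stated assumptions: the repo id below is a placeholder (the commit does not name the published checkpoint), and since the committed config.json declares only a `CLIPVisionModel` backbone, the two-class head for tumor vs. non-tumor is assumed to live outside the checkpoint.

```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

# Placeholder repo id; substitute the actual fine-tuned checkpoint.
repo_id = "openai/clip-vit-base-patch32"

processor = CLIPImageProcessor.from_pretrained(repo_id)
model = CLIPVisionModel.from_pretrained(repo_id)
model.eval()

# PCAM patches are 96x96 RGB; the processor resizes them to the
# 224x224 input size declared in config.json.
image = Image.open("patch.png")  # hypothetical local PCAM patch
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    features = model(**inputs).pooler_output  # shape (1, 768)

# The classification head is not part of CLIPVisionModel, so a
# hidden_size -> 2 linear layer is assumed here (tumor vs. non-tumor).
head = torch.nn.Linear(768, 2)
logits = head(features)
print("predicted class:", logits.argmax(dim=-1).item())
```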
config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "_name_or_path": "openai/clip-vit-base-patch32",
+   "architectures": ["CLIPVisionModel"],
+   "attention_dropout": 0.0,
+   "dropout": 0.0,
+   "hidden_act": "quick_gelu",
+   "hidden_size": 768,
+   "image_size": 224,
+   "initializer_factor": 1.0,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "model_type": "clip_vision_model",
+   "num_attention_heads": 12,
+   "num_channels": 3,
+   "num_hidden_layers": 12,
+   "patch_size": 32,
+   "projection_dim": 512,
+   "torch_dtype": "float32"
+ }
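As a cross-check, the added config.json maps one-to-one onto transformers' `CLIPVisionConfig`. A minimal sketch of instantiating the architecture from these values (randomly initialized; real weights would come from `from_pretrained` on the published repo):

```python
from transformers import CLIPVisionConfig, CLIPVisionModel

# Values mirror the config.json added in this commit.
config = CLIPVisionConfig(
    hidden_size=768,
    intermediate_size=3072,
    num_attention_heads=12,
    num_hidden_layers=12,
    image_size=224,
    patch_size=32,
    projection_dim=512,
    hidden_act="quick_gelu",
    layer_norm_eps=1e-05,
)

model = CLIPVisionModel(config)
print(model.config.model_type)                     # clip_vision_model
print(sum(p.numel() for p in model.parameters()))  # ViT-B/32 vision-tower size
```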