---
tags:
  - adversarial
  - image-classification
  - robustness
  - deep-learning
  - computer-vision
task_categories:
  - image-classification
model:
  - lens-ai/clip-vit-base-patch32_pcam_finetuned
---

# **Adversarial PCAM Dataset**
This dataset contains adversarial examples generated using various attack techniques on **PatchCamelyon (PCAM)** images. The adversarial images were crafted to fool the fine-tuned model:  
**[lens-ai/clip-vit-base-patch32_pcam_finetuned](https://huggingface.co/lens-ai/clip-vit-base-patch32_pcam_finetuned)**.  

Researchers and engineers can use this dataset to:
- Evaluate model robustness against adversarial attacks
- Train models with adversarial data for improved resilience
- Benchmark new adversarial defense mechanisms

---

## **πŸ“‚ Dataset Structure**
```
organized_dataset/
β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ 0/  # Negative samples (adversarial images only)
β”‚   β”‚   └── adv_0_labelfalse_pred1_SquareAttack.png
β”‚   └── 1/  # Positive samples (adversarial images only)
β”‚       └── adv_1_labeltrue_pred0_SquareAttack.png
β”œβ”€β”€ originals/  # Original images
β”‚   β”œβ”€β”€ orig_0_labelfalse_SquareAttack.png
β”‚   └── orig_1_labeltrue_SquareAttack.png
β”œβ”€β”€ perturbations/  # Perturbation masks
β”‚   β”œβ”€β”€ perturbation_0_SquareAttack.png
β”‚   └── perturbation_1_SquareAttack.png
└── dataset.json
```

Each adversarial example consists of:
- `train/{0,1}/adv_{id}_label{true/false}_pred{pred_label}_{attack_name}.png` β†’ **Adversarial image** with model prediction
- `originals/orig_{id}_label{true/false}_{attack_name}.png` β†’ **Original image** before perturbation
- `perturbations/perturbation_{id}_{attack_name}.png` β†’ **The perturbation applied** to the original image
- **Attack name in filename** indicates which method was used
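
Since the metadata is also encoded in the filenames, it can be recovered with a small parser. A minimal sketch (the regex simply mirrors the naming pattern above):

```python
import re

# Mirrors: adv_{id}_label{true/false}_pred{pred_label}_{attack_name}.png
ADV_PATTERN = re.compile(
    r"adv_(?P<id>\d+)_label(?P<label>true|false)_pred(?P<pred>\d+)_(?P<attack>\w+)\.png"
)

m = ADV_PATTERN.match("adv_1_labeltrue_pred0_SquareAttack.png")
if m:
    print(m.group("id"), m.group("label"), m.group("pred"), m.group("attack"))
    # -> 1 true 0 SquareAttack
```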

The `dataset.json` file contains detailed metadata for each sample, including:
```json
{
    "attack": "SquareAttack",
    "type": "black_box_attacks",
    "perturbation": "perturbations/perturbation_1_SquareAttack.png",
    "adversarial": "train/0/adv_1_labelfalse_pred1_SquareAttack.png",
    "original": "originals/orig_1_labelfalse_SquareAttack.png",
    "label": 0,
    "prediction": 1
}
```
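
As a quick sanity check, you can measure how far an adversarial image deviates from its original. A small sketch, using the file names from the metadata entry above:

```python
import numpy as np
from PIL import Image

# Paths taken from the dataset.json example above
orig = np.asarray(Image.open(
    "organized_dataset/originals/orig_1_labelfalse_SquareAttack.png"), dtype=np.float32)
adv = np.asarray(Image.open(
    "organized_dataset/train/0/adv_1_labelfalse_pred1_SquareAttack.png"), dtype=np.float32)

diff = adv - orig
print("L-inf distance (0-255 scale):", np.abs(diff).max())
print("L2 distance:", np.linalg.norm(diff))
```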

---

## **πŸ”Ή Attack Types**
All of the attacks below are black-box: none requires access to the model's true gradients. They differ in whether they estimate gradients from queries or avoid gradient information entirely.

### **1️⃣ Gradient-Estimation Attacks**
These black-box attacks approximate the model's gradients from queries alone:

#### **πŸ”Ή HopSkipJump Attack**
- Query-efficient black-box attack that estimates gradients
- Based on decision boundary approximation

#### **πŸ”Ή Zoo Attack**
- Zeroth-order optimization (ZOO) attack
- Estimates gradients via finite-difference queries (see the sketch below)
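
The core idea is easy to sketch: approximate each partial derivative with a symmetric finite difference, using only forward queries. A toy illustration in which a quadratic loss stands in for the model:

```python
import numpy as np

# Zeroth-order gradient estimation, the idea behind ZOO:
# d loss / d x_i  ~  (loss(x + h*e_i) - loss(x - h*e_i)) / (2h)
def estimate_gradient(loss_fn, x, h=1e-4):
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step.flat[i] = h
        grad.flat[i] = (loss_fn(x + step) - loss_fn(x - step)) / (2 * h)
    return grad

# Toy quadratic loss; a real attack would query the target model instead.
x = np.array([1.0, -2.0])
print(estimate_gradient(lambda v: (v ** 2).sum(), x))  # ~[ 2. -4.]
```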

### **2️⃣ Gradient-Free Attacks**
These black-box attacks need no gradient information at all, relying on random search, boundary walking, or input transformations:

#### **πŸ”Ή SimBA (Simple Black-box Attack)**
- Iteratively perturbs the input along random directions, keeping each step only if it lowers the model's confidence in the true class (see the sketch below)
- Very low query cost per iteration
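
A toy sketch of SimBA's inner step (the `prob_fn` query function is a placeholder for the target model's softmax output):

```python
import numpy as np

# Try a random signed step along one pixel (standard-basis direction) and
# keep it only if it lowers the model's probability for the true class.
def simba_step(prob_fn, x, label, eps=0.05, rng=np.random.default_rng()):
    i = rng.integers(x.size)
    base_prob = prob_fn(x)[label]
    for sign in (+1.0, -1.0):
        candidate = x.copy()
        candidate.flat[i] = np.clip(candidate.flat[i] + sign * eps, 0.0, 1.0)
        if prob_fn(candidate)[label] < base_prob:
            return candidate  # step accepted
    return x  # neither direction helped; try another coordinate next call
```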

#### **πŸ”Ή Boundary Attack**
- Decision-based attack that starts from an adversarial point and walks along the decision boundary
- Progressively minimizes the perturbation size while remaining adversarial

#### **πŸ”Ή Spatial Transformation Attack**
- Uses rotation, scaling, and translation
- No pixel-level perturbations required
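
The card does not state which toolkit produced these examples, but all of the listed attacks are implemented in the Adversarial Robustness Toolbox (ART), so comparable examples can be generated along these lines (the stand-in model and random input are placeholders; substitute the fine-tuned PCAM classifier and real images):

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import HopSkipJump

# Stand-in 2-class model for illustration only
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 224, 224),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)

# Decision-based black-box attack; only predictions are queried
attack = HopSkipJump(classifier=classifier, targeted=False, max_iter=10)
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # toy input batch
x_adv = attack.generate(x=x)
print(x_adv.shape)  # (1, 3, 224, 224)
```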

---

## Usage

```python
import json
from torchvision import transforms
from PIL import Image
from pathlib import Path

# Load the dataset information
with open('organized_dataset/dataset.json', 'r') as f:
    dataset_info = json.load(f)["train"]["rows"]  # rows in the "train" split

# Define transformation
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor()
])

# Function to load and process images
def load_image(image_path):
    img = Image.open(image_path).convert("RGB")
    return transform(img)

# Example: load a related set of images (original, adversarial, perturbation)
for entry in dataset_info:
    # Load adversarial image (keys follow the dataset.json example above)
    adv_path = Path('organized_dataset') / entry['adversarial']
    adv_image = load_image(adv_path)
    
    # Load original image
    orig_path = Path('organized_dataset') / entry['original']
    orig_image = load_image(orig_path)
    
    # Load perturbation mask if one was saved for this entry
    if entry.get('perturbation'):
        pert_path = Path('organized_dataset') / entry['perturbation']
        pert_image = load_image(pert_path)
    
    # Access metadata
    attack_type = entry['attack']
    label = entry['label']
    prediction = entry['prediction']
    
    print(f"Attack: {attack_type}")
    print(f"True Label: {label}")
    print(f"Model Prediction: {prediction}")
    print(f"Image shapes: {adv_image.shape}")  # Should be (3, 224, 224)
```
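
Because `train/` follows the one-folder-per-class layout, the adversarial split also loads directly with torchvision's `ImageFolder`, which is convenient for adversarial training or bulk evaluation:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("organized_dataset/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape)   # torch.Size([32, 3, 224, 224])
print(labels[:8])     # class indices 0 (negative) / 1 (positive)
```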

## **πŸ“Š Attack Success Rates**
Success rates for each attack on the target model:
```json
{
    "HopSkipJump": {"success_rate": 14},
    "Zoo_Attack": {"success_rate": 22},
    "SimBA": {"success_rate": 99},
    "Boundary_Attack": {"success_rate": 98},
    "SpatialTransformation_Attack": {"success_rate": 99}
}
```
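
If you want to recompute these figures from the metadata, a minimal sketch (assuming the JSON layout used in the Usage section; note that the stored rows may already be restricted to attempted examples):

```python
import json
from collections import defaultdict

with open("organized_dataset/dataset.json") as f:
    rows = json.load(f)["train"]["rows"]

counts = defaultdict(lambda: [0, 0])  # attack -> [flipped, total]
for entry in rows:
    counts[entry["attack"]][1] += 1
    if entry["prediction"] != entry["label"]:
        counts[entry["attack"]][0] += 1

for attack, (flipped, total) in sorted(counts.items()):
    print(f"{attack}: {100 * flipped / total:.0f}% ({flipped}/{total})")
```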

## Citation
```bibtex
@misc{lensai2025adversarial,
  title={Adversarial PCAM Dataset},
  author={LensAI Team},
  year={2025},
  url={https://huggingface.co/datasets/lens-ai/adversarial_pcam}
}
```