nielsr (HF Staff) committed
Commit 02fcf6a · verified · 1 Parent(s): 84eebcf

Improve model card: Add metadata, paper, project, and code links


This PR significantly enhances the model card for `jaeunglee/resnet18-cifar10-unlearning` by:

- Adding `pipeline_tag: image-classification` and `library_name: pytorch` to the metadata, which improves the model's discoverability and categorization on the Hugging Face Hub.
- Adding prominent links to the paper ([Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods](https://huggingface.co/papers/2508.12730)), the project page ([https://gnueaj.github.io/Machine-Unlearning-Comparator/](https://gnueaj.github.io/Machine-Unlearning-Comparator/)), and the GitHub repository ([https://github.com/gnueaj/Machine-Unlearning-Comparator](https://github.com/gnueaj/Machine-Unlearning-Comparator)) at the top of the model card for easier access to related resources.

Please review and merge if these improvements align with the repository's goals.

Files changed (1)
  1. README.md +30 -24
README.md CHANGED
@@ -4,17 +4,23 @@ tags:
  - machine-unlearning
  - unlearning
  - resnet18
+ pipeline_tag: image-classification
+ library_name: pytorch
  ---

  # Model Card for jaeunglee/resnet18-cifar10-unlearning

  This repository contains ResNet18 models retrained on the CIFAR-10 dataset with specific classes excluded during training. Each model is trained to study the impact of class exclusion on model performance and generalization.

+ **Paper:** [Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods](https://huggingface.co/papers/2508.12730)
+ **Project Page:** [https://gnueaj.github.io/Machine-Unlearning-Comparator/](https://gnueaj.github.io/Machine-Unlearning-Comparator/)
+ **GitHub Repository:** [https://github.com/gnueaj/Machine-Unlearning-Comparator](https://github.com/gnueaj/Machine-Unlearning-Comparator)
+
  ---
  ## Evaluation

- - **Testing Data:** CIFAR-10 test set
- - **Metrics:** Top-1 accuracy
+ - **Testing Data:** CIFAR-10 test set
+ - **Metrics:** Top-1 accuracy

  ### Results
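For reference, the evaluation protocol listed in this hunk (CIFAR-10 test set, top-1 accuracy) maps onto a short PyTorch loop along the following lines. This is a minimal sketch rather than code from the repository: `model` is assumed to be one of the ResNet18 checkpoints already loaded as a `torch.nn.Module`, and the normalization statistics are the ones stated later in the model card.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Test-time preprocessing: ToTensor + Normalize only (no augmentation).
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=test_transform)
test_loader = DataLoader(test_set, batch_size=128, shuffle=False)

@torch.no_grad()
def top1_accuracy(model, loader, device="cpu"):
    """Fraction of test images whose highest-scoring class matches the label."""
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total

# print(top1_accuracy(model, test_loader))  # `model` is assumed to be loaded already
```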
 
@@ -36,19 +42,19 @@ This repository contains ResNet18 models retrained on the CIFAR-10 dataset with

  ### Training Procedure

- - **Base Model:** ResNet18
- - **Dataset:** CIFAR-10
- - **Excluded Class:** Varies by model
- - **Loss Function:** CrossEntropyLoss
- - **Optimizer:** SGD with:
-   - Learning rate: `0.1`
-   - Momentum: `0.9`
-   - Weight decay: `5e-4`
-   - Nesterov: `True`
- - **Scheduler:** CosineAnnealingLR (T_max: `200`)
- - **Training Epochs:** `200`
- - **Batch Size:** `128`
- - **Hardware:** Single GPU
+ - **Base Model:** ResNet18
+ - **Dataset:** CIFAR-10
+ - **Excluded Class:** Varies by model
+ - **Loss Function:** CrossEntropyLoss
+ - **Optimizer:** SGD with:
+   - Learning rate: `0.1`
+   - Momentum: `0.9`
+   - Weight decay: `5e-4`
+   - Nesterov: `True`
+ - **Scheduler:** CosineAnnealingLR (T_max: `200`)
+ - **Training Epochs:** `200`
+ - **Batch Size:** `128`
+ - **Hardware:** Single GPU

  ### Notes on Training
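The training configuration listed in this hunk corresponds one-to-one to standard PyTorch APIs. A minimal sketch of the stated hyperparameters follows; `model` and `train_loader` are assumed to already exist, and this is an illustration, not the original training script.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4, nesterov=True)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

for epoch in range(200):                   # 200 training epochs
    model.train()
    for images, labels in train_loader:    # batches of 128 images
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                       # cosine-annealed LR, stepped once per epoch
```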
 
@@ -59,21 +65,21 @@ The training recipe is adapted from the paper **"Benchopt: Reproducible, efficie

  The following transformations were applied to the CIFAR-10 dataset:

- - **Base Transformations (applied to both training and test sets):**
-   - Conversion to PyTorch tensors using `ToTensor()`.
-   - Normalization using mean `(0.4914, 0.4822, 0.4465)` and standard deviation `(0.2023, 0.1994, 0.2010)`.
+ - **Base Transformations (applied to both training and test sets):**
+   - Conversion to PyTorch tensors using `ToTensor()`.
+   - Normalization using mean `(0.4914, 0.4822, 0.4465)` and standard deviation `(0.2023, 0.1994, 0.2010)`.

- - **Training Set Augmentation (only for training data):**
-   - **RandomCrop(32, padding=4):** Randomly crops images with padding for spatial variation.
-   - **RandomHorizontalFlip():** Randomly flips images horizontally with a 50% probability.
+ - **Training Set Augmentation (only for training data):**
+   - **RandomCrop(32, padding=4):** Randomly crops images with padding for spatial variation.
+   - **RandomHorizontalFlip():** Randomly flips images horizontally with a 50% probability.

  These augmentations help improve the model's ability to generalize by introducing variability in the training data.

  ### Model Description

- - **Developed by:** Jaeung Lee
- - **Model type:** Image Classification
- - **License:** MIT
+ - **Developed by:** Jaeung Lee
+ - **Model type:** Image Classification
+ - **License:** MIT

  ### Related Work
 