Update README.md

README.md (changed)

Previous content (removed):

Our code requires the same packages as the official StyleGAN3 repo. However, we have updated the code so that it is compatible with the latest versions of the required packages (including PyTorch).
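
As a sketch, environment setup could mirror StyleGAN3's conda workflow; the `environment.yml` file name and the environment name below are assumptions carried over from the StyleGAN3 repo, not confirmed by this README:

```
# Create and activate a conda environment (StyleGAN3-style; file and env names assumed)
conda env create -f environment.yml
conda activate stylegan3
```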

## Getting started

To generate images using a given model, run:

```
# Generate 8 images using the pre-trained FFHQ 256x256 model
python gen_images.py --seeds=0-7 --outdir=out --network=ffhq-256x256.pkl

# Generate 64 airplane images using the pre-trained CIFAR-10 model
python gen_images.py --seeds=0-63 --outdir=out --class=0 --network=cifar10.pkl
```
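
You can also use a checkpoint directly from Python. A minimal sketch, assuming the pickles follow the StyleGAN3 layout with a `G_ema` entry (an assumption based on the shared codebase, not stated in this README):

```
import pickle
import torch

# Load the generator (assumed StyleGAN3-style pickle containing 'G_ema').
# Unpickling needs this repo's modules (e.g. torch_utils, dnnlib) on PYTHONPATH.
with open('ffhq-256x256.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()

z = torch.randn([1, G.z_dim]).cuda()  # random latent code
c = None                              # class labels; None for unconditional FFHQ
img = G(z, c)                         # NCHW image batch
```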

To reproduce the main results from our paper, run the following commands:

```
# CIFAR-10
python train.py --outdir=./training-runs --data=./datasets/cifar10.zip --gpus=8 --batch=512 --mirror=1 --aug=1 --cond=1 --preset=CIFAR10 --tick=1 --snap=200

# FFHQ 64x64
python train.py --outdir=./training-runs --data=./datasets/ffhq-64x64.zip --gpus=8 --batch=256 --mirror=1 --aug=1 --preset=FFHQ-64 --tick=1 --snap=200

# FFHQ 256x256
python train.py --outdir=./training-runs --data=./datasets/ffhq-256x256.zip --gpus=8 --batch=256 --mirror=1 --aug=1 --preset=FFHQ-256 --tick=1 --snap=200

# ImageNet 32x32
python train.py --outdir=./training-runs --data=./datasets/imagenet-32x32.zip --gpus=32 --batch=4096 --mirror=1 --aug=1 --cond=1 --preset=ImageNet-32 --tick=1 --snap=200

# ImageNet 64x64
python train.py --outdir=./training-runs --data=./datasets/imagenet-64x64.zip --gpus=64 --batch=4096 --mirror=1 --aug=1 --cond=1 --preset=ImageNet-64 --tick=1 --snap=200
```

The easiest way to explore different training settings is to modify [`train.py`](./train.py) directly.

## Pre-trained models

We provide pre-trained models for our proposed training configuration (config E) on each dataset:

- [https://huggingface.co/brownvc/BaselineGAN-CIFAR10/tree/main](https://huggingface.co/brownvc/BaselineGAN-CIFAR10/tree/main)
- [https://huggingface.co/brownvc/BaselineGAN-FFHQ-64x64/tree/main](https://huggingface.co/brownvc/BaselineGAN-FFHQ-64x64/tree/main)
- [https://huggingface.co/brownvc/BaselineGAN-FFHQ-256x256/tree/main](https://huggingface.co/brownvc/BaselineGAN-FFHQ-256x256/tree/main)
- [https://huggingface.co/brownvc/BaselineGAN-ImgNet-64x64-v0/tree/main](https://huggingface.co/brownvc/BaselineGAN-ImgNet-64x64-v0/tree/main)
- [https://huggingface.co/brownvc/BaselineGAN-ImgNet-32x32/tree/main](https://huggingface.co/brownvc/BaselineGAN-ImgNet-32x32/tree/main)
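
To fetch a checkpoint programmatically, the standard `huggingface_hub` client works; a minimal sketch (the exact `.pkl` filename inside each model repo is not stated here, so inspect the downloaded folder):

```
from huggingface_hub import snapshot_download

# Download the full model repo and get the local directory path.
local_dir = snapshot_download(repo_id='brownvc/BaselineGAN-FFHQ-256x256')
print(local_dir)  # pass the .pkl found inside to gen_images.py --network=...
```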

## Preparing datasets

We use the same dataset format and dataset preprocessing tool as StyleGAN3 and EDM; refer to their repos for more details.
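
For example, with the StyleGAN3-style `dataset_tool.py` (a sketch assuming that tool's interface; the source path is a placeholder):

```
# Convert a folder of images into the dataset zip expected by train.py.
python dataset_tool.py --source=/path/to/ffhq-images --dest=./datasets/ffhq-256x256.zip --resolution=256x256
```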

## Quality metrics

We support the following metrics:

* `fid50k_full`: Fréchet inception distance against the full dataset.
* `kid50k_full`: Kernel inception distance against the full dataset.
* `pr50k3_full`: Precision and recall against the full dataset.
* `is50k`: Inception score for CIFAR-10.

Refer to the StyleGAN3 code base for more details.
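
For instance, computing FID would look like the following, assuming the StyleGAN3-style `calc_metrics.py` entry point (its presence and flags are an assumption based on the shared codebase):

```
# Compute FID for a trained network against the full dataset.
python calc_metrics.py --metrics=fid50k_full --data=./datasets/ffhq-256x256.zip --network=ffhq-256x256.pkl
```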

## Citation

```
@inproceedings{huang2024the,
  title={The {GAN} is dead; long live the {GAN}! A Modern {GAN} Baseline},
  author={Nick Huang and Aaron Gokaslan and Volodymyr Kuleshov and James Tompkin},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=OrtN9hPP7V}
}
```

## Acknowledgements

The authors thank Xinjie Jayden Yi for contributing to the proof and Yu Cheng for helpful discussions. For compute, the authors thank Databricks Mosaic Research. Nick Huang was supported by a Brown University Division of Research Seed Award, and James Tompkin was supported by NSF CAREER 2144956. Volodymyr Kuleshov was supported by NSF CAREER 2145577 and NIH MIRA 1R35GM15124301.

New content (added):

---
title: R3GAN - GANs are so back!
emoji: 📉
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 5.11.0
app_file: app.py
pinned: false
license: mit
short_description: GANs are so back!
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference