---
license: mit
---

<div align="center">

<h1> PixelFlow: Pixel-Space Generative Models with Flow </h1>

[![arXiv](https://img.shields.io/badge/arXiv%20paper-2504.07963-b31b1b.svg)](https://arxiv.org/abs/2504.07963)
[![GitHub](https://img.shields.io/badge/GitHub-PixelFlow-181717?logo=github)](https://github.com/ShoufaChen/PixelFlow)
[![demo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Online_Demo-blue)](https://huggingface.co/spaces/ShoufaChen/PixelFlow)

![pixelflow](https://github.com/user-attachments/assets/7e2e4db9-4b41-46ca-8d43-92f2b642a676)

</div>

> [**PixelFlow: Pixel-Space Generative Models with Flow**](https://arxiv.org/abs/2504.07963)<br>
> [Shoufa Chen](https://www.shoufachen.com), [Chongjian Ge](https://chongjiange.github.io/), [Shilong Zhang](https://jshilong.github.io/), [Peize Sun](https://peizesun.github.io/), [Ping Luo](http://luoping.me/)
> <br>The University of Hong Kong, Adobe<br>

## Introduction

We present PixelFlow, a family of image generation models that operate directly in raw pixel space, in contrast to the predominant latent-space models. This approach simplifies the generation pipeline: it eliminates the need for a pre-trained Variational Autoencoder (VAE) and makes the whole model end-to-end trainable. Through efficient cascade flow modeling, PixelFlow keeps computation in pixel space affordable and achieves an FID of 1.98 on the 256x256 ImageNet class-conditional image generation benchmark. Qualitative text-to-image results demonstrate that PixelFlow excels in image quality, artistry, and semantic control. We hope this new paradigm will inspire and open up new opportunities for next-generation visual generation models.

## Model Zoo

| Model | Task | Params | FID | Checkpoint |
|:---------:|:--------------:|:------:|:----:|:----------:|
| PixelFlow | class-to-image | 677M | 1.98 | [🤗](https://huggingface.co/ShoufaChen/PixelFlow-Class2Image) |
| PixelFlow | text-to-image | 882M | N/A | [🤗](https://huggingface.co/ShoufaChen/PixelFlow-Text2Image) |

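To use a checkpoint with the demo or evaluation commands below, you can download it locally, for example with the Hugging Face CLI (the target directory here is just a placeholder):

```bash
# Example only: download the class-to-image checkpoint into a local folder.
huggingface-cli download ShoufaChen/PixelFlow-Class2Image --local-dir checkpoints/PixelFlow-Class2Image
```
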
## Setup

### 1. Create Environment
```bash
conda create -n pixelflow python=3.12
conda activate pixelflow
```

### 2. Install Dependencies
* [PyTorch 2.6.0](https://pytorch.org/): install it according to your system configuration (CUDA version, etc.); see the example below.
* [flash-attention v2.7.4.post1](https://github.com/Dao-AILab/flash-attention/releases/tag/v2.7.4.post1): optional, required only for training.
* Other packages: `pip3 install -r requirements.txt`

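For example, on a machine with CUDA 12.4 the PyTorch install might look like the following; the index URL is an assumption here, so check the [official PyTorch instructions](https://pytorch.org/get-started/locally/) for the wheel matching your setup:

```bash
# Example only: adjust the CUDA suffix (or drop --index-url for the default build)
# to match your system.
pip3 install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu124
```
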
## Demo [![demo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Online_Demo-blue)](https://huggingface.co/spaces/ShoufaChen/PixelFlow)

We provide an online [Gradio demo](https://huggingface.co/spaces/ShoufaChen/PixelFlow) for class-to-image generation.

You can also deploy both the class-to-image and text-to-image demos locally:

```bash
python app.py --checkpoint /path/to/checkpoint --class_cond  # for class-to-image
```
or
```bash
python app.py --checkpoint /path/to/checkpoint  # for text-to-image
```

## Training

### 1. ImageNet Preparation

- Download the ImageNet dataset from [http://www.image-net.org/](http://www.image-net.org/).
- Use the [extract_ILSVRC.sh](https://github.com/pytorch/examples/blob/main/imagenet/extract_ILSVRC.sh) script to extract and organize the training and validation images into labeled subfolders; the expected layout is sketched below.

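After extraction, the dataset should follow the standard one-folder-per-class layout that the script produces (the synset IDs and file names below are illustrative):

```
/path/to/imagenet/
├── train/
│   ├── n01440764/
│   │   ├── n01440764_10026.JPEG
│   │   └── ...
│   └── ...
└── val/
    ├── n01440764/
    └── ...
```
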
### 2. Training Command

```bash
torchrun --nnodes=1 --nproc_per_node=8 train.py configs/pixelflow_xl_c2i.yaml
```

## Evaluation (FID, Inception Score, etc.)

We provide a [sample_ddp.py](sample_ddp.py) script, adapted from [DiT](https://github.com/facebookresearch/DiT), for generating sample images and saving them both as a folder and as a `.npz` file. The `.npz` file is compatible with ADM's TensorFlow evaluation suite, allowing direct computation of FID, Inception Score, and other metrics.

```bash
torchrun --nnodes=1 --nproc_per_node=8 sample_ddp.py --pretrained /path/to/checkpoint
```

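As a sketch of the metric-computation step, assuming you have cloned OpenAI's [guided-diffusion](https://github.com/openai/guided-diffusion) repository and downloaded its pre-computed ImageNet 256x256 reference batch, the ADM evaluation suite is typically invoked on the generated `.npz` like this (paths are placeholders):

```bash
# Illustrative only: run from a clone of guided-diffusion, with the reference
# batch and the PixelFlow sample batch available locally.
python evaluations/evaluator.py VIRTUAL_imagenet256_labeled.npz /path/to/pixelflow_samples.npz
```
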
## BibTeX

```bibtex
@article{chen2025pixelflow,
  title={PixelFlow: Pixel-Space Generative Models with Flow},
  author={Chen, Shoufa and Ge, Chongjian and Zhang, Shilong and Sun, Peize and Luo, Ping},
  journal={arXiv preprint arXiv:2504.07963},
  year={2025}
}
```