PYY2001 committed · verified · Commit 51e0300 · 1 parent: ce5b588

Update README.md

Files changed (1): README.md +23 -50
README.md CHANGED
@@ -13,70 +13,43 @@ pipeline_tag: text-to-image
  <table>
  <tr>
  <td><img src="assets/teaser_info.png" alt="teaser example 0" width="1200"/></td>
  <td><img src="assets/teaser_slide.png" alt="teaser example 1" width="1200"/></td>
  </tr>
  </table>

  ## Abstract
  <p>
- Generating visually appealing images is fundamental to modern text-to-image generation models.
- A potential solution to better aesthetics is direct preference optimization (DPO),
- which has been applied to diffusion models to improve general image quality including prompt alignment and aesthetics.
- Popular DPO methods propagate preference labels from clean image pairs to all the intermediate steps along the two generation trajectories.
- However, preference labels provided in existing datasets are blended with layout and aesthetic opinions, which would disagree with aesthetic preference.
- Even if aesthetic labels were provided (at substantial cost), it would be hard for the two-trajectory methods to capture nuanced visual differences at different steps.
  </p>
  <p>
- To improve aesthetics economically, this paper uses existing generic preference data and introduces step-by-step preference optimization
- (SPO) that discards the propagation strategy and allows fine-grained image details to be assessed. Specifically,
- at each denoising step, we 1) sample a pool of candidates by denoising from a shared noise latent,
- 2) use a step-aware preference model to find a suitable win-lose pair to supervise the diffusion model, and
- 3) randomly select one from the pool to initialize the next denoising step.
- This strategy ensures that diffusion models focus on the subtle, fine-grained visual differences
- instead of layout aspect. We find that aesthetic can be significantly enhanced by accumulating these
- improved minor differences.
  </p>
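The three numbered steps map directly to a per-step training loop. Below is a toy sketch, with `denoise` standing in for one reverse-diffusion step and `pref` for the step-aware preference model; all names are illustrative assumptions, not the SPO codebase:

```python
# Toy sketch of one SPO training step (illustrative, not the authors' code).
import random

def spo_step(z_t, t, prompt, denoise, pref, pool_size=4):
    # 1) sample a pool of candidates by denoising from the shared latent z_t
    candidates = [denoise(z_t, t, prompt) for _ in range(pool_size)]
    # 2) the step-aware preference model picks a win-lose pair to supervise
    #    the diffusion model with a DPO-style loss at this step
    ranked = sorted(candidates, key=lambda z: pref(z, t, prompt))
    lose, win = ranked[0], ranked[-1]
    # 3) a random pool member (not necessarily the winner) initializes the
    #    next denoising step, keeping supervision focused on subtle details
    z_next = random.choice(candidates)
    return win, lose, z_next
```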
  <p>
- When fine-tuning Stable Diffusion v1.5 and SDXL, SPO yields significant
- improvements in aesthetics compared with existing DPO methods while not sacrificing image-text alignment
- compared with vanilla models. Moreover, SPO converges much faster than DPO methods due to the step-by-step
- alignment of fine-grained visual details.
  </p>

  ## Model Description

- This model is fine-tuned from [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). It has been trained on 4,000 prompts for 10 epochs.
-
- This is a merged checkpoint that combines the LoRA checkpoint with the base model [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). If you want to access the LoRA checkpoint, please visit [SPO-SDXL_4k-p_10ep_LoRA](https://huggingface.co/SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRA). We also provide a LoRA checkpoint compatible with [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), which can be accessed [here](https://civitai.com/models/510261?modelVersionId=567119).
-
-
- ## A quick example
- ```python
- from diffusers import StableDiffusionXLPipeline, AutoencoderKL
- import torch
-
- # load pipeline
- inference_dtype = torch.float16
- pipe = StableDiffusionXLPipeline.from_pretrained(
-     "SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep",
-     torch_dtype=inference_dtype,
- )
- vae = AutoencoderKL.from_pretrained(
-     'madebyollin/sdxl-vae-fp16-fix',
-     torch_dtype=inference_dtype,
- )
- pipe.vae = vae
- pipe.to('cuda')
-
- generator = torch.Generator(device='cuda').manual_seed(42)
- image = pipe(
-     prompt='a child and a penguin sitting in front of the moon',
-     guidance_scale=5.0,
-     generator=generator,
-     output_type='pil',
- ).images[0]
- image.save('moon.png')
- ```
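For the LoRA checkpoint mentioned in the model description above, a minimal loading sketch, assuming the repo follows the standard diffusers LoRA format:

```python
# Sketch: apply the SPO LoRA on top of the vanilla SDXL base model.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRA")
```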

  ## Citation
  If you find our work or codebase useful, please consider giving us a star and citing our work.
 
  <table>
  <tr>
  <td><img src="assets/teaser_info.png" alt="teaser example 0" width="1200"/></td>
+ </tr>
+ <tr>
  <td><img src="assets/teaser_slide.png" alt="teaser example 1" width="1200"/></td>
  </tr>
  </table>

  ## Abstract
  <p>
+ Recently, state-of-the-art text-to-image generation models, such as Flux and Ideogram 2.0, have made
+ significant progress in sentence-level visual text rendering. In this paper, we focus on the more
+ challenging scenario of article-level visual text rendering and address the novel task of generating
+ high-quality business content, including infographics and slides, based on user-provided article-level
+ descriptive prompts and ultra-dense layouts. The fundamental challenges are twofold: significantly
+ longer context lengths and the scarcity of high-quality business content data.
  </p>
  <p>
+ In contrast to most previous works that focus on a limited number of sub-regions and sentence-level
+ prompts, ensuring precise adherence to ultra-dense layouts with tens or even hundreds of sub-regions in
+ business content is far more challenging. We make two key technical contributions: (i) the construction
+ of a scalable, high-quality business content dataset, Infographics-650K, equipped with ultra-dense
+ layouts and prompts, built with a layer-wise retrieval-augmented infographic generation scheme; and
+ (ii) a layout-guided cross-attention scheme, which injects tens of region-wise prompts into a set of
+ cropped region latent spaces according to the ultra-dense layouts and refines each sub-region flexibly
+ during inference using a layout-conditional CFG.
  </p>
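To make the layout-guided cross-attention scheme concrete, here is a toy sketch in which each sub-region of the latent feature map attends only to its own region-wise prompt embedding. The shapes, box format, and single-head attention are illustrative assumptions, not the released implementation:

```python
# Toy layout-guided cross-attention: every region box attends to its own
# prompt embedding inside a cropped latent, then is written back in place.
import torch
import torch.nn.functional as F

def layout_guided_cross_attention(latent, boxes, prompt_embs):
    # latent: (C, H, W); boxes: (y0, x0, y1, x1) in latent coordinates;
    # prompt_embs: one (L_i, C) text embedding per region
    C, _, _ = latent.shape
    out = latent.clone()
    for (y0, x0, y1, x1), emb in zip(boxes, prompt_embs):
        crop = latent[:, y0:y1, x0:x1]                  # cropped region latent
        q = crop.reshape(C, -1).T                       # (N_pix, C) queries
        attn = F.softmax(q @ emb.T / C ** 0.5, dim=-1)  # attend to region prompt
        refined = (attn @ emb).T.reshape(C, y1 - y0, x1 - x0)
        out[:, y0:y1, x0:x1] = refined                  # write the region back
    return out

# toy usage: a 64x64 latent split into two regions with their own prompts
latent = torch.randn(320, 64, 64)
boxes = [(0, 0, 32, 64), (32, 0, 64, 64)]
prompts = [torch.randn(8, 320), torch.randn(8, 320)]
print(layout_guided_cross_attention(latent, boxes, prompts).shape)
```

At inference time, the layout-conditional CFG mentioned in the abstract would additionally refine each sub-region; the sketch above omits that step.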
  <p>
+ We demonstrate the strong results of our system compared with previous SOTA systems, such as Flux and
+ SD3, on our BizEval prompt set. Additionally, we conduct thorough ablation experiments to verify the
+ effectiveness of each component. We hope Infographics-650K and BizEval will encourage the broader
+ community to advance the progress of business content generation.
  </p>

  ## Model Description

+ The ByT5 model is fine-tuned from [Glyph-ByT5-v2](https://arxiv.org/abs/2406.10208), which supports accurate visual text rendering in ten different languages.
+ The [SPO](https://huggingface.co/SPO-Diffusion-Models) model is a substitute for the original stable-diffusion-xl-base-1.0 backbone, improving aesthetics.
+ You can follow our [github]() to set up and run the model.
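A minimal loading sketch under stated assumptions: the SPO-SDXL backbone loads through the standard diffusers SDXL pipeline (as in the SPO model card), while the glyph encoder checkpoint path and its wiring into the pipeline are hypothetical placeholders; see the GitHub repository for the actual setup.

```python
# Sketch only: the SPO backbone id comes from the SPO model card; the glyph
# encoder path is a placeholder, and the real integration lives in the repo.
import torch
from diffusers import StableDiffusionXLPipeline
from transformers import AutoTokenizer, T5EncoderModel

# SPO-SDXL stands in for stable-diffusion-xl-base-1.0 as the aesthetic backbone
pipe = StableDiffusionXLPipeline.from_pretrained(
    "SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical: a Glyph-ByT5-v2 encoder for the region-wise text prompts
# (ByT5 checkpoints load with the standard T5 encoder classes)
glyph_tokenizer = AutoTokenizer.from_pretrained("path/to/glyph-byt5-v2")
glyph_encoder = T5EncoderModel.from_pretrained("path/to/glyph-byt5-v2")
```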

  ## Citation
  If you find our work or codebase useful, please consider giving us a star and citing our work.