Adding Conditional Control to Text-to-Image Diffusion Models Paper • 2302.05543 • Published Feb 10, 2023 • 40
Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators Paper • 2303.13439 • Published Mar 23, 2023 • 4
Expressive Text-to-Image Generation with Rich Text Paper • 2304.06720 • Published Apr 13, 2023 • 1
Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation Paper • 2212.11565 • Published Dec 22, 2022 • 3
ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation Paper • 2302.13848 • Published Feb 27, 2023 • 1
FateZero: Fusing Attentions for Zero-shot Text-based Video Editing Paper • 2303.09535 • Published Mar 16, 2023 • 1
Improving Sample Quality of Diffusion Models Using Self-Attention Guidance Paper • 2210.00939 • Published Oct 3, 2022 • 6
Understanding 3D Object Interaction from a Single Image Paper • 2305.09664 • Published May 16, 2023 • 1
Dense Text-to-Image Generation with Attention Modulation Paper • 2308.12964 • Published Aug 24, 2023 • 2
Masked Diffusion Transformer is a Strong Image Synthesizer Paper • 2303.14389 • Published Mar 25, 2023 • 1
Editing Implicit Assumptions in Text-to-Image Diffusion Models Paper • 2303.08084 • Published Mar 14, 2023 • 2
Ablating Concepts in Text-to-Image Diffusion Models Paper • 2303.13516 • Published Mar 23, 2023 • 1
StableVideo: Text-driven Consistency-aware Diffusion Video Editing Paper • 2308.09592 • Published Aug 18, 2023 • 2
SALAD: Part-Level Latent Diffusion for 3D Shape Generation and Manipulation Paper • 2303.12236 • Published Mar 21, 2023 • 3
StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces Paper • 2303.06146 • Published Mar 10, 2023 • 2