Multimodal Latent Language Modeling with Next-Token Diffusion Paper • 2412.08635 • Published Dec 2024 • 41
DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation Paper • 2412.07589 • Published Dec 2024 • 45
HumanEdit: A High-Quality Human-Rewarded Dataset for Instruction-based Image Editing Paper • 2412.04280 • Published Dec 2024 • 13
Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation Paper • 2410.13848 • Published Oct 17, 2024 • 31
Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis Paper • 2410.08261 • Published Oct 10, 2024 • 49
Auto Cherry-Picker: Learning from High-quality Generative Data Driven by Language Paper • 2406.20085 • Published Jun 28, 2024 • 11
Mamba or RWKV: Exploring High-Quality and High-Efficiency Segment Anything Model Paper • 2406.19369 • Published Jun 27, 2024 • 2
OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding Paper • 2406.19389 • Published Jun 27, 2024 • 52
MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning Paper • 2406.17770 • Published Jun 25, 2024 • 18
MotionBooth: Motion-Aware Customized Text-to-Video Generation Paper • 2406.17758 • Published Jun 25, 2024 • 18
Towards Language-Driven Video Inpainting via Multimodal Large Language Models Paper • 2401.10226 • Published Jan 18, 2024 • 1
MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation Paper • 2309.13042 • Published Sep 22, 2023 • 9