SenseNova-U1 Collection: Unifying Multimodal Understanding and Generation with NEO-Unify Architecture • 3 items • Updated 3 days ago
HP-Edit: A Human-Preference Post-Training Framework for Image Editing • Paper 2604.19406 • Published 13 days ago
LLaDA2.0-Uni: Unifying Multimodal Understanding and Generation with Diffusion Large Language Model • Paper 2604.20796 • Published 12 days ago
Reward Hacking in the Era of Large Models: Mechanisms, Emergent Misalignment, Challenges • Paper 2604.13602 • Published 19 days ago
Seeing Fast and Slow: Learning the Flow of Time in Videos • Paper 2604.21931 • Published 11 days ago
Tuna-2: Pixel Embeddings Beat Vision Encoders for Multimodal Understanding and Generation • Paper 2604.24763 • Published 7 days ago
FlowAnchor: Stabilizing the Editing Signal for Inversion-Free Video Editing • Paper 2604.22586 • Published 10 days ago
Prompt Relay: Inference-Time Temporal Control for Multi-Event Video Generation • Paper 2604.10030 • Published 23 days ago
OmniShow: Unifying Multimodal Conditions for Human-Object Interaction Video Generation • Paper 2604.11804 • Published 21 days ago
SenCache: Accelerating Diffusion Model Inference via Sensitivity-Aware Caching • Paper 2602.24208 • Published Feb 27
Mode Seeking meets Mean Seeking for Fast Long Video Generation • Paper 2602.24289 • Published Feb 27
BitDance: Scaling Autoregressive Generative Models with Binary Tokens • Paper 2602.14041 • Published Feb 15
Context Forcing: Consistent Autoregressive Video Generation with Long Context • Paper 2602.06028 • Published Feb 5