REPA-E: Unlocking VAE for End-to-End Tuning with Latent Diffusion Transformers Paper • 2504.10483 • Published Apr 14, 2025 • 20
mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data Paper • 2502.08468 • Published Feb 12, 2025 • 13
SPAR: Personalized Content-Based Recommendation via Long Engagement Attention Paper • 2402.10555 • Published Feb 16, 2024 • 36
Item-Language Model for Conversational Recommendation Paper • 2406.02844 • Published Jun 5, 2024 • 12
Molar: Multimodal LLMs with Collaborative Filtering Alignment for Enhanced Sequential Recommendation Paper • 2412.18176 • Published Dec 24, 2024 • 16
Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search Paper • 2412.18319 • Published Dec 24, 2024 • 40
Star Attention: Efficient LLM Inference over Long Sequences Paper • 2411.17116 • Published Nov 26, 2024 • 55
Continuous Risk Factor Models: Analyzing Asset Correlations through Energy Distance Paper • 2410.23447 • Published Oct 30, 2024 • 1
γ-MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models Paper • 2410.13859 • Published Oct 17, 2024 • 8
What Matters in Transformers? Not All Attention is Needed Paper • 2406.15786 • Published Jun 22, 2024 • 32