Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models Paper • 2504.15271 • Published 1 day ago • 50
InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models Paper • 2504.10479 • Published 8 days ago • 239
Token-Efficient Long Video Understanding for Multimodal LLMs Paper • 2503.04130 • Published Mar 6 • 94
FB-BEV: BEV Representation from Forward-Backward View Transformations Paper • 2308.02236 • Published Aug 4, 2023
Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers Paper • 2109.03814 • Published Sep 8, 2021
Efficient Deformable ConvNets: Rethinking Dynamic and Sparse Operator for Vision Applications Paper • 2401.06197 • Published Jan 11, 2024 • 1
DriveMLM: Aligning Multi-Modal Large Language Models with Behavioral Planning States for Autonomous Driving Paper • 2312.09245 • Published Dec 14, 2023
Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding Paper • 2403.09626 • Published Mar 14, 2024 • 15
InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions Paper • 2211.05778 • Published Nov 10, 2022
Driving with InternVL: Outstanding Champion in the Track on Driving with Language of the Autonomous Grand Challenge at CVPR 2024 Paper • 2412.07247 • Published Dec 10, 2024
Eagle 2: Building Post-Training Data Strategies from Scratch for Frontier Vision-Language Models Paper • 2501.14818 • Published Jan 20 • 5