MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research • arXiv:2503.13399 • Published Mar 17, 2025
Temporal Preference Optimization for Long-Form Video Understanding • arXiv:2501.13919 • Published Jan 23, 2025
Why are Visually-Grounded Language Models Bad at Image Classification? • arXiv:2405.18415 • Published May 28, 2024
Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation • arXiv:2501.03225 • Published Jan 6, 2025
BLIP3-KALE: Knowledge Augmented Large-Scale Dense Captions • arXiv:2411.07461 • Published Nov 12, 2024
xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations • arXiv:2408.12590 • Published Aug 22, 2024
Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness • arXiv:2407.01942 • Published Jul 2, 2024
xGen-MM (BLIP-3): A Family of Open Large Multimodal Models • arXiv:2408.08872 • Published Aug 16, 2024
Can large language models provide useful feedback on research papers? A large-scale empirical analysis • arXiv:2310.01783 • Published Oct 3, 2023
Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data • arXiv:2401.08567 • Published Jan 16, 2024
μ-Bench: A Vision-Language Benchmark for Microscopy Understanding • arXiv:2407.01791 • Published Jul 1, 2024
Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation • arXiv:2311.16201 • Published Nov 27, 2023