EarthMind: Towards Multi-Granular and Multi-Sensor Earth Observation with Large Multimodal Models
Abstract
EarthMind is a vision-language framework that uses spatial attention prompting and cross-modal fusion for efficient multi-granular and multi-sensor Earth Observation data understanding, outperforming larger models on specialized benchmarks.
Large Multimodal Models (LMMs) have demonstrated strong performance in various vision-language tasks. However, they often struggle to comprehensively understand Earth Observation (EO) data, which is critical for monitoring the environment and the effects of human activity on it. In this work, we present EarthMind, a novel vision-language framework for multi-granular and multi-sensor EO data understanding. EarthMind features two core components: (1) Spatial Attention Prompting (SAP), which reallocates attention within the LLM to enhance pixel-level understanding; and (2) Cross-modal Fusion, which aligns heterogeneous modalities into a shared space and adaptively reweights tokens based on their information density for effective fusion. To facilitate multi-sensor fusion evaluation, we propose EarthMind-Bench, a comprehensive benchmark with over 2,000 human-annotated multi-sensor image-question pairs, covering a wide range of perception and reasoning tasks. Extensive experiments demonstrate the effectiveness of EarthMind. It achieves state-of-the-art performance on EarthMind-Bench, surpassing GPT-4o despite having only 4B parameters. Moreover, EarthMind outperforms existing methods on multiple public EO benchmarks, showcasing its potential to handle both multi-granular and multi-sensor challenges in a unified framework.
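The abstract names two mechanisms: attention reallocation toward spatially relevant tokens (SAP) and information-density-based token reweighting for multi-sensor fusion. The sketch below illustrates one plausible reading of both ideas in PyTorch. It is not the released EarthMind code: the module names, dimensions, sigmoid-based density scorer, and additive attention bias are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the two ideas described
# in the abstract: (1) biasing attention toward spatially relevant tokens, and
# (2) fusing optical + SAR tokens after reweighting them by an estimated
# "information density". All names, dimensions, and scoring choices are
# assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalFusion(nn.Module):
    """Projects two sensor streams into a shared space and reweights tokens."""

    def __init__(self, opt_dim: int, sar_dim: int, shared_dim: int = 256):
        super().__init__()
        self.opt_proj = nn.Linear(opt_dim, shared_dim)   # optical -> shared space
        self.sar_proj = nn.Linear(sar_dim, shared_dim)   # SAR -> shared space
        self.scorer = nn.Linear(shared_dim, 1)           # per-token density score

    def forward(self, opt_tokens: torch.Tensor, sar_tokens: torch.Tensor) -> torch.Tensor:
        # Align heterogeneous modalities into one shared embedding space.
        opt = self.opt_proj(opt_tokens)                  # (B, N_opt, D)
        sar = self.sar_proj(sar_tokens)                  # (B, N_sar, D)
        tokens = torch.cat([opt, sar], dim=1)            # (B, N_opt + N_sar, D)

        # Adaptive reweighting: tokens judged more informative get larger weight.
        weights = torch.sigmoid(self.scorer(tokens))     # (B, N, 1) in (0, 1)
        return tokens * weights                          # weighted tokens for the LLM


def spatial_attention_prompt(attn_logits: torch.Tensor,
                             region_mask: torch.Tensor,
                             boost: float = 2.0) -> torch.Tensor:
    """Toy stand-in for Spatial Attention Prompting: add a bias toward tokens
    inside a region of interest before the attention softmax."""
    # attn_logits: (B, heads, Q, K); region_mask: (B, K), 1.0 for in-region tokens.
    bias = boost * region_mask[:, None, None, :]         # broadcast over heads/queries
    return F.softmax(attn_logits + bias, dim=-1)


if __name__ == "__main__":
    fusion = CrossModalFusion(opt_dim=1024, sar_dim=768)
    fused = fusion(torch.randn(1, 196, 1024), torch.randn(1, 196, 768))
    print(fused.shape)  # torch.Size([1, 392, 256])

    region = torch.zeros(1, 392).index_fill_(1, torch.arange(10), 1.0)
    attn = spatial_attention_prompt(torch.randn(1, 8, 4, 392), region)
    print(attn.shape)   # torch.Size([1, 8, 4, 392])
```

Gating tokens with a learned per-token score is only one way to interpret "adaptively reweights tokens based on their information density"; consult the paper for the actual formulation of SAP and the fusion module.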
Community
The first Multi-granular and Multi-sensor LMM for Earth Observation.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- EarthGPT-X: Enabling MLLMs to Flexibly and Comprehensively Understand Multi-Source Remote Sensing Imagery (2025)
- TerraMind: Large-Scale Generative Multimodality for Earth Observation (2025)
- SegEarth-R1: Geospatial Pixel Reasoning via Large Language Model (2025)
- RemoteSAM: Towards Segment Anything for Earth Observation (2025)
- DisasterM3: A Remote Sensing Vision-Language Dataset for Disaster Damage Assessment and Response (2025)
- HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning (2025)
- Vision-Language Modeling Meets Remote Sensing: Models, Datasets and Perspectives (2025)