---
license: mit
task_categories:
  - table-question-answering
  - visual-question-answering
language:
  - en
tags:
  - geo-spatial
  - climate
  - multimodal
  - tabular
  - grid
size_categories:
  - 1K<n<10K
pretty_name: GeoGrid-Bench
configs:
  - config_name: benchmark
    data_files:
      - split: qa_data
        path: qa_data.csv
---

GeoGrid-Bench: Can Foundation Models Understand Multimodal Gridded Geo-Spatial Data?

Paper | GitHub

We present GeoGrid-Bench, a benchmark designed to evaluate the ability of foundation models to understand grid-structured geo-spatial data. Geo-spatial datasets pose distinct challenges due to their dense numerical values, strong spatial and temporal dependencies, and unique multimodal representations including tabular data, heatmaps, and geographic visualizations. To assess how foundation models can support scientific research in this domain, GeoGrid-Bench features large-scale, real-world data covering 16 climate variables across 150 locations and extended time frames. The benchmark includes approximately 3k question-answer pairs, systematically generated from 8 templates curated by domain experts to reflect practical tasks encountered by human scientists. These range from basic queries at a single location and time to complex spatiotemporal comparisons across regions and periods. Our evaluation reveals that vision-language models perform best overall, and we provide a fine-grained analysis of the strengths and limitations of different foundation models on different geo-spatial tasks. This benchmark offers clearer insights into how foundation models can be effectively applied to geo-spatial data analysis and used to support scientific research.

This work is a collaboration between Argonne National Laboratory and the University of Pennsylvania. For questions or feedback, please submit an issue or contact [email protected].

📊 Benchmark Data

We provide the inference scripts in our GitHub repository. We also release the benchmark data here on 🤗 Hugging Face, including question-answer pairs, corresponding images, and other metadata. Please download the image_data/ folder and the qa_data.csv file and place them under the data/benchmark/ directory.
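If you prefer to fetch the files programmatically, here is a minimal sketch using huggingface_hub; the repository id below is an assumption based on this dataset page, so adjust it if needed.

```python
# Download qa_data.csv and the image_data/ folder into data/benchmark/.
# The repo_id is an assumption inferred from this dataset page.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bowen-upenn/GeoGrid_Bench",
    repo_type="dataset",
    allow_patterns=["qa_data.csv", "image_data/*"],
    local_dir="data/benchmark",
)
```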


๐Ÿฐ Citation

If you find our work inspiring, please consider citing it. Thank you!

@article{jiang2025geogrid,
  title={GeoGrid-Bench: Can Foundation Models Understand Multimodal Gridded Geo-Spatial Data?},
  author={Jiang, Bowen and Xie, Yangxinyu and Wang, Xiaomeng and He, Jiashu and Bergerson, Joshua and Hutchison, John K and Branham, Jordan and Taylor, Camillo J and Mallick, Tanwi},
  journal={arXiv preprint arXiv:2505.10714},
  year={2025}
}

🔗 Dependencies

We use a Conda environment. Run the following commands to create the environment and install all requirements:

conda create -n geospatial python=3.9
conda activate geospatial
pip install -r requirements.txt

🚀 Running Inference on Benchmark Data

Step 1 - Credential Setup

Before you begin, create a new folder named api_tokens/ in the root directory. This folder will store the credentials required to run the models.

🔸 If you are using internal OpenAI models from Argonne National Laboratory:

  1. Inside the api_tokens/ folder, create the following two files:
  • model_url.txt – containing the base URL of the internal OpenAI model endpoint
  • user_name.txt – containing your Argonne username

🔸 If you are running open-source Llama models from Hugging Face:

  1. Log in with a Hugging Face token that has gated-access permission (a Python alternative to the CLI login is sketched after this list):
huggingface-cli login
  2. Request access to the gated Llama models on Hugging Face.
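If you prefer to authenticate from Python rather than the CLI, here is a minimal sketch using the huggingface_hub package; the token string is a placeholder.

```python
# Programmatic alternative to `huggingface-cli login`.
# Replace the placeholder with a token that has access to the gated Llama repos.
from huggingface_hub import login

login(token="hf_...")  # placeholder token
```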

🔸 Otherwise, for public API providers, follow these steps:

  1. Create API keys from the respective providers if you haven't already.

  2. Inside the api_tokens/ folder, create the following text files and paste your API key as plain text into each (a small setup sketch follows this list):

  • openai_key.txt – for OpenAI models
  • gemini_key.txt – for Google Gemini models
  • claude_key.txt – for Anthropic Claude models
  • lambda_key.txt – for models accessed via the Lambda Cloud API (e.g., Llama models)
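For the public-provider case, here is a minimal sketch that creates the expected api_tokens/ layout; the placeholder strings are assumptions and must be replaced with your real keys.

```python
from pathlib import Path

# Create api_tokens/ in the repo root and write one key file per provider.
# Replace the placeholder strings with your actual API keys.
api_dir = Path("api_tokens")
api_dir.mkdir(exist_ok=True)

placeholders = {
    "openai_key.txt": "sk-...",      # OpenAI
    "gemini_key.txt": "AIza...",     # Google Gemini
    "claude_key.txt": "sk-ant-...",  # Anthropic Claude
    "lambda_key.txt": "secret-...",  # Lambda Cloud API
}
for filename, placeholder in placeholders.items():
    (api_dir / filename).write_text(placeholder + "\n")
```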

Step 2 - Running Inference Scripts

We provide ready-to-use inference scripts in the scripts/ directory for evaluating the following models (a quick way to list the scripts in your local checkout is sketched after this list):

  • OpenAI Models
    • o4-mini: run_inference_o4_mini.sh
    • GPT-4.1: run_inference_gpt_4p1.sh
    • GPT-4.1-mini: run_inference_gpt_4p1_mini.sh
    • GPT-4o: run_inference_gpt_4o.sh
    • GPT-4o-mini: run_inference_gpt_4o_mini.sh
  • Meta Llama Models
    • Llama-4-Maverick: run_inference_llama4_maverick.sh
    • Llama-4-Scout: run_inference_llama4_scout.sh
    • Llama-3.2-11B-Vision: run_inference_llama_3p2_11b_vision.sh
    • Llama-3.2-3B: run_inference_llama_3p2_3b.sh
    • Llama-3.1-8B: run_inference_llama_3p1_8b.sh
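To check which of these scripts exist in your local checkout, a quick sketch:

```python
from pathlib import Path

# List all inference scripts shipped in the scripts/ directory.
for script in sorted(Path("scripts").glob("run_inference_*.sh")):
    print(script.name)
```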

🔮 To run evaluation for a specific model, simply execute the corresponding script. For example:

bash scripts/run_inference_gpt4o.sh [MODALITY] [START_IDX] [END_IDX]
  • [MODALITY] can be one of the following: text, code, or image.

    💡 Note that the image modality is only available for GPT-4o, GPT-4o-mini, o1, GPT-4.5-Preview, Claude-3.5-Haiku, Claude-3.7-Sonnet, Gemini-2.0-Flash, Gemini-1.5-Flash, Llama-4-Maverick, Llama-4-Scout, Llama-3.2-90B-Vision, and Llama-3.2-11B-Vision.

  • [START_IDX] and [END_IDX] define the range of question indices for inference. The script runs inference starting at [START_IDX] and ending just before [END_IDX] (exclusive); for example, 0 and 100 evaluate questions 0 through 99.

    💡 Note that setting [END_IDX] to -1 runs inference from [START_IDX] through the end of the dataset. If you set [START_IDX] to 0, the script deletes the existing result file and starts fresh; otherwise, it appends new evaluation results to the existing file. (A batch-run sketch using these arguments follows this list.)

  • If you are using internal OpenAI models accessed via a URL, add use_url to the command line. For example:

bash scripts/run_inference_gpt4o.sh text 0 -1 use_url
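To sweep all modalities for one model without retyping the command, here is a minimal sketch that wraps the script above with subprocess; the script path mirrors the example and should be swapped for the script of the model you are evaluating.

```python
import subprocess

# Run the same inference script over each modality on the full dataset
# (START_IDX = 0, END_IDX = -1). Adjust SCRIPT to the model you want.
SCRIPT = "scripts/run_inference_gpt4o.sh"
for modality in ["text", "code", "image"]:
    subprocess.run(["bash", SCRIPT, modality, "0", "-1"], check=True)
```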

We provide a complete checklist of all scripts used in our benchmark.

Step 3 - Saving Inference Results

Evaluation results will be automatically saved in the result/ directory, with filenames that include both the model name and the data modality for easy identification. For example, eval_results_gpt-4o_text.json.
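The exact record schema is defined by the inference scripts, so the sketch below only loads a saved result file and reports how many entries it contains; the filename follows the example above.

```python
import json
from pathlib import Path

# Load one result file and report the number of saved records.
result_path = Path("result/eval_results_gpt-4o_text.json")
with result_path.open() as f:
    results = json.load(f)
print(f"{result_path.name}: {len(results)} records")
```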

🧩 All Template Questions

(Figure: the eight question templates used in GeoGrid-Bench.)

💬 Interaction Overview

(Figure: interaction overview.)