
# 🌍 EarthDial-Dataset

The EarthDial-Dataset is a curated collection of evaluation-only datasets focused on remote sensing and Earth observation downstream tasks. It is designed to benchmark vision-language models (VLMs) and multimodal reasoning systems on real-world scenarios involving satellite and aerial imagery.


## 📚 Key Features

  • Evaluation-focused: All datasets are for inference/testing only; no train/val splits are provided.
  • Diverse Tasks (see the discovery snippet after this list):
    • Classification
    • Object Detection
    • Change Detection
    • Grounding Description
    • Region Captioning
    • Image Captioning
    • Visual Question Answering (GeoChat Bench)
  • Remote Sensing Specific: Tailored for multispectral, RGB, and high-resolution satellite data.
  • Multimodal Format: Includes images, questions, captions, annotations, and geospatial metadata.
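
Because each task maps to a subdirectory under the dataset root, the available subsets can be enumerated programmatically. A minimal sketch using `huggingface_hub` (the path-filtering logic here is illustrative, not part of the dataset's own tooling):

```python
from huggingface_hub import list_repo_files

# Enumerate every file in the dataset repo and collect the task folders.
files = list_repo_files("akshaydudhane/EarthDial-Dataset", repo_type="dataset")
tasks = sorted({
    path.split("/")[1]
    for path in files
    if path.startswith("EarthDial_downstream_task_datasets/")
})
print(tasks)  # e.g. ['Classification', 'Detection', 'Image_captioning', ...]
```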

πŸ—‚οΈ Dataset Structure

The dataset is organized under the root folder `EarthDial_downstream_task_datasets/`.

Each task has its own subdirectory containing `.arrow`-formatted shards, structured as:

```
EarthDial_downstream_task_datasets/
│
├── Classification/
│   ├── AID/
│   │   └── test/data-00000-of-00001.arrow
│   └── ...
│
├── Detection/
│   ├── NWPU_VHR_10_test/
│   ├── Swimming_pool_dataset_test/
│   └── ...
│
├── Region_captioning/
│   └── NWPU_VHR_10_test_region_captioning/
│
├── Image_captioning/
│   ├── RSICD_Captions/
│   └── UCM_Captions/
└── ...
```
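
Given this layout, an individual shard can also be loaded straight from a local copy of the repository. A minimal sketch, assuming the repo has already been downloaded (e.g. via `huggingface-cli download`) and using the AID path from the tree above:

```python
from datasets import Dataset

# Load a single locally downloaded .arrow shard directly.
shard = Dataset.from_file(
    "EarthDial_downstream_task_datasets/Classification/AID/test/data-00000-of-00001.arrow"
)
print(shard)  # reports the column names and number of rows
```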

## 🗂️ Example Data Usage

```python
from datasets import load_dataset

dataset = load_dataset(
    "akshaydudhane/EarthDial-Dataset",
    data_dir="EarthDial_downstream_task_datasets/Classification/AID/test"
)
```
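
Once loaded, inspect the result before relying on any particular schema. A short sketch (the `"train"` split name is what `load_dataset` typically assigns to a bare `data_dir`, and the column names should be taken from the printed schema rather than assumed):

```python
# Show the splits and columns that were materialized.
print(dataset)

# Grab the first record and list its fields.
example = dataset["train"][0]
print(example.keys())
```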

## Example Demo Usage

Save the following script as `demo_infer.py`:

```python
import argparse

import torch
from PIL import Image
from transformers import AutoTokenizer

from earthdial.model.internvl_chat import InternVLChatModel
from earthdial.train.dataset import build_transform


def run_single_inference(args):
    # Load model and tokenizer from the Hugging Face Hub
    print(f"Loading model and tokenizer from Hugging Face: {args.checkpoint}")
    tokenizer = AutoTokenizer.from_pretrained(args.checkpoint, trust_remote_code=True, use_fast=False)
    model = InternVLChatModel.from_pretrained(
        args.checkpoint,
        low_cpu_mem_usage=True,
        torch_dtype=torch.bfloat16,
        device_map="auto" if args.auto else None,
        load_in_8bit=args.load_in_8bit,
        load_in_4bit=args.load_in_4bit,
    ).eval()

    # Quantized or device-mapped models are already placed on the GPU
    if not args.load_in_8bit and not args.load_in_4bit and not args.auto:
        model = model.cuda()

    # Load and preprocess the image
    image = Image.open(args.image_path).convert("RGB")
    image_size = model.config.force_image_size or model.config.vision_config.image_size
    transform = build_transform(is_train=False, input_size=image_size, normalize_type='imagenet')
    pixel_values = transform(image).unsqueeze(0).cuda().to(torch.bfloat16)

    # Generate the answer
    generation_config = {
        "num_beams": args.num_beams,
        "max_new_tokens": 100,
        "min_new_tokens": 1,
        "do_sample": args.temperature > 0,
        "temperature": args.temperature,
    }

    answer = model.chat(
        tokenizer=tokenizer,
        pixel_values=pixel_values,
        question=args.question,
        generation_config=generation_config,
        verbose=True,
    )

    print("\n=== Inference Result ===")
    print(f"Question: {args.question}")
    print(f"Answer: {answer}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--checkpoint', type=str, required=True, help='Model repo ID on the Hugging Face Hub')
    parser.add_argument('--image-path', type=str, required=True, help='Path to a local input image')
    parser.add_argument('--question', type=str, required=True, help='Question to ask about the image')
    parser.add_argument('--num-beams', type=int, default=5)
    parser.add_argument('--temperature', type=float, default=0.0)
    parser.add_argument('--load-in-8bit', action='store_true')
    parser.add_argument('--load-in-4bit', action='store_true')
    parser.add_argument('--auto', action='store_true')

    args = parser.parse_args()
    run_single_inference(args)
```



Then run it from the command line:

```bash
python demo_infer.py \
  --checkpoint akshaydudhane/EarthDial_4B_RGB \
  --image-path ./test.jpg \
  --question "Which road has more vehicles?" \
  --auto
```
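
On GPUs with limited memory, the quantization flags defined in the script can be used instead of full-precision loading. For example, a 4-bit run (a sketch, assuming `bitsandbytes` is installed):

```bash
python demo_infer.py \
  --checkpoint akshaydudhane/EarthDial_4B_RGB \
  --image-path ./test.jpg \
  --question "Which road has more vehicles?" \
  --load-in-4bit
```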