
Geolayers-Data

Sample Geographic Inputs with the USAVars Dataset

This dataset card contains usage instructions and metadata for all data products released with our paper:
Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery. We release modified versions of three benchmark datasets spanning land-cover segmentation, tree-cover regression, and multi-label land-cover classification tasks, each augmented with auxiliary geographic inputs. A full list of the contributed data products is shown in the table below.

| Dataset | Task Description | Multispectral Input | Model | Additional Data Layers | Size (compressed) | Size (uncompressed) | OOD Test Set Present? |
|---|---|---|---|---|---|---|---|
| SustainBench | Farmland boundary delineation | Sentinel-2 RGB | U-Net | OSM rasters, EU-DEM | 1.76 GB | 1.78 GB | ✗ |
| EnviroAtlas | Land-cover segmentation | NAIP RGB + NIR | FCN | Prior, OSM rasters | N/A | N/A | ✓ |
| BigEarthNet v2.0 | Land-cover classification | Sentinel-2 (10 bands) | ViT | SatCLIP embeddings | 120 GB (raw), 91 GB (H5) | 205 GB (raw), 259 GB (H5) | ✓ |
| USAVars | Tree-cover regression | NAIP RGB + NIR | ResNet-50 | OSM rasters | 23.56 GB | 167 GB | ✗ |

Usage Instructions

  • Download the .h5.gz files in data/<source dataset name>. Our source datasets are SustainBench, USAVars, and BigEarthNet v2.0. Each dataset and its augmented geographic inputs are detailed in the 📦 section below.
  • You may use pigz (https://linux.die.net/man/1/pigz) to decompress the archives with pigz -d <file>.h5.gz. This is especially recommended for the USAVars train split, which is 117 GB when uncompressed.
  • Datasets with auxiliary geographic inputs can be read with h5py, as in the sketch below.
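
For example, a minimal sketch of opening one of the released files with h5py and listing its contents (the path here is hypothetical; substitute any decompressed .h5 file from this repository):

import h5py

# Hypothetical path: substitute any decompressed .h5 file from this repository.
with h5py.File("data/usavars/train_split.h5", "r") as f:
    # Walk the file and print every group/dataset name to discover its layout.
    f.visit(print)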

Usage Instructions for the BigEarthNet v2.0 Dataset (Clasen et al., 2025)

We use the original BigEarthNet v2.0 dataset, processed with spatially-buffered train-test splits. We release two processed versions of the dataset introduced in Clasen et al. (2025). The first version is stored in the directory data/bigearthnet/raw/. Although called raw, this dataset is a pre-processed version of the raw BigEarthNet v2.0 release. We follow the instructions listed on this repository. Steps performed:

  1. We download the raw BigEarthNet-S2.tar.zst Sentinel-2 archive.
  2. We extract and process the raw S2 tiles into an LMDB (Lightning Memory-Mapped Database), which allows for faster reads during training. We use the rico-hdl tool to accomplish this; a sketch for inspecting the resulting database appears after the re-assembly command below.
  3. We download reference maps and Sentinel-2 tile metadata, along with snow- and cloud-cover rasters.
  4. The final dataset is compressed into several chunks and stored as data/bigearthnet/raw/bigearthnet.tar.gz.part-a<x>. Each chunk is 5 GB; there are 24 chunks in total.

To uncompress and re-assemble the compressed files in data/bigearthnet/raw/, download all the parts and run:

cat bigearthnet.tar.gz.part-* \
  | pigz -dc \
  | tar -xpf -
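
Once re-assembled and extracted, the LMDB produced in step 2 can be inspected with the Python lmdb package. A minimal sketch, with a hypothetical database path (the exact key/value encoding depends on the rico-hdl version, so this only lists entries):

import lmdb

# Hypothetical path to the extracted LMDB environment.
env = lmdb.open("data/bigearthnet/raw/BigEarthNet-S2.lmdb", readonly=True, lock=False)
with env.begin() as txn:
    for i, (key, value) in enumerate(txn.cursor()):
        print(key.decode(), f"({len(value)} bytes)")
        if i >= 4:  # peek at the first five entries only
            break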

Note that if you use this version of the dataset, SatCLIP embeddings need to be re-computed on the fly. To use the pre-computed SatCLIP embeddings instead, refer to the note below.

πŸ’‘ Do you want to try your own input fusion mechanism with BigEarthNetv2.0?

The second version of the BigEarthNet v2.0 dataset is stored in data/bigearthnet/ as three HDF5 files (.h5), one per split. This version of the processed dataset comes with (i) raw location coordinates and (ii) pre-computed SatCLIP embeddings (L=10, ResNet-50 image-encoder backbone). You may access these embeddings and the location metadata with the keys satclip_embedding and location, as in the sketch below.
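
A minimal read sketch with h5py (the filename is hypothetical; one .h5 file exists per split):

import h5py

# Hypothetical filename; substitute the actual per-split file in data/bigearthnet/.
with h5py.File("data/bigearthnet/train.h5", "r") as f:
    locations = f["location"][:]            # raw location coordinates per image
    embeddings = f["satclip_embedding"][:]  # 256-D SatCLIP embeddings (L=10, ResNet-50)
    print(locations.shape, embeddings.shape)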

Usage Instructions for the SustainBench Farmland Boundary Delineation Dataset (Yeh et al., 2021)

  1. Unzip the archive in data/sustainbench-field-boundary-delineation with unzip sustainbench.zip
  2. You should see a directory structure as follows:
dataset_release/
β”œβ”€β”€ id_augmented_test_split_with_osm_new.h5.gz.zip
β”œβ”€β”€ id_augmented_train_split_with_osm_new.h5.gz.zip
β”œβ”€β”€ id_augmented_val_split_with_osm_new.h5.gz.zip
β”œβ”€β”€ raw_id_augmented_test_split_with_osm_new.h5.gz.zip
β”œβ”€β”€ raw_id_augmented_train_split_with_osm_new.h5.gz.zip
└── raw_id_augmented_val_split_with_osm_new.h5.gz.zip
  3. Unzip all files using unzip and pigz -d <path to .h5.gz file>; a combined sketch follows this list. Two versions of the data are released: datasets whose names begin with id_augmented contain the SustainBench farmland boundary delineation dataset with the OSM and DEM rasters pre-processed into RGB space after application of a Gaussian blur; datasets whose names begin with raw_id_augmented contain the RGB imagery with 19 categorical OSM rasters and 1 DEM raster.
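
A minimal Python sketch equivalent to the unzip and pigz -d steps above (pigz will be considerably faster for the larger archives):

import gzip
import shutil
import zipfile
from pathlib import Path

root = Path("dataset_release")
# Unzip each .h5.gz.zip archive in place.
for z in root.glob("*.h5.gz.zip"):
    with zipfile.ZipFile(z) as zf:
        zf.extractall(root)
# Decompress each .h5.gz file to a plain .h5 file.
for gz in root.glob("*.h5.gz"):
    with gzip.open(gz, "rb") as src, open(gz.with_suffix(""), "wb") as dst:
        shutil.copyfileobj(src, dst)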

πŸ“¦ Datasets & Georeferenced Auxiliary Layers

SustainBench – Farmland Boundary Delineation

  • Optical input: Sentinel-2 RGB patches (224Γ—224 px, 10 m GSD) covering French cropland in 2017; β‰ˆ 1.6 k training images.
  • Auxiliary layers (all geo-aligned):
    • 19-channel OpenStreetMap (OSM) raster stack (roads, waterways, buildings, biome classes, …)
    • EU-DEM (20 m GSD, down-sampled to 10 m)
  • Why: OSM + DEM give an 8 % Dice boost when labels are scarce; gains appear once the training set drops below β‰ˆ 700 images.

EnviroAtlas – Land-Cover Segmentation

  • Optical input: NAIP 4-band RGB-NIR aerial imagery at 1 m resolution.
  • Auxiliary layers:
    • OSM rasters (roads, waterbodies, waterways)
    • Prior raster – a hand-crafted fusion of NLCD land-cover and OSM layers (PROC-STACK)
  • Splits: Train = Pittsburgh; OOD validation/test = Austin & Durham. Auxiliary layers raise OOD overall accuracy by ~4 pp without extra fine-tuning.

BigEarthNet v2.0 – Multi-Label Land-Cover Classification

  • Optical input: 10-band Sentinel-2 tile pairs; β‰ˆ 550 k patch/label pairs over 19 classes.
  • Auxiliary layer:
    • SatCLIP location embedding (256-D), one per image center, injected as an extra ViT token (TOKEN-FUSE); a sketch of this fusion appears after this list.
  • Splits: Grid-based; val/test tiles lie outside the training footprint (spatial OOD by design). SatCLIP token lifts macro-F1 by ~3 pp across all subset sizes.
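
A minimal PyTorch sketch of the TOKEN-FUSE idea (class and dimension names are hypothetical; the paper's implementation may differ in detail):

import torch
import torch.nn as nn

class SatClipTokenFuse(nn.Module):
    """Project a 256-D SatCLIP embedding to the ViT width and
    append it to the patch-token sequence as one extra token."""

    def __init__(self, satclip_dim: int = 256, vit_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(satclip_dim, vit_dim)

    def forward(self, tokens: torch.Tensor, satclip: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, vit_dim); satclip: (B, satclip_dim)
        loc_token = self.proj(satclip).unsqueeze(1)   # (B, 1, vit_dim)
        return torch.cat([tokens, loc_token], dim=1)  # (B, N+1, vit_dim)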

USAVars – Tree-Cover Regression

  • Optical input: NAIP RGB-NIR images (1 kmΒ² tiles); β‰ˆ 100 k samples with tree-cover % labels.
  • Auxiliary layers:
    • Extended OSM raster stack (roads, buildings, land-use, biome classes, …)
  • Notes: Stacking the OSM rasters with the image channels boosts R² by 0.16 in the low-data regime (< 250 images); the DEM is provided raw for flexibility. A sketch of this channel stacking follows.
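
A minimal PyTorch sketch of this channel stacking (the helper name is hypothetical): widen the first convolution of a ResNet-50 so OSM rasters can be concatenated with the 4 NAIP bands.

import torch
import torch.nn as nn
from torchvision.models import resnet50

def resnet50_with_stacked_inputs(n_osm_channels: int) -> nn.Module:
    """Replace ResNet-50's first conv so it accepts the 4 NAIP bands
    (RGB + NIR) plus n_osm_channels stacked OSM raster channels."""
    model = resnet50(weights=None)
    old = model.conv1
    model.conv1 = nn.Conv2d(4 + n_osm_channels, old.out_channels,
                            kernel_size=old.kernel_size, stride=old.stride,
                            padding=old.padding, bias=False)
    return model

# Usage: x = torch.cat([naip, osm], dim=1)  # (B, 4 + n_osm, H, W)
#        y = resnet50_with_stacked_inputs(osm.shape[1])(x)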

Citation:

@inproceedings{rao2025using,
  title={Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for {ML} with Satellite Imagery},
  author={Arjun Rao and Esther Rolf},
  booktitle={TerraBytes - ICML 2025 workshop},
  year={2025},
  url={https://openreview.net/forum?id=p5nSQMPUyo}
}
