---
pretty_name: Geolayers
language: en
language_creators:
- found
license: cc-by-4.0
multilinguality: monolingual
size_categories:
- 10K
---
This dataset card contains usage instructions and metadata for all data-products released with our paper:
*Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery.* We release modified versions of 3 benchmark datasets spanning land-cover segmentation, tree-cover regression, and multi-label land-cover classification tasks. These datasets are augmented with auxiliary geographic inputs. A full list of contributed data products appears in the sections below.
## Usage Instructions
* Download the `.h5.gz` files in `data/`. Our source datasets are SustainBench, USAVars, and BigEarthNet v2.0. Each dataset, together with its augmented geographic inputs, is detailed in [this section 📦](#-datasets--georeferenced-auxiliary-layers).
* You may use [pigz](https://linux.die.net/man/1/pigz) to decompress the archives; this is especially recommended for the USAVars train split, which is 117 GB when uncompressed. Decompress with `pigz -d <file.h5.gz>`.
* Datasets with auxiliary geographic inputs can be read with `h5py`.
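As a minimal sketch of reading a split with `h5py` (the file name, key names, and shapes below are placeholders, not the keys of any specific release; the snippet writes a tiny stand-in file first so it is self-contained):

```python
import h5py
import numpy as np

# Create a tiny stand-in file. The real releases ship ready-made .h5 files
# whose key names differ per dataset; inspect f.keys() to see what a split
# actually contains.
with h5py.File("demo.h5", "w") as f:
    f.create_dataset("images", data=np.zeros((4, 3, 224, 224), dtype=np.uint8))
    f.create_dataset("labels", data=np.zeros((4,), dtype=np.int64))

# Reading: h5py opens lazily, so slicing pulls only the requested samples
# into memory — useful for the larger splits.
with h5py.File("demo.h5", "r") as f:
    print(list(f.keys()))       # inspect available keys
    batch = f["images"][:2]     # reads just the first two samples
    print(batch.shape)
```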
### Usage Instructions for the BigEarthNetv2.0 Dataset (Clasen et al. (2025))
We use the original [BigEarthNetv2.0](https://bigearth.net/) dataset, which is processed with spatially-buffered train-test splits. We release two **processed** versions of the dataset introduced in Clasen et al. (2025).
The first version is stored in `data/bigearthnet/raw/`. This dataset, although called `raw`, is a pre-processed version of the raw BigEarthNetv2.0 dataset. We follow the instructions listed in [this repository](https://git.tu-berlin.de/rsim/reben-training-scripts/-/tree/main?ref_type=heads#data). Steps performed:
1. We download the raw `BigEarthNet-S2.tar.zst` Sentinel-2 BigEarthNet dataset.
2. We extract and process the raw S2 tiles into an LMDB (Lightning Memory-Mapped Database) store, which allows for faster reads during training. We use the [rico-hdl](https://github.com/kai-tub/rico-hdl) tool to accomplish this.
3. We download the reference maps and Sentinel-2 tile metadata, along with snow- and cloud-cover rasters.
4. The final dataset is compressed into 24 chunks of 5 GB each, stored as `data/bigearthnet/raw/bigearthnet.tar.gz.part-*`.
To uncompress and re-assemble the compressed files in `data/bigearthnet/raw/`, download all the parts and run:
```
cat bigearthnet.tar.gz.part-* \
| pigz -dc \
| tar -xpf -
```
Note that if this version of the dataset is used, SatCLIP embeddings need to be computed on the fly. To use the pre-computed SatCLIP embeddings instead, refer to the note below.
#### 💡 Do you want to try your own input fusion mechanism with BigEarthNetv2.0?
The second version of the BigEarthNetv2.0 dataset is stored in `data/bigearthnet/`. It is stored as three HDF5 files (`.h5`), one per split.
This version of the processed dataset comes with (i) raw location coordinates, and (ii) pre-computed SatCLIP embeddings (L=10, ResNet50 image-encoder backbone).
You may access these embeddings and location metadata with keys `location` and `satclip_embedding`.
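A minimal sketch of reading these keys (the file below is a synthetic stand-in with made-up sample counts; the real split files in `data/bigearthnet/` expose the same `location` and `satclip_embedding` keys, with 256-D embeddings as described in the layers section below):

```python
import h5py
import numpy as np

# Stand-in file mirroring the released layout: per-sample coordinates and
# 256-D SatCLIP embeddings. The path and the N=8 sample count are
# illustrative only.
with h5py.File("bigearthnet_demo.h5", "w") as f:
    f.create_dataset("location", data=np.random.rand(8, 2).astype(np.float32))
    f.create_dataset(
        "satclip_embedding", data=np.random.rand(8, 256).astype(np.float32)
    )

with h5py.File("bigearthnet_demo.h5", "r") as f:
    locs = f["location"][:]           # (N, 2) raw location coordinates
    emb = f["satclip_embedding"][:]   # (N, 256) pre-computed SatCLIP
    print(locs.shape, emb.shape)
```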
### Usage Instructions for the SustainBench Farmland Boundary Delineation Dataset (Yeh et al. (2021))
1. Unzip the archive in `data/sustainbench-field-boundary-delineation` with `unzip sustainbench.zip`
2. You should see a directory structure as follows:
```
dataset_release/
├── id_augmented_test_split_with_osm_new.h5.gz.zip
├── id_augmented_train_split_with_osm_new.h5.gz.zip
├── id_augmented_val_split_with_osm_new.h5.gz.zip
├── raw_id_augmented_test_split_with_osm_new.h5.gz.zip
├── raw_id_augmented_train_split_with_osm_new.h5.gz.zip
└── raw_id_augmented_val_split_with_osm_new.h5.gz.zip
```
3. Decompress each file with `unzip`, then with `pigz -d`, to obtain the `.h5` datasets.
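The second decompression stage can be scripted; the sketch below creates a stand-in `.h5.gz` so it runs anywhere, and uses `gzip -d`, for which `pigz -d` is a faster drop-in on the real multi-gigabyte splits:

```shell
set -e
# Stand-in for one split; in practice these are the *_with_osm_new.h5.gz
# files produced by running `unzip` on the archives listed above.
printf 'demo-bytes' > sample_split.h5
gzip sample_split.h5                 # -> sample_split.h5.gz

for f in *.h5.gz; do
  gzip -d -f "$f"                    # use `pigz -d "$f"` in practice
done
test -f sample_split.h5
```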
There are two versions of the data released. Datasets that begin with `id_augmented` contain the SustainBench farmland boundary delineation dataset with the OSM and DEM rasters pre-processed into RGB space after applying a Gaussian blur. Datasets that begin with `raw_id_augmented` contain the RGB imagery together with 19 categorical OSM rasters and 1 raster for the DEM geographic input.
## 📦 Datasets & Georeferenced Auxiliary Layers
### SustainBench – Farmland Boundary Delineation
* **Optical input:** Sentinel-2 RGB patches (224×224 px, 10 m GSD) covering French cropland in 2017; ≈ 1.6 k training images.
* **Auxiliary layers (all geo-aligned):**
* 19-channel OpenStreetMap (OSM) raster stack (roads, waterways, buildings, biome classes, …)
* EU-DEM (20 m GSD, down-sampled to 10 m)
* **Why:** OSM + DEM give an 8 % Dice boost when labels are scarce; gains appear once the training set drops below ≈ 700 images.
---
### EnviroAtlas – Land-Cover Segmentation
* **Optical input:** NAIP 4-band RGB-NIR aerial imagery at 1 m resolution.
* **Auxiliary layers:**
* OSM rasters (roads, waterbodies, waterways)
* **Prior** raster – a hand-crafted fusion of NLCD land-cover and OSM layers (PROC-STACK)
* **Splits:** Train = Pittsburgh; OOD validation/test = Austin & Durham. Auxiliary layers raise OOD overall accuracy by ~4 pp without extra fine-tuning.
---
### BigEarthNet v2.0 – Multi-Label Land-Cover Classification
* **Optical input:** 10-band Sentinel-2 tile pairs; ≈ 550 k patch/label pairs over 19 classes.
* **Auxiliary layer:**
* **SatCLIP** location embedding (256-D), one per image center, injected as an extra ViT token (TOKEN-FUSE).
* **Splits:** Grid-based; val/test tiles lie outside the training footprint (spatial OOD by design). SatCLIP token lifts macro-F1 by ~3 pp across *all* subset sizes.
---
### USAVars – Tree-Cover Regression
* **Optical input:** NAIP RGB-NIR images (1 km² tiles); ≈ 100 k samples with tree-cover % labels.
* **Auxiliary layers:**
* Extended OSM raster stack (roads, buildings, land-use, biome classes, …)
* **Notes:** Stacking the OSM raster boosts R² by 0.16 in the low-data regime (< 250 images); DEM is provided raw for flexibility.
## Citation
```
@inproceedings{rao2025using,
title={Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for {ML} with Satellite Imagery},
author={Arjun Rao and Esther Rolf},
booktitle={TerraBytes - ICML 2025 workshop},
year={2025},
url={https://openreview.net/forum?id=p5nSQMPUyo}
}
```
---