|
---
pretty_name: Geolayers
language: en
language_creators:
- found
license: cc-by-4.0
multilinguality: monolingual
size_categories:
- 10K<n<100K
task_categories:
- image-classification
- image-segmentation
source_datasets:
- SustainBench
- USAVars
- BigEarthNetv2.0
- EnviroAtlas
homepage: https://huggingface.co/datasets/arjunrao2000/geolayers
repository: https://huggingface.co/datasets/arjunrao2000/geolayers
download_size: 25570000000
tags:
- climate
- remote-sensing
data_files:
- "huggingface_preview/**/*.jpg"
- "metadata.csv"
preview:
  path: metadata.csv
  images:
  - rgb
  - osm
  - dem
  - mask
configs:
- config_name: benchmark
  data_files:
  - split: sustainbench
    path: metadata.csv
---
|
|
|
|
|
# Geolayers-Data |
|
<img src="osm_usavars.png" alt="Sample Geographic Inputs with the USAVars Dataset" width="800"/> |
|
|
|
|
This dataset card contains usage instructions and metadata for all data products released with our paper:

*Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery.* We release modified versions of benchmark datasets spanning farmland boundary delineation, land-cover segmentation, tree-cover regression, and multi-label land-cover classification, each augmented with auxiliary geographic inputs. The full list of contributed data products is shown in the table below.
|
|
|
<table> |
|
<thead> |
|
<tr> |
|
<th>Dataset</th> |
|
<th>Task Description</th> |
|
<th>Multispectral Input</th> |
|
<th>Model</th> |
|
<th>Additional Data Layers</th> |
|
<th colspan="2">Dataset Size</th> |
|
<th>OOD Test Set Present?</th> |
|
</tr> |
|
<tr> |
|
<th></th> |
|
<th></th> |
|
<th></th> |
|
<th></th> |
|
<th></th> |
|
<th>Compressed</th> |
|
<th>Uncompressed</th> |
|
<th></th> |
|
</tr> |
|
</thead> |
|
<tbody> |
|
<tr> |
|
<td><a href="https://arxiv.org/abs/2111.04724">SustainBench</a></td> |
|
<td>Farmland boundary delineation</td> |
|
<td>Sentinel-2 RGB</td> |
|
<td>U-Net</td> |
|
<td>OSM rasters, EU-DEM</td> |
|
<td>1.76 GB</td> |
|
<td>1.78 GB</td> |
|
<td>✓</td>
|
</tr> |
|
<tr> |
|
<td><a href="https://arxiv.org/abs/2202.14000">EnviroAtlas</a></td> |
|
<td>Land-cover segmentation</td> |
|
<td>NAIP RGB + NIR</td> |
|
<td>FCN</td> |
|
<td><a href="https://arxiv.org/abs/2202.14000">Prior</a>, OSM rasters</td> |
|
<td>N/A</td> |
|
<td>N/A</td> |
|
<td>✓</td>
|
</tr> |
|
<tr> |
|
<td><a href="https://bigearth.net/static/documents/Description_BigEarthNet_v2.pdf">BigEarthNet v2.0</a></td> |
|
<td>Land-cover classification</td> |
|
<td>Sentinel-2 (10 bands)</td> |
|
<td>ViT</td> |
|
<td><a href="https://arxiv.org/abs/2311.17179">SatCLIP</a> embeddings</td> |
|
<td>120 GB (raw), 91 GB (H5)</td> |
|
<td>205 GB (raw), 259 GB (H5) </td> |
|
<td>✓</td>
|
</tr> |
|
<tr> |
|
<td><a href="https://arxiv.org/abs/2010.08168">USAVars</a></td> |
|
<td>Tree-cover regression</td> |
|
<td>NAIP RGB + NIR</td> |
|
<td>ResNet-50</td> |
|
<td>OSM rasters</td> |
|
<td> 23.56 GB </td> |
|
<td> 167 GB</td> |
|
<td>✓</td>
|
</tr> |
|
</tbody> |
|
</table> |
|
|
|
|
|
## Usage Instructions |
|
* Download the `.h5.gz` files in `data/<source dataset name>`. Our source datasets include SustainBench, USAVars, and BigEarthNetv2.0. Each dataset and its augmented geographic inputs are detailed in [this section 📦](#geolayersused).
|
* You may use [pigz](https://linux.die.net/man/1/pigz) to decompress the archives with `pigz -d <file>.h5.gz`. This is especially recommended for the USAVars train split, which is 117 GB when uncompressed.
|
* Datasets with auxiliary geographic inputs can be read with [h5py](https://www.h5py.org/); see the sketch below.
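As a minimal sketch of reading one of these files (the file path is a hypothetical placeholder, and the exact key names vary by dataset, so the snippet just lists whatever is present):

```
import h5py

# Hypothetical path: point this at any decompressed .h5 split file.
path = "data/usavars/train.h5"

def describe(name, obj):
    # Print each dataset's name, shape, and dtype
    # (imagery, auxiliary geographic rasters, labels, ...).
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)

with h5py.File(path, "r") as f:
    f.visititems(describe)
```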
|
|
|
### Usage Instructions for the BigEarthNetv2.0 Dataset (Clasen et al., 2025)
|
We use the original [BigEarthNetv2.0](https://bigearth.net/) dataset, which ships with spatially-buffered train-test splits. We release two **processed** versions of the dataset introduced in Clasen et al. (2025).
|
The first version is stored in the directory `data/bigearthnet/raw/`. This dataset, although called `raw`, is a pre-processed version of the raw BigEarthNetv2.0 dataset. We follow the instructions listed in [this repository](https://git.tu-berlin.de/rsim/reben-training-scripts/-/tree/main?ref_type=heads#data). Steps performed:
|
1. We download the raw Sentinel-2 archive `BigEarthNet-S2.tar.zst`.

2. We extract and process the raw S2 tiles into an LMDB (Lightning Memory-Mapped Database) database, which allows for faster reads during training. We use the [rico-hdl](https://github.com/kai-tub/rico-hdl) tool to accomplish this.

3. We download the reference maps and Sentinel-2 tile metadata, along with the snow- and cloud-cover rasters.

4. The final dataset is compressed into several chunks and stored as `data/bigearthnet/raw/bigearthnet.tar.gz.part-a<x>`. Each chunk is 5 GB; there are 24 chunks in total.
|
|
|
To uncompress and re-assemble the compressed files in `data/bigearthnet/raw/`, download all the parts and run: |
|
``` |
|
cat bigearthnet.tar.gz.part-* \ |
|
| pigz -dc \ |
|
| tar -xpf - |
|
``` |
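After extraction, the LMDB database written by rico-hdl can be opened with the `lmdb` Python package. A minimal sketch, assuming the extracted archive contains an LMDB directory (the path is a placeholder, and the value encoding is defined by rico-hdl):

```
import lmdb

# Placeholder path: the LMDB directory produced by rico-hdl.
env = lmdb.open("data/bigearthnet/raw/lmdb", readonly=True, lock=False)

with env.begin() as txn:
    # Each entry maps a patch identifier to an encoded raster blob.
    for key, value in txn.cursor():
        print(key.decode(), len(value), "bytes")
        break  # inspect only the first entry
```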
|
|
|
Note that if this version of the dataset is used, SatCLIP embeddings need to be re-computed on the fly. To use the dataset with pre-computed SatCLIP embeddings, refer to the section below.
|
|
|
#### 💡 Do you want to try your own input fusion mechanism with BigEarthNetv2.0?
|
The second version of the BigEarthNetv2.0 dataset is stored in `data/bigearthnet/` as three H5PY (`.h5`) files, one per split.

This version of the processed dataset comes with (i) raw location coordinates, and (ii) pre-computed SatCLIP embeddings (L=10, ResNet50 image-encoder backbone).

You may access the location metadata and embeddings with the keys `location` and `satclip_embedding`, as in the sketch below.
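A minimal sketch of reading these fields (the split filename is a placeholder; the `location` and `satclip_embedding` keys are as documented above):

```
import h5py

# Placeholder filename for one of the three per-split .h5 files.
with h5py.File("data/bigearthnet/train.h5", "r") as f:
    locations = f["location"][:8]            # raw location coordinates
    embeddings = f["satclip_embedding"][:8]  # pre-computed SatCLIP embeddings
    print(locations.shape, embeddings.shape)
```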
|
|
|
### Usage Instructions for the SustainBench Farmland Boundary Delineation Dataset (Yeh et al., 2021)
|
1. Unzip the archive in `data/sustainbench-field-boundary-delineation` with `unzip sustainbench.zip` |
|
2. You should see a directory structure as follows: |
|
``` |
|
dataset_release/ |
|
βββ id_augmented_test_split_with_osm_new.h5.gz.zip |
|
βββ id_augmented_train_split_with_osm_new.h5.gz.zip |
|
βββ id_augmented_val_split_with_osm_new.h5.gz.zip |
|
βββ raw_id_augmented_test_split_with_osm_new.h5.gz.zip |
|
βββ raw_id_augmented_train_split_with_osm_new.h5.gz.zip |
|
βββ raw_id_augmented_val_split_with_osm_new.h5.gz.zip |
|
``` |
|
3. Unzip all files using `unzip`, then decompress each archive with `pigz -d <path to .h5.gz file>`.
|
There are two versions of the data released: files beginning with `id_augmented` contain the SustainBench farmland boundary delineation dataset with the OSM and DEM rasters pre-processed into RGB space (after applying a Gaussian blur). Files beginning with `raw_id_augmented` contain the RGB imagery together with 19 categorical OSM rasters and 1 DEM raster.
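For the `raw_id_augmented` files, the auxiliary rasters can be stacked with the RGB imagery into a single multi-channel input for early fusion. A minimal numpy sketch, assuming channel-first arrays and illustrative key names (`rgb`, `osm`, `dem`; check the actual keys with `f.keys()`):

```
import h5py
import numpy as np

with h5py.File("raw_id_augmented_train_split_with_osm_new.h5", "r") as f:
    rgb = f["rgb"][0]  # (3, H, W)  key names here are illustrative
    osm = f["osm"][0]  # (19, H, W) categorical OSM rasters
    dem = f["dem"][0]  # (1, H, W)  DEM raster

# Early fusion: concatenate along the channel axis -> (23, H, W)
x = np.concatenate([rgb, osm, dem], axis=0)
print(x.shape)
```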
|
|
|
## 📦 <a name="geolayersused"></a> Datasets & Georeferenced Auxiliary Layers
|
### SustainBench β Farmland Boundary Delineation |
|
* **Optical input:** Sentinel-2 RGB patches (224×224 px, 10 m GSD) covering French cropland in 2017; ≈1.6k training images.

* **Auxiliary layers (all geo-aligned):**

  * 19-channel OpenStreetMap (OSM) raster stack (roads, waterways, buildings, biome classes, …)

  * EU-DEM (20 m GSD, resampled to 10 m)

* **Why:** OSM + DEM give an 8% Dice boost when labels are scarce; gains appear once the training set drops below ≈700 images.
|
|
|
--- |
|
|
|
### EnviroAtlas β Land-Cover Segmentation |
|
* **Optical input:** NAIP 4-band RGB-NIR aerial imagery at 1 m resolution.

* **Auxiliary layers:**

  * OSM rasters (roads, waterbodies, waterways)

  * **Prior** raster: a hand-crafted fusion of NLCD land-cover and OSM layers (PROC-STACK)

* **Splits:** Train = Pittsburgh; OOD validation/test = Austin & Durham. Auxiliary layers raise OOD overall accuracy by ~4 pp without extra fine-tuning.
|
|
|
--- |
|
|
|
### BigEarthNet v2.0 β Multi-Label Land-Cover Classification |
|
* **Optical input:** 10-band Sentinel-2 tile pairs; ≈550k patch/label pairs over 19 classes.

* **Auxiliary layer:**

  * **SatCLIP** location embedding (256-D), one per image center, injected as an extra ViT token (TOKEN-FUSE; sketched below).

* **Splits:** Grid-based; val/test tiles lie outside the training footprint (spatial OOD by design). The SatCLIP token lifts macro-F1 by ~3 pp across *all* subset sizes.
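A minimal PyTorch sketch of the TOKEN-FUSE idea (the dimensions, linear projection, and variable names are illustrative assumptions, not the paper's exact implementation):

```
import torch
import torch.nn as nn

embed_dim = 768    # ViT token width (illustrative)
satclip_dim = 256  # SatCLIP embedding size

# Project the 256-D location embedding to the token width.
proj = nn.Linear(satclip_dim, embed_dim)

patch_tokens = torch.randn(8, 196, embed_dim)  # (batch, patches, dim)
satclip_emb = torch.randn(8, satclip_dim)      # one embedding per image center

# Append the projected location embedding as one extra token.
loc_token = proj(satclip_emb).unsqueeze(1)            # (batch, 1, dim)
tokens = torch.cat([patch_tokens, loc_token], dim=1)  # (batch, 197, dim)
print(tokens.shape)
```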
|
|
|
--- |
|
|
|
### USAVars β Tree-Cover Regression |
|
* **Optical input:** NAIP RGB-NIR images (1 km² tiles); ≈100k samples with tree-cover % labels.

* **Auxiliary layers:**

  * Extended OSM raster stack (roads, buildings, land-use, biome classes, …)

* **Notes:** Stacking the OSM rasters boosts R² by 0.16 in the low-data regime (< 250 images); the DEM is provided raw for flexibility. A sketch of adapting a model to the stacked input follows below.
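To feed the stacked NAIP + OSM input to a standard ResNet-50, the stem convolution has to accept the extra channels. A minimal torchvision sketch (the 4 + 19 channel count is an assumption for illustration, not a guaranteed layout):

```
import torch
import torch.nn as nn
from torchvision.models import resnet50

in_channels = 4 + 19  # NAIP RGB-NIR + OSM raster stack (illustrative)

model = resnet50(weights=None)
# Swap the stock 3-channel stem for one matching the stacked input.
model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                        stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 1)  # tree-cover % regression head

x = torch.randn(2, in_channels, 224, 224)
print(model(x).shape)  # (2, 1)
```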
|
|
|
|
|
|
|
## Citation
|
|
|
``` |
|
@inproceedings{rao2025using,
|
title={Using Multiple Input Modalities can Improve Data-Efficiency and O.O.D. Generalization for {ML} with Satellite Imagery}, |
|
author={Arjun Rao and Esther Rolf}, |
|
booktitle={TerraBytes - ICML 2025 workshop}, |
|
year={2025}, |
|
url={https://openreview.net/forum?id=p5nSQMPUyo} |
|
} |
|
``` |
|
--- |
|
|
|
|