STARCOP dataset: Semantic Segmentation of Methane Plumes with Hyperspectral Machine Learning Models 🌈🛰️
Authors: Vít Růžička, Gonzalo Mateo-Garcia, Luis Gómez-Chova, Anna Vaughan, Luis Guanter and Andrew Markham
Fast data preview in: dataset_exploration.ipynb
Main repository: https://github.com/spaceml-org/STARCOP
Task:
Methane is the second most important greenhouse gas contributor to climate change; at the same time, its reduction has been identified as one of the fastest pathways to limiting temperature growth, due to its short atmospheric lifetime. In particular, the mitigation of active point sources associated with the fossil fuel industry offers a strong and cost-effective reduction potential. Detection of methane plumes in remote sensing data is possible, but existing approaches exhibit high false positive rates and require manual intervention. Machine learning research in this area is limited by the lack of large, annotated, real-world datasets.
Dataset:
In this work, we are publicly releasing a machine-learning-ready dataset with manually refined annotations of methane plumes. We present labelled hyperspectral data from the AVIRIS-NG sensor and provide simulated multispectral WorldView-3 views of the same data to allow for model benchmarking across hyperspectral and multispectral sensors.
"All bands dataset version"
This version contains the so-called "all bands" data, more specifically a selection of all the data relevant for training our models on the task of methane plume segmentation (125 bands: RGB and 1573-1699nm, 2004-2480nm). In contrast to the other versions of this dataset, it contains more hyperspectral bands and is in turn also much larger (633GB versus about 60GB).
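As a quick illustration, the per-location data can be stacked into a single hyperspectral cube. This is only a minimal sketch assuming one single-band GeoTIFF per spectral band inside each location folder; the file layout is an assumption, so consult dataset_exploration.ipynb for the actual structure.
# Minimal sketch: stack the per-band GeoTIFFs of one location folder into a
# (bands, H, W) array. The one-file-per-band layout is an assumption - see
# dataset_exploration.ipynb for the real structure.
import glob
import os

import numpy as np
import rasterio  # pip install rasterio

folder = "STARCOP_allbands/ang20190922t192642_r3270_c384_w151_h151"  # example tile from this dataset
band_paths = sorted(glob.glob(os.path.join(folder, "*.tif")))

bands = []
for path in band_paths:
    with rasterio.open(path) as src:
        bands.append(src.read(1))  # each file assumed to hold a single band

cube = np.stack(bands, axis=0)
print(cube.shape)  # roughly (125, 151, 151) for this 151x151 tile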
The dataset has been split into four parts to enable simpler downloads and to limit storage overhead. To save storage space, we recommend cloning the repositories one by one and deleting the ".git" folder of each (it contains copies of the files due to the way git handles history; normally this is not an issue, but with datasets this large it roughly doubles the footprint). An alternative download approach that avoids this entirely is sketched after the list below. We have split the Train dataset into three subsets, with Test kept separate:
- Train part 1: https://huggingface.co/datasets/previtus/STARCOP_allbands_Train1 (~174GB, all easy and some hard)
- Train part 2: https://huggingface.co/datasets/previtus/STARCOP_allbands_Train2 (~182GB, remaining hard and some no-plume)
- Train part 3: https://huggingface.co/datasets/previtus/STARCOP_allbands_Train3 (~195GB, remaining no-plume samples)
- Test: https://huggingface.co/datasets/previtus/STARCOP_allbands_Eval (~82GB)
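As an alternative to git clone, the huggingface_hub library can download each repository directly, without creating a .git folder at all. A minimal sketch, assuming huggingface_hub is installed (pip install huggingface_hub):
# Sketch: download all four subsets straight into one shared folder,
# avoiding the .git/LFS storage overhead of a plain git clone.
from huggingface_hub import snapshot_download

repos = [
    "previtus/STARCOP_allbands_Train1",
    "previtus/STARCOP_allbands_Train2",
    "previtus/STARCOP_allbands_Train3",
    "previtus/STARCOP_allbands_Eval",
]
for repo_id in repos:
    snapshot_download(repo_id=repo_id, repo_type="dataset", local_dir="STARCOP_allbands")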
After downloading, place all location folders (for example "ang20190922t192642_r3270_c384_w151_h151") into one shared directory. Then use the train.csv and test.csv files as lists of which folders belong to which data split, as in the sketch below.
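A minimal sketch of reading the split lists with pandas; the name of the identifier column is an assumption, so inspect the csv headers first.
# Sketch: map location folders to their data split via the provided csv files.
# The column name "id" is an assumption - check the actual csv headers.
import pandas as pd

root = "STARCOP_allbands"
train_ids = pd.read_csv(f"{root}/train.csv")["id"].tolist()
test_ids = pd.read_csv(f"{root}/test.csv")["id"].tolist()
print(len(train_ids), "train folders,", len(test_ids), "test folders")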
Other versions (RGB, matched filter (MF) and the simulated WV3 product) are available at https://zenodo.org/records/7863343
For more information please consult the main project repository at: https://github.com/spaceml-org/STARCOP
Download instructions
We suggest cloning all of the relevant repositories from Hugging Face and placing all event folders into the same main folder; the separate csv files can be used to distinguish between the subsets, and our codebase is prepared to handle data in this format.
# Install git Large File Storage (LFS) and authenticate with your Hugging Face account
git lfs install
# Clone each of the training and evaluation subsets (we recommend deleting the .git folder of each repo immediately, as it roughly doubles the storage requirements)
git clone [email protected]:datasets/previtus/STARCOP_allbands_Train1
rm -rf STARCOP_allbands_Train1/.git
git clone [email protected]:datasets/previtus/STARCOP_allbands_Train2
rm -rf STARCOP_allbands_Train2/.git
git clone [email protected]:datasets/previtus/STARCOP_allbands_Train3
rm -rf STARCOP_allbands_Train3/.git
git clone [email protected]:datasets/previtus/STARCOP_allbands_Eval
rm -rf STARCOP_allbands_Eval/.git
# Place everything into one folder:
mkdir STARCOP_allbands
mv STARCOP_allbands_Train1/* STARCOP_allbands/
mv STARCOP_allbands_Train2/* STARCOP_allbands/
mv STARCOP_allbands_Train3/* STARCOP_allbands/
mv STARCOP_allbands_Eval/* STARCOP_allbands/
# Then refer to data splits in STARCOP_allbands/train.csv and STARCOP_allbands/test.csv
You have now downloaded the dataset in a format ready for our codebase; typically, you can set the configuration parameter "dataset.root_folder" (in config/settings) to point to this folder.
Citation
If you find the STARCOP models or dataset useful in your research, please consider citing our work.
@article{ruzicka_starcop_2023,
  title = {Semantic segmentation of methane plumes with hyperspectral machine learning models},
  volume = {13},
  issn = {2045-2322},
  url = {https://www.nature.com/articles/s41598-023-44918-6},
  doi = {10.1038/s41598-023-44918-6},
  number = {1},
  journal = {Scientific Reports},
  author = {Růžička, Vít and Mateo-Garcia, Gonzalo and Gómez-Chova, Luis and Vaughan, Anna and Guanter, Luis and Markham, Andrew},
  month = nov,
  year = {2023},
  pages = {19999}
}