SSL4EO-S12-downstream
Welcome to the SSL4EO-S12-downstream dataset. This dataset is used in the Embed2Scale Challenge.
The test phase challenge data is available
The test phase challenge data is now available under the data_eval folder. It comprises 8,111 datacubes (~90 GB) in the same format as the dev set.
The data comprises two sets: the dev set of 5,149 datacubes (~60 GB) and the test set of 8,111 datacubes (~90 GB). The datacubes are visualized below and contain Sentinel-1 and Sentinel-2 data. The data is organized under the data_dev and data_eval folders, each of which contains one subfolder per modality (s1, s2l1c and s2l2a) where the datacubes reside. Each datacube covers one location, with S1 VV and VH polarizations and S2 L1C and S2 L2A channels. Each location is sampled at four times: once during March-May, once during June-August, once during September-November, and once during December-February, in this order. The datacubes are stored in zipped zarr files; see here for instructions on how to load the data. The data in the zarr files is structured as (number of locations, number of timestamps, number of channels, height, width) with dimensions (1, 4, 27, 264, 264); the 27 channels comprise 2 S1 polarizations (VV and VH), 13 S2 L1C bands (B1, B2, B3, B4, B5, B6, B7, B8, B8A, B9, B10, B11, B12), and 12 S2 L2A bands (B1, B2, B3, B4, B5, B6, B7, B8, B8A, B9, B11, B12).
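As a minimal sketch of working with this layout, the snippet below slices a datacube array of the documented shape (1, 4, 27, 264, 264) into its modalities, assuming the 27 channels are stacked in the order listed above (2 S1 polarizations, then 13 S2 L1C bands, then 12 S2 L2A bands). A synthetic NumPy array stands in for an array loaded from one of the zipped zarr files, so the indexing logic is self-contained; the variable names and channel offsets are illustrative assumptions, not part of an official loader.

```python
import numpy as np

# Placeholder for a datacube read from a zipped zarr file, with the
# documented dimensions (locations, timestamps, channels, height, width).
cube = np.zeros((1, 4, 27, 264, 264), dtype=np.float32)

# Assumed channel layout along axis 2, following the order in the text:
#   0-1   : S1 VV and VH polarizations
#   2-14  : 13 S2 L1C bands (B1-B12, including B8A and B10)
#   15-26 : 12 S2 L2A bands (B1-B12, including B8A, without B10)
s1 = cube[:, :, 0:2]       # shape (1, 4, 2, 264, 264)
s2l1c = cube[:, :, 2:15]   # shape (1, 4, 13, 264, 264)
s2l2a = cube[:, :, 15:27]  # shape (1, 4, 12, 264, 264)

print(s1.shape, s2l1c.shape, s2l2a.shape)
```

The four entries along the second axis correspond to the four seasonal samples (March-May, June-August, September-November, December-February), so `cube[0, 0]` selects the spring sample for the location.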
The data is structured identically to the SSL4EOS12 v1.1 dataset:
@article{blumenstiel2025ssl4eos12,
title={{SSL4EOS12 v1.1 - A Multimodal, Multiseasonal Dataset for Pretraining}},
author={Blumenstiel, Benedikt and Braham, Nassim Ait Ali and Albrecht, Conrad M and Maurogiovanni, Stefano and Fraccaro, Paolo},
journal={arXiv preprint arXiv:2503.00168},
year={2025}
}
which is based on
@article{wang2022ssl4eo,
title={{SSL4EO-S12: A Large-Scale Multi-Modal, Multi-Temporal Dataset for Self-Supervised Learning in Earth Observation}},
author={Wang, Yi and Braham, Nassim Ait Ali and Xiong, Zhitong and Liu, Chenying and Albrecht, Conrad M and Zhu, Xiao Xiang},
journal={arXiv preprint arXiv:2211.07044},
year={2022}
}