mespinosami committed · Commit b20f3f6 · verified · 1 Parent(s): 1560011

Update README.md

Files changed (1): README.md (+73 -3)

---
license: apache-2.0
---
![image/png](images/banner-github-simpler.png)

# [CVPRW 2025] 🌍 COP-GEN-Beta: Unified Generative Modelling of COPernicus Imagery Thumbnails

<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/mespinosami/COP-GEN-Beta" style="margin: 2px;">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97-Demo-yellow" alt="HF Demo" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/miquel-espinosa/COP-GEN-Beta" style="margin: 2px;">
    <img src="https://img.shields.io/badge/%E2%80%8B-COP--GEN--Beta-black?logo=github" alt="GitHub" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://miquel-espinosa.github.io/cop-gen-beta/" style="margin: 2px;">
    <img src="https://img.shields.io/badge/🌐-Website-grey" alt="Website" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/mespinosami/COP-GEN-Beta" style="margin: 2px;">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97-Model-yellow" alt="HF Model" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://www.arxiv.org/abs/2504.08548" style="margin: 2px;">
    <img src="https://img.shields.io/badge/arXiv-2504.08548-D12424" alt="arXiv" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://colab.research.google.com/github/ESA-PhiLab/Major-TOM/blob/main/03-Filtering-in-Colab.ipynb" target="_parent" style="margin: 2px;">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

## Abstract
> _In remote sensing, multi-modal data from various sensors capturing the same scene offers rich opportunities, but learning a unified representation across these modalities remains a significant challenge. Traditional methods have often been limited to single or dual-modality approaches. In this paper, we introduce COP-GEN-Beta, a generative diffusion model trained on optical, radar, and elevation data from the Major TOM dataset. What sets COP-GEN-Beta apart is its ability to map any subset of modalities to any other, enabling zero-shot modality translation after training. This is achieved through a sequence-based diffusion transformer, where each modality is controlled by its own timestep embedding. We extensively evaluate COP-GEN-Beta on thumbnail images from the Major TOM dataset, demonstrating its effectiveness in generating high-quality samples. Qualitative and quantitative evaluations validate the model's performance, highlighting its potential as a powerful pre-trained model for future remote sensing tasks._

## COP-GEN-Beta: Architecture Overview

COP-GEN-Beta is a diffusion model designed to handle multiple remote sensing modalities, specifically: Digital Elevation Model (DEM), Sentinel-1 Radar Terrain Corrected (S1 RTC), Sentinel-2 Level 1C (S2 L1C), and Sentinel-2 Level 2A (S2 L2A). The model learns joint, conditional, and marginal distributions within a unified framework.

![COP-GEN-Beta Architecture](images/cop-gen-beta-architecture.png)

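The core idea can be illustrated with a minimal, hypothetical PyTorch sketch: latent tokens from the four modalities are concatenated into one sequence, and each modality receives its own timestep embedding, so modalities can be noised and denoised independently of one another. The module names, dimensions, and tokenisation below are assumptions for illustration, not the released implementation.

```python
# Minimal sketch (not the official code): one joint token sequence,
# one timestep embedding per modality.
import torch
import torch.nn as nn

MODALITIES = ["dem", "s1_rtc", "s2_l1c", "s2_l2a"]  # names assumed for illustration


class PerModalityTimestepTransformer(nn.Module):
    def __init__(self, dim=512, depth=8, heads=8):
        super().__init__()
        # One timestep-embedding MLP per modality.
        self.t_embed = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
            for m in MODALITIES
        })
        # Learned modality embeddings tell the backbone which tokens belong where.
        self.mod_embed = nn.ParameterDict({
            m: nn.Parameter(torch.zeros(1, 1, dim)) for m in MODALITIES
        })
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, dim)  # per-token noise prediction (latent dim assumed = dim)

    def forward(self, tokens: dict, timesteps: dict) -> dict:
        """tokens[m]: (B, N_m, dim) latent tokens; timesteps[m]: (B,) timestep per modality."""
        parts = []
        for m in MODALITIES:
            t = self.t_embed[m](timesteps[m].float().unsqueeze(-1)).unsqueeze(1)  # (B, 1, dim)
            parts.append(tokens[m] + self.mod_embed[m] + t)  # broadcast over all tokens
        x = self.backbone(torch.cat(parts, dim=1))            # single joint sequence
        x = self.head(x)
        sizes = [tokens[m].shape[1] for m in MODALITIES]
        return dict(zip(MODALITIES, torch.split(x, sizes, dim=1)))
```

Because every modality carries its own timestep, a subset of modalities can be held at a clean (low-noise) state while the rest follow the diffusion schedule, which is what makes any-subset-to-any-subset translation possible.
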
## COP-GEN-Beta: Results

COP-GEN-Beta's flexible sampling capabilities enable a wide range of downstream applications through various modality translation combinations. By allowing generation of any subset of modalities conditioned on any other subset, our model unlocks numerous practical use cases in remote sensing, from atmospheric correction and DEM generation to dataset expansion.

![COP-GEN-Beta Results](images/use-case-horizontal.png)

## Getting Started

For detailed instructions on installation, training, and inference, please visit our [GitHub repository](https://github.com/miquel-espinosa/COP-GEN-Beta).

COP-GEN-Beta supports versatile sampling modes:
- **Unconditional generation:** generates a tuple of all 4 modalities without any conditioning.
- **Conditional generation** (see the sketch below): generates the remaining modalities conditioned on
  - a single modality,
  - 2 modalities, or
  - 3 modalities.

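As a rough illustration of how such conditional sampling can work (the real inference scripts and checkpoints are in the GitHub repository; the function and argument names here are hypothetical), one common approach for per-modality-timestep models, assumed below, is to keep the conditioning modalities fixed at timestep 0 while only the remaining modalities are denoised:

```python
# Hypothetical conditional-sampling loop, for illustration only.
import torch

MODALITIES = ["dem", "s1_rtc", "s2_l1c", "s2_l2a"]


@torch.no_grad()
def sample(model, cond: dict, steps: int = 50, n_tokens: int = 64, dim: int = 512):
    """`cond` maps modality names to clean latent tokens; all others are generated."""
    latents = {m: cond.get(m, torch.randn(1, n_tokens, dim)) for m in MODALITIES}
    for i in reversed(range(1, steps + 1)):
        # Per-modality timesteps: 0 for conditioned modalities, i for generated ones.
        t = {m: torch.zeros(1) if m in cond else torch.full((1,), float(i))
             for m in MODALITIES}
        eps = model(latents, t)  # per-modality noise predictions
        for m in MODALITIES:
            if m not in cond:
                # Placeholder update rule; a real sampler would apply the trained
                # noise schedule (e.g. DDPM/DDIM updates).
                latents[m] = latents[m] - eps[m] / steps
    return latents


# Example: translate S2 L2A thumbnails into DEM, S1 RTC, and S2 L1C.
# generated = sample(model, cond={"s2_l2a": s2_l2a_tokens})
```
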
## Citation

If you find this work useful, please cite it as follows:

```bibtex
@inproceedings{espinosa2025copgenbeta,
  title={COP-GEN-Beta: Unified Generative Modelling of COPernicus Imagery Thumbnails},
  author={Espinosa, Miguel and Marsocci, Valerio and Jia, Yuru and Crowley, Elliot J. and Czerkawski, Mikolaj},
  booktitle={CVPRW},
  year={2025}
}
```