---
library_name: pytorch
license: apache-2.0
tags:
- nvidia
- gan
- stylegan
- stylegan3
pipeline_tag: unconditional-image-generation
language:
- it
extra_gated_prompt: "You agree to not use the model to conduct experiments that cause harm to human subjects. You agree to cite this model for every usage using its DOI."
extra_gated_fields:
  Company: text
  Country: country
  I want to use this model for:
    type: select
    options:
      - Research
      - Education
      - Art & Exhibitions
      - label: Other
        value: other
  I agree to use this model for non-commercial use ONLY: checkbox
extra_gated_heading: "Acknowledge license and conditions to accept the repository"
extra_gated_description: "Our team may take 1-2 days to process your request"
extra_gated_button_content: "I accept"
---

# gaIA: Italian Landscape GAN Model

gaIA is the first Italian GAN model trained on satellite images of a selection of Italy's main glaciers, forests, lakes, rivers, and coasts that are most affected by climate change. It can be used for both scientific and artistic purposes.

![gaIA cover](./images/gaia_cover.png)

## Dataset

- **Images**: 12k
- **Image Size**: 1024x1024
- **Source**: Copernicus Sentinel-2A
- **Reference Years**: 2017 – June 2024

![gaIA dataset](./images/gaia_dataset.png)

- **29 Covered Areas**:
  - **Glaciers**: Adamello, Gran Paradiso, Marmolada, Presena, Forni, Belvedere
  - **Lakes**: Bracciano, Garda, Maggiore, Trasimeno, Iseo, Como
  - **Rivers**: Tiber, Adige, Arno, etc.
  - **Islands/Coasts**: Chia, Marina di Pisa, Venezia, Stromboli, Rosolina Mare, Costiera Amalfitana
  - **Parks**: Abruzzo, Casentinesi, Pollino, Sila, Gargano, Aspromonte

![gaIA areas](./images/gaia_areas.png)

## Training

- **Framework**: StyleGAN3-T
- **GPUs**: 1 × NVIDIA A100 80GB
- **Batch**: 32
- **Gamma**: 32
- **Kimg**: 5152.0
- **Augmentations**: 38,040
- **Time**: ~220 hours

![gaIA training](./images/gaia_training.png)
## Requirements

Please refer to the official NVIDIA StyleGAN3 [Requirements](https://github.com/NVlabs/stylegan3?tab=readme-ov-file#requirements).
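
The snippets below assume a CUDA-capable GPU is available. A quick sanity check (a minimal sketch, not specific to gaIA):

```python
import torch

# Print the installed PyTorch version and confirm that a CUDA GPU is visible.
print('PyTorch version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))
```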
## How to Start

```python
import torch
from PIL import Image
import numpy as np
import pickle

# Set the device to GPU
device = torch.device('cuda')

# Load the model
with open('/thewhatifproject/gaIA_v1.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # torch.nn.Module

# Set the model to evaluation mode
G.eval()

# Set the seed for reproducibility
seed = 28

# Generate a latent code using the specified seed
z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)

# Generate the image using the generator
with torch.no_grad():
    img = G(z, None, truncation_psi=1, noise_mode='const')

# Process the image for saving:
# - Change dimension order from NCHW to NHWC
# - Scale from range [-1, +1] to [0, 255]
# - Clamp values to ensure they are within [0, 255]
# - Convert to uint8
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)

# Save the image using PIL
Image.fromarray(img[0].cpu().numpy(), 'RGB').save('generated_image.png')

print("Image saved as 'generated_image.png'")
```

The above code requires `torch_utils` and `dnnlib` to be accessible via `PYTHONPATH`. It does not need source code for the networks themselves; their class definitions are loaded from the pickle via `torch_utils.persistence`.
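
If you prefer not to set `PYTHONPATH` in the shell, the same effect can be achieved from Python before unpickling. A minimal sketch, assuming a local clone of the official StyleGAN3 repository (the clone path below is a placeholder):

```python
import sys

# Placeholder path to a local clone of https://github.com/NVlabs/stylegan3
STYLEGAN3_REPO = '/path/to/stylegan3'

# Make `dnnlib` and `torch_utils` importable before calling pickle.load(...)
sys.path.insert(0, STYLEGAN3_REPO)

import dnnlib        # noqa: E402
import torch_utils   # noqa: E402
```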

The pickle contains three networks. `G` and `D` are instantaneous snapshots taken during training, and `G_ema` represents a moving average of the generator weights over several training steps. The networks are regular instances of `torch.nn.Module`, with all of their parameters and buffers placed on the CPU at import and gradient computation disabled by default.
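
As a quick orientation, here is a sketch that loads all three entries and prints a few generator attributes; the attribute names (`z_dim`, `c_dim`, `img_resolution`, `img_channels`) come from the public StyleGAN3 code, and the pickle path matches the snippet above:

```python
import pickle

with open('/thewhatifproject/gaIA_v1.pkl', 'rb') as f:
    data = pickle.load(f)

G, D, G_ema = data['G'], data['D'], data['G_ema']

# Basic generator attributes defined by the StyleGAN3 codebase
print('z_dim:', G_ema.z_dim)                # latent vector size
print('c_dim:', G_ema.c_dim)                # 0 for an unconditional model
print('resolution:', G_ema.img_resolution)  # output resolution
print('channels:', G_ema.img_channels)      # 3 for RGB

# Parameters start on the CPU with gradient computation disabled
p = next(G_ema.parameters())
print('device:', p.device, '| requires_grad:', p.requires_grad)
```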

The generator consists of two submodules, `G.mapping` and `G.synthesis`, that can be executed separately.
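
For example, to apply truncation or to work directly in the intermediate latent space W, the two calls can be made one after the other. A sketch following the pattern documented in the NVIDIA repository, assuming `G` has already been loaded on the GPU as above:

```python
import numpy as np
import torch

device = torch.device('cuda')

# Sample a latent code z (the model is unconditional, so the label input is None)
z = torch.from_numpy(np.random.RandomState(28).randn(1, G.z_dim)).to(device)

with torch.no_grad():
    # Map z to the intermediate latent space W, optionally truncating towards the mean
    w = G.mapping(z, None, truncation_psi=0.7, truncation_cutoff=8)
    # Render the image from w
    img = G.synthesis(w, noise_mode='const')
```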
See the [NVIDIA repo](https://github.com/NVlabs/stylegan3?tab=readme-ov-file#using-networks-from-python) for additional information.

**A dedicated repository for gaIA inference with ready-to-use scripts is on the way. Stay tuned!**

## Inference Samples

![gaIA inference](./images/gaia_inference.png)

## Uses

### Scientific

- Transfer Learning
- Synthetic data generation (see the sketch at the end of this section)
- Future scenario simulations *
- Comparative analysis *

\* It is necessary to integrate external predictive climate models to generate future scenario simulations.
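
As an example of synthetic data generation, a minimal sketch that sweeps over a range of seeds, with `G` loaded on the GPU as in the How to Start section; the output folder and sample count are arbitrary choices:

```python
import os

import numpy as np
import torch
from PIL import Image

out_dir = 'synthetic_dataset'  # arbitrary output folder
os.makedirs(out_dir, exist_ok=True)

device = torch.device('cuda')

with torch.no_grad():
    for seed in range(100):  # arbitrary number of samples
        z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)
        img = G(z, None, truncation_psi=1, noise_mode='const')
        img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
        Image.fromarray(img[0].cpu().numpy(), 'RGB').save(f'{out_dir}/seed{seed:04d}.png')
```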
### Artistic

- Art installations & exhibitions
- Public awareness campaigns
- Multimedia performances

## License

This project and repository are distributed under two licenses:

1. **Apache 2.0 License**: Applies to the model and any modifications or additions made by The "What If" Project.
2. **NVIDIA Source Code License for StyleGAN3**: Applies to the original StyleGAN3 software used for training the model.

Please see the LICENSE files in the repository for more details.

## How to Contribute

Join us in using our model to make a difference! For more information and updates, visit [gaIA spotlight](https://share.thewhatifproject.com/gaia).

## Contact

For any questions or support, contact us through our [website](https://thewhatifproject.com) and follow us on [Instagram](https://www.instagram.com/the.whatifproject/).