# SolarScanner • U‑Net + ViT for Building Segmentation & Damage Classification

## Model Details
| Stage | Backbone | Dataset | Metric |
|---|---|---|---|
| Segmentation | U‑Net (ResNet‑50 encoder) | SpaceNet v2 | IoU 0.766 |
| Damage classification | ViT‑B/16 | xBD | Accuracy 0.856 |
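The repository holds the actual architecture code; the snippet below is only a minimal sketch of how the two listed backbones could be instantiated, assuming `segmentation_models_pytorch` for the U‑Net and the Hugging Face `transformers` checkpoint listed under "Base Model". The library choices and `num_labels=4` (the four xBD damage levels) are assumptions, not the project's exact implementation.

```python
import segmentation_models_pytorch as smp
from transformers import ViTForImageClassification

# Stage 1: U-Net with a ResNet-50 encoder for building-footprint segmentation.
seg_model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    in_channels=3,   # RGB-only input (see Limitations)
    classes=1,       # single binary building-mask channel
)

# Stage 2: ViT-B/16 fine-tuned for per-building damage classification.
# num_labels=4 assumes the standard xBD levels: no-damage, minor, major, destroyed.
dmg_model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=4,
)
```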
## Usage

```python
from solars import load_seg_model, load_dmg_model

# Stage 1: predict a binary building mask for the input GeoTIFF.
mask = load_seg_model().predict("image.tif")
# Stage 2: classify damage for each building patch covered by the mask.
labels = load_dmg_model().predict_patches("image.tif", mask)
```
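How `predict_patches` pairs the mask with the classifier is not documented on this card; as an illustration of the hand-off between the two stages, one plausible patch-extraction step is sketched below. The connected-component cropping, the 224×224 resize, and the helper name `extract_building_patches` are assumptions rather than the package's real internals, and the sketch assumes 8-bit RGB imagery.

```python
import numpy as np
import rasterio
from PIL import Image
from scipy import ndimage

def extract_building_patches(image_path: str, mask: np.ndarray, size: int = 224):
    """Crop one patch per connected building region in the mask (illustrative only)."""
    with rasterio.open(image_path) as src:
        img = np.transpose(src.read([1, 2, 3]), (1, 2, 0))  # bands-first -> HWC RGB
    labeled, _ = ndimage.label(mask > 0)                     # one label per building blob
    patches = []
    for box in ndimage.find_objects(labeled):                # bounding box per building
        crop = img[box].astype(np.uint8)                     # assumes 8-bit imagery
        patches.append(np.array(Image.fromarray(crop).resize((size, size))))
    return patches
```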
## Intended Use

Rapid damage mapping after earthquakes, floods, and conflicts. Not intended for safety‑critical decisions without human review.
## Limitations

Geographic bias (only four training cities), damage‑class imbalance, and RGB‑only input.
## Training

Trained with AdamW, FP16 mixed precision, and a cosine learning‑rate schedule. Full configs are in the [GitHub repository](https://github.com/tugcantopaloglu/solarscanner-solars-paper-deep-learning).
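The exact hyperparameters live in the repo configs; the loop below is only a minimal PyTorch sketch of the stated recipe (AdamW, FP16 mixed precision, cosine schedule). The learning rate, epoch count, and toy model/dataloader are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Toy stand-ins so the sketch runs on its own; swap in either stage's model and data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 4))
criterion = nn.CrossEntropyLoss()
train_loader = [(torch.randn(2, 3, 224, 224), torch.randint(0, 4, (2,)))]
num_epochs = 50  # illustrative; the real value is in the configs

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
scaler = torch.cuda.amp.GradScaler()  # FP16 loss scaling

for epoch in range(num_epochs):
    for images, targets in train_loader:
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():      # cast the forward pass to FP16
            loss = criterion(model(images), targets)
        scaler.scale(loss).backward()        # scaled backward avoids FP16 underflow
        scaler.step(optimizer)
        scaler.update()
    scheduler.step()                         # cosine learning-rate decay per epoch
```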
## Results

| Task | Score |
|---|---|
| Building segmentation (IoU) | 0.766 |
| Damage classification (accuracy) | 0.856 |
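For reference, the segmentation score is standard intersection‑over‑union between predicted and ground‑truth building masks; the function below is a minimal NumPy version of that metric, not the project's evaluation script.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union for binary masks (illustrative, not the repo's eval code)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union else 1.0  # two empty masks count as a match

# Example: masks overlapping on 2 of 3 foreground pixels -> IoU = 2/3.
a = np.zeros((4, 4)); a[0, :3] = 1
b = np.zeros((4, 4)); b[0, 1:3] = 1
print(round(iou(a, b), 3))  # 0.667
```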
## Citation

```bibtex
@unpublished{topaloglu2025solars,
  author = {Tuğcan Topaloğlu},
  title  = {{SolarScanner}: Two‑Stage Deep Learning for Post‑Disaster Building Damage Assessment},
  year   = {2025}
}
```
## Base Model

The damage classifier is fine‑tuned from [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) (ViT‑B/16 pretrained on ImageNet‑21k).