gen2seg: Generative Models Enable Generalizable Instance Segmentation
Abstract
Generative models fine-tuned for instance segmentation demonstrate strong zero-shot performance on unseen objects and styles, surpassing discriminatively pretrained models.
By pretraining to synthesize coherent images from perturbed inputs, generative models inherently learn to understand object boundaries and scene compositions. How can we repurpose these generative representations for general-purpose perceptual organization? We finetune Stable Diffusion and MAE (encoder+decoder) for category-agnostic instance segmentation using our instance coloring loss, training exclusively on a narrow set of object types (indoor furnishings and cars). Surprisingly, our models exhibit strong zero-shot generalization, accurately segmenting objects of types and styles unseen during finetuning (and, for MAE, in many cases unseen during its ImageNet-1K pretraining as well). Our best-performing models closely approach the heavily supervised SAM when evaluated on unseen object types and styles, and outperform it when segmenting fine structures and ambiguous boundaries. In contrast, existing promptable segmentation architectures and discriminatively pretrained models fail to generalize. This suggests that generative models learn an inherent grouping mechanism that transfers across categories and domains, even without internet-scale pretraining. Code, pretrained models, and demos are available on our website.
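The abstract names an "instance coloring loss" but does not spell out its form here. As a rough illustration only, the sketch below shows one way such an objective could be written in PyTorch: the model predicts a per-pixel embedding (a "color"), pixels belonging to the same instance are pulled toward their instance mean, and the means of different instances are pushed apart. The function name, the margin, and the pull/push formulation are our assumptions, not the authors' implementation.

```python
# Illustrative sketch only: one plausible form of an instance-coloring objective.
# This is NOT the gen2seg implementation; names and the margin are assumptions.
import torch
import torch.nn.functional as F

def instance_coloring_loss(pred, instance_ids, margin=1.0):
    """pred: (C, H, W) predicted per-pixel embedding; instance_ids: (H, W) integer instance mask."""
    C = pred.shape[0]
    pred = pred.reshape(C, -1)                      # (C, H*W)
    ids = instance_ids.reshape(-1)                  # (H*W,)
    pull = pred.new_zeros(())
    means = []
    for inst in ids.unique():
        if inst.item() == 0:                        # assume 0 marks background / unlabeled pixels
            continue
        pix = pred[:, ids == inst]                  # (C, N_inst) embeddings of this instance's pixels
        mu = pix.mean(dim=1, keepdim=True)          # instance mean embedding
        pull = pull + ((pix - mu) ** 2).mean()      # pull pixels toward their instance mean
        means.append(mu.squeeze(1))
    if len(means) < 2:
        return pull
    means = torch.stack(means)                      # (K, C), one mean per instance
    dist = torch.cdist(means, means)                # pairwise distances between instance means
    k = means.shape[0]
    push = F.relu(margin - dist).triu(diagonal=1).sum() / (k * (k - 1) / 2)
    return pull + push
```

At inference time, the predicted per-pixel colors could then be clustered into instance masks; again, this only sketches the general idea behind such a loss.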
Community
We are the first to show that generative models (i.e., Stable Diffusion and MAE) can be easily adapted to segment objects. We finetuned our models on a limited set of object categories (indoor furnishings and cars), yet both generalize zero-shot to unseen object categories and styles (e.g., X-rays, animals in artwork). Interestingly, for MAE these categories also lie outside its pretraining distribution. This suggests generative models have learned an inherent perceptual grouping mechanism. We hope that our findings will inspire more research into the representations learned by generative pretraining, and how they can be adapted for perceptual tasks.
Please see our website for high-resolution qualitative comparisons.
Website: https://reachomk.github.io/gen2seg/
The following similar papers were recommended by the Semantic Scholar API (automated message from Librarian Bot):
- DC-SAM: In-Context Segment Anything in Images and Videos via Dual Consistency (2025)
- Split Matching for Inductive Zero-shot Semantic Segmentation (2025)
- DINOv2-powered Few-Shot Semantic Segmentation: A Unified Framework via Cross-Model Distillation and 4D Correlation Mining (2025)
- v-CLR: View-Consistent Learning for Open-World Instance Segmentation (2025)
- VSC: Visual Search Compositional Text-to-Image Diffusion Model (2025)
- Industrial Synthetic Segment Pre-training (2025)
- DPSeg: Dual-Prompt Cost Volume Learning for Open-Vocabulary Semantic Segmentation (2025)