SpacelyJohn committed (verified)
Commit 75a1674 · 1 Parent(s): 042bccd

Upload 4 files

Files changed (4)
  1. README.md +67 -46
  2. requirements.txt +6 -7
  3. run_virtual_staging.py +34 -0
  4. virtual_staging_app.py +406 -0
README.md CHANGED
@@ -1,64 +1,69 @@
  ---
- title: Spacely AI Interior Designer
  emoji: 🏠
  colorFrom: blue
  colorTo: green
  sdk: gradio
- app_file: app_gradio.py
  pinned: false
  hardware: zero-gpu
  ---

- # 🏢 Spacely AI Furniture Placement Designer
-
- A web application where you upload a photo of an empty room and AI designs it into a professional office space.

  ## ✨ Key Features

- - **6 office types**: private office, meeting room, break room, open office, reception, CEO office
- - **Custom prompts**: freely specify the style and furniture you want
- - **Real-time web UI**: easy image upload via drag and drop
- - **High-quality AI generation**: based on Stable Diffusion + ControlNet

- ## 🚀 Deployment Options

  ### 1. Hugging Face Spaces (recommended)
  ```bash
- # 1. Create a new Space on huggingface.co
- # 2. Upload all files in this folder
- # 3. The web app deploys automatically
- ```
-
- ### 2. Google Colab
- ```python
- # Run in Colab
- !git clone [your-repo-url]
- %cd spacely-ai-furniture-designer
- !pip install -r requirements.txt
- !python app.py
  ```

- ### 3. Run Locally
  ```bash
- git clone [your-repo-url]
- cd spacely-ai-furniture-designer
- pip install -r requirements.txt
- python app.py
  ```

  ## 🎯 Usage

- 1. **Upload an image**: upload a photo of an empty room or office
- 2. **Choose a space type**: pick the desired office type from the dropdown
- 3. **Custom settings** (optional): enter a prompt for any special requirements
- 4. **Click Generate**: the AI produces a professional office design

  ## 🔧 Tech Stack

- - **AI model**: Realistic Vision V3.0 (Stable Diffusion)
- - **Control techniques**: ControlNet (segmentation + MLSD)
- - **Web framework**: Streamlit
- - **Deep learning**: PyTorch, Hugging Face Transformers

  ## 📱 Deployment Platform Comparison

@@ -84,23 +89,39 @@ self.office_templates = {
  self.quality_suffix = "professional interior design, corporate style, clean, modern, functional, well-lit, 4K, high quality"
  ```

- ## 🏗️ Architecture

  ```
- Empty room image → segmentation → mask creation → ControlNet → AI generation → result image
-                  ↘ MLSD line detection ↗
- ```

  ---

- ## 📚 Original Project Information

- This project was developed on top of [neuralwork's sd-interior-design](https://github.com/neuralwork/sd-interior-design), specialized for corporate office design.

- ### Technical Details
- - **Base Model**: [Realistic Vision V3.0](https://huggingface.co/SG161222/Realistic_Vision_V3.0_VAE)
- - **ControlNets**: [Segmentation](https://huggingface.co/BertChristiaens/controlnet-seg-room) + [MLSD](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)
- - **License**: MIT License

- Original work from [neuralwork](https://neuralwork.ai/) ❤️
- Enhanced for corporate office design by Spacely Team
  ---
+ title: Virtual Staging AI
  emoji: 🏠
  colorFrom: blue
  colorTo: green
  sdk: gradio
+ app_file: virtual_staging_app.py
  pinned: false
  hardware: zero-gpu
  ---

+ # 🏠 Virtual Staging AI - 2-Stage Pipeline

+ **Advanced AI Interior Design with YOLO Detection & ControlNet Refinement**
+
+ Upload a photo of an empty room and a two-stage AI pipeline generates a professional interior design.

  ## ✨ Key Features

+ ### 🎯 2-Stage Architecture
+ 1. **Stage 1**: ControlNet generates the initial furniture layout
+ 2. **Stage 2**: YOLO detects furniture regions → ControlNet refines them (composition sketched below)
+
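A minimal sketch of how the two stages compose, using the function names defined in `virtual_staging_app.py` later in this commit:

```python
# Sketch only: the stage functions are defined in virtual_staging_app.py below.
from PIL import Image
from virtual_staging_app import (
    load_models,
    stage1_generate_initial_layout,
    stage2_detect_and_refine,
)

load_models()  # ControlNet inpaint pipeline, YOLOv8n, and MLSD detector
room = Image.open("empty_room.jpg").convert("RGB")

draft = stage1_generate_initial_layout(room, "Living Room", "Modern")   # Stage 1
final = stage2_detect_and_refine(room, draft, "Living Room", "Modern")  # Stage 2
final.save("staged_room.jpg")
```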
+ ### 🏠 Supported Rooms & Styles
+ - **4 room types**: Living Room, Bedroom, Kitchen, Dining Room
+ - **4 design styles**: Modern, Scandinavian, Industrial, Traditional
+ - **16 combinations**: every style is supported for every room (lookup example below)
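Each room/style combination resolves to a prompt template in the `ROOM_STYLES` dict defined in `virtual_staging_app.py`:

```python
from virtual_staging_app import ROOM_STYLES

# One of the 16 room/style prompt templates
print(ROOM_STYLES["Bedroom"]["Scandinavian"])
# scandinavian bedroom with light wood bed frame, white linens, hygge atmosphere
```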

+ ## 🚀 Deployment & Running

  ### 1. Hugging Face Spaces (recommended)
  ```bash
+ # 1. Create a new Space on huggingface.co (GPU environment)
+ # 2. Upload all files (virtual_staging_app.py is the main app)
+ # 3. @spaces.GPU enables GPU acceleration automatically (see sketch below)
  ```
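On ZeroGPU hardware, GPU access is granted per decorated call. In this commit the decorator wraps the app's entry point, roughly as follows (sketch; the local no-op fallback is shown in `virtual_staging_app.py` below):

```python
import spaces  # provided on Hugging Face Spaces; the app substitutes a dummy locally

@spaces.GPU  # requests a GPU for the duration of each call on ZeroGPU Spaces
def virtual_stage_room(input_image, room_type, design_style):
    ...  # the 2-stage pipeline runs here
```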

+ ### 2. Local Development Environment
  ```bash
+ # Create a virtual environment and install dependencies
+ python3 -m venv spacely_local
+ source spacely_local/bin/activate
+ pip install -r requirements_virtual_staging.txt
+
+ # Run locally
+ python run_virtual_staging.py
  ```

  ## 🎯 Usage

+ 1. **Upload an image**: upload a photo of an empty room (drag and drop supported)
+ 2. **Choose a room**: Living Room, Bedroom, Kitchen, or Dining Room
+ 3. **Choose a style**: Modern, Scandinavian, Industrial, or Traditional
+ 4. **Generate**: the AI creates a professional interior with the 2-stage pipeline

  ## 🔧 Tech Stack

+ ### 🧠 AI Models
+ - **Base Model**: Realistic Vision V3.0 VAE (Stable Diffusion)
+ - **ControlNet**: dual control with Segmentation + MLSD (MLSD preprocessing sketched below)
+ - **Object Detection**: YOLOv8n (automatic furniture region detection)
+
+ ### 💻 Frameworks
+ - **Web UI**: Gradio 5.41.0
+ - **Deep learning**: PyTorch, Diffusers, Transformers
+ - **Image processing**: OpenCV, PIL, Ultralytics
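The MLSD control image is produced with `controlnet_aux`, as in `stage1_generate_initial_layout` below; in isolation:

```python
from PIL import Image
from controlnet_aux import MLSDdetector

mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
# Straight-line (wireframe) map of walls and edges, fed to the MLSD ControlNet
control_image = mlsd(Image.open("empty_room.jpg"))
```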

  ## 📱 Deployment Platform Comparison

  self.quality_suffix = "professional interior design, corporate style, clean, modern, functional, well-lit, 4K, high quality"
  ```

+ ## 🏗️ 2-Stage Architecture

+ ```mermaid
+ flowchart TD
+     A[Empty Room Image] --> B[Stage 1: ControlNet Layout Generation]
+     B --> C[Initial Furnished Image]
+     C --> D[Stage 2: YOLO Furniture Detection]
+     D --> E[Furniture Region Masks]
+     E --> F[Stage 2: ControlNet Refinement]
+     F --> G[Final High-Quality Result]
  ```
+
+ ### 🚀 Benefits Over Single-Stage
+
+ | Aspect              | Single-Stage   | 2-Stage Virtual Staging    |
+ |---------------------|----------------|----------------------------|
+ | Quality             | ⭐⭐⭐         | ⭐⭐⭐⭐⭐                 |
+ | Furniture detection | Manual masking | Automatic YOLO detection   |
+ | Refinement          | None           | Targeted region refinement |
+ | Consistency         | Variable       | High                       |
+ | Control             | Limited        | Precise furniture control  |
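The "Automatic YOLO detection" row corresponds to the mask construction inside `stage2_detect_and_refine`; condensed into a standalone sketch:

```python
import cv2
import numpy as np
from ultralytics import YOLO

# COCO IDs 56-65: chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote
FURNITURE_CLASSES = {56, 57, 58, 59, 60, 61, 62, 63, 64, 65}

def furniture_mask(image_rgb: np.ndarray, conf_threshold: float = 0.3) -> np.ndarray:
    """Union of detected furniture boxes, dilated and blurred for soft inpainting edges."""
    mask = np.zeros(image_rgb.shape[:2], dtype=np.uint8)
    for result in YOLO("yolov8n.pt")(image_rgb, verbose=False):
        for box in result.boxes:
            if int(box.cls) in FURNITURE_CLASSES and float(box.conf) > conf_threshold:
                x1, y1, x2, y2 = box.xyxy[0].int().tolist()
                mask[y1:y2, x1:x2] = 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10))
    mask = cv2.dilate(mask, kernel, iterations=1)
    return cv2.GaussianBlur(mask, (21, 21), 0)
```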

  ---

+ ## 📚 Project Origin

+ This project is a complete reimplementation based on the [ArvidWartenberg/virtual-staging](https://github.com/ArvidWartenberg/virtual-staging) approach, adding YOLO integration and Hugging Face Spaces optimization.

+ ### Key Improvements
+ - 🔍 **YOLOv8 integration**: automatic furniture region detection
+ - 🎨 **Style templates**: 4 rooms × 4 styles = 16 combinations
+ - 🚀 **HF Spaces optimization**: @spaces.GPU and memory optimizations
+ - 💾 **Structure preservation**: improved background/structure preservation logic

+ Original virtual-staging approach by [ArvidWartenberg](https://github.com/ArvidWartenberg) ❤️
+ Enhanced with YOLO integration by Spacely Team
requirements.txt CHANGED
@@ -1,13 +1,12 @@
- streamlit
  torch>=2.0.0
  torchvision
- diffusers>=0.25.0
- transformers>=4.36.0
- accelerate>=0.26.0
  controlnet-aux
  opencv-python
- scipy
  Pillow
  numpy
- matplotlib
- huggingface_hub
 
+ gradio==5.41.0
  torch>=2.0.0
  torchvision
+ diffusers>=0.30.0
+ transformers>=4.50.0
  controlnet-aux
  opencv-python
  Pillow
  numpy
+ accelerate
+ ultralytics>=8.3.0
+ spaces
run_virtual_staging.py ADDED
@@ -0,0 +1,34 @@
+ #!/usr/bin/env python3
+ """
+ Run Virtual Staging AI locally for development
+ """
+
+ import sys
+ import os
+
+ # Add current directory to path
+ sys.path.append(os.path.dirname(os.path.abspath(__file__)))
+
+ # Import the new virtual staging app
+ from virtual_staging_app import create_interface
+
+ def main():
+     print("🏠 Starting Virtual Staging AI - 2-Stage Pipeline")
+     print("🔧 Stage 1: ControlNet generates initial layout")
+     print("🔍 Stage 2: YOLO detects furniture + ControlNet refines")
+     print("💻 Running locally with debug mode")
+
+     # Create interface
+     demo = create_interface()
+
+     # Launch with local settings
+     demo.launch(
+         server_name="0.0.0.0",
+         server_port=7860,
+         share=False,
+         show_error=True,
+         debug=True
+     )
+
+ if __name__ == "__main__":
+     main()
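With the dependencies installed in the active environment, a typical local session looks like this (the port comes from the launch settings above):

```bash
python run_virtual_staging.py
# then open http://localhost:7860 in a browser
```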
virtual_staging_app.py ADDED
@@ -0,0 +1,406 @@
+ #!/usr/bin/env python3
+ """
+ Virtual Staging AI - Complete Rebuild
+ Based on ArvidWartenberg/virtual-staging approach with 2-stage pipeline
+
+ Stage 1: Generate initial furniture layout using ControlNet
+ Stage 2: Detect furniture regions with YOLO and refine with inpainting
+ """
+
+ import gradio as gr
+ import torch
+ import numpy as np
+ from PIL import Image
+ import cv2
+ import os
+ from pathlib import Path
+
+ # Handle spaces import for both local and Hugging Face deployment
+ try:
+     import spaces  # Required for Hugging Face Spaces GPU
+     SPACES_AVAILABLE = True
+ except ImportError:
+     # Local development - create dummy decorator
+     class DummySpaces:
+         @staticmethod
+         def GPU(func):
+             return func
+
+     spaces = DummySpaces()
+     SPACES_AVAILABLE = False
+
+ # Model imports with error handling
+ try:
+     from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline, UniPCMultistepScheduler
+     from transformers import AutoImageProcessor, SegformerForSemanticSegmentation
+     from controlnet_aux import MLSDdetector
+     from ultralytics import YOLO
+     MODELS_AVAILABLE = True
+ except ImportError as e:
+     print(f"Failed to load model libraries: {e}")
+     MODELS_AVAILABLE = False
+
+ # Room style templates (keep the best from original)
+ ROOM_STYLES = {
+     "Living Room": {
+         "Modern": "modern living room with sleek sectional sofa, glass coffee table, minimalist decor",
+         "Scandinavian": "scandinavian living room with cream linen sofa, light oak furniture, cozy textiles",
+         "Industrial": "industrial living room with vintage leather sofa, exposed brick, metal fixtures",
+         "Traditional": "traditional living room with wingback chairs, mahogany table, classic patterns"
+     },
+     "Bedroom": {
+         "Modern": "modern bedroom with platform bed, integrated nightstands, clean geometric lines",
+         "Scandinavian": "scandinavian bedroom with light wood bed frame, white linens, hygge atmosphere",
+         "Industrial": "industrial bedroom with wrought iron bed, exposed elements, vintage touches",
+         "Traditional": "traditional bedroom with ornate four-poster bed, mahogany finish, luxury bedding"
+     },
+     "Kitchen": {
+         "Modern": "modern kitchen with handleless cabinets, quartz countertops, integrated appliances",
+         "Scandinavian": "scandinavian kitchen with light oak cabinets, white marble, clean design",
+         "Industrial": "industrial kitchen with concrete countertops, exposed brick, metal cabinets",
+         "Traditional": "traditional kitchen with raised panel cabinets, granite countertops, warm wood"
+     },
+     "Dining Room": {
+         "Modern": "modern dining room with glass-top table, sculptural chairs, linear lighting",
+         "Scandinavian": "scandinavian dining room with light oak table, wishbone chairs, natural textures",
+         "Industrial": "industrial dining room with reclaimed wood table, metal chairs, exposed beams",
+         "Traditional": "traditional dining room with mahogany pedestal table, upholstered chairs, crystal chandelier"
+     }
+ }
+
+ # Global models
+ pipe = None
+ yolo_model = None
+ seg_processor = None
+ seg_model = None
+ mlsd_processor = None
+
+ def load_models():
+     """Load all required models"""
+     global pipe, yolo_model, seg_processor, seg_model, mlsd_processor
+
+     if not MODELS_AVAILABLE:
+         return "❌ Model libraries not available"
+
+     try:
+         print("🔄 Loading Virtual Staging AI models...")
+
+         # Load YOLO for furniture detection (Stage 2)
+         print("Loading YOLO model...")
+         yolo_model = YOLO('yolov8n.pt')  # Use nano model for speed
+
+         # Load ControlNet for layout generation (Stage 1)
+         print("Loading ControlNet models...")
+         device = "cuda" if torch.cuda.is_available() else "cpu"
+         dtype = torch.float16 if torch.cuda.is_available() else torch.float32
+
+         controlnet = [
+             ControlNetModel.from_pretrained(
+                 "BertChristiaens/controlnet-seg-room",
+                 torch_dtype=dtype,
+                 low_cpu_mem_usage=True
+             ),
+             ControlNetModel.from_pretrained(
+                 "lllyasviel/sd-controlnet-mlsd",
+                 torch_dtype=dtype,
+                 low_cpu_mem_usage=True
+             ),
+         ]
+
+         # Main pipeline
+         pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
+             "SG161222/Realistic_Vision_V3.0_VAE",
+             controlnet=controlnet,
+             safety_checker=None,
+             torch_dtype=dtype,
+             low_cpu_mem_usage=True,
+             variant="fp16" if torch.cuda.is_available() else None
+         )
+
+         pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
+
+         if torch.cuda.is_available():
+             pipe = pipe.to("cuda")
+             # Enable memory efficient attention
+             try:
+                 pipe.enable_xformers_memory_efficient_attention()
+                 print("✅ XFormers enabled")
+             except Exception:  # xformers is optional; fall back silently
+                 print("⚠️ XFormers not available")
+
+             # Enable model offloading to save GPU memory
+             pipe.enable_model_cpu_offload()
+             print("✅ CPU offloading enabled")
+
+         # Load auxiliary models
+         seg_processor = AutoImageProcessor.from_pretrained("nvidia/segformer-b5-finetuned-ade-640-640")
+         seg_model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b5-finetuned-ade-640-640")
+         mlsd_processor = MLSDdetector.from_pretrained("lllyasviel/Annotators")
+
+         print("✅ All models loaded successfully!")
+         return "✅ Models loaded successfully!"
+
+     except Exception as e:
+         import traceback
+         error_details = traceback.format_exc()
+         print(f"❌ Model loading failed: {error_details}")
+         return f"❌ Failed to load models: {e}"
+
+ def stage1_generate_initial_layout(input_image, room_type, design_style):
+     """Stage 1: Generate initial furniture layout using ControlNet"""
+     print("🏗️ Stage 1: Generating initial furniture layout...")
+
+     if pipe is None:
+         raise Exception("Pipeline not loaded")
+
+     # Get room prompt
+     prompt = ROOM_STYLES.get(room_type, ROOM_STYLES["Living Room"]).get(design_style, "modern furnished room")
+     full_prompt = f"photorealistic {prompt}, professional interior design photography, natural lighting, high quality, detailed furniture"
+
+     # Resize for processing (cap the longest side at 768 px, keep aspect ratio)
+     orig_w, orig_h = input_image.size
+     max_size = 768
+     if max(orig_w, orig_h) > max_size:
+         if orig_w > orig_h:
+             new_w, new_h = max_size, int(max_size * orig_h / orig_w)
+         else:
+             new_w, new_h = int(max_size * orig_w / orig_h), max_size
+     else:
+         new_w, new_h = orig_w, orig_h
+
+     resized_image = input_image.resize((new_w, new_h))
+
+     # Create full (all-white) mask so the initial generation covers the whole frame
+     mask_image = Image.new('RGB', (new_w, new_h), (255, 255, 255))
+
+     # Prepare control images
+     # NOTE: the resized RGB image is passed directly as the segmentation control
+     # input; the SegFormer models loaded in load_models() are not applied here.
+     seg_control = resized_image.copy()
+     if mlsd_processor:
+         mlsd_image = mlsd_processor(resized_image)
+         mlsd_image = mlsd_image.resize((new_w, new_h))
+     else:
+         mlsd_image = resized_image.copy()
+
+     # Generate initial layout
+     print(f"Generating: {full_prompt}")
+     result = pipe(
+         prompt=full_prompt,
+         negative_prompt="empty room, no furniture, bad quality, distorted, blurry, unrealistic, floating furniture",
+         num_inference_steps=30,
+         strength=0.8,
+         guidance_scale=7.5,
+         image=resized_image,
+         mask_image=mask_image,
+         control_image=[seg_control, mlsd_image],
+         controlnet_conditioning_scale=[0.6, 0.4],
+         control_guidance_start=[0, 0],
+         control_guidance_end=[0.7, 0.5],
+     ).images[0]
+
+     # Restore original size
+     stage1_result = result.resize((orig_w, orig_h), Image.Resampling.LANCZOS)
+
+     print("✅ Stage 1 completed")
+     return stage1_result
+
+ def stage2_detect_and_refine(original_image, stage1_result, room_type, design_style):
+     """Stage 2: Detect furniture regions and refine with inpainting"""
+     print("🔍 Stage 2: Detecting furniture and refining...")
+
+     if yolo_model is None:
+         print("⚠️ YOLO model not loaded, skipping Stage 2")
+         return stage1_result
+
+     # Convert PIL to numpy for YOLO
+     stage1_array = np.array(stage1_result)
+
+     # Detect furniture objects
+     print("Detecting furniture with YOLO...")
+     results = yolo_model(stage1_array, verbose=False)
+
+     # Create furniture mask based on detections
+     furniture_mask = np.zeros((stage1_result.height, stage1_result.width), dtype=np.uint8)
+
+     # COCO IDs 56-65: chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote
+     furniture_classes = [56, 57, 58, 59, 60, 61, 62, 63, 64, 65]
+
+     for result in results:
+         boxes = result.boxes
+         if boxes is not None:
+             for box in boxes:
+                 class_id = int(box.cls.cpu().numpy()[0])
+                 confidence = float(box.conf.cpu().numpy()[0])
+
+                 if class_id in furniture_classes and confidence > 0.3:
+                     # Get bounding box coordinates
+                     x1, y1, x2, y2 = box.xyxy.cpu().numpy()[0].astype(int)
+                     # Add to furniture mask
+                     furniture_mask[y1:y2, x1:x2] = 255
+
+     # Expand mask slightly
+     kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10))
+     furniture_mask = cv2.dilate(furniture_mask, kernel, iterations=1)
+
+     # Apply Gaussian blur for smooth edges
+     furniture_mask = cv2.GaussianBlur(furniture_mask, (21, 21), 0)
+
+     # Convert mask to PIL
+     mask_pil = Image.fromarray(furniture_mask).convert('RGB')
+
+     # If no furniture detected, return stage 1 result
+     if np.count_nonzero(furniture_mask) == 0:
+         print("⚠️ No furniture detected, returning Stage 1 result")
+         return stage1_result
+
+     print(f"Detected furniture regions: {np.count_nonzero(furniture_mask)} pixels")
+
+     # Stage 2 refinement with inpainting
+     prompt = ROOM_STYLES.get(room_type, ROOM_STYLES["Living Room"]).get(design_style, "modern furnished room")
+     refined_prompt = f"photorealistic {prompt}, high quality furniture, professional interior design, natural lighting, realistic textures"
+
+     # Resize for processing
+     orig_w, orig_h = original_image.size
+     max_size = 768
+     if max(orig_w, orig_h) > max_size:
+         if orig_w > orig_h:
+             new_w, new_h = max_size, int(max_size * orig_h / orig_w)
+         else:
+             new_w, new_h = int(max_size * orig_w / orig_h), max_size
+     else:
+         new_w, new_h = orig_w, orig_h
+
+     # Resize images
+     resized_original = original_image.resize((new_w, new_h))
+     resized_mask = mask_pil.resize((new_w, new_h))
+
+     # Prepare control images
+     seg_control = resized_original.copy()
+     if mlsd_processor:
+         mlsd_image = mlsd_processor(resized_original)
+         mlsd_image = mlsd_image.resize((new_w, new_h))
+     else:
+         mlsd_image = resized_original.copy()
+
+     # Refine furniture regions
+     print(f"Refining: {refined_prompt}")
+     refined_result = pipe(
+         prompt=refined_prompt,
+         negative_prompt="bad quality, distorted, blurry, unrealistic, empty room, no furniture, floating furniture, bad proportions",
+         num_inference_steps=25,
+         strength=0.6,
+         guidance_scale=8.0,
+         image=resized_original,
+         mask_image=resized_mask,
+         control_image=[seg_control, mlsd_image],
+         controlnet_conditioning_scale=[0.7, 0.5],
+         control_guidance_start=[0, 0],
+         control_guidance_end=[0.8, 0.6],
+     ).images[0]
+
+     # Restore original size
+     final_result = refined_result.resize((orig_w, orig_h), Image.Resampling.LANCZOS)
+
+     print("✅ Stage 2 completed")
+     return final_result
+
+ @spaces.GPU
+ def virtual_stage_room(input_image, room_type, design_style):
+     """Main virtual staging pipeline - 2-stage approach"""
+
+     if input_image is None:
+         return None, "❌ Please upload an image!"
+
+     try:
+         # Load models if needed
+         if pipe is None or yolo_model is None:
+             status = load_models()
+             if "❌" in status:
+                 return None, status
+
+         print(f"🏠 Starting Virtual Staging: {room_type} in {design_style} style")
+
+         # Stage 1: Generate initial layout
+         stage1_result = stage1_generate_initial_layout(input_image, room_type, design_style)
+
+         # Stage 2: Detect and refine furniture
+         final_result = stage2_detect_and_refine(input_image, stage1_result, room_type, design_style)
+
+         success_msg = f"✅ Virtual staging completed! {room_type} furnished in {design_style} style using 2-stage AI pipeline."
+
+         return final_result, success_msg
+
+     except Exception as e:
+         import traceback
+         error_details = traceback.format_exc()
+         error_msg = f"❌ Virtual staging failed: {str(e)}\n\nDetails:\n{error_details}"
+         print(error_msg)
+         return None, error_msg
+
+ def create_interface():
+     """Create Gradio interface for Virtual Staging"""
+
+     with gr.Blocks(title="Virtual Staging AI - 2-Stage Pipeline", theme=gr.themes.Soft()) as demo:
+         gr.HTML("<h1>🏠 Virtual Staging AI - Complete Rebuild</h1>")
+         gr.Markdown("**2-Stage Pipeline**: Stage 1 generates initial layout, Stage 2 detects and refines furniture using YOLO + ControlNet")
+
+         with gr.Row():
+             with gr.Column(scale=1):
+                 # Input controls
+                 input_image = gr.Image(
+                     label="Upload Empty Room Image",
+                     type="pil",
+                     height=300
+                 )
+
+                 room_type = gr.Dropdown(
+                     choices=list(ROOM_STYLES.keys()),
+                     value="Living Room",
+                     label="Select Room Type"
+                 )
+
+                 design_style = gr.Dropdown(
+                     choices=["Modern", "Scandinavian", "Industrial", "Traditional"],
+                     value="Modern",
+                     label="Select Design Style"
+                 )
+
+                 generate_btn = gr.Button("🚀 Start Virtual Staging", variant="primary", size="lg")
+
+                 gr.Markdown("""
+                 ### 🔧 How it works:
+                 1. **Stage 1**: ControlNet generates initial furniture layout
+                 2. **Stage 2**: YOLO detects furniture → ControlNet refines regions
+
+                 ### ✨ Features:
+                 - 🎯 Furniture detection with YOLO
+                 - 🎨 Style-specific templates
+                 - 🏗️ 2-stage quality improvement
+                 - 💾 Background preservation
+                 """)
+
+             with gr.Column(scale=1):
+                 # Output
+                 output_image = gr.Image(
+                     label="Virtual Staging Result",
+                     height=400
+                 )
+                 result_message = gr.Textbox(
+                     label="Status",
+                     interactive=False,
+                     value="Ready to start virtual staging"
+                 )
+
+         # Event handler
+         generate_btn.click(
+             fn=virtual_stage_room,
+             inputs=[input_image, room_type, design_style],
+             outputs=[output_image, result_message]
+         )
+
+     return demo
+
+ if __name__ == "__main__":
+     print("🚀 Starting Virtual Staging AI - Complete Rebuild...")
+
+     # Create and launch interface
+     demo = create_interface()
+     demo.launch(share=True)