# TEMU-VTOFF: Virtual Try-Off & Fashion Understanding Toolkit

TEMU-VTOFF is a toolkit for virtual try-off and fashion image understanding. It leverages diffusion models, vision-language models, and semantic segmentation to enable garment transfer, attribute captioning, and mask generation for fashion images.

<img src="./assets/teaser.png" alt="example">

## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Core Components](#core-components)
  - [1. Inference Pipeline (`inference.py`)](#1-inference-pipeline-inferencepy)
  - [2. Visual Attribute Captioning (`precompute_utils/captioning_qwen.py`)](#2-visual-attribute-captioning-precompute_utilscaptioning_qwenpy)
  - [3. Clothing Segmentation (`SegCloth.py`)](#3-clothing-segmentation-segclothpy)
- [Examples](#examples)
- [Citation](#citation)
- [License](#license)

---

## Features

- **Virtual Try-Off**: Generate realistic garment try-off images using Stable Diffusion 3-based pipelines.
- **Visual Attribute Captioning**: Extract fine-grained garment attributes using Qwen2.5-VL.
- **Clothing Segmentation**: Obtain binary and fine masks for garments using SegFormer.
- **Dataset Support**: Works with the DressCode and VITON-HD datasets.

---

## Installation

1. **Clone the repository:**

   ```bash
   git clone https://github.com/yourusername/TEMU-VTOFF.git
   cd TEMU-VTOFF
   ```

2. **Install dependencies:**

   ```bash
   pip install -r requirements.txt
   ```

3. **(Optional) Set up a virtual environment** (create and activate it before step 2 if you want the dependencies isolated):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

---

## Quick Start

### 1. Virtual Try-Off Inference

```bash
python inference.py \
  --pretrained_model_name_or_path <path/to/model> \
  --pretrained_model_name_or_path_sd3_tryoff <path/to/tryoff/model> \
  --example_image examples/example1.jpg \
  --output_dir outputs \
  --width 768 --height 1024 \
  --guidance_scale 2.0 \
  --num_inference_steps 28 \
  --category upper_body
```

### 2. Visual Attribute Captioning

```bash
python precompute_utils/captioning_qwen.py \
  --pretrained_model_name_or_path Qwen/Qwen2.5-VL-3B-Instruct \
  --image_path examples/example1.jpg \
  --output_path outputs/example1_caption.txt \
  --image_category upper_body
```

### 3. Clothing Segmentation

```python
from PIL import Image

from SegCloth import segment_clothing

img = Image.open("examples/example1.jpg")
# Returns a black-and-white garment mask plus a copy of the image
# with the garment region grayed out.
binary_mask, fine_mask = segment_clothing(img, category="upper_body")
binary_mask.save("outputs/example1_binary_mask.jpg")
fine_mask.save("outputs/example1_fine_mask.jpg")
```

---

## Core Components

### 1. Inference Pipeline (`inference.py`)

- **Purpose**: Generates virtual try-off images using a Stable Diffusion 3-based pipeline.
- **How it works**:
  - Loads pretrained models (VAE, transformers, schedulers, encoders).
  - Segments the clothing region using `SegCloth.py`.
  - Generates a descriptive caption for the garment using Qwen2.5-VL (`captioning_qwen.py`).
  - Runs the diffusion pipeline to synthesize a new try-off image.
- **Key Arguments**:
  - `--pretrained_model_name_or_path`: Path or Hugging Face model ID for the main model.
  - `--pretrained_model_name_or_path_sd3_tryoff`: Path or ID for the try-off transformer.
  - `--example_image`: Input image path.
  - `--output_dir`: Output directory.
  - `--category`: Clothing category (`upper_body`, `lower_body`, `dresses`).
  - `--width`, `--height`: Output image size.
  - `--guidance_scale`, `--num_inference_steps`: Generation parameters.

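The three-stage control flow described above can be sketched as follows. This is an illustrative wiring only: `run_tryoff` and the stage callables are hypothetical names, not the actual API of `inference.py`.

```python
def run_tryoff(image, segment_fn, caption_fn, diffuse_fn, category="upper_body"):
    """Chain the three stages: segmentation -> captioning -> diffusion.

    All stage functions are injected, so the orchestration logic can be
    read (and tested) independently of the heavyweight models.
    """
    binary_mask, fine_mask = segment_fn(image, category)  # SegCloth stage
    caption = caption_fn(image, category)                 # Qwen2.5-VL stage
    return diffuse_fn(image, binary_mask, caption)        # SD3 try-off stage
```

Injecting the stages also makes it easy to swap in stubs when experimenting with one component at a time.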
### 2. Visual Attribute Captioning (`precompute_utils/captioning_qwen.py`)

- **Purpose**: Generates fine-grained, structured captions for fashion images using Qwen2.5-VL.
- **How it works**:
  - Loads the Qwen2.5-VL model and processor.
  - For a given image, predicts garment attributes (e.g., type, fit, hem, neckline) in a controlled, structured format.
  - Can process single images or entire datasets (DressCode, VITON-HD).
- **Key Arguments**:
  - `--pretrained_model_name_or_path`: Path or Hugging Face model ID for Qwen2.5-VL.
  - `--image_path`: Path to a single image (for single-image captioning).
  - `--output_path`: Where to save the generated caption.
  - `--image_category`: Garment category (`upper_body`, `lower_body`, `dresses`).
  - For batch/dataset mode: `--dataset_name`, `--dataset_root`, `--filename`.

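Because the captions follow a controlled, structured format, downstream code can parse them into attribute dictionaries. The sketch below assumes a `key: value; key: value` layout for illustration; check the script's actual output before relying on this exact format.

```python
def parse_attribute_caption(caption: str) -> dict:
    """Split an 'attribute: value' style caption into a dict.

    Assumes fields are separated by ';' and each field contains a
    'key: value' pair, e.g. "type: t-shirt; neckline: crew".
    Fields without a ':' are skipped.
    """
    attributes = {}
    for field in caption.split(";"):
        if ":" not in field:
            continue
        key, value = field.split(":", 1)
        attributes[key.strip().lower()] = value.strip()
    return attributes
```

For example, `parse_attribute_caption("type: t-shirt; neckline: crew; fit: regular")` yields `{"type": "t-shirt", "neckline": "crew", "fit": "regular"}`.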
### 3. Clothing Segmentation (`SegCloth.py`)

- **Purpose**: Segments clothing regions in images, producing:
  - A binary (black-and-white) mask of the garment.
  - A fine mask image in which the garment is grayed out.
- **How it works**:
  - Uses a SegFormer model (`mattmdjaga/segformer_b2_clothes`) via the Hugging Face `transformers` pipeline.
  - Supports the categories `upper_body`, `lower_body`, and `dresses`.
  - Provides both single-image and batch processing functions.
- **Usage**:
  - `segment_clothing(img, category)`: Returns `(binary_mask, fine_mask)` for a PIL image.
  - `batch_segment_clothing(img_dir, out_dir)`: Processes all images in a directory.

---

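A segmentation pipeline returns one mask per clothing label, so producing a single binary garment mask amounts to merging the masks of the labels belonging to the requested category. The sketch below shows that merging step; the label names and their grouping are assumptions for illustration, and the real `SegCloth.py` may differ.

```python
import numpy as np
from PIL import Image

# Hypothetical mapping from category to segmentation labels; the actual
# labels used by SegCloth.py / segformer_b2_clothes may differ.
CATEGORY_LABELS = {
    "upper_body": {"Upper-clothes"},
    "lower_body": {"Pants", "Skirt"},
    "dresses": {"Dress"},
}

def merge_label_masks(label_masks, category):
    """Combine per-label masks into one binary garment mask.

    label_masks: dict mapping label name -> 2D array (or PIL image),
    nonzero where that label was predicted, mimicking the per-segment
    masks an image-segmentation pipeline returns.
    """
    wanted = CATEGORY_LABELS[category]
    combined = None
    for label, mask in label_masks.items():
        if label not in wanted:
            continue
        m = np.asarray(mask, dtype=bool)
        combined = m if combined is None else (combined | m)
    if combined is None:
        raise ValueError(f"no garment labels found for {category!r}")
    # 0/255 uint8 array -> black-and-white PIL mask
    return Image.fromarray((combined * 255).astype(np.uint8))
```

The fine (grayed-out) mask can then be produced by compositing the original image with a gray fill wherever this binary mask is white.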
## Examples

See the `examples/` directory for sample images, masks, and captions. Example usage scripts are provided for each core component.

The figures below show the model's workflow and a comparison of its results with other models.

**Workflow**

<img src="./assets/workflow.png" alt="Workflow" />

**Comparison**

<img src="./assets/compair.png" alt="Comparison" />

---

## Citation

If you use TEMU-VTOFF in your research or product, please cite this repository and the relevant models (e.g., Stable Diffusion 3, Qwen2.5-VL, SegFormer).

```bibtex
@misc{temu-vtoff,
  author       = {Your Name or Organization},
  title        = {TEMU-VTOFF: Virtual Try-Off & Fashion Understanding Toolkit},
  year         = {2024},
  howpublished = {\url{https://github.com/yourusername/TEMU-VTOFF}}
}
```

---

## License

This project is licensed under the terms in the [LICENSE](LICENSE) file provided in the repository. Please check individual model and dataset licenses for additional terms.