Ekenayy committed
Commit e635ff3 · 1 Parent(s): c40b72a

init project

Files changed (7)
  1. .gitignore +56 -0
  2. Dockerfile +34 -0
  3. README.md +108 -6
  4. app.py +311 -0
  5. config.py +71 -0
  6. deploy.md +121 -0
  7. requirements.txt +13 -0
.gitignore ADDED
@@ -0,0 +1,56 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+
+ # Virtual environments
+ venv/
+ env/
+ ENV/
+
+ # IDEs
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Model files and cache
+ LoRAs/
+ *.safetensors
+ *.bin
+ *.ckpt
+ *.pth
+
+ # Temporary files
+ *.tmp
+ *.log
+ temp/
+ gradio_cached_examples/
+
+ # Environment variables
+ .env
+ .env.local
+
+ # Hugging Face cache
+ .cache/
Dockerfile ADDED
@@ -0,0 +1,34 @@
+ # Use the official Python runtime as the base image
+ FROM python:3.10-slim
+
+ # Set working directory
+ WORKDIR /app
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y \
+     git \
+     wget \
+     curl \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements first for better caching
+ COPY requirements.txt .
+
+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy application code
+ COPY . .
+
+ # Create directory for LoRA weights
+ RUN mkdir -p LoRAs
+
+ # Expose the port that Gradio runs on
+ EXPOSE 7860
+
+ # Set environment variables
+ ENV GRADIO_SERVER_NAME="0.0.0.0"
+ ENV GRADIO_SERVER_PORT=7860
+
+ # Run the application
+ CMD ["python", "app.py"]
README.md CHANGED
@@ -1,12 +1,114 @@
  ---
- title: Owen777 Kontext Style Loras
- emoji: ⚡
- colorFrom: yellow
- colorTo: blue
+ title: FLUX Kontext Style Transfer
+ emoji: 🎨
+ colorFrom: blue
+ colorTo: purple
  sdk: gradio
- sdk_version: 5.42.0
+ sdk_version: 4.0.0
  app_file: app.py
  pinned: false
+ license: apache-2.0
+ hardware: zero-gpu
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # 🎨 FLUX Kontext Style Transfer
+
+ Transform your images with 20+ artistic styles using the powerful FLUX.1 Kontext model and style-specific LoRA adapters!
+
+ This Hugging Face Space provides an easy-to-use interface for applying various artistic styles to your images using the [Owen777/Kontext-Style-Loras](https://huggingface.co/Owen777/Kontext-Style-Loras) model.
+
+ ## 🌟 Features
+
+ - **20+ Artistic Styles**: From Studio Ghibli to Van Gogh, Pixel Art to 3D Chibi
+ - **High-Quality Results**: Powered by the FLUX.1 Kontext model
+ - **Easy-to-Use Interface**: Simple upload and style selection
+ - **Customizable**: Adjust inference steps, guidance scale, and LoRA strength
+ - **GPU-Accelerated**: Runs on Hugging Face ZeroGPU for fast inference
+
+ ## 🎭 Available Styles
+
+ | Style | Description |
+ |-------|-------------|
+ | **3D_Chibi** | Cute 3D chibi character style |
+ | **American_Cartoon** | Classic American cartoon aesthetics |
+ | **Chinese_Ink** | Traditional Chinese ink painting |
+ | **Clay_Toy** | Clay toy/sculpture appearance |
+ | **Fabric** | Textile and fabric textures |
+ | **Ghibli** | Studio Ghibli anime style |
+ | **Irasutoya** | Japanese illustration style |
+ | **Jojo** | JoJo's Bizarre Adventure anime style |
+ | **Oil_Painting** | Classic oil painting technique |
+ | **Pixel** | Retro pixel art style |
+ | **Snoopy** | Peanuts comic strip style |
+ | **Poly** | Low-poly 3D art style |
+ | **LEGO** | LEGO brick construction style |
+ | **Origami** | Paper folding art style |
+ | **Pop_Art** | Pop art movement style |
+ | **Van_Gogh** | Van Gogh's distinctive painting style |
+ | **Paper_Cutting** | Paper cutting art technique |
+ | **Line** | Clean line art style |
+ | **Vector** | Vector graphics style |
+ | **Picasso** | Picasso's cubist style |
+ | **Macaron** | Soft pastel macaron colors |
+ | **Rick_Morty** | Rick and Morty cartoon style |
+
+ ## 🚀 How to Use
+
+ 1. **Upload an Image**: Click on the image upload area and select your image
+ 2. **Choose a Style**: Select from 20+ available artistic styles
+ 3. **Customize (Optional)**:
+    - Add a custom prompt for specific styling
+    - Adjust advanced settings like inference steps and LoRA strength
+ 4. **Generate**: Click the "Generate Styled Image" button
+ 5. **Download**: Save your stylized image
+
+ ## ⚙️ Advanced Settings
+
+ - **Inference Steps**: Higher values (20-50) generally produce better quality but take longer
+ - **Guidance Scale**: Controls how closely the model follows the prompt (7.5 is recommended)
+ - **LoRA Strength**: Adjusts the intensity of the style application (0.1-2.0)
+ - **Dimensions**: Control output image size (512-1536 pixels)
+ - **Seed**: Set a specific seed for reproducible results
+
+ ## 🎯 Tips for Best Results
+
+ - Use high-quality input images (1024x1024 recommended)
+ - Experiment with different LoRA strengths for varying style intensity
+ - Try custom prompts to guide the style transformation
+ - For detailed styles like "Line" or "Vector", consider using higher inference steps
+
+ ## 🔧 Technical Details
+
+ - **Base Model**: [black-forest-labs/FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev)
+ - **LoRA Adapters**: [Owen777/Kontext-Style-Loras](https://huggingface.co/Owen777/Kontext-Style-Loras)
+ - **Framework**: Diffusers, PyTorch
+ - **Interface**: Gradio
+ - **Hardware**: Hugging Face ZeroGPU
+
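For reference outside the web UI, the Technical Details above boil down to a few lines of diffusers code. This is only a minimal sketch that mirrors what `app.py` in this commit does (load the Kontext pipeline, fetch one style adapter, run an image-to-image edit); the Ghibli style, the example image URL, and the output filename are illustrative choices, not requirements.

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download

# Load the base FLUX.1 Kontext pipeline in bfloat16 on the GPU.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Fetch one style adapter from the LoRA repository (Ghibli as an example).
lora_path = hf_hub_download(
    repo_id="Owen777/Kontext-Style-Loras",
    filename="Ghibli_lora_weights.safetensors",
)
pipe.load_lora_weights(lora_path, adapter_name="style")
pipe.set_adapters(["style"], adapter_weights=[1.0])  # LoRA strength

# Run the style transfer with the defaults the Space exposes.
image = load_image(
    "https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg"
).resize((1024, 1024))
result = pipe(
    image=image,
    prompt="Turn this image into the Ghibli style.",
    height=1024,
    width=1024,
    num_inference_steps=24,
    guidance_scale=7.5,
).images[0]
result.save("styled.png")
```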
+ ## 📚 Model Information
+
+ The LoRA adapters are trained on high-quality paired data generated by GPT-4o and sourced from OmniConsistency. The training methodology ensures consistent and high-quality style transfers across various artistic domains.
+
+ **Training Code**: Available at [Owen718/Kontext-Lora-Trainer](https://github.com/Owen718/Kontext-Lora-Trainer)
+
+ ## 🏆 Contributors
+
+ - **Tian YE & Song FEI** - HKUST Guangzhou
+
+ ## 📄 License
+
+ This project is licensed under the Apache 2.0 License. See the original model repository for detailed license information.
+
+ ## 🤝 Contributing
+
+ Feel free to open issues or contact the original authors for feedback or collaboration! More style LoRAs will be released soon.
+
+ ## 🔗 Links
+
+ - [Model Repository](https://huggingface.co/Owen777/Kontext-Style-Loras)
+ - [Training Code](https://github.com/Owen718/Kontext-Lora-Trainer)
+ - [FLUX.1 Kontext Base Model](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev)
+
+ ---
+
+ *Transform your creativity with AI-powered artistic style transfer!* ✨
app.py ADDED
@@ -0,0 +1,311 @@
+ import gradio as gr
+ import torch
+ from huggingface_hub import hf_hub_download
+ from diffusers import FluxKontextPipeline
+ from diffusers.utils import load_image
+ from PIL import Image
+ import spaces
+ import os
+ import gc
+
+ # Style LoRA mapping
+ STYLE_TYPE_LORA_DICT = {
+     "3D_Chibi": "3D_Chibi_lora_weights.safetensors",
+     "American_Cartoon": "American_Cartoon_lora_weights.safetensors",
+     "Chinese_Ink": "Chinese_Ink_lora_weights.safetensors",
+     "Clay_Toy": "Clay_Toy_lora_weights.safetensors",
+     "Fabric": "Fabric_lora_weights.safetensors",
+     "Ghibli": "Ghibli_lora_weights.safetensors",
+     "Irasutoya": "Irasutoya_lora_weights.safetensors",
+     "Jojo": "Jojo_lora_weights.safetensors",
+     "Oil_Painting": "Oil_Painting_lora_weights.safetensors",
+     "Pixel": "Pixel_lora_weights.safetensors",
+     "Snoopy": "Snoopy_lora_weights.safetensors",
+     "Poly": "Poly_lora_weights.safetensors",
+     "LEGO": "LEGO_lora_weights.safetensors",
+     "Origami": "Origami_lora_weights.safetensors",
+     "Pop_Art": "Pop_Art_lora_weights.safetensors",
+     "Van_Gogh": "Van_Gogh_lora_weights.safetensors",
+     "Paper_Cutting": "Paper_Cutting_lora_weights.safetensors",
+     "Line": "Line_lora_weights.safetensors",
+     "Vector": "Vector_lora_weights.safetensors",
+     "Picasso": "Picasso_lora_weights.safetensors",
+     "Macaron": "Macaron_lora_weights.safetensors",
+     "Rick_Morty": "Rick_Morty_lora_weights.safetensors"
+ }
+
+ # Global variables for pipeline management
+ pipeline = None
+ current_lora = None
+
+ def load_pipeline():
+     """Load the base FLUX Kontext pipeline"""
+     global pipeline
+     if pipeline is None:
+         print("Loading FLUX Kontext pipeline...")
+         pipeline = FluxKontextPipeline.from_pretrained(
+             "black-forest-labs/FLUX.1-Kontext-dev",
+             torch_dtype=torch.bfloat16
+         ).to('cuda')
+         print("Pipeline loaded successfully!")
+     return pipeline
+
+ def download_lora(style_name):
+     """Download LoRA weights if not already cached"""
+     lora_filename = STYLE_TYPE_LORA_DICT[style_name]
+     local_path = f"./LoRAs/{lora_filename}"
+
+     if not os.path.exists(local_path):
+         print(f"Downloading LoRA for {style_name}...")
+         os.makedirs("./LoRAs", exist_ok=True)
+         hf_hub_download(
+             repo_id="Owen777/Kontext-Style-Loras",
+             filename=lora_filename,
+             local_dir="./LoRAs"
+         )
+         print(f"LoRA downloaded: {local_path}")
+     return local_path
+
+ @spaces.GPU
+ def generate_styled_image(
+     input_image,
+     style_name,
+     custom_prompt="",
+     num_inference_steps=24,
+     guidance_scale=7.5,
+     lora_strength=1.0,
+     width=1024,
+     height=1024,
+     seed=-1
+ ):
+     """Generate styled image using FLUX Kontext with LoRA"""
+     global pipeline, current_lora
+
+     try:
+         # Load pipeline if not loaded
+         pipeline = load_pipeline()
+
+         # Download and load LoRA if different from current
+         if current_lora != style_name:
+             lora_path = download_lora(style_name)
+
+             # Unload previous LoRA if any
+             if current_lora is not None:
+                 try:
+                     pipeline.unload_lora_weights()
+                 except:
+                     pass
+
+             # Load new LoRA
+             pipeline.load_lora_weights(lora_path, adapter_name="lora")
+             pipeline.set_adapters(["lora"], adapter_weights=[lora_strength])
+             current_lora = style_name
+             print(f"Loaded LoRA: {style_name}")
+         else:
+             # Update LoRA strength if same LoRA
+             pipeline.set_adapters(["lora"], adapter_weights=[lora_strength])
+
+         # Prepare input image
+         if input_image is None:
+             raise ValueError("Please provide an input image")
+
+         # Resize input image
+         input_image = input_image.resize((width, height))
+
+         # Prepare prompt
+         if custom_prompt.strip():
+             prompt = custom_prompt
+         else:
+             prompt = f"Turn this image into the {style_name.replace('_', ' ')} style."
+
+         # Set seed for reproducibility
+         if seed != -1:
+             torch.manual_seed(seed)
+
+         # Generate image
+         print(f"Generating image with style: {style_name}")
+         print(f"Prompt: {prompt}")
+
+         with torch.autocast("cuda"):
+             result = pipeline(
+                 image=input_image,
+                 prompt=prompt,
+                 height=height,
+                 width=width,
+                 num_inference_steps=num_inference_steps,
+                 guidance_scale=guidance_scale
+             )
+
+         output_image = result.images[0]
+
+         # Clean up GPU memory
+         torch.cuda.empty_cache()
+         gc.collect()
+
+         return output_image
+
+     except Exception as e:
+         print(f"Error generating image: {str(e)}")
+         return None
+
+ # Custom CSS for better UI
+ css = """
+ .gradio-container {
+     font-family: 'Helvetica Neue', Arial, sans-serif;
+ }
+ .title {
+     text-align: center;
+     font-size: 2.5em;
+     font-weight: bold;
+     margin-bottom: 1em;
+     color: #2c3e50;
+ }
+ .subtitle {
+     text-align: center;
+     font-size: 1.2em;
+     color: #7f8c8d;
+     margin-bottom: 2em;
+ }
+ """
+
+ # Create Gradio interface
+ with gr.Blocks(css=css, theme=gr.themes.Soft()) as demo:
+     gr.HTML('<div class="title">🎨 FLUX Kontext Style Transfer</div>')
+     gr.HTML('<div class="subtitle">Transform your images with 20+ artistic styles using LoRA adapters</div>')
+
+     with gr.Row():
+         with gr.Column(scale=1):
+             gr.Markdown("### Input")
+             input_image = gr.Image(
+                 label="Upload Image",
+                 type="pil",
+                 height=400
+             )
+
+             style_dropdown = gr.Dropdown(
+                 choices=list(STYLE_TYPE_LORA_DICT.keys()),
+                 label="Choose Style",
+                 value="Ghibli",
+                 interactive=True
+             )
+
+             custom_prompt = gr.Textbox(
+                 label="Custom Prompt (Optional)",
+                 placeholder="Leave empty to use default style prompt",
+                 lines=2
+             )
+
+             with gr.Accordion("Advanced Settings", open=False):
+                 num_inference_steps = gr.Slider(
+                     minimum=10,
+                     maximum=50,
+                     value=24,
+                     step=1,
+                     label="Inference Steps"
+                 )
+
+                 guidance_scale = gr.Slider(
+                     minimum=1.0,
+                     maximum=20.0,
+                     value=7.5,
+                     step=0.1,
+                     label="Guidance Scale"
+                 )
+
+                 lora_strength = gr.Slider(
+                     minimum=0.1,
+                     maximum=2.0,
+                     value=1.0,
+                     step=0.1,
+                     label="LoRA Strength"
+                 )
+
+                 with gr.Row():
+                     width = gr.Slider(
+                         minimum=512,
+                         maximum=1536,
+                         value=1024,
+                         step=64,
+                         label="Width"
+                     )
+                     height = gr.Slider(
+                         minimum=512,
+                         maximum=1536,
+                         value=1024,
+                         step=64,
+                         label="Height"
+                     )
+
+                 seed = gr.Number(
+                     label="Seed (-1 for random)",
+                     value=-1,
+                     precision=0
+                 )
+
+             generate_btn = gr.Button("🎨 Generate Styled Image", variant="primary", size="lg")
+
+         with gr.Column(scale=1):
+             gr.Markdown("### Output")
+             output_image = gr.Image(
+                 label="Styled Image",
+                 height=400
+             )
+
+     # Examples
+     gr.Markdown("### Examples")
+     gr.Examples(
+         examples=[
+             ["https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg", "Ghibli", "", 24, 7.5, 1.0, 1024, 1024, -1],
+             ["https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg", "Pixel", "", 24, 7.5, 1.0, 1024, 1024, -1],
+             ["https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg", "Van_Gogh", "", 24, 7.5, 1.0, 1024, 1024, -1],
+         ],
+         inputs=[input_image, style_dropdown, custom_prompt, num_inference_steps, guidance_scale, lora_strength, width, height, seed],
+         outputs=[output_image],
+         fn=generate_styled_image,
+         cache_examples=False,
+     )
+
+     # Event handlers
+     generate_btn.click(
+         fn=generate_styled_image,
+         inputs=[
+             input_image,
+             style_dropdown,
+             custom_prompt,
+             num_inference_steps,
+             guidance_scale,
+             lora_strength,
+             width,
+             height,
+             seed
+         ],
+         outputs=[output_image]
+     )
+
+     # Information section
+     with gr.Accordion("About", open=False):
+         gr.Markdown("""
+ ### FLUX Kontext Style Transfer
+
+ This application uses the FLUX.1 Kontext model with style-specific LoRA adapters to transform images into various artistic styles.
+
+ **Available Styles:**
+ - 3D Chibi, American Cartoon, Chinese Ink, Clay Toy
+ - Fabric, Ghibli, Irasutoya, Jojo, Oil Painting
+ - Pixel, Snoopy, Poly, LEGO, Origami
+ - Pop Art, Van Gogh, Paper Cutting, Line, Vector
+ - Picasso, Macaron, Rick & Morty
+
+ **Tips:**
+ - Upload high-quality images for best results
+ - Experiment with different LoRA strengths
+ - Use custom prompts for more specific styling
+ - Higher inference steps generally produce better quality
+
+ **Model:** [Owen777/Kontext-Style-Loras](https://huggingface.co/Owen777/Kontext-Style-Loras)
+
+ **Training Code:** [GitHub Repository](https://github.com/Owen718/Kontext-Lora-Trainer)
+         """)
+
+ if __name__ == "__main__":
+     demo.launch()
config.py ADDED
@@ -0,0 +1,71 @@
+ """
+ Configuration file for FLUX Kontext Style Transfer Space
+ """
+
+ import os
+
+ # Model configuration
+ BASE_MODEL_ID = "black-forest-labs/FLUX.1-Kontext-dev"
+ LORA_REPO_ID = "Owen777/Kontext-Style-Loras"
+
+ # Default generation parameters
+ DEFAULT_INFERENCE_STEPS = 24
+ DEFAULT_GUIDANCE_SCALE = 7.5
+ DEFAULT_LORA_STRENGTH = 1.0
+ DEFAULT_WIDTH = 1024
+ DEFAULT_HEIGHT = 1024
+
+ # GPU and memory settings
+ USE_TORCH_COMPILE = False  # Set to True if you want to use torch.compile for faster inference
+ ENABLE_CPU_OFFLOAD = True  # Enable CPU offloading to save GPU memory
+ LOW_VRAM_MODE = False  # Set to True for systems with limited VRAM
+
+ # Gradio configuration
+ GRADIO_SHARE = False
+ GRADIO_DEBUG = False
+ GRADIO_ENABLE_QUEUE = True
+ MAX_QUEUE_SIZE = 20
+
+ # Cache settings
+ CACHE_DIR = "./cache"
+ LORA_CACHE_DIR = "./LoRAs"
+
+ # Security settings
+ MAX_IMAGE_SIZE = 2048  # Maximum image dimension
+ MIN_IMAGE_SIZE = 256  # Minimum image dimension
+ MAX_BATCH_SIZE = 1  # Maximum number of images to process at once
+
+ # Hugging Face settings
+ HF_TOKEN = os.getenv("HF_TOKEN", None)  # Optional: Set if you need authentication
+
+ # Style configuration with descriptions
+ STYLE_DESCRIPTIONS = {
+     "3D_Chibi": "Transform images into cute 3D chibi character style",
+     "American_Cartoon": "Apply classic American cartoon aesthetics",
+     "Chinese_Ink": "Convert to traditional Chinese ink painting style",
+     "Clay_Toy": "Give images a clay toy/sculpture appearance",
+     "Fabric": "Apply textile and fabric texture effects",
+     "Ghibli": "Transform into Studio Ghibli anime style",
+     "Irasutoya": "Apply Japanese Irasutoya illustration style",
+     "Jojo": "Convert to JoJo's Bizarre Adventure anime style",
+     "Oil_Painting": "Apply classic oil painting techniques",
+     "Pixel": "Transform into retro pixel art style",
+     "Snoopy": "Apply Peanuts comic strip style",
+     "Poly": "Convert to low-poly 3D art style",
+     "LEGO": "Transform into LEGO brick construction style",
+     "Origami": "Apply paper folding origami art style",
+     "Pop_Art": "Convert to pop art movement style",
+     "Van_Gogh": "Apply Van Gogh's distinctive painting style",
+     "Paper_Cutting": "Transform using paper cutting art technique",
+     "Line": "Convert to clean line art style",
+     "Vector": "Apply vector graphics style",
+     "Picasso": "Transform using Picasso's cubist style",
+     "Macaron": "Apply soft pastel macaron color palette",
+     "Rick_Morty": "Convert to Rick and Morty cartoon style"
+ }
+
+ # Example images for demonstration
+ EXAMPLE_IMAGES = [
+     "https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg",
+     # Add more example image URLs here if needed
+ ]
deploy.md ADDED
@@ -0,0 +1,121 @@
+ # Deployment Guide for FLUX Kontext Style Transfer Space
+
+ ## Quick Start
+
+ ### 1. Create a New Hugging Face Space
+
+ 1. Go to [Hugging Face Spaces](https://huggingface.co/spaces)
+ 2. Click "Create new Space"
+ 3. Choose:
+    - **Space name**: `flux-kontext-style-transfer` (or your preferred name)
+    - **License**: Apache 2.0
+    - **SDK**: Gradio
+    - **Hardware**: ZeroGPU (recommended) or T4 Medium
+ 4. Click "Create Space"
+
+ ### 2. Upload Files
+
+ Upload all the files from this directory to your new Space:
+
+ - `app.py` - Main application file
+ - `requirements.txt` - Python dependencies
+ - `README.md` - Space documentation
+ - `config.py` - Configuration settings
+ - `.gitignore` - Git ignore file
+ - `Dockerfile` - Docker configuration (optional)
+
+ ### 3. Space Configuration
+
+ The Space should automatically start building once you upload the files. The `README.md` contains the necessary YAML frontmatter with the Space configuration.
+
+ ### 4. Hardware Requirements
+
+ For optimal performance, use:
+ - **ZeroGPU**: Best for public spaces (free with queue)
+ - **T4 Medium or Large**: For consistent performance
+ - **A10G Small or Medium**: For faster inference
+
+ ### 5. Environment Variables (Optional)
+
+ If you need to set environment variables:
+ 1. Go to your Space settings
+ 2. Add variables in the "Variables and secrets" section
+ 3. Common variables:
+    - `HF_TOKEN`: Hugging Face token (if needed for private models)
+
+ ## File Structure
+
+ ```
+ your-space/
+ ├── app.py              # Main Gradio application
+ ├── requirements.txt    # Python dependencies
+ ├── README.md           # Space documentation with metadata
+ ├── config.py           # Configuration settings
+ ├── .gitignore          # Git ignore patterns
+ ├── Dockerfile          # Docker configuration (optional)
+ └── deploy.md           # This deployment guide
+ ```
+
+ ## Features Included
+
+ - **Complete Gradio Interface**: Ready-to-use web interface
+ - **20+ Style LoRAs**: All styles from the original model
+ - **GPU Optimization**: Configured for ZeroGPU
+ - **Memory Management**: Efficient GPU memory usage
+ - **Examples**: Pre-loaded example images
+ - **Advanced Settings**: Customizable parameters
+ - **Professional UI**: Clean, modern interface
+
+ ## Customization Options
+
+ ### Adding New Styles
+ 1. Update `STYLE_TYPE_LORA_DICT` in `app.py`
+ 2. Add new LoRA files to the model repository
+ 3. Update style descriptions in `config.py`
+
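For steps 1 and 3, adding a style amounts to one more dictionary entry in each file. The "Watercolor" name and file name below are hypothetical placeholders for whatever adapter you actually upload to the LoRA repository:

```python
# app.py: register a hypothetical new style; the file name must match a
# .safetensors file that exists in the LoRA repository you point the app at.
STYLE_TYPE_LORA_DICT["Watercolor"] = "Watercolor_lora_weights.safetensors"

# config.py: give the same style a human-readable description for the UI.
STYLE_DESCRIPTIONS["Watercolor"] = "Apply a soft watercolor painting style"
```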
+ ### UI Modifications
+ - Edit the CSS in `app.py` for custom styling
+ - Modify the Gradio layout in the interface section
+ - Add new components or remove existing ones
+
+ ### Performance Tuning
+ - Adjust default parameters in `config.py`
+ - Modify memory management settings
+ - Update hardware requirements in README.md
+
+ ## Troubleshooting
+
+ ### Common Issues
+
+ 1. **Out of Memory Errors**
+    - Reduce default image size
+    - Enable CPU offloading in config
+    - Use smaller batch sizes
+
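The "enable CPU offloading" bullet corresponds to the `ENABLE_CPU_OFFLOAD` flag in `config.py`, which `app.py` does not read yet. A minimal sketch, under that assumption, of how the flag could be wired into the pipeline setup:

```python
import torch
from diffusers import FluxKontextPipeline

import config  # the config.py shipped alongside app.py

pipeline = FluxKontextPipeline.from_pretrained(
    config.BASE_MODEL_ID, torch_dtype=torch.bfloat16
)

if config.ENABLE_CPU_OFFLOAD:
    # Keep submodules on the CPU and move each one to the GPU only while it
    # runs; slower per image, but the peak VRAM footprint drops sharply.
    pipeline.enable_model_cpu_offload()
else:
    pipeline.to("cuda")
```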
+ 2. **Slow Loading**
+    - LoRAs are downloaded on first use
+    - Consider pre-downloading popular LoRAs
+    - Use a faster hardware tier
+
+ 3. **Import Errors**
+    - Check requirements.txt versions
+    - Ensure all dependencies are compatible
+    - Update to the latest diffusers version
+
+ ### Performance Tips
+
+ - Use ZeroGPU for cost-effective deployment
+ - Cache LoRA files for faster loading
+ - Implement model compilation for speed
+ - Monitor GPU memory usage
+
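"Implement model compilation for speed" maps to the `USE_TORCH_COMPILE` flag in `config.py`, also not yet consumed by `app.py`. Continuing the offload sketch above, one possible way to apply it (the first generation after compiling is slow while the graph is captured, and the speedup varies by hardware):

```python
if config.USE_TORCH_COMPILE:
    # Compile only the transformer, which dominates the denoising loop.
    pipeline.transformer = torch.compile(pipeline.transformer, mode="reduce-overhead")
```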
+ ## Support
+
+ For issues with:
+ - **Original Model**: Contact [Owen777](https://huggingface.co/Owen777)
+ - **Training Code**: Check the [GitHub Repository](https://github.com/Owen718/Kontext-Lora-Trainer)
+ - **Hugging Face Spaces**: Use the [Community Forums](https://huggingface.co/forums)
+
+ ## License
+
+ This deployment is released under the Apache 2.0 License, following the original model's licensing.
requirements.txt ADDED
@@ -0,0 +1,13 @@
+ torch>=2.0.0
+ diffusers>=0.28.0
+ transformers>=4.38.0
+ accelerate>=0.26.0
+ safetensors>=0.4.0
+ huggingface-hub>=0.20.0
+ gradio>=4.0.0
+ spaces>=0.19.0
+ Pillow>=9.5.0
+ numpy>=1.24.0
+ xformers>=0.0.20
+ sentencepiece>=0.1.99
+ protobuf>=3.20.3