WhiteAiZ committed
Commit 25a6abd · verified · 1 Parent(s): ceecc0a

update forge classic to 1.7
README.md CHANGED
@@ -18,7 +18,7 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
 
 <br>
 
-## Features [May. 21]
+## Features [May. 28]
 > Most base features of the original [Automatic1111 Webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) should still function
 
 #### New Features
@@ -55,17 +55,23 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
 > - Both `fp16_accumulation` and `cublas_ops` achieve the same speed up; if you already install/update to PyTorch **2.7.0**, you do not need to go for `cublas_ops`
 > - The `fp16_accumulation` and `cublas_ops` require `fp16` precision, thus is not compatible with the `fp8` operation
 
+- [X] Persistent LoRA Patching
+    - speed up LoRA loading in subsequent generations
+    - see [Commandline](#by-classic)
 - [X] Implement new Samplers
     - *(ported from reForge Webui)*
 - [X] Implement Scheduler Dropdown
     - *(backported from Automatic1111 Webui upstream)*
-    - enable in **Settings/UI alternatives**
+    - enable in **Settings/UI Alternatives**
+- [X] Add `CFG` slider to the `Hires. fix` section
 - [X] Implement RescaleCFG
     - reduce burnt colors; mainly for `v-pred` checkpoints
-    - enable in **Settings/UI alternatives**
+    - enable in **Settings/UI Alternatives**
 - [X] Implement MaHiRo
     - alternative CFG calculation; improve prompt adherence
-    - enable in **Settings/UI alternatives**
+    - enable in **Settings/UI Alternatives**
+- [X] Implement full precision calculation for `Mask blur` blending
+    - enable in **Settings/img2img**
 - [X] Implement `diskcache` for hashes
     - *(backported from Automatic1111 Webui upstream)*
 - [X] Implement `skip_early_cond`
@@ -117,13 +123,14 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
 - [X] Remove unused `args_parser`
 - [X] Remove unused `shared_options`
 - [X] Remove legacy codes
-- [X] Remove duplicated upscaler codes
+- [X] Fix some typos
+- [X] Remove redundant upscaler codes
     - put every upscaler inside the `ESRGAN` folder
-    - optimize upscaler logics
+- [X] Optimize upscaler logics
 - [X] Improve color correction
 - [X] Improve hash caching
 - [X] Improve error logs
-    - no longer just print `TypeError: 'NoneType' object is not iterable`
+    - no longer print `TypeError: 'NoneType' object is not iterable`
 - [X] Revamp settings
     - improve formatting
     - update descriptions
@@ -135,7 +142,7 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
     - change `visible` toggle to `interactive` toggle; now the UI will no longer jump around
     - improved `Presets` application
 - [X] Disable Refiner by default
-    - enable again in **Settings/UI alternatives**
+    - enable again in **Settings/UI Alternatives**
 - [X] Disable Tree View by default
     - enable again in **Settings/Extra Networks**
 - [X] Run `text encoder` on CPU by default
@@ -150,7 +157,7 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
     - `torch==2.7.0+cu128`
     - `xformers==0.0.30`
 
-> [!Tip]
+> [!Note]
 > If your GPU does not support the latest PyTorch, manually [install](#install-older-pytorch) older version of PyTorch
 
 - [X] No longer install `open-clip` twice
@@ -208,6 +215,10 @@ The name "Forge" is inspired by "Minecraft Forge". This project aims to become t
 > [!Important]
 > This simply **replaces** the `models` folder, rather than adding on top of it
 
+- `--persistent-patches`: Enable the persistent LoRA patching
+    - no longer apply LoRA every single generation, if the weight is unchanged
+    - save around 1 second per generation when using LoRA
+
 - `--fast-fp16`: Enable the `allow_fp16_accumulation` option
     - requires PyTorch **2.7.0** +
 - `--sage`: Install the `sageattention` package to speed up generation
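
For context, the persistent-patch gating works roughly as below: LoRA weights are merged into the model once and reused until they change. A sketch of the control flow, using the `PatchStatus` names from `ldm_patched/modules/model_patcher.py` further down; `apply_lora_patches` is a hypothetical stand-in for the real merge step:

```python
# Sketch only: repatch when needed, otherwise reuse the already-patched weights.
if patch_status.require_patch():    # first generation, or a LoRA weight changed
    apply_lora_patches()            # the ~1 second per-generation cost noted above
    patch_status.patch()            # record which patch revision the weights carry
# unchanged LoRA -> skip both unpatch and repatch entirely
```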
extensions-builtin/sd_forge_neveroom/scripts/forge_never_oom.py CHANGED
@@ -1,5 +1,6 @@
 from ldm_patched.modules import model_management
-from modules import scripts
+from modules.ui_components import FormRow
+from modules import scripts, shared
 
 import gradio as gr
 
@@ -18,34 +19,35 @@ class NeverOOMForForge(scripts.Script):
         return scripts.AlwaysVisible
 
     def ui(self, *args, **kwargs):
-        with gr.Accordion(open=False, label=self.title()):
-            unet_enabled = gr.Checkbox(
-                label="Enabled for UNet (always offload)",
-                value=False,
-            )
-            vae_enabled = gr.Checkbox(
-                label="Enabled for VAE (always tile)",
-                value=False,
-            )
-        return unet_enabled, vae_enabled
+        with gr.Accordion(label=self.title(), open=False):
+            with FormRow():
+                unet_enable = gr.Checkbox(value=False, label="Enable for UNet", info="always offload to memory")
+                vae_enable = gr.Checkbox(value=False, label="Enabled for VAE", info="always tiled encoding/decoding")
+            with FormRow():
+                tile_size = gr.Slider(minimum=64, maximum=1024, step=64, value=512, label="Tile Size", info="in pixels")
+                tile_overlap = gr.Slider(minimum=16, maximum=256, step=4, value=64, label="Tile Overlap", info="in pixels")
 
-    def process(self, p, *script_args, **kwargs):
-        unet_enabled, vae_enabled = script_args
+        return unet_enable, vae_enable, tile_size, tile_overlap
 
-        if unet_enabled:
-            print("NeverOOM Enabled for UNet")
+    def process(self, p, unet_enable: bool, vae_enable: bool, tile_size: int, tile_overlap: int, **kwargs):
 
-        if vae_enabled:
-            print("NeverOOM Enabled for VAE")
+        if unet_enable:
+            print("[Never OOM] Enabled for UNet")
 
-        model_management.VAE_ALWAYS_TILED = vae_enabled
+        if vae_enable:
+            print("[Never OOM] Enabled for VAE")
+            shared.opts.tile_size = tile_size
+            shared.opts.tile_overlap = min(tile_size // 4, tile_overlap)
+
+        model_management.VAE_ALWAYS_TILED = vae_enable
 
-        if self.previous_unet_enabled != unet_enabled:
+        if self.previous_unet_enabled != unet_enable:
+            self.previous_unet_enabled = unet_enable
+
             model_management.unload_all_models()
-            if unet_enabled:
+            if unet_enable:
                 self.original_vram_state = model_management.vram_state
                 model_management.vram_state = model_management.VRAMState.NO_VRAM
             else:
                 model_management.vram_state = self.original_vram_state
-            print(f"VRAM State Changed To {model_management.vram_state.name}")
-            self.previous_unet_enabled = unet_enabled
+            print(f"Changed VRAM State To {model_management.vram_state.name}")
 
extensions-builtin/xyz/lib_xyz/axis_application.py CHANGED
@@ -98,5 +98,14 @@ def apply_override(field, boolean: bool = False):
     return fun
 
 
+def apply_size(p, x: str, xs) -> None:
+    try:
+        width, height = x.split("x")
+        p.width = int(width.strip())
+        p.height = int(height.strip())
+    except Exception:
+        print(f"Invalid size in XYZ plot: {x}")
+
+
 def do_nothing(p, x, xs):
     pass
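
A quick sanity check of the `WxH` parsing used by `apply_size`, with a `SimpleNamespace` standing in for the processing object `p`:

```python
from types import SimpleNamespace

p = SimpleNamespace(width=0, height=0)
width, height = "512x768".split("x")        # the same split apply_size performs
p.width, p.height = int(width.strip()), int(height.strip())
assert (p.width, p.height) == (512, 768)
```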
extensions-builtin/xyz/lib_xyz/builtins.py CHANGED
@@ -8,6 +8,7 @@ from .axis_application import (
     apply_order,
     apply_override,
     apply_prompt,
+    apply_size,
     apply_styles,
     apply_uni_pc_order,
     apply_vae,
@@ -43,6 +44,7 @@ builtin_options = [
     AxisOptionImg2Img("Sampler", str, apply_field("sampler_name"), format_value=format_value, confirm=confirm_samplers, choices=sd_samplers.visible_sampler_names),
     AxisOption("Checkpoint name", str, apply_checkpoint, format_value=format_remove_path, confirm=confirm_checkpoints, cost=1.0, choices=lambda: sorted(sd_models.checkpoints_list, key=str.casefold)),
     AxisOption("Negative Guidance minimum sigma", float, apply_field("s_min_uncond")),
+    AxisOption("Size", str, apply_size),
     AxisOption("Sigma Churn", float, apply_field("s_churn")),
     AxisOption("Sigma min", float, apply_field("s_tmin")),
     AxisOption("Sigma max", float, apply_field("s_tmax")),
ldm_patched/k_diffusion/sampling.py CHANGED
@@ -407,6 +407,51 @@ def sample_dpmpp_2m(model, x, sigmas, extra_args=None, callback=None, disable=No
     return x
 
 
+@torch.no_grad()
+def sample_dpmpp_2m_sde(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1.0, s_noise=1.0, noise_sampler=None, solver_type="midpoint"):
+    """DPM-Solver++(2M) SDE"""
+
+    if solver_type not in {"heun", "midpoint"}:
+        raise ValueError("solver_type must be 'heun' or 'midpoint'")
+
+    extra_args = {} if extra_args is None else extra_args
+    seed = extra_args.get("seed", None)
+    sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
+    noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=seed, cpu=True) if noise_sampler is None else noise_sampler
+    s_in = x.new_ones([x.shape[0]])
+
+    old_denoised = None
+    h_last = None
+    h = None
+
+    for i in trange(len(sigmas) - 1, disable=disable):
+        denoised = model(x, sigmas[i] * s_in, **extra_args)
+        if callback is not None:
+            callback({"x": x, "i": i, "sigma": sigmas[i], "sigma_hat": sigmas[i], "denoised": denoised})
+        if sigmas[i + 1] == 0:
+            x = denoised
+        else:
+            t, s = -sigmas[i].log(), -sigmas[i + 1].log()
+            h = s - t
+            eta_h = eta * h
+
+            x = sigmas[i + 1] / sigmas[i] * (-eta_h).exp() * x + (-h - eta_h).expm1().neg() * denoised
+
+            if old_denoised is not None:
+                r = h_last / h
+                if solver_type == "heun":
+                    x = x + ((-h - eta_h).expm1().neg() / (-h - eta_h) + 1) * (1 / r) * (denoised - old_denoised)
+                elif solver_type == "midpoint":
+                    x = x + 0.5 * (-h - eta_h).expm1().neg() * (1 / r) * (denoised - old_denoised)
+
+            if eta:
+                x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * sigmas[i + 1] * (-2 * eta_h).expm1().neg().sqrt() * s_noise
+
+        old_denoised = denoised
+        h_last = h
+    return x
+
+
 @torch.no_grad()
 def sample_dpmpp_3m_sde(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1.0, s_noise=1.0, noise_sampler=None):
     """DPM-Solver++(3M) SDE"""
ldm_patched/ldm/modules/diffusionmodules/model.py CHANGED
@@ -549,7 +549,7 @@ class Decoder(nn.Module):
                 _h = self.up[i_level].upsample(h)
                 del h
                 h = _h
-                torch.cuda.empty_cache()
+                model_management.soft_empty_cache()
 
         # end
         if self.give_pre_end:
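
Routing the flush through `model_management.soft_empty_cache()` removes the hard CUDA dependency from the decoder loop. Conceptually it dispatches on the active backend; a sketch of that idea only, not the repo's exact implementation (which lives in `ldm_patched/modules/model_management.py`):

```python
import torch

def soft_empty_cache_sketch():
    # Flush the allocator cache for whichever backend is actually in use.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()
```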
ldm_patched/modules/args_parser.py CHANGED
@@ -1,5 +1,10 @@
-# Reference: https://github.com/comfyanonymous/ComfyUI
-
+"""
+Credit: ComfyUI
+https://github.com/comfyanonymous/ComfyUI
+
+- Edited by. Forge Official
+- Edited by. Haoming02
+"""
 
 import argparse
 import enum
@@ -77,6 +82,7 @@ parser.add_argument("--cuda-stream", action="store_true")
 parser.add_argument("--pin-shared-memory", action="store_true")
 
 parser.add_argument("--fast-fp16", action="store_true")
+parser.add_argument("--persistent-patches", action="store_true")
 
 
 class SageAttentionAPIs(enum.Enum):
ldm_patched/modules/model_management.py CHANGED
@@ -1,8 +1,11 @@
-# 1st edit by https://github.com/comfyanonymous/ComfyUI
-# 2nd edit by Forge Official
-
+"""
+Credit: ComfyUI
+https://github.com/comfyanonymous/ComfyUI
+
+- Edited by. Forge Official
+- Edited by. Haoming02
+"""
 
-import gc
 import time
 from enum import Enum
 from functools import lru_cache
@@ -376,11 +379,9 @@ class LoadedModel:
         if disable_async_load:
             patch_model_to = self.device
 
-        self.model.model_patches_to(self.device)
-        self.model.model_patches_to(self.model.model_dtype())
+        self.model.model_patches_to(device=self.device, dtype=self.model.model_dtype())
 
         try:
-            # TODO: do something with loras and offloading to CPU
             self.real_model = self.model.patch_model(device_to=patch_model_to)
         except Exception as e:
             self.model.unpatch_model(self.model.offload_device)
@@ -434,7 +435,7 @@ class LoadedModel:
 
         return self.real_model
 
-    def model_unload(self, avoid_model_moving: bool = False):
+    def model_unload(self, *, avoid_model_moving: bool = False):
         if self.model_accelerated:
             for m in self.real_model.modules():
                 if hasattr(m, "prev_ldm_patched_cast_weights"):
@@ -446,7 +447,7 @@ class LoadedModel:
         if avoid_model_moving:
             self.model.unpatch_model()
         else:
-            self.model.unpatch_model(self.model.offload_device)
+            self.model.unpatch_model(device_to=self.model.offload_device)
             self.model.model_patches_to(self.model.offload_device)
 
     def __eq__(self, other: "LoadedModel"):
@@ -471,7 +472,6 @@ def unload_model_clones(model):
     if len(to_unload) > 0:
         print(f"Reusing {len(to_unload)} loaded model{'s' if len(to_unload) > 1 else ''}")
         soft_empty_cache()
-        gc.collect()
 
 
 def free_memory(memory_required, device, keep_loaded=[]):
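
Note that `model_unload` is now keyword-only for its flag, so call sites must spell it out (with `loaded` a `LoadedModel` instance):

```python
loaded.model_unload(avoid_model_moving=True)   # OK
# loaded.model_unload(True)                    # TypeError after this change
```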
ldm_patched/modules/model_patcher.py CHANGED
@@ -1,15 +1,63 @@
-# 1st edit by https://github.com/comfyanonymous/ComfyUI
-# 2nd edit by Forge Official
-
+"""
+Credit: ComfyUI
+https://github.com/comfyanonymous/ComfyUI
+
+- Edited by. Forge Official
+- Edited by. Haoming02
+"""
 
 import copy
 import inspect
 
+import torch
+
 import ldm_patched.modules.model_management
 import ldm_patched.modules.utils
-import torch
+from ldm_patched.modules.args_parser import args
+
+extra_weight_calculators = {}  # backward compatibility
+
+
+PERSISTENT_PATCHES = args.persistent_patches
+if PERSISTENT_PATCHES:
+    print("[Experimental] Persistent Patches:", PERSISTENT_PATCHES)
+
+
+class PatchStatus:
+    def __init__(self):
+        self.current = 0  # the current status of the ModelPatcher
+        self.updated = 0  # the last time a patch was modified
+
+    def require_patch(self) -> bool:
+        if not PERSISTENT_PATCHES:
+            return True
+
+        return self.current == 0
+
+    def require_unpatch(self) -> bool:
+        if not PERSISTENT_PATCHES:
+            return True
+
+        if not PatchStatus.has_lora():
+            return True
+
+        return self.current != self.updated
+
+    def patch(self):
+        if self.updated > 0:
+            self.current = self.updated
+
+    def unpatch(self):
+        self.current = 0
+
+    def update(self):
+        self.updated += 1
 
-extra_weight_calculators = {}
+    @staticmethod
+    def has_lora() -> bool:
+        from modules.shared import sd_model
+
+        return sd_model.current_lora_hash != str([])
 
 
 class ModelPatcher:
@@ -24,13 +72,11 @@ class ModelPatcher:
         self.model_size()
         self.load_device = load_device
         self.offload_device = offload_device
-        if current_device is None:
-            self.current_device = self.offload_device
-        else:
-            self.current_device = current_device
-
+        self.current_device = self.offload_device if current_device is None else current_device
         self.weight_inplace_update = weight_inplace_update
 
+        self.patch_status = PatchStatus()
+
     def model_size(self):
         if self.size > 0:
             return self.size
@@ -46,21 +92,22 @@ class ModelPatcher:
             self.offload_device,
             self.size,
             self.current_device,
-            weight_inplace_update=self.weight_inplace_update,
+            self.weight_inplace_update,
         )
-        n.patches = {}
+
         for k in self.patches:
             n.patches[k] = self.patches[k][:]
 
+        n.backup = self.backup
         n.object_patches = self.object_patches.copy()
         n.model_options = copy.deepcopy(self.model_options)
         n.model_keys = self.model_keys
+        n.patch_status = self.patch_status
+
         return n
 
     def is_clone(self, other):
-        if hasattr(other, "model") and self.model is other.model:
-            return True
-        return False
+        return getattr(other, "model", None) is self.model
 
     def memory_required(self, input_shape):
         return self.model.memory_required(input_shape=input_shape)
@@ -135,26 +182,26 @@ class ModelPatcher:
     def add_object_patch(self, name, obj):
        self.object_patches[name] = obj
 
-    def model_patches_to(self, device):
-        to = self.model_options["transformer_options"]
+    def model_patches_to(self, device, *, dtype=None):
+        to: dict[str, dict[str, list["torch.Tensor"] | dict[str, "torch.Tensor"]]] = self.model_options["transformer_options"]
         if "patches" in to:
             patches = to["patches"]
             for name in patches:
                 patch_list = patches[name]
                 for i in range(len(patch_list)):
                     if hasattr(patch_list[i], "to"):
-                        patch_list[i] = patch_list[i].to(device)
+                        patch_list[i] = patch_list[i].to(device=device, dtype=dtype)
         if "patches_replace" in to:
             patches = to["patches_replace"]
             for name in patches:
                 patch_list = patches[name]
                 for k in patch_list:
                     if hasattr(patch_list[k], "to"):
-                        patch_list[k] = patch_list[k].to(device)
+                        patch_list[k] = patch_list[k].to(device=device, dtype=dtype)
         if "model_function_wrapper" in self.model_options:
-            wrap_func = self.model_options["model_function_wrapper"]
+            wrap_func: "torch.Tensor" = self.model_options["model_function_wrapper"]
             if hasattr(wrap_func, "to"):
-                self.model_options["model_function_wrapper"] = wrap_func.to(device)
+                self.model_options["model_function_wrapper"] = wrap_func.to(device=device, dtype=dtype)
 
     def model_dtype(self):
         if hasattr(self.model, "get_dtype"):
@@ -169,6 +216,7 @@ class ModelPatcher:
             current_patches.append((strength_patch, patches[k], strength_model))
             self.patches[k] = current_patches
 
+        self.patch_status.update()
         return list(p)
 
     def get_key_patches(self, filter_prefix=None):
@@ -201,7 +249,10 @@ class ModelPatcher:
             self.object_patches_backup[k] = old
             ldm_patched.modules.utils.set_attr_raw(self.model, k, self.object_patches[k])
 
-        if patch_weights:
+        if not patch_weights:
+            return self.model
+
+        if self.patches and self.patch_status.require_patch():
             model_sd = self.model_state_dict()
             for key in self.patches:
                 if key not in model_sd:
@@ -226,9 +277,11 @@ class ModelPatcher:
                 ldm_patched.modules.utils.set_attr(self.model, key, out_weight)
                 del temp_weight
 
-        if device_to is not None:
-            self.model.to(device_to)
-            self.current_device = device_to
+            self.patch_status.patch()
+
+        if device_to is not None:
+            self.model.to(device_to)
+            self.current_device = device_to
 
         return self.model
 
@@ -403,16 +456,18 @@ class ModelPatcher:
         return weight
 
     def unpatch_model(self, device_to=None):
-        keys = list(self.backup.keys())
+        if self.backup and self.patch_status.require_unpatch():
+            keys = list(self.backup.keys())
 
-        if self.weight_inplace_update:
-            for k in keys:
-                ldm_patched.modules.utils.copy_to_param(self.model, k, self.backup[k])
-        else:
-            for k in keys:
-                ldm_patched.modules.utils.set_attr(self.model, k, self.backup[k])
+            if self.weight_inplace_update:
+                for k in keys:
+                    ldm_patched.modules.utils.copy_to_param(self.model, k, self.backup[k])
+            else:
+                for k in keys:
+                    ldm_patched.modules.utils.set_attr(self.model, k, self.backup[k])
 
-        self.backup = {}
+            self.backup.clear()
+            self.patch_status.unpatch()
 
         if device_to is not None:
             self.model.to(device_to)
@@ -422,7 +477,7 @@ class ModelPatcher:
         for k in keys:
             ldm_patched.modules.utils.set_attr_raw(self.model, k, self.object_patches_backup[k])
 
-        self.object_patches_backup = {}
+        self.object_patches_backup.clear()
 
     def __del__(self):
         del self.patches
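
The two counters in `PatchStatus` act as patch revisions: `updated` advances whenever `add_patches` registers new weights, and `current` records the revision currently baked into the model. A walkthrough of the state machine, assuming `--persistent-patches` is enabled and a LoRA stays active (`has_lora()` is truthy):

```python
status = PatchStatus()               # current=0, updated=0
status.update()                      # LoRA registers its patches -> updated=1
assert status.require_patch()        # weights unpatched (current == 0) -> apply them
status.patch()                       # weights now carry revision 1 -> current=1

assert not status.require_unpatch()  # next generation, same LoRA: keep the weights
assert not status.require_patch()    # ...and skip re-applying (~1 s saved per run)

status.update()                      # LoRA weight changed -> updated=2
assert status.require_unpatch()      # revisions differ: restore backup, then repatch
```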
ldm_patched/modules/sd.py CHANGED
@@ -1,16 +1,15 @@
 # Reference: https://github.com/comfyanonymous/ComfyUI
 
 
+import torch
+
 import ldm_patched.modules.lora
 import ldm_patched.modules.model_patcher
-import ldm_patched.modules.supported_models_base
 import ldm_patched.modules.utils
-import ldm_patched.t2ia.adapter
 import ldm_patched.taesd.taesd
-import torch
-
-from ldm_patched.ldm.models.autoencoder import AutoencoderKL, AutoencodingEngine
+from ldm_patched.ldm.models.autoencoder import AutoencoderKL
 from ldm_patched.modules import model_management
+from modules.shared import opts
 
 from . import diffusers_convert
 
@@ -218,7 +217,11 @@ class VAE:
         n.output_device = self.output_device
         return n
 
-    def decode_tiled_(self, samples, tile_x=64, tile_y=64, overlap=16):
+    def decode_tiled_(self, samples, tile_x=0, tile_y=0, overlap=0):
+        tile_x = int(tile_x or (opts.tile_size / 8))
+        tile_y = int(tile_y or (opts.tile_size / 8))
+        overlap = int(overlap or (opts.tile_overlap / 8))
+
         steps = samples.shape[0] * ldm_patched.modules.utils.get_tiled_scale_steps(samples.shape[3], samples.shape[2], tile_x, tile_y, overlap)
         steps += samples.shape[0] * ldm_patched.modules.utils.get_tiled_scale_steps(samples.shape[3], samples.shape[2], tile_x // 2, tile_y * 2, overlap)
         steps += samples.shape[0] * ldm_patched.modules.utils.get_tiled_scale_steps(samples.shape[3], samples.shape[2], tile_x * 2, tile_y // 2, overlap)
@@ -267,7 +270,11 @@ class VAE:
         )
         return output
 
-    def encode_tiled_(self, pixel_samples, tile_x=512, tile_y=512, overlap=64):
+    def encode_tiled_(self, pixel_samples, tile_x=0, tile_y=0, overlap=0):
+        tile_x = int(tile_x or opts.tile_size)
+        tile_y = int(tile_y or opts.tile_size)
+        overlap = int(overlap or opts.tile_overlap)
+
         steps = pixel_samples.shape[0] * ldm_patched.modules.utils.get_tiled_scale_steps(pixel_samples.shape[3], pixel_samples.shape[2], tile_x, tile_y, overlap)
         steps += pixel_samples.shape[0] * ldm_patched.modules.utils.get_tiled_scale_steps(
             pixel_samples.shape[3],
@@ -363,9 +370,9 @@ class VAE:
         else:
             return wrapper(self.decode_inner, samples_in)
 
-    def decode_tiled(self, samples, tile_x=64, tile_y=64, overlap=16):
+    def decode_tiled(self, samples):
         model_management.load_model_gpu(self.patcher)
-        output = self.decode_tiled_(samples, tile_x, tile_y, overlap)
+        output = self.decode_tiled_(samples)
         return output.movedim(1, -1)
 
     def encode_inner(self, pixel_samples):
@@ -405,10 +412,10 @@ class VAE:
         else:
             return wrapper(self.encode_inner, pixel_samples)
 
-    def encode_tiled(self, pixel_samples, tile_x=512, tile_y=512, overlap=64):
+    def encode_tiled(self, pixel_samples):
         model_management.load_model_gpu(self.patcher)
         pixel_samples = pixel_samples.movedim(-1, 1)
-        samples = self.encode_tiled_(pixel_samples, tile_x=tile_x, tile_y=tile_y, overlap=overlap)
+        samples = self.encode_tiled_(pixel_samples)
         return samples
 
     def get_sd(self):
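
With a factor-8 VAE, the pixel-space settings from the Never OOM UI translate to latent tiles for decoding, while encoding uses them directly:

```python
tile_size, tile_overlap = 512, 64       # Never OOM defaults, in pixels
latent_tile = int(tile_size / 8)        # decode_tiled_: 64 latent cells per tile side
latent_overlap = int(tile_overlap / 8)  # decode_tiled_: 8 latent cells of overlap
# encode_tiled_ operates on pixels, so it uses tile_size and tile_overlap as-is.
```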
modules/cmd_args.py CHANGED
@@ -11,14 +11,12 @@ parser.add_argument("-f", action="store_true", help=argparse.SUPPRESS) # allows
 
 parser.add_argument("--update-all-extensions", action="store_true", help="launch.py argument: download updates for all extensions when starting the program")
 parser.add_argument("--skip-python-version-check", action="store_true", help="launch.py argument: do not check python version")
+parser.add_argument("--skip-prepare-environment", action="store_true", help="launch.py argument: skip all environment preparation")
 parser.add_argument("--skip-torch-cuda-test", action="store_true", help="launch.py argument: do not check if CUDA is able to work properly")
+parser.add_argument("--skip-install", action="store_true", help="launch.py argument: skip installation of packages")
 parser.add_argument("--reinstall-xformers", action="store_true", help="launch.py argument: install the appropriate version of xformers even if you have some version already installed")
 parser.add_argument("--reinstall-torch", action="store_true", help="launch.py argument: install the appropriate version of torch even if you have some version already installed")
-parser.add_argument("--update-check", action="store_true", help="launch.py argument: check for updates at startup")
-parser.add_argument("--test-server", action="store_true", help="launch.py argument: configure server for testing")
 parser.add_argument("--log-startup", action="store_true", help="launch.py argument: print a detailed log of what's happening at startup")
-parser.add_argument("--skip-prepare-environment", action="store_true", help="launch.py argument: skip all environment preparation")
-parser.add_argument("--skip-install", action="store_true", help="launch.py argument: skip installation of packages")
 parser.add_argument("--dump-sysinfo", action="store_true", help="launch.py argument: dump limited sysinfo file (without information about extensions, options) to disk and quit")
 parser.add_argument("--loglevel", type=str, help="log level; one of: CRITICAL, ERROR, WARNING, INFO, DEBUG", default=None)
 parser.add_argument("--data-dir", type=normalized_filepath, default=os.path.dirname(os.path.dirname(os.path.realpath(__file__))), help="base path where all user data is stored")
@@ -30,11 +28,8 @@ parser.add_argument("--vae-dir", type=normalized_filepath, default=None, help="P
 parser.add_argument("--gfpgan-dir", type=normalized_filepath, help="GFPGAN directory", default=("./src/gfpgan" if os.path.exists("./src/gfpgan") else "./GFPGAN"))
 parser.add_argument("--gfpgan-model", type=normalized_filepath, help="GFPGAN model file name", default=None)
 parser.add_argument("--no-half", action="store_true", help="do not switch the model to 16-bit floats")
-parser.add_argument("--no-half-vae", action="store_true", help="do not switch the VAE model to 16-bit floats")
-parser.add_argument("--no-progressbar-hiding", action="store_true", help="do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware acceleration in browser)")
 parser.add_argument("--embeddings-dir", type=normalized_filepath, default=os.path.join(models_path, "embeddings"), help="textual inversion directory")
 parser.add_argument("--localizations-dir", type=normalized_filepath, default=os.path.join(script_path, "localizations"), help="localizations directory")
-parser.add_argument("--upcast-sampling", action="store_true", help="upcast sampling. No effect with --no-half. Usually produces similar results to --no-half with better performance while using less memory.")
 parser.add_argument("--share", action="store_true", help="use share=True for gradio and make the UI accessible through their site")
 parser.add_argument("--ngrok", type=str, help="ngrok authtoken, alternative to gradio --share", default=None)
 parser.add_argument("--ngrok-options", type=json.loads, help='The options to pass to ngrok in JSON format, e.g.: \'{"authtoken_from_env":true, "basic_auth":"user:password", "oauth_provider":"google", "oauth_allow_emails":"[email protected]"}\'', default=dict())
@@ -44,11 +39,9 @@ parser.add_argument("--gfpgan-models-path", type=normalized_filepath, help="Path
 parser.add_argument("--esrgan-models-path", type=normalized_filepath, help="Path to directory with ESRGAN model file(s).", default=os.path.join(models_path, "ESRGAN"))
 parser.add_argument("--xformers", action="store_true", help="enable xformers for cross attention layers")
 parser.add_argument("--sage", action="store_true", help="enable sage for cross attention layers")
-parser.add_argument("--force-enable-xformers", action="store_true", help="enable xformers for cross attention layers regardless of whether the checking code thinks you can run it; do not make bug reports if this fails to work")
 parser.add_argument("--disable-nan-check", action="store_true", help="do not check if produced images/latent spaces have nans; useful for running without a checkpoint in CI")
 parser.add_argument("--use-cpu", nargs="+", help="use CPU as torch device for specified modules", default=[], type=str.lower)
 parser.add_argument("--use-ipex", action="store_true", help="use Intel XPU as torch device")
-parser.add_argument("--disable-model-loading-ram-optimization", action="store_true", help="disable an optimization that reduces RAM use when loading a model")
 parser.add_argument("--listen", action="store_true", help="launch gradio with 0.0.0.0 as server name, allowing to respond to network requests")
 parser.add_argument("--port", type=int, help="launch gradio with given server port, you need root/admin rights for ports < 1024, defaults to 7860 if available", default=None)
 parser.add_argument("--ui-config-file", type=str, help="filename to use for ui configuration", default=os.path.join(data_path, "ui-config.json"))
@@ -67,7 +60,6 @@ parser.add_argument("--theme", type=str, help="launches the UI with light or dar
 parser.add_argument("--use-textbox-seed", action="store_true", help="use textbox for seeds in UI (no up/down, but possible to input long seeds)", default=False)
 parser.add_argument("--disable-console-progressbars", action="store_true", help="do not output progressbars to console", default=False)
 parser.add_argument("--vae-path", type=normalized_filepath, help="Checkpoint to use as VAE; setting this argument disables all settings related to VAE", default=None)
-parser.add_argument("--disable-safe-unpickle", action="store_true", help="disable checking pytorch models for malicious code", default=False)
 parser.add_argument("--api", action="store_true", help="use api=True to launch the API together with the webui (use --nowebui instead for only the API)")
 parser.add_argument("--api-auth", type=str, help='Set authentication for API like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3"', default=None)
 parser.add_argument("--api-log", action="store_true", help="use api-log=True to enable logging of all API requests")
@@ -101,3 +93,6 @@ parser.add_argument("--fps", type=int, default=30, help="refresh rate for thread
 pkm = parser.add_mutually_exclusive_group()
 pkm.add_argument("--uv", action="store_true", help="Use the uv package manager")
 pkm.add_argument("--uv-symlink", action="store_true", help="Use the uv package manager with symlink")
+
+# ===== backward compatibility ===== #
+parser.add_argument("--disable-safe-unpickle", action="store_true", help="does absolutely nothing", default=False)  # adetailer
modules/mac_specific.py CHANGED
@@ -12,7 +12,7 @@ log = logging.getLogger(__name__)
 
 # before torch version 1.13, has_mps is only available in nightly pytorch and macOS 12.3+,
 # use check `getattr` and try it for compatibility.
-# in torch version 1.13, backends.mps.is_available() and backends.mps.is_built() are introduced in to check mps availabilty,
+# in torch version 1.13, backends.mps.is_available() and backends.mps.is_built() are introduced in to check mps availability,
 # since torch 2.0.1+ nightly build, getattr(torch, 'has_mps', False) was deprecated, see https://github.com/pytorch/pytorch/pull/103279
 def check_for_mps() -> bool:
     if version.parse(torch.__version__) <= version.parse("2.0.1"):
modules/processing.py CHANGED
@@ -1,38 +1,54 @@
1
  from __future__ import annotations
 
 
2
  import json
3
  import math
4
  import os
 
5
  import sys
6
- import hashlib
7
  from dataclasses import dataclass, field
 
8
 
9
- import torch
10
  import numpy as np
 
 
11
  from PIL import Image, ImageOps
12
- import random
13
- import cv2
14
  from skimage.exposure import match_histograms
15
- from typing import Any
16
 
17
- import modules.sd_hijack
18
- from modules import devices, prompt_parser, masking, sd_samplers, infotext_utils, extra_networks, sd_vae_approx, scripts, sd_samplers_common, sd_unet, errors, rng
19
- from modules.rng import slerp # noqa: F401
20
- from modules.sd_hijack import model_hijack
21
- from modules.sd_samplers_common import images_tensor_to_samples, decode_first_stage, approximation_indexes
22
- from modules.shared import opts, cmd_opts, state
23
- import modules.shared as shared
24
- import modules.paths as paths
25
  import modules.face_restoration
26
- import modules.images as images
27
- import modules.styles
28
  import modules.sd_models as sd_models
29
  import modules.sd_vae as sd_vae
30
-
31
- from einops import repeat, rearrange
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
32
  from modules.sd_models import apply_token_merging
33
- from modules_forge.forge_util import apply_circular_forge
 
 
 
 
 
34
  from modules_forge.forge_loader import apply_alpha_schedule_override
35
-
36
 
37
  # some of those options should not be changed at all because they would break the model, so I removed them from options.
38
  opt_C = 4
@@ -59,7 +75,7 @@ def apply_color_correction(correction_target: np.ndarray, original_image: Image.
59
 
60
  def uncrop(image, dest_size, paste_loc):
61
  x, y, w, h = paste_loc
62
- base_image = Image.new('RGBA', dest_size)
63
  image = images.resize_image(1, image, w, h)
64
  base_image.paste(image, (x, y))
65
  image = base_image
@@ -67,33 +83,59 @@ def uncrop(image, dest_size, paste_loc):
67
  return image
68
 
69
 
70
- def apply_overlay(image, paste_loc, overlay):
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
71
  if overlay is None:
72
  return image, image.copy()
73
 
 
 
 
74
  if paste_loc is not None:
75
  image = uncrop(image, (overlay.width, overlay.height), paste_loc)
76
 
77
  original_denoised_image = image.copy()
78
 
79
- image = image.convert('RGBA')
80
  image.alpha_composite(overlay)
81
- image = image.convert('RGB')
82
 
83
  return image, original_denoised_image
84
 
 
85
  def create_binary_mask(image, round=True):
86
- if image.mode == 'RGBA' and image.getextrema()[-1] != (255, 255):
87
  if round:
88
  image = image.split()[-1].convert("L").point(lambda x: 255 if x > 128 else 0)
89
  else:
90
  image = image.split()[-1].convert("L")
91
  else:
92
- image = image.convert('L')
93
  return image
94
 
 
95
  def txt2img_image_conditioning(sd_model, x, width, height):
96
- if sd_model.model.conditioning_key in {'hybrid', 'concat'}: # Inpainting models
97
 
98
  # The "masked-image" in this case will just be all 0.5 since the entire image is masked.
99
  image_conditioning = torch.ones(x.shape[0], 3, height, width, device=x.device) * 0.5
@@ -105,19 +147,18 @@ def txt2img_image_conditioning(sd_model, x, width, height):
105
 
106
  return image_conditioning
107
 
108
- elif sd_model.model.conditioning_key == "crossattn-adm": # UnCLIP models
109
 
110
- return x.new_zeros(x.shape[0], 2*sd_model.noise_augmentor.time_embed.dim, dtype=x.dtype, device=x.device)
111
 
112
  else:
113
  sd = sd_model.model.state_dict()
114
- diffusion_model_input = sd.get('diffusion_model.input_blocks.0.0.weight', None)
115
  if diffusion_model_input is not None:
116
  if diffusion_model_input.shape[1] == 9:
117
  # The "masked-image" in this case will just be all 0.5 since the entire image is masked.
118
  image_conditioning = torch.ones(x.shape[0], 3, height, width, device=x.device) * 0.5
119
- image_conditioning = images_tensor_to_samples(image_conditioning,
120
- approximation_indexes.get(opts.sd_vae_encode_method))
121
 
122
  # Add the fake full 1s mask to the first dimension.
123
  image_conditioning = torch.nn.functional.pad(image_conditioning, (0, 0, 0, 0, 1, 0), value=1.0)
@@ -236,7 +277,7 @@ class StableDiffusionProcessing:
236
  self.s_min_uncond = self.s_min_uncond if self.s_min_uncond is not None else opts.s_min_uncond
237
  self.s_churn = self.s_churn if self.s_churn is not None else opts.s_churn
238
  self.s_tmin = self.s_tmin if self.s_tmin is not None else opts.s_tmin
239
- self.s_tmax = (self.s_tmax if self.s_tmax is not None else opts.s_tmax) or float('inf')
240
  self.s_noise = self.s_noise if self.s_noise is not None else opts.s_noise
241
 
242
  self.extra_generation_params = self.extra_generation_params or {}
@@ -296,7 +337,7 @@ class StableDiffusionProcessing:
296
  self.comments[text] = 1
297
 
298
  def txt2img_image_conditioning(self, x, width=None, height=None):
299
- self.is_using_inpainting_conditioning = self.sd_model.model.conditioning_key in {'hybrid', 'concat'}
300
 
301
  return txt2img_image_conditioning(self.sd_model, x, width or self.width, height or self.height)
302
 
@@ -308,8 +349,8 @@ class StableDiffusionProcessing:
308
  def unclip_image_conditioning(self, source_image):
309
  c_adm = self.sd_model.embedder(source_image)
310
  if self.sd_model.noise_augmentor is not None:
311
- noise_level = 0 # TODO: Allow other noise levels?
312
- c_adm, noise_level_emb = self.sd_model.noise_augmentor(c_adm, noise_level=repeat(torch.tensor([noise_level]).to(c_adm.device), '1 -> b', b=c_adm.shape[0]))
313
  c_adm = torch.cat((c_adm, noise_level_emb), 1)
314
  return c_adm
315
 
@@ -335,11 +376,7 @@ class StableDiffusionProcessing:
335
  # Create another latent image, this time with a masked version of the original input.
336
  # Smoothly interpolate between the masked and unmasked latent conditioning image using a parameter.
337
  conditioning_mask = conditioning_mask.to(device=source_image.device, dtype=source_image.dtype)
338
- conditioning_image = torch.lerp(
339
- source_image,
340
- source_image * (1.0 - conditioning_mask),
341
- getattr(self, "inpainting_mask_weight", shared.opts.inpainting_mask_weight)
342
- )
343
 
344
  # Encode the new masked image using first stage of network.
345
  conditioning_image = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(conditioning_image))
@@ -358,14 +395,14 @@ class StableDiffusionProcessing:
358
  if self.sd_model.cond_stage_key == "edit":
359
  return self.edit_image_conditioning(source_image)
360
 
361
- if self.sampler.conditioning_key in {'hybrid', 'concat'}:
362
             return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask, round_image_mask=round_image_mask)
 
         if self.sampler.conditioning_key == "crossattn-adm":
             return self.unclip_image_conditioning(source_image)
 
         sd = self.sampler.model_wrap.inner_model.model.state_dict()
-        diffusion_model_input = sd.get('diffusion_model.input_blocks.0.0.weight', None)
         if diffusion_model_input is not None:
             if diffusion_model_input.shape[1] == 9:
                 return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask)
@@ -394,7 +431,7 @@ class StableDiffusionProcessing:
         return self.token_merging_ratio or opts.token_merging_ratio
 
     def setup_prompts(self):
-        if isinstance(self.prompt,list):
+        if isinstance(self.prompt, list):
             self.all_prompts = self.prompt
         elif isinstance(self.negative_prompt, list):
             self.all_prompts = [self.prompt] * len(self.negative_prompt)
@@ -505,7 +542,7 @@ class Processed:
         self.height = p.height
         self.sampler_name = p.sampler_name
         self.cfg_scale = p.cfg_scale
-        self.image_cfg_scale = getattr(p, 'image_cfg_scale', None)
+        self.image_cfg_scale = getattr(p, "image_cfg_scale", None)
         self.steps = p.steps
         self.batch_size = p.batch_size
         self.restore_faces = p.restore_faces
@@ -516,7 +553,7 @@ class Processed:
         self.sd_vae_hash = p.sd_vae_hash
         self.seed_resize_from_w = p.seed_resize_from_w
         self.seed_resize_from_h = p.seed_resize_from_h
-        self.denoising_strength = getattr(p, 'denoising_strength', None)
+        self.denoising_strength = getattr(p, "denoising_strength", None)
         self.extra_generation_params = p.extra_generation_params
         self.index_of_first_image = index_of_first_image
         self.styles = p.styles
@@ -611,7 +648,7 @@ def decode_latent_batch(model, batch, target_device=None, check_for_nans=False):
 
 
 def get_fixed_seed(seed):
-    if seed == '' or seed is None:
+    if seed == "" or seed is None:
         seed = -1
     elif isinstance(seed, str):
         try:
@@ -643,8 +680,8 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
     if all_negative_prompts is None:
         all_negative_prompts = p.all_negative_prompts
 
-    clip_skip = getattr(p, 'clip_skip', opts.CLIP_stop_at_last_layers)
-    enable_hr = getattr(p, 'enable_hr', False)
+    clip_skip = getattr(p, "clip_skip", opts.CLIP_stop_at_last_layers)
+    enable_hr = getattr(p, "enable_hr", False)
     token_merging_ratio = p.get_token_merging_ratio()
     token_merging_ratio_hr = p.get_token_merging_ratio(for_hr=True)
 
@@ -663,7 +700,7 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
         "Sampler": p.sampler_name,
         "Schedule type": p.scheduler,
         "CFG scale": p.cfg_scale,
-        "Image CFG scale": getattr(p, 'image_cfg_scale', None),
+        "Image CFG scale": getattr(p, "image_cfg_scale", None),
         "Seed": p.all_seeds[0] if use_main_prompt else all_seeds[index],
         "Face restoration": opts.face_restoration_model if p.restore_faces else None,
         "Size": f"{p.width}x{p.height}",
@@ -681,7 +718,7 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
         "ENSD": opts.eta_noise_seed_delta if uses_ensd else None,
         "Token merging ratio": None if token_merging_ratio == 0 else token_merging_ratio,
         "Token merging ratio hr": None if not enable_hr or token_merging_ratio_hr == 0 else token_merging_ratio_hr,
-        "Init image hash": getattr(p, 'init_img_hash', None),
+        "Init image hash": getattr(p, "init_img_hash", None),
         "RNG": opts.randn_source if opts.randn_source != "GPU" else None,
         "NGMS": None if p.s_min_uncond == 0 else p.s_min_uncond,
         "Tiling": True if p.tiling else None,
@@ -701,7 +738,7 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
         errors.report(f'Error creating infotext for key "{key}"', exc_info=True)
         generation_params[key] = None
 
-    generation_params_text = ", ".join([k if k == v else f'{k}: {infotext_utils.quote(v)}' for k, v in generation_params.items() if v is not None])
+    generation_params_text = ", ".join([k if k == v else f"{k}: {infotext_utils.quote(v)}" for k, v in generation_params.items() if v is not None])
 
     negative_prompt_text = f"\nNegative prompt: {negative_prompt}" if negative_prompt else ""
 
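A quick standalone sketch of the seed normalization touched above — empty or unparseable seed inputs collapse to the sentinel `-1`, which then triggers a random draw (the `randrange` upper bound here is an assumption for illustration, not taken from this diff):

```python
import random

def get_fixed_seed_sketch(seed):
    # "" or None means "pick a seed for me"
    if seed == "" or seed is None:
        seed = -1
    elif isinstance(seed, str):
        try:
            seed = int(seed)  # e.g. "1234" -> 1234
        except ValueError:
            seed = -1

    if seed == -1:
        # assumed bound; the point is only that -1 is replaced by a random seed
        return int(random.randrange(4294967294))
    return seed
```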
@@ -717,17 +754,17 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
     try:
         # if no checkpoint override or the override checkpoint can't be found, remove override entry and load opts checkpoint
         # and if after running refiner, the refiner model is not unloaded - webui swaps back to main model here, if model over is present it will be reloaded afterwards
-        if sd_models.checkpoint_aliases.get(p.override_settings.get('sd_model_checkpoint')) is None:
-            p.override_settings.pop('sd_model_checkpoint', None)
+        if sd_models.checkpoint_aliases.get(p.override_settings.get("sd_model_checkpoint")) is None:
+            p.override_settings.pop("sd_model_checkpoint", None)
             sd_models.reload_model_weights()
 
         for k, v in p.override_settings.items():
             opts.set(k, v, is_api=True, run_callbacks=False)
 
-            if k == 'sd_model_checkpoint':
+            if k == "sd_model_checkpoint":
                 sd_models.reload_model_weights()
 
-            if k == 'sd_vae':
+            if k == "sd_vae":
                 sd_vae.reload_vae_weights()
 
         sd_samplers.fix_p_invalid_sampler_and_scheduler(p)
@@ -740,7 +777,7 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
         for k, v in stored_opts.items():
             setattr(opts, k, v)
 
-            if k == 'sd_vae':
+            if k == "sd_vae":
                 sd_vae.reload_vae_weights()
 
         return res
@@ -750,7 +787,7 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
     """this is the main loop that both txt2img and img2img use; it calls func_init once inside all the scopes and func_sample once per batch"""
 
     if isinstance(p.prompt, list):
-        assert(len(p.prompt) > 0)
+        assert len(p.prompt) > 0
     else:
         assert p.prompt is not None
 
@@ -768,7 +805,7 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
     if p.refiner_checkpoint not in (None, "", "None", "none"):
         p.refiner_checkpoint_info = sd_models.get_closet_checkpoint_match(p.refiner_checkpoint)
         if p.refiner_checkpoint_info is None:
-            raise Exception(f'Could not find checkpoint with name {p.refiner_checkpoint}')
+            raise Exception(f"Could not find checkpoint with name {p.refiner_checkpoint}")
 
     p.sd_model_name = shared.sd_model.sd_checkpoint_info.name_for_extra
     p.sd_model_hash = shared.sd_model.sd_model_hash
@@ -823,10 +860,10 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
         sd_models.reload_model_weights()  # model can be changed for example by refiner
 
         p.sd_model.forge_objects = p.sd_model.forge_objects_original.shallow_copy()
-        p.prompts = p.all_prompts[n * p.batch_size:(n + 1) * p.batch_size]
-        p.negative_prompts = p.all_negative_prompts[n * p.batch_size:(n + 1) * p.batch_size]
-        p.seeds = p.all_seeds[n * p.batch_size:(n + 1) * p.batch_size]
-        p.subseeds = p.all_subseeds[n * p.batch_size:(n + 1) * p.batch_size]
+        p.prompts = p.all_prompts[n * p.batch_size : (n + 1) * p.batch_size]
+        p.negative_prompts = p.all_negative_prompts[n * p.batch_size : (n + 1) * p.batch_size]
+        p.seeds = p.all_seeds[n * p.batch_size : (n + 1) * p.batch_size]
+        p.subseeds = p.all_subseeds[n * p.batch_size : (n + 1) * p.batch_size]
 
         p.rng = rng.ImageRNG((opt_C, p.height // opt_f, p.width // opt_f), p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, seed_resize_from_h=p.seed_resize_from_h, seed_resize_from_w=p.seed_resize_from_w)
 
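The reformatted slice expressions above are the per-iteration batching; a minimal illustration of what the `n * batch_size : (n + 1) * batch_size` window does to the flattened job lists:

```python
# Iteration n takes one batch_size-wide window out of the flattened lists,
# exactly like p.prompts / p.seeds / p.subseeds above.
all_prompts = ["a", "b", "c", "d", "e", "f"]
batch_size = 2

for n in range(len(all_prompts) // batch_size):
    print(n, all_prompts[n * batch_size : (n + 1) * batch_size])
# 0 ['a', 'b']
# 1 ['c', 'd']
# 2 ['e', 'f']
```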
@@ -874,11 +911,11 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
             p.scripts.post_sample(p, ps)
             samples_ddim = ps.samples
 
-        if getattr(samples_ddim, 'already_decoded', False):
+        if getattr(samples_ddim, "already_decoded", False):
             x_samples_ddim = samples_ddim
         else:
-            if opts.sd_vae_decode_method != 'Full':
-                p.extra_generation_params['VAE Decoder'] = opts.sd_vae_decode_method
+            if opts.sd_vae_decode_method != "Full":
+                p.extra_generation_params["VAE Decoder"] = opts.sd_vae_decode_method
             x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
 
         x_samples_ddim = torch.stack(x_samples_ddim).float()
@@ -893,8 +930,8 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
         if p.scripts is not None:
             p.scripts.postprocess_batch(p, x_samples_ddim, batch_number=n)
 
-        p.prompts = p.all_prompts[n * p.batch_size:(n + 1) * p.batch_size]
-        p.negative_prompts = p.all_negative_prompts[n * p.batch_size:(n + 1) * p.batch_size]
+        p.prompts = p.all_prompts[n * p.batch_size : (n + 1) * p.batch_size]
+        p.negative_prompts = p.all_negative_prompts[n * p.batch_size : (n + 1) * p.batch_size]
 
         batch_params = scripts.PostprocessBatchListArgs(list(x_samples_ddim))
         p.scripts.postprocess_batch_list(p, batch_params, batch_number=n)
@@ -908,7 +945,7 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
         for i, x_sample in enumerate(x_samples_ddim):
             p.batch_index = i
 
-            x_sample = 255. * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
+            x_sample = 255.0 * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
             x_sample = x_sample.astype(np.uint8)
 
             if p.restore_faces:
@@ -969,14 +1006,14 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
 
             if mask_for_overlay is not None:
                 if opts.return_mask or opts.save_mask:
-                    image_mask = mask_for_overlay.convert('RGB')
+                    image_mask = mask_for_overlay.convert("RGB")
                     if save_samples and opts.save_mask:
                         images.save_image(image_mask, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask")
                     if opts.return_mask:
                         output_images.append(image_mask)
 
                 if opts.return_mask_composite or opts.save_mask_composite:
-                    image_mask_composite = Image.composite(original_denoised_image.convert('RGBA').convert('RGBa'), Image.new('RGBa', image.size), images.resize_image(2, mask_for_overlay, image.width, image.height).convert('L')).convert('RGBA')
+                    image_mask_composite = Image.composite(original_denoised_image.convert("RGBA").convert("RGBa"), Image.new("RGBa", image.size), images.resize_image(2, mask_for_overlay, image.width, image.height).convert("L")).convert("RGBA")
                     if save_samples and opts.save_mask_composite:
                         images.save_image(image_mask_composite, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask-composite")
                     if opts.return_mask_composite:
@@ -1054,8 +1091,10 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
     hr_checkpoint_name: str = None
     hr_sampler_name: str = None
     hr_scheduler: str = None
-    hr_prompt: str = ''
-    hr_negative_prompt: str = ''
+    hr_cfg_scale: float = None
+    hr_rescale_cfg: float = None
+    hr_prompt: str = ""
+    hr_negative_prompt: str = ""
     force_task_id: str = None
 
     cached_hr_uc = [None, None]
@@ -1128,57 +1167,68 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
         self.truncate_y = (self.hr_upscale_to_y - target_h) // opt_f
 
     def init(self, all_prompts, all_seeds, all_subseeds):
-        if self.enable_hr:
-            self.extra_generation_params["Denoising strength"] = self.denoising_strength
-
-            if self.hr_checkpoint_name and self.hr_checkpoint_name != 'Use same checkpoint':
-                self.hr_checkpoint_info = sd_models.get_closet_checkpoint_match(self.hr_checkpoint_name)
-
-                if self.hr_checkpoint_info is None:
-                    raise Exception(f'Could not find checkpoint with name {self.hr_checkpoint_name}')
-
-                if shared.sd_model.sd_checkpoint_info == self.hr_checkpoint_info:
-                    self.hr_checkpoint_info = None
-                else:
-                    self.extra_generation_params["Hires checkpoint"] = self.hr_checkpoint_info.short_title
-
-            if self.hr_sampler_name is not None and self.hr_sampler_name != self.sampler_name:
-                self.extra_generation_params["Hires sampler"] = self.hr_sampler_name
-
-            self.extra_generation_params["Hires schedule type"] = None  # to be set in sd_samplers_kdiffusion.py
-
-            if self.hr_scheduler is None:
-                self.hr_scheduler = self.scheduler
-
-            if tuple(self.hr_prompt) != tuple(self.prompt):
-                self.extra_generation_params["Hires prompt"] = self.hr_prompt
-
-            if tuple(self.hr_negative_prompt) != tuple(self.negative_prompt):
-                self.extra_generation_params["Hires negative prompt"] = self.hr_negative_prompt
-
-            self.latent_scale_mode = shared.latent_upscale_modes.get(self.hr_upscaler, None) if self.hr_upscaler is not None else shared.latent_upscale_modes.get(shared.latent_upscale_default_mode, "nearest")
-            if self.enable_hr and self.latent_scale_mode is None:
-                if not any(x.name == self.hr_upscaler for x in shared.sd_upscalers):
-                    raise Exception(f"could not find upscaler named {self.hr_upscaler}")
-
-            self.calculate_target_resolution()
-
-            if not state.processing_has_refined_job_count:
-                if state.job_count == -1:
-                    state.job_count = self.n_iter
-                if getattr(self, 'txt2img_upscale', False):
-                    total_steps = (self.hr_second_pass_steps or self.steps) * state.job_count
-                else:
-                    total_steps = (self.steps + (self.hr_second_pass_steps or self.steps)) * state.job_count
-                shared.total_tqdm.updateTotal(total_steps)
-                state.job_count = state.job_count * 2
-                state.processing_has_refined_job_count = True
-
-            if self.hr_second_pass_steps:
-                self.extra_generation_params["Hires steps"] = self.hr_second_pass_steps
-
-            if self.hr_upscaler is not None:
-                self.extra_generation_params["Hires upscaler"] = self.hr_upscaler
+        if not self.enable_hr:
+            return
+
+        self.extra_generation_params["Denoising strength"] = self.denoising_strength
+
+        if self.hr_checkpoint_name and self.hr_checkpoint_name != "Use same checkpoint":
+            self.hr_checkpoint_info = sd_models.get_closet_checkpoint_match(self.hr_checkpoint_name)
+
+            if self.hr_checkpoint_info is None:
+                raise Exception(f"Could not find checkpoint with name {self.hr_checkpoint_name}")
+
+            if shared.sd_model.sd_checkpoint_info == self.hr_checkpoint_info:
+                self.hr_checkpoint_info = None
+            else:
+                self.extra_generation_params["Hires checkpoint"] = self.hr_checkpoint_info.short_title
+
+        if self.hr_sampler_name is not None and self.hr_sampler_name != self.sampler_name:
+            self.extra_generation_params["Hires sampler"] = self.hr_sampler_name
+
+        self.extra_generation_params["Hires schedule type"] = None  # to be set in sd_samplers_kdiffusion.py
+
+        if self.hr_scheduler is None:
+            self.hr_scheduler = self.scheduler
+
+        if self.hr_cfg_scale != self.cfg_scale:
+            self.extra_generation_params["Hires CFG Scale"] = self.hr_cfg_scale
+
+        if self.hr_rescale_cfg:
+            from modules.processing_scripts.rescale_cfg import ScriptRescaleCFG
+
+            ScriptRescaleCFG.apply_rescale_cfg(self, self.hr_rescale_cfg)
+            self.extra_generation_params["Hires Rescale CFG"] = self.hr_rescale_cfg
+
+        if tuple(self.hr_prompt) != tuple(self.prompt):
+            self.extra_generation_params["Hires prompt"] = self.hr_prompt
+
+        if tuple(self.hr_negative_prompt) != tuple(self.negative_prompt):
+            self.extra_generation_params["Hires negative prompt"] = self.hr_negative_prompt
+
+        self.latent_scale_mode = shared.latent_upscale_modes.get(self.hr_upscaler, None) if self.hr_upscaler is not None else shared.latent_upscale_modes.get(shared.latent_upscale_default_mode, "nearest")
+        if self.enable_hr and self.latent_scale_mode is None:
+            if not any(x.name == self.hr_upscaler for x in shared.sd_upscalers):
+                raise Exception(f"could not find upscaler named {self.hr_upscaler}")
+
+        self.calculate_target_resolution()
+
+        if not state.processing_has_refined_job_count:
+            if state.job_count == -1:
+                state.job_count = self.n_iter
+            if getattr(self, "txt2img_upscale", False):
+                total_steps = (self.hr_second_pass_steps or self.steps) * state.job_count
+            else:
+                total_steps = (self.steps + (self.hr_second_pass_steps or self.steps)) * state.job_count
+            shared.total_tqdm.updateTotal(total_steps)
+            state.job_count = state.job_count * 2
+            state.processing_has_refined_job_count = True
+
+        if self.hr_second_pass_steps:
+            self.extra_generation_params["Hires steps"] = self.hr_second_pass_steps
+
+        if self.hr_upscaler is not None:
+            self.extra_generation_params["Hires upscaler"] = self.hr_upscaler
 
     def sample(self, conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength, prompts):
         self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
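The job accounting in the rewritten `init()` can be checked with plain arithmetic; a worked example under assumed values (`steps=20`, no dedicated hires step count, two iterations):

```python
steps = 20
hr_second_pass_steps = 0  # 0 means "reuse the first-pass step count"
n_iter = 2
txt2img_upscale = False   # the txt2img "upscale" path skips the base pass

job_count = n_iter
if txt2img_upscale:
    total_steps = (hr_second_pass_steps or steps) * job_count
else:
    total_steps = (steps + (hr_second_pass_steps or steps)) * job_count

print(total_steps)     # 80 -> 20 base + 20 hires steps, twice
print(job_count * 2)   # 4  -> every image is counted as two jobs
```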
@@ -1199,8 +1249,8 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
         image = torch.from_numpy(np.expand_dims(image, axis=0))
         image = image.to(shared.device, dtype=torch.float32)
 
-        if opts.sd_vae_encode_method != 'Full':
-            self.extra_generation_params['VAE Encoder'] = opts.sd_vae_encode_method
+        if opts.sd_vae_encode_method != "Full":
+            self.extra_generation_params["VAE Encoder"] = opts.sd_vae_encode_method
 
         samples = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
         decoded_samples = None
@@ -1215,11 +1265,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
         apply_token_merging(self.sd_model, self.get_token_merging_ratio())
 
         if self.scripts is not None:
-            self.scripts.process_before_every_sampling(self,
-                x=x,
-                noise=x,
-                c=conditioning,
-                uc=unconditional_conditioning)
+            self.scripts.process_before_every_sampling(self, x=x, noise=x, c=conditioning, uc=unconditional_conditioning)
 
         if self.modified_noise is not None:
             x = self.modified_noise
@@ -1284,7 +1330,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
 
         batch_images = []
         for i, x_sample in enumerate(lowres_samples):
-            x_sample = 255. * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
+            x_sample = 255.0 * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
             x_sample = x_sample.astype(np.uint8)
             image = Image.fromarray(x_sample)
 
@@ -1298,15 +1344,15 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
         decoded_samples = torch.from_numpy(np.array(batch_images))
         decoded_samples = decoded_samples.to(shared.device, dtype=torch.float32)
 
-        if opts.sd_vae_encode_method != 'Full':
-            self.extra_generation_params['VAE Encoder'] = opts.sd_vae_encode_method
+        if opts.sd_vae_encode_method != "Full":
+            self.extra_generation_params["VAE Encoder"] = opts.sd_vae_encode_method
         samples = images_tensor_to_samples(decoded_samples, approximation_indexes.get(opts.sd_vae_encode_method))
 
         image_conditioning = self.img2img_image_conditioning(decoded_samples, samples)
 
         shared.state.nextjob()
 
-        samples = samples[:, :, self.truncate_y//2:samples.shape[2]-(self.truncate_y+1)//2, self.truncate_x//2:samples.shape[3]-(self.truncate_x+1)//2]
+        samples = samples[:, :, self.truncate_y // 2 : samples.shape[2] - (self.truncate_y + 1) // 2, self.truncate_x // 2 : samples.shape[3] - (self.truncate_x + 1) // 2]
 
         self.rng = rng.ImageRNG(samples.shape[1:], self.seeds, subseeds=self.subseeds, subseed_strength=self.subseed_strength, seed_resize_from_h=self.seed_resize_from_h, seed_resize_from_w=self.seed_resize_from_w)
         noise = self.rng.next()
@@ -1328,11 +1374,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
         apply_token_merging(self.sd_model, self.get_token_merging_ratio(for_hr=True))
 
         if self.scripts is not None:
-            self.scripts.process_before_every_sampling(self,
-                x=samples,
-                noise=noise,
-                c=self.hr_c,
-                uc=self.hr_uc)
+            self.scripts.process_before_every_sampling(self, x=samples, noise=noise, c=self.hr_c, uc=self.hr_uc)
 
         if self.modified_noise is not None:
             noise = self.modified_noise
@@ -1362,10 +1404,10 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
         if not self.enable_hr:
             return
 
-        if self.hr_prompt == '':
+        if self.hr_prompt == "":
            self.hr_prompt = self.prompt
 
-        if self.hr_negative_prompt == '':
+        if self.hr_negative_prompt == "":
            self.hr_negative_prompt = self.negative_prompt
 
         if isinstance(self.hr_prompt, list):
@@ -1430,8 +1472,8 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
         res = super().parse_extra_network_prompts()
 
         if self.enable_hr:
-            self.hr_prompts = self.all_hr_prompts[self.iteration * self.batch_size:(self.iteration + 1) * self.batch_size]
-            self.hr_negative_prompts = self.all_hr_negative_prompts[self.iteration * self.batch_size:(self.iteration + 1) * self.batch_size]
+            self.hr_prompts = self.all_hr_prompts[self.iteration * self.batch_size : (self.iteration + 1) * self.batch_size]
+            self.hr_negative_prompts = self.all_hr_negative_prompts[self.iteration * self.batch_size : (self.iteration + 1) * self.batch_size]
 
         self.hr_prompts, self.hr_extra_network_data = extra_networks.parse_prompts(self.hr_prompts)
 
@@ -1515,19 +1557,24 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
             np_mask = cv2.GaussianBlur(np_mask, (1, kernel_size), self.mask_blur_y)
             image_mask = Image.fromarray(np_mask)
 
+        if opts.img2img_inpaint_precise_mask and self.mask_blur_x * self.mask_blur_y > 0:
+            _np_mask = np.array(image_mask).astype(np.float32) / 255.0
+            kernel_size = 2 * int(2.5 * self.mask_blur_x + 0.5) + 1
+            _image_mask = cv2.GaussianBlur(_np_mask, (kernel_size, kernel_size), self.mask_blur_x)
+
         if self.mask_blur_x > 0 or self.mask_blur_y > 0:
             self.extra_generation_params["Mask blur"] = self.mask_blur
 
         if self.inpaint_full_res:
             self.mask_for_overlay = image_mask
-            mask = image_mask.convert('L')
+            mask = image_mask.convert("L")
             crop_region = masking.get_crop_region(mask, self.inpaint_full_res_padding)
             crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
             x1, y1, x2, y2 = crop_region
 
             mask = mask.crop(crop_region)
             image_mask = images.resize_image(2, mask, self.width, self.height)
-            self.paste_to = (x1, y1, x2-x1, y2-y1)
+            self.paste_to = (x1, y1, x2 - x1, y2 - y1)
 
             self.extra_generation_params["Inpaint area"] = "Only masked"
             self.extra_generation_params["Masked area padding"] = self.inpaint_full_res_padding
@@ -1558,10 +1605,12 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
             image = images.resize_image(self.resize_mode, image, self.width, self.height)
 
             if image_mask is not None:
-                image_masked = Image.new('RGBa', (image.width, image.height))
-                image_masked.paste(image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(self.mask_for_overlay.convert('L')))
-
-                self.overlay_images.append(image_masked.convert('RGBA'))
+                if opts.img2img_inpaint_precise_mask:
+                    self.overlay_images.append((image, _image_mask))
+                else:
+                    image_masked = Image.new("RGBa", (image.width, image.height))
+                    image_masked.paste(image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(self.mask_for_overlay.convert("L")))
+                    self.overlay_images.append(image_masked.convert("RGBA"))
 
             # crop_region is not None if we are doing inpaint full res
             if crop_region is not None:
@@ -1573,7 +1622,7 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
                 image = masking.fill(image, latent_mask)
 
                 if self.inpainting_fill == 0:
-                    self.extra_generation_params["Masked content"] = 'fill'
+                    self.extra_generation_params["Masked content"] = "fill"
 
             if add_color_corrections:
                 self.color_corrections.append(setup_color_correction(image))
@@ -1600,8 +1649,8 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
         image = torch.from_numpy(batch_images)
         image = image.to(shared.device, dtype=torch.float32)
 
-        if opts.sd_vae_encode_method != 'Full':
-            self.extra_generation_params['VAE Encoder'] = opts.sd_vae_encode_method
+        if opts.sd_vae_encode_method != "Full":
+            self.extra_generation_params["VAE Encoder"] = opts.sd_vae_encode_method
 
         self.init_latent = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
         devices.torch_gc()
@@ -1611,7 +1660,7 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
 
         if image_mask is not None:
             init_mask = latent_mask
-            latmask = init_mask.convert('RGB').resize((self.init_latent.shape[3], self.init_latent.shape[2]))
+            latmask = init_mask.convert("RGB").resize((self.init_latent.shape[3], self.init_latent.shape[2]))
             latmask = np.moveaxis(np.array(latmask, dtype=np.float32), 2, 0) / 255
             latmask = latmask[0]
             if self.mask_round:
@@ -1623,12 +1672,12 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
 
         # this needs to be fixed to be done in sample() using actual seeds for batches
         if self.inpainting_fill == 2:
-            self.init_latent = self.init_latent * self.mask + create_random_tensors(self.init_latent.shape[1:], all_seeds[0:self.init_latent.shape[0]]) * self.nmask
-            self.extra_generation_params["Masked content"] = 'latent noise'
+            self.init_latent = self.init_latent * self.mask + create_random_tensors(self.init_latent.shape[1:], all_seeds[0 : self.init_latent.shape[0]]) * self.nmask
+            self.extra_generation_params["Masked content"] = "latent noise"
 
         elif self.inpainting_fill == 3:
             self.init_latent = self.init_latent * self.mask
-            self.extra_generation_params["Masked content"] = 'latent nothing'
+            self.extra_generation_params["Masked content"] = "latent nothing"
 
         self.image_conditioning = self.img2img_image_conditioning(image * 2 - 1, self.init_latent, image_mask, self.mask_round)
 
@@ -1643,11 +1692,7 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
         apply_token_merging(self.sd_model, self.get_token_merging_ratio())
 
         if self.scripts is not None:
-            self.scripts.process_before_every_sampling(self,
-                x=self.init_latent,
-                noise=x,
-                c=conditioning,
-                uc=unconditional_conditioning)
+            self.scripts.process_before_every_sampling(self, x=self.init_latent, noise=x, c=conditioning, uc=unconditional_conditioning)
 
         if self.modified_noise is not None:
             x = self.modified_noise
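The new fp32 mask-blur path above derives its Gaussian kernel size from the blur sigma; a small self-contained check of that relationship (the mask content is made up):

```python
import cv2
import numpy as np

mask_blur_x = 4  # sigma in pixels
# OpenCV needs an odd kernel; this covers roughly +/-2.5 sigma, as in the diff
kernel_size = 2 * int(2.5 * mask_blur_x + 0.5) + 1
print(kernel_size)  # 21

mask = np.zeros((64, 64), dtype=np.float32)
mask[16:48, 16:48] = 1.0  # hard-edged inpaint region
soft = cv2.GaussianBlur(mask, (kernel_size, kernel_size), mask_blur_x)
print(float(soft.min()), float(soft.max()))  # smooth values in [0, 1], no uint8 steps
```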
 
  from __future__ import annotations
+
+ import hashlib
  import json
  import math
  import os
+ import random
  import sys
  from dataclasses import dataclass, field
+ from typing import Any
 
+ import cv2
  import numpy as np
+ import torch
+ from einops import repeat
  from PIL import Image, ImageOps
  from skimage.exposure import match_histograms
 
  import modules.face_restoration
+ import modules.paths as paths
+ import modules.sd_hijack
  import modules.sd_models as sd_models
  import modules.sd_vae as sd_vae
+ import modules.shared as shared
+ import modules.styles  # noqa
+ import modules.images as images  # noqa (circular import)
+ from modules import (
+     devices,
+     errors,
+     extra_networks,
+     infotext_utils,
+     masking,
+     prompt_parser,
+     rng,
+     scripts,
+     sd_samplers,
+     sd_samplers_common,
+     sd_unet,
+     sd_vae_approx,
+ )
+ from modules.rng import slerp  # noqa
+ from modules.sd_hijack import model_hijack
  from modules.sd_models import apply_token_merging
+ from modules.sd_samplers_common import (
+     approximation_indexes,
+     decode_first_stage,
+     images_tensor_to_samples,
+ )
+ from modules.shared import cmd_opts, opts, state
  from modules_forge.forge_loader import apply_alpha_schedule_override
+ from modules_forge.forge_util import apply_circular_forge
 
  # some of those options should not be changed at all because they would break the model, so I removed them from options.
  opt_C = 4
  def uncrop(image, dest_size, paste_loc):
      x, y, w, h = paste_loc
+     base_image = Image.new("RGBA", dest_size)
      image = images.resize_image(1, image, w, h)
      base_image.paste(image, (x, y))
      image = base_image
 
      return image
 
 
+ def apply_overlay_precise(image: Image.Image, paste_loc: tuple[int], overlay: tuple[Image.Image, np.ndarray]):
+     _overlay, _mask = overlay
+
+     if paste_loc is not None:
+         image = uncrop(image, (_overlay.width, _overlay.height), paste_loc)
+
+     original_denoised_image = image.copy()
+
+     overlay_rgb = np.array(_overlay).astype(np.float32) / 255.0
+     image_np = np.array(image).astype(np.float32) / 255.0
+     image_rgb = image_np[:, :, :3]
+
+     _mask = np.expand_dims(_mask, axis=-1)
+     final = image_rgb * _mask + overlay_rgb * (1.0 - _mask)
+
+     _image = np.clip((final * 255.0).round(), 0, 255).astype(np.uint8)
+     image = Image.fromarray(_image)
+
+     return image, original_denoised_image
+
+
+ def apply_overlay(image: Image.Image, paste_loc: tuple[int], overlay: Image.Image):
      if overlay is None:
          return image, image.copy()
 
+     if opts.img2img_inpaint_precise_mask:
+         return apply_overlay_precise(image, paste_loc, overlay)
+
      if paste_loc is not None:
          image = uncrop(image, (overlay.width, overlay.height), paste_loc)
 
      original_denoised_image = image.copy()
 
+     image = image.convert("RGBA")
      image.alpha_composite(overlay)
+     image = image.convert("RGB")
 
      return image, original_denoised_image
 
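`apply_overlay_precise` above is a straight per-pixel linear blend in float32; a toy run of the same math (random arrays stand in for the denoised and original images):

```python
import numpy as np

rng = np.random.default_rng(0)
denoised = rng.random((4, 4, 3), dtype=np.float32)  # freshly generated pixels
original = rng.random((4, 4, 3), dtype=np.float32)  # untouched source pixels
mask = np.zeros((4, 4, 1), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # 1.0 where the denoised result should win

# result = denoised * mask + original * (1 - mask), then back to uint8
blended = denoised * mask + original * (1.0 - mask)
out = np.clip((blended * 255.0).round(), 0, 255).astype(np.uint8)
print(out.shape)  # (4, 4, 3)
```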
+
126
  def create_binary_mask(image, round=True):
127
+ if image.mode == "RGBA" and image.getextrema()[-1] != (255, 255):
128
  if round:
129
  image = image.split()[-1].convert("L").point(lambda x: 255 if x > 128 else 0)
130
  else:
131
  image = image.split()[-1].convert("L")
132
  else:
133
+ image = image.convert("L")
134
  return image
135
 
136
+
137
  def txt2img_image_conditioning(sd_model, x, width, height):
138
+ if sd_model.model.conditioning_key in {"hybrid", "concat"}: # Inpainting models
139
 
140
  # The "masked-image" in this case will just be all 0.5 since the entire image is masked.
141
  image_conditioning = torch.ones(x.shape[0], 3, height, width, device=x.device) * 0.5
 
147
 
148
  return image_conditioning
149
 
150
+ elif sd_model.model.conditioning_key == "crossattn-adm": # UnCLIP models
151
 
152
+ return x.new_zeros(x.shape[0], 2 * sd_model.noise_augmentor.time_embed.dim, dtype=x.dtype, device=x.device)
153
 
154
  else:
155
  sd = sd_model.model.state_dict()
156
+ diffusion_model_input = sd.get("diffusion_model.input_blocks.0.0.weight", None)
157
  if diffusion_model_input is not None:
158
  if diffusion_model_input.shape[1] == 9:
159
  # The "masked-image" in this case will just be all 0.5 since the entire image is masked.
160
  image_conditioning = torch.ones(x.shape[0], 3, height, width, device=x.device) * 0.5
161
+ image_conditioning = images_tensor_to_samples(image_conditioning, approximation_indexes.get(opts.sd_vae_encode_method))
 
162
 
163
  # Add the fake full 1s mask to the first dimension.
164
  image_conditioning = torch.nn.functional.pad(image_conditioning, (0, 0, 0, 0, 1, 0), value=1.0)
 
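For the 9-channel branch above, the usual reading (an inference, not stated in the diff) is 4 latent channels + 4 masked-image latent channels + 1 mask channel; the `pad` call is what prepends the all-ones mask channel:

```python
import torch

batch = 1
latent = torch.zeros(batch, 4, 8, 8)  # pretend VAE latent of the masked image
# pad dim 1 (channels) with one leading channel of 1.0 -> the "fake full mask"
cond = torch.nn.functional.pad(latent, (0, 0, 0, 0, 1, 0), value=1.0)
print(cond.shape)  # torch.Size([1, 5, 8, 8]); concatenated with the 4ch noise latent -> 9
```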
      self.s_min_uncond = self.s_min_uncond if self.s_min_uncond is not None else opts.s_min_uncond
      self.s_churn = self.s_churn if self.s_churn is not None else opts.s_churn
      self.s_tmin = self.s_tmin if self.s_tmin is not None else opts.s_tmin
+     self.s_tmax = (self.s_tmax if self.s_tmax is not None else opts.s_tmax) or float("inf")
      self.s_noise = self.s_noise if self.s_noise is not None else opts.s_noise
 
      self.extra_generation_params = self.extra_generation_params or {}
 
          self.comments[text] = 1
 
      def txt2img_image_conditioning(self, x, width=None, height=None):
+         self.is_using_inpainting_conditioning = self.sd_model.model.conditioning_key in {"hybrid", "concat"}
 
          return txt2img_image_conditioning(self.sd_model, x, width or self.width, height or self.height)
 
      def unclip_image_conditioning(self, source_image):
          c_adm = self.sd_model.embedder(source_image)
          if self.sd_model.noise_augmentor is not None:
+             noise_level = 0  # TODO: Allow other noise levels?
+             c_adm, noise_level_emb = self.sd_model.noise_augmentor(c_adm, noise_level=repeat(torch.tensor([noise_level]).to(c_adm.device), "1 -> b", b=c_adm.shape[0]))
              c_adm = torch.cat((c_adm, noise_level_emb), 1)
          return c_adm
 
          # Create another latent image, this time with a masked version of the original input.
          # Smoothly interpolate between the masked and unmasked latent conditioning image using a parameter.
          conditioning_mask = conditioning_mask.to(device=source_image.device, dtype=source_image.dtype)
+         conditioning_image = torch.lerp(source_image, source_image * (1.0 - conditioning_mask), getattr(self, "inpainting_mask_weight", shared.opts.inpainting_mask_weight))
 
          # Encode the new masked image using first stage of network.
          conditioning_image = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(conditioning_image))
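`torch.lerp(a, b, w)` computes `a + w * (b - a)`, so the conditioning image above slides from the unmasked source (`w=0`) to the fully masked source (`w=1`) as `inpainting_mask_weight` grows:

```python
import torch

source = torch.ones(1, 3, 4, 4)
mask = torch.zeros(1, 1, 4, 4)
mask[..., 1:3, 1:3] = 1.0
masked = source * (1.0 - mask)  # zeroed out inside the mask

for w in (0.0, 0.5, 1.0):
    cond = torch.lerp(source, masked, w)
    print(w, cond[0, 0, 2, 2].item())  # 1.0, 0.5, 0.0 inside the masked square
```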
          if self.sd_model.cond_stage_key == "edit":
              return self.edit_image_conditioning(source_image)
 
+         if self.sampler.conditioning_key in {"hybrid", "concat"}:
              return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask, round_image_mask=round_image_mask)
 
          if self.sampler.conditioning_key == "crossattn-adm":
              return self.unclip_image_conditioning(source_image)
 
          sd = self.sampler.model_wrap.inner_model.model.state_dict()
+         diffusion_model_input = sd.get("diffusion_model.input_blocks.0.0.weight", None)
          if diffusion_model_input is not None:
              if diffusion_model_input.shape[1] == 9:
                  return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask)
 
  return self.token_merging_ratio or opts.token_merging_ratio
432
 
433
  def setup_prompts(self):
434
+ if isinstance(self.prompt, list):
435
  self.all_prompts = self.prompt
436
  elif isinstance(self.negative_prompt, list):
437
  self.all_prompts = [self.prompt] * len(self.negative_prompt)
 
542
  self.height = p.height
543
  self.sampler_name = p.sampler_name
544
  self.cfg_scale = p.cfg_scale
545
+ self.image_cfg_scale = getattr(p, "image_cfg_scale", None)
546
  self.steps = p.steps
547
  self.batch_size = p.batch_size
548
  self.restore_faces = p.restore_faces
 
553
  self.sd_vae_hash = p.sd_vae_hash
554
  self.seed_resize_from_w = p.seed_resize_from_w
555
  self.seed_resize_from_h = p.seed_resize_from_h
556
+ self.denoising_strength = getattr(p, "denoising_strength", None)
557
  self.extra_generation_params = p.extra_generation_params
558
  self.index_of_first_image = index_of_first_image
559
  self.styles = p.styles
 
648
 
649
 
650
  def get_fixed_seed(seed):
651
+ if seed == "" or seed is None:
652
  seed = -1
653
  elif isinstance(seed, str):
654
  try:
 
680
  if all_negative_prompts is None:
681
  all_negative_prompts = p.all_negative_prompts
682
 
683
+ clip_skip = getattr(p, "clip_skip", opts.CLIP_stop_at_last_layers)
684
+ enable_hr = getattr(p, "enable_hr", False)
685
  token_merging_ratio = p.get_token_merging_ratio()
686
  token_merging_ratio_hr = p.get_token_merging_ratio(for_hr=True)
687
 
 
700
  "Sampler": p.sampler_name,
701
  "Schedule type": p.scheduler,
702
  "CFG scale": p.cfg_scale,
703
+ "Image CFG scale": getattr(p, "image_cfg_scale", None),
704
  "Seed": p.all_seeds[0] if use_main_prompt else all_seeds[index],
705
  "Face restoration": opts.face_restoration_model if p.restore_faces else None,
706
  "Size": f"{p.width}x{p.height}",
 
718
  "ENSD": opts.eta_noise_seed_delta if uses_ensd else None,
719
  "Token merging ratio": None if token_merging_ratio == 0 else token_merging_ratio,
720
  "Token merging ratio hr": None if not enable_hr or token_merging_ratio_hr == 0 else token_merging_ratio_hr,
721
+ "Init image hash": getattr(p, "init_img_hash", None),
722
  "RNG": opts.randn_source if opts.randn_source != "GPU" else None,
723
  "NGMS": None if p.s_min_uncond == 0 else p.s_min_uncond,
724
  "Tiling": True if p.tiling else None,
 
738
  errors.report(f'Error creating infotext for key "{key}"', exc_info=True)
739
  generation_params[key] = None
740
 
741
+ generation_params_text = ", ".join([k if k == v else f"{k}: {infotext_utils.quote(v)}" for k, v in generation_params.items() if v is not None])
742
 
743
  negative_prompt_text = f"\nNegative prompt: {negative_prompt}" if negative_prompt else ""
744
 
 
754
  try:
755
  # if no checkpoint override or the override checkpoint can't be found, remove override entry and load opts checkpoint
756
  # and if after running refiner, the refiner model is not unloaded - webui swaps back to main model here, if model over is present it will be reloaded afterwards
757
+ if sd_models.checkpoint_aliases.get(p.override_settings.get("sd_model_checkpoint")) is None:
758
+ p.override_settings.pop("sd_model_checkpoint", None)
759
  sd_models.reload_model_weights()
760
 
761
  for k, v in p.override_settings.items():
762
  opts.set(k, v, is_api=True, run_callbacks=False)
763
 
764
+ if k == "sd_model_checkpoint":
765
  sd_models.reload_model_weights()
766
 
767
+ if k == "sd_vae":
768
  sd_vae.reload_vae_weights()
769
 
770
  sd_samplers.fix_p_invalid_sampler_and_scheduler(p)
 
777
  for k, v in stored_opts.items():
778
  setattr(opts, k, v)
779
 
780
+ if k == "sd_vae":
781
  sd_vae.reload_vae_weights()
782
 
783
  return res
 
787
  """this is the main loop that both txt2img and img2img use; it calls func_init once inside all the scopes and func_sample once per batch"""
788
 
789
  if isinstance(p.prompt, list):
790
+ assert len(p.prompt) > 0
791
  else:
792
  assert p.prompt is not None
793
 
 
805
  if p.refiner_checkpoint not in (None, "", "None", "none"):
806
  p.refiner_checkpoint_info = sd_models.get_closet_checkpoint_match(p.refiner_checkpoint)
807
  if p.refiner_checkpoint_info is None:
808
+ raise Exception(f"Could not find checkpoint with name {p.refiner_checkpoint}")
809
 
810
  p.sd_model_name = shared.sd_model.sd_checkpoint_info.name_for_extra
811
  p.sd_model_hash = shared.sd_model.sd_model_hash
 
860
  sd_models.reload_model_weights() # model can be changed for example by refiner
861
 
862
  p.sd_model.forge_objects = p.sd_model.forge_objects_original.shallow_copy()
863
+ p.prompts = p.all_prompts[n * p.batch_size : (n + 1) * p.batch_size]
864
+ p.negative_prompts = p.all_negative_prompts[n * p.batch_size : (n + 1) * p.batch_size]
865
+ p.seeds = p.all_seeds[n * p.batch_size : (n + 1) * p.batch_size]
866
+ p.subseeds = p.all_subseeds[n * p.batch_size : (n + 1) * p.batch_size]
867
 
868
  p.rng = rng.ImageRNG((opt_C, p.height // opt_f, p.width // opt_f), p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, seed_resize_from_h=p.seed_resize_from_h, seed_resize_from_w=p.seed_resize_from_w)
869
 
 
911
  p.scripts.post_sample(p, ps)
912
  samples_ddim = ps.samples
913
 
914
+ if getattr(samples_ddim, "already_decoded", False):
915
  x_samples_ddim = samples_ddim
916
  else:
917
+ if opts.sd_vae_decode_method != "Full":
918
+ p.extra_generation_params["VAE Decoder"] = opts.sd_vae_decode_method
919
  x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
920
 
921
  x_samples_ddim = torch.stack(x_samples_ddim).float()
 
930
  if p.scripts is not None:
931
  p.scripts.postprocess_batch(p, x_samples_ddim, batch_number=n)
932
 
933
+ p.prompts = p.all_prompts[n * p.batch_size : (n + 1) * p.batch_size]
934
+ p.negative_prompts = p.all_negative_prompts[n * p.batch_size : (n + 1) * p.batch_size]
935
 
936
  batch_params = scripts.PostprocessBatchListArgs(list(x_samples_ddim))
937
  p.scripts.postprocess_batch_list(p, batch_params, batch_number=n)
 
945
  for i, x_sample in enumerate(x_samples_ddim):
946
  p.batch_index = i
947
 
948
+ x_sample = 255.0 * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
949
  x_sample = x_sample.astype(np.uint8)
950
 
951
  if p.restore_faces:
 
1006
 
1007
  if mask_for_overlay is not None:
1008
  if opts.return_mask or opts.save_mask:
1009
+ image_mask = mask_for_overlay.convert("RGB")
1010
  if save_samples and opts.save_mask:
1011
  images.save_image(image_mask, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask")
1012
  if opts.return_mask:
1013
  output_images.append(image_mask)
1014
 
1015
  if opts.return_mask_composite or opts.save_mask_composite:
1016
+ image_mask_composite = Image.composite(original_denoised_image.convert("RGBA").convert("RGBa"), Image.new("RGBa", image.size), images.resize_image(2, mask_for_overlay, image.width, image.height).convert("L")).convert("RGBA")
1017
  if save_samples and opts.save_mask_composite:
1018
  images.save_image(image_mask_composite, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask-composite")
1019
  if opts.return_mask_composite:
 
1091
  hr_checkpoint_name: str = None
1092
  hr_sampler_name: str = None
1093
  hr_scheduler: str = None
1094
+ hr_cfg_scale: float = None
1095
+ hr_rescale_cfg: float = None
1096
+ hr_prompt: str = ""
1097
+ hr_negative_prompt: str = ""
1098
  force_task_id: str = None
1099
 
1100
  cached_hr_uc = [None, None]
 
1167
  self.truncate_y = (self.hr_upscale_to_y - target_h) // opt_f
1168
 
1169
  def init(self, all_prompts, all_seeds, all_subseeds):
1170
+ if not self.enable_hr:
1171
+ return
1172
 
1173
+ self.extra_generation_params["Denoising strength"] = self.denoising_strength
 
1174
 
1175
+ if self.hr_checkpoint_name and self.hr_checkpoint_name != "Use same checkpoint":
1176
+ self.hr_checkpoint_info = sd_models.get_closet_checkpoint_match(self.hr_checkpoint_name)
1177
 
1178
+ if self.hr_checkpoint_info is None:
1179
+ raise Exception(f"Could not find checkpoint with name {self.hr_checkpoint_name}")
1180
+
1181
+ if shared.sd_model.sd_checkpoint_info == self.hr_checkpoint_info:
1182
+ self.hr_checkpoint_info = None
1183
+ else:
1184
+ self.extra_generation_params["Hires checkpoint"] = self.hr_checkpoint_info.short_title
1185
 
1186
+ if self.hr_sampler_name is not None and self.hr_sampler_name != self.sampler_name:
1187
+ self.extra_generation_params["Hires sampler"] = self.hr_sampler_name
1188
 
1189
+ self.extra_generation_params["Hires schedule type"] = None # to be set in sd_samplers_kdiffusion.py
 
 
 
1190
 
1191
+ if self.hr_scheduler is None:
1192
+ self.hr_scheduler = self.scheduler
1193
 
1194
+ if self.hr_cfg_scale != self.cfg_scale:
1195
+ self.extra_generation_params["Hires CFG Scale"] = self.hr_cfg_scale
1196
 
1197
+ if self.hr_rescale_cfg:
1198
+ from modules.processing_scripts.rescale_cfg import ScriptRescaleCFG
 
 
1199
 
1200
+ ScriptRescaleCFG.apply_rescale_cfg(self, self.hr_rescale_cfg)
1201
+ self.extra_generation_params["Hires Rescale CFG"] = self.hr_rescale_cfg
1202
 
1203
+ if tuple(self.hr_prompt) != tuple(self.prompt):
1204
+ self.extra_generation_params["Hires prompt"] = self.hr_prompt
 
 
 
 
 
 
 
 
1205
 
1206
+ if tuple(self.hr_negative_prompt) != tuple(self.negative_prompt):
1207
+ self.extra_generation_params["Hires negative prompt"] = self.hr_negative_prompt
1208
 
1209
+ self.latent_scale_mode = shared.latent_upscale_modes.get(self.hr_upscaler, None) if self.hr_upscaler is not None else shared.latent_upscale_modes.get(shared.latent_upscale_default_mode, "nearest")
1210
+ if self.enable_hr and self.latent_scale_mode is None:
1211
+ if not any(x.name == self.hr_upscaler for x in shared.sd_upscalers):
1212
+ raise Exception(f"could not find upscaler named {self.hr_upscaler}")
1213
+
1214
+ self.calculate_target_resolution()
1215
+
1216
+ if not state.processing_has_refined_job_count:
1217
+ if state.job_count == -1:
1218
+ state.job_count = self.n_iter
1219
+ if getattr(self, "txt2img_upscale", False):
1220
+ total_steps = (self.hr_second_pass_steps or self.steps) * state.job_count
1221
+ else:
1222
+ total_steps = (self.steps + (self.hr_second_pass_steps or self.steps)) * state.job_count
1223
+ shared.total_tqdm.updateTotal(total_steps)
1224
+ state.job_count = state.job_count * 2
1225
+ state.processing_has_refined_job_count = True
1226
+
1227
+ if self.hr_second_pass_steps:
1228
+ self.extra_generation_params["Hires steps"] = self.hr_second_pass_steps
1229
+
1230
+ if self.hr_upscaler is not None:
1231
+ self.extra_generation_params["Hires upscaler"] = self.hr_upscaler
1232
 
1233
  def sample(self, conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength, prompts):
1234
  self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
 
1249
  image = torch.from_numpy(np.expand_dims(image, axis=0))
1250
  image = image.to(shared.device, dtype=torch.float32)
1251
 
1252
+ if opts.sd_vae_encode_method != "Full":
1253
+ self.extra_generation_params["VAE Encoder"] = opts.sd_vae_encode_method
1254
 
1255
  samples = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
1256
  decoded_samples = None
 
1265
  apply_token_merging(self.sd_model, self.get_token_merging_ratio())
1266
 
1267
  if self.scripts is not None:
1268
+ self.scripts.process_before_every_sampling(self, x=x, noise=x, c=conditioning, uc=unconditional_conditioning)
 
 
 
 
1269
 
1270
  if self.modified_noise is not None:
1271
  x = self.modified_noise
 
1330
 
1331
  batch_images = []
1332
  for i, x_sample in enumerate(lowres_samples):
1333
+ x_sample = 255.0 * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
1334
  x_sample = x_sample.astype(np.uint8)
1335
  image = Image.fromarray(x_sample)
1336
 
 
1344
  decoded_samples = torch.from_numpy(np.array(batch_images))
1345
  decoded_samples = decoded_samples.to(shared.device, dtype=torch.float32)
1346
 
1347
+ if opts.sd_vae_encode_method != "Full":
1348
+ self.extra_generation_params["VAE Encoder"] = opts.sd_vae_encode_method
1349
  samples = images_tensor_to_samples(decoded_samples, approximation_indexes.get(opts.sd_vae_encode_method))
1350
 
1351
  image_conditioning = self.img2img_image_conditioning(decoded_samples, samples)
1352
 
1353
  shared.state.nextjob()
1354
 
1355
+ samples = samples[:, :, self.truncate_y // 2 : samples.shape[2] - (self.truncate_y + 1) // 2, self.truncate_x // 2 : samples.shape[3] - (self.truncate_x + 1) // 2]
1356
 
1357
  self.rng = rng.ImageRNG(samples.shape[1:], self.seeds, subseeds=self.subseeds, subseed_strength=self.subseed_strength, seed_resize_from_h=self.seed_resize_from_h, seed_resize_from_w=self.seed_resize_from_w)
1358
  noise = self.rng.next()
 
1374
  apply_token_merging(self.sd_model, self.get_token_merging_ratio(for_hr=True))
1375
 
1376
  if self.scripts is not None:
1377
+ self.scripts.process_before_every_sampling(self, x=samples, noise=noise, c=self.hr_c, uc=self.hr_uc)
 
 
 
 
1378
 
1379
  if self.modified_noise is not None:
1380
  noise = self.modified_noise
 
1404
  if not self.enable_hr:
1405
  return
1406
 
1407
+ if self.hr_prompt == "":
1408
  self.hr_prompt = self.prompt
1409
 
1410
+ if self.hr_negative_prompt == "":
1411
  self.hr_negative_prompt = self.negative_prompt
1412
 
1413
  if isinstance(self.hr_prompt, list):
 
1472
  res = super().parse_extra_network_prompts()
1473
 
1474
  if self.enable_hr:
1475
+ self.hr_prompts = self.all_hr_prompts[self.iteration * self.batch_size : (self.iteration + 1) * self.batch_size]
1476
+ self.hr_negative_prompts = self.all_hr_negative_prompts[self.iteration * self.batch_size : (self.iteration + 1) * self.batch_size]
1477
 
1478
  self.hr_prompts, self.hr_extra_network_data = extra_networks.parse_prompts(self.hr_prompts)
1479
 
 
1557
  np_mask = cv2.GaussianBlur(np_mask, (1, kernel_size), self.mask_blur_y)
1558
  image_mask = Image.fromarray(np_mask)
1559
 
1560
+ if opts.img2img_inpaint_precise_mask and self.mask_blur_x * self.mask_blur_y > 0:
1561
+ _np_mask = np.array(image_mask).astype(np.float32) / 255.0
1562
+ kernel_size = 2 * int(2.5 * self.mask_blur_x + 0.5) + 1
1563
+ _image_mask = cv2.GaussianBlur(_np_mask, (kernel_size, kernel_size), self.mask_blur_x)
1564
+
1565
  if self.mask_blur_x > 0 or self.mask_blur_y > 0:
1566
  self.extra_generation_params["Mask blur"] = self.mask_blur
1567
 
1568
  if self.inpaint_full_res:
1569
  self.mask_for_overlay = image_mask
1570
+ mask = image_mask.convert("L")
1571
  crop_region = masking.get_crop_region(mask, self.inpaint_full_res_padding)
1572
  crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
1573
  x1, y1, x2, y2 = crop_region
1574
 
1575
  mask = mask.crop(crop_region)
1576
  image_mask = images.resize_image(2, mask, self.width, self.height)
1577
+ self.paste_to = (x1, y1, x2 - x1, y2 - y1)
1578
 
1579
  self.extra_generation_params["Inpaint area"] = "Only masked"
1580
  self.extra_generation_params["Masked area padding"] = self.inpaint_full_res_padding
 
1605
  image = images.resize_image(self.resize_mode, image, self.width, self.height)
1606
 
1607
  if image_mask is not None:
1608
+ if opts.img2img_inpaint_precise_mask:
1609
+ self.overlay_images.append((image, _image_mask))
1610
+ else:
1611
+ image_masked = Image.new("RGBa", (image.width, image.height))
1612
+ image_masked.paste(image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(self.mask_for_overlay.convert("L")))
1613
+ self.overlay_images.append(image_masked.convert("RGBA"))
1614
 
1615
  # crop_region is not None if we are doing inpaint full res
1616
  if crop_region is not None:
 
1622
  image = masking.fill(image, latent_mask)
1623
 
1624
  if self.inpainting_fill == 0:
1625
+ self.extra_generation_params["Masked content"] = "fill"
1626
 
1627
  if add_color_corrections:
1628
  self.color_corrections.append(setup_color_correction(image))
 
1649
  image = torch.from_numpy(batch_images)
1650
  image = image.to(shared.device, dtype=torch.float32)
1651
 
1652
+ if opts.sd_vae_encode_method != "Full":
1653
+ self.extra_generation_params["VAE Encoder"] = opts.sd_vae_encode_method
1654
 
1655
  self.init_latent = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
1656
  devices.torch_gc()
 
1660
 
1661
  if image_mask is not None:
1662
  init_mask = latent_mask
1663
+ latmask = init_mask.convert("RGB").resize((self.init_latent.shape[3], self.init_latent.shape[2]))
1664
  latmask = np.moveaxis(np.array(latmask, dtype=np.float32), 2, 0) / 255
1665
  latmask = latmask[0]
1666
  if self.mask_round:
 
1672
 
1673
  # this needs to be fixed to be done in sample() using actual seeds for batches
1674
  if self.inpainting_fill == 2:
1675
+ self.init_latent = self.init_latent * self.mask + create_random_tensors(self.init_latent.shape[1:], all_seeds[0 : self.init_latent.shape[0]]) * self.nmask
1676
+ self.extra_generation_params["Masked content"] = "latent noise"
1677
 
1678
  elif self.inpainting_fill == 3:
1679
  self.init_latent = self.init_latent * self.mask
1680
+ self.extra_generation_params["Masked content"] = "latent nothing"
1681
 
1682
  self.image_conditioning = self.img2img_image_conditioning(image * 2 - 1, self.init_latent, image_mask, self.mask_round)
1683
 
 
1692
  apply_token_merging(self.sd_model, self.get_token_merging_ratio())
1693
 
1694
  if self.scripts is not None:
1695
+ self.scripts.process_before_every_sampling(self, x=self.init_latent, noise=x, c=conditioning, uc=unconditional_conditioning)
 
 
 
 
1696
 
1697
  if self.modified_noise is not None:
1698
  x = self.modified_noise
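As a footnote to the `create_infotext` hunk earlier in this file, the joined parameter line can be reproduced with a simplified stand-in for `infotext_utils.quote` (the real quoting rules are not shown in this diff):

```python
def quote(v):
    v = str(v)
    return f'"{v}"' if "," in v or ":" in v else v  # simplified stand-in

generation_params = {"Steps": 20, "Sampler": "Euler a", "CFG scale": 7, "Size": "512x512", "ENSD": None}
text = ", ".join(f"{k}: {quote(v)}" for k, v in generation_params.items() if v is not None)
print(text)  # Steps: 20, Sampler: Euler a, CFG scale: 7, Size: 512x512
```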
modules/processing_scripts/comments.py CHANGED
@@ -1,7 +1,11 @@
 import re
+from typing import TYPE_CHECKING
 
 from modules import script_callbacks, scripts, shared
 
+if TYPE_CHECKING:
+    from modules.processing import StableDiffusionProcessing
+
 
 def strip_comments(text):
     text = re.sub("(^|\n)#[^\n]*(\n|$)", "\n", text)  # while line comment
@@ -17,7 +21,7 @@ class ScriptComments(scripts.Script):
     def show(self, is_img2img):
         return scripts.AlwaysVisible
 
-    def process(self, p, *args):
+    def process(self, p: "StableDiffusionProcessing", *args):
         if not shared.opts.enable_prompt_comments:
             return
 
@@ -27,6 +31,13 @@
         p.main_prompt = strip_comments(p.main_prompt)
         p.main_negative_prompt = strip_comments(p.main_negative_prompt)
 
+        if getattr(p, "enable_hr", False):
+            p.all_hr_prompts = [strip_comments(x) for x in p.all_hr_prompts]
+            p.all_hr_negative_prompts = [strip_comments(x) for x in p.all_hr_negative_prompts]
+
+            p.hr_prompt = strip_comments(p.hr_prompt)
+            p.hr_negative_prompt = strip_comments(p.hr_negative_prompt)
+
 
 def before_token_counter(params: script_callbacks.BeforeTokenCounterParams):
     if not shared.opts.enable_prompt_comments:
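For reference, `strip_comments` pairs the whole-line pattern shown above with a second in-line pattern (the second regex is assumed from upstream; it sits outside this hunk):

```python
import re

def strip_comments(text):
    text = re.sub("(^|\n)#[^\n]*(\n|$)", "\n", text)  # whole-line comment
    text = re.sub("#[^\n]*(\n|$)", "\n", text)        # trailing in-line comment (assumed)
    return text

# comments after '#' are dropped, plain prompt text survives
print(strip_comments("masterpiece # quality tags\n# this line goes away\n1girl"))
```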
modules/processing_scripts/rescale_cfg.py CHANGED
@@ -43,6 +43,13 @@ class ScriptRescaleCFG(scripts.ScriptBuiltinUI):
     def process_before_every_sampling(self, p, cfg, *args, **kwargs):
         if not opts.show_rescale_cfg or cfg < 0.05:
             return
+        if p.is_hr_pass:
+            return
+
+        self.apply_rescale_cfg(p, cfg)
+
+    @staticmethod
+    def apply_rescale_cfg(p, cfg):
 
         @torch.inference_mode()
         def rescale_cfg(args):
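The body of `rescale_cfg` is collapsed out of this hunk; a hedged sketch of what RescaleCFG computes (after Lin et al., "Common Diffusion Noise Schedules and Sample Steps are Flawed") — treat this as an illustration of the technique, not the repo's exact implementation:

```python
import torch

def rescale_cfg_sketch(cond, uncond, cond_scale, multiplier=0.7):
    x_cfg = uncond + cond_scale * (cond - uncond)            # ordinary CFG
    ro_pos = torch.std(cond, dim=(1, 2, 3), keepdim=True)    # std of the conditional output
    ro_cfg = torch.std(x_cfg, dim=(1, 2, 3), keepdim=True)   # std after CFG amplification
    x_rescaled = x_cfg * (ro_pos / ro_cfg)                   # undo the contrast blow-up
    return multiplier * x_rescaled + (1.0 - multiplier) * x_cfg
```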
modules/processing_scripts/sampler.py CHANGED
@@ -26,14 +26,14 @@ class ScriptSampler(scripts.ScriptBuiltinUI):
 
         with FormRow(elem_id=f"sampler_selection_{self.tabname}"):
             self.sampler_name = gr.Dropdown(
-                label="Sampling method",
+                label="Sampling Method",
                 elem_id=f"{self.tabname}_sampling",
                 choices=sampler_names,
                 value=sampler_names[0],
             )
         if shared.opts.show_scheduler:
             self.scheduler = gr.Dropdown(
-                label="Schedule type",
+                label="Schedule Type",
                 elem_id=f"{self.tabname}_scheduler",
                 choices=scheduler_names,
                 value=scheduler_names[0],
@@ -43,10 +43,10 @@ class ScriptSampler(scripts.ScriptBuiltinUI):
             self.scheduler.do_not_save_to_config = True
         self.steps = gr.Slider(
             minimum=1,
-            maximum=150,
+            maximum=128,
             step=1,
             elem_id=f"{self.tabname}_steps",
-            label="Sampling steps",
+            label="Sampling Steps",
             value=20,
         )
 
modules/rng.py CHANGED
@@ -40,7 +40,7 @@ def randn_local(seed, shape):
 
 
 def randn_like(x):
-    """Generate a tensor with random numbers from a normal distribution using the previously initialized genrator.
+    """Generate a tensor with random numbers from a normal distribution using the previously initialized generator.
 
     Use either randn() or manual_seed() to initialize the generator."""
 
@@ -54,7 +54,7 @@ def randn_like(x):
 
 
 def randn_without_seed(shape, generator=None):
-    """Generate a tensor with random numbers from a normal distribution using the previously initialized genrator.
+    """Generate a tensor with random numbers from a normal distribution using the previously initialized generator.
 
     Use either randn() or manual_seed() to initialize the generator."""
 
modules/scripts_postprocessing.py CHANGED
@@ -59,7 +59,7 @@ class ScriptPostprocessing:
     args_to = None
 
     order = 1000
-    """scripts will be ordred by this value in postprocessing UI"""
+    """scripts will be ordered by this value in postprocessing UI"""
 
     name = None
     """this function should return the title of the script."""
modules/sd_hijack_clip.py CHANGED
@@ -215,7 +215,7 @@ class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module):
     be a multiple of 77; and C is dimensionality of each token - for SD1 it's 768, for SD2 it's 1024, and for SDXL it's 1280.
     An example shape returned by this function can be: (2, 77, 768).
     For SDXL, instead of returning one tensor above, it returns a tuple with two: the other one with shape (B, 1280) with pooled values.
-    Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one elemenet
+    Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one element
     is when you do prompt editing: "a picture of a [cat:dog:0.4] eating ice cream"
     """
 
modules/sd_samplers_common.py CHANGED
@@ -96,9 +96,12 @@ def samples_to_images_tensor(sample, approximation=None, model=None):
 def single_sample_to_image(sample, approximation=None):
     x_sample = samples_to_images_tensor(sample.unsqueeze(0), approximation)[0] * 0.5 + 0.5
 
-    x_sample = torch.clamp(x_sample, min=0.0, max=1.0)
-    x_sample = 255.0 * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
-    x_sample = x_sample.astype(np.uint8)
+    x_sample = x_sample.cpu()
+    x_sample.mul_(255.0)
+    x_sample.round_()
+    x_sample.clamp_(0.0, 255.0)
+    x_sample = x_sample.to(torch.uint8)
+    x_sample = np.moveaxis(x_sample.numpy(), 0, 2)
 
     return Image.fromarray(x_sample)
 
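The rewrite above trades the float numpy round-trip for in-place tensor ops on a uint8 path; the two variants agree to within one grey level (the old `astype` truncated where the new path rounds):

```python
import numpy as np
import torch

x = torch.rand(3, 8, 8) * 1.2 - 0.1  # deliberately outside [0, 1]

# old path: clamp in float, scale, truncate via astype
old = (255.0 * np.moveaxis(torch.clamp(x, 0.0, 1.0).numpy(), 0, 2)).astype(np.uint8)

# new path: scale, round, clamp in-place, then a single uint8 cast
y = x.clone().cpu()
y.mul_(255.0)
y.round_()
y.clamp_(0.0, 255.0)
new = np.moveaxis(y.to(torch.uint8).numpy(), 0, 2)

print(int(np.abs(old.astype(np.int16) - new.astype(np.int16)).max()))  # 0 or 1
```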
modules/sd_samplers_kdiffusion.py CHANGED
@@ -17,6 +17,7 @@ from modules_forge.forge_sampler import sampling_cleanup, sampling_prepare
 samplers_k_diffusion = [
     ("DPM++ 2M", "sample_dpmpp_2m", ["k_dpmpp_2m"], {"scheduler": "karras"}),
     ("DPM++ SDE", "sample_dpmpp_sde", ["k_dpmpp_sde"], {"scheduler": "karras", "second_order": True, "brownian_noise": True}),
+    ("DPM++ 2M SDE", "sample_dpmpp_2m_sde", ["k_dpmpp_2m_sde_ka"], {"brownian_noise": True}),
     ("DPM++ 3M SDE", "sample_dpmpp_3m_sde", ["k_dpmpp_3m_sde"], {"scheduler": "exponential", "discard_next_to_last_sigma": True, "brownian_noise": True}),
     ("Euler a", "sample_euler_ancestral", ["k_euler_a", "k_euler_ancestral"], {"uses_ensd": True}),
     ("Euler", "sample_euler", ["k_euler"], {}),
@@ -41,6 +42,7 @@ samplers_data_k_diffusion = [
 
 sampler_extra_params = {
     "sample_dpmpp_sde": ["eta", "s_noise", "r"],
+    "sample_dpmpp_2m_sde": ["eta", "s_noise"],
     "sample_dpmpp_3m_sde": ["eta", "s_noise"],
     "sample_euler_ancestral": ["eta", "s_noise"],
     "sample_euler": ["s_churn", "s_tmin", "s_tmax", "s_noise"],
@@ -56,11 +58,7 @@ class CFGDenoiserKDiffusion(sd_samplers_cfg_denoiser.CFGDenoiser):
     @property
     def inner_model(self):
         if self.model_wrap is None:
-            denoiser = (
-                k_diffusion.external.CompVisVDenoiser
-                if shared.sd_model.parameterization == "v"
-                else k_diffusion.external.CompVisDenoiser
-            )
+            denoiser = k_diffusion.external.CompVisVDenoiser if shared.sd_model.parameterization == "v" else k_diffusion.external.CompVisDenoiser
             self.model_wrap = denoiser(shared.sd_model, quantize=True)
 
         return self.model_wrap
@@ -208,7 +206,7 @@ class KDiffusionSampler(sd_samplers_common.Sampler):
             "cond": conditioning,
             "image_cond": image_conditioning,
             "uncond": unconditional_conditioning,
-            "cond_scale": p.cfg_scale,
+            "cond_scale": p.hr_cfg_scale if p.is_hr_pass else p.cfg_scale,
             "s_min_uncond": self.s_min_uncond,
         }
 
@@ -269,7 +267,7 @@ class KDiffusionSampler(sd_samplers_common.Sampler):
             "cond": conditioning,
             "image_cond": image_conditioning,
             "uncond": unconditional_conditioning,
-            "cond_scale": p.cfg_scale,
+            "cond_scale": p.hr_cfg_scale if p.is_hr_pass else p.cfg_scale,
             "s_min_uncond": self.s_min_uncond,
         }
 
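`cond_scale` is the multiplier in the classic CFG combination inside the denoiser; with the change above the hires pass can run at its own `hr_cfg_scale` while the base pass keeps `cfg_scale`:

```python
import torch

def cfg_combine(x_cond, x_uncond, cond_scale):
    # uncond + scale * (cond - uncond), the standard classifier-free guidance mix
    return x_uncond + cond_scale * (x_cond - x_uncond)

x_cond, x_uncond = torch.tensor([1.0]), torch.tensor([0.2])
print(cfg_combine(x_cond, x_uncond, 7.0))  # base pass, cfg_scale = 7.0
print(cfg_combine(x_cond, x_uncond, 4.5))  # hires pass, hr_cfg_scale = 4.5
```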
modules/shared.py CHANGED
@@ -95,3 +95,6 @@ reload_gradio_theme = shared_gradio_themes.reload_gradio_theme
 list_checkpoint_tiles = shared_items.list_checkpoint_tiles
 refresh_checkpoints = shared_items.refresh_checkpoints
 list_samplers = shared_items.list_samplers
+
+# ===== backward compatibility ===== #
+batch_cond_uncond = True
modules/shared_options.py CHANGED
@@ -226,8 +226,10 @@ options_templates.update(
  "img2img_inpaint_sketch_default_brush_color": OptionInfo("#ff0000", "Initial Brush Color for Inpaint Sketch", ui_components.FormColorPicker, {}).needs_reload_ui(),
  "return_mask": OptionInfo(False, "For inpainting, append the greyscale mask to results"),
  "return_mask_composite": OptionInfo(False, "For inpainting, append the masked composite to results"),
- "overlay_inpaint": OptionInfo(True, "For inpainting, overlay the original image over the areas that were untouched"),
+ "overlay_inpaint": OptionInfo(True, "For inpainting, overlay the resulting image back onto the original image").info('when using the "Only masked" option'),
  "img2img_batch_show_results_limit": OptionInfo(32, "Show the first N batch of img2img results in UI", gr.Slider, {"minimum": -1, "maximum": 256, "step": 1}).info("0 = disable; -1 = show all; too many images causes severe lag"),
+ "div_exp": OptionDiv(),
+ "img2img_inpaint_precise_mask": OptionInfo(False, 'For inpainting, process the "Mask blur" in fp32 instead of uint8 precision; improve blending result').info('<b>Experimental</b> ; may break functions that access the "overlay_images"'),
  },
  )
  )
@@ -353,8 +355,8 @@ options_templates.update(
  "compact_prompt_box": OptionInfo(False, "Compact Prompt Layout").info("put prompts inside the Generate tab, leaving more space for the gallery").needs_reload_ui(),
  "dimensions_and_batch_together": OptionInfo(True, "Show Width/Height and Batch sliders in same row").needs_reload_ui(),
  "sd_checkpoint_dropdown_use_short": OptionInfo(False, "Show filenames without folder in the Checkpoint dropdown").info("if disabled, models under subdirectories will be listed like sdxl/anime.safetensors"),
- "hires_fix_show_prompts": OptionInfo(False, "[Hires. fix]: Show prompt and negative prompt").needs_reload_ui(),
- "hires_fix_show_sampler": OptionInfo(False, "[Hires. fix]: Show checkpoint and sampler selection").needs_reload_ui(),
+ "hires_fix_show_sampler": OptionInfo(False, "[Hires. fix]: Show checkpoint, sampler, scheduler, and cfg options").needs_reload_ui(),
+ "hires_fix_show_prompts": OptionInfo(False, "[Hires. fix]: Show prompt and negative prompt textboxes").needs_reload_ui(),
  "txt2img_settings_accordion": OptionInfo(False, "Put txt2img parameters under Accordion").needs_reload_ui(),
  "img2img_settings_accordion": OptionInfo(False, "Put img2img parameters under Accordion").needs_reload_ui(),
  "interrupt_after_current": OptionInfo(False, "Don't Interrupt in the middle").info("when using the Interrupt button, if generating more than one image, stop after the current generation of an image has finished instead of immediately"),
@@ -493,6 +495,8 @@ options_templates.update(
  "disable_all_extensions": OptionInfo("none", "Disable all extensions (preserves the list of disabled extensions)", gr.Radio, {"choices": ("none", "extra", "all")}),
  "restore_config_state_file": OptionInfo("", 'Config state file to restore from, under "config-states/" folder'),
  "sd_checkpoint_hash": OptionInfo("", "SHA256 hash of the current checkpoint"),
+ "tile_size": OptionInfo(512, "Tile Size for Tiled VAE", gr.Number, {"precision": 0}),
+ "tile_overlap": OptionInfo(64, "Overlap for Tiled VAE", gr.Number, {"precision": 0}),
  },
  )
  )
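The two new options feed a tiled VAE pass; a rough sketch of how a tile grid is usually derived from them (assumed logic for illustration, not Forge's actual implementation):

```python
from modules import shared

def iter_tiles(height: int, width: int):
    """Yield (y0, x0, y1, x1) windows covering the image."""
    size = int(shared.opts.tile_size)        # default 512
    overlap = int(shared.opts.tile_overlap)  # default 64
    stride = size - overlap                  # neighbouring tiles share `overlap` pixels for seam blending
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            yield y, x, min(y + size, height), min(x + size, width)
```
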
modules/shared_state.py CHANGED
@@ -2,10 +2,11 @@ import datetime
  import logging
  import threading
  import time
+ from typing import Optional
+
  import torch

- from modules import errors, shared, devices
- from typing import Optional
+ from modules import devices, errors, shared

  log = logging.getLogger(__name__)

@@ -143,10 +144,7 @@ class State:
  if not shared.parallel_processing_allowed:
  return

- if (
- (shared.opts.live_previews_enable and shared.opts.show_progress_every_n_steps != -1) and
- ((self.sampling_step - self.current_image_sampling_step) >= shared.opts.show_progress_every_n_steps)
- ):
+ if (shared.opts.live_previews_enable and shared.opts.show_progress_every_n_steps != -1) and ((self.sampling_step - self.current_image_sampling_step) >= shared.opts.show_progress_every_n_steps):
  self.do_set_current_image()

  @torch.inference_mode()
@@ -154,14 +152,13 @@ class State:
  if self.current_latent is None:
  return

- import modules.sd_samplers
+ if shared.opts.show_progress_grid:
+ from modules.sd_samplers import samples_to_image_grid as sample
+ else:
+ from modules.sd_samplers import sample_to_image as sample

  try:
- if shared.opts.show_progress_grid:
- self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
- else:
- self.assign_current_image(modules.sd_samplers.sample_to_image(self.current_latent))
-
+ self.assign_current_image(sample(self.current_latent))
  self.current_image_sampling_step = self.sampling_step
  self.current_latent = None

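For reference, the collapsed preview-throttle condition behaves like this standalone restatement (`-1` meaning step-based previews are disabled):

```python
def should_refresh(step: int, last_preview_step: int, every_n: int, live_previews: bool = True) -> bool:
    # every_n == -1 disables step-based previews entirely
    return live_previews and every_n != -1 and (step - last_preview_step) >= every_n

assert should_refresh(step=10, last_preview_step=5, every_n=5)
assert not should_refresh(step=10, last_preview_step=8, every_n=5)
```
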
modules/txt2img.py CHANGED
@@ -33,8 +33,10 @@ def txt2img_create_processing(
  hr_checkpoint_name: str,
  hr_sampler_name: str,
  hr_scheduler: str,
+ hr_cfg_scale: float,
+ hr_rescale_cfg: float,
  hr_prompt: str,
- hr_negative_prompt,
+ hr_negative_prompt: str,
  override_settings_texts,
  *args,
  force_enable_hr=False,
@@ -66,6 +68,8 @@ def txt2img_create_processing(
  hr_checkpoint_name=None if hr_checkpoint_name == "Use same checkpoint" else hr_checkpoint_name,
  hr_sampler_name=None if hr_sampler_name == "Use same sampler" else hr_sampler_name,
  hr_scheduler=None if hr_scheduler == "Use same scheduler" else hr_scheduler,
+ hr_cfg_scale=hr_cfg_scale if opts.hires_fix_show_sampler else cfg_scale,
+ hr_rescale_cfg=hr_rescale_cfg if opts.hires_fix_show_sampler else None,
  hr_prompt=hr_prompt,
  hr_negative_prompt=hr_negative_prompt,
  override_settings=override_settings,
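Note the guard on `opts.hires_fix_show_sampler`: when the extra Hires. fix row is hidden, the values from its (hidden) sliders are ignored and the first-pass settings are reused. A small restatement of that rule with invented values:

```python
# With the extra Hires. fix row hidden, the hires pass reuses the first-pass CFG
hires_ui_visible = False  # mirrors opts.hires_fix_show_sampler
cfg_scale, hr_cfg_scale = 6.0, 4.5

effective = hr_cfg_scale if hires_ui_visible else cfg_scale
assert effective == 6.0
```
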
modules/ui.py CHANGED
@@ -21,7 +21,7 @@ from modules.paths import script_path
  from modules.sd_hijack import model_hijack
  from modules.shared import cmd_opts, opts
  from modules.ui_common import create_refresh_button
- from modules.ui_components import FormGroup, FormHTML, FormRow, InputAccordion, ResizeHandleRow, ToolButton
+ from modules.ui_components import FormGroup, FormHTML, FormRow, FormColumn, InputAccordion, ResizeHandleRow, ToolButton
  from modules.ui_gradio_extensions import reload_javascript

  create_setting_component = ui_settings.create_setting_component
@@ -223,21 +223,21 @@ def create_ui():
  height = gr.Slider(minimum=64, maximum=2048, step=64, label="Height", value=512, elem_id="txt2img_height")

  with gr.Column(elem_id="txt2img_dimensions_row", scale=1, elem_classes="dimensions-tools"):
- res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="txt2img_res_switch_btn", tooltip="Switch width/height")
+ res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="txt2img_res_switch_btn", tooltip="Swap width/height")

  if opts.dimensions_and_batch_together:
  with gr.Column(elem_id="txt2img_column_batch"):
- batch_count = gr.Slider(minimum=1, step=1, label="Batch count", value=1, elem_id="txt2img_batch_count")
- batch_size = gr.Slider(minimum=1, maximum=8, step=1, label="Batch size", value=1, elem_id="txt2img_batch_size")
+ batch_count = gr.Slider(minimum=1, maximum=128, step=1, label="Batch Count", value=1, elem_id="txt2img_batch_count")
+ batch_size = gr.Slider(minimum=1, maximum=16, step=1, label="Batch Size", value=1, elem_id="txt2img_batch_size")

  elif category == "cfg":
  with gr.Row():
- cfg_scale = gr.Slider(minimum=1.0, maximum=30.0, step=0.5, label="CFG Scale", value=7.0, elem_id="txt2img_cfg_scale", scale=4)
+ cfg_scale = gr.Slider(minimum=1.0, maximum=24.0, step=0.5, label="CFG Scale", value=6.0, elem_id="txt2img_cfg_scale", scale=4)
  scripts.scripts_txt2img.setup_ui_for_section(category)

- elif category == "checkboxes":
- with FormRow(elem_classes="checkboxes-row", variant="compact"):
- pass
+ # elif category == "checkboxes":
+ # with FormRow(elem_classes="checkboxes-row", variant="compact"):
+ # pass

  elif category == "accordions":
  with gr.Row(elem_id="txt2img_accordions", elem_classes="accordions"):
@@ -247,37 +247,41 @@ def create_ui():

  with FormRow(elem_id="txt2img_hires_fix_row1", variant="compact"):
  hr_upscaler = gr.Dropdown(label="Upscaler", elem_id="txt2img_hr_upscaler", choices=[*shared.latent_upscale_modes, *[x.name for x in shared.sd_upscalers]], value=shared.latent_upscale_default_mode)
- hr_second_pass_steps = gr.Slider(minimum=0, maximum=150, step=1, label="Hires steps", value=0, elem_id="txt2img_hires_steps")
- denoising_strength = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label="Denoising strength", value=0.7, elem_id="txt2img_denoising_strength")
+ hr_second_pass_steps = gr.Slider(minimum=0, maximum=128, step=1, label="Hires steps", value=0, elem_id="txt2img_hires_steps")
+ denoising_strength = gr.Slider(minimum=0.0, maximum=1.0, step=0.05, label="Denoising strength", value=0.6, elem_id="txt2img_denoising_strength")

  with FormRow(elem_id="txt2img_hires_fix_row2", variant="compact"):
  hr_scale = gr.Slider(minimum=1.0, maximum=4.0, step=0.05, label="Upscale by", value=2.0, elem_id="txt2img_hr_scale")
  hr_resize_x = gr.Slider(minimum=0, maximum=2048, step=64, label="Resize width to", value=0, elem_id="txt2img_hr_resize_x")
  hr_resize_y = gr.Slider(minimum=0, maximum=2048, step=64, label="Resize height to", value=0, elem_id="txt2img_hr_resize_y")

- with FormRow(elem_id="txt2img_hires_fix_row3", variant="compact", visible=opts.hires_fix_show_sampler) as hr_sampler_container:
- hr_checkpoint_name = gr.Dropdown(label="Hires checkpoint", elem_id="hr_checkpoint", choices=["Use same checkpoint"] + modules.sd_models.checkpoint_tiles(use_short=True), value="Use same checkpoint")
- create_refresh_button(hr_checkpoint_name, modules.sd_models.list_models, lambda: {"choices": ["Use same checkpoint"] + modules.sd_models.checkpoint_tiles(use_short=True)}, "hr_checkpoint_refresh")
- hr_sampler_name = gr.Dropdown(label="Hires sampling method", elem_id="hr_sampler", choices=["Use same sampler"] + sd_samplers.visible_sampler_names(), value="Use same sampler")
- hr_scheduler = gr.Dropdown(label="Hires schedule type", elem_id="hr_scheduler", choices=["Use same scheduler"] + [x.label for x in sd_schedulers.schedulers], value="Use same scheduler")
+ with FormColumn(elem_id="txt2img_hires_fix_row3", variant="compact", visible=opts.hires_fix_show_sampler) as hr_sampler_container:
+ with gr.Row():
+ hr_checkpoint_name = gr.Dropdown(label="Hires checkpoint", elem_id="hr_checkpoint", choices=["Use same checkpoint"] + modules.sd_models.checkpoint_tiles(use_short=True), value="Use same checkpoint")
+ create_refresh_button(hr_checkpoint_name, modules.sd_models.list_models, lambda: {"choices": ["Use same checkpoint"] + modules.sd_models.checkpoint_tiles(use_short=True)}, "hr_checkpoint_refresh")
+ hr_sampler_name = gr.Dropdown(label="Hires sampling method", elem_id="hr_sampler", choices=["Use same sampler"] + sd_samplers.visible_sampler_names(), value="Use same sampler")
+ hr_scheduler = gr.Dropdown(label="Hires schedule type", elem_id="hr_scheduler", choices=["Use same scheduler"] + [x.label for x in sd_schedulers.schedulers], value="Use same scheduler")
+
+ with gr.Row():
+ hr_cfg_scale = gr.Slider(minimum=1.0, maximum=24.0, step=0.5, label="Hires CFG Scale", value=6.0, elem_id="hr_cfg_scale")
+ hr_rescale_cfg = gr.Slider(value=0.0, minimum=0.0, maximum=1.0, step=0.05, label="Hires Rescale CFG", elem_id="hr_rescale_cfg_scale", visible=opts.show_rescale_cfg)

  with FormRow(elem_id="txt2img_hires_fix_row4", variant="compact", visible=opts.hires_fix_show_prompts) as hr_prompts_container:
  with gr.Column(scale=80):
  with gr.Row():
- hr_prompt = gr.Textbox(label="Hires prompt", elem_id="hires_prompt", show_label=False, lines=3, placeholder="Prompt for hires fix pass.\nLeave empty to use the same prompt as in first pass.", elem_classes=["prompt"])
+ hr_prompt = gr.Textbox(label="Hires prompt", elem_id="hires_prompt", show_label=False, lines=3, placeholder="Prompt for Hires. fix\n(leave empty to use the same prompt as txt2img)", elem_classes=["prompt"])
  with gr.Column(scale=80):
  with gr.Row():
- hr_negative_prompt = gr.Textbox(label="Hires negative prompt", elem_id="hires_neg_prompt", show_label=False, lines=3, placeholder="Negative prompt for hires fix pass.\nLeave empty to use the same negative prompt as in first pass.", elem_classes=["prompt"])
+ hr_negative_prompt = gr.Textbox(label="Hires negative prompt", elem_id="hires_neg_prompt", show_label=False, lines=3, placeholder="Negative Prompt for Hires. fix\n(leave empty to use the same negative prompt as txt2img)", elem_classes=["prompt"])

  scripts.scripts_txt2img.setup_ui_for_section(category)

  elif category == "batch":
  if not opts.dimensions_and_batch_together:
  with FormRow(elem_id="txt2img_column_batch"):
- batch_count = gr.Slider(minimum=1, step=1, label="Batch count", value=1, elem_id="txt2img_batch_count")
- batch_size = gr.Slider(minimum=1, maximum=8, step=1, label="Batch size", value=1, elem_id="txt2img_batch_size")
+ batch_count = gr.Slider(minimum=1, maximum=128, step=1, label="Batch Count", value=1, elem_id="txt2img_batch_count")
+ batch_size = gr.Slider(minimum=1, maximum=16, step=1, label="Batch Size", value=1, elem_id="txt2img_batch_size")

  elif category == "override_settings":
  with FormRow(elem_id="txt2img_override_settings_row") as row:
@@ -331,6 +335,8 @@ def create_ui():
  hr_checkpoint_name,
  hr_sampler_name,
  hr_scheduler,
+ hr_cfg_scale,
+ hr_rescale_cfg,
  hr_prompt,
  hr_negative_prompt,
  override_settings,
@@ -395,6 +401,8 @@ def create_ui():
  PasteField(hr_checkpoint_name, "Hires checkpoint", api="hr_checkpoint_name"),
  PasteField(hr_sampler_name, sd_samplers.get_hr_sampler_from_infotext, api="hr_sampler_name"),
  PasteField(hr_scheduler, sd_samplers.get_hr_scheduler_from_infotext, api="hr_scheduler"),
+ PasteField(hr_cfg_scale, "Hires CFG Scale", api="hr_cfg_scale"),
+ PasteField(hr_rescale_cfg, "Hires Rescale CFG", api="hr_rescale_cfg"),
  PasteField(hr_sampler_container, lambda d: gr.update(visible=True) if d.get("Hires sampler", "Use same sampler") != "Use same sampler" or d.get("Hires checkpoint", "Use same checkpoint") != "Use same checkpoint" or d.get("Hires schedule type", "Use same scheduler") != "Use same scheduler" else gr.update()),
  PasteField(hr_prompt, "Hires prompt", api="hr_prompt"),
  PasteField(hr_negative_prompt, "Hires negative prompt", api="hr_negative_prompt"),
@@ -558,11 +566,11 @@ def create_ui():
  width = gr.Slider(minimum=64, maximum=2048, step=64, label="Width", value=512, elem_id="img2img_width")
  height = gr.Slider(minimum=64, maximum=2048, step=64, label="Height", value=512, elem_id="img2img_height")
  with gr.Column(elem_id="img2img_dimensions_row", scale=1, elem_classes="dimensions-tools"):
- res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="img2img_res_switch_btn", tooltip="Switch width/height")
+ res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="img2img_res_switch_btn", tooltip="Swap width/height")
  detect_image_size_btn = ToolButton(value=detect_image_size_symbol, elem_id="img2img_detect_image_size_btn", tooltip="Auto detect size from img2img")

  with gr.Tab(label="Resize by", elem_id="img2img_tab_resize_by") as tab_scale_by:
- scale_by = gr.Slider(minimum=0.05, maximum=4.0, step=0.05, label="Scale", value=1.0, elem_id="img2img_scale")
+ scale_by = gr.Slider(minimum=0.5, maximum=4.0, step=0.05, label="Scale", value=1.0, elem_id="img2img_scale")

  with FormRow():
  scale_by_html = FormHTML(resize_from_to_html(0, 0, 0.0), elem_id="img2img_scale_resolution_preview")
@@ -585,21 +593,21 @@ def create_ui():

  if opts.dimensions_and_batch_together:
  with gr.Column(elem_id="img2img_column_batch"):
- batch_count = gr.Slider(minimum=1, step=1, label="Batch count", value=1, elem_id="img2img_batch_count")
- batch_size = gr.Slider(minimum=1, maximum=8, step=1, label="Batch size", value=1, elem_id="img2img_batch_size")
+ batch_count = gr.Slider(minimum=1, maximum=128, step=1, label="Batch Count", value=1, elem_id="img2img_batch_count")
+ batch_size = gr.Slider(minimum=1, maximum=16, step=1, label="Batch Size", value=1, elem_id="img2img_batch_size")

  elif category == "denoising":
  denoising_strength = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label="Denoising strength", value=0.75, elem_id="img2img_denoising_strength")

  elif category == "cfg":
  with gr.Row():
- cfg_scale = gr.Slider(minimum=1.0, maximum=30.0, step=0.5, label="CFG Scale", value=7.0, elem_id="img2img_cfg_scale", scale=4)
+ cfg_scale = gr.Slider(minimum=1.0, maximum=24.0, step=0.5, label="CFG Scale", value=6.0, elem_id="img2img_cfg_scale", scale=4)
  scripts.scripts_img2img.setup_ui_for_section(category)
  image_cfg_scale = gr.Slider(minimum=0, maximum=3.0, step=0.05, label="Image CFG Scale", value=1.5, elem_id="img2img_image_cfg_scale", visible=False)

- elif category == "checkboxes":
- with FormRow(elem_classes="checkboxes-row", variant="compact"):
- pass
+ # elif category == "checkboxes":
+ # with FormRow(elem_classes="checkboxes-row", variant="compact"):
+ # pass

  elif category == "accordions":
  with gr.Row(elem_id="img2img_accordions", elem_classes="accordions"):
@@ -608,8 +616,8 @@ def create_ui():
  elif category == "batch":
  if not opts.dimensions_and_batch_together:
  with FormRow(elem_id="img2img_column_batch"):
- batch_count = gr.Slider(minimum=1, step=1, label="Batch count", value=1, elem_id="img2img_batch_count")
- batch_size = gr.Slider(minimum=1, maximum=8, step=1, label="Batch size", value=1, elem_id="img2img_batch_size")
+ batch_count = gr.Slider(minimum=1, maximum=128, step=1, label="Batch Count", value=1, elem_id="img2img_batch_count")
+ batch_size = gr.Slider(minimum=1, maximum=16, step=1, label="Batch Size", value=1, elem_id="img2img_batch_size")

  elif category == "override_settings":
  with FormRow(elem_id="img2img_override_settings_row") as row:
modules/ui_common.py CHANGED
@@ -105,7 +105,7 @@ def save_files(js_data, images, do_make_zip, index):
  logfile_path = os.path.join(shared.opts.outdir_save, "log.csv")

  # NOTE: ensure csv integrity when fields are added by
- # updating headers and padding with delimeters where needed
+ # updating headers and padding with delimiters where needed
  if os.path.exists(logfile_path):
  update_logfile(logfile_path, fields)

modules/ui_components.py CHANGED
@@ -86,9 +86,9 @@ class DropdownEditable(FormComponent, gr.Dropdown):


  class InputAccordion(gr.Checkbox):
- """A gr.Accordion that can be used as an input - returns True if open, False if closed.
-
- Actaully just a hidden checkbox, but creates an accordion that follows and is followed by the state of the checkbox.
+ """
+ A gr.Accordion that can be used as an input - returns True if open, False if closed.
+ Actually just a hidden checkbox, but creates an accordion that follows and is followed by the state of the checkbox.
  """

  global_index = 0
modules/ui_prompt_styles.py CHANGED
@@ -67,7 +67,7 @@ class UiPromptStyles:
  with gr.Row():
  self.selection = gr.Dropdown(label="Styles", elem_id=f"{tabname}_styles_edit_select", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info="Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.")
  ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {"choices": list(shared.prompt_styles.styles)}, f"refresh_{tabname}_styles")
- self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f"{tabname}_style_apply_dialog", tooltip="Apply all selected styles from the style selction dropdown in main UI to the prompt.")
+ self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f"{tabname}_style_apply_dialog", tooltip="Apply all selected styles from the style selection dropdown in main UI to the prompt.")
  self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f"{tabname}_style_copy", tooltip="Copy main UI prompt to style.")

  with gr.Row():
modules/ui_toprow.py CHANGED
@@ -77,11 +77,11 @@ class Toprow:
  def create_prompts(self):
  with gr.Column(elem_id=f"{self.id_part}_prompt_container", elem_classes=["prompt-container-compact"] if self.is_compact else [], scale=6):
  with gr.Row(elem_id=f"{self.id_part}_prompt_row", elem_classes=["prompt-row"]):
- self.prompt = gr.Textbox(label="Prompt", elem_id=f"{self.id_part}_prompt", show_label=False, lines=3, placeholder="Prompt\n(Press Ctrl+Enter to generate, Alt+Enter to skip, Esc to interrupt)", elem_classes=["prompt"])
+ self.prompt = gr.Textbox(label="Prompt", elem_id=f"{self.id_part}_prompt", show_label=False, lines=3, placeholder="Prompt\n(Ctrl+Enter to Generate ; Alt+Enter to Skip ; Esc to Interrupt)", elem_classes=["prompt"])
  self.prompt_img = gr.File(label="", elem_id=f"{self.id_part}_prompt_image", file_count="single", type="binary", visible=False)

  with gr.Row(elem_id=f"{self.id_part}_neg_prompt_row", elem_classes=["prompt-row"]):
- self.negative_prompt = gr.Textbox(label="Negative prompt", elem_id=f"{self.id_part}_neg_prompt", show_label=False, lines=3, placeholder="Negative prompt\n(Press Ctrl+Enter to generate, Alt+Enter to skip, Esc to interrupt)", elem_classes=["prompt"])
+ self.negative_prompt = gr.Textbox(label="Negative Prompt", elem_id=f"{self.id_part}_neg_prompt", show_label=False, lines=3, placeholder="Negative Prompt\n(Ctrl+Enter to Generate ; Alt+Enter to Skip ; Esc to Interrupt)", elem_classes=["prompt"])

  self.prompt_img.change(
  fn=modules.images.image_data,
@@ -94,15 +94,15 @@ class Toprow:
  with gr.Row(elem_id=f"{self.id_part}_generate_box", elem_classes=["generate-box"] + (["generate-box-compact"] if self.is_compact else []), render=not self.is_compact) as submit_box:
  self.submit_box = submit_box

- self.interrupt = gr.Button("Interrupt", elem_id=f"{self.id_part}_interrupt", elem_classes="generate-box-interrupt", tooltip="End generation immediately or after completing current batch")
- self.skip = gr.Button("Skip", elem_id=f"{self.id_part}_skip", elem_classes="generate-box-skip", tooltip="Stop generation of current batch and continues onto next batch")
- self.interrupting = gr.Button("Interrupting...", elem_id=f"{self.id_part}_interrupting", elem_classes="generate-box-interrupting", tooltip="Interrupting generation...")
- self.submit = gr.Button("Generate", elem_id=f"{self.id_part}_generate", variant="primary", tooltip="Right click generate forever menu")
+ self.interrupt = gr.Button("Interrupt", elem_id=f"{self.id_part}_interrupt", elem_classes="generate-box-interrupt", tooltip="End batch after current generation finishes" if shared.opts.interrupt_after_current else "End current generation immediately")
+ self.skip = gr.Button("Skip", elem_id=f"{self.id_part}_skip", elem_classes="generate-box-skip", tooltip="Stop current batch and continue onto next batch")
+ self.interrupting = gr.Button("Interrupting...", elem_id=f"{self.id_part}_interrupting", elem_classes="generate-box-interrupting", tooltip="Interrupting...")
+ self.submit = gr.Button("Generate", elem_id=f"{self.id_part}_generate", variant="primary", tooltip='Right Click to open the "Generate Forever" menu')

  def interrupt_function():
  if not shared.state.stopping_generation and shared.state.job_count > 1 and shared.opts.interrupt_after_current:
  shared.state.stop_generating()
- gr.Info("Generation will stop after finishing this image, click again to stop immediately.")
+ gr.Info("Generation will stop after finishing this image, click again to stop immediately")
  else:
  shared.state.interrupt()

@@ -114,10 +114,10 @@ class Toprow:
  with gr.Row(elem_id=f"{self.id_part}_tools"):
  from modules.ui import paste_symbol, clear_prompt_symbol, restore_progress_symbol

- self.paste = ToolButton(value=paste_symbol, elem_id="paste", tooltip="Read generation parameters from prompt or last generation if prompt is empty into user interface.")
- self.clear_prompt_button = ToolButton(value=clear_prompt_symbol, elem_id=f"{self.id_part}_clear_prompt", tooltip="Clear prompt")
- self.apply_styles = ToolButton(value=ui_prompt_styles.styles_materialize_symbol, elem_id=f"{self.id_part}_style_apply", tooltip="Apply all selected styles to prompts.")
- self.restore_progress_button = ToolButton(value=restore_progress_symbol, elem_id=f"{self.id_part}_restore_progress", visible=False, tooltip="Restore progress")
+ self.paste = ToolButton(value=paste_symbol, elem_id="paste", tooltip="Read generation parameters from prompts, or the last generation if prompt is empty, into user interface")
+ self.clear_prompt_button = ToolButton(value=clear_prompt_symbol, elem_id=f"{self.id_part}_clear_prompt", tooltip="Clear Prompt")
+ self.apply_styles = ToolButton(value=ui_prompt_styles.styles_materialize_symbol, elem_id=f"{self.id_part}_style_apply", tooltip="Apply selected Styles to Prompts")
+ self.restore_progress_button = ToolButton(value=restore_progress_symbol, elem_id=f"{self.id_part}_restore_progress", visible=False, tooltip="Restore Progress")

  self.token_counter = gr.HTML(value="<span>0/75</span>", elem_id=f"{self.id_part}_token_counter", elem_classes=["token-counter"], visible=False)
  self.token_button = gr.Button(visible=False, elem_id=f"{self.id_part}_token_button")
@@ -139,12 +139,13 @@ class Toprow:
  def hook_paste_guard(self):
  assert self.negative_prompt is not None and self.paste is not None

- def auto(prompt: str) -> bool:
- return gr.update(interactive=(not bool(prompt)))
+ def guard(prompt: str) -> bool:
+ return gr.update(interactive=(not bool(prompt.strip())))

  self.negative_prompt.change(
- fn=auto,
+ fn=guard,
  inputs=[self.negative_prompt],
  outputs=[self.paste],
  show_progress="hidden",
+ queue=False,
  )
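The renamed `guard` callback now also strips whitespace before deciding whether the paste button stays active; a standalone restatement of its logic (a plain dict stands in for `gr.update` here, purely for illustration):

```python
def guard(prompt: str) -> dict:
    # paste stays clickable only while the negative prompt is effectively empty
    return {"interactive": not bool(prompt.strip())}

assert guard("   ")["interactive"] is True   # whitespace-only counts as empty
assert guard("blurry")["interactive"] is False
```
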
modules_forge/stream.py CHANGED
@@ -1,7 +1,9 @@
  # https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14855

  import torch
- from ldm_patched.modules import args_parser, model_management
+
+ from ldm_patched.modules import model_management
+ from ldm_patched.modules.args_parser import args


  def stream_context():
@@ -58,7 +60,7 @@ current_stream = None
  mover_stream = None
  using_stream = False

- if args_parser.args.cuda_stream:
+ if args.cuda_stream:
  current_stream = get_current_stream()
  mover_stream = get_new_stream()
  using_stream = current_stream is not None and mover_stream is not None
modules_forge/unet_patcher.py CHANGED
@@ -19,13 +19,13 @@ class UnetPatcher(ModelPatcher):
  self.offload_device,
  self.size,
  self.current_device,
- weight_inplace_update=self.weight_inplace_update,
+ self.weight_inplace_update,
  )

- n.patches = {}
  for k in self.patches:
  n.patches[k] = self.patches[k][:]

+ n.backup = self.backup
  n.object_patches = self.object_patches.copy()
  n.model_options = copy.deepcopy(self.model_options)
  n.model_keys = self.model_keys
@@ -33,6 +33,8 @@ class UnetPatcher(ModelPatcher):
  n.extra_preserved_memory_during_sampling = self.extra_preserved_memory_during_sampling
  n.extra_model_patchers_during_sampling = self.extra_model_patchers_during_sampling.copy()
  n.extra_concat_condition = self.extra_concat_condition
+ n.patch_status = self.patch_status
+
  return n

  def add_extra_preserved_memory_during_sampling(self, memory_in_bytes: int):
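One reading of the `clone()` changes, hedged since the surrounding `ModelPatcher` code is not shown here: `n.patches = {}` is dropped (the constructor presumably initializes it), `weight_inplace_update` is now passed positionally, and the clone carries `backup` and `patch_status` so it agrees with the original about which weights are already patched. A minimal self-contained stand-in for those sharing semantics:

```python
# Minimal stand-in illustrating the sharing semantics the diff introduces;
# the real classes live in modules_forge/unet_patcher.py.
class Patcher:
    def __init__(self):
        self.backup = {}
        self.patch_status = "unpatched"

    def clone(self):
        n = Patcher()
        n.backup = self.backup             # shared reference, as in the diff
        n.patch_status = self.patch_status # same applied-patch bookkeeping
        return n

p = Patcher()
c = p.clone()
assert c.backup is p.backup
assert c.patch_status == p.patch_status
```
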