
Can't seem to get txt2img to work properly

#64 by pfmm - opened

Hello,
I am using Windows 10 with Python 3.10.6. I installed the requirements with pip install -r requirements.txt, and that went well. I have also added the checkpoint file (v2-1_768-ema-pruned.ckpt) to my checkpoints folder.

When I attempt to run the txt2img script using the supplied example, modified with my real paths, I get an error. The command I run is:

C:\Users\User01\stablediffusion>python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt C:\Users\User01\stablediffusion\checkpoints\v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768
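
A quick way to confirm which PyTorch build the script actually picks up is the one-liner below (torch.cuda.is_available() is a standard torch call, used here purely as a diagnostic):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

Here it should print 2.0.1+cpu False, i.e. a CPU-only build, which matches the xFormers warning in the output below.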

Output Below:

C:\Users\User01\stablediffusion>python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt C:\Users\User01\stablediffusion\checkpoints\v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768
Global seed set to 42
Loading model from C:\Users\User01\stablediffusion\checkpoints\v2-1_768-ema-pruned.ckpt
Global Step: 110000
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.13.1+cu117 with CUDA 1107 (you have 2.0.1+cpu)
Python 3.10.9 (you have 3.10.6)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
LatentDiffusion: Running in v-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
DiffusionWrapper has 865.91 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Creating invisible watermark encoder (see https://github.com/ShieldMnt/invisible-watermark)...
data: 0%| | 0/1 [00:01<?, ?it/s]
Sampling: 0%| | 0/3 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\User01\stablediffusion\scripts\txt2img.py", line 388, in <module>
    main(opt)
  File "C:\Users\User01\stablediffusion\scripts\txt2img.py", line 342, in main
    uc = model.get_learned_conditioning(batch_size * [""])
  File "c:\users\user01\stablediffusion\ldm\models\diffusion\ddpm.py", line 665, in get_learned_conditioning
    c = self.cond_stage_model.encode(c)
  File "c:\users\user01\stablediffusion\ldm\modules\encoders\modules.py", line 236, in encode
    return self(text)
  File "C:\Users\User01\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "c:\users\user01\stablediffusion\ldm\modules\encoders\modules.py", line 213, in forward
    z = self.encode_with_transformer(tokens.to(self.device))
  File "c:\users\user01\stablediffusion\ldm\modules\encoders\modules.py", line 220, in encode_with_transformer
    x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
  File "c:\users\user01\stablediffusion\ldm\modules\encoders\modules.py", line 232, in text_transformer_forward
    x = r(x, attn_mask=attn_mask)
  File "C:\Users\User01\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User01\AppData\Local\Programs\Python\Python310\lib\site-packages\open_clip\transformer.py", line 154, in forward
    x = x + self.ls_1(self.attention(self.ln_1(x), attn_mask=attn_mask))
  File "C:\Users\User01\AppData\Local\Programs\Python\Python310\lib\site-packages\open_clip\transformer.py", line 151, in attention
    return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0]
  File "C:\Users\User01\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\User01\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\activation.py", line 1205, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "C:\Users\User01\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py", line 5373, in multi_head_attention_forward
    attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
RuntimeError: Expected attn_mask dtype to be bool or to match query dtype, but got attn_mask.dtype: float and query.dtype: struct c10::BFloat16 instead.


End Output

Does anyone have any suggestions for me? Thanks very much.

Have you solved this? I have the same issue.
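
The xFormers warning near the top of the output is the clue: the installed PyTorch is the CPU-only wheel (2.0.1+cpu), so xFormers cannot load and sampling falls back to the CPU, where the script's default --precision autocast runs in bfloat16. In torch 2.x, F.scaled_dot_product_attention requires attn_mask to be a bool tensor or to match the query dtype, and open_clip's cached causal mask is float32, hence the RuntimeError. Two clean fixes to try first: rerun with --precision full so autocast is skipped entirely, or install a CUDA build of PyTorch matching the xFormers wheel (1.13.1+cu117 per the warning) so inference moves to the GPU. Failing that, a minimal local workaround is to cast the mask in the attention method the traceback points at (open_clip/transformer.py, line 151). This is a sketch of a local patch to an installed package, not an upstream fix, and it assumes the method body shown in the traceback:

# In ...\site-packages\open_clip\transformer.py, ResidualAttentionBlock.attention
# (line 151 in the traceback). torch 2.x rejects a float32 additive mask when the
# query is bfloat16, so cast the mask to the active autocast dtype; bool masks
# are left untouched.
def attention(self, x, attn_mask=None):
    if attn_mask is not None and attn_mask.dtype != torch.bool:
        if torch.is_autocast_cpu_enabled():
            attn_mask = attn_mask.to(torch.get_autocast_cpu_dtype())  # float32 -> bfloat16
    return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0]

Note that any pip reinstall of open_clip will silently undo an edit like this, which is why the --precision full or torch reinstall routes are preferable.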
