Black output
Getting only black output images; the image breaks around the 17th step:
```
RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
```
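That warning is what you see when the decoded frames contain NaNs: the clip/cast line then produces an all-zero (black) image. A minimal sketch of the failure mode, not the actual ComfyUI code, assuming a NaN-filled array in place of the decoded frame:

```python
import numpy as np
from PIL import Image

# Stand-in (assumption) for a decoded frame that came back full of NaNs from the sampler/VAE.
i = np.full((64, 64, 3), np.nan, dtype=np.float32)

# np.clip() passes NaNs through, and the uint8 cast triggers
# "RuntimeWarning: invalid value encountered in cast"; the resulting
# pixels typically end up as 0, i.e. a black image.
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
print(np.asarray(img).max())  # usually 0, hence the black output
```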
Had to remove --fast and --use-sage-attention from the command line; using the KJNodes node to turn sage attention off doesn't work.
There seem to be similar reports of this with FP8 as well, so it may be an issue in the base ComfyUI code: https://github.com/comfyanonymous/ComfyUI/issues/9184
Same problem: out of 20 steps it turns black at the 11th step, even in the sampler preview.
@city96
Hiya... I updated Comfy this morning, but nothing was said about the GGUF nodes though?
Edit: GGUF updated, problem solved.
Thank you for the heads up!
I am using the official Diffusers script to load the model offline, but I get a completely black image. How can I solve this?
How do I disable --fast & --sage if I never used them in the workflow at all?
Those are launch arguments, not something in the workflow. Remove the --fast and --use-sage-attention flags in the run_(whatever you use).bat file.
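For reference, a hedged example of what the launch line in the standard standalone package's run_nvidia_gpu.bat looks like with those flags removed (exact file name and paths depend on your install):

```bat
REM before: .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast --use-sage-attention
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
pause
```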
I also have a black screen. I tested the harmless "red cat" prompt, so it can't be a filter or anything like that. I had --use-flash-attention and tried removing it, with no change.
I have an AMD Radeon 7800 XT on Linux using ROCm 6.4.
ComfyUI's default split attention also seems to be problematic. Perhaps trying again with --use-pytorch-cross-attention will solve the problem.
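For example, on a Linux/ROCm install launched from a source checkout, that would look something like this (working directory and venv activation are assumptions):

```bash
# drop --use-flash-attention and force PyTorch scaled-dot-product attention instead
python main.py --use-pytorch-cross-attention
```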
I was away during the weekend, but I tried with --use-pytorch-cross-attention without any difference.
Using Patch Sage Attention KJ with the "sageattn_qk_int8_pv_fp16_cuda" option fixes the problem for me.
That Sage option requires CUDA, which is unavailable to me on AMD hardware.
I'll look into why using PyTorch attention didn't work, as it seems to work for some.
https://github.com/thu-ml/SageAttention/issues/234#issuecomment-3220150376