Strange results
Hello,
Following this tutorial:
https://docs.comfy.org/tutorials/image/qwen/qwen-image-edit
I tried it, using ComfyUI with their JSON;
I removed the big model and connected the UNet GGUF Q5_0.gguf.
The result is not... let's say, correct. (It ruined the image.)
So I experimented with a face, asking only to "remove hair", but no luck here either: the image came out overexposed and noisy.
I know this isn't much to go on, but... what am I doing wrong?
(I'm running on a 5070, but I don't think that's the problem.)
I made another example with a photo from the web; the attached file is the result.
Thanks for any advice :)
Hi, can you share the workflow that you are using?
Check the text encoder (do you have mmproj-F16 paired with the text encoder?), disable the Lightning or any speed-up LoRA, unload the models, and try again, but pay attention to the terminal log for any errors.
: Can you share the workflow that you are using?
//: It is embedded in the PNG file.
It seems I can't attach a JSON file here;
I'm using this one:
https://raw.githubusercontent.com/Comfy-Org/workflow_templates/refs/heads/main/templates/image_qwen_image_edit.json
from: https://docs.comfy.org/tutorials/image/qwen/qwen-image-edit
BTW, because I can't run the full model, I disconnected the qwen_image_edit_fp8_e4m3fn.safetensors model
and added a Unet Loader (GGUF) with a model from this page
(following the instructions in the "AI Search" video on YouTube).
(I had to replace LoraLoaderModelOnly because it shows up kind of pink and transparent.)
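For anyone who wants to pull the workflow back out of such a PNG: ComfyUI typically stores it in the image's metadata. A minimal sketch, assuming a PNG saved by ComfyUI (the filename is hypothetical):
```
# Extract the embedded ComfyUI workflow from a generated PNG.
# ComfyUI normally writes it into the "workflow" text chunk of the file.
import json
from PIL import Image

img = Image.open("output.png")   # hypothetical filename
raw = img.info.get("workflow")   # None if the chunk is missing
if raw:
    workflow = json.loads(raw)   # UI-format JSON with a "nodes" list
    print(f"Embedded workflow has {len(workflow.get('nodes', []))} nodes")
else:
    print("No embedded workflow found.")
```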
: do you have mmproj-F16 paired with the text encoder?
//: I don't think so; I can't see anything similar in the Step 3 - Prompt TextEncodeQwenImageEdit box.
: disable the Lightning or any speed-up LoRA
//: Removing the LoraLoaderModelOnly node (Qwen-Image-Lightning-4steps-V1.0.safetensors) seems to have no effect.
: unload the models
//: Can I? How?
The terminal log is clean, or do I have to increase the debug level somewhere?
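On the "Can I? How?" for unloading models: recent ComfyUI builds expose a /free endpoint on the local server for exactly this. A minimal sketch, assuming the server is listening on the default 127.0.0.1:8188:
```
# Ask a locally running ComfyUI server to unload models and free memory.
# Assumes a recent ComfyUI build that provides the /free endpoint.
import json
import urllib.request

payload = json.dumps({"unload_models": True, "free_memory": True}).encode()
req = urllib.request.Request(
    "http://127.0.0.1:8188/free",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 means the server accepted the request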
It's fine. Since you are using the 4-step LoRA, in the KSampler node make sure to set steps to 4 and cfg to 1.0 (if you use the 8-step LoRA, set steps to 8).
When a node looks pink and transparent like you describe, it's because it is set to bypass. That means the workflow will still be executed, but that node won't be used in that execution. To toggle the bypass, just select the node and press CTRL+B.
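If editing those values by hand keeps getting lost between runs, the same fix can be scripted against an exported workflow. A minimal sketch, assuming the workflow was saved with "Save (API Format)" (both filenames are hypothetical):
```
# Force steps=4 / cfg=1.0 on every KSampler node in an API-format export.
import json

with open("qwen_image_edit_api.json") as f:   # hypothetical filename
    wf = json.load(f)

for node in wf.values():
    if node.get("class_type") == "KSampler":
        node["inputs"]["steps"] = 4   # 4-step Lightning LoRA -> 4 steps
        node["inputs"]["cfg"] = 1.0   # distilled LoRAs expect cfg 1.0

with open("qwen_image_edit_api_fixed.json", "w") as f:
    json.dump(wf, f, indent=2)
```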
34x263 is not supported, and GGUF models behave unexpectedly when the size is not standard. The supported sizes are: 928x1664, 1056x1584, 1140x1472, 1328x1328, 1664x928, 1584x1056, 1472x1140.
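To avoid feeding odd sizes in the first place, an input can be snapped to whichever of those resolutions has the closest aspect ratio. A minimal sketch (the supported list is taken from the message above):
```
# Snap an arbitrary input size to the nearest supported resolution
# by aspect ratio. List taken from the message above.
SUPPORTED = [(928, 1664), (1056, 1584), (1140, 1472), (1328, 1328),
             (1664, 928), (1584, 1056), (1472, 1140)]

def nearest_supported(width, height):
    """Return the supported size whose aspect ratio is closest to the input."""
    ratio = width / height
    return min(SUPPORTED, key=lambda wh: abs(wh[0] / wh[1] - ratio))

print(nearest_supported(34, 263))  # extreme ratios clamp to (928, 1664)
```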
I changed a few things in your workflow. Remember, the Qwen text encoder is very powerful. Your prompt was "remove hair." From where? Be specific; use an AI to help write your prompt. Also, don't use Lightning with GGUF, as it slows the process: GGUF + Lightning takes 55 seconds per iteration, while without it, it takes 6 seconds per iteration (on my PC). Look carefully at the screenshots.
[Attached screenshots: size settings (1328x1328) and the Lightning node]
Thanks, I have to study carefully what you sent.
:)
I ran it making the changes myself --> no good.
I ran it using your config but selecting Qwen_Image_Edit-Q5_0.gguf instead of Qwen-GGUF/Qwen_Image_Edit-Q3_K_M.gguf --> no good.
I'm downloading Qwen_Image_Edit-Q3_K_M.gguf now.
This model has a green chip icon on the download page, while the Q5 has a yellow one, but it should run anyway, no?
Strange...
Can you try updating your ComfyUI?
I did that before doing all of this.
I had to go with a pip install -r requirements.txt
because it complained that I had an old interface.
(But I also ran "Update All" in the Manager from the web interface.)
Did you update using update.bat?
Sorry, I'm running it on Linux.
(Well... an LXC container, etc.; a bit complicated, but Stable Diffusion works in a similar environment. CUDA OK, kernel OK...)
I will try to re-check the update procedure.
Do you use ComfyUI portable, or the GitHub stable version?
In your original workflow, you selected Qwen-Image-Q5_0.gguf; double-check that you have selected the Qwen-Image-Edit model (a quick way to list what a workflow actually loads is sketched after the command below).
If you run 'nvidia-smi' in the terminal, what do you see?
Run this command as well (make sure the ComfyUI environment is active when you run it):
```
python -c "import torch; print(f'PyTorch Version: {torch.__version__}, Compiled with GPU: {torch.cuda.is_available()}')"
```
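And for the model mix-up check mentioned above, a minimal sketch that lists which model files a workflow actually loads, assuming it was exported with "Save (API Format)" (the filename is hypothetical):
```
# List the model files referenced by loader nodes in an API-format export,
# to catch a Qwen-Image vs Qwen-Image-Edit mix-up at a glance.
import json

with open("workflow_api.json") as f:   # hypothetical filename
    wf = json.load(f)

for node_id, node in wf.items():
    if "Loader" in node.get("class_type", ""):
        files = [v for v in node["inputs"].values() if isinstance(v, str)]
        print(node_id, node["class_type"], files)
```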
With Qwen_Image_Edit-Q3_K_M.gguf
it seems to work.
Let me re-try removing the hair.
Q3_K_M is good for low-detail prompts, anime, illustration, smooth portraits, etc., and works even on a GTX 1050 with 4 GB VRAM.
Did you use the quants from calcuis or QuantStack?
https://huggingface.co/calcuis/qwen-image-edit-gguf
The first may give strange results.
Hi, sorry to ask, but which one gives you strange results?
: Did you use the quants from calcuis or QuantStack? https://huggingface.co/calcuis/qwen-image-edit-gguf The first may give strange results.
The first, calcuis. It did for me.
Ahaha, yes 🤣. Did anyone notice the model you were using? You were using Qwen-Image, and that's for text-to-image. I think that's why the output was like that.