Qwen Image Edit broken

#13
by SomAnon - opened

Yeah, I wasted a week getting different datasets working, only to realize that both Qwen Image Edit workflows are broken. Regular Qwen Image works without a problem, though.

  File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/transformers/feature_extraction_utils.py", line 92, in __getattr__

    return self.data[item]

           ~~~~~~~~~^^^^^^

KeyError: 'pixel_values'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

  File "/ai-toolkit/run.py", line 120, in <module>

    main()

  File "/ai-toolkit/run.py", line 108, in main

    raise e

  File "/ai-toolkit/run.py", line 96, in main

    job.run()

  File "/ai-toolkit/jobs/ExtensionJob.py", line 22, in run

    process.run()

  File "/ai-toolkit/jobs/process/BaseSDTrainProcess.py", line 2154, in run

    loss_dict = self.hook_train_loop(batch_list)

                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/ai-toolkit/extensions_built_in/sd_trainer/SDTrainer.py", line 2023, in hook_train_loop

    loss = self.train_single_accumulation(batch)

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/ai-toolkit/extensions_built_in/sd_trainer/SDTrainer.py", line 1549, in train_single_accumulation

    conditional_embeds = self.sd.encode_prompt(

                         ^^^^^^^^^^^^^^^^^^^^^^

  File "/ai-toolkit/toolkit/models/base_model.py", line 1029, in encode_prompt

    return self.get_prompt_embeds(prompt, control_images=control_images)

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/ai-toolkit/extensions_built_in/diffusion_models/qwen_image/qwen_image_edit.py", line 194, in get_prompt_embeds

    prompt_embeds, prompt_embeds_mask = self.pipeline.encode_prompt(

                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py", line 304, in encode_prompt

    prompt_embeds, prompt_embeds_mask = self._get_qwen_prompt_embeds(prompt, image, device)

                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage_edit.py", line 252, in _get_qwen_prompt_embeds

    pixel_values=model_inputs.pixel_values,

                 ^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/transformers/feature_extraction_utils.py", line 94, in __getattr__

    raise AttributeError

AttributeError
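
For what it's worth, the traceback is consistent with how transformers' `BatchFeature` reports a missing key: attribute access falls through `__getattr__`, hits a `KeyError`, and is re-raised as a bare `AttributeError`. A minimal sketch of my reading (the assumption being that the control image never reaches the processor, so no `pixel_values` entry is ever produced):

```python
# Minimal sketch, not the pipeline code: reproduces the AttributeError above
# when a BatchFeature is built without image features.
from transformers import BatchFeature

model_inputs = BatchFeature(data={"input_ids": [[1, 2, 3]]})  # no "pixel_values" key

try:
    _ = model_inputs.pixel_values  # same attribute access as pipeline_qwenimage_edit.py line 252
except AttributeError:
    print("pixel_values missing -> the edit pipeline got no control image")

# Hypothetical defensive check (illustration only, not what the pipeline does today):
pixel_values = model_inputs.get("pixel_values")
if pixel_values is None:
    raise ValueError("Qwen Image Edit prompt encoding needs a control image")
```

If that reading is right, the problem would be in the prompt-encoding path not handing the control image to the processor, rather than in the dataset itself.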

I know it's too late to fix this, but I just wanted to let the developers know.

I also ran into issues training Qwen Image Edit 2509. Even though my input and control images are the same size, I got the error below (a short sketch of what I think is going wrong follows the log):

> Downloaded repo to: /root/.cache/huggingface/hub/datasets--wouterverweirder--qwen_image_edit_2509_woven_fabric_01-dataset/snapshots/0546829ad3fa781aaf528379061f278f011281d4
> Contents: ['datasets', '.gitattributes', 'manifest.json']
> Collecting data files from /root/.cache/huggingface/hub/datasets--wouterverweirder--qwen_image_edit_2509_woven_fabric_01-dataset/snapshots/0546829ad3fa781aaf528379061f278f011281d4
> Prepared 42 images, 0 videos, and 21 captions in /tmp/tmp00ncxckn/dataset
> Starting training...
>
> qwen_image_edit_2509_woven_fabric_01:   0%|          | 0/4000 [00:00<?, ?it/s]Traceback (most recent call last):
>   File "/ai-toolkit/run.py", line 120, in <module>
>     main()
>   File "/ai-toolkit/run.py", line 108, in main
>     raise e
>   File "/ai-toolkit/run.py", line 96, in main
>     job.run()
>   File "/ai-toolkit/jobs/ExtensionJob.py", line 22, in run
>     process.run()
>   File "/ai-toolkit/jobs/process/BaseSDTrainProcess.py", line 2154, in run
>     loss_dict = self.hook_train_loop(batch_list)
>                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/ai-toolkit/extensions_built_in/sd_trainer/SDTrainer.py", line 2023, in hook_train_loop
>     loss = self.train_single_accumulation(batch)
>            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/ai-toolkit/extensions_built_in/sd_trainer/SDTrainer.py", line 1549, in train_single_accumulation
>     conditional_embeds = self.sd.encode_prompt(
>                          ^^^^^^^^^^^^^^^^^^^^^^
>   File "/ai-toolkit/toolkit/models/base_model.py", line 1029, in encode_prompt
>     return self.get_prompt_embeds(prompt, control_images=control_images)
>            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/ai-toolkit/extensions_built_in/diffusion_models/qwen_image/qwen_image_edit_plus.py", line 172, in get_prompt_embeds
>     ratio = control_images[i].shape[2] / control_images[i].shape[3]
>                                          ~~~~~~~~~~~~~~~~~~~~~~~^^^
> IndexError: tuple index out of range
> qwen_image_edit_2509_woven_fabric_01:   0%|          | 0/4000 [00:01<?, ?it/s]
>
> Error running job: tuple index out of range
> ========================================
> Result:
> - 0 completed jobs
> - 1 failure
> ========================================
>
> Traceback (most recent call last):
>   File "/tmp/script.py", line 885, in <module>
>     main()
>   File "/tmp/script.py", line 873, in main
>     subprocess.run([
>   File "/usr/local/lib/python3.12/subprocess.py", line 571, in run
>     raise CalledProcessError(retcode, process.args,
> subprocess.CalledProcessError: Command '['/root/.cache/uv/environments-v2/script-912247c0edd68a55/bin/python', 'run.py', '/tmp/tmp00ncxckn/config.yaml']' returned non-zero exit status 1.
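
The failing line indexes `shape[2]` and `shape[3]`, so whatever `control_images[i]` is at that point has fewer than four dimensions. My guess is a 3-D `(C, H, W)` tensor with no batch dimension rather than the 4-D `(B, C, H, W)` the code expects. A minimal sketch of that assumption (the layout-agnostic variant at the end is just an illustration, not what ai-toolkit actually does):

```python
# Sketch only: assumes the control image is a 3-D (C, H, W) tensor, which is
# enough to reproduce "tuple index out of range" on the line from the traceback.
import torch

control_image = torch.zeros(3, 512, 512)  # (C, H, W), no batch dimension

try:
    ratio = control_image.shape[2] / control_image.shape[3]  # same indexing as qwen_image_edit_plus.py
except IndexError as e:
    print(f"{e}: shape {tuple(control_image.shape)} has no index 3")

# Layout-agnostic alternative: take the last two dims, valid for CHW and BCHW alike.
height, width = control_image.shape[-2], control_image.shape[-1]
ratio = height / width
print(ratio)  # 1.0 for a square control image
```

So matching input and control image sizes probably isn't the issue; it looks like a dimensionality mismatch reaching that aspect-ratio check.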
