Base models

#9
opened by Pr0f3ssi0n4ln00b

Will you be updating the base models?
Alternatively, what do I change to add some on my own? I found JibMix v5 on here and would like to add it.

Will you be updating the base models?

Come to think of it, I forgot to convert and add models for FLUX. I don't have any particular plans, but I'll add them when I can.

What do I change to add some on my own alternatively?

https://huggingface.co/spaces/John6666/flux-lora-the-explorer/blob/main/env.py#L13
Just add it to the list and you're done. Whichever entry is at the top of the list becomes the default.
If you don't want to add it to the list, you can also enter it directly into the base model selection dropdown in the GUI and it will be loaded.
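For illustration, the edit is just another string in that list. A minimal sketch only; the list name `models` and the repo ids below are assumptions, so check the linked env.py for the real names:

```python
# env.py (sketch) -- names are illustrative, see the linked file for the actual list
models = [
    "black-forest-labs/FLUX.1-dev",   # first entry becomes the default
    "your-username/your-flux-model",  # hypothetical Diffusers-format repo id to add
]
```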

Hmm, they haven’t tagged it as Diffusers, so I’m getting an error. It’s JibMix v5.

Oh... I've updated it and added it, but if there's an error with that, it might be for a different reason.

Sorry, I didn’t see how active you’ve been the last hour. Haven’t tried it yet, but your version looks much more correct.

This space has pretty much everything except inpainting now. Is that hard to implement?

Alternatively, if you specify the file name like this instead of the repo, you can use files that are not in Diffusers format. At the moment, this is not possible with GGUF or NF4...
https://huggingface.co/datasets/John6666/flux1-backup-202411/blob/main/thirstTrapGirlTiktok_v10.safetensors
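Under the hood this roughly corresponds to branching between the two Diffusers loaders. A sketch of the idea only, not the Space's actual app.py logic:

```python
# Rough idea of how a Space might branch between a Diffusers repo id and a
# raw checkpoint file -- a sketch, not the real implementation.
import torch
from diffusers import FluxPipeline

def load_base_model(model: str) -> FluxPipeline:
    if model.endswith(".safetensors"):
        # single .safetensors file (Hub URL or local path, non-Diffusers layout)
        return FluxPipeline.from_single_file(model, torch_dtype=torch.bfloat16)
    # otherwise treat it as a Diffusers-format repo id
    return FluxPipeline.from_pretrained(model, torch_dtype=torch.bfloat16)
```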

inpainting

Was it in the Advanced tab?

There's an input for i2i, yes, but not inpainting with masking.

inpaint with masking

I see. It's late here, so I'll have a look at it tomorrow.
I've turned off ControlNet because the Diffusers implementation is prone to bugs, but if it can be done with Inpaint alone, it might be possible.

Getting this error after trying some of the new models by adding them to env.py:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2018, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1567, in call_function
    prediction = await anyio.to_thread.run_sync( # type: ignore
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/app.py", line 128, in change_base_model
    raise gr.Error(f"Model load Error: {repo_id} {e}") from e
gradio.exceptions.Error: "Model load Error: John6666/acorn-is-spinning-flux-aisfluxdedistilled-fp8-flux Pipeline <class 'diffusers.pipelines.flux.pipeline_flux.FluxPipeline'> expected {'tokenizer', 'vae', 'text_encoder', 'tokenizer_2', 'scheduler', 'transformer', 'text_encoder_2'}, but only {'text_encoder', 'tokenizer_2', 'tokenizer', 'scheduler', 'vae'} were passed."

Thanks for the report. Maybe a different implementation is needed for the de-distilled version.
That one may no longer be strictly FLUX.
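For what it's worth, if the converted repo really is missing those components, one generic Diffusers pattern is to load them separately and pass them into `from_pretrained`. This is only a sketch under that assumption; whether it applies to this particular fp8 de-distilled repo is unverified, and the base repo used for the missing parts is a guess.

```python
# Sketch of the "pass missing components explicitly" pattern in Diffusers.
# Assumption: the missing transformer / text_encoder_2 are taken from the
# official FLUX.1-dev repo; for a de-distilled model you would ideally use
# its own transformer weights instead, if they exist in a loadable format.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from transformers import T5EncoderModel

repo_id = "John6666/acorn-is-spinning-flux-aisfluxdedistilled-fp8-flux"  # from the traceback
base_id = "black-forest-labs/FLUX.1-dev"

transformer = FluxTransformer2DModel.from_pretrained(
    base_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    base_id, subfolder="text_encoder_2", torch_dtype=torch.bfloat16
)
pipe = FluxPipeline.from_pretrained(
    repo_id,
    transformer=transformer,
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.bfloat16,
)
```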

The inpainting masks don't seem difficult, apart from the time it takes to build the GUI.😃
Let me know if you know of any good inpaint GUI Spaces that would be helpful.
I'm planning to adapt DiffuseCraft's for now.

I’ve only duplicated this one, which has been working fine for its intended purpose. But I’d love to have it with a LoRA gallery like the one here.

Sham786/flux-inpainting-with-lora

I see the original has a runtime error now for some reason.

Gah, I’m new to all this. I see you have commits in several files, but I can’t figure out how to fetch them into my space. Is it perhaps smarter to just duplicate the space again and save my LoRAs JSON first?

I see the original has a runtime error now for some reason.

Since I was at it, I've fixed the original. If you duplicate it, it should be up to date. You can include HF_TOKEN or not.
This is a good sample. I'll use it as a base. Thanks!
https://huggingface.co/spaces/John6666/flux-inpainting-with-lora

No problem, thank you! Seriously, you get a lot of stuff done.

I’m having trouble with the UI on my phone with this one (the LoRA explorer) for some reason. Mine doesn’t do that.

So I can’t test the models you added. I added some in mine and get this error with JibMix v5:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2018, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1567, in call_function
    prediction = await anyio.to_thread.run_sync( # type: ignore
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/app.py", line 128, in change_base_model
    raise gr.Error(f"Model load Error: {repo_id} {e}") from e
gradio.exceptions.Error: 'Model load Error: jibMixFlux_v5ItsAlive.safetensors cannot unpack non-iterable NoneType object'

Thank you. It seems there was a mistake in the download step. Maybe the upstream function's specification changed. I'm investigating it now.
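As a sanity check for that single-file path, something like the following should load the checkpoint on its own. The repo id is an assumption (the backup dataset linked earlier in the thread); only the file name comes from the error message:

```python
# Minimal single-file load check -- repo_id is assumed, not confirmed to host this file.
import torch
from huggingface_hub import hf_hub_download
from diffusers import FluxPipeline

path = hf_hub_download(
    repo_id="John6666/flux1-backup-202411",        # assumption: same backup dataset as above
    filename="jibMixFlux_v5ItsAlive.safetensors",  # file name from the error message
    repo_type="dataset",
)
pipe = FluxPipeline.from_single_file(path, torch_dtype=torch.bfloat16)
```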

I’m having trouble with the UI on phone on this one

I think that's the error. It's a well-known error in a small corner of HF.😇 I'll fix it later.
https://discuss.huggingface.co/t/python-gradio-web-pages-suddenly-dont-render-properly-on-ipad-browsers/126669
https://discord.com/channels/879548962464493619/1295847667515129877

Inpaint was easy, but Gradio's ImageEditor was really buggy and took a while. It should work now.
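For anyone curious, the rough shape of the wiring looks something like this. It is only a sketch, not the Space's actual code; the model id and component names are assumptions:

```python
# Sketch: recover the painted mask from gr.ImageEditor and feed it to FluxInpaintPipeline.
# gr.ImageEditor returns a dict with "background", "layers" and "composite"; the mask is
# usually the alpha channel of the first layer (the user's brush strokes).
import gradio as gr
import numpy as np
import torch
from PIL import Image
from diffusers import FluxInpaintPipeline

pipe = FluxInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

def inpaint(editor_value: dict, prompt: str):
    image = Image.fromarray(editor_value["background"]).convert("RGB")
    layer = editor_value["layers"][0]                                    # RGBA brush layer
    mask = Image.fromarray((layer[..., 3] > 0).astype(np.uint8) * 255)   # alpha -> binary mask
    return pipe(prompt=prompt, image=image, mask_image=mask,
                width=image.width, height=image.height).images[0]

with gr.Blocks() as demo:
    editor = gr.ImageEditor(type="numpy", label="Image + mask")
    prompt = gr.Textbox(label="Prompt")
    out = gr.Image(label="Result")
    gr.Button("Inpaint").click(inpaint, [editor, prompt], out)
```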

It works really well, I love it! Now it’s almost a complete creative suite. Thank you for your efforts :)

If you don’t mind me asking, is there a way I can add my LoRAs in groups that get fetched whenever I add more to a group? And select them from a dropdown list, like you’ve done for the base models. I’m thinking of having distinct groupings of character, style, and concept LoRAs.

Hmm, I'm not entirely sure what kind of layout you're going for...
As usual, it would be easier if you had some reference material; it doesn't have to be perfect, as long as it's similar.
In any case, it's late at night, so it will have to wait until tomorrow.😪

Edit:
I'll write down what was easy and what was difficult, in case it's helpful.
Inpainting was easy because the function itself already existed. The difficult part was that the GUI didn't behave as documented. (When a creator of HF Spaces runs into trouble, it's almost always because of this or the library version.)
Usually, it's relatively easy to incorporate things that have already been achieved in other HF Spaces.
Likewise, if the GUI changes stay within the scope of what Gradio is designed for, the work is not difficult; see the sketch below for the kind of thing that falls in that category.
However, trying to make Gradio do something that is hard for Gradio means HELL. Anyway, if you give an example, whoever is building it can generally gauge the difficulty at a glance.
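As an example of a change that stays within Gradio's comfort zone, two linked dropdowns for grouped LoRAs might look like this. The group names and repo ids are made up for illustration:

```python
# Hypothetical grouped-LoRA selector -- the groups here are placeholders and
# could just as well be loaded from a JSON file the user maintains.
import gradio as gr

LORA_GROUPS = {
    "character": ["user/char-lora-a", "user/char-lora-b"],
    "style": ["user/style-lora-a"],
    "concept": ["user/concept-lora-a"],
}

def update_loras(group: str):
    # Return a new Dropdown so the choices follow the selected group.
    return gr.Dropdown(choices=LORA_GROUPS[group], value=LORA_GROUPS[group][0])

with gr.Blocks() as demo:
    group = gr.Dropdown(choices=list(LORA_GROUPS), value="character", label="LoRA group")
    lora = gr.Dropdown(choices=LORA_GROUPS["character"], label="LoRA")
    group.change(update_loras, group, lora)
```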

I don’t know how to contact you elsewhere. Have you seen the new Qwen/Flux multimodal model? Are you working on it by any chance? It looks powerful as hell.

Thanks for getting in touch. I'm also interested in that, and there are a few other new FLUX variants...😀
I'm not working on them at the moment, but I'll check tomorrow whether they would work if incorporated into this space.

Edit:
I don't have enough VRAM... (it needs about 48GB, and I have 40GB.) Quantization might solve that, but since quantization makes LoRA unusable, it would be faster to do it in a different space.😅
However, I don't have any free slots in my Zero GPU spaces...
https://github.com/erwold/qwen2vl-flux
Before that, it's not really FLUX anymore... Can it even just barely run on HF...? On a CPU space, it would die at 100% usage just from loading.
