Example prompts from selected LoRAs

#17
by Pr0f3ssi0n4ln00b - opened

The external CivitAI LoRAs sometimes show example prompts. Would it be possible to add a similar field for the LoRAs loaded from HF repos, preferably with a copy button?
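Something like this is roughly what I mean (a rough Gradio sketch; EXAMPLE_PROMPTS and get_example_prompt are made-up placeholders, but gr.Textbox's show_copy_button would give the copy button):

import gradio as gr

# Made-up placeholder data; real code would read the example prompt
# from the LoRA repo's metadata or model card.
EXAMPLE_PROMPTS = {"my-lora": "example prompt here, TriggerWord"}

def get_example_prompt(lora_name: str) -> str:
    return EXAMPLE_PROMPTS.get(lora_name, "none")

with gr.Blocks() as demo:
    lora = gr.Dropdown(choices=list(EXAMPLE_PROMPTS), label="LoRA")
    # show_copy_button adds the one-click copy control.
    example = gr.Textbox(label="Example prompt", show_copy_button=True,
                         interactive=False)
    lora.change(get_example_prompt, inputs=lora, outputs=example)

demo.launch()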

You are the best btw, love how responsive you are to requests and feedback.

Owner

I've tried implementing it.
I'd forgotten about that.πŸ˜…

Actually, after this change the prompt-examples (trigger words) bar for CivitAI LoRAs is always empty and just says 'none'.
And there's another thing: before this change those trigger words were automatically added to the actual prompt, at least that's how it looked if you checked the metadata and the prompt section, for example: {"prompt": "prompt here bla bla bla , MythP0rt"
(MythP0rt being a trigger word for LoRA models/599757). So now, since there's nothing in the prompt-example bars, nothing gets added. I'm pretty clueless about what the best solution would be for all of that, just giving some feedback.
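For reference, the old auto-append behavior presumably amounted to something like this (a minimal sketch; the function name is mine, not from the actual code):

# Sketch of the old behavior: append the LoRA's trigger words to the user
# prompt before generation, skipping any that are already present.
def append_trigger_words(prompt: str, trigger_words: list[str]) -> str:
    extras = [w for w in trigger_words if w.lower() not in prompt.lower()]
    return ", ".join([prompt] + extras) if extras else prompt

# append_trigger_words("prompt here bla bla bla", ["MythP0rt"])
# -> "prompt here bla bla bla, MythP0rt"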

Thanks! Maybe fixed.πŸ˜…

John (Arbuckle, I assume?)
I haven't tested your models yet, but I have tested Niggendar's, and I've FINALLY gotten past the 401 error (pretty sure it'll fix your models too; 85% sure).

"HF_TOKEN" , as a R or W or FT (as an environmental) access token , DOESNT WORK, at least not for me
i had to enter the hf_xxxxxxxxxxxxxxxxxxx as a secret, to each of my spaces , name it "mynewwhatever"

Then add each line in its proper place in your (whoever reads this) code:

import os

# Read the token from the Space secret; os.environ.get() already returns
# None if the variable isn't set, so no extra conditional is needed.
HF_TOKEN = os.environ.get("mynewwhatever")
m = gr_Interface_load(f'models/{model}', hf_token=HF_TOKEN)

I don't really know if this is basic key-entry 101 or not, but I never saw it mentioned as a direct solution to these newfangled inference-rollout "server issues".

I'm going to test your models now.

update ---------------
Weird. I can load them, and I can communicate (need to debug, though), but they're not responding correctly. Before, they'd just immediately drop.

It seems I'm seeing:
gr_Interface_load..() got an unexpected keyword argument 'prompt'
and it seems to be coming from here:
task = asyncio.create_task(asyncio.to_thread(models_load[model_str].fn, prompt=f'{prompt} {noise}', token=HF_TOKEN, **kwargs))
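One possible workaround (just a guess on my part, not a confirmed fix): if the callable returned by gr.load() now takes the prompt as its first positional argument, dropping the keyword might avoid the TypeError:

import asyncio

# models_load, model_str, prompt, noise, and kwargs come from the
# surrounding code. Assumption: fn accepts the prompt positionally, and
# the 'prompt=' (and possibly 'token=') keywords are what trigger the error.
task = asyncio.create_task(
    asyncio.to_thread(models_load[model_str].fn, f'{prompt} {noise}', **kwargs)
)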

I probably need to stop posting this here; I'm sorry.
Any suggestions on where to post to continue working toward a resolution? (I'll gather what debug info I can.)

The behavior of gr.load() (specifically, the internals of the huggingface_hub and Gradio libraries and the way HF's server handles tokens) has been changing a lot in recent weeks, and in particular the way tokens are handled has changed several times in the last few days.
So sometimes it breaks and sometimes it fixes itself...😅
I think the most stable approach is to set the HF_TOKEN secret and use gr.load() without other options, after pinning the versions of Gradio and huggingface_hub to something reasonable. If you're using a private repo, you'll have to pass the token.
https://huggingface.co/spaces/Yntec/blitz_diffusion/discussions/5#67d13d0fa189f397864a3833
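Something like this is what I mean by the stable setup (a minimal sketch; the model id is a placeholder, and it assumes HF_TOKEN is set as a Space secret and gradio / huggingface_hub are pinned in requirements.txt):

import os
import gradio as gr

MODEL_ID = "models/SomeUser/SomeModel"  # placeholder; substitute a real repo

# With the HF_TOKEN secret set on the Space, a public repo needs no options.
demo = gr.load(MODEL_ID)

# For a private repo, pass the token explicitly (same hf_token parameter
# as in the snippet quoted earlier in the thread).
# demo = gr.load(MODEL_ID, hf_token=os.environ.get("HF_TOKEN"))

demo.launch()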

Edit:

Any suggestions on where to post to continue working toward a resolution?

I think the forum, the Hugging Face Discord, or a hub-docs GitHub issue would be appropriate. The hub-docs GitHub is also the place to handle other Hub issues in general. For account-dependent issues, or error information that's better not made public, there's also the option of sending an email: [email protected]
BTW, this may be relevant to the 401 issue: https://discuss.huggingface.co/t/model-does-not-exist-inference-api-dont-work/145242/4
