Sorry

#4
by kpadpa - opened

First of all - sorry for being such a pain in the ass about this model.
Secondly, I see that it was updated 2 days ago. It didn't work so well before that, and now it doesn't work at all. Does this have something to do with HF again? I tried generating something on different models of yours and that didn't work either. Are they ever going to work again, or are you considering giving up the whole thing?

Since you are so obsessed with this model, maybe you should start paying me for its use.

Keltezaa changed discussion status to closed

I asked a simple question, no need to get childish lmfao.

Wow, sensitive, huh? It was intended as a joke.

@kpadpa ,
"Does this have something to do with HF again? I tried generating something on different models of yours and it didn't work either. Are they ever going to work again, or are you considering giving up the whole thing?"

HF wants to milk users for money, so they changed things.

It started with the ZeroGPU cooldown, as well as the allocated usage totals, which seem nerfed. Pro users barely get enough GPU usage even though it advertises 5x the quota and faster cooldown.

Then HF introduced merging other top Inference APIs with their own, leading to many "blob" errors.
I think, in an attempt to fix that, they have now broken the API completely: the standard HF API fails with errors ranging from "Blob" and "missing model" to "Runtime error" and a few others.
Below are some posts about it, but it is out of my hands; it is the HF staff that should get it sorted.

https://huggingface.co/posts/Reality123b/747178272365117
https://discuss.huggingface.co/t/request-failed-500/138871

among others.
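For anyone hitting these failures from their own scripts, here is a minimal sketch of calling the serverless Inference API while distinguishing the transient errors described above from permanent ones. The `MODEL_ID` placeholder, the `classify_error`/`query_with_retry` helper names, and the retry policy are all assumptions for illustration, not part of any official HF client:

```python
import json
import time
import urllib.error
import urllib.request

# MODEL_ID is a placeholder; substitute a real repo id before use.
API_URL = "https://api-inference.huggingface.co/models/MODEL_ID"

def classify_error(status_code: int) -> str:
    """Map HTTP statuses from the Inference API to rough labels (labels are ours, not HF's)."""
    if status_code == 200:
        return "ok"
    if status_code == 404:
        return "missing-model"   # model id not found or not served
    if status_code == 500:
        return "server-error"    # the "Request failed 500" class of errors
    if status_code == 503:
        return "loading"         # model cold-starting; usually worth retrying
    return "other"

def query_with_retry(payload: dict, token: str, retries: int = 3, backoff: float = 5.0) -> bytes:
    """POST a payload, retrying only the transient failure classes above."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    for attempt in range(retries):
        req = urllib.request.Request(API_URL, data=json.dumps(payload).encode(), headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=120) as resp:
                return resp.read()  # raw bytes (an image, for a text-to-image model)
        except urllib.error.HTTPError as err:
            if classify_error(err.code) in ("server-error", "loading"):
                time.sleep(backoff * (attempt + 1))  # back off and retry
                continue
            raise  # missing model, auth failure, etc.: not worth retrying
    raise RuntimeError("Inference API kept failing; likely a platform-side issue")
```

If the retries are exhausted, the problem is probably on HF's side, as the forum threads linked above suggest, and no client-side change will fix it.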

Keltezaa changed discussion status to open
Keltezaa changed discussion status to closed