Spaces: Running on Zero
Very interesting, but it isn't working. I tried the demo a few times; it started with a progress bar but then stopped with an error every time. Then I ran out of free tokens on HF.
Could it be because everyone is trying the model at the same time?
Yes, there's not much we can do right now...
Yep, everyone is spamming; Zero GPU can't handle our excitement.
That's why I made this… fully accelerated on an H200.
Space/App : https://huggingface.co/spaces/prithivMLmods/Qwen-Image-Diffusion
@mascIT
Really? Look closely at the app build: it's running on APIs, not on the native 104GB H200. 🙂
https://huggingface.co/spaces/prithivMLmods/Qwen-Image-Diffusion works like a charm, congrats @prithivMLmods
I just enabled auto-scaling on it so it will be able to use more than 5 GPUs concurrently when needed 🚀
@cbensimon Wow, it's huge. Thank you!
Wow, thanks!
But doesn't this cost a lot of money to run?
@BasToTheMax There's a rate limit of 25 minutes per day of Zero GPU H200 compute for $9/month; that's all I know.
Please visit huggingface.co/pro for more details.
I created another HF Space that uses the same model, with the option to generate smaller (and bigger) images, so less inference time is consumed from the free daily quota of regular users.
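To see why smaller images help, note that diffusion inference time scales roughly with pixel count, so halving width and height makes each image roughly 4x cheaper against the daily quota. A rough back-of-the-envelope sketch (the 25 min/day figure comes from this thread; the per-megapixel timing is a made-up assumption purely for illustration):

```python
# Rough estimate of how many images fit in the daily Zero GPU quota.
# The 25 min/day quota is mentioned in this thread; SECONDS_PER_MEGAPIXEL
# is a hypothetical number for illustration, not a measured benchmark.

DAILY_QUOTA_S = 25 * 60          # 25 minutes/day, in seconds
SECONDS_PER_MEGAPIXEL = 20.0     # assumed inference cost per megapixel

def images_per_day(width: int, height: int) -> int:
    """How many images of the given size fit in the daily quota,
    assuming inference time scales linearly with pixel count."""
    megapixels = (width * height) / 1_000_000
    seconds_per_image = megapixels * SECONDS_PER_MEGAPIXEL
    return int(DAILY_QUOTA_S // seconds_per_image)

print(images_per_day(1024, 1024))  # full-size images
print(images_per_day(512, 512))    # quarter the pixels, ~4x the images
```

Under these assumed numbers, dropping from 1024x1024 to 512x512 turns roughly 71 images per day into roughly 286, which is the whole point of offering smaller output sizes.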