Apply for community grant: Academic project (gpu)
λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space
Overview
The rapid advancement of generative models facilitates the creation of hyper-realistic images from textual descriptions. However, downstream tasks such as Personalized Text-to-Image (P-T2I) generation typically require thousands of GPU hours of training to achieve zero-shot personalization.
To address this challenge, we extend ECLIPSE via image-text interleaved training and present λ-ECLIPSE, a model trained in a mere 74 GPU hours, compared to counterparts such as BLIP-Diffusion (2000 GPU hours) and Kosmos-G (12000 GPU hours).
We would like to open-source the demo to promote more research on robust and efficient T2I models. We have already released the codebase and model weights on Hugging Face. Please find the details below:
Project Details
- Project Page: https://eclipse-t2i.github.io/Lambda-ECLIPSE/
- Paper (PDF): Paper (the arXiv version will be released with the next announcement)
Hoping to hear back!
Thank you,
Maitreya Patel
Hi @mpatel57 , we have assigned a GPU to this Space. Note that GPU grants are provided temporarily and might be removed after some time if usage is very low.
BTW, we recently started using ZeroGPU as the default for grants, so could you check the usage section of this page to see whether your Space can run on ZeroGPU?
@mpatel57
Ohh, sorry! I just approved your request to join the ZeroGPU explorers org. It seems I missed your message for some reason, perhaps because the repo was private back then. I recently noticed that I don't receive notifications from private repos, even though I have special permission to see them.
Anyway, thanks for checking out ZeroGPU!
No worries! I moved the demo to ZeroGPU and it's indeed good!
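In case it helps others, the change was essentially just decorating the GPU-heavy inference function with `@spaces.GPU` from the `spaces` package. Here's a minimal sketch assuming a Gradio app with a generic `diffusers` pipeline; the model id and function name are placeholders, not the actual λ-ECLIPSE demo code:

```python
import gradio as gr
import spaces  # available by default on ZeroGPU Spaces
import torch
from diffusers import DiffusionPipeline

# Placeholder model id for illustration; the real loading code is in the released repo.
pipe = DiffusionPipeline.from_pretrained("your-org/your-model", torch_dtype=torch.float16)
pipe.to("cuda")  # ZeroGPU defers actual GPU allocation until a decorated call runs

@spaces.GPU  # a GPU is attached only for the duration of this function
def generate(prompt: str):
    return pipe(prompt).images[0]

demo = gr.Interface(fn=generate, inputs="text", outputs="image")
demo.launch()
```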
Thanks!