Enhance ControlNet model with embedding
Would it be possible to create embeddings for the ControlNet segmentation model to add more concepts? And if so, could this be done with a method like DreamBooth, using the color as a token, or would you have to retrain on the whole dataset at once?
Maybe this isn't thought through far enough, but it would be very nice to have even more control over the generated image.
Not sure about that, but Latent Couple with masking works very similarly to ControlNet's segmentation model, except you are completely free to assign a sub-prompt to each colored region and be sure that each region contains the specific object or person you want to generate there. A rough sketch of the masking idea is below.
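To illustrate the core trick, here is a minimal sketch of the masked-latent blending that regional-prompting approaches like Latent Couple rely on. This is not the extension's actual code; `noise_pred_bg`, `noise_pred_fg`, `mask`, and `fg_weight` are illustrative names, assuming you already have the UNet's noise predictions for two sub-prompts at one denoising step:

```python
# A minimal sketch of the masked-latent blending behind Latent Couple.
# Assumptions: noise_pred_bg / noise_pred_fg are the UNet's noise predictions
# for the background and foreground sub-prompts at one denoising step, and
# mask is a {0,1} tensor marking the foreground region in latent space.
import torch

def blend_region_predictions(noise_pred_bg: torch.Tensor,
                             noise_pred_fg: torch.Tensor,
                             mask: torch.Tensor,
                             fg_weight: float = 0.8) -> torch.Tensor:
    """Combine two per-prompt noise predictions into one, weighting the
    foreground prompt inside its masked region and the background elsewhere."""
    # Inside the mask, mix mostly the foreground prediction; outside it,
    # keep the background prediction unchanged.
    return noise_pred_bg * (1 - mask) + (
        fg_weight * noise_pred_fg + (1 - fg_weight) * noise_pred_bg
    ) * mask

# Toy example with latent-sized tensors (batch=1, channels=4, 64x64 latents):
bg = torch.randn(1, 4, 64, 64)
fg = torch.randn(1, 4, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0  # foreground occupies the central region
out = blend_region_predictions(bg, fg, mask)
print(out.shape)  # torch.Size([1, 4, 64, 64])
```

Repeating this blend at every denoising step is what lets each colored region follow its own sub-prompt.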
@Ascendant Thank you very much for the response! I will look into it and see if it helps me out.
Update:
Ok, so I looked into it and it is quite interesting. I would consider it a yes-but-no. The idea would be to build a renderer using Stable Diffusion where you assign many (or a few) different colors to objects and it automatically, without much prompting, knows what to do. To have more control over the generated image, it would be helpful to add these extra materials and textures. A sketch of that color-to-concept mapping is below.
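For reference, here is a minimal sketch of the "renderer" idea with the existing concept set: paint a segmentation map where each color stands for a class, then let the seg ControlNet fill in the details. The RGB values below are assumed to match the ADE20K palette the `lllyasviel/sd-controlnet-seg` checkpoint was trained on; verify them against your checkpoint before relying on this. Adding *new* materials and textures beyond that palette is exactly the part that would need extra training, which is the open question here.

```python
# A sketch of "painting" a scene as an ADE20K-colored segmentation map and
# rendering it with the segmentation ControlNet via diffusers.
# Color values are assumptions; check the palette of your seg checkpoint.
import torch
from PIL import Image, ImageDraw
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Hand-painted segmentation map: sky on top, grass below, a building block.
seg = Image.new("RGB", (512, 512), (6, 230, 230))           # sky
draw = ImageDraw.Draw(seg)
draw.rectangle([0, 320, 512, 512], fill=(4, 250, 7))        # grass
draw.rectangle([160, 120, 360, 400], fill=(180, 120, 120))  # building

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a house on a meadow, clear day, photorealistic",
    image=seg,  # the segmentation map is the conditioning image
    num_inference_steps=30,
).images[0]
image.save("rendered.png")
```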