Questions regarding training
What model did you train against? What were your vectors per token, and did you use any initialization text?
How many images did you train on, what was the resolution, and what was your learning rate / steps?
How did you pick images for training? Just bad ones or did you have an eye for it?
I am doing embedding training myself and have a whole wealth of experimentation results that I don't know what to do with, LOL. But I want to try to recreate your embedding training process for my own model-specific generation errors/artifacts.
I described it a little bit in the Reddit post, but here it is again with some more details:
Step 1: Generate images suited for the task:
I created several images with different samplers, using a standard negative prompt as the actual prompt, so that they look similar to the images you get when the negative embedding is used in the normal prompt.
The prompt I used was:
lowres, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))
Resolution: 512x512.
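(If you want to reproduce this step outside the WebUI, here is a rough sketch using the diffusers library; the model ID, sampler choices, image count, and output folder are my assumptions, not the exact setup used above, which was the A1111 WebUI.)

```python
# Rough sketch of Step 1 with diffusers (assumptions: an SD 1.x checkpoint,
# two samplers, 512x512, 10 images each). Note that the ((...)) emphasis
# syntax in the prompt above is A1111-specific and is not interpreted here.
import os
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

model_id = "runwayml/stable-diffusion-v1-5"  # assumption
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

artifact_prompt = "lowres, bad hands, text, error, missing fingers, ..."  # full prompt from above

samplers = {
    "euler_a": EulerAncestralDiscreteScheduler,
    "dpmpp_2m": DPMSolverMultistepScheduler,
}

os.makedirs("train_images", exist_ok=True)
for name, scheduler_cls in samplers.items():
    pipe.scheduler = scheduler_cls.from_config(pipe.scheduler.config)
    for i in range(10):
        image = pipe(artifact_prompt, width=512, height=512, num_inference_steps=25).images[0]
        image.save(f"train_images/{name}_{i:03d}.png")
```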
What were your vectors per token, and did you use any initialization text?
I trained the first version with 16 vectors per token and the second one with 8. Initialization text: I used something like "ugly, low res", etc.
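(Side note, in case it helps readers: a minimal sketch of what "vectors per token" and "initialization text" mean in practice, i.e. roughly what the WebUI's "Create embedding" step does. The model ID, seeding logic, and file layout here are my assumptions, not the author's code.)

```python
# Minimal sketch: create an 8-vector embedding seeded from an initialization text
# (assumes an SD 1.x text encoder with 768-dim token embeddings).
import torch
from transformers import CLIPTokenizer, CLIPTextModel

model_id = "runwayml/stable-diffusion-v1-5"  # assumption
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

num_vectors = 8                 # "vectors per token"
init_text = "ugly, low res"     # "initialization text"

# Look up the embeddings of the initialization text and tile them into the new vectors.
init_ids = tokenizer(init_text, add_special_tokens=False).input_ids
embedding_table = text_encoder.get_input_embeddings().weight.data
init_vectors = embedding_table[init_ids]          # (len(init_ids), 768)

new_embedding = torch.zeros(num_vectors, init_vectors.shape[1])
for i in range(num_vectors):
    new_embedding[i] = init_vectors[i % len(init_ids)]

# These num_vectors x 768 values are what textual inversion optimizes; the dict layout
# below is roughly what A1111 stores in its .pt embedding files.
torch.save({"string_to_param": {"*": new_embedding}}, "negative_embedding.pt")
```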
Step 2: Filename / Prompt description:
Before training, I wrote the prompt described above into a .txt file, which the training process uses as the prompt template.
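(A minimal sketch of this step, with file names of my own choosing; in the WebUI this is the file you select as the "Prompt template file" on the Train tab. A template can also contain placeholders such as [name] or [filewords], but a plain one-line prompt like the one above works too.)

```python
# Sketch: write the Step 1 prompt into a one-line template .txt file for training.
artifact_prompt = (
    "lowres, bad hands, text, error, missing fingers, extra digit, fewer digits, "
    "cropped, worst quality, low quality, normal quality, jpeg artifacts, "
    "signature, watermark, username, blurry"
    # ... continue with the rest of the prompt from Step 1
)

with open("negative_template.txt", "w", encoding="utf-8") as f:
    f.write(artifact_prompt + "\n")
```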
Step 3: Training:
I just used the Textual Inversion training built into Automatic1111's WebUI to train the negative embedding. The learning rate was left at the default. For the maximum number of steps I chose 8000, since I usually train my embeddings for two epochs, each epoch being 200 * the number of images.
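(Spelling out the step-count rule of thumb; the image count of 20 is only my inference from the 8000-step figure, not a number stated above.)

```python
# "Two epochs" in the sense used above: 200 steps per image, done twice.
num_images = 20                       # assumption: 8000 = 2 * 200 * 20
steps_per_epoch = 200 * num_images    # 4000
max_steps = 2 * steps_per_epoch       # 8000 -> the value entered in the WebUI
print(max_steps)
```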
I am open to feedback or more detailed examples and ideas for training. :)
Could you please consider including several tens of real photos and letting it run to ~15k steps?
Are you using SD1.5 as a model?
Anyway, great work! It makes such a great impact. I'm impressed. Thank you!
Hello, the question I want to ask is: how did you train the embedding so that it repairs hands and images to a certain extent? This is really a big innovation for AI today. How did you train it and add it to the negative prompt? Do you pay attention to anything when selecting the source pictures? What should one pay attention to during training? Your answer would be very helpful to me, and I hope to get your guidance. In addition, I want to ask whether you will produce models for things like limb repair in the future? If so, it would be very helpful to us.
Would you mind uploading the .txt file of the prompts? I'm kind of unsure what your styles.txt template file actually looked like.
Also, for me: I have 64 sample images, and I set the max steps to 64 images * 200 per image * 2 epochs = 25600, but in the webui.sh stdout it is calculating 400 epochs (instead of 2?).
This makes sense, since if I have 64 images and 25600 steps, I am using each image 400 times (I thought an epoch meant one full pass over your dataset, which seems to be what the WebUI counts).
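(For what it's worth, the WebUI counts one epoch as one pass over the dataset, so the 400 it reports is just steps divided by image count; a quick sketch of that bookkeeping, not the WebUI's actual code:)

```python
num_images = 64
max_steps = num_images * 200 * 2        # 25600, as set above
webui_epochs = max_steps // num_images
print(webui_epochs)                     # 400 -> one "epoch" = one pass over the 64 images
```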