Dataset viewer preview: 1.51k images (width 517 px), each labeled with one of two classes: `orig` and `white`.
# Genshin_IP Dataset

The dataset contains images of 64 characters captured manually from the Genshin game, shot from multiple angles, with 20 pictures per character.
The dataset is intended for training Genshin character LoRA models.
You can also get the dataset from Google Drive.
## Dataset Details

### Character Proportion

The character should occupy most of the image. There should not be too much background; ideally, the frame should wrap tightly around the body (crop screenshots if necessary).
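This guideline can be checked mechanically, assuming a binary foreground mask is available (for example, from the anime-segmentation matting step described later). A minimal sketch, not part of the dataset tooling:

```python
import numpy as np

def subject_ratio(mask: np.ndarray) -> float:
    """Fraction of pixels covered by the character (mask: 1 = foreground)."""
    return float(mask.mean())

# Hypothetical example: a 100x100 frame where the character fills an 80x60 box.
mask = np.zeros((100, 100))
mask[10:90, 20:80] = 1.0
ratio = subject_ratio(mask)          # 0.48
needs_tighter_crop = ratio < 0.4     # flag frames with too much background
```

The 0.4 threshold is an arbitrary illustration; pick whatever cutoff matches your cropping habits.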
### Character Features

- The character's features (such as hair, eyes, and clothing) should be consistent across images;
- Limbs should be extended naturally, without extra movements;
- Clothing should be simple and clean, without overlapping elements (such as leaves or sparkle effects);
- Reflections and shadows should not be included.
### Image Format

- Acceptable formats are .jpg, .jpeg, and non-transparent .png.
- Do not use .webp or transparent .png.
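A small Pillow helper (illustrative, not part of the dataset tooling) can normalize any input to an opaque RGB PNG, flattening transparency onto white:

```python
from PIL import Image

def to_training_format(path_in: str, path_out: str) -> None:
    """Convert any input image to an opaque RGB PNG (no alpha channel)."""
    img = Image.open(path_in)
    if img.mode in ("RGBA", "LA", "P"):
        img = img.convert("RGBA")
        # Flatten transparency onto a white canvas, using alpha as paste mask.
        background = Image.new("RGB", img.size, (255, 255, 255))
        background.paste(img, mask=img.split()[-1])
        img = background
    else:
        img = img.convert("RGB")
    img.save(path_out, format="PNG")
```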
### Image Resolution

The image resolution should be as high as possible; greater than 512×1024 is ideal (use Gigapixel AI for upscaling if necessary). Note that higher resolutions require higher parameter settings and more computational power during training.
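To find upscaling candidates in bulk, a quick Pillow scan (a sketch; the function name and thresholds simply mirror the guideline above) could look like:

```python
from pathlib import Path
from PIL import Image

MIN_W, MIN_H = 512, 1024  # minimum resolution from the guideline above

def low_res_images(folder: str) -> list[str]:
    """Return filenames below the recommended resolution (upscaling candidates)."""
    flagged = []
    for p in Path(folder).iterdir():
        if p.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        with Image.open(p) as img:
            if img.width < MIN_W or img.height < MIN_H:
                flagged.append(p.name)
    return sorted(flagged)
```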
### Image Quantity

- The number of images per character: ideally 16–20 (even 5–10 are often sufficient). More is not necessarily better; choose the optimal number.
Note: the generated images are influenced both by angle-related prompts such as "looking at viewer", "side standing", and "looking behind", and by the proportion of each angle in the dataset.
- Ideally, full-body shots are preferred, but half-body shots are acceptable as well. The proportions can follow my settings:
- total images: 20
  - angled full-body shots (subtotal): 15
    - top-down left: 1, top-down: 1, top-down right: 1
    - left diagonal: 2, front: 5, right diagonal: 2
    - bottom-up left: 1, bottom-up: 1, bottom-up right: 1
  - left-side full-body: 1
  - right-side full-body: 1
  - back full-body: 1
  - front half-body: 2
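The breakdown above can be written down as a checklist and sanity-checked before shooting; the counts below mirror the settings (labels are my own shorthand):

```python
# Per-character shot plan, mirroring the proportions above.
shot_plan = {
    "front full-body": 5,
    "left diagonal full-body": 2,
    "right diagonal full-body": 2,
    "top-down left full-body": 1,
    "top-down full-body": 1,
    "top-down right full-body": 1,
    "bottom-up left full-body": 1,
    "bottom-up full-body": 1,
    "bottom-up right full-body": 1,
    "left-side full-body": 1,
    "right-side full-body": 1,
    "back full-body": 1,
    "front half-body": 2,
}
assert sum(shot_plan.values()) == 20  # matches "total images: 20"
```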
### Background

If the images come from the same source and have similar backgrounds, edit them to have a white background, and set prompt.txt to:

```
[Gender], [EN_name], solo, white_background
```
You can use the anime character matting project anime-segmentation to change the background to white; its file "inference.py" may need to be modified to:

```python
if opt.only_matted:
    # Original output (transparent background):
    # img = np.concatenate((mask * img + 1 - mask, mask * 255), axis=2).astype(np.uint8)
    # img = cv2.cvtColor(img, cv2.COLOR_RGBA2BGRA)
    # cv2.imwrite(f'{opt.out}/{i:06d}.png', img)

    # Change the background to white instead; the alpha channel is dropped
    # because the result is fully opaque (and COLOR_RGB2BGR expects 3 channels).
    white_mask = np.ones_like(mask) * 255
    img = (mask * img + white_mask * (1 - mask)).astype(np.uint8)
    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    cv2.imwrite(f'{opt.out}/{i:06d}.png', img)
```
If the images come from different sources and have varying backgrounds, prompt.txt can be set to:

```
[Gender], [EN_name], solo
```
## Adjustments for LoRA training

If you use the https://github.com/Akegarasu/lora-scripts code for training, the following settings are also required:

- Each image is repeated 6 times by default. More repetitions help the model learn the images, but take more time and can lead to overfitting. The repeat count is encoded in the training folder name, e.g. "20_conan"; for real people you can use even more, e.g. "100_conan".
- The prompt.txt for each image should be set to:

```
[Gender], [EN_name], solo, white_background
```
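To avoid writing these caption files by hand, a small script can generate one per image (the function name and tag values are hypothetical; this assumes per-image same-named .txt captions, which kohya-style trainers typically read):

```python
from pathlib import Path

def write_captions(folder: str, gender: str, en_name: str,
                   white_bg: bool = True) -> int:
    """Write a same-named .txt caption for every image in `folder`."""
    tags = [gender, en_name, "solo"]
    if white_bg:
        tags.append("white_background")
    caption = ", ".join(tags)
    count = 0
    for p in Path(folder).iterdir():
        if p.suffix.lower() in {".jpg", ".jpeg", ".png"}:
            p.with_suffix(".txt").write_text(caption)
            count += 1
    return count
```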