Dataset viewer preview: two columns, `image` (JPEG, smaller side 256 px) and `label` (ImageNet class label); the first rows shown are all class 0, `tench, Tinca tinca`.
This is the same ImageNet dataset, but with every image resized so that its smaller side is 256 pixels. Many pretraining workflows resize images to 256 and then take a random 224x224 crop, which is why 256 was chosen. The resized dataset also downloads much faster and takes up less disk space than the original. See here for the detailed readme.
Below is an example of one row of data. Note that the labels in the test split are all -1.
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=256x384 at 0x276021C5EB8>,
'label': 23
}
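The resize recipe described above (smaller side to 256, then a random 224x224 crop) can be sketched with plain arithmetic. This is a minimal illustration, not code from the dataset itself; the function names are hypothetical:

```python
import random

def resize_dims(width, height, target=256):
    """Compute new (width, height) so the smaller side equals `target`,
    preserving aspect ratio (the longer side is rounded)."""
    if width <= height:
        return target, round(height * target / width)
    return round(width * target / height), target

def random_crop_box(width, height, crop=224):
    """Pick a (left, top, right, bottom) box for a random crop x crop window."""
    left = random.randint(0, width - crop)
    top = random.randint(0, height - crop)
    return left, top, left + crop, top + crop

# Example: a 500x375 source image.
w, h = resize_dims(500, 375)  # smaller side (height) becomes 256
print(w, h)                   # -> 341 256
box = random_crop_box(w, h)   # a random 224x224 window inside 341x256
```

In practice the same two steps are usually expressed with an image library's resize and crop operations; the arithmetic above is what those calls compute internally.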
The number of rows per split is the same as in the original ImageNet.

| | train | validation | test |
|---|---|---|---|
| # of examples | 1,281,167 | 50,000 | 100,000 |
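Because the test split carries no labels (every `label` is -1), evaluation code should drop or special-case those rows. A minimal sketch using plain dicts shaped like the example row above (not the actual loading API):

```python
# Rows shaped like the example row: {'image': ..., 'label': int}.
# In the test split every label is -1 (unlabeled), so filter before scoring.
rows = [
    {"image": "a.jpg", "label": 23},
    {"image": "b.jpg", "label": -1},  # test-split style: no ground truth
    {"image": "c.jpg", "label": 0},
]

labeled = [r for r in rows if r["label"] != -1]
print(len(labeled))  # -> 2
```

With the Hugging Face `datasets` library, the equivalent is `ds.filter(lambda r: r["label"] != -1)`.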
In exchange for permission to use the ImageNet database (the "Database") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:
@article{imagenet15russakovsky,
  author  = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
  title   = {{ImageNet Large Scale Visual Recognition Challenge}},
  year    = {2015},
  journal = {International Journal of Computer Vision (IJCV)},
  doi     = {10.1007/s11263-015-0816-y},
  volume  = {115},
  number  = {3},
  pages   = {211--252}
}