---
size_categories:
- 1K<n<10K
task_categories:
- text-to-image
pretty_name: anime
dataset_info:
  features:
  - name: image
    dtype: image
  splits:
  - name: miku
    num_bytes: 558050909
    num_examples: 1000
  - name: MikumoGuynemer
    num_bytes: 101689673
    num_examples: 298
  - name: aiohto
    num_bytes: 77725840
    num_examples: 138
  - name: mima
    num_bytes: 5008975
    num_examples: 5
  - name: esdeath
    num_bytes: 1603306
    num_examples: 13
  download_size: 742545259
  dataset_size: 744078703
configs:
- config_name: default
  data_files:
  - split: miku
    path: data/miku-*
  - split: MikumoGuynemer
    path: data/MikumoGuynemer-*
  - split: aiohto
    path: data/aiohto-*
  - split: mima
    path: data/mima-*
  - split: esdeath
    path: data/esdeath-*
tags:
- art
---
# anime characters dataset

This is a dataset of anime/manga/2D characters, intended to serve as an encyclopedia of anime characters. Each character has its own split, listed in the metadata above.

The dataset is open source and free to use without any restrictions.
## how to use

```python
from datasets import load_dataset
from huggingface_hub.utils import _runtime

_runtime._is_google_colab = False  # workaround for huggingface_hub issues on Colab

dataset = load_dataset("lowres/anime")
```
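
Since each character lives in its own split, you can also load a single character directly. A minimal sketch, assuming the split names from the metadata above:

```python
from datasets import load_dataset

# load only the "miku" split instead of the whole dataset
miku = load_dataset("lowres/anime", split="miku")

print(miku.num_rows)      # 1000 examples
image = miku[0]["image"]  # decoded as a PIL.Image.Image
image.save("sample.png")
```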
## how to contribute

- To add your own dataset, join the organization, create a new dataset repo, and upload your images there. Alternatively, open a new discussion and we'll check it out.
- To merge your dataset into this repo, run the following code (as shown below):
```python
from huggingface_hub import notebook_login

notebook_login()  # 👈 log in with a token that has write access
# you can create a token under Settings > Access Tokens

from datasets import load_dataset
from huggingface_hub.utils import _runtime

_runtime._is_google_colab = False  # workaround for huggingface_hub issues on Colab

repo_id = "lowres/aiohto"  # 👈 change this to your dataset repo

ds = load_dataset("lowres/anime")  # the merged dataset
ds2 = load_dataset(repo_id)        # your character dataset

# add your dataset's "train" split as a new character split
character_name = repo_id.split("/")[1]
ds[character_name] = ds2["train"]

ds.push_to_hub("lowres/anime")
```
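
After pushing, you can check that your character shows up as a new split. A quick sanity check, assuming the push succeeded:

```python
from datasets import get_dataset_split_names

# your character's name should appear alongside the existing splits
print(get_dataset_split_names("lowres/anime"))
```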