---
dataset_info:
  features:
    - name: input_image
      dtype: image
    - name: edit_prompt
      dtype: string
    - name: edited_image
      dtype: image
    - name: index
      dtype: int64
  splits:
    - name: train
      num_bytes: 8686582352.97
      num_examples: 7265
  download_size: 8686714223
  dataset_size: 8686582352.97
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-nc-4.0
task_categories:
  - image-to-image
  - text-to-image
language:
  - en
tags:
  - ar
size_categories:
  - 1K<n<10K
---

πŸ–ΌοΈ Portrait to Anime Style Tranfer Data

This dataset consists of paired human portraits and their anime-style counterparts, accompanied by descriptive prompts. The portrait images are sourced from the CelebA dataset, and the anime-style counterparts were generated using a combination of state-of-the-art GAN architectures and diffusion models.

It is designed to support a wide range of tasks, including:

- GAN research
- Diffusion model fine-tuning
- Model evaluation
- Benchmarking for image-to-image and text-to-image generation

πŸ“ Dataset Structure

Each sample contains:

- `input_image`: original portrait image (from CelebA)
- `edit_prompt`: text instruction describing the desired style
- `edited_image`: resulting anime-style image after applying the edit
- `index`: integer sample identifier (0 by default)
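Since `input_image` and `edited_image` are stored with the `datasets` Image feature, they decode to PIL images on access. A minimal sketch of inspecting one record (the output file names are illustrative):

```python
from datasets import load_dataset

dataset = load_dataset("murali1729S/portrait_2_avatar", split="train")

sample = dataset[0]
print(sample["edit_prompt"])                # style instruction for this pair
sample["input_image"].save("portrait.png")  # decoded as a PIL.Image
sample["edited_image"].save("anime.png")
```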

## 🚀 How to Use

```python
from datasets import load_dataset

# Replace with your dataset path
dataset = load_dataset("murali1729S/portrait_2_avatar", split="train")
```
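For fine-tuning, the paired images and prompts can be batched into tensors. Below is a minimal sketch using PyTorch; the 256×256 resolution, batch size, and collate function are illustrative assumptions, not requirements of the dataset:

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),  # assumed training resolution
    transforms.ToTensor(),
])

def collate(batch):
    # input_image / edited_image decode to PIL images via the Image feature
    return {
        "input": torch.stack([to_tensor(ex["input_image"].convert("RGB")) for ex in batch]),
        "target": torch.stack([to_tensor(ex["edited_image"].convert("RGB")) for ex in batch]),
        "prompt": [ex["edit_prompt"] for ex in batch],
    }

dataset = load_dataset("murali1729S/portrait_2_avatar", split="train")
loader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=collate)
```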

## 📚 References

This dataset builds upon the following works:

  • W. Xiao et al., "Appearance-Preserved Portrait-to-Anime Translation via Proxy-Guided Domain Adaptation," IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 7, pp. 3104–3120, July 2024. https://doi.org/10.1109/TVCG.2022.3228707

  • Z. Liu, P. Luo, X. Wang, and X. Tang, "Deep Learning Face Attributes in the Wild," in Proceedings of the International Conference on Computer Vision (ICCV), December 2015.