---
license: cc-by-4.0
task_categories:
  - image-to-text
  - text-to-image
language:
  - en
size_categories:
  - 1K<n<10K
---

# Dataset Card for ACON Benchmark

## Dataset Summary

Data from: *Are Any-to-Any Models More Consistent Across Modality Transfers Than Specialists?*

```bibtex
@inproceedings{chung2025are,
  title={Are Any-to-Any Models More Consistent Across Modality Transfers Than Specialists?},
  author={Chung, Jiwan and Yoon, Janghan and Park, Junhyeong and Lee, Sangeyl and Yang, Joowon and Park, Sooyeon and Yu, Youngjae},
  booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  year={2025}
}
```

We provide a controlled benchmark to evaluate consistency in modality transfers of any-to-any models.

Please cite our work if you find our data helpful.

## Language

English

## Dataset Structure

Here's an overview of our dataset structure:

```python
{
    'image_name': str,   # Unique image identifier.
    'image': PIL.Image,
    'description': str,  # Human-annotated detailed caption, written so that the visual details can be faithfully reproduced when used as input to image generators.
    'Q&A': [  # Human-annotated VQA pairs used for VQA-based image similarity evaluation.
      {
        "Question": str,
        "Answer": [
          "T",  # True/False label for the original image.
          "T"   # True/False label for the hidden modified image (not directly used in our experiments).
        ]
      },
      ...
    ],
    'modification': [
      {
        "Prompt": str,  # Image editing prompt.
        "Question": str,
        "Answer": [
          "T",  # True/False label for the edited image.
        ]
      },
      ...
    ]
}
```
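
For instance, once loaded (see Data Splits below), a single example can be inspected like this. A minimal sketch, assuming the fields deserialize into the Python types shown above:

```python
from datasets import load_dataset

# Load the private split and inspect one example.
data = load_dataset("jiwan-chung/ACON", split="private")
example = data[0]

print(example["image_name"])   # unique image identifier
print(example["description"])  # detailed human-annotated caption
example["image"].save("example.png")  # PIL.Image

# Each Q&A entry pairs a question with True/False ("T"/"F") labels.
for qa in example["Q&A"]:
    print(qa["Question"], qa["Answer"][0])  # label for the original image
```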

## Data Instances

See the structure overview above.

## Data Fields

See the structure overview above.

## Data Splits

Data splits can be accessed as:

```python
from datasets import load_dataset

data = load_dataset("jiwan-chung/ACON", split="private")
data = load_dataset("jiwan-chung/ACON", split="coco")
```

## Curation Rationale

Full details are in the paper.

## Source Data

We contribute 500 new private images. The COCO subset consists of images selected from the COCO 2017 dataset.

### Initial Data Collection

Full details are in the paper.

## Annotations

Full details are in the paper.

### Annotation Process

Full details are in the paper.

### Who are the annotators?

Authors of the paper.

## Licensing Information

The annotations and private images we provide are licensed under CC BY 4.0, as stated above. Images from the COCO subset retain their original rights.