---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- ja
size_categories:
- 1K<n<10K
---

# JIC-VQA

# Dataset Description

Japanese Image Classification Visual Question Answering (JIC-VQA) is a benchmark for evaluating Japanese Vision-Language Models (VLMs). We built this benchmark on top of [recruit-jp/japanese-image-classification-evaluation-dataset](https://huggingface.co/datasets/recruit-jp/japanese-image-classification-evaluation-dataset) by adding a question to each sample. All questions are multiple-choice with four options, and we choose options that are closely related to each sample's label in order to increase the task's difficulty.

The images cover 101 types of Japanese food, 30 types of Japanese flowers, 20 types of Japanese facilities, and 10 types of Japanese landmarks.
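The dataset can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal loading sketch; the split name and the column names (`question`, `choices`, `answer`) are assumptions for illustration and may differ from the actual schema, so inspect `dataset.features` after loading.

```python
from datasets import load_dataset

# Load JIC-VQA from the Hugging Face Hub.
# Note: the split name and column names below are assumptions;
# check dataset.features for the actual schema.
dataset = load_dataset("line-corporation/JIC-VQA", split="train")
print(dataset.features)

example = dataset[0]
print(example["question"])  # Japanese question text (assumed field name)
print(example["choices"])   # four answer options (assumed field name)
print(example["answer"])    # ground-truth label (assumed field name)
```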

# Uses

All the images in this dataset are licensed under CC-BY-2.0, CC-BY-NC-2.0, Public Domain Mark 1.0, or Public Domain Dedication. Please note that CC-BY-NC-2.0 prohibits commercial use. Also, CC-BY-2.0, CC-BY-NC-2.0, and Public Domain Mark 1.0 prohibit sublicensing, so the collected image data cannot be published.

# Citation

```
@misc{jic-vqa,
    title={Japanese Image Classification Visual Question Answering (JIC-VQA) Dataset},
    author={Mikihiro Tanaka and Peifei Zhu and Shuhei Yokoo},
    url={https://huggingface.co/line-corporation/JIC-VQA},
}
```