SEED-Bench-2 Card

Benchmark details

Benchmark type: SEED-Bench-2 is a comprehensive large-scale benchmark for evaluating Multimodal Large Language Models (MLLMs), featuring 24K multiple-choice questions with precise human annotations. It spans 27 evaluation dimensions, assessing both text and image generation.
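
For readers who want to inspect the data programmatically, the sketch below shows one way to load the benchmark with the Hugging Face datasets library and print a single question. The repo id, split name, and question fields are assumptions based on this card and the SEED-Bench question format, not a confirmed API of this release.

```python
# Minimal sketch of loading and inspecting the benchmark with the Hugging Face
# `datasets` library. The repo id, split name, and field names below are
# assumptions; adjust them to match the actual release.
from datasets import load_dataset

ds = load_dataset(
    "AILab-CVC/SEED-Bench-2",  # assumed repo id
    split="test",              # assumed split name
    trust_remote_code=True,    # the repo ships a Python loading script
)

sample = ds[0]
print(sample["question"])
for key in ("choice_a", "choice_b", "choice_c", "choice_d"):
    print(f"  {key}: {sample[key]}")
print("ground-truth answer:", sample["answer"])
```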

Benchmark date: SEED-Bench-2 was collected in November 2023.

Paper or resources for more information: https://github.com/AILab-CVC/SEED-Bench

License: Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). Use of the benchmark should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use.

Data Sources:

Please contact us if you believe any data infringes upon your rights, and we will remove it.

Where to send questions or comments about the benchmark: https://github.com/AILab-CVC/SEED-Bench/issues

Intended use

Primary intended uses: SEED-Bench-2 is primarily designed to evaluate Multimodal Large Language Models on text and image generation tasks.
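
As a usage illustration, the sketch below scores a set of multiple-choice predictions against ground-truth answers. The per-sample fields (question_id, answer) and the letter-based prediction format are assumptions, not the official evaluation script; see the GitHub repository above for the actual protocol.

```python
# Minimal sketch of multiple-choice accuracy scoring (not the official
# SEED-Bench evaluation code). Assumes each sample exposes "question_id"
# and a ground-truth "answer" letter, and that `predictions` maps
# question_id -> a predicted letter such as "A".."D".
def accuracy(samples, predictions):
    correct = 0
    total = 0
    for sample in samples:
        qid = sample["question_id"]
        if qid not in predictions:
            continue  # skip questions the model did not answer
        total += 1
        if predictions[qid].strip().upper() == sample["answer"].strip().upper():
            correct += 1
    return correct / total if total else 0.0


# Example usage with toy data:
samples = [{"question_id": "1", "answer": "A"}, {"question_id": "2", "answer": "C"}]
predictions = {"1": "A", "2": "B"}
print(f"accuracy: {accuracy(samples, predictions):.2f}")  # accuracy: 0.50
```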

Primary intended users: Researchers and enthusiasts in computer vision, natural language processing, machine learning, and artificial intelligence are the main target users of the benchmark.
