Scientists' First Exam

Scientific discoveries are driven by complex multimodal reasoning over information-intensive scientific data and domain-specific expertise. With supervision from expert-level scientific benchmarks, scientific multimodal Large Language Models (MLLMs) could significantly enhance this discovery process in realistic workflows. However, current scientific benchmarks inadequately assess the perception, understanding, and reasoning skills MLLMs need for scientific breakthroughs across multiple disciplines. To address this gap, we present the Scientists' First Exam (SFE) benchmark, designed to comprehensively evaluate the scientific cognitive capacities of MLLMs through three interconnected levels: scientific signal perception, scientific attribute understanding, and scientific comparative reasoning. Specifically, SFE comprises 839 expert-verified MQA/VQA pairs spanning 66 multimodal tasks across five high-value disciplines. Extensive experimental results reveal that the current state-of-the-art GPT-4.1 and InternVL-2.5 achieve only 30.8% and 24.43% on SFE, highlighting significant room for MLLMs to improve in scientific realms. We hope the insights obtained from SFE will facilitate further developments in AI-enhanced scientific discovery.
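
A minimal loading sketch with the Hugging Face `datasets` library. The repository path below is a placeholder (the actual Hub id is not stated here), and a single `test` split is assumed based on the splits reported for this dataset:

```python
from datasets import load_dataset

# Placeholder repository path -- substitute the actual Hub id for SFE.
# The dataset is assumed to ship as a single "test" split.
sfe = load_dataset("ORG/SFE", split="test")

print(sfe)     # summary: features and number of QA pairs
print(sfe[0])  # first expert-verified QA example
```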
