
DrVD-Bench: Do Vision-Language Models Reason Like Human Doctors in Medical Image Diagnosis?

Paper · Kaggle · Hugging Face · GitHub

This repository is the official implementation of the paper: DrVD-Bench: Do Vision-Language Models Reason Like Human Doctors in Medical Image Diagnosis?

Introduction

Vision–language models (VLMs) exhibit strong zero-shot generalization on natural images and show early promise in interpretable medical image analysis. However, existing benchmarks do not systematically evaluate whether these models truly reason like human clinicians or merely imitate superficial patterns.
To address this gap, we propose DrVD-Bench, the first multimodal benchmark for clinical visual reasoning. DrVD-Bench consists of three modules: Visual Evidence Comprehension, Reasoning Trajectory Assessment, and Report Generation Evaluation, comprising 7,789 image–question pairs.
Our benchmark covers 20 task types, 17 diagnostic categories, and five imaging modalities—CT, MRI, ultrasound, X-ray, and pathology. DrVD-Bench mirrors the clinical workflow from modality recognition to lesion identification and diagnosis.
We benchmark 19 VLMs (general-purpose & medical-specific, open-source & proprietary) and observe that performance drops sharply as reasoning complexity increases. While some models begin to exhibit traces of human-like reasoning, they often rely on shortcut correlations rather than grounded visual understanding. DrVD-Bench therefore provides a rigorous framework for developing clinically trustworthy VLMs.

cover image

Quick Start

Prepare Environment

pip3 install -r requirements.txt

Obtain DeepSeek API Key

Report-generation evaluation uses DeepSeek to extract report keywords, and models with weaker instruction-following ability can also leverage DeepSeek to extract answers from their raw outputs.
You can apply for an API key on the DeepSeek platform.
For more details, please refer to the official documentation: DeepSeek API Docs.
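
To verify that a key works, note that the DeepSeek API is OpenAI-compatible; a minimal test call looks roughly like the sketch below (the openai Python package, the base URL, and the model name deepseek-chat follow the current official docs and may change):

# Minimal DeepSeek key check (illustrative sketch, not part of this repository)
from openai import OpenAI

client = OpenAI(
    api_key="your_deepseek_api_key",        # key obtained from the DeepSeek platform
    base_url="https://api.deepseek.com",    # OpenAI-compatible endpoint per the DeepSeek docs
)

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Reply with OK if this key works."}],
)
print(resp.choices[0].message.content)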

Obtain Model Outputs

  1. Download the dataset from Kaggle or Hugging Face.
  2. Run inference with your model and append the results to the model_response field in the corresponding files.
  3. model_response format requirements (a minimal write-out sketch follows this list)
    • visual_evidence_qa.jsonl / independent_qa.jsonl: a single option letter, e.g., A / B / C
    • joint_qa.jsonl: a list containing only option letters, separated by commas, e.g., ['B','D','A']
    • report_generation.jsonl: the full generated report as a string
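
As a reference for step 2, the sketch below appends a model_response field to each record; run_model and the file names are placeholders, and only model_response is a field name required by the benchmark:

# Illustrative sketch of appending model_response to a QA file (run_model is a placeholder)
import json

def run_model(record):
    """Replace with your model's inference call."""
    return "A"  # single option letter for visual_evidence_qa.jsonl / independent_qa.jsonl

with open("visual_evidence_qa.jsonl", "r", encoding="utf-8") as fin, \
     open("visual_evidence_qa_with_responses.jsonl", "w", encoding="utf-8") as fout:
    for line in fin:
        record = json.loads(line)
        record["model_response"] = run_model(record)  # keep all original fields unchanged
        fout.write(json.dumps(record, ensure_ascii=False) + "\n")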

Inference Example Using Qwen-2.5-VL-72B API

The Qwen-2.5-VL-72B API can be obtained from the Alibaba Cloud Bailian platform.

· task - joint_qa.jsonl

python qwen2.5vl_example.py \
  --API_KEY="your_qwen_api_key" \
  --INPUT_PATH="/path/to/joint_qa.jsonl" \
  --OUTPUT_PATH="/path/to/result.jsonl" \
  --IMAGE_ROOT="/path/to/benchmark/data/root" \
  --type="joint"

· other tasks

python qwen2.5vl_example.py \
  --API_KEY="your_qwen_api_key" \
  --INPUT_PATH="/path/to/qa.jsonl" \
  --OUTPUT_PATH="/path/to/result.jsonl" \
  --IMAGE_ROOT="/path/to/benchmark/data/root" \
  --type="single"

Mapping Script

Applicable to models with weaker instruction-following ability: if your model cannot format its outputs as specified above, you can use the following script to extract option answers from the model_response field:

python map.py \
  --API_KEY="your_deepseek_api_key" \
  --INPUT_FILE="/path/to/model_result.jsonl" \
  --OUTPUT_FILE="/path/to/model_result_mapped.jsonl"
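
Conceptually, the mapping asks DeepSeek to reduce each free-form answer to a single option letter, roughly as in the sketch below; the prompt wording and field handling are illustrative and may differ from map.py:

# Illustrative answer-mapping sketch; prompt and field handling are assumptions, not map.py itself
import json

from openai import OpenAI

client = OpenAI(api_key="your_deepseek_api_key", base_url="https://api.deepseek.com")

def extract_option(raw_response):
    """Ask DeepSeek to reduce a free-form answer to one option letter."""
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{
            "role": "user",
            "content": "Extract the chosen option letter from the answer below and reply with "
                       "that single capital letter only:\n" + raw_response,
        }],
    )
    return resp.choices[0].message.content.strip()

with open("model_result.jsonl", "r", encoding="utf-8") as fin, \
     open("model_result_mapped.jsonl", "w", encoding="utf-8") as fout:
    for line in fin:
        record = json.loads(line)
        record["model_response"] = extract_option(record["model_response"])
        fout.write(json.dumps(record, ensure_ascii=False) + "\n")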

Compute Metrics

task - visual_evidence_qa.jsonl / independent_qa.jsonl

python compute_choice_metric.py \
  --json_path="/path/to/results.jsonl" \
  --type='single'

task - joint_qa.jsonl

python compute_choice_metric.py \
  --json_path="/path/to/results.jsonl" \
  --type='joint'
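
For intuition, both choice metrics boil down to accuracy over model_response / ground-truth pairs, roughly as sketched below; the ground-truth field name answer and the element-wise scoring of joint lists are assumptions, so use compute_choice_metric.py for the official numbers:

# Illustrative accuracy sketch; the `answer` field name and the joint scoring rule are assumptions
import json

def accuracy(path, qa_type="single"):
    correct, total = 0, 0
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            pred, gold = record["model_response"], record["answer"]
            if qa_type == "joint":
                # joint_qa: assume both are lists of option letters; score element-wise
                correct += sum(p == g for p, g in zip(pred, gold))
                total += len(gold)
            else:
                correct += int(pred == gold)
                total += 1
    return correct / total

print(accuracy("results.jsonl", qa_type="single"))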

task - report_generation.jsonl

python report_generation_metric.py \
  --API_KEY='your_deepseek_api_key' \
  --JSON_PATH='/path/to/results.jsonl'
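
Conceptually, the report metric relies on DeepSeek to extract keywords from the generated and reference reports and then compares the two sets. The sketch below shows one illustrative keyword-recall variant; the actual prompt, field names, and scoring in report_generation_metric.py may differ:

# Illustrative keyword-recall sketch; not the actual scoring used by report_generation_metric.py
from openai import OpenAI

client = OpenAI(api_key="your_deepseek_api_key", base_url="https://api.deepseek.com")

def extract_keywords(report_text):
    """Use DeepSeek to list the key clinical findings in a report, one per line."""
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user",
                   "content": "List the key clinical findings in this report, one per line:\n" + report_text}],
    )
    return {kw.strip().lower() for kw in resp.choices[0].message.content.splitlines() if kw.strip()}

def keyword_recall(generated_report, reference_report):
    """Fraction of reference keywords that also appear among the generated report's keywords."""
    ref_kws = extract_keywords(reference_report)
    gen_kws = extract_keywords(generated_report)
    return len(ref_kws & gen_kws) / max(len(ref_kws), 1)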

Contact
