---
language:
- en
tags:
- medical
---

MedS-Bench

💻Github Repo 🖨️arXiv Paper

The official benchmark for "Towards Evaluating and Building Versatile Large Language Models for Medicine".

Introduction

MedS-Bench is a comprehensive benchmark designed to assess the performance of various large language models (LLMs) in clinical settings. It extends beyond traditional multiple-choice questions to include a wider range of medical tasks, providing a robust framework for evaluating LLM capabilities in healthcare.

The benchmark is structured around 11 high-level clinical task categories, drawn from a collection of 28 existing datasets. These datasets have been reformatted into an instruction-prompted question-answering format, with hand-crafted task definitions that guide the LLM in generating responses. The categories included in MedS-Bench are diverse and cover essential aspects of clinical decision-making and data handling:

  • Multi-choice Question Answering: Tests the ability of LLMs to select correct answers from multiple options based on clinical knowledge.
  • Text Summarization: Assesses the capability to concisely summarize medical texts.
  • Information Extraction: Evaluates how effectively an LLM can identify and extract relevant information from complex medical documents.
  • Explanation and Rationale: Requires the model to provide detailed explanations or justifications for clinical decisions or data.
  • Named Entity Recognition: Focuses on the ability to detect and classify entities within a medical text.
  • Diagnosis: Tests diagnostic skills, requiring the LLM to identify diseases or conditions from symptoms and case histories.
  • Treatment Planning: Involves generating appropriate treatment plans based on patient information.
  • Clinical Outcome Prediction: Assesses the ability to predict patient outcomes based on clinical data.
  • Text Classification: Involves categorizing text into predefined medical categories.
  • Fact Verification: Tests the ability to verify the accuracy of medical facts.
  • Natural Language Inference: Requires deducing logical relationships from medical text.
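
As an illustration of the instruction-prompted format, the minimal sketch below assembles a prompt from a hand-crafted task definition and a multiple-choice question. The definition wording, the question, and the options are hypothetical and invented for demonstration; they are not actual benchmark content.

```python
# Hypothetical illustration of how a task definition and a test case are
# combined into a single instruction-style prompt (not the official
# evaluation script; all texts below are invented for demonstration).
definition = (
    "Given a clinical question and several candidate answers, "
    "select the single best answer and reply with its letter only."
)
question = "Which electrolyte abnormality is most associated with U waves on ECG?"
options = {"A": "Hyperkalemia", "B": "Hypokalemia", "C": "Hypernatremia", "D": "Hypocalcemia"}

prompt = definition + "\n\nQuestion: " + question + "\nOptions:\n"
prompt += "\n".join(f"{letter}. {text}" for letter, text in options.items())
prompt += "\nAnswer:"

print(prompt)  # This string would be sent to the LLM under evaluation.
```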

Notably, because the evaluation involves commercial models such as GPT-4 and Claude 3.5, running the full original test splits would be extremely costly. Therefore, for some benchmarks, we randomly sampled a number of test cases. The cases used to reproduce the results in the paper are provided in MedS-Bench-SPLIT. For more details, please refer to our paper.

Data Format

The data format is the same as that of MedS-Ins:

```json
{
  "Contributors": [""],
  "Source": [""],
  "URL": [""],
  "Categories": [""],
  "Reasoning": [""],
  "Definition": [""],
  "Input_language": [""],
  "Output_language": [""],
  "Instruction_language": [""],
  "Domains": [""],
  "Positive Examples": [{"input": "", "output": "", "explanation": ""}],
  "Negative Examples": [{"input": "", "output": "", "explanation": ""}],
  "Instances": [{"id": "", "input": "", "output": [""]}]
}
```
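
For reference, the sketch below shows one way to read a task file in this format and iterate over its instances. The file name is a placeholder and the prompt template is an assumption for illustration; this is not the official evaluation code.

```python
import json

# Placeholder path: substitute an actual task file from MedS-Bench.
with open("task_example.json", encoding="utf-8") as f:
    task = json.load(f)

definition = task["Definition"][0]            # hand-crafted task instruction
examples = task.get("Positive Examples", [])  # optional in-context demonstrations

for instance in task["Instances"]:
    prompt = definition + "\n\n"
    # Optionally prepend one positive example as an in-context demonstration.
    if examples:
        prompt += (
            f"Example input: {examples[0]['input']}\n"
            f"Example output: {examples[0]['output']}\n\n"
        )
    prompt += f"Input: {instance['input']}\nOutput:"
    references = instance["output"]  # list of acceptable reference answers
    # ... send `prompt` to the model under evaluation and compare its
    # response against `references` with the metric for this task.
```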