---
language:
- en
tags:
- medical
---

# MMedS-Bench

[💻Github Repo](https://github.com/MAGIC-AI4Med/MedS-Ins) [🖨️arXiv Paper](https://arxiv.org/abs/2408.12547)

The official instruction fine-tuning dataset for "Towards Evaluating and Building Versatile Large Language Models for Medicine".

## Introduction

The MedS-Ins dataset is a carefully curated instruction-tuning dataset, built to strengthen the capabilities of large language models (LLMs) on complex medical tasks. It draws on a diverse range of text domains: exams, clinical texts, academic papers, medical knowledge bases, and daily conversations. These domains were selected to cover a wide spectrum of medical knowledge and interaction, providing a well-rounded foundation for training medical LLMs.

## Data Format

The data format is the same as [MedS-Ins](https://huggingface.co/datasets/Henrychur/MedS-Ins).

```json
{
    "Contributors": [""],
    "Source": [""],
    "URL": [""],
    "Categories": [""],
    "Reasoning": [""],
    "Definition": [""],
    "Input_language": [""],
    "Output_language": [""],
    "Instruction_language": [""],
    "Domains": [""],
    "Positive Examples": [ { "input": "", "output": "", "explanation": "" } ],
    "Negative Examples": [ { "input": "", "output": "", "explanation": "" } ],
    "Instances": [ { "id": "", "input": "", "output": [""] } ]
}
```
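As a minimal sketch of working with this format, the snippet below loads one task file and turns each instance into a prompt by prepending the task `Definition` to the instance `input`. The file path and the prompt template are illustrative assumptions, not part of any official evaluation pipeline.

```python
import json

def load_task(path):
    """Load a single MedS-Ins-format task file (one JSON object per file)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def iter_prompts(task):
    """Yield (instance_id, prompt, reference_outputs) triples, joining the
    task Definition and each instance input into one prompt string."""
    definition = task["Definition"][0]
    for inst in task["Instances"]:
        yield inst["id"], f"{definition}\n\n{inst['input']}", inst["output"]

# Demo on an in-memory record matching the schema above (not real data).
example = {
    "Definition": ["Answer the medical question."],
    "Instances": [
        {"id": "0", "input": "What does BP stand for?", "output": ["Blood pressure"]}
    ],
}
for inst_id, prompt, refs in iter_prompts(example):
    print(inst_id, refs)
```

For real use, `load_task` would be pointed at a task JSON file from the repository, and the prompt template adjusted to whatever instruction format the evaluated model expects.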