
For detailed information on data acquisition and processing, please refer to the DiagRL GitHub repository and our paper.

Introduction

We propose a medical retrieval corpus to accelerate the development of medical RAG systems, especially for diagnosis. Here we provide two disease-symptom (phenotype) guideline files and two patient record files.

There are 2 guideline data files in this repository:

Guideline_common.json: This file contains typical symptoms, and their standardized forms, for more than 10k common diseases. The guideline is processed from multiple reliable web sources such as Mayo Clinic, WebMD, NIH, etc.

Guideline_rare.json: This file contains possible symptoms (phenotypes) of rare diseases, including their occurrence rates. This guideline is derived from Orphanet.
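As a quick-start sketch, the guideline files can be read with the standard json module. The schema assumed below (a JSON object mapping each disease name to its symptom entries) is an illustration, not a guarantee; see the DiagRL repository for the authoritative format.

```python
import json

# Load the common-disease guideline. Assumed schema for illustration:
# {disease name: [standardized symptom, ...], ...}
with open("Guideline_common.json", encoding="utf-8") as f:
    guideline = json.load(f)

# Peek at one entry.
disease, symptoms = next(iter(guideline.items()))
print(disease, symptoms)
```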

There are also 2 patient record database files:

PatientRecord_common.json: This is a subset of our patient record database for common diseases.

PatientRecord_rare.json: This is a subset of our patient record database for rare diseases.
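The sketch below illustrates one way the corpus could support diagnosis-oriented retrieval: scoring guideline diseases by symptom overlap with a patient record. The record field used ("symptoms") and the guideline schema are assumptions for illustration, not the released format.

```python
import json

with open("Guideline_common.json", encoding="utf-8") as f:
    guideline = json.load(f)  # assumed: {disease: [symptom, ...], ...}

with open("PatientRecord_common.json", encoding="utf-8") as f:
    records = json.load(f)  # assumed: a list of record dicts

def rank_diseases(query_symptoms, guideline, top_k=5):
    """Rank guideline diseases by naive symptom overlap with the query."""
    query = {s.lower() for s in query_symptoms}
    scored = [
        (disease, len(query & {s.lower() for s in symptoms}))
        for disease, symptoms in guideline.items()
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Hypothetical record field "symptoms"; adjust to the actual schema.
print(rank_diseases(records[0]["symptoms"], guideline))
```

A real RAG pipeline would replace the overlap score with lexical or dense retrieval, but the data flow is the same: patient symptoms in, candidate diagnoses with supporting guideline entries out.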

About Training Data

The training, validation, and testing data come from the following sources:

MIMIC-IV-note. The official data repository is https://physionet.org/content/mimiciv/3.1/ . Pay attention to the ethics statement. Here we use a subset of it, further processed into common and rare parts.

PMC-Patients. This data is fully available at https://huggingface.co/datasets/zhengyun21/PMC-Patients . We further process it into a cleaner subset of patient records related to common diseases (a loading sketch for the public Hugging Face sources follows this list).

MedDialog. This data is from https://huggingface.co/datasets/bigbio/meddialog . Users need to download it through the provided scripts, as the data is continuously growing. We further process it to fit our task.

RareArena. This data is also publicly available at https://huggingface.co/datasets/THUMedInfo/RareArena . It is a subset of PMC-Patients that contains only patients with rare diseases. We use the official version.

RareBench. This data is available at https://huggingface.co/datasets/chenxz/RareBench , and we use the official version without further processing.

Mendeley. This data is freely available at https://data.mendeley.com/datasets/rjgjh8hgrt/2 . We use the English version for the zero-shot test.

Xinhua Hospital. This is in-house data, used for zero-shot evaluation only due to privacy concerns.
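As a hedged sketch, the public Hugging Face sources above can be pulled with the datasets library. Only the repository IDs come from this list; split and configuration names vary per dataset, so we inspect whatever each repo actually exposes.

```python
from datasets import load_dataset

# Loading without a split returns a DatasetDict, which shows the
# available splits and columns for each repository.
pmc_patients = load_dataset("zhengyun21/PMC-Patients")
print(pmc_patients)

rare_arena = load_dataset("THUMedInfo/RareArena")
print(rare_arena)
```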
