
ToxiMol: A Benchmark for Structure-Level Molecular Detoxification

arXiv: https://arxiv.org/abs/2506.10912

Overview

ToxiMol is the first comprehensive benchmark for molecular toxicity repair tailored to general-purpose Multimodal Large Language Models (MLLMs). This is the dataset repository for the paper "Breaking Bad Molecules: Are MLLMs Ready for Structure-Level Molecular Detoxification?".

Key Features

🧬 Comprehensive Dataset

  • 560 representative toxic molecules spanning diverse toxicity mechanisms and varying granularities
  • 11 primary toxicity repair tasks based on the Therapeutics Data Commons (TDC) platform
  • Multi-granular coverage: Tox21 (12 sub-tasks), ToxCast (10 sub-tasks), and 9 additional datasets
  • Multimodal inputs: SMILES strings + 2D molecular structure images rendered using RDKit (see the rendering sketch after this list)
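
RDKit is used to render the 2D structure images. As a minimal sketch, not the authors' rendering pipeline, the snippet below shows how a SMILES string can be drawn to a comparable depiction; the example molecule, output filename, and image size are illustrative assumptions.

from rdkit import Chem
from rdkit.Chem import Draw

smiles = "c1ccccc1O"                     # placeholder molecule (phenol)
mol = Chem.MolFromSmiles(smiles)         # returns None for invalid SMILES
if mol is not None:
    # Write a 2D depiction to disk; filename and size are arbitrary choices
    Draw.MolToFile(mol, "molecule.png", size=(300, 300))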

🎯 Challenging Task Definition

The molecular toxicity repair task requires models to:

  1. Identify potential toxicity endpoints from molecular structures
  2. Interpret semantic constraints from natural language descriptions
  3. Generate structurally similar substitute molecules that eliminate toxic fragments (see the validity check sketched after this list)
  4. Satisfy drug-likeness and synthetic feasibility requirements
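
To make steps 1 and 3 concrete, here is an illustrative check, not part of the official pipeline, that a candidate repair parses as a valid structure and no longer contains a toxic fragment; the candidate SMILES and the nitro-group SMARTS pattern are placeholder assumptions.

from rdkit import Chem

candidate_smiles = "CC(=O)Nc1ccc(O)cc1"              # placeholder model output
toxic_fragment = Chem.MolFromSmarts("[N+](=O)[O-]")  # example toxicophore: nitro group

mol = Chem.MolFromSmiles(candidate_smiles)
if mol is None:
    print("Invalid SMILES: fails the structural validity check")
elif mol.HasSubstructMatch(toxic_fragment):
    print("The example toxic fragment is still present")
else:
    print("Candidate is valid and free of the example fragment")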

πŸ“Š Systematic Evaluation

  • ToxiEval framework: Automated evaluation integrating toxicity prediction, synthetic accessibility, drug-likeness, and structural similarity (two of these criteria are sketched after this list)
  • Comprehensive analysis: Evaluation of ~30 mainstream MLLMs with ablation studies
  • Multi-dimensional metrics: Success rate analysis across different evaluation criteria and failure modes
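
As a rough illustration of the drug-likeness and structural-similarity criteria, a minimal RDKit-based sketch is shown below; it is not the ToxiEval implementation, and the parent/repaired molecules are placeholder examples.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, QED

parent = Chem.MolFromSmiles("O=[N+]([O-])c1ccccc1")  # placeholder toxic parent (nitrobenzene)
repaired = Chem.MolFromSmiles("Nc1ccccc1")           # placeholder repaired molecule (aniline)

# Morgan fingerprints (radius 2) for a Tanimoto structural-similarity score
fp_parent = AllChem.GetMorganFingerprintAsBitVect(parent, 2, nBits=2048)
fp_repaired = AllChem.GetMorganFingerprintAsBitVect(repaired, 2, nBits=2048)
similarity = DataStructs.TanimotoSimilarity(fp_parent, fp_repaired)

drug_likeness = QED.qed(repaired)                    # QED drug-likeness in [0, 1]

print(f"Tanimoto similarity: {similarity:.2f}")
print(f"QED drug-likeness:   {drug_likeness:.2f}")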

Dataset Structure

Task Overview

| Dataset | Task Type | # Molecules | Description |
|---|---|---|---|
| AMES | Binary Classification | 50 | Mutagenicity testing |
| Carcinogens | Binary Classification | 50 | Carcinogenicity prediction |
| ClinTox | Binary Classification | 50 | Clinical toxicity data |
| DILI | Binary Classification | 50 | Drug-induced liver injury |
| hERG | Binary Classification | 50 | hERG channel inhibition |
| hERG_Central | Binary Classification | 50 | Large-scale hERG database with integrated cardiac safety profiles |
| hERG_Karim | Binary Classification | 50 | hERG data from Karim et al. |
| LD50_Zhu | Regression (log(LD50) < 2) | 50 | Acute toxicity |
| Skin_Reaction | Binary Classification | 50 | Adverse skin reactions |
| Tox21 | Binary Classification (12 sub-tasks) | 60 | Nuclear receptors, stress response pathways, and cellular toxicity mechanisms (ARE, p53, ER, AR, etc.) |
| ToxCast | Binary Classification (10 sub-tasks) | 50 | Diverse toxicity pathways including mitochondrial dysfunction, immunosuppression, and neurotoxicity |

Data Structure

Each entry contains:

{
  "task": "string",      // Toxicity task identifier
  "id": "int",           // Molecule ID
  "smiles": "string",    // SMILES representation
  "image": "binary" // 2D molecular structure image binary
}

Available Subdatasets

from datasets import load_dataset

subdatasets = [
    "ames", "carcinogens_lagunin", "clintox", "dili",
    "herg", "herg_central", "herg_karim", "ld50_zhu",
    "skin_reaction", "tox21", "toxcast"
]

# Load every subset into a dict keyed by task name
datasets = {}
for name in subdatasets:
    datasets[name] = load_dataset("DeepYoke/ToxiMol-benchmark", data_dir=name)
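
A usage sketch for inspecting a loaded record follows; it assumes a "train" split and that the image field holds either raw image bytes or a string reference, both of which are assumptions about this repository's layout rather than guarantees.

from io import BytesIO
from PIL import Image

# Usage sketch: inspect one record from the AMES subset (continues from the loop above)
record = datasets["ames"]["train"][0]
print(record["task"], record["id"], record["smiles"])

img = record["image"]
if isinstance(img, (bytes, bytearray)):   # raw image bytes
    Image.open(BytesIO(img)).show()
elif isinstance(img, str):                # e.g. a path or other string reference
    print("image field:", img)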

Experimental Results

Our systematic evaluation of ~30 mainstream MLLMs reveals:

  • Current limitations: Overall success rates remain relatively low across models
  • Emerging capabilities: Models demonstrate initial potential in toxicity understanding, semantic constraint adherence, and structure-aware molecule editing
  • Key challenges: Structural validity, multi-dimensional constraint satisfaction, and failure mode attribution

Citation

If you use this dataset in your research, please cite:

@misc{lin2025breakingbadmoleculesmllms,
      title={Breaking Bad Molecules: Are MLLMs Ready for Structure-Level Molecular Detoxification?}, 
      author={Fei Lin and Ziyang Gong and Cong Wang and Yonglin Tian and Tengchao Zhang and Xue Yang and Gen Luo and Fei-Yue Wang},
      year={2025},
      eprint={2506.10912},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2506.10912}, 
}

License

This project is licensed under the MIT License - see the LICENSE file for details.
