---
license: cc-by-4.0
task_categories:
  - question-answering
language:
  - en
tags:
  - chemistry
  - biology
  - medical
size_categories:
  - 100K<n<1M
pretty_name: Mol-LLaMA-Instruct
---

# Mol-LLaMA-Instruct Data Card

## Introduction

This dataset is used to fine-tune Mol-LLaMA, a molecular LLM built as a general-purpose assistant for molecular analysis. The instruction data are constructed by prompting GPT-4o (gpt-4o-2024-08-06) with IUPAC names and molecular descriptions annotated in PubChem.

## Data Types

This dataset focuses on key molecular features, including structural, chemical, and biological features, and on LLMs' capabilities for molecular reasoning and explainability. The data types and their distributions are as follows:

| Category | # Data |
| --- | ---: |
| Detailed Structural Descriptions | 77,239 |
| Structure-to-Chemical Features Relationships | 73,712 |
| Structure-to-Biological Features Relationships | 73,645 |
| Comprehensive Conversations | 60,147 |
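As a quick sanity check on the size category declared in the metadata, the per-category counts can be summed in a short snippet. The dictionary below is illustrative only (counts copied from the table above, not an official API of this dataset):

```python
# Per-category instruction counts, copied from the table above.
counts = {
    "Detailed Structural Descriptions": 77_239,
    "Structure-to-Chemical Features Relationships": 73_712,
    "Structure-to-Biological Features Relationships": 73_645,
    "Comprehensive Conversations": 60_147,
}

total = sum(counts.values())
print(total)  # 284743, consistent with the 100K<n<1M size category
```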

## Trained Model Card

You can find the trained models, Mol-LLaMA, at the following links: DongkiKim/Mol-Llama-2-7b-chat and DongkiKim/Mol-Llama-3.1-8B-Instruct.

For more details, please see our project page, paper, and GitHub repository.

## Citation

If you find our data useful, please consider citing our work.

```bibtex
@misc{kim2025molllama,
    title={Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular Language Model},
    author={Dongki Kim and Wonbin Lee and Sung Ju Hwang},
    year={2025},
    eprint={2502.13449},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

## Acknowledgements

We appreciate 3D-MoIT for their open-source contributions.