---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- chemistry
- biology
- medical
size_categories:
- 100K<n<1M
pretty_name: Mol-LLaMA-Instruct
---
# Mol-LLaMA-Instruct Data Card
## Introduction
This dataset is used to fine-tune Mol-LLaMA, a molecular LLM designed as a general-purpose assistant for molecular analysis.
The instruction dataset is constructed by prompting GPT-4o-2024-08-06 with annotated IUPAC names and their descriptions from PubChem.
## Data Types
This dataset focuses on key molecular features, including structural, chemical, and biological features, as well as LLMs' capabilities for molecular reasoning and explainability.
The data types and their distributions are as follows:
| Category | # Data |
|----------|-----------|
| Detailed Structural Descriptions | 77,239 |
| Structure-to-Chemical Features Relationships | 73,712 |
| Structure-to-Biological Features Relationships | 73,645 |
| Comprehensive Conversations | 60,147 |
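Below is a minimal sketch for loading and inspecting the dataset with the Hugging Face `datasets` library. The repository ID and the column name `"type"` used for the category distribution are assumptions, not confirmed by this card; check the dataset viewer for the actual schema before relying on them.

```python
# Minimal sketch: load the instruction data and inspect its structure.
# The repo ID and the "type" column are assumptions for illustration.
from collections import Counter

from datasets import load_dataset

# Assumed repository ID; replace it if the actual dataset path differs.
dataset = load_dataset("DongkiKim/Mol-LLaMA-Instruct", split="train")

print(dataset)        # number of rows and column names
print(dataset[0])     # one example instruction record

# Rough distribution over data types, assuming a "type" column exists.
if "type" in dataset.column_names:
    print(Counter(dataset["type"]))
```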
## Trained Model Card
You can find the trained Mol-LLaMA models at the following links: [DongkiKim/Mol-Llama-2-7b-chat](https://huggingface.co/DongkiKim/Mol-Llama-2-7b-chat/) and [DongkiKim/Mol-Llama-3.1-8B-Instruct](https://huggingface.co/DongkiKim/Mol-Llama-3.1-8B-Instruct).
For more details, please see our [project page](https://mol-llama.github.io/), [paper](https://arxiv.org/abs/2502.13449), and [GitHub repository](https://github.com/DongkiKim95/Mol-LLaMA).
## Citation
If you find our data useful, please consider citing our work.
```
@misc{kim2025molllama,
title={Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular Language Model},
author={Dongki Kim and Wonbin Lee and Sung Ju Hwang},
year={2025},
eprint={2502.13449},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
## Acknowledgements
We appreciate [3D-MoIT](https://huggingface.co/datasets/Sihangli/3D-MoIT) for their open-source contributions.