---
license: apache-2.0
tags:
- biomedical
- text-retrieval
- abstract-retrieval
---
# PubMed Dataset Loader (Configurable 2015-2025) - Full Abstract Parsing
## Overview
This repository provides a modified Hugging Face `datasets` loading script for MEDLINE/PubMed data. It is designed to download and parse PubMed baseline XML files, with specific enhancements for **extracting the complete text from structured abstracts**.
This script is based on the original NCBI PubMed dataset loader from Hugging Face and includes modifications by **[Hoang Ha (LIG)](https://www.linkedin.com/in/hoanghavn/)**. The abstract parsing enhancements were contributed by **Tiziri Terkmani (Research Engineer, LIG, Team SIGMA)** for the **[NanoBubble Project](https://nanobubbles.hypotheses.org/)**.
## Key Features
- Parses PubMed baseline XML files (`.xml.gz`).
- **Full Abstract Extraction:** Correctly handles structured abstracts (e.g., BACKGROUND, METHODS, RESULTS) and extracts the complete text, unlike some earlier parsers that truncate structured abstracts to a single segment (see the sketch after this list).
- **Configurable Date Range:** Intended for use with data from **2015 to 2025**, but **requires manual configuration** of the download URLs within the script (`pubmed_fulltext_dataset.py`).
- Generates data compatible with the Hugging Face `datasets` library schema.
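
To illustrate what "full abstract extraction" means in practice, here is a minimal sketch of how the `<AbstractText>` segments of a structured abstract can be concatenated while keeping their section labels. This is an illustration only, not the script's actual code; the function name `extract_full_abstract` is hypothetical.

```python
import xml.etree.ElementTree as ET

def extract_full_abstract(article_elem: ET.Element) -> str:
    """Join every <AbstractText> segment, keeping section labels.

    Illustrative only; the real parsing logic lives in
    pubmed_fulltext_dataset.py and may differ in detail.
    """
    abstract = article_elem.find(".//Abstract")
    if abstract is None:
        return ""
    parts = []
    for seg in abstract.findall("AbstractText"):
        label = seg.get("Label")                 # e.g. "BACKGROUND", "METHODS"
        text = "".join(seg.itertext()).strip()   # include text inside inline tags
        parts.append(f"{label}: {text}" if label else text)
    return " ".join(parts)
```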
## Dataset Information
- **Intended Time Period:** 2015 - 2025 (Requires user configuration)
- **Data Source:** U.S. National Library of Medicine (NLM) FTP server.
- **License:** Apache 2.0 for the script; NLM terms apply to the data itself.
- **Size:** Variable depending on the configured download range. The full 2015-2025 range contains roughly 14 million abstracts.
## !! Important Caution !!
The Python script (`pubmed_fulltext_dataset.py`) **requires manual modification** to download the desired data range (e.g., 2015-2025). The default configuration only downloads a *small sample* of files from the 2025 baseline for demonstration purposes.
**You MUST edit the `_URLs` list in the script** to include the paths to **ALL** the `.xml.gz` files for **each year** you want to include (a sketch for generating these URLs programmatically follows the steps below).
**How to Configure URLs:**
1. Go to the NLM FTP baseline directory: [ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/](ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/)
2. For each year (e.g., 2015, 2016, ..., 2025), identify all the `pubmedYYnXXXX.xml.gz` files. The number of files (`XXXX`) varies per year; the FTP directory listing shows the complete set for a baseline, and each file has an accompanying `.md5` checksum file.
3. Construct the full URL for each file (e.g., `https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmedYYnXXXX.xml.gz`).
4. Add all these URLs to the `_URLs` list in the script. See the comments within the script for examples.
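
Rather than pasting hundreds of URLs by hand, one option is to generate them from the file count of each annual baseline. A hedged sketch, assuming the standard `pubmedYYnXXXX.xml.gz` naming; the file count below is a placeholder that must be taken from the actual FTP listing, and the availability of prior-year baselines under this path should be verified against the NLM server.

```python
BASE = "https://ftp.ncbi.nlm.nih.gov/pubmed/baseline"

def baseline_urls(year_suffix: str, n_files: int) -> list[str]:
    """Build URLs pubmed{YY}n0001.xml.gz ... pubmed{YY}n{n_files:04d}.xml.gz."""
    return [f"{BASE}/pubmed{year_suffix}n{i:04d}.xml.gz"
            for i in range(1, n_files + 1)]

# Placeholder count: check the FTP listing for the real number of files.
_URLs = baseline_urls("25", n_files=1274)
```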
**Use this loader with caution and always verify the scope of the data you have actually downloaded and processed.**
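
Verification can go beyond counting files. Since each baseline file ships with an `.md5` sidecar on the NLM server, a quick integrity check is possible. A minimal sketch; the helper name `md5_ok` is ours, not part of the script.

```python
import hashlib

def md5_ok(path: str, expected_md5: str) -> bool:
    """Compare a downloaded .xml.gz against the digest from its .md5 sidecar."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest() == expected_md5
```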
## Usage
To use this script to load the data (after configuring the `_URLs` list):
```python
from datasets import load_dataset

# Load the dataset (requires the configured `_URLs` list in the script).
dataset = load_dataset(
    "HoangHa/pubmed25_debug",
    split="train",
    trust_remote_code=True,
    cache_dir=".",
)
print(dataset)

# With `split="train"` a single `Dataset` is returned, so index it directly.
print(dataset[0]["MedlineCitation"]["Article"]["Abstract"]["AbstractText"])
```