---
license: mit
language:
  - en
tags:
  - life-sciences
  - clinical
  - biomedical
  - bio
  - medical
  - biology
  - synthetic
pretty_name: TransCorpus-bio
size_categories:
  - 10M<n<100M
---

# TransCorpus-bio

TransCorpus-bio is a large-scale parallel biomedical corpus of PubMed abstracts. It is used in the TransCorpus Toolkit and is designed to enable high-quality multilingual biomedical language modeling and downstream NLP research.

*Currently translated with the TransCorpus Toolkit.*

## Dataset Details

- **Source:** PubMed abstracts (English)
- **Size:** ~22 million abstracts (30.2 GB of text)
- **Domain:** Biomedical, clinical, life sciences
- **Format:** One abstract per line
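
Because the corpus is stored as one abstract per line, it can also be consumed without any special tooling. A minimal sketch, assuming a hypothetical local text export (the filename `transcorpus_bio.txt` in the comment is illustrative, not part of the dataset):

```python
from io import StringIO

def iter_abstracts(handle):
    """Yield one stripped abstract per non-empty line of a text handle."""
    for line in handle:
        line = line.strip()
        if line:
            yield line

# StringIO stands in for open("transcorpus_bio.txt", encoding="utf-8").
sample = StringIO("First abstract.\nSecond abstract.\n\nThird abstract.\n")
print(list(iter_abstracts(sample)))
# ['First abstract.', 'Second abstract.', 'Third abstract.']
```

Reading lazily this way keeps memory use flat even at the corpus's full 30.2 GB size.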

## Motivation

For biomedical NLP, most non-English languages are low-resource, with few large, high-quality corpora available. TransCorpus-bio bridges this gap by leveraging state-of-the-art neural machine translation to generate massive, high-quality synthetic corpora, enabling robust pretraining and evaluation of biomedical language models in the target language.

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("jknafou/TransCorpus-bio", split="train")

print(dataset)
# Dataset({
#     features: ['text'],
#     num_rows: 21567136
# })

print(dataset[0])
```
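Each row is a dict with a single `text` field. As a hedged illustration of pre-filtering before pretraining (plain dicts stand in for dataset rows; the 200-character threshold is an arbitrary example, not a toolkit default):

```python
def keep_long_enough(row, min_chars=200):
    """Return True if the row's abstract has at least min_chars characters."""
    return len(row["text"]) >= min_chars

# Plain dicts mimic the dataset's {'text': ...} rows.
rows = [{"text": "Too short to be useful."}, {"text": "x" * 250}]
print([keep_long_enough(r) for r in rows])
# [False, True]
```

With the real dataset, the same predicate can be passed to `dataset.filter(keep_long_enough)`.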

## Benchmark Results from Our French Experiment

TransBERT-bio-fr, pretrained on TransCorpus-bio-fr, achieves state-of-the-art results on the French biomedical benchmark DrBenchmark, outperforming both general-domain and previous domain-specific models on classification, NER, POS, and STS tasks. See TransBERT-bio-fr for details.

## Why Synthetic Translation?

- **Scalable:** Enables creation of large-scale corpora for any language with a strong MT system.
- **Effective:** Supports state-of-the-art performance on downstream tasks.
- **Accessible:** Makes domain-specific NLP feasible for any language.

## Citation

If you use this corpus, please cite:

```bibtex
@inproceedings{knafou2025transbert,
  title={Trans{BERT}: A Framework for Synthetic Translation in Domain-Specific Language Modeling},
  author={Julien Knafou and Luc Mottin and Ana{\"\i}s Mottaz and Alexandre Flament and Patrick Ruch},
  booktitle={The 2025 Conference on Empirical Methods in Natural Language Processing},
  year={2025},
  url={https://transbert.s3.text-analytics.ch/TransBERT.pdf}
}
```