---
viewer: true
dataset_info:
- config_name: Chinese
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: duration
    dtype: float64
  - name: text
    dtype: string
  - name: traditional_chinese
    dtype: string
  - name: English
    dtype: string
  - name: Vietnamese
    dtype: string
  - name: French
    dtype: string
  - name: German
    dtype: string
  splits:
  - name: train
    num_examples: 1242
  - name: eval
    num_examples: 91
  - name: corrected.test
    num_examples: 225
- config_name: English
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: duration
    dtype: float64
  - name: text
    dtype: string
  - name: Vietnamese
    dtype: string
  - name: Chinese
    dtype: string
  - name: traditional_chinese
    dtype: string
  - name: French
    dtype: string
  - name: German
    dtype: string
  - name: source
    dtype: string
  - name: link
    dtype: string
  - name: type
    dtype: string
  - name: topic
    dtype: string
  - name: icd-10 code
    dtype: string
  - name: speaker
    dtype: string
  - name: role
    dtype: string
  - name: gender
    dtype: string
  - name: accent
    dtype: string
  splits:
  - name: train
    num_examples: 25512
  - name: eval
    num_examples: 2816
  - name: corrected.test
    num_examples: 4751
- config_name: French
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: duration
    dtype: float64
  - name: text
    dtype: string
  - name: English
    dtype: string
  - name: Vietnamese
    dtype: string
  - name: Chinese
    dtype: string
  - name: traditional_chinese
    dtype: string
  - name: German
    dtype: string
  splits:
  - name: train
    num_examples: 1403
  - name: eval
    num_examples: 42
  - name: corrected.test
    num_examples: 344
- config_name: German
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: duration
    dtype: float64
  - name: text
    dtype: string
  - name: English
    dtype: string
  - name: Vietnamese
    dtype: string
  - name: Chinese
    dtype: string
  - name: traditional_chinese
    dtype: string
  - name: French
    dtype: string
  splits:
  - name: train
    num_examples: 1443
  - name: eval
    num_examples: 287
  - name: corrected.test
    num_examples: 1091
- config_name: Vietnamese
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: duration
    dtype: float64
  - name: text
    dtype: string
  - name: English
    dtype: string
  - name: Chinese
    dtype: string
  - name: traditional_chinese
    dtype: string
  - name: French
    dtype: string
  - name: German
    dtype: string
  splits:
  - name: train
    num_examples: 4548
  - name: eval
    num_examples: 1137
  - name: corrected.test
    num_examples: 3437
configs:
- config_name: Chinese
  data_files:
  - split: train
    path: chinese/train-*
  - split: eval
    path: chinese/eval-*
  - split: corrected.test
    path: chinese/corrected.test-*
- config_name: English
  data_files:
  - split: train
    path: english/train-*
  - split: eval
    path: english/eval-*
  - split: corrected.test
    path: english/corrected.test-*
- config_name: French
  data_files:
  - split: train
    path: french/train-*
  - split: eval
    path: french/eval-*
  - split: corrected.test
    path: french/corrected.test-*
- config_name: German
  data_files:
  - split: train
    path: german/train-*
  - split: eval
    path: german/eval-*
  - split: corrected.test
    path: german/corrected.test-*
- config_name: Vietnamese
  data_files:
  - split: train
    path: vietnamese/train-*
  - split: eval
    path: vietnamese/eval-*
  - split: corrected.test
    path: vietnamese/corrected.test-*
task_categories:
- translation
language:
- vi
- en
- de
- zh
- fr
---
|
# MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech Translation |
|
|
|
**<div align="center">Preprint</div>** |
|
|
|
<div align="center">Khai Le-Duc*, Tuyen Tran*,</div> |
|
<div align="center">Bach Phan Tat, Nguyen Kim Hai Bui, Quan Dang, Hung-Phong Tran, Thanh-Thuy Nguyen, Ly Nguyen, Tuan-Minh Phan, Thi Thu Phuong Tran, Chris Ngo,</div> |
|
<div align="center">Nguyen X. Khanh**, Thanh Nguyen-Tang**</div> |
|
|
|
|
|
<div align="center">*Equal contribution</div> |
|
<div align="center">**Equal supervision</div> |
|
|
|
* **Abstract:** |
|
Multilingual speech translation (ST) in the medical domain enhances patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics. In this work, we present, to the best of our knowledge, the first systematic study on medical ST. First, we release *MultiMed-ST*, a large-scale medical ST dataset spanning all translation directions between five languages: Vietnamese, English, German, French, and Chinese (both Traditional and Simplified scripts), together with the accompanying models. With 290,000 samples, it is the largest medical machine translation (MT) dataset and the largest many-to-many multilingual ST dataset across all domains. Second, we present the most extensive analysis in ST research to date, including empirical baselines, a bilingual vs. multilingual comparative study, an end-to-end vs. cascaded comparative study, a task-specific vs. multi-task sequence-to-sequence (seq2seq) comparative study, a code-switching analysis, and a quantitative-qualitative error analysis. All code, data, and models are available online: [https://github.com/leduckhai/MultiMed-ST](https://github.com/leduckhai/MultiMed-ST).
|
|
|
> Please press the ⭐ button and/or cite the paper if you find it helpful.
|
|
|
* **GitHub:** |
|
[https://github.com/leduckhai/MultiMed-ST](https://github.com/leduckhai/MultiMed-ST) |
|
|
|
* **Citation:** |
|
Please cite this paper: [https://arxiv.org/abs/2504.03546](https://arxiv.org/abs/2504.03546) |
|
|
|
```bibtex
@article{le2025multimedst,
  title={MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech Translation},
  author={Le-Duc, Khai and Tran, Tuyen and Tat, Bach Phan and Bui, Nguyen Kim Hai and Dang, Quan and Tran, Hung-Phong and Nguyen, Thanh-Thuy and Nguyen, Ly and Phan, Tuan-Minh and Tran, Thi Thu Phuong and others},
  journal={arXiv preprint arXiv:2504.03546},
  year={2025}
}
```
|
|
|
## Dataset and Models: |
|
|
|
Dataset: [HuggingFace dataset](https://huggingface.co/datasets/leduckhai/MultiMed-ST) |
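
For convenience, here is a minimal loading sketch using the `datasets` library. The config and split names come from the YAML header above; the choice of the English config and of the printed fields is purely illustrative:

```python
from datasets import load_dataset

# Configs: "Chinese", "English", "French", "German", "Vietnamese"
# Splits per config: "train", "eval", "corrected.test"
ds = load_dataset("leduckhai/MultiMed-ST", "English")

sample = ds["train"][0]
print(sample["text"])                    # source-language transcript
print(sample["Vietnamese"])              # translation column (one per target language)
print(sample["audio"]["sampling_rate"])  # 16000 Hz audio
print(sample["duration"])                # clip duration
```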
|
|
|
Fine-tuned models: [HuggingFace models](https://huggingface.co/leduckhai/MultiMed-ST) |
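
Below is a hedged sketch for running one of the fine-tuned checkpoints with `transformers`, assuming a Whisper-style speech seq2seq model; the model ID is a placeholder, so substitute the exact checkpoint name listed in the model repository:

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

# Placeholder ID: replace with the specific fine-tuned checkpoint
# published under https://huggingface.co/leduckhai/MultiMed-ST
model_id = "leduckhai/MultiMed-ST"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id)

# Run inference on one clip from the corrected test split
sample = load_dataset("leduckhai/MultiMed-ST", "English", split="corrected.test")[0]
inputs = processor(sample["audio"]["array"], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```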
|
|
|
## Contact: |
|
|
|
Core developers: |
|
|
|
**Khai Le-Duc** |
|
```
University of Toronto, Canada
Email: [email protected]
GitHub: https://github.com/leduckhai
```
|
|
|
**Tuyen Tran** |
|
```
Hanoi University of Science and Technology, Vietnam
Email: [email protected]
```
|
|
|
**Bui Nguyen Kim Hai** |
|
```
Eötvös Loránd University, Hungary
Email: [email protected]
```
|
|