---
task_categories:
  - automatic-speech-recognition
multilinguality:
  - multilingual
language:
  - en
  - fr
  - de
  - es
tags:
  - music
  - lyrics
  - evaluation
  - benchmark
  - transcription
  - pnc
pretty_name: 'Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark'
paperswithcode_id: jam-alt
configs:
  - config_name: all
    data_files:
      - split: test
        path:
          - metadata.jsonl
          - subsets/*/audio/*.mp3
    default: true
  - config_name: de
    data_files:
      - split: test
        path:
          - subsets/de/metadata.jsonl
          - subsets/de/audio/*.mp3
  - config_name: en
    data_files:
      - split: test
        path:
          - subsets/en/metadata.jsonl
          - subsets/en/audio/*.mp3
  - config_name: es
    data_files:
      - split: test
        path:
          - subsets/es/metadata.jsonl
          - subsets/es/audio/*.mp3
  - config_name: fr
    data_files:
      - split: test
        path:
          - subsets/fr/metadata.jsonl
          - subsets/fr/audio/*.mp3
---

# Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark

## Dataset description

Jam-ALT is a revision of the JamendoLyrics dataset (79 songs in 4 languages), intended for use as an automatic lyrics transcription (ALT) benchmark. It was introduced in the following ISMIR 2024 paper (full citation below):
📄 Lyrics Transcription for Humans: A Readability-Aware Benchmark
👥 O. Cífka, H. Schreiber, L. Miner, F.-R. Stöter
🏢 AudioShake

The lyrics have been revised according to the newly compiled annotation guidelines, which include rules about spelling and formatting as well as punctuation and capitalization (PnC). The audio is identical to that of the JamendoLyrics dataset.

💥 **New:** The dataset now includes line-level timings. They were added in the paper 📄 *Exploiting Music Source Separation for Automatic Lyrics Transcription with Whisper* by J. Syed, I. Meresman-Higgs, O. Cífka, and M. Sandler, presented at the 2025 ICME workshop AI for Music.

**Note:** The dataset is not time-aligned at the word level. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics, which is the standard benchmark for that task.

See the project website for details and the JamendoLyrics community for related datasets.

## Loading the data

```python
from datasets import load_dataset

dataset = load_dataset("jamendolyrics/jam-alt", split="test")
```
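To see what each example contains (including the newly added line-level timings), it is easiest to inspect the loaded dataset directly; the snippet below assumes nothing beyond the dataset itself:

```python
from datasets import load_dataset

dataset = load_dataset("jamendolyrics/jam-alt", split="test")

# List the available columns and peek at one song,
# e.g. to locate the text, audio, and line-timing fields
print(dataset.column_names)
print(dataset[0].keys())
```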

A subset is defined for each language (`en`, `fr`, `de`, `es`); for example, the Spanish songs can be loaded on their own, as shown below.
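A minimal sketch of loading a single-language subset (using only the configuration names defined above):

```python
from datasets import load_dataset

# Passing a language configuration name ("es") loads only that subset
dataset_es = load_dataset("jamendolyrics/jam-alt", "es", split="test")
```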

To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`. Useful arguments to `datasets.Audio()` are (see the sketch after this list):

- `sampling_rate` and `mono=True` to control the sampling rate and the number of channels.
- `decode=False` to skip decoding the audio and just get the MP3 file paths and contents.
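Both options side by side, as a sketch (the 16 kHz target rate is an arbitrary example, not a dataset requirement):

```python
import datasets
from datasets import load_dataset

dataset = load_dataset("jamendolyrics/jam-alt", split="test")

# Decode to 16 kHz mono arrays (e.g. for speech models)
dataset = dataset.cast_column(
    "audio", datasets.Audio(sampling_rate=16_000, mono=True)
)

# Or skip decoding entirely and work with the raw MP3 paths/bytes
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))
```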

## Running the benchmark

The evaluation is implemented in our `alt-eval` package:

```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.3.0", split="test")
# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
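Pinning a `revision` as above is deliberate: it keeps the reference lyrics fixed, so reported numbers stay comparable even if the dataset is updated later.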

For example, the following code can be used to evaluate Whisper:

```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.3.0", split="test")
# Get the raw audio files and let Whisper decode them itself
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```

Alternatively, if you already have transcriptions, you might prefer to skip loading the `audio` column:

```python
dataset = load_dataset(
    "jamendolyrics/jam-alt", revision="v1.3.0", split="test"
).remove_columns("audio")
```

## Citation

When using the benchmark, please cite our ISMIR paper as well as the original JamendoLyrics paper. For the line-level timings, please cite the ICME workshop paper.

```bibtex
@inproceedings{cifka-2024-jam-alt,
  author       = {Ond{\v{r}}ej C{\'{\i}}fka and
                  Hendrik Schreiber and
                  Luke Miner and
                  Fabian{-}Robert St{\"{o}}ter},
  title        = {Lyrics Transcription for Humans: {A} Readability-Aware Benchmark},
  booktitle    = {Proceedings of the 25th International Society for
                  Music Information Retrieval Conference},
  pages        = {737--744},
  year         = {2024},
  publisher    = {ISMIR},
  doi          = {10.5281/zenodo.14877443},
  url          = {https://doi.org/10.5281/zenodo.14877443}
}

@inproceedings{durand-2023-contrastive,
  author       = {Simon Durand and
                  Daniel Stoller and
                  Sebastian Ewert},
  title        = {Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages},
  booktitle    = {2023 {IEEE} International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages        = {1--5},
  address      = {Rhodes Island, Greece},
  year         = {2023},
  doi          = {10.1109/ICASSP49357.2023.10096725}
}

@inproceedings{syed-2025-mss-alt,
  author       = {Jaza Syed and
                  Ivan Meresman-Higgs and
                  Ond{\v{r}}ej C{\'{\i}}fka and
                  Mark Sandler},
  title        = {Exploiting Music Source Separation for Automatic Lyrics Transcription with {Whisper}},
  booktitle    = {2025 {IEEE} International Conference on Multimedia and Expo Workshops (ICMEW)},
  publisher    = {IEEE},
  year         = {2025},
  note         = {to appear}
}
```

## Contributions

The transcripts, originally from the JamendoLyrics dataset, were revised by Ondřej Cífka, Hendrik Schreiber, Fabian-Robert Stöter, Luke Miner, Laura Ibáñez, Pamela Ode, Mathieu Fontaine, Claudia Faller, April Anderson, Constantinos Dimitriou, and Kateřina Apolínová. Line-level timings were automatically transferred from JamendoLyrics and manually corrected by Ondřej Cífka and Jaza Syed to fit the revised transcripts.