Tags: Audio-Text-to-Text, PEFT, Safetensors, English, mistral-lmm

SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning

SonicVerse is a multi-task music captioning model that integrates caption generation with auxiliary music feature detection tasks such as key detection, vocals detection, and more. The model captures both low-level acoustic details and high-level musical attributes through a novel projection-based architecture that transforms audio input into natural language captions while simultaneously detecting music features through dedicated auxiliary heads. Additionally, SonicVerse can generate temporally informed long captions for extended music pieces by chaining outputs from short segments with a large language model, producing detailed, time-aware descriptions that capture the evolving musical narrative.
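To make the multi-task design above concrete, here is a minimal conceptual sketch in PyTorch. It illustrates the idea (a projector feeding the language model, plus dedicated feature heads), not the paper's exact architecture; all module names, dimensions, and the task list are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskProjector(nn.Module):
    """Conceptual sketch only: audio features are projected into the
    language model's embedding space for caption generation, while
    auxiliary heads detect music features (e.g. key, vocals).
    Dimensions and tasks are illustrative, not the paper's values."""

    def __init__(self, audio_dim=768, llm_dim=4096, num_keys=24):
        super().__init__()
        # Projection from audio-encoder features to LLM input embeddings.
        self.projector = nn.Linear(audio_dim, llm_dim)
        # Dedicated auxiliary heads for music feature detection.
        self.key_head = nn.Linear(audio_dim, num_keys)  # key detection
        self.vocals_head = nn.Linear(audio_dim, 1)      # vocals present?

    def forward(self, audio_feats):
        # audio_feats: (batch, time, audio_dim) from an encoder such as MERT.
        tokens = self.projector(audio_feats)   # tokens passed to the LLM
        pooled = audio_feats.mean(dim=1)       # clip-level summary
        key_logits = self.key_head(pooled)
        vocals_logit = self.vocals_head(pooled)
        return tokens, key_logits, vocals_logit
```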

View the demo on our Hugging Face Space

Read the paper: SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning

GitHub: https://github.com/AMAAI-Lab/SonicVerse

How to Get Started

Use the instructions provided in the GitHub repository to run inference locally. Alternatively, try out the model in the Spaces demo. For a programmatic starting point, see the sketch below.
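A minimal sketch of running local inference, assuming the repository exposes a command-line script. The script name "inference.py" and its flags are hypothetical; consult the GitHub README for the actual entry point and arguments.

```python
from huggingface_hub import snapshot_download
import subprocess

# Fetch the SonicVerse checkpoint from the Hugging Face Hub.
ckpt_dir = snapshot_download(repo_id="amaai-lab/SonicVerse")

# Run the repository's inference entry point on a local audio file.
# "inference.py", "--checkpoint", and "--audio" are assumed names/flags;
# see https://github.com/AMAAI-Lab/SonicVerse for the real interface.
subprocess.run(
    ["python", "inference.py", "--checkpoint", ckpt_dir, "--audio", "song.wav"],
    check=True,
)
```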

Citation

If you use SonicVerse, please cite our paper:

@inproceedings{chopra2025sonicverse,
  title={SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning},
  author={Chopra, Anuradha and Roy, Abhinaba and Herremans, Dorien},
  booktitle={Proceedings of the 6th Conference on AI Music Creativity (AIMC 2025)},
  year={2025},
  month={September},
  address={Brussels, Belgium},
  url={https://arxiv.org/abs/2506.15154}
}

DOI: 10.48550/arXiv.2506.15154

Model tree for amaai-lab/SonicVerse

Base model: m-a-p/MERT-v1-95M (this model is an adapter trained on top of the base model)
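Since the Hub lists this model as a PEFT adapter of m-a-p/MERT-v1-95M, a standard adapter load would look like the sketch below. This assumes the repository ships a regular adapter_config; the project's own scripts compose the full captioning pipeline (encoder, projector, language model), so treat this only as a starting point.

```python
from transformers import AutoModel
from peft import PeftModel

# Load the MERT base encoder; MERT ships custom modeling code,
# hence trust_remote_code=True.
base = AutoModel.from_pretrained("m-a-p/MERT-v1-95M", trust_remote_code=True)

# Attach the SonicVerse adapter weights from the Hub (assumes a
# standard PEFT adapter_config in the repository).
model = PeftModel.from_pretrained(base, "amaai-lab/SonicVerse")
```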
