# From Generality to Mastery: Composer-Style Conditioned Music Generation
Trained model weights and training datasets for the paper:
- Mingyang Yao and Ke Chen, "From Generality to Mastery: Composer-Style Symbolic Music Generation via Large-Scale Pre-training," Conference on AI Music Creativity (AIMC), 2025.
Note: Please see our GitHub repo for project details and usage instructions.
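Assuming the released checkpoints are standard PyTorch files (the file name below is hypothetical; the actual names and the matching model definitions live in the GitHub repo), they can be inspected with a few lines:

```python
import torch

# "generality_pretrained.pt" is a hypothetical file name; check the GitHub
# repo for the actual checkpoint names and the matching model definition.
ckpt = torch.load("generality_pretrained.pt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights
n_params = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, {n_params / 1e6:.1f}M parameters")
```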
## Model Architecture
"Generality" Stage
The model learns general musical patterns and knowledge from music across diverse genres.
- Model backbone: 12-layer Transformer with relative positional encoding (see the sketch below)
- Trainable parameters: 39.6M
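As a concrete reference, here is a minimal sketch of one common form of relative positional encoding: a learned per-head bias over query-key offsets, in the spirit of Shaw et al. (2018). The dimensions, the exact variant, and the omitted causal masking are illustrative assumptions; consult the GitHub repo for the real implementation.

```python
import torch
import torch.nn as nn

class RelativeSelfAttention(nn.Module):
    """Self-attention with a learned relative-position bias (illustrative)."""
    def __init__(self, d_model=512, n_heads=8, max_len=1024):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.max_len = max_len
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learned scalar bias per head for every key-minus-query offset.
        self.rel_bias = nn.Parameter(torch.zeros(n_heads, 2 * max_len - 1))

    def forward(self, x):
        B, T, _ = x.shape
        q, k, v = (t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
                   for t in self.qkv(x).chunk(3, dim=-1))
        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        # Offset of each key position relative to each query position,
        # shifted to index into rel_bias; broadcasts over the batch.
        offsets = torch.arange(T)[None, :] - torch.arange(T)[:, None]
        logits = logits + self.rel_bias[:, offsets + self.max_len - 1]
        # (A causal mask would be applied here for autoregressive generation.)
        attn = logits.softmax(dim=-1)
        return self.out((attn @ v).transpose(1, 2).reshape(B, T, -1))

# Usage: a batch of 2 sequences, 128 tokens, model width 512.
y = RelativeSelfAttention()(torch.randn(2, 128, 512))
```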
"Mastery" Stage
The model adapts this general knowledge to the characteristics of specific composers.
- Model backbone: 12-layer Transformer with relative positional encoding, plus adapter modules inserted after every two Transformer layers (as sketched below)
- Trainable parameters: 46M
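The insertion pattern can be sketched as follows, with a residual bottleneck adapter (Houlsby-style) standing in for the paper's adapter design. The dimensions and the adapter internals are assumptions; the 46M figure above counts backbone plus adapters, so no parameters are frozen in this sketch.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, d_model=512, d_bottleneck=256):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class MasteryBackbone(nn.Module):
    """Pre-trained Transformer layers with an adapter after every second layer."""
    def __init__(self, layers, d_model=512):
        super().__init__()
        self.layers = nn.ModuleList(layers)  # the 12 pre-trained layers
        self.adapters = nn.ModuleList(
            Adapter(d_model) for _ in range(len(layers) // 2))

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i % 2 == 1:              # after layers 2, 4, ..., 12
                x = self.adapters[i // 2](x)
        return x
```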
## Citation
If you find this project useful, please cite our paper:
```bibtex
@inproceedings{generalitymastery2025,
  author    = {Mingyang Yao and Ke Chen},
  title     = {From Generality to Mastery: Composer-Style Symbolic Music Generation via Large-Scale Pre-training},
  booktitle = {Proceedings of the Conference on AI Music Creativity ({AIMC})},
  year      = {2025}
}
```