EMusicGen

The EMusicGen dataset comprises four subsets: Analysis, EMOPIA, VGMIDI, and Rough4Q. The EMOPIA and VGMIDI subsets are derived from the MIDI files of their respective source datasets, in which the melody (the V1 voice) of each soundtrack has been converted to ABC notation by a data processing script; these subsets are enriched with enhanced emotional labels. The Analysis subset contains a statistical analysis of the original EMOPIA and VGMIDI datasets, aimed at guiding the enhancement and automatic annotation of musical emotion data. Lastly, the Rough4Q subset is created by merging ABC notation collections from the IrishMAN-XML, EsAC, Wikifonia, Nottingham, JSBach Chorales, and CCMusic datasets. These collections are processed and augmented based on insights from the Analysis subset, followed by rough emotional labeling using the music21 library.

Viewer

https://www.modelscope.cn/datasets/monetjoe/EMusicGen/dataPeview

Maintenance

git clone [email protected]:datasets/monetjoe/EMusicGen
cd EMusicGen

Usage

from datasets import load_dataset

# Emotion-labeled subsets: VGMIDI (default), EMOPIA, or Rough4Q,
# each providing "train" and "test" splits
ds = load_dataset("monetjoe/EMusicGen", name="VGMIDI")
for item in ds["train"]:
    print(item)

for item in ds["test"]:
    print(item)

# The Analysis subset only provides a "train" split
ds = load_dataset("monetjoe/EMusicGen", name="Analysis", split="train")
for item in ds:
    print(item)

Analysis

Statistical values

| Feature | Min   | Max    | Range  | Median | Mean   |
|---------|-------|--------|--------|--------|--------|
| tempo   | 47.85 | 184.57 | 136.72 | 117.45 | 119.38 |
| pitch   | 36.0  | 89.22  | 53.22  | 60.98  | 61.38  |
| range   | 2.0   | 91.0   | 89.0   | 47.0   | 47.47  |
| pitchSD | 0.64  | 24.82  | 24.18  | 12.91  | 13.09  |
| volume  | 0.02  | 0.17   | 0.16   | 0.09   | 0.09   |
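
The exact extraction script is not reproduced here, but features of this kind can be computed with music21 roughly as in the sketch below; the file name is a placeholder, and volume (RMS) is omitted because it requires audio rendering.

# Sketch only (not the authors' script) of computing the tabulated features.
import statistics
from music21 import converter

score = converter.parse("example.musicxml")     # hypothetical input file
pitches = [n.pitch.midi for n in score.flatten().notes if n.isNote]

marks = score.metronomeMarkBoundaries()
tempo = marks[0][2].number if marks else 120    # first tempo mark, else a default

features = {
    "tempo": tempo,
    "pitch": statistics.mean(pitches),          # mean MIDI pitch
    "range": max(pitches) - min(pitches),       # pitch range in semitones
    "pitchSD": statistics.stdev(pitches),       # pitch standard deviation
}
print(features)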

Pearson correlation table

| Emo-feature         | r       | Correlation   | p-value   | Significance           |
|---------------------|---------|---------------|-----------|------------------------|
| valence - tempo     | +0.0621 | weak positive | 2.645e-02 | p<0.05, significant    |
| valence - pitch     | +0.0109 | weak positive | 6.960e-01 | p>=0.05, insignificant |
| valence - range     | -0.0771 | weak negative | 5.794e-03 | p<0.05, significant    |
| valence - key       | +0.0119 | weak positive | 6.705e-01 | p>=0.05, insignificant |
| valence - mode      | +0.3880 | positive      | 3.640e-47 | p<0.05, significant    |
| valence - pitchSD   | -0.0666 | weak negative | 1.729e-02 | p<0.05, significant    |
| valence - direction | +0.0010 | weak positive | 9.709e-01 | p>=0.05, insignificant |
| valence - volume    | +0.1174 | weak positive | 2.597e-05 | p<0.05, significant    |
| arousal - tempo     | +0.1579 | weak positive | 1.382e-08 | p<0.05, significant    |
| arousal - pitch     | -0.1819 | weak negative | 5.714e-11 | p<0.05, significant    |
| arousal - range     | +0.3276 | positive      | 2.324e-33 | p<0.05, significant    |
| arousal - key       | +0.0030 | weak positive | 9.138e-01 | p>=0.05, insignificant |
| arousal - mode      | -0.0962 | weak negative | 5.775e-04 | p<0.05, significant    |
| arousal - pitchSD   | +0.3511 | positive      | 2.201e-38 | p<0.05, significant    |
| arousal - direction | -0.0958 | weak negative | 6.013e-04 | p<0.05, significant    |
| arousal - volume    | +0.3800 | positive      | 3.558e-45 | p<0.05, significant    |
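
For reference, correlation coefficients and p-values of this kind can be computed with SciPy as in the sketch below; the two arrays are placeholders, not values from the dataset.

# Sketch only: Pearson r with its p-value and the p<0.05 significance check.
import numpy as np
from scipy.stats import pearsonr

valence = np.array([0.9, -0.4, 0.7, 0.1, -0.8, 0.3])      # placeholder ratings
tempo = np.array([128.0, 92.0, 140.0, 110.0, 76.0, 118.0])  # placeholder tempi

r, p = pearsonr(valence, tempo)
label = "significant" if p < 0.05 else "insignificant"
print(f"r={r:+.4f}, p={p:.3e}, {label}")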

Feature distribution

Feature Distribution chart
key
pitch
range
pitchSD
tempo
volume
mode
direction

Processed EMOPIA & VGMIDI

The processed EMOPIA and processed VGMIDI datasets will be used to evaluate the error-free rate of music scores generated after fine-tuning the backbone on existing emotion-labeled datasets. It is therefore essential to ensure that the processed data is compatible with the input format required by the pre-trained backbone.

We found that the average number of measures in the dataset used for pre-training the backbone is approximately 20, and that the maximum number of measures supported by the pre-trained backbone's input is 32. Consequently, we converted the original EMOPIA and VGMIDI data into XML scores, filtered out erroneous items, and segmented the scores into chunks of 20 measures each. An ending marker was appended to each chunk so that the model does not generate endlessly when repetitive melodies never show it a terminating mark. For the trailing segment of each score, if it exceeded 10 measures it was kept as a separate slice; otherwise it was merged into the preceding slice. This ensures that no resulting score slice exceeds 30 measures, keeping all slices within the maximum measure limit supported by the backbone, with an average of approximately 20 measures.
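
As an illustration only, this slicing rule can be sketched as follows; the helper name and the "|]" end marker are assumptions rather than the authors' exact implementation.

# Illustrative sketch of the 20-measure slicing rule (not the authors' code).
def slice_score(measures, chunk_size=20, min_tail=10, end_marker="|]"):
    chunks = [measures[i:i + chunk_size] for i in range(0, len(measures), chunk_size)]
    # Merge a short trailing segment (<= 10 measures) into the previous slice;
    # keep a longer one as its own slice, so no slice exceeds 30 measures.
    if len(chunks) > 1 and len(chunks[-1]) <= min_tail:
        chunks[-2].extend(chunks.pop())
    # Every slice ends with a terminating mark so the model never learns to
    # generate endlessly on repetitive melodies.
    return [chunk + [end_marker] for chunk in chunks]

slices = slice_score([f"m{i}" for i in range(1, 48)])  # 47 dummy measures
print([len(s) - 1 for s in slices])                    # -> [20, 27] measures per slice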

Note that current MIDI-to-XML conversion tools cannot fold expanded repeats back into repeat sections. In fact, after converting the dataset used for pre-training the backbone into MIDI and expanding all repeat sections, the average number of measures was approximately 35. However, because of the maximum measure limit enforced during pre-training, repeat sections were not expanded at that stage, and since a repeat marker itself occupies only two characters, we could not use 35 measures as the slicing unit even for the MIDI-derived data.

Subsequently, we converted the segmented XML slices into ABC notation, performed data augmentation by transposing each slice to 15 keys, and extracted the melodic lines and control codes to produce the final processed EMOPIA and processed VGMIDI datasets. Both datasets share the same three-column structure: the first column is the control code, the second column is the ABC notation characters, and the third column contains the 4Q emotion label inherited from the original dataset. The totals are 21,480 samples for processed EMOPIA and 9,315 for processed VGMIDI, each split into training and test sets at a 10:1 ratio. As the correlation table shows, there is almost no correlation between emotion and key, so transposing to 15 keys is unlikely to significantly affect the label distribution.
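
A minimal sketch of such a 15-key transposition with music21 follows, assuming "15 keys" means the 15 standard major-key signatures (C plus one to seven sharps and one to seven flats); the input path and the exact augmentation scheme are assumptions.

# Sketch only: transpose one score slice to 15 target keys with music21.
from music21 import converter, interval, pitch

TONICS = ["C", "G", "D", "A", "E", "B", "F#", "C#",
          "F", "B-", "E-", "A-", "D-", "G-", "C-"]   # music21 uses "-" for flat

score = converter.parse("slice.musicxml")            # hypothetical 20-measure slice
tonic = score.analyze("key").tonic                   # estimated tonic of the slice

augmented = [score.transpose(interval.Interval(tonic, pitch.Pitch(t))) for t in TONICS]
print(len(augmented))                                # -> 15 transposed copies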

Data source of Rough4Q

The Rough4Q dataset is a large-scale dataset created by automatically annotating a substantial amount of well-structured sheet music based on the conclusions of the correlation statistics. Its data sources include both scores in the XML family of formats (XML / MXL / MusicXML) and scores in ABC notation. Note that not all of the source datasets include chord markings; since this paper focuses solely on melody generation, the absence of chord information is not a significant concern for the current study. After filtering out erroneous or duplicated scores and consolidating the rest into a unified XML format, we used music21 to rapidly extract features. Due to the high volume of data, we chose a few representative and computationally manageable features for approximate emotional annotation.

According to the correlation statistics, valence is significantly positively correlated only with mode, so mode was selected as the feature for the valence dimension, with minor mode classified as low valence and major mode as high valence. Arousal is significantly positively correlated with pitch range, pitch SD, and RMS volume. Because computing RMS requires audio rendering, which is impractical for large-scale automatic annotation, it was excluded. Of the two remaining features, arousal correlates more strongly with pitch SD; moreover, pitch SD not only partially reflects pitch range but also indicates the intensity of musical variation, providing richer information. We therefore tentatively select pitch SD as the benchmark for the arousal dimension, classifying scores below the median as low arousal and those above the median as high arousal. This yields a rough Russell 4Q label based on the valence/arousal quadrant.
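
A minimal sketch of this rough labeling rule follows; the quadrant numbering convention and the placeholder median are assumptions, not the authors' exact script.

# Sketch only: valence from mode, arousal from pitch SD vs. the corpus median.
# Quadrant numbering is assumed to follow Russell's circumplex as in EMOPIA:
# Q1 = +V/+A, Q2 = -V/+A, Q3 = -V/-A, Q4 = +V/-A.
import statistics
from music21 import converter

def rough_4q_label(path, pitch_sd_median):
    score = converter.parse(path)
    high_valence = score.analyze("key").mode == "major"   # major -> high valence
    pitches = [n.pitch.midi for n in score.flatten().notes if n.isNote]
    high_arousal = statistics.stdev(pitches) > pitch_sd_median
    if high_valence:
        return "Q1" if high_arousal else "Q4"
    return "Q2" if high_arousal else "Q3"

# The median pitch SD would first be computed over the whole corpus;
# 13.0 is only a placeholder.
print(rough_4q_label("tune.musicxml", pitch_sd_median=13.0))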

This rough, noisy labeling primarily serves to record the state of mode and pitch SD as emotion-related embeddings, keeping the format consistent with the two processed datasets, EMOPIA and VGMIDI. We then applied the same data processing methods described above, preserving the labels while segmenting the scores. Notably, IrishMAN is also the dataset used for backbone pre-training, but its original processing discards scores longer than 32 measures, causing a significant loss of data; in contrast, our segmentation approach preserves these longer scores.

After processing, we found the data to be highly imbalanced: the quantities of Q3 and Q4 labels differed from the other categories by an order of magnitude. To address this, we applied the 15-key transposition augmentation only to the Q3 and Q4 categories. The resulting Rough4Q dataset comprises approximately 521K samples in total, split into training and test sets at a 10:1 ratio.
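
The balancing and splitting step might look roughly like the sketch below; the sample layout, the transposition helper, and whether original Q3/Q4 slices are kept alongside their transposed copies are all assumptions.

# Sketch only: balance Q3/Q4 by 15-key transposition, then split 10:1.
# `transpose_15_keys` is a hypothetical helper returning the 15 transposed
# versions of one (control_code, abc_chars, label) sample.
import random

def balance_and_split(samples, transpose_15_keys, seed=42):
    balanced = []
    for sample in samples:
        if sample[2] in ("Q3", "Q4"):
            balanced.extend(transpose_15_keys(sample))  # augment minority labels
        else:
            balanced.append(sample)
    random.Random(seed).shuffle(balanced)
    cut = len(balanced) * 10 // 11                      # 10:1 train/test ratio
    return balanced[:cut], balanced[cut:]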

Statistics

(The original table also includes a pie chart for each subset.)

| Dataset  | Total  | Train  | Test  |
|----------|--------|--------|-------|
| Analysis | 1278   | 1278   | -     |
| VGMIDI   | 9315   | 8383   | 932   |
| EMOPIA   | 21480  | 19332  | 2148  |
| Rough4Q  | 520673 | 468605 | 52068 |

Mirror

The data processor is also included in the ModelScope mirror: https://www.modelscope.cn/datasets/monetjoe/EMusicGen

Cite

@article{Zhou2024EMusicGen,
  title     = {EMusicGen: Emotion-Conditioned Melody Generation in ABC Notation},
  author    = {Monan Zhou and Xiaobing Li and Feng Yu and Wei Li},
  month     = {Sep},
  year      = {2024},
  publisher = {GitHub},
  version   = {0.1},
  url       = {https://github.com/monetjoe/EMusicGen}
}