1000 songs were selected from the Free Music Archive (FMA). The annotated excerpts are available in the same package, with song ids 1 to 1000. We identified some redundancies, which reduced the dataset to 744 songs. The dataset is split into a development set (619 songs) and an evaluation set (125 songs). The extracted 45-second excerpts are all re-encoded to the same sampling frequency, i.e., 44100 Hz. Full songs are also provided in the same package. Each 45-second excerpt starts at a random (uniformly distributed) point in the song.

The continuous annotations were collected at a sampling rate that varied with browsers and computer capabilities. We therefore resampled the annotations and generated averaged annotations at a 2 Hz sampling rate. In addition to the average, we provide the standard deviation of the annotations, which gives an idea of the margin of error. The continuous annotations range between -1 and +1 and exclude the first 15 seconds, because the annotations are unstable at the start of the clips. To combine the annotations collected for the whole song on a nine-point scale, we report the average and the standard deviation of the ratings, which range from one to nine. A detailed explanation of the data collection methods, as well as baseline results, is provided in our CrowdMM paper. The submission results of the three teams who participated in the MediaEval 2013 "Emotion in Music" task are available in the manual.
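The resampling and aggregation step described above can be sketched as follows. This is a minimal illustration, not the released processing code: the function names, the interpolation choice (linear), and the per-rater input format (time/value arrays) are assumptions; only the 2 Hz rate, the 45-second clip length, and the 15-second exclusion come from the description above.

```python
import numpy as np

def resample_annotation(times, values, clip_len=45.0, rate=2.0, skip=15.0):
    """Resample one rater's continuous annotation onto a uniform 2 Hz grid,
    dropping the first 15 seconds (unstable region)."""
    grid = np.arange(skip, clip_len, 1.0 / rate)      # 2 Hz grid: 15.0 s .. 44.5 s
    return grid, np.interp(grid, times, values)       # linear interpolation (assumed)

def aggregate_raters(raters, clip_len=45.0, rate=2.0, skip=15.0):
    """Average several raters' annotations on the common 2 Hz grid and
    return the per-frame standard deviation as a margin-of-error estimate."""
    resampled = np.vstack([
        resample_annotation(t, v, clip_len, rate, skip)[1] for t, v in raters
    ])
    return resampled.mean(axis=0), resampled.std(axis=0)
```

With a 45-second clip, a 2 Hz grid, and the first 15 seconds excluded, each averaged annotation has 60 frames.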
This database was developed by the organizers of the "Emotion in Music" task: Mohammad Soleymani, Mike N. Caro, Erik M. Schmidt, Cheng-Ya Sha, and Yi-Hsuan Yang. We also acknowledge the contributions of Anna Aljanaki, who extracted the features and spotted some of the problems with the initial version. We also thank all the other participants of the task in 2013.