Music emotion recognition (MER) studies have made great progress in detecting the emotions of music segments and analyzing the emotional dynamics of songs. However, for certain real-life applications, the overall emotion and depth of an entire song may be more suitable. This study focuses on recognizing the overall emotion and depth of entire songs. First, we constructed a public dataset of 3839 popular songs in China (PSIC3839) by conducting an online experiment to collect arousal, valence, and depth annotations for each song. Second, we used a handcrafted-feature-based method to predict the overall emotion and depth values. Support vector regressions using Mel-frequency cepstral coefficient (MFCC) features as inputs achieved good model performance (arousal: R² = 0.609; valence: R² = 0.354; depth: R² = 0.465). Finally, groupwise and personalized results were also investigated by training a separate regressor for each group or individual, which provides a reference for future research.
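The modeling setup described in the abstract — a support vector regression trained on song-level MFCC features to predict a continuous value such as arousal — can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the feature dimensionality, kernel, hyperparameters, and the synthetic data are all assumptions made here for demonstration.

```python
# Sketch of an SVR on MFCC-style song-level features, in the spirit of the
# approach described in the abstract. Features and labels are synthetic;
# dimensions and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Assume each song is summarized by 40 statistics, e.g. the means and
# standard deviations of 20 MFCCs aggregated over the whole track.
n_songs, n_features = 200, 40
X = rng.normal(size=(n_songs, n_features))
# Synthetic target loosely tied to the features (stand-in for arousal labels).
y = X[:, :5].mean(axis=1) + 0.1 * rng.normal(size=n_songs)

# Standardize features, then fit an RBF-kernel SVR (a common default choice).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X[:150], y[:150])

# Predict one overall arousal value per held-out song.
pred = model.predict(X[150:])
print(pred.shape)
```

The same pipeline would be trained once per target dimension (arousal, valence, depth), and the paper's groupwise and personalized variants would amount to fitting a separate such regressor per group or listener.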