In this talk I will provide an overview of my research in the field of Music Information Retrieval (MIR), which aims to understand how humans describe music and to emulate those descriptions with computational models operating on large music collections. By integrating knowledge from signal processing, music theory, cognition and artificial intelligence, we have developed methods to automatically describe music audio signals in terms of melody, tonality and rhythm; to measure similarity between pieces; and to automatically classify music according to style, emotion or culture. In recent years, we have focused on two application contexts. On the one hand, we seek to innovate the way classical music concerts are experienced. On the other hand, we study the computational modeling of flamenco music, improving current techniques for singing voice description and style classification.