Our paper on melodic similarity is finally online! The paper is titled
Melodic Contour and Mid-Level Global Features Applied to the Analysis of Flamenco Cantes
This work focuses on the topic of melodic characterization and similarity in a specific musical repertoire: a cappella flamenco singing, more specifically in debla and martinete styles. We propose the combination of manual and automatic description. First, we use a state-of-the-art automatic transcription method to account for general melodic similarity from music recordings. Second, we define a specific set of representative mid-level melodic features, which are manually labelled by flamenco experts. Both approaches are then contrasted and combined into a global similarity measure. This similarity measure is assessed by inspecting the clusters obtained through phylogenetic algorithms and by relating similarity to categorization in terms of style. Finally, we discuss the advantage of combining automatic and expert annotations as well as the need to include repertoire-specific descriptions for meaningful melodic characterization in traditional music collections.
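For readers curious about the automatic side, contour comparison can be sketched in a few lines. This is only an illustrative example, not the method used in the paper: it compares toy melodic contours (plain lists of MIDI pitch values, all invented for the example) with dynamic time warping, a common choice for melodic similarity, and distances like these are the kind of input a phylogenetic algorithm can work from.

```python
# Illustrative sketch (not the paper's implementation): comparing
# melodic contours with dynamic time warping (DTW).

def dtw_distance(a, b):
    """Classic DTW between two pitch sequences, O(len(a)*len(b))."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Toy contours in MIDI note numbers: contour_c is contour_a transposed
# up a whole tone, so it should be closer to it than contour_b is.
contour_a = [60, 62, 64, 62, 60]
contour_b = [60, 67, 66, 65, 64]
contour_c = [62, 64, 66, 64, 62]

print(dtw_distance(contour_a, contour_c))  # → 6.0
print(dtw_distance(contour_a, contour_b))  # → 14.0
```

A full pairwise distance matrix over a collection of such contours is what clustering or tree-building methods would then consume.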
This is the result of joint work within the COFLA group, where I contribute technologies for the automatic transcription and melodic description of music recordings.
This is an example of how we compare flamenco tonás using melodic similarity and phylogenetic trees:
And this is a video example of the type of styles we analyze in this paper, created by Nadine Kroher based on her work at the MTG:
You can read the full paper online:
Current experiment (updated October 2015)
We are running an experiment on note segmentation in flamenco, in order to understand the mechanisms behind manual transcriptions and improve our automatic transcription methods.
You can help by completing this exercise, in which you segment 10 short flamenco excerpts into notes (it takes less than an hour of your time), and you will have the chance to listen in detail to some flamenco singing.
My current research in music information retrieval also addresses flamenco music, especially flamenco singing. I am interested in understanding and modelling, with computational tools, the way humans transcribe flamenco music, in order to generate automatic transcriptions of flamenco performances. Transcriptions are useful for musical analysis in terms of scale, patterns and style. More info on the context of my research can be found at the COFLA web site.
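As a rough idea of what note segmentation involves, here is a minimal sketch, not our actual transcription system: it assumes a frame-wise pitch contour (in MIDI pitch, one value per frame, 0 marking unvoiced frames, with a hypothetical 10 ms hop) has already been extracted, and it starts a new note whenever the rounded pitch changes or the voicing flips.

```python
# Minimal sketch (not our actual system): segmenting a frame-wise
# pitch track into note events. All parameter values are illustrative.

def segment_notes(f0_midi, hop_seconds=0.01):
    """Return a list of (onset_s, duration_s, midi_pitch) note events."""
    notes = []
    start, current = 0, None
    for i, f0 in enumerate(f0_midi + [0]):       # sentinel to flush last note
        pitch = round(f0) if f0 > 0 else None    # None = unvoiced frame
        if pitch != current:
            if current is not None:              # close the running note
                notes.append((start * hop_seconds,
                              (i - start) * hop_seconds,
                              current))
            start, current = i, pitch
    return notes

# Toy contour: 5 frames around E4 (MIDI 64), 3 unvoiced frames,
# then 4 frames around F4 (MIDI 65).
track = [63.9, 64.1, 64.0, 64.2, 63.8, 0, 0, 0, 65.1, 64.9, 65.0, 65.2]
print(segment_notes(track))
# → [(0.0, 0.05, 64), (0.08, 0.04, 65)]
```

Real sung material is far messier than this toy contour (ornaments, glides, microtonality), which is exactly why we study how human transcribers segment it.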
Our review article on melody extraction algorithms for the IEEE Signal Processing Magazine is finally available online! The printed edition will be coming out in March 2014.
I believe (and not just because I am a co-author!) that it will become a key reference in the Music Information Retrieval area and beyond, as it provides a very nice overview of approaches, challenges and applications for melody extraction from polyphonic music signals. Justin Salamon was the main author (congratulations, Justin!), and the paper has benefited from the contributions of two key experts, Gaël Richard and Dan Ellis, with whom I had the chance to collaborate on a previous comparative study on melody extraction published in IEEE TASLP (128 citations according to Google Scholar).
Finally, I very much like this kind of tutorial paper, which provides a comprehensive introduction to a given topic with a very attractive design. I hope you will enjoy it!
J. Salamon, E. Gómez, D. P. W. Ellis and G. Richard, “Melody Extraction from Polyphonic Music Signals: Approaches, Applications and Challenges”, IEEE Signal Processing Magazine, 31(2):118-134, Mar. 2014.
Abstract—Melody extraction algorithms aim to produce a sequence of frequency values corresponding to the pitch of the dominant melody from a musical recording. Over the past decade melody extraction has emerged as an active research topic, comprising a large variety of proposed algorithms spanning a wide range of techniques. This article provides an overview of these techniques, the applications for which melody extraction is useful, and the challenges that remain. We start with a discussion of ‘melody’ from both musical and signal processing perspectives, and provide a case study which interprets the output of a melody extraction algorithm for specific excerpts. We then provide a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign. We discuss issues of algorithm design, evaluation and applications which build upon melody extraction. Finally, we discuss some of the remaining challenges in melody extraction research in terms of algorithmic performance, development, and evaluation methodology.
For further information about this article please visit Justin Salamon’s research page.
This is the video of my keynote talk at FMA 2013, titled “Towards Computer-Assisted Transcription and Description of Music Recordings”. Below is the abstract of the talk. I hope you will like it!
Automatic transcription, i.e. computing a symbolic musical representation from a music recording, is one of the main research challenges in the field of sound and music computing. For monophonic material the obtained transcription is a single musical line, usually a melody, while for polyphonic music the interest lies in transcribing the predominant melodic line. In addition to transcription, current technologies can extract other musical descriptions related to tonality, rhythm or instrumentation from music recordings. Automatic description could potentially complement traditional methodologies for music analysis.
In this talk I present the state of the art in automatic transcription and description of music audio signals, illustrated with our own research on tonality estimation, melodic transcription and rhythmic characterization. I show that, although current research is promising, current algorithms are still limited in accuracy, and there is a semantic gap between automatic feature extractors and expert analyses.
Moreover, I present some strategies to address these challenges: developing methods adapted to different repertoires, and integrating expert knowledge into computational models as a way to build systems following a “computer-assisted” paradigm.
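To give a flavour of what tonality estimation looks like, here is a hedged sketch of classic template-based key estimation in the Krumhansl–Schmuckler style. It is an illustration of one well-known approach, not necessarily the method used in my research: a 12-bin pitch-class histogram is correlated against the 24 rotated Krumhansl–Kessler key profiles, and the best-correlating key wins.

```python
# Illustrative sketch: template-based key estimation using the
# Krumhansl-Kessler major/minor key profiles. The input is a 12-bin
# pitch-class histogram (C, C#, ..., B); the toy values are made up.

MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def correlation(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def estimate_key(chroma):
    """Correlate the histogram with all 24 rotated profiles; keep the best."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            # Rotate the profile so index `tonic` carries the tonic weight.
            rotated = [profile[(i - tonic) % 12] for i in range(12)]
            r = correlation(chroma, rotated)
            if best is None or r > best[0]:
                best = (r, f"{NAMES[tonic]} {mode}")
    return best[1]

# Toy histogram: C major scale tones, tonic and dominant emphasized.
chroma = [3, 0, 1, 0, 1, 1, 0, 2, 0, 1, 0, 1]
print(estimate_key(chroma))  # → C major
```

In practice one would build the histogram from chroma features extracted from audio rather than from symbolic counts, but the matching step is the same idea.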
Last week I attended my favorite conference, the International Society for Music Information Retrieval (ISMIR) conference, which took place in Porto, Portugal. I gave a presentation on our flamenco project; if you are interested, these are the slides.
It was a very intense conference, where I attended many excellent presentations and got many great ideas for future research. I especially enjoyed the last-minute demo session, which was something different from what I am used to.
Now, back to work!
It’s a pity I cannot attend this year’s ISMIR, as the venue is lovely (I have some very good friends in Miami) and the program looks excellent too. It’s not easy with two small children to be away for so long! In any case, there are some small contributions to MIREX I have been involved in, led by our PhD students Justin Salamon (predominant melody estimation) and Jose Zapata (tempo estimation). We had very good results. You can have a look at our MIREX poster for a brief overview, and read the abstracts for more details.