As part of his recent PhD thesis, Agustín Martorell has studied the potential of multi-scale representations in music analysis. In particular, he focuses on the description of tonality from score representations and on the analysis of pitch-class sets. We have recently published the results of this study in the Journal of Mathematics and Music: Mathematical and Computational Approaches to Music Theory, Analysis, Composition and Performance. The paper is now online!
Several analyses are discussed in the paper, which also addresses the problem of visualization. The work has also produced a MATLAB toolbox, which you can download from here.
Agustín Martorell & Emilia Gómez
This work presents a systematic methodology for set-class surface analysis using temporal multi-scale techniques. The method extracts the set-class content of all the possible temporal segments, addressing the representational problems derived from the massive overlapping of segments. A time versus time-scale representation, named class-scape, provides a global hierarchical overview of the class content in the piece, and it serves as a visual index for interactive inspection. Additional data structures summarize the set-class inclusion relations over time and quantify the class and subclass content in pieces or collections, helping to decide about sets of analytical interest. Case studies include the comparative subclass characterization of diatonicism in Victoria’s masses (in Ionian mode) and Bach’s preludes and fugues (in major mode), as well as the structural analysis of Webern’s Variations for piano op. 27, under different class-equivalences.
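As a toy illustration of the brute-force idea behind such a multi-scale analysis (not the paper's method or toolbox), the sketch below computes the set class (prime form) of every contiguous segment of a short pitch-class sequence. All names and the simple prime-form computation are my own illustrative choices:

```python
def normal_order(pcs):
    """Most compact rotation of a pitch-class set (intervals packed left)."""
    pcs = sorted(set(pcs))
    rotations = [pcs[i:] + pcs[:i] for i in range(len(pcs))]
    # Compare the total span first, then the spans to earlier elements.
    def compactness(rot):
        return [(rot[j] - rot[0]) % 12 for j in range(len(rot) - 1, 0, -1)]
    return min(rotations, key=compactness)

def prime_form(pcs):
    """Set-class representative: the smaller of the normal orders of the
    set and of its inversion, each transposed to start on 0."""
    candidates = []
    for form in (normal_order(pcs), normal_order([(-p) % 12 for p in pcs])):
        candidates.append(tuple((p - form[0]) % 12 for p in form))
    return min(candidates)

def class_scape(pc_sequence):
    """Set-class content of every contiguous segment, keyed by (start, length)."""
    n = len(pc_sequence)
    return {(start, length): prime_form(pc_sequence[start:start + length])
            for start in range(n)
            for length in range(1, n - start + 1)}

notes = [0, 4, 7, 11, 2]            # C E G B D as pitch classes
scape = class_scape(notes)
print(scape[(0, 3)])                # set class of {C, E, G} -> (0, 3, 7)
```

Enumerating all segments like this is quadratic in the number of notes, which hints at why the paper's representational and visualization machinery (the class-scape) is needed for whole pieces.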
I contributed an interview to the “Forum on Transcription”, authored by Jason Stanyek (University of Oxford) in the journal Twentieth-Century Music. As stated on its web site, this journal disseminates research on all aspects of music in the long twentieth century to a broad readership. Emphasis is placed upon the presentation of the full spectrum of scholarly insight, with the goal of fostering exchange and debate between disciplinary fields.
I share an interesting conversation about transcription with Parag Chordia. In this conversation with Jason, we discussed the challenges and potential of audio analysis tools for computer-assisted transcription and description of music recordings. I gave some examples from my work on the transcription of flamenco singing, which is being carried out within the COFLA project.
You can find the results of the forum and the rest of a very impressive special issue on transcription on the web.
Our article on multi-feature beat tracking in the IEEE/ACM Transactions on Audio, Speech, and Language Processing is now available online! This work was led by Jose Ricardo Zapata for his PhD thesis, in collaboration with Matthew Davies from the SMC group in Porto. It builds on the idea of combining different experts, represented by periodicities from different onset detection functions, for beat estimation. This simple and clever idea, previously used to combine different beat tracking algorithms and to evaluate the difficulty of the task, is here integrated into a single method.
Zapata, J. R., Davies, M. E. P., & Gómez, E. (2014). Multi-feature beat tracking. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(4), 816–825.
A recent trend in the field of beat tracking for musical audio signals has been to explore techniques for measuring the level of agreement and disagreement between a committee of beat tracking algorithms. By using beat tracking evaluation methods to compare all pairwise combinations of beat tracker outputs, it has been shown that selecting the beat tracker which most agrees with the remainder of the committee, on a song-by-song basis, leads to improved performance which surpasses the accuracy of any individual beat tracker used on its own. In this paper we extend this idea towards presenting a single, standalone beat tracking solution which can exploit the benefit of mutual agreement without the need to run multiple separate beat tracking algorithms. In contrast to existing work, we re-cast the problem as one of selecting between the beat outputs resulting from a single beat tracking model with multiple, diverse input features. Through extended evaluation on a large annotated database, we show that our multi-feature beat tracker can outperform the state of the art, and thereby demonstrate that there is sufficient diversity in input features for beat tracking, without the need for multiple tracking models.
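The committee-selection step described in the abstract can be sketched in a few lines. This is only an illustration under my own assumptions (a naive tolerance-window F-measure as the agreement score), not the authors' implementation or the established beat-tracking evaluation metrics the paper actually uses:

```python
def f_measure(beats_a, beats_b, tol=0.07):
    """Agreement between two beat sequences: F-measure with a +/-tol
    second matching window (a simplified stand-in for standard metrics)."""
    hits_a = sum(any(abs(a - b) <= tol for b in beats_b) for a in beats_a)
    hits_b = sum(any(abs(b - a) <= tol for a in beats_a) for b in beats_b)
    precision = hits_a / len(beats_a) if beats_a else 0.0
    recall = hits_b / len(beats_b) if beats_b else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def select_by_agreement(candidates):
    """Pick the candidate beat sequence that agrees most, on average,
    with the rest of the committee."""
    def mean_agreement(i):
        others = [c for j, c in enumerate(candidates) if j != i]
        return sum(f_measure(candidates[i], o) for o in others) / len(others)
    best = max(range(len(candidates)), key=mean_agreement)
    return candidates[best]

# Three hypothetical trackers: two agree, one is offset by half a beat.
committee = [
    [0.50, 1.00, 1.50, 2.00],
    [0.51, 1.01, 1.49, 2.02],
    [0.75, 1.25, 1.75, 2.25],
]
print(select_by_agreement(committee))
```

In the paper's setting the committee members are not separate algorithms but beat outputs of a single tracking model driven by different input features, which is what makes the approach a standalone system.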
Our review article on melody extraction algorithms for the IEEE Signal Processing Magazine is finally available online! The printed edition will be coming out in March 2014.
I believe (and not just because I am a co-author!) that it will become a key reference in the Music Information Retrieval area and beyond, as it provides a very nice overview of approaches, challenges and applications for melody extraction from polyphonic music signals. Justin Salamon has been the main author (congratulations, Justin!) and the paper has benefited from the contribution of two key experts, Gaël Richard and Dan Ellis, with whom I had the chance to collaborate on a previous comparative study on melody extraction published in IEEE TASLP (128 citations according to Google Scholar).
Finally, I very much like this kind of tutorial paper, which provides a comprehensive introduction to a given topic with a very attractive design. I hope you will enjoy it!
J. Salamon, E. Gómez, D. P. W. Ellis and G. Richard, “Melody Extraction from Polyphonic Music Signals: Approaches, Applications and Challenges“, IEEE Signal Processing Magazine, 31(2):118-134, Mar. 2014.
Abstract—Melody extraction algorithms aim to produce a sequence of frequency values corresponding to the pitch of the dominant melody from a musical recording. Over the past decade melody extraction has emerged as an active research topic, comprising a large variety of proposed algorithms spanning a wide range of techniques. This article provides an overview of these techniques, the applications for which melody extraction is useful, and the challenges that remain. We start with a discussion of ‘melody’ from both musical and signal processing perspectives, and provide a case study which interprets the output of a melody extraction algorithm for specific excerpts. We then provide a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign. We discuss issues of algorithm design, evaluation and applications which build upon melody extraction. Finally, we discuss some of the remaining challenges in melody extraction research in terms of algorithmic performance, development, and evaluation methodology.
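To make the task's output format concrete: a melody extractor maps audio to a per-frame sequence of f0 values. The toy tracker below is my own naive autocorrelation sketch and only handles a clean monophonic tone; extracting the dominant melody from polyphonic mixtures, as surveyed in the article, is a much harder problem:

```python
import numpy as np

def frame_f0(frame, sr, fmin=55.0, fmax=1000.0):
    """Naive single-frame f0 estimate: peak of the autocorrelation
    within the lag range allowed by [fmin, fmax]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def f0_sequence(signal, sr, frame_len=2048, hop=1024):
    """Melody-extraction-style output: one f0 value per analysis frame."""
    return [frame_f0(signal[i:i + frame_len], sr)
            for i in range(0, len(signal) - frame_len, hop)]

sr = 44100
t = np.arange(sr // 2) / sr                  # half a second of samples
tone = np.sin(2 * np.pi * 220.0 * t)         # monophonic 220 Hz sine
f0s = f0_sequence(tone, sr)
print(round(float(np.median(f0s))))          # close to 220 Hz
```

Real systems additionally face the voicing decision the abstract mentions: deciding, per frame, whether a melody is present at all, and which of several concurrent sources is the melody.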
For further information about this article please visit Justin Salamon’s research page.
I have been editing, together with Perfecto Herrera and Paco Gómez, a Special Issue on Computational Ethnomusicology at the Journal of New Music Research.
The goal of this special issue is to gather relevant, high-quality research on computational methods and applications in ethnomusicology. The papers included here deal with different musical facets, such as pitch, pulse and tempo, and voice timbre. They address different musical repertoires, from Central-African to Basque folk music. They also cover a broad area: tools (including data collections), methodology, and core problems in ethnomusicology. Although it was hard work, thanks to the authors and reviewers we managed to put together a varied and interesting set of articles:
- Computational Ethnomusicology: perspectives and challenges
- Antipattern Discovery in Folk Tunes
- Tarsos, a Modular Platform for Precise Pitch Analysis of Western and Non-Western Music
- Evaluation and Recommendation of Pulse and Tempo Annotation in Ethnic Music
- Breathy, Resonant, Pressed – Automatic Detection of Phonation Mode from Audio Recordings of Singing
- A Location-Tracking Interface for Ethnomusicological Collections
The issue is now available online at the JNMR web site, and our introduction is available here and here.