Tag Archives: tonality

Paper & MATLAB framework for hierarchical multi-scale set-class analysis

Journal of Mathematics and Music

As part of his recent PhD thesis, Agustín Martorell has studied the potential of multi-scale representations in music analysis. In particular, he focuses on the description of tonality from score representations and on the analysis of pitch-class sets. We have recently published the results of this study in the Journal of Mathematics and Music: Mathematical and Computational Approaches to Music Theory, Analysis, Composition and Performance. The paper is now online!

The paper discusses several analyses and addresses the problem of visualizing them. The work also produced a MATLAB toolbox, which you can download from here.

Agustín Martorell & Emilia Gómez

Abstract

This work presents a systematic methodology for set-class surface analysis using temporal multi-scale techniques. The method extracts the set-class content of all the possible temporal segments, addressing the representational problems derived from the massive overlapping of segments. A time versus time-scale representation, named class-scape, provides a global hierarchical overview of the class content in the piece, and it serves as a visual index for interactive inspection. Additional data structures summarize the set-class inclusion relations over time and quantify the class and subclass content in pieces or collections, helping to decide about sets of analytical interest. Case studies include the comparative subclass characterization of diatonicism in Victoria’s masses (in Ionian mode) and Bach’s preludes and fugues (in major mode), as well as the structural analysis of Webern’s Variations for piano op. 27, under different class-equivalences.
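For the curious, here is a rough Python sketch of the exhaustive-segmentation idea behind the class-scape: every contiguous segment of a piece is reduced to a set-class label. This is not the MATLAB toolbox itself; the function names and the simplified prime-form computation below are illustrative only (ties are packed to the left, which differs from Rahn's convention for a handful of sets).

def normal_order(pcs):
    """Normal order: the rotation of the sorted pitch-class set with the
    smallest outer interval, ties broken by packing intervals to the left
    (a simplification of the textbook conventions)."""
    pcs = sorted(set(p % 12 for p in pcs))
    rotations = [pcs[i:] + pcs[:i] for i in range(len(pcs))]
    def key(rot):
        zeroed = [(p - rot[0]) % 12 for p in rot]
        return (zeroed[-1], zeroed)   # outer interval first, then left packing
    return min(rotations, key=key)

def prime_form(pcs):
    """Prime form under transposition and inversion (T/TI) equivalence."""
    no = normal_order(pcs)
    t = [(p - no[0]) % 12 for p in no]
    inv = normal_order([(-p) % 12 for p in pcs])
    ti = [(p - inv[0]) % 12 for p in inv]
    return tuple(min(t, ti))

def multiscale_classes(events):
    """Set-class content of every contiguous segment of a list of pitch-class
    events: the raw data behind a class-scape-style time vs. time-scale plot."""
    result = {}
    for start in range(len(events)):
        for end in range(start + 1, len(events) + 1):
            segment = [pc for event in events[start:end] for pc in event]
            result[(start, end - start)] = prime_form(segment)
    return result

# Toy example: four sonorities given as lists of pitch classes (0 = C).
events = [[0, 4, 7], [2, 5, 9], [7, 11, 2], [0, 4, 7]]
for (start, length), pf in sorted(multiscale_classes(events).items()):
    print("segment start=%d length=%d: prime form %s" % (start, length, pf))

The full method additionally handles the massive overlap of segments and the inclusion relations between classes, which is where the toolbox and the visualizations come in.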

 


Filed under publications, research, software

Video of my keynote talk at the 3rd International Workshop on Folk Music Analysis 2013

This is the video of my keynote talk at FMA 2013, titled “Towards Computer-Assisted Transcription and Description of Music Recordings”. Below is the abstract of the talk. I hope you will like it!

Automatic transcription, i.e. computing a symbolic musical representation from a music recording, is one of the main research challenges in the field of sound and music computing. For monophonic material, the transcription obtained is a single musical line, usually a melody; for polyphonic music, there is particular interest in transcribing the predominant melodic line. In addition to transcription, current technologies can extract other musical descriptions related to tonality, rhythm or instrumentation from music recordings. Automatic description could potentially complement traditional methodologies for music analysis.

In this talk I present the state of the art in automatic transcription and description of music audio signals, illustrated with our own research on tonality estimation, melodic transcription and rhythmic characterization. I show that, although current research is promising, current algorithms are still limited in accuracy and there is a semantic gap between automatic feature extractors and expert analyses. Moreover, I present some strategies to address these challenges: developing methods adapted to different repertoires and defining ways to integrate expert knowledge into computational models, so as to build systems following a “computer-assisted” paradigm.


12/07/2013 · 12:56

Computational Ethnomusicology and FMA (3rd IW on Folk Music Analysis)

Over the last few years, there has been increasing interest in studying music from different traditions from a computational perspective. Researchers with interests in this area have been meeting at ISMIR conferences and communicating through an interest group in computational ethnomusicology, ethnocomp.

My interest in this area started in 2008, with a study of how tonal features extracted from music audio signals can be used to automatically organize music recordings from different traditions. It basically consisted of characterizing the scale by means of high-resolution HPCP features and combining these features with timbre and rhythm descriptors. As a result, we established some relationships between audio features and geography in our ISMIR 2009 paper on Music and geography: content description of musical audio from different parts of the world. After that, I became interested in MIR and flamenco music, and I have been working on a system for the automatic transcription of flamenco singing, thanks to the COFLA project. This is a challenging task that will deserve a dedicated post!
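As a rough illustration only (the descriptor names, dimensionalities and clustering setup below are made-up placeholders, not the actual pipeline of the ISMIR 2009 paper), the general idea of combining descriptors to organize a collection could look like this in Python:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def organize_recordings(tonal, timbre, rhythm, n_clusters=8, seed=0):
    """Concatenate per-recording descriptor blocks (one row per recording,
    e.g. an averaged high-resolution pitch-class profile, MFCC statistics
    and rhythmic periodicities -- all hypothetical here), standardize them
    and cluster the recordings; the cluster labels can then be compared
    against geographic metadata."""
    features = np.hstack([tonal, timbre, rhythm])
    features = StandardScaler().fit_transform(features)
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return model.fit_predict(features)   # one cluster label per recording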

ethnocomp has always been a small community, and two years ago we had the first event devoted to this research area, the first Folk Music Analysis (FMA) workshop, which took place in Athens, Greece. Last year I had the chance to co-organize the 2nd FMA in Seville, my home town, jointly with a conference on flamenco research. At the last ISMIR in Porto, we could see an increasing interest in this small field, and a large number of people attended the ethnocomp dinner. Moreover, at my research group, my boss Xavier Serra is leading an ERC grant dealing with MIR and traditional music, compmusic. I am very happy that this field is getting more attention, and that we are addressing the fact that all our technology has been designed for Western popular music. There is much work to do to develop culture-specific or culture-aware tools.

I therefore hope that this year’s FMA, which will take place in Amsterdam, will be a success! I am sure it will be a truly interdisciplinary event, gathering people from ethnomusicology, music performance and music information retrieval.

Topics include:
– Computational ethnomusicology
– Retrieval systems for non-Western and folk musics
– New methods for music transcription
– Formalization of musical data
– Folk music classification systems
– Models of oral transmission of music
– Cognitive modelling of music
– Aesthetics and related philosophical issues
– Methodological issues
– Representational issues and models
– Audio and symbolic representations
– Formal and computational music analysis

Important dates:
3 February 2013: Deadline for abstract submissions
10 March 2013: Notification of acceptance/rejection of submissions
5 May 2013: Deadline for submission of revised abstracts or full papers
6 and 7 June 2013: Workshop

Don’t miss it!!!!


Filed under CFP, events, research

HPCP plugin available for free download


We finally managed to share a simple version of our algorithm for chroma feature extraction (Harmonic Pitch Class Profile, HPCP) with the research community by means of a Vamp plugin. It is currently available for Windows, but we hope it will soon be available for Mac OS and Linux. You can find it here.

I am very happy about the success of the MELODIA plugin by Justin, and I hope people will find this one interesting too, even if the algorithm is from 2006!

The HPCP is an approach to chroma feature extraction. It provides a frame-by-frame representation of the relative intensity of each pitch class within an octave. I developed it as part of my PhD thesis, and it has been extensively used for different Music Information Retrieval applications such as key and chord estimation, cover version identification, music structure analysis, classification and recommendation.
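To give an idea of what a chroma representation is, here is a toy numpy sketch. It is not the released Vamp plugin nor the full HPCP algorithm (which adds spectral peak detection and interpolation, harmonic weighting and cosine-shaped weighting windows around each pitch class, among other refinements); the function name and parameters are illustrative.

import numpy as np

def simple_pitch_class_profile(audio, sr, frame_size=4096, hop=2048,
                               f_min=100.0, f_max=5000.0, ref_freq=440.0):
    """Toy chroma / pitch-class profile: map every spectral bin in a frequency
    band to one of 12 pitch classes and accumulate its energy per frame,
    normalized to the frame maximum (a relative intensity per pitch class)."""
    window = np.hanning(frame_size)
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sr)
    band = (freqs >= f_min) & (freqs <= f_max)
    band_freqs = freqs[band]
    # Pitch class of each bin: 0 = C, 1 = C#, ..., 9 = A (at ref_freq), ...
    midi = 69.0 + 12.0 * np.log2(band_freqs / ref_freq)
    pitch_class = np.round(midi).astype(int) % 12
    profiles = []
    for start in range(0, len(audio) - frame_size + 1, hop):
        spectrum = np.abs(np.fft.rfft(audio[start:start + frame_size] * window))
        energies = spectrum[band] ** 2
        pcp = np.zeros(12)
        np.add.at(pcp, pitch_class, energies)   # accumulate energy per pitch class
        if pcp.max() > 0:
            pcp /= pcp.max()
        profiles.append(pcp)
    return np.array(profiles)                   # shape: (n_frames, 12)

Averaging the frame profiles over a whole recording gives a simple summary of its pitch-class content, which is the kind of representation that key estimation and the scale characterization mentioned in the previous post build upon.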


19/10/2012 · 13:21