Last week, I had the honour of being mentioned in the 2015 #12x12donatic awards to Women in Information & Technologies, in the category “research & academia”.
The goal of these awards is to recognize the role of women in professional, business and academic domains related to information and communication technologies. They are organized by Tertulia Digital; idigital, an initiative of the Catalan Government for digital innovation; the Observatory for Women, Enterprise and Economy of the Cambra de Comerç de Barcelona; and Sinergia Digital Marketing.
In their speeches, all the women were passionate about their work and shared our wish that prizes like these will no longer be needed in the future, once women are fully present in everyday media. That’s why initiatives such as Girls in Lab, which promote girls’ interest in technology, are so relevant. I had the chance to participate as a volunteer in a hackathon they organized at UPF, with the following leitmotiv:
Filed under outreach, press
I am spending the summer (June-September 2015) at the Centre for Digital Music, Queen Mary University of London, thanks to a José Castillejo fellowship from the Spanish Government.
During my stay, I am collaborating with Simon Dixon, trying to understand the criteria people use to transcribe flamenco singing and to analyse how musical knowledge and exposure to flamenco music influence those criteria. Our ultimate goal is to improve current methods for automatic transcription, especially for this particular kind of music.
At the same time, I am getting familiar with all the wonderful work carried out at C4DM, especially in the FAST IMPACt project led by Prof. Mark Sandler. It is an EPSRC-funded research project combining Audio & Music Technology, Semantic Web, e-Science and Human-Computer Interaction, and I am part of its advisory board.
I have already attended a couple of interesting events! London is a wonderful city, especially for researchers in music technology.
Last week I attended a workshop on Music Similarity (Concepts, Cognition and Computation) held at the Lorentz Center in Leiden, the Netherlands. The Lorentz Center is an international center that coordinates and hosts workshops in the sciences, based on the philosophy that science thrives on interaction between creative researchers. Its workshops focus on new collaborations between scientists from different countries, fields and levels of seniority.
In contrast to the photo shown on their website, this workshop was organized entirely by female researchers working on music from different perspectives, which is still remarkable in our field:
They represent the different disciplines covered in the workshop:
- Conceptual and computational aspects in music similarity
- Music similarity and cognition
- Music similarity in practice
The program was a combination of plenary presentations on different topics, discussion sessions in small and large groups, long lunch and coffee breaks for interaction among participants, and nice social activities.
I enjoyed this event a lot, as it gathered a nice mixture of colleagues I already knew, researchers whose work I had read but whom I had never met in person, and new students with fresh ideas.
I focused my presentation on the applications of music similarity measures in Music Information Retrieval and the challenges of “building real applications for real people”. You can find my slides here:
And some photos here:
Music similarity: list of topics
My wonderful office at Lorentz Center
One of the sessions
Group photo (I had already left!!!)
My three PhD students, Justin Salamon, Jose R. Zapata and Agustín Martorell, graduated last September. We had an intense week with the three defenses and a wonderful panel of experts for the jury. These are three very nice pieces of work, quite different and varied in terms of contributions and scope.
- Agustín Martorell, “Modelling tonal context dynamics by temporal multi-scale analysis”. Jury members: Petri Toiviainen (University of Jyväskylä), Geoffroy Peeters (IRCAM), Sergi Jordà (UPF). This thesis provides nice insights into the concept of tonality and its computational modelling, discussing the different proposals for visualization and evaluation and proposing a new approach based on temporal multi-scale analysis.
- José R. Zapata, “Comparative Evaluation and Combination of Automatic Rhythm Description Systems”. Jury members: Fabien Gouyon (INESC-Porto), Juan Bello (NYU), Xavier Serra (UPF). The work by Jose is extremely important as it provides a quantitative evaluation of state-of-the-art methods for rhythm description (tempo and beat tracking), a way to automatically detect difficult examples, and a way to combine different strategies in different contexts (tracks, onset detection functions and beat tracking models) to address the current glass ceiling in those methods.
- Justin Salamon, “Melody Extraction from Polyphonic Music Signals”. Jury members: Geoffroy Peeters (IRCAM), Fabien Gouyon (INESC-Porto), Juan Bello (NYU). The thesis by Justin is an excellent contribution to the field of music content description, in particular predominant fundamental frequency estimation, including the MELODIA algorithm and many applications to evaluate and exploit the method.
I learned a lot by supervising the three of them, and I am now happy that they have succeeded. I hope we will keep collaborating, and I wish them all the best in their future careers!
This is the video of my keynote talk at FMA 2013, titled “Towards Computer-Assisted Transcription and Description of Music Recordings”. This is the abstract of the talk. I hope you will like it!
Automatic transcription, i.e. computing a symbolic musical representation from a music recording, is one of the main research challenges in the field of sound and music computing. For monophonic music material the obtained transcription is a single musical line, usually a melody, and in polyphonic music there is an interest in transcribing the predominant melodic line. In addition to transcribing, current technologies are able to extract other musical descriptions related to tonality, rhythm or instrumentation from music recordings. Automatic description could potentially complement traditional methodologies for music analysis.
In this talk I present the state of the art in automatic transcription and description of music audio signals. I illustrate it with our own research on tonality estimation, melodic transcription and rhythmic characterization. I show that, although current research is promising, current algorithms are still limited in accuracy and there is a semantic gap between automatic feature extractors and expert analyses.
Moreover, I present some strategies to address these challenges by developing methods adapted to different repertoire and defining strategies to integrate expert knowledge into computational models, as a way to build systems following a “computer-assisted” paradigm.
Some weeks ago we announced the release of a new dataset of flamenco singing: TONAS
The dataset includes 72 sung excerpts representative of three a cappella flamenco singing styles, i.e. Tonás (Debla and two variants of Martinete), together with manually corrected fundamental frequency and note transcriptions.
This collection was built by the COFLA team in the context of our research project for melodic transcription, similarity and style classification in flamenco music.
- Mora, J., Gomez, F., Gomez, E., Escobar-Borrego, F.J., Diaz-Banez, J.M. (2010). Melodic Characterization and Similarity in A Cappella Flamenco Cantes. 11th International Society for Music Information Retrieval Conference (ISMIR 2010).
- Gomez, E., Bonada, J. (in press). Towards Computer-Assisted Flamenco Transcription: An Experimental Comparison of Automatic Transcription Algorithms As Applied to A Cappella Singing. Computer Music Journal.
Further information about the music collection, including how the samples were transcribed and by whom, is available on the dataset website, where you can of course download the audio, metadata and transcription files.
We hope that this collection will be useful, whether for automatic transcription of the singing voice or for any other research topic (e.g. pitch estimation, onset detection, melodic similarity, singer identification, style classification), and that it will increase the interest of our scientific community in the particular challenges of flamenco singing.
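For researchers who want a quick start with this kind of data, here is a minimal sketch of how one might parse a melodic (f0) transcription and convert it to a cents scale for melodic analysis. The two-column time/frequency format and the file layout are assumptions for illustration only; the actual TONAS file formats are documented on the dataset website.

```python
import math

def parse_f0(lines):
    """Parse (time, f0) pairs from text lines; f0 <= 0 marks unvoiced frames.

    Assumed format (hypothetical): one frame per line, "time_sec f0_hz".
    """
    contour = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        t, f0 = line.split()[:2]
        contour.append((float(t), float(f0)))
    return contour

def hz_to_cents(f0, ref=55.0):
    """Convert a frequency in Hz to cents above a reference pitch (default A1)."""
    return 1200.0 * math.log2(f0 / ref)

# Toy example with made-up values (not real TONAS data):
example = ["0.000 220.0", "0.010 0.0", "0.020 440.0"]
contour = parse_f0(example)
voiced = [(t, hz_to_cents(f)) for t, f in contour if f > 0]
print(voiced)  # one octave apart: 220 Hz -> 2400 cents, 440 Hz -> 3600 cents
```

A cents representation like this is a common first step for melodic similarity and style classification experiments, since it makes interval distances independent of absolute tuning.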
So far we have had quite a number of downloads for different purposes: research on music transcription, onset detection, folk music, personal study of singing techniques, and even out of curiosity! 🙂
The UPF website published a news item on the Sound and Music Computing Conference 2010, which will take place at my university this week. I am one of the three Scientific Programme Chairs.
The seventh edition of Sound and Music Computing will be organized by the GTM
I will miss it, as I am currently in Montreal, but I hope everyone will enjoy the nice programme of scientific presentations, concerts and social activities.
I am Scientific Programme Co-Chair of the 7th Sound and Music Computing Conference, which will take place in Barcelona from 21 to 24 July. If you have a look at the list of accepted presentations and concerts, you will see that it will be a great event!
Filed under events, research
Thanks to a grant from AGAUR, I am spending four months at CIRMMT on a research visit. I am hosted by Catherine Guastavino (CIRMMT) and also collaborating with Paco Gómez (UPM). Our project deals with melodic similarity in flamenco singing.