Tag Archives: music information retrieval

New open-access journal Transactions of ISMIR, open for submissions

I am happy to announce that the International Society for Music Information Retrieval has launched the Transactions of the International Society for Music Information Retrieval (TISMIR), the society's open-access journal, published by Ubiquity Press. I am serving as Editor-in-Chief together with Simon Dixon and Anja Volk.

TISMIR publishes novel scientific research in the field of music information retrieval (MIR).

We welcome submissions from a wide range of disciplines: computer science, musicology, cognitive science, library & information science, and electrical engineering.

We are currently accepting submissions.

View our submission guidelines for more information.

TISMIR


Filed under publications, research

Journal paper and open dataset for source separation in orchestral music

As part of the PHENICX project, we have recently published our research results on audio source separation, the main research topic of one of our PhD students, Marius Miron.

During this work, we developed a method for orchestral music source separation along with a new dataset: the PHENICX-Anechoic dataset. The methods were integrated into the PHENICX project for tasks such as orchestra focus and instrument enhancement. To our knowledge, this is the first time source separation has been objectively evaluated in such a complex scenario.

This is the complete reference to the paper:

M. Miron, J. Carabias-Orti, J. J. Bosch, E. Gómez and J. Janer, “Score-informed source separation for multi-channel orchestral recordings”, Journal of Electrical and Computer Engineering, 2016.

Abstract: This paper proposes a system for score-informed audio source separation for multichannel orchestral recordings. The orchestral music repertoire relies on the existence of scores. Thus, a reliable separation requires a good alignment of the score with the audio of the performance. To that extent, automatic score alignment methods are reliable when allowing a tolerance window around the actual onset and offset. Moreover, several factors increase the difficulty of our task: a high reverberant image, large ensembles having rich polyphony, and a large variety of instruments recorded within a distant-microphone setup. To solve these problems, we design context-specific methods such as the refinement of score-following output in order to obtain a more precise alignment. Moreover, we extend a close-microphone separation framework to deal with the distant-microphone orchestral recordings. Then, we propose the first open evaluation dataset in this musical context, including annotations of the notes played by multiple instruments from an orchestral ensemble. The evaluation aims at analyzing the interactions of important parts of the separation framework on the quality of separation. Results show that we are able to align the original score with the audio of the performance and separate the sources corresponding to the instrument sections.
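For readers curious about the general mechanics, below is a toy sketch of score-informed NMF with multiplicative updates: activations are zeroed wherever the aligned score says a template cannot be sounding, and those zeros survive the updates. It is an assumption-based illustration (the file name, number of templates and the soft masking are mine), not the multi-channel, distant-microphone method described in the paper.

```python
# Toy sketch of score-informed NMF separation with multiplicative updates.
# NOT the paper's method: file names, template count and masking are assumptions.
import numpy as np
import librosa

def score_informed_nmf(V, score_mask, n_iter=200, eps=1e-9):
    """V: (n_bins, n_frames) magnitude spectrogram.
    score_mask: (n_templates, n_frames) binary mask from the aligned score.
    Returns W (templates) and H (activations) with V ~= W @ H."""
    n_bins, _ = V.shape
    n_templates = score_mask.shape[0]
    rng = np.random.default_rng(0)
    W = rng.random((n_bins, n_templates)) + eps
    # Activations start at zero wherever the score says a template cannot sound;
    # multiplicative updates keep those entries at zero throughout.
    H = (rng.random(score_mask.shape) + eps) * score_mask
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage sketch: soft-mask the mixture spectrogram with one template's contribution.
y, sr = librosa.load("mix.wav", sr=None)                           # hypothetical file
S = librosa.stft(y)
V = np.abs(S)
score_mask = (np.random.rand(4, V.shape[1]) > 0.5).astype(float)   # stand-in for a real score
W, H = score_informed_nmf(V, score_mask)
mask = np.outer(W[:, 0], H[0]) / (W @ H + 1e-9)                    # Wiener-style soft mask
y_source = librosa.istft(mask * S, length=len(y))
```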

The PHENICX-Anechoic dataset includes audio and annotations useful for different MIR tasks such as score-informed source separation, score following, multi-pitch estimation, transcription and instrument detection, in the context of symphonic music. This dataset is based on the anechoic recordings described in this paper:

Pätynen, J., Pulkki, V., and Lokki, T., “Anechoic recording system for symphony orchestra,” Acta Acustica united with Acustica, vol. 94, no. 6, pp. 856–865, November/December 2008.

For more information about the dataset and how to download it, you can visit the PHENICX-Anechoic web page.


Filed under datasets, publications, research

Can men design technologies that are relevant to women? The example of music applications

Just before the holidays I had the opportunity to write an informal post on the topic of gender, thanks to an invitation from MujerTekSpace, a project led by the University of Deusto that aims to improve the visibility of women in engineering.

You can read the published post here, but I am copying the extended version below, since I started writing and it got too long…

* * *

For some time now I have seen more and more people concerned that there are few women in engineering: there are already few in the first years of the degree, and even fewer go on to do a PhD or reach the top of the professional pyramid (senior management positions or full professorships). I personally have gradually become a staunch advocate for women in engineering, particularly in research and technological development.

My fundamental concern is this: what will the world of the future be like if the technologies we use are researched, developed and evaluated mostly by men?

As an example, take my research community: the International Society for Music Information Retrieval (www.ismir.net), which I have the honour of presiding over (the first elected female president, for statistical reasons, as you will see later), formed by researchers from all over the world. Our community is connected to leading companies such as Shazam, Spotify, iTunes, SoundCloud (https://soundcloud.com/), Deezer (http://www.deezer.com/), BMAT and Pandora (not the bracelets but the internet radio!), companies that shape the commercial landscape of music recommendation systems. You probably have some of these applications on your computer or mobile phone.

A study that will be presented this August at the conference in New York, led by Xiao Hu, a researcher at the University of Hong Kong (Hu et al. 2016), documents the unequal gender distribution of authors of scientific papers over the years (14.7% women vs. 85.3% men). In fact, very few women present orally at the conference, and in the last three years all invited keynotes have been given by men.


Moreover, both in our mentoring project for women and in the industry panel of the conference, we have seen that the proportion of women is even lower in industry than in research, possibly because working conditions in research are more favourable for work-life balance. This seems to confirm that the few women there are work on research that is less in contact with the product.

On the positive side, this study reports that the most productive women are as productive as men, that the trends do not vary between continents, that women who work in large research groups have more impact, and that they work in more applied settings, although far from a product, which seems to indicate that interdisciplinarity can provide more diverse environments in engineering.

Given these data, I would say we can state that our music applications are being designed by men, with the resulting barriers for women, since unbalanced design decisions are unconsciously incorporated. Could that be why these technologies are not so attractive to women? Could that partly explain why girls today feel so little attraction to the technological world?

I suppose the same happens with other kinds of applications (for example video games, digital television, automotive technologies or online magazines). Imagine, then, that in the future something happens like what happens to us left-handed people: will the future be a world where you cannot cut paper properly or struggle to open a tin, but in the digital domain?

Let us hope we can find a remedy before then.

About the author

Emilia Gómez

I am the typical odd one out, like almost all women in my field: one of the two women who chose technical drawing in my school year, a minority in telecommunications engineering, one of the two women in my class in the Master's in Acoustics, Signal Processing and Computer Music at IRCAM in Paris, the only PhD student of my thesis supervisor so far, and the only female faculty member in my research group. I am also the first elected female president of ISMIR (International Society for Music Information Retrieval), and the first in many other things, not because I am very good but for statistical reasons. In fact I am often a woman teaching a class full of men. And on top of that, I am left-handed.

Reference

Hu, X., Choi, K., Lee, J. H., Laplante, A., Hao, Y., Cunningham, S. J., Downie, J. S. (2016). WiMIR: An Informetric Study on Women Authors in ISMIR. In Proceedings of the 17th International Conference on Music Information Retrieval (ISMIR).

 


Filed under outreach, personal

New project on MIR & singing: CASAS

At my lab we are starting a new project where we integrate our expertise in singing voice processing and music information retrieval to generate tools for choir singers.

CASAS (Community-Assisted Singing Analysis and Synthesis) is a project funded by the Ministry of Economy and Competitiveness of the Spanish Government (TIN2015-70816-R), which started on January 1st, 2016 and will end on December 31st, 2018.

Humans use singing to create identity, express emotion, tell stories, exercise creativity, and connect with each other when singing together. This is demonstrated by the large community of singers active in choirs and by the fact that vocal music makes up an important part of our cultural heritage. Currently, more and more music resources are becoming digital, and the Web has become an important tool for singers to discover and study music, as a feedback resource and as a way to share their singing performances. The CASAS project has two complementary goals:

  • The first one is to improve state-of-the-art technologies that assist singers in their musical practice. We research algorithms for singing analysis and synthesis (e.g. automatic transcription, description, synthesis, classification and visualization), following a user-centred perspective and with the goal of making them more robust, scalable and musically meaningful.
  • The second one is to enhance current public-domain vocal music archives and create research data for our target music information retrieval (MIR) tasks. The project puts a special emphasis on choral repertoire in Catalan and Spanish.

We build on our current methods for music information retrieval and singing voice processing, and we involve a community of singers who use our technologies and provide evaluations, ground-truth data and relevance feedback.
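As a small taste of the kind of singing analysis involved (not the project's actual pipeline), a pitch contour can be extracted from a recording with an off-the-shelf pYIN implementation, a typical first step towards automatic transcription; the file name below is a placeholder.

```python
# Minimal sketch: extract an f0 contour from a vocal recording with librosa's pYIN.
import numpy as np
import librosa

y, sr = librosa.load("choir_take.wav", sr=None)   # hypothetical file name
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
times = librosa.times_like(f0, sr=sr)

# Convert voiced frames to MIDI note numbers for later note segmentation.
midi = librosa.hz_to_midi(f0[voiced_flag])
print(f"{int(voiced_flag.sum())} voiced frames, mean pitch ~ MIDI {np.nanmean(midi):.1f}")
```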

I designed my first logo, inspired by choirs, audio and “houses”, which is the English translation of “casas”. It will be an amazing project!


Filed under projects, research

Paper on melodic similarity in flamenco now online

Our paper on melodic similarity is finally online! The paper is titled

Melodic Contour and Mid-Level Global Features Applied to the Analysis of Flamenco Cantes

This work focuses on the topic of melodic characterization and similarity in a specific musical repertoire: a cappella flamenco singing, more specifically in debla and martinete styles. We propose the combination of manual and automatic description. First, we use a state-of-the-art automatic transcription method to account for general melodic similarity from music recordings. Second, we define a specific set of representative mid-level melodic features, which are manually labelled by flamenco experts. Both approaches are then contrasted and combined into a global similarity measure. This similarity measure is assessed by inspecting the clusters obtained through phylogenetic algorithms and by relating similarity to categorization in terms of style. Finally, we discuss the advantage of combining automatic and expert annotations as well as the need to include repertoire-specific descriptions for meaningful melodic characterization in traditional music collections.
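To illustrate the general idea (an assumption-laden sketch, not the paper's implementation), one can combine a DTW distance between automatically transcribed pitch contours with a distance over expert-labelled mid-level features into a single similarity measure, and then inspect the resulting clusters. Plain hierarchical clustering stands in here for the phylogenetic trees used in the paper, and all data and weights are placeholders.

```python
# Sketch: global melodic similarity = DTW over pitch contours + Hamming distance
# over expert-labelled binary features (placeholder data, illustrative weights).
import numpy as np
from scipy.spatial.distance import cdist, squareform
from scipy.cluster.hierarchy import linkage

def dtw_distance(a, b):
    """Plain DTW between two 1-D pitch contours (e.g. in semitones)."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1] / (len(a) + len(b))   # length-normalised

rng = np.random.default_rng(1)
contours = [rng.random(80) * 12 for _ in range(6)]   # one contour per recording (placeholder)
features = rng.integers(0, 2, size=(6, 10))          # expert mid-level labels (placeholder)

n = len(contours)
D_contour = np.array([[dtw_distance(contours[i], contours[j]) for j in range(n)]
                      for i in range(n)])
D_expert = cdist(features, features, metric="hamming")
D_global = 0.5 * D_contour / D_contour.max() + 0.5 * D_expert       # simple weighted fusion
Z = linkage(squareform(D_global, checks=False), method="average")   # tree for cluster inspection
```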

This is the result of joint work within the COFLA group, to which I contribute technologies for the automatic transcription and melodic description of music recordings.

This is an example of how we compare flamenco tonás using melodic similarity and phylogenetic trees:

[Figure: phylogenetic tree grouping flamenco tonás by melodic similarity]

And this is a video example of the type of styles we analyze in this paper, done by Nadine Kroher based on her work at the MTG:

You can read the full paper online:

http://www.tandfonline.com/doi/full/10.1080/09298215.2016.1174717


Filed under publications, research, Uncategorized

Looking for a smart PhD student for next year

The Music Technology Group (MTG) of the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona is opening a PhD fellowship in the area of Music Information Retrieval to start in the Fall of 2016.

Application closing date: 05/05/2016

Start date: 01/10/2016

Research lab:  Music Information Research lab, Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra
Supervisor: Emilia Gómez

Duration: 3+1 years

Topics: automatic transcription, sound source separation, music classification, singing voice processing, melody extraction, music synchronization, classical music, computational ethnomusicology.

Requirements: Candidates must have a good Master's degree in Computer Science, Electronic Engineering, Physics or Mathematics. Candidates must be competent in some of these areas: signal processing, information retrieval and machine learning; they must also have excellent programming skills, be fluent in English and possess good communication skills. Musical knowledge would be an advantage, as would previous experience in research and a track record of publications.

More information on grant details:
http://portal.upf.edu/web/etic/doctorat
http://portal.upf.edu/web/etic/predoctoral-research-contracts
Provisional starting date: October 1st 2016

Application: Interested candidates should send a motivation letter, a CV (preferably with references), and academic transcripts to Prof. Emilia Gómez (emilia.gomez@upf.edu) before May 1st 2016. Please include [PhD MIR] in the subject line.


Filed under research

FAST project: Acoustic and semantic technologies for intelligent music production and consumption

Yesterday I returned from Paris, where I attended, as an Advisory Board member, a meeting of the FAST project (www.semanticaudio.ac.uk).

FAST-IMPACT stands for “Fusing Acoustic and Semantic Technologies for Intelligent Music Production and Consumption” and it is funded by the EPSRC (Engineering and Physical Sciences Research Council, UK) with £5,199,944 (side note: OMG, this is real funding; the new Spanish Agencia Estatal para la Investigación should take note).

According to its website, this five-year EPSRC project “brings the very latest technologies to bear on the entire recorded music industry, end-to-end, producer to consumer, making the production process more fruitful, the consumption process more engaging, and the delivery and intermediation more automated and robust”. It addresses three main premises:

(i) that Semantic Web technologies should be deployed throughout the content value chain from producer to consumer;

(ii) that advanced signal processing should be employed in the content production phases to extract “pure” features of perceptual significance and represent these in standard vocabularies;

(iii) that this combination of semantic technologies and content-derived metadata leads to advantages (and new products and services) at many points in the value chain, from recording studio to end-user (listener) devices and applications.
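To make premise (ii) slightly more concrete, here is a rough sketch of what publishing a content-derived descriptor as RDF might look like; the vocabulary namespace, property names and file name are placeholders of my own, not the ontologies actually used by FAST.

```python
# Illustrative only: compute a simple content-based descriptor and expose it as
# RDF triples. The "audio-features" namespace below is a placeholder vocabulary.
import librosa
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

y, sr = librosa.load("track.wav", sr=None)                       # hypothetical file
centroid_hz = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())

AF = Namespace("http://example.org/audio-features#")             # placeholder, not a real ontology
track = URIRef("http://example.org/tracks/track-001")

g = Graph()
g.add((track, RDF.type, AF.AudioSignal))
g.add((track, AF.meanSpectralCentroid, Literal(centroid_hz, datatype=XSD.float)))
print(g.serialize(format="turtle"))
```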

The project is led by Dr Mark Sandler, Queen Mary University of London, and includes as project participants the University of Nottingham (led by Dr Steve Benford), the University of Oxford (led by Dr David De Roure), Abbey Road Studios, BBC R&D, the Internet Archive, Microsoft Research and the International Audio Laboratories Erlangen.

The results of this first year are amazing, as can be seen on the website, in terms of publications and scientific and technological outcomes, but more importantly, great and inspiring ideas!

I am honoured to be part of the advisory board with such excellent researchers and to contribute to the project as much as I can. Some photos of the meeting:

 

 


Filed under projects, research, Uncategorized

President-elect of the International Society for Music Information Retrieval (ISMIR)


Since January 1st I have been the president-elect of ISMIR. That means that I will assist the current president and will become ISMIR president in two years.

I am very honoured to serve the community, but it is a big responsibility! The current board was elected at the last ISMIR business meeting and includes colleagues from around the world: Fabien Gouyon as the current president, Eric J. Humphrey as secretary, Xiao Hu as treasurer, and Amélie Anglade, Meinard Müller and Geoffroy Peeters as board members.

For those who do not know, the International Society for Music Information Retrieval, as stated on its website, is a non-profit organization seeking to advance the access, organization, and understanding of music information. As a field, music information retrieval (MIR) focuses on the research and development of computational systems to help humans better make sense of this data, drawing from a diverse set of disciplines including, but by no means limited to, music theory, computer science, psychology, neuroscience, library science, electrical engineering, and machine learning. More formally, the goals of ISMIR are:

  1. to foster the exchange of ideas between and among members whose activities, though diverse, stem from a common interest in music information retrieval,
  2. to stimulate research, development, and improvement in teaching in all branches of music information retrieval,
  3. to encourage publication and distribution of theoretical, empirical, and applied studies,
  4. to cooperate with representatives of other organizations and disciplines toward the furtherance of music information retrieval, and
  5. to support and encourage diversity in membership and the disciplines involved as a fundamental aspect of the society.

ISMIR was incorporated in Canada on July 4, 2008. It was previously run by a Steering Committee, and you can become a member by applying here.

The main activity of ISMIR is the annual ISMIR conference, which takes place in a different country each year (ISMIR 2015 was in Málaga and ISMIR 2016 will be in New York). In this graph published by the ISMIR 2015 organizers, it can be seen that ISMIR is a well-established conference, with an attendance of 200 to 300 people and around 100 papers published in each edition.

[Figure: ISMIR conference statistics over the years (attendance and number of papers)]

Those papers have a great impact: ISMIR is currently the 5th-ranked publication in the “Multimedia” subcategory of “Engineering and Computer Science” and the 1st-ranked in the “Music & Musicology” subcategory of “Humanities, Literature, and Arts”.

If you cannot make it, you can join the ISMIR Community group. Since its inception in 2000, the ISMIR community mailing list has grown into a forum of over 1,800 members from across the world, and it routinely receives announcements about conferences, career opportunities, concerts, and a wide variety of other issues relevant to music information retrieval.

My main goals are to make ISMIR more accessible and interdisciplinary. I am also involved in the definition of an open-access journal, and I am particularly involved in WiMIR (I am in fact the first female president, as far as I know). WiMIR is a group of people dedicated to promoting the role of, and increasing opportunities for, women in the MIR field. We meet to socialize, share information, and discuss in an informal setting, with the goal of building a community around women in our field. A photo of the ISMIR 2014 meeting:

[Photo: WiMIR meeting at ISMIR 2014]


Filed under awards, personal, press

Music Information Retrieval & Flamenco: Experiment on note segmentation

Current experiment (updated October 2015)

We are running an experiment on note segmentation in flamenco, in order to understand the mechanisms behind manual transcriptions and improve our automatic transcription methods.

You can help by doing this exercise, in which you segment 10 short flamenco excerpts into notes (it takes less than an hour of your time), and you will get the chance to listen in detail to some flamenco singing.

About

My current research in music information retrieval also addresses flamenco music, especially flamenco singing. I am interested in understanding and modelling, with computational tools, the way humans transcribe flamenco music, in order to generate automatic transcriptions of flamenco performances. Transcriptions are useful for musical analysis in terms of scales, patterns and style. More information on the context of my research can be found on the COFLA website.
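For illustration only (a naive baseline, not our actual transcription method), a pitch contour can be turned into notes by quantising it to semitones and merging consecutive frames, discarding very short segments; the file name and thresholds below are assumptions.

```python
# Naive note segmentation from an f0 contour: quantise to semitones and merge frames.
import numpy as np
import librosa

def segment_notes(f0_hz, times, min_dur=0.1):
    """f0_hz: frame-wise pitch in Hz (NaN where unvoiced); times: frame times (s).
    Returns a list of (onset, offset, midi_pitch) tuples."""
    midi = np.round(librosa.hz_to_midi(f0_hz))   # NaN stays NaN for unvoiced frames
    notes, start = [], None
    for i in range(len(midi)):
        if start is None:
            if not np.isnan(midi[i]):
                start = i
        elif np.isnan(midi[i]) or midi[i] != midi[start]:
            if times[i] - times[start] >= min_dur:
                notes.append((times[start], times[i], int(midi[start])))
            start = None if np.isnan(midi[i]) else i
    if start is not None and times[-1] - times[start] >= min_dur:
        notes.append((times[start], times[-1], int(midi[start])))
    return notes

# Usage with a pYIN contour from a (hypothetical) flamenco excerpt:
y, sr = librosa.load("debla_excerpt.wav", sr=None)
f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=800, sr=sr)
notes = segment_notes(f0, librosa.times_like(f0, sr=sr))
print(notes[:5])
```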


Filed under research

Correlation between musical descriptors and emotions recognized in Beethoven’s Eroica

Last Wednesday I presented a poster at the Ninth Triennial Conference of the European Society for the Cognitive Sciences of Music (ESCOM 2015), which took place at the Royal Northern College of Music, Manchester, UK. It was a very interesting conference, including a very nice symposium on understanding musical audiences and inspiring talks on music education, psychology and wellbeing. I was really impressed by how music can improve quality of life from our early years to the end of our lives.

The work I presented was led by Erika Trent, a student from MIT who spent last summer at my lab thanks to the MIT-Spain program. It was a very productive stay!

In this study we analysed the emotions that listeners perceive when listening to Beethoven's Symphony No. 3, Eroica, a PHENICX target piece, played by the Royal Concertgebouw Orchestra Amsterdam. We then quantified the correlation between listeners' perceived emotions and 1) musical descriptors, and 2) listeners' backgrounds (country of origin, musical knowledge, exposure to classical music and knowledge of the Eroica).

One conclusion of this study is that tonal strength (i.e. key clarity) correlates significantly with listener ratings of peacefulness, joyful activation, tension and sadness. Other significant correlations between emotion ratings and musical descriptors agree with the literature. This matched our hypothesis, even though the stimuli were different parts of the same musical piece.
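As a hedged sketch of the kind of analysis involved (not the study's exact pipeline), key clarity can be approximated as the best correlation between an averaged chroma vector and the Krumhansl-Schmuckler key profiles, and then correlated with listener ratings across excerpts; the audio files and ratings below are placeholders.

```python
# Sketch: frame-averaged "key clarity" via Krumhansl-Schmuckler profiles, then a
# Pearson correlation with listener emotion ratings (placeholder files/ratings).
import numpy as np
import librosa
from scipy.stats import pearsonr

MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def key_clarity(path):
    y, sr = librosa.load(path, sr=None)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)
    # Correlate the averaged chroma with all 24 rotated key profiles; keep the best match.
    corrs = [np.corrcoef(np.roll(profile, k), chroma)[0, 1]
             for profile in (MAJOR, MINOR) for k in range(12)]
    return max(corrs)

# Hypothetical data: one descriptor value and one mean "tension" rating per excerpt.
excerpts = ["mov1_a.wav", "mov1_b.wav", "mov2_a.wav", "mov2_b.wav"]
clarity = np.array([key_clarity(p) for p in excerpts])
tension_ratings = np.array([3.2, 2.1, 4.0, 3.5])            # placeholder ratings
r, p = pearsonr(clarity, tension_ratings)
print(f"key clarity vs. tension: r={r:.2f}, p={p:.3f}")
```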

But there are two other unexpected and interesting findings that we may need to keep researching.

First, we found out that listeners of varying backgrounds agree most on their ratings of sadness, compared to other emotions. Would that be similar for other musical pieces?

Second, listeners with similarly non-musical backgrounds, and listeners of young ages, recognise similar emotions in the same music. In contrast, listeners with more musical experience recognise different emotions in the same music. Could this be caused by personal biases?

Interesting results that might corroborate the need for personalisation in music recommendation engines!

You can read the whole paper and access the poster here. 



Filed under publications