News item on the UPF website about the new HUMAINT project
You can read it here.
Postdoc application deadline: OCTOBER 26, 2017
I am excited to lead a novel research initiative inside the European Commission’s Joint Research Centre, on the topic of machine learning and human behaviour.
The Joint Research Centre (JRC) is the European Commission's science and knowledge service, which employs scientists to carry out research in order to provide independent scientific advice and support to EU policy. The JRC Centre for Advanced Studies (JRC-CAS) was established to enhance the JRC's capabilities to meet emerging challenges at the science-policy interface. JRC-CAS is now launching a three-year interdisciplinary project to understand the potential impact of machine learning on human behaviour and societal welfare. It will be carried out at the JRC centre in Seville, Spain, in close collaboration with the Music Technology Group and the Department of Information and Communication Technologies of Universitat Pompeu Fabra in Barcelona, Spain.
The HUMAINT project will (1) provide a scientific understanding of machine vs human intelligence; (2) analyze the influence of machine learning algorithms on human behaviour; and (3) investigate to what extent these findings should influence the European regulatory framework. Given my research expertise, music will be an important use case to address.
In the context of this project, three postdoc positions in the area of machine learning and human behaviour are open for appointment from January 1, 2018, at the Joint Research Centre (European Commission) in Seville, Spain. The fully funded positions are available for a period of three years. Particular areas of interest:
Fairness, accountability, transparency, explainability of machine learning methods.
Social, ethical and economic aspects of artificial intelligence.
Human-computer interaction and human-centered machine learning.
Digital economy and behavioural economics.
Application domains: music and arts, social networks, health, transport, energy.
We are looking for highly motivated, independent, and outstanding postdoc candidates with a strong background in machine learning and/or human behaviour. An excellent research track record, the ability to communicate research results, and involvement in community initiatives are expected. Candidates should have EU/EEA citizenship.
The JRC offers an enriching multi-cultural and multi-lingual work environment with lifelong learning and professional development opportunities, and close links to top research organisations and international bodies around the world. Postdoctoral researchers receive a competitive salary and excellent working conditions, and will define their own research agenda in line with the project goals.
JRC-Seville is located in the Cartuja 93 scientific and technological park. Seville is the fourth-largest city in Spain. With more than 30 centuries of history (gateway to the Americas for two centuries, and a main actor in the first circumnavigation of the Earth), three UNESCO World Heritage Sites, and a privileged climate, it combines its historical and touristic character with consolidated economic development and innovation potential.
We are also open to collaborations with external researchers, as one of our goals is to build an expert network on these topics.
You may obtain further information about the scientific aspects of the positions from Dr. Emilia Gómez (project scientific leader: firstname.lastname@example.org, using the subject [humaint]) and at the following web pages http://recruitment.jrc.ec.europa.eu/?site=SVQ&type=AX&category=FGIV and https://ec.europa.eu/jrc/en/working-with-us/jobs.
YOU CAN APPLY HERE BY OCTOBER 26, 2017 (first application round).
By the way, the positions are in Seville, the city where I was born, one of the most beautiful places in the world 🙂
I am happy to announce that the International Society for Music Information Retrieval has launched the Transactions of the International Society for Music Information Retrieval (TISMIR), the open access journal of the ISMIR society, at Ubiquity Press. I am serving as Editor-in-Chief, together with Simon Dixon and Anja Volk.
TISMIR publishes novel scientific research in the field of music information retrieval (MIR).
We welcome submissions from a wide range of disciplines: computer science, musicology, cognitive science, library & information science and electrical engineering.
We are currently accepting submissions.
View our submission guidelines for more information.
I have been collaborating for a while now on editing a Special Issue of IEEE MultiMedia magazine, which gathers state-of-the-art research on multimedia methods and technologies aimed at enriching music performance, production and consumption.
It is the second time I have acted as a co-editor for a journal (the first was at JNMR, on computational ethnomusicology) and I learnt a lot from the process. Editors have to ensure good submissions, good reviews and recommendations, while keeping the coherence and theme we wanted to convey to our community. Yes: access, distribution and experiences in music are changing with new technologies. I am very happy with the outcomes! Check our editorial paper here, and the full issue here.
And I love the design!
As part of the PHENICX project, we have recently published our research results in the task of audio sound source separation, which is the main research topic of one of our PhD students, Marius Miron.
During this work, we developed a method for orchestral music source separation along with a new dataset: the PHENICX-Anechoic dataset. The methods were integrated into the PHENICX project for tasks such as orchestra focus/instrument enhancement. To our knowledge, this is the first time source separation has been objectively evaluated in such a complex scenario.
This is the complete reference to the paper:
M. Miron, J. Carabias-Orti, J. J. Bosch, E. Gómez and J. Janer, "Score-informed source separation for multi-channel orchestral recordings", Journal of Electrical and Computer Engineering (2016).
Abstract: This paper proposes a system for score-informed audio source separation for multichannel orchestral recordings. The orchestral music repertoire relies on the existence of scores. Thus, a reliable separation requires a good alignment of the score with the audio of the performance. To that extent, automatic score alignment methods are reliable when allowing a tolerance window around the actual onset and offset. Moreover, several factors increase the difficulty of our task: a high reverberant image, large ensembles having rich polyphony, and a large variety of instruments recorded within a distant-microphone setup. To solve these problems, we design context-specific methods such as the refinement of score-following output in order to obtain a more precise alignment. Moreover, we extend a close-microphone separation framework to deal with the distant-microphone orchestral recordings. Then, we propose the first open evaluation dataset in this musical context, including annotations of the notes played by multiple instruments from an orchestral ensemble. The evaluation aims at analyzing the interactions of important parts of the separation framework on the quality of separation. Results show that we are able to align the original score with the audio of the performance and separate the sources corresponding to the instrument sections.
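For readers curious about the core mechanism, here is a minimal sketch of score-informed NMF separation in Python. It illustrates the general idea only, not the paper's multi-channel system: the toy mask format, variable names and mono input are my own assumptions, and the published pipeline adds alignment refinement and distant-microphone handling on top.

```python
import numpy as np

def score_informed_nmf(S, W_init, H_mask, n_iter=100, eps=1e-10):
    """Multiplicative-update NMF where the aligned score masks the activations.

    S      : magnitude spectrogram (freq x time)
    W_init : spectral templates, e.g. several per instrument (freq x components)
    H_mask : binary mask (components x time); zeros mark frames where the
             aligned score says a component cannot sound.
    """
    W = W_init.copy()
    H = np.random.rand(*H_mask.shape) * H_mask   # score-inactive entries start at 0
    for _ in range(n_iter):
        H *= (W.T @ (S / (W @ H + eps))) / (W.T @ np.ones_like(S) + eps)
        H *= H_mask                              # multiplicative updates keep zeros at 0
        W *= ((S / (W @ H + eps)) @ H.T) / (np.ones_like(S) @ H.T + eps)
    return W, H

def soft_mask_source(S_complex, W, H, comp_idx, eps=1e-10):
    """Wiener-style reconstruction of the components belonging to one instrument."""
    V_src = W[:, comp_idx] @ H[comp_idx, :]
    return S_complex * (V_src / (W @ H + eps))

# Usage sketch (mono for simplicity; file and variable names are hypothetical):
# import librosa
# y, sr = librosa.load("orchestra_mix.wav", sr=None)
# S_complex = librosa.stft(y)
# W, H = score_informed_nmf(np.abs(S_complex), W_init, H_mask)
# violins = librosa.istft(soft_mask_source(S_complex, W, H, violin_components))
```

The key property the sketch relies on is that multiplicative updates never revive an activation that starts at zero, so the score constraints survive every iteration.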
The PHENICX-Anechoic dataset includes audio and annotations useful for different MIR tasks such as score-informed source separation, score following, multi-pitch estimation, transcription or instrument detection, in the context of symphonic music. This dataset is based on the anechoic recordings described in this paper:
Pätynen, J., Pulkki, V., and Lokki, T., "Anechoic recording system for symphony orchestra," Acta Acustica united with Acustica, vol. 94, no. 6, pp. 856-865, November/December 2008.
At my lab we are starting a new project where we integrate our expertise in singing voice processing and music information retrieval to generate tools for choir singers.
CASAS (Community-Assisted Singing Analysis and Synthesis) is a project funded by the Ministry of Economy and Competitiveness of the Spanish Government (TIN2015-70816-R), which started on January 1st, 2016 and will end on December 31st, 2018.
Humans use singing to create identity, express emotion, tell stories, exercise creativity, and connect with each other while singing together. This is demonstrated by the large community of singers active in choirs and the fact that vocal music makes up an important part of our cultural heritage. Currently, more and more music resources are becoming digital, and the Web has become an important tool for singers to discover and study music, to obtain feedback, and to share their singing performances. The CASAS project has two complementary goals:
1. To exploit our current methods for Music Information Retrieval and Singing Voice Processing to generate tools for singers.
2. To involve a community of singers who use our technologies and provide their evaluations, ground truth data and relevance feedback.
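As a flavour of the kind of singer-facing tool the project targets, here is a small, hypothetical sketch (not project code): it extracts a singer's F0 contour with pYIN and reports the average deviation, in cents, from a reference rendition. File names and the comparison heuristic are assumptions for illustration.

```python
import numpy as np
import librosa

def f0_in_cents(path, fmin=80.0, fmax=1000.0):
    """Estimate an F0 contour with pYIN and convert it to cents above C1."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    cents = 1200.0 * np.log2(f0 / librosa.note_to_hz("C1"))
    cents[~voiced] = np.nan          # ignore unvoiced frames
    return cents

def mean_deviation_cents(sung, reference):
    """Mean absolute pitch deviation over frames voiced in both renditions."""
    n = min(len(sung), len(reference))
    return np.nanmean(np.abs(sung[:n] - reference[:n]))

# Hypothetical usage: compare a singer's take against a reference rendition.
# feedback = mean_deviation_cents(f0_in_cents("alto_take.wav"),
#                                 f0_in_cents("alto_reference.wav"))
```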
I designed my first logo, inspired by choirs, audio and "houses" (the English translation of "casas"). It will be an amazing project!
Our paper on melodic similarity is finally online!
This is the result of joint work within the COFLA group, to which I contribute technologies for the automatic transcription and melodic description of music recordings.
This is an example of how we compare flamenco tonás using melodic similarity and phylogenetic trees:
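To make the pipeline concrete, here is a toy Python sketch: a plain edit distance between note sequences produces a pairwise distance matrix, which is then turned into a tree. The melodies below are invented, and a hierarchical-clustering dendrogram stands in for the proper phylogenetic inference the study performs on transcriptions of real recordings.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

def edit_distance(a, b):
    """Classic Levenshtein distance between two pitch sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1))
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[-1, -1]

# Invented MIDI-pitch sequences standing in for transcribed tonás.
melodies = {"tona_1": [60, 62, 63, 62, 60], "tona_2": [60, 62, 63, 65, 63],
            "tona_3": [57, 59, 60, 59, 57], "tona_4": [57, 59, 60, 62, 60]}
names = list(melodies)
D = np.array([[edit_distance(melodies[x], melodies[y]) for y in names] for x in names])

# Cluster the symmetric distance matrix; the dendrogram plays the role of
# the phylogenetic tree in this toy setting.
tree = linkage(squareform(D), method="average")
dendrogram(tree, labels=names)
plt.show()
```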
And this is a video example of the type of styles we analyze in this paper, done by Nadine Kroher based on her work at the MTG:
You can read the full paper online:
The Music Technology Group (MTG) of the Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona is opening a PhD fellowship in the area of Music Information Retrieval to start in the Fall of 2016.
Application closing date: 05/05/2016
Start date: 01/10/2016
Duration: 3+1 years
Topics: automatic transcription, sound source separation, music classification, singing voice processing, melody extraction, music synchronization, classical music, computational ethnomusicology.
Requirements: Candidates must have a good Master's degree in Computer Science, Electronic Engineering, Physics or Mathematics. Candidates must be confident in some of these areas: signal processing, information retrieval, and machine learning. They must have excellent programming skills, be fluent in English, and possess good communication skills. Musical knowledge would be an advantage, as would previous experience in research and a track record of publications.
More information on grant details:
Provisional starting date: October 1st 2016
Application: Interested candidates should send a motivation letter, a CV (preferably with references), and academic transcripts to Prof. Emilia Gómez (email@example.com) before May 1st 2016. Please include in the subject [PhD MIR].
Over the last months, several journal publications related to our research on flamenco & technology are finally online.
One of them is work with my former PhD student, Nadine Kroher (who has now moved to Universidad de Sevilla), on the automatic transcription of flamenco singing. Flamenco singing is really challenging in terms of computational modelling, given its ornamented character and variety, and we have designed a system for its automatic transcription, focusing on polyphonic recordings.
The proposed system outperforms state-of-the-art singing transcription systems with respect to voicing accuracy, onset detection, and overall performance when evaluated on flamenco singing datasets. We think it will be a contribution not only to flamenco research but also to other singing styles.
You can read about our algorithm in the paper we published in IEEE/ACM TASLP, where we present the method, strategies for evaluation, and a comparison with state-of-the-art approaches. You can not only read about it, but actually try it: we published open source software for the algorithm, plus a music dataset for its comparative evaluation, cante2midi (I will talk about flamenco corpora in another post). All of this to foster research reproducibility and to motivate people to work on flamenco music.
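For intuition, here is a heavily simplified sketch of the note-segmentation stage. The published system works on polyphonic recordings with predominant-melody extraction and contour-aware rules; here pYIN (which is monophonic) and a naive duration threshold stand in as assumptions for illustration, and the file name is hypothetical.

```python
import numpy as np
import librosa

def contour_to_notes(f0, times, min_dur=0.1):
    """Group consecutive frames with the same rounded MIDI pitch into notes."""
    midi = np.full(len(f0), -1)
    voiced = ~np.isnan(f0)
    midi[voiced] = np.round(librosa.hz_to_midi(f0[voiced])).astype(int)
    notes, start = [], 0
    for i in range(1, len(midi) + 1):
        if i == len(midi) or midi[i] != midi[start]:
            if midi[start] >= 0 and times[i - 1] - times[start] >= min_dur:
                notes.append((times[start], times[i - 1], int(midi[start])))
            start = i
    return notes  # list of (onset, offset, midi_pitch) triples

y, sr = librosa.load("cante.wav", sr=None)
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
print(contour_to_notes(f0, librosa.times_like(f0, sr=sr)))
```

The real difficulty, which this sketch sidesteps, is exactly what the paper addresses: heavily ornamented contours make the pitch-run grouping above far too naive for flamenco.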
FAST-IMPACT stands for "Fusing Acoustic and Semantic Technologies for Intelligent Music Production and Consumption" and it is funded by EPSRC, the Engineering and Physical Sciences Research Council, UK, with £5,199,944 (side note: OMG, this is real funding; they should take note at the new Spanish Agencia Estatal para la Investigación).
According to its website, this five-year EPSRC project brings the very latest technologies to bear on the entire recorded music industry, end-to-end, producer to consumer, making the production process more fruitful, the consumption process more engaging, and the delivery and intermediation more automated and robust. It addresses three main premises:
(i) that Semantic Web technologies should be deployed throughout the content value chain from producer to consumer;
(ii) that advanced signal processing should be employed in the content production phases to extract “pure” features of perceptual significance and represent these in standard vocabularies;
(iii) that this combination of semantic technologies and content-derived metadata leads to advantages (and new products and services) at many points in the value chain, from recording studio to end-user (listener) devices and applications.
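To illustrate premises (ii) and (iii), here is a hedged sketch of the idea (not FAST-IMPACT code): a content-derived feature is extracted from audio and published as RDF so it can travel along the value chain. The Music Ontology namespace is real, but the specific tempo property, file name and track URI are assumptions for the example.

```python
import librosa
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

MO = Namespace("http://purl.org/ontology/mo/")  # the Music Ontology

# Extract a content-derived feature (here: an estimated tempo).
y, sr = librosa.load("track.wav", sr=None)      # hypothetical file name
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

# Publish it as RDF so downstream tools in the value chain can consume it.
g = Graph()
g.bind("mo", MO)
track = URIRef("http://example.org/tracks/1")   # hypothetical track URI
g.add((track, RDF.type, MO.Track))
g.add((track, MO.tempo, Literal(float(tempo), datatype=XSD.float)))  # property assumed
print(g.serialize(format="turtle"))
```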
The project is led by Dr. Mark Sandler, Queen Mary University of London, and includes as project participants the University of Nottingham (led by Dr. Steve Benford), the University of Oxford (led by Dr. David De Roure), Abbey Road Studios, BBC R&D, the Internet Archive, Microsoft Research and the Audio Laboratories Erlangen.
The results for this first year are amazing, as can be seen on the web, in terms of publications and scientific and technological outcomes but, more importantly, great and inspiring ideas!
I am honoured to be part of the advisory board with such excellent researchers and to contribute to the project as much as I can. Some photos of the meeting: