I have been collaborating for a while now on editing a Special Issue of IEEE Multimedia Magazine, which gathers state-of-the-art research on multimedia methods and technologies aimed at enriching music performance, production and consumption.
This is the second time I have acted as a co-editor for a journal (the first was at JNMR, on computational ethnomusicology), and I learnt a lot from the process. Editors have to ensure good submissions, good reviews and recommendations, while keeping the coherence and theme we wanted to convey to our community. Yes: access, distribution and experiences in music are changing with new technologies. I am very happy with the outcomes! Check our editorial paper here, and the full issue here.
And I love the design!
As part of the PHENICX project, we have recently published our research results on the task of audio source separation, which is the main research topic of one of our PhD students, Marius Miron.
During this work, we developed a method for orchestral music source separation along with a new dataset: the PHENICX-Anechoic dataset. The methods were integrated into the PHENICX project for tasks such as orchestra focus and instrument enhancement. To our knowledge, this is the first time source separation has been objectively evaluated in such a complex scenario.
This is the complete reference to the paper:
M. Miron, J. Carabias-Orti, J. J. Bosch, E. Gómez and J. Janer, “Score-informed source separation for multi-channel orchestral recordings”, Journal of Electrical and Computer Engineering, 2016.
Abstract: This paper proposes a system for score-informed audio source separation for multichannel orchestral recordings. The orchestral music repertoire relies on the existence of scores. Thus, a reliable separation requires a good alignment of the score with the audio of the performance. To that end, automatic score alignment methods are reliable when allowing a tolerance window around the actual onset and offset. Moreover, several factors increase the difficulty of our task: a highly reverberant image, large ensembles having rich polyphony, and a large variety of instruments recorded within a distant-microphone setup. To solve these problems, we design context-specific methods such as the refinement of score-following output in order to obtain a more precise alignment. Moreover, we extend a close-microphone separation framework to deal with the distant-microphone orchestral recordings. Then, we propose the first open evaluation dataset in this musical context, including annotations of the notes played by multiple instruments from an orchestral ensemble. The evaluation aims at analyzing the interactions of important parts of the separation framework on the quality of separation. Results show that we are able to align the original score with the audio of the performance and separate the sources corresponding to the instrument sections.
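The core idea of score-informed masking can be illustrated with a minimal toy sketch (this is not the paper's multichannel NMF framework; all shapes, templates and activations below are invented): the score gates when each instrument can be active, and Wiener-style soft masks distribute the mixture energy among the gated source models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_frames = 8, 6

# Hypothetical per-instrument spectral templates (frequency profiles)
templates = rng.random((2, n_bins))

# Score-derived gating: instrument 1 plays the first half, instrument 2
# the second half (in reality, gates come from aligned note onsets/offsets)
score_gate = np.array([[1, 1, 1, 0, 0, 0],
                       [0, 0, 0, 1, 1, 1]])

# Source model per instrument: template (frequency) x gated gains (time)
gains = rng.random((2, n_frames)) * score_gate
models = templates[:, :, None] * gains[:, None, :]   # (2, bins, frames)

# Observed mixture magnitude spectrogram (small epsilon avoids 0/0)
mixture = models.sum(axis=0) + 1e-12

# Wiener-style soft masks: each source's share of the mixture energy
masks = models / mixture
separated = masks * mixture

# Masks sum to one, so the separated sources reconstruct the mixture
assert np.allclose(separated.sum(axis=0), mixture)
```

In the actual system the templates and gains are learned (and refined through score following) rather than drawn at random, but the masking step follows this pattern.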
The PHENICX-Anechoic dataset includes audio and annotations useful for different MIR tasks such as score-informed source separation, score following, multi-pitch estimation, transcription and instrument detection, in the context of symphonic music. This dataset is based on the anechoic recordings described in this paper:
Pätynen, J., Pulkki, V., and Lokki, T., “Anechoic recording system for symphony orchestra,” Acta Acustica united with Acustica, vol. 94, no. 6, pp. 856-865, November/December 2008.
Last Wednesday I presented a poster at the Ninth Triennial Conference of the European Society for the Cognitive Sciences of Music (ESCOM 2015), which took place at the Royal Northern College of Music, Manchester, UK. It was a very interesting conference, including a very nice symposium on understanding musical audiences and inspiring talks on music education, psychology and wellbeing. I was really impressed by how music can improve quality of life from our earliest years to the end of our lives.
In this study we analysed the emotions that listeners perceive when listening to Beethoven's Symphony No. 3, Eroica, the PHENICX target piece, played by the Royal Concertgebouw Orchestra Amsterdam. We then quantified the correlation between listeners’ perceived emotions from the music and 1) musical descriptors, and 2) listeners’ backgrounds (country of origin, musical knowledge, exposure to classical music and knowledge of Eroica).
One conclusion of this study is that tonal strength (i.e. key clarity) correlates significantly with listener ratings of peacefulness, joyful activation, tension and sadness. Other significant correlations between emotion ratings and musical descriptors agree with the literature. This agreed with our hypothesis, since the excerpts were different parts of the same musical piece.
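As a toy illustration of this kind of analysis (the numbers below are invented, not the study's data), one can correlate a per-excerpt descriptor such as key clarity with the mean listener ratings of one emotion:

```python
import numpy as np

# Invented per-excerpt values, only to illustrate the analysis:
# key clarity (tonal strength) vs. mean listener ratings of tension.
key_clarity = np.array([0.81, 0.55, 0.93, 0.40, 0.70, 0.62])
tension     = np.array([0.30, 0.72, 0.18, 0.85, 0.44, 0.58])

# Pearson correlation between the descriptor and the emotion ratings
r = np.corrcoef(key_clarity, tension)[0, 1]
print(f"r = {r:.2f}")   # strongly negative for this made-up data
```

In the study, such correlations were computed per descriptor and per emotion, with significance tests deciding which relationships to report.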
But there are two other unexpected and interesting findings that we might need to continue researching.
First, we found out that listeners of varying backgrounds agree most on their ratings of sadness, compared to other emotions. Would that be similar for other musical pieces?
Second, listeners with similarly unmusical backgrounds, and listeners of young ages, recognise similar emotions in the same music. In contrast, listeners with more musical experience recognise different emotions in the same music. Could this be caused by personal biases?
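One simple way to quantify this kind of agreement (sketched here with invented ratings; rows are listeners, columns emotions) is to compare the spread of ratings across listeners for each emotion:

```python
import numpy as np

# Invented ratings on a 1-5 scale: rows are listeners, columns emotions.
emotions = ["sadness", "tension", "joyful activation", "peacefulness"]
ratings = np.array([
    [4, 2, 5, 3],
    [4, 3, 2, 3],
    [5, 2, 4, 2],
    [4, 5, 1, 4],
])

# Lower standard deviation across listeners = higher agreement
spread = ratings.std(axis=0)
most_agreed = emotions[int(spread.argmin())]
print(most_agreed)
```

With this toy data, "sadness" shows the smallest spread, mirroring the finding that listeners agreed most on sadness; the actual study used proper inter-rater agreement statistics rather than a raw standard deviation.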
Interesting results that might corroborate the need for personalisation in music recommendation engines!
This is the title of my keynote speech yesterday at the Mathematics and Computation in Music Conference, which is taking place in London this week. I presented our work in the PHENICX project, which I am coordinating, applying MIR technologies to the symphonic repertoire. This is the abstract:
An orchestral classical concert embraces a wealth of musical information, which may not be easily perceived or understood by general audiences. Current machine listening and visualization technologies can facilitate the appreciation of distinct musical facets, contributing to innovative and more enjoyable concert experiences. This presentation provides an overview of the challenges and opportunities that symphonic music poses for these technologies. We will summarize our current efforts to improve state-of-the-art methods for melody extraction, structural analysis and source separation when applied to this particular repertoire. Special emphasis will be given to the combination of symbolic, audio and gestural music descriptors, and to the development of meaningful visualizations designed to be exploited in off-line and live concert situations.
This is a video of the event which illustrates our work in the PHENICX project.
It was featured in the Digital Agenda for Europe.
Yes, last month has been so unique for me that I wanted to share it with a post.
From October 27th to November 1st, I attended the 15th International Society for Music Information Retrieval Conference (ISMIR) in Taipei, Taiwan. ISMIR is by far my favorite conference, where I meet most of my colleagues, learn about the advancements in the field, and get fresh ideas for my research. This year was a busy edition for me. We presented some work related to the PHENICX project, where we try to apply MIR techniques to classical music, in particular to the symphonic repertoire within the context of a concert, including real-time description. In addition, we had several meetings of the society board, of which I am a member. Finally, I co-authored a poster on the robustness of low-level features and another on melodic similarity of flamenco music, in the scope of our COFLA project, and presented a demo of our MIR.EDU library for music education! A lot for a single week!
It was a great conference: amazing city and landscape, very good organization, nice presentations and research outcomes, and good perspectives for next year in Málaga, Spain.
After coming back from ISMIR, I had my tenure defense on November 5th. After some years working at UPF and a long waiting period due to economic restrictions, I became a tenured assistant professor thanks to the Serra Hunter program of the Catalan government. I am really happy about that!
One week later, this Wednesday, I attended a workshop in Madrid that the Spanish Association of Symphonic Orchestras (AEOS) and the BBVA Foundation devoted, on its first day, to the new challenges and opportunities that technology offers to orchestras. I presented the PHENICX project, including use cases and the technologies that the different partners are researching and developing, integrated in our prototype. You can find my slides and complementary material here. It has already appeared in the press.
Although summer is still far away, I already feel I need some rest!
A late post about an amazing event. As pointed out on their website, there is no escape from the expansion of information! Yes, we need strategies for structuring, visualizing and locating the information we need.
I have been getting more and more interested in the role of music visualization. There has been a large amount of research within the Music Information Retrieval (MIR) field intended to extract meaningful descriptions from music in audio format, to compute similarity between music pieces and to classify them according to semantic concepts such as mood, style or preference. However, less effort has been devoted to investigating the best strategies to present this information visually to users with different profiles (e.g. expert musicians and people with no theoretical musical knowledge) and in different contexts (e.g. music listening or education). The main challenges are to provide intuitive visualizations of large music collections, to present information related to different temporal scales (from real-time to global descriptors), and to combine descriptions related to different musical facets such as score, rhythm, tonality or instrumentation.
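The temporal-scale challenge can be sketched with a toy example (the signal, frame size and window lengths below are made up, and a real system would use perceptual loudness, chroma or similar descriptors rather than raw RMS): a frame-level descriptor is summarized at progressively coarser scales, from near real-time values to a single global number.

```python
import numpy as np

rng = np.random.default_rng(1)
sr, hop = 44100, 512
signal = rng.standard_normal(sr * 4)         # 4 seconds of noise stand-in

# Frame-level descriptor: RMS energy per hop (~86 values per second)
frames = signal[: len(signal) // hop * hop].reshape(-1, hop)
rms = np.sqrt((frames ** 2).mean(axis=1))

def summarize(x, factor):
    """Average a descriptor over non-overlapping windows of `factor` frames."""
    x = x[: len(x) // factor * factor].reshape(-1, factor)
    return x.mean(axis=1)

second_scale = summarize(rms, 86)            # roughly one value per second
global_scale = rms.mean()                    # one value for the whole excerpt
```

A visualization can then pick the scale that fits the context: frame-level curves for a live concert display, per-second summaries for navigation, and global values for comparing pieces in a collection.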
I had the chance to be invited as a speaker to a workshop on knowledge order and science. In this talk I presented some of our approaches to music visualization in terms of tonality, dynamics, tempo, structure, mood and music preference. I also discussed how these approaches are being considered in the PHENICX project to enrich live concert performances of classical music, and the need for multi-scale, personalized and adaptive representations of music collections.
It was a great event, and you can find the abstracts of the different presentations and the slides on the web. There is also a summary of the workshop here. It was an enriching multi-disciplinary experience; I was happy to see more women than usual at tech events, and now I have the chance to be part of this COST action and great community. Let me illustrate that with a figure from Agustin Martorell’s thesis.