In the EU-funded research project PHENICX, we have developed a prototype with which you can learn more about and enjoy classical music. It contains innovative features to visualize and explore pieces of classical music. The objective of PHENICX is to make classical music more attractive to a larger audience by means of technology. The prototype is the result of a collaboration between an orchestra, research institutes, universities, and online video application developers. I would like to invite you to try it out!
Go to http://www.surveygizmo.com/s3/2383491/phenicx-web and follow the instructions to start participating!
Are you curious? I would much appreciate your input. Trying out the prototype and filling in the questionnaire will take you about half an hour. If you do, you will have a chance to win a €50 (1x), €25 (2x), or €10 (5x) gift certificate. So please try out the prototype and support our research by answering a number of questions!
This is the title of the keynote speech I gave yesterday at the Mathematics and Computation in Music Conference, which is taking place in London this week. I presented our work in the PHENICX project, which I am coordinating, on applying MIR technologies to symphonic repertoire. This is the abstract:
An orchestral classical concert embraces a wealth of musical information, which may not be easily perceived or understood by general audiences. Current machine listening and visualization technologies can facilitate the appreciation of distinct musical facets, contributing to innovative and more enjoyable concert experiences. This presentation provides an overview of the challenges and opportunities that symphonic music poses for these technologies. We will summarize our current efforts to improve state-of-the-art methods for melody extraction, structural analysis, and source separation as applied to this particular repertoire. Special emphasis will be given to the combination of symbolic, audio and gestural music descriptors, and to the development of meaningful visualizations designed to be exploited in off-line and live concert situations.
Among other things, I presented the work we carried out in Seville for the Exponential Prometheus opening concert of the Singularity Summit Spain on March 12th, 2015.
This is a video of the event, which illustrates our work in the PHENICX project.
It was featured in the Digital Agenda for Europe.
Tomorrow, June 6, at 15:00 I will be giving my first keynote speech at the 3rd FMA workshop in Amsterdam. I am honored by the invitation!
I will talk about the state of the art and challenges of automatic music transcription and description technologies, and I will illustrate it with examples from projects I have been involved in and from research at other institutions. I hope the audience will enjoy it!
Keynote talk: Towards Computer-Assisted Transcription and Description of Music Recordings
By Dr. Emilia Gómez (Universitat Pompeu Fabra)
Automatic transcription, i.e. computing a symbolic musical representation from a music recording, is one of the main research challenges in the field of sound and music computing. For monophonic material, the obtained transcription is a single musical line, usually a melody; for polyphonic music, there is particular interest in transcribing the predominant melodic line. In addition to transcribing, current technologies are able to extract other musical descriptions related to tonality, rhythm or instrumentation from music recordings. Automatic description could potentially complement traditional methodologies for music analysis.
In this talk I will first present the state of the art in automatic transcription and description of music audio signals. I will illustrate it with our own research on tonality estimation, melodic transcription and rhythmic characterization. I will show that, although current research is promising, current algorithms are still limited in accuracy and there is a semantic gap between automatic feature extractors and expert analyses.
Finally, I will present some strategies to address these challenges by developing methods adapted to different repertoire and defining strategies to integrate expert knowledge into computational models, as a way to build systems following a “computer-assisted” paradigm.
Some weeks ago we announced the release of a new dataset of flamenco singing: TONAS.
The dataset includes 72 sung excerpts representative of three a cappella flamenco singing styles, i.e. Tonás (Debla and two variants of Martinete), together with manually corrected fundamental frequency and note transcriptions.
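To give an idea of how such annotations can be used, here is a minimal sketch that parses a plain-text fundamental frequency track (assumed here to be a two-column "time in seconds, F0 in Hz" format, which is a common but hypothetical layout; check the dataset documentation for the actual file format) and converts voiced frames to fractional MIDI note numbers:

```python
import math

def hz_to_midi(f_hz):
    """Convert a frequency in Hz to a (fractional) MIDI note number,
    using the standard reference A4 = 440 Hz = MIDI note 69."""
    return 69.0 + 12.0 * math.log2(f_hz / 440.0)

def parse_f0_track(lines):
    """Parse a hypothetical two-column annotation: time (s) and F0 (Hz).
    Frames with F0 <= 0 are treated as unvoiced and skipped."""
    track = []
    for line in lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        t, f = float(parts[0]), float(parts[1])
        if f > 0:
            track.append((t, hz_to_midi(f)))
    return track

# Toy example: an unvoiced frame followed by A3 (220 Hz) and A4 (440 Hz)
example = ["0.00 0.0", "0.01 220.0", "0.02 440.0"]
print(parse_f0_track(example))  # → [(0.01, 57.0), (0.02, 69.0)]
```

A pitch curve in MIDI units like this is a typical starting point for note segmentation or melodic similarity experiments on sung material.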
This collection was built by the COFLA team in the context of our research project for melodic transcription, similarity and style classification in flamenco music.
- Mora, J., Gomez, F., Gomez, E., Escobar-Borrego, F.J., Diaz-Banez, J.M. (2010). Melodic Characterization and Similarity in A Cappella Flamenco Cantes. 11th International Society for Music Information Retrieval Conference (ISMIR 2010).
- Gomez, E., Bonada, J. (in press). Towards Computer-Assisted Flamenco Transcription: An Experimental Comparison of Automatic Transcription Algorithms As Applied to A Cappella Singing. Computer Music Journal.
Further information about the music collection, including how the samples were transcribed and by whom, is available on the dataset website, where you can of course download the audio, metadata and transcription files.
We hope that this collection will be useful, whether for automatic transcription of the singing voice or for other research topics (e.g. pitch estimation, onset detection, melodic similarity, singer identification, style classification), and that it will increase the interest of our scientific community in the particular challenges of flamenco singing.
So far we have seen quite a number of downloads for different purposes: research on music transcription, onset detection, folk music, personal study of singing techniques, and even out of curiosity! 🙂
Please check out these two excellent works by SMC students at the MTG. Congratulations to Bruno and Maria for their work!