At my lab we are starting a new project that integrates our expertise in singing voice processing and music information retrieval to develop tools for choir singers.
CASAS (Community-Assisted Singing Analysis and Synthesis) is a project funded by the Ministry of Economy and Competitiveness of the Spanish Government (TIN2015-70816-R). It started on January 1, 2016 and will end on December 31, 2018.
Humans use singing to create identity, express emotion, tell stories, exercise creativity, and connect with each other. This is demonstrated by the large community of singers active in choirs and by the fact that vocal music makes up an important part of our cultural heritage. More and more music resources are becoming digital, and the Web has become an important tool for singers to discover and study music, to get feedback, and to share their performances. The CASAS project has two complementary goals:
- The first is to improve state-of-the-art technologies that assist singers in their musical practice. We research algorithms for singing analysis and synthesis (e.g., automatic transcription, description, synthesis, classification and visualization), following a user-centered perspective and with the goal of making them more robust, scalable and musically meaningful.
- The second is to enhance current public-domain vocal music archives and create research data for our target music information retrieval (MIR) tasks. Our project puts a special emphasis on choral repertoire in Catalan and Spanish.
We build on our existing methods for music information retrieval and singing voice processing, and we involve a community of singers who use our technologies and provide evaluations, ground truth data and relevance feedback.
I designed a first logo, inspired by choirs, audio and “houses”, which is the English translation of “casas”. It will be an amazing project!
We identify with the type of music we like, and we sometimes use music to define our personality. One of the first questions I ask anyone I meet is “what kind of music do you listen to?”.
During the last few years, I have been taking part in a research project whose main goal is to visualize one’s musical preferences: “The Musical Avatar“. The idea behind it is to use computational tools to automatically describe your music (in audio format) in terms of melody, instrumentation, rhythm, etc., and to use this information to build an iconic representation of your musical preferences and to recommend new music. The whole system is based solely on content description, i.e., on the signal itself, and not on contextual information about the music as found on websites, etc. And it works! 🙂
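To give a flavor of the content-based idea, here is a minimal, hypothetical sketch: imagine each track reduced to a small vector of semantic descriptors computed from the audio signal, a preference profile averaged from liked examples, and candidate tracks ranked by cosine similarity to that profile. All track names and descriptor values below are invented for illustration; the actual system uses a much richer set of audio descriptors and a more elaborate pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recommend(preference_tracks, candidates):
    """Rank candidate tracks by similarity to the mean preference vector.

    preference_tracks / candidates: dicts mapping track name to a
    vector of (hypothetical) semantic audio descriptors.
    """
    dim = len(next(iter(preference_tracks.values())))
    # Preference profile = element-wise mean of the liked examples.
    profile = [
        sum(vec[i] for vec in preference_tracks.values()) / len(preference_tracks)
        for i in range(dim)
    ]
    return sorted(
        candidates,
        key=lambda name: cosine_similarity(profile, candidates[name]),
        reverse=True,
    )

# Invented descriptor vectors, purely for illustration.
favorites = {"track_a": [0.9, 0.2, 0.7], "track_b": [0.8, 0.3, 0.6]}
pool = {"new_1": [0.85, 0.25, 0.65], "new_2": [0.1, 0.9, 0.2]}
print(recommend(favorites, pool))  # new_1 ranks first: closest to the profile
```

The same averaged profile vector could also feed a visualization step, mapping each descriptor dimension to a visual attribute of the avatar.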
We finally published a paper describing the technology behind it, along with its scientific evaluation, in the journal Information Processing & Management. This is the complete reference:
Dmitry Bogdanov, Martín Haro, Ferdinand Fuhrmann, Anna Xambó, Emilia Gómez, Perfecto Herrera, “Semantic audio content-based music recommendation and visualization based on user preference examples,” Information Processing & Management, Volume 49, Issue 1, pp. 13–33, January 2013.
There is much to improve, but you can see my musical avatar below. Can you guess what my favorite music sounds like? You can of course build yours from your Last.fm profile here.
My automatically generated musical avatar
- We propose a preference elicitation technique based on explicit preference examples.
- We study audio-based approaches to music recommendation and preference visualization.
- Approaches based on semantics inferred from audio surpass low-level timbre methods.
- Such approaches are close to metadata-based systems, making them suitable for music discovery.
- The proposed visualization captures the core musical preferences of the participants.
I’m very happy to have coauthored a chapter in this book on Multimodal Music Processing, a result of a seminar that Meinard Müller, Masataka Goto and Simon Dixon organized last year.
I contributed to a chapter about user modeling and personalization, which I think is a key aspect of future MIR systems. Searches, descriptors, similarity measures and classification algorithms should be adapted to different user needs, in order to provide powerful and informative recommendation and retrieval services.
I hope you will find it interesting!