As part of his recent PhD thesis, Agustín Martorell has studied the potential of multi-scale representations in music analysis. In particular, he focuses on the description of tonality from score representations and on the analysis of pitch-class sets. We have recently published the results of this study in the Journal of Mathematics and Music: Mathematical and Computational Approaches to Music Theory, Analysis, Composition and Performance. The paper is now online!
The paper discusses several analyses while addressing the problem of visualization. The work also produced a MATLAB toolbox, which you can download from here.
Agustín Martorell & Emilia Gómez
This work presents a systematic methodology for set-class surface analysis using temporal multi-scale techniques. The method extracts the set-class content of all the possible temporal segments, addressing the representational problems derived from the massive overlapping of segments. A time versus time-scale representation, named class-scape, provides a global hierarchical overview of the class content in the piece, and it serves as a visual index for interactive inspection. Additional data structures summarize the set-class inclusion relations over time and quantify the class and subclass content in pieces or collections, helping to decide about sets of analytical interest. Case studies include the comparative subclass characterization of diatonicism in Victoria’s masses (in Ionian mode) and Bach’s preludes and fugues (in major mode), as well as the structural analysis of Webern’s Variations for piano op. 27, under different class-equivalences.
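To give a flavor of the kind of computation the abstract describes, here is a minimal Python sketch (not the authors' MATLAB implementation) that extracts the set class, i.e. the Rahn prime form, of every contiguous temporal segment of a short pitch-class sequence. The event representation and function names are illustrative assumptions only:

```python
def prime_form(pcs):
    """Rahn prime form of a pitch-class set, as a tuple starting at 0."""
    pcs = sorted({p % 12 for p in pcs})
    if not pcs:
        return ()

    def rotations(s):
        # Every rotation of the sorted set, transposed to begin on 0.
        for i in range(len(s)):
            rot = s[i:] + s[:i]
            yield tuple((p - rot[0]) % 12 for p in rot)

    inv = sorted({(-p) % 12 for p in pcs})  # the inverted set
    # "Most packed to the left": compare the last interval first,
    # then the second-to-last, etc. -- i.e. minimize the reversed tuple.
    return min(list(rotations(pcs)) + list(rotations(inv)),
               key=lambda t: t[::-1])

def all_segment_classes(events):
    """Set class of every contiguous segment of a list of pitch-class sets."""
    out = {}
    for start in range(len(events)):
        acc = set()
        for end in range(start, len(events)):
            acc |= events[end]
            out[(start, end)] = prime_form(acc)
    return out

# Four events sounding C, E, G, B in turn.
events = [{0}, {4}, {7}, {11}]
segments = all_segment_classes(events)
print(segments[(0, 2)])  # C-E-G: the triad class (0, 3, 7)
print(segments[(0, 3)])  # C-E-G-B: the major-seventh class (0, 1, 5, 8)
```

The quadratic number of segments computed here is exactly the "massive overlapping" the paper's class-scape representation is designed to organize into a time versus time-scale view.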
A late post about an amazing event. As pointed out on their website, there is no escape from the expansion of information! Yes, we need strategies for structuring, visualizing and locating what we need.
I have been getting more and more interested in the role of music visualization. A large amount of research within the Music Information Retrieval (MIR) field has aimed to extract meaningful descriptions from music in audio format, to compute similarity between music pieces, and to classify them according to semantic concepts such as mood, style or preference. However, less effort has been devoted to investigating the best strategies for presenting this information visually to users with different profiles (e.g. expert musicians and people with no theoretical musical knowledge) and in different contexts (e.g. music listening or education). The main challenges are to provide intuitive visualizations of large music collections, to present information related to different temporal scales (from real-time to global descriptors), and to combine descriptions related to different musical facets such as score, rhythm, tonality or instrumentation.
I had the chance to be invited as a speaker to a workshop on knowledge order and science. In this talk I presented some of our approaches to music visualization in terms of tonality, dynamics, tempo, structure, mood and music preference. I also discussed how these approaches are being considered in the PHENICX project to enrich live concert performances in classical music, and the need for multi-scale, personalized and adaptive representations of music collections.
It was a great event: you can find the abstracts of the different presentations and the slides on the web, and a summary of the workshop is also available here. It was an enriching multi-disciplinary experience; I was happy to see more women than usual at tech events, and I now have the chance to be part of this COST action and its great community. Let me illustrate that with a figure from Agustín Martorell's thesis.
I have been editing, together with Perfecto Herrera and Paco Gómez, a Special Issue on Computational Ethnomusicology at the Journal of New Music Research.
The goal of this special issue is to gather relevant, high-quality research on computational methods and applications in ethnomusicology. The papers included here deal with different musical facets such as pitch, pulse and tempo, and voice timbre. They address different musical repertoires, from Central-African to Basque folk music. They also cover a broad area: tools, including data collections, methodology, and core problems in ethnomusicology. Although it was hard work, thanks to the authors and reviewers we managed to assemble a varied and interesting set of articles:
- Computational Ethnomusicology: perspectives and challenges
- Antipattern Discovery in Folk Tunes
- Tarsos, a Modular Platform for Precise Pitch Analysis of Western and Non-Western Music
- Evaluation and Recommendation of Pulse and Tempo Annotation in Ethnic Music
- Breathy, Resonant, Pressed – Automatic Detection of Phonation Mode from Audio Recordings of Singing
- A Location-Tracking Interface for Ethnomusicological Collections
The issue is now available online at the JNMR website, and our introduction is available here and here.
We identify with the type of music we like, and we sometimes use music to define our personality. One of the first questions I ask anyone I meet is "what kind of music do you listen to?".
During the last few years, I have been taking part in a research project whose main goal is to visualize one's musical preferences: "The Musical Avatar". The idea behind it is to use computational tools to automatically describe your music (in audio format) in terms of melody, instrumentation, rhythm, etc., and to use this information both to build an iconic representation of your musical preferences and to recommend you new music. The system is based only on content description, i.e. on the signal itself, and not on contextual information about the music as found on websites, etc. And it works! 🙂
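As a purely hypothetical sketch of this kind of pipeline (the descriptor names, thresholds and trait mapping are my own illustrative assumptions, not those of the paper), one can imagine averaging per-track semantic descriptors into a preference profile and mapping that profile onto discrete, iconic avatar traits:

```python
from statistics import mean

def preference_profile(tracks):
    """Average per-track semantic descriptors (each in [0, 1]) into one profile."""
    keys = tracks[0].keys()
    return {k: mean(t[k] for t in tracks) for k in keys}

def avatar_traits(profile, threshold=0.5):
    """Map the averaged profile onto discrete avatar traits (illustrative only)."""
    return {
        "instrument": "electric_guitar" if profile["electronic"] > threshold
                      else "acoustic_guitar",
        "mood": "energetic" if profile["danceability"] > threshold else "calm",
    }

# Two tracks described by hypothetical audio-derived semantic descriptors.
tracks = [
    {"electronic": 0.9, "danceability": 0.8},
    {"electronic": 0.7, "danceability": 0.4},
]
print(avatar_traits(preference_profile(tracks)))
# e.g. an energetic, electric-guitar-wielding avatar
```

The actual system, as the paper describes, infers far richer semantic descriptors from the audio signal itself; the point of the sketch is only the overall flow from content description to iconic representation.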
We finally published a paper describing the underlying technology and its scientific evaluation in the journal Information Processing & Management. This is the complete reference:
Dmitry Bogdanov, Martín Haro, Ferdinand Fuhrmann, Anna Xambó, Emilia Gómez, Perfecto Herrera. Semantic audio content-based music recommendation and visualization based on user preference examples. Information Processing & Management, Volume 49, Issue 1, pp. 13-33, January 2013.
There is much to improve, but you can see my musical avatar below. Can you guess what my favorite music sounds like? You can of course build yours from your Last.fm profile here.
My automatically generated musical avatar
Highlights from the paper:
- We propose a preference elicitation technique based on explicit preference examples.
- We study audio-based approaches to music recommendation and preference visualization.
- Approaches based on semantics inferred from audio surpass low-level timbre methods.
- Such approaches come close to metadata-based systems, making them suitable for music discovery.
- The proposed visualization captures the core musical preferences of the participants.
One of the projects I've been involved in received a great award. Congratulations to FIA!!
I quote here some information:
“The renowned Polish composer Krzysztof Penderecki wins the Lifetime Achievement Award; French pianist Jean-Efflam Bavouzet is Artist of the Year; German pianist Joseph Moog Young Artist of the Year; Ondine Label of the Year. A Special Achievement Award goes to the producer and re-recording engineer Ward Marston. The Classical Website Award goes to ‘classicalplanet.com’, an outstanding project coordinated by the Fundación Albéniz and offering musical content as well as a social networking platform for young musicians. Among the recipients of the Awards in the 14 CD and DVD, the Jury selected the ECM recording of piano works by Robert Schumann played by Andras Schiff as Recording of the Year.”