I had the chance to be interviewed by Bennie Moll, together with the inspiring researcher Gerard Assayag (IRCAM, Paris), to discuss the impact that AI is having on music listening and creation, and the related social and ethical challenges.
This interview was published by the Science Media Hub, an initiative of the European Parliament to bring scientists, journalists and policymakers together.
Last September 10th, I had the chance to contribute to the Academia Roundtable on the Future of AI organized by the AIDA Committee of the European Parliament. The goal of my intervention was to set the scene for a further discussion on the role of academia in the AI ecosystem and the current and future challenges it addresses.
It was really an honour to share the screen with and listen to very inspiring researchers such as Holger Hoos, co-founder of CLAIRE, and Andrea Renda (CEPS).
My presentation was titled “The role of academia in the AI field: towards a diverse and excellence-based AI research ecosystem”, and you can find the slides at the AIDA webpage. Happy to get feedback on it!
I am very proud to share the result of a truly interdisciplinary work as part of the HUMAINT project I lead, with Songül Tolan (economist), Annarosa Pesole (social scientist), Fernando Martínez-Plumed (computer scientist), Enrique Fernández-Macías (social scientist), and myself (engineering).
In this paper we develop a framework for analysing the impact of Artificial Intelligence (AI) on occupations. This framework maps 59 generic tasks from worker surveys and an occupational database to 14 cognitive abilities (that we extract from the cognitive science literature) and these to a comprehensive list of 328 AI benchmarks used to evaluate research intensity across a broad range of different AI areas. The use of cognitive abilities as an intermediate layer, instead of mapping work tasks to AI benchmarks directly, allows for an identification of potential AI exposure for tasks for which AI applications have not been explicitly created. An application of our framework to occupational databases gives insights into the abilities through which AI is most likely to affect jobs and allows for a ranking of occupations with respect to AI exposure. Moreover, we show that some jobs that were not known to be affected by previous waves of automation may now be subject to higher AI exposure. Finally, we find that some of the abilities where AI research is currently very intense are linked to tasks with comparatively limited labour input in the labour markets of advanced economies (e.g., visual and auditory processing using deep learning, and sensorimotor interaction through (deep) reinforcement learning).
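The core idea of the framework, mapping work tasks to cognitive abilities and abilities to AI benchmarks rather than tasks to benchmarks directly, can be sketched numerically. The following is a minimal illustration, not the paper's actual data or code: the ability names, task names, benchmark counts, and the simple averaging scheme are all hypothetical stand-ins for the real 59 tasks, 14 abilities, and 328 benchmarks.

```python
# Hypothetical sketch of the two-layer mapping: tasks -> cognitive
# abilities -> AI benchmarks. All names and numbers are illustrative.
import numpy as np

abilities = ["visual_processing", "auditory_processing", "planning"]
tasks = ["sorting_mail", "driving", "scheduling"]

# Binary task-to-ability map: does the task rely on the ability?
task_ability = np.array([
    [1, 0, 0],   # sorting_mail: visual processing only
    [1, 1, 0],   # driving: visual + auditory processing
    [0, 0, 1],   # scheduling: planning
])

# Research intensity per ability, here proxied by a (made-up) count
# of active AI benchmarks evaluating that ability.
benchmarks_per_ability = np.array([120, 60, 20])
intensity = benchmarks_per_ability / benchmarks_per_ability.sum()

# Task-level AI exposure: mean research intensity over the abilities
# the task draws on. Note a task scores > 0 even if no AI application
# targets it directly -- the point of the intermediate ability layer.
exposure = task_ability @ intensity / task_ability.sum(axis=1)

for task, score in zip(tasks, exposure):
    print(f"{task}: {score:.2f}")
```

Ranking tasks (and, by aggregation, occupations) by this exposure score is the kind of analysis the framework enables; the paper's actual mapping and weighting are of course far richer than this averaging toy.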
This article appears in the special track on AI and Society.
‘El placer de admirar’ is a radio programme in which the palaeontologist Juan Luis Arsuaga talks for half an hour with scientists and researchers of all kinds. As he defines it, it is a programme for nonconformists and dreamers, because every scientist is one.
I was lucky enough to take part in this programme, where we discussed artificial intelligence, technology, research and music. You can now listen to the podcast here:
I had the honor of being invited as a keynote speaker at the International Conference on Computational Creativity 2020. The conference took place in a virtual format given the health emergency situation. In my talk, I discussed the different motivations for music information retrieval, the paradigm shift from knowledge-driven to data-driven systems, the main challenges of this field of research, and its impact on society.
It was an interesting experience to give a virtual keynote. First, it was a lot of work to prepare, including recording the video properly at home. In addition, I missed seeing people’s faces and expressions while they listened; this visual feedback is really important to gauge how people react to the different parts of a talk. And I missed the travel and the visit to beautiful Coimbra. Virtual events also have positive sides, such as the comfort of speaking from home, the accessibility for anyone in the world to follow the streaming, and all the energy saved by people not travelling.
You can watch it on YouTube! Any comment or feedback is welcome.
In the context of the HUMAINT (Human behaviour and machine intelligence) project, we research the impact that social robots have on children. In this context, I have had the chance to carry out my first research in the amazing field of child-robot interaction, thanks to the collaboration with Vicky Charisi, Luis Merino and their lab at Universidad Pablo de Olavide, and the Honda Research Institute Japan.
Running a user study with children and robots is very challenging from a technical perspective, and analysing the resulting data is challenging as well. We just published in Frontiers the results of our first study, where we experimented with two strategies for child-robot interaction in a problem-solving task, turn-taking and child-initiated interaction, and showed the benefits of voluntary interaction. You can check the details below. It is amazing to learn and contribute to research on this topic!
Vicky Charisi, Emilia Gomez, Gonzalo Mier, Luis Merino and Randy Gomez
Abstract: The emergence and development of cognitive strategies for the transition from exploratory actions towards intentional problem-solving in children is a key question for the understanding of the development of human cognition. Researchers in developmental psychology have studied cognitive strategies and have highlighted the catalytic role of the social environment. However, it is not yet adequately understood how this capacity emerges and develops in biological systems when they perform a problem-solving task in collaboration with a robotic social agent. This paper presents an empirical study in a human-robot interaction (HRI) setting which investigates children’s problem-solving from a developmental perspective. In order to theoretically conceptualize children’s developmental process of problem-solving in an HRI context, we use principles based on intuitive theory and take into consideration existing research on executive functions, with a focus on inhibitory control. We considered the paradigm of the Tower of Hanoi and conducted an HRI behavioral experiment to evaluate task performance. We designed two types of robot interventions, “voluntary” and “turn-taking”, manipulating exclusively the timing of the intervention. Our results indicate that the children who participated in the voluntary interaction setting showed better performance in the problem-solving activity during the evaluation session, despite their large variability in the frequency of self-initiated interactions with the robot. Additionally, we present a detailed description of the problem-solving trajectory for a representative single case study, which reveals specific developmental patterns in the context of the specific task. Implications and future work are discussed regarding the development of intelligent robotic systems that allow child-initiated interaction as well as targeted rather than constant robot interventions.
I am very happy to share with you the publication of a truly interdisciplinary study on the impact of AI on music, including considerations from copyright and engineering praxis. It has been an amazing experience to collaborate with scholars in the field of creative practices, engineering and law, and I hope the paper will serve to start discussing some relevant aspects related to the use of AI in music production.
The application of artificial intelligence (AI) to music stretches back many decades, and presents numerous unique opportunities for a variety of uses, such as the recommendation of recorded music from massive commercial archives, or the (semi-)automated creation of music. Due to unparalleled access to music data and effective learning algorithms running on high-powered computational hardware, AI is now producing surprising outcomes in a domain fully entrenched in human creativity, not to mention a revenue source around the globe. These developments call for a close inspection of what is occurring, and consideration of how it is changing and can change our relationship with music for better and for worse. This article looks at AI applied to music from two perspectives: copyright law and engineering praxis. It grounds its discussion in the development and use of a specific application of AI in music creation, which raises further and unanticipated questions. Most of the questions collected in this article remain open, as their answers are not yet clear, but they are nonetheless important to consider as AI technologies develop and are applied more widely to music, not to mention other domains centred on human creativity.
Last Friday, March 8th, I was invited to speak at a lunch event of the European Commission intended to provide a scientific perspective on the challenges of gender equality. I gave a talk titled “Women in Artificial Intelligence: mitigating the gender bias”, which is summarized here.
In this context, today my colleague Ana Freire and I are launching the divinAI initiative to monitor the presence of women in AI events. Please come to our HACKFEST event in Barcelona on June 1st!