Projects

MIMA is collaborating with cutting-edge artists, researchers and companies on a range of AI-related musical projects.


Voice-banking for trans voices

Our voice is a central part of our identity: it can be a positive means of self-expression and connection with others, but also a source of anxiety for those experiencing gender dysphoria.

We are working with the founders of Trans Voices UK, the UK’s first trans+ choir, to create software which enables novel vocal creativity, resulting in original music performances and recordings. Vocal recordings, 'banked' both before and after gender-affirming surgery and treatment, are combined using a bespoke software tool.

Building on technical precedents in sample-matching technologies and musical precedents in extended vocal techniques, lip-synching and drag performance, the software tool can call up banked vocal samples, which the music creator can then incorporate into their studio workflow to create new music.
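The project's matching engine is not described in detail here, so the following is only a minimal sketch of how sample matching over a bank of vocal recordings might work, assuming MFCC features and nearest-neighbour lookup; the feature choice, file layout and names are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of sample matching over a bank of vocal recordings.
# Assumes banked clips are WAV files in a directory; MFCC features and
# nearest-neighbour matching are illustrative assumptions only.
from pathlib import Path

import librosa
import numpy as np


def clip_features(path, sr=22050):
    """Summarise a clip as the mean of its MFCC frames."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)


def build_bank(bank_dir):
    """Pre-compute features for every banked vocal sample."""
    return {p: clip_features(p) for p in Path(bank_dir).glob("*.wav")}


def match(query_path, bank):
    """Return the banked clip whose features are closest to the query."""
    query = clip_features(query_path)
    return min(bank, key=lambda p: np.linalg.norm(bank[p] - query))


# Example: find the banked phrase closest to a newly sung phrase.
# bank = build_bank("banked_vocals/")
# print(match("new_phrase.wav", bank))
```

In a studio workflow the matched clip would then be dropped onto a track in the creator's audio software; here the function simply returns the closest file in the bank.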

Interviews with our singer-collaborators will document their lived experience of vocality during gender-affirming healthcare, as they reflect on their creative and vocal identity during this process. This will enable singers to explore their vocal identity in new ways, bring insight into the role of the voice and creative expression in gender identity, and offer a way of valuing trans musicians and voices and helping to make them heard – literally and metaphorically.

The project is funded by a Small Project Award from the Arts and Humanities Knowledge Exchange Fund (2022–23).


AI music generation

Commercial music production is entering a new era in which music is created with AI. However, this raises challenges for artistic and computational practice, as well as complex ethical questions for intellectual property law.

We are aiming to build an evidence-based foundation for the creation of music generation software. This will help formulate new ethical standards of practice on AI-assisted music generation that anticipate emerging regulatory problems. While there have been a number of reports and consultations on this topic, these do not provide evidence regarding the concrete moments at which issues of AI and IP emerge nor how they are navigated in practice. Our project addresses that gap.

We have developed an Artificial Musical Intelligence (AMI), a large-scale, general-purpose, deep neural network that our musician partners are using to generate musical compositions. By working with musicians in a participatory design (PD) framework, we are able to critically observe how they work with AI-assisted composition and therefore where they encounter – and how they navigate – ethical and legal questions. The relationship between ethics and law, and between what musicians believe and what the law actually says, is explored 'bottom up' as it emerges from the PD process with musicians and representative bodies.
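AMI is described here only as a large-scale, general-purpose deep neural network, so the sketch below should be read as an illustration of the autoregressive workflow many such systems use – musical events encoded as tokens, with the model proposing the next token one step at a time – rather than as AMI itself. The toy_next_token_logits function is a hypothetical stand-in for a trained network.

```python
# Illustrative autoregressive sampling loop for token-based music
# generation. The "model" below is a toy stand-in, not AMI.
import numpy as np

VOCAB = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "rest"]


def toy_next_token_logits(history):
    """Placeholder for a trained model: returns scores over the vocabulary."""
    rng = np.random.default_rng(len(history))
    logits = rng.normal(size=len(VOCAB))
    if history:
        logits[history[-1]] += 1.0  # mild preference for repeating the last event
    return logits


def sample_sequence(length=16, temperature=1.0, seed=0):
    """Generate a token sequence by repeated temperature sampling."""
    rng = np.random.default_rng(seed)
    history = []
    for _ in range(length):
        logits = toy_next_token_logits(history) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        history.append(rng.choice(len(VOCAB), p=probs))
    return [VOCAB[t] for t in history]


# Example: print(sample_sequence(length=16, temperature=0.9))
```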

In this context, AMI provides a platform for musicians to explore human–AI co-creation. This enables us to better understand the challenges raised by AI music generation, and to work towards co-creating a best imagined future for music AI.

The project has been funded by two UKRI schemes: Higher Education Innovation Funding (2021–22) and the QR-Policy fund (2022–23).


Music summarisation for sonic branding

Sonic branding is everywhere in the contemporary mediascape: from the short social media clips and sonic logos of mobile computing through to longer-form broadcast media such as TV advertisements and background music.

This project addresses a conceptual and practical challenge: What constitutes a coherent sonic identity across multiple 'touchpoints' – the individual contact points between consumer and brand – of different durations?

We’re working with sonic branding agency Maison Mercury Jones to investigate sonic identity across musical materials of different durations, propose a theoretical framework for sonic identity in branding contexts, and create a musical summarisation software tool helpful to sonic-branding practitioners.

There has been substantial research into the complex task of reducing the duration of music in a meaningful way. Approaches such as time compression and harmonic reduction are unsatisfactory, as they omit what is often perceptually important, and concatenating evenly distributed short excerpts can miss perceptually relevant moments. Current standard practice is therefore to identify a single, salient, short excerpt or to join several distributed salient excerpts. However, most research has explored these issues through the lens of music information retrieval ('search') rather than music creation. Our project addresses this gap by working together with sonic-branding experts.
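As an illustration of the 'single salient excerpt' practice described above, the sketch below scores fixed-length windows of a track by their average similarity to the rest of the music and returns the highest-scoring one. The chroma features and cosine-similarity scoring are assumptions made for the example, not the tool being developed with Maison Mercury Jones.

```python
# Minimal sketch of excerpt-based summarisation: keep the window that is
# most similar, on average, to the rest of the track.
import librosa
import numpy as np


def salient_excerpt(path, excerpt_seconds=6.0):
    audio, sr = librosa.load(path, mono=True)
    hop = 512
    chroma = librosa.feature.chroma_cqt(y=audio, sr=sr, hop_length=hop)
    # Normalise frames so dot products are cosine similarities.
    frames = chroma / (np.linalg.norm(chroma, axis=0, keepdims=True) + 1e-9)
    similarity = frames.T @ frames            # frame-to-frame similarity matrix
    win = int(excerpt_seconds * sr / hop)     # excerpt length in frames
    # Average similarity of each candidate window against the whole track.
    scores = [similarity[i:i + win].mean()
              for i in range(max(1, similarity.shape[0] - win))]
    start_time = int(np.argmax(scores)) * hop / sr
    return start_time, start_time + excerpt_seconds


# Example: start, end = salient_excerpt("track.wav")
```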

The project is funded by an AHRC/WRoCAH Collaborative Doctoral Award (2023–26).


Teaching coding through music creation: Sonic Pi

UK secondary school education in computer science is patchy and fragile. Despite the introduction of new GCSE and A-Level curricula, pupils are not engaging with the discipline.

Cross-curricular computing is one approach being actively investigated to address this issue. By creating music with software tools, for example, pupils can learn to think algorithmically and logically: breaking complex compositions down into smaller pieces and using programming concepts to manipulate them.
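As an illustration of that idea (in Python rather than Sonic Pi, and with invented musical material), the sketch below treats a composition as data that can be broken into phrases and reassembled with loops, functions and simple list operations.

```python
# A phrase is just a list of (note_name, beats) pairs.
RIFF = [("C4", 0.5), ("E4", 0.5), ("G4", 0.5), ("E4", 0.5)]
ANSWER = [("A3", 1.0), ("B3", 1.0)]


def transpose(phrase, semitones):
    """Shift every note in a phrase up by a number of semitones."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    out = []
    for note, beats in phrase:
        name, octave = note[:-1], int(note[-1])
        index = names.index(name) + semitones
        out.append((names[index % 12] + str(octave + index // 12), beats))
    return out


def song():
    """Assemble a verse by repeating and varying the building blocks."""
    verse = []
    for _ in range(2):              # repetition via a loop
        verse += RIFF + ANSWER
    verse += transpose(RIFF, 5)     # variation via a function
    return verse


print(song())
```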

A key exponent of this approach is Dr Sam Aaron, who has developed the widely used Sonic Pi platform for the live coding of music, a form of electronic music-making that emerged 20 years ago. Sonic Pi includes a built-in sound synthesizer, so changes the user makes to their program code are heard immediately as changes in the musical sound produced.

We are developing an approach in which a named musical artist endorses a 'pack' of tutorial material for live coding, consisting of various exercises, a video introduction from the artist, and sound clips drawn from their recordings. The tutorials show how a song could be recreated (approximately) in Sonic Pi, using this to introduce some coding and musical concepts. Experimentation is encouraged using the code snippets and sound samples provided in the pack.

The aim is to engage young people in computer coding through music-making while, at the same time, helping them learn some of the fundamentals of electronic music composition.

The project is funded by UKRI's Higher Education Innovation Funding scheme (2023–24).
