A list of publications produced by MIMA.
We present AMI (Artificial Musical Intelligence), a deep neural network that can generate musical compositions for various musical instruments and different musical styles with a coherent long-term structure. AMI uses a state-of-the-art attention-based deep neural network architecture to discover patterns of musical structure, such as melody, chords, and rhythm, from tens of thousands of MIDI files. We encode music data in a way that is similar to reading a music score, which enables the model to better capture musical structure. Learning is done in an unsupervised manner, allowing exploitation of the large collections of MIDI files that are available on the internet. As an autoregressive model, AMI predicts one musical note at a time, conditioning not just on the last note but on a long sequence of notes (up to thousands) from previous time steps. Furthermore, we enhance the learning of musical structure by adding embeddings at different time scales. As a result, the model is able to maintain a coherent long-term structure and even occasionally transition to a different movement. Output examples can be heard at https://meddis.dcs.shef.ac.uk/melody/samples.
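The autoregressive generation loop the abstract describes can be sketched as follows. This is a minimal illustration only, not AMI's implementation: `predict_next_note` is a hypothetical stand-in for the paper's attention-based network (here a toy rule favouring nearby pitches), and the context-window size is an assumed parameter.

```python
import random

def predict_next_note(context, vocab_size=128):
    # Hypothetical stand-in for AMI's attention network: a toy
    # distribution that favours notes within an octave of the last pitch.
    last = context[-1]
    candidates = [n for n in range(vocab_size) if abs(n - last) <= 12]
    return random.choice(candidates)

def generate(seed, length, context_window=1024):
    # Autoregressive loop: each new note is conditioned on a long
    # window of previous notes, not just the single last note.
    notes = list(seed)
    for _ in range(length):
        context = notes[-context_window:]
        notes.append(predict_next_note(context))
    return notes

# Generate 32 notes from a single seed note (middle C, MIDI pitch 60).
melody = generate(seed=[60], length=32)
```

In the real model, the long context window is what allows coherent long-term structure to emerge, since each prediction can attend to patterns from thousands of earlier time steps.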
Ma, N., Brown, G. J., & Vecchiotti, P. (2021). AMI: Creating coherent musical composition with attention. In R. F. Cádiz (Ed.), Proceedings of the 2021 International Computer Music Conference (pp. 414–418). International Computer Music Association.
Download this paper via the ICMA or from White Rose Online.