Research seminars

The Research Seminar Series features talks by leading researchers from this university and elsewhere. The series is aimed at postgraduate students and staff, but undergraduates and external visitors are welcome at every talk.


Friday Research Seminars, 12.00-1.00pm online. Each session will include a virtual presentation by an invited speaker followed by questions and discussion. For some events, materials may be shared ahead of time with links on this page.

The series is open to all staff and students of the University of Sheffield, as well as external visitors. Staff and PG students should enter via Blackboard > MUS Postgraduate Hub > Collaborate > Research Seminar Room. UGs and external visitors should email f.hield@sheffield.ac.uk to request the link.

2020-2021 Research Seminars

6/11/20 - Whose Ears? Cage, A King, and Humming - Dr Suk-Jun Kim, The University of Aberdeen.

Suk-Jun Kim studied theology at Yonsei University, South Korea, and Recording Engineering at OIART (Ontario Institute of Audio and Recording Technology). He earned a master’s degree in Music Technology at Northwestern University and a Ph.D. in composition at the University of Florida. Currently, Kim is Senior Lecturer in Electroacoustic Music and Sound Art at the University of Aberdeen, Scotland, and Director of PG Research in the School of Language, Literature, Music, and Visual Culture. As a composer and sound artist, Kim has won several international composition awards and attracted commissions, and he was a DAAD resident composer in 2009-2010. Kim has written two books, Hasla and Humming, and is now working on new projects in sound studies, including one that examines key aspects that have established the audience in the 21st century.

We often hear composers and sound artists say “Listen!” with an apostrophe. But what else would it mean to listen, if not an act of lending our ears to something that is not our own? In this talk, Suk-Jun Kim focusses on three instances in which we lend our ears, explored in his recent book Humming: John Cage with his Lecture on Nothing; A King from Italo Calvino’s short story, A King Listens; and humming, a vocalic act closest to being mute, or silence.

20/11/20 - Modelling the perception of emotion and meaning in music using probability theory - Professor Renee Timmers, The University of Sheffield.

"Perception of emotion and meaning in music is to a large extent probabilistic rather  than deterministic. Certain properties of music may increase the likelihood that a  particular emotion is perceived over another or a particular imagery or association  is evoked. What emotion or imagery is perceived also depends on contextual factors such as the apriori probability of emotions, listeners’ sensitivities and biases, 
and the distinctives of the properties within the musical context. In this presentation,  I will explore the use of Bayes’ rule to model the perception of emotion and meaning, and to capture the influence of these contextually shaping factors.

Considering emotion perception, according to Bayes’ rule, the posterior probability of perceiving an emotion given a musical property M is equal to the likelihood of observing the musical property if the hypothesis of that emotion were true, times the prior probability of that emotion (in the context of competing emotions). To develop this method, measures of the prior probability of emotions are required, as well as probability estimates of musical properties in emotional expressions. Analogously, the posterior probability of multimodal imagery given musical property M is equal to the likelihood of that musical property in the context of the hypothesised multimodal phenomenon, additionally taking into account the prior probability of the phenomenon and the frequency of occurrence of the musical property across multimodal phenomena. Finally, probability calculations can be used to examine relationships between emotion and meaning in music: what is the posterior probability of an emotion given a multimodal association, or, vice versa, what is the probability of a given multimodal imagery given an emotion?

Data from existing research articles are used to provide a proof of concept for these applications of Bayes’ rule to modelling the perception of emotion and meaning in music. Future directions for research are discussed, as well as the benefits and limitations of adopting a Bayesian approach to music cognition."
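As a rough, hypothetical illustration of the calculation described in this abstract (not taken from the talk itself), Bayes’ rule for emotion perception could be sketched in a few lines of Python. The emotion labels, prior probabilities and likelihood values below are invented purely for the example; only the structure of the computation follows the abstract.

# Hypothetical sketch of Bayes' rule for emotion perception in music.
# All numbers are invented for illustration; only the calculation
#   P(E | M) = P(M | E) * P(E) / sum over competing emotions E' of P(M | E') * P(E')
# follows the description in the abstract.

priors = {"happy": 0.5, "sad": 0.3, "tender": 0.2}       # P(E): prior probability of each emotion
likelihoods = {"happy": 0.7, "sad": 0.1, "tender": 0.3}  # P(M | E): probability of observing a musical
                                                          # property M (e.g. a fast tempo) given emotion E

evidence = sum(likelihoods[e] * priors[e] for e in priors)
posteriors = {e: likelihoods[e] * priors[e] / evidence for e in priors}

for emotion, p in posteriors.items():
    print(f"P({emotion} | M) = {p:.2f}")
# With these made-up numbers, "happy" is the most probable perceived emotion (about 0.80).

The analogous calculation for multimodal imagery, as described in the abstract, would simply replace the emotion hypotheses with multimodal phenomena.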

4/12/20 - The secret inner life of the piano: Cosmologies for piano and 3D electronics - Dr Aaron Einbond, City, University of London

Description

How does a listener know immediately when she or he walks into a room with a live grand piano instead of a recording? One reason is the complex interactions between the piano sound and the space that surrounds it. Artificial Intelligence (AI) research is ubiquitous, yet it often ignores the spatial presence of the live instrument and performer, even though research in the field of music perception points to the essential role of situated or embodied cognition in our listening experience. My composition Cosmologies for piano and three-dimensional electronics seeks to place the embodied presence of the instrument and its performer at the centre, using machine learning of audio features to decipher the intricate interdependencies of timbre and space that bring an instrument to life. The results explode the space inside the piano out to the space of the concert hall, creating a virtual reality (VR) environment for the ears and situating the listener inside the instrument to experience its secret inner life.

Speaker's bio

Aaron Einbond’s work explores the intersection of instrumental music, field recording, sound installation, and interactive technology. He released the portrait album Without Words with Ensemble Dal Niente on Carrier Records, and Cities with Yarn/Wire and Matilde Meireles on multi.modal/NMC Recordings. His awards include a Giga-Hertz Förderpreis, a Guggenheim Fellowship, and artistic research residencies at IRCAM and ZKM. He teaches music composition, sound, and technology at City, University of London.

29/1/21 - On the Singing Hologram: Miku, Love and Labour - Professor Nick Prior, University of Edinburgh

The advent of the performing hologram opens up significant questions around the fate of “liveness” in the digital age, blurring if not collapsing the distinctions between absence/presence, live/real and original/copy, as well as transforming well-worn ideas such as authenticity. Drawing on ethnographic fieldwork undertaken in Japan and the UK, this talk explores the case of the virtual idol Hatsune Miku, originally a marketing mascot for voice synthesis software but now touring globally as a representative agent of a new breed of virtual performers. As well as introducing the Miku media model (a relatively flat media ecology where fans are also Miku producers), the talk will offer some speculative thoughts on how a Miku performance “works” as an assemblage of love, labour and socio-technical affordances. Who or what is performing, how is liveness managed when the performer is pure code, and what does this tell us about the relations between live music, participatory cultures and virtuality?

2021 Dates to be confirmed

TBC

