Sheffield SIAM-IMA Conference 2021
Event details
Description
Welcome to the Sheffield SIAM-IMA Conference 2021! A chance to share ideas about pure and applied mathematics, statistics, operations research, and data science; to build job opportunities for mathematical scientists; to communicate the value of mathematical science in the workplace; and to facilitate connections between students, faculty, recruiters and managers. We are hosting this conference online using Zoom on July 27th and 28th 2021.
With this conference, we celebrate the recent formation of the Sheffield joint SIAM-IMA Student Chapter, and the IMA is kindly offering full one-year membership to student speakers. We aim to provide a friendly environment for early-career researchers to showcase their research. The event will feature four keynote talks given by academics and research mathematicians in industry, plus a number of shorter presentations, a Q&A about working in industry, and a poster showcase. There will be prizes for the best posters in both the undergraduate and postgraduate categories, plus opportunities to interact and socialise, including lunchtime games on Gather Town and a Pub Quiz to close the conference.
Poster showcase
We're also showcasing the work of postgraduate and undergraduate students in applied mathematics, statistics and other quantitative fields, with a group of independent academics awarding prizes for the best posters. You can view this year's entries at the poster showcase.
Conference schedule
Download schedule (PDF, 130 KB)
Tuesday July 27th
9.15–9.25 Opening remarks

Professor Nick Monk (Head of the School of Mathematics and Statistics, University of Sheffield) and Bryony Moody (President of the Sheffield SIAM-IMA Student Chapter) deliver their opening remarks for the conference.

9.25–9.30 Erica Tyson – Membership of the IMA

Erica Tyson (Institute of Mathematics and its Applications)
Abstract: The IMA is a professional body and learned society for mathematicians in the UK. Membership recognises your qualifications, supports your professional development, and helps you become part of the wider mathematical community and join us in promoting mathematics.
9.30–10.00 Dr Gregory Chaplain – Elastic Metasurfaces

Dr Gregory Chaplain (Imperial College London)
Abstract: Elastic waves guided along surfaces dominate applications in geophysics, ultrasonic inspection, mechanical vibration, and surface acoustic wave devices. Precise manipulation of surface Rayleigh waves, and their coupling with polarized body waves, presents a challenge that promises to unlock the flexibility in wave transport required for efficient energy harvesting and vibration mitigation devices. In this talk I will describe various designs of elastic metasurfaces, based around a graded array of rod resonators attached to an elastic substrate, that, together with critical insight from solid-state physics, allow us to mode-convert Rayleigh surface waves into bulk waves that form tunable beams.

10.00–11.00 Keynote 1: Dr Silvia Gazzola – Iterative Regularisation Methods for Large-Scale Inverse Problems

Dr Silvia Gazzola (University of Bath)
Abstract: Inverse problems are ubiquitous in many areas of Science and Engineering and, once discretized, they lead to ill-conditioned linear systems, often of huge dimensions: regularization consists of replacing the original system by a nearby problem with better numerical properties, in order to find a meaningful approximation of its solution. After briefly surveying some standard regularization methods, both iterative (such as many Krylov methods) and direct (such as the Tikhonov method), this talk will introduce a recent class of methods that merge an iterative and a direct approach to regularization. In particular, strategies for choosing the regularization parameter and the regularization matrix will be emphasized, eventually leading to the computation of approximate solutions of Tikhonov problems involving a regularization term expressed in some p-norms.
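As a minimal illustration of the direct (Tikhonov) regularisation surveyed above, the following numpy sketch solves an ill-conditioned toy problem; the Hilbert-matrix example, parameter values and helper name are illustrative choices, not Dr Gazzola's hybrid methods:

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Solve min_x ||Ax - b||^2 + lam^2 ||Lx||^2 via the normal equations."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)  # standard-form Tikhonov (L = identity)
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

# An ill-conditioned toy problem: a Hilbert matrix with noisy data.
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)   # unregularised: dominated by noise
x_reg = tikhonov(A, b, lam=1e-3)  # regularised solution
```

Even a tiny amount of noise destroys the naive solution here, while the regularised one stays close to the truth; the iterative and hybrid methods of the talk address the same trade-off at much larger scale.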

11.15–11.45 Dr Malena Sabaté Landman – Flexible Krylov methods for inverse problems

Dr Malena Sabaté Landman (Cambridge University)
Abstract: Krylov subspace methods are iterative solvers for large-scale linear inverse problems, such as those arising in image deblurring and computed tomography. Recently, flexible Krylov methods in combination with other standard techniques have been used to efficiently solve regularized problems that enforce sparsity in the solution. In this talk I will give a brief introduction to flexible Krylov methods and their interplay with Tikhonov regularization. After showing some properties that hold (or don't hold) for this class of methods, I will give a couple of visual examples to show 'how' these methods work, and I will try to convince you that these are very powerful tools with great potential.
11.45–12.00 Tommy Moorcroft – The Ablowitz–Ladik equation, discrete ring cavity system and pattern formation

Tommy Moorcroft (University of Salford)
Abstract: In physics, the discrete nonlinear Schrödinger (dNLS) equation is typically used when modelling the propagation of waves in periodic optical systems. The pitfall of the dNLS for modelling is that it is non-integrable and does not give exact solutions. However, there is an exactly integrable counterpart to the dNLS known as the Ablowitz–Ladik (AL) equation. The integrability of this equation comes at the cost of a straightforward physical interpretation of the nonlinear component of the equation. Despite their differences, both equations share properties in the continuum limit. Here the AL equation will be explored in one spatial dimension, considering simple and perturbed plane-wave solutions, along with known soliton solutions. A discrete ring cavity system will be implemented via the application of mean field theory, considering uniform and perturbed equilibrium states of the cavity. Lastly, analysis of the discrete ring cavity in two spatial dimensions will be briefly explored and its subsequent limitations discussed.

12.00–12.30 Dr Robert Shaw – Finite basis sets in quantum chemistry

Dr Robert Shaw (University of Sheffield)
Abstract: Quantum chemistry is concerned with solving the Schrödinger equation for molecules of varying sizes, ranging from single atoms up to proteins. Standard discretisation techniques, as used, for example, in density functional theory, expand the solutions in two finite bases. The eigenfunctions (or wavefunctions) are first written in terms of antisymmetrized products of one-particle functions, with these functions expanded as a linear combination of, usually, either Gaussians or plane waves. We investigate the best processes for finding the most compact such expansions, and answer fundamental questions about whether these basis sets will converge systematically to the exact solution. Understanding these properties leads to considerable computational efficiencies, and improved accuracy through extrapolation schemes, for systems that are notoriously high-dimensional.
12.30–13.00 Lunch – games on Gather Town

13.00–13.30 Erin Russell – Inverse Eigenvalue Problems for Special Stochastic Matrices

Erin Russell (University of Bristol)
Abstract: We investigate various special matrices, such as stochastic matrices and Laplacians, all of which bear strong relevance to graph theory and mathematical models for networks. We construct these special matrices from a prescribed spectrum of eigenvalues (this is known as an inverse eigenvalue problem or IEP). The main result of this work is a solution to a subset of a problem known as the inverse eigenvalue problem for symmetric doubly stochastic matrices (SDIEP); namely, we restrict SDIEP to circulant matrices. We also review some recent results regarding SDIEP and the IEP for Laplacians. We then present examples of graphs resulting from the matrices constructed from a spectrum. In the final section, we discuss some applications and the potential future direction of this research. One such application is the use of Cheeger's inequality and IEP results to impose restrictions on the conductance of randomly generated graphs by prescribing spectra with a certain Fiedler eigenvalue.
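The circulant restriction makes the construction above especially concrete: the first row of a circulant matrix is the inverse discrete Fourier transform of its spectrum. The sketch below uses my own choice of spectrum, not the construction from the talk:

```python
import numpy as np

def circulant_from_spectrum(spectrum):
    """Build a circulant matrix whose eigenvalues are the given spectrum.

    For a symmetric doubly stochastic result the spectrum must be real,
    contain 1 in position 0, satisfy spectrum[k] == spectrum[n-k], and
    have a non-negative inverse DFT.
    """
    c = np.fft.ifft(np.asarray(spectrum, dtype=complex)).real  # first row
    n = len(c)
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

# Example spectrum {1, 0.2, 0, 0.2}: symmetric, so the matrix is symmetric.
M = circulant_from_spectrum([1.0, 0.2, 0.0, 0.2])
```

Here M comes out symmetric with non-negative entries and unit row and column sums, i.e. a symmetric doubly stochastic matrix realising the prescribed spectrum.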
13.30–14.00 Alex Trenam – Topology Optimisation of Structural Electrolyte Microstructures Using a Topological Sensitivity and Level-Set Approach

Alex Trenam (University of Bath)
Abstract: Structural batteries serve dual purposes as both load-bearing components and energy storage devices. Whilst potentially offering mass and volume savings when compared with monofunctional systems, they often exhibit competing properties. In particular, the ionic conductivity of the electrolyte tends to decrease as stiffness increases, and vice versa. To make structural batteries a viable alternative to traditional systems, both properties should be maximised. One approach being investigated is to use an electrolyte with separate solid and liquid phases to promote high stiffness and ionic conductivity, respectively. Good multifunctional results have been achieved by backfilling a porous solid matrix with liquid electrolyte, and existing studies have looked into optimising the topology of the solid phase using the solid isotropic material with penalisation (SIMP) method to maximise performance. This talk will present ongoing work towards developing an alternative topology optimisation algorithm for the solid phase using a combination of the topological derivative and a level-set method.

14.15–14.30 Rose Archer – Emulating the Last Eurasian Ice Sheet

Rose Archer (University of Sheffield)
Abstract: Understanding the mechanics of ice sheets, and how they may react to different climate scenarios, is an increasingly important task in learning more about future sea-level rise. Numerical models are required to forecast the potential mass loss of Greenland and Antarctica in our warming world, and these models require validation against observations. However, the direct observational record of the contemporary ice sheets is limited in time to the satellite era. Palaeo-ice sheets have left behind a wealth of physical evidence regarding their past behaviour, but this data has historically been underutilised in improving numerical models of ice sheets. Furthermore, numerical models of palaeo-ice sheets can be computationally expensive, due to the large areas and time scales involved. An emulator is a statistical tool that can be used to represent the output of a numerical model, given input parameters. In order to train an emulator, the success of a model run in replicating observations needs to be quantified, yet tools for model-data comparison are currently underdeveloped. This project will improve techniques for comparing model output to the observed data on palaeo-ice sheets. There are three main sources of evidence available, indicating the flow direction, ice margin extent and ice-free timings. Current tools for such model-data comparisons are statistically primitive and unsuitable for emulation due to their binary "fit" or "not fit" output, so here I present an improved tool for comparing the observed flow direction to the model runs.
14.30–15.00 Bryony Moody – The evolution of radiocarbon dating: Why model choice matters

Bryony Moody (University of Sheffield)
Abstract: When Willard Libby discovered radiocarbon dating in the late 1940s, it was hailed as a revolution in archaeological dating. But this wasn't the end of the story: what followed was decades of work in statistical modelling to obtain the most accurate calendar date estimates possible for a given sample of organic matter. The presentation will follow this evolution, with a particular focus on the pivotal role that Bayesian inference played in it. The elicitation of prior knowledge from archaeologists will be explored, alongside the implications of not considering the impact of a given joint posterior distribution on the marginal posterior distributions of parameters that we wish to estimate.

15.00–16.00 Keynote 2: Dr Aretha Teckentrup – Convergence and Robustness of Gaussian Process Regression

Dr Aretha Teckentrup (University of Edinburgh)
Abstract: We are interested in the task of estimating an unknown function from data, given as a set of point evaluations. In this context, Gaussian process regression is often used as a Bayesian inference procedure, and we are interested in the convergence as the number of data points goes to infinity. Hyperparameters appearing in the mean and covariance structure of the Gaussian process prior, such as the smoothness of the function and typical length scales, are often unknown and learnt from the data, along with the posterior mean and covariance. We work in the framework of empirical Bayes, where a point estimate of the hyperparameters is computed using the data, and then used within the standard Gaussian process prior-to-posterior update. Using results from scattered data approximation, we provide a convergence analysis of the method applied to a fixed, unknown function of interest.
[1] A. L. Teckentrup. Convergence of Gaussian process regression with estimated hyperparameters and applications in Bayesian inverse problems. SIAM/ASA Journal on Uncertainty Quantification, 8(4), pp. 1310–1337, 2020.
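The empirical Bayes workflow described in the abstract, estimating a hyperparameter from the data and then plugging the point estimate into the standard prior-to-posterior update, can be sketched as follows. The squared-exponential kernel, the grid search, the known noise level and the test function are all illustrative assumptions, not details from the talk:

```python
import numpy as np

def sqexp(X1, X2, ell):
    """Squared-exponential kernel with length scale ell."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def log_marginal(y, K, sigma2):
    """Log marginal likelihood of data y under GP prior K + noise sigma2."""
    n = len(y)
    L = np.linalg.cholesky(K + sigma2 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * n * np.log(2 * np.pi)

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 1, 30))
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(30)

# Empirical Bayes: pick the length scale maximising the marginal likelihood
# (noise variance 0.01 assumed known here for simplicity).
ells = np.linspace(0.02, 0.5, 25)
best = max(ells, key=lambda e: log_marginal(y, sqexp(X, X, e), 0.01))

# Plug the point estimate into the usual prior-to-posterior update.
Xs = np.linspace(0, 1, 100)
K = sqexp(X, X, best) + 0.01 * np.eye(30)
mean = sqexp(Xs, X, best) @ np.linalg.solve(K, y)
```

The convergence analysis in the talk asks what happens to such plug-in posteriors as the number of data points grows.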
16.05–16.35 Victoria Sánchez Muñoz – Nash Equilibria in certain two-choice multi-player games played on the ladder graph

Victoria Sánchez Muñoz (NUI Galway)
Abstract: In this research we compute analytically the number of Nash equilibria (NE) for a two-choice game played on a ladder graph and a circular ladder with 2n players, identified with the vertices of the graph. We do not fix the payoff parameters of the underlying two-player game, except for the requirement that a NE occurs if the players choose opposite strategies (an anti-coordination game). The results show that, for both the ladder and the circular ladder, the number of NE, NE(2n), grows exponentially with (half) the number of players n, as Cɸ^{n}, where ɸ is the golden ratio and C is a constant. The constants for the ladder and circular ladder obey C_{circ} > C_{ladder}. In addition, the value of the scaling factor C_{ladder} depends on the payoff parameters. However, this is no longer true for the circular ladder (a 3-regular graph), where C_{circ} is constant, which might suggest that the topology of the graph indeed plays an important role in setting the number of NE.
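For small ladders, pure Nash equilibria can also be counted by brute force. The sketch below is hypothetical: it fixes one particular anti-coordination payoff (each player's payoff is the number of neighbours playing the opposite strategy), whereas the talk keeps the payoff parameters general:

```python
from itertools import product

def ladder_nash_count(n):
    """Count pure Nash equilibria of a pure anti-coordination game on a
    ladder graph with 2n players (n rungs, two rails)."""
    # Vertices: (rung i, side s); edges: rungs plus the two rails.
    edges = [((i, 0), (i, 1)) for i in range(n)]
    edges += [((i, s), (i + 1, s)) for i in range(n - 1) for s in (0, 1)]
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    verts = sorted(nbrs)

    count = 0
    for strat in product((0, 1), repeat=2 * n):
        s = dict(zip(verts, strat))
        # NE: no player strictly gains by switching, i.e. every player
        # already disagrees with at least half of their neighbours.
        if all(2 * sum(s[v] != s[u] for v in nbrs[u]) >= len(nbrs[u])
               for u in verts):
            count += 1
    return count

counts = [ladder_nash_count(n) for n in range(1, 7)]
```

Exhaustive counts like these are useful as a sanity check on the analytic Cɸ^n growth established in the talk, though for one fixed payoff instance only.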
16.35–16.55 Abhimanyu Pallavi Sudhir – A mathematical definition of property rights

Abhimanyu Pallavi Sudhir (Imperial College London)
Abstract: I will give a brief introduction to the field of theoretical economics and then discuss my recent work in the area, in particular on the mathematical definition of property rights in a generalized n-person game (which surprisingly involves some category theory and topology!).
17.00–18.00 Careers in Industry for mathematicians and statisticians (panel and Q&A)

Dr Matthew Allcock (EDF) hosts a panel and Q&A about working in industry, with a focus on opportunities and challenges for mathematicians.
Wednesday July 28th
9.00–9.15 Agnesa Smajli – International Trade and Economic Growth in the Balkans

Agnesa Smajli (Sheffield City College)
Abstract: The level of international trade plays a crucial role in the economy; however, its effect is still open to debate in the reviewed literature. Besides international trade, other macroeconomic factors also have an impact on economic growth. The main objective of the current research is to examine the relationship between international trade and economic growth in the Balkans over the period 2000–2019. In addition, the relationship of economic growth with foreign direct investment, gross fixed capital formation, and unemployment was investigated, in order to check how those variables impact economic growth in the region. This was conducted through panel analysis with unbalanced data for eleven Balkan countries: Albania, Bulgaria, Bosnia and Herzegovina, Croatia, Greece, Montenegro, North Macedonia, Kosovo, Romania, Serbia and Slovenia. The results clearly depict that trade, foreign direct investment and gross fixed capital formation have a positive, statistically significant effect on economic growth in the Balkans, while the effect of unemployment on GDP is negative and statistically significant. These results are in line with some studies of how those variables affect economic growth, but they also contradict a few other investigations.
9.15–9.30 Millie Tobin – Multifractal Analysis for Financial Time Series

Millie Tobin (Manchester Metropolitan University)
Abstract: This investigation focuses on the financial markets: a complex system subject to much nonlinearity. Multifractal theory provides a method for comprehending and examining such complexity mathematically, where this had previously been impossible, or rarely and unfavourably explored. A fractal approach to financial analysis ignores modern finance's intuitive deletion of outliers from forecasting analyses, assuming instead that the frequently calamitous long tails present in daily price return charts act as quantifiable indicators of future returns. The particular manner of the analysis used here was selected due to the seminal 2018 paper, 'Dynamical Variety of Shapes in Financial Multifractality' by S. Drożdż et al., which performed a rolling-window calculation of multifractal spectra for U.S. index prices. Aware that a similar analysis had not yet been performed on other world indices, I decided to perform a similar analysis of the FTSE 100 and the Deutscher Aktienindex over a 28-year period, with a 14-year rolling window, graduated by 20 days (assumed trading time). The method used MATLAB to perform a discrete wavelet transform on each 3540-point data set. The findings are that definite benefits to knowledge surrounding market behaviour can be gained with multifractal analysis; the definitive f(alpha) curves show major structural changes in alignment with major economic events in the timeline.
9.30–9.45 Lorena Radavci – Modelling and Forecasting Volatility in Stock Markets using GARCH Models: A Comparison between Emerging and Developed Economies

Lorena Radavci (Sheffield City College)
Abstract: The aim of this research is to examine the ability of different GARCH-class models, including GARCH, EGARCH, GJR-GARCH, APARCH and CGARCH, to model the stock return volatility of eight markets: Hungary (BUX), Poland (WIG), Slovakia (SAX), Romania (BET XT), France (CAC 40), UK (FTSE 100), Germany (DAX), and Switzerland (SMI), and to evaluate the predictive ability of GARCH-class models in terms of in-sample and out-of-sample forecasting accuracy for the selected stock markets. From the empirical findings, the best model for the in-sample period is the APARCH model for all indices except the Romanian stock market index, where the CGARCH model outperforms the others, both under the Student-t distribution. Different results are found for the out-of-sample evaluation: for the Polish and Slovakian stock markets, the best model to capture and forecast the volatility of stock returns is the APARCH model, while for the Romanian and Hungarian stock markets, the component GARCH and symmetric GARCH models have the best forecasting performance. In developed markets, the component GARCH and GJR-GARCH outperform other models. All stock markets are characterized by volatility clustering, asymmetric effects, heavy tails, and leptokurtosis. All stock markets are highly volatile except the Slovakian market, and the developed economies exhibit a higher leverage effect than the emerging ones. Bad news increases volatility more in all stock markets, with one exception: SAX returns (Slovakia). In the Slovakian stock market, investors expect booms rather than price declines; in other words, the effect of positive news on volatility is greater than the effect of negative news of the same magnitude.
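For readers unfamiliar with the GARCH family, the basic GARCH(1,1) recursion can be simulated in a few lines; the parameter values below are illustrative and unrelated to the markets studied in the talk:

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate returns r_t = sigma_t * z_t with the GARCH(1,1) recursion
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    # Start at the unconditional variance omega / (1 - alpha - beta).
    sig2 = np.full(n, omega / (1 - alpha - beta))
    for t in range(1, n):
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
        r[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    return r, sig2

r, sig2 = simulate_garch11(5000, omega=0.05, alpha=0.1, beta=0.85)

# Volatility clustering shows up as positive lag-1 autocorrelation in r_t^2.
sq = r**2 - np.mean(r**2)
acf1 = np.mean(sq[1:] * sq[:-1]) / np.mean(sq**2)
```

The asymmetric variants compared in the talk (EGARCH, GJR-GARCH, APARCH) modify this recursion so that negative and positive shocks affect volatility differently.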
10.00–11.00 Keynote 3: Dr Konstantinos Zygalakis – Bayesian inverse problems, prior modelling and algorithms for posterior sampling

Dr Konstantinos Zygalakis (University of Edinburgh)
Abstract: Bayesian inverse problems provide a coherent mathematical and algorithmic framework that enables researchers to combine mathematical models with data. The ability to solve such inverse problems depends crucially on the efficient calculation of quantities relating to the posterior distribution, which itself requires the solution of high-dimensional optimization and sampling problems. In this talk, we will study different algorithms for efficient sampling from the posterior distribution under two different prior modelling paradigms. In the first one, we use specific non-smooth functions, such as the total variation norm, to model the prior. The main computational challenge in this case is the non-smoothness of the prior, which leads to "stiffness" in the corresponding stochastic differential equations that need to be discretised to perform sampling. We address this issue by using tailored stochastic numerical integrators, known as stochastic orthogonal Runge–Kutta Chebyshev (SROCK) methods, and show that the corresponding algorithms are able to outperform the current state-of-the-art methods. In the second modelling paradigm, the prior knowledge available is given in the form of training examples, and we use machine learning techniques to learn an analytic representation of the prior. The main computational challenge in this case is that the corresponding posterior distribution becomes multimodal, which results in a challenging sampling problem, since standard Markov chain Monte Carlo (MCMC) methods can get stuck in different local maxima of the posterior distribution. We address this issue by using specifically designed MCMC methods and exhibit numerically that this "data-driven" approach improves performance in a number of different imaging tasks, such as image denoising and image deblurring.
11.00–11.30 Savvas Melidonis – MCMC methods for Image Restoration

Savvas Melidonis (HeriotWatt University)
Abstract: Imaging inverse problems concern the restoration of images after corruption such as noise, blur or missing regions (inpainting). Mathematically, they can be treated as optimization problems, and several methodologies have been developed to deal with them. A main class of such methodologies efficiently combines Bayesian inference and convex optimization. Specifically, a Bayesian model over the image of interest is usually defined, and the posterior mode is then calculated through convex optimization techniques. This strategy is known as maximum-a-posteriori (MAP) estimation and allows for precise image restoration. However, even if such Bayesian computation techniques are efficient and scalable, they do not allow for important statistical analyses such as uncertainty quantification or model selection. In this context, we will describe another Bayesian computation technique, namely a Markov chain Monte Carlo (MCMC) methodology, which aims to solve highly ill-posed imaging problems and allows for more complex analyses. We consider problems encountered in quantum-enhanced imaging applications, where the images are corrupted by Poisson, binomial or geometric random noise processes. In a Bayesian context, such random noise processes are difficult to tackle since their respective likelihoods are not Lipschitz differentiable and involve non-negativity constraints. To deal with them, we use and adapt existing MCMC methodologies which satisfy favourable convergence properties and allow for the construction of efficient Markov chains. This strategy can then lead to highly efficient Bayesian estimators of the ground truth image, as well as to Bayesian statistical analyses such as uncertainty quantification or model selection.
11.30–11.45 Sam Armstrong – An Agent-Based Model of Food Web Evolution

Sam Armstrong (University of Sheffield)
Abstract: The way in which a food web evolves to create such diversity in an ecosystem has long been a key area of research for ecological scientists. Simulating the evolution of such food webs can help to unlock the complex dynamics that lead to this diversity and display possible causes and implications. The work of Norling (2007) explored the use of an agent-based model for such an application. However, the food webs produced using this model did not contain a diverse array of species, and left many potential pathways for further development of the model. The aim of this project was to build upon the model from Norling (2007) to further explore the potential for the application of agent-based modelling to evolutionary simulation. Three models were iteratively developed that reimplemented the original work and explored potential improvements that could allow realistic food webs and intricate population dynamics to emerge. The initial model attempted to reimplement the model created in the original work, with the second model then extending this by adding geographic diversity, alongside other improvements to model mechanics. The final model utilised a different approach to evolutionary simulation by introducing the concept of heterogeneous behaviour, and also investigated the application of unsupervised learning techniques for clustering agents into species based upon their characteristics. The project ultimately finds that food webs with complexity similar to that observable in the real world are attainable through the agent-based simulation of an ecosystem.
12.00–12.30 Dr Valeria Giunta – Aggregation phenomena and transition to chaotic dynamics in a chemotaxis model of acute inflammation

Dr Valeria Giunta (University of Sheffield)
Abstract: Inflammation is the body's natural response to outside threats. Although it is a protective mechanism, a derangement of this biological process can impair physiological functions, leading to a significant number of severe diseases. The complex dynamics of the inflammatory process are not yet fully known, and a thorough knowledge of these mechanisms could be the key to controlling the onset and evolution of inflammatory diseases. In this talk, I shall present a study of a reaction-diffusion-chemotaxis model aiming to explore the mechanisms of the inflammatory response. It is a recently proposed model, which captures key mechanisms of inflammation. After a brief presentation of the system, in which I will focus on the most relevant modelling aspects, I shall present a study of the onset of both stationary and oscillating primary instabilities. I shall show that, using numerical values of the parameters taken from the experimental literature, the resulting patterns are able to reproduce different clinical scenarios, such as ring-shaped skin eruptions. I will also focus on some relevant mathematical properties, such as the emergence of oscillatory and irregular spatiotemporal solutions.
12.30–13.00 Prerna Singh – Resist the disease or tolerate it?

Prerna Singh (University of Sheffield)
Abstract: Tolerance and resistance are two modes of defense mechanisms used by hosts when faced with parasites. Here we assume tolerance reduces the infection-induced mortality rate and resistance reduces the susceptibility of getting infected. Importantly, a negative association between these two strategies has often been found experimentally. We study the simultaneous evolution of resistance and tolerance in a host population where they are related by such a trade-off. Our focus is on predicting which of the two strategies is favoured under various epidemiological and ecological conditions.
13.00–13.15 Chloe Schooling – Tensor electrical impedance myography of the tongue in amyotrophic lateral sclerosis identifies the impedance signature of disease progression

Chloe Schooling (University of Sheffield)
Abstract: Objectives: Electrical impedance myography (EIM) is a promising biomarker for amyotrophic lateral sclerosis (ALS). A key issue is how best to utilise high-dimensional, multi-frequency data to fully characterise muscle health and monitor progression of disease.
Methods: Muscle volume conduction properties were obtained from EIM recordings across three electrode configurations and 14 frequencies, and non-negative tensor factorisation (tensor EIM) was applied. Data were collected over a maximum of 9 months in 28 patients with ALS and 17 controls. Tensor EIM was evaluated against EMG data, the amyotrophic lateral sclerosis functional rating scale (ALSFRS) bulbar subscore, tongue strength and an overall bulbar disease burden score.
Results: EIM spectra with differing spectral shapes were seen in association with EMG findings of acute and chronic denervation. Tensor EIM identified these shifts in longitudinal measurements in patients with ALS, with an increasing trend towards the spectral pattern associated with chronic denervation. Tensor EIM increased within three months (p<0.01) and continued to do so over the 9-month duration (p<0.001). In a hypothetical clinical trial scenario, tensor EIM required fewer participants (n=15) than single-frequency measures (n range 28–189) or the ALSFRS bulbar subscore (n=54).
Conclusions: Tensor EIM captures the effects of denervation/reinnervation and provides a sensitive measure of disease progression over time.
Significance: There is currently a lack of objective biomarkers for the assessment of ALS bulbar disease. Tensor EIM enhances the biomarker potential of EIM and can improve bulbar symptom monitoring in clinical trials.
13.15–13.55 Lunch – games on Gather Town
13.55–14.00 Nada Baessa – Visualising Complex Networks

Nada Baessa (King’s College London)
Abstract: There are many instances when large complex networks arising from biological phenomena, social structures, technological interactions or other data sets need to be visualised in order to draw useful conclusions. However, the nature of a large complex network makes producing such a visualisation difficult, and requires a balance to be struck between simplification and utility. This presentation discusses the initial research carried out to investigate methods of visualising a large ecological network and the various potential solutions for creating a useful visual tool from an information-heavy data set; this includes separation into strongly connected subgraphs and finding cyclical and tree structures, as well as more creative solutions such as deploying 3D visualisation or clustering using machine learning.
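One of the simplification steps mentioned, separation into strongly connected subgraphs, can be sketched with Kosaraju's two-pass depth-first search; the toy food web below is my own example, not data from the project:

```python
from collections import defaultdict

def strongly_connected_components(edges):
    """Kosaraju's algorithm: one DFS pass on the graph records finish
    order; a second pass on the reversed graph, taken in decreasing
    finish order, peels off the strongly connected components."""
    graph, rev = defaultdict(list), defaultdict(list)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)
        nodes.update((u, v))

    seen, order = set(), []
    def dfs1(u):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)  # record on finish
    for u in nodes:
        if u not in seen:
            dfs1(u)

    seen.clear()
    comps = []
    def dfs2(u, comp):
        seen.add(u)
        comp.append(u)
        for v in rev[u]:
            if v not in seen:
                dfs2(v, comp)
    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs2(u, comp)
            comps.append(sorted(comp))
    return comps

# Toy directed "who eats whom" web: 1 -> 2 -> 3 -> 1 form a cycle, 4 hangs off it.
comps = strongly_connected_components([(1, 2), (2, 3), (3, 1), (3, 4)])
```

Collapsing each component to a single node turns the network into a directed acyclic graph, which is far easier to lay out visually.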
14.00–14.15 Praveen Shahani – Mathematics of Reinforcement Learning and its Applications

Praveen Shahani (Amity University)
Abstract: Reinforcement learning (RL) is an agent-based algorithm, where agents interact with the environment and learn an optimal set of actions in order to maximize the reward. It is widely used in different domains. This study explores how reinforcement learning has evolved, covering the history and evolution of ideas from average returns to policy search and the contextual bandits of immediate RL. The objective of this study is twofold, covering both the applied and the abstract parts of RL. Applications of RL are found in inventory management, robotics and finance; this study focuses on photonic reservoir computing.
14.15–14.45 Farhana Akond Pramy – A computer-assisted proof of dynamo growth in the stretch-fold-shear map

Farhana Akond Pramy (Open University)
Abstract: We have worked on a functional linear operator called the Stretch-Fold-Shear (SFS) operator [Gilbert 2002], which arises from a model of dynamo growth. The SFS operator S, or Sα, on a function f(x) is defined by
S(f)(x) = e^[iα(x−1)/2] f((x−1)/2) − e^[iα(1−x)/2] f((1−x)/2), where α ≥ 0 is the shear parameter. For α = 0, S takes the simpler form S(f)(x) = f((x−1)/2) − f((1−x)/2).
The existence of an eigenvalue of this operator of magnitude greater than one ensures dynamo growth, and my work is on proving such existence.
The SFS operator, S, acts on a function space, ℱ, of complex-valued functions of a real variable, x, on some interval containing [−1,1] in ℝ. The operator S is compact on Banach spaces of functions analytic on a disc D with finite L1 norm, and therefore has a spectrum consisting of discrete eigenvalues together with 0. When the shear parameter is zero, the spectrum of S can be determined exactly, but this may not be possible when the shear parameter is greater than zero. In this work, the eigenvalue–eigenfunction pairs of the SFS operator were first computed using computer algebra software for zero shear parameter. Then, using these values, numerical approximations of eigenvalue–eigenfunction pairs were calculated for incrementally increasing shear parameters. Finally, a computer-assisted proof is used to establish a rigorous bound, within a closed ball of functions around each approximation, by implementing interval arithmetic in Julia.
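The abstract notes that the spectrum at zero shear can be determined exactly. As a rough numerical illustration of why (my own sketch, not the authors' computer-assisted method): at α = 0 the operator sends x^n to ((x−1)/2)^n − ((1−x)/2)^n, so its matrix on a truncated monomial basis is upper triangular and the leading eigenvalues can simply be read off:

```python
import numpy as np
from math import comb

# Matrix of the alpha = 0 operator S(f)(x) = f((x-1)/2) - f((1-x)/2) on the
# truncated monomial basis {1, x, ..., x^(N-1)}: column n holds the monomial
# coefficients of ((x-1)/2)^n - ((1-x)/2)^n.
N = 12
M = np.zeros((N, N))
for n in range(N):
    for k in range(n + 1):
        M[k, n] = (1 - (-1) ** n) * comb(n, k) * (-1) ** (n - k) / 2 ** n

# M is upper triangular, so its eigenvalues sit on the diagonal:
# (1 - (-1)^n) / 2^n, i.e. 1, 1/4, 1/16, ... for odd n and 0 for even n.
eigs = sorted(abs(np.linalg.eigvals(M)), reverse=True)
print(eigs[:3])  # 1, 1/4, 1/16
```

In this toy computation no eigenvalue exceeds magnitude 1 at zero shear, which is why the interesting question is what happens as α grows.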
 15.00–16.00 Keynote 4: Dr Matthew Allcock – Extreme statistics: protecting nuclear power stations from natural disasters

Dr Matthew Allcock (EDF)
Abstract: Safety is an overriding priority at EDF’s nuclear power stations. This includes protection from natural hazards such as flooding, earthquakes, and space weather. Space weather occurs when solar flares and other explosive phenomena on the Sun disturb the Earth’s magnetic field and send cosmic rays into the atmosphere, which can damage electrical equipment. At EDF, we use a variety of mathematical and statistical modelling techniques, such as extreme value analysis, to characterise and mitigate these natural hazards. Extreme value analysis involves fitting a statistical model to the extreme deviations from the mean of a dataset. I will demonstrate how we apply extreme value analysis to space weather data from subatomic particle detectors in Scotland, tree rings in Japan, and ice cores in the Antarctic.
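Extreme value analysis of the kind described can be sketched with a block-maxima Gumbel fit. This is a hedged toy example on synthetic data, not EDF's models or data; the method-of-moments estimates are a deliberately simple stand-in for maximum likelihood:

```python
import math
import random
import statistics

# Hypothetical daily measurements grouped into 50 "years" of 365 days.
random.seed(0)
years = [[random.gauss(100, 15) for _ in range(365)] for _ in range(50)]
maxima = [max(year) for year in years]  # one annual maximum per year

# Method-of-moments Gumbel fit: scale = s*sqrt(6)/pi, loc = mean - gamma*scale.
EULER_GAMMA = 0.5772156649015329
scale = statistics.stdev(maxima) * math.sqrt(6) / math.pi
loc = statistics.mean(maxima) - EULER_GAMMA * scale

def return_level(n_years):
    """Level exceeded on average once every n_years (Gumbel quantile)."""
    p = 1 - 1 / n_years
    return loc - scale * math.log(-math.log(p))

print(round(return_level(100), 1))  # estimated 1-in-100-year level
```

The fitted tail lets one extrapolate beyond the observed record, which is exactly what makes extreme value analysis useful for rare-hazard design levels.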
 16.00–16.30 Dr Jude Ejeh – Mathematical Optimisation Approaches for Energy Systems

Dr Jude Ejeh (University of Sheffield)
Abstract: Through the 20th century, mathematical optimisation grew into an important interdisciplinary area combining mathematics, computer science, engineering and management science. This is largely because the decision-making process in many industries has transitioned from approaches based on trial-and-error and experience to a more scientific, evidence-based approach. Combined with the rapid growth in optimisation techniques, tools and computer technology, this has made the integration of such mathematical approaches into everyday applications inevitable. In this talk, we explore some applications of mathematical optimisation techniques to energy systems, specifically the behind-the-meter operation of community electrical energy storage systems and the cost-benefit analysis of electric vehicle (EV) adoption in the transition to a net-zero economy. We will highlight key considerations of our proposed mathematical optimisation models for these applications, the benefits to end-users obtained from the results (economic, environmental, etc.), and the drawbacks and possible areas for improvement.
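As a hedged toy version of behind-the-meter storage operation (hypothetical prices and a deliberately tiny model, not the optimisation models proposed in the talk), a battery that buys low and sells high can be scheduled with a small dynamic programme:

```python
# Hypothetical hourly prices (currency/MWh) and a small discretised battery.
PRICES = [30, 25, 20, 45, 60, 55]
CAPACITY = 4   # state-of-charge levels 0..4 (MWh)
RATE = 1       # at most 1 MWh charged or discharged per hour

def best_profit(hour=0, soc=0, memo=None):
    """Maximum profit from `hour` onward, starting at state of charge `soc`."""
    if memo is None:
        memo = {}
    if hour == len(PRICES):
        return 0
    key = (hour, soc)
    if key not in memo:
        options = [best_profit(hour + 1, soc, memo)]              # idle
        if soc + RATE <= CAPACITY:                                # charge (buy)
            options.append(-PRICES[hour] + best_profit(hour + 1, soc + RATE, memo))
        if soc - RATE >= 0:                                       # discharge (sell)
            options.append(PRICES[hour] + best_profit(hour + 1, soc - RATE, memo))
        memo[key] = max(options)
    return memo[key]

print(best_profit())  # 85: buy in the three cheap hours, sell in the three dear ones
```

Real community-storage models add efficiencies, degradation, network tariffs and uncertainty, which is why they are usually posed as mixed-integer or stochastic programmes rather than a toy recursion like this.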
 16.30–17.00 Raphael Fortulan – Nonlinear Stability Analysis in Power Systems with Renewable Energy Sources

Raphael Fortulan (Sheffield Hallam University)
Abstract: An uninterrupted supply of electrical power is now a basic necessity of any modern society, and ensuring the stable and continuous operation of power systems is essential. Power systems are, however, among the most complex systems ever created. In this short talk, we will discuss how nonlinear stability analysis can be applied to study these large dynamic systems. In addition, we will show how we are incorporating uncertainties into our analysis to deal with the growing presence of intermittent renewable energy sources.
 17.00–17.30 Yang Zhou (Tina) – Deep Learning models in confronting ADAPT and satellite observations

Yang Zhou (University of Bath)
Abstract: In this presentation I describe how I use deep learning tools and numerical methods to improve the accuracy of space weather prediction models. An important intermediate model in the forecasting chain is the Wang-Sheeley-Arge (WSA) model. WSA is initialised using synoptic maps based on magnetogram observations, but these maps often have large errors. This is especially apparent in the representation of coronal holes (CHs): CHs are not directly observed in the synoptic maps but are the result of a numerical minimisation algorithm, and any large uncertainty in their description is propagated through WSA. One way to address this uncertainty is the ensemble method. The UK Met Office already uses ADAPT (Air Force Data Assimilative Photospheric Flux Transport) operationally, which provides 12 different ensemble realisations in WSA. However, these 12 ensemble members give different realisations of the CHs, so it would be useful to identify which members represent them best by comparing against observations. I have developed a model to register satellite images and ADAPT-WSA ensemble outputs. The satellite images are first processed with the auto-segmentation software CHIMERA (Coronal Hole Identification via Multi-thermal Emission Recognition Algorithm), which gives hourly coronal hole boundary segmentations from space-borne observations. These observations are projected onto the same coordinates, allowing a like-for-like comparison of ADAPT with the satellite images. I will show some preliminary results in which I apply a CNN (Convolutional Neural Network) based algorithm to analyse the ADAPT ensembles. The broader aim of this work is to improve the solar wind forecast at Earth.
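Once the satellite coronal-hole masks and the ADAPT-WSA outputs share one coordinate grid, ensemble members can be ranked like-for-like with an overlap score. The Dice coefficient below, and the tiny masks, are my illustrative choices rather than the speaker's actual metric or data:

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0

# Hypothetical 4x4 masks, flattened: an observed coronal hole (from a
# segmentation such as CHIMERA) versus one ensemble member's prediction.
observed = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
member   = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(round(dice(observed, member), 2))  # 0.75
```

Scoring each of the 12 ensemble members this way against the observed boundaries would yield a simple ranking of which realisations best match the satellite view.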
 17.30–18.00 Abdulrahman Albidah – A novel approach to identify magnetohydrodynamic (MHD) wave modes in sunspots

Abdulrahman Albidah (University of Sheffield)
Abstract: Observations of sunspots show a complex variety of oscillatory temporal and spatial phenomena. To decompose this data into individual MHD wave modes, the techniques of Fourier analysis, Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) are implemented. Fourier analysis gives an overall view of the temporal and spatial scales of the modes, whereas POD identifies modes that are orthogonal in space and DMD identifies modes that are orthogonal in time.
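As a hedged illustration of POD (my sketch on synthetic data, not the speaker's sunspot analysis), the spatially orthogonal modes are the left singular vectors of a matrix whose columns are snapshots in time:

```python
import numpy as np

# Synthetic data: two oscillating spatial patterns plus noise, one snapshot
# per column.  The patterns and frequencies are made up for the example.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 1, 100)
snapshots = (np.outer(np.sin(x), np.cos(2 * np.pi * 5 * t))
             + 0.5 * np.outer(np.sin(2 * x), np.sin(2 * np.pi * 9 * t))
             + 0.01 * rng.standard_normal((64, 100)))

# POD via the SVD: columns of U are the spatially orthogonal modes and the
# singular values rank their energy content.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
print(s[:3])  # two singular values dominate, matching the two planted modes
```

In real sunspot data the leading modes would then be compared against the expected MHD eigenfunctions of the spot's cross-section to identify which wave modes are present.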
18.00–19.00 Pub Quiz
About the keynote speakers
 Dr Matthew Allcock – Extreme statistics: protecting nuclear power stations from natural disasters

Dr Matthew Allcock (Natural Hazards Research Engineer, EDF) leads space weather research as part of the natural hazards research team at EDF, Britain’s largest provider of low carbon electricity. During his PhD at the University of Sheffield he used magnetohydrodynamics to model waves in the solar atmosphere. Prior to this, he completed an undergraduate mathematics degree at the University of Sheffield and Monash University.
 Dr Silvia Gazzola – Iterative Regularisation Methods for Large-Scale Inverse Problems

Dr Silvia Gazzola (Lecturer, University of Bath) is a Lecturer in Applied Mathematics at the University of Bath. Before joining Bath in 2016, she held research associate positions at Heriot-Watt University (2015) and at the University of Padova (2014). Silvia studied mathematics in Italy, first at the University of Parma and then at the University of Padova, where she earned a PhD in Computational Mathematics in 2014. Silvia’s research interests are in regularisation techniques for inverse problems, with a particular focus on the derivation of new algorithms that exploit advanced numerical linear algebra tools, and imaging applications.
 Dr Aretha Teckentrup – Convergence and Robustness of Gaussian Process Regression

Dr Aretha Teckentrup (Lecturer, University of Edinburgh) is currently a Lecturer in the Mathematics of Data Science at the University of Edinburgh. Before joining Edinburgh in 2016, she held postdoctoral research positions at the University of Warwick and Florida State University. She obtained her PhD from the University of Bath in 2013. Her research interests are at the interface of numerical analysis, statistics and data science. She is particularly interested in uncertainty quantification in simulation with complex computer models, with recent research focussing on multilevel sampling methods, Bayesian inverse problems, and Gaussian process emulators.
 Dr Konstantinos Zygalakis – Bayesian inverse problems, prior modelling and algorithms for posterior sampling

Dr Konstantinos Zygalakis (Reader, University of Edinburgh) is a Reader in the Mathematics of Data Science at the University of Edinburgh. He received a 5-year Diploma in Applied Mathematics and Physics from the National Technical University of Athens in 2004, and his MSc and PhD from the University of Warwick in 2005 and 2009 respectively. Before Edinburgh, he was a David Crighton Fellow at the University of Cambridge and held further postdoctoral positions at the University of Oxford and the Swiss Federal Institute of Technology, Lausanne, as well as a lectureship in Applied Mathematics at the University of Southampton. His research spans a number of areas at the intersection of applied mathematics, numerical analysis, statistics and data science. In 2011, he was awarded a Leslie Fox Prize in Numerical Analysis (IMA UK), and he has been a Fellow of the Alan Turing Institute since 2016. He has co-authored over forty research articles, as well as a graduate textbook in the mathematics of data assimilation.