My group's research is in computational applied mathematics, spanning numerical analysis, mathematical modelling and software. We take challenging science or engineering problems and identify efficient computational strategies to address them, working with applications scientists in many different areas, as well as with computer scientists, high performance computing specialists, and industry. Our work draws on methods from analysis (especially numerical analysis), probability, and statistics, and relies on many concepts from statistical physics and theoretical chemistry. We also develop our own industrial-strength software for various applications, an increasingly demanding part of our work.

Molecular Dynamics.
Much of our work has been on the foundations of numerical methods for simulating the dynamics of molecules. We usually assume that the model can be described in terms of the motions of the nuclei of the system, which turns out to be the natural starting point in many cases. So essentially we have a large system of atoms interacting through forces described by potential energy functions, and the goal is to determine how the molecules move and arrange themselves over time. For example, the images below show two conformational states of a certain molecule.

These represent more or less likely states, and it is important that we observe simulated states with a likelihood similar to that with which they would be observed in reality; that is, we wish to have an accurate approximation of the natural distribution of the different states.

In the simplest setting one solves equations defined by Newton's second law (F = ma), but more commonly one has to include additional constraints on the system, make the model relevant for a laboratory environment (where temperature and pressure come into play), or include additional contributions to the forces from a noisy environment or an auxiliary quantum computation. So things get more complicated, but the basic idea is straightforward: we start with a conformation and approximate the motion after a brief instant, then take another step, and another, eventually constructing a series of snapshots that describe the motion of the system. Ultimately we might be interested in how the molecule arranges itself after a very long time--the so-called "invariant distribution".
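The stepping procedure just described can be sketched in a few lines of code. Below is a minimal illustration using the standard velocity Verlet integrator on a toy one-dimensional harmonic potential; the potential and parameter values are purely illustrative.

```python
import numpy as np

def force(q):
    # Toy potential U(q) = q^2/2, so F(q) = -dU/dq = -q.
    return -q

def velocity_verlet(q, p, dt, n_steps, m=1.0):
    """Advance (q, p) through n_steps of size dt, recording a snapshot per step."""
    traj = [q.copy()]
    f = force(q)
    for _ in range(n_steps):
        p = p + 0.5 * dt * f        # half kick
        q = q + dt * p / m          # drift
        f = force(q)
        p = p + 0.5 * dt * f        # half kick
        traj.append(q.copy())
    return np.array(traj), p

# 1000 snapshots of a unit-mass oscillator started at q = 1, p = 0.
traj, p = velocity_verlet(np.array([1.0]), np.array([0.0]), dt=0.01, n_steps=1000)
```

For this toy problem the exact solution is q(t) = cos t, and the method conserves the energy q²/2 + p²/2 up to small bounded fluctuations, which is one reason symplectic integrators of this kind are the workhorses of molecular simulation.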

As examples of our recent successes in this area, I would highlight our work on the theoretical foundations of Langevin dynamics and the development of numerical methods for it. In Langevin dynamics, the Newtonian dynamics model is supplemented by dissipative and random perturbations which capture the interactions with suppressed degrees of freedom or compensate for other limitations of the conservative description. The system of interest is thus a set of stochastic differential equations. We have considered the formulation of the equations of motion, the properties of those equations (e.g. questions of ergodicity), and the design of accurate and efficient numerical discretisation methods. Some of our work in this area is described in the textbook I have written with Charles Matthews (Molecular Dynamics, Springer 2015).
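One example of such a discretisation is the "BAOAB" splitting from the Leimkuhler-Matthews work, which interleaves deterministic half-kicks (B) and half-drifts (A) with an exact Ornstein-Uhlenbeck update (O) for the friction and noise. Here is a minimal one-dimensional sketch; the toy potential and parameter values are illustrative only.

```python
import numpy as np

def baoab_step(q, p, dt, force, m=1.0, gamma=1.0, kT=1.0, rng=None):
    """One BAOAB step for dq = (p/m) dt, dp = F(q) dt - gamma p dt + sqrt(2 gamma m kT) dW."""
    rng = rng or np.random.default_rng()
    c = np.exp(-gamma * dt)
    p = p + 0.5 * dt * force(q)                    # B: half kick
    q = q + 0.5 * dt * p / m                       # A: half drift
    p = c * p + np.sqrt(m * kT * (1 - c**2)) * rng.standard_normal(q.shape)  # O: exact OU update
    q = q + 0.5 * dt * p / m                       # A: half drift
    p = p + 0.5 * dt * force(q)                    # B: half kick
    return q, p

# Sample the harmonic potential U(q) = q^2/2 at kT = 1; the invariant
# configurational distribution is then a standard normal.
rng = np.random.default_rng(0)
q, p = np.array([0.0]), np.array([0.0])
samples = []
for step in range(50000):
    q, p = baoab_step(q, p, 0.1, lambda x: -x, rng=rng)
    if step >= 1000:
        samples.append(q[0])
```

The long-run histogram of `samples` approximates the Gibbs distribution, illustrating the sense in which such schemes target the invariant distribution rather than individual trajectories.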

More recent work has generalised the algorithms to constrained systems and systems with memory (generalised Langevin), or looked at the redesign of the stochastic-dynamical model to enhance exploration in the presence of extreme metastability due to energetic or entropic barriers. We also have an important stream of research on mesoscale particle models such as dissipative particle dynamics.

Molecular Dynamics Software. It is also worth mentioning our recent work on molecular dynamics software. In this domain, a few very large consortia have produced the most important products, such as CHARMM, NAMD, GROMACS, Amber and LAMMPS. These packages generally combine the contributions of dozens of authors from different disciplines. Because all classical molecular dynamics models involve compromises (quantum mechanics would be more accurate, but is computationally astronomically expensive due to the 'curse of dimensionality'), a great deal of work goes into making the force fields as accurate as possible. This means fitting empirical force field models to functions of the pair distances, coordinates of local triples, etc.; it is highly nontrivial. Other substantial work goes into making the codes run fast on the latest hardware, for example GPUs.
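As a flavour of the simplest kind of empirical pair model, here is a sketch of a Lennard-Jones potential evaluated over all pair distances; real force fields add bonded terms, electrostatics, cutoffs and neighbour lists, so this is only a cartoon.

```python
import numpy as np

def lennard_jones(positions, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy and per-atom forces for an (N, 3) position array."""
    n = len(positions)
    energy = 0.0
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            r2 = rij @ rij
            s6 = (sigma**2 / r2) ** 3                  # (sigma/r)^6
            energy += 4.0 * epsilon * (s6**2 - s6)
            # F_i = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2 * rij
            fij = 24.0 * epsilon * (2.0 * s6**2 - s6) / r2 * rij
            forces[i] += fij
            forces[j] -= fij
    return energy, forces

# A dimer at the minimum-energy separation r = 2^(1/6) sigma.
pos = np.array([[0.0, 0.0, 0.0], [2.0 ** (1.0 / 6.0), 0.0, 0.0]])
e, f = lennard_jones(pos)   # energy is -epsilon and the forces vanish at the minimum
```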

When we develop new algorithms, it can take a long time to get them implemented in these codes: there are often development queues and priorities that delay implementation. So our algorithms sometimes languish on the shelf for years before finally getting tested, which can be frustrating. For this reason, we have worked with scientists from the Edinburgh Parallel Computing Centre (EPCC) to create a software interface called MIST (the "Molecular Integration Simulation Toolkit") that allows us to implement our algorithms very easily. The interface handles all the communication with various molecular simulation packages such as NAMD, GROMACS, etc. In this way we can take advantage of the optimisations built into those codes while also having the ability to test our algorithms rapidly on "real-world" problems. MIST is freely available.

Bayesian Models for Data Analysis. We have also been exploring algorithms that can be used for Bayesian parameterisation of complex models. The model can be a neural network or a traditional mathematical model. Let us denote the model by $\Phi$ and suppose that it takes input $X$ to output $Y$, given parameters $\theta$:

$Y = \Phi(X,\theta)$.

Now, in a common setup, one assumes there is stochastic uncertainty in the relationship, so it is natural to consider a model in which the inputs and outputs are interpreted as random variables and the strict equality above is replaced by a statistical model, say

(1) $\pi \left (x,y|\theta \right ) = Z^{-1}\exp \left ( -\frac{\left [ y-\Phi(x,\theta)\right ]^2}{2\sigma^2} \right ),$

where, for simplicity, we take $\sigma$ fixed. This is just one (elementary) choice for the statistical model, but it will serve for this exposition. Now, given a collection of samples (data) $D=\{(x_{\alpha},y_{\alpha})\}$, we may ask what is the best choice of the parameters $\theta$ to fit the given model to the given data. From the Bayesian perspective, the parameters themselves can be viewed as uncertain, with a distribution defined by Bayesian inversion:

$\pi\left (\theta| D \right) \propto \pi(D|\theta)\pi_0(\theta).$

The factors here are all probability densities. $\pi_0$ is an assumed ``prior'' for the parameters which might, for example, restrict the parameters to be taken from a specified finite domain (or it may be Gaussian). The usual choice for the conditional distribution, $\pi(D|\theta)$, is based on the likelihood which is defined by the statistical model (1) under the assumption that all the data points are independent:

$\pi(D|\theta) = \prod_{\alpha=1}^{|D|} \pi(x_{\alpha},y_{\alpha}|\theta).$
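Combining the likelihood with the prior, the unnormalised log-posterior is just a sum of per-datum terms plus the log-prior. A sketch under the Gaussian model (1), with a Gaussian prior for illustration; the linear model `phi` below is purely hypothetical.

```python
import numpy as np

def log_posterior(theta, X, Y, phi, sigma=1.0, prior_var=10.0):
    """Unnormalised log pi(theta|D) for model (1) with an isotropic Gaussian prior."""
    residuals = Y - phi(X, theta)
    log_lik = -0.5 * np.sum(residuals**2) / sigma**2       # log of the product over data
    log_prior = -0.5 * np.sum(np.asarray(theta)**2) / prior_var
    return log_lik + log_prior

# Hypothetical linear-in-parameters model Phi(x, theta) = theta_0 x + theta_1.
phi = lambda X, theta: theta[0] * X + theta[1]
X = np.array([0.0, 1.0, 2.0])
Y = np.array([0.1, 0.9, 2.1])
lp = log_posterior(np.array([1.0, 0.0]), X, Y, phi)
```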

It is now possible to treat the parameterisation problem in various ways. The most common approach is to optimise the parameters by finding the global maximum of the probability density $\pi(\theta|D)$. A more nuanced approach is to embrace the uncertainty in the choice of $\theta$ from the Bayesian perspective and to explore, i.e. ``sample'', the parameter distribution. The distribution is normally multimodal, and we may attempt to find a number of the modes or to average the parameter values across the distribution. This is the approach we are currently taking in our data analytics work.
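To fix ideas, the sampling alternative can be illustrated by the most basic MCMC method, a random-walk Metropolis chain driven by the log-density of $\pi(\theta|D)$; this generic textbook scheme is a stand-in, not one of our production samplers.

```python
import numpy as np

def metropolis(log_target, theta0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis: propose theta + step*noise, accept with prob min(1, pi'/pi)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_target(theta)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_target(proposal)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
            theta, lp = proposal, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Demo on a one-dimensional standard normal "posterior".
samples = metropolis(lambda t: -0.5 * float(t @ t), np.zeros(1), 50000, step=1.0)
```

For a multimodal posterior a chain like this gets trapped in one mode, which is precisely the difficulty the enhanced sampling strategies below are designed to overcome.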

Among the specific problems we have looked at, or are looking at, in this domain, I would mention the following. First, there is a basic problem with the simple technique mentioned above in the context of large data sets. Every time we compute a new value of the density (or its gradient, as is often needed), we must compute a product of likelihoods across the entire data set, which will be very expensive if $|D|$ is large. One way to reduce the computational burden is to replace the full likelihood by an approximation obtained from a subsample of the data set. This might be just a small percentage of the full data set, so the properties of the data set, e.g. its concentration, play a critical role in determining when this approach will be successful in practice. Nonetheless it is widely used in the data community, where it goes under the name of "stochastic gradients". A very similar problem arises in multiscale modelling, when the forces are computed by some device which perturbs the conservative structure. This gradient noise creates an additional source of noise in the parameterisation process. It turns out that this noise can be neutralised by use of a ``thermostat'' method, something we have studied extensively in the context of chemical/physical modelling. This remains very much an open problem and we are currently exploring it as part of other work we are doing.
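The subsampling trick amounts to replacing the sum of per-datum gradient terms by a rescaled sum over a random mini-batch, which is unbiased but noisy; the noise is exactly the "gradient noise" referred to above. A sketch (the helper `grad_one`, returning the gradient of one log-likelihood term, is hypothetical):

```python
import numpy as np

def subsampled_grad_log_lik(theta, X, Y, grad_one, batch_size, rng):
    """Unbiased estimate of grad log pi(D|theta): rescale a mini-batch sum
    by |D|/batch_size so its expectation matches the full-data sum."""
    n = len(X)
    idx = rng.choice(n, size=batch_size, replace=False)
    return (n / batch_size) * sum(grad_one(theta, X[i], Y[i]) for i in idx)

# Hypothetical scalar model Phi(x, theta) = theta*x under model (1) with sigma = 1:
# each log-likelihood term is -(y - theta*x)^2 / 2, with gradient (y - theta*x)*x.
def grad_one(theta, x, y):
    return (y - theta * x) * x

rng = np.random.default_rng(0)
X = np.array([0.0, 1.0, 2.0, 3.0])
Y = np.array([0.5, 1.0, 2.5, 2.0])
full = sum(grad_one(0.7, X[i], Y[i]) for i in range(len(X)))
```

With `batch_size = len(X)` the estimate reproduces `full` exactly; for smaller batches it fluctuates around it, and the size of those fluctuations is what a thermostat must compensate for.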

Enhancing Sampling. Another important issue is the treatment of multimodality of the probability distribution. In many cases there will be multiple reasonable parameterisations of the model defined by the specific data provided. In such situations it becomes crucial to explore or ``sample'' the probability distribution defined by Bayes' formula. This task is computationally demanding, since standard MCMC schemes will get stuck in one or another localised region of the parameter space. To enhance exploration one may use innovative sampling strategies, very like the schemes we would commonly employ to sample, for example, the conformations of biological macromolecules. A very promising strategy for this purpose is to employ a collection of coupled 'walkers', which provide a means of rescaling the dynamics to increase exploration, as in our ensemble quasi-Newton preconditioned MCMC scheme. Another exciting possibility is to use multiple temperatures, as in our infinite swap simulated tempering scheme. A third intriguing method is based on diffusion map analysis of simulated data (the data being parameters, in this case). Research in these areas, and on their use in real-world applications, is a very active current direction.
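To illustrate the coupled-walker idea in its simplest form, the sketch below implements the generic affine-invariant "stretch move" of Goodman and Weare, in which each walker proposes a move along the line through a randomly chosen partner, so the ensemble geometry rescales the proposals automatically. This is not our quasi-Newton preconditioned scheme, only a minimal cartoon of walker coupling.

```python
import numpy as np

def stretch_move_sweep(walkers, log_target, a=2.0, rng=None):
    """One sweep of the Goodman-Weare stretch move over an (L, d) walker array."""
    rng = rng or np.random.default_rng()
    L, d = walkers.shape
    for k in range(L):
        j = rng.integers(L - 1)
        if j >= k:
            j += 1                                       # partner walker, j != k
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a     # stretch factor on [1/a, a]
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        log_accept = (d - 1) * np.log(z) + log_target(proposal) - log_target(walkers[k])
        if np.log(rng.random()) < log_accept:
            walkers[k] = proposal
    return walkers

# Demo: 20 walkers sampling a one-dimensional standard normal.
rng = np.random.default_rng(2)
walkers = rng.standard_normal((20, 1))
pooled = []
for sweep in range(3000):
    walkers = stretch_move_sweep(walkers, lambda t: -0.5 * float(t @ t), rng=rng)
    if sweep >= 500:
        pooled.extend(walkers[:, 0])
```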

Benedict Leimkuhler, Charles Matthews, Tiffany Vlaar, Preprint(2019) web

2 Hypocoercivity properties of adaptive Langevin dynamics

Benedict Leimkuhler, Matthias Sachs, and Gabriel Stoltz, Preprint(2019) web

3 Transferability of Operational Status Classification Models Among Different Wind Turbine Types

Z. Trstanova, A. Martinsson, C. Matthews, S. Jimenez, B. Leimkuhler, T. Van Delft, and M. Wilkinson,

Journal of Physics: Conference Series, 1222: 12041, 2019 journal web

4 TATi-Thermodynamic Analytics ToolkIt: TensorFlow-based software for posterior sampling in machine learning applications

F. Heber, Z. Trstanova, and B. Leimkuhler, Preprint(2019) web

5 Local and Global Perspectives on Diffusion Maps in the Analysis of Molecular Systems

Z. Trstanova, B. Leimkuhler and T. Lelievre, Preprint(2019) pdf web

6 MIST: A Simple and Efficient Molecular Dynamics Abstraction Library for Integrator Development

I. Bethune et al,

Computer Physics Communications, 2018 journal pdf web

7 Ergodic properties of quasi-Markovian generalized Langevin equations with configuration dependent noise

M. Sachs and B. Leimkuhler, Preprint(2018) web

8 Quantifying configuration-sampling error in Langevin simulations of complex molecular systems

J Fass, D Sivak, GE Crooks, KA Beauchamp, B Leimkuhler, and J Chodera,

Entropy, 20: 318, 2018 journal web

9 Simulated Tempering Method in the Infinite Switch Limit with Adaptive Weight Learning

A. Martinsson, J. Lu, B. Leimkuhler and E. Vanden-Eijnden, Preprint(2018) web

10 Assessing numerical methods for molecular and particle simulation

X. Shang, M. Kroger and B. Leimkuhler,

Soft Matter, 13: 8565-8578, 2017 journal

11 Langevin Dynamics with Variable Coefficients and Nonconservative Forces: From Stationary States to Numerical Methods

M. Sachs, B. Leimkuhler, and V. Danos,

Entropy, 19: 647-667, 2017 journal

12 Ensemble preconditioning for Markov chain Monte Carlo simulation

C. Matthews, J. Weare, B. Leimkuhler,

Statistics and Computing, 28: 277-290, 2018 journal

13 Observation-based correction of dynamical models using thermostats

K. Myerscough, J. Frank, and B. Leimkuhler,

Proceedings of the Royal Society A, 473: 20160730, 2017 journal

14 Efficient molecular dynamics using geodesic integration and solvent-solute splitting

B. Leimkuhler and C. Matthews,

Proceedings of the Royal Society A, 472: 20160138, 2016 journal web

15 A stochastic algorithm for the isobaric-isothermal ensemble with Ewald summations for all long range forces

M. Di Pierro, R. Elber,and B. Leimkuhler,

Journal of Chemical Theory and Computation, 11: 5624-5637, 2015 journal web

16 Covariance-controlled adaptive Langevin thermostat for large-scale Bayesian sampling

X. Shang, Z. Zhu, B. Leimkuhler and A. Storkey,

Advances in Neural Information Processing Systems 28 (NIPS 2015), 2015 web

17 Extended Hamiltonian approach to continuous tempering

G. Gobbo and B. Leimkuhler,

Phys Rev E, 91: 61301, 2015 journal web

18 Adaptive Thermostats for Noisy Gradient Systems

B. Leimkuhler and X. Shang,

SIAM Journal on Scientific Computing, 38: Preprint(2015) web

19 Least-biased correction of extended dynamical systems using observational data

K. Myerscough, J. Frank, and B. Leimkuhler, Preprint(2014) web

20 Direct control of the small-scale energy balance in 2D fluid dynamics

J. Frank, B. Leimkuhler and K. Myerscough,

Journal of Fluid Mechanics, 782: 240-259, 2015 journal web

21 The Adaptive Buffered Force QM/MM method in the CP2K and Amber software packages

L. Mones, A. Jones, A. Götz, T. Laino, R. Walker, B. Leimkuhler, G. Csànyi, and N. Bernstein,

Journal of Computational Chemistry, 36: 633-648, 2015 journal web

22 Numerical simulations of nonlinear modes in mica: past, present and future

J. Bajars, C. Eilbeck and B. Leimkuhler,

in Quodons in Mica: Nonlinear Localized Travelling Excitations in Crystals; Springer Series in Materials Science Vol 221, 35-67, 2015 journal web

23 Nonlinear propagating localized modes in a 2D hexagonal crystal lattice

J. Bajars, C. Eilbeck and B. Leimkuhler,

Physica D, 301-302: 8-20, 2015 journal web

24 On the numerical treatment of dissipative particle dynamics and related systems

B. Leimkuhler and X. Shang,

Journal of Computational Physics, 280: 72-95, 2015 journal web

25 On the long-time integration of stochastic gradient systems

B. Leimkuhler, C. Matthews and M. Tretyakov,

Proceedings of the Royal Society A, 470: 20140120, 2014 journal web

26 The computation of averages from equilibrium and nonequilibrium Langevin molecular dynamics

B. Leimkuhler, C. Matthews and G. Stoltz,

IMA Journal of Numerical Analysis, 2015 journal pdf web

27 Robust and efficient configurational molecular sampling via Langevin Dynamics

B. Leimkuhler and C. Matthews,

Journal of Chemical Physics, 138: 174102, 2013 journal pdf web

28 Stochastic resonance-free multiple time-step algorithm for molecular dynamics with very large time steps

B. Leimkuhler, D. Margul and M. Tuckerman,

Molecular Physics, 111: 3579-3594, 2013 journal pdf web

29 Weakly coupled heat bath models for Gibbs-like invariant states in nonlinear wave equations

J. Bajars, J. Frank and B. Leimkuhler,

Nonlinearity, 26: 1945-1973, 2013 journal pdf

30 Rational construction of stochastic numerical methods for molecular sampling

B. Leimkuhler and C. Matthews,

Applied Mathematics Research Express, 2013: 34-56, 2013 journal pdf web

31 Comparing the efficiencies of stochastic isothermal molecular dynamics methods

B. Leimkuhler, E. Noorizadeh and O. Penrose,

Journal of Statistical Physics, 143: 921-942, 2011 journal pdf

32 Adaptive stochastic methods for sampling driven molecular systems

A. Jones and B. Leimkuhler,

Journal of Chemical Physics, 135: 84125, 2011 journal

33 Dimensional reductions for the computation of time-dependent quantum expectations

B. Leimkuhler and G. Mazzi,

SIAM Journal on Scientific Computing, 33: 2024-2038, 2011 journal

34 Stochastic-dynamical thermostats for constraints and stiff restraints

J. Bajars, J. Frank and B. Leimkuhler,

The European Physical Journal, 200: 131-152, 2011 journal pdf

35 Simplified modelling of a thermal bath, with application to a fluid vortex system

S. Dubinkina, J. Frank, and B. Leimkuhler,

SIAM Multiscale Modelling and Simulation, 8: 1882-1901, 2010 journal pdf

36 Generalized Bulgac-Kusnezov methods for sampling of the Gibbs-Boltzmann measure

B. Leimkuhler,

Physical Review E, 81: 26703, 2010 journal pdf

37 A gentle stochastic thermostat for molecular dynamics

B. Leimkuhler, E. Noorizadeh and F. Theil,

Journal of Statistical Physics, 135: 261-277, 2009 journal pdf

38 A Metropolis-adjusted Nose-Hoover thermostat

B. Leimkuhler and S. Reich,

ESAIM:Mathematical modelling and numerical analysis, 43: 743-755, 2009 journal pdf

39 A temperature control technique for nonequilibrium molecular simulation

B. Leimkuhler, F. Legoll and E. Noorizadeh,

Journal of Chemical Physics, 128: 74105, 2008 journal pdf

40 Molecular dynamics and the accuracy of numerically computed averages

S. Bond and B. Leimkuhler,

Acta Numerica, 16: 1-65, 2007 journal pdf

41 Stabilized integration of Hamiltonian systems with hard-sphere inequality constraints

S. Bond and B. Leimkuhler,

SIAM Journal on Scientific Computing, 30: 134-147, 2007 journal pdf

42 Molecular simulation in the canonical ensemble and beyond

Z. Jia and B. Leimkuhler,

ESAIM:Mathematical modelling and numerical analysis, 41: 333-350, 2007 journal

43 Rapid thermal equilibration of coarse-grained molecular dynamics

S. Gill, Z. Jia, B. Leimkuhler and A. Cocks,

Physical Review B, 73: 184304, 2006 journal pdf

44 Geometric integrators for multiple timescale simulation

Z. Jia and B. Leimkuhler,

Journal of Physics A, 39: 5379-5403, 2006 journal

45 Approach to thermal equilibrium in biomolecular simulation

E. Barth, B. Leimkuhler, and C. Sweet,

New Algorithms for Macromolecular Simulation (Springer Lecture Notes in Computational Science and Engineering), 49: 125-140, 2006 journal pdf

46 A projective thermostatting technique

Z. Jia and B. Leimkuhler,

SIAM Multiscale Modelling and Simulation, 4: 563-583, 2005 journal pdf

47 A Hamiltonian formulation for recursive multiple thermostats in a common timescale

B. Leimkuhler and C. Sweet,

SIAM Journal on Applied Dynamical Systems, 4: 187-216, 2005 journal pdf

48 An efficient geometric integrator for thermostatted anti-/ferromagnetic Models

T. Arponen and B. Leimkuhler,

BIT Numerical Mathematics, 44: 403-424, 2004 journal pdf

49 The canonical ensemble via symplectic integration using Nose and Nose-Poincare chains

B. Leimkuhler and C. Sweet,

Journal of Chemical Physics, 121: 108-117, 2004 journal pdf

50 A parallel multiple time-scale reversible integrator for dynamics simulation

Z. Jia and B. Leimkuhler,

Future Generation Computer Systems, 19: 415-424, 2003 journal pdf

51 Generating generalized distributions from dynamical simulation

E. Barth, B. Laird and B. Leimkuhler,

Journal of Chemical Physics, 118: 5759-5769, 2003 journal pdf

52 Generalized dynamical thermostatting technique

B. Laird and B. Leimkuhler,

Physical Review E, 68: 016704, 2003 journal pdf

53 On the approximation of Feynman-Kac path integrals

S. Bond, B. Laird, and B. Leimkuhler,

Journal of Computational Physics, 185: 472-483, 2003 journal pdf

54 An efficient multiple time-scale reversible integrator for the gravitational N-body problem

B. Leimkuhler,

Applied Numerical Mathematics, 43: 175-190, 2002 journal pdf

55 A separated form of Nosé dynamics for constant temperature and pressure simulation

B. Leimkuhler,

Computer Physics Communications, 148: 206-213, 2002 journal pdf

56 A test set for molecular dynamics

E. Barth, B. Leimkuhler, and S. Reich,

Lecture Notes in Computational Science and Engineering, 24: 73-103, 2002 journal pdf

57 A reversible averaging integrator for multiple timescale dynamics

B. Leimkuhler and S. Reich,

Journal of Computational Physics, 171: 95-114, 2001 journal pdf

58 Explicit, time-reversible and variable stepsize integration

T. Holder, B. Leimkuhler, and S. Reich,

Applied Numerical Mathematics, 39: 367-377, 2001 journal pdf

59 Scaling invariance and adaptivity

C. Budd, B. Leimkuhler and M. Piggott,

Applied Numerical Mathematics, 39: 261-288, 2001 journal pdf

60 Molecular dynamics algorithms for mixed hard-core/continuous potentials

Y. Houndonougbo, B. Laird, and B. Leimkuhler,

Journal of Molecular Physics, 98: 309-316, 2000 journal pdf

61 A time-reversible, regularized, switching integrator for the N-body problem

A. Kvaerno and B. Leimkuhler,

SIAM Journal on Scientific Computing, 22: 1016-1035, 2000 journal pdf

62 Geometric integrators based on scaling and switching

B. Leimkuhler,

Proceedings of the Equadiff Conference (Berlin, 1999), World Scientific, Preprint(2000) pdf

63 A Semi-explicit, variable-stepsize integrator for constrained dynamics

E. Barth, B. Leimkuhler, and S. Reich,

SIAM Journal on Scientific Computing, 21: 1027-1044, 1999 journal pdf

64 Reversible adaptive regularization: perturbed Kepler motion and classical atomic trajectories

B. Leimkuhler,

Philosophical Transactions of the Royal Society of London A, 357: 1101-1134, 1999 journal pdf

65 Reversible adaptive regularization methods for atomic N-body problems in applied fields

B. Leimkuhler,

Applied Numerical Mathematics, 29: 31-43, 1999 journal pdf

66 Asymptotic error analysis of the Adaptive Verlet method

S. Cirilli, E. Hairer and B. Leimkuhler,

BIT Numerical Mathematics, 39: 25-33, 1999 journal pdf

67 The Nose-Poincare method for constant temperature molecular dynamics

S. Bond, B. Leimkuhler and B. Laird,

Journal of Computational Physics, 151: 114-134, 1999 journal pdf

68 Timestep acceleration of waveform relaxation

B. Leimkuhler,

SIAM Journal on Numerical Analysis, 35: 31-50, 1998 journal pdf

69 Symplectic methods for rigid body dynamics

B. Leimkuhler,

in Computational Molecular Dynamics: Challenges, Methods, Ideas, Springer Lecture Notes in Computational Science and Engineering, 4, 1998 journal

70 Time transformations for reversible variable stepsize integration

S. Bond and B. Leimkuhler,

Numerical Algorithms, 19: 55-71, 1998 journal

71 The Adaptive Verlet method

W. Huang and B. Leimkuhler,

SIAM Journal on Scientific Computing, 18: 239-256, 1997 journal pdf

72 Geometric integrators for classical spin systems

J. Frank, W. Huang and B. Leimkuhler,

Journal of Computational Physics, 133: 160-172, 1997 journal pdf

73 Split-Hamiltonian methods for rigid body molecular dynamics

A. Dullweber, B. Leimkuhler and R. McLachlan,

Journal of Chemical Physics, 107: 5840-5851, 1997 journal pdf

74 A symplectic method for rigid-body molecular simulation

A. Kol, B. Laird and B. Leimkuhler,

Journal of Chemical Physics, 107: 2580-2588, 1997 journal

75 Integration Methods for Molecular Dynamics

B. Leimkuhler, S. Reich and R.D. Skeel,

in IMA Volumes in Mathematics and Its Applications, Springer-Verlag, 82, 1997 journal

76 A symplectic integrator for Riemannian manifolds

B. Leimkuhler and G. Patrick,

Journal of Nonlinear Science, 6: 367-384, 1996 journal pdf

77 Orthosymplectic integration of linear Hamiltonian systems

B. Leimkuhler and E. Van Vleck,

Numerische Mathematik, 77: 269-282, 1996 journal pdf

78 Symplectic Methods for Conservative Multibody Systems

E. Barth and B. Leimkuhler,

Communications of the Fields Institute , 10: 25-43, 1996 journal

79 Algorithms for constrained molecular dynamics

E. Barth, K. Kuczera, B. Leimkuhler and R. Skeel,

Journal of Computational Chemistry, 16: 1192-1209, 1995 journal pdf

80 Symplectic integration of constrained Hamiltonian systems

B. Leimkuhler and S. Reich,

Mathematics of Computation, 63: 589-605, 1994 journal pdf

81 Symplectic numerical integrators in constrained Hamiltonian systems

B. Leimkuhler and R. Skeel,

Journal of Computational Physics, 112: 117-125, 1994 journal pdf

82 Estimating waveform relaxation convergence

B. Leimkuhler,

SIAM Journal on Scientific Computing, 14: 872-889, 1993 journal pdf

83 Relaxation Techniques in Multibody Dynamics

B. Leimkuhler,

Transactions of the Canadian Society of Mechanical Engineering, 17: 459-471, 1993 journal

84 Rapid Convergence of Waveform Relaxation

B. Leimkuhler and A. Ruehli,

Applied Numerical Mathematics, 11: 211-224, 1993 journal

85 Dynamic Iteration Schemes for Differential-Algebraic Equations in Large Scale Circuit Simulation

B. Leimkuhler,

Proceedings of the Conference on Computational Ordinary Differential Equations(London, 1990), IMA, 39: Preprint(1992)

86 Waveform Relaxation for Linear RC-Circuits

B. Leimkuhler, U. Miekkala and O. Nevanlinna,

IMPACT of Computing in Science and Engineering, 3: 123-145, 1991 journal

87 Differentiation of Constraints in Differential-Algebraic Equations

S.L. Campbell and B. Leimkuhler,

Mechanics of Structures and Machines, 19: 19-39, 1991 journal

88 Numerical solution of differential-algebraic equations for constrained mechanical motion

C. Fuehrer and B. Leimkuhler,

Numerische Mathematik, 59: 55-69, 1991 journal pdf

89 Formulation and Numerical Solution of the Equations of Constrained Mechanical Motion

C. Fuehrer and B. Leimkuhler,

Proceedings of NUMDIFF-5 (Halle, 1989), Teubner-Texte in Mathematics, 121, 1991 journal

90 Approximation Methods for the Consistent Initialization of Differential Algebraic Equations

B. Leimkuhler, L.R. Petzold and C.W. Gear,

SIAM Journal on Numerical Analysis, 28: 205-226, 1991 journal

91 A New Class of Generalized Inverses for Discretized Euler-Lagrange Equations

C. Fuehrer and B. Leimkuhler,

in NATO Advanced Research Workshop on Real-Time Integration Methods for Mechanical System Simulation, Springer, 1990 journal

92 Automatic integration of Euler-Lagrange equations with constraints

C.W. Gear, B. Leimkuhler, G.K. Gupta,

Journal of Computational and Applied Mathematics, 12-13: 77-90, 1985 journal pdf

Simulating Hamiltonian Dynamics

B. Leimkuhler and S. Reich,

Cambridge University Press, 2005 publisher

Geometric integrators are time-stepping methods, designed such that they exactly satisfy conservation laws, symmetries or symplectic properties of a system of differential equations. In this book the authors outline the principles of geometric integration and demonstrate how they can be applied to provide efficient numerical methods for simulating conservative models. Beginning from basic principles and continuing with discussions regarding the advantageous properties of such schemes, the book introduces methods for the N-body problem, systems with holonomic constraints, and rigid bodies. More advanced topics treated include high-order and variable stepsize methods, schemes for treating problems involving multiple time-scales, and applications to molecular dynamics and partial differential equations. The emphasis is on providing a unified theoretical framework as well as a practical guide for users. The inclusion of examples, background material and exercises enhance the usefulness of the book for self-instruction or as a text for a graduate course on the subject.

• Thorough treatment of a relatively new subject, covers theory, applications and also gives practical advice on implementing the techniques • Emphasis on 'efficient' numerical methods • Large number of examples and exercises.

**Contents**

1. Introduction; 2. Numerical methods; 3. Hamiltonian mechanics; 4. Geometric integrators; 5. The modified equations; 6. Higher order methods; 7. Constrained mechanical systems; 8. Rigid body dynamics; 9. Adaptive geometric integrators; 10. Highly oscillatory problems; 11. Molecular dynamics; 12. Hamiltonian PDEs.


Molecular Dynamics

B. Leimkuhler and C. Matthews,

Springer Verlag, 2015 publisher

This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method.

**Contents**

1. Introduction; 2. Numerical integrators; 3. Analyzing geometric integrators; 4. The stability threshold; 5. Phase space distributions and microcanonical averages; 6. The canonical distribution and stochastic differential equations; 7. Numerical methods for stochastic differential equations; 8. Extended variable methods.


TATi stands for "Thermodynamic Analytics Toolkit". It is a state-of-the-art Python software suite for examining the loss landscapes of neural networks, built on TensorFlow's Python API. It brings advanced sampling methods to neural network training: its tools allow the user to explore the loss manifold, which is determined by the chosen network and the dataset. The simulation module makes it easy and straightforward to apply sampling codes written in Python in the context of neural networks. The goal of the software is to enable the user to analyze and adapt the network employed for a specific classification problem to best fit her or his needs. TATi has received financial support from a seed funding grant and a Rutherford fellowship from the Alan Turing Institute in London (R-SIS-003, R-RUT-001) and EPSRC grant no. EP/P006175/1 (Data Driven Coarse Graining using Space-Time Diffusion Maps, B. Leimkuhler PI). For a discussion of TATi see the arXiv article at https://arxiv.org/abs/1903.08640.
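
TATi's own interface is documented at the link below; as a language-agnostic sketch of the kind of sampling it performs, here is an overdamped Langevin (Euler-Maruyama) loop on a toy one-dimensional "loss". The function and parameter names are illustrative, not TATi's API.

```python
import math
import random

def overdamped_langevin(grad_loss, x0, step, n_steps, beta=1.0, seed=0):
    """Sample from the Gibbs density exp(-beta * L(x)) via the discretisation
    x_{k+1} = x_k - step * grad L(x_k) + sqrt(2 * step / beta) * N(0, 1)."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        noise = rng.gauss(0.0, 1.0)
        x = x - step * grad_loss(x) + math.sqrt(2.0 * step / beta) * noise
        samples.append(x)
    return samples

# Toy quadratic "loss" L(x) = x^2/2: for beta = 1 the invariant density is a
# standard Gaussian, so the long-run sample variance should be close to 1.
samples = overdamped_langevin(grad_loss=lambda x: x, x0=0.0,
                              step=0.01, n_steps=50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Replacing the quadratic toy loss by the gradient of a neural network's loss function gives exactly the kind of landscape exploration described above, with `beta` playing the role of an inverse temperature.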

Authors: F. Heber, Z. Trstanova, B. Leimkuhler

Link: https://pypi.org/project/tati/

MIST is the Molecular Integration Simulation Toolkit, a lightweight and efficient software library written in C++ which provides an abstract interface to common molecular dynamics codes, enabling rapid and portable development of new integration schemes for molecular dynamics. The initial release provides plug-in interfaces to NAMD-Lite, GROMACS and Amber, and includes several standard integration schemes, a constraint solver, temperature control using Langevin dynamics, and two tempering schemes. The architecture and functionality of the library are designed to be highly flexible, and include C and Fortran APIs which can be used to interface additional MD codes to MIST. We have shown, for a range of test systems, that MIST introduces negligible overhead for serial, shared-memory parallel, and GPU-accelerated cases, except for Amber, where the native integrators run directly on the GPU itself. It has been applied to implement simulated tempering algorithms and used to study the free energy landscapes of biomolecules in both vacuum and detailed solvent conditions. A research article describing MIST is available at https://www.sciencedirect.com/science/article/pii/S0010465518303540?via%3Dihu.
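
MIST's actual C/Fortran APIs are described in the linked article; the plug-in idea — an integrator written once against an abstract state/force interface, reusable over any MD engine — can be sketched in Python roughly as follows. All class and method names here are illustrative, not MIST's.

```python
from abc import ABC, abstractmethod

class MDEngine(ABC):
    """Abstract view of an MD code: the integrator sees only this interface."""
    @abstractmethod
    def positions(self): ...
    @abstractmethod
    def velocities(self): ...
    @abstractmethod
    def forces(self): ...
    @abstractmethod
    def set_state(self, q, v): ...

class ToyEngine(MDEngine):
    """Stand-in 'engine': a single particle in a harmonic well, F(q) = -q."""
    def __init__(self, q, v):
        self.q, self.v = q, v
    def positions(self): return self.q
    def velocities(self): return self.v
    def forces(self): return -self.q
    def set_state(self, q, v):
        self.q, self.v = q, v

def verlet_step(engine, h, mass=1.0):
    """One velocity Verlet step written only against the abstract interface,
    so the same code could drive any engine implementing MDEngine."""
    q, v = engine.positions(), engine.velocities()
    v = v + 0.5 * h * engine.forces() / mass   # half kick (old force)
    engine.set_state(q + h * v, v)             # drift
    v = v + 0.5 * h * engine.forces() / mass   # half kick (new force)
    engine.set_state(engine.positions(), v)

engine = ToyEngine(q=1.0, v=0.0)
for _ in range(1000):
    verlet_step(engine, h=0.05)
energy = 0.5 * engine.v ** 2 + 0.5 * engine.q ** 2
```

Swapping `ToyEngine` for an adapter wrapping a real MD code is the essence of the plug-in design: the integration scheme itself never changes.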

Authors: I. Bethune, R. Banisch, E. Breitmoser, A. B. K. Collis, G. Gibb, G. Gobbo, C. Matthews, G. J. Ackland, and B. J. Leimkuhler

Link: https://bitbucket.org/extasy-project/mist/wiki/Home

This is a small software package for MATLAB, designed to teach students about MD in a familiar and easy-to-use setting. HTML documentation is included in the folder. (After unzipping, open the file MDdotM/Doc/MDdotM.html in your web browser.)

Authors: B. Leimkuhler and C. Matthews

Link: http://kac.maths.ed.ac.uk/pub/MDdotM.zip

A C++ and Qt tool for studying machine learning algorithms.

Authors: B Leimkuhler

Link: http://kac.maths.ed.ac.uk/ThermoML

As soon as I get one, I promise to post it here.

My research group is based in the School of Mathematics at the University of Edinburgh. We combine expertise in dynamics, classical, quantum and statistical mechanics with advanced scientific computing techniques to develop forefront simulation methodology for applications such as molecular dynamics. We also have a growing interest in understanding, using and improving techniques from statistics and machine learning, both for applications in molecular simulation and for a wider range of problems in big and small data science.

Current Group Members:

- Charlie Matthews, EPSRC Postdoctoral Research Assistant
- Anton Martinsson, MIGSAA PhD Student
- Tiffany Vlaar, MIGSAA PhD Student
- Iain Bethune, Part-Time PhD Student (Physics)

Recent Group Members:

- Assyr Abdulle (EPFL, Lausanne)
- Teijo Arponen (Aalto, Finland)
- John Aston (Cambridge)
- Eric Barth (Kalamazoo)
- Stephen Bond (Sandia, New Mexico)
- Nawaf Bou-Rabee (Rutgers)
- Chris Budd (Bath)
- Giovanni Ciccotti (Dublin/Rome)
- John Chodera (MSKCC)
- Cecilia Clementi (Rice)
- Gabor Csanyi (Cambridge)
- Vincent Danos (Edinburgh/Paris)
- Carl Dettmann (Bristol)
- Arnaud Doucet (Oxford)
- Ron Elber (Texas)
- Jason Frank (CWI-Amsterdam)
- Gero Friesecke (Munich)
- Mark Girolami (Turing)
- Martin Hairer (Warwick)
- Ernst Hairer (Geneva)
- Arieh Iserles (Cambridge)
- George Karniadakis (Brown)
- Markos Katsoulakis (Amherst)
- Yannis Kevrekidis (Johns Hopkins)
- Heinz Koeppl (Darmstadt)
- Claude Le Bris (ENPC, Paris)
- Frederic Legoll (ENPC, Paris)
- Tony Lelievre (ENPC, Paris)
- Jianfeng Lu (Duke)
- Mitch Luskin (Minnesota)
- Mauro Maggioni (Johns Hopkins)
- Youssef Marzouk (MIT)
- Jonathan Mattingly (Duke)
- Charles Matthews (Edinburgh)
- Houman Owhadi (Caltech)
- Greg Pavliotis (Imperial)
- Oliver Penrose (Heriot-Watt)
- Petr Plechac (Delaware)
- Sebastian Reich (Potsdam)
- Jesus Sanz-Serna (Valladolid)
- Christof Schuette (Berlin)
- Robert Skeel (Purdue)
- Gabriel Stoltz (ENPC, Paris)
- Amos Storkey (Edinburgh)
- Andrew Stuart (Warwick)
- Chris Sweet (Notre Dame)
- Yee-Whye Teh (Oxford)
- Florian Theil (Warwick)
- Michael Tretyakov (Nottingham)
- Mark Tuckerman (NYU)
- Eric Vanden-Eijnden (NYU)
- Jonathan Weare (Chicago)
- Johannes Zimmer (Bath)
- Kostas Zygalakis (Edinburgh)

I typically work with a small group of PhD students. The University of Edinburgh has been fortunate to have funding from several UK-funded Centres for Doctoral Training (CDTs), which provide the best mechanism for PhD support. Currently, I am eligible to supervise students in four CDTs; the latter two were established in 2019, each with 10-15 student places per year.